id | source | version | text | added | created | metadata
|---|---|---|---|---|---|---|
8151775 | pes2o/s2orc | v3-fos-license | Genomic responses in rat cerebral cortex after traumatic brain injury
Background Traumatic brain injury (TBI) initiates a complex sequence of destructive and neuroprotective cellular responses. The initial mechanical injury is followed by an extended period of secondary brain damage. Because of this complicated pathological picture, a better understanding of the molecular events occurring during the secondary phase of injury is needed. This study was aimed at analysing gene expression patterns following cerebral cortical contusion in rat using high-throughput microarray technology, with the goal of identifying genes involved in an early and in a more delayed phase of trauma, since the genomic responses behind secondary mechanisms are likely time-dependent. Results Among the upregulated genes 1 day post injury were transcription factors and genes involved in metabolism, e.g. STAT-3, C/EBP-δ and cytochrome p450. At 4 days post injury we observed increased gene expression of inflammatory factors, proteases and their inhibitors, such as cathepsins, α-2-macroglobulin and C1q. Notably, genes with biological function clustered to immune response were significantly upregulated 4 days after injury, which was not found at 1 day. Osteopontin and one of its receptors, CD-44, were both upregulated, showing a local mRNA and immunoreactivity pattern in and around the injury site. Fewer genes had decreased expression at both 1 and 4 days post injury; these included genes implicated in transport, metabolism, signalling, and extracellular matrix formation, e.g. vitronectin, neuroserpin and angiotensinogen. Conclusion The different patterns of gene expression at 1 and 4 days post injury, with little overlap in genes, show the time dependence of genomic responses to trauma. An early induction of factors involved in transcription could lead to the later inflammatory response with strongly upregulated CD-44 and osteopontin expression. Increased knowledge of the genes regulating the pathological mechanisms in trauma will help in finding future treatment targets. Since trauma is a risk factor for the development of neurodegenerative disease, this knowledge may also help reduce late negative effects.
Background
The completion of the rice (Oryza sativa) genome sequence allowed further functional classification of the coding sequences of this important crop and model grass species [1,2]. Detailed annotation of the rice genome revealed that nearly a quarter of the rice open reading frame (ORF) coding capacity has features of transposable elements (TEs); these sequences are therefore defined as TE-related genes [3]. Like other genes, these TE-related genes have predicted normal gene structure with protein-coding capacity. However, they share significant sequence similarity with known TEs in either or both of the following ways: they have TE signature sequences in The Institute for Genomic Research (TIGR) Oryza Repeat Database [4], or they contain TE-related protein domains [3]. By this definition, TE-related genes can include potentially active TEs (based on the existence of a functional ORF) as well as cellular genes derived from TEs. Many of these TE-related genes encode reverse transcriptases, transposases, or other related proteins [5], and they can be further classified based on protein domains and other sequence features [3,4]. The overwhelming number of TEs that lack functional ORFs are not considered to be genes [3]. Although there are many TE-related genes, the biologic functions of these genes remain elusive [6].
TEs are considered to be important for the maintenance and diversification of genomes. TEs are usually separated into two classes that differ in their mode of propagation: retrotransposons, or type I elements, which transpose by reverse transcription of an RNA intermediate; and type II elements, which use only a DNA intermediate in their movement within the genome. Both classes can be further divided into several superfamilies, each with a unique evolutionary history. Representatives of virtually all superfamilies of TEs have been detected in grass genomes [7][8][9]. Accumulating evidence suggests that TE activities have a profound impact on the genome [5], influencing genome size, genome rearrangement, chromatin transcription, and gene evolution [10][11][12][13][14][15]; many of these effects rely specifically on the transposition activity of TEs.
Although most TEs are considered inactive [16,17], there have been isolated reports of TE transposition in rice and other grasses [18]. A common condition promoting transposition is stress, including that which occurs in in vitro cell or tissue culture [19][20][21][22]. Developmental regulation of transposition has also been reported in intact plants [23,24]. Transcription of TE-related genes is required for their own transposition and that of other related TEs, although transcription itself may not be sufficient for transposition [20,25,26]. Analysis of TE-related genes from certain subgroups of the type I class and the Mutator-like superfamily of the type II class suggests that their transcripts are widely present in grasses [27,28]. Most of these transcribed TEs have coding capacity and are therefore considered TE-related genes. A recent study of expressed sequence tags (ESTs) in sugarcane identified 267 active TE-related transcripts [29]. Transcription of TE-related genes was also reported in an unbiased survey of the transcriptional activity of a single rice chromosome using a tiling microarray [30].
Apart from the potentially active TEs among these TE-related genes, domesticated TE-related genes, which have acquired new functions for the host, also exist. Although our current classification for distinguishing TE-related genes from non-TE-related genes is not definitive [31], two recent studies in Arabidopsis identified domesticated TE-related genes contributing to cellular processes [32,33]. Similar examples have also been found in animals [34,35]. Such findings in part support the hypothesis that TE-related genes may influence the evolution of their host by providing a source of novel coding capacity.
The potential impact of domesticated TE-related genes on the evolution of genomes requires systematic investigation. One approach to identifying further domesticated TE-related genes is sequence mining [36]. Because a change of position through transposition can be detrimental to the host, transposon-derived genes with known host function usually lack mobility. As a consequence, they may be devoid of transposon-specific terminal sequences [32,36]. By employing this criterion in a search, one particular member of the MULE superfamily was identified as a domesticated gene candidate [36]. Transcription is an important feature of domesticated TE-related genes, because it is generally required for cellular functions of the host [32,33]. By surveying transcriptional activity and combining it with other approaches, we should be able to identify domesticated TE-derived gene candidates.
Another mechanism for the evolution of new genes from TEs is through their ability to acquire and fuse fragments of genes into new genomic locations, as seen in plant Pack-MULEs and, more recently, in certain Helitron-like and CACTA elements [13,14,37,38]. However, many of these Pack-MULEs have been suggested to possess pseudogene-like features [39]. Pack-MULEs, as a unique group of TE-related genes, are relatively well annotated and are a current focus of interest regarding the origin of genes [37].
Given the paucity of information on TE-related genes, a systematic study of their transcriptional activity in a well characterized genome is required to enhance our understanding of the activity of TE-related genes. The completely annotated sequence of the rice genome makes it a good resource for such a genome-wide survey [3]. Recent advances in microarray technology allow us to study the transcriptional activity of genes in a high-throughput manner. It is therefore possible to conduct a genome-wide survey of the transcriptional activity of rice TE-related genes, especially the more divergent ones for which unique oligomer probes can be designed. Unlike simple TEs, which consist mostly of repetitive sequences, many TE-related genes have diverged enough that short oligomers can represent their unique sequence regions. Such an approach has recently been utilized to analyze transcription of TE-related genes in plants and animals [11,30,40]. In addition to TE-related genes, TEs without protein-coding capacity and other tandem repeats may also exhibit transcriptional activity [26,41]. Transcripts derived from tandem repeats in the heterochromatin can give rise to small RNAs, which in turn direct the modification of histones and DNA in TE-related sequences and nearby regions by means of RNA interference [16]. Although transcripts from tandem repeats are important for the genome, their highly repetitive nature prohibits characterization of their unique identities in chromosomal organization on a genome-wide scale [42,43].
We conducted an expression analysis of rice TE-related genes using 70-mer oligonucleotide microarrays. Expression profiles from 4,728 oligonucleotides covering organs of rice plants were analyzed both under normal conditions at various developmental stages and under stress conditions. Clear but restricted transcription was found for all major superfamilies of TE-related genes. Mechanisms controlling representative TE transcription were further analyzed.
Representation of TE-related genes by an oligonucleotide microarray
A 70-mer oligonucleotide set was previously developed to span the rice genome [44]. Many TE-related genes are included in this oligomer set design, allowing a survey of a large number of rice TE-related genes. However, for the sake of simplicity, the oligonucleotide probes representing TE-related genes were removed from analysis in all prior genome profiling analyses [44][45][46][47]. Here, we collected all of our available datasets and systematically examined the transcriptional activities of TE-related genes in various tissues and growth conditions. In particular, we included datasets representing cell cultures and stress-exposed tissues.
According to the rice genome annotation at TIGR [3] and a literature review [27,48], a total of 14,404 genes were identified as TE-related genes, based on the presence of TE signature sequences in the TIGR Oryza Repeat Database [4] or TE-related Pfam domains. Among these TE-related genes, 9,493 were classified as type I (retrotransposon) TE-related genes and 4,159 were classified as type II (DNA transposon) TE-related genes. These TE-related genes were further classified into superfamilies according to sequence signatures (Table 1). The classification at TIGR was followed, modified in accordance with recently published studies [27,48]. Another 752 TE-related genes were left without further classification. A remapping of the oligonucleotides in our microarray [44] to annotated genes indicated that 2,191 (15.2%) TE-related genes were represented by at least one 70-mer oligonucleotide that was free from cross-hybridization (see Materials and methods, below). Most oligomers, if not all, mapped to unique coding regions instead of repetitive sequences. In addition, 1,966 70-mer oligonucleotides mapped to more than one TE-related gene while remaining free from cross-hybridization with non-TE-related genes. These oligonucleotides covered another 9,396 (65.2%) TE-related genes.
Transcriptional activity of TE-related genes
Table 1. Summary of annotated TE-related genes in rice and their coverage by cross-hybridization-free microarray probes.

To obtain a comprehensive picture of the transcriptional activity of TE-related genes, we assembled their transcription profiles into a collection of 15 datasets acquired from various tissues and under various physical conditions (Table 2). Five tissues grown under normal conditions at different developmental stages, four cell cultures, and six tissue samples under conditions of salinity or drought were included [44][45][46][47]. Three or more independent biologic replicates for each sample were analyzed. In order to assemble a compendium of transcription profiles with minimal sample variation, quantified microarray hybridization signals from different experiments were pooled and subjected to an automatic processing pipeline, with manual inspection, to correct for slide background, normalize experimental variation, filter problem spots, and check data quality. A previously described method, which takes into account both negative and positive controls as well as data reproducibility, was applied here to determine the expression threshold [44]. This experimental expression threshold was also supported by reverse transcription (RT)-polymerase chain reaction (PCR) of randomly selected genes.
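The study performed this processing in the R package limma; the Python sketch below only illustrates the kind of pipeline involved for a two-color array (background correction, log-ratio computation, median normalization, and masking of flagged spots). The function, its input conventions, and the `flags` encoding are assumptions for illustration, not the pipeline actually used.

```python
# Simplified sketch of per-array processing for a two-color microarray.
# Inputs are per-spot foreground/background intensities for each channel
# plus a spot-quality flag array (0 = good); all conventions hypothetical.
import numpy as np

def process_two_color_array(fg_red, bg_red, fg_green, bg_green, flags):
    """Return median-centered log2 ratios with flagged spots masked."""
    red = np.maximum(fg_red - bg_red, 1.0)    # background correction
    green = np.maximum(fg_green - bg_green, 1.0)
    m = np.log2(red) - np.log2(green)         # log ratio per spot
    m = m - np.median(m[flags == 0])          # global median normalization
    m[flags != 0] = np.nan                    # drop problem spots
    return m
```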
Examination of the expression of TE-related genes in each sample indicates that heading stage panicle has the greatest level of detected expression, at 33%, whereas the expression percentage in somatic shoot culture is the lowest, at 26% (Figure 1a). We also found that DNA transposons (type II) have an 11% to 18% higher expression percentage than retrotransposons (type I) in all samples analyzed (Figure 1a).
By monitoring the expression of 2,191 TE-related genes using unique oligomer probes, we identified expression of 1,084 (61.7%) TE-related genes in at least one of our 15 samples. This is in contrast to findings for non-TE-related genes, 85.8% of which are expressed in at least one sample and 22.6% in all samples, using the same selection criteria. Expressed TE-related genes tend to exhibit transcription in a relatively small number of samples. The percentages of expressed TE-related genes across a wide range of samples are markedly lower than those of non-TE-related genes (Figure 1b). Of those oligonucleotide probes that match multiple TE-related genes, 73.7% and 5.1% had hybridization signals in at least one sample or in all samples, respectively. Because each of those probes matches multiple repetitive genes, the proportion of the TE-related genes they represent that is actually transcribed is expected to be smaller.
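The breadth-of-expression percentages quoted above come down to simple row operations on a gene-by-sample matrix of expression calls. The sketch below uses a simulated boolean matrix in place of the real calls; the call rate is invented.

```python
# Sketch: given a boolean gene x sample "expressed" matrix, compute the
# breadth statistics reported above. Data here are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
calls = rng.random((2191, 15)) < 0.3        # hypothetical calls, 2,191 genes

in_any = calls.any(axis=1).mean() * 100     # % expressed in >= 1 of 15 samples
in_all = calls.all(axis=1).mean() * 100     # % expressed in all samples
print(f"expressed in >=1 sample: {in_any:.1f}%; in all samples: {in_all:.1f}%")
```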
To probe quantitatively the transcriptional activity of TE-related genes, the expression intensities of the 1,084 transcribed TE-related genes and a similar number of randomly selected transcribed non-TE-related genes were visually juxtaposed after clustering (Figure 2). Even though only transcribed genes are being compared here, it is clear that the transcription of TE-related genes was in general weaker than that of their non-TE-related counterparts. Furthermore, a large portion of the transcribed TE-related genes exhibited detectable transcription in fewer rice samples than was the case for non-TE-related genes. However, there are clearly a few clusters of TE-related genes with rampant transcription in most rice samples, and some of this transcription is quite marked (Figure 2). A few organ-specific clusters, such as one for cultured cells (lanes 7, 8 and 9 in Figure 2), were also found.
To gauge the reliability of our microarray data for TE-related genes, we first compared rice cDNA and EST collections with our data. We found 496 TE-related genes in the cDNA/EST collection in the TIGR database [3]. These cDNAs and ESTs were derived from six rice samples: callus, seed, shoot and stem, leaf, root, and flower (heading panicle). We have similar (although not identical) rice samples with microarray expression profiles for all of them except seed. A survey of these TE-related cDNAs/ESTs indicates that 80% of those covered by our microarray also had detectable transcription. We further used RT-PCR to verify the microarray data. An attempt to amplify a series of TE-related genes with different levels of microarray signal supported our choice of the threshold used to determine expression. Of the 10 genes with expression levels within 100 units above the threshold, seven were amplified by RT-PCR; in contrast, only two out of 10 with expression below the threshold were amplified. Moreover, 34 randomly selected TE-related genes identified through microarray analysis as being shoot expressed were tested with RT-PCR using seedling shoot RNA samples. Twenty-nine (85%) of them were clearly detected. An independent tiling microarray analysis of the rice transcriptome also covered a significant portion of the TE-related genes [43]. A preliminary survey of the transcriptional activities of TE-related genes in this dataset gives a similar proportion of expression (about 30%) among the tissues examined [49], although a different platform and hybridization detection procedure were used [43].
Transcription of type I TE-related genes
In addition to taking an inventory of transcribed TE-related genes in various tissues and under multiple growth conditions, the availability of a high-quality complete genome sequence provided an opportunity to elucidate how transcriptional activities evolve following sequence divergence. To this end, phylogenetic trees were generated for all major TE-related gene superfamilies and were integrated with their members' expression profiles.
The type I TE-related genes can be classified into two groups according to the presence or absence of long terminal repeats (LTRs). TE-related genes without LTRs belong to the long interspersed element (LINE) type, which may encode retrotransposase and mobilize noncoding short interspersed elements (SINEs). Only 34 LINE-type TE-related genes were identified in rice (Table 1). We found a relatively small proportion (usually below 20%) of this family transcribed (Figure 3). One rice LINE-type retrotransposon named Karma, with active transposition, has been reported [20]; its transcriptional activity was detected in a wide range of organs and cultured cells. A 5'-truncated version of Karma was also identified in the rice genome [20]; it lacked transcriptional activity in all samples we tested (Figure 3).
LTR-type TE-related genes belong to two superfamilies, namely Ty1/copia and Ty3/gypsy, which are both ubiquitous throughout plants and believed to have contributed significantly to the evolution of genome structure and function [10]. Both families are quite diverse in rice, with Ty3/gypsy elements outnumbering Ty1/copia elements [48]. Our expression data indicate that both families are transcribed at similarly low levels, around 25% in most samples, but both contain members with strong transcription across widespread tissues. However, these members are spread across different clades with only remote similarity (Additional data files 1 and 2). A few active LTR retrotransposons have been reported in rice. Among them, Tos17 is the best characterized and is known to exhibit active transposition in tissue culture [19]. We found active transcription of Tos17 not only in cultured cells but also in a wide range of organs (Additional data file 1), suggesting that tissue culture may provide a way to propagate somatic transposition events to progeny. Sireviruses are a plant-specific lineage of the Ty1/copia retrotransposons that interact specifically with proteins related to dynein light chain 8 [50]. We found one member of this lineage with ubiquitous strong transcription and several others with transcription in selected rice samples (Additional data file 1).

Figure 2. Global expression map showing transcriptional activity of TE-related and randomly selected non-TE-related genes. Only 1,353 TE-related genes with transcription in at least one sample are included. Another 1,353 non-TE-related genes randomly picked from those with transcription in at least one sample are shown in parallel. Each lane represents one sample, in the same order as in Table 2. Shades of gray indicate the magnitude of transcription signals, which are based on microarray hybridization signals without units. TE, transposable element.
A large number of type I TE-related genes have not yet been further classified (Table 1). We detected transcription of a smaller proportion of this group of genes than of the Ty1/copia and Ty3/gypsy superfamilies.
Transcription of type II TE-related genes
Type II TE-related genes are in general more actively transcribed than type I TE-related genes.

Figure 3. Degrees of lineage-specific transcription in the LINE superfamily. The phylogenetic tree was generated from a multiple alignment of conceptually translated sequences using neighbor-joining methods and rooted with human L1. Bootstrap values were calculated from 1,000 replicates. Sample numbers are identical to those in Table 2. Shades of gray indicate the magnitude of transcription signals, which are based on microarray hybridization signals without units. Names of previously reported members are shown. *Previously reported members with transcription or transposition. †Previously reported inactive members. LINE, long interspersed element.

Different from type I, the Mutator-like superfamily (MULE) is one of the first groups of identified transposases, with a few reported transcriptionally active members in rice [27]. There are 607 autonomous members of this superfamily (Table 1), which has one of the strongest transcription levels, at 35% to 40% in each sample (Figure 4). The MULEs can be further divided into three branches: MuDR-like, Jittery-like, and TRAP-like [27]. The TRAP-like branch may have been amplified recently, and high similarity among family members has resulted in a lack of unique oligo probes with which to examine their expression profiles. Interestingly, we found at least three clades with clearly active transcription in the MuDR-like and Jittery-like branches (Figure 4). The highly transcribed clade in the MuDR-like branch included MUG1, an evolutionarily conserved MULE sequence found in diverse angiosperms and a candidate domesticated transposase-related gene [36]. The larger, highly transcribed clade in the Jittery-like branch includes homologs of the Arabidopsis genes FAR1 and FHY3, both of which are transposon-derived genes with demonstrated host function as transcription factors downstream of phytochrome A [32,51,52]. There are no reports on any members of the other highly transcribed clade in the Jittery-like branch, which shows rampant transcription (Figure 4, middle).
The CACTA superfamily is a diverse group of high-copy repetitive genes in grasses [53,54]. CACTA transposons with active transcription or even transposition have been reported in rice and other grass genomes [54][55][56][57]. A total of 2,276 intact CACTA transposase-coding genes were identified in rice, making it the largest superfamily of type II TE-related genes (Table 1). The CACTA superfamily is also highly active, with more than 40% transcribed in each sample. Several clades with active transcription were identified (Additional data file 4). Among them, two clades include over 20 members. No members of these actively transcribed CACTA transposons have previously been characterized.
The hAT-like superfamily is another widespread superfamily in grasses [58]. It is a medium-sized superfamily in rice, with 184 autonomous members (Table 1). About 20% of this superfamily is transcribed in any single sample (Figure 5). Interestingly, we found a small clade of four genes that exhibited relatively uniform and strong transcription across a wide range of samples. A sequence comparison indicates that these genes have high similarity to the recently identified domesticated Arabidopsis transposase DAYSLEEPER, which is a pleiotropic regulator of development through its specific DNA-binding activity [33]. There is one reported hAT-like transposon group in rice, Dart, which is capable of active transposition in plants [24,59]. Sequence analysis indicates that Dart is a recently amplified clade with 30 almost identical members. Although no oligonucleotide probes were designed to represent individual members, a few probes can detect all or most of them. Clear hybridization signals were found for these probes in all shoot and cell culture samples. This finding suggests that some or all members of Dart are highly transcribed in a large number of rice samples.
Both PIF/Pong-like and Mariner-like TE-related genes are autonomous partners of nonautonomous miniature inverted repeat transposable elements (MITEs), which are ubiquitous in the rice genome [12]. Low proportions of both families have detectable transcription (<20%) in each sample (Figure 6 and Additional data file 4). Two transpositionally active PIF/Pong-like elements were recently reported: maize PIF and rice Pong [22,23,60]. Interestingly, the rice homolog of PIF, namely OsPIF1 [60], was not expressed in any sample (Figure 6). There are six almost identical Pong elements in the rice genome, which are represented by a single probe on the microarray. This probe detected transcriptional activity in tillering shoot and drought-exposed panicles only (Figure 6), suggesting rigorous regulation at the transcriptional level for members of this family. We did not detect any transcriptional activity of the Pong element in cultured cells. The Mariner-like superfamily has far fewer members [61]; it includes a small proportion of transcribed genes, similar to that of the PIF/Pong-like superfamily (Additional data file 4).
A recently identified and unique type II TE superfamily, Helitron-like, is relatively under-characterized in the rice genome [62]. Strikingly, Helitron-like transposons have the potential to move and shuffle genes or exons in maize [13,14]. In rice, we found only one member with transcriptional activity, present in all samples; no other Helitron-like transposon among the seven examined showed transcriptional activity in any sample (Additional data file 5).
We were unable to classify another 787 type II TE-related genes into any superfamily (Table 1). Interestingly, a large percentage of these (>40% of the 128 with unique oligomer probes) was found to be transcribed.
Transcription of Pack-MULE
Genes or exons can be transduplicated by MULEs [9,63], which have recently been suggested to be important facilitators of gene evolution in higher plants and have therefore been termed Pack-MULEs [37]. However, a detailed sequence analysis suggests that the products of this process are more likely to be pseudogenes than novel functional genes [39]. To gain better insight into this group, we examined their transcriptional activities using microarray analysis, because transcription is usually a prerequisite for the biologic function of a protein-coding gene. By testing the transcription of 137 recently identified Pack-MULEs on chromosomes 1 and 10 that are covered by our microarray [37], we found that the transcription rates of Pack-MULEs fall between those of TE-related gene models and non-TE-related gene models (Figure 7), being slightly closer to those of TE-related gene models.
On the other hand, a larger proportion of Pack-MULEs is transcribed in several samples than is the case for TE-related gene models and non-TE-related gene models (Figure 7).
Association of transcription with DNA and histone modification
TEs, including TE-related ORF-encoding genes, are under multiple levels of epigenetic control, including DNA methylation and histone modifications [26]. In Arabidopsis, DNA methylation and histone H3 lysine-9 methylation (H3K9m) correlate with the silencing of TEs, whereas histone H3 lysine-4 methylation (H3K4m) is associated with transcribed genes [64]. However, H3K4m is also found in silenced genes and therefore may not always be a marker of active transcription [65].
To determine whether transcribed TE-related genes have a different chromatin modification status, we selected nine transcribed and three silenced TE-related genes, including both autonomous TE genes and TE-derived genes, in order to assess histone and DNA methylation (Figure 8a). These are Tos17 and Tos3 of the Ty1/copia superfamily; Ty3/gypsy elements Os09g15460, Os03g32070 and OSR30; MULE superfamily DNA transposons MUG1, FAR1-like and Os11g05820; CACTA DNA transposons Os10g31320, Os09g29980 and Os04g08710; and DAYSLEEPER-like from the hAT-like superfamily. Seedling shoot samples were used for all analyses discussed here. To verify transcription independently, we used PCR to amplify reverse-transcribed cDNA (RT-PCR). Transcript accumulation assayed by RT-PCR is in general consistent with the microarray results (Figure 8a). Using chromatin immunoprecipitation (ChIP) analysis, we found that only silenced genes were associated with high levels of H3K9m. H3K4m was significant for all genes examined, regardless of whether they were transcribed or silenced (Figure 8a). Similar to H3K9m, only silenced genes were heavily methylated at the DNA level (at cytosine, by McrBC digestion assay; Figure 8a). These data imply that levels of H3K9m and DNA methylation were lower in transcribed TE-related genes. Similar correlations of histone and DNA methylation with transcription were also found in non-TE-related genes (controls in Figure 8a). Furthermore, no distinction was found between autonomous TE genes and TE-derived genes in these data.
To explore these relationships further, we selected five TE-related genes with transcription in cultured cells but not in seedling shoots: the Ty1/copia retroelement Os10g22210; Ty3/gypsy retrotransposons Os09g11940 and Os10g06250; and CACTA DNA transposons Os07g23660 and Os08g32100 (Figure 8b). Three of these five genes were associated with higher levels of H3K9m in shoots (silenced) than in cultured cells (transcribed), according to ChIP-PCR analysis. Levels of H3K4m did not exhibit a clear difference between shoots and cultured cells (Figure 8b). DNA methylation was reduced in three genes in cultured cells compared with shoots (Figure 8b). Thus, lower levels of DNA methylation and H3K9m tend to accompany TE-related gene transcription under developmental regulation.
It has been shown that small RNAs derived from repetitive genome sequences repress transcription by means of RNA interference in Arabidopsis [16]. Small RNAs, both microRNAs (miRNAs) and small interfering RNAs (siRNAs), have also been identified in rice, albeit on a small scale [66,67]. Sixteen out of a total of 44 predicted siRNAs have at least one TE-related gene as a target gene [66], whereas few miRNAs have a TE-related gene target [67]. Of the five target TE-related genes covered by our microarray, we found active transcription for only one. It is interesting to note that for siRNAs targeting multiple genes, the transcriptional profiles of these target genes may not be at all similar. For example, siRNA P96-E12 has two targets: Os07g10770 (a cellulose synthase) and Os01g05370 (a Ty1/copia family retrotransposon). The cellulose synthase gene has strong transcription in almost all samples we profiled. In contrast, the retrotransposon target does not exhibit transcription in any sample.
Upstream gene transcription affects TE-related gene transcription
It was recently reported in Arabidopsis, as well as in several other eukaryotes, that some adjacent genes tend to have coexpression patterns [68][69][70][71]. Readthrough transcription of TEs driven by upstream genes has also been reported in isolated studies [41,72,73]. We therefore suspected that transcription of neighboring genes might influence the transcription of a TE-related gene. To test this hypothesis, we calculated the frequency of transcribed TE-related genes relative to the transcriptional activity of neighboring genes. Two scenarios were considered: the upstream gene and the downstream TE-related gene were in the same orientation (on the same strand); or the two were in opposite orientations. In both cases, there was a clear positive association between gene transcription and the transcription of the neighboring TE-related gene (Figure 9). However, the effect was more significant if the non-TE-related and TE-related genes were in the same orientation. An increase of 16% in downstream transcription was found when transcribed upstream genes were in the same orientation (P < 10^-16, by Welch two-sample t-test). In the case of opposite orientation, an increase of 9% in transcription was found (P < 10^-16). By comparing the effects of transcribed upstream gene orientation in these two scenarios, we found that the same orientation corresponded to 6% more expression than the opposite orientation (P < 10^-7). There was no clear distinction between the two scenarios for TE-related genes with untranscribed upstream genes (26% versus 27%; P = 0.3). The orientation of downstream non-TE-related genes did not significantly affect the transcription of upstream TE-related genes.

Figure 4. Degrees of lineage-specific transcription in the MULE superfamily (excluding the TRAP-like class). The phylogenetic tree was generated from a multiple alignment of conceptually translated sequences using neighbor-joining methods and rooted with soybean Soymar1. Bootstrap values were calculated from 1,000 replicates. Sample numbers are identical to those in Table 2.

Figure 5. Degrees of lineage-specific transcription in the hAT-like superfamily. The phylogenetic tree was generated from a multiple alignment of conceptually translated sequences using neighbor-joining methods and rooted with soybean Soymar1. Bootstrap values were calculated from 1,000 replicates. Sample numbers are identical to those given in Table 2. Shades of gray indicate the magnitude of transcription signals, which are based on microarray hybridization signals without units. Names of previously reported members are shown. *Previously reported members with transcription or transposition.
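The orientation analysis above reduces to a Welch two-sample t-test on per-gene 0/1 transcription indicators, grouped by the strand relationship with the transcribed upstream neighbor. The sketch below illustrates this with simulated stand-in arrays; the group sizes and proportions are invented, not the study's annotations.

```python
# Sketch: Welch's t-test (unequal variances) on binary "transcribed"
# indicators for downstream TE-related genes, grouped by orientation
# relative to a transcribed upstream gene. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
same_strand = rng.random(3000) < 0.42   # hypothetical: 42% transcribed
opposite = rng.random(3000) < 0.36      # hypothetical: 36% transcribed

t, p = stats.ttest_ind(same_strand.astype(float),
                       opposite.astype(float),
                       equal_var=False)  # Welch's variant, as in the study
print(f"t = {t:.2f}, P = {p:.2e}")
```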
Functions of cis-elements in transcription
To explore further the possible underlying mechanisms that control the transcription of TE-related genes, we attempted to identify the involvement of cis elements in transcription.
To this end, we searched for enrichment of cis elements in the promoter regions of transcribed TE-related genes. We grouped TE-related genes based on the number of samples with transcription and searched for the frequency of occurrence of all reported cis elements within each group. Among 439 elements reported in plants [74], nine exhibited marked enrichment in TE-related genes with active transcription (Figure 10), whereas no element was found with similar enrichment patterns in randomized datasets. In addition, most of these elements were found by searching for enrichment in active members of the Ty1/copia, Ty3/gypsy, or CACTA superfamilies. The TATA box was identified, which is usually found in the 5'-upstream region of eukaryotic genes and is critical for accurate initiation of transcription [75]. The T-box is part of the scaffold/matrix attachment region, which was recently found to regulate the transcription of nearby genes in Arabidopsis [76]. We also identified enrichment of motifs (G-box, Myb binding site, and ATHB5-core) for the major plant transcription factor families (bHLH, Myb, and homeodomain-leucine zipper). In addition, enrichment was also detectable for the light response motif Hex-motif, the pathogen response motif GCC-core, the gibberellin response motif Pyrimidine-box, and the meristem-specific motif site IIa.
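A minimal sketch of such a promoter scan is shown below: a motif is counted on both strands of each 2-kilobase upstream sequence, and its frequency is compared between gene groups defined by transcription breadth. The motif consensus (a simple TATA box here) and the input sequences are illustrative assumptions, not the study's exact element definitions or statistics.

```python
# Sketch: scan promoter sequences (both strands) for a motif and compare
# motif frequency between two gene groups. All inputs are illustrative.
COMP = str.maketrans("ACGT", "TGCA")

def has_motif(promoter, motif="TATAAA"):
    """True if the motif occurs on either strand of the promoter."""
    rc = promoter.translate(COMP)[::-1]   # reverse complement
    return motif in promoter or motif in rc

def enrichment(group_a, group_b, motif="TATAAA"):
    """Ratio of motif frequencies between two lists of promoters."""
    fa = sum(has_motif(p, motif) for p in group_a) / len(group_a)
    fb = sum(has_motif(p, motif) for p in group_b) / len(group_b)
    return fa / fb if fb else float("inf")
```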
Transcription profiles of TE-related genes in rice
TEs account for an overwhelming proportion of plant genomes. To ensure the viability of their host, and hence their own survival, the transposition of TEs should be tightly controlled [17]. Transcribed autonomous TEs among TE-related genes have the potential to self-activate or to activate transcription of related nonautonomous TEs. Transcriptional regulation is therefore one major control step used by plants, but it remains insufficiently understood. The recently available rice genome sequence has enabled us to characterize TE-related gene transcription on a genome-wide scale.
Using 70-mer oligonucleotide microarrays covering more than 2,000 rice TE-related genes, we surveyed transcription profiles across a wide range of organ samples under various conditions. Considering that TE-derived cellular genes are relatively rare, autonomous TEs probably account for most of these TE-related genes. Genome profiling revealed that 25% to 30% of the TE-related genes were transcribed in any one sample, which was much lower than the corresponding percentage of non-TE-related genes (Figures 1 and 2). Moreover, TE-related genes differed from their non-TE-related counterparts in two additional aspects. First, TE-related genes tended to be transcribed in only a subset of organs or developmental stages, whereas non-TE-related genes had transcription in more samples on average (Figures 1 and 2). Second, transcribed TE-related genes exhibited weaker transcription overall compared with non-TE-related genes in all of the samples we profiled (Figure 2). It is worth noting that our estimation of TE-related gene transcription was biased toward low-copy elements, because it was difficult to distinguish transcripts among recently duplicated high-copy TE-related genes, which share high sequence similarity within clades. It has been reported in Arabidopsis and Drosophila that the activity of TE elements may decline as copy number increases [77,78]. Therefore, we expect the transcriptional activity of high-copy TE-related genes to be lower than that of low-copy ones.
Among TE-related genes, a smaller proportion of type I than type II genes were transcribed (Figure 1a), a discrepancy that resulted primarily from the strong transcription of the MULE and CACTA superfamilies as well as unclassified type II members. It is interesting to note that all TE-related gene superfamilies with the potential to expand dramatically, including all type I TE-related genes and the PIF/Pong-like, Mariner-like and Helitron-like type II TE-related genes, were more tightly controlled at the transcriptional level. Type I TE-related genes are amplified through a copy-and-paste mechanism [79]. The PIF/Pong-like and Mariner-like superfamilies regulate the activity of MITEs, which dominate the rice genome [12]. Members of the Helitron-like superfamily go through a unique rolling-circle replication to rapidly amplify themselves [62].
Many TE-related genes exhibit organ-specific, growth stage-specific, and stress-specific expression profiles in our collection of samples. These genes exist in all superfamilies, as shown in Figures 3 to 7. A number of them, again from various superfamilies of both type I and type II TE-related genes, exhibit clear induction in cultured cells, in certain organs, or in certain stress-challenged organs (Figure 2). The precise biologic significance of this observation remains to be elucidated.
It is important to note that transcriptional activity does not necessarily correspond to transpositional activity. Transcription is just the first of several steps required for the transposition of type I and type II TEs [79,80]. Active transcription and even translation of TE-related genes has been reported in several isolated cases [28], but only in a few cases was transposition actually confirmed by an observed copy number change [20]. A two-step regulatory mechanism has therefore been proposed for retrotransposons [20]. In this model, some elements may have slipped the leash of transcriptional gene silencing [25]. Nevertheless, they can still be controlled by post-transcriptional gene silencing [18]. We observed transcription of all major TE-related gene superfamilies in rice, but it is probable that most of them, if not all, are not actively transposing. It is therefore likely that such two-step regulation exists not only for retrotransposons but also for other classes. Post-transcriptional regulation, which is still largely unexplored, is thought to repress transposition activity further [81].
Transcription of domesticated TE-related genes in the rice genome
It is well accepted that some TE-related genes have acquired host functions and play physiologic roles in the host. They can either be derived from TEs or be cellular genes that have hijacked TEs or TE fragments. Not surprisingly, we detected active transcription of all potential domesticated TE genes previously described in Arabidopsis and rice. Interestingly, domesticated TE genes tend to fall within actively transcribed TE gene clades. The rice homologs of the two reported cases of domesticated transposons in Arabidopsis, namely FAR1/FHY3 and DAYSLEEPER, were located in two actively transcribed clades. MUG1, a putative domesticated gene revealed by cross-species sequence comparison, was shown by our data to be transcribed and located within an actively transcribed clade. These examples suggest that actively transcribed clades of TE-related genes are a rich source of domesticated TE genes. In fact, several other actively transcribed clades were observed in our analysis, especially in the MULE and CACTA superfamilies (Figures 4 and 5). It is reasonable to suspect that those transcriptionally active clades may contain genes co-opted by hosts to serve adaptive functions. This notion will be worth testing in future research. Clearly, the combination of transcriptional analysis with phylogenetic analysis is instrumental in identifying TE-derived genes with adapted host functions.
A specific mechanism for the evolution of new genes by mobile DNA elements is through their ability to acquire and fuse fragments of genes into new genomic locations, as represented by Pack-MULEs [37]. By exploring the transcriptional activity of a subset of Pack-MULEs, we have shown that their transcriptional activity falls in between the levels of TE-related and non-TE-related gene models (Figure 7). This result suggests that many of them might not have biologic functions, and that both pseudogenes and evolving new functional genes exist among these annotated Pack-MULEs. Alternatively, functional diversification of recently evolved genes may be another explanation, because newly formed genes usually have more specific expression profiles [82].
Mechanisms controlling TE-related gene transcription
The presence of such a diverse array of transcribed TE-related genes raises questions regarding the mechanisms that control the transcription. At the chromatin level, we found that actively transcribed TE-related genes have reduced levels of H3K9m and DNA methylation. This finding indicates that proper chromatin modification status is usually required for transcription of TE-related genes. However, histone and DNA modifications are unlikely to be efficient markers for distinguishing between autonomous TE genes and TE-derived cellular genes.
Consistent with the existence of chromatin-level control, we found that transcribed TE-related genes tend to be located near transcribed neighboring genes. It is possible that the status of a chromatin domain is marked by histone and DNA modifications, and that such chromatin status affects the few genes located in the same or neighboring chromatin domains. The orientation of upstream genes affects downstream TE-related gene transcription (Figure 9). If both genes are in the same orientation, the downstream TE-related gene has a greater chance of being transcribed. Readthrough transcription from upstream genes may account for this difference, in addition to possible chromatin effects.
Small RNA has been suggested to be a key regulator that silences TE elements transcriptionally and post-transcriptionally [18,81]. However, only a few examples were found in our dataset. Small RNAs are known to be highly abundant in the Arabidopsis genome [83], whereas their counterparts in rice are yet to be discovered. A full catalog of small RNAs in rice will provide a better picture of their role in TE transcription.

Figure 6. Degrees of lineage-specific transcription in the PIF/Pong-like superfamily. The phylogenetic tree was generated from a multiple alignment of conceptually translated sequences using neighbor-joining methods and rooted with soybean Soymar1. Bootstrap values were calculated from 1,000 replicates. Sample numbers are identical to those in Table 2.
Another possible mechanism controlling TE transcription is the existence of cis elements in their promoter regions. Examples have been found previously for LTR retrotransposons, which employ alternating cis elements present in their LTRs [29,[84][85][86]. Here, we identified nine cis elements that were clearly enriched in the promoter regions of transcribed TE-related genes. Among them, both basic transcription-related cis elements and elements that respond to developmental or environmental regulation were found to be enriched in the upstream regions of transcribed TE-related genes (Figure 10). In addition, these enriched cis elements are probably not limited to a certain superfamily but rather are widely spread across several superfamilies. Taken together, our data show that transcription of TE-related genes, mostly autonomous TE genes, in rice is a complex process that is controlled, at least in part, by chromatin-level regulation and by cis elements in promoters.
Microarray analysis
The rice 70-mer oligonucleotide set was described previously [44]. Briefly, 70-mer oligonucleotides were designed based on a combination of FGENESH-predicted genes from an improved shotgun sequence [2] and the available full-length cDNAs and ESTs [87]. The designed 70-mer oligonucleotides correspond to sequence within the coding regions of genes, and the design was corrected for factors such as oligo cross-hybridization, uniform Tm value, GC content, and hairpin/stem nucleotide number. All oligonucleotides were remapped to TIGR rice genome annotation version 3.1 genes [3] using BLAST. We required greater than 90% alignment of a 70-mer oligonucleotide probe to a gene during the remapping. Moreover, only those 70-mer probes without a second-best aligned gene at greater than 80% were considered to be free from cross-hybridization. These criteria were selected because a mismatch of 20% removes more than 90% of the hybridization signal, whereas a 10% mismatch retains at least half of the hybridization signal [88].
TE-related genes were identified in accordance with the TIGR annotation, supplemented by a literature review of published TE-related genes. A total of 2,191 TE-related genes are represented by at least one oligonucleotide free from cross-hybridization. In addition, 1,966 70-mer oligonucleotides mapped to several genes, all of which are TE-related. These oligonucleotides represent another 9,396 TE-related genes.
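The cross-hybridization filter described in the preceding two paragraphs can be expressed as a small post-processing step over BLAST output. The sketch below assumes the standard tabular format reduced to (probe, gene, alignment length) rows; the parsing details and function name are illustrative, not the script actually used.

```python
# Sketch: keep a 70-mer probe only if its best gene alignment covers
# >90% of the probe and no second-best gene alignment covers >80%.
from collections import defaultdict

def filter_probes(blast_rows, probe_len=70):
    hits = defaultdict(list)                 # probe -> [(gene, covered fraction)]
    for qseqid, sseqid, length in blast_rows:
        hits[qseqid].append((sseqid, length / probe_len))
    kept = {}
    for probe, alns in hits.items():
        best = {}                            # best coverage per gene
        for gene, frac in alns:
            best[gene] = max(best.get(gene, 0.0), frac)
        ranked = sorted(best.items(), key=lambda kv: -kv[1])
        if ranked[0][1] > 0.9 and (len(ranked) == 1 or ranked[1][1] <= 0.8):
            kept[probe] = ranked[0][0]       # unique, cross-hyb-free target
    return kept
```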
Oligonucleotides were custom synthesized by Operon Biotechnologies Inc. (Huntsville, AL, USA) and printed onto poly-L-lysine coated microscope slides using a contact microarrayer. The same recommended set of 12 unique negative control 70-mer oligonucleotides based on heterologous genes [89] was included on all slides. There were 240 negative control spots on each slide.
Microarray data and plant materials
Microarray experiments and detailed rice sample preparation were described previously [44][45][46][47]. Samples include organs harvested under normal growth conditions (seedling stage shoot, tillering stage shoot, tillering stage root, heading stage flag leaf, heading stage panicle, and filling stage panicle), organs under conditions of salinity or drought (tillering stage shoot, heading stage flag leaf, and heading stage panicle), and cultured cells (suspension-cultured cells, somatic root in culture, and somatic shoot in culture). A summary is provided in Table 2. The microarray data discussed in this publication have been deposited in the Gene Expression Omnibus (GEO) [90] and are accessible through GEO series numbers GSE2360, GSE2691, GSE6533, and GSE6552.
Microarray data processing
Microarray spot intensity signals were acquired using the Axon GenePix Pro 3.0 software package (Molecular Devices, Sunnyvale, CA, USA). To identify and remove systematic sources of variation, including dye and spatial effects, spot intensities from the GenePix Pro output files of all repeats of a given sample pair were normalized using limma, a software package for the analysis of gene expression microarray data [91]. This normalization process identified and ameliorated spatial, intensity-based, and dye-specific artifacts using multiple correction steps. To determine objectively whether a gene exhibited significant expression in a given sample, we followed a method that relies on negative control spots and data reproducibility [44]. To estimate nonspecific hybridization, a distribution of normalized intensities was obtained from the subset of negative control spots present on each array slide. From this distribution, we chose an intensity cutoff at which less than 10% of the distribution was greater than or equal to the threshold. Expression of a gene was considered detectable only if it was above the threshold in two or more of the three repeats. These criteria had been demonstrated to be suitable for oligonucleotide arrays, with an error rate in the range of 1% to 3% false negatives [44]. RT-PCR results and an independent analysis using different microarrays and statistical approaches [43] further supported this threshold.
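A minimal sketch of the expression call just described, assuming normalized intensities are already available as arrays: the cutoff is the 90th percentile of the negative-control distribution (so that less than 10% of negative controls reach it), and a gene is called expressed when it exceeds the cutoff in at least two of three replicates. The array shapes are assumptions for illustration.

```python
# Sketch: negative-control-based expression calls.
import numpy as np

def expression_calls(gene_by_rep, negative_controls):
    """gene_by_rep: (n_genes, 3) normalized intensities; returns bool calls."""
    cutoff = np.percentile(negative_controls, 90)   # <10% of controls >= cutoff
    above = gene_by_rep >= cutoff                   # per-replicate comparison
    return above.sum(axis=1) >= 2                   # expressed in >=2 of 3 repeats
```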
Sequence analysis
TE family classification was according to the TIGR annotation [3]. Manual analysis identified another 208 TE-related genes according to published sequences and BLAST searches. Multiple sequence alignments were conducted using ClustalW [92]. The weighting matrix used was Gonnet PAM 250, with a gap-opening penalty of 10 and a gap-extension penalty of 0.2. Phylogenetic trees were generated based on the neighbor-joining method, using PAUP* version 4.0b10 with default parameters [93].

Figure 8. Chromatin-level modifications of TE-related genes.

Figure 9. Effects of relative orientation of upstream genes on transcription of downstream TE-related genes. All TE-related genes were divided into two groups according to the relative orientation of the TE-related gene and its upstream gene. Proportions of transcribed TE-related genes were calculated for those with transcribed upstream genes and those with silent upstream genes in both groups. TE, transposable element.
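The tree-building step described at the start of this subsection can be sketched with Biopython. The study used ClustalW with the Gonnet PAM 250 matrix and PAUP* for neighbor joining; the identity-based distance and the input file name below are placeholders used only to illustrate the workflow.

```python
# Sketch: neighbor-joining tree from a protein multiple alignment.
# "mule_alignment.fasta" is a hypothetical pre-aligned input containing
# a sequence named Soymar1 to serve as the outgroup.
from Bio import AlignIO
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

aln = AlignIO.read("mule_alignment.fasta", "fasta")
dist = DistanceCalculator("identity").get_distance(aln)  # pairwise distances
tree = DistanceTreeConstructor().nj(dist)                # neighbor joining
tree.root_with_outgroup("Soymar1")                       # root as in the paper
```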
Cluster analysis
Cluster analysis was applied to all TE-related genes and 1,353 randomly selected non-TE-related genes showing expression in at least one sample. Average normalized log-transformed expression intensities were subjected to cluster analysis. For hierarchical clustering, Pearson correlation was used to compute similarities, and a complete linkage clustering algorithm was used. Cluster analysis was performed using the software Cluster [94] and visualized using custom scripts.
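The clustering settings just described map directly onto SciPy: the `correlation` distance is 1 minus the Pearson correlation, and `complete` linkage matches the algorithm used. The data matrix below is a random stand-in for the log-transformed intensities.

```python
# Sketch: hierarchical clustering with Pearson correlation similarity
# and complete linkage, on a genes x samples matrix (simulated here).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(2)
log_intensity = rng.normal(size=(1353, 15))          # hypothetical matrix

d = pdist(log_intensity, metric="correlation")       # 1 - Pearson r
z = linkage(d, method="complete")                    # complete linkage
clusters = fcluster(z, t=10, criterion="maxclust")   # e.g. cut into 10 clusters
```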
RT-PCR analysis
Total RNA was extracted from independently prepared rice seedling shoots using the Qiagen RNeasy kit (Qiagen, Valencia, CA, USA). After DNase I treatment, total RNA was used for cDNA synthesis with Superscript II (Invitrogen, Carlsbad, CA, USA) in accordance with the manufacturer's protocol. PCR primers were designed using Primer3 [95]. The amplification reaction was carried out for 35 cycles at an annealing temperature of 55°C. Products were separated by 1% agarose gel electrophoresis. Negative controls using mock cDNA synthesis products without reverse transcriptase were included for all genes to detect potential genomic DNA contamination.
Histone and DNA methylation
ChIP was carried out as described elsewhere [64] using seedling shoots and cultured cells. Histone H3 anti-dimethyl lysine-4 or anti-dimethyl lysine-9 antibodies (Upstate, Avon, NY, USA) were used to precipitate genomic DNA, which was resuspended in water for PCR analysis. The same PCR and gel electrophoresis conditions were used as for RT-PCR analysis.
Methylation of DNA was assessed by McrBC digestion following a previously published protocol [81]. Genomic DNA was isolated from seedling shoots and cultured cells using the Qiagen DNeasy plant kit and divided into two equal samples. One sample was digested with McrBC, a methylation-dependent restriction enzyme that cuts DNA containing the sequence (A/G)5-methylcytosine (New England Biolabs, Beverly, MA, USA). Both digested and untreated samples were subjected to PCR amplification as described above. Successful amplification after digestion indicates a lack of methylation.
Motif search
The genome sequences 2 kilobases upstream of the annotated translation start site were retrieved from the TIGR database. Both DNA strands were searched for known plant motifs | 2014-10-01T00:00:00.000Z | 2005-11-30T00:00:00.000 | {
"year": 2005,
"sha1": "5b059781b657be2c43f05a231616c24fe85d046b",
"oa_license": "CCBY",
"oa_url": "https://bmcneurosci.biomedcentral.com/track/pdf/10.1186/1471-2202-6-69",
"oa_status": "GOLD",
"pdf_src": "CiteSeerX",
"pdf_hash": "f87c08ae8201b616738180c26569496701fbd5a3",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
31140682 | pes2o/s2orc | v3-fos-license | Significance of quality of care for quality of life in persons with dementia at risk of nursing home admission: a cross-sectional study
Background Quality of life in persons with dementia is, in large part, dependent on the quality of care they receive. Investigating both subjective and objective aspects of quality of care may reveal areas for improvement, and this information may ultimately enable persons with dementia to remain living in their own homes while maintaining quality of life. The aim of this study was to 1) describe self-reported quality of life in persons with dementia at risk of nursing home admission, 2) describe subjective and objective aspects of quality of care, and 3) investigate the significance of quality of care for quality of life. Methods A cross-sectional interview study design was used, based on questionnaires about quality of life (QoL-AD) and different aspects of quality of care (CLINT and quality indicators). The sample consisted of 177 persons with dementia living in urban and rural areas of Skåne County, Sweden. Descriptive and comparative statistics (Mann-Whitney U-test) were used to analyse the data. Results Based on Lawton's conceptual framework for QoL in older people, persons with pain showed significantly lower quality of life in the dimensions behavioural competence (p = 0.026) and psychological wellbeing (p = 0.006) compared with those without pain. Satisfaction with care seemed to have a positive effect on quality of life. Overall quality of life was perceived as high even though one-third of the persons with dementia had daily pain and had had a weight loss of ≥4% during the preceding year. Furthermore, 23% of the persons with dementia had fallen during the last month and 40% of them had sustained an injury when falling. Conclusion This study indicates a need for improvements in home care and services for persons with dementia at risk of nursing home admission. Registered nurses are responsible for nursing interventions related to pain, patient safety, skin care, prevention of accidents, and malnutrition. Therefore, it is of great importance for nurses to have knowledge about areas that can be improved, so that they can tailor interventions and thereby improve quality of care outcomes, such as quality of life, in persons with dementia living at home.
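The group comparisons reported in the Results above rest on the Mann-Whitney U-test; a minimal sketch is shown below with invented QoL-AD scores (the study's actual data are not reproduced here).

```python
# Sketch: Mann-Whitney U comparison of QoL-AD totals between persons
# with and without daily pain. Score vectors are hypothetical.
from scipy.stats import mannwhitneyu

qol_pain = [28, 31, 25, 30, 27, 26, 29]        # hypothetical QoL-AD totals
qol_no_pain = [33, 35, 30, 36, 32, 34, 31]

u, p = mannwhitneyu(qol_pain, qol_no_pain, alternative="two-sided")
print(f"U = {u}, p = {p:.3f}")
```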
Background
Persons with dementia at risk of nursing home admission need health care and social services of the highest quality to maintain or, better, improve their quality of life (QoL) [1,2]. Their need for security in the care experience, as well as support for the informal caregiver, should govern the design of health care. There is a need to further explore QoL in home care settings, since previous research has tended to focus more on QoL in nursing home environments [3][4][5]. Dementia is strongly related to old age and is a serious chronic condition affecting all aspects of daily living [6]. Because of deterioration in cognition, function and behaviour, persons with dementia have complex needs for health care and social services. Compared with older persons without dementia, they need more personal care, more hours of care and more supervision, all of which are associated with greater caregiver strain [7]. The need for help with activities of daily living (ADLs) starts early in the disease course and evolves constantly over time [6]. Receiving help with ADLs from others has been found to be significantly related to low QoL, as has not being able to remain alone at home without help [8].
Neuropsychiatric symptoms, cognitive impairment and dependency have been found to predict the risk of institutionalization in persons with dementia. Moreover, informal caregiver experiences of burden and/or strain seem to predict the care recipient moving into institutional care. Furthermore, when formal health care and social services are insufficient and fail to meet the needs of the person with dementia, the risk of nursing home admission appears to increase [9,10]. At present, home care is put forward as the best way of caring for persons with dementia, on the grounds that it both provides a better QoL and is more cost-effective compared with institutional care [11,12]. Still, research is contradictory regarding the person's QoL when remaining at home rather than moving into a nursing home, since the reasons for nursing home admission differ [9,10]. Some older people prefer home care to any other option, since home is a place of emotional and physical associations, memories, and comfort. Even so, when older people realize that a nursing home is a better option, leaving home can be disruptive and depressing [12].
To understand QoL in old age, not only the distress and impairments resulting from poor health but also non-health-related aspects need to be considered [13]. Quality of life is commonly viewed and assessed as a multidimensional concept [14][15][16][17] encompassing different domains (emotional, physical, social, and environmental) of a person's wellbeing [17]. Today these domains are considered crucial outcome measures for health service research. This reflects a concern with capturing the important ways in which health conditions affect a person's QoL, understanding that in turn can inform changes in treatment [18].
Lawton [16,17] describes a conceptual framework for QoL in older people, including four domains of importance (Fig. 1). The first domain is behavioural competence: how well a person functions in the domains of physical health, ADLs, cognition, and social behaviour. The second domain is environmental quality, which includes housing quality. The third domain is perceived quality of life and entails the evaluation of one's neighbourhood, family, friends, etc. The fourth domain is psychological wellbeing: the global aspects of mental health. Each of these domains is highly relevant to evaluating QoL in persons with dementia.
The QoL of vulnerable older people, such as persons with dementia, may be improved by, among other factors, high quality of care (QoC) [1,2]. Quality of care can be defined as the degree to which health care and social services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge [1]. Quality of care indicators are objective measures that reflect care standards and are used as guides to monitor and evaluate the QoC [19,20]. These indicators show how structure and processes impact on a person's wellbeing, health and/or QoL. Quality of care indicators can also bring about meaningful understanding that can lead to changes in treatment [18]. Important QoC indicators in the care of older people are pain, falls, pressure ulcers and weight loss, which indicate deterioration in chronic conditions such as dementia [21][22][23][24][25][26].
Another way of measuring QoC is satisfaction with care. There is no universally accepted definition of or measure for satisfaction with care, but a care recipient's satisfaction is nonetheless regarded as an important aspect of QoC. While some researchers focus on care recipients' satisfaction with the quality and type of health care services received, others focus on people's satisfaction with the health system more generally [27]. In this study, satisfaction with health care and social services is understood to concern care recipients' and informal caregivers' experience of utilized care in relation to their expectations and needs [18]. Therefore, by investigating both subjective and objective aspects of QoC, we may reveal areas for improvement regarding health care and social services at home. Such information may ultimately enable persons with dementia to remain living in their own homes while maintaining QoL, since QoL in persons with dementia is, in large part, dependent on the QoC they receive [1,2].
Aim
This study aims to 1) describe self-reported QoL in persons with dementia at risk of nursing home admission, 2) describe subjective and objective aspects of QoC and 3) investigate the significance of QoC for QoL in persons with dementia at risk of nursing home admission.
Design
A cross-sectional study design was used, based on structured interviews with persons with dementia at risk of nursing home admission and with their informal caregivers as proxy raters.
Setting
The responsibility for the Swedish welfare system is shared by the central government, county councils (n = 20) and municipalities (n = 290). The role of the central government is to establish principles and guidelines, and to set the political agenda for health and medical care. Access to formal health care and social services is based on assessments of individual needs, and services are available to all members of society on equal terms [28]. The county councils are largely divided into hospital care, out-patient specialist care and primary care, and are responsible for health care delivery such as the assessments leading to dementia diagnosis, treatment, and follow-up. The municipalities are responsible for providing assistance to those older persons who are receiving formal health care and social services at home or in day care, or who are living in a nursing home [29]. Most of the formal care providers working in home care in Sweden are assistant nurses [30] providing health care and social services, including help with instrumental and personal activities of daily living (IADLs and PADLs) and medical treatments [31]. Other formal care providers are registered nurses in charge of home nursing care (e.g. administering wound dressings and injections), social workers, and occupational therapists and physiotherapists in charge of rehabilitation and needs assessments [32]. The care of persons with dementia is guided by the Swedish National Guidelines for Care in Cases of Dementia [33].
Participants
Inclusion criteria in this study were: being ≥65 years old, living at home, and receiving formal health care and social services; being at risk of nursing home admission within six months, as assessed by a formal nursing caregiver familiar with the situation of the person with dementia; having a dementia diagnosis, with a Standardized Mini Mental State Examination (S-MMSE) [34,35] score ≤ 24; and having an informal caregiver visiting at least twice a month. Persons living in both urban and rural areas in Skåne County, Sweden, cared for by either a public home care organization or private home care entrepreneurs, were included in the sample. An exclusion criterion was Korsakoff's syndrome.
Of the 243 persons approached, 66 dropped out. In the time between being invited to participate in the study and being contacted by a researcher, twelve persons with dementia had moved into a nursing home and four had died. The remaining 50 drop-outs had either changed their minds or were too tired to participate. The drop-outs included 52% women with dementia, the same proportion as among those included in the study. No further information on the drop-outs is available. In total, 177 persons with dementia were included in the study.
Measurements
Background questions Table 1 presents participants' socio-demographic background characteristics, including age, gender, marital status, living conditions, type of dementia and information about whether the person with dementia was on a waiting list for nursing home placement. To assess cognitive impairment, we used S-MMSE scores [34,35]. The possible score ranges from 0 to 30, with higher scores indicating less cognitive impairment. Functional independence was measured using the Katz Index of Independence in Activities of Daily Living [36]. The possible total score on this scale ranges from 0 to 6, with higher scores indicating greater independence in ADL.
Quality of life (QoL)
Quality of life was assessed by the persons with dementia using the Quality of Life in Alzheimer's Disease (QoL-AD) scale [37,38]. The instrument consists of 13 items relating to physical health, energy level, mood, living situation, memory, relationships with spouse, friends, and family, self as a whole, ability to do chores around the house, ability to do things for fun, financial situation, and life as a whole. Each item is measured on a 4-point scale ranging from 1 = poor to 4 = excellent. The total score ranges from 13 to 52, with higher scores indicating a higher QoL.
Drawing upon Lawton's [15] model of QoL, the 13 items in the QoL-AD were sorted into four categories: behavioural competence contained the items physical health, energy level, memory, ability to do chores around the house, and ability to do things for fun. Environmental quality consisted of the items living situation and financial situation. Perceived quality of life contained the items relationships with spouse, friends, and relationships with family. Psychological wellbeing contained the items mood, self as a whole, and life as a whole.
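To make this grouping concrete, the sketch below (in Python) encodes the 13 QoL-AD items, the 13-52 total score, and the sorting of items into Lawton's four dimensions exactly as described above. The item labels, function name, and example profile are ours for illustration; the scores are invented and are not study data.

    # Hypothetical example; item names follow the QoL-AD as described
    # in the text, scores are invented for illustration.
    from statistics import median

    LAWTON_DIMENSIONS = {
        "behavioural_competence": [
            "physical health", "energy level", "memory",
            "ability to do chores", "ability to do things for fun",
        ],
        "environmental_quality": ["living situation", "financial situation"],
        "perceived_quality_of_life": [
            "relationship with spouse", "relationships with friends",
            "relationships with family",
        ],
        "psychological_wellbeing": ["mood", "self as a whole", "life as a whole"],
    }

    def qol_ad_scores(item_scores):
        """item_scores: dict mapping item -> score 1..4. Returns the
        13-52 total and the median score per Lawton dimension."""
        total = sum(item_scores.values())
        per_dim = {
            dim: median(item_scores[i] for i in items)
            for dim, items in LAWTON_DIMENSIONS.items()
        }
        return total, per_dim

    # Example profile: all items scored 3 except memory, which was
    # the lowest-scoring item in the study.
    example = {i: 3 for items in LAWTON_DIMENSIONS.values() for i in items}
    example["memory"] = 2
    print(qol_ad_scores(example))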
Quality of care (QoC)
Quality of care was assessed in three ways. Firstly, for the subjective judgement of the informal caregiver, we used an adapted version of the Client Interview (CLINT) instrument [2]. The CLINT for the home care setting consists of nine questions concerning the informal caregiver's satisfaction with the health care and social services received by the person with dementia. The questions concern the quality of interaction with staff, hygiene, cleaning, gardening, and food; there is also a general question about satisfaction with care. The response alternatives are "yes, always", "yes, usually", "sometimes", "seldom" and "never". The total score ranges from 9 to 45; the higher the score, the lower the rated QoC.
The internal consistency reliability for the CLINT for all nine items on the scale was α = 0.59. The item gardening had a high frequency of missing values (n = 134) and was therefore removed from our analysis. Cronbach's alpha after the exclusion was α = 0.70.
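For reference, Cronbach's alpha for a scale like the CLINT can be computed as in the following sketch. The responses are simulated, not the study data; the sketch only illustrates how alpha is recomputed after dropping a poorly fitting item, as was done with gardening.

    import numpy as np

    def cronbach_alpha(items):
        """items: 2D array, rows = respondents, columns = scale items.
        alpha = k/(k-1) * (1 - sum(item variances) / variance of total)."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    rng = np.random.default_rng(1)
    base = rng.integers(1, 6, size=(150, 1))        # shared satisfaction level
    clint = np.clip(base + rng.integers(-1, 2, size=(150, 9)), 1, 5)
    clint[:, 8] = rng.integers(1, 6, size=150)      # a poorly fitting 9th item

    print("alpha, all 9 items:", round(cronbach_alpha(clint), 2))
    print("alpha, item 9 dropped:", round(cronbach_alpha(clint[:, :8]), 2))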
Our second way of assessing QoC was by asking one question about dementia-specific care: "Do you or your relative use any dementia-specific care (such as day care or respite care)?" The answer alternatives to this question were "yes" and "no". The answer "yes" was followed up by one question about satisfaction with received dementia-specific care. Response alternatives were: "very dissatisfied", "dissatisfied", "neither satisfied nor dissatisfied", "satisfied" and "very satisfied".
As a third way of assessing QoC, we evaluated QoC indicators including presence of pain, fall, pressure ulcer, and weight loss. Pain was evaluated by asking how often the person with dementia had expressed signs of pain in the last seven days. Response alternatives were "no pain", "no daily pain" and "daily pain". The question regarding fall was, "Has the person with dementia fallen in the past month?" Response alternatives were "yes" and "no". "Yes" for fall was followed up with a question to find out if the person had sustained injury when falling, with response alternatives "yes" and "no". In addition, questions about presence of pressure ulcers and weight loss of ≥4% in the previous year were answered by "yes" or "no".
Procedure for the data collection
Data were collected between January 2011 and January 2013. The recruitment of participants was done through 15 contact persons (registered nurses specialized in dementia care) in twelve municipalities. The contact persons asked formal caregivers, i.e. registered nurses and social workers who were well known to the person with dementia, to give verbal information about the study to the person with dementia and their informal caregiver. They were also asked whether a researcher could contact them to give more detailed information about the study and the implications of participation. The formal caregivers gave this information back to the contact persons, who in turn contacted the researchers. After verbal permission, the informal caregiver was contacted by phone by a researcher, who gave detailed information about the study and asked for verbal consent to participation; the time and place of the interview were then agreed. Just before the interview, the researcher again clarified the purpose of the interview, both verbally and in writing, and gave the participants the opportunity to ask questions before signing the informed consent. Nine specifically trained researchers interviewed the persons with dementia and informal caregivers in face-to-face interviews in the person with dementia's own home or at a day care facility. The researchers asked the questions, starting with the person with dementia answering the questionnaires S-MMSE and QoL-AD. The remaining questionnaires were answered by the informal caregiver as a proxy rater.
Statistical analyses
Not all questionnaires were filled out or answered completely, and several individual items had missing data. When the total score was calculated, a maximum of one missing item in the QoL-AD and CLINT instruments was replaced by the mean score of the participant's remaining items. Where more than one item was missing, no total score for the QoL-AD or CLINT, or for any of the individual QoL-AD dimensions, was calculated. Since the item gardening was excluded from the CLINT, the total score in this study ranged from 8 to 40.
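The missing-item rule just described could be implemented as in this short sketch; the function name and data layout are hypothetical, not taken from the study.

    def total_with_imputation(item_scores):
        """item_scores: list of numbers with None for missing items.
        At most one missing item is replaced by the mean of the rest;
        with more than one missing, no total score is computed."""
        missing = [i for i, s in enumerate(item_scores) if s is None]
        if len(missing) > 1:
            return None
        present = [s for s in item_scores if s is not None]
        if missing:
            return sum(present) + sum(present) / len(present)
        return sum(present)

    # One missing item: imputed with the mean of the other twelve.
    print(total_with_imputation([3, 3, None, 2, 4, 3, 3, 3, 3, 3, 3, 3, 3]))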
The QoC indicators fall, injuries from falling, and weight loss of ≥4% were dichotomized into "present" and "not present". Pain was dichotomized into "no pain" ("no pain" and "no daily pain") and "daily pain". The median total CLINT score, 14, was used to dichotomize satisfaction with care into two groups, "high satisfaction" (score 0-13) and "low satisfaction" (score 14-40). Responses to the dementia-specific care questions were dichotomized into "yes" and "no". Since the sample was not normally distributed, the Mann-Whitney U-test was applied to compare the QoC indicator and satisfaction-with-care groups on the four QoL dimensions, as well as on the total QoL-AD score. Only one person had a pressure ulcer, and this indicator was therefore excluded from the analysis. A p-value of ≤0.05 was considered statistically significant. For data analysis, IBM SPSS Statistics for Windows, version 23.0 (IBM Corp., Armonk, NY, USA), was used.
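As a sketch of the main comparison, the code below applies the two-sided Mann-Whitney U-test to QoL-AD totals for the two pain groups. The data are simulated with the study's group sizes (121 and 54) purely for illustration; the effect size and seed are invented, and the study itself used SPSS rather than Python.

    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(2)

    # Simulated QoL-AD totals (range 13-52) for the two pain groups.
    qol_no_pain = np.clip(rng.normal(37, 4, size=121).round(), 13, 52)
    qol_daily_pain = np.clip(rng.normal(34, 4, size=54).round(), 13, 52)

    # Two-sided Mann-Whitney U-test, as used for the non-normal sample.
    u, p = mannwhitneyu(qol_no_pain, qol_daily_pain, alternative="two-sided")
    print(f"U = {u:.0f}, p = {p:.3f}")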
Results
The participants consisted of 52% women aged 65-98 years. Most of them were either married or widowed and the most common living condition was living together with the informal caregiver, followed by living alone. Alzheimer's disease was the most reported dementia diagnosis, followed by vascular dementia. Fourteen per cent of participants were on a waiting list for nursing home placement. The median total KATZ-ADL score was 4; the median S-MMSE score was 16 ( Table 1).
The persons with dementia had a total median QoL score of 36 (first quartile to third quartile, Q1-Q3 = 33-39). The items in the QoL-AD reached a median score of 3, except memory, which received a median score of 2. After we grouped the items into Lawton's four dimensions of QoL, the results showed a median score of 3 for all dimensions (Table 2).
The informal caregiver's total median CLINT score was 14 (Q1-Q3 = 11-16), indicating overall satisfaction with received health care and social services. Informal caregivers were somewhat more satisfied with staff being honest, food portions and overall health care and services received, than with the other indicators ( Table 3).
Regarding the QoC indicators, 31% of the persons with dementia had daily pain and 29% had suffered a weight loss of ≥4% during the previous year. Furthermore, 23% of the persons with dementia had fallen during the last month and 40% (16/40) of them had sustained injury when falling (Table 3).
Comparing the QoL dimensions with the QoC indicators revealed that QoL in the dimensions behavioural competence and psychological wellbeing was significantly lower (z = −2.2, p = 0.026, and z = −2.8, p = 0.006, respectively) in persons with dementia expressing signs of daily pain (n = 54) compared with those showing no pain (n = 121). The results revealed similar differences whether pain less than once a day was included in or excluded from the category pain (p = 0.029 and p = 0.006, respectively). No other significant differences were found between the QoC indicators and the QoL dimensions, or the QoL-AD total score (Table 4).
Comparing those with high satisfaction with received health care and social services (CLINT score 0-13, n = 60) with those with lower satisfaction (CLINT score 14-40, n = 60) showed that the high-satisfaction group had significantly higher QoL in the dimension environmental quality (z = −2.1, p = 0.039) and a significantly higher QoL-AD total score (z = −2.8, p = 0.006). However, there were no significant differences in QoL between those receiving dementia-specific care (n = 140) and those not receiving dementia-specific care (n = 36) (Table 4).
Discussion
Overall, the persons with dementia in this study reported a high total QoL-AD score as well as high scores in the four domains. It should be noted that 68% of the study population co-habited with their informal caregivers, which may have affected the results. Previous research has found that living alone is significantly associated with lower QoL [8,40], while a stronger social network contributes to higher QoL [41]. Furthermore, high QoL in persons with dementia living in the northern and western parts of Europe is not an unexpected result. Previous research reports that persons aged 65 years or older in the Nordic countries are generally more satisfied with life compared with the average for their peers in other European countries [42,43]. Additionally, the informal caregivers reported high satisfaction with health care and social services, according to both the CLINT scores and the responses regarding dementia-specific care. The results from this study also reveal that satisfaction with health care and social services seems to have a positive effect on the QoL total score and the dimension environmental quality. However, this significance was not found for those receiving dementia-specific care.
Regarding the significance of QoC indicators for QoL, the results reveal that one-third of the persons with dementia in this study had daily pain and that these persons had significantly lower QoL in the dimensions behavioural competence and psychological wellbeing compared with those without daily pain. The dimension behavioural competence contains the individual's functions and capacity for adaptive behaviour [15] and will probably be further reduced by pain. In the dimension psychological wellbeing, the items mood, self as a whole and life as a whole were negatively affected by pain. We conclude that pain in persons with dementia will probably lead to negative effects such as anxiety, depression, agitation and worrying [15]. Pain has previously been found to be almost twice as common in persons with dementia as in persons without dementia [22]. Pain in persons with dementia is often communicated via non-verbal behaviour and presents as behavioural disturbances; the resulting difficulties in detecting it may lead to inadequate treatment with neuroleptics or sedatives rather than analgesic drugs, concealing pain-related symptoms and consequently hindering tailored treatment of pain [21,22]. Thus, by identifying and treating underlying causes of pain, we may resolve problematic behaviour, relieve pain, and improve QoL in persons with dementia.

Table 3 Proxy rating of quality of care, by next of kin

Table 4 Quality of life self-reported by the persons with dementia, and proxy assessment of quality of care
Almost one-fourth of the persons with dementia in this study had fallen in the preceding month, and a substantial percentage of these (40%) had sustained an injury when falling. However, our study could not detect any effect of these QoC indicators on QoL. Earlier research has found that approximately 10% of falls in older people cause injury [44], making the injury frequency in this study four times higher than that in the general population of people aged >65 years. Previous research has also found a significant relationship between falling and dementia [23] and reports that the risk of falling is doubled for persons with dementia compared with older people without cognitive impairment [44]. Moreover, Sweden has been identified as having one of the highest fall-related injury rates (including injuries such as fractures) in the world [44]. The reason is not clear but has been suggested to be associated with heterogeneity in fracture probability and reduced sunlight exposure [44].
The inclusion criterion of being at risk of nursing home admission within six months could explain why one-third of the persons with dementia in this study had lost ≥4% of their weight in the preceding year. Weight loss is commonly associated with dementia and seems to increase with the severity and progression of the disease [45]. Persons with dementia develop several feeding difficulties, such as changed dietary habits and physical changes, as well as difficulties in preparing food, eating and swallowing [45]. Weight loss is therefore an important predictor of institutionalization [25] and mortality [45], but was not found to be significant for QoL in this study.
Methodological limitations
The results from this study should be interpreted with caution because of some limitations. Firstly, we report on a specific sample: persons with dementia at risk of nursing home admission. Thus, our results cannot be generalized to all persons with dementia receiving home care. Secondly, the sample may not be representative of the whole of Sweden, since the participants were recruited in a selected geographic area and not randomly selected from the national population. Furthermore, home care can differ between Swedish municipalities, since each municipality is independent when it comes to decisions about the provision of health care and social services. Consequently, the results may not be representative of all municipalities, thus complicating the generalization of the results. On the other hand, the sample was selected from twelve municipalities in both rural and urban areas, as well as from both public and private home care organizations.
One possible explanation for the high satisfaction with received health care and social services at home, and for the self-reported high QoL, could be the loss of the 50 drop-outs who either had changed their minds or were too tired to participate. It is possible that they would have rated QoL and QoC lower, and data from their point of view could have affected the results.
Other aspects to consider are the informal caregivers' dependency on formal care and services at home and their possible hesitation to evaluate formal care and services negatively. These aspects could have affected the results and may have led to underreporting of dissatisfaction with care and services. However, to minimize this effect, the interviews were carried out independently of the care and services delivered to the persons with dementia.
In dementia research, self-report of QoL is not possible in many cases, as dementia affects cognitive abilities, which raises doubts about the ability of persons with dementia to make valid assessments and give reliable answers regarding their QoL. However, there is a growing body of evidence suggesting that persons with mild to moderate dementia can complete standardized questionnaires on self-reported QoL [37,46]. The QoL-AD is a self-reported, multi-dimensional instrument specifically designed for persons with Alzheimer's disease [37]. It has been suggested to be the most widely used self-report QoL instrument internationally because of its ease and rapidity of administration (10-15 min) and its focus on QoL domains assessed to be important for cognitively impaired older persons [37,47]. It has been found to be a reliable and valid self-report instrument for persons with Alzheimer's disease with Mini Mental State Examination (MMSE) scores >10 [37,48] and is appropriate for use in persons with dementia with MMSE scores as low as 3 [48]. The sample in this study had a median S-MMSE score of 16. Owing to cognitive impairment, 13 persons with dementia were unable to answer the QoL-AD questions, and nine did not answer the item "relations with wife/husband", probably because they were either widowed or not married.
A factor analysis would have been possible, since the numbers of items and participants were sufficient (10 times more people than items). However, this was judged not to be applicable in this study, firstly because the range of variation (interquartile range) was at most only 2 steps. The second reason was that the sample is specific, i.e. persons with dementia at risk of nursing home admission; thus, the result of a factor analysis might be difficult to generalize to all persons with dementia receiving home care. However, this could be of interest to analyse further in future studies.
This study used the informal caregivers' perceptions of QoC instead of obtaining responses regarding QoC from the persons with dementia, which would have been a more adequate perspective. However, the difficulties described above in using persons with dementia as respondents were the reason for using informal caregivers as proxy raters. It should be noted that proxy ratings may be influenced by the proxy's own expectations, burden and depression [37], and this may have affected the results.
Another way of investigating QoC could have been to use the interRAI home care quality indicators based on the MDS/RAI [49]. However, the MDS/RAI is not commonly applied in a Swedish context, since no translation of the form or the manual has yet been published.
Conclusions
In this study, we found a high overall self-reported QoL in persons with dementia and a general satisfaction with received health care and social services at home. With regard to the QoC indicators, only pain was significantly related to lower QoL. However, the results indicate a need for improvement of health care and social services, since one-third of the persons with dementia had daily pain and had suffered a weight loss of ≥4% during the preceding year. Furthermore, nearly one-fourth had fallen during the preceding month, and 40% of these had sustained an injury when falling. Registered nurses are responsible for nursing interventions related to pain, patient safety, skin care, prevention of accidents and malnutrition. Therefore, from a nursing perspective, this knowledge about improvable aspects of dementia care is of great importance to enable the tailoring of nursing interventions, thereby improving QoC outcomes such as QoL in persons with dementia.
"year": 2017,
"sha1": "7dd1aa4700b207831dc2068070b451ac992f876b",
"oa_license": "CCBY",
"oa_url": "https://bmcnurs.biomedcentral.com/track/pdf/10.1186/s12912-017-0230-6",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7dd1aa4700b207831dc2068070b451ac992f876b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
264313799 | pes2o/s2orc | v3-fos-license | AI HELPING SCIENCE: THE ‘SHAPE’ OF THINGS TO COME
When we started working with artificial intelligence (AI) more than a decade ago, people were skeptical about whether this technology would develop enough in the foreseeable future to do anything useful. But we held on to our faith in AI’s potential to benefit humanity. We used games like chess, Go and Atari to train and test our AI systems to become smarter and more capable. In 2016, we decided to use our smart systems to try to solve a 50-year-old fundamental problem in biology, called the protein-folding problem. This was the birth of AlphaFold, our AI system that predicts the three-dimensional structures of proteins based on their amino acid sequence. In this article, you will learn about AlphaFold’s achievements, which demonstrate the power of AI to dramatically accelerate scientific discovery and benefit society.
which is considered to be an AI-based solution to the 50-year grand challenge of predicting the structures of proteins based on their amino acid sequence. AlphaFold has been used to create the most accurate and complete picture of the human proteome, the set of all the proteins in the human body, with enormous potential to accelerate biological and medical research.
Graphical Abstract (1) We started our journey by training our AI systems to play and win classic computer games. (2) We then moved to playing more complicated games against real people and, in 2016, our system won a challenge match of Go against the reigning world champion. (3) Shortly after, we began to tackle the protein-folding problem and trained our system on known protein structures. (4) To further train our system, we taught it to use additional databases containing information about how proteins evolved between species. (5) In 2020, our system achieved 92.4% average accuracy in the prediction of the three-dimensional structures of proteins. (6) We hope that our system will contribute to the development of new drugs, new tools for addressing climate change, and help scientists understand these tiny molecular machines that are the building blocks of life.

AMINO ACIDS
The building blocks of proteins.

Proteins are made of small building blocks called amino acids (to learn more about proteins and their composition, see this video). You can think of a protein like a string of beads, where the amino acids are the beads. There are 20 different amino acids, and they can be arranged in various combinations to make up a protein string. Proteins are made in a "factory" inside cells called the ribosome (to learn more about the ribosome, read this Nobel Collection article). In the ribosome, instructions from our genetic code (our DNA) get translated into chains of amino acids. Then, something amazing happens: these strings of amino acids fold up into complex, three-dimensional structures that in turn determine the functions proteins can perform.
A 50-YEAR-OLD PROBLEM
Since the early 1970s, scientists have been trying to understand exactly how the particular sequence of an amino acid chain results in the particular three-dimensional structure of a protein. This is known as the protein-folding problem [ ]. Because proteins are so important
PROTEIN-FOLDING PROBLEM
A scientific question posed in the 1970s asking how proteins fold to their three-dimensional structure based on their amino acid sequence.
for living things, the protein-folding problem was considered one of the most important problems in biochemistry. When scientists study any protein, they can easily determine which amino acids that protein contains, and even the exact order of amino acids in the protein string. But it has been much more difficult over the years to figure out the final three-dimensional shape that the string of amino acids folds into, to create the working protein machine. After all, proteins are much too small to simply examine under the microscope to see their shapes.
To figure out the three-dimensional structure of proteins, scientists have traditionally used a technique called X-ray crystallography (Figure ).

X-RAY CRYSTALLOGRAPHY
An experimental method for determining the three-dimensional structure of a protein using X-rays.

This involves crystallizing the protein, which means "freezing" many copies of it in a repeating 3D pattern. The crystallized protein is then examined using a huge machine that bounces high-energy X-rays off the protein (Figure A). Finally, the researcher must look at the patterns produced by those X-rays and perform very complex math to interpret the results and determine the actual structure of the protein. This process can take up to a few years for each protein! Over the past decades, the structures of about 200,000 proteins have been determined by methods like X-ray crystallography, cryo-electron microscopy (to read more about cryo-electron microscopy, see here), and nuclear magnetic resonance analysis, and those structures have been made openly available in the Protein Data Bank.
While this process has been successful, it is clearly too slow and expensive, especially if we want to find the structures of all of the more than 200 million proteins that we know of. This is over 1,000 times more proteins than the number of structures we have determined so far! Why is it so challenging to figure out the final three-dimensional shape of a protein? Well, just like a shoestring, there are an enormous number of ways that a chain of amino acids could potentially fold. Even a small protein, composed of just 100 amino acids, could be in as many as 10^300 possible configurations (1 followed by 300 zeroes; that is more than the number of stars in the universe!). With so many possible ways to fold a protein, how could scientists ever know which one is correct without doing time-consuming and expensive experiments like X-ray crystallography? This is why, at Google DeepMind, we decided to use the power of artificial intelligence, the ability of computers to learn from examples and gain insights to solve complex problems, to tackle the protein-folding problem.

ARTIFICIAL INTELLIGENCE
The ability of computers to learn like the human brain does and mimic human intelligence.

This approach has proven very useful and saves a lot of time, money, and human effort while also giving us new insights into how proteins work (Figure B).

Figure caption: (A) Traditionally, the structure of proteins has been determined by experiments that use very large, expensive machines to bounce X-rays off a crystallized protein (X-ray crystallography), followed by complex math to interpret the results. (B) Our approach at Google DeepMind is to use sophisticated AI systems that can use known protein structures and protein databases to learn to predict the structures of proteins that have not been experimentally tested yet. This approach saves a great deal of time and resources.
FROM WINNING GAMES TO SOLVING SCIENTIFIC PROBLEMS
Our approach at Google DeepMind is to combine our passion for AI and our passion for science to find ways for AI to help humanity. At first, we taught our systems how to play simple computer games by teaching them the rules of the games and letting them improve through experience. Our next goal was to make these systems win more complex games, as a steppingstone to tackling difficult real-world problems. This included training an AI model to play a board game called Go, which is a very complex game with more than 10^170 possible board configurations (more than the number of atoms in the known universe!). For a few years, we developed and tested AI systems in game situations, to see how well they were doing and to keep training them to get better. In 2016, one of our systems called AlphaGo defeated a world champion Go player named Lee Sedol, an achievement that was previously considered unimaginable. This was a huge steppingstone, and it proved that our AI systems were smart enough to deal with complex problems.
Google DeepMind has proud roots in scientific research, and so the protein-folding problem was a natural next step for us (Figure ). Shortly after AlphaGo's achievement in 2016, we assembled a team that started working on predicting the structures of proteins from their amino acid sequences. This new AI system was called AlphaFold (Figure A). AlphaFold was designed to learn from existing information about protein structures that had been published in open databases like the Protein Data Bank. Overall, we had access to over a hundred thousand known protein structures, which we used to train our AI system. We designed AlphaFold to process information somewhat similarly to the way the human brain does, using a computer science idea called artificial neural networks (to learn more about artificial neural networks and machine learning, read this Frontiers for Young Minds article). Like the human brain, AlphaFold can learn from experience and improve its performance. The more examples of protein structures we gave it, the better it got at predicting the structures of new proteins. AlphaFold also draws on additional protein databases, comparing its target with sequences that are evolutionarily related to the protein AlphaFold is making a prediction for; together those sequences contain clues about the structure. The shapes of proteins determine the functions they can perform, and many organisms must perform the same biological function, such as carrying oxygen in the blood. This means that the three-dimensional structures of all oxygen-carrying proteins from different organisms probably stayed similar over the course of evolution, even if their underlying amino acid sequences changed. For that to happen, it means that whenever one amino acid changed in one place in the protein, another amino acid in the protein, the one closest to it in the three-dimensional structure, also had to change accordingly, to preserve the original shape. We call this co-evolution of amino acids, and by feeding this information into AlphaFold, we allowed the system to detect hidden relationships between amino acids.
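The co-evolution signal described above can be illustrated with a toy computation. The sketch below is not AlphaFold's method; it is a minimal, self-contained example (with a made-up alignment) of scoring pairs of positions in a multiple sequence alignment by mutual information, so that positions that change together, and are therefore likely close in the folded structure, get high scores.

    from collections import Counter
    from itertools import combinations
    from math import log2

    # Toy multiple sequence alignment (MSA): one related protein per row,
    # one alignment column per position. The sequences are invented.
    msa = ["ARNDA", "ARNEA", "GKNDV", "GKNEV", "ARNDA", "GKNEV"]

    def mutual_information(col_i, col_j):
        """Mutual information (in bits) between two alignment columns."""
        n = len(col_i)
        pi, pj = Counter(col_i), Counter(col_j)
        pij = Counter(zip(col_i, col_j))
        return sum((c / n) * log2((c / n) / ((pi[a] / n) * (pj[b] / n)))
                   for (a, b), c in pij.items())

    columns = ["".join(seq[k] for seq in msa) for k in range(len(msa[0]))]
    scores = {(i, j): mutual_information(columns[i], columns[j])
              for i, j in combinations(range(len(columns)), 2)}

    # Columns that vary together (here 0, 1 and 4) score 1 bit each;
    # the constant column 2 scores 0 with everything: no signal there.
    for (i, j), s in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"positions {i} and {j}: MI = {s:.2f}")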
Once we entered enough information into AlphaFold, the system could predict basic information about the shape of a protein, including the distances (Figure D) and angles between every two amino acids in the protein and the certainty of the prediction (how reliable it is). This information was "recycled" a few times within the system, and in each round AlphaFold improved its prediction. Finally, it used its basic idea of the protein shape to predict the 3D position of every atom in the protein structure (Figure E). When we started, we tested AlphaFold's predictions on proteins whose structures were already known and let AlphaFold improve by learning from its errors and repeatedly correcting itself until its predictions became much better. After it was trained, we used the same network to run on unsolved structures and provide predictions for them.
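The "recycling" step can be pictured as a loop in which the system's previous output is fed back in as an extra input. The sketch below is a deliberately simplified, hypothetical stand-in (not AlphaFold's real architecture): the refine function plays the role of the learned network, and a predicted pairwise-distance matrix is recycled for a few rounds, improving at each step.

    import numpy as np

    rng = np.random.default_rng(0)

    def refine(features, previous_estimate):
        """Stand-in for the network: nudges the current distance
        estimates toward what the input features suggest. In the real
        system, a trained neural network plays this role."""
        return previous_estimate + 0.5 * (features - previous_estimate)

    n_residues = 10
    true_distances = np.abs(rng.normal(size=(n_residues, n_residues)))
    true_distances = (true_distances + true_distances.T) / 2  # symmetric

    estimate = np.zeros((n_residues, n_residues))  # initial guess
    for round_number in range(4):  # "recycling" the previous output
        estimate = refine(true_distances, estimate)
        error = np.abs(estimate - true_distances).mean()
        print(f"round {round_number}: mean error = {error:.3f}")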
THE EVOLUTION OF ALPHAFOLD
One exciting milestone in our journey with AlphaFold occurred in 2018, when AlphaFold came first in a biennial protein structure prediction challenge called CASP. AlphaFold received an average accuracy score of around 60 out of 100 on the hardest proteins [ ], which was a great leap from the previous best score (which was about 40). This made us even more confident in AlphaFold's capabilities, and we decided to improve the system even further for the next assessment. In our next version, called AlphaFold 2, we incorporated more of our scientific knowledge about the physics and geometry of amino acid chains into the system's learning process and aligned it with everything we understood about the protein-folding problem. Essentially, we taught AlphaFold how to perform MSA analysis, and then used that improved MSA analysis to gain a better understanding of protein folding (and therefore the physics and geometry of amino acid chains). This back-and-forth flow of information improved AlphaFold 2's performance.
In the 2020 CASP structure prediction challenge, AlphaFold 2 won with an astounding accuracy score of 92.4 out of 100 [ ]. This is approaching the accuracy of determining protein structures using experiments such as X-ray crystallography, but without the high time and resource costs.
Figure caption: Did you know that almost all the processes happening in your body are performed by tiny biological machines called proteins? Proteins help us to see, to move, to digest food, to fight diseases, and to perform many other essential actions needed to keep organisms like us alive and healthy (to learn more about proteins, check out this video). | 2023-10-20T15:29:30.878Z | 2023-10-17T00:00:00.000 | {
"year": 2023,
"sha1": "35b7ae26084841a15e445007f2ba63c458d080bd",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/frym.2023.1241472/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "cd8b2eb0e551e4c4f1dc2a6ff79f2973ea43140c",
"s2fieldsofstudy": [
"Computer Science",
"Biology"
],
"extfieldsofstudy": []
} |
52497079 | pes2o/s2orc | v3-fos-license | The super tree property at the successor of a singular
For an inaccessible cardinal $\kappa$, the super tree property (ITP) at $\kappa$ holds if and only if $\kappa$ is supercompact. However, just like the tree property, it can hold at successor cardinals. We show that ITP holds at the successor of the limit of $\omega$ many supercompact cardinals. Then we show that it can consistently hold at $\aleph_{\omega+1}$. We also consider a stronger principle, ISP, and certain weaker variations of it. We determine which level of ISP can hold at a successor of a singular. These results fit in the broad program of testing how much compactness can exist in the universe, and obtaining large cardinal-type properties at smaller cardinals.
Introduction
A major theme in set theory is the question of how much compactness can consistently exist in the universe. We regard an object as satisfying a compactness principle if whenever a property holds for all strictly smaller substructures of the object, then this property holds for the object itself. Such principles typically follow from large cardinals, but can often occur at successor cardinals as well. So the aforementioned question can be rephrased as: What combinatorial properties of large cardinals can consistently hold at small ones?
A key instance of such a combinatorial principle is the tree property. A regular cardinal µ has the tree property if every tree of height µ, all of whose levels have size less than µ, has a cofinal branch. For inaccessible µ, the tree property is equivalent to weak compactness of µ; but this combinatorial principle can consistently hold at successor cardinals. Results of Mitchell and Silver show that the tree property at ℵ 2 is equiconsistent with a weakly compact cardinal. Mitchell's 1972 proof [9] obtaining the consistency of the tree property at ℵ 2 initiated a long, ongoing project in set theory: Obtain the tree property at all regular cardinals greater than ℵ 1 .
There are strengthenings of the tree property that capture the essence of larger cardinals in a similar way. Jech defined a principle, which we call the strong tree property, that characterizes strongly compact cardinals. Then Magidor isolated a further strengthening, ITP (or the super tree property), that can characterize supercompact cardinals. We will give the precise definitions in the next section, but the highlight is the following, for µ inaccessible: (1) (Jech, 1973 [6]) The strong tree property holds at µ if and only if µ is strongly compact. (2) (Magidor, 1974 [7]) ITP holds at µ if and only if µ is supercompact.
Much as with the tree property, these combinatorial characterizations can consistently hold at successor cardinals. Indeed, if one starts with a supercompact λ and forces with the Mitchell poset to make the tree property hold at λ = ℵ 2 = 2 ω , then even ITP will hold at ℵ 2 in the generic extension. Similar remarks hold for the strong tree property.
So we have an even more ambitious version of the project to obtain the tree property everywhere: Can we consistently obtain the strong or super tree properties at every regular cardinal greater than ℵ 1 ?
Knowing that the tree property and its strengthenings can be obtained at ℵ 2 , the natural follow-up is what happens at higher ℵ n 's. Below we summarize some history on the subject: (1) (Abraham, 1983 [1]) Starting from a supercompact and a weakly compact above it, one can force the tree property simultaneously at ℵ 2 and ℵ 3 . (2) (Cummings-Foreman, 1998 [2]) Starting from ω-many supercompact cardinals, the tree property can be forced to hold simultaneously at every ℵ n , for n > 1. (3) (Fontanella [3]; Unger [12], independently, 2013-14) In the Cummings-Foreman model, ITP holds at every ℵ n , for n > 1.
A classical theorem of Aronszajn shows the tree property, hence also its strengthenings, fail at ℵ 1 . An old generalization by Specker of this theorem implies that even obtaining the tree property at ν + and ν ++ with ν strong limit requires a violation of the singular cardinals hypothesis (SCH) at ν. We note that obtaining this situation just at ν = ℵ ω is an open problem. The crux of this program will thus likely be encountered at successors of singulars.
Our focus here is on this region. The first result in that direction is by Magidor and Shelah [8], who showed that if ν is a singular limit of supercompact cardinals, then µ = ν + has the tree property. They also showed that the tree property can be forced at ℵ ω+1 . The original large cardinal hypothesis included a huge cardinal. It was later reduced to ω-many supercompact cardinals, using a Prikry construction in [11]. More recently, Neeman [10] showed that from this situation, one can force µ = ℵ ω+1 to have the tree property with a product of Levy collapses.
A few years ago, Fontanella [4] generalized both arguments in [8] and [10], showing that the strong tree property holds at the successor of a singular limit of strongly compact cardinals, and also can consistently hold at ℵ ω+1 .
In this paper we show that even ITP can hold at the successor of a singular cardinal. We prove the following theorems: Theorem 1.2. Suppose that ⟨κ n | n < ω⟩ is an increasing sequence of supercompact cardinals, ν = sup n κ n , and µ = ν + . Then ITP holds at µ.
Then we show this can be forced at smaller cardinals: Theorem 1.3. Suppose that ⟨κ n | n < ω⟩ is an increasing sequence of supercompact cardinals, ν = sup n κ n , and µ = ν + . Then there is a forcing extension in which µ = ℵ ω+1 and ITP holds at ℵ ω+1 .
We also consider a strengthening of ITP, the so-called ineffable slender list property (ISP). This principle has been of interest in connection with the SCH, as well as with the consistency strength of the proper forcing axiom (PFA). Viale and Weiß showed that under PFA, ISP holds at ℵ 2 [14]. In [13], Viale showed that ISP at ℵ 2 together with stationarily many internally unbounded models implies that SCH holds; it is still open whether ISP by itself is enough. Viale and Weiß gave a striking application [14], showing that any standard iteration to force PFA must start with a strongly compact cardinal. If in addition the iteration is proper, then there must be a supercompact cardinal in the ground model.
This paper is organized as follows. In section 2, we give the definitions of the principles discussed above and fix some notation. In section 3, we prove that ITP holds at the successor of a singular limit of supercompacts. In section 4, we prove the theorems regarding ISP, and in section 5 we prove the consistency of ITP at ℵ ω+1 . We then conclude with some open questions.
Preliminaries and notation
In this section we define the notion of lists and the strengthenings of the tree property discussed in the last section.
Definition 2.1. Suppose µ is a regular cardinal and λ ≥ µ. We say that d = ⟨d z | z ∈ P µ (λ)⟩ is a P µ (λ)-list if for all z ∈ P µ (λ), d z ⊆ z. A tree-like structure is obtained from a list by regarding the levels, indexed by z ∈ P µ (λ), as consisting of restrictions of d y 's above: Lev d (z) := {d y ∩ z | y ∈ P µ (λ), z ⊆ y}. The list d is thin if |Lev d (z)| < µ for all z ∈ P µ (λ). Note that if µ is inaccessible, then every P µ (λ)-list is thin.
Definition 2.2. We say TP(µ, λ) holds if for every thin P µ (λ)-list d, there is a cofinal branch b through d, that is, a set b ⊆ λ with b ∩ z ∈ Lev d (z) for all z ∈ P µ (λ). The strong tree property holds at µ if TP(µ, λ) holds for all λ ≥ µ.
Thus the levels of a list d consist of small approximations to a subset of λ, and a cofinal branch is a subset of λ that is approximated at every level. Note that TP(µ, µ) is equivalent to the tree property at µ.
An ineffable branch for a P µ (λ)-list d is a set b ⊆ λ such that {z ∈ P µ (λ) | d z = b ∩ z} is stationary. We say ITP(µ, λ) holds if every thin P µ (λ)-list has an ineffable branch, and ITP holds at µ if ITP(µ, λ) holds for all λ ≥ µ. A further strengthening of ITP called ISP was defined by Weiß in [16]: here thinness is relaxed to slenderness, a local approximation property of the list with respect to elementary submodels (the precise, parametrized version is given in Definition 4.1 below), and ISP(µ, λ) asserts that every slender P µ (λ)-list has an ineffable branch.
All thin lists are slender, though the converse can fail. Consequently, ISP implies ITP. Moreover, like ITP, ISP can consistently hold at ℵ 2 . For example, in [14] it is shown that PFA implies ISP holds at ℵ 2 .
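For orientation, the notions just introduced can be collected in one display (this is only a restatement in LaTeX of the definitions above):

\begin{align*}
\mathrm{Lev}_d(z) &= \{\, d_y \cap z \mid z \subseteq y \in P_\mu(\lambda) \,\},\\
b \text{ is a cofinal branch} &\iff b \cap z \in \mathrm{Lev}_d(z) \text{ for all } z \in P_\mu(\lambda),\\
b \text{ is an ineffable branch} &\iff \{\, z \in P_\mu(\lambda) \mid d_z = b \cap z \,\} \text{ is stationary}.
\end{align*}

Since an ineffable branch is in particular a cofinal branch (given z, pick w ⊇ z in the stationary set; then b ∩ z = d w ∩ z ∈ Lev d (z)), and thin lists are slender, we get ISP(µ, λ) implies ITP(µ, λ), which implies TP(µ, λ).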
Viale and Weiß gave a characterization of ISP via guessing models. We will discuss this in more detail later, together with a refinement of slenderness to analyze what happens when µ is the successor of a singular cardinal.
We use standard notation. For conditions in a forcing poset P, p ≤ q denotes that p is stronger than q. We say that P is κ-closed to mean that decreasing sequences of length less than κ have a lower bound, and P is <κ-distributive if it adds no new sequences of length less than κ.
ITP at the successor of a singular limit of supercompacts
We begin with the simplest instance of the super tree property, namely ITP(µ, µ).
Theorem 3.1. Suppose that ⟨κ n | n < ω⟩ is an increasing sequence of supercompact cardinals, ν = sup n κ n , and µ = ν + . Then ITP(µ, µ) holds.
Proof. Fix a thin µ-list ⟨d α | α < µ⟩, so that d α ⊆ α for all α < µ. By thinness and since µ = ν + , each level Lev d (α) := {d β ∩ α | α ≤ β < µ} has size at most ν; enumerate it as ⟨σ α (ξ) | ξ < ν⟩.
We will show that there is a b ⊂ µ such that {α < µ | d α = b ∩ α} is stationary. The ineffability of b will come from a basic fact about sets in supercompactness measures.
Fact 3.2. Let µ be regular and suppose U is a normal measure on P κ (µ) with κ ≤ µ. Then for all A ∈ U , {sup x | x ∈ A} is stationary.
For each n, let U n be a normal measure on P κ n (µ), and let j U n be the corresponding ultrapower embedding.
Lemma 3.3. There are n < ω, an unbounded S ⊂ µ, and A ∈ U 0 such that for all x ∈ A and α ∈ x ∩ S, there is ξ < κ n with d sup x ∩ α = σ α (ξ).
Proof. Let i = j U 0 and consider i(d) sup i"µ , the level of the image list at sup i"µ. For all α < µ, there are some n and some ξ < i(κ n ) such that i(d) sup i"µ ∩ i(α) = i(σ α )(ξ), since this restriction lies in Lev i(d) (i(α)), which i(σ α ) enumerates in order type i(ν) = sup n i(κ n ). Then for some n, there is a cofinal set S ⊂ µ such that for all α ∈ S, i(d) sup i"µ ∩ i(α) has index below i(κ n ) in Lev i(d) (i(α)); namely, for some ξ < i(κ n ), i(d) sup i"µ ∩ i(α) = i(σ α )(ξ). Then, by Łoś's theorem, for all α ∈ S there is a measure one set A α ∈ U 0 such that for all x ∈ A α , there is ξ < κ n with d sup x ∩ α = σ α (ξ). Set A := △ α∈S A α , so that x ∈ A and α ∈ x ∩ S imply x ∈ A α . Then S, A are as desired.
Fix S, A, n as in the conclusion of the lemma. Note that then for all α < β both in S, there are ξ, δ < κ n such that σ α (ξ) = σ β (δ) ∩ α (take any x ∈ A with α, β ∈ x and compare the two restrictions of d sup x ).
For the purposes of the following lemma, define a (not necessarily cofinal) branch through d as a set b ⊆ µ so that b ∩ α ∈ Lev d (α) for all α ≤ sup b. (Note b may be a cofinal branch even if b is bounded as a subset of µ.)
Lemma 3.4. There is a sequence ⟨b δ | δ < κ n ⟩ of (possibly bounded) branches through d and a measure one set A′ ∈ U 0 such that for all x ∈ A′ and α ∈ x, there is δ < κ n such that d sup x ∩ α = b δ ∩ α.
Proof. Let j = j U n+1 : V → M , and let γ ∈ j(S) \ sup j"µ. Write ⟨j(σ) β | β < j(µ)⟩ for j(⟨σ β | β < µ⟩), and note that crit(j) = κ n+1 , so j(κ n ) = κ n . By elementarity, for all α ∈ S, there are ξ, δ < κ n such that j(σ α (ξ)) = j(σ) j(α) (ξ) = j(σ) γ (δ) ∩ j(α).
For each δ < κ n , let b δ := {β < µ | j(β) ∈ j(σ) γ (δ)}. That is, b δ is the pullback by j of "the predecessors" of j(σ) γ (δ). We have the following: • Each b δ is a branch, as it is the union of a coherent sequence of elements of the α-th level of d ranging over α < µ. • For every α ∈ S there are ξ, δ < κ n with b δ ∩ α = σ α (ξ). • There is some δ < κ n such that b δ ∩ α ∈ Lev d (α) for cofinally many α ∈ S. The last item follows by the second item, but we gain more information from the following direct argument.
Fix α < µ. Choose x ∈ j(A) such that γ ∈ x and there is some α′ ∈ S \ α with j(α′) ∈ x. Then we apply elementarity. In particular, for all such x's there are some ξ, δ < κ n such that: j(d) sup x ∩ γ = j(σ) γ (δ) and j(d) sup x ∩ j(α′) = j(σ) j(α′) (ξ) = j(σ) γ (δ) ∩ j(α′) = j(b δ ∩ α′), and hence j(d) sup x ∩ j(α) = j(b δ ∩ α). So we have in M that for all α < µ, and for j(U 0 )-measure one many x, there is some δ < κ n with j(d) sup x ∩ j(α) = j(b δ ∩ α). Then by elementarity, in V we have for all α < µ that there is a measure one set A α ′ ∈ U 0 such that for all x ∈ A α ′ there is δ < κ n with d sup x ∩ α = b δ ∩ α; let A′ := △ α<µ A α ′.
Fix the branches ⟨b δ | δ < κ n ⟩ and A′ ∈ U 0 as in the above lemma. By restricting to a subset of κ n , if necessary, assume that for η < δ, b η and b δ are distinct branches. (Note that this is not automatic for all η < δ, since our top node γ may be strictly above sup j"µ.) Then for distinct η, δ < κ n , let α η,δ be such that for all α ≥ α η,δ , b η ∩ α ≠ b δ ∩ α (it exists, otherwise they would be the same branch). Let ᾱ = sup η,δ<κ n α η,δ < µ.
But then for all x ∈ A′ with ᾱ ∈ x, there is a unique δ < κ n such that for all α ∈ x \ ᾱ, d sup x ∩ α = b δ ∩ α. By intersecting with a measure one set, we may assume that for all x ∈ A′, x is unbounded in sup x; then for each such x, d sup x = b δ ∩ sup x for this unique δ. By Fact 3.2, {sup x | x ∈ A′, ᾱ ∈ x} is stationary, and since it is partitioned into κ n < µ pieces according to δ, there is a single δ < κ n for which {α < µ | d α = b δ ∩ α} is stationary. Thus b δ is an ineffable branch. This completes the proof of Theorem 3.1.
Next, we argue for the two-cardinal version.
Theorem 3.5. Suppose that ⟨κ n | n < ω⟩ is an increasing sequence of supercompact cardinals, ν = sup n κ n , µ = ν + , and λ ≥ µ. Then ITP(µ, λ) holds.
Proof. Fix a thin P µ (λ)-list d = ⟨d z | z ∈ P µ (λ)⟩, and for each z ∈ P µ (λ) enumerate Lev d (z) as ⟨σ z (ξ) | ξ < ν⟩.
For each n, let U n be a normal measure on P κ n (λ). Note that λ = |P µ (λ)|. Let i = j U 0 , set z* := ∪ i"P µ (λ) = i"λ, and let g : P κ 0 (λ) → P µ (λ) be a function representing z*, i.e. [g] U 0 = z*.
Claim 3.6. For every club C ⊆ P µ (λ), z* ∈ i(C).
Proof. Suppose that C is a club in P µ (λ). Then in M , i"C is a directed subset of i(C) of size less than i(µ). So, z* = ∪ i"P µ (λ) = ∪ i"C ∈ i(C).
For all z ∈ P µ (λ), there are some n and ξ < i(κ n ) such that i(d) z* ∩ i(z) = i(σ z )(ξ). Then for some n, there is a stationary set S ⊂ P µ (λ) such that for all z ∈ S, there is ξ < i(κ n ) with i(d) z* ∩ i(z) = i(σ z )(ξ). Then for all z ∈ S, there is a measure one set A z ∈ U 0 such that for all x ∈ A z , there is ξ < κ n with d g(x) ∩ z = σ z (ξ). Next we want to take a diagonal intersection of the A z 's. To that end, fix a bijection c : P µ (λ) → λ, and let A := {x | for all z ∈ S, if c(z) ∈ x then x ∈ A z }; by normality, A ∈ U 0 .
Then if x ∈ A, z ∈ S, and c(z) ∈ x, there is ξ < κ n , such that d g(x) ∩ z = σ z (ξ). As a corollary, we have that for all z ⊂ w, both in S, there are ξ, δ < κ n , such that σ z (ξ) = σ w (δ) ∩ z.
Next we prove the analogue of Lemma 3.4 from the proof of Theorem 3.1.
Lemma 3.8. There is a sequence ⟨b δ | δ < κ n ⟩ of (possibly bounded) branches through the list and a measure one set A′ ∈ U 0 such that for all x ∈ A′ and all z ∈ P µ (λ) with c(z) ∈ x, there is δ < κ n such that d g(x) ∩ z = b δ ∩ z.
Proof. Let j = j U n+1 : V → M , and fix u ∈ j(S) with j"λ ⊆ u. For each δ < κ n , let b δ := {β < λ | j(β) ∈ j(σ) u (δ)}. I.e., analogously as before, b δ is the pullback of "the predecessors" of j(σ) u (δ).
Then each b δ is the union of a coherent sequence of elements of the z-th level of d ranging over z ∈ P µ (λ) (it may be bounded). And for each z ∈ P µ (λ), there is δ < κ n , such that b δ ∩ z is in the z-th level of d.
Fix z ∈ P µ (λ) and choose x ∈ j(A) with j(c)(u) ∈ x and j(c(z)) ∈ x. Then there is some δ < κ n such that j(d) j(g)(x) ∩ u = j(σ) u (δ), and for some ξ < κ n , j(d) j(g)(x) ∩ j(z) = j(σ) j(z) (ξ) = j(σ) u (δ) ∩ j(z) = j(b δ ∩ z). Then by elementarity, in V , for all z ∈ P µ (λ), there is a measure one set A z ′ ∈ U 0 such that for all x ∈ A z ′ there is δ < κ n with d g(x) ∩ z = b δ ∩ z. Taking A′ to be the diagonal intersection of the A z ′ via c, this is as desired.
Fix the branches ⟨b δ | δ < κ n ⟩ and A′ ∈ U 0 as in the above lemma. By passing to a subset of κ n if necessary, assume that for η < δ, b η and b δ are distinct branches. As before, for η < δ < κ n , let z η,δ be such that for all z ⊇ z η,δ , b η ∩ z ≠ b δ ∩ z, and let z̄ := ∪ η,δ<κ n z η,δ ∈ P µ (λ). Then for every x ∈ A′ with c(z̄) ∈ x there is a unique δ < κ n such that d g(x) ∩ z = b δ ∩ z for all z ⊇ z̄ with c(z) ∈ x. Arguing as in the proof of Theorem 3.1, with Claim 3.6 in place of Fact 3.2, there is a single δ < κ n for which {z ∈ P µ (λ) | d z = b δ ∩ z} is stationary, so b δ is an ineffable branch. This completes the proof of Theorem 3.5.
ISP at the successor of a singular cardinal
In this section we analyze a somewhat stronger principle at successor of a singular, called ISP. Let us begin with some definitions. Definition 4.1. Let M ≺ H θ for some θ, and suppose z ⊆ a for some a ∈ M , and δ ∈ M is a cardinal. We say M δ-approximates z if for all We say the principle ISP(δ, µ, λ) holds if every δ-slender P µ (λ)-list has an ineffable branch.
In [14], Viale and Weiß gave a characterization of ISP via guessing models. A model M is δ-guessing if whenever M δ-approximates x with x a subset of some a ∈ M , then x is M -guessed, i.e. there is b ∈ M such that b ∩ a = x. For more on these objects, see [15].
Viale and Weiß showed that if ISP(ℵ 1 , µ, |H θ |) holds, then there are stationarily many ℵ 1 -guessing models M ≺ H θ with |M | < µ; and ISP(ℵ 1 , µ, λ) holds for all λ ≥ µ if and only if there are stationarily many ℵ 1 -guessing models of size less than µ in H θ for all large θ. Similarly, we have:
Fact 4.2. ISP(δ, µ, |H θ |) holds if and only if there are stationarily many δ-guessing models M ≺ H θ with |M | < µ.
Let us now observe a limitation on the extent to which this principle can hold for µ the successor of a strong limit singular cardinal. In [5], it was shown that the principle ISP as defined by Weiß (i.e. ISP(ℵ 1 , µ, λ) for all λ ≥ µ) cannot hold at the single or at the double successor of a singular strong limit cardinal. The next theorem generalizes this fact. Note the proof of Fact 4.2 is embedded in this argument.
Theorem 4.3. Suppose ν is a singular strong limit cardinal and µ = ν + . Then ISP(ν, µ, 2 ν ) fails (and hence so does ISP(δ, µ, 2 ν ) for every δ ≤ ν, since every ν-slender list is δ-slender).
Proof. Suppose for contradiction ISP(ν, µ, 2 ν ) holds; write τ = 2 ν . We seek a ν-slender P µ (τ )-list d with no ineffable branch.
Fix a bijection f : τ → H µ (note |H µ | = 2 ν = τ ), and let C be the collection of M ≺ (H µ , ∈, f ) of size less than µ; the set of z ∈ P µ (τ ) with f"z ∈ C contains a club. Since ν is a singular strong limit, no such M of size at most ν is a ν-guessing model. So for each z ∈ P µ (τ ) such that f"z ∈ C, denote M z = f"z and let x z be a subset of ν that is ν-approximated, but not guessed, by M z . Then put d z = {α ∈ z | f (α) ∈ x z }, so that d z ⊆ z. Since f ∈ M z and M z ν-approximates x z , (any P µ (τ )-list extending) ⟨d z | f"z ∈ C⟩ is clearly ν-slender.
Let b be an ineffable branch through d. Since by construction f"d z ⊆ ν for a club of z, we have f"b ⊆ ν. So fix z such that M z ∈ C, f"b ∈ M z , and d z = b ∩ z. Then f"b witnesses that x z is M z -guessed, although x z was defined as a subset of ν that was not M z -guessed. This is a contradiction.
It turns out that the above theorem is sharp. In particular, next we show that a modification of the arguments from the previous section yields ISP(µ, µ, λ) when µ is the successor of a limit of supercompacts (note that in this situation, 2 ν = µ).
Theorem 4.4. Suppose that ⟨κ n | n < ω⟩ is an increasing sequence of supercompact cardinals, ν = sup n κ n , and µ = ν + . Then ISP(µ, µ, λ) holds for all λ ≥ µ.
Proof. Fix λ ≥ µ, a µ-slender P µ (λ)-list d, and a sufficiently large θ. For each n, let U n be a normal measure on P κ n (θ), let i = j U 0 with ultrapower M , and set N* := ∪ i"P µ (H θ ) = i"H θ . Fix also an injection c : P µ (λ) → θ and a function g with [g] U 0 = N*.
As before, we have that for all clubs C ⊆ P µ (H θ ) in V , N* ∈ i(C). In particular, by slenderness of d, M satisfies that N* i(µ)-approximates i(d) N*∩i(λ) . Thus for any z ∈ P µ (λ), we have i(z) ∩ i(d) N*∩i(λ) ∈ N*.
It follows that for each z ∈ P µ (λ), there is an elementary substructure M z ∈ P µ (H θ ) such that z ∪ {z} ⊆ M z , and we may assume |M z | = ν. For each such z, let us enumerate P µ (z) ∩ M z as ⟨σ z (ξ) | ξ < ν⟩.
Lemma 4.5. There exist n < ω, a stationary S ⊆ P µ (λ), and A ∈ U 0 such that for all z ∈ S and x ∈ A with c(z) ∈ x, there is some ξ < κ n such that d g(x)∩λ ∩ z = σ z (ξ). Proof. For each z ∈ P µ (λ), there is some ξ < i(ν) so that i(z) ∩ id N * ∩i(λ) = iσ i(z) (ξ); let n z be least so that ξ < i(κ n ). The map z → n z is constant on a cofinal S, say with value n. For each z ∈ S, we have by Łoś's theorem a measure one set A z witnessing the desired property. As before we wish to take a diagonal intersection of the A z over z ∈ S. Recall that we fixed an injective c : P µ (λ) → θ; define h : P κ 0 (θ) → P(P µ (λ)) accordingly. It is not hard to see that [h] = i"P µ (λ); and for U 0 -many x, g(x) ∩ λ = h(x). Then A = △ z∈S A z is as in the statement of the lemma.
Let A ′ z be this measure one set. We again take the diagonal intersection A ′ = △ z A ′ z . Again, by passing to a subset of κ n if necessary, we assume that for all δ < η, b δ ≠ b η . Take z̄ "above all the splitting", so that b η ∩ z̄ ≠ b δ ∩ z̄ for all η ≠ δ < κ n . Let T = {z ∈ P µ (λ) | z̄ ⊆ z and for some δ < κ n , d z = b δ ∩ z}. Now by the above remarks, there are measure one many x ∈ A ′ so that d g(x)∩λ = d {z|c(z)∈x} . Thus T ⊇ {g(x) ∩ λ | x ∈ A ′ }, and so by (the same argument as in) Claim 3.6, T is a stationary set. Again, there is some δ so that T δ = {z ∈ P µ (λ) | d z = b δ ∩ z} is stationary. This b δ is the desired ineffable branch.
Combining Fact 4.2 with the previous two theorems, we obtain the following Corollary, answering Questions 8.2 and 8.3 of Viale in [15]. Corollary 4.6. Suppose µ = ν + with ν the limit of ω-many supercompacts. Then for all cardinals θ taken sufficiently large, there are stationarily many µ-guessing models of size ν in H θ ; and none of these is δ-guessing for any δ ≤ ν.
ITP at ℵ ω+1
We next show, assuming the existence of infinitely many supercompacts, that it is consistent for ℵ ω+1 to have the super tree property. We begin by showing it is consistent to have ITP(ℵ ω+1 , ℵ ω+1 ). The forcing will be almost the same as that used by Neeman [10] to obtain the tree property. We first take the product of Levy collapses to turn κ n into κ 0 +n ; we then show there exists some inaccessible ρ < κ 0 so that collapsing to make ρ +ω+1 become ℵ 1 , and κ 0 become ℵ 3 , forces the tree property at ℵ ω+1 . In fact the argument will show there are measure one many (in the normal measure on κ 0 induced by our supercompactness measure) such ρ.
Work in V [H]. Supposing otherwise, we have, for every such τ < κ 0 , an L τ -name ḋ τ for a thin µ-list forced by L τ to have no ineffable branch. Assume that for all α < µ, ½ Lτ forces that the α-th level of ḋ τ is enumerated by the names {σ τ α (ξ) | ξ < ν}; we may furthermore assume that for sufficiently large α < µ, it is forced that there are no repetitions in the sequence σ τ α (ξ) | ξ < ν .
By indestructibility, let U 0 be a normal measure on P κ 0 (µ) in V [H], and for each n > 0, let U n be a normal measure on P κn (µ) in V .
Proof. By the above remarks, for each α < µ we obtain p α , q α , n α , and A α as above. Choosing the q α inductively, we arrange that q α | α < µ is decreasing.
Note that p α ∈ Col(ω, ν) are finite conditions; then there are an unbounded S ⊂ µ and fixed n and p such that for all α ∈ S, p α = p and n α = n. By µ-closure of Col(µ + , <i(κ)), we can take q to be a common strengthening of the q α .
Set A := △ α∈S A α . Then S, A are as desired.
Fix n, S, A, x → (p x , q x ) as in the conclusion of the above lemma. Much as in [8] and later [10], we require the notion of a system. Definition 5.3. Let D ⊆ Ord, ρ ∈ Ord, and I be an index set. A system on D × ρ is a family R s s∈I of transitive, reflexive relations on D × ρ, so that (1) if (α, ξ) R s (β, ζ) and (α, ξ) ≠ (β, ζ), then α < β.
A branch through R s is a subset of D × ρ that is linearly ordered by R s and downwards R s -closed (in particular, a branch is a partial function b : D ⇀ ρ). A system of branches through R s s∈I is a family b η η∈J so that each b η is a branch through some R s(η) , and D = η∈J dom(b η ).
As before, branches in a system need not be cofinal; however, note that now a branch b η through R s is cofinal iff dom(b η ) is cofinal in D.
Proof. That (1) and (2) hold is immediate by definition and the preceding paragraph, and the above lemma gives (3). Let γ ∈ j(S) \ sup j"µ. Since κ n < crit(j), by elementarity applied to Lemma 5.2, we have for all α ∈ S that if we let x ∈ j(A) be so that j(α), γ ∈ x, then there exist ξ, δ < κ n and s = (µ x , p x , q x ) ∈ j(I) = I such that (p x , q x ) ⊩ j(σ µx α (ξ)) = jσ µx j(α) (ξ) = jσ µx γ (δ) ∩ j(α). For each δ < κ n and s = (τ, p, q) ∈ I, let b s,δ = {(α, ξ) | α ∈ S, ξ < κ n , (p, q) ⊩ Lτ j(σ τ α (ξ)) = jσ τ γ (δ) ∩ j(α)}. We have that b s,δ | s ∈ I, δ < κ n is a system of branches through R s s∈I : each is clearly linearly ordered and downward closed; and we have just shown that any x ∈ j(A) with γ, j(α) ∈ x witnesses α ∈ dom b s,δ for some δ < κ n , so that ⋃ s,δ dom b s,δ = S. This system may not belong to V [H], but we now satisfy precisely hypothesis (1) of the branch preservation lemma, Lemma 3.3, of [10]. So there is some (s, δ) ∈ I × κ n so that b s,δ is cofinal and belongs to V [H]; let D ⊆ I × κ n be the set of such pairs. So the set S ′ := ⋃ (s,δ)∈D dom(b s,δ ) is unbounded in µ, and we have that b s,δ (s,δ)∈D is a system of branches through R s ↾ S ′ × κ n s∈I . Also, by passing to a subset of I × κ n if necessary (any such will be in V [H] by distributivity), we may assume that for all s ∈ I and η < δ < κ n , if b s,η and b s,δ are both cofinal, then they are distinct. This is done by simply removing duplicates (which may exist if the splitting between jσ τ γ (η) and jσ τ γ (δ) is forced to be above sup j"µ).
Our next goal is to show that these ground model branches are enough for us to repeat the final argument of Theorem 3.1. What we need is a strengthened version of 5.2 for those branches from D.
For x ∈ A ′ let (p x , q x ) ∈ L µx be as in the conclusion of Lemma 5.2, and set s x = (µ x , p x , q x ). Lemma 5.6. There exist an unbounded S̄ ⊆ S ′ and Ā ∈ U 0 with Ā ⊆ A, so that for all x ∈ Ā and all α ∈ S̄ ∩ x we have ( † x,α ): for some δ < κ n , (s x , δ) ∈ D and (p x , q x ) ⊩ Lµx ḋ µx sup x ∩ α = π sx,δ ∩ α. First let us see how to finish the proof of Theorem 5.1 assuming the lemma. By our choice of ᾱ, any names π s,δ and π s,η corresponding to cofinal branches of V [H] are, by elementarity and the definition of these names, forced outright to disagree below ᾱ.
Suppose we have x ∈ Ā with x ∩ S̄ unbounded in sup x and ᾱ < sup x, and let G x be generic for L µx with (p x , q x ) ∈ G x . By our definition of ᾱ, there exists a strengthening (p ′ x , q ′ x ) ∈ G x of (p x , q x ) that forces π s,δ ∩ α ≠ π s,η ∩ α for all α > ᾱ and distinct η, δ such that (s, η), (s, δ) ∈ D represent cofinal branches; and α is above the domains of all bounded b s,δ 's. Now for any α ∈ x ∩ S̄ with α > ᾱ, we have some δ < κ n so that (p x , q x ) ⊩ ḋ µx sup x ∩ α = π s,δ ∩ α = π s ′ ,δ ∩ α. Since we are above the splitting, these must be the same branch, and so without loss of generality this δ must be the same for each α ∈ x ∩ S̄. It follows that (p x , q x ) ⊩ ḋ µx sup x = π s,δ ∩ sup x. Letting T s,δ be defined accordingly for each (s, δ) ∈ D with s = (τ, p, q), what we have shown is that T := ⋃ (s,δ)∈D T s,δ ⊇ {sup x | x ∈ Ā, ᾱ < sup x, x ∩ S̄ unbounded in sup x}. So T is stationary; since |D| ≤ κ n < µ, there is some fixed (s, δ) so that T s,δ is stationary. But since L τ preserves stationarity of subsets of µ, we have that b s,δ defines an ineffable branch through ḋ τ in any extension containing (p, q), a contradiction.
Proof of Lemma 5.6. It is sufficient to show that the set S̄ of α satisfying ( † x,α ) for the relevant x is unbounded. So suppose S̄ is bounded. Fix α 0 < µ so that ᾱ < α 0 and α 0 is above this bound. We wish to show that if R ′ s is obtained by deleting all ground model branches b s,δ from R s , then the resulting family R ′ s s∈I is a system on (S ′ \ α 0 ) × κ n . That is, for each s, we require of (α, ξ) R ′ s (β, ζ) that, in addition, for all δ < κ n , if (s, δ) ∈ D then (α, ξ) ∉ b s,δ .
But these conditions and our definition of π sx,δ contradict (α, ξ) ∈ b sx,δ . Now that R ′ s s∈I is a system, we let for each (s, δ) ∉ D the branch b ′ s,δ be the restriction of b s,δ to R ′ s . Then b ′ s,δ | (s, δ) ∉ D is a system of branches through the system R ′ s | s ∈ I . Now recapitulating the argument of Lemma 5.5, we see that there exists some s and a δ so that b ′ s,δ ↾ R ′ s is cofinal and belongs to V [H]. Since b s,δ can be recovered from any cofinal subset, we must have (s, δ) ∈ D. But this contradicts our definition of the system R ′ s s∈I . This contradiction completes the proof.
Proof. Suppose otherwise. Then for every τ < κ, there is some λ, such that ITP(µ, λ) fails in the extension of V [H] by L τ . By taking a supremum, assume the λ is the same for all τ and that λ µ = λ.
So for each τ , let ḋ τ be a name for a P µ (λ) thin list which is forced by L τ to have no ineffable branch. In V [H][K], κ is still supercompact, each κ n , n > 0, is generically supercompact for the right type of quotient, and λ is µ + . So, this will be a similar argument to that of Theorem 5.1, except that here we work in V [H][K] instead of V [H], and consider P µ (λ) = P µ (µ + )-lists and a λ = µ + -supercompact embedding.
We outline the proof, skipping some of the details.
and in the ultrapower.
Proof. Suppose otherwise; say 1 Q forces that each ḃ j ∉ W , where ḃ j is a Q-name in W for the j-th branch. Let G = G ξ | ξ < χ be Q χ -generic over W . Here Q χ is the full support χ-th power of Q. For every ξ < χ, j ∈ J, let b ξ j = ḃ j [G ξ ]. First note that since Q χ is τ -closed in V , it must be <τ -distributive in W . Working in W [G], for each non-cofinal branch b ξ j , there is z ξ j ∈ P W µ (λ) such that for all z ⊃ z ξ j , z ∉ dom(b ξ j ). Let z 0 be their union. By distributivity, z 0 ∈ P W µ (λ). Similarly, we can find z 1 ⊃ z 0 in P W µ (λ) such that for all cofinal b ξ j , b η j with ξ < η, for all z ⊃ z 1 , b ξ j (z) ≠ b η j (z) (possibly because one of them is not defined) if they are branches through the same relation. This splitting follows by mutual genericity.
Open problems
Having obtained ITP at the successor of a singular, the direction of forcing ITP at successive cardinals past a singular looks very promising. As a first step one can try to combine the construction in [10] with the results in this paper to force ITP at every ℵ n , n > 1, together with ITP at ℵ ω+1 . We conjecture that this should be possible.
Next, as mentioned in the introduction, in order to get the tree property everywhere one needs failures of SCH. The reason is that the tree property (or strong tree property or ITP) at κ ++ for a singular strong limit κ implies that SCH fails at κ. So here are some questions to consider: Question 1. Can we obtain ITP at κ + for a singular strong limit cardinal κ together with failure of SCH at κ? Question 2. Can we obtain ITP at κ + and κ ++ for a singular strong limit cardinal κ? Question 3. Can we get the above for κ = ℵ ω 2 ? Or much more ambitiously, for κ = ℵ ω ?
"year": 2020,
"sha1": "2adff283a21164827fcac99555902acc2f02c076",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1806.00820",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "0900885cecb6ee0ab0e7ef94a35c0e93c5947b5b",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
The connection between star formation and stellar mass: Specific star formation rates to redshift one
We investigate the contribution of star formation to the growth of stellar mass in galaxies over the redshift range 0.5<z<1.1 by studying the redshift evolution of the specific star formation rate (SSFR), defined as the star formation rate per unit stellar mass. We use an I-band selected sample of 6180 field galaxies from the Munich Near-Infrared Cluster Survey (MUNICS) with spectroscopically calibrated photometric redshifts. The SSFR decreases with stellar mass at all redshifts. The low SSFRs of massive galaxies indicates that star formation does not significantly change their stellar mass over this redshift range: The majority of massive galaxies have assembled the bulk of their mass before redshift unity. Furthermore, these highest mass galaxies contain the oldest stellar populations at all redshifts. The line of maximum SSFR runs parallel to lines of constant star formation rate. With increasing redshift, the maximum SFR is generally increasing for all stellar masses, from SFR ~ 5 M_sun/yr at z = 0.5 to SFR ~ 10 M_sun/yr at z = 1.1. We also show that the large SSFRs of low-mass galaxies cannot be sustained over extended periods of time. Finally, our results do not require a substantial contribution of merging to the growth of stellar mass in massive galaxies over the redshift range probed. We note that highly obscured galaxies which remain undetected in our sample do not affect these findings for the bulk of the field galaxy population.
INTRODUCTION
During the last decade, observational research on galaxy formation and evolution has made considerable progress. Two quantities in particular, together with their redshift evolution, have gained attention: the stellar mass function of galaxies and the star formation rate (SFR). The stellar mass function was measured out to redshifts of z ∼ 1.5 (e.g. Bell et al. 2003; Dickinson et al. 2003; Fontana et al. 2004), while deep pencil beam surveys extended such measurements to even higher redshifts.
⋆ E-mail: feulner@usm.lmu.de
† Based on observations collected at the Centro Astronómico Hispano Alemán (CAHA), operated by the Max-Planck-Institut für Astronomie, Heidelberg, jointly with the Spanish National Commission for Astronomy.
‡ Based on observations collected at the VLT (Chile) operated by the European Southern Observatory in the course of the observing proposals 66.A-0123 and 66.A-0129.
However, the total integrated SFR and its evolution with redshift does not tell us about the contribution of star formation to the build up of stellar mass for different galaxy masses. For example we cannot say whether the general rise of the SFR to redshift one is produced by high-mass or lowmass galaxies, and how much stellar mass galaxies of different mass form during this period. Cowie et al. (1996) used K-band luminosities and [OII] equivalent widths to investigate this connection and noted an emerging population of massive, heavily star forming galaxies at higher redshifts, a phenomenon they termed 'downsizing'.
A more direct measure of this connection is the 'specific star formation rate' (SSFR, Guzman et al. 1997; Brinchmann & Ellis 2000), which is defined as the SFR per unit stellar mass. This quantity allows us to explore the relation between stellar mass and SFR directly. The SSFR has been studied before, both locally (Pérez-González et al. 2003; Brinchmann et al. 2004) and at higher redshifts (Guzman et al. 1997; Brinchmann & Ellis 2000; Fontana et al. 2003; Bauer et al. 2004). Our study, which relies on photometric redshifts, extends previous work by investigating a large sample of galaxies at higher redshifts.
Figure 1. The left-hand panel shows a comparison of photometric and spectroscopic redshift for the different MUNICS fields (indicated by the different colours), while the middle panel gives the corresponding error histogram (red) and a Gaussian fit to it (blue). In the right-hand panel we present the distribution of absolute I magnitudes M I versus redshift z phot , where the different colours indicate different model SEDs ranging from early types (red) to late types (purple). Open symbols identify objects spectroscopically classified as AGN.
In this letter, we present measurements of the SSFR and its evolution with redshift based on an I-band selected catalogue of more than 6000 field galaxies from the Munich Near-Infrared Cluster Survey (MUNICS), allowing us to trace the change of the SSFR with cosmic time with high statistical accuracy.
This letter is organised as follows. First we introduce the galaxy sample in Section 2 and describe our methods to derive SFRs and stellar masses. In Section 3 we present our results on the SSFR, before we summarise our findings and discuss their implications in Section 4. Throughout this work we assume Ωm = 0.3, ΩΛ = 0.7 and H0 = 70 km s −1 Mpc −1 . All magnitudes are given in the Vega system.
THE GALAXY SAMPLE
Galaxies used in this study are drawn from the Munich Near-Infrared Cluster Survey (MUNICS), a wide-area, medium deep photometric and spectroscopic survey in the BV RIJK bands covering an area of about 0.3 square degrees down to K ≃ 19 and R ≃ 24 (Snigula et al. 2002). In contrast to previous work on the K-selected sample ("MUNICS K", Drory et al. 2003), this work is based on an I-band selected galaxy catalogue ("MUNICS I") which will be described in detail in a forthcoming paper (Feulner et al. 2005, in preparation). Object detection and photometry were performed using YODA (Drory 2003) in much the same way as for the K-selected sample. We use the same sub-set of high-quality fields as in Drory et al. (2003). Stars are excluded based on their spectral energy distributions (SEDs), leaving 6180 galaxies for further analysis.
Photometric redshifts are derived using the method described in Bender et al. (2001). This is the same method also used on MUNICS K and discussed in detail in Drory et al. (2003). The photometric redshifts are calibrated using the spectroscopic redshifts presented in Feulner et al. (2003). Fig. 1 shows a comparison of photometric and spectroscopic redshifts for MUNICS I as well as the distribution of absolute I-band magnitudes MI versus redshift. The distribution of redshift errors is similar to MUNICS K with a width of ∆z/(1 + z) = 0.057.
We estimate the star formation rates (SFRs) of our galaxies from the SEDs by deriving the luminosity at λ = 2800 ± 100Å and converting it to an SFR as described in Madau et al. (1998) assuming a Salpeter initial mass function (IMF; Salpeter 1955). We have convinced ourselves that these photometrically derived SFRs are in reasonable agreement with spectroscopic indicators for objects with available spectroscopy. Note that since our bluest band is B, this is an extrapolation for z < 0.4. Hence we restrict any further analysis to redshifts z > 0.4, where the ultraviolet continuum at λ ≃ 2800Å is shifted into or beyond the B band. The SFR density as a function of redshift derived from our sample agrees well with previous results and will be discussed in a future paper (Feulner et al., 2005, in preparation).
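This conversion is a one-line calculation; a minimal sketch, assuming the Kennicutt (1998) packaging of the Madau et al. (1998) UV calibration (the precise constant adopted in the paper is not quoted above, so 1.4 × 10 −28 is an assumption):

```python
import numpy as np

def sfr_from_uv(l_nu_2800):
    """Convert a rest-frame 2800 A continuum luminosity L_nu
    [erg s^-1 Hz^-1] into an SFR [M_sun yr^-1] for a Salpeter IMF.

    The constant 1.4e-28 is the Kennicutt (1998) packaging of the
    Madau et al. (1998) UV calibration; the exact value adopted in
    the paper is an assumption here."""
    return 1.4e-28 * np.asarray(l_nu_2800)

# Example: L_nu ~ 7e27 erg/s/Hz corresponds to roughly 1 M_sun/yr.
print(sfr_from_uv(7e27))  # ~0.98
```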
Stellar masses are computed from the multi-colour photometry using a method similar to the one used in . It is described in detail and tested against spectroscopic and dynamical mass estimates in . In brief, we derive stellar masses by fitting a grid of stellar population synthesis models by Bruzual & Charlot (2003) with a range of star formation histories (SFHs), ages, metallicities and dust attenuations to the broad-band photometry. We describe the SFHs by a two-component model consisting of a main component with a smooth SFH ∝ exp(−t/τ ) and a burst. We allow SFH timescales τ ∈ [0.1, ∞] Gyr, metallicities [Fe/H] ∈ [−0.6, 0.3], ages between 0.5 Gyr and the age of the universe at the object's redshift, and extinctions AV ∈ [0, 1.5]. The SFRs derived from this model fitting are in good agreement with the ones from the UV continuum. Note that we apply the extinction correction derived from this fitting also to the SFRs.
THE SPECIFIC STAR FORMATION RATE
We investigate the connection between SFR and stellar mass and its evolution with redshift by considering the 'specific star formation rate' SSFR (Guzman et al. 1997;Brinchmann & Ellis 2000), defined as the SFR per unit stellar mass. In Fig. 2 we show the SSFR as a function of stellar mass for four different redshift bins from z = 0.5 to z = 1.1. The general shape is in very good agreement with a similar study based on spectroscopic data (Bauer et al. 2004).
Let us first understand the limits of the object distribution in this diagram as indicated by the dotted lines. First, the sharp cut-off at the high mass end at log Mstar/M⊙ ≃ 11.5 is produced by the high-mass cut-off of the stellar mass function (see e.g. Fontana et al. 2004). Secondly, the lower limit at log SSFR ≃ −11.3 is due to the fact that data points fit by the same model SED occupy horizontal slices in the diagram, with the reddest (oldest, least active) galaxies at the bottom and subsequently bluer models along the distribution to higher values of log SSFR. Finally, the limit of the point distribution to the left of the diagram is due to a combination of the selection band and the limiting magnitudes in all filters of the MUNICS survey. This completeness limit will run parallel to lines of constant SFR in a B-selected galaxy sample, but have a much steeper slope for near-infrared selected galaxies.
The first result which can be derived from Fig. 2 is that in our optically-selected survey there is an upper bound on the SSFR (with a few galaxies with very high SFRs which are likely starburst galaxies or AGN). It runs parallel to lines of constant star formation over a wide range of masses M ≳ 10 9 M⊙ and at all redshifts, meaning that this upper limit of the SFR does not depend on galaxy mass. Furthermore, this maximum SFR generally increases with redshift for all stellar masses, from SFR ≃ 5 M⊙ yr −1 at z ≃ 0.5 to SFR ≃ 10 M⊙ yr −1 at z ≃ 1.1. Note that, while the lower part of the SSFR distribution in the diagram is affected by incompleteness, the constraints on the upper envelope are robust. This is evident from Fig. 3, where we show the histogram of the SFR for the four different redshift bins, clearly showing the increase of the maximum SFR with redshift.
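The geometry of these 'lines of constant SFR' follows directly from the definition; in the (log M⋆ , log SSFR) plane they have slope −1:

```latex
\mathrm{SSFR} \equiv \frac{\mathrm{SFR}}{M_\ast}
\quad\Longrightarrow\quad
\log \mathrm{SSFR} = \log \mathrm{SFR} - \log M_\ast ,
```

so at fixed SFR the locus is a straight line of slope −1; for example, the SFR = 5 M⊙ yr −1 line passes through (log M⋆ , log SSFR) = (10, −9.3), since log 5 ≈ 0.7.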
We note that our sample might be missing highly obscured, heavily star forming massive galaxies which could occupy the upper right part of Fig. 2. Indeed, mid-infrared studies have shown that these galaxies exist and that the upper bound of the SSFR is partly a selection effect due to their dust content (Hammer et al. 2001;Franceschini et al. 2003). However, we can conclude from their number density that they contribute at most 10% to the field galaxy population (the objects studied by Franceschini et al. (2003) have 25% of the number density of our sample, but their optical data go roughly 2 mag deeper than ours). Thus, even if our sample should miss these galaxies, our conclusions still hold for the larger part of the field galaxy population. While differences in extinction between galaxies of different mass might influence the shape of the upper bound, its existence and observed change are robust to these differential effects.
Hints for a shift of this upper envelope to higher SSFRs with redshift were already noted by Brinchmann & Ellis (2000) and Bauer et al. (2004) from smaller galaxy samples, but our large sample of more than 6000 galaxies allows us to constrain this change in a much more robust way.
Furthermore, we can study the distribution of the ages of the model stellar populations in Fig. 2. It is clear that the most massive galaxies contain the oldest stellar populations at all redshifts, with ages close to the age of the universe at each epoch.
Finally we indicate the SSFR needed to double a galaxy's stellar mass between the epoch of observation and today (assuming a constant SFR). Clearly, the most massive galaxies are well below this line at all redshifts, indicating that they formed the bulk of their stars at earlier times, in agreement with the age distribution discussed above. This also means that star formation contributes much more to the mass build-up of less massive galaxies than to high-mass systems. While between redshifts z = 1 and z = 0 the mass of a 10 11 M⊙ system would typically change by ∼ 40% due to star formation, the mass of 10 10 M⊙ galaxies would grow by a factor of ∼ 5 and that of 10 9 M⊙ systems by a factor of ∼ 40. This example assumes a constant SFR of ̺ ⋆ = 5 M⊙ yr −1 over a period of 7.7 Gyr which, as will be shown below, is likely to be unrealistic (at least for the lower-mass systems).
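The growth factors quoted above follow from simple arithmetic under the stated constant-SFR assumption; a quick numerical check (a sketch, not the analysis code):

```python
# Mass growth for a constant SFR of 5 M_sun/yr acting over the 7.7 Gyr
# between z = 1 and z = 0 (the look-back time quoted in the text).
sfr = 5.0      # M_sun / yr
dt = 7.7e9     # yr
dm = sfr * dt  # ~3.9e10 M_sun of newly formed stars

for m0 in (1e11, 1e10, 1e9):
    print(f"M* = {m0:.0e} M_sun: growth factor = {(m0 + dm) / m0:.1f}")
# 1e11 -> 1.4 (~40% growth); 1e10 -> 4.9 (~5x); 1e9 -> 39.5 (~40x)
```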
SUMMARY AND CONCLUSIONS
We have presented the specific star formation rate (SSFR) as a function of stellar mass and redshift for a large sample of more than 6000 I-band selected galaxies. The SSFR decreases with mass at all redshifts, although we might not detect highly obscured galaxies. The low values of the SSFR of the most massive galaxies suggest that most of these massive systems formed the bulk of their stars at earlier epochs. Furthermore, stellar population synthesis models show that these most massive systems contain the oldest stellar populations at all redshifts. This is in agreement with the detection of old, massive galaxies at redshifts 1 ≲ z ≲ 2 (Cimatti et al. 2004).
In our optically-selected sample, there is an upper bound to the SSFR of the majority of field galaxies which is parallel to lines of constant star formation rate (SFR). This upper limit on the SFR is independent of stellar mass, but increases with redshift from SFR ≃ 5 M⊙ yr −1 at z ≃ 0.5 to SFR ≃ 10 M⊙ yr −1 at z ≃ 1.1.
We can also infer from Fig. 2 that star formation in lower mass galaxies cannot proceed at constant SFR for a long time: all galaxies above the dot-dashed line in the diagram have the potential to double their stellar mass between the epoch of observation and today (assuming a constant SFR). While lower mass galaxies at low redshift tend to be gas rich, there is a large spread in measured gas-to-stellar-mass fractions (Mateo 1998; Pérez-González et al. 2003; Kannappan 2004). However, very gas-rich systems are rare (Davies et al. 2001), i.e. the majority of these galaxies does not have huge gas supplies, which might lead us to believe that low-mass galaxies cannot exhibit constant star formation over longer time-scales, but show variable star formation histories, like the ones derived for the even lower mass dwarf galaxies in the Local Group (see e.g. Mateo 1998; Tosi 2001; Grebel 2004 for reviews). Due to the degeneracy of different star formation histories in colour space, it is not possible to say from our data whether we see these galaxies in the process of formation or during one of multiple episodes of active star formation. However, it is likely that we pick them up during an active phase of star formation. Also, it is clear from the completeness limits that we cannot detect low-mass galaxies with low SSFR.
Figure 2. The SSFR as a function of stellar mass for MUNICS I. The solid and dashed lines correspond to SFRs of 1 M⊙ yr −1 and 5 M⊙ yr −1 , respectively. While this is a good measure of the upper envelope of the majority of objects at z ∼ 0.5, the point distribution shifts to higher SSFRs with increasing redshift. The dotted lines indicate the limits of the point distribution due to magnitude limits, the model SED set and the mass function (see the text for details). Objects are coloured according to the age of the CSP model fit to the photometry, ranging from 9 Gyr (red) to 0.05 Gyr (purple). The dot-dashed line is the SSFR required to double a galaxy's mass between each redshift epoch and today (assuming constant SFR); the corresponding look-back time is indicated in each panel (e.g. 0.40 < z < 0.60: 5.0 Gyr). The error cross in each panel gives an idea of the typical errors.
Considering the high-mass end, we can try to draw some conclusions about the contributions of star formation and merging to the change of stellar mass. Between redshifts z ≃ 1.1 and z ≃ 0.5, the characteristic mass of the cut-off of the galaxies' stellar mass function changes by ∆ log M ≃ 0.15 dex (Fontana et al. 2004; Conselice et al. 2004). For a M⋆ = 10 11 M⊙ stellar mass galaxy, a constant SFR of ̺ ⋆ = 5 M⊙ yr −1 over a period of time of ∆t = 3.1 Gyr (the difference in time between these redshift values) yields a growth in stellar mass of ∆M⋆ ≃ 2 · 10 10 M⊙, or ∆ log M⋆ ≃ 0.1 dex. Considering the uncertainty of the results and our lack of knowledge about the star formation histories of these galaxies, we cannot really decide about the relative importance of star formation and merging. We note, however, that our results on the growth of stellar mass in massive galaxies do not require a substantial contribution of merging over the redshift range 0 ≲ z ≲ 1. Overall it is clear that there is a marked difference between the star formation histories of low-mass and high-mass galaxies, in agreement with findings from the stellar populations of today's galaxies (Heavens et al. 2004; Thomas et al. 2004).
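Spelling out the arithmetic behind this estimate (with the same generous rounding as in the text):

```latex
\Delta M_\ast = \dot{M}_\ast\,\Delta t
  = 5\,M_\odot\,\mathrm{yr}^{-1}\times 3.1\times10^{9}\,\mathrm{yr}
  \simeq 2\times10^{10}\,M_\odot ,
\qquad
\Delta\log M_\ast
  = \log_{10}\!\Bigl(1+\tfrac{\Delta M_\ast}{10^{11}\,M_\odot}\Bigr)
  \simeq 0.1\ \mathrm{dex}.
```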
ACKNOWLEDGMENTS
We thank the anonymous referee for his comments which helped to improve the presentation of this letter. The authors would like to thank the staff at Calar Alto Observatory for their support. Furthermore, we thank Amanda Bauer for discussion and Jan Snigula for help with the colour figures. We acknowledge funding by the DFG (SFB 375). This research has made use of NASA's Astrophysics Data System (ADS) Abstract Service.
Figure 3. Normalised histogram of the SFR for the four different redshift bins. The shift of the high-SFR cut-off to higher SFRs with increasing redshift is clearly visible. The individual histograms are divided by the number of objects in each redshift bin. Note that at higher redshifts incompleteness starts to cut away objects with low SFR as indicated in Fig. 2.
"year": 2004,
"sha1": "5beab9197ffcb8e38adcc81f915ce7460e37cd8c",
"oa_license": null,
"oa_url": "https://academic.oup.com/mnrasl/article-pdf/358/1/L1/2965304/358-1-L1.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "5beab9197ffcb8e38adcc81f915ce7460e37cd8c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Atmospheric Forcing of the High and Low Extremes in the Sea Surface Temperature over the Red Sea and Associated Chlorophyll-a Concentration
Taking advantage of 37 years (1982–2018) of high-quality satellite datasets, we examined the role of direct atmospheric forcing on the high and low sea surface temperature (SST) extremes over the Red Sea (RS). Considering the importance of SST in regulating ocean physics and biology, the associated impacts on chlorophyll-a (Chl-a) concentration were also explored, since a small change in SST can cause a significant impact in the ocean. After describing the climate features, we classified the top 5% of SST values (≥ 31.5 °C) as extreme high events (EHEs) during the boreal summer period and the lowest SST values (≤ 22.8 °C) as extreme low events (ELEs) during the boreal winter period. The spatiotemporal analysis showed that the EHEs (ELEs) were observed over the southern (northern) basin, with significant warming trends of 0.027 (0.021) °C year −1 , respectively. The EHEs were observed when there was widespread less than average sea level pressure (SLP) over southern Europe, northeast Africa, and the Middle East, including the RS, leading to weaker than usual cold wind stress from Europe and the intrusion of stronger than usual, relatively warm air masses from central Sudan through the Tokar Gap. Conversely, ELEs were observed when above average SLP prevailed over southern Europe and the Mediterranean Sea as a result of the Azores high and the westward extension of the Siberian anticyclone, which led to above average transfer of cold and dry wind stress from higher latitudes. At the same time, notably less wind stress due to the southerlies that transfer warm and humid air masses northward was observed. Furthermore, the physical and biological responses related to extreme stress showed distinct ocean patterns associated with each event. It was found that the Chl-a concentration anomalies over the northern basin, caused by vertical nutrient transport through deep upwelling processes, are a manifestation of the superimposition of ELEs. The situation was the opposite for EHEs due to the stably stratified ocean boundary layer, which is a well-known consequence of global warming.
Introduction
An extreme sea surface temperature (SST) event is an important oceanographic phenomenon, which can have a serious impact on biodiversity and may result in consequences that are, as yet, unrecognized in marine ecosystems, especially under a climate change background. For example, these events alter the frequency and intensity of blooms and reduce the deep-water nutrient flux to surface waters.
However, the reported results reveal that climate change-related SST variation has been nonuniform in terms of both the time of occurrence and space. This is probably due to the presence of different driving forces, acting either through direct ocean interactions or through the atmosphere, and to the overall underlying mechanisms. Therefore, a far-reaching analysis of extreme events may answer some specific questions. For instance, thermal collapse can be defined as the temperature that exceeds the thermal capacity of organisms, and it therefore depends on the extremes on both sides rather than on the mean temperature variability. Furthermore, successive and persistent extreme SST events can contribute to a remarkable difference in intermediate- and deep-water formation in the basin.
The current study employed a monthly satellite SST dataset to highlight the occurrence of extreme high events (EHEs) and extreme low events (ELEs) over the RS, which has not yet been explored in previous studies. We address the atmospheric factors driving the interannual variability of these events during the summer and winter seasons and further explore their impacts on ocean physics and biology. The monthly optimum interpolation SST has recently been used for different purposes in the basin, including the study of heatwaves [17], long-term trends and variability [20], and the relations with large-scale climate modes [24].
The manuscript is structured as follows: The data sets and detailed methods are presented separately in Section 2. The results and discussion, including information about the climatological background over the RS, the spatiotemporal variability of the EHEs and ELEs, a diagnosis of the associated physical processes in the atmosphere, and the ocean response in terms of physics and biological productivity for each event, are given in Section 3. Finally, a brief summary of the major findings is given in Section 4.
Datasets
The study was based on five data sets. A gridded monthly SST dataset from NOAA Optimal Interpolation (OI, version 2) on a spatial resolution of 0.25 • , spanning from 1982 to 2018 [25], was used. It merges satellite ocean skin temperatures, infrared satellite retrievals from the Advanced Very High Resolution Radiometer (AVHRR), and in situ temperatures from ships and buoy platforms from the ICOADS (International Comprehensive Ocean Atmosphere Data Set) on regular global resolutions [26]. The data are freely available for download at https://www.esrl.noaa.gov/psd/data/gridded/data.noaa. oisst.v2.highres.html.
Sea level pressure (SLP) and wind data were taken from the fifth-generation reanalysis dataset (ERA5) from ECMWF, which replaced the recently discontinued ERA-Interim. ERA5 is produced on different global spatial and temporal scales using advanced modelling and data assimilation systems combining the available historical and satellite observations. We used the data from 1982 to 2018 on a 0.25 • grid, which can be downloaded from https://cds.climate.copernicus.eu/#!/search?text=ERA5&type=dataset.
The objectively analyzed net surface air-sea heat flux dataset was provided by the Woods Hole Oceanographic Institution, which was produced by combining the turbulent terms (latent and sensible heat) and radiative terms (short-and long-wave radiation) datasets. It is also freely available in a 1 • grid resolution and spans the period 1982-2018 (http://oaflux.whoi.edu/data.html).
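For reference, this net flux combines the four standard budget terms. Written out, under the common convention (assumed here) that positive values denote ocean heat gain, consistent with the heat gain/loss language used in Section 3:

```latex
Q_{\mathrm{net}} \;=\; Q_{\mathrm{SW}} \;-\; Q_{\mathrm{LW}} \;-\; Q_{\mathrm{LH}} \;-\; Q_{\mathrm{SH}},
```

where Q SW is the absorbed short-wave radiation, Q LW the net outgoing long-wave radiation, and Q LH and Q SH the turbulent latent and sensible heat fluxes.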
The mixed layer depth (MLD) data, which are determined by temperature criteria, were taken from the Simple Ocean Data Assimilation (SODA) [27]. The SODA is a backward forecast re-analysis effort that was launched to reconstruct the historical global ocean physical and biogeochemical variability. In this paper, we used the latest release (Version 3.3.1), which was produced by MERRA2 in a 0.5 • resolution and includes data from 1982 to 2015. It can be found at http://apdrc.soest.hawaii.edu/erddap/ griddap/hawaii_soest_d95c_faf2_959c.html.
Gridded monthly mean chlorophyll-a (Chl-a) concentration datasets were extracted from the MODIS-Aqua (Moderate Resolution Imaging Spectro-radiometer) sensor. Chl-a is a widely used satellite dataset that spans from 2002 to the present. Full documentation is described in [28] and is freely available from the NASA database as Level 3 standard mapped images, with a 4-km spatial resolution via https://oceandata.sci.gsfc.nasa.gov/MODISAqua/Mapped/Monthly/4km/chlor_a/.
Methods
The main aim of this article was to investigate the extreme SST on both sides (EHEs and ELEs) and to identify the associated driving forces over the RS. First, we organized the OISST data into boreal summer (June, July, and August) and boreal winter (December, January, and February) seasons. Next, we determined the seasonal climatology of the SST covering the entire basin for each season, which was computed by averaging the months at each grid point.
Extreme SST Events
To identify the extreme SST events, we used the percentiles technique. First, we examined threshold values (highest 5% for boreal summer and lowest 5% for boreal winter) from the climatological mean SST data (mean of 37 years). The threshold value was identified according to the range between the maximum and minimum values of SST. We chose those seasons because they have the highest probability of experiencing the highest (EHEs) and lowest (ELEs) SST events in the year, respectively. The threshold value for EHEs was 31.5 °C, and for ELEs it was 22.8 °C. Second, we again examined the highest and lowest 5% of SST, but for each time step (37 years). Then, the years that have values ≥ 31.5 °C were classified as EHEs, while those with values ≤ 22.8 °C were classified as ELEs. Our method revealed 10 EHEs and 15 ELEs, which are shown in Table 1.
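A minimal sketch of one way to implement this percentile classification with xarray; the file name, variable layout, and the exact per-year reduction are illustrative assumptions:

```python
import operator
import numpy as np
import xarray as xr

# Hypothetical file/variable names; 'sst' is a monthly (time, lat, lon)
# DataArray already restricted to the Red Sea.
ds = xr.open_dataset("oisst_red_sea.nc")
summer = ds.sst.where(ds.sst.time.dt.month.isin([6, 7, 8]), drop=True)
winter = ds.sst.where(ds.sst.time.dt.month.isin([12, 1, 2]), drop=True)

# Thresholds from the 37-year climatology maps: top 5% of the summer
# climatology and bottom 5% of the winter climatology over all grid points.
hi_thr = float(summer.mean("time").quantile(0.95))  # ~31.5 C in the paper
lo_thr = float(winter.mean("time").quantile(0.05))  # ~22.8 C in the paper

def extreme_years(da, q, thr, op):
    """Per-year seasonal quantile over all months and grid points; keep
    the years whose extreme tail crosses the threshold. (December is
    grouped with the same calendar year here -- a simplification of DJF.)"""
    keep = []
    for y in np.unique(da.time.dt.year.values):
        block = da.where(da.time.dt.year == y, drop=True)
        if op(float(block.quantile(q)), thr):
            keep.append(int(y))
    return keep

ehe_years = extreme_years(summer, 0.95, hi_thr, operator.ge)
ele_years = extreme_years(winter, 0.05, lo_thr, operator.le)
```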
Composite Analysis
In this part, we hypothesized that the RS extreme events are a manifestation of broader regional climate variation, which likely operates directly through the atmosphere. In order to understand how the signals reach the basin, we demonstrated the general atmospheric circulation patterns that favor the EHE and ELE years shown in Table 1. For these events, we averaged the corresponding atmospheric variables to produce composite maps separately for each event. Specifically, we looked at SLP and wind stress. Particular attention was given to the composite map of MLD, taking into account its roles in ocean convection and deep- and intermediate-water formation over the northern half of the RS as well as its roles in biogeochemical processes. Lastly, we investigated the biological response to atmospheric forcing in the Red Sea ecosystem in terms of Chl-a.
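A minimal sketch of this compositing step, assuming monthly xarray fields; the function name, variable layout, and ERA5 handle are illustrative assumptions rather than the authors' code:

```python
import xarray as xr

def composite_anomaly(field, event_years, season_months):
    """Composite a monthly (time, lat, lon) field over a set of event
    years for the given season, minus the all-years seasonal climatology.
    The variable layout is an assumption about the data."""
    seasonal = field.where(field.time.dt.month.isin(season_months), drop=True)
    climatology = seasonal.mean("time")
    events = seasonal.where(seasonal.time.dt.year.isin(event_years), drop=True)
    return events.mean("time") - climatology

# e.g. an SLP composite anomaly for the EHE years of Table 1 (summer = JJA):
# slp_ehe_anom = composite_anomaly(era5.msl, ehe_years, [6, 7, 8])
```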
Climatological Features over the RS
Figure 2 shows the seasonal climatology of the SST, averaged over the summer and winter seasons of 1982-2018. The results show a strong meridional gradient of approximately 6 °C along a distance of 1500 km [29]. Summer values reaching 32 °C were observed over the southwestern part of the basin, while the lowest values of up to 26 °C were observed over the far northern end [30], especially in the Gulf of Suez (Figure 2a). This finding can be explained by the northerly winds that blow from relatively cold areas and cover the entire basin, which are associated with a negligible amount of water inflow from the Gulf of Aden during this season [31]. This gradient seems to be the factor triggering thermohaline-driven circulation, which is an important term in the RS circulation [32,33]. For the winter season (Figure 2b), a meridional gradient is also present, but it is weak in comparison with the summer gradient, mainly due to the entrance of the relatively cold Gulf of Aden intermediate water [34]. In this situation, the maximum temperature shifts to the wind convergence zone in the central basin, where the wind is weak or calm [35,36]. To obtain background information about the atmospheric circulation that influences the extreme SST in the RS, we also present the climatological mean sea level pressure (SLP) and wind stress maps. In summer, the entire basin is affected by the westward extension of a monsoon low that crosses the RS and combines with the Sudan thermal low (or equatorial African low-pressure system in some studies), which is a favorable condition for the development of a clear pressure gradient in southern Europe (Figure 2c). During winter, the Sudan thermal low seems to be associated with the Red Sea Trough, which may play a key role in the projection of both the central Asia and Azores high-pressure systems (Figure 2d). As a result of the pressure gradients, a wide band of northerly wind stress blowing over the entire RS and surrounding area was observed during the summer season (Figure 2e). The same features were observed during the winter season (Figure 2f), associated with the blowing of southerly wind stress that converges in the central basin.
Spatial and Temporal EHEs and ELEs Variability
The spatial distributions of the EHE and ELE anomalies are presented separately in Figure 3. Visually, the distributions of these two event types are completely different, as they have different atmospheric circulation. During EHEs, the RS experiences significant positive anomalies, up to 0.5 °C over the southern and northern ends, with less than this value in the center (Figure 3a). In addition to oceanic and atmospheric factors, the shallow area near the southern end seems to contribute to warming the water, since land warms much faster and more vigorously than water. During ELEs, the RS experiences significant negative anomalies of up to −0.6 °C over the northern end; this value gradually decreases toward the southern end (Figure 3b).
Figure 3c,d show the time series of the highest and lowest 5% of SST values, including the years that have EHEs (red shaded area) and ELEs (blue shaded area), superimposed with their corresponding linear trends. The trend was fitted using the least squares method, and the significance (99%) of the result was tested using the Mann-Kendall test. It is clear that the EHEs that were greater than or equal to the threshold value (31.5 °C) started in the mid-1990s, with these events being absent prior to this period. The red line, which refers to the SST trend, indicates that the highest 5% of values, including the EHEs, are increasing by 0.027 °C year −1 (Figure 3c).
These results support the previous finding of Alawad et al. (2020) [19] that the whole RS has experienced a nonuniform warming trend since 1996, which has amplified over the northern half to reach 0.04 °C year −1 , approximately 4 times higher than the global trend. Figure 3d shows that the ELEs mostly occurred before 1996, being absent in the last 7 years. The trend analysis for the time series of the lowest 5%, including the ELEs, shows an increasing tendency (0.021 °C year −1 ), which corresponds to the previous study showing that the annual mean SST (1982-2016) exhibited a significant warming trend of 0.029 °C year −1 [20].
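The trend and significance recipe just described is straightforward to sketch; below is a generic least-squares slope together with the classical Mann-Kendall statistic (normal approximation, no tie correction), offered as an illustration rather than the authors' exact procedure:

```python
import numpy as np
from scipy import stats

def trend_and_mk(years, series):
    """Least-squares trend (deg C per year) plus the classical
    Mann-Kendall statistic with a two-sided p-value (normal
    approximation, no tie correction)."""
    years = np.asarray(years, dtype=float)
    series = np.asarray(series, dtype=float)
    slope, intercept, *_ = stats.linregress(years, series)
    n = len(series)
    s = sum(np.sign(series[j] - series[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2.0 * (1.0 - stats.norm.cdf(abs(z)))
    return slope, z, p

# e.g.: slope, z, p = trend_and_mk(np.arange(1982, 2019), top5_series)
# p < 0.01 would correspond to the 99% significance level quoted above.
```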
In brief, EHEs occur over the southern and northern ends, while ELEs occur over the northern end only. Moreover, the positive trend in the SST indicates that warmer ocean climate conditions over the RS can be expected in the near future, increasing the frequency of EHEs and decreasing that of ELEs.
Physical Mechanisms
Atmospheric forcing on the ocean basins has been discussed on various spatial and temporal scales, since it can regulate air-sea heat fluxes, including the SST, and vice versa [24,[37][38][39][40][41]. In this part, a composite analysis was carried out by averaging the corresponding SLP, wind stress, and net surface air-sea heat flux to determine the atmospheric pattern associated with each event.
SLP
SLP is considered an important atmospheric variable that is closely related to the general atmospheric circulation. Furthermore, it can give an indication about the surface air temperature, humidity, cloudiness, and wind flow in terms of speed and direction. Figure 4 shows the composite SLP maps corresponding to the EHEs and ELEs and their deviations from the climatological mean. The main conspicuous summer features are the monsoon low system and the eastward extension of the Azores anticyclonic system over the southeastern Mediterranean Sea. The monsoonal trough has a westward propagation, which enables it to adjoin the equatorial African low belts, including the Red Sea Trough and the Sudan thermal low. As a result of these synoptic features, the atmospheric circulation over the RS and adjoining regions is modulated by the pressure gradients between the monsoon low with less than 1002 hPa and the Azores high with more than 1014 hPa [37,42]. During EHEs, there are widespread negative SLP anomalies that cover the extension area of both the Azores and monsoon systems (Figure 4c). The largest negative anomaly (1 hPa) was found in the Azores high region over the southeastern Mediterranean Sea. This situation decreases the above-mentioned pressure gradient, which weakens the wind advection to the RS and adjoining regions in the Middle East and Africa.
The most conspicuous winter features are the prominent existence of a westward extension of the Siberian anticyclone over eastern Asia (greater than 1032 hPa) jointly with the relatively weak eastward extension of the Azores high, covering a broad area from central Asia to southern Europe and North Africa. This synoptic condition creates a significant SLP gradient for transferring continental, cold, and dry air through the arid land mass around the northern part of the basin to the central part [23,32,43]. This gradient is clear from the existence of a strong positive SLP anomaly during ELEs over Turkey and the Mediterranean Sea (greater than 2 hPa), which extends to central Africa. Charabi and Al-Hatrushi (2010) [44] considered these gradients to be important factors that modulate the winter rainfall (wet seasons) variability over northern Oman.
In brief, the intensity and position variability of the main circulation patterns, namely the Siberian and Azores highs and the monsoon low, may trigger the occurrence as well as determine the intensity of EHEs and ELEs during the summer and winter periods, respectively.
Wind Stress
Figure 5 depicts the wind stress flow over the RS and surrounding area. From an atmospheric point of view, the winds follow the above spatial distributions of SLP in terms of speed and direction to form wind stress. For instance, the pressure gradient between the Azores high and monsoon low shapes the north-northwesterly wind over the entire RS and surrounding area during the summer season. In particular, in EHE cases, the northerly wind stress experiences less than average values in all areas, including the RS. This means that the relatively cold air masses that are transferred from southern Europe to the southern RS are reduced during EHEs, which may be a possible factor that enhances their occurrence. An interesting result is the intrusion of air masses (red box) from the mountain gap (Tokar Gap) along the Sudanese coasts on the western RS side, with greater than average values (Figure 5b). These air masses are relatively warm and dry (locally called Hababai) and are mainly advected by the rain that is associated with the movement of the inter-tropical convergence zone that dominates eastern and central Africa during the summertime. When entering the RS, they join the northerly wind stress that governs the basin, and both move forward in a southerly direction. This mechanism may explain the occurrence of EHEs in the southern half of the basin during the summer period.
In brief, the combined effect of a relatively weaker than usual cold wind stress from the north and the intrusion of relatively stronger than usual warm air masses from the Tokar Gap are possible reasons for the occurrence of EHEs in the RS during the summer period.
Similarly, for the ELEs during the winter period (Figure 5d), the northerly wind stress strengthens over the northern half of the basin due to the strong pressure gradient shown in Figure 4c,d. At the same time, the stronger than usual wind stress coincides with the presence of less than usual southerly wind stress over the southern end, especially in Bab-Al-Mandab, where both winds converge in the central basin. This process may explain the occurrence of ELEs in the northern half of the basin. Moreover, the western land that surrounds the RS has experienced more wind stress than the eastern side, in particular Egypt and Sudan. This may be due to the eastern extension of the Azores high that makes the SLP center tilt to the west with reference to the RS. Previous studies have investigated the role of the atmospheric circulation in the RS circulation, and all have identified the immediate importance of the wind stress contribution on different spatial and temporal scales [30,31,[45][46][47].
In brief, the strengthening of northerly wind stress, which advected cold air masses to the basin, coincides with the presence of less than usual southerly wind stress over the southern end, which transfers warm and humid air northward. This is a possible reason for the occurrence of ELEs in the RS during the winter period.
Taken together, the wind analysis emphasizes the vital role of atmospheric circulation in enhancing the occurrence as well as the frequency of EHEs and ELEs during the summer and winter periods, respectively.
The Net Surface Air-Sea Heat Flux
To understand in depth the transfer of atmospheric circulation signals to the sea, we show the net air-sea heat flux anomalies during the EHEs and ELEs. Note that we present 8 years out of the 10 EHEs and the 15 ELEs, respectively, since the data span from 1984 to 2009. As the natural result of the presence of relatively warm air masses associated with less than average SLP and wind stress over the basin during the EHEs, the net surface air-sea heat flux is positive over the entire basin (heat gain from air to sea), with values up to 100 W m −2 (not shown). These values produce positive anomalies mainly over the central basin (Figure 6a).
Conversely, during the ELEs, the net surface air-sea heat flux is negative over the entire basin (heat loss from sea to air), associated with a strong meridional gradient ranging from -1 W m−2 in the south to over -200 W m−2 in the north (not shown). These values represent below-average anomalies over the entire basin, except the southern end (Figure 6b). Interestingly, the far northern end seems to differ from the rest of the basin; this may be due to a complex physical process that enhances intermediate- and deep-water formation. Further investigation using re-analysis datasets and a modelling approach would be beneficial and key to understanding the water formation process, which remains unresolved [38].
In brief, the net surface air-sea heat flux results are consistent with the distinct spatial distributions of SLP and wind stress for each event. The above-average heat gain (loss) due to the presence of relatively warm (cold) air masses and below-average (above-average) SLP and wind stress is a possible reason for the occurrence of EHEs (ELEs).
Associated MLD and Chl-a
In order to understand the roles of atmospheric forcing and ocean physics variability on the biological ecosystem in the basin, we present the anomalies of MLD and Chl-a for EHEs and ELEs, calculated as the difference between the composite maps of those events and their climatology values. The latter two maps are not shown in this analysis; we only present the anomaly values for each event. The analysis revealed a nonsignificant MLD anomaly for EHEs during the summer period (Figure 7a). The warmer the ocean surface and the more stable the stratified boundary layer, the shallower the MLD [48]; this is a well-known consequence of global warming [49]. The biological response during EHEs revealed a lower than average Chl-a concentration, especially in the central basin (Figure 7b). This result is in line with previous studies conducted on the RS [17,50]. The Chl-a concentration is negatively correlated with SST [23] and reached its minimum values during heat wave events [20]. Comparable results have been observed in other ocean basins [51,52], and these have been linked with a warmer global climate [53]. Note that, because Chl-a records only began in 2003, we used 9 out of the 15 years available to analyze the EHEs during summer and only 3 out of 10 years to analyze the ELEs in the composite analysis. Visually, the map of MLD anomalies looks somewhat similar to the spatial distribution of the SST during ELEs, since the area of above-average MLD corresponds to the area of the spatial distribution of the SST in the far northern end of the basin (compare Figures 7c and 3c). The above-average MLD is a natural consequence of the unstable water column, as the below-average SST produces dense water at the surface. A further consequence is the above-average Chl-a concentration over the same area, as water-column mixing increases. Triantafyllou et al. (2014) [54] and Sofianos and Johns (2003) [33] confirmed that vertical nutrient transport in this area is controlled by a deep convection or upwelling process. Conversely, the Chl-a concentration decreases over the southern entrance of the basin during ELEs (Figure 7d). This may be due to the weakening of wind stress in the area, since Chl-a is advected by the southerly wind [18,55].
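The composite-anomaly construction used for the heat flux, MLD, and Chl-a maps (event composite minus climatology) can be sketched as follows. This is a generic illustration, not the authors' code: the synthetic field and the event years are hypothetical stand-ins for the actual datasets.

    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(1984, 2010)                     # e.g., the 1984-2009 flux record
    # Monthly field on a (year, month, lat, lon) grid; synthetic stand-in data.
    field = rng.normal(0.0, 30.0, size=(years.size, 12, 40, 20))

    # Long-term monthly climatology over the full record.
    clim = field.mean(axis=0)                         # shape (12, lat, lon)

    # Seasonal composite over event years, here JJA of hypothetical EHE summers.
    event_years = np.array([1988, 1991, 1998, 2002, 2005, 2007, 2008, 2009])
    jja = [5, 6, 7]                                   # 0-based June, July, August
    composite = field[np.isin(years, event_years)][:, jja].mean(axis=(0, 1))

    # Anomaly map: positive values mean an above-average quantity during events.
    anomaly = composite - clim[jja].mean(axis=0)
    print(anomaly.shape, float(anomaly.mean()))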
In brief, the analysis emphasizes the links between the atmospheric circulation; ocean physical factors, including SST and MLD; and ocean fertility in terms of the Chl-a concentration.
Conclusions
This study explored the atmospheric circulation influencing the EHEs and ELEs over the RS, an area that has not yet been explored in previous studies. The question of how the physical processes of the atmosphere can affect the RS circulation is a highly important research topic, while the expected impact on the ecosystem is a challenge for near-future conditions. We focused on the summer and winter months only, since EHEs and ELEs are likely to take place during the hottest and coldest seasons of the year. Analysis of the 37-year-long OISST record shows that the EHEs (ELEs) observed over the southern (northern) basin have had a significant warming trend of 0.027 (0.021) °C year−1.
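Trends such as 0.027 °C year−1 correspond to a least-squares linear fit of an annual event-region SST series against time. A minimal sketch, with synthetic data standing in for the OISST record and the event-region averaging assumed to have been done beforehand:

    import numpy as np

    rng = np.random.default_rng(1)
    years = np.arange(1982, 2019)                      # a 37-year record
    sst = 30.0 + 0.027 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

    slope, intercept = np.polyfit(years, sst, 1)       # trend in deg C per year
    resid = sst - (slope * years + intercept)
    stderr = np.sqrt((resid**2).sum() / (years.size - 2)
                     / ((years - years.mean())**2).sum())
    print(f"trend = {slope:.3f} +/- {stderr:.3f} C/year")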
Based on the findings shown in this study (Figure 8), we propose a distinct type of atmospheric circulation for each event: EHEs during the summer period mainly occur over the southern RS when the westward monsoonal trough is dominant and adjoins the equatorial African low belts, including the Red Sea Trough and the Sudan thermal low. Negative SLP anomalies are widespread over the RS and the surrounding area, centered over the Mediterranean Sea. The SLP gradient decreases in southern Europe. Anomalous wind stress decreases over the RS and increases over the Tokar Gap area. Anomalous Chl-a values decrease due to the stably stratified ocean boundary layer. This is a straightforward negative consequence of EHEs on the chlorophyll concentration.
ELEs during the winter period mainly occur over the northern RS when the westward extension of the Siberian high adjoins the eastward extension of the Azores high. There is a strong positive SLP anomaly over Turkey, the Mediterranean Sea, and northeast Africa. The SLP gradient significantly increases in southern Europe. The anomalous wind stress increases over the northern RS and decreases over the southern part. The anomalous Chl-a values increase over the northern RS due to a significant increase in anomalous MLD but decrease over the southern part. This is a straightforward positive consequence of ELEs on the chlorophyll concentration.
These results bring us a step closer toward the ability to report and understand the extreme SST variability seen in the RS. Our findings raise the possibility that a warmer global climate could make the RS ecosystem less productive, mirroring the tropical oceans [49,53], which provides a useful background on how a warmer climate scenario can alter marine ecosystems. Furthermore, there is a need to include paleo-biological data to allow us to look closely at RS productivity under past climatic conditions before simulating the effects of future climate change using climate models. In addition, it is important to understand that the atmospheric circulations that force the climate of the RS and the surrounding area, including EHEs and ELEs, are not local or regional phenomena but a manifestation of superimposed remote impacts of large-scale climatic modes from the tropical and polar regions [24,37,46,47,[56][57][58].
Finally, sensitivity experiments using ocean models could determine the exact roles of the SST and wind on RS productivity, enabling better projection of future climatic conditions, and this could help decision makers to mitigate the harmful impacts of global warming on the region. | 2020-07-16T09:07:47.058Z | 2020-07-11T00:00:00.000 | {
"year": 2020,
"sha1": "07c5246d26f074255f80e7bc774b150f2a1ab410",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-4292/12/14/2227/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "82947c20da2fc3eb242c8176a9d6c48f0c76461a",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science",
"Environmental Science"
]
} |
232087220 | pes2o/s2orc | v3-fos-license | Argonaute Proteins Take Center Stage in Cancers
Simple Summary The dysregulation of RNA interference (RNAi) has often been observed in cancers, where the main focus of research has been on the small RNA molecules directing RNAi. In this review, we focus on the activity of Argonaute proteins, central components of RNAi, in tumorigenesis, and also highlight their potential applications in grading tumors and anti-cancer therapies. Abstract Argonaute proteins (AGOs) play crucial roles in RNA-induced silencing complex (RISC) formation and activity. AGOs loaded with small RNA molecules (miRNA or siRNA) either catalyze endoribonucleolytic cleavage of target RNAs or recruit factors responsible for translational silencing and target destabilization. miRNAs are well characterized and broadly studied in tumorigenesis; nevertheless, the functions of the AGOs in cancers have lagged behind. Here, we discuss the current state of knowledge on the role of AGOs in tumorigenesis, highlighting canonical and non-canonical functions of AGOs in cancer cells, as well as the biomarker potential of AGO expression in different tumor types. Furthermore, we point to the possible application of the AGOs in the development of novel therapeutic approaches.
Introduction
RNA interference (RNAi) plays a crucial role in post-transcriptional regulation of gene expression. In mammalian cells, RNAi is mediated by three classes of small RNAs (smRNAs), approximately 20-25 nt long: endogenous PIWI-interacting RNAs (piRNAs), microRNAs (miRNAs), and artificial small interfering RNAs (siRNAs). These smRNAs are loaded onto a member of the Argonaute (AGO) protein family, which comprises an AGO subclass (loaded with si/miRNA) and a PIWI subclass (loaded with piRNA) [1]. Together, they form the core of the RNA-induced silencing complex (RISC) [2]. Since the discovery of RNAi, AGO involvement in RISC assembly and small RNA (smRNA) maturation has been extensively studied [3]. However, the complex network of AGO-miRNA interactions, promoting the maturation and function of specific miRNAs, is still largely unexplored. As a core component of the RISC ribonucleoprotein complex, AGO proteins hold great importance in fine-tuning cellular protein and RNA profiles in order to ensure normal development and homeostasis [1]. It is now estimated that approximately 60% of protein-coding genes in human cells are controlled in an RNAi-dependent manner [4].
miRNAs are abundantly expressed in all mammalian cell types and at every stage of development. However, the smRNA profiles differ greatly spatiotemporally, which is characteristic for each cellular lineage throughout maturation [5]. In such a manner, AGO1-4 protein expression is tightly controlled during development. Normally, the AGOs are expressed in a tissue-specific manner where, for example, AGO2 is highly expressed in trachea but to a much lesser degree in kidneys [6]. Generally, AGO1 and AGO2 expression is significantly more prominent as compared to the remaining AGO proteins [7,8]; hence, the volume of published data on AGO1 and AGO2 involvement in tumorigenesis is relatively large compared to AGO3 and AGO4 publications (Figure 1a,b). Furthermore, the ratios of different AGO protein levels, with the emphasis on AGO1 and AGO2, are tightly controlled during early stages of mammalian embryonic development [9]. The expression of the AGO proteins changes along the development of organs, exemplified by elevation of AGO1-4 levels in the fetal brain as compared to adult tissue [7]. The dysregulation of RNAi processes has significant negative impacts on cellular homeostasis and has frequently been reported in neoplasia. miRNAs are often dysregulated in cancers [10][11][12]. In fact, it was reported that there is a global downregulation of miRNA expression in cancer tissues [12]. Numerous miRNA molecules have been described as either cancer-promoting oncogenes, referred to as oncomiRs, or cancer-demoting tumor suppressors [10][11][12][13][14][15]. Moreover, similarly to normal tissues, cancers derived from different origins are characterized by various miRNA dysregulations. Of note, miRNAs exhibiting tumor suppressive activity in one type of cancer may promote development of another.
AGO proteins are viewed as the mediators of miRNA function rather than the star players of RNAi; therefore, much of the literature documenting the impact of RNAi dysregulation in tumorigenesis focuses on miRNAs and not their partner proteins. Nevertheless, given the crucial importance of RNAi in the maintenance of cellular homeostasis, AGO proteins are inevitably engaged in cancer development and progression. Hence, unsurprisingly, the 2010s brought a substantial number of reports documenting AGO protein function, dysregulated expression, and mutations in cancer cells and tissues (Figure 1c). Herein, we provide a detailed summary of the relationship between AGO1-4 proteins and tumorigenesis.
The Functional Role of AGO Proteins in RISC
The AGO subclass includes four members, AGO1-4, each exhibiting high affinity to miRNA molecules and able to form functional RISC complexes [40]. The AGO proteins are characterized by four domains: the amino-terminal (N), PAZ (PIWI-ARGONAUTE-ZWILLE), MID (middle) and PIWI domains. The PAZ and MID domains are involved in miRNA binding, anchoring the 3′ and 5′ ends of the smRNA molecule, respectively. In mammalian cells, AGO1-4 proteins mediate miRNA-dependent translational repression. Moreover, AGO2 and, to a limited extent, AGO3 possess endoribonuclease activity, which can catalyze smRNA-directed ribonucleolytic cleavage of target mRNAs [41][42][43].
Mature miRNAs and siRNAs are single-stranded; however, AGO proteins recognize and bind their double-stranded precursors, containing functional guide and passenger strands (the latter is subsequently discarded). Therefore, AGO proteins are crucial not only for the functionality but also for the maturation of smRNAs. Formation of miRNA:AGO complexes occurs in two steps: loading, which is facilitated by the TRBP and DICER proteins, and unwinding of the double-stranded RNA [1]. The mechanism of the unwinding process in mammalian cells remains elusive, because the partner proteins involved in passenger strand removal and degradation are yet to be identified [2].
After disposal of the passenger strand, mature miRNA or siRNA molecules guide the AGO protein to complementary sequences in the transcriptome. For siRNAs, the complementarity with the target sequence is complete; in the case of miRNA-mediated gene silencing, only partial base pairing is necessary. The mechanism of smRNA-mediated negative regulation of gene expression depends on the specific AGO protein involved in RISC formation. Of the four AGO proteins, only AGO2 is equipped with a fully functional endoribonucleolytic domain and is able to directly cleave target sequences with extensive complementarity to the bound siRNA. The function of RISC containing the remaining AGO proteins is exerted through the interaction with partner proteins, facilitating the destabilization of the target transcript or its translational inhibition. The detailed mechanism of canonical RISC function will not be discussed further here, because it has been thoroughly reviewed elsewhere [1,[44][45][46].
The Involvement of AGO Proteins in Tumor Associated Processes
The literature regarding the dysregulation of RNAi factors in pathological states focuses at large on the smRNA molecules directing RNAi. However, it is important to consider that smRNAs alone cannot catalyze any chemical reactions. Therefore, the activity and function of smRNAs must be considered with regard to their partner proteins, AGO1-4. In fact, numerous reports have shed new light on the involvement of canonical and non-canonical AGO functions in tumorigenesis (Figure 2). Figure 2. Graphical depiction of AGO-regulated tumor-associated processes. (a) AGO4 enhances miRNA methylation, which inhibits miRNA activity [38]; (b) mutated KRAS protein negatively regulates AGO2 activity [28,29]; (c) AGO proteins mediate RNAi in the cytoplasm [1] and (d) nucleus [47] in cancer cells; (e) AGO1 and AGO2 bind to promoters of oncogenes to activate transcription [18][19][20]; (f) the AGO1 translational read-through variant, AGO1x, inhibits interferon-induced apoptosis via the depletion of nuclear dsRNA [22]; (g) AGO2 recruits RAD51, MMSET and KAT5 proteins to dsDNA breaks to facilitate DNA repair [33,34]; (h) AGO2 promotes telomerase activity [37]; (i) AGO2 tethers MYC mRNA and increases its stability [27]; (j) AGO2 activity is regulated via ERβ [31]. Created with BioRender.com (accessed date: 21 January 2021).
Regulation of AGO:miRNAs by Post-Translational Modifications
The activity of AGO proteins is, to a degree, regulated via post-translational modifications (PTMs), including prolyl-4-hydroxylation, phosphorylation, ubiquitination, poly-ADP-ribosylation (PARylation) and SUMOylation [40]. PTMs of AGOs are fine-tuned by numerous proteins that are themselves dysregulated in pathological states, such as cancer. For instance, AGO2 is phosphorylated at Y393 by the protein product of the well-characterized oncogene EGFR, which weakens the AGO2:DICER interaction, resulting in the inhibition of miRNA maturation [48]. This phenomenon was found to be enhanced under hypoxia, a stress state characterized by low oxygen levels and often observed in solid tumors [48]. Moreover, phosphorylation at residues S824-S834, catalyzed by the CSNK1A1 kinase, an oncoprotein implicated in leukaemia, negatively affects the interaction of RISC with target transcripts [49][50][51][52]. AGO2 is further regulated via acetylation at K355, K493, and K720, exerted by the P300/CBP complex [53]. Two of these PTMs (acetylation at K493 and K720, but not K355) facilitate enhanced maturation of miR-19b, an oncogenic miRNA [53]. The P300/CBP/AGO2/miR-19b axis promotes growth and proliferation of A549, a lung cancer cell line, and of an in vivo lung cancer xenograft mouse model [53]. Moreover, the mitogenic Akt3 kinase phosphorylates AGO2 at S387, resulting in a switch of AGO2-RISC activity from mRNA cleavage to translational repression and increased localization of the complexes to P-bodies, cytoplasmic foci of mRNA turnover [54,55]. Furthermore, this PTM was found to be dependent on the oncogenic KRAS/MEK/ERK signaling pathway, which negatively affects the sorting of AGO2 protein to exosomes [56].
The activity of RISC in mammalian cells may also be regulated by modifications of the miRNAs, such as cytosine methylation, which inhibits miRNA:target mRNA binding [57]. AGO4 has been shown to facilitate the recruitment of DNMT3A methyltransferase to the associated miRNA, thus enhancing miRNA methylation (Figure 2a) [38]. Interestingly, high methylation of the tumor suppressor miR-181a-5p is correlated with poor outcomes of glioblastoma multiforme [38]. Hence, AGO4 may contribute to neoplasia formation via promotion of miRNA cytosine methylation, and thus inhibition of tumor suppressor miRNAs.
Regulation of AGOs by Protein/RNA Co-Factors
The RISC machinery in cancer can be regulated by miRNAs. The production of AGO2 was found to be attenuated by miR-99a in hepatocellular carcinoma cells [58]. Moreover, there was a notable reduction in hepatocellular carcinoma tumorigenicity upon miR-99a overexpression or AGO2 silencing, which points to an oncogenic potential of AGO2 [58]. Another layer of regulation of RISC in physiological and pathological states is exerted by the ratios of AGO1-4 protein content in the cell, because the function of specific miRNA molecules may depend on the type of AGO protein to which they are bound [25,59]. The suppressive activity of miR-145-5p, for example, is manifested only in RISC complexes with AGO2 and not the remaining AGO proteins [25]. On the other hand, oncogenic miR-10a is able to negatively regulate target gene expression only in complex with AGO1 and AGO3 [59]. In this manner, the impact of dysregulation of specific AGO proteins on tumorigenesis may vary.
Apart from binding RNA, AGO proteins also depend on interactions with protein co-factors, for some of which RNAi is not the main function. For instance, in multiple myeloma cells, AGO2 is destabilized upon binding with cereblon, a therapeutic target of immunomodulatory drugs [60]. The interaction markedly contributed to the cytotoxicity of the drugs [60]. Furthermore, AGO2 was found to be an important binding partner of the protein product of KRAS, a well-characterized oncogene (Figure 2b) [28,29]. It was shown that high levels of AGO2 protein enhanced neoplastic transformation driven by KRAS mutants, whereas knockout of AGO2 resulted in the growth arrest of KRAS-dependent cancer cells [28]. Moreover, the AGO2-mutated KRAS interaction holds crucial importance in pancreatic ductal adenocarcinoma progression, where AGO2 expression is required for overcoming oncogene-induced senescence and cancer development [29].
AGOs in the Nucleus Affect Cancer Progression
In the early days of RNAi research, the general consensus was that RISC-mediated gene silencing is limited to cytoplasmic transcripts, particularly mature mRNAs (Figure 2c) [61,62]. However, a growing body of data documents the presence of functional RISC components in cell nuclei [17,47,63,64]. Even though miRNA loading factors are absent from the nucleus, RNAi factors associate into active complexes there (Figure 2d) [47,63]. Furthermore, the assessment of miRNA binding sites in stem cell nuclei revealed that approximately 50% of the miRNAs are bound to intronic regions of pre-mRNAs [47]. The significance of nuclear RNAi for tumorigenesis, however, remains largely unexplored.
Beyond RNAi, the suggested involvement of the AGOs in nuclear processes spans from chromatin remodeling to transcriptional regulation, splicing, DNA repair and regulation of telomerase activity [40]. Generally, in mammalian cell nuclei, the literature points to AGO-mediated negative transcriptional regulation via the recruitment of either protein or DNA methyltransferases to promoter regions of the target genes. The interaction is followed by heterochromatin formation and inhibition of the gene's transcriptional activity via mechanisms similar to those described in yeast and Drosophila [65][66][67][68][69][70][71]. Interestingly, in some cases, smRNA-directed AGO proteins can bind to promoter regions to facilitate the recruitment of RNA Polymerase II and gene transcription (Figure 2e) [72][73][74][75]. For instance, the involvement of AGO2 in transcriptional activation was described in breast cancer cells, in which AGO2 was found to promote cell growth by inducing mRNA synthesis of the progesterone receptor (PR) by binding to the promoter region of the PR gene [76].
In human cancer cells, the involvement of AGO1 in chromatin remodeling has been reported [19,20]. The protein was found to enhance the recruitment of RNA Polymerase II to promoters of genes involved in cell growth and survival in prostate cancer cells [18]. Moreover, AGO1-mediated transcriptional activation of cancer-related genes, in a manner dependent on AT repeats at the promoter region, was implicated in neoplasia of various origins [19]. Additionally, one report points to the tumor-promoting activity of AGO1x, a translational readthrough variant of AGO1, dependent on nuclear scattering of dsRNAs and silencing of interferon-induced apoptosis in breast cancer cells (Figure 2f) [22].
Regulation of DNA Integrity
The integrity of the cellular genome is ensured by the DNA damage response (DDR) pathway [77]. Depending on the severity of the genetic aberrations, either repair mechanisms are triggered or the cell is directed towards senescence or apoptosis [77]. The rapid growth and division of cancer cells, characterized by numerous genetic aberrations, relies on dysregulation of the DDR pathway [77]. DDR is partially regulated via smRNAs, which are induced by double-stranded DNA breaks [33]. The smRNAs are recognized by the AGO2 protein, which in turn recruits Rad51 and enhances DNA repair via homologous recombination (Figure 2g) [33]. Furthermore, it was suggested that AGO2, loaded with smRNAs, facilitates the recruitment of the methyltransferase MMSET (WHSC1) and the acetyltransferase Tip60 (KAT5) to sites of double-stranded breaks in the DNA, leading to enhanced histone H4 di- and tri-methylation at lysine 20 and H4 acetylation at lysine 16 (Figure 2g) [34]. Consequently, these histone modifications lead to an open chromatin configuration, which in turn facilitates the recruitment of the DNA repair machinery to double-stranded DNA breaks [34]. Moreover, the importance of AGO2 in the DDR pathway is manifested by the impaired DNA repair potential of AGO2-deficient osteosarcoma cells [78].
The involvement of AGO2 in the regulation of the DDR pathway is not limited to facilitating or triggering repair of double-stranded DNA breaks [77]. Upon recognition of DNA damage, cell fate is decided between DNA repair and death or senescence [77]. The decision is surveilled by P53, a well-characterized tumor suppressor protein [77]. Upon DNA damage, P53 interacts with AGO2 to fine-tune the subset of AGO2-associated miRNAs. One of the notable P53-mediated alterations in the AGO2:miRNA interactome is the enhanced loading of the suppressive let-7 miRNA family into functional RISC complexes [79].
The integrity of chromosome ends is ensured by telomeres, which are nucleoprotein structures localized at chromosome ends containing repetitive nucleotide sequences [80]. Telomeres can be elongated by the telomerase enzyme, which is physiologically active only in human gametes and stem cells [80]. However, in advanced cancers telomerase is reactivated, which results in telomere elongation and strengthens the proliferative potential of cancer cells [81]. Studies in HeLa cells, a cervical cancer cell line, uncovered the interaction between AGO2 and the telomerase reverse transcriptase (TERT), as well as the telomerase RNA component (TERC), which promotes TERT and TERC association [37]. TERT and TERC constitute the core of the telomerase enzyme; hence, AGO2-dependent promotion of binding between the two components pointed to a new role of AGO2 in regulating telomerase activity (Figure 2h). Indeed, AGO2 silencing caused significant shortening of the telomeres in HeLa cells, which further implicated the importance of the protein in telomere length tuning [37].
Cellular Differentiation
Apart from rapid proliferation and cell growth, tumor tissues are characterized by the maintenance of the undifferentiated status of cancer cells. A crucial role of AGO proteins in embryonic development and neoplastic transformation of leukaemic myeloid progenitors has been implicated by numerous research groups [9,23,32,47,82,83]. It was shown that increased expression of AGO2 protein is necessary for monocyte differentiation of leukemic myeloid progenitors, whereas granulocyte differentiation requires maintenance of high AGO1 levels [23,32]. Furthermore, the presence of both AGO1 and AGO2 proteins was essential for the successful induction of differentiation of leukaemia cells upon treatment with retinoic acid or 1,25-dihydroxyvitamin D3, respectively [23,32]. Therefore, high expression of AGO proteins in tumor tissues may be favorable for therapeutic-induced differentiation of cancer cells.
The involvement of AGO proteins in cellular differentiation has also been assessed in neuroblastoma. Potenza et al. reported a selective increase in AGO4 expression in differentiating neuroblastoma cells, whereas the levels of the other AGO proteins decreased [39]. Such a pattern of AGO protein expression implies a crucial importance of AGO4-mediated regulation of gene expression in the differentiation of neuroblastoma cells [39].
Angiogenesis
Inside solid tumors, cancer cells are often under hypoxic stress, which can contribute to tumorigenesis by stimulating angiogenesis [84]. The inhibitory effect of AGO2 downregulation on umbilical vein endothelial cell growth and tube formation has been previously reported [85,86]. In fact, the crucial role of AGO2 in stimulating tumor-mediated angiogenesis was demonstrated in hepatocellular carcinoma [35]. The expression of AGO2 in six hepatocellular carcinoma cell lines was correlated with the expression and release of VEGF, a key factor promoting vascularization [35]. Moreover, silencing of AGO2 in these cell lines led to a decrease in VEGF levels [35]. The pro-angiogenic activity of AGO2 was also described in multiple myeloma, in which AGO2 promoted the secretion of miRNAs stimulating the formation of new blood vessels [36].
Upon hypoxia, AGO2 function is fine-tuned by post-translational modifications (PTMs). In low oxygen conditions, AGO2 is hydroxylated, which stabilizes the protein as well as promotes intracellular activity and release of miR-210 [87,88]. miR-210 exerts pro-angiogenic functions; therefore, AGO2 hydroxylation may contribute to the vascularization of solid tumors [89]. Hydroxylated AGO2 is also recognized by HSP90, which triggers translocation of RISC complexes into stress granules [90]. Another hypoxia-mediated AGO2 PTM is phosphorylation at Y393, which is catalyzed by EGFR. This modification reduces the maturation of miRNA precursors, apart from a specific set of miRNAs involved in the promotion of cell survival and invasiveness, including miR-21 and miR-192 [48].
Despite numerous reports documenting pro-angiogenic activity of AGO2, AGO1 was found to repress VEGF expression and vascularization under hypoxic stress [24]. Moreover, some miRNAs, including miR-103/107 and let-7, are upregulated upon hypoxia and repress AGO1 translation to promote angiogenesis in HUVEC cells [24].
Motility and Metastasis
An important aspect of cancer pathogenesis is the ability of cancer cells to migrate from the place of origin and form metastatic tumors. The mechanisms underlying cancer metastasis have been extensively studied, allowing for identification and characterization of the epithelial-mesenchymal transition (EMT), a process enhancing motility and metastatic potential of tumor cells [91]. In hepatocellular carcinomas, a correlation between activation of the EMT process and AGO1 has been identified. It was shown that depletion of AGO1 in the HCCLM3 cell line resulted in a significant inhibition of migration and downregulation of proteins involved in EMT, which points to the metastasis-promoting activity of AGO1 [21].
AGO2 has also been linked to enhanced metastatic activity in hepatocellular carcinoma. The pro-metastatic activity of AGO2 is not connected with its canonical function, but rather depends on AGO2-mediated transcriptional activation [92]. Mechanistically, it was shown that synthesis of the focal adhesion kinase (FAK) mRNA, one of the key EMT promoters, is triggered by the binding of AGO2 to the FAK gene promoter region [92]. Furthermore, reports also point to the potential facilitation of metastasis by the canonical function of AGO2. The protein interacts with the newly identified pro-metastatic protein LASP1 in breast cancer cells, in a manner dependent on C-X-C chemokine receptor type 4-mediated phosphorylation of LASP1 [93]. This leads to modified activity of the AGO2-associated miRNAs. Most importantly, the interaction between LASP1 and AGO2 causes the inhibition of anti-metastatic let-7 activity [93].
Tumor-Promoting and Anti-Cancer Functions of the AGOs
The oncogenic function of AGO2 was documented in hypopharyngeal cancer, where knockdown of AGO2 led to the inhibition of cell growth and tumor formation in mice, as well as activation of the mitogenic FAK/PI3K/AKT pathway [26]. Additionally, AGO2 was found to tether MYC mRNA, increasing its stability in hepatocellular cancer cells and therefore promoting cell survival and proliferation (Figure 2i) [27]. Moreover, AGO1 was shown to exhibit tumor-promoting activity in hepatocellular carcinoma cells, which underwent potent proliferation arrest upon AGO1 silencing [21]. Despite the growing evidence of the oncogenic potential of AGO proteins discussed above, a portion of the published data points to tumor suppressor activity of AGO2. In fact, overexpression of AGO2 resulted in decreased proliferation and motility of H1299 lung cancer cells [30]. Moreover, the AGO2 protein seems to be involved in the negative regulation of FGF2, which is elevated in numerous cancers and contributes to rapid proliferation of cancer cells [94]. Additionally, AGO2 has been found to interact with the ERβ receptor, a protein exerting tumor suppressive functions, which regulates both canonical and non-canonical AGO2 activities (Figure 2j) [31].
The Prognostic Value of AGO Protein Expression in Cancer
The dysregulated expression of the genes encoding AGO proteins, with emphasis on AGO2 and, to a lesser extent, AGO1, was demonstrated in neoplastic tissues of numerous cancer types (Figure 3). Hence, the biomarker potential of AGO1 and AGO2 in cancers of different origin has been extensively explored, pointing to a prognostic value of AGO proteins in solid tumors as well as leukaemia.
In particular, the analysis of AGO protein expression in 103 ovarian carcinoma specimens uncovered elevated levels of AGO1 and AGO2 in metastatic tumors (Figure 3a) [95]. In addition, high levels of AGO2 mRNA were correlated with shorter progression-free survival (Figure 3a) [95]. Furthermore, a significant body of published data documents the potential of AGO2 as a biomarker in breast cancer [96,97]. Analysis of gene expression data derived from The Cancer Genome Atlas and 291 breast cancer specimens pointed to a correlation of high AGO2 expression with unfavorable, hormone receptor-positive subtypes of the disease and poor clinical outcome (Figure 3b) [96,97]. Moreover, the assessment of AGO2 expression profiles resulted in an improvement of prediction of the breast cancer subtype and estrogen receptor or progesterone receptor status by 15-20% [97].
Furthermore, AGO2 upregulation was implicated as an adverse prognostic biomarker for urothelial carcinoma of the bladder [98,99]. Based on the analysis of 106 cases, high expression of AGO2 correlated both with metastasis and lower overall survival of the patients (Figure 3c) [98]. Looking in detail, the assessment of AGO2 expression levels in bladder tissues derived from urothelial carcinoma was sufficient to discriminate between muscle-invasive and non-muscle-invasive tumors [99]. Moreover, the assessment of AGO2 expression may serve as a prognostic factor for grading patients suffering from colorectal cancer [100]. Evaluation of AGO2 in 76 colorectal tumor samples revealed a significant correlation between an increase in gene expression and progression of the disease from stage II to stage III tumors (Figure 3e) [100]. Similarly, an association of high AGO2 expression with adverse tumor characteristics was also found for gastric cancer [101]. Evaluating AGO2 in 363 gastric cancer samples uncovered a significant correlation between tumor cell differentiation and lymph node invasion (Figure 3d) [101]. Despite the described upregulation of AGO2 in gastric cancer, decreased expression of AGO2 was found in HER-2-positive cases [101], adding complexity to the potential application of AGO2 for grading gastric cancer patients. Moreover, augmented expression of AGO2 correlated with adverse prognosis of glioma cases [102]. The assessment of AGO2 mRNA and protein levels in 129 glioma specimens uncovered a correlation between elevated expression of AGO2 and lower overall survival, as well as progression-free survival (Figure 3f) [102]. In addition, it was shown that AGO2 levels increased during glioma progression [102].
The prognostic value of AGO1 has been implicated in colon cancer [103]. The upregulation of AGO1 was inversely correlated with survival rates of colon cancer patients, based on the assessment of 75 cases (Figure 3e) [103]. AGO1 was also suggested as a biomarker of head and neck squamous cell carcinoma, as analysis of 21 tumor tissues revealed a significant upregulation of AGO1 gene expression (Figure 3g) [104]. Interestingly, in 3 of the 21 assessed cases, amplification of AGO1 was observed [104].
Despite the large body of data documenting upregulation of the AGOs in numerous types of cancer, melanoma tumors are characterized by lower expression of AGO2 compared to other neoplastic transformations, or even normal tissues (Figure 3h) [7]. Even more strikingly, for other skin cancers, i.e., actinic keratoses, basal cell carcinomas, and squamous cell carcinomas, overexpression of AGO1 and AGO2 was instead observed [105]. A decrease in AGO2 in cancer cells vs. normal controls was also found for childhood acute lymphoblastic leukaemia, based on the analysis of 25 cases [106]. Additionally, the decline in AGO2 levels was associated with progression of the disease (Figure 3i) [106]. A similar correlation between AGO2 expression and tumorigenesis has been implicated for clear cell renal cell carcinoma [107]. The evaluation of mRNA expression data from The Cancer Genome Atlas revealed decreased levels of AGO2 in clear cell renal cell carcinoma tumors as compared to normal tissues (Figure 3j) [107].
The relationship of AGO3 and AGO4 expression with tumorigenesis has not been frequently reported. However, decreased expression of the AGO3 and AGO4 genes was reported for primary hepatocellular carcinomas as compared to healthy tissues (Figure 3k) [108]. On the contrary, in colon cancer tissues, AGO3 and AGO4, along with AGO2, were found at higher levels compared to normal controls (Figure 3e) [103]. Furthermore, elevated AGO2-4 expression correlated with distant metastases of colon tumors [103].
The Biomarker Potential of Modifications of AGO Proteins
The level of activity and the specific functions of AGO proteins are controlled not only via regulation at the protein level, but also by various PTMs [48]. The modifications of AGO proteins significantly affect their involvement in tumor-associated processes; therefore, the possible biomarker potential of AGO PTMs in neoplasia has been documented. For instance, high levels of acetylated AGO2 were correlated with poor prognosis for lung cancer patients [53]. Another AGO2 PTM, phosphorylation at Y393, was also found to be an adverse prognostic factor, in this case for breast cancer [48]. The association between the other AGO proteins and their PTMs in cancer progression and prognosis, however, remains largely elusive.
The Biomarker Potential of Genetic Variations of AGO Proteins
Several single nucleotide polymorphisms (SNPs) of the AGO1 and AGO2 genes have been described in a substantial number of neoplastic vs. control case studies (Table 1). For instance, genotyping of 855 nasopharyngeal carcinoma samples revealed 25 AGO2 SNPs, of which one (rs3928672 GA + AA) was associated with significantly increased risk of the disease [109]. Moreover, the polymorphism was also correlated with elevated AGO2 expression levels in cancer tissues [109]. The impact of AGO2 SNPs on disease susceptibility has also been assessed in breast cancer by different research groups. Based on the analysis of 488 breast cancer specimens, Sung and colleagues showed that two AGO2 genetic variants, rs11786030 A/G and rs2292779 C/G, were correlated with elevated risk of breast cancer [110]. In like manner, the polymorphism of the AGO-encoding genes in breast cancer cases was analyzed for 417 Russian patients [111]. The study did not signal a significant impact of AGO2 SNPs; however, rs595055 C/T, one of the AGO1 genetic variants, was associated with augmented breast cancer risk [111]. Further studies on the influence of AGO1 and AGO2 SNPs on breast cancer susceptibility were conducted on 93 Mediterranean cases and uncovered another AGO1 SNP (rs636832 A/A), as well as one more AGO2 variant (rs2977490 G/G), associated with elevated risk of the disease [112].
Table 1 (recoverable rows). SNP | Cancer type | Risk | Reference:
rs3928672 GA + AA | Nasopharyngeal carcinoma | Increased | [109]
rs11786030 A/G | Breast cancer | Increased | [110]
rs2292779 C/G | Breast cancer | Increased | [110]
rs2977490 G/G | Breast cancer | Increased | [112]
A series of case vs. control studies implicated the possible application of AGO1 genotyping in cancer risk assessment. Analysis of AGO1 genetic variations of 628 Chinese gastric cancer specimens uncovered lower disease susceptibility for individuals bearing the rs636832 AA + A variant [113]. AGO1 polymorphisms have also been assessed for their influence on lung cancer risk. Based on the analysis of DNA from 473 Chinese patients, the rs595961 AG genetic variant of AGO1 was found to correlate with elevated susceptibility to lung carcinoma [114]. Further genotyping experiments, including 622 Korean lung cancer cases, revealed a protective effect of the rs636832 A > G AGO1 variant [115]. Another neoplasia partially facilitated by specific genetic variants of AGO1 is clear cell renal cell carcinoma: the AGO1 rs595961 AG + GG genotype was found to decrease the odds ratio of the disease among 279 American male subjects [116].
There is a significant amount of literature concerning the impact of polymorphisms in AGO-encoding genes on the risk of cancer; therefore, Dobrijevic et al. performed a meta-analysis to seek a general influence of AGO1 and AGO2 SNPs on neoplasia susceptibility based on eleven reports [98,109,[113][114][115][116][117][118][119][120][121]. Two genetic variants of AGO1 (rs636832, rs595961) and a single AGO2 SNP (rs4961280) were analyzed [122]. The most striking results were obtained for the AGO1 rs636832 GA genetic variant, which was found to hold a protective effect on overall cancer risk [122]. Moreover, another of the analyzed AGO1 SNP variants, rs595961 AG, was implicated as a cancer risk factor for the Asian population [122].
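The risk estimates in such case-control genotyping studies are conventionally expressed as odds ratios with confidence intervals computed from a 2x2 genotype-by-status table. The sketch below is a generic illustration; the counts are hypothetical and not taken from any of the cited studies.

    import math

    def odds_ratio(a, b, c, d):
        """OR and Woolf 95% CI for a 2x2 table: a = cases with the variant,
        b = cases without, c = controls with, d = controls without."""
        or_ = (a * d) / (b * c)
        se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
        ci = (math.exp(math.log(or_) - 1.96 * se),
              math.exp(math.log(or_) + 1.96 * se))
        return or_, ci

    # Hypothetical counts: an OR > 1 with a CI excluding 1 suggests increased risk.
    print(odds_ratio(120, 368, 80, 412))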
AGO Proteins as Potential Therapeutic Applications
The engagement of RNAi in therapeutic strategies is an exciting and promising perspective. To date, two RNAi-based drugs, ONPATTRO and GIVLAARI, have been approved by the FDA for the treatment of peripheral nerve disease and acute hepatic porphyria, respectively [123]. Cancer development is driven by overexpression or aberrant activation of proto-oncogenes; therefore, various therapeutics taking advantage of endogenous RNAi machinery, specifically targeting transcripts upregulated in cancer cells, have been developed and are currently undergoing clinical trials [124]. Given the efforts put into delineation of the detailed mechanisms of RNAi and the development of novel RNAi-based approaches and methods of delivery, we are on the verge of broad engagement of RNAi for anti-cancer therapies.
Besides the potential of RNAi phenomena per se for the development of novel therapeutic strategies, some reports also point to the feasibility of manipulating AGO activity for reducing tumor growth. In particular, the small molecule TPE, which inhibits the association between AGO2 and miRNAs, was shown to suppress growth of 3T3 mouse embryonic cells and completely inhibit tumor formation in mouse models in vivo [125]. The potential of this therapeutic approach was further indicated when targeting the binding domain of AGO proteins with a small-molecule inhibitor, which resulted in enhanced granulocytic differentiation of the promyelocytic leukaemia cell line NB4 upon treatment with retinoic acid [126]. miRNA profiling from serum is a promising tool for routine check-ups of patient response to therapy, especially given the simplicity of collecting the input patient material. Fuji and colleagues described a novel option for monitoring the outcome of colorectal cancer patients based on the levels of circulating AGO2:miR-21/200c complexes [127]. Therefore, AGO proteins can also be utilized to enhance the biomarker potential of circulating miRNAs.
Although numerous miRNA and siRNA molecules exhibit potential to reduce tumor growth, efficient and safe modes of delivery remain a limiting step for the broad application of RNAi in clinics. However, the use of AGO2-conjugated nanoparticles to deliver the tumor suppressor miR-376 into xenograft mouse tumors was shown to be an efficient and non-toxic strategy [128], demonstrating that miRNAs/siRNAs could be delivered for cancer therapies through the AGOs.
Conclusions
The tight regulation of RNAi activity is necessary in order to ensure normal development of the human body. Even minute dysregulations of miRNA molecules and of the proteins engaged in RNAi may constitute the basis of severe malignancies, including cancer. Although small RNA functions and their potential application in neoplasia have been greatly explored in the last 20 years, studies on their partner proteins, AGO1-4, and their involvement in cancer development have largely lagged behind. However, due to the publication of numerous reports on AGO1-4 function in tumorigenesis, new light has been shed on the mediators of RNAi processes in the context of malignant tissues. Given the impact of the presented advances in the delineation of AGO1 and AGO2 dysregulation and activity in cancers of various origin, the broad function of the AGOs in neoplastic transformation will be further documented in the foreseeable future. Understanding of AGO1-4 involvement in tumorigenesis will most probably be followed by the clinical application of this knowledge to aid the development of RNAi-based anti-cancer strategies. | 2021-03-03T05:22:15.266Z | 2021-02-01T00:00:00.000 | {
"year": 2021,
"sha1": "fe3463fab9247a6e4b11b9437f77f74fe34d2687",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6694/13/4/788/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fe3463fab9247a6e4b11b9437f77f74fe34d2687",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
13862635 | pes2o/s2orc | v3-fos-license | Dynamical mechanism of atrial fibrillation: a topological approach
While spiral wave breakup has been implicated in the emergence of atrial fibrillation, its role in maintaining this complex type of cardiac arrhythmia is less clear. We used the Karma model of cardiac excitation to investigate the dynamical mechanisms that sustain atrial fibrillation once it has been established. The results of our numerical study show that spatiotemporally chaotic dynamics in this regime can be described as a dynamical equilibrium between topologically distinct types of transitions that increase or decrease the number of wavelets, in general agreement with the multiple wavelets hypothesis. Surprisingly, we found that the process of continuous excitation waves breaking up into discontinuous pieces plays no role whatsoever in maintaining spatiotemporal complexity. Instead this complexity is maintained as a dynamical balance between wave coalescence -- a unique, previously unidentified, topological process that increases the number of wavelets -- and wave collapse -- a different topological process that decreases their number.
Atrial fibrillation is a type of cardiac arrhythmia featuring multiple wavelets that continually interact with each other, appear, and disappear. The genesis of this spatiotemporally chaotic state has been linked to the alternans instability that leads to conduction block and wave breakup, generating an increasing number of wavelets. Less clear are the dynamical mechanisms that sustain this state and, in particular, maintain the balance between the creation and destruction of spiral wavelets. Even the relation between wave breakup and conduction block, which is well-understood qualitatively, at present lacks a proper quantitative description. This paper introduces a topological description of spiral wave chaos in terms of the dynamics of wavefronts, wavebacks, and point defects (phase singularities) that anchor the wavelets. This description both allows a dramatic simplification of the spatiotemporally chaotic dynamics and enables quantitative prediction of the key properties of excitation patterns.
I. INTRODUCTION
Atrial fibrillation (AF) is the most common sustained cardiac arrhythmia 1 . While not itself lethal, it has a number of serious side effects, such as increased risk of stroke and systemic thromboembolism 2 . The origin of AF has been debated through much of the previous century 3 . In 1913, Mines proposed that fibrillation is caused by a reentrant process 4 , which leads to a high-frequency wave propagating away from the reentry site and breaking up into smaller fragments. This mechanism is presently referred to as anatomical reentry and requires a structural heterogeneity of the cardiac tissue, such as a blood vessel (e.g., vena cava). Reentry could also be functional 5 , where the heterogeneity (i.e., tissue refractoriness) is dynamical in nature. Neither picture, however, explains the complexity and irregularity of the resulting dynamics.
The first qualitative explanation of this complexity came in the form of the multiple wavelet hypothesis proposed by Moe 6 . In this hypothesis multiple independent wavelets circulate around functionally refractory tissue, with some wavelets running into regions of reduced excitability and disappearing and others breaking up into several daughter wavelets, leading to a dynamical equilibrium. This picture was subsequently evaluated and refined based on numerical simulations 7 and experiments 8 .
Krinsky 9 and then Winfree 10 suggested that the dynamical mechanism of fibrillation relies on the formation and interaction of spiral waves. Spiral waves rotate around phase singularities that may or may not move, producing reentry that requires neither structural nor dynamical heterogeneity in refractoriness. The presence and crucial role of spiral waves in AF was later confirmed in optical phase mapping experiments [11][12][13] . Experimental evidence shows that spiral waves tend to be very unstable: only a small fraction of these complete a full rotation [14][15][16] with the dynamics dominated by what appears to be wavebreaks or wave breakups (WBs).
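In practice (in both optical-mapping and simulation data), phase singularities of this kind can be located as points where a phase field winds by ±2π around an elementary grid plaquette. The sketch below is a generic illustration, not the method of any particular study; the phase convention (atan2 of the two state variables about a reference point) is one common choice among several.

    import numpy as np

    def topological_charges(u1, u2, u1_ref=1.0, u2_ref=1.0):
        """Winding number (+1/-1) of the phase around each 2x2 grid plaquette."""
        theta = np.arctan2(u2 - u2_ref, u1 - u1_ref)    # phase field
        wrap = lambda d: (d + np.pi) % (2 * np.pi) - np.pi
        t00, t01 = theta[:-1, :-1], theta[:-1, 1:]
        t10, t11 = theta[1:, :-1], theta[1:, 1:]
        # Accumulate wrapped phase differences around each plaquette.
        circ = wrap(t01 - t00) + wrap(t11 - t01) + wrap(t10 - t11) + wrap(t00 - t10)
        return np.rint(circ / (2 * np.pi)).astype(int)

    # Toy example: a single spiral core at the origin.
    x = np.linspace(-1.0, 1.0, 64)
    X, Y = np.meshgrid(x, x, indexing="ij")
    phi = np.arctan2(Y, X)
    charges = topological_charges(1.0 + np.cos(phi), 1.0 + np.sin(phi))
    print(charges.sum(), np.count_nonzero(charges))  # net charge +/-1, one plaquette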
Although there is plentiful experimental and computational 17 evidence that WBs play a crucial role in the transition to fibrillation, it is far less obvious that this mechanism is essential for maintaining AF. As Liu et al. 16 write, "for a wave to break, its wavelength must become zero at a discrete point somewhere along the wave. This can happen if the wave encounters refractoriness that creates local block (wavelength = 0), while propagating elsewhere. Therefore, WBs can be detected at locations where activating wavefronts meet the repolarization wavebacks." The data produced by experimental studies is highly unreliable in this regard, since detecting the position of wavefronts and wavebacks based on optical recordings is far from straightforward. Numerical simulations, on the other hand, have focused mostly on the transition, rather than sustained AF. Theoretical studies of model systems such as the complex Ginzburg-Landau equation 18 and FitzHugh-Nagumo equation 19 lack dynamical features, such as the alternans instability 20 , that are believed to play an essential role in conduction block that leads to WBs and AF 21 .
Even if WBs do play a role in maintaining AF, they tell only a part of the story. Indeed, in sustained AF, despite some variation, quantitative metrics such as the number of wavelets or phase singularities have to remain in dynamical equilibrium. While WBs may explain the increase in the number of wavelets and phase singularities, they cannot explain how these numbers might ever decrease. The multiple wavelet hypothesis 6 comes the closest to providing all the necessary ingredients for such a dynamical equilibrium, but it lacks sufficient detail to be either validated or refuted.
The main objective of this paper is to construct a mathematically rigorous topological description of the dynamics and to use this description to characterize and classify different dynamical events that change the topological structure of the pattern of excitation waves in a state of sustained atrial fibrillation. We will focus on the smoothed version 22 of the Karma model 23,24

∂_t u = D ∇²u + f(u),  (1)

where u(t, x) = [u_1, u_2](t, x) and

f_1(u) = (u*_1 − u_2^M) [1 − tanh(u_1 − 3)] u_1²/2 − u_1,
f_2(u) = ε [β Θ_s(u_1 − 1) + Θ_s(u_2 − 1)(u_2 − 1) − u_2],  (2)

where Θ_s(u) = [1 + tanh(su)]/2, u_1 is the (fast) voltage variable, and u_2 is the (slow) gating variable. The parameter ε describes the ratio of the corresponding time scales, s is the smoothing parameter, and the diagonal matrix D of diffusion coefficients describes spatial coupling between neighboring cardiac cells (cardiomyocytes). The parameters of the model 25 are M = 4, ε = 0.01, s = 1.2571, β = 1.389, u*_1 = 1.5415, D_11 = 4.0062, and D_22 = 0.20031. This is the simplest model of cardiac tissue that develops sustained spiral wave chaos from an isolated spiral wave through the amplification of the alternans instability, resulting in conduction block and wave breaks and mirroring the transition from tachycardia to fibrillation.
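For concreteness, the kinetics and parameter values quoted above are easy to encode. The sketch below is a minimal Python transcription; the algebraic form of f follows our reading of the smoothed Karma model in Refs. 22-24 and should be checked against those references rather than taken as authoritative.

```python
import numpy as np

# Parameters of the smoothed Karma model as quoted in the text.
M, eps, s, beta = 4, 0.01, 1.2571, 1.389
u1_star = 1.5415
D11, D22 = 4.0062, 0.20031

def theta_s(u):
    """Smoothed step function Theta_s(u) = [1 + tanh(s*u)]/2."""
    return 0.5 * (1.0 + np.tanh(s * u))

def f(u1, u2):
    """Reaction kinetics f = (f1, f2); functional form assumed from Refs. 22-24."""
    f1 = (u1_star - u2**M) * (1.0 - np.tanh(u1 - 3.0)) * u1**2 / 2.0 - u1
    f2 = eps * (beta * theta_s(u1 - 1.0)
                + theta_s(u2 - 1.0) * (u2 - 1.0) - u2)
    return f1, f2
```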
The outline of the paper is as follows: Section II introduces the topological description of the complicated multi-spiral states. Section III discusses the relationship between tissue refractoriness and conduction block. The dynamical mechanisms that maintain spiral wave chaos are presented in Section IV. Section V discusses the statistical measures quantifying sustained dynamics, and Section VI contains the discussion of our results and conclusions.
II. WAVE ANATOMY
To quantitatively describe the topological changes such as wave breakup and creation/destruction of spiral cores we must first discuss the anatomy of excitation waves and define the appropriate terminology.
A. Wavefront and waveback
The region of excitation can be thought of as being bounded by the wavefront, which describes the fast depolarization of cardiac cells, and the waveback, which describes the typically slower repolarization. The most conventional definition of the wavefront and waveback, used in both experiment 15,26 and numerical simulations 17,27 , is based on a level set of the voltage variable, u_1(t, x) = ū_1. If we define the region of repolarization as

R = {x : ∂_t u_1(t, x) < 0},  (3)

then the wavefront/waveback is the part of the level set outside/inside R. The choice of the voltage threshold ū_1 is arbitrary and is typically taken as a percentage of the difference between the voltage maximum and its value in the rest state 17 . This very simple definition allows easy identification of the action potential duration (APD) and diastolic interval (DI), where the percentage is often used as a subscript that refers to the choice of the threshold (e.g., APD_80 corresponds to 80% of the difference).
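As a minimal illustration of this threshold-based definition, the sketch below (with placeholder field names) marks the grid cells crossed by the level set u_1 = ū_1 and splits them into wavefront and waveback using the sign of ∂_t u_1, i.e., whether the cell lies outside or inside R.

```python
import numpy as np

def front_back_masks(u1, du1_dt, u1_bar):
    """Split the level set u1 = u1_bar into wavefront and waveback.

    u1, du1_dt : 2D arrays of the voltage and its time derivative.
    Returns boolean masks marking grid cells crossed by the level set,
    classified by the sign of du1_dt (wavefront: depolarizing, outside R).
    """
    above = u1 > u1_bar
    # A cell is flagged if the sign of (u1 - u1_bar) flips between
    # horizontal or vertical neighbors, i.e., the level set passes through.
    crossing = np.zeros_like(above)
    crossing[:-1, :] |= above[:-1, :] != above[1:, :]
    crossing[:, :-1] |= above[:, :-1] != above[:, 1:]
    wavefront = crossing & (du1_dt > 0)   # outside R: voltage rising
    waveback = crossing & (du1_dt <= 0)   # inside R: voltage falling
    return wavefront, waveback
```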
Analytical studies tend to use a different definition 28-30 for the wavefront and waveback, which is based on scale separation between the dynamics of fast variables, such as voltage, and slow variables, such as potassium concentration (the voltage u_1 and the gating variable u_2, respectively, in the Karma model). For the simplest two-variable models (Karma, Barkley 31 , FitzHugh-Nagumo 32 , Rinzel-Keller 33 , etc.), in the limit ε → 0, the excitation waves are related to a limit cycle oscillation in the (u_1, u_2) plane governed by the system of coupled ordinary differential equations 30

du_1/dt = f_1(u),  du_2/dt = f_2(u).  (4)

The wavefront and waveback correspond to the segments of the limit cycle solution of (4) connecting the two stable branches of the u_1-nullcline f_1(u) = 0 for which du_2/du_1 < 0 (they are denoted with superscripts − and +, as illustrated in Fig. 1(a)).
In the limit ε → 0 these segments become horizontal lines and describe very fast (in time) variation of the voltage variable. In space, both the wavefront and the waveback have widths that scale as √ε. In the limit ε → 0 they become very sharp (green curves in Fig. 1(b)) and can be thought of as the boundaries of the region of excited tissue shown as the green shaded area in Fig. 1(b).

FIG. 1. Phase diagram for a generic two-variable model in the ε → 0, D → 0 limit. (a) The nullclines f_1(u) = 0 (blue), f_2(u) = 0 (red) and the limit cycle oscillation (solid green). The labels WF and WB denote the wavefront and waveback, respectively. The dashed green line corresponds to the "stall" value ū_2 which defines a stationary front. The excited region E is shaded, and the gray arrows show the local direction of the vector field f(u). (b) The wavefront (solid green line) and waveback (dashed green line), the level sets ∂_t u_1 = 0 (solid blue line) and ∂_t u_2 = 0 (dashed blue line), spiral wave tip (green circle) and phase singularity (blue circle). The arrows denote the direction of the normal velocity c.
These boundaries can be defined with equal precision using any curve in the (u_1, u_2) plane that bisects both the wavefront and the waveback segments of the limit cycle. If we define this curve as the zero level set of an indicator function g(u), the wavefront and the waveback in the physical space at a particular time t are given by the portions of the level set

{x : g(u(t, x)) = 0}  (7)

that lie outside and inside R, respectively. In particular, the level set of the voltage variable discussed previously corresponds to a vertical line in Fig. 1(a), g(u) = u_1 − ū_1. A less arbitrary and dynamically better justified choice g(u) = f_1^0(u) corresponds to the unstable branch of the u_1-nullcline (for which du_2/du_1 > 0). In order to generalize this choice to finite values of ε, the unstable branch has to be extended beyond its end points u_≶, where du_2/du_1 = 0, e.g., by continuing it smoothly past these points; we refer to the resulting indicator function as definition (8). The excited region where g(u) > 0 according to (8) is shaded green in Fig. 1(a). Yet another alternative suggested by Fig. 1(a) is to use the u_2-nullcline, g(u) = f_2(u), which also bisects both the wavefront and the waveback. As time evolves, the level set (7) moves with normal velocity

c = −(b · ∂_t u)/(b · ∂_n u),  (9)

where b = ∂g/∂u, ∂_n = n · ∇ is the directional derivative, and n is the outside normal to ∂E. This normal velocity is taken to be positive for the wavefront and negative for the waveback. In particular, for g(u) = u_1 − ū_1, (9) simplifies, yielding c = −∂_t u_1/∂_n u_1, so the sign of c corresponds to the sign of ∂_t u_1 for proper choices of ū_1 such that ∂_n u_1 < 0 over the entire level set. Whatever the choice of the bisecting curve g(u) = 0 in the (u_1, u_2) plane is, it may not define a continuous curve in the physical space at all times. Indeed, the PDE model (1) does not take the cellular structure of the tissue into account. Instead, a spatial discretization of (1) should be used, so that the field u becomes a discontinuous function of space. In this case the wavefront and waveback should instead be defined as the boundary ∂E of the region

E = {x : g(u(t, x)) > 0}  (10)

rather than the level set (7). There are several serious problems with the "local" definitions discussed above, which are based on the kinetics of isolated cardiac cells. For one, they ignore the coupling between neighboring cells in tissue (electrotonic effects) and hence cannot correctly describe the essential properties of excitability and refractoriness, making a quantitative description of conduction block impossible. Furthermore, since the value of ε is not vanishingly small for physiologically relevant models, the widths of both the wavefront and the waveback become finite as well, so different choices of g(u) can produce rather distinct results in the physical space. Additional complications will be discussed below.
The definitions of the wavefront and waveback can be generalized for a tissue model by noticing that the level sets such as f_1^0(u) = 0 or f_2(u) = 0, in the limit ε → 0, coincide with the level sets ∂_t u_1 = 0 and ∂_t u_2 = 0. These are special cases of the more general relation

a · ∂_t u = 0,  (11)

where a = (cos α, sin α) and 0 < α < π is a parameter that can be chosen to properly describe the refractoriness and excitability of the model for finite values of ε. The wavefront and waveback can again be distinguished as the parts of the level set that lie outside or inside R, respectively. We will set α = π/2 below, which yields the following definition:

∂E(t) = {x : ∂_t u_2(t, x) = 0}.  (12)

The level set (12) is shown as the dashed blue line in Fig. 1(b). As Fig. 2(a-b) illustrates, for the Karma model it gives extremely good agreement with the more conventional definition based on (8) for both the wavefront and the waveback. Variation in α by O(ε) has a very weak effect on the position of the wavefront (which is very sharp), but has a more pronounced effect on the position of the waveback (which is much broader).
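Numerically, the level set (12) requires nothing more than two consecutive snapshots of the slow variable. A sketch follows; the marching-squares refinement that would give sub-grid accuracy is omitted, and the wavefront/waveback split is obtained by masking with the sign of ∂_t u_1 as above.

```python
import numpy as np

def boundary_dE(u2_prev, u2_next, dt):
    """Approximate the level set (12), d(u2)/dt = 0, on a grid.

    Returns a boolean mask of cells where a finite-difference estimate
    of the time derivative of u2 changes sign across a neighboring cell.
    """
    du2_dt = (u2_next - u2_prev) / dt
    pos = du2_dt > 0
    edge = np.zeros_like(pos)
    edge[:-1, :] |= pos[:-1, :] != pos[1:, :]
    edge[:, :-1] |= pos[:, :-1] != pos[:, 1:]
    return edge
```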
B. Phase singularities
Typically (although certainly not always 34 ), the temporal frequency and wavelength of spiral waves are controlled by their central region, usually referred to as a spiral core or rotor 35 . This region is spatially extended and its size can be characterized using the adjoint eigenfunctions of the linearization 36,37 . In practice it is more convenient to deal with a single point that describes the location of the core region. In particular, the center of this region is associated with a phase singularity, where the amplitude of oscillation vanishes. The location of the phase singularity depends on the definition of the phase, however, and the proper definition is far from obvious for strongly nonlinear oscillations characteristic of excitable systems. The methods based on phase 38,39 or amplitude 22 reconstruction rely on the dynamics being nearly recurrent and break down for spatiotemporally chaotic states featuring frequent topologically nontrivial events such as the creation or annihilation of spiral cores.
A more conventional (and convenient) approach is to use instead the spiral tip, which is a point on the boundary ∂E of the excited region that separates the depolarization wavefront from the repolarization waveback (green circle in Fig. 1(b)). A number of different definitions of the spiral tip have been introduced in the literature. The most popular are the ones based on the level-set intersection (LSI) 40 , zero normal velocity (ZNV) 41 , or the curvature κ of the level set u_1(t, x) = ū_1 42 . In particular, ZNV and LSI define the spiral tip(s) x_p(t) as the intersection of two level sets, which is much easier to compute than the curvature. LSI can be thought of as the limiting case of ZNV where D_11 → 0, so the difference between the positions of spiral tips defined using these two methods provides a measure of the importance of electrotonic effects.
Albeit simple to define, spiral tips typically exhibit spurious dynamical effects. For instance, they move (in circular trajectories) for spiral wave solutions of (1) rigidly rotating around an origin x_* (also known as relative equilibria), which satisfy

∂_t u = −ω ∂_θ u,  (13)

where ∂_θ = ẑ · (x − x_*) × ∇ and ω = 2π/T is the angular frequency. Indeed, even if the normal velocity (9) of the spiral tip vanishes, its tangential velocity will not vanish, unless ū_1 = u_1(x_*). Hence, spiral tips are not ideally suited to be used as indicators of the wave dynamics. The phase singularity, unlike the spiral wave tip, should remain stationary for a rigidly rotating spiral wave. This requires that the location x_o(t) of every phase singularity satisfies

∂_t u(t, x_o) = 0.  (14)

Equivalently, x_o(t) correspond to the intersections of the level set

∂R(t) = {x : ∂_t u_1(t, x) = 0}  (15)

and ∂E defined according to (12), i.e., phase singularities are points on the boundary of the excited region that separate the refractory region from the excitable region. Note that, for ∂E defined by (11), its intersections with ∂R are independent of α, and so is the definition of the phase singularities. This is explicit in the definition

∂_t u_1(t, x_o) = ∂_t u_2(t, x_o) = 0.  (16)
It is easy to see that the boundaries ∂E and ∂R merely correspond to the different level sets φ = α ± π/2 and φ = ±π/2 of the phase field

φ(t, x) = arg(∂_t u_1 + i ∂_t u_2),  (18)

so x_o(t) indeed corresponds to a phase singularity of the phase field (18). Since they are defined locally (just like the spiral wave tips defined via LSI and ZNV), the phase singularities can be easily determined for arbitrarily complicated solutions. More generally, x_o(t) can be interpreted as the instantaneous center of rotation for slowly drifting spiral waves, i.e., those for which the rotation-averaged translation of the spiral wave core is much smaller than the typical propagation velocity c of excitation waves. The positions of spiral wave tips and phase singularities are compared in Fig. 2(c). For the LSI and ZNV definitions, the positions of the spiral tips are shown for 1.68 ≤ ū_1 ≤ 2.11, corresponding to the voltage threshold between APD_70 and APD_90, respectively 17 . Clearly, electrotonic effects are non-negligible for the present model, as the tip positions predicted by LSI are dramatically different from those defined by ZNV over a range of choices of ū_1. The tip positions defined by ZNV much more closely match the phase singularities defined by (16), and mostly differ in position in the direction normal to the wavefront (along the local gradient of u_1).
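A crude but serviceable detector of phase singularities follows directly from definition (16): flag every grid plaquette through which both level sets ∂_t u_1 = 0 and ∂_t u_2 = 0 pass. The sketch below uses this sign-change criterion; positions could be refined by bilinear interpolation within a flagged plaquette.

```python
import numpy as np

def phase_singularities(du1_dt, du2_dt):
    """Locate phase singularities as grid plaquettes crossed by both
    level sets d(u1)/dt = 0 and d(u2)/dt = 0, cf. definition (16)."""
    s1 = du1_dt > 0
    s2 = du2_dt > 0

    def crosses(s):
        # True for a 2x2 plaquette containing cells of both signs.
        any_true = s[:-1, :-1] | s[1:, :-1] | s[:-1, 1:] | s[1:, 1:]
        all_true = s[:-1, :-1] & s[1:, :-1] & s[:-1, 1:] & s[1:, 1:]
        return any_true & ~all_true

    return np.argwhere(crosses(s1) & crosses(s2))
```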
In conclusion of this section, we should mention that, for a typical multi-spiral solution, u_1 varies rather significantly across the spatial domain, while u_2 is restricted to a narrow range of O(ε) width around the value ū_2 that corresponds to the "stall solution" for a planar front connecting the two stable branches u_1^±(u_2) of the u_1-nullcline, where f_1(u_1^±, ū_2) = 0. The stall value is defined by the condition

∫ f_1(u_1, ū_2) du_1 = 0,  (19)

with the integral taken from u_1^−(ū_2) to u_1^+(ū_2) (cf. Fig. 1(a)). This is a characteristic value that corresponds to the phase singularities, as shown by Fife 43 . For the parameters used here 37 , ū_2 = 0.9724.
C. Topological description
We can associate a chirality (topological charge) q_j = ±1 with each of the phase singularities (enumerated by j = 1, 2, . . . ), which determines whether the spiral wave rotation is counter- or clockwise. Chirality can be defined locally 44 as the winding number of the phase field,

q_j = (1/2π) ∮ ∇φ · dl,  (20)

where the integral is taken along a small contour encircling x_o,j. For spatially discretized models it is more reliable to use a nonlocal definition of chirality. Let us define a neighborhood of each phase singularity x_o,j using a window function w_j(x) (21) localized around x_o,j, where r_j = |x − x_o,j| and d_j = min_{k≠j} |x_o,j − x_o,k| is the distance to the nearest distinct phase singularity. Further, let us define the pseudo-chirality q̃_j of each spiral wave as the value for which the functional

J(q̃_j) = ∫ w_j(x) |∂_t u + q̃_j ω ∂_θ u|² dx  (22)

is minimized. The functional J(q̃_j) defines a local reference frame rotating with angular velocity q̃_j ω around the phase singularity x_o,j; this functional is minimized for spiral waves that are stationary in that reference frame. For a single rigidly rotating spiral wave, the chirality is precisely ±1. Minimization of (22) for complex multi-spiral states produces pseudo-chirality values equal to ±1 within a few percent, such that we can safely define q_j = sign(q̃_j). In practice, this definition proves very robust when spiral cores are sufficiently well separated, i.e., when d_j exceeds the width of an isolated spiral core 25 . Since phase singularities by definition (16) lie on the level set ∂E, for periodic boundary conditions, wavefronts and wavebacks can only terminate at a spiral core (or, more precisely, a phase singularity). Conversely, in multi-spiral states, each wavefront and waveback is bounded by a pair of spiral cores of opposite chirality. In the modern electrophysiology literature wavelets are identified with wavefronts 13 . Consequently, the events when a wavelet is created (destroyed) are associated with an increase (decrease) in the number of spiral cores by two. Although the total number of spiral cores is not conserved, the total topological charge

Σ_j q_j = const  (23)

is conserved 45,46 . A number of topologically distinct processes which respect (23) are possible. Although some of these correspond to the time-reversed versions of the others, the dynamics of dissipative systems are not time-reversible and do not have to respect the symmetry between these processes. In fact, as we will see below, in excitable systems such as the Karma model the dominant topological processes increasing/decreasing the number of spiral cores are not related by time-reversal symmetry.
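The local chirality (20) can be evaluated as the winding of the phase field (18) around a plaquette flagged by the detector sketched earlier. In the snippet below, the loop orientation is a convention, so the overall sign should be fixed against a known test case.

```python
import numpy as np

def chirality(du1_dt, du2_dt, i, j):
    """Topological charge of plaquette (i, j) from the winding of the
    phase field phi = atan2(d_t u2, d_t u1), cf. Eqs. (18) and (20)."""
    phi = np.arctan2(du2_dt, du1_dt)
    # Walk around the 2x2 plaquette and back to the starting corner.
    loop = [phi[i, j], phi[i + 1, j], phi[i + 1, j + 1],
            phi[i, j + 1], phi[i, j]]
    winding = 0.0
    for a, b in zip(loop[:-1], loop[1:]):
        d = b - a
        # Wrap each phase increment into (-pi, pi].
        winding += (d + np.pi) % (2 * np.pi) - np.pi
    return int(round(winding / (2 * np.pi)))  # +1, -1, or 0
```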
III. CONDUCTION BLOCK AND TISSUE REFRACTORINESS
As we mentioned previously, conduction block plays a major role in wave breakup, which is essential for transition to fibrillation and spiral wave chaos in general. The origin of conduction block can be structural, i.e., related to tissue heterogeneity 47 , but it can also be dynamical, i.e., occur in homogeneous tissue as a result of an instability. For instance, conduction block can occur when a receding waveback is moving slower than the subsequent advancing wavefront 48 . In fact, there is a variety of other dynamical mechanisms leading to conduction block 17 .
Conduction block refers to the failure of an excitation front to propagate because the tissue ahead of it is refractory and cannot be excited. Refractoriness is traditionally 49 defined on the level of individual cardiac cells by quantifying whether a voltage perturbation applied to the quiescent state of the cell will trigger an action potential. These definitions are not particularly useful for understanding conduction block in tissue for two reasons [50][51][52] : First of all, in tissue the excitation wave is triggered by coupling between neighboring cells, rather than a voltage perturbation to an isolated cell. Second, in tissue, especially during AF or tachycardia, which are both characterized by very short DIs, the cells never have sufficient time to return to the rest state.
A. Low-curvature wavefront

In the Karma model, conduction block can arise as a result of the discordant alternans instability 53,54 , which leads to variation in the width and duration of action potentials. For excitation waves with low curvature, we can determine the boundary of the refractory region by considering a one-dimensional periodic pulse train. In the reference frame ξ = x − ct moving with the velocity c of the wavefront, the voltage variable satisfies the evolution equation

0 = D_11 u_1'' + c u_1' + f_1(u),  (24)

provided the pulse train does not change shape, where primes denote derivatives with respect to ξ. For sufficiently small DI, the conduction velocity c decreases monotonically with DI and vanishes identically at a finite DI 24 . This means that there are no propagating solutions below this value of DI. At the critical value of DI we have c = 0, so the wavefront fails to propagate when

D_11 u_1'' + f_1(u) = 0.  (25)

For plane waves in two dimensions, u_1'' = ∇²u_1, so combining this with the evolution equation (1) we find that the boundary of the refractory region is given by ∂_t u_1 = 0 and coincides with the boundary (15) of the repolarization region. Similarly, the refractory region can be identified with the region of repolarization (3). This makes intuitive sense: whatever the conditions are, the voltage increases outside the refractory region. Although derived for a very special case of one-dimensional periodic pulse trains, this definition of the refractory region works well even for states that are not time-periodic and feature excitation waves with significant curvature. This is illustrated in Fig. 3, which shows the time traces of the variables u_1(t, x_0) and u_2(t, x_0) for a spatiotemporally chaotic solution similar to that shown in Fig. 2. The point x_0 was chosen near the spatial location where conduction block occurs (such as the center of the marked region in Fig. 7 below). The excited and refractory intervals (temporal analogues of the excitable and refractory regions) are shown as red- and blue-shaded rectangles in Fig. 3; they are bounded by the level sets ∂R and ∂E in Fig. 7.

FIG. 3. Excited intervals are shaded red and refractory intervals are shaded blue. The first refractory interval ends at t ≈ 26.5, just before the next excited interval begins at t ≈ 29.5, leading to a short, small-amplitude action potential.
As expected, when the wavefronts are well separated from the trailing edges of the refractory intervals (e.g., at t ≈ 140), long, large-amplitude action potentials are found. This is in sharp contrast with the short, low-amplitude action potential that is initiated at t ≈ 29.5, soon after the previous refractory interval ends at t ≈ 26.5. As the location of conduction block is approached (not shown), the separation between the trailing edge of the refractory interval and the subsequent excitation wavefront vanishes, along with the action potential itself. This suggests that conduction block occurs when and where the level sets ∂R and ∂E first touch, in agreement with Winfree's critical point hypothesis 55 .
B. High-curvature wavefront
There are, however, other dynamical mechanisms that can lead to conduction block. Consider, for instance, the opposite situation when the curvature of the wavefront is high. For curved wavefronts the propagation speed c decreases as the curvature κ of the wavefront increases, so there is a critical value of the curvature at which the wave fails to propagate 56 . It can be estimated in the limit D_22 → 0 using the eikonal approximation 57 , which gives

c = c_0 − D_11 κ,  (26)

where c_0 is the velocity of a planar wavefront. Using the value c_0 = λ/T ≈ 1.44, which corresponds to a large rigidly rotating spiral wave 58 , we find κ⁻¹ = r_c ≈ 2.8, where r_c is the critical radius of curvature of the wavefront. A more accurate estimate for r_c can be obtained using the condition (25) for conduction block and the definition of the wavefront (12) rewritten via (1) in polar coordinates (r, θ):

∂_t u_1 = D_11 (∂_r² u_1 + r⁻¹ ∂_r u_1 + r⁻² ∂_θ² u_1) + f_1(u),  (27)
∂_t u_2 = D_22 (∂_r² u_2 + r⁻¹ ∂_r u_2 + r⁻² ∂_θ² u_2) + f_2(u).  (28)

For small ε, u_2 varies slowly in both time and space and can be considered a constant, u_2 = ū_2 given by (19). The time-derivative and diffusive terms of (28) can therefore be ignored, so (28) reduces to f_2(u) = 0. The third term D_11 r⁻² ∂_θ² u_1 in (27) can also be neglected, since u_1 varies much faster in the direction normal to the wavefront r = r_c than in the tangential direction. Since u_2 is constant, subject to boundary conditions ∂_r u_1 = 0 at r = 0 and r = ∞, (27) has rotationally symmetric solutions u_1 = u_1(r) with a stationary wavefront at the critical radius r_c given by

f_2(u_1(r_c), ū_2) = 0.  (29)

The stationary solution of (27) and (29) is shown in Fig. 4. It corresponds to the critical radius of curvature r_c ≈ 6, which is a factor of two larger than the value obtained using the eikonal approximation (26). This solution is a two-dimensional analogue of the one-dimensional "critical nucleus" for excitation 52 . Wavefronts with a radius of curvature larger than r_c propagate forward, while wavefronts with a radius of curvature smaller than r_c retract (i.e., become wavebacks).
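The numbers quoted here follow from (26) by simple arithmetic, which the snippet below reproduces:

```python
D11 = 4.0062   # voltage diffusion coefficient
c0 = 1.44      # planar-front speed quoted in the text

# Setting c = c0 - D11 * kappa = 0 gives the critical curvature radius.
r_c_eikonal = D11 / c0
print(r_c_eikonal)   # ~2.78, i.e., r_c ~ 2.8 as stated

# The boundary-value problem (27), (29) roughly doubles this estimate:
r_c_bvp = 6.0
print(2 * r_c_bvp)   # d_c = 2 r_c = 12, cf. Sect. IV C
```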
Before we discuss the numerical results, let us emphasize that, with the proper choice of variables, the definitions of the wavefronts and wavebacks (12), the leading and trailing edges of the refractory region (15), and the phase singularities (16) are model-independent and can be used to analyze both numerical and experimental data, provided that measurements of two independent variables (e.g., voltage and calcium) are available. Generalization of the topological description presented above to higher-dimensional models is discussed in the Appendix.
IV. NUMERICAL RESULTS
As mentioned in the introduction, during AF most spiral waves do not complete a full rotation. Spiral wave chaos in the Karma model produces qualitatively similar dynamics: topological changes involving a change in the number of spiral wave cores occur on the same time scale as the rotation. (For the parameters considered in this study the rotation period is T ≈ 51, which corresponds to 127 ms in dimensional units 58 .) The larger the spatial domain, the more frequent are the topological changes in the structure of the solution. However, as we discussed in Sect. II C, each topological event is essentially local and involves either the birth or the annihilation of a pair of spiral cores of opposite chirality.
Of the different types of topological events, spiral wave breakup, associated with the creation of a new pair of spiral cores, has received the lion's share of the attention due to its role in the initiation of fibrillation. However, the number of spiral cores cannot increase forever; eventually a dynamical equilibrium is reached in which the number of cores fluctuates about some average, with core creation balanced by core annihilation. To the best of our knowledge, however, the process(es) responsible for core annihilation have never been studied systematically. To investigate which of the topological events dominate and what the dynamical mechanisms underlying these events are, we performed a numerical study of the Karma model (1)-(2) on a square domain of side length L = 192 (5.03 cm), which is close to the minimal size required to support spiral wave chaos. Spatial derivatives were evaluated using a second-order finite-difference stencil and a fourth-order Runge-Kutta method was used for time integration 58 . To avoid spurious topological transitions involving a boundary, periodic boundary conditions were used, unless noted otherwise.
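A minimal transcription of this numerical scheme is sketched below, reusing the kinetics f and the diffusion coefficients from the earlier listing; the grid spacing h and time step dt are placeholders, not the values used in the study.

```python
import numpy as np

def laplacian(u, h):
    """Second-order five-point Laplacian with periodic boundary conditions."""
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / h**2

def rhs(u1, u2, h):
    f1, f2 = f(u1, u2)  # kinetics from the earlier sketch
    return D11 * laplacian(u1, h) + f1, D22 * laplacian(u2, h) + f2

def rk4_step(u1, u2, h, dt):
    """One classical fourth-order Runge-Kutta step for the pair (u1, u2)."""
    k1 = rhs(u1, u2, h)
    k2 = rhs(u1 + 0.5 * dt * k1[0], u2 + 0.5 * dt * k1[1], h)
    k3 = rhs(u1 + 0.5 * dt * k2[0], u2 + 0.5 * dt * k2[1], h)
    k4 = rhs(u1 + dt * k3[0], u2 + dt * k3[1], h)
    u1 = u1 + dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    u2 = u2 + dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return u1, u2
```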
Before identifying topological transitions in the numerical simulations, it is worth enumerating the topologically distinct local configurations. It will be convenient to use the following shorthand notations: ∂E⁺ (wavefront, ∂_t u_1 > 0), ∂E⁻ (waveback, ∂_t u_1 < 0), ∂R⁺ (leading edge of the refractory region, ∂_t u_2 > 0), and ∂R⁻ (trailing edge of the refractory region, ∂_t u_2 < 0). For a wave train in a region with no spiral cores, the boundaries of the refractory and excitable regions follow the periodic sequence (. . . , ∂R⁻, ∂E⁻, ∂R⁺, ∂E⁺, . . . ) in the co-moving frame (cf. Fig. 5(e)).
A. Virtual pairs
Deformation of (nearly planar) waves due to instability or heterogeneous refractoriness can lead to the intersection of any pair of adjacent level sets (e.g., ∂E⁺ and ∂R⁻) and, correspondingly, to the creation of a new pair of spiral cores. The transitions that correspond to a crossing of two level sets followed by the reverse transition (indicated by horizontal or vertical double gray arrows in Fig. 5) produce a "virtual" core pair that appears and quickly disappears, restoring the original topological structure. The number of spiral cores and wavelets before and after these transitions remains exactly the same, so while such events do occur rather frequently, they do not play a dynamically important role and can be safely ignored. As discussed below, the topological transitions identified with the arrows in Fig. 5 are all very fast; they occur on a time scale much shorter than the typical rotation period of a spiral. The much slower transitions between panels 5(a) → 5(b) → 5(c) → 5(f) → 5(i) → 5(h) → 5(g) → 5(d) → · · · associated with figure-8 reentry for two well-separated counter-rotating spirals are not shown for clarity. While the topology of some level sets (either ∂E or ∂R) changes during these transitions, neither the topological charge nor the number of phase singularities does, so these are not proper topological transitions, as defined in Section II C. In contrast, the topology of ∂E and ∂R may not change during the proper topological transitions.
The trajectories of the two phase singularities and the distance between them in a representative example of a virtual pair are shown in Fig. 6. The phase singularities do not move far from their initial positions and remain close at all times. In fact, the distance between them never exceeds a fraction of the typical separation d_0 between persistent spiral cores (to be discussed in more detail in Sect. V). Since the level sets are smooth curves, their intersections that define the positions of the cores move with infinite velocities at the time instants when the cores are created and destroyed. As a result, their motion near those times cannot be resolved using time stepping, explaining the gaps at the beginning and the end of the trajectories in Fig. 6(a). Fig. 5(f) deserves a special mention. It corresponds to the phenomenon of "back-ignition" observed in some reaction-diffusion models, whereby the waveback can become a source of a new backward propagating wave under appropriate conditions. While topologically permissible, this configuration is only observed as a very short transient in the present model, reflecting the asymmetry of initial conditions imposed by the dynamics. The relative likelihood of the four transient configurations shown in Figs. 5(b), 5(d), 5(f), and 5(h) can in principle be computed using stability analysis of a planar wave train solution for any tissue model, but this is outside the scope of the present study.
B. Wavelet/pair creation
Next consider the transitions from the four intermediate configurations shown in Figs. 5(b), 5(d), 5(f), and 5(h) to configurations other than the initial one shown in Fig. 5(e). For each of these four transient configurations there are four distinct possibilities. Two possibilities are shown in Fig. 5 (the other two will be discussed in the next Section): either of the two level set fragments connecting the cores can reconnect with the neighboring level set of the same type. For instance, the configuration shown in Fig. 5(h) can transform into the configurations shown in Fig. 5(g) or 5(i). Note that the transition between the configurations shown in Fig. 5(e) and 5(a) corresponds to wave breakup. It occurs when and where the wavefront reconnects with the waveback of the same excitation wave 55,59 as a result of conduction block. Since this topological process increases the number of disconnected excited regions, it is quite natural to find that it plays an important role in the transition from, say, normal rhythm or tachycardia (featuring a single excitation wave) to AF (featuring many separate wavelets). While wave breakup may be prevalent during the initial stage when AF is being established, we have not found a single instance of this topological event in our numerical simulations of sustained spiral wave chaos, casting serious doubt on the premise that wave breakup plays a dynamically important role in maintaining AF, in this model or in tissue.
Our numerical simulations reveal only one topological process that leads to a lasting increase in the complexity of the pattern. This process, which we call "wave coalescence", corresponds to the transition from the initial configuration shown in Fig. 5(e) to the final configuration shown in Fig. 5(i), either directly or through the intermediate transient configurations shown in Figs. 5(f) and 5(h). A representative example from the simulations is shown in Fig. 7, where a purple rectangle marks the region of interest. Outside of this region the wavefronts are well-separated from the refractory tails, but inside the separation is markedly smaller (cf. Fig. 7(a)). The separation quickly decreases (cf. Fig. 7(b)) until the level sets ∂E⁻ and ∂R⁻ cross and two new spiral cores with opposite chirality are created (cf. Fig. 7(c)). Immediately after this the two parts of the level set ∂E⁺ reconnect, bringing the configuration to the topological state shown in Fig. 5(i). The cores separate (cf. Fig. 7(c)) and the excited regions of two subsequent waves coalesce in the gap flanked by these two cores.
Due to the high curvature of ∂E the two new cores are quickly pulled apart, and two new counter-rotating spiral waves emerge, "locking in" the resulting topological configuration. This is illustrated by Fig. 8, which shows the trajectories of the cores and the distance between them. It is worth noting that, before the spiral waves complete even half a revolution, the separation between the cores approaches the typical equilibrium distance d_0 37 .
Our numerical simulations did not produce any examples of topological transitions to the configurations shown in Figs. 5(c) or 5(g). In the horizontal band bounded by the cores (indicated by lighter-shade gray), the corresponding states are characterized by a voltage variable that changes slowly in space, since the distance d between the minimum of u_1 (dashed white line ∂R⁻) and the maximum of u_1 (solid white line ∂R⁺) is extremely large. Hence, the term D_11 ∇²u_1 ∝ d⁻² in (1) is negligible. Since D_22 is small, we can ignore the diffusive term D_22 ∇²u_2 as well and consider all cells in this region to be spatially decoupled, such that their dynamics is described well by (4) and the phase diagram shown in Fig. 1(a). Consider the part of the band where u_1 is slowly and monotonically increasing in time (to the left of ∂R⁺ in Fig. 5(c)) or decreasing in time (to the left of ∂R⁻ in Fig. 5(g)). The cells in this region should be in a state that lies close to one of the two stable branches of the u_1-nullcline (f_1⁻ = 0 in the former case and f_1⁺ = 0 in the latter case). According to Fig. 1(a), this entire region should lie either to the left or to the right of the u_2-nullcline, so ∂_t u_2 should be sign-definite, while in both Fig. 5(c) and 5(g) the sign of ∂_t u_2 changes (when the level set ∂E connecting the two phase singularities is crossed). Hence, while these configurations are not forbidden on topological grounds, they are forbidden dynamically.

FIG. 5 (caption fragment). While these transitions are allowed topologically, they appear to be forbidden dynamically. The only dynamically allowed (direct or indirect) transition irreversibly transforms the configuration with no spiral cores (Fig. 5(e)) to the configuration with two spiral cores (Fig. 5(i)), increasing the total number of cores by two and the number of wavelets by one.
C. Wavelet/pair destruction
Finally, let us consider the topological transitions from the four intermediate configurations that were not considered in the previous sections. We have redrawn these four configurations in Fig. 9 in the same locations as in Fig. 5, dropping all non-essential level sets. For each of these intermediate configurations there are two possibilities, which involve reconnection between the two extended branches of a level set that terminate at the cores. For instance, the configuration shown in Fig. 9(h) can transform into the configurations shown in Figs. 9(g) or 9(i). None of these transitions (shown as horizontal or vertical gray lines), in either the forward or the reverse direction, has been observed in numerical simulations, however.
As we discussed previously, if the crossing and reconnection of the level sets occur simultaneously, the configuration transitions directly between the persistent configuration with no level set intersections (Fig. 9(e)) and one of the persistent configurations with a pair of intersections shown in Figs. 9(a), 9(c), 9(g), and 9(i), without passing through any of the intermediate configurations.
The dynamically allowed direct transitions observed in the simulations are shown as diagonal gray arrows. Note that again there is no time-reversal symmetry: only the transitions that destroy existing core pairs are dynamically allowed. Therefore, the observed direct transitions shown in Fig. 9 reduce the net number of spiral cores and wavelets, balancing the increase due to wave coalescence. The configurations shown in Figs. 9(a), 9(c), 9(g), and 9(i) all describe a pair of counter-rotating spiral waves. In particular, the configurations in Figs. 9(c) and 9(g) correspond to multi-spiral states (wavefronts and/or wavebacks connect spiral cores inside and outside the region shown) and hence are quite typical. On the other hand, the configurations in Figs. 9(a) and 9(i) correspond to configurations with a single pair of spirals (wavefronts and wavebacks connect the phase singularities inside the region shown) and are never observed during sustained spiral wave chaos. Consequently, only transitions from the configurations in Figs. 9(c) and 9(g) are found in the simulations, with the vast majority of transitions involving the former configuration.
To understand why and when this transition happens, consider the interaction between a pair of isolated counter-rotating spiral waves separated by distance d.
(The interaction is short-range, so the presence of other, remote spiral wave cores does not change the outcome.) Using the approximate mirror symmetry of the configuration, the dynamics can be understood by considering a single spiral interacting with a planar no-flux boundary at a distance ζ = d/2. As we showed previously 37,58 , at large separations the spiral cores can be considered essentially non-interacting, while at smaller separations the equilibrium distance d becomes quantized, with the smallest stable separation 37 equal to d_0 = 2ζ_0 ≈ 40 (10.4 mm in dimensional units) for the values of the parameters considered in this study. For separations below some critical distance d_c < d_0, the cores attract each other, eventually colliding and destroying both spiral waves. As the cores approach each other, the wavefront confined between them collapses, so we will refer to this process as "wavelet collapse" or "wave collapse". The details of wave collapse depend on the relation between the initial phase of the spiral waves and the separation between their cores. A very typical example of wave collapse is shown in Fig. 10. In this particular example we find that the curvature of the wavefront becomes quite large before collapse. The curvature at which this happens can be related to the mechanism of conduction block discussed in Sect. III B. Since the cores are moving relatively slowly prior to wave collapse (cf. Fig. 10(a)), the curvature of the wavefront gradually increases as it propagates (cf. Fig. 10(b)). The largest value of the curvature is related to the distance between the cores, κ⁻¹ ≈ d/2. Once the curvature becomes comparable to the inverse of the critical radius r_c ≈ 6, the wave stops propagating, the cores slide towards each other along the wavefront and annihilate (cf. Fig. 10(c)), the wavebacks merge, and the wave starts to retract (cf. Fig. 10(d)).
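The wavefront curvature invoked in this argument can be measured directly from the voltage field as the divergence of the unit normal of its level sets, κ = ∇ · (∇u_1/|∇u_1|); a finite-difference sketch:

```python
import numpy as np

def level_set_curvature(u1, h, tiny=1e-12):
    """Curvature kappa = div(grad u1 / |grad u1|) of the u1 level sets."""
    gx, gy = np.gradient(u1, h)
    norm = np.sqrt(gx**2 + gy**2) + tiny  # regularize flat regions
    nx, ny = gx / norm, gy / norm
    return np.gradient(nx, h, axis=0) + np.gradient(ny, h, axis=1)
```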
This picture predicts that the minimal distance at which spiral cores with opposite chirality can persist without annihilating each other is given by d_c = 2r_c = 12 (3.1 mm in dimensional units). This value is in good agreement with the critical isthmus width (2.5 mm) found for conduction block in isolated sheets of ventricular epicardial muscle with an expanding geometry 56 . Our numerical simulations show that the minimal distance is d_c = 16 (4.2 mm), also close to the predicted value. The core trajectories and the distance between them in the example from Fig. 10 are shown in Fig. 11. The initial distance in this case was d = 18 (4.7 mm), illustrating that, under appropriate conditions, wave collapse can also occur for core separations somewhat larger than d_c (but still less than d_0). The transition between the configurations shown in Figs. 9(g) and 9(e) corresponds to a merger between two wavefronts that were originally separated by a waveback. Hence, we shall refer to this topological transition as a "wave merger" event. Wave mergers, however, are extremely rare, so the reduction in the total number of cores and wavelets is due almost entirely to wave collapse events. This is similar to the dynamical asymmetry between the wave breakup and wave coalescence events. Therefore, dynamical equilibrium in sustained spiral wave chaos can be understood, at least in the Karma model, as a balance between wave coalescence and wave collapse.
V. DYNAMICAL EQUILIBRIUM
Although the topological description itself is not quantitative, it helps identify the key dynamical mechanisms, such as wave coalescence and wave collapse, responsible for maintaining AF. This should, in turn, enable a quantitative description of the dynamics in general and of dynamical equilibrium in particular, and provide answers to open questions that have been debated for a long time. For instance, it is presently not well understood what the minimal tissue size that can sustain AF is, or what the minimal number of wavelets in sustained AF is. The leading-circle concept 5 suggests that the number of wavelets that the atria can accommodate should be related to the wavelength. Moe's computer model 7 predicted that between 23 and 40 wavelets are necessary for the maintenance of AF, while Allessie 8 places the minimal number of wavelets between four and six.
These hypotheses can be easily tested in the context of the Karma model. Let us start by determining whether the wavelength (λ = 78 for the values of parameters considered here) is a relevant length scale. The size (diameter) of a reentry circle with perimeter equal to the wavelength is d = λ/π ≈ 25, which is larger than the minimal separation d_c between persistent spiral wave cores, but considerably smaller than the minimal stable separation d_0 between the cores.
To show that d 0 is the relevant length scale, we computed the probability density function P (d) for core-core separation on a square domain of side L = 192 (this is the smallest domain with no-flux boundary conditions that supports sustained spiral wave chaos). For each time t and each core j we computed the distance d j to the nearest core (cf. Sect. II C), then averaged over j and t. The resulting distribution, for both no-flux and periodic boundary conditions, is shown in Fig. 12. In both cases we find that the distribution P (d) is rather narrow, with the maximum achieved at d = d 0 .
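The statistics described here reduce to a nearest-neighbor computation over the detected core positions; a sketch, with the minimum-image convention handling the periodic case:

```python
import numpy as np

def nearest_core_distances(cores, L=None):
    """Nearest-neighbor distance d_j for each core position (N x 2 array).

    If L is given, distances use the minimum-image convention
    appropriate for a periodic L x L domain.
    """
    diff = cores[:, None, :] - cores[None, :, :]
    if L is not None:
        diff -= L * np.round(diff / L)
    d = np.sqrt((diff**2).sum(-1))
    np.fill_diagonal(d, np.inf)  # exclude self-distances
    return d.min(axis=1)

# P(d): histogram the pooled distances over all snapshots, e.g.,
# all_d = np.concatenate([nearest_core_distances(c, L=192) for c in snapshots])
# pdf, edges = np.histogram(all_d, bins=40, density=True)
```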
The effect of the boundary conditions on the shape of the distribution is somewhat subtle: on a bi-periodic domain, the probability of large core separations (d = O(L)) is decreased compared with a domain of the same size with no-flux boundary conditions. Effectively, as there must always be a chirally-matched pair on the periodic domain, the furthest these cores may be is d_max = L/√2, as opposed to an isolated spiral matched with its mirror image across the no-flux boundary, which corresponds to the maximal distance d_max = √2 L. Thus, on a periodic domain, the maximal accessible distance is precisely half the maximal distance available on a no-flux domain of the same size.
The upper bound for the number of spiral cores can be estimated as the ratio of the total area of the domain (i.e., L²) to the area of the smallest tiles 22 supporting one persistent spiral wave (i.e., d_0²), that is, n̄_c < L²/d_0² ≈ 23 (in fact, we should have n̄_c ≤ 22, since the net topological charge is zero). In reality the tiles tend not to be squarish and have a larger area on average, giving a lower average number of spiral cores, n̄_c = 10, as the probability distribution function P(n_c) illustrates (cf. Fig. 13). The number n_w of separate wavelets is exactly half the number n_c of cores (on a domain with periodic boundary conditions), so on average n̄_w = n̄_c/2 ≈ 5, in perfect agreement with the results of Allessie 8 . The number of cores exhibits considerable fluctuation (between 4 and 16); correspondingly, the number of wavelets varies between 2 and 8. The likelihood of these extreme values is, however, rather small (an order of magnitude smaller than that corresponding to the average value).
The observation that the minimal number of wavelets is just two is a sign that the dynamics are on the border of spontaneous collapse of spiral wave chaos (recall that our domain is just large enough to sustain this regime). We should have P(0) = 0, because once all the spiral cores disappear, so does the mechanism of reentry (at least in our homogeneous model), resulting in a transition to the rest state or, in the presence of pacing, normal rhythm. The smallest number of spiral cores required for reentry (in a domain with periodic boundary conditions) is two, so one could, in principle, expect P(2) to be nonzero. However, as our results show, the mechanism that sustains spiral wave chaos is wave coalescence, which requires at least two wavelets, and therefore at least four spiral cores, to be present.
VI. DISCUSSION
This paper presents a general topological approach for studying spiral wave chaos in two-dimensional excitable media. It is illustrated using the Karma model which, in a certain parameter regime, produces dynamics that are remarkably similar to those observed during atrial fibrillation. Therefore, our results could shed new light on this important and complicated phenomenon.
The confusing and often contradictory results regarding the dynamical origins of AF reported in experimental and numerical studies are to some extent due to the complexity of the patterns of excitation. The descriptive language and intuition developed primarily in the context of simple structures, such as plane or spiral waves, often fail us when applied to states that are topologically complicated and nonstationary. To give a few examples, the mental picture of a spatially localized excitation wave, or wavelet, that is bounded by a wavefront and a waveback falls apart when applied to complex multi-spiral patterns, since the boundary of one excited region is often composed of multiple wavefronts and wavebacks, as Figs. 7 and 10 illustrate. As a result, the number of excited regions almost never corresponds to the number of wavelets. Neither is the notion of a spiral wave immediately useful for describing such complicated patterns, which only resemble spiral waves in small neighborhoods of spiral cores. Similarly, a reduction of complicated field configurations to the number and positions of phase singularities is also problematic, both because they appear, move, and disappear during sustained spiral chaos and because identifying them using existing approaches, such as phase mapping 11,12 , is notoriously unreliable when the data is noisy.
This paper aims to rectify some of these difficulties by introducing a topological description that can rigorously and easily identify the dynamically important elements of the excitation patterns (wavefronts, wavebacks, phase singularities, etc.) without modeling assumptions and in a manner that can be implemented in both simulations and experiments. By defining the phase singularities as intersections of level sets of an appropriately defined phase field, this topological description directly connects the dynamics of excitation waves and phase singularities; it can be used not only to quantify and classify the excitation patterns, but also to identify the dynamical mechanisms that lead to qualitative changes in the pattern. In particular, we show that the qualitative changes can be conveniently described and classified based on the dynamics of spiral cores, which are created or destroyed in pairs, leading to an increase or decrease in the number of wavelets, with a one-to-one correspondence between the number of cores and wavelets.
The topological description also allowed us to identify the dominant dynamical mechanisms responsible for maintaining AF in a model of atrial tissue. In particular, it allowed us to make a major discovery with implications that, in all likelihood, go far beyond the simple model considered here. We found that wave breakup due to conduction block, which is widely believed to be the key mechanism responsible for maintaining AF, plays no role whatsoever in sustaining this regime. While wave breakup does play a key role in the transition to AF, it is a dynamically and topologically distinct event, wave coalescence, that is responsible for maintaining AF. Wave coalescence, which leads to an increase in the number of spiral cores and wavelets, is balanced by wave collapse, which decreases the number of spiral cores and wavelets. It is this delicate balance that is responsible for maintaining the complexity of the pattern and of the dynamics, and it is this balance that controls whether AF persists or terminates.
Past studies of the dynamical origins and control of AF tended to focus solely on the mechanism(s) that lead to an increase in the complexity of the pattern. Indeed, suppressing the processes that generate new spiral cores and new wavelets is one way to terminate or prevent AF. However, enhancing the processes that destroy the spiral cores and wavelets could be just as effective. Therefore, both wave coalescence and wave collapse are attractive targets for electrical, surgical, and pharmacological approaches to the treatment of AF. While this study has not focused on the interaction of excitation waves with no-flux boundaries, the methods and approaches presented here are applicable to this situation as well. Hence, topological analysis could be quite helpful for improving the treatment of chronic AF using surgical procedures, such as ablation, that effectively introduce additional boundaries.
In conclusion, we should point out that our results raise new questions regarding the role of conduction block in maintaining AF. While conduction block undoubtedly plays a crucial role in wave collapse, it is not at all clear that it is relevant in wave coalescence. Therefore, quite paradoxically, we find that conduction block plays a more important role in decreasing the complexity of the excitation pattern than in increasing its complexity. Further studies are needed in order to fully understand the dynamical mechanisms behind wave coalescence, wave collapse, and possibly other topologically allowed events important in maintaining AF using more detailed and physiologically accurate models of atrial tissue.
"year": 2017,
"sha1": "049628e255fad32a220fb140ca1bbc5af8d2b277",
"oa_license": null,
"oa_url": "https://ore.exeter.ac.uk/repository/bitstream/10871/29806/2/part_one.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "7f867410ced5f4a42610cd96d8d5f63692f70b99",
"s2fieldsofstudy": [
"Medicine",
"Physics"
],
"extfieldsofstudy": [
"Biology",
"Medicine",
"Computer Science",
"Physics"
]
} |
Prader-Willi syndrome (PW) is a rare genetic disorder with multi-organ system involvement. These patients present many perioperative challenges including sleep-related breathing disorders, morbid obesity, thick salivary secretions, mental retardation, and difficult intravenous access. PW has been suggested to be associated with central adrenal insufficiency. We report a novel case of persistent severe hypotension from previously undiagnosed and asymptomatic adrenal insufficiency in a pediatric patient with Prader-Willi syndrome during spine surgery that resolved upon treatment with hydrocortisone.
Introduction
Prader-Willi syndrome is a genetic condition characterized by neonatal hypotonia, mental retardation, developmental delay, kyphoscoliosis, hyperphagia resulting in obesity, short stature, hypogonadism, hypothalamic dysfunction, and characteristic facial appearance [1]. Its genetic basis is linked to a 15q11-13 deletion, which is inherited from the paternal chromosome in the majority of cases [1,2]. The diagnosis is suspected based on clinical findings and confirmed with genetic testing. These patients frequently present to the operating room for spine and other orthopedic procedures [3,4]. We describe a case of adrenal insufficiency in a patient with Prader-Willi syndrome undergoing spine surgery that resulted in persistent intraoperative hypotension requiring glucocorticoid replacement.
Case Report
A 16 year-old male with Prader-Willi syndrome weighing 109 kg (body mass index 41 kg/m 2 ) presented for T1-L2 posterior instrumented spinal fusion due to progressive severe kyphoscoliosis (Fig. 1). His past medical history was significant for hypogonadism and sleep apnea that resolved after undergoing tonsillectomy and adenoidectomy. The patient was not taking any medications prior to surgery. Neurologic examination revealed normal strength and sensation in the upper and lower extremities. Preoperative pulmonary function testing revealed mild obstruction with forced vital capacity 4.0 L (88% predicted), forced expiratory volume in one second 3.0 L (77% predicted), and forced expiratory volume in one second to forced vital capacity ratio 75%. Prior to surgery, a pediatric endocrinology consult was obtained, and 100 mg of hydrocortisone was recommended upon induction of anesthesia, given the potential association of Prader-Willi with adrenal insufficiency. The patient previously had not been diagnosed with or had symptoms related to adrenal insufficiency. Preoperative hemoglobin was 12.6 g/dL.
General anesthesia was induced with intravenous fentanyl (250 μg), lidocaine (100 mg), propofol (100 mg), and succinylcholine (100 mg), and the trachea was easily intubated with a 7 mm endotracheal tube. Two large-bore peripheral IVs (18 g and 16 g), a left radial arterial line, and a right internal jugular triple-lumen 7 French central line were placed. Anesthesia was maintained with isoflurane, fentanyl, and hydromorphone. Muscle relaxant was not utilized, as motor evoked potentials, somatosensory evoked potentials, and electromyography were monitored. On induction, hydrocortisone 100 mg was administered intravenously, and the patient was positioned prone for the procedure.
Approximately 7 hours into the surgical procedure, the patient developed persistent hypotension (systolic pressures around 80 mmHg and diastolic pressures around 50 mmHg). Phenylephrine (800 μg), calcium chloride (300 mg), 4 L of crystalloid, 500 mL of 5% albumin, 1 unit of packed red blood cells, 300 mL of cell saver blood, and 300 mL of fresh frozen plasma were administered without resolution of the hypotension. The intraoperative nadir hemoglobin was 9.8 g/dL. With persistent refractory hypotension despite attempted volume resuscitation and multiple vasopressors, additional glucocorticoid supplementation was administered in the form of 50 mg of intravenous hydrocortisone at hour 9 of surgery. Resolution of the hypotension was promptly noted at approximately the 10th hour of surgery. Estimated blood loss for the procedure was approximately 3000 mL, and urine output was 4569 mL.
Postoperatively, the patient remained intubated and was transferred to the intensive care unit for further monitoring. On day 8 he was successfully discharged home from the hospital.
Discussion
Initially described in 1956, Prader-Willi syndrome has an estimated incidence of 1:10,000 -1:52,000 and a reported male predominance [4][5][6][7][8]. Neonatal hypotonia, poor feeding requiring special assistance, and growth restriction are hallmarks of Prader-Willi syndrome in the affected neonate and infant [1,3,9]. Characteristic facial appearance consists of infantile dolichocephaly (disproportionately longer and narrower head size), narrow face, small mouth with downturned corners, thin superior lip, and almond-shaped eyes [1]. Hypogonadism, developmental delay, and mental retardation may be present. After 12 months of age, the infantile feeding difficulties change to hyperphagia, weight gain, and obsession with food, resulting in obesity if not regulated. Additional features may include non-insulin dependent diabetes mellitus, extremely viscous saliva, sleep-related breathing disorders, and infantile temperature instability [1,6,10]. The death rate has been estimated at 3% per year, with commonly reported mortality causes consisting of respiratory failure, cor pulmonale, and infantile aspiration [7,8,11]. Consensus diagnostic scoring criteria exist to guide the clinician in obtaining appropriate genetic confirmation of the diagnosis or diagnosing the syndrome if genetic testing is not available [1,9].
One case series of pediatric patients with this syndrome reported spinal deformity in 71% of patients [4]. In addition to surgery of the spine, patients with Prader-Willi may present for orchidopexy, dental procedures, and tonsillectomy, making a thorough understanding of the multiorgan involvement essential to successful management [3,12].
Prader-Willi syndrome presents unique challenges to physicians in the perioperative period. Infantile hypotonia may render spontaneous respirations inadequate, necessitating mechanical ventilation [3]. Extrinsic restrictive pulmonary disease from spinal abnormalities can lead to difficulty with mechanical ventilation, and this, coupled with sleep-related breathing disorders, makes the patient more likely to have apnea and hypoventilation postoperatively and with preoperative sedation [3,4,12]. Both central and obstructive sleep apnea have been noted to occur with Prader-Willi syndrome even in the absence of obesity [1,3,12,13]. Antisialagogues such as atropine, glycopyrrolate, and scopolamine are not advised given the thick oral secretions in these patients [3]. Morbid obesity may complicate intravenous access and anesthetic management; however, airway difficulty has not been noted [3,14]. Previous case reports and small case series have described the anesthetic management of these patients, but none have reported intraoperative hypotension refractory to volume resuscitation and vasopressors that was successfully treated with glucocorticoids [3,12].
Central adrenal insufficiency has been noted to affect as many as 60% of Prader-Willi patients when stressed [15]. Interestingly, these patients have normal cortisol levels in the absence of stress. Several authors have hypothesized that unrecognized central adrenal insufficiency may explain the high annual death rate in patients with this syndrome [14,15]. Even in patients without Prader-Willi syndrome who may have iatrogenic adrenal insufficiency from exogenous glucocorticoids, controversy exists as to whether perioperative steroids are necessary [16,17]. Nonetheless, case reports exist of patients who, despite receiving the same stress-dose steroid regimen at induction that our patient received, required additional intraoperative glucocorticoids for acute Addisonian crisis manifested by severe refractory hypotension [18]. We hypothesize that, with the continual stress of major spine surgery, our patient developed adrenal insufficiency that was undertreated by the initial dose of hydrocortisone given seven hours prior to the onset of severe hypotension. The refractoriness of his hypotension to repletion of intravascular volume deficits, additional increases in preload, and multiple vasopressors, combined with the resolution of the hypotension after additional hydrocortisone, makes adrenal insufficiency the likely etiology.
In conclusion, Prader-Willi patients present many perioperative challenges to anesthesiologists and surgeons. This case illustrates the importance of understanding the multiorgan clinical features of the syndrome and recognizing the possibility of adrenal insufficiency contributing to severe hypotension during periods of stress such as with surgery.
For major surgery, glucocorticoids should be considered as prophylaxis or, at the very least, be readily available should refractory hypotension or other signs of adrenal insufficiency develop. | 2016-05-04T20:20:58.661Z | 2012-09-11T00:00:00.000 | {
"year": 2012,
"sha1": "634924d7baf84ace0b76115ca5b731056dceda73",
"oa_license": "CCBY",
"oa_url": "http://www.jocmr.org/index.php/JOCMR/article/download/1039/533",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "634924d7baf84ace0b76115ca5b731056dceda73",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119105372 | pes2o/s2orc | v3-fos-license | Resonances in QCD
We report on the EMMI Rapid Reaction Task Force meeting 'Resonances in QCD', which took place at GSI October 12-14, 2015. A group of 26 people met to discuss the physics of resonances in QCD. The aim of the meeting was defined by the following three key questions: What is needed to understand the physics of resonances in QCD? Where does QCD lead us to expect resonances with exotic quantum numbers? What experimental efforts are required to arrive at a coherent picture? For light mesons and baryons only those with up, down and strange quark content were considered. For heavy-light and heavy-heavy meson systems, those with charm quarks were the focus. This document summarizes the discussions by the participants, which in turn led to the coherent conclusions we present here.
Introduction
After many decades of theoretical and experimental work, low energy QCD remains the most challenging frontier in the physics of the Standard Model. The Lagrangian of QCD is simple, yet from its strong coupling dynamics emerge all the phenomena of the nuclear and hadron world. How colour confinement, chiral symmetry breaking, the properties of the hadron spectrum, and inter-nucleon forces result from the strong dynamics encoded in QCD defines the low energy frontier.
A most striking phenomenon of QCD is the formation of the constituents of ordinary baryonic matter, the proton and the neutron, out of massless gluons and almost massless quarks. To gain insight into this complex dynamics the baryonic excitation spectrum has been studied for many decades. Significant progress has been achieved with the recent availability of new polarization data, with polarized beams and/or polarized targets. While there is an increasingly rich collection of empirical data on photon-nucleon reactions, the world data on the key pion-nucleon and kaon-nucleon reactions are more limited. Each provides complementary information on the baryon resonance spectrum of QCD. Even less exploited are antiproton-nucleon, proton-proton and e+e− reactions for the study of baryon resonances.
Unfortunately the available empirical data are not matched by a deep theoretical understanding of such reactions from first principles QCD. Though QCD lattice simulations have progressed significantly over the last decade, reducing statistical uncertainties and employing robust techniques for spin identification [1], a calculation of the physical excited baryon spectrum is still a tough challenge with present computing power.
The key question is what are the relevant degrees of freedom for the resonance physics of QCD. Are the so-called constituent quarks, quasi-particles whose mass is a consequence of the spontaneously broken chiral symmetry of QCD, the most efficient way to describe reaction amplitudes and the excitation spectrum of QCD with light quarks? Though the simple quark model has had many successes, it is more and more challenged by the steadily increasing empirical data set. Examples of unconventional states are the light scalar states, the Λ(1405), the N*(1440), the D*s0(2317)+, the XYZ states and more. To what extent are diquark correlations, gluonic modes or hadronic degrees of freedom important in this physics? An extreme point of view would be the hadrogenesis conjecture, which attempts to describe the spectrum in terms of a selected set of hadronic degrees of freedom, very much like nuclear structure physics, which is very successfully described in terms of just the proton and neutron as the effective degrees of freedom.
For long it had been thought that states with hidden heavy flavour were simpler. Both charmonium and bottomonium states could be described as qq̄ systems (with q = c or b) bound by a confining potential. Effective Field Theories have allowed us to obtain such potentials, static and with relativistic corrections, directly from QCD and to calculate them at high order in the perturbative expansion for small quark-antiquark distances or on the lattice in the large-distance region [2]. In this way the properties of the lowest quarkonium resonances are understood in terms of QCD [3]. This picture, however, is valid only away from the strong decay threshold, where new degrees of freedom enter and a different picture arises. In fact, new data on charmonium-like systems have for the first time unambiguously established the existence of exotic resonant states that involve a minimal configuration of four quarks. The qq̄ potential model is too simple: more degrees of freedom must be effective. This should have important implications for the resonance physics of up, down and strange quarks too. It was the purpose of the EMMI task force to scrutinize such consequences for selected sectors of QCD. The main mission was to work on the following three questions:
• What is needed to understand the physics of resonances in QCD?
• Where does QCD lead us to expect resonances with exotic quantum numbers?
• What experimental efforts are required to arrive at a coherent picture?
A diversity of experimental programmes on resonance physics in QCD is presently going on or planned for the next few years. Moreover, future work using lattice QCD simulations requires detailed planning to meet the increasing demand for such calculations both in terms of computer power and manpower resources.
In order to focus the task force meeting, discussions were restricted to four specific areas of QCD. Light mesons and baryons with up, down and strange quark content were considered. In addition, heavy-light and heavy-heavy meson systems with charm quarks were discussed. For all four sectors an attempt was made to identify the most promising key experiments which should lead to progress in the understanding of resonances in QCD. Members of different collaborations were invited, in particular from Belle, BESIII, COMPASS, GlueX, J-PARC, LHCb and PANDA, the key players in this field. This was complemented by a group of leading theoreticians with a balance of phenomenology and lattice experts. In this short report the referencing can only be selective. There is no claim of completeness and we implicitly assume the relevance of the references embodied in the works we cite.
Day 1: Light mesons
We are far from having a profound understanding of the meson spectrum composed of up, down and strange quarks. Experimental data indicate a possible proliferation of states compared to the simple quark-model picture. In the low-mass region, chiral symmetry appears to describe the first scalar and axial-vector states [4,5] quite successfully. However, how to link this to the empirical spectrum at higher masses and unravel the driving degrees of freedom of QCD remains a challenge. Here lattice QCD simulations are starting to have critical impact: scattering phase shifts can be extracted from energy levels of finite-box QCD, at least where only a few hadronic channels intrude. The issues considered include: are there glueballs [6], hybrid, 4-quark [7] or molecular states [4,5]? How far do the Regge trajectories extend [8] with increasing mass and spin?
Light mesons are promising systems to search for gluonic degrees of freedom. The gluon self-interaction is a defining element of QCD. It allows bound states of gluons, even in a world without quarks. From the earliest models and calculations the ground state "pure" glueball is expected around a mass of 1.5 GeV. Such a mass is readily reached experimentally in several production mechanisms which are believed to provide gluon-rich environments, such as J/ψ decays, central production in proton-proton collisions, or proton-anti-proton annihilation. However, the experimental evidence so far is inconclusive, or at least controversially discussed. The inevitable strong mixing with conventional states of identical quantum numbers makes the unambiguous identification of glueballs a major challenge. A systematic study of the spectrum and the properties of mesons up to higher masses is needed to establish if there are indeed any supernumerary states indicative of glueballs.
The calculation of the glueball spectrum with dynamical quarks is a challenge requiring very high statistics as well as disentangling the allowed meson-glueball mixing. For now, the quenched calculation [9] remains a benchmark. In unquenched calculations it is difficult to disentangle the consequences of meson and glueball operators. As a consequence, clear predictions for decay rates, and mixing with conventional scalars, are missing at present, pending further study.
Experimental studies
Diffractive production as well as the decays of J/ψ, ψ(2S), D and Ds, or even ηc, offer a great laboratory to study scalar/tensor isoscalar mesons, and isovectors too. The production of glueballs in J/ψ radiative decays at BESIII seems small despite the expectation that this is a gluon-rich system. Does p̄p annihilation provide a better glue-rich environment? The observation of both the scalar f0(1500) and the pseudoscalar η(1405) glueball candidates at LEAR with Crystal Barrel and OBELIX suggests that p̄p is indeed a good reaction for glueball searches.
Is it easier to detect hybrid mesons than glueballs? Since many decay channels are kinematically allowed, both may have large widths. Thus, a detailed Partial Wave Analysis (PWA) is the only reliable approach to extracting their signals. While the light glueballs all have conventional quantum numbers, many hybrid states may have quantum numbers not allowed in the quark model. These would be more readily identified in so-called "exotic" waves in the PWA. The upcoming programme at Jefferson Lab has been strongly motivated by lattice predictions [10] of a spectrum of hybrid states in the 1.6-2.4 GeV region. The GlueX experiment will search in photoproduction, while CLAS12 will use electroproduction. The process of p̄p annihilation has no final-state baryons to complicate the analyses. Indeed, PANDA complements and extends other experimental programmes and has the ability to confirm results that might otherwise be controversial. Moreover, the facility to access higher-mass systems, where most of the hybrids are expected to be, is unique. Even though the rates at PANDA will be high (≥10^7 candidate events per day), the very significant background of light-meson production with identical kinematical signatures will require sophisticated analysis techniques, such as a multi-variate classifier, to improve the signal-to-background ratio. In addition, PANDA will have the capability to search for states with gluonic degrees of freedom (hybrids and glueballs) in the charmonium energy region, where the overlap with conventional states will be much smaller, making the background situation much more favourable.
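The kind of multi-variate classification mentioned here can be illustrated with a small toy study (a sketch only: the features, their distributions and the 0.9 working point are invented, and scikit-learn is assumed to be available; this is not the PANDA analysis chain):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy kinematic features (invented for illustration): an invariant mass,
# a decay angle, and a transverse-momentum proxy.
n_bkg, n_sig = 20_000, 200
bkg = np.column_stack([
    rng.uniform(1.0, 2.5, n_bkg),                       # smooth mass continuum (GeV)
    rng.uniform(-1.0, 1.0, n_bkg),                      # flat cos(theta) for background
    rng.exponential(0.30, n_bkg),
])
sig = np.column_stack([
    rng.normal(1.5, 0.02, n_sig),                       # narrow resonance peak
    np.clip(rng.normal(0.0, 0.30, n_sig), -1.0, 1.0),   # peaked angular distribution
    rng.exponential(0.15, n_sig),
])
X = np.vstack([bkg, sig])
y = np.concatenate([np.zeros(n_bkg), np.ones(n_sig)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Keep only candidates the classifier scores as very signal-like and
# compare the signal-to-background ratio before and after the cut.
score = clf.predict_proba(X_te)[:, 1]
sel = score > 0.9
s = int((y_te[sel] == 1).sum())
b = int((y_te[sel] == 0).sum())
print(f"raw S/B = {n_sig / n_bkg:.3f}, after cut S/B = {s / max(b, 1):.2f}")
```

In a real analysis the working point would be chosen from an expected figure of merit such as S/√(S+B) and validated against control channels.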
In any experimental study it is important to reconstruct both charged and neutral final states. An example is the a1(1420), with J^PC = 1^++, recently observed by COMPASS in its charged state [11]. The a1(1420) is a rare signal, representing only 0.25% of the total intensity in that mass region. The observation of this state relies on sophisticated Amplitude Analysis techniques, including up to 88 partial waves, and making use of the full 5-dimensional distribution of the three-body final state, π−π−π+. The identification of its neutral partner using the π0 f0(980) mode would help in establishing this as an isotriplet resonance.
The COMPASS results have highlighted how all experiments must prepare for the robust extraction of small signals if they want to definitively identify new states such as hybrids and glueballs. This is an area of fertile cooperation between experimentalists and theorists, necessary to ensure the required methodologies are reliable and firmly rooted in the current understanding of reaction theory and the analytic S-matrix. This is essential if states like the a1(1420) are to be confirmed as members of the hadron spectrum and not complex kinematical effects.
Day 1: Open-charm meson systems
Meson resonances composed of a heavy quark and a light antiquark are further promising systems for unraveling the dynamical degrees of freedom underlying how structure is formed in QCD. From a theoretical point of view such systems are unique, since here chiral symmetry for the light quarks and heavy-quark symmetry are both manifest. These constrain the nature of the internal interactions significantly. Indeed, this is the only system where coupled-channel dynamics based on the leading-order chiral Lagrangian predicts the existence of an SU(3) flavour multiplet of resonance states with exotic quantum numbers [12]. Clear measurable signals in specific final states are foreseen. So far these predictions have neither been disproven nor confirmed by experiment. A detailed study of the open-charm meson spectrum is expected to provide important clues for a deeper understanding of resonances in QCD.
LHCb has recently illustrated the power of using exclusive decays of B mesons for the identification of excited charmed mesons. Amplitude Analyses based on an assumed isobar model, with excellent signal-to-background ratios, have been performed. Exploring masses up to almost 3 GeV, LHCb has been able to identify a DsJ state with spin as high as J = 3. So far, BaBar and Belle have not been able to find D mesons heavier than 3 GeV either. While current studies are dominated by charged-particle final states, decays into neutral mesons are critical to understanding the dynamics of these states. Belle II will explore how to complement current results with data involving neutral particles. Final states involving an ηD or ηD* are important channels for discovering a possible exotic flavour sextet. Such states may decouple from the πD or πD* channels, but are expected to have a large branching fraction into channels with η mesons, making the ability to detect the decay η → γγ crucial.
Lattice QCD predictions for the low-mass spectrum are underway for heavy-light mesons. For instance, first results on the low-energy scattering phase shift for DK were presented at the Task Force meeting [13]. It should also be noted that exploratory lattice QCD studies suggest so far unseen Ds meson hybrid states at masses ≥ 3.4 GeV [14]. This will be an important search at current and future experiments.
A stringent test of our understanding of the workings of QCD is posed by narrow states. For instance, for the well-established D*s0(2317)+, experiment currently provides an upper bound of a few MeV for its total width. Any hadronic decay is forbidden by isospin symmetry. Depending on different dynamical assumptions, decay widths from 10 keV to more than 100 keV are predicted [15,16]. Here a careful discrimination of the various theoretical calculations is required. Over the last decade an amazingly consistent hadronic decay pattern was worked out relying on the flavour SU(3) chiral Lagrangian, including the charmed meson ground states with J^P = 0^-, 1^-. The width of the D*s0(2317)+ is then predicted by theory to be around 140 keV. The mechanism behind this is a mixing of the conventional flavour triplet with the exotic flavour sextet that arises in the presence of isospin-violating effects. Since in the flavour sextet channel the chiral Lagrangian predicts significant attraction, the matrix elements in the sextet channels are large. This explains the unexpectedly large predicted width of the D*s0(2317)+ of ≥100 keV. Moreover, this chiral Lagrangian approach appears consistent with recent lattice QCD simulations in this sector, which provide constraints on some low-energy phase shifts. High-precision measurements of the width are needed to scrutinize this picture. Remarkably, PANDA is expected to go down to about 100 keV by means of a threshold scan in p̄p → Ds D̄*s0(2317). No other experiment can be that precise. In order to perform this measurement, analysis tools need to be developed to improve the signal-to-background ratio, which for this channel is of the order of 1/10^6.
Day 2: Light baryon systems
Our ultimate aim is to clarify what the driving degrees of freedom for baryon resonances are. That these are not always just three constituent quarks has been highlighted by the fact that some very well established baryon states have long proved an enigma. They are difficult to reconcile with a simple quark-model picture, or even with lattice calculations, albeit with heavy pions. For instance, the Λ(1405) with J^P = 1/2^− and the Roper N* with J^P = 1/2^+ do not fit the conventional picture of three quarks. Should we expect doubly strange baryons with similarities to these? Indeed, a decade ago a series of studies showed that the lowest baryon states with J^P = 1/2^− and 3/2^− are naturally predicted by the chiral Lagrangian formulated for three light flavours [17,18]. Therein the Goldstone bosons couple to the baryon octet and decuplet ground-state fields with J^P = 1/2^+ and J^P = 3/2^+, respectively. If the leading-order chiral interaction is used as input in a coupled-channel calculation, states like the Λ(1405) are dynamically generated. This mechanism is analogous to the one which describes the lightest scalar and axial-vector mesons (with or without charm) from the chiral Lagrangian. Such studies have shown that the Λ(1405) is not necessarily an exceptional state, as was long argued in the literature; rather, it may be more typical among excited states. However, this prediction awaits experimental confirmation in other strangeness sectors. It is an unsolved puzzle how this consequence of QCD squares with the quark-model picture of baryon resonances. The role of the approximate chiral flavour SU(3) symmetry remains largely a mystery, even though it has been illustrated that this symmetry seems to be a feature of at least part of the excitation spectrum. Is a similar dynamical mechanism applicable to high-mass baryon resonances? What is the role of the light vector mesons, which are suggested to be of crucial relevance by the hadrogenesis conjecture?
Another approach to the excited baryon spectrum relies on properties of QCD in the limit of a large number of colours (N_c). In this limit it is argued that the spectrum exhibits simpler symmetry properties [19,20]. Back at the physical point with N_c = 3, one may expect an approximate O(3) ⊗ SU(6) symmetry, which allows the classification of the spectrum in multiplets. The current knowledge of the N* and ∆* spectrum seems to leave out a 20-plet (a flavor octet with J^P = 1/2^+ and a flavor singlet with J^P = 3/2^+). What does this imply for the strange baryon resonances? We note that large-N_c QCD is currently also used to systematically constrain counter terms in the chiral Lagrangian [21,22]. That renders coupled-channel computations based on the chiral Lagrangian highly predictive. Such constraints are also important in studies of the quark-mass dependence of the baryon ground-state masses [23].
Pioneering lattice studies for the highly excited baryon states in finite-volume QCD appear to agree qualitatively with an excitation spectrum expected from the constituent quark model, or more generally from the multiplet pattern predicted by large-N_c QCD [1]. However, this is to be taken with a grain of salt. These studies by the Hadron Spectrum Collaboration from a few years ago [1] rely on using only 3-quark operators that were constructed with light quarks that are unphysically heavy. Meson-baryon interpolating operators have to be included, particularly as calculations proceed to lighter quark masses. Then the excited baryon states will decay and, using the Lüscher connection of the finite- to infinite-volume S-matrix, scattering phase shifts should be extracted. The spectrum deduced from such coupled-channel studies may be significantly modified compared to the exploratory results [24]. Within the next few years one may expect further results on phase shifts. Branching ratios may then become available for some resonant states. Nevertheless, phase shifts for light baryons much above 2 GeV will remain difficult to obtain from lattice QCD because of the many open decay channels.
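For orientation, the elastic s-wave form of this connection between finite-volume energy levels and the infinite-volume S-matrix can be written compactly. In one common convention (a sketch only; normalizations differ between references), a two-particle level with relative momentum p in a cubic box of side L satisfies

\[
p \cot \delta_0(p) \;=\; \frac{2}{\sqrt{\pi}\,L}\; Z_{00}\!\left(1;\,q^2\right),
\qquad q = \frac{pL}{2\pi},
\]

where Z_00 is the generalized Lüscher zeta function. For coupled channels this becomes a determinant condition over channel space, which is why several volumes, momentum frames and operator bases are needed to pin down the amplitudes.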
Experimental studies
Using photoproduction for baryon spectroscopy relies on well-understood production amplitudes from electromagnetic interactions. In the last decades ELSA, MAMI, JLab and LEPS have collected a huge data set on resonance production, complemented by measurements with a polarized target and/or a polarized beam [25]. The polarization data have made a dramatic impact on state-of-the-art Partial Wave Analyses. They have uncovered new nucleon and ∆ resonances that were not previously revealed in the analysis of experiments with pion beams [26,27,28]. However, it may prove difficult to photoproduce and then identify excited baryons much above 2 GeV. A complementary source of nucleon resonances is the J/ψ factory of the BESIII experiment, which is already used to study excited N*'s through dedicated partial-wave analyses [29].
Baryon resonances with strangeness are significantly more difficult to study, and the available data set is rather limited. This is particularly so for doubly- and triply-strange systems. So far, empirical information stems mainly from reactions with either photon, pion or kaon beams, where a certain number of kaons have to be tagged in order to access baryonic systems with negative strangeness. For the few established doubly-strange states, little is known about their J^P assignments. In a simple quark-model picture, the strange states will systematically fit into the same multiplets as those of the u, d sectors. However, it could be that the dynamics of excited baryons differs from that of the lower-lying states. Their pattern of decays may be systematically different. Parity doublets may appear in some sectors with increasing mass. It is therefore of crucial importance to collect a significant data base to provide decisive physics clues from studies of baryons with strangeness. Indeed, for the crypto-exotic N* (P_c) states recently discovered around 4.4 GeV by LHCb [30], the Λ* resonance states contribute a significant kinematical reflection in the region of the putative pentaquarks, producing a strong interference pattern. A reliable extraction of the exotic signal requires a detailed understanding of the Λ* spectrum.
JLab has presented plans for a dedicated search for double strange baryons with GlueX and CLAS12. A reliable estimate of the production rates is difficult. In the existing data the number of peaks due to Ξ-resonances seems to be quite limited. Larger rates are foreseen with PANDA, which is expected to support a dedicated programme to search for doubly-strange baryons [31], with an accompanying partial wave analysis. Moreover, the rate estimates for baryons with three strange quarks are promising and warrant dedicated studies.
LHCb and Belle have presented interesting data sets on S = −1, −2, −3 baryon ground states. These data sets should be extended to search for excited states. Hyperon spectroscopy is not the focus of either LHCb or Belle; thus, it is difficult to estimate the discovery potential. While the strange baryon ground states can be cleanly identified owing to their long weak-decay lengths, excited states decay by single- or multi-pion emission and are therefore more difficult to study. The identification of resonance states requires the use of PWA tools. This will be a challenge at LHCb owing to a) the combinatorial background in the large-multiplicity environment and b) the absence of a clean production process. Most promising are exclusive b-hadron decays, which allow a PWA based on a well-defined initial state. Belle II multiplicities are much lower than those of LHCb. Hyperons are produced in part from the continuum and may stem from exclusive reactions (in addition to B decays). In particular, pair production of baryons may be a good laboratory for hyperon studies. COMPASS has observed strange baryons in exclusive pp scattering with KK̄ pairs in the final state. The S = −1 baryons could be addressed this way.
Complex analysis tools are being developed for LHCb, Belle and COMPASS, but must be further refined to possibly identify decays of excited hyperons.
Day 3: Charmonium-like systems
It is the charmonium-like sector that has excited the interest of the hadron and nuclear community most widely. Here, thanks to recent progress in both effective field theories and lattice calculations, the properties of the lowest resonances have been understood directly from QCD with great precision and in terms of dominant quark-antiquark degrees of freedom. Yet close to or just above the strong decay threshold, there has been an explosion of new discoveries by BESIII, Belle, BaBar (some confirmed by Fermilab) and now the LHC experiments. Indeed, these have provided strong evidence that more complex configurations allowed by QCD contribute to the observed spectrum, with new resonances called XYZ states. Many phenomenological models with additional degrees of freedom have been developed to describe such resonances. Possibilities are (a) tetraquarks, with e.g. diquark anti-diquark forces contributing to the binding, (b) weakly bound molecules of open-charm mesons, (c) excitations of light quarks orbiting a heavy quark pair in the centre (often called "hadro-charmonium"), or (d) hybrids, with excitations of the gluon flux tube between the two heavy quarks. These assumptions provide different excitation patterns, which can be compared directly to the experimentally observed patterns [32]. First exploratory lattice results are available [33,34]. Recently a QCD-derived effective field theory description for hybrids has been formulated [35].
One of the most important tasks is to map out the pattern of XYZ states, in particular their as yet unobserved (spin) partners. For instance, in the hadrocharmonium model [36], one may expect the Y(4260) to be related to the J/ψ f0(980) channel. Moreover, the D1(2420)D̄ threshold is quite close and may also play an important dynamical role [37]. Depending on the assumed internal configurations (a)-(d) above, the pattern of partner states is different. For instance, in the P-wave tetraquark model, the 1^−− Y(4260) would be degenerate with a 3^−− state [16], which would be accessible at PANDA. Angular-momentum barrier effects may make it difficult to observe such states in B-meson decays. Consequently, the complete pattern of partners may have to await the running of PANDA.
Some of the charged Z states have been observed in B decays, while others are seen in e+e− collisions, in particular in Y(4260) decays. However, no Z state has been observed in both production mechanisms. This suggests that other production processes, such as p̄p collisions, may be required to understand why. Assuming an integrated luminosity of 0.5 pb^-1/day, which corresponds to the low-luminosity mode with 10^31 cm^-2 s^-1 for the start of PANDA data taking, significant numbers of X(3872), Y(4260) and Z+(3900) are expected already in year one. Cross sections are of the order of nanobarns, compared to e+e− collisions [38,39].
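These expectations follow from simple counting. As an illustration, taking a cross-section of 1 nb (the order of magnitude quoted above) and the start-up luminosity:

\[
N_{\mathrm{day}} \;=\; \sigma\, \mathcal{L}_{\mathrm{int}}
\;=\; (1\ \mathrm{nb}) \times (0.5\ \mathrm{pb}^{-1}/\mathrm{day})
\;=\; (10^{3}\ \mathrm{pb}) \times (0.5\ \mathrm{pb}^{-1}/\mathrm{day})
\;=\; 500\ \mathrm{events/day},
\]

before branching fractions and reconstruction efficiencies are applied.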
One of the prime examples of an exotic, charmonium-like state remains the X(3872). Even a decade after its discovery its nature remains unresolved [40,41]. We identify three important tasks for upcoming experiments. The very close vicinity of the X(3872) to the D0 D̄*0 threshold is one of the most striking observations, possibly pointing to a large molecular component in its wave function. A significant signal of the X(3872) has been observed in the decay B+ → K+ X(3872) [42,43]. However, to our knowledge, there has not been a complete study of B+ → K+ D*0 D̄*0 or B+ → K+ D*+ D*−, due to limitations in the reconstruction of low-momentum π± and π0. The latter decays would be the primary search channels for a possible 2^++ partner of the X(3872) at the D*0 D̄*0 threshold [44]. We expect both LHCb and Belle II to contribute to this search in the near future. For the width of the X(3872), an upper limit of 1.2 MeV has presently been determined [45]. If the Fock decomposition of the X(3872) contains molecular components, its width may be significantly larger than the width of the ψ(2S), with Γ(ψ(2S)) ≈ 300 keV. Belle II may be able to reach this value with a multidimensional, kinematically overconstrained fit. Indeed, if the X(3872) is purely molecular, then there is a direct relation between its width and its binding energy [46], giving a precise determination of its width added importance.
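To make the scale of this "closeness" concrete, one can work out the binding momentum γ = √(2μE_B) of a shallow S-wave bound state. As a rough illustration (PDG-level masses m(D0) ≈ 1865 MeV and m(D̄*0) ≈ 2007 MeV; the binding energy E_B = 100 keV is purely an assumed value, chosen only to set the scale):

\[
\mu = \frac{m_{D^0}\, m_{\bar D^{*0}}}{m_{D^0}+m_{\bar D^{*0}}} \approx 967\ \mathrm{MeV},
\qquad
\gamma = \sqrt{2\mu E_B} \approx \sqrt{2 \times 967 \times 0.1}\ \mathrm{MeV} \approx 14\ \mathrm{MeV},
\]

corresponding to a size of order ħc/γ ≈ (197 MeV fm)/(14 MeV) ≈ 14 fm, an order of magnitude larger than an ordinary hadron. This long-distance dominance is what underlies the relation between width and binding energy of Ref. [46].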
Rare decays of the X(3872) are an opportunity for PANDA. The enormous statistics provided by 10^4 X(3872)'s produced per day, even at the start of data taking, make such studies feasible. Decays to light mesons should be OZI suppressed if the X(3872) is a pure cc̄ state. If the X(3872) contains tetraquark eigenstates in its Fock decomposition, light-quark rearrangement might enhance the branching fractions. While radiative transitions from and to the X(3872) are suppressed by α_em ≈ 1/137, they nevertheless provide insight into the complete spectroscopic pattern of the state [47]. Colliding antiprotons on nuclei with PANDA would allow the A-dependence of the production of ψ and X(3872) near threshold to be compared. This may, after appropriate theoretical study, provide a good way to expose an extended D*D̄ component of the X(3872) state function.
Summary and conclusions
The task force was organized into public morning sessions and closed afternoon brainstorming discussion rounds. While in the morning the topic was introduced with overview talks, in the afternoon short contributions were prepared to help the discussions. The main guide for the afternoon discussions was the following pair of questions:
• What do we need and why?
• How do we get there?
Though the discussions naturally divided into theoretical and experimental issues, there was an important synergy between them.
On the theory side there appears to be a consensus. Significant progress from lattice QCD simulations has been made. Very recently the community started to compute scattering phase shifts from the set of energy levels in a finite box. In order to apply Lüscher's framework, studies on several (and larger) lattice volumes, including coupled channels (see, e.g., [48,49]), are required. In the alternative HAL QCD pseudo-potential approach [50], the HAL QCD collaboration is setting up configurations in boxes of spatial sizes up to 10 fm. The determination of the phase shifts will involve complicated coupled-channel dynamics. Here it is important to consider source fields that couple the quark-gluon dynamics to all important hadronic final states, as already done in the simpler ππ and πK sectors [48]. The HAL QCD collaboration has attempted to compute multiple-channel potentials and apply them to the calculation of the S-matrix for exotic charmonium-like mesons. A few examples of the striking effects of including such sources on the volume dependence of the energy levels were shown at the meeting. Further work exploring channel-coupling effects in lattice QCD calculations is necessary to bridge the "finite-volume discrete states" on the lattice and the physical resonances made up of a variety of open and closed channels. A close interaction of the lattice community with experts on hadronic final-state interactions is already very fruitful. The challenge is to compute reaction amplitudes, extract pole positions on higher Riemann sheets and study the quark-mass dependence of the resonance properties.
In addition, to be able to deduce a physical picture from experimental or lattice data we need QCD motivated models. Here it is important to work out extreme cases that help discriminate the distinct patterns associated with a specific choice of degrees of freedom. Model calculations aiming at a description of the XY Z spectrum should reproduce and predict patterns of excited states with their branching ratios rather than describe the properties of a few selected states. From such studies we want to learn what are the most effective coloured and/or colour neutral degrees of freedom in the resonance physics of QCD.
Different sectors elucidate different properties of QCD. In the charmonium sector the confinement of heavy quarks is probed. On the other hand, for a system composed of up, down and strange quarks, the details of colour confinement are as yet poorly understood. The dominant QCD property here may be its spontaneously broken chiral symmetry. Quantitative insight can be gained from the limit of QCD in which the number of colours approaches infinity. How such properties are reflected in hadronic final-state interactions is a crucial challenge the theory community needs to address vigorously.
Heavy-light systems composed of a charm quark and a light quark (u, d or s) are constrained not only by chiral symmetry, but also by heavy-quark spin symmetry. Such mesonic systems are particularly exciting. It is here that chiral SU(3) symmetry suggests a flavour sextet of states that cannot occur in a quark-antiquark picture. This prediction can be probed with lattice simulations.
Experimental needs
In order to make progress in the understanding of these hadronic states, spin-parity determinations and full partial wave analyses are needed; these require high-statistics data samples, large angular acceptance and as complete particle identification as possible. Assuming the purely experimental issues of statistical and systematic apparatus effects are solved, the quest is for complementary reaction processes in order to narrow down the interpretation of new states being, or to be, observed. For instance, production by t-channel exchange, which allows clean partial wave analyses, is characterized by the coherent overlap of various exchange processes, while production in s-channel reactions, or decay processes, is typically amenable to Dalitz plot analyses. Both are required to shed light on the interpretation of observed resonances (generic vs. reaction-specific resonance interpretation). Similarly, continuum (s-channel) production in colliders must be complemented with studies of heavy-flavour or heavy-lepton decays, processes with similar systematics. Such complementary approaches are mandatory for the determination of pole parameters, for which an extrapolation in the complex energy plane could well be ambiguous in a single experiment. It is therefore crucial that the various data sets are published so as to allow combined fits, and that plans are made for the tremendous efforts required from individual collaborations to be carried out jointly.
Currently COMPASS is enlarging the data base on light-meson spectroscopy [51,52]. The hunt for gluonic degrees of freedom in spectroscopy is ongoing. With GlueX a dedicated search for light hybrid states has started. We have seen exciting and unexpected new results from LHCb using an unprecedented data sample with ∼10^13 events of directly produced open-charm mesons. So far no unambiguous exotic signals have been seen. In the hidden-charm sector, LHCb has recently claimed novel pentaquark states. In all cases, it is important to extend such studies to neutral channels. Here Belle II and PANDA have experimental advantages and their discovery potential for charmed hybrids will be particularly high for narrow states.
Recent data on polarization observables in photo-and electro-production experiments from ELSA, MAMI and JLab have had a huge impact on excited nucleon and ∆ studies. Such spin observables have led to more reliable Amplitude Analyses and have revealed new resonant states. Here it has proved very successful to combine data from different experiments. High quality pion and kaon beam data are expected from J-PARC and will further stabilize the current PWAs. Complementary information comes from J/ψ decays studied at BESIII, but presently this has not been included in more global analyses. The strange baryon spectrum requires further experiments if a systematic detailed knowledge of strangeness −1 to −3 is to be acquired. There are plans at JLab for a dedicated search for double strange baryons with GlueX and CLAS12, and forthcoming studies at J-PARC. Even larger production rates are estimated for PANDA, which should make possible the identification of the high-mass strange partners of the newly observed nucleon and ∆ resonances. This is essential for an understanding of the flavour structure of the highly excited baryons.
In all the sectors, it remains a challenge to sharpen the tools required to analyse the present and future data sets. Theoretical work is under way to develop multi-channel analyses that go beyond a simple isobar treatment, and extend the K-matrix model of 2-body interactions to many-body final states. Such multi-channel frameworks fulfill coupled-channel unitarity for more than 2 → 2 reactions as well as respect the condition of microcausality. The latter implies specific analytic properties of the partial-wave amplitudes [53,54,55]. This provides a basis for distinguishing between what effects may be kinematical and which are really due to states in the spectrum of QCD. More widely, such methodologies are required to extract definitive information from data, whether from experiment or from lattice simulations. Intense interaction between experiment and theory has started with the NABIS, HaSpect and JPAC programmes.
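Schematically, the K-matrix construction referred to above builds coupled-channel unitarity in from the start: with a real, symmetric matrix K(s) over the channel space and diagonal phase-space factors ρ(s), one writes

\[
T(s) \;=\; K(s)\left[\mathbb{1} - i\,\rho(s)\,K(s)\right]^{-1},
\qquad\Rightarrow\qquad
\operatorname{Im}\, T^{-1}(s) \;=\; -\,\rho(s)
\]

above threshold, which is precisely the coupled-channel unitarity condition for 2 → 2 amplitudes. The extensions discussed here aim to retain this property for many-body final states while also building in the analyticity that microcausality demands.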
Numerous recent experimental observations of narrow charmonium-like states (XYZ states) have been reported from BESIII and Belle, but also from LHCb [56,57,58]. The quest for the pattern of XYZ states can be addressed by finding partners to the presently observed states and by measuring their radiative transitions. PANDA can create any quantum number in p̄p, which enables access to possible transitions between the family of X, Y and Z states, and so map out their connections.
It is a feature of many channels that there is an unexpected or unexplained enhancement at their threshold. Some are likely direct reflections of QCD dynamics. It is therefore important to be able to perform dedicated threshold scans that unravel this dynamics. Similarly, there are narrow states whose widths are unknown, presently limited by the resolution of the detectors. Scan experiments should be planned that determine their line shapes and width parameters and have the ability to distinguish the threshold behavior of different J^PC values. The high resolution provided by a cooled antiproton beam makes PANDA the unique facility for such dedicated scans, with the potential for further discoveries.
Moreover, in all sectors there is a rather limited knowledge of high-spin states, which may well be narrower and easier to identify. Such knowledge is an essential component of building an understanding of the nature of the hadron spectrum in all its complexity. The fascinating XY Z states we presently see may very well be the glimpses of a new world of resonance physics, whose study both theoretically and experimentally will lead to a deeper understanding of the strong dynamics of QCD. | 2015-11-30T15:40:37.000Z | 2015-11-30T00:00:00.000 | {
"year": 2016,
"sha1": "650a338a30ded9feacc4e6bbe86dbb20f19dbcad",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1511.09353",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "650a338a30ded9feacc4e6bbe86dbb20f19dbcad",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
3443002 | pes2o/s2orc | v3-fos-license | IgG4-related cerebral pseudotumor with perineural spreading along branches of the trigeminal nerves causing compressive optic neuropathy
Abstract Rationale: Immunoglobulin G4-related disease (IgG4-RD) is characterized by tumor-like lesions, a dense lymphoplasmacytic infiltrate rich in IgG4-positive plasma cells, storiform fibrosis, and obliterative phlebitis. IgG4-RD has been described in a variety of organ systems; however, it rarely involves the central nervous system. Patient concerns: A 17-year-old woman visited our clinic with a complaint of blurred vision for the past 5 months. She also reported a painless right submandibular mass that had been present for 1 year. Her best-corrected visual acuity (BCVA) was 2.0 LogMAR, with an almost total visual field defect in the right eye. Diagnoses: Magnetic resonance imaging (MRI) revealed lobulated parasellar tumors with perineural spreading along branches of the trigeminal nerves causing right optic nerve compression. A craniotomy with tumor removal and submandibular gland biopsy was performed. Histopathological analysis of the tumor revealed stromal fibrosis with atypical lymphoid infiltrations. Histopathological and immunohistochemical analysis of the submandibular gland confirmed the diagnosis of IgG4-RD. Interventions: The patient was administered 500 mg/d of pulse methylprednisolone for 3 days, 500 mg of intravenous rituximab every 2 weeks (for a total of 2 doses), and 500 mg of intravenous pulse cyclophosphamide every month (for a total of 3 doses). Outcomes: Two months after the initiation of immunosuppressive therapy, the patient's BCVA returned to 0.1 LogMAR with visual field defect recovery. The follow-up MRI showed the almost complete disappearance of the previously contrast-enhanced lesions. Lessons: Herein, we report a rare case of IgG4-RD presenting as a parasellar tumor and present a review of the related literature. Based on the case report, we propose that aggressive therapy with glucocorticoid, rituximab, and cyclophosphamide may potentially be useful for treating such cases.
Introduction
Immunoglobulin G4-related disease (IgG4-RD) is a recently recognized fibroinflammatory condition characterized by tumor-like lesions, a dense lymphoplasmacytic infiltrate rich in IgG4-positive plasma cells, storiform fibrosis, and obliterative phlebitis. [1] IgG4-RD has been described in a variety of organ systems: the pancreas, biliary tree, liver, salivary glands, lacrimal glands, periorbital tissues, lymph nodes, thyroid, retroperitoneum, kidneys, aorta, lungs, prostate, meninges, and pituitary gland. [2] The fibroinflammatory lesions frequently form a mass that may destroy the involved organ, mimicking malignancy. [3] Histopathological and immunohistochemical analyses of biopsy specimens remain the cornerstone in the diagnosis of IgG4-RD. Elevated concentrations of IgG4 in serum are also helpful in diagnosing IgG4-RD, but approximately 30% of patients with IgG4-RD have normal serum IgG4 levels. [4] Patients often respond well to corticosteroid therapy. [5] Herein, we report a rare case of IgG4-RD presenting as a parasellar tumor that showed a good response to corticosteroid and immunosuppressive therapy.
Methods and results
2.1. Case report
2.1.1. Patient information and clinical findings.
A 17-year-old adolescent girl with a 9-year history of atopic dermatitis visited our clinic with a complaint of blurred vision for the past 5 months. She also reported a painless right submandibular mass that had been present for 1 year. Physical examination revealed enlargement of the bilateral submandibular glands with right-side predominance; moreover, an enlarged lymph node, about 1.2 cm in size, was noted on the right side of the neck. The neurologic examination disclosed decreased sensation to pin-prick over the right perinasal area. Her best-corrected visual acuity (BCVA) was 2.0 LogMAR, with an almost total visual field defect in the right eye (Fig. 1A). The visual evoked potentials showed an absent response in the right eye and prolonged P100 latency (130 ms) in the left eye, suggestive of functional perturbation of the bilateral prechiasmatic optic pathway, with the right side being worse. Magnetic resonance imaging (MRI) revealed lobulated, avidly contrast-enhancing tumors in the bilateral parasellar regions with extracranial extension along the ophthalmic (V1), maxillary (V2), and mandibular (V3) branches of the bilateral trigeminal nerves (Fig. 2A-F). The tumor had a larger component on the right and extended into the right orbital apex through the superior orbital fissure. There was contrast-enhancing soft tissue in the right orbital apex suggesting perineural spreading of the tumor along the nasociliary branch of the ophthalmic nerve (V1) with compression of the right optic nerve. The perineural spreading of the parasellar tumors was also evident, with erosion and widening of the skull base foramina along the involved nerves.
2.1.2. Diagnostic assessment.
A craniotomy with tumor removal and submandibular gland biopsy was performed. Histopathological examination of the parasellar tumor revealed stromal fibrosis with atypical lymphoid infiltration (Fig. 3). Abundant plasma cell and focal eosinophil infiltration was noted as well. These atypical lymphoid infiltrates consisted largely of CD3-positive T cells with focal CD20-positive B cells (data not shown). The immunohistochemical (IHC) study for kappa and lambda light chains did not reveal a monoclonal B-cell population. An IHC stain for anaplastic lymphoma kinase-1 (ALK-1) was negative. Histopathological examination of the submandibular gland showed dense infiltration of lymphocytes and plasma cells with storiform fibrosis; in addition, phlebitis without obliteration of the lumen was noted (Fig. 4). An IHC study of the submandibular gland revealed CD138-positive plasma cells with mixed CD3- and CD20-positive small lymphocyte infiltration. Increased IgG4-positive plasma cell infiltration (>50 IgG4+ cells/HPF with an IgG4+/IgG+ cell ratio >40%) was also noted. According to the above histopathological features and immunophenotypic findings of the submandibular gland, a diagnosis of immunoglobulin G4-related sialadenitis was considered. However, the patient's visual acuity did not recover after tumor removal and, moreover, an elevated serum IgG4 level (IgG4: 511 mg/dL, IgG: 1770 mg/dL) was noted. Hence, a diagnosis of IgG4-related cerebral pseudotumor with compressive optic neuropathy was considered.
2.1.3. Therapeutic intervention, follow-up, and outcomes.
The patient was then administered 500 mg/d of pulse methylprednisolone for 3 days, 500 mg of intravenous rituximab every 2 weeks (for a total of 2 doses), and 500 mg of intravenous pulse cyclophosphamide every month (for a total of 3 doses). The dose of prednisolone was kept at 10 mg/d for maintenance therapy. Two months after the initiation of immunosuppressive therapy, the patient's BCVA returned to 0.1 LogMAR (Fig. 5) with visual field defect recovery (Fig. 1B). The follow-up MRI showed the almost complete disappearance of the previously contrast-enhanced lesions, with only a small amount of residual soft tissue in the lateral part of the right cavernous sinus (Fig. 2G-I).
Discussion and conclusions
Orbital inflammatory pseudotumors (OIPTs) often manifest as unilateral extraocular myositis, [6,7] causing periorbital pain, restriction of extraocular muscle movement, proptosis, or diplopia. Such OIPTs are commonly restricted to the orbit; however, the extraorbital extension of pseudotumors with dural-based infiltrates involving the cranial fossa and cavernous sinus has been reported. [6-9] In recent years, it has been suggested that some cases of OIPTs may be related to the entity known as IgG4-RD. [8,9] Moreover, IgG4-RD has previously been described in the central nervous system, with hypophysitis [10] and pachymeningitis [11] being the most common manifestations. Here we present the first case, to our knowledge, of an IgG4-related cerebral pseudotumor with perineural spreading along branches of the trigeminal nerves causing compressive optic neuropathy. The histopathology of the parasellar tumor revealed atypical lymphoid infiltration, mimicking lymphoid neoplasm. This atypical lymphocytic infiltrate was composed predominantly of T cells, with scattered aggregates of B cells; moreover, no immunoglobulin light chain restriction was identified. Hence, the possibility of lymphoid malignancy was excluded. There was abundant plasma cell infiltration with stromal fibrosis, and scattered eosinophils were also noted, observations compatible with the histopathological findings indicating IgG4-RD. To date, no international consensus criteria have been established for the diagnosis of extra-pancreatic IgG4-RD. A variety of cutoff points, ranging from >10 to >50 IgG4-positive plasma cells per high-power field (cells/HPF), have been proposed with regard to different organ sites. [1] Some studies advocate using the ratio of IgG4-positive plasma cells [5] to IgG-positive plasma cells to assist in making the diagnosis of extra-pancreatic IgG4-RD, where a ratio >30% to 50% is suggestive of a diagnosis. [12-16] One study, which analyzed the histopathological features of IgG4-related meningeal disease, demonstrated an increased number of IgG4-positive cells ranging from 11.8 to 54.2 cells/HPF. [17] The authors recommend the use of ≥10 IgG4-positive cells/HPF as a minimum criterion for the diagnosis of IgG4-related meningeal disease. [17] In the case presented herein, the immunohistochemical examination of the parasellar tumor revealed 20 IgG4-positive cells/HPF, with the ratio of IgG4/IgG-positive cells being around 10%. Although the lobulated parasellar tumor extended along the ophthalmic (V1), maxillary (V2), and mandibular (V3) branches of the bilateral trigeminal nerves, it did not cause excruciating trigeminal neurological symptoms except for decreased sensitivity to pin-pricks over the right perinasal area, which is supplied by the external nasal branch of the right anterior ethmoidal nerve (from the nasociliary branch of the right ophthalmic nerve). In addition, the patient in this case initially presented with compressive optic neuropathy. At the same time, right submandibular gland swelling was noted along with elevated serum IgG4. The histopathological examination of the submandibular gland suggested the diagnosis of IgG4-RD. Even though the patient had had blurred vision for as long as 5 months, with an almost total visual field defect, after corticosteroid and immunosuppressive therapy her BCVA returned to 0.1 LogMAR with visual field defect recovery. Furthermore, the follow-up MRI showed almost complete remission of the brain tumor.
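The spread of proposed cutoffs can be made concrete with a small sketch (illustrative only, not a diagnostic tool; the function name is ours, and the thresholds simply encode the numbers quoted in this discussion):

```python
def meets_igg4_histology_cutoffs(igg4_per_hpf: float,
                                 igg4_igg_ratio: float,
                                 cells_cutoff: float = 50,
                                 ratio_cutoff: float = 0.40) -> bool:
    """Check biopsy counts against site-dependent published cutoffs.

    Proposed thresholds range from >10 to >50 IgG4+ plasma cells per
    high-power field depending on the organ site, with an IgG4+/IgG+
    ratio of >30-50% used as a supportive criterion (see text).
    """
    return igg4_per_hpf > cells_cutoff and igg4_igg_ratio > ratio_cutoff

# Submandibular gland in this case: >50 cells/HPF, ratio >40% -> supportive.
print(meets_igg4_histology_cutoffs(55, 0.45))                   # True
# Parasellar tumor: 20 cells/HPF, ratio ~10%; even with the lenient
# meningeal cutoff of >=10 cells/HPF, the ratio criterion is not met.
print(meets_igg4_histology_cutoffs(20, 0.10, cells_cutoff=10))  # False
```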
Since autoimmune pancreatitis (AIP) is considered to be the pancreatic manifestation of an IgG4-related systemic disease, the International Consensus Diagnostic Criteria (ICDC) for AIP include elevated serum IgG4 levels and the histological changes of other organ involvement (OOI) as findings that can assist in making the diagnosis. [18] Our case also indicates that elevated serum IgG4 levels and OOI can assist in making the diagnosis of IgG4-RD in atypical presentations. In summary, we report herein a unique case of IgG4-related cerebral pseudotumor with perineural spreading along branches of the trigeminal nerves causing compressive optic neuropathy. We also identify the histopathological and MRI features of IgG4-related cerebral pseudotumor. Moreover, we propose that aggressive therapy with glucocorticoid, rituximab, and cyclophosphamide may potentially be useful for treating such cases.

Figure 5. Visual acuity changes during the treatment course. The patient underwent craniotomy with tumor removal in June 2014. Her visual acuity only improved to 0.7 LogMAR in the following 2 months. She was then administered pulse methylprednisolone at 500 mg/d for 3 days, 500 mg of intravenous rituximab every 2 weeks (for a total of 2 doses), and 500 mg of intravenous pulse cyclophosphamide every month (for a total of 3 doses). The dose of prednisolone was kept at 10 mg/d for maintenance therapy. Two months after the initiation of immunosuppressive therapy, her BCVA returned to 0.1 LogMAR. The serum IgG4 also decreased after the initiation of immunosuppressive therapy. | 2018-04-03T06:01:30.127Z | 2017-11-01T00:00:00.000 | {
"year": 2017,
"sha1": "a6741851d1a6eb475231e49937bf375bf5f5b0ac",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1097/md.0000000000008709",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a6741851d1a6eb475231e49937bf375bf5f5b0ac",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
265497634 | pes2o/s2orc | v3-fos-license | Neuroprotective Effects of Human Adipose–Derived Mesenchymal Stem Cells in Oxygen-Induced Retinopathy
This study was designed to provide evidence of the neuroprotective effects of human adipose-derived mesenchymal stem cells (hADSCs) in oxygen-induced retinopathy (OIR). In vivo, hADSCs were intravitreally injected into OIR mice. Various assessments, including HE (histological evaluation), TUNEL (terminal deoxynucleotidyl transferase dUTP nick end labeling) staining, electroretinogram (ERG) analysis, and retinal flat-mount examination, were performed separately at postnatal days 15 (P15) and 17 (P17) to evaluate neurological damage and functional changes. Western blot analysis of ciliary neurotrophic factor (CNTF), glial cell line-derived neurotrophic factor (GDNF), and brain-derived neurotrophic factor (BDNF) was conducted at P17 to elucidate the neuroprotective mechanism. The P17 OIR group exhibited a significant increase in vascular endothelial cell nuclei and in neovascularization breaching the ILM (inner limiting membrane) compared with the P17 control group. In addition, the retinal nonperfusion areas in the P17 OIR group and the number of apoptotic retinal cells in the P15 OIR group were significantly higher than in the corresponding hADSCs treatment and control groups. There was no significant change in the thickness of the inner nuclear layer (INL) or the outer nuclear layer (ONL) in the P17 OIR treatment group compared with the P17 OIR group. The cell density in the INL and ONL at P17 in the hADSCs treatment group was not significantly different from the OIR group. The amplitudes of the a-wave and b-wave in scotopic ERG analysis for the P17 OIR group were significantly lower than in the P17 hADSCs treatment group and the P17 control group. Furthermore, the latencies of the a-wave and b-wave in the P17 OIR group were significantly longer than in the P17 hADSCs treatment group and the P17 control group. In addition, the expression levels of CNTF and BDNF in the P17 OIR group were statistically higher than those in the P17 control group, whereas the expression of GDNF was statistically lower in the P17 OIR group compared with the P17 control group. The expression of CNTF and GDNF in the P17 hADSCs treatment group was statistically higher than in the P17 OIR group. However, the expression of BDNF in the P17 hADSCs treatment group was statistically lower than in the P17 OIR group. This study provides evidence for the neuroprotective effects of hADSCs in OIR.
Introduction
Diabetic retinopathy (DR), retinal vein occlusion (RVO), retinopathy of prematurity (ROP), and other ischemic retinopathies can cause irreversible vision loss, and the incidence of these diseases is increasing yearly 1,2 . The pathological mechanism of ischemic retinopathy involves hemorrhage and edema resulting from vascular leakage 3,4 and neuronal death due to glycolysis and reduced oxidative phosphorylation rates 5 . Although anti-vascular endothelial growth factor (VEGF) therapy effectively inhibits neovascularization, reduces edema 6 , and is widely used clinically, studies have shown that 15% to 20% of DR patients 7 do not respond adequately or fully to anti-VEGF therapy. Moreover, VEGF plays a crucial role in neural cell survival 8,9 , and chronic inhibition of VEGF-A has been linked to significant retinal ganglion cell loss 10 . Considering the need for repetitive injections and the associated risks, there is a critical need to develop new treatments that reduce neovascularization and neural damage.
Mesenchymal stem cell (MSC) therapy, owing to its unique advantages 11,12 , such as cell replacement, paracrine activity, and lack of immune rejection, holds promise for treating retinal diseases. It has also been proven safe for intraocular use in oxygen-induced retinopathy (OIR) 13 . Among the various sources of MSCs, adipose-derived mesenchymal stem cells (ADSCs) are relatively easy to obtain and have higher proliferation rates than bone marrow-derived mesenchymal stem cells (BMSCs), making them particularly attractive 14,15 . Studies have already demonstrated that MSCs can reduce neovascularization in ischemic retinopathy 16,17 . However, whether MSCs offer neuroprotection in ischemic retinopathy has yet to be thoroughly investigated. In this study, we intravitreally injected human adipose-derived mesenchymal stem cells (hADSCs) into OIR mice, a well-established model of ischemic retinopathy. Histological evaluation (HE), retinal flat mount, TUNEL (terminal deoxynucleotidyl transferase dUTP nick end labeling), and electroretinogram (ERG) analyses were performed to assess changes in neurological function. In addition, Western blot analysis was used to evaluate the expression of ciliary neurotrophic factor (CNTF), glial cell line-derived neurotrophic factor (GDNF), and brain-derived neurotrophic factor (BDNF) to elucidate the underlying mechanism of hADSC action. This study provides valuable data supporting the neuroprotective potential of hADSCs in ischemic retinopathy.
Culture and Characterization of hADSCs
The second passage of hADSCs was obtained from the Tissue Engineering Center of Peking Union Medical College, China. The hADSCs were cultured in a humidified incubator, and 48 h later, half of the culture medium was replaced. Subculturing was performed when the cells reached 80% confluence. Flow cytometry was used to characterize the third passage of hADSCs by analyzing specific surface antigens, including CD29, CD34, CD44, CD105, Flk-1, and HLA-DR (BD Biosciences, Franklin Lakes, NJ, USA), according to the manufacturer's instructions. Data analysis was conducted using SPSS 23.0 (IBM Corp., Armonk, NY).
Oxygen-Induced Retinopathy Model
Pregnant C57BL/6J mice were provided by the Laboratory Animal Center of Southern Medical University, China. All animal experiments followed the ARVO Statement for the Use of Animals in Ophthalmic and Vision Research and received approval from the local animal welfare committee. Mouse pups were either maintained in a normal environment without any treatment (control group) or exposed, along with their mothers, to 75% ± 2% oxygen for 5 days from P7 to P12 (OIR group), as described by Smith et al. 18 After exposure, half of the OIR mice were maintained in a normal environment, whereas the other half received intravitreal injections of hADSCs at P12 (hADSCs treatment group). Relevant tests were conducted at P17. A schematic diagram illustrates the treatment and examination time points (Fig. 1B).
Histological Evaluation
Enucleated eyes were fixed with 4% paraformaldehyde, dehydrated in graded ethanol and xylene, and embedded in paraffin wax. Retinal sections, 4 μm in thickness and parallel to the sagittal axis of the optic nerve, were obtained and processed for HE. Vascular endothelial cell (VEC) nuclei that broke through the retina's inner limiting membrane (ILM) were counted under 10X magnification in one section. The cell density, as described previously by Lee et al. 19 , and the thickness of the inner nuclear layer (INL) and outer nuclear layer (ONL) in the central retinal area were analyzed. HE was performed on five eyes in each of the control, OIR, and hADSCs treatment groups at P17.
Retinal Flat Mount
Mouse pups were anesthetized and received retro-orbital injections of fluorescein isothiocyanate dextran 20 . Ten seconds after injection, the mice were euthanized with pentobarbital. Enucleated eyes were fixed in 4% paraformaldehyde for 30 min at room temperature. The retinas were separated, cut into four parts, and mounted. Retinal flat mounts were photographed using a fluorescence microscope (Zeiss Axioplan 2 Imaging, Zeiss, Gottingen, Germany) to generate images of the entire retina. Retinal flat mounts were prepared from five eyes in each of the control, OIR, and hADSCs treatment groups at P17.
Terminal Deoxynucleotidyl Transferase dUTP Nick End Labeling (TUNEL)
TUNEL assays of the retina were performed according to the manufacturer's instructions (Fluorescein Kit, Roche, Switzerland, 11684795910). The assay utilized the green channel, with DAPI staining in the blue channel. Images were captured using a fluorescence microscope (Zeiss Axioplan 2 Imaging, Zeiss, Gottingen, Germany). The TUNEL assay was performed on five eyes in each of the control, OIR, and hADSCs treatment groups at P15.
Electroretinogram
After dark adaptation for 12 h, mice were anesthetized by intraperitoneal injection of 10% chloral hydrate (2.5 mL/kg). The eyes were dilated with 0.5% tropicamide. Following the application of a small drop of 2.5% hypromellose, a ring-shaped contact electrode was gently positioned on the cornea. Reference and ground electrodes were placed under the tongue and tail, respectively. Stimulation and recording of the ERGs were performed using the RETIscan system (Roland Consult, Brandenburg, Germany). The scotopic flash ERG was recorded with a white flash at intensities ranging from 0.01 to 3.7 cd·s/m². ERG recordings were obtained from five eyes in each of the control, OIR, and hADSCs treatment groups at P17.
Western Blot Analysis
Retinal protein was extracted and eluted using Laemmli buffer, then separated by 12% sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE). Subsequently, the proteins were transferred onto a polyvinylidene fluoride membrane. The membrane was blocked with 5% nonfat dried milk in Tris-buffered saline containing 0.1% Tween-20 (TBST) at room temperature for 2 h. After three TBST washes (10 min each), the membrane was incubated overnight at 4°C with primary antibodies. The primary antibodies used in this study included anti-CNTF antibody (1:200, Santa Cruz Biotechnology, TX, USA, sc-365210), anti-GDNF antibody (1:1,000, Santa Cruz Biotechnology, TX, USA, sc-13147), anti-BDNF antibody (1:500, Santa Cruz Biotechnology, TX, USA, sc-546), and anti-beta Actin antibody (1:5,000, Abcam, MA, USA, ab6276). The membrane then underwent three additional TBST washes (10 min each) and was incubated with goat anti-mouse secondary antibody (1:10,000, Invitrogen, CA, USA, 31430) at room temperature for 1 h. Finally, the membrane was subjected to enhanced chemiluminescence and detected using photographic film. Western blot analysis was performed on samples from the control, OIR, and hADSCs treatment groups at P17.
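Although the paper does not describe its quantification software, band intensities from blots like these are typically quantified by densitometry and normalized to the β-Actin loading control (as stated in the Figure 5 legend). A minimal Python sketch of that normalization step, using invented intensity values:

    # Hypothetical densitometry normalization to the beta-Actin loading
    # control; all intensity values are invented for illustration.
    band_intensity = {
        "control":   {"CNTF": 1200.0, "GDNF": 2100.0, "BDNF":  900.0, "bActin": 5000.0},
        "OIR":       {"CNTF": 2600.0, "GDNF": 1100.0, "BDNF": 1900.0, "bActin": 5100.0},
        "treatment": {"CNTF": 3400.0, "GDNF": 2500.0, "BDNF": 1200.0, "bActin": 4900.0},
    }

    def normalized(bands: dict, target: str) -> float:
        """Relative expression = target band intensity / loading-control intensity."""
        return bands[target] / bands["bActin"]

    for group, bands in band_intensity.items():
        ratios = {t: round(normalized(bands, t), 2) for t in ("CNTF", "GDNF", "BDNF")}
        print(group, ratios)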
Statistical Analysis
Data are expressed as mean ± standard deviation. Statistical analysis was conducted using SPSS 23.0 (IBM Corp., Armonk, NY). Group differences were assessed using one-way analysis of variance (ANOVA) followed by Fisher's least significant difference (LSD) test. P < 0.05 was considered statistically significant.
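For readers without SPSS, the same analysis can be reproduced in a few lines of Python; this is a minimal sketch with made-up group values (not the study's data), implementing Fisher's LSD as pairwise t-tests on the pooled within-group mean square error after a significant omnibus ANOVA:

    # One-way ANOVA followed by Fisher's LSD; group values are hypothetical.
    from itertools import combinations
    import numpy as np
    from scipy import stats

    groups = {
        "control":   np.array([22.0, 24.5, 19.5]),
        "OIR":       np.array([1.6, 1.8, 1.7]),
        "treatment": np.array([54.0, 59.0, 51.0]),
    }

    # Omnibus one-way ANOVA across the three groups.
    f_stat, p_omnibus = stats.f_oneway(*groups.values())
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_omnibus:.4f}")

    if p_omnibus < 0.05:
        n_total = sum(len(v) for v in groups.values())
        k = len(groups)
        # Pooled within-group mean square error (MSE).
        mse = sum(((v - v.mean()) ** 2).sum() for v in groups.values()) / (n_total - k)
        for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
            t = (a.mean() - b.mean()) / np.sqrt(mse * (1 / len(a) + 1 / len(b)))
            p = 2 * stats.t.sf(abs(t), df=n_total - k)
            print(f"{name_a} vs {name_b}: t = {t:.2f}, p = {p:.4f}")

Unlike corrected post hoc tests, LSD applies no multiple-comparison penalty, which is why it is gated on a significant omnibus result.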
Histological Evaluation
In the P17 control group, no VEC nuclei or neovascularization breaching the ILM of the retina were observed (Fig. 2A). In the P17 OIR group, numerous VEC nuclei and neovascularization breached the ILM, with the VEC nuclei numbering 45.3 ± 3.06 (Fig. 2B). In the P17 hADSCs treatment group, VEC nuclei and neovascularization breaking through the retinal ILM were infrequent, with the number of VEC nuclei being 8.3 ± 1.53 (Fig. 2C). The injected cells were located in the vitreous cavity and above the retina. A few infiltrating cells were discernible within the retinal tissue. Compared with the P17 control group, the P17 OIR group showed a significant increase in the number of VEC nuclei that broke through the ILM (P < 0.0001). The P17 hADSCs treatment group exhibited a significant decrease in the number of VEC nuclei breaking through the ILM compared with the P17 OIR group (P < 0.0001; Fig. 2D). The cell density and thickness of the INL in the P17 normal group, the OIR group, and the hADSCs treatment group were 19.67 ± 1.53 cells/30 µm², 17.33 ± 1.15 cells/30 µm², and 18 ± 1 cells/30 µm², and 43.04 ± 0.76 µm, 31.94 ± 1.39 µm, and 30.37 ± 2.93 µm, respectively. The cell density and thickness of the ONL in the P17 normal group, the OIR group, and the hADSCs treatment group were 131 ± 6.23 cells/50 µm², 115.3 ± 3.51 cells/50 µm², and 123.67 ± 4.04 cells/50 µm², and 54.94 ± 1.02 µm, 52.7 ± 1.25 µm, and 50.67 ± 0.49 µm, respectively. Statistical analysis revealed a significant reduction in both INL and ONL thickness in the P17 OIR group compared with the P17 normal group (P < 0.001 and P = 0.030; Fig. 2I, 2J). There was no significant change in INL thickness, but ONL thickness was reduced, in the P17 hADSCs treatment group compared with the P17 OIR group (P = 0.358 and P = 0.043; Fig. 2I, J). Moreover, the cell density in the INL at P17 in the OIR group was significantly lower than in the P17 normal group (P = 0.007; Fig. 2K), with no significant difference compared with the P17 hADSCs treatment group (P = 0.075; Fig. 2K). Furthermore, there were no significant differences in cell density within the ONL between the P17 control group and the hADSCs treatment group compared with the P17 OIR group (P = 0.062 and P = 0.537; Fig. 2L).
Terminal Deoxynucleotidyl Transferase dUTP Nick End Labeling (TUNEL)
In the P15 control group, no apparent TUNEL-positive cells were observed (Fig. 3B), with only 0.333 ± 0.577 positive cells per field. The P15 OIR group showed numerous TUNEL-positive cells, primarily located in the INL (Fig. 3E), with the number of positive cells reaching 21.667 ± 1.528 per field. In the P15 hADSCs treatment group, some TUNEL-positive cells were present (Fig. 3H), with the number of positive cells amounting to 9.176 ± 1.023 per field. The number of positive cells in the P15 OIR group was significantly higher than in both the P15 control group (P < 0.0001) and the P15 hADSCs treatment group (P < 0.0001).
Electroretinogram
Scotopic ERG measurements showed that, in the P17 control group, P17 OIR group, and P17 hADSCs treatment group, the amplitudes of the a-wave were 22 ± 2.89 μV, 1.7 ± 0.173 μV, and 54.67 ± 4.041 μV, respectively, while the amplitudes of the b-wave were 70 ± 6.03 μV, 22.2 ± 1.732 μV, and 40.33 ± 1.528 μV, respectively. The amplitudes of both the a-wave and b-wave in the P17 OIR group were significantly lower than in the P17 control group (P < 0.0001 for both). The amplitudes of both the a-wave and b-wave in the P17 hADSCs treatment group were significantly higher than in the P17 OIR group (P < 0.0001 for a-wave, P < 0.001 for b-wave; Fig. 4A, B). The latencies of the a-wave and b-wave in the P17 control group, P17 OIR group, and P17 hADSCs treatment group were as follows: a-wave, 18 ± 2.00 ms, 25.67 ± 2.52 ms, and 15.33 ± 1.53 ms; b-wave, 38 ± 5.03 ms, 46 ± 3.79 ms, and 25.33 ± 3.61 ms, respectively. The latencies of both the a-wave and b-wave in the P17 OIR group were significantly longer than in the P17 control group (P = 0.003 for a-wave, P = 0.047 for b-wave). The latencies of both the a-wave and b-wave in the P17 hADSCs treatment group were significantly shorter than in the P17 OIR group (P = 0.001 for both; Fig. 4A, C).
Expression of CNTF, GDNF, and BDNF
The expression of CNTF and BDNF in the P17 OIR group was statistically higher than in the P17 control group (P < 0.0001). The expression of GDNF in the P17 OIR group was statistically lower than in the P17 control group (P < 0.0001). The expression of CNTF and GDNF in the P17 hADSCs treatment group was statistically higher than in the P17 OIR group (P < 0.0001). The expression of BDNF in the P17 hADSCs treatment group was statistically lower than in the P17 OIR group (P < 0.0001; Fig. 5).
Discussion
Ischemic retinopathy can lead to retinal neuronal dysfunction 21 . Our study was designed to provide evidence for the neuroprotective effects of hADSCs in OIR. We used HE, retinal flat mount, TUNEL assays, ERG, and Western blot analysis to assess neuroprotection. The results of this study support the neuroprotective value of intraocular hADSCs in OIR.
Our results revealed that the thickness and cell density of the INL and ONL in the P17 OIR group were significantly lower than in the P17 normal group. These findings align with our ERG results, further corroborating the detrimental impact of OIR on retinal neurofunction. Intriguingly, the INL and ONL thickness in the P17 hADSCs treatment group was lower than that in the P17 OIR group. This observation diverges from a prior study that reported an increase in INL and ONL thickness following treatment 22 . Although the difference was not statistically significant, it is noteworthy that the cell density within the INL and ONL of the P17 hADSCs treatment group was slightly higher than that of the P17 OIR group. This alignment with the ERG results underscores the potentially influential role of cell density in shaping retinal function and strengthens the suggestion that hADSC treatment can enhance neurological function.
The occurrence of retinal nonperfusion areas signifies inadequate perfusion of the retina, leading to altered neuronal and dopaminergic neurotransmitter signaling 23 . It also reduces the expression of synaptic proteins 24 and disrupts glial function 25 . In our study, the nonperfusion areas in the P17 hADSCs treatment group were significantly smaller than those in the P17 OIR group, suggesting that hADSCs improved retinal neuronal function. The TUNEL analysis, another morphological test for neurological changes in our study, provides insight into cell apoptosis and the extent of tissue damage. Existing reports indicate that ischemic retinopathy induces neuronal death in the inner retina, particularly around P14 and P15 26,27 , leading to retinal neuronal dysfunction 28 . Our results demonstrate a significantly higher number of apoptotic cells in the P15 OIR group compared with the P15 control group, primarily within the INL. Fletcher's research suggests that neurochemical changes in OIR can be attributed to alterations in Müller cells 29 . Interestingly, Müller cell bodies lie within the INL, aligning with our observation of apoptotic cells predominantly in the INL. Furthermore, our study reveals a significantly reduced number of apoptotic cells in the P15 hADSCs treatment group compared with the P15 OIR group. These findings suggest that hADSCs effectively mitigate damage and protect retinal neurofunction in OIR.
The neuroprotective effects of hADSCs in OIR mice were further confirmed by the ERG analysis in our study. Both scotopic and photopic ERG parameters minimally reflect inner retinal function 30 . The a- and b-waves of the ERG reflect the function of photoreceptors, bipolar cells, and Müller cells. Studies have emphasized the clinical significance of changes in amplitude 31 . In our research, the amplitudes of the a-wave and b-wave in the P17 OIR group were significantly reduced compared with the P17 control group, indicating impaired neuronal function in the OIR model. This finding aligns with reports of retinal function defects in the rat OIR model 32 . Notably, the P17 hADSCs treatment group exhibited significantly higher a-wave and b-wave amplitudes than the P17 OIR group, suggesting that the hADSCs improved retinal neurofunction. This effect is consistent with the report that BMSCs enhance ERG amplitude 33 . In addition, we observed the latencies of the a- and b-waves, finding a significant prolongation in the OIR group, akin to studies reporting prolonged latency in retinal ischemia 34 . Furthermore, the latencies of the a-wave and b-wave in the P17 hADSCs treatment group were shorter than in the P17 OIR group, indicating improved retinal function.
CNTF, GDNF, and BDNF mediate neuroprotective functions in the retina and play pivotal roles in nervous system development and function [35][36][37] . In our study, we examined the expression of these neurotrophic factors using Western blot analysis to uncover the potential mechanisms of hADSCs' neuroprotection in OIR. Our results revealed that CNTF
expression was significantly increased in P17 OIR mice compared with the P17 control group, in line with reports of increased CNTF levels in OIR models 38,39 . BDNF showed an expression pattern similar to CNTF, with many reports confirming increased BDNF and fibroblast growth factor 2 (FGF2) during retinal damage 40,41 . Our findings also indicated that the expression of GDNF was significantly decreased in P17 OIR mice compared with the P17 control group, consistent with reports of reduced GDNF in damaged retinas 42 . Following hADSC treatment, our study showed a significant increase in the expression of both CNTF and GDNF. Numerous studies have confirmed that elevated CNTF and GDNF exert neuroprotective effects on the retina 43,44 , suggesting that increased expression of CNTF and GDNF may be one of the mechanisms underlying hADSCs' neuroprotection in OIR. However, BDNF expression decreased, differing from reports of increased BDNF after MSC application in traumatic optic neuropathy 45 . Given the absence of references related to the OIR model, we speculate that intravitreal injection of hADSCs may not have a significant short-term effect on BDNF in the OIR model.
One limitation of our study is that we primarily investigated three neurotrophic factors affecting the retina and did not investigate whether the effects of hADSCs are autocrine or paracrine. The mechanism underlying hADSCs' effects is likely complex and involves multiple factors; identifying additional neuroprotective factors and exploring downstream mediators will be essential. Another limitation is that, while our research has corroborated the notion that reducing non-perfusion areas can lead to neuronal alterations, it is crucial to acknowledge that "improvement of vascularization" cannot supplant the fundamental concept of "neuroprotection." Moreover, while previous experiments have shown that intraocular injection of stem cells inhibits inflammation 46,47 and the HE staining of the hADSCs group showed the appearance of infiltrating cells, it is essential to acknowledge that we did not conduct a comprehensive follow-up study to definitively confirm the nature and extent of the inflammatory response associated with xenogeneic MSC transplantation. Further research is warranted to address these questions before considering the clinical application of hADSCs.
In conclusion, to the best of our knowledge, this study is the first attempt to evaluate the neuroprotective effects of hADSCs in OIR. We provide preliminary data supporting the neuroprotective function and potential mechanisms of hADSCs in OIR. Using hADSCs to enhance retinal neurofunction could be a promising strategy for clinical treatment.
Figure 1.
Figure 1. Culture of hADSCs and experimental timeline. (A) The third passage of hADSCs displays spindle and polygonal shapes. (B) A schematic diagram illustrating the experimental time points. hADSCs: human adipose-derived mesenchymal stem cells; OIR: oxygen-induced retinopathy; MSC: mesenchymal stem cell.
Figure 2.
Figure 2. The HE and retinal flat mount results. (A) The HE of the P17 control group. No VEC nuclei or neovascularization breached the retina's ILM. (B) The HE of the P17 OIR group. Numerous VEC nuclei and neovascularization breached the ILM (black arrows). (C) The HE of the P17 hADSCs treatment group. Few VEC nuclei and neovascularization broke through the ILM, and the injected cells were observed in the vitreous cavity and above the retina (green arrows). (D) Quantification of VEC nuclei that breached the retina's ILM. The P17 OIR group exhibited a significant increase compared with the P17 control group (P < 0.0001) and the P17 hADSCs treatment group (P < 0.0001) (n = 5). (E) Retinal flat mount of the P17 control group. No apparent neovascular fluorescence or nonperfusion areas were observed. (F) Retinal flat mount of the P17 OIR group. Extensive neovascular fluorescence and nonperfusion areas (yellow arrows) were evident. (G) Retinal flat mount of the P17 hADSCs treatment group. Slight neovascular fluorescence and no apparent nonperfusion areas were detected. (H) The extent of retinal nonperfusion areas. The P17 OIR group exhibited a significant increase compared with both the P17 control group (P < 0.0001) and the P17 hADSCs treatment group (P < 0.0001). (I, J) INL and ONL thickness. Both INL and ONL thickness were significantly reduced in the P17 OIR group compared with the P17 normal group (P < 0.001 and P = 0.030; Fig. 2I, J). There was no significant change in INL thickness, but ONL thickness was reduced, in the P17 hADSCs treatment group compared with the P17 OIR group (P = 0.358 and P = 0.043). (K, L) Cell density in the INL and ONL. The cell density in the INL of the P17 OIR group was significantly lower than in the P17 normal group (P = 0.007), with no significant difference compared with the P17 hADSCs treatment group (P = 0.075). There were no significant differences within the ONL between the P17 control group and the hADSCs treatment group compared with the P17 OIR group (P = 0.062 and P = 0.537; n = 5). HE: histological evaluation; VEC: vascular endothelial cell; ILM: inner limiting membrane; NOR: normal group; OIR: oxygen-induced retinopathy; hADSCs: human adipose-derived mesenchymal stem cells; INL: inner nuclear layer; ONL: outer nuclear layer; MSC: mesenchymal stem cell; GCL: ganglion cell layer; IPL: inner plexiform layer; OPL: outer plexiform layer; *P < 0.05; **P < 0.01; ***P < 0.001; ****P < 0.0001.
Figure 3.
Figure 3. TUNEL results. (A, B, C) TUNEL results from the P15 control group. No apparent TUNEL-positive cells were observed. (D, E, F) TUNEL results from the P15 OIR group. A substantial number of TUNEL-positive cells were evident (white arrows), primarily localized in the INL. (G, H, I) TUNEL results from the P15 hADSCs treatment group. Some TUNEL-positive cells were observed. OIR: oxygen-induced retinopathy; INL: inner nuclear layer; hADSCs: human adipose-derived mesenchymal stem cells; TUNEL: terminal deoxynucleotidyl transferase dUTP nick end labeling.
Figure 4.
Figure 4. Electroretinogram results. (A, B) Amplitudes of the a-wave and b-wave. The amplitudes of the a-wave and b-wave in the P17 OIR group were significantly lower than in the P17 control group (P < 0.0001 for both). The amplitudes of the a-wave and b-wave in the P17 hADSCs treatment group were significantly higher than in the P17 OIR group (P < 0.0001 and P = 0.001) (n = 5). (A, C) Latencies of the a-wave and b-wave. The latencies of the a-wave and b-wave in the P17 OIR group were significantly longer than in the P17 control group (P = 0.003 and P = 0.001). The latencies of the a-wave and b-wave in the P17 hADSCs treatment group were significantly shorter than in the P17 OIR group (P = 0.047 and P = 0.001) (n = 5). *P < 0.05; **P < 0.01; ***P < 0.001; ****P < 0.0001. NOR: normal group; OIR: oxygen-induced retinopathy; hADSCs: human adipose-derived mesenchymal stem cells; MSC: mesenchymal stem cell.
Figure 5.
Figure 5. Expression of CNTF, GDNF, and BDNF. (A) Western blot analysis of the protein expression of CNTF, GDNF, and BDNF in the P17 control, OIR, and hADSCs treatment groups. The relative protein expression levels were normalized to β-Actin. (B) Quantification illustrating trends in the expression of CNTF, GDNF, and BDNF in the P17 control, OIR, and hADSCs treatment groups. The expression of CNTF and BDNF in the P17 OIR group was statistically higher than in the P17 control group (P < 0.0001). The expression of GDNF in the P17 OIR group was statistically lower than in the P17 control group (P < 0.0001). The expression of CNTF and GDNF in the P17 hADSCs treatment group was statistically higher than in the P17 OIR group (P < 0.0001). The expression of BDNF in the P17 hADSCs treatment group was statistically lower than in the P17 OIR group (P < 0.0001). CNTF: ciliary neurotrophic factor; GDNF: glial cell line-derived neurotrophic factor; BDNF: brain-derived neurotrophic factor; NOR: normal group; OIR: oxygen-induced retinopathy; hADSCs: human adipose-derived mesenchymal stem cells; MSC: mesenchymal stem cell. | 2023-11-30T06:17:32.751Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "bb6b3ac502d53a008267d2c0e338a390e9e65aa3",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1177/09636897231213309",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8869930c82e14852b56d2bc9a490728fa63792d9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
9168822 | pes2o/s2orc | v3-fos-license | Response to Stoll and Resta
To the Editor: We thank Stoll and Resta1 for their feedback on our data in their letter titled “Considering the Cost of Expanded Carrier Screening Panels,” and welcome discussion on the merits of expanded carrier screening. We understand this is the beginning, not the end, of genomic applications in reproductive care and fully expect that enhancements will continually increase the test's efficacy. As we consider the correct path to a test's maximal clinical utility, an analogy to prenatal screening for Down syndrome seems applicable.
term needs to be considered and measured, as well as what carrier individuals and couples actually do with the information. Thus, carrier panels lower the cost of testing but could conversely increase the other costs of a carrier screening program.
Furthermore, the vast majority of conditions included in the panel are extremely rare; at least 30 conditions have an incidence of <1 in 1 million, and all but a handful occur in <1 in 5,000 individuals (ironically, α-thalassemia, perhaps the most common genetic disease in the world, is not included in the panel, we presume for technical reasons). Therefore, the likelihood of follow-up carrier testing identifying a mutation in the partner is expected to be small.
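The smallness of that likelihood follows from Hardy-Weinberg arithmetic. As a back-of-the-envelope sketch (our illustration, not the correspondents' calculation), for an autosomal recessive condition the partner of a known carrier is a carrier with probability roughly equal to the population carrier frequency:

    # Hardy-Weinberg arithmetic for autosomal recessive conditions.
    # Incidences match the letter's thresholds, not any specific disease.
    import math

    for incidence in (1 / 5_000, 1 / 1_000_000):
        q = math.sqrt(incidence)      # disease allele frequency
        carrier = 2 * q * (1 - q)     # heterozygote (carrier) frequency
        couple = carrier ** 2         # both partners carriers, random mating
        print(f"incidence 1/{round(1 / incidence):,}: "
              f"partner also a carrier ~1/{round(1 / carrier)}, "
              f"carrier couple ~1/{round(1 / couple):,}")

For a 1-in-5,000 condition this gives a partner-positive rate of roughly 1 in 36; for a 1-in-1-million condition, roughly 1 in 500.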
In this study, 127 carrier couples (0.54% of all patients who underwent testing) were identified. Of note, 47 of these cases were positive for α-1-antitrypsin deficiency, with both the S and Z alleles included in the panel. The S allele is known to be common in some populations and is not thought to be of much clinical importance unless paired with a more severe allele, and even then would be expected to cause a milder phenotype. On removing α-1-antitrypsin and the conditions for which screening guidelines already exist (through the American College of Obstetricians and Gynecologists and/or the American College of Medical Genetics and Genomics), such as cystic fibrosis, sickle cell anemia, β-thalassemia, spinal muscular atrophy, and Tay-Sachs disease, the detection of carrier couples would drop to <0.1%. Included in this figure are conditions such as familial Mediterranean fever, factor XI deficiency, and GJB2-related hearing loss. Considering the mild and variable phenotype, age of onset, and treatment options for conditions such as these, significant ethical dilemmas accompany including them on a preconception/prenatal carrier screening panel.
Apart from the cost argument for increased screening, the authors suggest that the increasing population ethnic admixture is further justification for expanded "panethnic" carrier testing. Although we agree that the increasingly diverse background of the US population presents new carrier screening and risk assessment challenges, the possibility that this increased diversity may actually be decreasing the incidence of recessive genetic disease should be considered. The authors state that the "data show a number of severe Mendelian disorders are more prevalent than commonly understood." On the basis of the presented data, there was nothing to show an increased incidence of disease. The carrier frequencies were higher than previously reported for some conditions and lower than previously reported for others, but there is no measure of prevalence of these recessive conditions.
The stated carrier frequencies do not take into account the possibility of ascertainment bias in what is not likely to be a random sample of the population. For example, a couple from a population with a low incidence of a particular recessive disorder might happen to have a family history of the disorder, which led them to undergo the testing in the first place. This would elevate the apparent carrier frequency in the population. The possibility of ascertainment bias is suggested by the identification of 78 homozygotes/compound heterozygotes.
Down syndrome screening began with crude risk estimates based on maternal age. Introduction of the α-fetoprotein biochemical assay improved sensitivity, but it was still poorly reliable by current standards. False reassurances occurred, as did difficulties regarding counseling and results interpretation. Nonetheless, these tests were implemented. They represented an improvement over contemporary approaches but did not signal the end of related research. Today, the options for prenatal aneuploidy screening are more promising than ever and yet still merit further refinement. Similarly, expanded carrier screening represents a vast improvement over an ethnicity-based approach for a small number of diseases, and routine implementation can serve to further its development.
We are first obliged to address specifics from the correspondents' letter. They make a factually incorrect assertion: in the disease list of the original study, the detection rate is actually 80% or greater in at least one ethnicity for more than half (n = 55) of the diseases and is more than 50% for almost all (n = 84). Panethnic screening by targeted genotyping results in a given ethnicity experiencing a lower detection rate for a subset of diseases. We believe this approach is acceptable for the ease of having a single protocol, in contrast with the ethnic stratification approach, because of the substantial limitations we described in the original paper and that were acknowledged by Stoll and Resta.
The authors call for more investigation into clinical outcomes, counseling resources, and ultimately, clinical guidelines issued by the relevant professional organizations. We eagerly await these guidelines and will continue to furnish data to support the field's development. Reasonably, organizations such as the American College of Medical Genetics and Genomics aim to make evidence-based recommendations. Therefore, data creation through testing implementation is a necessary first step. Individual use of the test prompts guidelines, not vice versa. Noninvasive aneuploidy testing of cell-free fetal DNA is a most recent example of this ordering.
Stoll and Resta express concern about costs associated with follow-up testing, including gene sequencing, and posit that these should be included in the total cost of a carrier screening program. Using cystic fibrosis as an established standard, panethnic carrier screening was implemented despite reduced ethnic-specific detection rates and expensive sequencing.
They also suggest that the diseases tested are too rare by noting that most have a prevalence of <1 in 5,000 individuals. Mendelian diseases, by their very nature, are uncommon. A 1 in 5,000 prevalence strikes us as an unreasonable threshold because few monogenic diseases exceed it. In fact, many diseases already endorsed for screening by the American College of Medical Genetics and Genomics occur with a lower frequency, including spinal muscular atrophy.
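The rebuttal here is ultimately arithmetic: individually rare conditions aggregate. As a sketch with purely illustrative incidences (the original study reported ~1 in 400 US births affected by a panel disease, as noted below), the chance of being affected by at least one of many rare, roughly independent conditions is:

    # Aggregate risk from many individually rare conditions (illustrative).
    incidences = [1 / 10_000] * 20 + [1 / 50_000] * 50 + [1 / 1_000_000] * 30

    p_unaffected = 1.0
    for i in incidences:
        p_unaffected *= (1 - i)   # assume conditions occur independently
    p_affected = 1 - p_unaffected

    print(f"P(affected by at least one condition) ~ 1 in {round(1 / p_affected)}")

For small incidences this is approximately the sum of the individual incidences, so a panel of 100 rare diseases can still carry an aggregate risk on the order of 1 in a few hundred.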
"Rare" must be examined in light of the ability to analyze genes in a multiplex fashion. In fact, ~1 in 400 US births is affected by a disease in our original study. According to the criteria for general population screening established by the World Health Organization in 1968, "a disease need not have a high degree of prevalence to be considered an important problem. " 2 Individually rare diseases, when viewed collectively, are acknowledged to occur with common frequency and confer significant public health and patient burden. 3 Finally, psychosocial implications of carrier screening have been exhaustively studied. A recent meta-analysis of long-and short-term effects concluded that (i) anxiety was overridden by knowledge of reproductive options and may be allayed by counseling services, and (ii) guilt was significantly associated with individuals who discovered carrier status after they had an affected child (i.e., "survivor guilt"). 4 On the basis of this, one may conclude that the benefits of testing more often outweigh the risks, and that the effects of not testing carry potentially greater psychosocial risks. Many similar conclusions have been drawn on persons who have undergone genomic analysis.
As genomic technologies continue to advance, we look forward to their many positive contributions to the health of our populations while pursuing the answers to ensuing difficulties. While this undoubtedly long process unfolds, we consider the bioethical principles that guide medical care. The suggestion of an "ethical dilemma" by the authors may strike a dissuasive chord. Yet a fundamental ethical principle Stoll and Resta do not address is respect for a patient's autonomy. Current carrier screening models hinge critically on personal, not provider, values regarding reproductive autonomy. Given that screening remains voluntary, we argue for allowing individuals the self-determination of their reproductive options, with the benefits and limitations that accompany them. Patients, by and large, are able to weigh and decide on complex information, as they do every day in all parts of medicine. A more significant ethical problem, then, is to not provide patients the opportunity to learn and act on information that can be easily gained with today's technological resources.
"year": 2013,
"sha1": "811a41ec7f0b0dafebc13e8d8387240630a9305f",
"oa_license": "CCBYNCND",
"oa_url": "https://www.nature.com/articles/gim201319.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "37de091664b929bf819d29b4b707d6a867b8ea1e",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
249354180 | pes2o/s2orc | v3-fos-license | Isolated Infiltrative Optic Neuropathy in an Acute Lymphoblastic Leukemia Relapse
Optic nerve infiltration as the first sign of isolated central nervous system relapse of acute lymphoblastic leukemia (ALL) is rare. A seven-year-old girl with standard-risk B-cell ALL who was in remission presented with sudden onset of left eye pain and loss of vision. Examination revealed no perception of light in the left eye with a positive relative afferent pupillary defect. The optic disc was hyperemic and swollen with total obscuration of the disc margin, associated with central retinal artery and vein occlusion. Magnetic resonance imaging of the brain and optic nerve showed left intraorbital optic nerve thickening associated with perineural enhancement and intraconal fat involvement. Lumbar puncture revealed leukemic infiltration with blast cells a week after the onset of eye symptoms, while bone marrow aspiration was negative for malignant cells. A diagnosis of left leukemic optic nerve infiltration with central retinal artery and vein occlusion was made. A high index of suspicion with repeat cerebrospinal fluid sampling is crucial to confirm the diagnosis, as vitreous biopsy may fail to reveal infiltrative cells.
Introduction
Optic nerve involvement is reported to occur in one-sixth to 34% of leukemic cases [1,2]. However, isolated central nervous system (CNS) relapse in acute lymphoblastic leukemia (ALL) with optic nerve infiltration as the initial presentation is rare. The majority of leukemic optic nerve involvement occurs in patients with acute bone marrow disease [1,3]. Optic nerve infiltration may be the first presenting sign of acute leukemic relapse before hematological involvement [3]. We report a rare case of ALL relapse presenting with optic nerve infiltration associated with central retinal vein and artery occlusion without evidence of hematological relapse.
This article was previously presented as a meeting poster at the 33rd Asia-Pacific Association of Cataract & Refractive Surgeons (APACRS)-Singapore National Eye Centre (SNEC) 30th Anniversary Virtual Meeting on July 30-31, 2021.
Case Presentation
A seven-year-old girl with standard-risk B-cell ALL diagnosed two years earlier, who had completed her systemic and prophylactic intrathecal chemotherapy and had been in complete remission for two months, presented with sudden onset of left eye pain and loss of vision. It occurred upon awakening from sleep and was associated with throbbing headaches. She did not have any complaints in the right eye. She had no history of hyperleucocytosis, extramedullary disease, or CNS involvement at her first diagnosis of ALL. Systemically, there was no preceding fever or signs of infection.
Her left visual acuity was no perception of light with a positive relative afferent pupillary defect. Right visual acuity was 3/3 on the Sheridan Gardiner test. The anterior segment was unremarkable in both eyes. Left eye fundus examination revealed a markedly swollen, hyperemic optic disc bulging into the vitreous with total obscuration of the disc margin. Retinal veins appeared very dilated and tortuous, with a pale, edematous macula and retina. There were flame-shaped intraretinal hemorrhages in all four quadrants with vitreous hemorrhage inferiorly. There were no cotton wool spots (Figure 1, Panel B). Her right eye fundus showed a sectoral blurred optic disc margin nasally (Figure 1, Panel A). Otherwise, the retina was normal. A clinical diagnosis of left leukemic optic nerve infiltration with central retinal artery and vein occlusion was made. An urgent magnetic resonance imaging (MRI) of the brain and optic nerve showed left intraorbital optic nerve thickening associated with perineural enhancement and intraconal fat involvement (Figure 2). The right optic nerve was normal, with an absence of enhancement or fat stranding. Otherwise, no abnormal leptomeningeal enhancement or features suggestive of CNS leukemia or increased intracranial pressure were noted. Bone marrow aspiration and trephine biopsy (BMAT) and the initial lumbar puncture (LP) were negative for malignancy. A second LP was done a week later together with anterior chamber tapping and vitreous biopsy.
Cerebrospinal fluid cytology showed the presence of blast cells, suggestive of leukemic infiltration of the CNS (Figures 3A, 3B). Anterior chamber tapping and vitreous biopsy were negative for malignancy. Repeat BMAT showed reactive lymphocytosis with no excess blasts. In view of the leukemia relapse with isolated CNS involvement, she was started on reinduction chemotherapy according to the UK ALL R3 protocol, including intrathecal methotrexate. Neither local ocular chemotherapy nor radiotherapy was initiated. Her left eye remains without perception of light, while her right eye vision has been maintained without deterioration to date.
Discussion
ALL is an aggressive malignant proliferation of lymphoblasts with peak prevalence between the ages of one and four [4]. It can invade blood, bone marrow, and extramedullary sites including the CNS. Treatment regimens are based on risk stratification and prognostic factors which include age, gender, ethnicity, blood count at diagnosis, cell lineage, and CNS involvement [4].
The optic nerve is known to be a sanctuary site for leukemic cells that is relatively unaffected by systemic and even intrathecal chemotherapy [5]. Cases of both unilateral and bilateral optic nerve involvement have been reported [6,7].
In our case, the patient developed an isolated CNS relapse despite receiving intrathecal chemotherapy for CNS prophylaxis at the first diagnosis of ALL and achieving complete remission after completing chemotherapy.
The majority of patients with CNS relapse present with symptoms such as altered mental status, headache, and cranial nerve palsy, while only a minority (6.5%) report blurred vision as the initial complaint [8]. This was seen in our case, where the patient presented with sudden loss of vision and headaches, without other symptoms such as diplopia or altered mental state. Despite having a unilateral eye complaint, fundoscopic examination revealed involvement of both optic nerves, with early optic disc swelling in the contralateral eye.
Rosenthal classified optic nerve infiltration in leukemia into prelaminar and retrolaminar [9]. In prelaminar invasion, infiltrates occur superficial to the lamina cribrosa and can be associated with moderate edema and hemorrhage of the optic nerve head. Visual acuity might be minimally affected unless there is coexisting macular edema and hemorrhage. Retrolaminar optic nerve invasion usually presents with significant disc elevation and edema with marked visual impairment. Our case had both prelaminar and retrolaminar optic nerve invasion, with total obscuration of the disc margin, significant visual loss, and bulging of the optic disc into the vitreous, as evidenced by MRI of the orbit. Nevertheless, the central retinal artery occlusion was believed to be the main contributing factor for the acute loss of vision, rather than optic nerve ischemia from the infiltrative optic neuropathy [5].
Few cases of central retinal vascular occlusion have been reported in ALL, in contrast to other lymphoproliferative disorders such as lymphoma [10][11][12]. A pediatric case of leukemic optic neuropathy with sequential bilateral central retinal artery occlusion was reported by Lin et al. [6], in which the bilateral central retinal artery occlusions developed a few hours apart. It has been postulated that the vascular disturbances that occur in leukemic optic neuropathy are due to leukemic infiltration into or around the blood vessels or secondary to neoplastic emboli [13].
A review by Myers et al. found that malignant cells are identified in the cerebrospinal fluid in a majority of leukemic infiltration cases [14]. In our case, the initial cerebrospinal fluid analysis was negative for blast cells and was only positive in a repeat examination performed a week later. A similar result occurred on immunophenotyping with flow cytometry where evidence of B-lymphoblastic infiltration in cerebrospinal fluid was only revealed in the repeat sample. Thus, a repeat investigation with fresh samples for detecting malignant cells might be indicated if there is a high index of suspicion for infiltrative optic neuropathy when the initial result does not support the diagnosis.
Vitreous biopsy has value in diagnosing a leukemic relapse when there is dense vitreous cellular infiltration [15]. Shenoy et al. reported a similar case of isolated optic nerve relapse with cerebrospinal fluid analysis negative for blast cells, which was confirmed by vitreous biopsy [16]. However, the internal limiting membrane of the retina appears to be preserved from leukemic cell penetration, which, in turn, limits cell invasion into the vitreous [17]. This may explain our negative findings of blast cells in the vitreous sample, similar to a case reported by Bansal et al. [18]. Because of its high risk and low yield, optic nerve biopsy is not recommended [19]. To date, there is no literature on the risk of tumor seeding in vitreous biopsy via pars plana vitrectomy. On the other hand, the occurrence of tumor cell seeding in the needle track following fine needle aspiration biopsy of intraocular tumors has varied from none to 54% [20,21]. Hence, the benefits should outweigh the risks when considering vitreous biopsy.
Conclusions
Optic nerve infiltration may be the only initial presentation indicating relapse of ALL following attainment of disease remission. A high index of suspicion and prompt diagnosis may not only avoid irreversible loss of vision but more importantly can be life-saving for the patient. Repeated cerebrospinal fluid analysis may be necessary in the event of initial negative results for malignancy.
Additional Information Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2022-06-05T15:17:56.616Z | 2022-06-01T00:00:00.000 | {
"year": 2022,
"sha1": "d7930fb28638e2df91e523cb4dd626c0740efb4a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "84d9bce78b7340e5b45f12d78f10d1ffbca607f0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253190047 | pes2o/s2orc | v3-fos-license | The analysis of nursing diagnoses determined by students for patients in rehabilitation units
This study aimed to analyze the nursing diagnoses determined by nursing students for patients in a rehabilitation unit. Data were collected from 190 case reports submitted by nursing students who practiced in the rehabilitation unit and analyzed on the basis of North American Nursing Diagnosis Association (NANDA) International, Inc. nursing diagnoses. Thirty different diagnoses were documented in the rehabilitation unit. The most frequent nursing diagnosis was impaired physical mobility (n=68, 14.6%). The 30 diagnoses were grouped into 10 domains and 20 classes of the NANDA International, Inc. human response patterns. The average quality of the nursing diagnostic statements corresponded to a score of 8.63, indicating relatively good quality. The results of this study will help to improve the quality of nursing process education and provide guidelines to improve the quality of nursing care in the rehabilitation nursing setting in Korea.
INTRODUCTION
Nursing is part of the healthcare system and includes activities for the maintenance and promotion of health status; prevention of disease; and care for individuals, families, and communities in all healthcare environments. To guarantee the adequacy of nursing care, nursing staff share the task of planning with other healthcare professionals (Adamy et al., 2019; Azevedo and Cruz, 2021). While providing nursing care, nurses use the nursing process to solve patient problems; this process includes assessment, diagnosis, planning/intervention, and evaluation (Alsadat Feo et al., 2018). The five sequential steps of the nursing process are essential for nursing education and practice because they help minimize mistakes or omissions by systematizing nursing through a dynamic and cyclical process to improve the health status of patients and guide nurses toward patient-centered nursing care (Chang et al., 2021; Mousavinasab et al., 2020; Seçer and Karaca, 2021; Sezer and Şahin, 2021).
In Korea, since the 1980s, the nursing process has been actively applied in clinical practice; nursing students receive education on major subjects and have the opportunity to clinically apply the nursing process. In particular, the nursing process helps them to accurately understand the health problems of patients in clinical practice and study scientific evidence. Thus, the Korean Accreditation Board of Nursing has set the nursing process as one of the major learning outcomes to be achieved before graduation. As nurses' ability to perform the nursing process leads to evidencebased practical competency, continuous attention to the nursing process is required in education and practice (Jung and Yoo, 2022;Koldestam et al., 2021;Song et al., 2019). Despite the fact that the nursing process is an important indicator covering a high proportion of guidance and evaluation areas in clinical practice education, it is reported that nursing students still have difficulties in the clinical application of this process. In other words, the significance of the nursing process in clinical practice has been emphasized, but specific efforts for improvement remain insufficient (Bayram et al., 2022;Keiffer, 2018;Melin-Johansson et al., 2017;Parvan et al., 2021). Therefore, it is necessary to review the nursing process written by nursing students in the undergraduate course first, as part of a specific effort to reduce difficulties of nursing students in applying the nursing process and to help them apply the nursing process correctly. For this, it is necessary to review the nursing course report that nursing students actually write after clinical practice. In particular, it is necessary to examine whether the nursing diagnosis, which is the essence of the nursing process, is being made correctly.
A nursing diagnosis is a statement that reflects whether nursing students correctly identified the patient's problem (signs or symptoms) and related/risk factors at the initial assessment stage. When a correct nursing diagnosis is made, it leads to correct nursing intervention and evaluation, but when a diagnosis is made incorrectly, the direction of intervention and evaluation is also wrong.
The ability to identify patient problems is a very important factor in patient care, and a precise description of patient problems or demands in the nursing diagnosis statement is an important precondition for ensuring quality patient care. Therefore, to analyze whether nursing students made correct nursing diagnoses by properly selecting the defining characteristics and related/risk factors that serve as diagnostic indicators, it is necessary to closely examine the nursing diagnoses actually made by nursing students. In the review process, we used the quality of nursing diagnosis (QOD) scale, a quality evaluation tool for nursing diagnoses (Florin et al., 2005), to measure the accuracy and quality of the diagnoses more objectively.
Therefore, the purpose of this study was to identify the nursing diagnoses frequently applied to rehabilitation patients and to identify the defining characteristics and related or risk factors of each diagnosis by reviewing case reports submitted by nursing students during clinical practice in the rehabilitation unit. In addition, to evaluate the accuracy of the nursing diagnoses described by the students, the QOD was also assessed.
Research design
This was a retrospective study using the North American Nursing Diagnosis Association (NANDA) International, Inc. nursing diagnoses to analyze and identify the key nursing diagnoses in case reports submitted by 4th grade nursing students who had completed rehabilitation nursing practice.
Data collection
This study analyzed nursing diagnoses extracted from 190 case reports submitted by 4th grade nursing students in the department of nursing at Kyungpook National University (approval number: 2022-0157) who had completed rehabilitation nursing practice in the first and second semesters of 2018 and 2019 (clinical practice for 8 weeks at one general hospital in Korea). Case reports that include nursing process data consisting of patient assessments and interventions are the basis for the nursing diagnoses recorded by students during the clinical practice period. Usually, students observe and provide nursing care to patients and then select one patient to write a case report focusing on the nursing process. Students are required to make at least one nursing diagnosis per patient.
Data analysis
Using the case reports recorded by nursing students, the researchers entered demographic information, the codes of the selected NANDA International, Inc. nursing diagnoses, defining characteristics (symptoms or signs), and related/risk factors into Microsoft Access. The validity of the analysis process was ensured by categorizing the data based on consensus reached through discussion between two researchers.
Data preprocessing
As preprocessing for the analysis, we refined the nursing diagnoses recorded in the students' case reports. Nursing students can make more than one diagnosis when multiple problems are identified in a patient. The diagnoses were reviewed, and in cases of differing translations of an English nursing diagnosis, the same type of diagnosis was unified according to the NANDA International, Inc. list (Herdman and Kamitsuru, 2018). If the students used the defining characteristics and related/risk factors suggested by NANDA International, Inc. but did not select the appropriate diagnosis, the case was matched with the most appropriate diagnosis from the list. A small number of other medical diagnoses and nursing problems with ambiguous meanings were excluded from the analysis.
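A minimal sketch of the kind of label unification this step implies (the mapping entries are invented examples, not the study's actual translation table):

    # Hypothetical normalizer unifying variant wordings/translations of a
    # diagnosis to one canonical NANDA-I label; entries are illustrative.
    CANONICAL = {
        "impaired physical mobility":   "Impaired physical mobility",
        "physical mobility impairment": "Impaired physical mobility",
        "body mobility impairment":     "Impaired physical mobility",
        "activity intolerance":         "Activity intolerance",
        "knowledge deficit":            "Deficient knowledge",
        "deficient knowledge":          "Deficient knowledge",
    }

    def unify(label: str):
        """Return the canonical label, or None for unmatched/ambiguous
        entries (which were excluded from the analysis)."""
        return CANONICAL.get(label.strip().lower())

    assert unify("Body Mobility Impairment") == "Impaired physical mobility"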
NANDA International, Inc. nursing diagnoses
A nursing diagnosis is a scientific judgment of the responses of individuals, families, or communities to actual or potential health problems and life processes. The NANDA International, Inc. taxonomy includes 244 diagnoses for clinical practice, testing, and refinement. The diagnoses are classified into 13 domains and 47 classes. A domain is an area of interest (e.g., health promotion, nutrition, elimination/exchange, activity and rest, perception and cognition, self-perception, role relationships, sexuality, coping and stress tolerance, life principles, safety and protection, and comfort) for nurses. The domains are divided into classes, which are groupings that share common properties. The NANDA International, Inc. nursing diagnoses are submitted and reviewed by nurses based on evidence from various countries. The taxonomy is continuously revised and developed every 4 years with the approval of practicing nurses, nursing researchers, and nursing educators worldwide (Herdman and Kamitsuru, 2018; Zhang et al., 2021).
Appropriateness and quality of nursing diagnoses
When evaluating the adequacy of nursing diagnoses, it is important to note that simply having a diagnosis or a list of diagnoses is not enough. As each nursing diagnosis has a definition and label, the researcher must know the diagnostic indicators. Thus, the appropriateness of nursing diagnoses was confirmed through a quantitative evaluation of the extent to which the defining characteristics of the diagnoses were reflected at the nursing assessment stage. Diagnostic indicators are the information used to distinguish between diagnoses and include defining characteristics and related or risk factors. Defining characteristics are observable cues that are categorized as signs or symptoms of a problem-focused or health-promotion diagnosis. Nursing assessments support the accuracy of the diagnosis by identifying a number of defining characteristics (De Groot et al., 2019; Zhang et al., 2021).
The quality of nursing diagnostic statements was assessed using the QOD tool. The QOD scale consists of four components and 14 criteria that evaluate diagnostic structure and relevance. The four components reflect the problem, etiology, and signs/symptoms (PES) format with an additional general component. Each criterion was given one point, with a maximum score of 14 points. Higher scores indicated better quality of the nursing diagnostic statements. The quality evaluation was performed by two experts with experience in clinical practice guidance. In a previous study, the Cronbach alpha value was reported as 0.86 (Florin et al., 2005), and the value derived from the Kuder-Richardson formula for this study was 0.80.
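For illustration, the Kuder-Richardson formula (here assumed to be KR-20, the usual choice for dichotomous items) mentioned above can be computed as in the following Python sketch; the 5×14 score matrix in the example is hypothetical, standing in for the 14 binary QOD criteria.

```python
import numpy as np

def kr20(scores):
    """Kuder-Richardson 20 reliability for dichotomous (0/1) item scores.
    scores: (n_statements, n_items) array, e.g. 14 binary QOD criteria."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    p = scores.mean(axis=0)                       # proportion of 1s per item
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1.0 - (p * (1 - p)).sum() / total_var)

# Hypothetical 5 statements x 14 criteria, purely for illustration:
rng = np.random.default_rng(0)
print(round(kr20(rng.integers(0, 2, size=(5, 14))), 2))
```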
Ethical considerations
This study was approved by the Institutional Review Board (IRB) of Kyungpook National University (approval number: 2022-0157). The case reports used for data analysis were submitted and used in the study after deleting all personally identifiable information. The IRB approved the waiver of consent to access the data.
Top two defining characteristics related to the top ten nursing diagnoses
The defining characteristics of the top ten nursing diagnoses are listed in Table 3. The most prevalent defining characteristics for each NANDA International, Inc. diagnosis were (a) decrease in gross motor skills for impaired physical mobility, (b) fatigue for activity intolerance, (c) insufficient knowledge for deficient knowledge, (d) impaired ability to access the bathroom for bathing self-care deficit, (e) impaired ability to complete toilet hygiene for toileting self-care deficit, (f) proxy report of pain behavior/activity changes for chronic pain, (g) discontent with situation for impaired comfort, (h) dependency for powerlessness, and (i) alteration in body function for disturbed body image.
Top two related or risk factors related to the top ten nursing diagnoses
The related or risk factors for the top 10 nursing diagnoses are listed in Table 4. Activity intolerance was the most frequently used related factor for impaired physical mobility, physical deconditioning for activity intolerance, insufficient information for deficient knowledge, environmental barrier for bathing self-care deficit, physical barrier for risk for injury, impaired mobility for toileting self-care deficit, injury agent for chronic pain, insufficient environmental control for impaired comfort, anxiety for powerlessness, and alteration in self-perception for disturbed body image.
The quality of nursing diagnostic statements for the top five nursing diagnoses
The average quality of the nursing diagnosis statements written by the students is shown in Table 5. (Table notes: a) cumulative percent for each diagnosis; b) according to NANDA International, Inc., risk diagnoses do not include defining characteristics.)
DISCUSSION
The study aimed to analyze nursing diagnoses using NANDA International, Inc. nursing diagnoses to identify the key nursing diagnoses in case analysis reports submitted by 4th grade nursing students who have completed rehabilitation nursing practice. Based on the analysis results, we suggest a specific and rational direction for the nursing process applied in clinical practice, in addition to evaluating the appropriateness of clinical reasoning and diagnosis and the quality of diagnosis focusing on the key nursing diagnosis. The following discussion is based on the data obtained from the study results.
First, the most frequent nursing diagnoses in the nursing process made by the nursing students were "impaired physical mobility," "activity intolerance," and "deficient knowledge," in that order. In a study analyzing the physical function and immobility problems of stroke patients in a rehabilitation ward, "impaired physical mobility" appeared as a priority problem that must be solved (McGlinchey et al., 2020). In our study, the frequency of diagnoses related to physical mobility was high because patients with stroke, paraplegia/tetraplegia, and hemiplegia accounted for 80% of cases. In this regard, previous studies also reported that "impaired physical mobility" was a common nursing diagnosis for patients with cerebrovascular disease and included it as a necessary nursing diagnosis for stroke patients (Granel and Bernabeu-Tamayo, 2020; McGlinchey et al., 2020; Pizzol et al., 2019). The high frequency of this nursing diagnosis is explained by the fact that cerebrovascular disease is a motor neuron disease that can lead to loss of voluntary movement control. As motor neurons from the cerebral hemispheres cross over in the brainstem, voluntary motor control disorders on one side of the body reflect motor neuron lesions on the other side of the brain, which can lead to disorders such as hemiplegia (Clark et al., 2021; Skidmore and Shih, 2022). Since the nursing process in clinical case studies is applied according to priority among the health problems of the patients, high-priority nursing diagnoses, such as impaired physical mobility, should be given importance. In other words, nursing educators need to keep track of nursing processes that are frequently applied in practice, such as impaired physical mobility, and conduct research so that the latest knowledge can be implemented in clinical practice.
Second, according to the results found in this research, the top five nursing diagnoses were distributed in the activity/rest (4), perception/cognition (5), and safety/protection (11) domains. The domain distribution of these diagnoses reflects the fact that patients in the rehabilitation unit experience mobility impairment among their disabilities and, in terms of functionality, are considered unable to move freely. In addition, analysis of the top five domains in which diagnoses were distributed, and of the classes within each domain, revealed that the diagnoses were distributed in classes 2 (activity/exercise) and 5 (self-care) in domain 4 (activity/rest), class 4 (cognition) in domain 5 (perception/cognition), class 2 (physical injury) in domain 11 (safety/protection), classes 1 (physical comfort) and 2 (environmental comfort) in domain 12 (comfort), and class 3 (body image) in domain 6 (self-perception). These findings suggest that physical restrictions may appear abruptly or slowly, depending on their severity and duration, and may contribute to health problems ranging from lack of self-management to impaired social interaction. Most of the other nursing diagnoses found in this study (such as bathing self-care deficit, toileting self-care deficit, risk for injury, impaired comfort, powerlessness, disturbed body image, impaired swallowing, impaired verbal communication, and constipation) also occurred due to motor impairment, indicating that disability affects the entire life of rehabilitation patients. It can also be seen that nursing students reflected not only the physical problems of rehabilitation patients but also psychosocial problems in the nursing process. According to the preceding literature, the occurrence of self-care deficit after stroke is usually an aftereffect of cerebral hypoperfusion (Everard et al., 2021; Oliveira-Kumakura et al., 2020; Oliveira-Kumakura et al., 2021), and nursing diagnoses associated with self-care deficits also show an association with motor and sensorial aftereffects.

Third, it is necessary to examine the diagnostic indicators, which include the defining characteristics and related/risk factors, of each nursing diagnosis that nursing students select for rehabilitation patients. Defining characteristics are used to make a nursing diagnosis and are one of the diagnostic indicators used to distinguish between diagnoses. They are observable clues that are categorized as signs or symptoms of a problem-focused or health-promotion diagnosis. Related or risk factors are essential elements of all problem-based diagnoses and are etiologies that have some type of relationship with nursing diagnoses (Chang et al., 2021; Sezer and Şahin, 2021). A review of the patient's history is helpful in identifying related or risk factors. The signs and symptoms related to the top 10 diagnoses are described in Table 3. Most nursing students are educated to follow the PES format for problem-focused or health-promotion diagnoses when stating a nursing diagnosis (Burucu and Arslan, 2021; Florin et al., 2005). The problem refers to the response of the patient, the etiology refers to the related or risk factor that causes the nursing problem, and the signs or symptoms are specific responses that occur in the patient, supporting that the cause is related to the nursing problem (Bayram et al., 2022; Karaca and Aslan, 2018; Ozkan et al., 2021). For example, with the statement "impaired physical mobility related to activity intolerance," it is difficult to clearly ascertain the evidence for making the diagnosis.
However, if the statement is revised to "impaired physical mobility related to activity intolerance, as evidenced by a decrease in gross motor skills," the rationale for the diagnosis will be more clearly understood. A disadvantage of the PES format, though, is that the sentences are long and can be more complicated; therefore, the two-part nursing diagnosis is more common. In addition, in the case of a risk nursing diagnosis, in which the patient does not show symptoms, a three-part nursing diagnosis cannot be used. Since most two-part nursing diagnoses consist only of related factors and problems, without mentioning signs or symptoms, it is also necessary to check the signs and symptoms, which are mostly confirmed in the assessment data in the form of defining characteristics. The PES format is recommended for nursing students who are not yet skilled in diagnosis because the problem statement is more descriptive, containing the symptoms and signs that provide the basis for making a diagnosis.
Fourth, as a result of evaluating the quality of the nursing students' diagnostic statements to identify the accuracy of the diagnoses, the average quality was 8.63 out of 14 points. Although direct comparison is difficult because there are no preceding studies evaluating the QOD of statements among nursing students, compared with the result of 8.8 points in a nursing course education intervention study for nurses (Florin et al., 2005), the quality of the nursing students' diagnostic statements in this study is relatively good. In conclusion, the rehabilitation nursing unit is a special unit, and the students mainly selected nursing diagnoses reflecting the characteristics of rehabilitation patients. The defining characteristics and related or risk factors for each major nursing diagnosis differ from those of other nursing units. Therefore, considering that nursing students can appropriately select clinically applicable diagnoses through theoretical and practical education, efforts are needed to establish a continuous curriculum and feedback through which nursing students can identify and review procedural errors in their usage of the nursing process. In addition, in the process of deriving a nursing diagnosis, the practice field leader or academic advisor should allocate sufficient time for practice guidance to provide feedback and make an effort to review the improvements. This study analyzed the nursing diagnosis statements written by nursing students and evaluated the accuracy and quality of the nursing diagnoses. The key nursing diagnoses were impaired physical mobility, activity intolerance, deficient knowledge, risk for injury, and bathing self-care deficit, and their defining characteristics and related or risk factors were analyzed together. However, the study is limited in that it is difficult to generalize the characteristics of the nursing process because data were collected from a single nursing college. In addition, the evaluation based on the opinions of experts is limited, as no tools have yet been developed to evaluate errors in the nursing process. Therefore, future research should aim to develop a tool that can objectively evaluate and quantify errors in the nursing process. Furthermore, in order to generalize the research results, we propose repeating this research with nursing students in different regions, considering variations in conditions such as the learning environment and university location, as well as in other clinical practice areas such as children's, women's, and psychiatric nursing practice.
CONFLICT OF INTEREST
No potential conflict of interest relevant to this article was reported.
The relevance of periprocedural troponin rise: the never ending story!
High-sensitivity troponin is, per definition, highly sensitive in detecting all sorts of myocardial injury. This does not necessarily mean that permanent damage has been done to the myocyte. It is known that the troponin level may well be elevated after exceptional physical exercise, as in marathon runners.1 Of course, long-term prognosis will not be compromised in these athletes; but elevated troponin values have also been detected in non-coronary conditions such as aortic or mitral valve disease, with different implications for prognosis (table 1).

Table 1: Troponin as a prognostic marker

A variety of studies addressed the frequent finding of elevated biomarker values following coronary angiography and percutaneous interventions with or without stent deployment in patients with stable coronary artery disease.2-4 Potential mechanisms of periprocedural infarcts are (1) side branch occlusion, (2) distal embolisation, (3) prolonged or multiple balloon inflations, (4) coronary dissection with slow flow or (5) microthrombi and no reflow.5 However, the definition of periprocedural myocardial injury varies among authors, and the interpretation of these data may prove difficult, in particular as an isolated troponin elevation might have less prognostic impact compared with true myocardial necrosis with a creatine kinase-MB (CK-MB) rise.5 Tricoci and colleagues compared the prognostic impact of a CK-MB and a troponin rise. Interestingly enough, the mortality risk of a CK-MB rise >3× the upper limit of normal (ULN) was comparable to that of a cardiac troponin rise >60× ULN.5 In the interventional community, it is widely accepted that an isolated minor troponin rise following percutaneous coronary procedures will not affect prognosis. Therefore, no guidelines recommend routine evaluation of biomarkers in patients with an uneventful postinterventional course. However, the European Society of Cardiology defined percutaneous coronary intervention (PCI)-associated myocardial ischaemia as a type 4a infarct.4 The type 4a infarct is characterised by an elevation of troponin values >5× the 99th percentile ULN in patients with normal baseline values and (1) symptoms suggestive of myocardial ischaemia, (2) new ischaemic ECG changes or new left bundle branch block, (3) angiographic loss of patency of a major coronary artery or a side branch, or persistent slow flow or no flow, or embolisation, or (4) imaging demonstration of new loss of viable myocardium or a new regional wall motion abnormality.4 In the present issue of 'Open Heart', Hamaya and colleagues investigate the impact of high-sensitivity troponin I elevation. Their study included 538 stable patients who underwent a diagnostic coronary angiogram. The authors identified patients with minor procedure-related myocardial necrosis and those with major procedure-related myocardial necrosis with troponin elevation >3-5× ULN. Troponin was measured just before the angiogram and 18-24 hours postprocedure. Importantly, in patients with significant coronary artery disease, any revascularisation procedure was rescheduled for a second session.
The main findings of this study were that patients with troponin elevation were older, more often female, and had previous coronary interventions and a longer procedural time. Patients with major elevations of troponin had higher levels of N-terminal pro-brain natriuretic peptide (NT-proBNP) and a higher left ventricular end-diastolic pressure. Moreover, aortic stenosis and pressure wire measurements were associated with a troponin rise. In addition, the authors conclude that a major troponin rise was associated with a worse long-term outcome.
Indeed, it is not surprising that older and sicker patients will experience a more pronounced troponin rise. Whether this troponin rise translates into a worse outcome remains somewhat speculative. Unfortunately, the patient number in the present study is too small to elucidate this research question.
In general, the interpretation of the presented data is impeded by several potential unmeasured confounders. In particular, the outcome of the revascularisation procedure during a second session is not reflected in the statistical analysis. Was full revascularisation achieved in all patients or not, for example? Did the patient solely experience a minor troponin rise or was it a true type 4a myocardial infarct following the percutaneous intervention?
It is hard to believe that a troponin rise following a diagnostic procedure should impact survival, while a minor isolated troponin elevation after percutaneous intervention is considered negligible.
In conclusion, it is unlikely that this study will change current clinical practice.
Contributors GMF: idea, draft of the manuscript. DML: table and proofreading.
Competing interests None declared.
Provenance and peer review Commissioned; internally peer reviewed.
Open Access This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/
PATHOGENIC POTENTIAL OF Vibrio parahaemolyticus ISOLATED FROM TROPICAL ESTUARINE ENVIRONMENTS IN CEARÁ, BRAZIL
Vibrio parahaemolyticus is a potentially pathogenic bacterium that occurs naturally in estuarine environments worldwide. This research aimed to investigate the occurrence of V. parahaemolyticus in estuarine environments and determine the virulence profile in an aquaculture environment by molecular techniques and conventional microbiological methods. Sampling was conducted in four estuaries in the State of Ceará (Pacoti, Choró, Pirangi and Jaguaribe), Brazil, between January and April 2009. The analysis included 64 samples of water (n=32) and sediment (n=32) collected from the estuaries. The samples yielded 64 isolates suspected to be V. parahaemolyticus. The isolates were submitted to biochemical identification using a dichotomous key and PCR for the detection of the species-specific tlh gene. Virulence was assessed by testing for urea hydrolysis and β-hemolysis in erythrocytes (Kanagawa phenomenon) and simultaneous detection of the tdh and trh genes. All but one of the isolates (63/64) were confirmed to be V. parahaemolyticus by genotypic detection of tlh gene. The tdh and trh genes were detected in 57 and 19 isolates, respectively. The Kanagawa test was positive for 51 isolates. Only one isolate was positive for urease. The incidence of tdh/trh-positivity was very high in isolates recovered from the environment. The present study demonstrates the need to increase knowledge of the ecology and pathogeny of V. parahaemolyticus.
INTRODUCTION
The genus Vibrio includes opportunistic pathogenic species, capable of causing diseases in host organisms under stress or compromised immune defense systems (Defoirdt et al., 2008). Of particular concern to public health, the enteric pathogen Vibrio parahaemolyticus can cause conditions such as acute gastroenteritis and septicemia in humans exposed to raw or undercooked seafood (Zhao et al., 2011). The species is Gram-negative and halophilic and is widely distributed in estuarine and marine environments (Griffit et al., 2011;Hassan et al., 2012).
The earliest reports of diseases caused by seafood associated with V. parahaemolyticus date from the 1950s in Japan. The incidence has since been increasing, especially in the US, Southeast Asia, Canada and Mexico (Rahman et al., 2006;Tunung et al., 2011). The first Brazilian report of V. parahaemolyticus dates from mid-1975, associated with an outbreak of gastroenteritis in a small town in the Northeast (Cascavel, Ceará) (Hofer, 1983).
Despite the increasing incidence of V. parahaemolyticus in clinical samples, it is not as frequently detected in food and environment samples as might be expected, possibly due to limitations found in conventional microbiology techniques (Martinez-Urtaza et al., 2008).
Circumventing these limitations, several molecular biology techniques have been developed to identify and determine the pathogenicity of V. parahaemolyticus isolates, especially PCR-based techniques capable of targeting specific genes (Blanco-Abad et al., 2009). The thermolabile hemolysin (TLH) is encoded by the tlh gene, which is used for the identification of clinical and environmental V. parahaemolyticus and V. alginolyticus isolates (Cariani et al., 2012).
Epidemiological studies have revealed a strong association between the Kanagawa phenomenon (characterized by β-hemolysis) and clinical isolates from outbreaks of gastroenteritis, although the phenotype is observed in only 1-2% of environmental isolates. β-hemolysis is therefore considered an important marker of virulence in V. parahaemolyticus isolates (Nishibuchi & Kaper, 1995;Rizvi & Bej, 2010). The main factors which enable V. parahaemolyticus isolates to induce β-hemolysis in human erythrocytes (hence markers of virulence) are the hemolysins TDH (thermostable direct hemolysin) and TRH (thermostable direct hemolysin-related hemolysin) encoded by the tdh and trh genes, respectively (Sobrinho et al., 2011;Zhao et al., 2011). The ability of certain clinical isolates to hydrolyze urea has also been identified as an indicator of virulence (Magalhães et al., 1991).
Considering the high incidence of V. parahaemolyticus in seafood and the potential threat it poses to human consumers, the purpose of the present study was to determine the virulence profile of isolates recovered from the environment using conventional (culture-dependent) microbiology techniques and molecular biology (PCR) techniques.
MATERIAL AND METHODS
Thirty-two water samples and thirty-two sediment samples were collected from the estuaries of 4 rivers in Ceará State, Brazil: Pacoti, Choró, Pirangi and Jaguaribe. Two points in each river were analyzed: one near and one far from the mouth of each river. Sample collections were performed monthly (Figure). Water samples were collected at a depth of 50 cm. The water temperature was recorded at the time of collection, and the salinity and pH determinations were performed promptly in the Laboratory of Environmental and Fish Microbiology (LAMAP/LABOMAR/UFC). Amber 1 L sterilized bottles were used for water sample collection. Sediment samples, in turn, were collected and transported to the LAMAP in isothermal boxes (Menezes et al., 2017).
Biochemical testing was performed using the Noguerola & Blanch (2008) dichotomous key, an update of the key proposed by Alsina & Blanch (1994a, 1994b). To detect virulence phenotypes among V. parahaemolyticus, two tests were conducted: the Kanagawa test, using Wagatsuma agar (Wagatsuma, 1968) containing a freshly collected 20% suspension of washed human blood group O erythrocytes, and the urease test, using urea broth (DIFCO™) (ICMSF, 1978). A reference V. parahaemolyticus isolate (IOC 17082), supplied by the Oswaldo Cruz Institute (Rio de Janeiro, Brazil), was used as positive control.
The Spearman correlation coefficient (rs) was used to determine the correlation between environmental parameters and the abundance of V. parahaemolyticus. The SPSS software package (SPSS Inc., version 17.0 for Windows) was used for statistical analysis (Spearman, 1904).
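For readers without SPSS, an equivalent Spearman rank correlation can be computed with SciPy, as in the sketch below; the abundance and temperature values shown are hypothetical and serve only to illustrate the call.

```python
from scipy.stats import spearmanr

# Hypothetical per-sample values, for illustration only.
abundance   = [12, 30, 7, 25, 18, 40, 9, 22]   # V. parahaemolyticus counts
temperature = [28.0, 33.5, 28.5, 31.0, 30.0, 36.5, 29.0, 32.0]

rs, p_value = spearmanr(abundance, temperature)
print(f"r_s = {rs:.2f}, p = {p_value:.3f}")
```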
The DNA was obtained using a commercially available kit (DNeasy Blood & Tissue Kit®, Qiagen, Brazil). V. parahaemolyticus identified by biochemical testing were further confirmed by PCR, using primers and conditions for detection of the tlh gene (450bp), while the presence of virulenceassociated genetic determinants was evaluated by using primers for the tdh (269bp) and trh (500bp) genes (Croma BioTechnologies®, Brazil) (Bej et al., 1999). V. parahaemolyticus IOC 18950 was used as a positive control for the presence of all the genes under investigation. Agarose gels at 1% were used for electrophoresis for 60 minutes. Molecular size was estimated by using 1000-bp DNA ladder (Sigma, Brazil). The gels were photo-documented with a Kodak EDAS290 digital camera, Brazil (Menezes et al., 2016).
RESULTS
In relation to the environmental parameters studied, the water temperature varied from 28 to 36.5°C, the salinity ranged from 2.0 to 48.0, and the pH ranged between 6.96 and 8.32 in the four estuaries investigated. The abundance of V. parahaemolyticus isolates in the collected samples was positively correlated with the water temperature (rs = 0.50; p < 0.05) and negatively correlated with pH (rs = -0.43; p < 0.005) and salinity (rs = 0.65; p < 0.005).
Sixty-four V. parahaemolyticus isolates were recovered and phenotypically identified from the 32 water samples (27 isolates) and the 32 sediment samples (37 isolates). Of these, 63 were confirmed genetically (Table 1). Using specific primers, 57 (89%) isolates were positive for tdh, while 19 (29%) isolates were positive for trh; both genes were present in 19 (29%) isolates. In other words, 45 (70%) isolates were negative for trh, while only 7 (10%) were negative for tdh. Moreover, five isolates identified as V. parahaemolyticus were positive for tdh but did not express the corresponding phenotype in the Kanagawa test. Isolates carrying virulence genes were more abundant in sediment than in water. Thirty-seven isolates were identified in sediment samples, with positivity for tlh (n=36), tdh (n=33), trh (n=11) and tdh+trh (n=11), while 27 isolates were identified in water samples, with positivity for tlh (n=27), tdh (n=24), trh (n=8) and tdh+trh (n=8). The Kanagawa test was positive for 30 (81%) isolates obtained from sediment and for 21 (77%) isolates recovered from water (Table 2). Salinity is fundamental for the occurrence of marine Vibrio species such as V. parahaemolyticus (Takemura et al., 2014). In relation to pH, this parameter remained stable within the limits favorable for Vibrio development, namely 7.5 to 8.5 (Sousa, 2004; Han et al., 2018).
The high level of positivity for the tlh gene observed in this study confirms the efficiency of the key proposed by Noguerola & Blanch (2008) for the identification of Vibrio species. Likewise, Croci et al. (2007) found the Alsina & Blanch (1994a, 1994b) Vibrio identification key, a forerunner of the Noguerola & Blanch (2008) key, to be more efficient for identifying environmental isolates than the commercial kits API 20E and API 20NE.
Vibrio parahaemolyticus is known to cause gastroenteritis and even septicemia. It is the Vibrio species most frequently implicated in outbreaks of food-borne infections due to its wide distribution in estuarine and marine environments and frequent association with seafood (Klein et al., 2014).
The virulence of V. parahaemolyticus depends on the presence of two toxins, TDH and TRH, encoded by the tdh and trh genes, respectively (Chao et al., 2009; Silva et al., 2018). Therefore, the presence of the tdh gene, characterized by β-hemolysis on Wagatsuma agar (Kanagawa phenomenon), of the trh gene, correlated with a positive urease test, or of both serves as a marker of pathogenic isolates (Chao et al., 2009; Rodrigues et al., 1993). The virulence-associated tdh and trh genes were also observed in a study of environmental V. parahaemolyticus isolates recovered from water, sediment and shrimps from the coast of India (Gopal et al., 2005). However, only 2 out of 70 isolates were tdh/trh-positive, one (1%) from a shrimp sample and one (1%) from a sediment sample. Despite the recognized specificity of the PCR method for the identification of bacteria, our results show that the Noguerola & Blanch (2008) dichotomous key is a reasonably reliable technique to identify V. parahaemolyticus isolates from aquaculture environments. The high percentage of detection of phenotypic and genotypic markers of virulence among V. parahaemolyticus isolated from aquatic environments is relevant to epidemiological studies on these pathogenic bacteria. The ability of V. parahaemolyticus to survive and maintain virulence factors in aquaculture environments presents a risk to human health and causes significant economic issues in the aquaculture industry worldwide.
The present study demonstrates that the need to better understand the ecology and pathogeny of V. parahaemolyticus in aquatic environments is critical to identifying and understanding the risks to human and animal health related to the environment and activities in coastal areas.
Low-resource Cross-lingual Event Type Detection via Distant Supervision with Minimal Effort
The use of machine learning for NLP generally requires resources for training. Tasks performed in a low-resource language usually rely on labeled data in another, typically resource-rich, language. However, there might not be enough labeled data even in a resource-rich language such as English. In such cases, one approach is to use a hand-crafted approach that utilizes only a small bilingual dictionary with minimal manual verification to create distantly supervised data. Another is to explore typical machine learning techniques, for example adversarial training of bilingual word representations. We find that in the event-type detection task (the task of classifying [parts of] documents into a fixed set of labels) they give about the same performance. We explore ways in which the two methods can be complementary and also see how to best utilize a limited budget for manual annotation to maximize performance gain.
Introduction
For most languages of the world, few or no language processing tools or resources exist (Baumann and Pierrehumbert, 2014). This hinders efforts to apply certain language technologies enjoyed by languages like English, in which much current research is done.
To perform natural language processing tasks in resource-poor languages, one way to overcome data scarcity is to tap on resources from another resource-rich language. Assuming that there are already good resources and tools to solve the same tasks in the more resource-rich language (henceforth, auxiliary language), the only remaining challenge is to transfer the learning process into the resource-poor language (henceforth, target language) and adapt it to the specifics of that language. One way to do this is to build a shared word representation across the two languages and train an NL engine on this shared representation, perhaps using an adversarial domain adaptation approach to handle the domain (language) shift (Chen et al., 2017). Usually, these approaches assume the availability of large labeled data in the auxiliary language, on the order of hundred thousands to millions.
However, for some more complex or specialized tasks, there might not be enough available training data even in a resource-rich language such as English. A case in point is the event-type classification task over publicly available datasets, such as the ACE 2005 and TAC KBP datasets, which usually contain only a few hundred to a few thousand documents. The situation frame (SF) detection task is one example of an event-type classification task, where the objective is to extract from each document one or more situation frames with their corresponding arguments. A situation frame (SF) is either an issue being described in the articles, such as civil unrest or terrorism, or a need situation such as the need for water or medical aid. In our task there are 11 situation frame types, each associated with a set of arguments, namely the location, status, relief, and urgency. For example, an article titled "Millions of people are at the risk of starvation due to the food shortage in South Sudan", with content describing the details and the cause of the food shortage, including a mention of difficulty accessing certain regions, can be classified as describing a food need and an infrastructure need.
As described below, we have tried two approaches: (1) a simple keyword-matching system utilizing only a small bilingual dictionary and minimal manual verification, and (2) a sophisticated neural adversarial network that learns bilingual word representations for cross-lingual transfer. We find that the two methods have similar performance. We therefore explore ways in which keyword-based models can create additional, distantly supervised data to improve the performance of a neural cross-lingual event type detection system. Our contributions are: (1) an evaluation of a state-of-the-art method on a different task showing its similar result against a simple baseline, (2) ways to improve the performance of such models, and (3) an analysis of the results, with insights for practitioners as to where to focus the available, yet limited, budget for manual annotation work. This paper is organized as follows: we first describe related work on cross-lingual NLP tasks in low-resource settings, specifically how the available resources are used. Based on previous work, we then apply our proposed training data augmentation methods and run experiments to show their effectiveness. We then analyze the results, and follow with a few suggestions on how to best utilize the available annotation effort for maximum gain.
Related Work
Keyword-based Models

A keyword-based heuristic model is a simple yet effective approach to extract specific information such as events (Keane et al., 2015), because keywords often indicate a strong presence of important information contained in documents (Marujo et al., 2015). Such methods have been used in different tasks like text categorization (Özgür et al., 2005) and information retrieval (Marujo et al., 2013) to extract the required information. Keyword heuristics have also been used to overcome language and domain barriers using bilingual dictionaries (Szarvas, 2008; Tran et al., 2013). However, a weak bilingual dictionary could result in low coverage with this method. Hence, to overcome the limitations of the bilingual dictionary, bootstrapping methods are employed to improve the coverage (Knopp, 2011; Ebrahimi et al., 2016).
Cross-Lingual Text Classification
Cross-lingual event type detection is closely related to cross-lingual text classification (CLTC), which aims to classify text in a target language using training data from an auxiliary language (Bel et al., 2003).
To bridge the language gap, early approaches of CLTC relied on a comprehensive bilingual dictionary to translate documents between languages (Bel et al., 2003;Shi et al., 2010;Mihalcea et al., 2007). However, in resource-poor languages, bilingual dictionaries may be small and sparse. Therefore, the performance of direct word translation will be unsatisfactory. Some researchers utilized the bilingual dictionary to translate the models instead (Xu et al., 2016;Littell et al., 2017).
Another line of work focuses on the use of automatic machine translation as an oracle. The various learning algorithms treated the translations as a second view of document and facilitate cross-lingual learning with co-training (Wan, 2009), majority learning (Amini et al., 2009), matrix completion (Xiao and Guo, 2013) and multi-view co-regularization (Guo and Xiao, 2012a).
Bilingual Word Embedding
The most recent method for sharing document representation between languages is bilingual word embedding (Mikolov et al., 2013a;Faruqui and Dyer, 2014;Luong et al., 2015). The goal is to learn a shared embedding space between words in two languages. With the shared embedding, we are able to project all documents into a shared space. The model trained in one language can, therefore, be used in the other language.
Models
To see how well recent state-of-the-art methods for CLTC work in our task, we implemented a convolutional neural classifier. We compare this against a simple keyword-based method.

Figure 1: Architecture of the neural classifier with adversarial domain (language) adaptation by Ganin and Lempitsky (2015). Arrows show the flow of gradient.
Adversarial Convolutional Network
The first step is to train a bilingual word embedding as a shared feature representation space between the two languages. We trained our bilingual word embedding for English and the incident language using the method proposed in XlingualEmb (Duong et al., 2016). This method is a cross-lingual extension from word2vec model (Mikolov et al., 2013b) to bilingual text using two large monolingual corpora and a bilingual dictionary.
Based on the shared representation, we then used a convolutional neural network (CNN) (Kim, 2014) to perform the classification. There are two main advantages of choosing a deep neural classifier over a shallow one. First, CNN outperforms shallow models like SVM or Logistic Regression in various text classification benchmark datasets (Kim, 2014;Lai et al., 2015;Johnson and Zhang, 2015;Xu and Yang, 2017). Second, CNN takes dense word vector representations as input, allowing one to incorporate the state-of-the-art bilingual word embedding methods into the pipeline.
The CNN model takes a sequence of word embeddings as input and applies 1-D convolutional operation on the input to extract semantic features. The features are then passed through a fully-connected layer before reaching the final soft-max layer. The model is trained in English using the ReliefWeb dataset (Littell et al., 2017, Sec 2.3), which is annotated at sentence level with disaster relief needs and emergency situations. Thanks to the bilingual word embedding, which maps the words from the two languages to the same distributional semantic space, the model trained in English can be applied to documents in the target language.
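As an illustration, below is a minimal PyTorch sketch of such a CNN classifier. It follows the filter widths ({3, 4, 5}), filter count (300) and dropout rate (0.2) reported later in the experimental setup; the embedding dimension and the class count (11 SF types plus "none") are assumptions for the sketch, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    """Kim (2014)-style sentence classifier over (bilingual) word embeddings.
    Filter widths {3,4,5} with 300 filters each and dropout 0.2 follow the
    reported hyperparameters; emb_dim and n_classes are assumptions."""
    def __init__(self, emb_dim=100, n_classes=12, widths=(3, 4, 5), n_filters=300):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, w) for w in widths])
        self.dropout = nn.Dropout(0.2)
        self.fc = nn.Linear(n_filters * len(widths), n_classes)

    def forward(self, x):                 # x: (batch, seq_len, emb_dim)
        x = x.transpose(1, 2)             # Conv1d expects (batch, emb_dim, seq_len)
        feats = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        h = self.dropout(torch.cat(feats, dim=1))
        return self.fc(h)                 # logits over SF types (incl. "none")
```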
Ideally, if the bilingual word embedding captures the ground-truth mapping between two languages, a classifier learned from English training documents should generalize well on the target language. However, in practice, we can observe obvious domain gaps between documents in different languages when we represent them with bilingual word embeddings. In order to close the domain (language) gap between training and testing, we adapt our learned model in English to the target language with similar adversarial training techniques used in (Xu and Yang, 2017;Chen et al., 2017). In order to alleviate the domain mismatch, we are essentially looking for a feature extractor that only captures the semantics of the event types but not the difference in language usage between English and the target language. In other words, we want the features captured by CNN to be informative for the event type classification and to be language-invariant at the same time. To achieve this goal, we include an auxiliary classifier that takes the features extracted by CNN and predicts the language that the input belongs to. During training, we update our parameter to simultaneously minimize the loss of the event type classification and maximize the loss of language classification through Gradient Reversal Layer (Ganin and Lempitsky, 2015).
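The gradient reversal trick can be implemented in a few lines; the sketch below is a generic PyTorch version of the Ganin and Lempitsky (2015) layer, not the authors' code, and the lambda value in the usage comment is a placeholder.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Gradient Reversal Layer: identity on the forward pass, multiplies
    the gradient by -lambda on the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Usage sketch: CNN features feed both the SF-type head and, through the
# reversal layer, a language discriminator (lam = 0.1 is a placeholder):
#   sf_logits   = sf_head(features)
#   lang_logits = lang_head(grad_reverse(features, lam=0.1))
```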
Keyword-based Model
As mentioned previously, a keyword-based model is a simple, quick, yet effective approach to perform text classification without much training data. In our case, we do this in two steps: (1) build a list of keywords for each SF type in English, then (2) translate the keywords into the target language automatically using a bilingual dictionary. We also asked native speakers of the target language to refine the translation, especially for domain-specific keywords which are not adequately captured by the bilingual dictionary.

Building English keywords is again a two-step process. First, we use the ReliefWeb dataset to generate a list of 100 candidate keywords for each class by taking the top-100 words with the highest TF.IDF scores. Similar to the keyword generation method described by Littell et al. (2017), we manually refined the keyword list by pruning based on world knowledge. For each candidate keyword, we added the 30 most similar words using the English word2vec model trained on the Google News corpus. We retained only the words with a cosine similarity greater than 70%. For each candidate keyword in this extended list, we computed a label affinity score with each SF class label (e.g., water, evacuation) using the cosine similarity between their word2vec embeddings. Candidate keywords with similarity above a certain threshold th1 were retained and used as keywords for the corresponding classes. A sketch of this procedure is shown after Table 1.

Table 1: Number of instances and distribution of situation frame (SF) types (%). Eng-Orig and Eng-KW refer to training data from Littell et al. (2017, Sec 2.3) and from our English keyword model's output on the ReliefWeb corpus, respectively, while Tgt-Boot and Tgt-Ann refer to training data in the target language obtained from bootstrapped keyword-spotting and from annotation, respectively. The "none" class signifies negative examples in the data. The last column shows a visualization of the SF types, excluding "none". Note: for the test data, the instances refer to documents, while for the rest, instances refer to sentences.

Data          Instances   terror violence regime food  water med  infra shelter evac utils search none
Eng-Orig (O)  82,096      2.4    1.8      3.9    14.0  33.0  8.2  6.0   9.2     3.7  4.3   8.6    4.9
Eng-KW (E)    1,356,425   17.5   29.2     6.0    11.3  12.6  9.9  2.7   4.0     3.6  1.6   …      …
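The candidate-keyword expansion and filtering steps described above can be sketched with gensim's word2vec interface as follows; the model path, the 0.7 expansion cutoff and the th1 = 0.4 value are placeholders (the paper states the corpus and the 70% cutoff, but not the exact th1).

```python
from gensim.models import KeyedVectors

# Placeholder path; the paper used the word2vec model trained on Google News.
wv = KeyedVectors.load_word2vec_format("GoogleNews-vectors.bin", binary=True)

def expand_and_filter(candidates, labels, sim_cutoff=0.7, th1=0.4):
    """Expand TF.IDF candidate keywords with their 30 nearest neighbours,
    then keep a word for a class label if its label affinity (cosine
    similarity to the label word) exceeds th1."""
    expanded = set(candidates)
    for word in candidates:
        if word in wv:
            expanded.update(w for w, s in wv.most_similar(word, topn=30)
                            if s > sim_cutoff)
    return {lab: [w for w in expanded
                  if w in wv and lab in wv and wv.similarity(w, lab) > th1]
            for lab in labels}
```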
4 Method: Training Data Augmentation

Chen et al. (2017) assumed the auxiliary language contains a large amount of labeled data for the task, about 700k Yelp reviews. In our case, the original training data, which was built semi-automatically by Littell et al. (2017, Sec 2.3), contains only about 80k sentences (Table 1, first row). To improve the performance of the neural model, therefore, we propose to utilize the keyword-based system to automatically augment the original training data. We also explore using additional training data obtained via manual annotation for comparison. Figure 2 is a summary of the various training data sources we compare in this paper: the original training data ( O ), keyword-spotting in the auxiliary language ( E ), keyword-spotting with bootstrapping in the target language ( T ), and annotated data in the target language ( A ). The additional training data from keyword-spotting in English ( E ) can be directly obtained by using the English keyword list from the keyword model (Section 3.2) to label a larger ReliefWeb corpus. We describe the other two ways ( T and A ) to obtain additional training data in the following sections.
Bootstrapping Language-specific Keywords
We note that using simple keyword matching can result in low coverage due to missing keywords in the bilingual dictionary or word-variations in the target language. To overcome this, we developed an iterative bootstrapping algorithm that takes into account the newly labeled documents from keywordspotting and generates additional language-specific keywords in a two-step process ( T in Figure 2).
Clustering: In the first step, we collected the labeled documents for each class and grouped them into clusters (D = {D_c1, ..., D_cm}, where D_cp is the cluster for class c_p). For each cluster D_cp and each non-keyword w_i in it, we then computed a label affinity score S_p(w_i), a two-term score based on the association of w_i with the set of keywords W_p present in D_cp and on its frequency within the cluster. We then appended the words whose score exceeded a certain threshold th2 to the keyword list of class c_p. The rationale for this step is that keywords missed in the initial keyword list (due to an incomplete bilingual dictionary, or the keywords being language-specific or incident-specific) may appear more frequently in the document cluster, and the second term in S_p(w_i) will capture this.
Labeling: With the updated set of keywords for each class, we relabeled the documents to obtain a new set of labeled documents, and again executed the clustering step to get more keywords. We can repeat this two-step process n times until we have the desired coverage or until the process no longer yields useful new keywords. In our experiments, we found that n = 10 generally gives good coverage. To generate the training data, we ran this procedure on the test set and took the top-100 most confident predictions.
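A simplified sketch of this clustering-labeling loop is given below. Since the exact form of S_p(w_i) is not reproduced in this text, the snippet substitutes a plain within-cluster relative-frequency score; th2 and the tokenized-document format are likewise placeholders.

```python
from collections import Counter

def bootstrap_keywords(docs, keywords, th2=0.01, n_iter=10):
    """Iterative keyword bootstrapping sketch.

    docs: list of tokenized documents (lists of words).
    keywords: dict mapping class -> list of seed keywords (mutated in place).
    The paper's label affinity score S_p(w) is replaced here by a simple
    within-cluster relative frequency; th2 is a placeholder threshold.
    """
    for _ in range(n_iter):
        # Labeling step: assign a document to class p if any keyword of p occurs.
        clusters = {p: [d for d in docs if any(k in d for k in kws)]
                    for p, kws in keywords.items()}
        # Clustering step: promote frequent non-keywords within each cluster.
        for p, cluster in clusters.items():
            counts = Counter(w for d in cluster for w in d)
            total = sum(counts.values()) or 1
            for w, c in counts.items():
                if w not in keywords[p] and c / total > th2:
                    keywords[p].append(w)
    return keywords
```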
Annotation in Target Language
When we have the budget and annotators to do so, we can also annotate documents in the target language with the class labels of interest. Given the limited budget and the rarity of documents with SFs (14-18% in our dataset), however, one question remains: how best to pick the documents to be annotated to maximize the gain from the additional training data? Seeing that documents with at least one positive class are much less common than documents without any positive class (see Table 1), a randomly sampled unlabeled document will likely contain no class, which is less useful than a document with at least one positive class. Thus, we opt for a simpler method of asking annotators to make a binary decision on the model's sentence-level predictions. Native speakers annotated 653 sentences in Tigrinya (and 652 in Oromo). In addition to the native speakers, we also had non-speaker linguists annotate another separate set of 359 sentences in Tigrinya, assisted by grapheme-to-phoneme conversion, morphological glossing, and machine translation (MT) output (the MT model was also trained in a low-resource setting, with a BLEU score around 12 for Tigrinya). This results in 1,012 sentences annotated in Tigrinya ( A in Figure 2 and Table 1). Overall, we spent less than 12 man-hours with native speakers of the target language to do the keyword translation and the annotation, with the larger amount of time spent on keyword translation.
Dataset
For the purpose of the experiments and analysis, we used the dataset from the LoReHLT 2017 shared task (https://www.nist.gov/itl/iad/mig/lorehlt-evaluations#lorehlt17), which consists of news articles in two Ethiopian languages: Tigrinya and Oromo. For Oromo, the original dataset includes one annotator (out of 4) who annotated most of the documents with a single class; we did not consider this outlier annotator in our experiments. The statistics of the dataset are shown in Table 2.
The available resources that we used for this experiment consist of the ones described earlier: the labeled ReliefWeb data in English, a small bilingual dictionary, and monolingual corpora in both languages for training the bilingual word embedding.
Setup
We summarize more details about the experiment setup.
Sentence-level prediction: Although the model we used can be applied to produce document-level predictions directly, working at sentence-level provided more training data for the model and made it easier to train. Doing so also enabled some insight on which sentences contain the information about the document-level predictions.
Document-level aggregation: We then aggregate sentence-level predictions into a document-level prediction by assigning to each SF type the maximum confidence score of that type across all sentences in the document. Based on these scores, we calculate the mean confidence score µ_cp of each SF type c_p. We then took the top-k (k = 3 in our experiments) SF types as our document-level prediction and filtered out the predicted SF types whose confidence scores fall below µ_cp. In the absence of labeled data in the target language to be used as a development set, this is one method that we can use without much tuning. In later sections we show how different document-level aggregation procedures may affect the performance.

Table 3: Performance of the neural model (NN) with various sources of training data, averaged over 3 runs. O is the original training data, E is the additional training data in English from keyword-spotting, A is the additional training data in the target language from annotation, and T is the additional training data in the target language from bootstrapping. The result of the keyword model (KW) is also shown for comparison.
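The aggregation heuristic just described (max over sentences, per-type mean threshold µ_cp over the test set, then top-k) can be written compactly; the NumPy sketch below assumes one confidence matrix per document and is illustrative only.

```python
import numpy as np

def aggregate(doc_sent_scores, k=3):
    """doc_sent_scores: list of (n_sentences, n_types) confidence arrays,
    one per document. Max over sentences per type, per-type mean threshold
    over all documents, then the top-k surviving types per document."""
    doc_scores = np.stack([s.max(axis=0) for s in doc_sent_scores])
    mu = doc_scores.mean(axis=0)            # mean confidence per SF type
    predictions = []
    for scores in doc_scores:
        top_k = np.argsort(scores)[::-1][:k]
        predictions.append([int(t) for t in top_k if scores[t] >= mu[t]])
    return predictions
```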
Metric:
We followed the metric defined in the LoReHLT 2017 guidelines, namely occurrence-weighted scores:

P_occ = α_tp / (α_tp + α_fp),  R_occ = α_tp / (α_tp + α_fn),  F1_occ = 2 · P_occ · R_occ / (P_occ + R_occ)

where α_tp is the number of true positives, weighted by the number of annotators that agree with each. α_fp and α_fn are similarly defined for false positives and false negatives. False negatives always have weight 1. For brevity, we drop the occ subscript when referring to these scores.
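A direct transcription of these occurrence-weighted scores into Python might look as follows; the function assumes the weighted alpha counts have already been accumulated over the predictions.

```python
def occ_weighted_scores(alpha_tp, alpha_fp, alpha_fn):
    """Occurrence-weighted precision, recall and F1. The alpha counts are
    assumed to be pre-accumulated: TPs and FPs weighted by annotator
    agreement, FNs with weight 1."""
    p = alpha_tp / (alpha_tp + alpha_fp) if (alpha_tp + alpha_fp) > 0 else 0.0
    r = alpha_tp / (alpha_tp + alpha_fn) if (alpha_tp + alpha_fn) > 0 else 0.0
    f1 = 2 * p * r / (p + r) if (p + r) > 0 else 0.0
    return p, r, f1
```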
Model hyperparameters:
In our neural CNN model, we used filter lengths of {3, 4, 5} and 300 filters for each length. We also applied dropout to the features extracted by the CNN at a rate of 0.2. The model was optimized in mini-batches of size 128 by the Adam (Kingma and Ba, 2014) optimizer at a learning rate of 0.001. The optimization was terminated after 30 epochs or when a convergence criterion was satisfied on the held-out training data.

Results

Table 3 shows the results in Tigrinya and Oromo with the varying training data described in Section 4. First, the keyword model (KW) outperforms the neural model trained only on the original training data (NN_O). This suggests that in a low-resource setting, a keyword-based model can be used as a way to quickly get a working classifier, without the hassle of training a machine learning classifier or obtaining large amounts of additional training data.
Next, the additional training data did help to significantly improve the performance of the baseline neural model in both languages. The large additional English data (+ E ) provided a large boost both in Tigrinya (+4.97) and Oromo (+4.27). Interestingly, with only about 900 examples in the target language, the additional annotation in the target language (+ A ) gave larger improvements in F1-score in both Tigrinya (5.3 points more) and Oromo (3.27 points more) than the large additional training data in English. Recall that the annotation was done on a subset of the neural model's output (trained on O ). This suggests that when an annotation budget is available, using it to verify the output of a model is a good investment.
It is interesting to note that each source of additional training data improved a different aspect of the model. The additional training data in English ( O + E ) seemed to improve recall more, while the additional training data in target language from annotation seemed to improve precision more ( O + A ), and combining both ( O + E + A ) provided the best of both worlds, especially in Tigrinya.
When we included the training data in the target language from the keyword model with bootstrapping together with all other training data ( O + E + T + A ), it further improved the result in Tigrinya, but not in Oromo, although when used alone ( O + T ) it still gave some improvement over the original training data ( O ). This could be due to the lower quality of the keyword system in Oromo. Recall that this data was created by taking the top-100 most confident predictions of the keyword model. This set of predictions gave 75.9% precision in Tigrinya but only 47.1% precision in Oromo. The lower quality of the Oromo bootstrapped data can also be seen in its diverging SF type distribution in Table 1.
Relative to its much lower baseline, the best overall improvement was more pronounced in Oromo (+7.54 points in F1 for O + E + A ) than in Tigrinya (+15.19 points in F1 for O + E + T + A ); this could be related to the fact that the baseline score was much lower in Oromo than in Tigrinya to begin with.
For completeness, we also compare the results of the keyword model (KW) in target language without and with bootstrapping in the first two rows of Table 3. As anticipated, the bootstrapping process increased recall significantly, almost doubling the recall in Oromo. Although the precision was slightly reduced, it still resulted in an overall improvement in F 1 -score for both languages.
In summary, there are four main observations:
1. The neural model with the original training data (NN_O) gave lower performance than the keyword (KW) model, although the keyword model is a much simpler system.
2. With large additional training data in English (+ E ), we obtained large improvements both in Tigrinya (+4.97) and Oromo (+4.27).
3. With only a small amount of additional annotation in the target language (+ A ), we obtained even better performance than with the large English training data.
4. Obtaining additional training data in the target language through the keyword model can help if the quality of the keyword model is good enough.
Discussion
We hypothesize that the focused improvements in recall when using additional training data in English ( E ) can be attributed to its sheer size (over 1 million sentences). This enables the model to learn more unique contexts for SF types, thereby increasing recall. In contrast, the annotated dataset in the target language ( A ), being made from the output of the NN model on a separate dataset, will mostly help the model to better recognize false positives, thereby improving precision.
Another reason for the increased precision could be that the distribution of SF types in the annotated dataset ( A ) is more similar to the true distribution than the distribution of SF types in the additional training data in English ( E ). We can see this from the SF type distribution shown in Table 1 by comparing the visualizations in the last column.
To analyze the differences between the various source of additional training data, we plot the cooccurrence of classes on the Tigrinya dataset in Figure 3. Each row describes the percentage of a particular SF type co-occurring with other SF types in the same document (recall that each document might be labeled with multiple SF types), including none, in which the SF type is the single label for that document. The numbers in a row sums to unity.
As can be seen, predictions of the NN system trained on the additional English data (Figure 3b) and target language data (Figure 3c) have different co-occurrence patterns. The additional English data apparently allowed the NN model to find a strong correlation between the crime violence class and the terrorism class, which is consistent with our intuition. On the other hand, the NN model fine-tuned on the Tigrinya annotations apparently found the crime violence and terrorism classes tend to occur alone (last column of Figure 3c).
There is also an interesting phenomenon arising from the correlation between keywords and class labels. We found that the SF type terrorism is associated with the keyword " ", which means "youth" or "juvenile", as in the example sentence (English translation, with the word recognized as a keyword in the original language underlined): "According to the information, the Eritrean girls killed in the incident hid near the tyre of a car and were shot by the Sudanese soldiers." Examining various examples in the dataset, we found that the violence inherent in terrorism is often depicted with youths as the victims. This could be related to the tendency of news outlets to focus on the suffering experienced by young people to make a stronger emotional appeal.

Table 4: Impact of various aggregation strategies on performance.
Impact of Document-level Aggregation Strategy
In Section 5.2 we showed one heuristic for document-level aggregation. One might wonder whether classification performance can be improved with another aggregation strategy, such as filtering out classes whose confidence scores fall under a certain threshold, or using a different k when taking the top-k classes. In this section, we explore the impact of different aggregation strategies on performance under different conditions. Assuming the more realistic case of having no development set with which to prefer one strategy over another, we can use the top-k strategy as in our experiments, or set a fixed threshold on the confidence score based on the average confidence score of each type across all documents in the test set. We also show the result when the threshold is set based on the reference annotation, to show how good the result could be if a development set were available for finding the best threshold. The results are shown in Table 4. The significant performance gap between the strategy tuned on the reference annotation and the rest suggests that, if additional training data can be obtained in the target language independently of the model's predictions, a portion of it should be allocated for validation, since there is still large room for improvement just from tuning the thresholds (about 4 points in Tigrinya and over 17 points in Oromo). Note that in our experiments, since the annotation was done on the output of our neural model, we cannot use it as a validation set, as it is biased towards the output of our model. There is therefore a trade-off between the ease of the annotation process and the amount of data that can be used as a validation set.
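The two development-set-free strategies can be expressed compactly. The sketch below operates on a matrix of per-document confidence scores (documents × SF types); the array names and the strict-inequality threshold decision are our own illustrative choices, not the paper's implementation.

import numpy as np

def aggregate_top_k(scores, k=2):
    """Label each document with its k highest-scoring SF types."""
    top = np.argsort(scores, axis=1)[:, -k:]
    labels = np.zeros_like(scores, dtype=bool)
    rows = np.arange(scores.shape[0])[:, None]
    labels[rows, top] = True
    return labels

def aggregate_mean_threshold(scores):
    """Label a document with an SF type when its score exceeds that
    type's average confidence across all test documents."""
    thresholds = scores.mean(axis=0)  # one threshold per SF type
    return scores > thresholds

scores = np.array([[0.9, 0.2, 0.4],
                   [0.1, 0.8, 0.3]])
print(aggregate_top_k(scores, k=1))
print(aggregate_mean_threshold(scores))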
Conclusion and Future Work
In this paper we tackled the problem of event type detection and classification in a low-resource setting. We found that a neural model with adversarial training performed about on par with a simple keyword-based model using a small bilingual dictionary. Given that the problem lies in the limited amount of available training data, we proposed and compared methods to increase it: to get a significant gain in performance, one can either use a very large, additional, semi-automatically labeled dataset in a resource-rich language, or annotate a small number of documents in the resource-poor target language. We also showed that investing in a development set for tuning can be a good strategy when there is a limited annotation budget, after allocating some of it for keyword translation and additional training data.
One possible direction for future work is to address the mismatch between the class distribution of the additional training data and that of the actual test data, as observed in Oromo. One way to mitigate the mismatch would be to make the classifier itself less prone to overfitting. In Section 6.1 we showed how the document-level aggregation strategy can significantly affect the final result, so exploring ways to select the thresholds effectively might be worthwhile. We could also incorporate the correlation between classes evident in Figure 3, which has proven useful in multi-label classification (Zhang and Zhang, 2010). | 2018-08-16T13:28:13.320Z | 2018-08-01T00:00:00.000 | {
"year": 2018,
"sha1": "7c007641e10eda058436fd33ffe83f5d9f20cbf9",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "7c007641e10eda058436fd33ffe83f5d9f20cbf9",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
15724401 | pes2o/s2orc | v3-fos-license | Diversity of Emergency Codes in Hospitals
Background: Hospitals must be prepared to deal efficiently and effectively with different emergencies. To accomplish this, several countries have standardized their hospital emergency codes to improve their response capability. This is particularly important in Puerto Rico given that many health professionals, particularly physicians and nurses, provide services in more than one hospital. This study examined the emergency codes and alerts utilized in Puerto Rican hospitals. Objective: To assess hospitals' level of emergency preparedness and response capability in relation to the variability of the emergency codes and alerts utilized to respond to emergency situations in Puerto Rico. Method: A survey was conducted to characterize hospital emergency departments' level of preparedness and response to a mass fatality incident. A total of 39 out of a sample of 44 hospitals participated in the study. Semi-structured questionnaires were administered by the research team to members of each hospital's administrative staff to explore the following: general hospital characteristics, emergency plans, emergency department capacity, collaborative agreements, personnel training, emergency communications, laboratory facilities, treatment protocols, security, epidemiologic surveillance, equipment and infrastructure. Results: Some hospitals in Puerto Rico use color-coded emergency alerts, while others use key words or codes. A single color emergency code can have different meanings in different hospitals. Conclusions: The findings clearly show that there is a lack of uniformity and clarity in the emergency codes utilized by hospitals in Puerto Rico. Single color codes have diverse meanings in different hospitals, which could adversely affect hospitals' efficient and effective emergency response.
Introduction
An emergency is defined as any incident, caused by humans or a natural event, that requires an effective, responsive action to protect life or property [1]. Therefore, the response to an emergency must be quick, coordinated and well-planned [2]. Initial efforts toward the standardization of emergency codes in hospitals started with an incident in southern California in which three persons were killed in a shooting at a medical center after the wrong code was called [3]. This particular incident led the Hospital Association of Southern California to develop a comprehensive campaign to achieve standardization of hospital emergency codes [1].
Assuring emergency preparedness and response requires a systematic and structured methodology that enables an objective assessment [4]. As part of this effort, a number of jurisdictions have moved towards standardization of hospital emergency codes. The need for code uniformity is underscored by the mobility of the health care workforce. Staff who are reassigned to a new medical facility, or who must practice in more than one facility, need to be immediately familiar with a code identifying the nature of a given crisis and their expected response [5].
The purpose of the study was to assess hospitals' level of emergency preparedness and response capability in relation to the variability of the emergency codes and alerts used to respond to emergency situations in hospitals in Puerto Rico.
Methods
A survey was conducted to characterize hospital emergency departments' level of preparedness and response to a mass fatality incident. A total of 39 out of a sample of 44 hospitals participated in the study. Semi-structured questionnaires were administered by trained research assistants to hospital directors to explore the following: general hospital characteristics, emergency plans, emergency department capacity, collaborative agreements, personnel training, emergency communications, laboratory facilities, treatment protocols, security, epidemiologic surveillance, equipment and infrastructure. The selection of the hospital emergency departments that participated in the study was conducted by the Puerto Rico Department of Health (PRDOH) Office of Public Health Preparedness and Response (OPHPR). The OPHPR provided the research team with a list of 44 healthcare facilities grouped into six (6) coalitions: North, South, East, West, Metro A and Metro B. The list consisted of forty-one (41) hospitals and three (3) community health centers, which included a diagnostic and treatment center, a primary health center and a family health center. All these healthcare facilities were located in 23 municipalities throughout Puerto Rico. Figure 1 shows the location of the participating emergency departments. In the end, 39 hospital facilities agreed to participate in the study, yielding a response rate of 88.6%.
To gather the data, seven (7) electronic instruments were constructed covering ten (10) dimensions identified through the literature. These dimensions were: 1) general characteristics of the hospital; 2) emergency plans; 3) collaborative agreements between agencies; 4) infrastructure and equipment; 5) epidemiologic surveillance; 6) protocols for medical treatment; 7) laboratory; 8) training among personnel; 9) communications during an emergency; and 10) hospital physical security. Confidentiality and voluntary participation issues were discussed with all subjects, and all study activities were reviewed and approved by the Human Subjects Institutional Review Board (RCM IRB, protocol #A66640211, January 31, 2011).
Results
Results from this study showed that some hospitals use color-coded emergency alerts, while others use key words or codes. Moreover, a single color emergency code can have diverse meanings in different hospital installations. Among the colors showing the highest diversity were code blue with seven different meanings, code yellow and code white with six different meanings each, and code green with five. The most commonly used color codes were: red for fire, 79.5% (n = 31); gray for a safety/security situation, 74.4% (n = 29); and green for cardio-respiratory arrest, 71.8% (n = 28).
Discussion
The findings clearly show that there is a lack of uniformity and clarity in the emergency codes and alerts utilized in hospitals in Puerto Rico. A single code can have diverse meanings in different hospitals. This could adversely affect the efficient and effective mobilization of patients, visitors and hospital personnel during an emergency. The lack of standardization increases the potential for confusion or misinformation during critical times [1]. In Puerto Rico, Law 170-2011, which allows the Department of Health to implement the standardization of protocol codes for emergency care facilities in the private and public health sectors, was approved on August 10, 2011 [6]. This law represents an additional preparedness effort; however, it has not yet been completely implemented. An emergency can happen at any time, and clear communication is a key element in ensuring a quick response to protect patients, visitors and staff. Code alert standardization among all hospitals may not be immediate, and a planned transition to the recommended code set will be needed. Several implications of diverse or inconsistent codes and of differences in terminology have to be considered for planning, communication and operations during an actual event [7]. According to the Hospital Incident Command System guidance, it is important to use clear language during a disaster event, especially when dealing with external resources [8]. The goal is for hospitals to phase in the implementation of the recommended codes so that the required materials and training can be developed and offered at a time best suited for hospital personnel. Clearly, considerable training and financial resources will be required for this transition to be conducted efficiently and effectively [9]. | 2017-04-14T23:54:57.606Z | 2013-11-15T00:00:00.000 | {
"year": 2013,
"sha1": "4abe53273f6ef6ff29e3e56828228b494f81c5d8",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=39620",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "4abe53273f6ef6ff29e3e56828228b494f81c5d8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237428269 | pes2o/s2orc | v3-fos-license | Compact CubeSat Gamma-Ray Detector for GRID Mission
Gamma-Ray Integrated Detectors (GRID) mission is a student project designed to use multiple gamma-ray detectors carried by nanosatellites (CubeSats), forming a full-time all-sky gamma-ray detection network that monitors the transient gamma-ray sky in the multi-messenger astronomy era. A compact CubeSat gamma-ray detector, including its hardware and firmware, was designed and implemented for the mission. The detector employs four Gd3Al2Ga3O12:Ce (GAGG:Ce) scintillators coupled with four silicon photomultiplier (SiPM) arrays to achieve a high gamma-ray detection efficiency between 10 keV and 2 MeV with low power consumption and small dimensions. The first detector, designed by the undergraduate student team and carried onboard a commercial CubeSat, was launched into a Sun-synchronous orbit on October 29, 2018. After on-orbit functional and performance tests conducted in 2019, the detector was in a normal observation state and accumulated data for approximately one month.
I. INTRODUCTION
Gamma-Ray Integrated Detectors (GRID) mission is a student project designed for the scientific purpose of monitoring the transient gamma-ray sky in the local universe, in particular to accumulate a sample of gamma-ray bursts (GRBs) associated with gravitational waves (GWs). According to estimates of the GW-GRB joint detection event rate, more than a dozen such events per year could be detected at most. GRID is therefore designed as a full-time all-sky gamma-ray detection network, without Earth occultation or interruptions due to the South Atlantic Anomaly (SAA), with many compact and modularized gamma-ray detectors on a fleet of CubeSats in low Earth orbit. As a distributed system, GRID can localize detected GRBs via triangulation or flux modulation with simple scintillation detectors [1].
The scientific payloads of GRID comprise several modularized compact gamma-ray detectors. Several key technologies have been utilized to optimize the gamma-ray detection performance within the limited space of a CubeSat. In the first gamma-ray detector prototype of GRID, GAGG:Ce was used, and the package was optimized for high transmittance of low-energy X-rays down to 10 keV. To respect the power and space limitations of CubeSats, silicon photomultipliers (SiPMs) were utilized instead of traditional photomultiplier tubes (PMTs) owing to their attractive capabilities, such as their super-miniature size, low weight, low power consumption, and insensitivity to magnetic fields [2]. The strong dark noise of the SiPM array restricts the signal-to-noise ratio (SNR) at room temperature; hence, the current-sensitive pre-amplifier was designed, modeled, and optimized to improve the SNR. Data acquisition (DAQ) electronics were designed based on an off-the-shelf ARM-core microcontroller unit (MCU), which is sufficiently simple for undergraduate students. We note that numerous CubeSat-based missions similar to GRID mission have been proposed and are under development [3][4][5][6][7][8]. The scientific payloads of BurstCube [4], CAMELOT [5], HERMES [7], and GRBAlpha (an in-orbit demonstration for CAMELOT) [6] are scintillator-based detectors similar to those of GRID. BlackCAT [3] adopts silicon detectors and is sensitive to soft X-rays. LECX [8] is designed to employ four CdZnTe (CZT) detectors with high energy resolution.

* M. Zeng acknowledges funding support from the Tsinghua University Initiative Scientific Research Program. H. Feng acknowledges funding support from the National Natural Science Foundation of China (Grant Nos. 11633003, 12025301, and 11821303) and the National Key R&D Program of China (Grant Nos. 2018YFA0404502 and 2016YFA040080X).
† Corresponding author, Ming Zeng, zengming@mail.tsinghua.edu.cn
‡ Corresponding author, Ji-Rong Cang, cangjr14@tsinghua.org.cn
The first detector prototype, onboard a 6U (30 cm × 20 cm × 10 cm) CubeSat developed by Spacety Co. Ltd., a commercial satellite company in China, was launched into a Sun-synchronous orbit [1] with an altitude of 500 km and an inclination of 97.5°. It was in a normal observation state and accumulated data for approximately one month after its on-orbit functional and performance tests, which were conducted in 2019. The second detector involves only minor hardware modifications and improvements relative to the first: an aluminized polyimide film was used to better insulate against sunlight, and the leakage current monitoring circuits of the SiPM arrays were modified to provide a wider measurement range. The second detector was launched into a Sun-synchronous orbit with an altitude of 500 km and an inclination of 97.3° by a Long March 6 rocket on November 6, 2020, and has accumulated more than 300 h of on-orbit observation data. Multiple GRBs were observed. All the corresponding scientific data will be collected and published by the National Space Science Data Center (NSSDC) in the future. In this paper, we present the detailed design of the detector, electronics, and firmware, and discuss the energy resolution, low-energy X-ray detection performance, and high-rate performance of the detector.
II. DETECTOR STRUCTURE
A schematic and photograph of the first detector fabricated for GRID mission are shown in Fig. 1. The detector comprises four GAGG:Ce scintillators coupled with four SiPM arrays on one SiPM board. Each SiPM array comprises 4 × 4 J-60035-type SiPMs from SensL. The standard output pins of the SiPMs are connected in parallel on a front-end electronics (FEE) board. The following DAQ board provides four-channel signal digitization, power distribution, communication, and detector control. The detector has dimensions of 5 cm × 9.4 cm × 9.4 cm, occupying half of a CubeSat of standard dimensions (units or 'U'), and a power consumption of 3 W (5 V × 0.6 A). The detector provides a novel electronic interface supporting the serial peripheral interface (SPI) and universal asynchronous receiver/transmitter (UART) protocols and a pulse-per-second (PPS) interface. The features of the first detector, a modularized CubeSat payload suitable for different nanosatellite platforms, are summarized in Table 1.
With an increase in the number of parallel connected SiPMs on the same channel, the output capacitance and dark count rate of the SiPM array increase, which reduce the SNR. Therefore, an SiPM array smaller than the bottom area of the scintillator is utilized, similar to many detectors where the scintillator is coupled with SiPMs. The ESR is cut into a particular shape and fully covers the top and side of the scintillator. Moreover, there is a 2.2 cm×2.2 cm square window at the bottom-center for the scintillation light collection, as shown in Fig.2. By measuring the 661.7 keV full-energy peak positions of the 137 Cs source using different packages, we found that the light collection efficiency of this type of package is 62% that of a full coupling of the scintillator bottom surface. The optical grease is used for the optical coupling between the scintillator and SiPM array in the first payload. It is replaced with a silicon sheet in the second payload because of the fixed shape and good light collection efficiency of the silicon sheet.
B. SiPM
The SiPM has numerous attractive features, such as its small size, insensitivity to magnetic fields, low power consumption, and light weight, which are crucial in space mission applications [2], particularly on nanosatellites. Therefore, four SiPM arrays are utilized as photoelectric converters. The SiPMs are the J-60035 type manufactured by SensL and have a high photon detection efficiency (PDE) curve that matches the emission spectrum of GAGG:Ce reasonably well, with a low dark count rate. Each SiPM chip has dimensions of 6.13 mm × 6.13 mm × 0.6 mm and consists of 22,292 single-photon avalanche diodes with a microcell fill factor of 75%. The bias voltage supplied to the SiPM is 28.5 V, which is considerably lower than that required by a PMT. To reduce the dark noise and output capacitance of the SiPM array and to manage costs, a 4 × 4 SiPM array with a 2.45 cm × 2.45 cm area is adopted, covering approximately one-third of the GAGG:Ce scintillator bottom surface area, as mentioned previously.
Four SiPM arrays are integrated on a single board designed and manufactured in-house (Fig. 3). An SiPM through-silicon-via (TSV) package ensures that the SiPM fill factor of the printed circuit board (PCB) footprint is over 93%. Each array is powered independently, and the fast and standard outputs of every single SiPM chip are independently routed to the FEE through a high-density connector QTE_040_03_F_D_A manufactured by SamTec. Because the breakdown voltage of the SiPM changes with temperature, which affects the gain of the SiPM, a temperature-monitoring chip is placed on the other side of the PCB for the correction of the SiPM gain.
C. FEE
A schematic of the FEE is shown in Fig. 4. The standard output signals from one 4 × 4 SiPM array are directly shorted together on the FEE and fed to a transimpedance amplifier (TIA) via AC coupling, while the fast output signal pins are left floating. A 2 kΩ resistor connects the standard output of the array to ground as the direct-current (DC) path of the standard output, which restrains the current of the SiPM array to improve system robustness. A standard high-speed amplifier, the OPA656, was adopted as the TIA, and the parameters of the TIA were optimized by detector modeling, as described below. The TIA is followed by a low-pass filter circuit that inverts the signal and adjusts its amplitude. The filter output is directed into two paths, connected to the trigger and peak-hold circuits. The trigger circuit comprises a hysteresis comparator with an adjustable threshold and a monostable pulse generator, the LTC6993, generating a 2 µs high-level trigger signal without retriggering. The high-bandwidth peak-hold circuit comprises an operational transconductance amplifier (OTA), the OPA615, and an electronically controlled analog switch for discharge. The four trigger and four peak-hold signals are fed to the DAQ through a standard 2.54 mm connector. For such a compact detector, considerable effort has been invested in improving the SNR, including the scintillator reflector design, the SiPM array design, and the use of high-speed amplifiers. However, there is little flexibility in the choice of scintillator and SiPM, whereas the TIA parameters significantly affect the SNR and can be analyzed and optimized in detail.
The transient response of the GAGG:Ce scintillator can be expressed as a single exponential decay signal, and its normalized transfer function is

$$H_{\mathrm{GAGG}}(s) = \frac{1}{1 + \tau_{\mathrm{GAGG}}\, s},$$

where τ_GAGG = 100 ns is the GAGG:Ce decay time.
An accurate electrical model of the SiPM is complex. However, neglecting the equivalent input resistance of the FEE and the quench capacitance simplifies the SiPM transient response to a single exponential decay signal, with the normalized transfer function

$$H_{\mathrm{SiPM}}(s) = \frac{1}{1 + \tau_{\mathrm{SiPM}}\, s},$$

where τ_SiPM ≈ 38 ns is the recovery time of the SensL J-series SiPM [10][11].
With proper selection of the feedback capacitance and resistance, the TIA can be treated as a second-order Butterworth filter with the transfer function

$$H_{\mathrm{TIA}}(s) = \frac{R_F\,\Omega^2}{s^2 + \frac{\Omega}{Q}\, s + \Omega^2},$$

where Ω = 2πF, F = GBP/(2πR_F C_D) is the approximate -3 dB bandwidth of the TIA circuit, GBP is the gain-bandwidth product of the OPA656, R_F is the feedback resistance, and C_D is the output capacitance of the SiPM array. Here, Q = 0.707 is the quality factor of the Butterworth filter. Thus, the output pulse waveform of the TIA can be expressed as

$$V_{\mathrm{out}}(t) = E \cdot LY \cdot CE \cdot PDE \cdot e \cdot G \cdot \mathcal{L}^{-1}\!\left[H_{\mathrm{GAGG}}(s)\,H_{\mathrm{SiPM}}(s)\,H_{\mathrm{TIA}}(s)\right](t),$$

where E is the incident photon energy, LY is the light yield of GAGG:Ce, CE is the scintillation light collection efficiency determined by the scintillator and its packaging, PDE is the PDE of the SiPM array, e is the elementary charge, and G is the gain of the SiPM.
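To get a feeling for the resulting pulse shape, the cascade of the three transfer functions can be evaluated numerically. The sketch below is our own illustration, not the flight code: it convolves the two exponential impulse responses with an analog second-order Butterworth stage, taking the band-limiting frequency F0 ≈ 1 MHz quoted later in the text as the filter bandwidth (an assumption), and reports the normalized peak amplitude.

import numpy as np
from scipy import signal

tau_gagg = 100e-9   # GAGG:Ce decay time (s)
tau_sipm = 38e-9    # SiPM recovery time (s)
f3db = 1e6          # assumed overall bandwidth, ~F0 = 1 MHz from the text

dt = 1e-9
t = np.arange(0, 5e-6, dt)
h_gagg = np.exp(-t / tau_gagg) / tau_gagg   # normalized impulse responses
h_sipm = np.exp(-t / tau_sipm) / tau_sipm

# Analog second-order Butterworth low-pass (Q = 0.707), unit DC gain.
b, a = signal.butter(2, 2 * np.pi * f3db, btype="low", analog=True)
_, h_tia = signal.impulse((b, a), T=t)

pulse = np.convolve(np.convolve(h_gagg, h_sipm) * dt, h_tia) * dt
pulse = pulse[: len(t)]
print("normalized peak:", pulse.max(), "at t =", t[pulse.argmax()], "s")
# Multiplying by E*LY*CE*PDE*e*G*R_F would convert this to a voltage peak.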
The equivalent output noise voltage of the TIA contributed by the SiPM dark counts can be treated as a random pulse train. The standard deviation of the dark count noise can be obtained from Campbell's theorem [12]:

$$\sigma_{\mathrm{dark}}^2 = \bar{n}\int_{0}^{\infty} h^2(t)\,dt,$$

where n̄ is the dark count rate, n̄ ≈ 90 kHz/mm² × 576 mm² for one SiPM array at 20 °C with an operating voltage of 28.5 V, and h(t) is the dark count pulse. The dark count pulse can be considered a Dirac delta pulse eGδ(t) convoluted with the SiPM and TIA response functions; thus,

$$h(t) = eG\,\mathcal{L}^{-1}\!\left[H_{\mathrm{SiPM}}(s)\,H_{\mathrm{TIA}}(s)\right](t).$$

Considering that the output noise of the TIA is band-limited, the equivalent output noise voltage contributed by the TIA itself can be estimated by the simple expression

$$\sigma_{\mathrm{TIA}} \approx \sqrt{\left(I_N^2 R_F^2 + E_N^2 + 4kTR_F\right)F_0},$$

where I_N and E_N are the input current and voltage noise of the OPA656, respectively, 4kT R_F is the thermal noise of the feedback resistor, and F_0 ≈ 1 MHz is the band-limiting frequency of the TIA and low-pass filter system. Therefore, the SNR can be defined as

$$SNR = \frac{\max_t V_{\mathrm{out}}(t)}{\sqrt{\sigma_{\mathrm{dark}}^2 + \sigma_{\mathrm{TIA}}^2}}.$$

The SNR varies with the feedback resistance R_F, as shown in Fig. 5. R_F was set to 500 Ω to satisfy both the SNR and an appropriate gain.

A. DAQ

The DAQ board comprises the power regulator, analog-to-digital converters (ADCs), MCU, embedded multimedia card (eMMC), and communication interface to process signals from the FEE, supply electricity to all analog and digital devices, supply and control the bias voltage of the SiPM arrays, process commands, format data for storage, and transmit data to the spacecraft, as shown in Fig. 6. The power system regulates the +5 V input voltage to ±5 V for the analog devices, +2.5 V for the ADCs, and +3.3 V for the digital devices. An adjustable bias voltage of 0-40 V for the SiPM arrays can be generated through the SiPM bias voltage supply module, which also monitors the bias voltage and leakage current. The DAQ core is a 32-bit ARM Cortex M0+ MCU, the KEA128, with 16 kB static random-access memory (SRAM) and 128 kB flash, manufactured by NXP; it is an automotive-grade MCU optimized for cost-sensitive applications with exceptional electromagnetic compatibility (EMC) and electrostatic discharge (ESD) robustness. A 512 MB single-level cell (SLC) eMMC stores the raw science and housekeeping data with high reliability. The eMMC can store approximately 12 h of data based on the existing data format definitions and a background count rate of 500 counts/s. Peak-hold signals from the FEE are sampled by four individual ADCs with a 1 M sample rate and 16-bit precision. The four internal ADCs of the MCU serve as alternatives and can be selected through a gating switch for redundancy.
As shown in Fig. 6, the DAQ electronics provides an interface to the spacecraft, which comprises a data bus using a differential SPI protocol with LVDS levels for raw science and housekeeping data transmission, a UART interface for firmware updates, a PPS interface for time calibration, and several general-purpose input/output (GPIO) lines for MCU reset, data request, boot configuration, and burst trigger (denoted as Telemetry and RESET).
B. Flight firmware
The flight firmware operates on the MCU without an operating system and comprises two parts: the boot loader and application program, residing in the internal flash memory of the MCU. The firmware provides limited online data processing capacity because of the low MCU performance. However, the configuration of all hardware, response to the triggers, command reception and processing, data storage and transmission, control and monitoring of the detector, and the application program update are provided, which can satisfy all necessary on-orbit requirements.
After the MCU is powered on, it first runs the boot loader to check the Config&Trigger pin level, and updates the application program if high, or jumps to the application program if low.
The application program is interrupt driven. Interrupts are generated on the following events: (1) FEE trigger; (2) timer interruption per second to record housekeeping data; (3) PPS from the GPS module; (4) spacecraft commands.
When the FEE trigger signals interrupt the MCU, the peak-hold signals are sampled by the four individual ADCs or the four internal ADCs of the MCU, and the internal clock, PPS count, and UTC time are recorded. When the converter sampling is finished, the MCU discharges the peak holder. The four channels work in a single thread in the present configuration, so the other channels are in dead time while one channel is triggered. The peak values and time information are stored in the eMMC. Every 1 s, the MCU records the housekeeping data, as listed in Table 2. The internal clock is recorded when the PPS triggers the MCU, allowing accurate time reconstruction of incident photons.
Currently, the application program processes the following commands from the spacecraft.
(2) Set the bias voltage of each SiPM array and the trigger threshold of each channel. In daily operation, the instruction sequence is sent to the spacecraft from the Earth station. The spacecraft then adjusts its attitude, powers on the detector, commands the detector to set the SiPM bias voltage and start the observation, and powers off the detector after a specific observation time. The data stored in the detector eMMC are read into the POBC's eMMC and downloaded at an appropriate time.
C. Data format
The DAQ produces two types of data packets: raw science and housekeeping data. All raw data are stored in the eMMC as a series of 512-byte packages. The definitions of the housekeeping and raw science data are summarized in Tables 2 and 3, respectively. In a 512-byte raw science data package, the first trigger event occupies bytes 3-25, and the other 43 trigger events occupy bytes 26-499, each in the same format as shown for bytes 26-36.
In a 512-byte housekeeping data package, there are seven housekeeping datasets occupying bytes 3-492, each in the same format as shown for bytes 3-72. "UT", "PPS count", and "Internal clock" indicate when the housekeeping data package was recorded. "PPS to UTC" indicates the PPS count at the last time UTC was received, and "Internal clock to PPS" denotes the internal clock value at the last time a PPS was received. From these data, we can precisely map the MCU internal clock to UTC time, which is significant for GRB triangulation.
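The fixed 512-byte layout makes packet handling simple. The following sketch splits a raw science package into its event records using only the byte offsets given above; the individual field encodings inside each record are defined in Table 3 and are not decoded here, and the function name is ours.

def split_science_package(pkg: bytes):
    """Split one 512-byte raw science package into raw event records.
    Byte offsets follow the text: header bytes 0-2, first event bytes 3-25,
    then 43 further 11-byte events starting at byte 26 (which reach byte 498;
    the text quotes bytes 26-499, so any remainder is treated as padding)."""
    assert len(pkg) == 512
    header = pkg[0:3]
    first_event = pkg[3:26]                       # bytes 3-25 (23 bytes)
    events = [pkg[26 + 11 * i : 26 + 11 * (i + 1)] for i in range(43)]
    return header, first_event, events

header, first_event, events = split_science_package(bytes(512))
print(len(first_event), len(events), len(events[-1]))  # 23 43 11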
V. PERFORMANCE
The detector performance can be investigated through experimental calibration and simulation [14]. The performance of the GRID detector, including the energy-channel relations at different temperatures and biases, spatial uniformity, energy resolution, effective areas, angular responses, and detector noise, was calibrated experimentally in detail on the ground, and the effective areas and angular responses were also investigated using a Monte Carlo simulation. For calibration, the detector was irradiated with collimated radioactive sources in the laboratory (from 14 keV to 1.4 MeV). Because of the small number of radioactive-source emission lines in the low-energy range, calibration measurements from 10 to 160 keV were performed with X-ray radiometry at the National Institute of Metrology of China. The detailed calibration and simulation results will be described in a future study. Here, we summarize the key features of the low-energy performance, energy resolution, and high-rate performance, as well as the temperature dependence of the GRID detector.
A. Low-energy performance
Based on the SNR expression derived previously, if the minimum required SNR is 6, meaning that the peak signal amplitude is six times the standard deviation of the noise, the lower limit for low-energy detection can be derived theoretically. The light yield of GAGG:Ce was calibrated experimentally in-house: the GAGG:Ce and a LaBr3:Ce scintillator with a known light yield were each irradiated by a 241Am source, and the same PMT and electronics were used for the scintillation light readout. Considering the quantum efficiency of the PMT cathode, the light yield of GAGG:Ce is estimated to be 16 ph/keV by comparing the 59.5 keV peak positions. Under normal observation conditions, with a temperature of 20 °C and an SiPM operating voltage of 28.5 V, a scintillation light collection efficiency of 62%, and a PDE of the SiPM array (defined as the PDE of a single SiPM multiplied by the array's fill factor) of approximately 27.9%, the lower limit of the detector is 13 keV. The light yield of this crystal is lower than that reported in a previous study [9]. The provider of the GAGG:Ce crystal has since improved the crystal growth process; a crystal with a higher light yield will therefore be used in the next detectors, and the lower limit of the GRID detector is expected to extend down to 10 keV. In addition, with the development of SiPMs with lower dark count rates [15] and higher PDE [16], the low-energy performance of this type of detector can be further improved.
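As a quick plausibility check on these numbers, the expected photoelectron yield at the quoted threshold can be computed directly from the stated light yield, collection efficiency, and PDE. This back-of-the-envelope sketch is ours and ignores excess noise and gain fluctuations.

LY = 16.0     # GAGG:Ce light yield (ph/keV), from the in-house calibration
CE = 0.62     # scintillation light collection efficiency
PDE = 0.279   # effective PDE of the SiPM array

def photoelectrons(energy_kev: float) -> float:
    """Mean number of detected photoelectrons for a given deposited energy."""
    return energy_kev * LY * CE * PDE

for e in (10, 13, 59.5):
    print(f"{e:5.1f} keV -> {photoelectrons(e):6.1f} p.e.")
# ~36 p.e. at the 13 keV threshold under these assumptions.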
The trigger threshold was set to 20 mV (approximately 13 keV), and a spectrum of 241Am with 59.5 and 13.5 keV X-rays was measured, as shown in Fig. 7. The performance of the four channels is not exactly the same owing to differences in scintillator light yield and light collection efficiency. However, the peak of the 59.5 keV X-ray can be observed clearly, while the 13.5 keV peak and the dark count noise are mixed together near the threshold.
B. Energy resolution
The energy resolution was calibrated with radioactive sources in the laboratory, and the results therefore include the detector and electronics noise as well as the statistical fluctuations and energy nonlinearity. The sources used for calibration, with their emitted photon energies, are listed in Table 4. As shown in Fig. 8, the energy resolution ΔE/E is approximately proportional to 1/√E and is approximately 9% at 662 keV. The poor fit in the low-energy region is due to the nonlinearity of GAGG:Ce, particularly the distinct inconsistency at 81 keV, which is caused by the X-ray absorption edge of GAGG:Ce at approximately 70 keV [9].

C. High-rate performance

Two effects typically impair the performance of scintillation detectors at high photon rates: dead time and pulse pile-up. Pulse pile-up occurs when the count rate is so high that pulses from successive events overlap in the FEE, which causes distortions in the measured spectrum that are difficult to characterize. These distortions are generally treated as systematic errors in the determination of the gamma-ray spectrum. Owing to the high bandwidth of the TIA and filter, the pulse of a single event lasts less than 1 µs from trigger generation to baseline recovery, which introduces little distortion into a measured GRB spectrum, according to the corresponding discussion of the Fermi GBM detector [13]. However, another type of pile-up occurs at the peak-hold circuit: a small signal can be overridden by a larger signal. This problem is solved by a limit switch that clamps the peak-hold circuit input to ground level while the channel is triggered.

Fig. 9. Contribution of dead time in GRID. When the MCU is triggered by the FEE trigger output signal, a conversion input (CNV) is generated after 6 µs to initiate the ADC conversion. The ADC conversion takes 4 µs, followed by a 26 µs MCU-ADC SPI communication time. The discharge signal is then generated after 2 µs and lasts for 12 µs until the ADC value is wrapped into the eMMC.
The nominal detector dead time is approximately 50 µs per event, mainly consisting of the MCU-ADC and MCU-eMMC communication, as shown in Fig. 9. In the latest firmware version, which is used in the second GRID detector, the dead time has been reduced to 15 µs with the same hardware design.
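For rate estimation during bright transients, a fixed per-event dead time of this kind is commonly corrected with the standard non-paralyzable dead-time model; the paper does not state which model it uses, so the sketch below only illustrates the magnitude of the effect at the two quoted dead times.

def true_rate(measured_rate: float, dead_time: float) -> float:
    """Non-paralyzable dead-time correction: n = m / (1 - m * tau)."""
    loss = measured_rate * dead_time
    if loss >= 1.0:
        raise ValueError("measured rate saturates the detector")
    return measured_rate / (1.0 - loss)

for tau in (50e-6, 15e-6):           # first vs. second detector firmware
    for m in (500.0, 5000.0):        # background vs. bright-burst rates (1/s)
        print(f"tau={tau*1e6:4.0f} us, measured={m:6.0f}/s "
              f"-> true={true_rate(m, tau):7.1f}/s")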
D. Temperature dependence
As the breakdown voltage of an SiPM varies with temperature, the gain of the SiPM arrays is significantly affected by temperature, so the relationships between gain, temperature, and bias voltage should be calibrated [17]. Figure 10 shows the calibration results for the temperature dependence of GRID detector channel 0. The gain decreases with increasing temperature, and the other channels show the same behavior as channel 0. In a GRID detector, the bias voltage is not varied with temperature to stabilize the gain; instead, the temperature of the SiPM arrays is recorded for offline correction of the energy-channel relation of the detector.
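Offline, such a correction amounts to interpolating the calibrated gain curve at the recorded temperature and rescaling the measured channel. The sketch below shows the idea with placeholder calibration points; the numbers are purely illustrative and are not the calibration values of Figure 10.

import numpy as np

# Placeholder calibration: relative gain vs. temperature at fixed bias.
cal_temp = np.array([-10.0, 0.0, 10.0, 20.0, 30.0])   # deg C (illustrative)
cal_gain = np.array([1.20, 1.12, 1.06, 1.00, 0.94])   # relative to 20 deg C

def corrected_channel(adc_channel: float, temperature: float) -> float:
    """Rescale a measured ADC channel to the reference-temperature gain."""
    rel_gain = np.interp(temperature, cal_temp, cal_gain)
    return adc_channel / rel_gain

print(corrected_channel(1000.0, 27.5))  # channel as it would read at 20 deg C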
VI. CONCLUSION
GRID mission is a student project with a dedicated and straightforward scientific goal: to detect and locate GRBs produced by neutron star mergers, jointly with ground-based GW detectors, in the local universe. In this paper, the detailed design of a GRID detector, its electronics, and its firmware were introduced, together with the energy resolution and the low-energy and high-rate performance of the detector. Further calibration of the detector, including the angular response, detection efficiency, and temperature response, using both simulations and experiments, will be reported in detail in a future study.
GRID mission was initially proposed and developed by students, with a considerable contribution from undergraduate students, and shall continue to operate as a student project in future. The current GRID collaboration involves more than 20 institutions and continues to grow. The purpose of GRID mission is twofold. In addition to its scientific goals, we hope to attract excellent students from different disciplines into astrophysics and train them to organize and participate in a multi-disciplinary collaboration, while learning how to build a real science project that covers hardware, data, and science.
In conclusion, GRID mission is a scientific collaboration that accepts students and scientists worldwide. Members can launch their own detectors, share the data, and produce scientific results under certain agreements. The detailed hardware and firmware design materials described in this paper will be part of a standard design package to be delivered and shared within the GRID collaboration community. | 2021-04-30T01:16:07.972Z | 2021-04-29T00:00:00.000 | {
"year": 2021,
"sha1": "0b182b0808d1c407b37a941708b5703772be5273",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2104.14228",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "0b182b0808d1c407b37a941708b5703772be5273",
"s2fieldsofstudy": [
"Physics",
"Education"
],
"extfieldsofstudy": [
"Physics"
]
} |
2534859 | pes2o/s2orc | v3-fos-license | Seasonal BMI Changes of Rural Women Living in Anatolia
Today, obesity is one of the most evident public health problems in many parts of the world, and it is more common among women. Several factors affect women's obesity; among these, short-term weight fluctuations, whether gains or losses, cause severe health disorders, particularly in rural areas where seasonal activity differs significantly throughout the year. Since this issue has not been studied in detail, our research focused on the prevalence and probable causes of seasonal obesity among rural women in two rural areas of Turkey. The study was undertaken with 100 participants. One-way ANOVA and one-way repeated-measures ANOVA tests were utilized for the categorical, continuous and repeated variables, as the study contains more than two groups and repeated measurements. Overweight was more common in the 18-30 years and 50+ years groups, whereas the absence of obesity in the 50+ years group, except during the winter of 2010, is most probably due to the widespread occurrence of diabetes in this age group. The highest BMI values for all groups, 25.2 ± 3.39 in 2009 and 26.1 ± 3.40 in 2010, were recorded in winter, when physical activity is at its minimum, while summer BMIs were 24.1 ± 3.39 in 2009 and 25.1 ± 3.35 in 2010. This decrease was most probably due to intense agricultural field work in both regions. The majority of the women claimed that their weight balances out in summer, but the results revealed that participants did not lose all the weight gained during the winter months, although BMI showed a significant fall from spring to autumn.
Introduction
Changes and fluctuations in weight, particularly for obese subjects, are indicative of health problems [1][2][3] and create severe health disorders if the changes happen in the short term [4]. The risks of obesity are well documented by several studies [4][5][6][7]. The prevalence of obesity, contrary to popular belief, is more common in low-income segments of society owing to the consumption of low-cost but energy-dense foods such as carbohydrates [8]. While Sobal [9] outlines the prevalence of obesity among low-income women, studies undertaken in Egypt and China found the opposite, an increase of weight with increasing welfare, particularly for rural women, as they tended to give up working in the field as household income rose [1,10,11]. In parallel with these studies, the intensity of overweight/obesity in both urban and rural areas exceeds malnutrition in 37 developing countries [12]; e.g., more than 60% of adults in Turkey are overweight/obese, with a higher prevalence among women [13]. The prevalence and effects of overweight/obesity in Turkey are increasing [14][15][16]. Overweight/obesity has been studied more in urban than in rural areas of Turkey [17,18]. Yumuk et al. [16] determined that 70% of the adult population in Konya (Central Turkey) was overweight/obese, with a higher rate for women than men. This finding is supported by another study undertaken in a suburban area in Turkey, which revealed that more than one quarter of the women had obesity or overweight problems [19]. The high rates of overweight/obesity in the country and the world can be attributed to better access to food, decreased physical activity, and the consumption of relatively cheap but energy-dense foods [18,20].
Studies of BMI (body mass index) are generally based on long-term trends. In rural areas, however, seasonal changes in daily life due to intense field work in the crop season significantly affect body weight, and repeated short-term (monthly or seasonal) gains and losses unfortunately tend towards a higher BMI. The reasons for the intensity of these short-term fluctuations should be taken into account in the prevention of obesity-related health problems. Our study therefore focused on the prevalence and causes of seasonal overweight and obesity in two rural areas of Turkey that are also representative of the country's high overweight/obesity zones. The outcomes of the study can also be used to mitigate high BMI among rural women and/or protect them from it [21].
Methods
The seasonal BMI variation of women in rural areas of Central (Karapınar) and Southeastern Anatolia (Adıyaman) was studied in 2009 and 2010. The study was undertaken in two villages in each town, namely Kuyucak and Doluca in Adıyaman (Southeastern Anatolia), and Yeşilyurt and Hasanoba in Karapınar (Central Anatolia). Obesity is a common phenomenon at both study sites [22]. Interviewers asked each participant a series of questions to record their eating habits and daily life, along with demographic properties.
Body weight and height for BMI measurement were taken in autumn, winter, spring, and summer of 2009 and 2010 to evaluate seasonal changes in the subjects' BMI [24]. A calibrated precision balance and a stadiometer were used for the weight and height measurements, which were undertaken with shoes removed and in light clothing. BMI ranges are classified as follows: underweight for <18.5 kg/m², normal for 18.5-24.9 kg/m², overweight for 25-29.9 kg/m², obese for ≥30 kg/m² [24]. BMI analyses were performed with SPSS 17.0 statistical software.
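A minimal sketch of the BMI computation and the classification cutoffs used above; the function names and the example measurement are ours.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def classify(bmi_value: float) -> str:
    """BMI categories as used in the study."""
    if bmi_value < 18.5:
        return "underweight"
    if bmi_value < 25.0:
        return "normal"
    if bmi_value < 30.0:
        return "overweight"
    return "obese"

b = bmi(68.0, 1.62)               # example measurement
print(round(b, 1), classify(b))   # 25.9 overweight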
Outdoor agricultural activities generally begin 15 days earlier in SE Anatolia (late February) owing to earlier spring conditions. The agricultural activities are grazing, field preparation by hoeing, fertilizing, irrigation and harvesting. Owing to second-crop cultivation at both sites, agricultural activities continue until late autumn (harvest); however, SE Anatolian women generally work 10-15 days longer because of climatic conditions milder and more suitable for grazing and crop cultivation than in C. Anatolia. Cereals and sugar beet are the major crops in C. Anatolia (Karapınar), whereas cotton and cereals dominate in SE Anatolia (Adıyaman). Women reported spending approximately 11 hours per day in the field on agricultural activities (Table 1). In addition to field activities, vegetables and fruits are grown in house gardens between spring and autumn. Following the intense field work from spring to late autumn, women switch to a more sedentary life in winter. With the recent introduction of irrigation in Karapınar (C. Anatolia), annual household income there has increased significantly compared with the standard rainfed agriculture of Adıyaman: the annual income of women working in agriculture in C. Anatolia is roughly 5,000-6,000 USD, whereas participants living in the southeastern region earn 3,000-4,000 USD. The incomes at both sites are nevertheless well below the country's mean annual gross domestic product per capita of 10,000 USD [25]. Moreover, the rate of literacy, which also affects nutrition habits, is still quite low at both sites, with Southeastern Anatolia lower than Central Anatolia (Table 2).
Results
A total of 100 women, 25 at each site, participated in the study. All participants were involved in agricultural practices. Subjects were grouped into four age ranges, namely 18-30, 31-40, 41-50 and 50+ years, as marriage and working in the field broadly follow these age ranges. The average age of the participants was 43.9 ± 12.0 years in C. Anatolia and 39.0 ± 12.9 years in SE Anatolia, with high illiteracy rates (Table 2). The low number of younger women at both sites is most probably due to the younger generation's decision to move to city centers for better job and education opportunities.
The present weekly diet is still dominated by energy-dense carbohydrate foods, since they can be obtained at low cost and/or produced by the villagers themselves (Table 3). More than one-third of food consumption in C. Anatolia is centered on wheat products, followed by dairy products. SE Anatolian women's food consumption is also based on wheat products and potatoes, but with fewer dairy products (Table 3); the relatively large livestock holdings in C. Anatolia increase the consumption of dairy products there. Fresh vegetables are used in salads or as additives to main courses, so their consumption is not as high as is desirable for a healthy diet [24]. The use of oil at both sites increases in winter owing to the preparation of traditional oily stews such as dry beans, pilaf and potato dishes [26]. Sugar is also widely consumed in both regions for making desserts and in tea; daily tea consumption reaches 13-15 cups a day at both sites. Except for the increase in oil consumption in winter and the availability of fresh vegetables and foods in summer, the overall food consumption patterns are much the same throughout the year (Table 3). The average daily calorie intake throughout the year is thus quite similar at both sites (Table 2), approximately 3,000-3,500 calories/day. This figure is quite high for a sedentary life in winter, since studies suggest a daily intake of 1,900-2,000 calories for those leading a sedentary life [27].
Outdoor agricultural activities begin in early March in C. Anatolia and in late February in SE Anatolia, and end in early November in the former and late October in the latter. Women's activity in the field, approximately 11 hours/day, reaches a maximum from the second half of summer to the second half of autumn (Figure 1). SE Anatolian women's working hours are nevertheless longer than those of C. Anatolian women owing to the longer period of second-crop cultivation. The major field activities of women in C. Anatolia are grazing and milking sheep, hoeing, irrigating, and harvesting vegetables and fruits, whereas SE Anatolian women generally work in cotton fields and orchards, hoeing and harvesting. Most of the women also grow vegetables and fruits in their house gardens for family consumption, which further increases daily working hours. Overweight is more common in the 18-30 years and 50+ years groups at both study sites (Figure 2), although there are fewer overweight women in the 18-30 years group than in the 50+ years group, as the younger women are more responsible for housework and child care throughout the year. With age, women's role in the field changes from physical activity to directing other women. The main finding of the study is the significant seasonal body weight fluctuation of rural women in C. and SE Anatolia: women gain weight from autumn to early spring and lose weight from spring to autumn at both study sites (Table 2, Figure 3). Changes in BMI of less than one unit may be considered insignificant, but the increasing trend over the two years of the study (Figure 2) will negatively affect women's health. The highest BMI was determined in winter, 25.2 ± 3.39 in 2009 and 26.1 ± 3.40 in 2010, when physical activity is at its minimum; in summer, BMI values decreased to 24.1 ± 3.39 in 2009 and 25.1 ± 3.35 in 2010 (Figure 2). BMI showed a significant fall from spring to autumn within each year, but the higher values in 2010 than in 2009 (Figures 2 and 3) manifest an insalubrious trend. The highest weight fluctuations throughout the year were observed among members of the normal and overweight groups (Figure 2), irrespective of the participants' age. Although the women claimed that they lost the weight gained in winter through the intensive outdoor agricultural activities from spring to fall, the measurements did not support this, given an average permanent gain of 2.6 kg after the two years of the study (Figure 4). Studies have reported a positive relation between low socio-economic status and high BMI among women in rural areas [9]. In contrast, the highest BMI and obesity prevalence were observed in Karapınar (C. Anatolia) rather than Adıyaman (SE Anatolia), even though the former site has a relatively higher income than the latter. We suggest that the higher income enables women to own more electronic appliances, which in turn decrease their daily household duties, as is the case in C. Anatolia [28]. Moreover, agricultural cropping is denser in SE Anatolia, with five crops produced in two years, which means more hours working in the field (Figure 1). Although the yearly BMI increase within each area is not statistically significant (p > 0.05), a positive relation exists between BMI increase and weight at each study site (Table 4).
The repeated-measures ANOVA analyses between the same months of 2009 and 2010 revealed a significant difference in the participants' BMI (F = 376.698, p < 0.001), which is increasing over the years (Figures 3 and 4, Tables 4 and 5).
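Although the study used SPSS, the same one-way repeated-measures ANOVA can be reproduced in Python. The sketch below uses statsmodels' AnovaRM on synthetic long-format data (subject, season, BMI); the generated values are placeholders built around the seasonal means quoted in the text, not the study's measurements.

import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
subjects = range(100)
seasons = ["winter09", "summer09", "winter10", "summer10"]
season_mean = {"winter09": 25.2, "summer09": 24.1,
               "winter10": 26.1, "summer10": 25.1}  # means from the text

rows = [{"subject": s, "season": seas,
         "bmi": rng.normal(season_mean[seas], 3.4)}
        for s in subjects for seas in seasons]
df = pd.DataFrame(rows)

# One-way repeated-measures ANOVA: BMI across seasons within subjects.
res = AnovaRM(data=df, depvar="bmi", subject="subject",
              within=["season"]).fit()
print(res.anova_table)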
Discussion
Agricultural practices in rural areas of Turkey are still labor-intensive. Women participate in grazing, milking, weed control, pruning, hoeing, budding, cotton picking and harvesting (fruits and vegetables), all of which require considerable physical effort. Besides field activities, housework is also managed by women. Agricultural activities are intense from spring to autumn in Turkey, whereas from late autumn to early spring women shift to a sedentary life. The lower energy expenditure during these 5 to 6 months of the year has a negative effect on body weight. Changes in physical activity affect body weight more than seasonal diet at the two sites, because the diet patterns of both sites are centered on high-carbohydrate, easily available wheat products, pasta and potatoes. The extreme seasonal changes in daily physical activity, i.e., from energy-demanding outdoor agricultural activities to a sedentary life, cause significant short-term fluctuations in body weight and mostly result in a BMI that increases over the years. Our study demonstrated the negative effect of these seasonal fluctuations on BMI, over a relatively short term in human life, due to the intensity of physical activity and the diet habits of women living in rural C. and SE Anatolia. Xavier and Sunyer [29] reported negative effects of fast weight loss and/or gain in the short term, including gallstone formation and cholecystitis, excessive loss of lean body mass, water and electrolyte problems, mild liver dysfunction, and elevated uric acid levels. The change in cropping pattern from one crop/year to two to three crops/year, driven by the demand for higher income, has increased field activity at both sites, and to meet the energy requirements of this high physical activity women tend to consume more carbohydrates and fatty foods, a trend also seen in other parts of the world [20]. The high consumption of carbohydrate-rich foods at the study sites is attributed to two factors, traditional eating attitudes and the low socioeconomic status of rural women, both of which contribute to the increasing prevalence of overweight/obesity [28].
Increasing BMI and obesity with age is a common problem worldwide [1,3,30]. An increase of BMI in winter is common in rural areas of the Northern Hemisphere, where agricultural activities are then at their minimum [31]. Several studies have suggested that obesity and high BMI in most parts of the world are linked to inequalities in education and income [3,5], and the tie between socioeconomic status and obesity is more strongly inverse among women in developed societies [32]. Thus, the high illiteracy rate and low socio-economic status, along with the traditional eating habits of women in rural C. and SE Anatolia, result in a high BMI. In contrast to previous studies suggesting a positive relation between low income and obesity, the relatively higher socio-economic status of C. Anatolian women correlates positively with high BMI and obesity: the highest BMI and obesity prevalence were observed in Karapınar (C. Anatolia), whereas relatively low obesity and BMI were determined in Doluca (SE Anatolia). This may be attributed to the use of fewer electronic home appliances, such as vacuum cleaners, washing machines and dishwashers, by women in SE Anatolia than in C. Anatolia [33].
The traditional diet at both sites is based on wheat products, lentils and chickpeas, owing to their local production over thousands of years. Traditional bread (free of salt and oil), lentils and chickpeas are healthy foods given their mode of preparation. However, with the introduction of cash crops at both sites, the traditional cultivation of chickpeas and lentils has become negligible; their proportion in the daily diet is now very low, and eating patterns at both sites are based on oily foodstuffs.
The group most affected by the prevalence of obesity and overweight is the 18-30 years group, which may be attributed to the heavy field work load on the younger generation, resulting in large fluctuations of weight gain and loss within a year. Moreover, health problems such as diabetes [31] may have an additional effect on weight gains and losses in the 50+ years group, but this assumption requires further detailed study in the research area. Health professionals' contributions are therefore needed for further studies, along with detailed information on daily food consumption. When participants were told of the danger of gaining weight in winter, they responded that they lose the weight in summer during the intense field activity. However, our results showed that the majority of the women did not lose all the weight gained during the winter months (Figure 3).
The high-calorie food consumption of the 18-30 years age group in winter most probably stems from continuing the diet habits they follow in spring and summer, when physical activity is at its peak. The pattern of obesity and BMI in Karapınar and Adıyaman is in accordance with that seen in women in Ghana, where more time is devoted to agricultural tasks by women [34].
Conclusions
Women living in rural areas were until recently known to have lower BMI than their counterparts in cities, due to the high energy required by intensive labor in the field. However, with the introduction of machinery for agricultural practices and of home electronic appliances such as washing machines and vacuum cleaners for housework, women spend fewer working hours both at home and in the field. In addition, owing to the lower energy expenditure in winter without any change in the high-calorie diet from late autumn to early spring, the BMI of rural women in Turkey shows an increasing trend. Another risk is the severe effect of short-term fluctuations of body weight, i.e., losses in the cropping season and gains in the winter season. Moreover, women believe that winter weight is easy to lose during the intense field work of spring and summer, but our outcomes showed that winter weight is not totally lost during field activities, as slight weight increases were determined after the two years of study. Weight increase in general triggers the development of several health disorders, particularly cardiovascular ones; however, these health problems need further study at both sites. Thus, women should be informed and take precautions to decrease their consumption of the traditional high-carbohydrate foods in the winter season. In the studied areas access to healthy foods such as vegetables is easy; however, the difficulty of giving up traditional habits and of gathering women for training courses in centers necessitates individual information activities, such as visiting women at their homes, which can be organized via governmental health agencies or local administrations' public service departments. | 2016-03-22T00:56:01.885Z | 2012-04-01T00:00:00.000 | {
"year": 2012,
"sha1": "aa0a28c69197bea33bea37ea9a30f218ddeeb4b6",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/9/4/1159/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "aa0a28c69197bea33bea37ea9a30f218ddeeb4b6",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
127651207 | pes2o/s2orc | v3-fos-license | Riesgo de inundaciones y estrategias de mitigación en los suburbios del sudeste de la ciudad de Fez (Marruecos) ; Flood risk and mitigation strategies in the southeastern suburbs of Fez City (Morocco)
The risk of flooding in the south-eastern suburbs of Fez (Morocco) was engendered by multiple factors (topographic, hydro-climatic, land use, social, technical). This study focuses on the Aouinate El Hajjaj district and shows that the district's evolution, characterized by informal transactions (speculation, land squatting, construction without respect of norms and regulations), was a major factor in the genesis of flood risk, as the district extended into vulnerable sectors. Public efforts to resolve the district's crisis and reinforce its environmental safety began in 1993 (two dams and channels were constructed, streets were paved and basic facilities were provided, such as drinking water, sanitation and electricity), but the risk factors persist. Technical solutions are locally recognized to be useful, but they are not sufficient. Other mitigating actions, including the reduction of physical and socioeconomic vulnerability and the strengthening of the social structure of the community, should be considered from a systemic point of view. They constitute elements of an alternative strategy in urban planning and development and must be framed by laws and regulations. Their use could be undertaken at individual, community and State levels, depending on the specific dimension of each measure. A vision which integrates local and regional scales is essential in risk studies, in order to design a more sustainable action. The role of non-governmental organisations is important in flood mitigation and urban environmental policy. The incorporation of flood prevention as a parameter of urban planning should be considered by planners and the local population.
INTRODUCTION
The goal of achieving sustainable urban development has emerged in recent decades as a major preoccupation of scientists and developers. It is conditioned by the achievement of environmental equilibrium and security (Burton et al., 1993; Holden, 2006), economic efficiency, social equity and regional sustainability (Beck, 2002; Ramos, 2009). Branscomb (2006) focused on safety and security to define sustainable cities; their districts must balance ecological, social, and economic needs. Environmental investigation is becoming one of the approaches that seek better means of assessing urban trends on the basis of risk prevention and human wellbeing (Besson, 1996; Schmidt-Thomé, 2005; Camfield et al., 2009; Douglas et al., 2010; Radojevic, 2012).
In the last decade, the analysis of vulnerability to hazards, long dominated by engineering approaches, has been criticised for failing to engage with the social, political and structural causes of vulnerability within society (Adger, 2006, p. 271). Human ecologists have attempted to explain why the poor and marginalized are most at risk from natural hazards (Hewitt, 1983; Watts, 1983; Cutter et al., 2003). Poorer households tend to live in riskier areas of urban settlements, putting them at risk from flooding, disease and other chronic stresses. Assumptions that marginal areas and social groups are most exposed to environmental risks have been tested and confirmed in several areas of the world (Calvo García-Tornel, 2001; Aguilar, 2008; Furey and Lutyens, 2008).
In Morocco, water has been studied as a resource and as a factor of risk in rural and urban areas (Laouina, 1995; El Jihad, 2005; Bouaicha and Benabdelfadel, 2010; Gartet et al., 2010; Saidi et al., 2010). In the suburbs, where environmental degradation and flooding risks are spectacular, the explaining factors are either human-induced or linked to urban expansion in lowlands (Akdim et al., 2003). Climatic and global change factors are also invoked among the drivers of urban flood risks (Snoussi et al., 2008; Tramblay et al., 2012). However, urban water as an issue of sustainable cities is still rarely addressed, either in academic investigation or in operational research guiding decision-making. The urban flooding risk appears among the major environmental preoccupations of actors in resources' management and land planning (MATUEH, 2002; AEBS, 2005 and Reynard et al., 2011).
In Fez, most spatial dysfunctions in the city resulted from a lack of urban governance during the last decades (Ameur, 1993; Barrou, 2005), engendering marginal districts where environmental and social risks are frequent. As in most metropolitan cities of developing countries, accelerated population growth in Fez had many negative environmental consequences (Boukir, 1995; El Bouaaichi-Nadri et al., 2002), among which informal and spontaneous settlements are of great importance. In the past, the city experienced several types of environmental risk (slope instability and habitat collapse in its northern districts, floods in the south), of both physical and anthropogenic origin. These environmental risks were emphasized in the environmental monograph of the northeastern region (SEE, 2002). Waele et al. (2004) focused on the geo-environmental risk in the upper valley of the Oued Sebou and presented a detailed description of land degradation in the northern vicinity of Fez. El Bouaaichi-Nadri (2004a and 2004b) and Gartet (2007) reported that in the Fez suburbs urban growth is generally not mastered. Consequently, vulnerable districts developed, such as Aouinate El Hajjaj in the southeastern part, Hay El Hassani in the northern zone and Zouagha in the southwest. They show diverse risk indicators, due to the hydrologic factors, the geomorphic context and the low degree of settlement equipment, that degrade the quality of life and require attention. Hamdouni Alami (2004) and El Harchaoui (2008) studied the environmental impacts of under-equipment in selected districts and in the Medina of Fez, but the flooding risk was not treated. The combination of multiple factors in risk genesis had been experienced in Fez in the past.
The case of the Aouinate El Hajjaj district, in the southeastern suburb, is significant. It is located in a sensitive site, as it forms the upstream land of the projected Wislane tourist zone, which is planned to become a major tourist area of Fez in the future. The flooding risk in the Aouinate El Hajjaj district is compounded by topographical and meteorological factors. The district expands on the left flank of the Oued Boufekrane valley and extends to its bottom. This local factor is determinant for the district's vulnerability. In addition, the socioeconomic characteristics of its population, such as unemployment and poverty, favor this vulnerability, as households have other priorities (habitat and work) more urgent than environmental security or quality of life.
In this article we argue that mastering urbanization should be among the best ways to lower the intensity of potential flood risks in urban areas. Urban environmental vulnerability worsens wherever regulation (the urbanism law) is ignored or badly applied. We evoke the complexity of urban flooding risk in its physical and social dimensions, and the potential impacts of the regional context, which should be considered in urban planning.
The article aims to present the results of field research conducted from the perspective of applied urban development in the Fez suburbs. The general assumption that anarchic urban development reinforces vulnerability in marginal zones is tested. The study focuses on the following issues:
-Identifying the causes of flooding in the studied areas and their context.
-Evaluating the strategies adopted in urban development and the potential increase of risk.
-Apprehending their potential to manage urban drainage and reduce flooding in the district.
-Discussing alternative conceptions, including social and spatial approaches, to reduce the future impact of urban flooding on people and the environment.
We discuss propositions and alternatives to develop a sustainable strategy against flooding risks.
THE GENERAL CONTEXT
The city of Fez is located in the Saïs plain, a lowland which links the Rif mountains in the north to the Middle Atlas in the south (figure 1). It is formed of many contrasted districts and fragmented territories. Its total population in 2004 was 944,376 residents. The highest population densities in Fez are in the marginal districts. Approximately 80,000 residents live in the Aouinate El Hajjaj district, located in the administrative commune subdivision of Saïs (156,550 residents).
The watershed of the Oued Fez is located in the Saïs plain. Its total surface is 700 km², extending in the Saïs plain south of the city of Fez. It is fed by multiple tributaries such as Oued Zitoun, Oued Boufekrane, Oued Chekko and Oued Himmer. These oueds converge towards Fez (figure 2) and are regularly fed by the Aïn Smen and Aïn Chkef springs. Their catchments are relatively homogeneous in terms of geomorphic characteristics and substrates, because they generally extend within the same context (the Saïs plain and the Middle Atlas margin) and are subjected to similar human factors and land use. The Oued Boufekrane (whose catchment area is 52.40 km²) converges with the Oued El Mehraz (catchment area 137.7 km²) to form a braided, bifurcating hydrologic system that, downstream, influences the lowest part of the Aouinate El Hajjaj district. The width of its valley varies: it locally attains 3 kilometers, but at most sites it is less than 900 meters. Downstream of the dam, in the urban area of Fez, the Oued El Mehraz valley is only 50 meters wide. The reduced capacity of the channel to convey water during high flows in this transect increases the height of the water surface and causes bank inundation, and therefore flood hazards, in the low part of the Aouinate El Hajjaj district. The hydraulic geometry of the channels is trapezoidal, but the valley is dissymmetric because the western flank's gradient is steeper. The construction of the El Mehraz dam introduced a new hydrologic parameter, as it regulates water flow and sediment deposition, but it becomes ineffective in periods of intense rainfall, when the dam's retaining capacity is exceeded. The Oued Boufekrane drains the southeastern part of Fez, extending from the El Gaâda plateau. It converges with Oued El Mehraz and Oued Zitoune at the entry of the Rçif district (in the ancient Medina). Its outflow is weak, but the discharge becomes torrential in periods of strong rainfall, causing flooding risk along the Oued Boufekrane, mainly in the Aouinate El Hajjaj district.
FIGURE 1. LOCATION AND CONTEXT OF THE STUDY AREA. Source: authors' elaboration.
The district was formed in the eighties of the last century, as a result of rapid urbanization in most Fez peripheries. Its site characteristics are the following:
-Slope inclination on the western flank supporting the district (the transect between the river and the plateau surface): 35 per cent.
-Inclination along the river's profile: 2.7 to 2.8 per cent.
-Geologic substrate: Quaternary flank deposits and terraces, Pliocene sand and aggregates, and Miocene marl.
Different environmental problems appear, such as insalubrious habitat, water pollution and solid wastes, the lack of some urban services, the deterioration of existing infrastructure and the lack of welfare conditions. The major problem in the low part of the Aouinate El Hajjaj district was, however, the flooding risk. The local topographic and hydrologic conditions favour the hydrologic vulnerability of the district: it is open upstream to runoff drained through the district's streets and is exposed downstream to the Oued Boufekrane in periods of heavy rainfall.
The climatic context in Fez is semi-arid. Rainfall has tended to be irregular over the last two decades, with annual rainfall averages fluctuating between 180 and 580 millimeters (figure 3).
Extreme hydroclimatic events may occur in all seasons. However, the most frequently humid months are between October and April (figure 4); flooding risks and disasters in the urban areas may occur in any season and are linked to extreme meteorological events happening over the watershed at a daily time scale. In most cases, rainfall becomes a factor of flooding risk when it exceeds 30 mm/day. Most flood hazards in Fez were linked to daily or instantaneous heavy rain. On October 13th, 1989, for example, floods in the area were explained by the concentration of rain, which attained 37.6 mm at the Aïn Timedrine station, 40.6 mm in Sefrou and 28 mm at the Fez Saïs station. On May 18th, 2011, several districts of Fez were flooded following heavy rain that attained 52 mm in the city (Météorologie Nationale, 2011).
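The ~30 mm/day threshold quoted above lends itself to a simple screening of daily rainfall records. The R sketch below is illustrative only: the first value is invented, while the other two reproduce the readings cited in the text.

```r
# Minimal sketch: flag days whose rainfall exceeds the ~30 mm/day threshold
# cited above as a flood-risk factor (one invented value, two from the text).
rain <- data.frame(
  date    = as.Date(c("1989-10-12", "1989-10-13", "2011-05-18")),
  mm      = c(8.4, 37.6, 52.0),
  station = c("Fez Sais", "Ain Timedrine", "Fez")
)
rain$risk_day <- rain$mm > 30   # TRUE where daily rainfall passes the threshold
rain[rain$risk_day, ]           # 1989-10-13 and 2011-05-18 are flagged
```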
However, the problem arises only when flooding has an impact on human settlements and activities. The link between social, spatial and natural factors is strong, as they intervene together in the flooding disasters in the area.
The dominant habitat is informal, because it appeared in the eighties of the last century, when the expansion of spontaneous settlements had become alarming. The district continued to grow in the same way over time, due to speculation and disregard of urbanism rules (Fejjal, 1994 and Darkaoui, 2009). Two current forms of urbanization were observed in Aouinate El Hajjaj and in most Fez suburbs; they converge with those underlined by Qadeer (2004) and are: (i) lateral growth due to emigration and (ii) urbanization by implosion, which builds up urban spatial organizations through the densification of human settlements. In both cases, urbanization and high population density induce transformations in spatial organizations in three ways: increasing infrastructure needs; changes in the landscape and settlement system; and the restructuring of land economy and land uses.
Since 1981, the Aouinate El Hajjaj district has been restructured and provided with basic equipment such as drinking water, electricity and other common public services. A dam had been built in its upper proximity zone (the Gaada dam), but the area is still vulnerable to flood risks and is environmentally degraded.
Nowadays, these settlements are demanding better living conditions, in a more secure environment and a better restructured area. "Environmental equity and security" are emerging as basic concepts in these peripheries.
METHODS
The flood risk in the studied area is approached using historical evidence and mapping, to produce spatial data pertinent to future flood management. Historical and mapping approaches are pertinent in flood studies (Coeur and Lang, 2008; Koivuma et al., 2010). The recently submerged areas were delimited during field work. The historical information was reviewed based on administrative archives. Data were completed by directed questionnaires addressing risk issues in the district.
The interactions between physical and socioeconomic factors across local and regional scales were investigated. The approach is global and integrates multiple factors (Asté, 1994; Akdim and Laaouane, 2006; Gartet, 2007; Hansson et al., 2008; Jacobson, 2011). It allows the study of local characteristics and of other significant contextual factors influencing flooding risk genesis and management.
A comparison of the aerial photographs' restitution (1991) and the aerial photographs taken in 1987 and 1998, with more recent flooding events reported by the AUSF (Urban Agency of Fez), allowed a precise reconstitution of the flooded zones in the Oued Boufekrane valley at the Aouinate El Hajjaj district. Household heads were interviewed concerning the housing process, the flooding risk factors and the perceptions of risk and flood management. The interviews were conducted using an oriented questionnaire covering the past of the district, the present-day environment, the challenges and the perceptions of the future. The questions were mostly open-ended, to explore all possible horizons. Interviews were carried out by Master's students from the University Sidi Mohamed Ben Abdellah, Fez, in March 2009 and April 2010. They were briefed on how to conduct the interviews and on the study objectives. The senior researchers in the group were permanently present during the field work, and their repeated visits permitted further investigation of questions emerging over time. Direct observations of the flood phenomena were collected during the last five years, at each heavy rain.
The results of the most recent census of population and habitat (HCP, 2004) were used, but they do not detail the proportion of the Aouinate El Hajjaj district's population relative to the whole population of the administrative subdivision of Saïs. More detailed data on the district were collected from administrative archives, mainly the Saïs commune services, and from the reports of the Urban Agency of Fez and the Water Agency of the Sebou Basin.
ENVIRONMENTAL RISKS LINKED TO URBAN DEVELOPMENT IN THE SOUTHEASTERN SUBURBS OF FEZ
For five decades, flooding risks have been intense in the Fez agglomeration. Their impacts were important both inside the urban area and in its immediate peripheries, where population densities are progressively high. The southeastern Fez suburbs have experienced episodic flooding since 1950. A severe flood disaster occurred in 1989, when the Oued Boufekrane struck the habitat extending in its valley bottom and caused several human victims. On October 12th, 2008, an abrupt rise of the oued's discharge caused considerable material loss in the area. The settlements suffering most are informal houses in the Aouinate El Hajjaj district and in the Medina. The latest disaster happened on February 13th, 2009, when the district was flooded in the river's vicinity and three victims were reported in the Medina.
The extreme discharge values have been estimated at 63 m³/s for the rise in the water level of the Oued Boufekrane on September 25th, 1950, and at 20 m³/s for the rise in the water level on October 13th, 1989 (MATUEH, 2002; AEBS, 2005 and Reynard et al., 2011).
These discharges are often violent and unforeseeable. The water level reached the constructions situated in the low parts of the Aouinate El Hajjaj district and affected them severely. The main limits of the flooded surface over time were defined by comparing aerial photographs of different dates (1991, 1987 and 1998) with indications deduced from the reports of the Urban Agency. The reconstitution of the flooded zones in the Oued Boufekrane valley at the Aouinate El Hajjaj district is now well established, and the flooded zones are significant. In 1989, the local authorities reported that numerous constructions and roads situated in the valley bottom had been flooded and that the damage was important.
Since the construction of the Al Gâada dam in 1992, the drainage of the Oued Boufekrane downstream has weakened, and most of the population, mainly the most recent migrants, ignores its potential risks. The dam seems to generate a false sense of security. It certainly controls most, but not all, hydrologic risks, because its retaining capacity is limited. Downstream of the dam, the valley is still open to critical discharges from uncontrolled small catchments that are potential sources of risk. This may occur, for example, at the Mokhtar Essoussi school. Its external wall (200 m in length) is very close to the oued's bottom and is built on a non-consolidated embankment, where increasing encroachment can cause a slip when it rains or following any water released from the dam. We note that urban practices in these cases ignore the potential hydrologic risks and do not respect the urbanism laws (laws 12-90 and 25-90), the water law (law 10-95) or even the recently adopted environment laws (2003). This critical situation was observed in 2001 while testing the effect of dam water release: a discharge of 300 l/s caused the flooding of several buildings of the Aouinate El Hajjaj district in the river bank's proximity. The human occupation of the riverbanks activates erosion and sedimentation in the talweg and causes changes in velocity and water level, thereby accentuating the risk of flooding. The human factors of risk in the district are multiple: the population density; the structure of the streets, which facilitates water flowing down, mainly after the rehabilitation program of 1993 (all streets were concreted, which increased the surface drainage of water); and the spontaneous occupation of land following speculation, with construction that does not respect technical and urban planning norms. These are among the social and spatial factors of risk in the district.
Downstream of the Aouinate El Hajjaj district, the oued flows along a sinuous course in a valley where farming activities have developed. Waste deposits reinforce the Oued Boufekrane discharge, which reaches the Oued Zitoune and changes its name, being called Boukhrareb along the transect crossing the Medina. Along this course, the bridges downstream are clogged by waste deposits, vegetation debris and sediments, which sometimes hinder water flow in these intermediate basin sections. These factors explain the most critical hydrologic situation at the bridge section of the railroad track toward Bab Ftouh, whose discharge capacity has been estimated at about 1 m³/s. The system of floodgates permitting the confluence toward the Oued Zitoune engenders flooding problems in the vicinity whenever the water level rises. This is also observed in the section situated uphill of the floodgates, close to the Bab Jdid parking lot, whose capacity is lower than 6 m³/s. Flood threats are therefore frequent along the valley between the Aouinate El Hajjaj district and the Medina.
Most interviewed residents converge on the idea that the potential flooding risk is understood; they know that the district is vulnerable, but they consider the public actors (mainly the commune) responsible for the situation and expect them to resolve their problems. «I understand the serious risk of flooding I am exposed to in the district, but I don't see any solution to change my situation», said one respondent. The focus was often put on low income and poverty, which prevent any individual initiative. «The public authorities are well equipped to resolve the problem and should find alternative habitat for the district's stakeholders», said another. The flooding risk is generally perceived; speculation in land tenure is invoked as a major factor of the district's past situation; but living there is accepted even if it is risky. It is considered a transitory experience that may offer future opportunities to gain individual benefits from the public support programs for habitat and resettlement.
THE PRESENT DAY STRATEGIES OF THE FLOODING RISK MANAGEMENT IN FEZ
To face urban flooding risks in Morocco, a panoply of strategies has been adopted in the last decade. Among these are (i) technical protective measures, which modify, in a given vulnerable place, the level of the risk (discharge, height of water, flooded surface, time of submersion, etc.), and (ii) prevention measures, whose main goal is to limit the vulnerability of a given site through better knowledge of the risk, mastery of the extension of the vulnerable zones and better organization of intervention in periods of crisis.
In Fez, important actions were adopted after the flooding disaster of 1989. Among these measures we note the following. A channel links the Moulay Arafa dam to the Bled El Gaada dam, near Aouinate El Hajjaj; its length is 4,100 m, and its normal discharge capacity is 25 m³/s, although it may reach a maximum capacity of 35 m³/s. The main objective of the channel project is to evacuate the water coming from a hundred-year water rise in the river. The channel begins at the Bled El Gaada dam and crosses the Aouinate El Hajjaj district over a length of about 580 m. Its cross profile is trapezoidal in shape and it is built in masonry, to reduce the speed of the outflow and to better integrate the urban landscape. These projects are important but not sufficient to protect the city against flood risks, because the strategy should take into account the hydrologic particularities of the local context, mainly after the construction of the double road (highway) between Fez and Sefrou, which induced major changes in the local hydrologic context.
Within the flooding risk mitigation strategy, and following the rehabilitation program, most streets of the district were managed and concreted to avoid water infiltration and to guide water downstream. The study of flood risks in this case shows the diversity of the acting factors. The physical factors are climatic, bioclimatic, hydrologic, geomorphic and geologic, and they intervene at local and regional scales. Land use, population density, households' living conditions and incomes are among the socioeconomic factors that influence their vulnerability and how people perceive the risk. The approach adopted in land use, and the degree of respect for regulation, are also evident factors of risk genesis in this case, because the spontaneous and informal development of the district amplified its environmental vulnerability.
The efforts to mitigate the flooding risk in the district have been mainly technical in nature. As shown in the photographs (figure 4), the channel's wall magnified the engendered risk, and the solution was to pierce the wall. Adequate technical solutions are important where appropriate, but they are certainly not sufficient, because socioeconomic factors should also be considered. A more global strategy is needed to ensure environmental security and sustainable urban development in the area. Previous research on flooding risks points out the human responsibility in the case of the lower districts of Taza and Sefrou and of the city peripheries of Oujda and El Hajeb, for example (Akdim et al., 2003; El Hafid et al., 2004). Several measures were suggested to face the land use deficiencies and the lack of preventive and maintenance measures in urban and rural areas. In neighbouring countries, mainly in Spain, flood risk mitigation strategies have been studied in the most critical areas (Gutiérrez et al., 1998; Ayala-Carcedo, 2000; Calvo García-Tornel, 2001; Alcántara-Ayala, 2002; Olcina Cantos, 2004; Heitza, 2009; Pérez Morales, 2010; Martínez Ibarra, 2012; Camarasa-Belmonte and Soriano-García, 2012). Corrective measures to reduce the vulnerability of urban areas to flooding were proposed: technical actions such as improving drainage channels, mastering land use, flood warning systems and the drafting of emergency flood plans, flood studies at the local scale, maintenance work and monitoring of fluvial systems, as well as introducing real-time rainfall and drainage control systems. The factors of risk, in Spain as in Morocco, are complex and point to multiple possible actions to face the problems. The important mitigation strategies have to be adapted to the needs of the specific situation at each site. The whole catchment must be integrated in the analysis, but different spatial scales of action have to be considered. The catchment subdivisions (the uppermost part, the intermediate areas and the valleys) and the urban area of Fez are the major framing units for conceiving actions.
The uppermost part of the rivers extends to the Middle Atlas mountain flanks, formed of piedmont and margin fans that affect drainage. The physical context should be apprehended to understand its determinant factors of risk and to conceive the needed adaptations. The anthropogenic factors of flood risk in the area show high pressure on the resources and a destabilization of equilibrium due to erosion and degradation of the vegetation cover. Both on the Imouzzer plateau (at an elevation of 2,020 m) and on the Sefrou mountain border (1,400 m), land use has to be sustained to maintain the equilibrium of the environmental system (adapted techniques to reinforce the vegetation cover, soil and water conservation) and therefore to reduce the flood hazards influencing the catchments downstream.
The intermediate surfaces, formed of fans and local plateaus extending to the Saïs plain in the lower watershed (400-700 m), have a gentle gradient (0.1%) from south to north. The sedimentary forms (terraces and debris flows) favor water flow through ephemeral drains that reinforce floods in periods of intense rainfall. These sections of the rivers influence the hydrology downstream and contribute progressively to the filling of the dams with sediments. The genesis of the floods affecting Fez is therefore favored. Actions of soil and vegetation stabilization and land use planning are useful to improve local runoff conditions on the intermediate surfaces within the catchments.
At a strictly local scale, in the Aouinate El Hajjaj district, between the two dams and in the transect leading to the Medina, the mitigation strategies for floods must be at the same time technical (completing the drainage channel following adapted norms), based on regulation (mastering urbanism and environmental management) and attentive to social aspects (information, awareness, organizing NGOs to create efficient stakeholder involvement, and reinforcing resilience and adaptation capacity).
Uncontrolled peri-urban development is a potential factor of flooding risk, mainly in lowlands (Konrad, 2003). In the Aouinate El Hajjaj district, the perspectives of sustainable urban development are conditioned by consideration of the characteristics of the whole Oued Boufekrane and Oued El Mehraz watershed and their local influence on risk genesis. They are also linked to local factors (social, geographic and technical) that should be understood, adapted and integrated in the planning tools and risk mitigation strategies.
The management adopted in urban planning and the measures of flooding risk mitigation during the last decades in Fez have failed, as mentioned before. The rehabilitation of the district, which began in 1993, and the technical solutions adopted to prevent flooding risks in Aouinate El Hajjaj have not yet assured environmental security in the area. The protective actions have to be conceived taking into account the complexity of the risk factors, their types and their changing nature, while respecting the principles of global and integrated management at the district level and in its regional context.
The protection plans against flooding risks may therefore integrate the highly probable discharges of the Oued Boufekrane and its lateral submersion, to define areas that may be opened to construction projects and areas where they must be forbidden. After considering the technical criteria and the socioeconomic and environmental factors, the decision remains essentially political.
Among the priorities of flooding risk mitigation in the area, the whole transect of the Oued Boufekrane from the El Gaada dam to the Medina (the future location of the Wislane tourist zone) should be efficiently managed following a sustainable vision. Plans of action were suggested by the Water Agency of the Sebou Basin (AEBS, 2005), and other partners (the commune of Fez, agencies and authorities) are motivated to resolve the flooding problems in the area. However, a more integrated strategy considering the complexity of flooding factors is still lacking. The suggested technical actions are important but not sufficient. They must be supported by spatial, social and juridical actions to assure the adequate functioning of the territory in both its urban and regional systems.
The technical solutions suggested for the future consist of managing more derivation channels and constructing two more dams, to master water flow in the whole Oued Fez watershed up to the entry of the Medina. Lessons learned from the past show that these may suffer from the lateral impacts of variables neglected in the global context. These are of spatial, social and juridical order and should be considered in an integrated urban and regional planning.
Detailed local studies are useful in hydrology (Nafaa, 2005), but they are not sufficient for apprehending the flooding risk. Spatial planning within a general framework is more pertinent, since it considers the total hydrographic networks and includes all isolated actions within a hydraulic continuity (uphill-downstream consistency), together with the capacity of outflow and the propagation of discharges along the river and from its lateral tributaries, following «an integrated water management strategy to overcome conflicts between urban growth, water infrastructure and environmental quality» (Furey and Lutyens, 2008).
The weakness of regulation and the lack of respect for laws in the development of the studied suburbs are also among the environmental risk factors. The law of urbanism (law number 25-90) and the water law (law 10-95) define minimal distances of 20-50 meters from the talweg, «the hydraulic public domain», whose dimensions vary with the rivers' dimensions. These measures were not respected in the Aouinate El Hajjaj district, as it was an informal and non-authorized district before its rehabilitation in 1994. Such a negative heritage, linked to deficits in urban governance over several decades, underlines the fact that «governance and regulation aspects» should be considered basic elements of urban development and of its environmental security and sustainability.
The social factors are important both in risk genesis and in the mitigation strategy. Their roles were important in the beginning (emigration, poverty, lack of participation, speculation, etc.) and remain important in the present-day context, because they accelerate the occupation of the valley bottom. The severe impacts of these factors on the success and/or failure of the public strategies give evidence of the necessity of integrating them in flood risk mitigation strategies. The evolution and complexity of the social-ecological system and the changing behavior of the local context should be incorporated in decision-making, in conformity with the new conceptual approach of adaptive management presented by Allen et al. (2011).
Several actions are possible (sensitization and stimulation of dialogue to obtain consensus on the best land use and district reorganization, participation in creation or repair actions, respect of urban planning norms, respect of the hydraulic public domain, co-financing of environmental projects, etc.). Such participation is clearly conditioned by acceptable socioeconomic conditions, allowed by a sustainable household income and a satisfactory educational and cultural level of women and men. Communication is becoming an important factor of risk mitigation, as it enables individuals to participate in risk management and mitigation (Milman and Short, 2008; Homa et al., 2009). The social factors of risk mitigation are non-structural and include mitigation by householders, who are encouraged to develop some form of resilience against future flooding (protection walls, door covers, sandbags, connection to the adjacent sewer, etc.).
However, mitigation by forewarning is more reliable, but it has to be adopted by the authorities. Weather forecasting at a geographical and regional scale is pertinent and could draw on mapping of flooded areas, GIS, and the study of previous flood events to apprehend their extension and to deduce conclusions from the actors' adaptations and resilience. Faced with the difficulty of controlling the flooding hazard in this complex context, the Water Agency of the Sebou Catchment (Fez) and the Swiss DDC recently carried out a study and established a system of flood prevention and warning in 2012, in order to minimize eventual damage. But the system is not yet operational, as it needs sufficient, geographically well-placed equipment, as well as human empowerment and training, to ensure the best precision in data collection and exploitation. The alternative risk mitigation strategies should integrate a well-defined zoning of risk vulnerability mapping, «the limits of tolerance and adequate prevention», in each sector. Such zoning documents may correspond to the Plans of Prevention of Risks (PPR) adopted in several countries. Their pertinence is certainly conditioned by prior knowledge of the risk factors, based on scientific study, and by the monitoring of indicators and of their foreseeable flooding impacts in the future. As synthesized in table 4, the flooding mitigation strategies should be considered in their plurality. No single strategy can be sufficient and globally applied. The study argues for the interactive components of each strategy, depending on the technical aspects of the action, its regulatory framework and its socioeconomic impacts and considerations.
CONCLUSION
The critical review of the risk mitigation strategies previously adopted in the Aouinate El Hajjaj district (Fez) illustrates the limits of technical solutions in the presence of huge social and spatial dysfunctions. The risk analysis in this case shows that its factors are multiple and complex and therefore need multivariate mitigation strategies to be reasonably treated.
The district's evolution and structures were guided from the beginning by illegal transactions (speculation, land squatting, construction without respect for norms and regulations, etc.). Environmental worries were almost absent from such development. Since 1993, the public actors have tried to resolve the district's crisis and reinforce its environmental security, but the adopted strategies were mainly technical. Sometimes they are useful, but in other cases they may amplify the flood risk. This was observed in the Aouinate El Hajjaj district, when the built channel's wall formed a barrier preventing the water flowing down the streets under heavy rain from reaching the channel. While these solutions are locally recognized to be useful in appropriate situations, they are not sufficient.
Other mitigating actions, including the reduction of physical vulnerability, the reduction of socioeconomic vulnerability and the strengthening of the social structure of the community, are discussed. These actions constitute elements of an alternative strategy in urban planning and development and must be framed by respect for laws and regulations. They could be undertaken at the individual, community and State levels, depending on the specific dimension of each measure. A global vision integrating local and regional scales is essential to conceive a more sustainable action, and should be progressively adapted, as demonstrated elsewhere (Homa et al., 2009; Douglas et al., 2010; Eakin et al., 2010).
As underlined by Konrad (2003) regarding flood mitigation strategies, «Stormwater managers can use streamflow information in combination with rainfall records to evaluate innovative solutions for reducing runoff from urban areas. Real-time streamflow-gaging stations, which make streamflow and rainfall data available via the internet and other communications networks as they are recorded, offer multiple benefits in urban watersheds».
Social action is also a major component of sustainable flood mitigation strategies, as it offers useful elements to understand the flood factors, elaborate sustainable solutions and develop preventive actions. It includes the important role of non-governmental organisations in the planning, managing and monitoring phases.
ACKNOWLEDGEMENT
The Moroccan CNRST, the Hassan II Academy of Sciences and the RELOR research network are thanked for supporting the LAGEA-URAC54 and the project SHS 2011/03. The University Sidi Mohamed Ben Abdellah (Fez) and the Faculty of Letters Saïs provided logistical facilities to the team within the laboratory LAGEA.
Received: 12/11/2012. Accepted: 05/07/2013.
KEY WORDS: flood; mitigation of natural risks; suburbs; environmental vulnerability; Fez; Morocco. | 2019-04-23T13:27:14.277Z | 2013-12-30T00:00:00.000 | {
"year": 2013,
"sha1": "0f488655a793683dcb032c7cc6c49687bf124553",
"oa_license": "CCBY",
"oa_url": "http://estudiosgeograficos.revistas.csic.es/index.php/estudiosgeograficos/article/download/404/404",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e147c947e4286b234ce1737e63ec1b2d00281d6c",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geography"
]
} |
25174686 | pes2o/s2orc | v3-fos-license | Genetic variability in Italian populations of Drosophila suzukii
Background Drosophila suzukii is a highly destructive pest species, causing substantial economic losses in soft fruit production. To better understand migration patterns, gene flow and adaptation in invaded regions, we studied the genetic structure of D. suzukii collected across Italy, where it was first observed in 2008. In particular, we analysed 15 previously characterised Simple Sequence Repeat (SSR) markers to estimate genetic differentiation across the genome of 278 flies collected from nine populations. Results The nine populations showed high allelic diversity, mainly due to very high heterozygosity. The high Polymorphism Information Content (PIC) index values (ranging from 0.68 to 0.84) indicated good discrimination power for the markers. Negative fixation index (FIS) values in seven of the populations indicated a low level of inbreeding, as suggested by the high number of alleles. STRUCTURE, Principal Coordinate and Neighbour Joining analyses also revealed that the Sicilian population was fairly divergent compared to the other Italian populations. Moreover, migration was present across all populations, with the exception of the Sicilian one, confirming its isolation relative to the mainland. Conclusions This is the first study characterising the genetic structure of the invasive species D. suzukii in Italy. Our analysis showed extensive genetic homogeneity among D. suzukii collected in Italy. The relatively isolated Sicilian population suggests a largely human-mediated migration pattern, while the warm climate in this region allows soft fruit production, and hence the D. suzukii reproductive season, to start much earlier than on the rest of the peninsula. Electronic supplementary material The online version of this article (10.1186/s12863-017-0558-7) contains supplementary material, which is available to authorized users.
Background
The spotted wing drosophila (SWD), Drosophila suzukii Matsumura (Diptera: Drosophilidae), is a pest species which has spread from its original range in Asia to a number of western countries in the past decade, including the Mediterranean basin [1], Europe, and the USA [1][2][3]. The history of the geographical spread and infestation of D. suzukii is still under investigation: it is known that in 1939 this species was first recorded in Japan (Kanzawa 1939), while in the 1980s it was collected on the island of Hawaii [4]. Europe and the Americas were colonised much later, possibly during the last 9 years [2,3,5,6]. The first adults of D. suzukii were caught contemporaneously in the region of Catalonia, Spain [7] and in Tuscany, Central Italy, in 2008 [3]. In 2009, D. suzukii individuals were found on both wild hosts (Vaccinium, Fragaria and Rubus spp.) and several species of cultivated berries in Trento Province, North Eastern Italy, where the first economically important damage by this species in Europe was also reported [8]. During the following years, D. suzukii spread rapidly across Europe, with documented infestations ranging from Mediterranean regions (i.e. Greece, Turkey) to northern latitudes (i.e. Sweden, Poland, UK) (EPPO Global Database, Drosophila suzukii - DROSSU, 2017). In Italy, after the first detection, infestations were reported from the regions of Bolzano, Piedmont, Liguria, Campania and Veneto in 2010, from Lombardy, Emilia Romagna, Marche, Aosta Valley, Calabria and Sicily in 2011 [3], Sardinia in 2012 [9], Apulia in 2013 [10], Umbria in 2014 [11], and Latium in 2015 (Antonini G., present paper). Invasion dynamics can be studied using molecular markers that can discriminate and characterise the genetic relationships between source and derived populations, migration flows and population expansion patterns [12][13][14]. In particular, Single Nucleotide Polymorphism (SNP) and Simple Sequence Repeat (SSR) markers have played an increasingly significant role in the study of genetic differentiation across species' populations [15]. Thanks to their great discrimination power and their high reproducibility and variability, SSRs represent one of the most robust and informative molecular markers available for genotyping individuals [16]. For instance, their use in Drosophila species was pivotal in studying intrapopulation genetic variation and evolution [14,[17][18][19][20].
In relation to D. suzukii, SSRs have been exploited to study genetic aspects of the colonisation process in the USA and Europe. Jeffrey and colleagues based their research on the use of six X-linked genes and suggested that the invasions of the USA and Europe are two independent events [21]. Bahder et al. analysed samples of D. suzukii populations collected in California and Washington and determined that, while D. suzukii in the former region had high levels of genetic variation, in the latter it was highly monomorphic [22]. Furthermore, Fraimout's group investigated Hawaiian and Spanish populations by exploiting microsatellite markers, finding a significant level of genetic differentiation [23]. Although the two studies exploited different sets of microsatellites and tested different populations, the authors were led to similar conclusions: they demonstrated the presence of a specific differentiation process among ancestral and derived populations and suggested that, for D. suzukii, a genetic analysis approach is valuable not only for a better understanding of the evolutionary history of the species, but also for managing its great potential for invasiveness. Different studies on the invasiveness of species, including D. suzukii, have demonstrated the relationship between their spread and human trade [24][25][26]. For this reason, it is very important to consider the correlation between gene flow analysis and the sale of soft fruit across the country. Taking this aspect into account, we used a population genetic approach to characterise genetic diversity among D. suzukii individuals collected in different regions of Italy. To perform this work, a set of 15 microsatellites validated by Fraimout and colleagues [23] was employed. The current research is the first study providing new insights into the trend of genetic diversity in Italian populations of D. suzukii.
D. suzukii collection, identification and DNA extraction
A total of 278 individuals of D. suzukii collected from nine populations in Italy were analysed (Fig. 1). Adult D. suzukii were collected between October 2015 and April 2016 using Droskidrink®-baited traps [27] left exposed for 3 days. In order to limit the likelihood of sampling individuals related to each other, three traps per location were used, at a distance of at least 500 m from each other. In the laboratory, D. suzukii individuals were identified using a 7×-45× stereomicroscope, according to Hauser's (2011) morphological characteristics, such as the structure of the ovipositor for females and the spots on the wings and tarsal combs for males. Samples were preserved in 96% ethanol and kept at 4°C until DNA extraction. For each location, we selected 15 females and 15 males for DNA extraction, with genomic DNA being extracted from each individual separately using the Macherey Nagel kit (NucleoSpin Tissue, Macherey Nagel, Düren, Germany).
Microsatellite analysis
The SSRs used for this work were selected from a set of microsatellites previously designed and validated [23]. Of the 28 published SSRs, 22 continuous di-nucleotide loci were tested on a pool of 20 D. suzukii individuals. Seven of these loci were discarded because of amplification problems, leaving 15 SSR markers distributed across chromosomes 2 and 3 (Fig. 2) [28].
Each pair of primers was used for PCR amplification in a 25 μL final volume, containing 1X GoTaq G2 Master Mix, 0.5 μL of each primer, 10.5 μL of distilled deionized water and 1 μL of genomic DNA. The PCR program was set with an initial denaturation at 94°C (30 s), followed by 32 cycles of denaturation at 94°C (30 s), annealing at 57°C (1 min 30 s) and elongation at 72°C (1 min), ending with a final extension at 72°C (30 min). PCR products were checked by electrophoresis on 1.5% agarose gel, stained with ethidium bromide and visualised under UV light. Each amplicon was then diluted 1:10 in distilled water, and 1 μL of this dilution was added to 12.5 μL of a mixture of deionised formamide (Sigma-Aldrich) and GeneScan-500 ROX size standard (Life Tech, Waltham, MA, USA). After denaturation for 4 min at 94°C, capillary electrophoresis was carried out in an ABI PRISM 310 Genetic Analyzer (Life Tech) and the fragments were sized with GeneMapper v.4.0 software in binning mode. If no sample amplification was obtained after two PCR attempts, the locus was classified as missing data.
Statistical analysis
Microsatellite allele data were processed with the Tandem program v.1.08 [29]. GenAlEx software v.6.41 [30] was run to study the genetic variability between populations using the following statistics: mean number of alleles (Na), effective number of alleles (Ne), expected heterozygosity (HE), observed heterozygosity (HO), number of private alleles (Np), frequency of private alleles (Ap) and inbreeding coefficient (FIS). Allelic richness was calculated using FSTAT v.2.9.3 software [31]. Deviation from Hardy-Weinberg equilibrium after the Bonferroni multiple correction test, and allelic Polymorphic Information Content (PIC), were tested using CERVUS software v.3.0 [32]. Ne and HE were chosen as the basic genetic variability statistics and estimated for each population. Ne was analysed with ANOVA using origin as a factor. Ne was computed from the formula Ne = 1/(1 - HE) and then tested with the non-parametric Tukey test [33]. Ne was used instead of Na, considering that it is less sensitive to rare alleles and sample size. HE was computed from the formula HE = 1 - Σqi², where qi represents the frequency of the i-th allele in the population. HE was converted into 1/HE and then tested with the non-parametric Kruskal-Wallis test. All statistical analyses were performed using R software v.3.3.2. The significance level was set below 0.001 (P < 0.001) to minimise sources of uncertainty.
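Since the analyses were run in R, the two formulas above can be illustrated directly. The snippet below is a minimal sketch, not code from the study, and the allele counts are invented.

```r
# Minimal sketch for one locus in one population; allele counts are hypothetical.
allele_counts <- c(12, 30, 8, 6, 4)       # observed copies of each allele
q <- allele_counts / sum(allele_counts)   # allele frequencies q_i

He <- 1 - sum(q^2)                        # HE = 1 - sum(q_i^2)
Ne <- 1 / (1 - He)                        # Ne = 1 / (1 - HE)
c(He = He, Ne = Ne)
```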
To evaluate the genetic structure of populations, we relied on multiple approaches: Principal Coordinate Analysis (PCoA), Neighbour Joining tree, AMOVA analysis, measurement of the index of differentiation (FST) and a non-spatial Bayesian algorithm. These approaches were chosen in order to obtain a broad view of the genetic structure of this invasive species in Italy. PCoA, obtained with GenAlEx software, was used to display genetic divergence across D. suzukii in a multidimensional space, considering frequency data. An unrooted Neighbour Joining tree based on Nei's genetic distance, constructed using DARwin software, was complementary to the PCoA analysis [34]. AMOVA analysis, performed using the Arlequin v.3.5 program [35], was used to estimate the distribution of variability within and between the tested groups. The level of genetic differentiation among populations was detected using the FST values obtained with Microsatellite Analyzer (MSA) v.4.05 software [19]. The program compares each observed FST value with those obtained in 10,000 matrix permutations in order to define the statistical significance of each FST.
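The permutation logic behind that significance test can be sketched in a few lines of R. Everything below (the toy haploid data and the simple GST-style statistic) is an illustrative assumption, not the MSA implementation, which works on the full diploid genotype matrix.

```r
# Minimal sketch of a label-permutation test for a differentiation statistic.
set.seed(1)
alleles <- sample(1:5, 60, replace = TRUE)   # hypothetical single-locus alleles
pop     <- rep(c("A", "B"), each = 30)       # population labels

gst_like <- function(alleles, pop) {
  ht <- 1 - sum((table(alleles) / length(alleles))^2)       # total gene diversity
  hs <- mean(tapply(alleles, pop,
                    function(a) 1 - sum((table(a) / length(a))^2)))
  (ht - hs) / ht                                            # GST-style statistic
}

obs  <- gst_like(alleles, pop)
perm <- replicate(10000, gst_like(alleles, sample(pop)))    # permute labels
p_value <- mean(perm >= obs)                                # one-sided p-value
```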
The Bayesian method was implemented with STRUCTURE software v.2.3.3 [36,37]. This program was employed to obtain clusters of individual genotypes. The analysis was run under the admixture model with correlated allele frequencies, in which each sample contains a portion of the genome of each ancestral population; this allows calculation of the log likelihood of the data, L(K). Not knowing a priori the origin and the degree of isolation of the studied populations, this model is considered the most appropriate in such situations [36]. The prior probability, i.e. the probability that an individual belongs to any of the K reference populations, is defined as 1/K. K was varied from 1 to 10, with 20 replicates of each K to test the convergence of the Markov chain. A total of 1,000,000 simulations per run and 500,000 Markov chain Monte Carlo (MCMC) repetitions were fixed. The results were then scored with STRUCTURE HARVESTER software to detect the number of K groups that best fit the dataset according to the Evanno test [38,39]. GENECLASS v.2.0 [40] was run to estimate, for each individual, the probability of belonging only to its population of origin, the probability of being an immigrant from each of the other populations, and the probability of being a migrant to the other populations. BOTTLENECK v.1.2.02 [41] was run to evaluate whether demographic events such as population contraction or expansion took place in each population.
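The Evanno statistic computed by STRUCTURE HARVESTER can be sketched as follows: delta K = |mean L(K+1) − 2·mean L(K) + mean L(K−1)| / sd(L(K)). All log-likelihood values below are invented for demonstration; a real run would supply the 20 replicates per K described above.

import statistics

# hypothetical replicate log-likelihoods L(K) for each tested K
lnp = {1: [-9000, -9010], 2: [-8500, -8510], 3: [-8450, -8470], 4: [-8440, -8465]}
mean_l = {k: statistics.mean(v) for k, v in lnp.items()}
sd_l = {k: statistics.stdev(v) for k, v in lnp.items()}

for k in range(2, max(lnp)):          # delta K is defined for interior K only
    delta_k = abs(mean_l[k + 1] - 2 * mean_l[k] + mean_l[k - 1]) / sd_l[k]
    print(f"K = {k}: delta K = {delta_k:.1f}")   # in this toy example peaks at K = 2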
Heterozygosity excess, which indicates a recent reduction in population size (bottleneck), was tested under the two-phase mutation model (TPM) using the Wilcoxon signed-rank test, which according to Piry et al. is the most appropriate and powerful test when dealing with fewer than twenty loci [41]. Parameters were set as 20% multiple-step mutations and 80% single-step mutations, with 1000 iterations. In order to verify the effect of isolation by distance, and therefore to find a possible correlation between genetic and geographical distances, the ISOLDE option in GENEPOP software was run.
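The comparison step of the bottleneck test can be sketched in Python with SciPy; the per-locus observed H_E values and the equilibrium heterozygosities (Heq, which BOTTLENECK obtains by simulation under the TPM) are invented placeholders here.

from scipy.stats import wilcoxon

h_e = [0.82, 0.79, 0.75, 0.84, 0.71, 0.77, 0.80, 0.73, 0.86, 0.78]  # observed per locus
heq = [0.78, 0.74, 0.76, 0.80, 0.69, 0.75, 0.74, 0.74, 0.81, 0.72]  # TPM-simulated Heq

# one-sided test for heterozygosity excess (H_E systematically above Heq)
stat, p = wilcoxon(h_e, heq, alternative="greater")
print(f"W = {stat}, one-sided P = {p:.4f}")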
Genetic diversity
The variability indices of the 15 SSR loci are shown in Table 1. The number of alleles per locus across populations ranged from 8 (DS17) to 20 (DS07), with an average (± standard deviation) of 13.6 ± 3.37. The PIC estimate ranged from 0.68 (DS14) to 0.84 (DS07), suggesting that this set of loci is informative for population analysis. Only five loci were in Hardy-Weinberg equilibrium (DS07, DS09, DS22, DS23, DS26), while the other 10 showed significant HWE deviations, with nine loci having an excess of H_O (Table 1). The reason for the HWE disequilibrium could be the presence of null alleles, which may affect estimation of population differentiation [42,43]. The mean H_O across loci ranged from 0.68 (DS32) to 0.91 (DS16), while H_E ranged from 0.71 (DS14) to 0.86 (DS07). Mean H_O across populations ranged from 0.66 ± 0.16 (Trentino2) to 0.89 ± 0.09 (Tuscany) (Table 2). Allelic richness ranged from 6.23 in Trentino1 to 8.58 in Apulia. For most of the loci the F_IS was negative; F_IS values ranged from −0.28 in Sicily to 0.07 in Trentino2. Additional file 1 reports all data on observed and expected heterozygosity, number of alleles, effective number of alleles, number of private alleles, the F-statistics (F_IS, F_IT and F_ST) and the fixation index. The Tukey test revealed a significant effect of population origin on the heterogeneity of N_e in the following pairwise comparisons: Trentino1 vs Apulia, Trentino2 vs Apulia, Sicily vs Apulia, and Sicily vs Tuscany (P < 0.001). When all populations were analysed together with ANOVA, using the collection site as a factor, N_e differed significantly between populations (F = 3.86, P < 10^-10). The effect of the collection site was also evident in mean H_E (F = 4.19, P < 0.001).
Genetic population structure and gene flow
An estimate of the distribution of variability (AMOVA) within the tested populations indicated that 96% of the variation occurred within individuals, while only 4% of the total variation was detected between populations. Table 3 gives a summary of the analysis of variance for the nine D. suzukii populations. The results of the PCoA are shown in Fig. 3. The first axis explains 57.9% of the genetic variation, the second axis 18.9%. The first axis separates the Sicilian population from the remaining populations; the second axis mainly divides Apulia, Tuscany, Liguria and Veneto from the others. The unweighted Neighbour-Joining dendrogram in Fig. 4 supports the PCoA results: the Sicilian group has the same origin as the other populations, but its individuals belong to a separate cluster. The F_ST values confirmed the genetic differentiation between the Sicilian group and the others. Considering all the populations, 30 of the 36 pairwise comparisons tested were significantly different from zero (Table 4). The smallest significant differentiation was between Liguria and Veneto (F_ST = 0.003), while the greatest divergence was between Sicily and Trentino1 (F_ST = 0.135). Population structure analysis led to the identification of two clusters (K = 2) based on the Evanno method (Fig. 5) and revealed genetic homogeneity between most populations, with the exception of the flies collected in Sicily. Gene flow data are reported in Table
Genetic diversity
The introduction of invasive species to new environments poses threats to biodiversity, agriculture, public health and ecosystem integrity [44][45][46][47]. For this reason, considerable attention is paid to the rapid spread of alien species [46,48]. Genetic characteristics deeply affect the capacity for expansion [49]; therefore, in order to mitigate the impact of invasive species and define management strategies, it is imperative to study these fundamental characteristics. Currently, techniques such as genomics [50][51][52], transcriptomics [53,54] and metagenomics [55,56] allow us to investigate these basic traits.
This research investigated the genetic structure of D. suzukii collected in different areas of Italy. In particular, the aim of the analyses was to understand the gene flow of this species in a newly colonised environment. Our findings help to better understand the dynamics and complexity of this invasive species in Italy. The nine populations studied show a high level of genetic variation, and the high number of alleles per locus clearly demonstrates the discriminatory power of these markers. Taking into consideration N_e, H_E and H_O, it is evident that the level of genetic variability is similar in D. suzukii collected across Italy, even in the locations at the greatest distance from the likely spreading centre of the species in Italy [25]. The high level of heterozygosity could be explained by good adaptation to the new range due to a favourable environment, the species' reproductive power, and the absence or limited presence of natural competitors and predators [57,58]. Bahder et al. found that populations from Washington were much less polymorphic than those in California, suggesting a recent strong population bottleneck associated with the recent invasion of the former [22]. Washington has a much cooler climate than California, similar to the contrast between Trentino and the rest of Italy. However, we did not observe such a contrast in heterozygosity, probably because of the highly favourable habitat found in Trentino coupled with a high rate of migration to and from the other Italian populations. Heterozygosity deficiency was not evident in most populations: the negative F_IS values indicate random mating and therefore a lack of inbreeding among the collected individuals. In contrast, Trentino1 and Trentino2 had positive F_IS values, indicating inbreeding. The Apulian population showed the greatest number of private alleles (25). This could be the consequence of a steady introduction of new alleles due to migration, possibly associated with human-mediated transport [25,59].
Genetic structure analysis
Moderate genetic differentiation between most of the groups was in evidence for the nine populations, while the Sicilian population was the most differentiated from the others. This is supported by the NJ tree, PCoA data and structural analysis. At the same time, low differentiation between the other populations may be due to gene flow, which can homogenize gene frequency across populations. Data concerning the reduction or expansion of the studied populations indicated that Trentino2 was the only group having indices of genetic bottleneck.
Migration pattern
Human transportation is the most probable explanation for the extensive spread of D. suzukii [25,59].
When an alien species is introduced into an environment outside its native range, expansion can be identified not only by analysing genetic diversity indices, but also by analysing the gene flow between populations, which is direct proof of rapid distribution [60,61]. In particular, in the last 40 years the risk of biotic invaders has increased significantly because of unprecedented levels of international trade [62]. This situation facilitates gene flow between groups in different locations, and may well apply to the results of our study. For instance, the observation that heterozygosity does not clearly decline (nor F_ST clearly increase) with distance from the hypothetical source population (Livorno, Tuscany), together with the high migration rate among localities, suggests that D. suzukii moves extensively across most of the Italian peninsula. Most of the Sicilian production of vegetables and fruit, including hosts highly susceptible to D. suzukii, is frequently exported to central and northern Italy. While this could suggest a high probability of flies being transported between Sicily and the rest of the peninsula, our results indicate that there was no gene flow from Sicily to the other regions. This is probably because ripe fruits are exported from Sicily mostly during the cold season, when moderate temperatures allow the production of berry fruit in Sicily but not in the rest of Italy. Therefore, any D. suzukii accidentally moving from Sicily to the rest of Italy would arrive at a time when the local population consists of a few individuals in winter diapause [27].
A second interesting piece of information revealed by our results relates to the scenario in Sardinia. To satisfy local demand for berry fruit, this region imports fruit from mainland Italy and from northern Europe, Spain, the USA and South America. The flies used in this study were collected in Arborea, a town 13 km from the port of Oristano, one of the most important commercial ports in Italy. It is therefore likely that the Sardinian population is made up of immigrants from other regions, as suggested by the low differentiation between this population and those on the mainland.
Conclusion
This research represents the first study of the pattern of genetic variability of D. suzukii following its introduction to Italy. Defining the population structure of a species, particularly an invasive one, is necessary not only to improve our knowledge of its genetic architecture, but also to put that knowledge into practice. Indeed, understanding the current genetic structure of D. suzukii has significant implications for its geographical and economic impact. Evaluating the genetic status of D. suzukii populations in newly invaded areas, and their phases of expansion or reduction during defined periods of the year, may thus provide valuable information for predicting population spread and outbreaks and for improving integrated pest management programmes. Proper genetic management practices for D. suzukii and constant monitoring are therefore critical for keeping populations under control.
The information obtained can be applied in particular to the management of coastal areas; one important action could be to increase monitoring with traps and other early warning tools in order to limit both multiple reintroductions of the same species and new introductions of exotic organisms.
Additional file
Additional file 1: Allele frequencies for each locus at each location. Also reported are the observed and expected heterozygosity, the number of alleles, the effective number of alleles, the number of private alleles, the F-statistics (F_IS, F_IT and F_ST) and the fixation index. (XLSX 246 kb)
Funding
Funding for this research was provided by Fondazione Edmund Mach.
Availability of data and materials
The data supporting the results of this article are included in the article and the supplementary information.
Authors' contributions
GT contributed to all the steps, planning the experimental design, sampling populations, conducting laboratory tests and data analysis, as well as writing the manuscript. SV participated in genetic data analysis and drafting the manuscript. SV and LO contributed to data interpretation. FS contributed to laboratory experiments. GfA and GlA contributed to sampling populations, suggesting professional contacts and conceiving the main idea. AB, NB, GS, AC and LT contributed to sampling populations. All the authors have read and approved the final manuscript.
Ethics approval and consent to participate
No specific permits were required for this project. Drosophila suzukii is an agricultural pest, not a protected species. All the insects analysed were collected in the open field and not from national parks or protected areas.
Competing interests
The authors declare that they have no competing interests.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
"year": 2017,
"sha1": "230c2043d4fc942d3b57f243508061b452bb8444",
"oa_license": "CCBY",
"oa_url": "https://bmcgenomdata.biomedcentral.com/track/pdf/10.1186/s12863-017-0558-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "de6b502145be6cf7837136971e495b4757f85457",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
“Science without Borders” program and Brazilian-Hungarian collaboration in thermoregulation
Luís Carletto, Adam Troncoso, Ana C Rocha, Zoltán Rumbus, Margit Solymár, and András Garami* Department of Pathophysiology and Gerontology; Medical School, University of Pécs; Pécs, Hungary; Medical School; Federal University of Paraná; Curitiba, Brazil; Medical School; Federal University of São Francisco Valley Foundation; Petrolina, Brazil; Medical School; Federal University of Alagoas; Maceió, Brazil
Here we discuss how the Science without Borders program has helped Brazilian medical students to spend 2 semesters studying in Hungary and to get theoretical and practical training in the field of body temperature regulation at the Medical School, University of Pécs.
Brazil has been sending graduate (particularly PhD) students to study abroad for more than 20 years. As a recent strategy for outbound student mobility, the Science without Borders (Ciência sem Fronteiras) program expanded this initiative to include undergraduate students. The Science without Borders program is a joint effort of the Ministry of Education and the Ministry of Science and Technology, funded by the Brazilian Federal Government. From 2011 to 2014, the program invested the equivalent of approx. 1 billion US dollars and granted 80,000 scholarships to strengthen and expand initiatives in science, technology, innovation, and competitiveness through international mobility of undergraduate and graduate students and researchers. 1 The program focuses mainly on science, technology, engineering, and math, including health and biomedical sciences, clinical and pre-clinical sciences, pharmaceuticals, and biotechnology. Among the 43 partner countries participating as hosts in the program, the United States, the United Kingdom, and France are the top 3 destinations, while Hungary is the 11th most popular destination, having received over 2,000 Brazilian students since the launch of the program. 2 In Hungary, the whole higher education system is represented by the Hungarian Rectors' Conference, a body comprising the heads of higher education institutions. There are 17 Hungarian higher education institutions included in the project, which offer 68 undergraduate, 1 postgraduate, 46 PhD and 7 health science courses. 3 After completion of the scholarship, the undergraduate students are expected to return to Brazil to complete their degrees. By accepting the scholarship, the students commit to staying in Brazil for the same number of months as the duration of their studies abroad funded by the program. It is expected that the returning students apply some of the knowledge obtained abroad to the development of science and technology in Brazilian universities and industries. The program's focus on industrial interest ensures that award-holders will have strong chances of employment both in industry and in academia.
In 2014, a total of 1,542 Brazilian students applied to study in Hungary through the Science without Borders program, and 382 of them (approx. 25%) were selected to spend one or more semesters at a Hungarian institution. The most popular destination cities among the students were Budapest, Debrecen, Pécs, and Szeged. Interestingly, each of these 4 cities hosts a university with a medical school. The oldest institution among them, and in the country, is the University of Pécs, which was founded by Louis the Great in 1367. With 22,000 students, 1,600 lecturers and 10 faculties, the University of Pécs is currently one of the largest higher education institutions in Hungary and the center of knowledge within the Transdanubian region. A total of 80 Brazilian students have visited the University of Pécs with the help of the Science without Borders program; 41 of them, including 3 of the authors of this piece (LC, AT, and ACR), studied at its Medical School. For the duration of their visit, the program provided the students with several benefits, including an air ticket allowance, a monthly stipend to cover living expenses, settlement and health insurance allowances, and tuition fee coverage. During the scholarship, students could attend courses at the host institution and then spend an 8-week internship in a department of their choice.
One of the optional courses at the Medical School, University of Pécs was "Thermomania: the medicine of thermoregulation," which was attended by several students of the Science without Borders program. This course was directed by one of the authors (AG), who brought together a motivated team of young lecturers, whose specialties included evolutionary biology, pathophysiology, nutrition, anesthesiology, urology, pediatric surgery, and pulmonology, but they all shared the common motto: "Maintenance of normal body temperature means life." Students applying for the course could learn about the importance of body temperature regulation and gain an insight into the mechanisms maintaining body temperature in accordance with the modern theories of thermoregulation. 4
They could also find out how different natural compounds (e.g., chili, menthol, etc.) influence body temperature, how thermoregulation differs between dinosaurs and humans, and what the subcellular heater units of our body are. Based on the theoretical knowledge obtained in the first part of the course, students were then introduced to extreme thermoregulatory disorders (e.g. hypothermia in high mountains, malignant hyperthermia), as well as to the characteristics and peculiarities of the clinical appearance, diagnosis and therapy of thermoregulatory disorders in adulthood and childhood, in the form of clinical and pathophysiological case studies. During the second part of the course, students had the opportunity to visit different clinical departments and to observe the signs and symptoms of certain diseases firsthand. Titles of the lectures included "Mechanisms of body temperature maintenance"; "Microscopic heating units of the body: heat production in the mitochondria"; "Proper techniques of temperature measurement"; "Temperature maintenance from dinosaurs to humans: the evolution of thermoregulation"; "The role of chili, menthol, wasabi, cinnamon and their receptors in temperature regulation"; "Chili-pepper against obesity? Role of the capsaicin receptor in energy balance"; "Feeding as a heat generator"; "Hypothermia in high mountains"; "Brain-teasing pathophysiology case studies"; "Characteristic and peculiar clinical cases" and others.
The major part of the Thermomania course took place in the Department of Pathophysiology and Gerontology, Medical School, University of Pécs. The department was established in 1949 by Professor Szilárd Donhoffer, who was also the founding father of thermoregulatory research in Pécs. 5 Prof. Donhoffer's scientific initiatives and achievements are still continued in the department by his former pupils (currently emeritus professors) and their pupils, who actively conduct scientific research on various aspects of thermoregulation and complex energy balance. The Thermoregulatory Group is formed of multiple generations of research staff, including emeritus professors, associate and assistant professors, postdocs, PhD and medical students, and technicians. Student volunteers who are interested in the research activities of the group can join the department for shorter or longer periods of time and engage in thermophysiological research. On that account, after finishing the Thermomania course, the Brazilian authors of the current report were inspired to spend their 8-week internship in the Department of Pathophysiology and Gerontology. During the internship the students could deepen their understanding of body temperature regulation and could also observe the different research techniques and experimental setups that are used in the department. The laboratories are equipped to study thermoregulatory parameters in restrained and freely-moving unanesthetized small animals under controlled thermal conditions. A recent description of the commonly used experimental setups can be found elsewhere. 6 In the Thermomania course the students were presented with slides illustrating the complexity of energy balance, 7 feeding-related thermoregulatory effects, 8 and mechanisms of heat loss, 9 which were published in Temperature in a special format called "Teaching Slide." Encouraged by the obtained theoretical and practical knowledge and inspired by the slides presented in Thermomania, one of the students (LC) decided to participate in the preparation of a teaching slide about the pathophysiology of heat exposure to be published in Temperature. 10 This was not the first example of productive teamwork between Brazilian and Hungarian researchers. Professor Andrej Romanovsky's FeverLab in Phoenix, Arizona, has constituted a central site where research scientists visiting from Brazil and from Hungary could join forces to make advances together in the field of thermoregulation, as evident from the publications from the host laboratory. [11][12][13][14] Some fellows in FeverLab (e.g., Maria Camila Almeida, presently Assistant Professor at the Federal University of ABC, São Bernardo do Campo, SP, Brazil) were also supported through the Science without Borders program. Successful completion of the course and the internship by the Brazilian students at the University of Pécs and the publication of articles jointly with Hungarian co-authors represent further testimony of the fruitful Brazilian-Hungarian collaboration in the study of thermoregulation.
Disclosure of potential conflicts of interest
The studies of LC, AT, and ACR in Hungary were funded by the Science without Borders program. The funding bodies had no involvement in preparing this manuscript.
"year": 2015,
"sha1": "27b67a9758741d8b567e4c8b00e883693e65c016",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/23328940.2015.1109745?needAccess=true",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "27b67a9758741d8b567e4c8b00e883693e65c016",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Stress-primed secretory autophagy promotes extracellular BDNF maturation by enhancing MMP9 secretion
The stress response is an essential mechanism for maintaining homeostasis, and its disruption is implicated in several psychiatric disorders. On the cellular level, stress activates, among other mechanisms, autophagy that regulates homeostasis through protein degradation and recycling. Secretory autophagy is a recently described pathway in which autophagosomes fuse with the plasma membrane rather than with lysosomes. Here, we demonstrate that glucocorticoid-mediated stress enhances secretory autophagy via the stress-responsive co-chaperone FK506-binding protein 51. We identify the matrix metalloproteinase 9 (MMP9) as one of the proteins secreted in response to stress. Using cellular assays and in vivo microdialysis, we further find that stress-enhanced MMP9 secretion increases the cleavage of pro-brain-derived neurotrophic factor (proBDNF) to its mature form (mBDNF). BDNF is essential for adult synaptic plasticity and its pathway is associated with major depression and posttraumatic stress disorder. These findings unravel a cellular stress adaptation mechanism that bears the potential of opening avenues for the understanding of the pathophysiology of stress-related disorders.
5) Line 239: this is an aspect of the discussion and, as a comment, not easy to follow.
6) The use of the term stress is often misleading and there is a certain tendency to overinterpret the data. The authors do not clearly separate cellular stress from stress as an adaptive in vivo response. For instance, in the abstract, line 48: "These findings unravel a novel mechanistic link between stress, stress adaptation and the development of psychiatric disorders." This is not true: the authors do not show a mechanistic link between stress and the development of psychiatric disorders.
Or line 85ff: "We explored a possible role for secretory autophagy as a mechanism linking GC-mediated stress to the development of psychiatric disorders."
7) Figure 7b is not covered by data.
Robert Blum
Reviewer #2 (Remarks to the Author):
In this manuscript, Martinelli and Anderzhanova et al describe the involvement of the stress response, in relationship to FKBP51, in secretory autophagy. Autophagy and the stress response are central homeostatic regulators, and glucocorticoids are central players in the stress response. Here, the authors use a number of different cell lines and manipulations to investigate the role of FKBP51 in secretory autophagy and find that FKBP51 forms complexes with some key players involved in this pathway. In addition, FKBP51 levels and/or GR activity through Dex treatment regulate secretory autophagy-related proteins. They complement these findings using microdialysis in wt and FKBP5 knockout mice exposed to stress, which revealed impaired release of CTSD, MMP9 and mBDNF into the interstitial fluid, similar to what was found following treatment with an autophagy inhibitor (ULK1i). Finally, they showed that increased FKBP51 elevates release of these same factors from microglial cells and that SAFit1 treatment reduces this. Overall, it is an interesting story, but there are major issues with the writing and data interpretation that weaken enthusiasm for the work. In addition, the overall novelty is considered moderate. Specific comments can be found below, which would strengthen the paper:
• The overall writing is not cohesive. The title does not well represent the paper, the introduction does not flow into the results, the results and discussion both have information that should be in the introduction, and the discussion has information that is more results and does not summarize the whole story well. Major revisions are needed.
• Conclusions are drawn from the co-IP data that cannot be made. Co-IPs will reveal complexes and are not quantitative. Other direct methods of binding need to be used to make statements about direct interactions between proteins. In addition, it does not appear that the lysates were precleared with beads to ensure only specific complex interactions are measured, as many proteins can stick to the beads directly.
• Do other GR-regulated proteins cause the same increase in secretory autophagy?
• Why were microglial cells selected?
• A known positive regulator of secretory autophagy should be used to compare to the results found in this paper for FKBP51 and Dex treatment.
• Readouts of Dex treatments need to be shown throughout. Charcoal-stripped media should also be used to remove the effects of FBS hormones.
• Confirmation of FKBP51 levels and GR activity should be provided for each cell line being used.
• Reviews are referenced where original manuscripts should be referenced instead. The introduction would benefit from additional references. Some references are misleading, for example Ref 1, which does not mention bipolar or schizophrenia, and Ref 8, which is not the first time secretory autophagy is described.
• Baf should be defined in the results section.
• Line 395: the secretory pathway is activated only after prolonged or excessive stress? This does not support their stress paradigm, since the authors describe footshock as an acute stressor.
• Lines 404-408, 412-414: missing references.
• Lines 428-429: NMDAR is mentioned without explaining its relation to BDNF or synaptic plasticity.
• Line 448: indicate the type of stress (GC-induced, acute or both).
• Line 443: the "contrasting" findings need to be discussed: "However, despite some consistent findings, other studies report incongruent or contrasting results".
• Methods are missing for some technical aspects, including descriptions of the experiments for Figure 6 and the Baf treatments.
• Authors must include exact n values for the in vitro and in vivo experiments. Also report the number of times each experiment was repeated, or the replicates included in the final analysis.
• Mention whether all animal procedures followed standard animal care policies. Include age, sex and number of animals used for each experiment. Overall, the Ns are very low for these studies and should be increased. Sex of the mice should also be considered as an independent variable. Time of day for the experiments should be carefully described in the methods.
• Explain the rationale for choosing one-tailed over two-tailed unpaired t tests.

Reviewer #3 (Remarks to the Author):
FKBP51 (gene name: FKBP5) is a glucocorticoid (GC) receptor binding protein, which acts as a co-chaperone of heat shock protein 90 (HSP90) and regulates GC-mediated stress. This protein is also known to be associated with mental disorders.
The authors of this study showed previously that GC-mediated stress leads to the activation of macroautophagy, which is regulated by FKBP51 (Gassen et al., PLoS Med., 2014). In the present study, they show that GC induces another type of autophagy called secretory autophagy and that FKBP51 plays an important role in this secretory autophagy by interacting with specific SNARE proteins (Fig. 1). They further show that MMP9 is a novel cargo molecule of GC-mediated secretory autophagy (Fig. 3), that FKBP51 is critical for secretion of MMP9 (Fig. 4), and that the MMP9 secretion plays an important role in BDNF maturation (Fig. 4, 5).
1. The authors provide strong cellular evidence that FKBP51 is critical for secretory autophagy of MMP9 leading to BDNF maturation. However, I feel that the biological significance of this role of FKBP51 in living mice is missing or obscure.
As the authors stated in the Abstract, BDNF is essential for synaptic plasticity. Thus, I would imagine that the reduced BDNF maturation in the stress response caused by FKBP5 KO has a strong behavioral phenotype in BDNF-related behaviors, such as learning and memory. However, previous studies seem to fail to show such a phenotype in FKBP5 KO mice. For instance, O'Leary et al. (2011, PLoS One) described that FKBP51 KO mice showed antidepressant behavior without changes in cognition or other basic motor functions. This previous result does not seem to be consistent with the proposed function of FKBP51 in stress-induced BDNF maturation. The authors of the present study should show some sort of behavioral or neurological phenotype associated with the reduced BDNF maturation in FKBP5 KO mice. Without such evidence, the readers of Nature Communications remain puzzled about the role of FKBP5 in stress-induced synaptic plasticity and BDNF maturation.
2. Co-IP data in Fig. 1J is NOT described correctly. Fig. 1J is labeled as "GFP-IP", which I believe is correct. Based on the figure, I believed that the authors overexpressed SEC22B as GFP-fusion protein and IPed using GFP-Ab. However, the used antibody was labeled as "FLAG-Ab". This should be "GFP-Ab". The Figure legend to Fig. 1J described this experiment as "FLAG-tagged FKBP51 co-IP (FLAG-IP). I believe that the legend should be "GFP-tagged SEC22B co-IP (GFP-IP).
Reviewer #4 (Remarks to the Author): In this manuscript by Martinelli et al., titled "Stress-primed secretory autophagy drives extracellular BDNF maturation", the authors identify matrix metalloproteinase 9 (MMP9) as a stress-induced secreted protein involved in the cleavage of pro-brain-derived neurotrophic factor (proBDNF) to its mature form (mBDNF). The authors demonstrate the involvement of the co-chaperone FK506-binding protein 51 (FKBP51) in stress-elevated secretion of MMP9 in the mouse brain, exploiting in vivo microdialysis in WT and Fkbp5 KO mice. The importance of the autophagy machinery for the stress-elevated secretion is assessed in WT mice by including a ULK1 inhibitor. The authors claim that stress-induced secretion of MMP9 occurs through secretory autophagy, facilitated by FKBP51.
The novel finding in this manuscript is the involvement of FKBP51 in stress-elevated secretion of MMP9 in the mouse brain, resulting in maturation of BDNF. BDNF is essential for synaptic plasticity and altered BDNF signaling is associated with stress-related psychopathology. Hence, this finding is of general interest and contributes to the understanding of MMP9 secretion in the CNS. Overall, the biochemical data are well performed and the use of proteomic methods, data mining and in vivo microdialysis reflect an extensive amount of work. Nevertheless, the manuscript has some critical shortcomings that have to be addressed before publication. Firstly, an association between TRIM16 and CTSD is inferred from co-IP (Fig. 1e). However, that does not necessarily mean that TRIM16 binds to CTSD, since IP-based interaction detection can reflect indirect binding of the proteins in question. Secondly, FKBP51 is a known co-chaperone of heat shock protein 90 (HSP90), and HSP90 has been assigned a key role in secretory autophagy of IL-1β, mediating import of IL-1β into the autophagosomal intermembrane space (Zhang et al., 2015; doi: 10.7554/eLife.11205). The authors do not address this point at all, other than showing that mutating the HSP90 binding site of FKBP51 reduces a potential interaction between FKBP51 and galectin 8 (Fig. 2b). Finally, the authors show that FKBP51 is essential for the association of SEC22B with its Q-SNARE partners in SH-SY5Y cells (Fig. 1j). However, there are no data presented that demonstrate the importance of the formation of this SNARE complex for stress-induced MMP9 secretion in SIM-A9 (microglia) cells or mice. Detailed comments and suggestions are included below.
Major comments:
1) Figure 1b,c,d and e and figure S1a and b: interaction between FKBP51 and SEC22B or TRIM16 is implicated from reciprocal co-IPs. To demonstrate a direct binding between FKBP51 and SEC22B or TRIM16, the authors could use a GST-pulldown assay with labeled in vitro translated proteins. Furthermore, co-localization images in cells to visualize FKBP51 together with SEC22B or TRIM16 would be helpful. The same applies for the implied interaction between TRIM16 and CTSD.
2) For the blots in figure 1d, e and g, which form of CTSD is shown/recognized by the antibody? CTSD exists in different forms, with the inactive precursor of the enzyme, procathepsin D, being cleaved, resulting in different forms of mature/active cathepsin D.
3) Figure 1l, "Schematic overview of the interactions of FKBP51 in the secretory autophagy pathway": here FKBP51 is shown to interact with GAL8, but no data have yet been presented to show this. TRIM16 binds to GAL8 and the figure should indicate that. Furthermore, HSP90 is shown as a binding partner of FKBP51, but HSP90 is not included in the blots of any of the IPs in figure 1. Furthermore, there are no data presented indicating the importance of FKBP51 for transfer of the TRIM16 cargo (CTSD) to the autophagosome, or data indicating that HSP90 is not present. TRIM16 association with SEC22B is independent of FKBP51 according to figure 1e. And there are no comments or experiments addressing how CTSD, which normally resides in lysosomes, is translocated into the lumen of autophagosomes prior to its secretion. Therefore, the claim on page 6, lines 148-149, "From these data, FKBP51 results to be involved in several key steps of the secretory autophagy pathway (Fig 1l)", appears as an overstatement.
4) In figure 2c and 2d the authors use a tandem-tagged (mRFP-GFP) galectin 3 (tfGal3) in SH-SY5Y cells to monitor lysosomal damage. The reduction of the GFP signal is a result of acidification of tfGal3. Gal3 is recruited to damaged lysosomes and becomes acidified through lysophagy. Lysophagy involves autophagosomal engulfment of damaged lysosomes that subsequently become degraded by fusion with intact lysosomes (Maejima et al., 2013; doi: 10.1038/emboj.2013.171). LLOMe induces lysosomal damage that culminates in lysophagy, and Dex appears to be able to do the same. Inhibition of lysosomal acidification with BafA1 abolishes the effect. Therefore these data actually show degradation of Gal3 on damaged lysosomes through autophagy. The authors should comment on how they envision the effect of Dex on lysosomes and how this relates to secretory autophagy.
5) In figure 2e and 2f, SIM-A9 secretion of CTSD in response to LLOMe and Dex treatment, respectively, should also include BafA1 treatment to determine whether the secretion is dependent on functional lysosomes or not.
6) In figure 4 the Dex-induced MMP9 secretion should be shown in FKBP51 KO SIM-A9 cells as well, to complement the in vivo results in figure 5. Furthermore, in order to link MMP9 secretion to secretory autophagy, the authors could use siRNA knockdown of TRIM16 or SEC22B in these cells. The presence of MMP9 in an IP of TRIM16, or a co-localization study of MMP9 with TRIM16 in cells, would also be desirable.
7) Figure 7, "Schematic representation of the findings and proposed model": in a, TRIM16 is shown to interact with MMP9. Again, there are no data in the manuscript that demonstrate this interaction, and this schematic drawing is thus not accurate.
Minor comments:
1) HSP90 is not among the 29 identified interactors of FKBP51 in HEK293 cells listed in Supplementary Table S1. The authors should comment on that.
2) Page 5, lines 114-115: "…the interaction of FKBP51 with SEC22B (Fig. 1b,c), previously only deduced via differential centrifugation13". Please clarify, since the cited reference (
9) The discussion section is rather long and should be more focused on the data obtained in this study.
Reviewer #5 (Remarks to the Author): The manuscript by Martinelli et al., "Stress-primed secretory autophagy drives extracellular BDNF maturation", describes the mechanism of enhancement of secretory autophagy by glucocorticoid-induced stress. The authors use interactomics and secretome analysis by mass spectrometry to identify proteins involved in the process and propose an elegant step-by-step mechanistic model validated by several other methods. The paper is well written, the findings are novel and this reviewer supports the publication in Nature Communications, which will allow these results to reach a broad readership. There are a few minor concerns, mostly technical in nature, that should be addressed before acceptance.
Detailed comments:
1. Since TRIM16 was confirmed to be an interactor of FKBP51 by Western blot, but was not originally found in the MS dataset, can the MS data be re-searched so that TRIM16 peptides can perhaps be found (maybe with the help of the inclusion list)?
2. For the interactome analysis, the transfection with a vector containing FLAG only was used. Can the authors include more description of how the control (unspecific) binders were eliminated? The only explanation I could find is in lines 651-651, that the proteins overlapping from all four replicates were counted as interactors. This is not sufficient.
3. For the secretome analysis, the media supplemented with FBS was used throughout the whole experiment. Is that correct? If so, how was the signal suppression by the overwhelming amount of protein handled? Was the albumin removed? This was not an issue for detection because the labeling was used, but signal suppression would be an issue anyway. Can the authors comment on that?
4. Another useful clarification of the secretome analysis would be the comparison of growth rates and cell death between the wild-type and the Atg5 KO cells. Are they identical in this respect, and if not, how was the data normalized?
Reviewer #1
We thank Robert Blum (Reviewer #1) for his constructive and insightful comments. We addressed his valid suggestions by performing additional experiments and tried to clarify imprecisions with additional information.
Reviewer #1 (Remarks to the Author): The paper by Martinelli, Anderzhanova et al presents new insights into the regulation of secretory autophagy and identifies new and important regulatory proteins of this pathway. The study shows for the first time how one important cargo protein of secretory autophagy, the matrix metalloproteinase 9 (MMP9), can contribute to a more behavior-related, extracellular abundance of mature BDNF.
BDNF is one of the most important key proteins in synaptic plasticity, it can be stored in synapses and undergoes activity-dependent secretion. BDNF is involved in diverse synaptic processes, including synapse maturation, synapse refinement, synaptic transmission and even pre-and postsynaptic LTP. Two BDNF isoforms are known to regulate synaptic transmission, the mature BDNF, a homodimeric protein with high affinity to TrkB, and proBDNF, an isoform carrying the so called pro-domain. Even the cleaved pro-domain has been shown to be involved in synaptic processes. proBDNF shows high-affinity to p75, another neurotrophin receptor. There is an ongoing debate about proBDNF secretion and processing.
In this important study, the authors show that FKBP51, a stress responsive cochaperone is critically involved in secretory autophagy. Notably, MMP9 is a cargo of these granules and undergoes regulated secretion, possibly for the local cleavage of proBDNF to mBDNF. The finding that cellular stress-related release from autophagosome-like granules/vesicles is associated with extracellular mBDNF abundance is credible.
The design of the study is straightforward and conceptually strong. The experiments are convincing and there is a lot of important, new and relevant information available. For instance, the interactome of FKBP51 is well worked out and the experiments help better understand how FKBP51-positive autophagosomes behave within the trafficking pathway. Much of the work has been done in cell lines (SH-SY5Y, a neuroblastoma-like cell line; HEK293; SIM-A9, microglia-like), but the detailed information about key proteins in the secretory autophagosome pathway (SEC22B, RACK1, UBC12) will help to find out how secretory autophagy is acting in neurons, at synapses or between cell types (microglia) at synapses.
The in vivo experiments clearly show, with a new approach, direct determination of behavior-related secretion of BDNF isoforms. However, we do not learn from which cells BDNF is secreted, an aspect beyond the scope of this study. Nevertheless, the data convincingly reveal that MMP9 and BDNF appear in the prefrontal cortex in the course of a behavioral test and the data show that MMP9 contributes to more mBDNF.
I'm sure that these proof-of-principle experiments will motivate many researchers in the field to look again at the fundamental biology of synaptic BDNF in 'real' learning and memory paradigms.
The methodology is of the highest quality. I think that this paper is of substantial importance. For me, there are some major concerns that have to be addressed before it can be considered for publication in Nature Communications.
Major comments: (1) The authors write (line 88): …an increase in cleavage of pro-brain-derived neurotrophic factor (proBDNF) to its mature form (mBDNF) both in vitro and in vivo.
However, the data do not show that proBDNF cleavage leads to less proBDNF and more mBDNF. The data show the same amount of proBDNF in vitro (ELISA) and in vivo (immunoblotting; but the size is not given). If there is cleavage at the cost of proBDNF, Western analysis should show less proBDNF at its full-size relative molecular weight and more mBDNF at 13 kDa. The interpretation and discussion of the data should be in line with the data.
Response:
Dexamethasone treatment leads to an increase in BDNF expression, as shown in Supplementary Fig. S5, and to a consequent increase in proBDNF secretion, as seen in the ELISA data (Fig. 4b) and microdialysate data (Fig. 5e and i; supported by the following table showing the additional statistics of the two-way ANOVA performed on the microdialysate experiment, where proBDNF expression depends on the time factor but is independent of the genotype and treatment factors). This enhanced expression could explain the almost unchanged levels of proBDNF: proBDNF is more highly expressed and secreted while it is being degraded extracellularly.
Regarding the molecular weight of mBDNF, we observe proBDNF and the physiologically active mBDNF dimer in the microdialysates as 32-kDa and 26-kDa signals, respectively. We do identify a signal at 13 kDa corresponding to mBDNF monomers; however, this signal is below the LOD and for this reason not quantifiable (see figure below, showing a representative SimpleWestern blot quantified for the experiments shown in Fig. 6: secreted proBDNF or BDNF from DMSO-treated mouse organotypic brain slices).

(2) Fig. 3: Secretome. proBDNF and mBDNF should appear in the secretome, but the authors do not show the data and do not discuss it (or I have overlooked it). My question is: is there more or less pro-domain or mBDNF in the secretome of SIM-A9 cells after Atg5 KO, or is there an explanation why there is not much BDNF in the secretome?
In line with the overall concept of the study, one assumption would be that if BDNF is in a different vesicle and is not co-released with MMP9, one would expect to see more pro-domain in the case of Dex/Atg5 KO. Is this really extracellular cleavage at the cost of proBDNF, or are there other explanations?
Response:
ProBDNF and mBDNF have indeed not been detected in the secretomics experiment. A general drawback of the untargeted, discovery-driven MS analysis applied here is that the selection of peptides at the MS1 level for fragmentation and subsequent identification at the MS2 level is intensity-based. This means that only the x most intense peptide ions in an MS1 spectrum are selected for further processing in MS2 (where x might range from 5 to 20, depending on the MS method used). Consequently, peptides of low intensity (derived from proteins of low abundance) might be missed by mass spectrometric detection although they were present in the sample. This has most certainly happened to proBDNF and mBDNF, which, however, were detected in a targeted assay (Fig. 4 and Supplementary Fig. S4c). Furthermore, the fact that mBDNF levels are significantly reduced upon treatment with MMP9i (which inhibits MMP9's enzymatic activity only in the extracellular space) demonstrates that the proBDNF-to-mBDNF cleavage occurs extracellularly and at the hands of MMP9.
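As an illustration of this sampling limitation (not of the actual instrument software), the toy Python sketch below shows how intensity-based top-N precursor selection can miss low-abundance peptides; all protein names and intensity values are invented.

TOP_N = 5                             # number of MS1 peaks sent to MS2 per cycle

ms1_peaks = [                         # (peptide source, MS1 intensity), invented
    ("serum albumin", 9.8e6), ("MMP9", 4.1e6), ("CTSD", 3.5e6),
    ("actin", 2.9e6), ("GAPDH", 2.2e6), ("proBDNF", 8.0e4), ("mBDNF", 5.0e4),
]

selected = sorted(ms1_peaks, key=lambda p: p[1], reverse=True)[:TOP_N]
chosen = {name for name, _ in selected}
missed = [name for name, _ in ms1_peaks if name not in chosen]
print("fragmented:", sorted(chosen))
print("missed:", missed)              # proBDNF and mBDNF never reach MS2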
(3) Fig 4: Important data depend on SIM-A9 cells, a microglia-like cell line. The data suggest that the cells produce BDNF and secrete it. However, western analysis is missing. Western blotting data would help to find out whether proBDNF is intracellular and then secreted as proBDNF. It can well be that this is secretion or abundance of cleaved pro-domain versus mBDNF. In Western blotting after SDS-PAGE, mBDNF should appear at 13 kDa, the cleaved pro-domain should run close to 15 -maybe 20 kDa, the uncleaved pro-domain may be expected, in its glycosylated form at about 32-34 kDa. Immunoblotting from supernatants, maybe after IP or after protein concentration, should answer this important aspect. It may be that the authors find other or new anti-pro/anti-BDNF immunoreactive bands. This experiment can help to better understand the overall data set.
Response:
Regarding the size of proBDNF and mBDNF, please refer to our answer to comment 1.
For the purpose of our investigation, we found ELISA assays more sensitive and reliable and therefore favored this method. In some cases we used Western blot analyses (see Supplementary Fig. S4f and g), which confirmed our findings with the additional information of protein sizes. However, the technical steps necessary to remove the high abundance of serum proteins in the culture medium would interfere with a direct comparison between intracellular and extracellular protein quantifications. Furthermore, the extracellular quantification focuses exactly on our point of interest, namely whether MMP9 leads to extracellular cleavage of BDNF, since we have two observation time points: before and after treatment. While the detection of further or novel variants of BDNF represents an interesting challenge, it lies beyond the scope of this manuscript.
Response:
We updated the Methods section, in particular with reference to the recently published paper (Anderzhanova et al., 2020), in which a comprehensive validation of the measurements using multiple methods was provided. Furthermore, SimpleWestern is an automated system in which capillary-based immunoblotting signals are automatically quantified and normalised. The quantifications shown are therefore the original data (outputs) of the assays. Additionally, all original raw data are made available to the editors in the final submission procedure. Regarding the antibodies, they are the same as those used for PAGE Western blotting.
(5) In Fig. 6, after the in vivo data, the authors present the SAFit1 inhibition of FKBP51 in cell culture. The data fit better to Figure 3. Again, as mentioned above, the data are not supported by Western blotting; here it is ELISA detection of proBDNF or mBDNF from supernatants. ProBDNF is unchanged, but there is less mBDNF after SAFit1 inhibition. It may well be that MMP9 is acting at a different place, or that another, yet unknown MMP9-associated process leads to more secretion of mBDNF. This might explain why there is almost no change in proBDNF in Fig. 4, 5e and 5i, and Fig. 6. Anyhow, in Fig. 5 it is very clear that mBDNF goes up in vivo, extracellularly, in a regulated and behavior-related manner. This direct verification of regulated in vivo BDNF secretion is a very important finding.
Response:
Please refer to our previous responses to comments 1 and 3. The main proof of the extracellular cleavage is that mBDNF levels do not increase when the extracellular enzymatic activity of MMP9 is inhibited via MMP9i (Fig. 4f). We have no evidence to suppose a novel function of MMP9 acting on mBDNF secretion, since its extracellular enzymatic activity is well characterized and no structural feature of MMP9 leads us to hypothesize such a function.
Regarding Fig. 6, we followed reviewer #1's suggestion and added the FKBP51 overexpression and SAFit1 treatment data to the dataset of Fig. 4, which we believe is the figure reviewer #1 was referring to and where we think they fit best.
Response:
We tried to add this information, but it felt forced, as we address BDNF cleavage starting from an unbiased finding of MMP9 secretion. We do not exclude the existence of other proteolytic enzymes, nor do we claim to be the first to describe BDNF cleavage by MMP9. On the contrary, we focus on the secretion of MMP9; the cleavage of BDNF by other enzymes (whether plasmin or other metalloproteases) is therefore outside our focus.
(7) Regarding line 419: The authors do not show a change in the ratio. This would mean that more mBDNF appears on cost of proBDNF, but the authors show an increase in the ratio because they see more mBDNF. That's a difference. This does not mean that this interpretation is wrong, but, as mentioned above, there are plausible other options. Please, re-write accordingly.
Response:
As stated in the responses to comments 1, 3 and 5, even though we acknowledge the reviewer's concern regarding the lack of a decrease in extracellular proBDNF, there is indeed proof that the increase in extracellular mBDNF occurs at the cost of proBDNF, since this process can be impaired via inhibition of MMP9, which is responsible for the conversion. The increase in the mBDNF/proBDNF ratio therefore remains valid, as only mBDNF increases and not proBDNF; this is the important aspect here, given the opposing roles of the two isoforms in synaptic plasticity.
(8) Line 602: How many animals were excluded? What was the surgery success rate?
Response:
We started our experiments with six mice per experimental group, taking into account a possible loss of 1-2 mice per group. Despite a surgery success rate of 100%, we had to exclude some animals during perfusion: microdialysis probes sometimes stop working, which is a strong indication to exclude the animal. Most of the microdialysate data derive from four mice per group, and from five mice for some groups (individuals indicated by dots in the graphs).
(9) The secretome data are not easy to interpret. The authors should briefly discuss the limitations of the dataset. In Fig. 3b, it is not clear to me whether the volcano plot represents a mean of all samples or one representative sample.
Response:
The major challenge of secretome analyses is the large abundance of serum proteins in the medium, which masks the signal of secreted proteins. To avoid this problem, we used an innovative technique developed by Eichelbaum and colleagues (Eichelbaum et al., 2014), in which proteins are labelled with AHA (L-azidohomoalanine), an amino acid analog of methionine, then purified via a click reaction and subsequently analysed via LC-MS/MS. The AHA labelling procedure has some limitations: proteins low in methionine might be missed by the initial click-chemistry enrichment, and methionine-containing peptides are missing from the dataset. In this way, the chance of detecting e.g. small proteins is lower, and the coverage of proteins (based on peptides) is lower, which can make quantification, and sometimes identification, difficult.
Finally, a general MS drawback is that data-dependent acquisition was used here. This means that only a sampling of the most intense peptides was taken for further fragmentation and identification; secreted proteins of very low intensity might therefore have been missed. Overall, however, this method allowed us a more accurate sampling of the secretome compared to traditional, non-labelled analyses.
Regarding the volcano plot data, each dot represents the sample mean. This information was added to the legend.
(10) Line 239: this is an aspect of the discussion and, as a comment, not easy to follow.
Response:
We removed that last sentence and integrated it into the discussion.
(11) The use of the term stress is often misleading and there is a certain tendency to over-interpret the data. The authors do not clearly separate cellular stress from stress as an adaptive in vivo response. For instance, in the abstract, line 48: "These findings unravel a novel mechanistic link between stress, stress adaptation and the development of psychiatric disorders." This is not true: the authors do not show a mechanistic link between stress and the development of psychiatric disorders.
Or line 85ff: We explored a possible role for secretory autophagy as a mechanism linking GC-mediated stress to the development of psychiatric disorders.
Response:
We acknowledge the overstatement and edited the whole manuscript with this in mind, better differentiating between evidence-based findings and speculation. We also tried to better define stress, as it is a very broad concept that has different interpretations in different scientific fields.
(12) Figure 7b is not covered by data.
Response:
That is correct. Figure 7b is a proposed model of the autophagic stress-response dynamics. We took the possible misunderstanding into consideration and added this information to the legend text for clarity.
Reviewer #2
We thank Reviewer #2 for their detailed feedback. We edited the manuscript according to their suggestions in a way that we hope will be more cohesive for the reader. We also addressed technical aspects with additional data and control experiments.
Reviewer #2 (Remarks to the Author): In this manuscript, Martinelli and Anderzhanova et al describe the involvement of the stress response, in relation to FKBP51, in secretory autophagy. Autophagy and the stress response are central homeostatic regulators. Glucocorticoids are central players in the stress response. Here, the authors use a number of different cell lines and manipulations to investigate the role of FKBP51 in secretory autophagy and find that FKBP51 forms complexes with some key players involved in this pathway. In addition, FKBP51 levels and/or GR activity through Dex treatment regulate secretory autophagy-related proteins. They complement these findings using microdialysis in wt and FKBP5 knockout mice exposed to stress, which revealed impaired release of CTSD, MMP9 and mBDNF into the interstitial fluid, similar to what was found following treatment with an autophagy inhibitor (ULK1i). Finally, they showed that increased FKBP51 elevates release of these same factors from microglial cells and that SAFit1 treatment reduces this. Overall, it is an interesting story, but there are major issues with the writing and data interpretation that weaken enthusiasm for the work. In addition, the overall novelty is considered moderate. Specific comments can be found below, which would strengthen the paper:
(1) The overall writing is not cohesive. The title does not well represent the paper, the introduction does not flow into the results, the results and discussion both have information that should be in the introduction, and the discussion has information that is more results and does not summarize the whole story well. Major revisions are needed.
Response:
We thoroughly revised the paper with these comments and the editor's feedback in mind and hope that Reviewer #2 will find the new version more coherent and readable.
(2) Conclusions that cannot be made are drawn from the co-IP data. Co-IPs will reveal complexes and are not quantitative. Other, direct methods of binding detection need to be used to make statements about direct interactions between proteins. In addition, it does not appear that the lysates were precleared with beads to ensure only specific complex interactions are measured, as many proteins can stick to the beads directly.
Response:
We acknowledge that co-IP data do not necessarily imply a direct interaction; we therefore replaced the term "interaction" with "association", meaning the formation of a complex via direct or indirect interaction. In fact, an indirect interaction would not contradict our model, and our focus remains on showing that FKBP51 is necessary for the complex formation with SEC22B and the SNARE proteins that leads to the secretion of the secretory autophagy cargo. However, to address Reviewer #2's request, we also performed pull-down experiments demonstrating that FKBP51 can directly interact with TRIM16 in vitro (Supplementary Fig. S1c).
Regarding the technical aspects of the IP's specificity, we precleared the beads with BSA (bovine serum albumin) to prevent unspecific binding of proteins. This procedure increases specificity for the target protein (i.e., for the antibody) while reducing unspecific binding to the bead material. In addition, to further enhance specificity, we performed a selective elution of IP material from antibody-loaded beads using peptide competition with an excess of FLAG peptide.
(3) Do other GR-regulated proteins cause the same increase in secretory autophagy?
Response:
We have not tested other GR-regulated proteins, since we selected FKBP51 based on its newly found interaction with SEC22B. Although it would certainly be interesting to assess whether stress can affect the secretory pathway via other proteins, the fact that FKBP51's absence (in SH-SY5Y KO cells) or inhibition (via SAFit1) suffices to impair this pathway suggests that FKBP51 has a rather unique and specific role, which we think is unlikely to be a general feature of other GR-regulated proteins. An examination of the role of other GR-regulated proteins in the secretory pathway would be a laborious project (or projects) and is beyond the scope of this manuscript.
Response:
Our interest focuses on stress-related psychiatric disorders, and we therefore wanted to analyze this mechanism in brain cells. To ensure a proper readout for our experiments, we selected microglial cells, as they are the main secretory cells in the brain.
(5) A known positive regulator of secretory autophagy should be used for comparison with the results found in this paper for FKBP51 and Dex treatment.
Response:
We used L-leucyl-L-leucine methyl ester (LLOMe) as a positive regulator of secretory autophagy (Fig. 2c and e), since it is the best-characterized inducer of secretory autophagy (Kimura et al., 2016). We added this information to the manuscript for clarity (lines 199-201).
Response:
They are now aligned.
Response:
They do look similar, as they represent the same proteins and the same conditions but in two different cell lines. However, a close look reveals several small differences confirming that they are indeed two different blots: a small cut in the FKBP51 and SNAP29 bands of Fig. 1i, shadows of bands in the FLAG conditions of STX3 and SEC22B in Fig. S2, and a different shape of all the other bands.
Response:
FKBP51 and SEC22B are indeed the same. We repeated them in the supplements for completeness. We realized this can lead to confusion and therefore removed SEC22B from the supplementary figure (S1a), but kept FKBP51 since it is the immunoprecipitated protein.
(9) Line 184-185: This seems to be out of place; the proteins shown here look to be part of the previous set of experiments (first section of the results).
Response:
We did not use SIM-A9 cells for the first section of the results. However, we acknowledge that this sentence was not well incorporated into the paragraph and have therefore rewritten it.
Response:
WT SIM-A9 cells underwent the same transfection procedure as the Atg5 KO cells but without the gRNA targeting Atg5. SEC22B and FKBP5 KO SIM-A9 cells were generated with the Alt-R CRISPR-Cas9 system from Integrated DNA Technologies. WT control cells were identified by WB after single-cell cloning procedures and therefore underwent the same transfection and isolation procedure as the KO cells.
(11) Line 214: The authors did not discuss CTSF despite it representing the biggest fold change in Fig. 3c.
Response:
A deeper investigation of the effect of stress on the secretion of cathepsins is beyond the scope of our manuscript, but is part of a follow-up study (Niemeyer et al., in preparation).
(12)
Response:
Due to the limit of allowed references in the main manuscript, the references of Table 1 can be found in a Supplementary references file. We added the missing link to it in the Table legend (lines 1308-1309). We ask the Editor for suggestions on the best format for this purpose.
Response:
As a control, an empty vector was used (the same vector as for ect. FKBP51).
(14) Fig S4: Describe the vehicle used in the experiment. Indicate the meaning of the lines (inhibitor).
Response:
We added the information regarding the vehicle used to the "Treatments" section of the Methods (line 545). We also indicated the meaning of the lines in panel c.
(15) Readouts of Dex treatments need to be shown throughout. Charcoal-stripped media should also be used to remove the effects of FBS hormones.
Response:
In our experience, the glucocorticoids contained in FBS are at too low a concentration to elicit any GR activation. However, to ensure that this is true for the mechanism analyzed in this manuscript, we performed additional experiments with charcoal-stripped medium. We measured secreted CTSD and MMP9 via ELISA. The figure below shows the results from WT and Atg5 KO SIM-A9 cells treated with vehicle or 300 nM dexamethasone for four hours and cultured in FBS- or charcoal-stripped serum (CSS)-supplemented culture medium for 24 hours (n=3).
From these results, we observe a rather diminished response to GR activation in the presence of complete FBS; as a consequence, the significant results obtained using complete FBS rather strengthen our conclusions.
In the tables below, the complete statistics regarding the multiple comparisons are shown, confirming that complete FBS does not affect the outcome of our experiments.
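As a minimal sketch of the kind of group comparison reported in these tables (the ELISA values below are hypothetical; statsmodels' Tukey HSD stands in for the statistics software actually used), one could run:

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Hypothetical secreted-CTSD ELISA readings (arbitrary units), n = 3 per group,
# mimicking the vehicle/Dex x FBS/CSS design described above.
groups = (["veh_FBS"] * 3 + ["dex_FBS"] * 3 +
          ["veh_CSS"] * 3 + ["dex_CSS"] * 3)
values = np.concatenate([
    rng.normal(1.0, 0.1, 3),   # vehicle, complete FBS
    rng.normal(1.6, 0.1, 3),   # Dex 300 nM, complete FBS
    rng.normal(1.0, 0.1, 3),   # vehicle, charcoal-stripped serum
    rng.normal(1.9, 0.1, 3),   # Dex 300 nM, charcoal-stripped serum
])

# Tukey's HSD compares every pair of groups while controlling the
# family-wise error rate at alpha = 0.05.
print(pairwise_tukeyhsd(endog=values, groups=groups, alpha=0.05))
```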
(16) FKBP51 levels and GR activity should be confirmed in each cell line being used.
Response:
Dexamethasone stimulation of SIM-A9 cells was performed and FKBP51 levels were analyzed via western blot. Dexamethasone treatment led to a significant increase in FKBP51 levels, confirming dexamethasone-induced GR activation and increased expression of FKBP51 in this cell line. These data are shown in Supplementary Fig. S2a.
(17) Reviews are referenced, where original manuscripts should be referenced instead. The introduction would benefit from additional references. Some references are misleading, for example Ref 1, which does not mention bipolar or schizophrenia, and Ref 8, which is not the first time secretory autophagy is described.
Response:
Due to the limit on the number of allowed references, we sometimes preferred to reference reviews to convey more information, especially for a broader topic such as secretory autophagy, for which there is no real first publication but rather a first publication in which the term was coined (Jiang et al., 2013). However, we have now added more direct references in addition to the reviews in order to better specify our sources.
(18) Baf should be defined in the results section.
Response:
It is defined in lines 198-202.
(19) Line 395: Secretory pathway is activated only after prolonged or excessive stress? This does not support their stress paradigm since the authors describe footshock as an acute stressor.
Response:
Footshock is an acute but very strong stressor. It therefore falls into the category of excessive stress.
Response:
References were added.
Response:
The discussion was thoroughly revised and adapted to the new data. With this in mind, we also incorporated the discussion of NMDAR, BDNF and synaptic plasticity in a more cohesive way.
Response:
We have better defined this point (line 523). However, we would like to specify that throughout the whole manuscript, when we talk about stress, we always refer to GC-mediated stress, as stated in the introduction (lines 96-97).
(23) Line 443: Need to discuss the "contrasting" findings-"However, despite some consistent findings, other studies report incongruent or contrasting results".
Response:
The discussion was thoroughly revised and adapted to the new data. In this process, this sentence was eliminated.
(24) Methods are missing for some technical aspects, including descriptions of the experiments for Figure 6 and Baf treatments.
Response:
SAFit1 and Baf treatments were indeed missing and have been added to the "Treatments" section (lines 545-550).
(25) Authors must include the exact n values for in vitro and in vivo experiments. Also, report the number of times the experiment was repeated, or the replicates included in the final analysis.
Response:
The exact n values can be found in each figure legend and all information will be reported in detail in the source data table upon acceptance. However, all experiments were performed in at least three technical replicates and at least three biological replicates.
(26) Mention if all animal procedures followed standard policies of animal care. Include age, sex and number of animals used for each experiment. Overall, the Ns are very low for these studies and should be increased. Sex of the mice should also be considered as an independent variable. Time of day for the experiments should be carefully described in the methods.
Response:
We added this information to the Methods section "Animal housing conditions" (lines 666-673). We specify there that all procedures were done in accordance with European Communities Council Directive 2010/63/EU and approved by the Government of Upper Bavaria. All mice used in in vivo experiments were males. FKBP51-KOs and respective WTs at the age of 14-16 weeks were used in FS experiments (Fig. 5c-f). C57Bl/6NCrl mice at the age of 13-15 weeks were used in FS/ULK1 inhibitor experiments (Fig. 5g-j). Microdialysis experiments were performed during the first half of the day on an inverted day-night light cycle. FS was applied between 11.00 and 12.00 am.
(27) Explain the rationale for choosing one-tailed over two-tailed unpaired t tests.
Response:
We know that Dex treatment, i.e. GR activation, leads to FKBP51 induction and, therefore, we expect a change in only one direction.
(29) All references should be in the same format. (Some include web link and DOI, see #26 and 35.)
Response:
The bibliography formatting has been revised and modified according to the feedback.
(30) Fig S3: Equal amounts of GAPDH protein cannot be appreciated in the figure.
Response:
Quantifications of the blot were added, and the GAPDH quantifications are shown in the following graph. Although the GAPDH signal is slightly lower in the KO than in the WT line, this difference is negligible compared to the difference in the Atg5 signal between WT and KO, where the signal is not detectable in the KO line.
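For illustration only (the band intensities below are hypothetical, not our densitometry readings), loading-control normalization amounts to dividing the target signal by the GAPDH signal lane by lane:

```python
# Hypothetical densitometry values (arbitrary units) illustrating why a small
# GAPDH difference is negligible next to a complete loss of Atg5 signal.
wt = {"Atg5": 1250.0, "GAPDH": 980.0}
ko = {"Atg5": 0.0, "GAPDH": 905.0}   # Atg5 band undetectable in the KO

for name, lane in (("WT", wt), ("KO", ko)):
    normalized = lane["Atg5"] / lane["GAPDH"]  # loading-control normalization
    print(f"{name}: Atg5/GAPDH = {normalized:.3f}")
```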
We thank Reviewer #3 very much for their positive and constructive feedback. Their interesting observations led us to further investigate and clarify the role of stress-primed secretory autophagy at the neurophysiological level.
Reviewer #3 (Remarks to the Author): FKBP51 (gene name: FKBP5) is a glucocorticoid (GC) receptor binding protein, which acts as a co-chaperone of heat shock protein 90 (HSP90) and regulates GC-mediated stress. This protein is also known to be associated with mental disorders.
The authors of this study showed previously that GC-mediated stress leads to the activation of macroautophagy, which is regulated by FKBP51 (Gassen et al., PLoS Med., 2014). In the present study, they show that GC induces another type of autophagy called secretory autophagy and that FKBP51 plays an important role in this secretory autophagy by interacting with specific SNARE proteins (Fig. 1). They further show that MMP9 is a novel cargo molecule of GC-mediated secretory autophagy (Fig. 3), that FKBP51 is critical for secretion of MMP9 (Fig. 4), and that MMP9 secretion plays an important role in BDNF maturation (Fig. 4, 5).
The authors demonstrated these results using FKBP51-overexpressing HEK-293 cells, Atg5 KO and FKBP5 KO cell lines, as well as FKBP5 KO mice. The story is novel, logical and supported by an impressively large amount of data, which appear to be sound and quite convincing. I have two concerns, however.
(1) The authors provide strong cellular evidence that FKBP51 is critical for secretory autophagy of MMP9 leading to BDNF maturation. However, I feel that the biological significance of this role of FKBP51 in living mice is missing or obscure.
As the authors stated in the Abstract, BDNF is essential for synaptic plasticity. Thus, I would imagine that the reduced BDNF maturation in the stress response caused by FKBP5 KO has a strong behavioral phenotype in BDNF-related behaviors, such as learning and memory. However, a previous study seems to fail to show such a phenotype in the FKBP5 KO mice. For instance, O'Leary et al. (2011, PLoS One) described that FKBP51 KO mice showed antidepressant-like behavior without affecting cognition and other basic motor functions. This previous result does not seem to be consistent with the proposed function of FKBP51 in stress-induced BDNF maturation. The authors of the present study should show some sort of behavioral or neurological phenotype associated with the reduced BDNF maturation in FKBP5 KO mice. Without such evidence, the readers of Nature Communications remain puzzled about the role of FKBP5 in stress-induced synaptic plasticity and BDNF maturation.
Response:
We agree with reviewer #3 that further investigation of the learning and memory effects would be of great interest. However, we think that acquiring such answers is beyond the scope of this manuscript and would rather represent the subject of an interesting follow-up project. The possible physiological and behavioral downstream effects could be many and variable. However, in order to shed some light on possible effects of increased mBDNF on neuroplasticity, we performed 2-photon experiments. The resulting data (Fig. 6) provide validation of the physiological effect of stress-induced secretory autophagy not only on BDNF maturation but also on the consequent change in neuroplasticity in ex vivo murine organotypic brain slices. With the obtained results we could confirm our previous hypotheses and corroborate the effect on a neurological phenotype.
Regarding the absence of cognitive effects of the lack of FKBP51 reported by O'Leary et al., an important aspect to consider is that only old mice (between 17 and 22 months of age) were used in that paper. Age is a fundamental variable when analyzing cognitive behaviors, and the outcome of the same experiment might have been different in younger animals. The same is true for other variables such as the type and intensity of stress. With our results we highlight the fact that there is a novel pathway linking excessive stress to neuroplasticity, and we hypothesize that this is a mechanism regulating stress adaptation that may be correlated with psychiatric disorders when dysregulated. The exact consequences of such a pathway need to be analyzed in detail and differentiated from other similar pathways triggered by similar but distinct stimuli.
(2) Co-IP data in Fig. 1J are NOT described correctly. Fig. 1J is labeled as "GFP-IP", which I believe is correct. Based on the figure, I believed that the authors overexpressed SEC22B as a GFP-fusion protein and IPed it using a GFP antibody. However, the antibody used was labeled as "FLAG-Ab". This should be "GFP-Ab". The figure legend to Fig. 1J describes this experiment as "FLAG-tagged FKBP51 co-IP (FLAG-IP)". I believe that the legend should read "GFP-tagged SEC22B co-IP (GFP-IP)".
Response:
Thank you for noticing. It was indeed a labelling mistake and we have corrected it.
Reviewer #4
We thank Reviewer #4 for their insightful feedback and suggestions. We addressed all the raised issues by performing additional experiments and by editing the manuscript in order to answer all the concerns.
Reviewer #4 (Remarks to the Author): In this manuscript by Martinelli et al., entitled "Stress-primed secretory autophagy drives extracellular BDNF maturation", the authors identify the matrix metalloproteinase 9 (MMP9) as a stress-induced secreted protein involved in the cleavage of pro-brain-derived neurotrophic factor (proBDNF) to its mature form (mBDNF). The authors demonstrate the involvement of the co-chaperone FK506-binding protein 51 (FKBP51) in stress-elevated secretion of MMP9 in the mouse brain, exploiting in vivo microdialysis in WT and Fkbp5 KO mice. The importance of the autophagy machinery for the stress-elevated secretion is assessed in WT mice by including a ULK1 inhibitor. The authors claim that stress-induced secretion of MMP9 occurs through secretory autophagy, facilitated by FKBP51.
The novel finding in this manuscript is the involvement of FKBP51 in stress-elevated secretion of MMP9 in the mouse brain, resulting in maturation of BDNF. BDNF is essential for synaptic plasticity, and altered BDNF signaling is associated with stress-related psychopathology. Hence, this finding is of general interest and contributes to the understanding of MMP9 secretion in the CNS. Overall, the biochemical data are well performed, and the use of proteomic methods, data mining and in vivo microdialysis reflects an extensive amount of work. Nevertheless, the manuscript has some critical shortcomings that have to be addressed before publication. The authors do show the presence of CTSD in a co-IP of ectopically expressed FLAG-TRIM16 in SH-SY5Y cells (Fig. 1e). However, that does not necessarily mean that TRIM16 binds to CTSD, since IP-based interaction detection can reflect indirect binding of the proteins in question. Secondly, FKBP51 is a known co-chaperone of heat shock protein 90 (HSP90), and HSP90 has been assigned a key role in secretory autophagy of IL-1β, mediating import of IL-1β into the autophagosomal intermembrane space (Zhang et al., 2015; doi: 10.7554/eLife.11205). The authors do not address this point at all, other than showing that mutating the HSP90 binding site of FKBP51 reduces a potential interaction between FKBP51 and galectin 8 (Fig. 2b). Finally, the authors show that FKBP51 is essential for the association of SEC22B with its Q-SNARE partners in SH-SY5Y cells (Fig. 1j). However, there are no data presented that demonstrate the importance of the formation of this SNARE complex for stress-induced MMP9 secretion in SIM-A9 (microglia) cells or mice. Detailed comments and suggestions are included below.
Response:
Concerning the use of CTSD instead of IL-1β as an established cargo, this was done for two reasons: 1) the implication of IL-1β in this pathway is the focus of another study we are currently completing (Hartmann et al., in preparation). We attach here confidential results showing that IL-1β is regulated in the same way as CTSD.
Quantification of IL-1β via ELISA assay. IL-1β in supernatants was measured via ELISA after SIM-A9 cells were treated as follows: a) LLOMe for 4, 8 and 24 hours, or vehicle for 24 hours; b) 3 nM, 30 nM and 300 nM Dex, or vehicle, for 4 hours; c) 300 nM Dex or vehicle for 4 hours in WT and Atg5 KO SIM-A9 cells; d) transfected with FKBP51-expressing plasmid or control vector. *P < 0.05; ***P < 0.001; ****P < 0.0001. Tukey's multiple comparison test was used for a, b and c; an unpaired t-test was used for d. Significances in c refer to the comparison of Dex 300 nM with each of the other conditions. Error bars expressed as SEM.
2) IL-1β was not detected as part of the secretome in the MS experiment, probably because it was below the detection limit. Therefore, to give a complete picture, we decided to opt for CTSD as another well-characterized secretory autophagy cargo.
Finally, regarding the importance of the SNARE complex formation for the stress-induced MMP9 secretion, we generated a SIM-A9 SEC22B KO line with which we demonstrated that the absence of SEC22B impairs not only the secretion of MMP9, but also that of CTSD. Detailed results are shown in the response to comment 6.
Major comments: (1) Figure 1b,c,d and e and figure S1 a and b: Interaction between FKBP51 and SEC22B or TRIM16 is implicated from reciprocal co-IPs. To demonstrate a direct binding between FKBP51 and SEC22B or TRIM16 the authors could use a GST-pulldown assay with labeled in vitro translated proteins. Furthermore, co-localization images in cells to visualize FKBP51 together with SEC22B or TRIM16 would be helpful. The same applies for the implied interaction between TRIM16 and CTSD.
Response:
To address this point (also raised by Reviewer #2), we performed the suggested pull-down experiments to verify the direct interaction between FKBP51 and TRIM16 and found that there is indeed a direct interaction in vitro (Supplementary Fig. S1c). However, we also rephrased the parts describing these results, replacing the term "interaction" with "association", meaning an either direct or indirect interaction. In fact, for the presented mechanism it is irrelevant whether the interaction is direct or not. What we show is that the proteins form a complex in which they are immunoprecipitated together and, more importantly, that this association (whether direct or not) affects the pathway.
(2) For the blots in figure 1d, e and g, which form of CTSD is shown/recognized by the antibody? CTSD exists in different forms with the inactive precursor of the enzyme, procathepsin D, being cleaved, resulting in different forms of mature/active cathepsin D.
Response:
As shown in the representative blot below, the predominant and quantified CTSD form is the cleaved/mature one (CTSD heavy chain).
(3) Figure 1l - "Schematic overview of the interactions of FKBP51 in the secretory autophagy pathway": Here FKBP51 is shown to interact with GAL8, but no data have yet been presented to show this. TRIM16 binds to GAL8 and the figure should indicate that. Furthermore, HSP90 is shown as a binding partner of FKBP51, but HSP90 is not included in the blots of any of the IPs in Figure 1. Furthermore, there are no data presented indicating the importance of FKBP51 for transfer of the TRIM16 cargo (CTSD) to the autophagosome, or data indicating that HSP90 is not present. TRIM16 association with SEC22B is independent of FKBP51 according to Figure 1e. And there are no comments or experiments addressing how CTSD, which normally resides in lysosomes, is translocated into the lumen of autophagosomes prior to its secretion. Therefore, the claim on page 6, lines 148-149, "From these data, FKBP51 results to be involved in several key steps of the secretory autophagy pathway (Fig 1l)", appears as an overstatement.
Response:
With the FKBP51-IP displayed in Fig 2b, we show that FKBP51 associates with GAL8. Whether this association is direct or indirect is irrelevant for the proposed mechanism.
In fact, we state that this association is, at least partially, indirect and occurs via HSP90. With additional WB analyses we detected HSP90 in the FKBP51 eluate, as further confirmation of our hypothesis (these data have been added to Fig. 2b). The model represented in Fig. 1l takes into account data from Fig. 2 (as stated in the image). Taking these data together, we do not think that asserting that FKBP51 is involved in several key steps of secretory autophagy is an overstatement: "being involved" is a mild claim, and we believe we have adequate supporting data for it.
(4) In figures 2c and 2d the authors use a tandem-tagged (mRFP-GFP) galectin 3 (tfGal3) in SH-SY5Y cells to monitor lysosomal damage. The reduction of the GFP signal is a result of acidification of tfGal3. Gal3 is recruited to damaged lysosomes and becomes acidified through lysophagy. Lysophagy involves autophagosomal engulfment of damaged lysosomes that subsequently become degraded by fusion with intact lysosomes (Maejima et al., 2013; doi: 10.1038/emboj.2013.171). LLOMe induces lysosomal damage that culminates in lysophagy, and Dex appears to be able to do the same. Inhibition of lysosomal acidification with BafA1 abolishes the effect. Therefore, these data actually show degradation of Gal3 on damaged lysosomes through autophagy. The authors should comment on how they envision the effect of Dex on lysosomes and how this relates to secretory autophagy.
Response:
With our study, we do not expect to answer the question of how Dex can lead to lysosomal damage (e.g. lysosomal membrane permeabilization), as it would be a far too complex topic to investigate and is beyond the interest of this manuscript. However, with our data we can confidently affirm that Dex indeed leads to lysosomal damage and activates the repair mechanism involving the recruitment of galectins to the lysosomal membrane. As for the mechanism leading to the lysosomal damage, many hypotheses are possible, but having no data in this regard we are reluctant to postulate any. Here are some articles that describe the mechanisms (and their complexity) that can lead to lysosomal damage and that are still not fully unraveled: Jia et al., Galectins Control mTOR in Response to Endomembrane Damage, Mol. Cell, 2018; Napolitano and Ballabio, TFEB at a glance, J. Cell Sci., 2016. A hypothesis of a more direct effect of GCs on the biophysical properties of cell membranes, which we can apply to the mechanism leading to lysosomal damage, was described by Van Laethem et al. (J Immunol, 2003). In their study, dexamethasone caused alterations in lipid raft palmitate content, inducing a decline in the proportion of saturated fatty acids while increasing unsaturated ones. From a biophysical perspective, these changes in membrane lipid composition increase fluidity, and it is therefore tempting to speculate that they also affect lysosomal osmotic stability (Yang et al., Cell Biol. Int., 2013).
(5) In figures 2e and 2f, SIM-A9 secretion of CTSD in response to LLOMe and Dex treatment, respectively, should also include BafA1 treatment to determine whether the secretion is dependent on functional lysosomes or not.
Response:
We performed additional experiments that were added to Supplementary Fig. S2 as panel c, where we show that Baf has the expected effect on CTSD secretion (i.e. reversion of the Dex and LLOMe effects), confirming that CTSD secretion indeed correlates with lysosomal damage.
(6) In Figure 4, the Dex-induced MMP9 secretion should be shown in FKBP51 KO SIM-A9 cells as well, to complement the in vivo results in Figure 5. Furthermore, in order to link MMP9 secretion to secretory autophagy, the authors could use siRNA knockdown of TRIM16 or SEC22B in these cells. The presence of MMP9 in an IP of TRIM16, or a colocalization study of MMP9 with TRIM16 in cells, would also be desirable.
Response:
To answer this question, we generated Fkbp5-KO and Sec22b-KO SIM-A9 cells and analyzed CTSD, MMP9, proBDNF and mBDNF secretion via WB of supernatants. In line with the rest of our data, the results of this experiment showed that the secretion of CTSD, MMP9 and mBDNF is both SEC22B- and FKBP51-dependent, as it is significantly impaired in both KO cell lines compared to WTs, while the secretion of proBDNF is unaffected by FKBP51 or SEC22B (see Fig. S4 f and g).
(7) Figure 7 - "Schematic representation of the findings and proposed model": In a, TRIM16 is shown to interact with MMP9. Again, there are no data in the manuscript that demonstrate this interaction, and this schematic drawing is thus not accurate.
Response:
We rephrased the figure title as "proposed model based on the findings".
The manuscript by Martinelli et al., "Stress-primed secretory autophagy drives extracellular BDNF maturation", describes the mechanism of enhancement of secretory autophagy by glucocorticoid-induced stress. The authors use interactomics and secretome analysis by mass spectrometry to identify proteins involved in the process and propose an elegant step-by-step mechanistic model validated by several other methods. The paper is well written, the findings are novel, and this reviewer supports publication in Nature Communications, which will allow these results to reach a broad readership. There are a few minor concerns, mostly technical in nature, that should be addressed before acceptance.
Detailed comments:
(1) Since TRIM16 was confirmed to be an interactor of FKBP51 by Western blot, but not originally found in the MS dataset, can the MS data be re-searched so that perhaps TRIM16 peptides can be found (maybe with the help of the inclusion list)?
Response:
TRIM16 was not found in the FKBP51 interactome list, nor in the control one. We think that this is due to expression of this protein below the detection level. In fact, we confirmed the interaction of FKBP51 with TRIM16 via a pull-down assay (Supplementary Fig. S1c).
(2) For the interactome analysis, the transfection with a vector containing FLAG only was used. Can the authors include more description of how the control (unspecific) binders were eliminated? The only explanation I could find is in lines 651-651, that the proteins overlapping from all four replicates were counted as interactors. This is not sufficient.
Response:
We considered as interactors of FKBP51 only proteins that fulfilled both of the following criteria:
• bind only to FKBP51-FLAG and not to control FLAG;
• be found in all four replicates of FKBP51-FLAG transfected cells.
This selection method is quite stringent: it may exclude true interactors (i.e., produce false negatives), but it allows us to be confident in the identification of the positive candidates.
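A minimal sketch of this two-criterion filter (the protein identifiers are made up for illustration; the real analysis was run on the MS output tables):

```python
# Hypothetical protein lists from four FKBP51-FLAG IP replicates and the
# pooled control-FLAG IPs; identifiers are invented for illustration.
fkbp51_reps = [
    {"SEC22B", "STX3", "SNAP29", "HSP90", "KEAP1"},
    {"SEC22B", "STX3", "SNAP29", "HSP90"},
    {"SEC22B", "STX3", "SNAP29", "HSP90", "TUBB"},
    {"SEC22B", "STX3", "SNAP29", "HSP90"},
]
control_flag = {"KEAP1", "TUBB", "ACTB"}

# Criterion 1: present in all four FKBP51-FLAG replicates.
in_all_reps = set.intersection(*fkbp51_reps)

# Criterion 2: absent from the control-FLAG pull-down.
interactors = in_all_reps - control_flag

print(sorted(interactors))  # e.g. ['HSP90', 'SEC22B', 'SNAP29', 'STX3']
```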
(3) For the secretome analysis, the media supplemented with FBS was used throughout the whole experiment. Is that correct? If so, how was the signal suppression by the overwhelming amount of protein handled? Was the albumin removed? This was not an issue for detection because the labeling was used, but signal suppression would be an issue anyway. Can the authors comment on that?
Response:
Yes, the secretomics experiment was carried out in the presence of FBS to avoid any artefacts introduced by serum starvation. In order to study secreted proteins of low intensity irrespective of, e.g., high-intensity albumin in the background, we performed labeling of newly synthesized proteins and selective enrichment according to a protocol that has been published before (Eichelbaum K, Winter M, Diaz MB, Herzig S, Krijgsveld J: Selective enrichment of newly synthesized proteins for quantitative secretome analysis. Nat Biotech 2012, 30(10):984-990). In more detail, the cells were labeled with azide-containing azidohomoalanine, which substitutes for methionine in newly synthesized proteins. In a second step, these proteins were selectively enriched and covalently linked to alkyne beads by click chemistry. Consequently, other non-labeled proteins such as FBS-derived albumin were removed before the subsequent LC-MS/MS analysis was performed.
(4) Another useful clarification of the secretome analysis would be the comparison of growth rates and cell death between the wild type and the Atg5 KO cells. Are they identical in this respect, and if not, how was the data normalized?
"year": 2021,
"sha1": "1ecbf23bcfe3038d1b4b553bafc7c2bd7820a5e6",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41467-021-24810-5.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f205c4eb4c8679498e9de5d9275c84dc4330e224",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Contribution of Chronic Fatigue to Psychosocial Status and Quality of Life in Spanish Women Diagnosed with Endometriosis
Aim: To analyze the levels of chronic fatigue in Spanish women with endometriosis and its relationship with their psychosocial status and quality of life (QoL). Methods: A total of 230 Spanish women with a clinical diagnosis of endometriosis were recruited. Chronic fatigue (Piper Fatigue Scale) and pelvic pain (Numeric Rating Scale) were evaluated. An on-line battery of validated scales was used to assess psychosocial status [Hospital Anxiety and Depression Scale, Scale for Mood Assessment, Pain Catastrophizing Scale, Pittsburgh Sleep Quality Index, Gastrointestinal Quality of Life Index, Female Sexual Function Index and Medical Outcomes Study-Social Support Survey] and QoL [Endometriosis-Health Profile questionnaire-30]. Associations between fatigue and both psychosocial and QoL outcomes were explored through multivariate regression models. Results: One-third and one-half of women showed moderate and severe fatigue, respectively. Fatigue was associated with higher anxiety and depression, poorer sleep quality, poorer sexual functioning, worse gastrointestinal health, higher catastrophizing thoughts, higher anger/hostility scores and lower QoL (p-values < 0.050). Moreover, fatigue and catastrophizing thoughts showed a mediating effect on the association between pelvic pain and QoL. Conclusion: This work reveals the important role of fatigue in the association between pain, psychosocial status, and QoL of Spanish women with endometriosis.
Introduction
Endometriosis, characterized by the ectopic development of endometrial-like tissue, is among the most commonly diagnosed benign diseases in women of reproductive age [1]. Diagnostic delay and the fact that diagnosis is often overlooked by primary care doctors make the prevalence of the disease difficult to establish. Nevertheless, prevalence estimates range from 1-2% when considering populations at low risk to 10% when high-risk populations are considered [2]. However, despite the benign nature of this disease, substantial direct and indirect costs (rising to more than $12,000-$15,000 in some countries) have been shown to be associated with endometriosis [3].
Pain in the pelvic region is acknowledged to be the most characteristic symptom of women with endometriosis, and it is intensified during the menstruation period (dysmenorrhea) and during the performance of daily activities such as defecation (dyschezia), urination (dysuria) or sexual relationships (dyspareunia) [4,5]. The contribution of pelvic pain (PP) to the psychosocial status and the quality of life (QoL) of women with endometriosis has been extensively addressed [6-10]. Additionally, chronic fatigue, i.e., the perception of physical tiredness and lack of energy distinct from sadness or weakness, is another endometriosis-related symptom, as recently identified in women with endometriosis [11]. However, the role of chronic fatigue in patients' lives has been poorly addressed, although a few qualitative studies have indicated that affected women ascribed social and work impairments to fatigue [12,13]. In contrast, there is consistent evidence of the relevant role of fatigue in different subsets of patients experiencing chronic pain, suggesting that fatigue hinders the completion of routines and meaningful activities and, therefore, severely reduces QoL [14,15].
Moreover, the relevant contribution of chronic fatigue to the presence of psychosocial impairments in patients with autoimmune diseases [16] or neurological problems [17] has been reported. However, contrary to the well-established relationship between chronic pain, psychosocial problems and QoL in women with endometriosis, there is a scarcity of published studies addressing the contribution of chronic fatigue to the symptomatic burden in women with endometriosis under medical treatment. Thus, the aim of this study was to explore the presence of chronic fatigue in Spanish women diagnosed with endometriosis and its contribution to the psychosocial status and QoL.
Study Population
A total of 230 women with a clinical diagnosis of endometriosis, from different regions of Spain, were enrolled in this cross-sectional study from January to July 2019. Recruitment of women was carried out in collaboration with both gynecologists and Spanish associations of endometriosis patients, which advertised the study in their social networks. The inclusion criteria were: to have attended a gynecological visit with any participating gynecologist or to belong to any of the Spanish associations of endometriosis patients; to be diagnosed with endometriosis (either by laparoscopy, magnetic resonance or ultrasound imaging, or based on symptoms); and to have the ability and availability to use an electronic device with an internet connection (computer, tablet or mobile phone). The exclusion criteria were: to live in another country or to be a non-Spanish speaker. The survey was designed considering researchers', gynecologists' and patients' opinions about the most relevant aspects that should be addressed. Interested women received a link for the completion of an on-line questionnaire. Prior to this, women were informed about the nature and objectives of the study, and they were requested to read and sign the informed consent. No personal information was asked for in the questionnaire, and data were extracted to create an anonymized database. This study was carried out following the principles of the Declaration of Helsinki and Biomedical Research Law 14/2007 and was approved by the Research Ethics Committee of Granada (1733-N-18).
Assessment of Self-Reported Intensity of Chronic Fatigue and Pain
Chronic fatigue was assessed through the Spanish version of the Piper Fatigue Scale (PFS) [18,19]. The PFS is a validated 22-item tool for self-reported chronic fatigue, originally developed in breast cancer survivors but also used in patients with gynecological disorders [20] or coronary diseases [21]. It includes four dimensions of subjective fatigue: "behavioral/severity", "affective meaning", "sensory" and "cognitive/mood". Scores range from 0 to 10, with higher scores indicating greater fatigue. It has demonstrated high reliability and validity (Cronbach's alpha 0.86). Participants were divided into three groups according to the clinically significant fatigue criteria, based on the value obtained for the PFS total score: mild (≤ 4.0), moderate (4-7) or severe (≥ 7) [18,22].
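As a trivial illustration, the banding described above (assuming the reconstructed moderate band of 4-7) can be expressed as:

```python
def pfs_category(total_score: float) -> str:
    """Band a Piper Fatigue Scale total score (0-10) into severity groups."""
    if total_score <= 4.0:
        return "mild"
    elif total_score < 7.0:
        return "moderate"
    else:
        return "severe"

for score in (2.5, 5.9, 8.1):
    print(score, "->", pfs_category(score))
```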
Psychosocial Assessment
Participants were asked to complete the Hospital Anxiety and Depression Scale (HADS), the Scale for Mood Assessment (EVEA), the Pain Catastrophizing Scale (PCS), the Pittsburgh Sleep Quality Index (PSQI), the Female Sexual Function Index (FSFI), the Gastrointestinal Quality of Life Index (GIQLI) and the Medical Outcomes Study-Social Support Survey (MOS-SSS).
Catastrophic thoughts about pain were assessed through the Spanish version of the PCS, a 13-item, validated, self-report instrument with adequate reliability (Cronbach's alpha 0.79) [33]. This measure has a 5-point Likert-style response scale and the scoring range is 0-52, with higher scores indicating higher levels of catastrophic thoughts. Previous studies have shown a cut-off of more than 30 points to be associated with clinical relevance [34].
Sleep quality was assessed using the Spanish validated version of the PSQI [35]. The PSQI is a 19-item, validated, self-report scale used to measure quality and patterns of sleep, with adequate reliability (Cronbach's alpha 0.87). Scores range from 0 to 21, with higher scores representing poorer sleep quality [36]. It has been proposed that a total score ≤5 indicates good sleep quality while a total score > 5 indicates poor sleep quality [36].
Sexual function was assessed through the Spanish version of the FSFI [37]. This is a 19-item, validated, multidimensional self-report instrument for assessing the major aspects of female sexual dysfunction and sexual satisfaction [37,38]. The FSFI score ranges from 0 to 36. Higher scores represent better sexual function; patients with an FSFI total score below 26 are categorized as sexually dysfunctional, whereas those scoring at or above this cut-off are categorized as sexually functional [39]. Adequate reliability has been reported (Cronbach's alpha >0.70 for all domains).
Digestive complaints were assessed through the Spanish version of the GIQLI [40], a self-administered 36-item questionnaire concerning digestive symptoms, physical status, emotions, social dysfunction and effects of medical treatment. Each item scores from 0 to 4 with the total score ranging from 0 to 144, higher scores representing better quality of life. GIQLI also measures physical well-being, mental well-being, digestion and defecation [41].
The Spanish version of the MOS-SSS scale was used to assess the extent to which the person has the support of others to face stressful situations [42]. It is comprised of 19 items with a 5-point Likert-style response, with higher scores representing better social support. This measure has shown good psychometric quality in different studies using diverse populations and clinical scenarios (Cronbach's alpha 0.94) [43].
Quality of Life
The Spanish version of the validated Endometriosis Health Profile-30 (EHP-30) questionnaire was used for the assessment of QoL in participating women [44]. This 30-item scale has five subscale scores: pain, control and powerlessness, social support, emotional well-being and self-image. Each subscale is standardized on a scale of 0-100, where 0 indicates the best health status and 100 the worst. Each subscale score is calculated as the sum of the raw scores of the items in the scale, divided by the maximum possible raw score of all the items in the scale, and multiplied by 100. This instrument has shown good internal consistency reliability, with Cronbach's alpha >0.88 for all subscales.
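In formula form, the standardization described above is:

$$\text{subscale score} = \frac{\sum_{i} \text{raw score of item } i}{\text{maximum possible raw score of the subscale}} \times 100$$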
Statistical Analysis
The sociodemographic and gynecological characteristics of participants and scores for PP and chronic fatigue were expressed as geometric means (GMs) with geometric standard deviations (GSDs), or as percentages, depending on the continuous or categorical nature of the variable. Scores for QoL and psychosocial outcomes were expressed as GM with GSD, as minimum and maximum values, and as percentiles (25, 50 and 75). When clinical cut-offs were available, variables were categorized and expressed as percentages.
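For reference, the geometric mean and geometric standard deviation used throughout are defined as:

$$\mathrm{GM} = \exp\!\left(\frac{1}{n}\sum_{i=1}^{n}\ln x_i\right), \qquad \mathrm{GSD} = \exp\!\left(\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\ln x_i - \ln \mathrm{GM}\right)^{2}}\right)$$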
To improve the normality of the data, psychosocial outcomes were log-transformed and, therefore, β coefficients are also presented as exp(β). Associations between fatigue severity (mild/moderate/severe), psychosocial outcomes and QoL were assessed using linear regression models adjusted for sociodemographic and gynecological characteristics, including age, schooling, civil status, severity of premenstrual syndrome (none, mild, moderate or severe), type of diagnosis, time since diagnosis and number of surgeries. Additional models adjusted for the severity of last-week PP are also presented. Moreover, the mediating effect of fatigue and pain catastrophizing thoughts on the relationship between last-week PP intensity and QoL was assessed through the PROCESS macro for the Statistical Package for the Social Sciences (SPSS) [45]; mediating effects were considered significant when zero was not located within the confidence intervals.
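A minimal Python sketch of the kind of mediation test the PROCESS macro performs (the data below are simulated stand-ins, not study data): the indirect effect is the product of the a path (pain → mediator) and the b path (mediator → QoL, controlling for pain), judged significant when a bootstrap confidence interval excludes zero.

```python
import numpy as np

rng = np.random.default_rng(42)

def indirect_effect(x, m, y):
    # a path: mediator (fatigue) regressed on predictor (pelvic pain)
    a = np.polyfit(x, m, 1)[0]
    # b path: outcome (QoL) regressed on mediator, controlling for predictor
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]
    return a * b

# Simulated stand-ins for last-week pelvic pain (x), chronic fatigue (m)
# and EHP-30 score (y, higher = worse QoL); coefficients are arbitrary.
n = 230
x = rng.normal(5, 2, n)
m = 0.5 * x + rng.normal(0, 1, n)
y = 2.0 * x + 1.5 * m + rng.normal(0, 2, n)

point = indirect_effect(x, m, y)

# Percentile bootstrap of the indirect effect (5000 resamples).
boot = np.empty(5000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)
    boot[i] = indirect_effect(x[idx], m[idx], y[idx])
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"indirect effect a*b = {point:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
# The mediating effect is considered significant when zero lies outside the CI.
```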
The statistical significance level was set at p = 0.05. Analyses were performed using SPSS v23.0 statistical software (IBM, Chicago, IL, USA), while figures were designed with GraphPad Prism 5.0 software (San Diego, CA, USA). A post-hoc analysis to estimate the power (1-β) of the statistical analysis was conducted using G*Power 3.1.9.7 statistical software (Düsseldorf University, Düsseldorf, Germany). For the main analysis between chronic fatigue and QoL, it revealed that, for an R² of 0.28 and assuming an α-error of 0.05, the power was >0.99.
Results
A total of 241 women were interested in the study. However, 11 (4.6%) women did not meet the inclusion/exclusion criteria. Finally, 230 women agreed to participate. Baseline characteristics of the study population are summarized in Table 1. The mean (±SD) age of the study population was 36.7 ± 5.2 years; the majority held a university degree (53.9%), were currently working (64.3%) and declared the presence of premenstrual syndrome at some level of severity (56.6%). A total of 155 out of 230 women had laparoscopic confirmation of the presence of endometriosis lesions at the time of this survey, while in 62 (27.0%) the diagnosis was based on magnetic resonance imaging (MRI) and/or ultrasound (US) imaging techniques. Only 13 (5.7%) were diagnosed based on symptoms, without confirmation by MRI and/or US imaging. Finally, the mean time since diagnosis was 5.0 ± 5.3 years, and 68 (29.6%) had undergone at least two endometriosis surgeries.
Intensity of Chronic Fatigue and Pain in Spanish Women Diagnosed with Endometriosis
Self-reported severity of chronic fatigue and last-week PP are summarized in Table 2. The GM (±GSD) intensity of chronic fatigue was 5.9 ± 1.7 points, with almost half of the participating women reporting severe fatigue. Concerning last-week PP intensity, the GM (±GSD) was 5.0 ± 1.9. Totals of 46.3% and 27.1% of the study population showed moderate and severe PP during the last week, respectively. Using multivariate linear regression modelling, a positive association was found between the intensities of chronic fatigue and last-week PP after adjustment for potential confounders (Supplementary Table S1).
Contribution of Fatigue Intensity to Psychosocial Impairment in Spanish Women
Results from the multivariate analyses assessing associations between self-perceived fatigue severity and psychosocial impairments are depicted in Figure 1, and results from the bivariate and multivariate analyses are summarized in Supplementary Table S2. After adjustment for potential confounders (sociodemographic and gynecological characteristics and intensity of PP during the last week), moderate and severe fatigue were found to be related to anxiety and depression, poorer sleep quality, poorer sexual functioning and lower gastrointestinal quality of life in an intensity-dependent manner, while higher PCS and EVEA anger/hostility scores were associated with severe fatigue. Moreover, multivariate logistic regression analyses run in parallel when cut-off points were available showed similar results (data not shown in tables). Sensitivity analyses stratified by endometriosis diagnosis yielded similar results.
Figure 1 legend: Results from multivariate linear regression analyses adjusted for age (years), schooling, civil status, number of surgeries, type of diagnosis, time since diagnosis, number of children, premenstrual symptom severity and last-week pelvic pain intensity. # p-value ≤ 0.05 between mild and moderate groups; * p-value ≤ 0.05 between mild and severe groups.
Contribution of Pain, Fatigue and Psychosocial Impairment to Quality of Life in Spanish Women
Results from the multivariate linear regression analyses are summarized in Table 4. Severity of chronic fatigue and last-week PP were associated with poorer QoL in an intensity-dependent manner. Moreover, anxiety, depression, anger/hostility and catastrophizing thoughts were associated with poorer QoL. Similarly, poorer gastrointestinal health, sexual function and sleep quality were also related to poorer QoL, although the latter showed a near-significant association with QoL when models were further adjusted for the intensity of PP during the last week (p-value 0.059).
The mediating effects of fatigue and psychosocial impairments on QoL were also assessed (Figure 2). Chronic fatigue, gastrointestinal complaints, sexual function, anxiety, depression, anger/hostility, sleep quality and catastrophizing thoughts all showed a mediating effect on the association between last-week PP and QoL when assessed individually (data not shown). However, when the combined mediating effect was evaluated, only chronic fatigue and catastrophizing thoughts revealed a statistically significant mediating effect on the association between last-week PP and QoL (1.128 and 0.863, respectively; p-value < 0.05).
Discussion
To our knowledge, this study constitutes the first attempt to objectively evaluate levels of endometriosis-related fatigue in a comprehensive population, and to address its relevant contribution to the psychosocial status and QoL in Spanish women diagnosed with endometriosis. Moreover, our results suggest that endometriosis-related fatigue and catastrophizing thoughts also exert a mediating effect on the association between intensity of PP and poorer QoL in affected women, evidencing that these factors also need to be addressed within appropriate treatment approaches in women with endometriosis.
Previous studies have stated that women with endometriosis frequently report the presence of chronic fatigue [46], with authors defending the effect of endometriosis on its generation, independently from other symptoms of the disease [11]. Our study shows that 85.3% of the patients enrolled have moderate to severe fatigue. Our findings are in accordance with previous studies in which affected women were asked if they felt fatigue: a total of 50.7% and 27.1% of women reported frequent and occasional fatigue, respectively [11]. Similarly, Surrey et al. [47] recently reported that 54-74% of affected women with moderate to severe pain reported experiencing fatigue. Although the underlying mechanisms are still not fully elucidated, it has been reported that endometriotic lesions usually develop a complex and dynamic environment dominated by inflammatory, angiogenic and endocrine signals [48]. Similarly, Suryawanshi et al. [49] reported that endometriotic lesions generate a specific immune microenvironment similar to a tumor-like inflammatory profile. Thus, in accordance with the positive correlation between inflammatory cytokines and fatigue shown in cancer patients [50], elevated cytokine levels found in endometriosis might be hypothesized to play a role in the development of fatigue symptomatology in these affected women.
To our knowledge, this study constitutes the first attempt to objectively evaluate levels of endometriosis-related fatigue in a comprehensive population, and to address its relevant contribution to the psychosocial status and QoL in Spanish women diagnosed with endometriosis. Moreover, our results suggest that endometriosis-related fatigue and catastrophizing thoughts also exert a mediating effect on the association between intensity of PP and poorer QoL in affected women, evidencing that these factors also need to be addressed within appropriate treatment approaches in women with endometriosis. Regarding the relationship identified in this study between levels of fatigue experienced by women with endometriosis and severity of PP, it was not unexpected, as this association has been previously stated in different populations suffering different chronic conditions such as rheumatic diseases [51] or cancer [52]. In fact, both symptoms have been found to be related to an inflammatory microenvironment. Hence, studies from basic sciences evidenced that changes in immune surveillance and central sensitization were related to the pathophysiology of endometriosis [53]. Interestingly, a misbalance in estrogen levels, as widely reported in patients with endometriosis, may be the first responsible for the generation of an inflammatory microenvironment [48] that ultimately can lead to the development of not only PP [54] but also endometriosis-related fatigue. Moreover, the dysregulation of the hypothalamic-pituitary-adrenal (HPA) axis has been reported to contribute to the development of fatigue in chronic illness [55].
Our sample of patients show high levels of psychosocial impairments such as anxiety, depression, poor sleep quality or sexual dysfunction. In this respect, several studies have stated the association between endometriosis and psychological impairments, with depression and anxiety as the most common disorders related to endometriosis, deeply affecting the QoL of these women [6,[56][57][58][59]. Interestingly, our results suggest an association of chronic fatigue and psychosocial factors, as reported in another multicenter study by Ramin-Wright et al. [11] that comprised 554 women with endometriosis, where fatigue was associated with insomnia and depression among other factors. In this respect, a previous review stated the influence of chronic fatigue on different inflammatory conditions and the possible association between inflammation, pain and depression [51,60]. Moreover, in addition to poorer sleep quality and depression, our study suggests for the first time that the presence of chronic fatigue is associated with higher levels of anxiety and anger/hostility, as well as poorer sexual function. In accordance with our results, it has been reported that fatigue was associated with poorer sexual functioning in women with chronic conditions such as breast cancer [61] or multiple sclerosis [62]. In this regard, fitness level, crucially related to the presence of chronic Previous studies have stated that women with endometriosis frequently report the presence of chronic fatigue [46], with authors defending the effect of endometriosis on its generation, independently from other symptoms of the disease [11]. Our study shows that 85.3% of the patients enrolled have moderate to severe fatigue. Our findings are in accordance with previous studies where affected women were asked if they felt fatigue. Hence, a total of 50.7% and 27.1% of women reported frequent and occasional fatigue, respectively [11]. Similarly, Surrey et al. [47] recently reported that 54-74% of affected women with moderate to severe pain reported experiencing fatigue. Although the underlying mechanisms are not still fully elucidated, it has been reported that the endometriotic lesions usually develop a complex and dynamic environment dominated by inflammatory, angiogenic, and endocrine signals [48]. Similarly, Suryawanshi et al. [49] reported that endometriotic lesions generate a specific immune microenvironment similar to a tumor-like inflammatory profile. Thus, in accordance with the positive correlation between inflammatory cytokines and fatigue shown in cancer patients [50], elevated cytokine levels found in endometriosis might be hypothesized to play a role in the development of fatigue symptomology in these affected women.
Regarding the relationship identified in this study between the levels of fatigue experienced by women with endometriosis and the severity of PP, it was not unexpected, as this association has previously been described in populations suffering from other chronic conditions such as rheumatic diseases [51] or cancer [52]. In fact, both symptoms have been found to be related to an inflammatory microenvironment. Studies from the basic sciences have shown that changes in immune surveillance and central sensitization are related to the pathophysiology of endometriosis [53]. Interestingly, an imbalance in estrogen levels, widely reported in patients with endometriosis, may be primarily responsible for generating an inflammatory microenvironment [48] that can ultimately lead to the development not only of PP [54] but also of endometriosis-related fatigue. Moreover, dysregulation of the hypothalamic-pituitary-adrenal (HPA) axis has been reported to contribute to the development of fatigue in chronic illness [55].
Our sample of patients shows high levels of psychosocial impairment, including anxiety, depression, poor sleep quality, and sexual dysfunction. In this respect, several studies have described the association between endometriosis and psychological impairments, with depression and anxiety being the disorders most commonly related to endometriosis and deeply affecting QoL in these women [6,56-59]. Interestingly, our results suggest an association between chronic fatigue and psychosocial factors, as reported in a multicenter study by Ramin-Wright et al. [11] comprising 554 women with endometriosis, in which fatigue was associated with insomnia and depression, among other factors. In this respect, a previous review described the influence of chronic fatigue on different inflammatory conditions and a possible association between inflammation, pain, and depression [51,60]. Moreover, in addition to poorer sleep quality and depression, our study suggests for the first time that the presence of chronic fatigue is associated with higher levels of anxiety and anger/hostility, as well as poorer sexual function. In accordance with our results, fatigue has been reported to be associated with poorer sexual functioning in women with chronic conditions such as breast cancer [61] or multiple sclerosis [62]. In this regard, fitness level, which is closely related to the presence of chronic fatigue, has recently been identified as a strong predictor of sexual function in middle-aged women [63]. Although the molecular links between chronic fatigue and psychosocial impairments remain unclear, it has been suggested that the HPA axis might underlie this cluster of symptoms, as typically observed in cancer patients [64]. Interestingly, an aberrant HPA response has been reported in women with endometriosis [65]. Moreover, social factors might also contribute to the development of fatigue: in a different patient population, it has been reported that social support, by promoting self-confidence and rational thoughts, may reinforce the immune system and, in turn, reduce fatigue levels [66]. Care practitioners' and clinicians' perceptions of women's experiences of endometriosis [67] and low-value healthcare [68] might also contribute to the endometriosis-related symptom burden. Additionally, diagnostic delay, infertility, and worries related to low work productivity or job loss, together with depressive symptoms or disturbed sleep, might further impair energy and vitality [69-71], revealing the complex inter-relationships between physiological and psychosocial factors in women with endometriosis.
Regarding the interrelationship between PP intensity, chronic fatigue, and psychosocial impairments, our findings indicate that chronic fatigue and catastrophizing thoughts may mediate the association between last-week PP intensity and QoL. A similar relationship was described in a previous work showing that pain and psychological stress were associated with worse QoL in women living with endometriosis [72]. Here, we add for the first time the contribution of chronic fatigue, in addition to psychosocial distress, to QoL impairment in women with endometriosis. Taken together, these studies support the idea that pain is associated with chronic fatigue and negative emotions [73,74], which in turn may affect QoL in women with endometriosis. Therefore, our data support the need for multimodal treatments that address fatigue and psychosocial distress in addition to PP intensity in order to improve QoL in women with endometriosis, in line with previous suggestions [11,72]. Thus, besides medical therapy [47], physical and psychological interventions might be beneficial in endometriosis treatment, as evidenced for a variety of chronic illnesses in women [75-78]. More attention should be paid to non-pharmacological approaches to managing the symptom burden of this silenced female disease.
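As a methodological aside, the indirect effect underlying a mediation claim of this kind is commonly estimated by bootstrapping the product of the X-to-M and M-to-Y regression paths. The sketch below illustrates this on simulated data with hypothetical variable names (X = pelvic pain intensity, M = fatigue, Y = QoL); it is not the analysis code used in this study.

```python
# Illustrative sketch (not the authors' code): a simple bootstrap test of an
# indirect (mediation) effect on simulated data with hypothetical variables.
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.normal(5, 2, n)                       # simulated pain intensity
M = 0.5 * X + rng.normal(0, 1, n)             # simulated fatigue (mediator)
Y = -0.4 * M - 0.2 * X + rng.normal(0, 1, n)  # simulated QoL (outcome)

def ols_slopes(y, x_cols):
    """Return OLS coefficients of y on the given predictors (intercept dropped)."""
    Xd = np.column_stack([np.ones(len(y))] + list(x_cols))
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return beta[1:]

indirect = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                    # resample with replacement
    a = ols_slopes(M[idx], [X[idx]])[0]            # path a: X -> M
    b = ols_slopes(Y[idx], [M[idx], X[idx]])[0]    # path b: M -> Y given X
    indirect.append(a * b)

lo, hi = np.percentile(indirect, [2.5, 97.5])
print(f"bootstrap 95% CI for indirect effect: [{lo:.3f}, {hi:.3f}]")
```

If the bootstrap confidence interval for the indirect effect excludes zero, mediation is supported.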
Regarding limitations, the study population might not be fully representative of Spanish women with endometriosis. Although we included affected women from all Spanish regions, a selection bias is plausible, given that participants might have a different symptom burden than non-participants; it has been reported that QoL-related outcomes are influenced by recruitment strategy [79]. Secondly, this study has a cross-sectional design that does not allow assessment of causal relationships between the studied variables. The absence of a control group also limited our understanding of the differential impact of this symptom on the lives of women with and without endometriosis. Nevertheless, in a case-control study comprising 25 women with endometriosis and 25 healthy controls, we recently reported a mean fatigue score of 2.9 ± 2.0 among controls and 5.3 ± 2.3 among women with endometriosis [80]. In fact, the large majority of healthy women had mild fatigue and none had severe fatigue, whereas most women with endometriosis (72.0%) had moderate-severe fatigue. Moreover, the information retrieved in the present study was obtained from self-administered questionnaires and, therefore, a risk of information bias could also exist; however, the use of validated scales may counteract this bias. Finally, we had no information about medication taken by the women during the study, although all participants reported adhering to their prescribed medical treatment. It is therefore possible that fatigue in endometriosis could be partially attributable to medication side effects [46]. In addition, contraceptive hormonal therapy is commonly prescribed to women with endometriosis, and its use has previously been associated with depressive symptoms [81].
Conclusions
This work provides preliminary evidence of the relevance of chronic fatigue to the psychosocial status and QoL of women living with endometriosis. We consider that it has important implications for the evaluation and treatment of this population, as the main goal of their management is usually to ameliorate symptoms and improve general QoL. The habitual treatment of the disease focuses on classic symptoms such as pain or infertility [1,82], but our findings also support the importance of addressing fatigue when treating patients with endometriosis, highlighting the need for interdisciplinary management of the disease. Thus, our results warrant future studies assessing the effectiveness of multidisciplinary approaches (i.e., physical and psychological rehabilitation interventions, in addition to medical therapy) for symptom management.
Supplementary Materials: The following are available online at http://www.mdpi.com/1660-4601/17/11/3831/s1, Table S1. Relationship between intensity of chronic pelvic pain and chronic fatigue; Table S2. Relationship between intensity of chronic fatigue, psychosocial status and quality of life in women with endometriosis. Funding: This research was funded by Health Institute Carlos III (ISCIII)-FEDER (grant number PI17/01743) and donations from individual women with endometriosis who believed in this project from the start. It was also partly supported by the PAIDI group CTS-206 (Oncología Básica y Clínica) funds. This study takes place thanks to the additional funding from the University of Granada, Plan Propio de Investigación 2016, Excellence actions: Units of Excellence; Unit of Excellence on Exercise and Health (UCEES), and by the Junta de Andalucía, Consejería de Conocimiento, Investigación y Universidades and European Regional Development Fund (ERDF), ref. SOMM17/6107/UGR.
Preclinical Study of Biphasic Asymmetric Pulsed Field Ablation
Pulsed field ablation (PFA) is a novel method of pulmonary vein isolation for atrial fibrillation ablation and is characterized by tissue-selective ablation. Isolation is achieved by applying high-voltage microsecond pulses that create irreversible pores in cell membranes (i.e., electroporation). We propose a new biphasic asymmetric pulse mode and verified the lesion durability and safety of this mode for pulmonary vein ostium ablation in preclinical studies. We found that biphasic asymmetric pulses can effectively reduce muscle contractions and lower the ablation threshold. On electroanatomic mapping, the ablation site showed a continuous low-potential area, and pacing did not capture the atrium at 30 days. Pathological staining showed that cardiomyocytes in the ablation area had been replaced by fibroblasts, with no damage outside the ablation zone. Our results show that pulmonary vein isolation using the biphasic asymmetric discharge mode is safe, durable, and effective, and causes no damage to other tissues.
INTRODUCTION
Pulsed field ablation (PFA), also known as irreversible electroporation, has been applied to cardiac ablation in recent years (1)(2)(3). PFA is a non-thermal ablative modality used to treat atrial fibrillation; it does not damage other tissues such as the esophagus and nerves, because the electroporation threshold of cardiomyocytes is lower than that of any other tissue (4)(5)(6)(7)(8)(9).
Owing to these potential advantages, PFA has attracted increasing attention in the field of ablation treatment of atrial fibrillation (10). Although the safety and effectiveness of PFA have been demonstrated in many experimental studies (11,12), PFA is still at a preliminary research stage, and the optimal pulse modes and ablation dose remain unclear.
The ablation effect of PFA is strongly influenced by the pulse amplitude and pulse width. Lavee et al. (13) were the first to perform cardiac PFA, using a monophasic direct-current pulse sequence of 1,500-2,000 V, 100 µs (microseconds) per pulse, at a frequency of five pulses per second. However, such electrical pulses create local and systemic muscle contractions that make it difficult to perform ablation accurately; to solve this problem, substantial doses of chemical paralytics must be administered to patients. Arena et al. (14) proposed a new type of high-frequency biphasic pulse to reduce muscle contraction during ablation, but high frequency is often accompanied by heat generation. Moreover, biphasic pulses require a higher voltage to achieve an effect similar to that of monophasic pulses, because of a cancellation effect whereby the effect of the first pulse is reduced by the second pulse of opposite polarity (15,16). For example, Sano et al. (17) found that the lethal threshold of a biphasic symmetric pulse was 1,316 V/cm, significantly higher than that of a biphasic asymmetric pulse (536 V/cm). Our previous study on smooth muscle cells and cardiomyocytes found that, at equal amplitudes, asymmetric pulse widths were superior to biphasic symmetric pulses for ablation (18).
Based on our previous cell experiments, PFA with biphasic asymmetric pulses was carried out in 12 Bama miniature pigs. Gross examination and histological investigation were used to evaluate lesion durability and safety on the 7th and 30th days after the PFA procedure. In addition, PFA with biphasic asymmetric pulses was carried out in two dogs, and electroanatomic mapping was used to display the lesion area. These results will help inform the design of future clinical trials.
Reagents and Instruments
The medications used in the present study were Sumianxin II injection (a compound preparation of Jingsongling, edetic acid, dihydroetorphine hydrochloride (DHE), and haloperidol) (Jilin Dunhua Shengda Animal Pharmaceutical Co., Ltd.), 1% propofol injection (AstraZeneca, Italy), and enoxaparin sodium injection (Sanofi, France). Other medications and instruments were conventional equipment found in laboratories and catheterization laboratories.
Preoperative Treatment
The animals were fasted for 12 h before the operation, and water was withheld for 6 h. Aspirin (5 mg/kg) was administered once daily for the first 3 days. Before the operation, blood was collected from the precaval vein for routine preoperative blood testing. Conventional doses of xylazine and midazolam were used for induction of anesthesia. After peripheral venous access was established, 30-50 mg of propofol was infused intravenously as appropriate to obtain stable anesthesia. After being weighed, the animals were placed supine in a U-shaped groove on the digital subtraction angiography operating table, and tracheal intubation was performed. A ventilator was used for mechanical ventilation. The skin of the anterior chest and both inguinal regions was prepared for operation. The bilateral femoral veins were punctured by the Seldinger method and a 6F vascular sheath was placed. Propofol was infused at 3-5 mg/kg/h throughout the operation to maintain anesthesia. Vital signs were routinely monitored intraoperatively.
Pulsed Field Catheter Ablation
After the atrial septal puncture sheath was guided into the left atrium under endoluminal sonography and X-ray fluoroscopy, 6,000 units of heparin were injected, and 1,000 units of heparin were added every hour during the operation. The position of the sheath was adjusted, and pulmonary venography was performed to show the shape and branching of the pulmonary veins. A circular mapping catheter was placed in the left superior pulmonary vein to evaluate the effect of electrical isolation.
The PFA system was composed of a pulse instrument (PFA instrument) (Tianjin Intelligent Health Co., Ltd., Tianjin, CHN) and a PFA catheter (Tianjin Intelligent Health Co., Ltd., Tianjin, CHN) ( Figure 1A). The pulse instrument was used to set the parameters, including the number of pulses, number of pulse groups, and pulse amplitude. The 10.5 F PFA catheter had four frames; each frame contained two electrodes, one of which was a positioning electrode ( Figure 1B). When fully expanded, the diameter of the most distal electrode was 28 mm.
The PFA catheter was half-opened in the left atrium through a flexible sheath, and then, guided by guidewires, the ablation electrode was advanced to a position close to the pulmonary vein ostium. Electrode-tissue contact was assessed by angiography and by the signal feedback of the PFA instrument (Figure 1E). The catheter electrode was optimized and the pulse mode modified. As shown in Figure 1F, the pulse mode was improved from the two existing modes (a and b) to type c (Figure 1F, c), in which the narrow negative pulse counteracts the wide positive pulse and reduces pulse-induced muscle contractions, enhancing safety and maneuverability. The pulse width of all positive pulses was 5 µs and that of negative pulses was 3 µs. The pulse voltage, pulse width, frequency, and electrode spacing of the PFA system could be adjusted to ensure the ablation effect. The generator delivered 1,000 V microsecond pulses synchronized to the ECG.
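As a rough illustration of this waveform, the sketch below constructs one biphasic asymmetric pulse and a short train of them in NumPy. The 5 µs positive width, 3 µs negative width, and 1,000 V amplitude come from the text; the inter-phase gap, the negative-phase amplitude, and the 100 µs pulse period are assumptions made only for illustration, and the ECG R-wave gating is not modeled.

```python
# Minimal sketch of the biphasic asymmetric pulse described in the text.
# The negative-phase amplitude, inter-phase gap, and pulse period are NOT
# given in the paper and are set here purely for illustration.
import numpy as np

DT_US = 0.1          # time step in microseconds
POS_WIDTH_US = 5.0   # positive pulse width (from the text)
NEG_WIDTH_US = 3.0   # negative pulse width (from the text)
GAP_US = 1.0         # inter-phase delay (assumed)
POS_V = 1000.0       # positive amplitude (from the text)
NEG_V = -1000.0      # negative amplitude (assumed equal in magnitude)

def biphasic_asymmetric_pulse():
    """Return (t, v) arrays for one biphasic asymmetric pulse."""
    pos = np.full(round(POS_WIDTH_US / DT_US), POS_V)
    gap = np.zeros(round(GAP_US / DT_US))
    neg = np.full(round(NEG_WIDTH_US / DT_US), NEG_V)
    v = np.concatenate([pos, gap, neg])
    t = np.arange(v.size) * DT_US
    return t, v

def pulse_train(n_pulses=10, period_us=100.0):
    """Tile n_pulses pulses at a fixed period; in vivo they are R-wave gated."""
    _, v1 = biphasic_asymmetric_pulse()
    samples_per_period = round(period_us / DT_US)
    v = np.zeros(n_pulses * samples_per_period)
    for k in range(n_pulses):
        start = k * samples_per_period
        v[start : start + v1.size] = v1
    t = np.arange(v.size) * DT_US
    return t, v

t, v = pulse_train()
print(f"{v.size} samples, waveform integral = {v.sum() * DT_US:.0f} V*us")
```

Note that in this sketch the waveform integral is non-zero: unlike a symmetric biphasic pulse, the asymmetric design deliberately leaves the two phases unbalanced, which is what limits the cancellation effect described above.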
The experiment included two groups, A and B, comprising six pigs each. PFA was carried out in all pigs. Animals in Group A were euthanized on the 7th day after ablation and animals in Group B on the 30th day. The ablated tissue (left superior pulmonary vein) served as the experimental group and non-ablated tissue (left inferior pulmonary vein) as the control group. After the electric discharge, pacing was performed with a pacemaker (Medtronic 5388, MN, USA) in Groups A and B. The pacing voltage was 10 mV, and the pacing frequency was about 15% higher than the pre-ablation heart rate (Figure 1D). As the biphasic asymmetric pulse method greatly reduced the impact on skeletal muscle, no intervention was performed for the skeletal muscles.
In addition, to further verify the effectiveness of PFA, two dogs were included as the dog group, and electroanatomic voltage maps of the left atrium were constructed after ablation.
After Care
All animals underwent computed tomography (CT) to check for adverse conditions such as air emboli, thrombi, vascular access tears, and cardiac tamponade. After the catheter was withdrawn, the puncture sites were inspected. The vital signs, mental behavior, and activity status of the animals were observed after they awoke from anesthesia.
Animals were injected with 20 IU/kg of intramuscular penicillin sodium in 0.9% normal saline twice daily for 3 days after the operation to prevent infection. Intensive care was provided postoperatively, and clinical changes were recorded. Daily wound cleaning was performed to prevent infection. On post-op days 7 and 30, the pigs were euthanized (six at each time point) and dissected for gross observation and pathological analysis. The dogs were followed up for 30 days, and circular mapping electrodes were used to create electroanatomic maps for each animal.
Gross Examination of Specimens
The surviving pigs were euthanized on post-op days 7 (n = 6) and 30 (n = 6). The heart, lungs, and adjacent trachea and esophagus were removed as a whole. The hearts were dissected, the ablation sites were identified, and the pathological changes of the ablation sites and adjacent tissues were observed. The continuity of the ablation area and the presence of local thrombosis were evaluated.
Histological Investigation
The specimens were fixed in formalin and stained with Masson trichrome and HE to evaluate the presence of lesions in the pulmonary vein ostia, transmural ability, neurological activity in the lesions, presence of thrombi and edema, degree of damage to adjacent tissues, and other related findings. Other assessed variables included fibrosis, inflammation, hemorrhage, and nerve damage.
Statistical Analyses
Origin 8.5 software was used for data analysis. Data are presented as mean ± SD or median (25th, 75th percentiles). The Mann-Whitney U test was used for continuous parameters in the experimental and control groups because of the non-normal distribution and the small sample size. p < 0.05 was considered statistically significant.
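For illustration, the minimal sketch below runs the same test with SciPy on two small hypothetical samples; the per-animal values are invented, since the raw measurements are not listed in the text.

```python
# Illustrative sketch of the comparison described above, using
# scipy.stats.mannwhitneyu on hypothetical day-7 measurements.
from scipy.stats import mannwhitneyu

ablated = [0.55, 0.71, 0.62, 0.90, 0.48, 0.81]  # hypothetical collagen depth fractions
control = [0.08, 0.12, 0.05, 0.10, 0.07, 0.11]  # hypothetical non-ablated vein values

stat, p = mannwhitneyu(ablated, control, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")  # p < 0.05 -> significant difference
```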
Clinical Observation and Survival Rate
On the first day after the operation, the behavioral activities, mental status, appetite, and food intake of the animals returned to normal. The animals showed no hollow back, piloerection, loose stools, or perinasal bleeding throughout the experiment. No animals died, and blood test results were normal on the 7th and 30th days after ablation.
Acute Experiments
The ablation target was the left superior pulmonary vein in all pigs, with an ablation success rate of 100% (12/12). In the pigs (Groups A and B), the average ablation time was 84.06 ± 13.09 s, and the average operation time was 92.3 min. The average peak ablation current in the pig experiment was 7.83 ± 1.90 A. The ablation target was the right superior pulmonary vein in both dogs, with an ablation success rate of 100% (2/2). In the dog group, the average ablation time for 1,000 pulses was 71.90 ± 2.48 s, and the average peak ablation current was 7.10 ± 0.42 A (Table 1). None of the animals showed obvious arrhythmia during the experiments. When the pigs received pacing stimulation after ablation, the pacing signal was detected on ECG monitoring, but the heart rate did not increase. In the dog group, electroanatomic mapping showed that the right superior pulmonary vein potential disappeared immediately after ablation (Figures 2A,B). Voltage mapping showed low voltage, indicating entrance block (Figure 2C). On the 30th day after the operation, right superior pulmonary vein pacing did not cause atrial capture, indicating exit block (Figure 2D).
Gross Observation of the Pulmonary Vein Ostia and Adjacent Tissues
After all pigs were euthanized, gross observation was conducted to identify any esophageal or tracheal deformation and to evaluate the smoothness of the surface. The adventitia and epithelium of the esophagus and the tunica adventitia of the trachea were inspected with the naked eye, and no damage was found. The lung specimens were soft, light, smooth, moist, elastic, spongy, and free of scarring. The vagus nerves were intact, and their upper and lower connections appeared normal to the naked eye (Figure 1).
On the 7th day after the operation, the ablation zones appeared as pale circular areas at the pulmonary vein ostia with clear boundaries (Figure 3A, Day 7). On the 30th day after the operation, there was an obvious white circular ablation zone at the pulmonary vein ostia, with morphology similar to that on the 7th day (Figure 3A, Day 30). There was no obvious damage outside the ablation zone. The boundary was clear, transmurality was evident, and there was no pulmonary vein stenosis or thrombosis (Table 2).
Histological Investigation
At 7 days after the operation, there was a clear circumferential ablation zone at the ablation site. Under 100× magnification, HE (hematoxylin-eosin) staining showed a clear boundary between the ablation and non-ablation zones and extensive layer-by-layer deposition of collagen with fibrotic tissue proliferation under the intima of the ablation zone; the myocardial cells had irregular morphology and were arranged in a disorderly fashion (Figure 3B, 100×, Day 7). Under 200× magnification, there was increased monocyte infiltration, some infiltration and proliferation of epithelioid cells, and hyperemia in the middle layer of the myocardial tissue. Under 400× magnification, the nuclei of infiltrating cells were as oblong as those of smooth muscle cells (Figure 3B, 400×, Day 7). At 30 days after the operation, the infiltration of the middle myocardial layer had disappeared, and there was no congestion, inflammatory cell infiltration, or sign of thrombosis. The cardiomyocytes of the middle layer had shrunk, their nuclei had fragmented or disappeared, cardiomyocytes showed extensive necrosis, myocardial endothelial cells had formed, and a large number of fibroblasts had replaced cardiomyocytes and were growing under the endothelium. There were no obvious effects on arterioles and venules (Figure 3B).
Masson staining of specimens collected on the 7th postoperative (post-op) day showed a thickened intimal layer of the pulmonary vein with a large number of collagen fibers (average thickness 0.68 ± 0.21, mean ± SD) and a large amount of collagen in the intimal and medial layers (Figure 3C, Day 7). At 30 days, the number of collagen fibers in the intimal layer had essentially returned to normal (0.096 ± 0.03, mean ± SD) (Figure 3C, Day 30; Table 2). By the Mann-Whitney U test, there was a significant difference in transmurality, indicated by the percentage of collagen fiber depth, on the 7th day after the operation compared with the control (P = 0.002) (Table 3). However, transmurality appeared similar to the control on the 30th day after the operation (P = 0.699).
DISCUSSION
The usual atrial fibrillation catheter ablation is performed using the RF technique, which applies a high-frequency alternating current to heat and damage the myocardial tissue (19). However, the disadvantages of the RF technique are the difficulty in controlling the size of the treatment zone and the high risk of adverse effects due to excessive damage. Subsequently, the cryoballoon ablation technique emerged; it uses cryogenic energy to induce cell necrosis by freezing, but it damages blood vessels and other tissues (20). PFA technology is derived from the principle of electroporation, in which cells are exposed to electrical pulses that increase the permeability of cell membranes (21). Previous studies have shown that cardiomyocytes have lower electroporation thresholds than the cell types relevant to collateral damage, such as the cells of nerves, arteries, and the esophagus. For example, Kaminska et al. (22) found that a pulse intensity higher than 375 V/cm had an ablation effect on cardiac muscle in an MTT assay on rat cardiomyocytes. Koruth et al. (23) found no lesions in the lumen or outer surface of the esophagus after ablation with a peak electric field intensity of 900 V/cm during PFA of the esophagus. Maor et al. (7) found that ablation of smooth muscle cells was ineffective when the pulsed-field intensity was lower than 875 V/cm. Owing to these differences in electroporation thresholds between cell types, the unique tissue selectivity of PFA in the treatment of atrial fibrillation has the great advantage of reducing complications compared with ablation via traditional energy sources, such as RF, which indiscriminately damage all tissues in the hot zone. In clinical practice, isolation by the RF method is performed point by point, which requires further mapping and ablation. In contrast, PFA creates a closed-loop isolation region, which avoids this problem and greatly shortens the ablation time.
Preclinical studies have evaluated parameters that affect the effectiveness and safety of PFA, such as the electric field intensity and the pulse field direction (24), and Reddy et al. have already demonstrated in human trials that PFA preferentially affects myocardial tissue with excellent durability and chronic safety (11). The ablation mode in that study was biphasic and symmetric. The difference between our experiment and previous PFA studies is the introduction of a new biphasic asymmetric pulse ablation mode, which narrows the width of the negative pulse and reduces the muscle contractions caused by the positive pulse (18). Therefore, before clinical trials, pigs were used to evaluate the safety and efficacy of the biphasic asymmetric pulse ablation mode; as a result, there was no need to inject muscle relaxants before the operation. In the original biphasic symmetric pulse mode (Figure 1F, a) (14), there was a certain amount of cancellation, which required a higher pulse intensity during ablation (25). The biphasic asymmetric pulse reduces muscle contraction compared with monophasic pulses and has the advantage of a lower cell ablation threshold compared with biphasic symmetric pulses. We also updated the catheter used in previous experiments (18), adding positioning electrodes so that the current state of the electrodes is displayed under X-ray, providing more reliable information to physicians. Furthermore, we reinforced the framework of the electrodes to make them more rigid and to increase the stability of the electrode structure.
In the present study, the ablation results were recorded at 1 and 4 weeks, and the lasting effectiveness and safety of the biphasic asymmetric PFA system in the treatment of atrial fibrillation were verified, supporting the safety of future clinical trials using the same system. In the pig experiment (Groups A and B), there was no ventricular arrhythmia at any time during the operation, and the behavioral activities, mental status, appetite, and feed intake of the pigs returned to normal after the operation. We examined the tissues near the ablation zone, including the vagus nerve, lung, trachea, and esophagus. Pathological analysis of the ablation zone at 1 and 4 weeks showed that all targeted sites developed complete transmural lesions while maintaining good safety. At 4 weeks after PFA, fibroblasts had replaced the original cardiomyocytes, and PFA had not destroyed the intercellular connections, which is conducive to maintenance of the tissue structure. There was no thromboembolism, intracardiac injury, or acute or chronic collateral tissue injury after ablation. Consistent with our previous in vitro experiments (18), the present results further show that the biphasic asymmetric pulse mode and the PFA system used here had good safety and produced effective lesions. In the dog group, the ablation zones immediately post-op showed low potential in a ring shape. The electrical signal level measured at 4 weeks after ablation showed that the potential remained low. Pacing did not capture the atrium, confirming the effectiveness of the ablation and the safety of the experimental animals. More details are provided in Supplementary Figure 2.
The number of pulses in the pulse sequence is an important parameter, and there is currently no clear consensus on the optimal number (26). Stewart et al. (27) set the number of pulses to groups of 60 (six pulses released per heartbeat), while Koruth et al. (23) proposed a pulse sequence with five pulses per group. We found that when the number of pulses was set to 20 (all pulses released after one R wave), a current amplitude peak much higher than the normal value appeared from about the 15th pulse onward. This phenomenon may be caused by an unstable heart rate. Alternatively, when the number of pulses is sufficiently large, a large amount of intracellular fluid may leak from the tissue cells, increasing the conductivity and sharply increasing the current amplitude. When we reduced the number of pulses, the current returned to its normal value; we believe this reflects the self-recovery of tissue cells over a short period of time. This observation led us to keep the number of pulses in each group at 10 (similar to the parameters used in previous studies). The length of the blank period between the positive and negative pulses is generally referred to as the pulse interval, and there is no standard formula for determining its length (16,28). In subsequent studies, we plan to study the effect of the pulse width and the pulse interval on ablation.
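A minimal sketch of the kind of safeguard this observation suggests is shown below: pulses are delivered in groups while the peak current is monitored, and a group is stopped early if the current rises well above baseline. The read_peak_current() hook, the drift model, and the abort multiplier are all hypothetical; only the roughly 7.8 A baseline and the onset of the anomaly around the 15th pulse come from this study.

```python
# Illustrative control-loop sketch (not the device firmware): stop a pulse
# group early if the measured peak current rises well above baseline.
import random

BASELINE_A = 7.8      # typical peak current reported in this study (A)
ABORT_FACTOR = 1.3    # assumed safety multiplier (hypothetical)

def read_peak_current(pulse_index):
    """Hypothetical stand-in: current drifts upward after ~15 consecutive pulses."""
    drift = 0.0 if pulse_index < 15 else 0.8 * (pulse_index - 14)
    return BASELINE_A + drift + random.uniform(-0.3, 0.3)

def deliver_group(n_pulses):
    for i in range(n_pulses):
        current = read_peak_current(i)  # would follow actual pulse delivery
        if current > ABORT_FACTOR * BASELINE_A:
            print(f"group stopped at pulse {i + 1}: {current:.1f} A over limit")
            return i + 1
    return n_pulses

random.seed(1)
print("pulses delivered with a 20-pulse group:", deliver_group(20))
print("pulses delivered with a 10-pulse group:", deliver_group(10))
```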
PFA is a type of AF catheter ablation, so drug treatment after PFA can follow the 2020 ESC Guidelines for the diagnosis and management of atrial fibrillation, developed in collaboration with the European Association for Cardio-Thoracic Surgery (EACTS) (29). In future clinical trials, we will investigate the interaction between drug therapy and PFA.
LIMITATIONS
In this experiment, the novel PFA catheter showed a high level of safety and durability. However, the only parameter varied in this experiment was the number of pulses, while the other parameters of the PFA system were kept constant. The complex intracardiac environment and the local dynamic electrical characteristics of tissues will affect the PFA zones. Considering the differences between human and animal hearts, the distance between electrodes should be adjustable, and the voltage amplitude and the number of pulses should be adjusted during the ablation process to achieve a high degree of durable pulmonary vein isolation.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author/s.
ETHICS STATEMENT
The animal study was reviewed and approved by Tianjin TEDA Cardiovascular Hospital-Animal Experiment Center.
Linking aberrant chromatin features in chronic lymphocytic leukemia to transcription factor networks
Abstract

In chronic lymphocytic leukemia (CLL), a diverse set of genetic mutations is embedded in a deregulated epigenetic landscape that drives cancerogenesis. To elucidate the role of aberrant chromatin features, we mapped DNA methylation, seven histone modifications, nucleosome positions, chromatin accessibility, binding of EBF1 and CTCF, as well as the transcriptome of B cells from CLL patients and healthy donors. A globally increased histone deacetylase activity was detected and half of the genome comprised transcriptionally downregulated partially DNA methylated domains demarcated by CTCF. CLL samples displayed a H3K4me3 redistribution and nucleosome gain at promoters as well as changes of enhancer activity and enhancer linkage to target genes. A DNA binding motif analysis identified transcription factors that gained or lost binding in CLL at sites with aberrant chromatin features. These findings were integrated into a gene regulatory enhancer containing network enriched for B-cell receptor signaling pathway components. Our study predicts novel molecular links to targets of CLL therapies and provides a valuable resource for further studies on the epigenetic contribution to the disease.
Thank you again for submitting your work to Molecular Systems Biology. We have now heard back from the three referees who agreed to evaluate your study. As you will see below, the reviewers appreciate the extensive amount of data generated in this work and mention that the study is a potentially relevant contribution to the field. They raise however a series of concerns, which we would ask you to address in a major revision of the manuscript. I think that there is no reason to repeat the points listed below, since they are rather clear. Overall the reviewers mention that additional analyses need to be included to better support the main conclusions and strengthen the study. They provide several constructive suggestions in this regard. Please let me know in case you would like to discuss further any of the comments of the reviewers.

Reviewer #1:

In the paper of Mallm et al., the epigenomic analysis of B-cells of 19 CLL patients and 9 healthy donors is described. The authors use a variety of different assays to analyze DNA methylation, chromatin states, nucleosome positioning, accessibility and the transcriptome to provide new insights into CLL pathogenesis. One of the main findings is a derived gene regulatory network (GRN) constructed from (public) RNA-seq data. The manuscript is rich in data that can be further explored by the community and most computational analysis is on par with the standards in the field. However, the current manuscript presents some weaknesses. First, the relation and the agreement between the RNA-seq derived network and the TF motif analysis is not clear. Are some of the TF motifs associated with alternative promoters or are CLL-specific enhancers also major

Reviewer #2:

Summary

This work represents an effort to comprehensively characterize differences between chronic lymphocytic leukemia cells and normal B cells with respect to genome-wide chromatin state, DNA methylation, gene expression, and genomic element regulation by DNA sequence-specific transcription factors. CLL-associated epigenetic changes identified by the authors include differences in large-scale repressive chromatin domains with respect to histone and DNA methylation, altered nucleosome positioning and histone modification distribution at active promoters, altered combinatorial histone modifications at repressed promoters, altered enhancer activation, and altered enhancer-promoter relationships. The authors used differential gene expression analysis between CLL and normal B cells, transcription factor binding motif analysis of genomic features highlighted in their epigenetic analysis, and a regulatory network model generated from published B cell transcriptional profiles to create a network model for mechanisms of epigenetic and transcriptional dysregulation in CLL. After identifying increased histone deacetylase activity in CLL cells versus normal B cells, they also used an HDAC inhibitor to study differences in the way enhancers and promoters are affected by HDAC perturbation in CLL and normal B cells, and relate these findings to their model.
General Remarks
In this work, Mallm, Iskar, Ishaque and colleagues present the results of an integrative genome-wide comparison of peripheral blood CLL cells to normal peripheral blood B cells with respect to a diverse set of features related to chromatin and transcriptional regulation. These include whole-genome assessment of DNA methylation, RNA-sequencing, chromatin accessibility (in bulk and at the single-cell level), diverse histone modifications, and nucleosome positioning. The number of subjects in each category was sufficient for valid statistical assessment of significant differences between CLL and normal B cells with regard to these features, and the analyses presented clearly demonstrate the high technical quality of these datasets, with few exceptions (the H3K9ac data appears to be of somewhat lesser quality than other datasets, for example). Together, this represents one of the most impressive and comprehensive epigenetic profiling efforts I have seen for any cancer type, and these datasets will certainly be an invaluable resource for the CLL field, and as a model for future investigations in other cancers. The authors are also to be applauded for their efforts in making not only raw and processed data, but also their custom analysis scripts available to the community.
With regards to the analysis and biological findings presented here, they are primarily of a descriptive and hypothesis-generating nature. Given the ambitiously broad scope of the profiling and analysis portions of the project, the value of their well-characterized datasets as a resource, and the challenges inherent in studying CLL for which few faithful in vitro or in vivo functional models exist, this does not overly detract from the value of the work. However, the authors should more clearly acknowledge this as a caveat in describing their findings. It would be best to avoid language that appears to claim an unproven functional consequence of chromatin mark associations in the abstract and discussion (e.g. "...loss of bivalent promoters indicated a reduced developmental capacity"). As another example, their network model predicts a functional association between TCF4 activation and BCL2 activation in CLL (described in the abstract, results, and discussion sections), but no functional experiments are performed to directly test this prediction. If the authors wish to highlight this hypothesized relationship in the abstract, they should make it clear that this is an example of the hypothesis-generating power of their data, rather than an experimentally demonstrated mechanism.
Another important caveat is warranted in presenting this data. It is well known that CLL proliferation, and activation of many transcriptional regulatory pathways critical for CLL pathogenesis (BCR signaling, MYC, Notch) occur primarily in tissues (lymph nodes & other secondary lymphoid organs, bone marrow, etc). See for example Herishanu et al 2014, cited by the authors. However, all samples studied in this work are (for practical reasons) from peripheral blood. The authors should discuss the inherent limitations of studying CLL peripheral blood cells with regard to understanding the biology that sustains these tumors.
Major points
The following points should be addressed in revisions in order for this paper to be acceptable for publication.
• Overall, this is a very impressive and comprehensive description of the epigenetic regulatory "landscape" of CLL as contrasted with normal mature B cells. However, in the absence of hypothesis-driven experiments, the authors seem to stretch their model and the literature excessively in some areas in order to construct mechanistic explanations for all of their major findings and purported relevance to CLL pathogenesis and therapy. The paper could be improved by deemphasizing these pat explanations, and instead suggesting broad categories of future hypothesis-driven investigation that could be enhanced by using these impressive datasets as a resource.
• Figure 1b requires far more explanation for the reader to understand. Four different heatmap legends are provided, but I only see corresponding data for the top two (which have very similar color schemes).
• Figure 3 / S3 -Broadening of H3K4me3 domains is a phenomenon that has been studied in a number of developmental and cancer contexts, and has been linked to altered transcriptional elongation, and enhancer-dependent regulation, for example. Here, the authors seem to link this phenomenon to alternate TSS / promoter usage, but it's not clear that these phenomena are related. The authors should not rely on chromatin state (HMM) calls alone to claim that alternate TSS's are being used, but should support this by demonstrating the presence of transcripts with alternate 5' ends, either through RNA-Seq analysis or more targeted experiments, to disambiguate true alternate TSS usage from broadening H3K4me3. For example, it is not clear in figure 3F whether the gene on the bottom is truly using the lower TSS, or if it merely has H3K4me3 extending into the area of that annotation. Definitions of "active promoter" used in each sub-panel should be explicitly stated, and should be based on demonstration of alternate RNA transcripts for at least some of these figures.
• With regards to claims of TF motif enrichment at broadened H3K4me3 promoters ( Fig S3C) and loss of promoter bivalency in CLL (Fig 3J), much more information needs to be provided about these analyses (merely showing the HOMER library PWM motif logos is not informative). Were these findings supported by both de novo and known motif analyses? What background regions were used? What was the statistical significance and frequency of these motif associations, and how much more significant were they than other enriched motifs (it is assumed that these were the MOST enriched motifs in these analyses, but that is not explicitly stated).
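For instance, one conventional way to quantify such enrichment against an explicit background, which could be reported alongside the motif logos, is a simple contingency test; the sketch below uses hypothetical counts and is not taken from the manuscript under review.

```python
# Minimal sketch of a motif-enrichment test against an explicit background
# set, of the kind the reviewer is asking for. All counts are hypothetical.
from scipy.stats import fisher_exact

target_with_motif, target_total = 320, 1200            # e.g. broadened-H3K4me3 promoters
background_with_motif, background_total = 900, 12000   # matched control promoters

table = [
    [target_with_motif, target_total - target_with_motif],
    [background_with_motif, background_total - background_with_motif],
]
odds_ratio, p = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, one-sided p = {p:.2e}")
```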
• The author's hypothesized association between MEF2 factors and MLL enzymes that generate H3K4me3 is poorly supported by the single citation provided, a review of MEF2 factors that in turn cites a single paper that linked MEF2C/D phosphorylation to recruitment of KMT2D, which is generally thought to be an enhancer-associated generator of H3K4me1 (not a promoter-associated generator of k4me3). Figure 3K uses the deprecated gene symbols for the MLL genes -this is ambiguous and should be avoided since "MLL2" has been used for both KMT2B and KMT2D (though generally the latter in humans). It is unclear why "MLL4" (presumably KMT2B) is not included in this figure. Given the very weak literature support for a direct connection between MEF2 factors and H3K4 trimethylation, this figure might better be omitted, and the corresponding claims significantly modified.
• Figure S4A is not mentioned in the text, and needs to be better explained with respect to which ENCODE TF ChIP-Seq datasets were analysed (B-lymphoblastoid lines only? Other cell types?) and how the TFBS clusters were generated.
• Figure 4B requires much better explanation of what is being depicted. Each row seems to represent the regions in a given state in CLL that had a different state in normal B cells, correct? And then whether they were enriched for that same state (or for p300) in ENCODE or FANTOM cell lines? It is surprising that so many of the different FANTOM and ENCODE lines showed enrichment for the enhancer state in CLL enhancers, given how cell type specific enhancers are. I would have expected significant enhancer state enrichment for ENCODE B cell lines (e.g. GM12878), but not others -it would be helpful to label the cell lines. It is unclear what "inactivation of enhancers in CLL occurred mostly via the bivalent state" means -does this reflect the fact that the author's ChroHMM "bivalent" state shows more H3K4me1 / H3K27me3 bivalency (a variant "enhancer bivalent state" -Fig1B), rather than the classic k4me3 / k27me3 usually described at promoters? In its current state, it is unclear what points are being made in this figure.
• 4G -What background was used for the motif analysis of DMR-associated enhancers? Are enriched motifs in these enhancers significantly different from motifs enriched in a control set of CLL enhancers that are not associated with DMR's or differential acetylation?
• 4H -How were "NFATC1 binding sites" identified? Are these ChIP-Seq-proven binding sites (e.g. from an ENCODE GM12878 dataset) or simply enhancers that contained the NFAT motif? This figure is barely mentioned in the text and needs to be better explained.
• 5E -To my eye, there doesn't seem to be any relationship between the scATAC-Seq correlation matrix and the overlayed Hi-C contact domains. Perhaps a different locus, different scale, or genome-wide statistical measure would make this point better? As it stands, the author's claim that ATAC-Seq peak co-regulation correlated with topological features is not supported by the data presented.
• S5E -The depiction of co-regulated scATAC-Seq regions on chromosome 1 is perplexing. Enhancer-promoter loops do not operate on the scale of 10's to 100's of megabases as shown by the arcs in this figure. While the text briefly points out that these correlations could be driven by coregulation in trans, the figure legend seems to describe them as enhancer-promoter pairs, which is highly unlikely. Needs clarification.
• The main text (page 10) also describes an enhancer-promoter pair analysis within windows of 200 kb, followed by identification of "re-wired" pairs in CLL versus normal cells, but no specific figures or tables are associated with this analysis (other than simple lists of features). CTCF site chromatin accessibility near these "re-wired pairs" is supposedly lost in CLL, but no analysis is presented as to whether this is a statistically significant association vs appropriate comparator genes. It is unclear whether the authors hypothesize that these CTCF pairs directly facilitate enhancer-promoter looping in the normal B cells, or rather represent topological domain boundaries (e.g. insulators) that are selectively lost in CLL and thus facilitate "re-wiring" across TAD boundaries. This is a complex topic, and should either be investigated in detail with appropriate statistical analysis and figures presented or perhaps omitted entirely and explored in a future paper.

Discussion:

• The authors mention several times (discussion and introduction) that BCR signaling pathway genes are not recurrently altered by mutations in CLL, but this is not true in an important sense. They should refer to the extensive literature on immunoglobulin gene VDJ stereotypy in CLL, which results in genetically encoded B cell receptors capable of autonomous signaling (e.g. PMID 22885698), or remove this claim.
• It is not at all clear how the papers cited support the authors' MEF2 / KMT2D / bivalent domain loss model (Ge et al, 2011;Kurt et al, 2017;Long et al, 2013) -Mef2 factors are not mentioned in any of these papers. This explanation needs to be removed or better supported.
• In the HDAC / enhancer section of the discussion the author's claim that MEF2 factors are connected to HDAC's "via BCOR" is also unusual (no reference is provided). MEF2 factors bind directly to Class II HDACs (HDAC4/5/7/9) via a well-characterized interface -crystal structures of this interaction are available. This interaction is mutually exclusive with Mef2 binding to p300 acetyltransferase or the CABIN1 corepressor and is likely controlled by post-translational modifications. The authors should incorporate a more conventional understanding of MEF2 factor function into their model.
Minor points
The following points suggest areas where improvements would enhance the paper's acceptability for publication.
• The introduction could benefit from some re-writing for clarity and coherence.
• Figure 1a -The authors provide sample tracks for one CLL and one normal sample at the TCF4 locus, which is helpful for gauging the quality of their many datasets, although inclusion of their peak calls for each track might provide more insight into their genome-wide analysis. The figure legend should contain an appropriate reference to the chromHMM color code & corresponding states (e.g. chromHMM states are color coded as in Figure 1B).

• Figure 1C -Here, the authors appear to be using their data to reproduce a previously published result regarding DNA methylation-defined CLL subtypes. It's not clear whether there is functional evidence for a "developmental block" as the authors claim, except perhaps as explored in that prior paper.
• Figure 2 / S2 -The characteristics described for the CLL PMDs (100's of kb to mb scale, enrichment for repressive chromatin marks, boundary enrichment for CTCF or active promoter marks) are reminiscent of topologically associated domains (TADs) as defined by Hi-C studies, which the authors discuss later in the paper. If the CLL PMD's in fact correspond to repressed TADs (or perhaps series of TADs within larger structures such as Hi-C "B" compartments) this would be an interesting point to discuss.
• 4B The analysis of bidirectional transcription is quite interesting, but it's unclear how this was achieved. The methods describe separation of small and large RNAs prior to ribodepletion and library construction from the latter. Was sequencing of short RNA's performed (which would presumably contain most eRNA's), or is the bidirectional signal coming from the long RNA pool? Please clarify in methods and results text.
• 4C -I would expect a null signal along the diagonal for the matrix of state transitions. How is it possible to transition to the same state?
• 4D/E -There seems to be a typo in the legend, as the "number of sites with the motif" is not found on the y axis. What is the significance of dot size and color intensity? Many of the motifs in the HOMER library are highly redundant (e.g. ETS factors, which make up most of the significant signals in the downregulated chart, or different ATF factors). It is unlikely that these very similar motifs accurately distinguish between these different members of the same TF family. Perhaps a more parsimonious analysis could be shown for unique motifs detected in de novo HOMER motif analysis of the union enhancers, which would highlight more distinct motifs rather than many variants of the same top motifs.
• S4D/E -The legend mentions depiction of TFs with enriched motifs in super enhancers, but most of the genes highlighted in the hockey plots are not TF's but rather selected loci that are discussed in the text as "particularly relevant" for various reasons. It's not entirely clear why the TCRA locus is of interest (T cell receptors are not expressed in CLL), and the BCL2 super-enhancer, and those of several other genes highlighted in the discussion, are not indicated in the hockey plots.
• 4I -It is mentioned in the text that 6.3% of CLL super-enhancers were "unique to this study", but no citations are provided as to the prior super-enhancer papers or databases from which this comparison is made.
• Figure 5/S5 -While the "footprinting" analysis shown in 5B is intriguing, the fine-scale pattern may be largely driven by the sequence preferences of the TN5 transposase, which should be mentioned as a caveat. The overall differential signal levels are also not informative, since the sites were selected on that basis. The motif enrichment plots in S5C are much more compelling evidence for a specific role for these TF's.
• 5D -The discussion of the TF motif heterogeneity analysis is difficult to follow -the list of highly heterogeneous motifs is very enriched for Fos/Jun sites, which are not mentioned in the text, while SP1 and MYC (not in the list of heterogeneous motifs) are discussed in the text as possible interactors of NFYA/B. The text should probably follow more closely to the data, or be better explained.
• Figures 6 and S6 provide an integrative framework that tie together the differential epigenetic and transcriptional features of CLL versus normal B cells into a model of TF-driven gene dysregulation in CLL. It is interesting that there is almost no overlap between TF's frequently mutated in CLL and the TF's identified as driving differential chromatin and transcriptional regulation in this study. For example, gain-of-function mutations in the TF NOTCH1 are the most common recurrent gene mutations in CLL, and Notch dysregulation has also been demonstrated in most NOTCH1 wild-type CLL samples, but no signature of Notch/RBPJ differential activity was described in this work. Does this reflect a limitation of the approaches used, or does the network model provide a possible explanation?
• Figure 7D -This figure shows significantly increased H3K27ac at promoters enriched for SP1, E2F, and KLF family members. The corresponding text describes ETS motif enrichment, which is not shown in the figure.

Discussion.
• CLL-specific large PMD's associated with repressive histone modifications and gene silencing are described, but confusingly are then associated with genes (IRF4, HIST1H1E, NOTCH1, IGLL5) that are mutated but also expressed in CLL. Are these two distinct classes of PMD's (associated with repression or alternately with expressed, recurrently mutated genes)? Needs clarification.
Supplementary data
• Some of the supplementary data Excel sheets suffer from conversion of certain gene names to dates (e.g. SEPT11 has been converted to the date "Sep-11").

Reviewer #3:

General comments. The identification of partially methylated domains in CLL is a very interesting observation, but the manuscript is crowded with other minor and sometimes dubious observations. There is a lot of valuable data in this study that needs publishing once it is better analysed with more rigorous curation of the data, but it cannot be published in this form. The methods used to identify high confidence regulatory elements are well below the required standard and need repeating with more stringent criteria for defining them. The enhancer identification data set is flooded with background made up of huge regions of modified histones, probably representing entire domains of active chromatin. To be meaningful, the study should focus strictly on narrowly defined open chromatin regions (ATAC) linked to flanking histone regions. It should not define strings of modified nucleosomes as enhancers as this is not where the transcription factors are bound. Instead of being the starting point for identifying enhancers, the ATAC data is rarely used effectively, and this greatly undermines the whole analysis. Rather than coming first, ATAC appears in fig 5 in the context of footprinting, not to define enhancers in the first place. By focussing on histone modifications to define enhancers, the authors are actually identifying nucleosomal regions flanking open chromatin regions and not the enhancers themselves. The authors could expand the ATAC motif analysis to get the most useful information from this study, and leave out more dubious stuff. Some of the most valuable information is effectively lost in fig S5C. More convincing footprints at these motifs would be good. The study equates modified chromatin domains with enhancers, and then counts up TF motifs in these domains, which is something you can only do with enhancers. They need to separate these entities. The HDAC experiments are probably providing valuable information, but the gene regulation networks pointing to HDACs may not be real if the analyses were flawed. The authors may be trying too hard to make the analyses increasingly more intricate and more novel, but this runs the risk of losing sight of the basic biology, and moving away from our current definitions of what an enhancer is. An enhancer is more than a chromatin state as defined here. Overall there is a sense that the authors are using complex tools to extract more information than the data and the methods actually allow with any confidence. The end result is that the several good points of the study are lost in a morass of unreliable analyses and predictions based on these analyses.
Specific comments:
(1) The manuscript contains too few examples of primary data showing specific gene loci, with individual tracks for the individual patients. For 2 or 3 key genes it would be good to have e.g. the ATAC tracks for all samples where ATAC was performed. The one example shown in Fig 1A does not allow such a comparison, and as a 1 Mb window the resolution is too low to fully interpret the patterns. Finer resolution is needed to see what is being measured. Given the problems illustrated below, it would be helpful to have regions called as enhancers marked by bars under the tracks, as the EXCEL file suggests that many are not true peaks but broad regions of histone modification, and not the open chromatin regions needed to define specific enhancers where factors are likely to be bound.
(2) The TCF4 locus in Fig 1A has sharp peaks of H3K4me3 in both the transcribed and non-transcribed state. This is a little unusual, and might raise questions about the specificity of the antibody. This modification is normally associated with active promoters. Are the authors suggesting that these peaks are instead poised or bound by Polycomb?
(3) The text refers to an enhancer-like region identified at the TCF4 locus in Fig 1A based on peaks of H3K4me1, but the profiles actually show several broad stretches of H3K4me1 and H3K27ac, and not sharp peaks. It is hard to see how this data can be used to identify a discrete enhancer. It would have helped if this putative element was labelled in Fig 1A.

(4) DNA regions should not be defined as "enhancers" based entirely on histone ChIP data. These "enhancer-like" regions can be defined as active chromatin regions, but should not be referred to as enhancers without more data, and not if there is no ATAC peak at the same site. E.g. one study found that only 26% of predicted enhancers had any actual enhancer activity (Kwasnieski JC, Fiore C, Chaudhari HG, Cohen BA. High-throughput functional testing of ENCODE segmentation predictions. Genome Res. (2014) 24:1595-602.). The text also refers to "genic enhancers" but it is unclear what distinguishes these as a group from poised and active enhancers. Does genic mean regions inside transcribed genes that also have H3K36me3? If so, it is a bit artificial to separate intergenic and exonic enhancers into 2 groups as they do the same things, and exon enhancers often do not regulate the gene where they reside, and may not then have H3K36me3.
(5) The data in table 8 lists 141,000 enhancers present in B cells and 238,000 enhancers in the combined data set. This is vastly more than you would reasonably expect to find in one cell type if meaningful criteria were used to delimit the data. A stringent analysis might identify 30,000 high confidence peaks that have the potential to be enhancers. It has become a problem in the field that some studies greatly over-estimate the number of total enhancers in the genome (some claim more than 500,000) or in any one cell type. This arises by including too much low quality data or unsuitable definitions. It appears that the Roadmap consortium data cited here to validate the study also over-estimates numbers of enhancers, claiming 10% of the genome in B cells, so this is not helpful. In the present study the huge number of "peaks" indicates that insufficient culling of insignificant peaks has been performed, and that the wrong criteria were used to define enhancers. The table also suggests that normal B cells have over 1000 enhancers greater than 10 kb in length. This means that the authors are defining entire active chromatin domains, not discrete enhancers. At the very least the authors should ensure that a distinct open chromatin region is present at the site where they define an enhancer. These should be discrete ATAC peaks, typically ~200-500 bp across, defined on the basis of a minimum meaningful threshold. My analysis of a random selection of the 140,000 enhancers identified in B cells raised many concerns. A good example highlighting the problems is a pair of enhancers defined by the authors at the NFATC2IP locus in hg19 at chr16:28,961,200-28,962,000 and chr16:28,963,000-28,965,000. If these are viewed on the ENCODE GC B cell and CD20 B cell DNaseI tracks it is clear that these represent modified histones on either side of a sharp DNaseI peak between these two coordinates.
In this case the defined enhancers excluded the open chromatin region at the promoter in between. Next to this promoter is a sharp DNaseI peak that is called as an enhancer at chr16:28,956,400-28,957,800. In this case the called region is in open chromatin, but this is a discrete conserved CTCF binding site and is most likely an insulator, not an enhancer. At the CD20 locus (MS4A1) the entire coding region and promoter is defined as a series of enhancers spanning 15 kb at chr11:60,220,600-60,226,000, chr11:60,227,000-60,231,600, chr11:60,232,000-60,232,800 and chr11:60,234,600-60,235,400. At this locus a very probable enhancer exists as a very sharp DNaseI peak downstream, but it is called as a 3 kb enhancer at chr11:60,398,800-60,402,200. While it is not useful to focus on just a few specific examples, this does leave the reviewer with the firm conviction that the entire enhancer data set needs redefining with more stringent criteria to include discrete regions of less than 1 kb, that are validated as open chromatin, with minimum peak heights defined on an empirical basis, so that they have a good chance of representing true regulatory elements and not just broad zones of active chromatin. In the current data set, 61,000 of the defined enhancers are greater than 1000 bp, whereas the typical enhancer should be ~300 bp.
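To make this criterion concrete, the following minimal sketch (illustrative only, with a hypothetical peak position and an assumed 1 kb width cut-off; not the authors' pipeline) shows how enhancer calls could be restricted to discrete elements that actually contain an open chromatin peak:

```python
# Illustrative sketch: keep only enhancer calls narrower than 1 kb that
# contain at least one discrete ATAC/DNaseI peak. Intervals are simple
# (chrom, start, end) tuples.
from collections import defaultdict

def index_by_chrom(intervals):
    by_chrom = defaultdict(list)
    for chrom, start, end in intervals:
        by_chrom[chrom].append((start, end))
    return by_chrom

def filter_enhancers(enhancers, atac_peaks, max_width=1000):
    """Keep enhancer calls < max_width bp that overlap >= 1 open chromatin peak."""
    peaks = index_by_chrom(atac_peaks)
    kept = []
    for chrom, start, end in enhancers:
        if end - start >= max_width:
            continue  # broad active-chromatin domain, not a discrete element
        if any(ps < end and pe > start for ps, pe in peaks.get(chrom, [])):
            kept.append((chrom, start, end))
    return kept

# The two NFATC2IP-locus calls cited above, with a hypothetical position for
# the open chromatin region that lies between them:
enhancers = [("chr16", 28961200, 28962000), ("chr16", 28963000, 28965000)]
atac_peaks = [("chr16", 28962300, 28962700)]
print(filter_enhancers(enhancers, atac_peaks))  # -> []: both calls flank the peak
```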
(6) The enhancer data file should include at least some numerical data on peak values, so that it would at least be possible for the reader to interpret the significance of each peak, without having to re-process and upload the data on a genome browser. In this case, a link to a genome browser session would have been a good idea so that the reviewers and the end reader can see the data for themselves.

(7) The Hidden Markov modelling of chromatin states may not be valid if the identification of chromatin features is over-stated to start with.
(8) In contrast to the claim in the text relating to fig 1D, it is not obvious that there is a significant increase in enhancers in CLL.

(9) Page 5 states that "we identified 1378 regulators (p < 0.05) with differential activity", which I think is far too many. This p value may mean that 5% of the 23,000 genes in the genome will be identified by chance, which is equivalent to 1150 genes, which could account for most of the regulators identified. I suggest that a lower p value cut-off would give more meaningful data. It would be better if they focussed on e.g. the top 100 differential regulators for this to be meaningful.

(10) Fig 2D would benefit from adding an analysis centred on open chromatin regions near the histone peaks at the boundaries of PMDs. This may show that an active promoter or enhancer also blocks spreading of repressive marks, maybe more so than CTCF. It is not acceptable that ATAC is left out of the PMD analysis.
(11) In fig 2E the authors should indicate whether the increased mutations are occurring in PMDs that gained (or lost) meCG compared to normal B cells. If methylation is gained at these sites, it could be that meCG is mutating to TG or CA during replication. How many of these mutations involve these bases?
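To illustrate the suggested check (a sketch with toy trinucleotide contexts; the real input would be the PMD mutation calls with their reference contexts), one could count mutations consistent with deamination of methylated CpG:

```python
# Minimal sketch (hypothetical data structures): count how many PMD mutations
# are consistent with deamination of methylated CpG, i.e. CG->TG read on the
# plus strand or CG->CA when the event occurred on the minus strand.
def is_meCG_deamination(ref_context, alt_base):
    """ref_context: reference trinucleotide centred on the mutated base."""
    # C->T with a G immediately 3' (CG -> TG)
    if ref_context[1] == "C" and ref_context[2] == "G" and alt_base == "T":
        return True
    # G->A with a C immediately 5' (CG -> CA, the reverse-strand event)
    if ref_context[1] == "G" and ref_context[0] == "C" and alt_base == "A":
        return True
    return False

muts = [("ACG", "T"), ("TCG", "T"), ("CGA", "A"), ("ACT", "G")]  # toy examples
n_deam = sum(is_meCG_deamination(ctx, alt) for ctx, alt in muts)
print(f"{n_deam}/{len(muts)} mutations match the meCG signature")  # 3/4
```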
(12) The extension of H3K4me3 domains in fig 3A is rather minor and may depend on how this data is normalised. This analysis does not add much and is difficult to interpret. The claims made in 3C and 3D about gain of nucleosomes would need a high resolution map of average nucleosome spacing for e.g. the first 5 nucleosomes from the TSS, or centred on the first nucleosome. Fig 3D is only one example and is too low resolution to see anything meaningful.

(13) The link between the gain of one modified promoter nucleosome and the gain of alternate promoters is greatly over-stated, given that alternate promoters are many kb away in fig 3F.

(14) An enhancer should be an entity at an open chromatin region, not a vast expanse of modified chromatin. For example, a 10 kb stretch of transcribed modified nucleosomes should not be called an enhancer, but this is what some loci in the attached enhancer table look like. This figure needs to state the number of discrete open chromatin regions with the right flanking modification, not Mb of chromatin of highly variable lengths. The whole discussion on page 8 becomes invalid once the reader realises that discrete rigorously defined enhancers were not identified in the first place, so I will not comment on it at length. You cannot do a motif analysis of chromatin regions 1 to 10 kb long. There are many of these, and some are much longer. Motifs concentrate within 100 to 150 bp of an ATAC peak summit, a much smaller region than most enhancers defined here. The background must be huge. I assume that some of the super-enhancers will also include the tracts of ~50 kb of modified chromatin that are in some places defined as single enhancers. This is not a great definition.
(15) Fig S5A cannot be interpreted without an inbuilt legend to the graphs. The ATAC data should show average profiles for CLL-specific ATAC peaks in CLL and in normal B cells. Fold change is not enough. We need an indication of peak height and width. The data here is over-processed to the point where you no longer know what it looks like. Where is some primary data? It needs stating whether the X axis in S5A is a natural or a log2 scale. If they are using a 1.13 threshold, this has to be a log scale = ~2.2-fold actual difference.
(16) Fig 5B does not necessarily show TCF occupancy; it more likely shows sequence-specific cleavage around a central TCA motif. This pattern is characteristic of the sequence bias of nucleases. Footprints normally look very different and are wider, with a deeper drop at the footprint. The EBF data is less clear but this may also be influenced by sequence specificity, which could explain the pattern. If we do not know how rigorous the calling is, it is hard to have confidence in 5C and 5D. A simple table of actual motifs identified, and the % occurrence, would help to make fig S5C more meaningful.
(17) I find it hard to know whether the gene regulation pairs defined by co-occurrence of sequence reads in scATAC are reliable or not. This is an interesting approach, but it is hard to see how they can reliably get the pairing right. In a uniform population of cells, expressing the same genes, there is no reason why they would detect an active promoter/enhancer pair in one cell and not another. It seems there is a lot of room for error here. These pairings should be mostly within 100 kb, rarely more than 1 Mb apart, but the pairings shown in S5E are typically 100 Mb apart. This seems to confirm that the method is not very reliable.
(18) On page 10, the authors seem to be using differential footprinting to assess CTCF occupancy. Differential occupancy analysis here would actually be much more reliable if they simply mapped ATAC sites at known CTCF sites or motifs. An insulator has one or more sharp ATAC peaks with a single CTCF motif in it. Footprinting is not needed here and is more likely to generate false differentials, because footprinting has more limitations than ATAC peak calling.
(19) The motif analysis in fig 7E is not valid, because the enhancers were not correctly identified. Stretches of modified nucleosomes in coding regions should not have more motifs than random sequence. It is possible that the analysis ends up biased towards simple motifs that exist everywhere. To interpret this, a motif table is needed with the actual motifs and with the % found, and the predicted % background values shown. P values become unreliable when dealing with what seems to be 10% of the genome because of the vast number of motifs found.

(20) On page 4 and on page 11 the authors seem to want the region downstream of TCF4 to be both an enhancer and the co-regulated LINC01929 gene. This may actually be an enhancer-derived transcript. Maybe this should be clarified, and the predicted enhancer better defined.
(21) For the above reasons, the 231 Mb of novel genic enhancers (8% of the genome) referred to on page 13 are more likely to be just active gene coding regions and not enhancers. The authors will need to redo their analyses of histone modifications and ATAC peaks to separate active genes (with e.g. H3K27Ac) away from promoters and enhancers defined as open chromatin regions.

(b) The link to MACS seems to be the wrong address.
(c) Hyphens are missing in the protein names for IL-4 and NF-κB, kappa should be a Greek letter, and Sp1 and Sp2 should have the p in lower case.
(d) The methods text and the data files should include information as to which build of the genome is being used to define chromosomal coordinates. This seems to be hg19.

Response to reviewer comments for manuscript MSB-18-8339

General comments (GCs) to the revised manuscript

We are highly grateful to all reviewers for the significant work that they have put into the critical evaluation of our study and for the detailed, specific and constructive suggestions and comments to improve it. We have thoroughly revised our work and have addressed the issues as described in the point-by-point response to the specific reviewer comments below (highlighted in blue and renumbered), and we feel that these revisions have significantly strengthened our manuscript. In addition, we have summarized how we have addressed more general issues that were raised on more than one occasion by the reviewers.
GC 1. Workflow and integration of different readouts
To better explain how the different readouts were integrated we have now included an additional workflow scheme in Appendix Fig. S1B. It gives an overview of how data, methods and results are connected. In addition, we have clarified three issues throughout the manuscript. (i) The core set of TFs associated with deregulated chromatin features (Supplementary Table S2) was always based on TF binding motif analysis of differential ATAC-seq signal, which is therefore the most fundamental data layer. Subsequently, the chromatin context was used to annotate the corresponding locus. (ii) The scATAC-seq analysis yields promoters and enhancers that showed high correlations for being simultaneously open in the same cell. The resulting connections, expressed as a set of pairwise correlation coefficients, yield an enhancer-promoter network. This network has now been integrated into the B cell gene regulatory network (GRN) to create what we have termed a "gene regulatory enhancer containing network" (GREN, Supplementary Data). (iii) From the complete GREN, a "CLL specific GREN" was extracted. It includes the connected network part that contains TFs from our CLL TF list, their target genes as well as linked chromatin modifiers that affect the chromatin features aberrant in CLL. An additional requirement for this reduced network was that all included factors were deregulated in their activity/expression between CLL and non-malignant B cells.
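As an informal illustration of steps (ii) and (iii) — with made-up node names and an assumed correlation cut-off, not the actual parameters used — the merge can be thought of as follows:

```python
# Conceptual sketch: merge scATAC-derived enhancer-promoter correlations into
# the B cell GRN to obtain a "gene regulatory enhancer containing network"
# (GREN), then extract the connected part around a CLL TF of interest.
import networkx as nx

grn = nx.DiGraph()                         # TF -> target edges from ARACNE
grn.add_edge("TCF4", "BCL2", kind="regulates")

ep_pairs = [("enh_chr18_1", "BCL2", 0.8)]  # (enhancer, gene, correlation)

gren = grn.copy()
for enh, gene, r in ep_pairs:
    if r > 0.5:                            # assumed correlation cut-off
        gren.add_edge(enh, gene, kind="enhancer-promoter", weight=r)

# Step (iii): connected component containing the TF of interest
cll_sub = gren.subgraph(nx.node_connected_component(gren.to_undirected(), "TCF4"))
print(sorted(cll_sub.nodes()))             # ['BCL2', 'TCF4', 'enh_chr18_1']
```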
GC 2. Additional experimental datasets for validation of model predictions.
We have clarified in the abstract and in the manuscript that we generate a number of predictions and hypotheses that await further experimental studies. Furthermore, in several instances we have improved the "quality control" of our analysis and validations with existing data. For example, the scATAC-seq based analysis of promoter-enhancer correlations now includes additional controls in Appendix Fig. S5, and the results are confirmed with interactions listed in the 4D nucleome database (Teng et al, 2015). To further validate our general approach and to demonstrate the predictive power of our model and analysis, we have now included ChIP-seq experiments for the CTCF and EBF1 transcription factors that were conducted for four samples from healthy donors and eight CLL patients. Motifs of CTCF and EBF1 were among the top hits for TF binding motifs that were lost at CLL enhancers (Fig. 5C). We now confirm by ChIP-seq that these factors are indeed lost at enhancers, as predicted from the ATAC-seq based binding motif analysis, and apply the additional datasets in two exemplary analyses for the integrative assessment of CLL specific deregulation. In Fig. 5G we dissect the role of CTCF in promoter-enhancer wiring at the NFkB2 locus, and in Fig. 6D and 6E we exploit the EBF1 ChIP-seq data to validate interactions and changes in enhancer activity that are predicted by our CLL specific GREN. In addition, we would like to note that a study published during the revision of our manuscript comes to similar conclusions about deregulated chromatin features and TFs in CLL, as mentioned now in the revised text (Beekman et al, 2018).
GC 3. Organization of results
Our manuscript is associated with a large set of data. We understand that the high information density and the way we presented it in the initial manuscript made the work difficult to follow. In the revised manuscript we have now, based on the assessment of the reviewers, expanded or reduced the relevant parts (see below). Furthermore, we have reordered the presentation of results into main figures, expanded view figures and appended supplementary figures to follow the workflow in Appendix Fig. S1B and to better explain how the different parts of our results provide complementary information. Specifically, we have made the following revisions: (i) In Fig. 3 we now focus on the localized extension of H3K4me3 and its loss at bivalent promoters in CLL. (ii) The analysis of enhancers has been separated into two figures, Fig. 4 and 5. A more general part in Fig. 4 compares the different enhancer definitions and integrates the contribution of the increased HDAC activity in CLL from the previous Fig. 7. The ATAC based analysis of TF binding motifs gained or lost in enhancers is now presented as Fig. 5. (iii) Fig. 6 is expanded and focuses on the integration of the enhancer-promoter correlations from the scATAC-seq into the GREN and on testing the resulting predictions by ChIP-seq. (iv) The revised Fig. 6 integrates the main findings of our study. The associated "hand-made" schemes of the deregulated TF network in CLL and its associated links to chromatin modifiers (previous Fig. 6C and Fig. S6) have been replaced by the network scheme in Appendix Fig. S7. This scheme is directly derived in a clearly defined manner from the CLL specific GREN, and the corresponding full network is provided as supplementary Cytoscape and xls files.
GC 4. Access to data and analysis software
We provide original data and analysis software via different sources:

Reviewer #1:

1. In the paper of Mallm et al., the epigenomic analysis of B cells of 19 CLL patients and 9 healthy donors is described. The authors use a variety of different assays to analyze DNA methylation, chromatin states, nucleosome positioning, accessibility and the transcriptome to provide new insights into CLL pathogenesis. One of the main findings is a derived gene regulatory network (GRN) constructed from (public) RNA-seq data. The manuscript is rich in data that can be further explored by the community and most of the computational analysis is on par with the standards in the field.
We thank the reviewer for the overall positive evaluation of our work.
2.
However, the current manuscript presents some weaknesses. First, the relation and the agreement between the RNA-seq derived network and the TF motif analysis are not clear. Are some of the TF motifs associated with alternative promoters, or are CLL-specific enhancers also major regulators in the GRN?
We have addressed the above issues and clarified in the manuscript how we derived the set of TFs listed in Appendix Table S2 that were associated with deregulated CLL chromatin features and other results (see GC 1 and point 4 below). The enhancers in CLL and non-malignant B cells have now been explicitly integrated in the CLL specific part of the GRN that we identified in our analysis, resulting in a gene regulatory enhancer containing network (GREN). While an interesting finding, the alternative promoter usage is more difficult to associate with a specific TF motif due to the relatively low number of loci. Accordingly, these motifs were not included in the selection of our TF set.
3.
Second, many of the key findings are based on computational analysis for which no validation (e.g. ChIP-seq or 4C-seq) was provided. The manuscript needs to be strengthened by validation of the findings.
We agree with the reviewer that our findings are based on our integrative analysis. We have now performed additional experiments (e.g., ChIP-seq of CTCF and EBF1) and extended the integrative analysis to validate our predictions, as described below and in GC 2.
Major comments

4.
The GRN is based on information theoretic analysis (ARACNE-AP) of published RNA-seq data and supplemented with current knowledge of known regulators in the field. How well does the derived model predict these known regulators? E.g. a ROC curve would provide a measure of the "quality" of this network. The abstract states that 'based on DNA-binding motifs, TFs were integrated into the GRN.' How was this done? How do the TF motifs relate to the VIPER-derived activity scores? It seems that the authors can use differential motif analysis to constrain their GRN to direct targets that are differential between healthy and CLL donors. This is however not clear from the methods section, and some figures (e.g. Fig 1E) appear to be based solely on the RNA-seq data.
We have now clarified the workflow (see GC 1 and Appendix Figure S1B). The TFs that we identified from the combination of ATAC-seq based Homer analysis, aberrant chromatin context and deregulated gene expression were used to select a connected part of the network that contained these TFs (shown in part in Fig 6D). We further used a ROC curve analysis to see how well our TF set derived from chromatin features (Appendix Fig S2) rationalizes deregulation on the transcriptome level in CLL. For this we ranked all TFs based on the significance of their activity changes between CLL and the B cell samples and then checked at which positions the TFs from the independent motif analysis appear. This leads to the ROC curve shown below (Figure R1) and an AUC of 0.73. Thus, our TF set, which has been selected based on a binding motif analysis at promoters and enhancers with aberrant chromatin features in CLL, makes a significant contribution to transcriptional deregulation.

Figure R1. ROC curve for the contribution of TFs linked to aberrant chromatin features in CLL to changes in activity. Regulators were ranked according to the significance of their activity changes computed with VIPER between CLL and the B cell samples. It was then checked at which positions the TFs from the motif analysis can be found in order to compute the ROC.
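For illustration, this ROC computation can be sketched as follows (toy data; the real analysis used the full VIPER ranking and our motif-derived TF set):

```python
# Sketch of the ROC analysis described above: TFs are ranked by the
# significance of their VIPER activity change, and the TFs from the
# independent motif analysis are treated as the positive set.
from sklearn.metrics import roc_auc_score

ranked_tfs = ["TCF4", "EBF1", "PAX5", "FOXA1", "GATA1", "SOX2"]  # most to least significant
motif_tfs = {"TCF4", "EBF1", "FOXA1"}                            # positives from motif analysis

# Higher score = more significant activity change
scores = [len(ranked_tfs) - i for i in range(len(ranked_tfs))]
labels = [1 if tf in motif_tfs else 0 for tf in ranked_tfs]
print(f"AUC = {roc_auc_score(labels, scores):.2f}")
```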
The quality of the GRN has been evaluated in two ways. First, we compared our B cell network with three publicly available ARACNE networks for prostate cancer, glioblastoma (GBM) and acute myeloid leukemia (AML) available in the R package "aracne.networks" and calculated the activities of all regulators (transcription factors, transcriptional co-factors, signaling pathway related genes etc.) with our CLL RNA-seq data. All regulators with a significant activity change (p-value < 0.05) between the CLL and the non-malignant B cell samples were then used for a pathway analysis. This analysis yielded "B cell receptor signaling pathway" as the top hit for the B cell network, as opposed to the other entities (prostate cancer, pos. 65; GBM, pos. 27; AML, pos. 6). This suggests that the network we have computed captures more specific B cell target genes than the three other networks. In addition, we calculated transcription factor activities with the MIPRIP framework we have developed. MIPRIP is an R package to predict the most important transcription factors for the regulation of a gene of interest by using all the regulators binding to a gene's promoter (Poos et al, 2018; Poos et al, 2016) (MIPRIP webpage: https://www.leibniz-hki.de/en/miprip.html). It uses a generic human network derived from different cell types and thus provides a global view of the transcriptional and regulatory circuitry. However, the MIPRIP method is restricted to transcription factors as regulators, while our ARACNE based B cell network also includes regulation of target genes by chromatin modifiers and signaling proteins and is thus preferable for our current analysis. We compared the computed transcription factor activities from MIPRIP with those obtained with VIPER/ARACNE and with the conventional differential gene expression analysis. There was a high overlap in the significantly up- or downregulated TF activities computed with the two different methods. Accordingly, we conclude that the ARACNE B cell network used in our study is well-suited to analyze deregulation of transcription factors and other regulators and their linked target genes.
5.
The authors state that loss of DNA methylation and the accompanying loss of RNA expression could be explained by accumulation of H3K9me3 and H3K27me3. This is not apparent from Fig 2D and Fig S2G. In Fig S2G, the accumulation of H3K27me3 seems much higher in the non-PMDs. Moreover, the PRC/repressed states do not cover 50% of the genome, unlike the PMDs. Can the authors show a negative correlation between DNA methylation and occupancy of these marks?
We agree with the reviewer that the enrichment of H3K27me3/H3K9me3 in PMDs is not apparent from the chromatin states and are grateful for the additional analysis suggested. We calculated the average DNA methylation and H3K27me3 signal for each genomic region, with the genome partitioned by PMDs. Appendix Fig S2E shows the negative correlation (r = -0.3) of DNA methylation with the H3K27me3 signal. We have rephrased the results section to describe the enrichment of repressive histone marks in PMDs.
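The underlying computation is straightforward; a minimal sketch with toy per-region averages in place of the real WGBS and ChIP-seq values:

```python
# Minimal sketch: mean CpG methylation and mean H3K27me3 ChIP signal are
# computed per genomic region and then correlated (toy arrays below).
import numpy as np
from scipy.stats import pearsonr

mean_meth = np.array([0.85, 0.40, 0.30, 0.90, 0.55])  # per-region CpG methylation
mean_k27 = np.array([0.10, 0.70, 0.90, 0.05, 0.50])   # per-region H3K27me3 signal

r, p = pearsonr(mean_meth, mean_k27)
print(f"r = {r:.2f}, p = {p:.3f}")  # negative r, as in Appendix Fig S2E
```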
6.
Also, H3K36me3 is generally associated with gene transcription. In Fig 1A for CLL, TCF4 seems highly expressed according to the RNA-seq results, but H3K36me3 is low, while it is high in non-malignant cells that have no signal for TCF4 in the RNA-seq track.
We thank the reviewer for pointing this out. TCF4 is actually expressed in both CLL and non-malignant cells, but the expression level in CLL is considerably higher. As the same track heights were used in the figure, the expression of TCF4 was hardly visible for non-malignant cells. To visualize that TCF4 is indeed expressed we now show the RNA-seq tracks at different scales, which is also mentioned in the figure legend.
7.
The authors identify novel TFs related to alternative promoter usage (FOXA1, LEF1, POUF3 or REPIN1), differential acetylation (MEF2a/d, Klf4) and CLL-specific enhancers (e.g. NFkB, TCF4, ATF), but no validation was provided. ChIP-seq in healthy vs CLL for one or two of these TFs would significantly advance the current manuscript. Similarly, the promoter-enhancer rewiring inferred from single cell co-accessibility is interesting. These claims would be substantially more valuable when validated in a differential setting for some loci of interest using a chromatin conformation assay like 4C-seq. Along the same lines: on page 10: "distinct patterns of spatially coregulated activity hubs ... affect central pathways in the CLL pathogenesis". If the differential occupancy of CTCF and promoter-enhancer rewiring is so essential to CLL pathogenesis, computational derivation from ATAC-seq is insufficient. Again, such claims would be substantially more valuable if CTCF occupancy was measured using ChIP-seq and chromatin interactions for some loci were analyzed in a differential setting using 4C-seq.
This point is well taken and we have performed ChIP-seq for EBF1 and CTCF on four samples from healthy donors and eight CLL patients. We could confirm the predictions from the ATAC-seq based TF motif analysis for the two factors and show that CTCF binding to enhancers is correlated with changes in expression of the nearest gene. Via the additional analysis of both the CTCF ATAC-seq and ChIP-seq data, we found that 90% of enhancers that change in promoter-enhancer correlation had CTCF stably bound to them (Fig. 6G and associated text). We agree with the reviewer that we do not show physical interactions, but nevertheless we are convinced that we indeed detect co-regulated open chromatin regions. The loss of this co-regulation (either by loss of physical contact or loss of a transcription factor targeting multiple sites) occurs at genes relevant to the CLL pathogenesis. This has been clarified in the main text.
8. In Fig 7 the authors essentially show that HDACi's are doing what they are expected to do (reduce HDAC activity and increase H3 acetylation) at the molecular level. What happens at the cell biological level to these treated cells? Are they differentiating (as would be expected from the loss of the CLL heterochromatinized state claimed by the authors) or do these cells have altered viability? Is this specific for the CLLs, or are the non-malignant cells also responding?
We did not check for a differentiation phenotype as the differentiation process takes substantially longer (24 hrs) than the induction of apoptosis (6 hrs) in primary CLL cells. With respect to apoptosis, however, CLL cells are much more sensitive to HDACi than non-malignant B cells. We have included exemplary data in Appendix Figure S4D & E.
Minor comments

9. Please clarify Fig 1B. The legend refers to log change and -log10 p-values, but how are they represented in the figure?
Thank you for pointing this out. We have now simplified the legend to this figure.

10. How many of the BCR signaling components are occupied by FOXA1, LEF1, POUF3 or REPIN1? I.e. what is the consensus between the functional network analysis and the motif analysis?
The analysis of FOXA1, TCF4/LEF1, POUF3 and REPIN1 was simply based on the correlation between the expression of these TFs and the expression of alternative transcripts. Following the excellent suggestion of the reviewers, we have constructed a novel gene regulatory enhancer containing network (GREN, see general comment GC 1 above, exemplary part shown in Figure 6D). Using the core TFs identified in the GREN and all their target genes, the KEGG pathway "B-cell receptor signaling" was significantly enriched. The consensus between our motif analyses and the new GREN is described in Figure 6.

11.

Fig 1C: Can the authors include the 9 normal B cells for comparison?

Thank you for this suggestion. The normal B cells are now included in the analysis depicted in EV Figure 1A.
12. Fig 3B: Do we see H3K4me3 broadening or stronger nucleosome positioning? The H3K4me3 signal to the right seems to disappear in CLLs.
It is now described in the text that 2639 out of 2785 promoters with the extended H3K4me3 promoter signal also gain nucleosomes according to the MNase H3 ChIP-seq mapping. The different nucleosome occupancy gain patterns at promoters are now depicted in Fig EV3B. The numbers of extended H3K4me3 signals are now stated in the paper for both malignant and non-malignant cells. In short, we only detect two extended H3K4me3 regions in non-malignant cells. We link reduced MLL (or KMT, as now stated in both text and figure) expression to loss of bivalency, in line with previous findings. The extension of the H3K4me3 signal is probably simply caused by the positioning of additional nucleosomes that are then modified alongside the neighboring ones. These two separate ways to change the H3K4me3 signal are now stated in the text accompanying Fig 3C and Fig 3K.

13.

The error has been corrected in the revised manuscript.
14. Fig. 6A, B: In the text 5 deregulated chromatin features are mentioned, but in 6B the authors show 6. Please clarify.
The error has been corrected in the revised manuscript.
Reviewer #2:

Summary
This work represents an effort to comprehensively characterize differences between chronic lymphocytic leukemia cells and normal B cells with respect to genome-wide chromatin state, DNA methylation, gene expression, and genomic element regulation by DNA sequence-specific transcription factors. CLL-associated epigenetic changes identified by the authors include differences in large-scale repressive chromatin domains with respect to histone and DNA methylation, altered nucleosome positioning and histone modification distribution at active promoters, altered combinatorial histone modifications at repressed promoters, altered enhancer activation, and altered enhancer-promoter relationships. The authors used differential gene expression analysis between CLL and normal B cells, transcription factor binding motif analysis of genomic features highlighted in their epigenetic analysis, and a regulatory network model generated from published B cell transcriptional profiles to create a network model for mechanisms of epigenetic and transcriptional dysregulation in CLL. After identifying increased histone deacetylase activity in CLL cells versus normal B cells, they also used an HDAC inhibitor to study differences in the way enhancers and promoters are affected by HDAC perturbation in CLL and normal B cells, and relate these findings to their model.
General Remarks

15.
In this work, Mallm, Iskar, Ishaque and colleagues present the results of an integrative genome-wide comparison of peripheral blood CLL cells to normal peripheral blood B cells with respect to a diverse set of features related to chromatin and transcriptional regulation. These include whole-genome assessment of DNA methylation, RNA-sequencing, chromatin accessibility (in bulk and at the single-cell level), diverse histone modifications, and nucleosome positioning. The number of subjects in each category was sufficient for valid statistical assessment of significant differences between CLL and normal B cells with regard to these features, and the analyses presented clearly demonstrate the high technical quality of these data sets, with few exceptions (the H3K9ac data appears to be of somewhat lesser quality than other data sets, for example). Together, this represents one of the most impressive and comprehensive epigenetic profiling efforts I have seen for any cancer type, and these data sets will certainly be an invaluable resource for the CLL field, and as a model for future investigations in other cancers. The authors are also to be applauded for their efforts in making not only raw and processed data, but also their custom analysis scripts available to the community.

We would like to thank the reviewer for the enthusiasm about our findings and have continued to work on making the data and software accessible (see GC 4).
16.
With regards to the analysis and biological findings presented here, they are primarily of a descriptive and hypothesis-generating nature. Given the ambitiously broad scope of the profiling and analysis portions of the project, the value of their well-characterized data sets as a resource, and the challenges inherent in studying CLL for which few faithful in vitro or in vivo functional models exist, this does not overly detract from the value of the work.
We would like to thank the reviewer for acknowledging the value of our findings for the general readership of Molecular Systems Biology.
17.
However, the authors should more clearly acknowledge this as a caveat in describing their findings. It would be best to avoid language that appears to claim an unproven functional consequence of chromatin mark associations in the abstract and discussion (e.g. "...loss of bivalent promoters indicated a reduced developmental capacity"). As another example, their network model predicts a functional association between TCF4 activation and BCL2 activation in CLL (described in the abstract, results, and discussion sections), but no functional experiments are performed to directly test this prediction. If the authors wish to highlight this hypothesized relationship in the abstract, they should make it clear that this is an example of the hypothesis-generating power of their data, rather than an experimentally demonstrated mechanism.
We agree and have removed the above predictions from the abstract and have modified the results and discussion sections to make clear which statements are model-derived hypotheses (e.g., stating that the functional connection between TCF4 and BCL2 is a prediction derived from our network model) as opposed to predictions from the network model that have been backed up by additional experimental data (e.g., EBF1 driving an enhancer of the SNX22 gene). Wherever relevant we no longer suggest causative links but rather clearly state that our observations are frequently of a correlative nature and use them to generate novel hypotheses.
18.
Another important caveat is warranted in presenting this data. It is well known that CLL proliferation, and activation of many transcriptional regulatory pathways critical for CLL pathogenesis (BCR signaling, MYC, Notch) occur primarily in tissues (lymph nodes & other secondary lymphoid organs, bone marrow, etc). See for example Herishanu et al 2014, cited by the authors. However, all samples studied in this work are (for practical reasons) from peripheral blood. The authors should discuss the inherent limitations of studying CLL peripheral blood cells with regard to understanding the biology that sustains these tumors.
We agree with the reviewer that the origin of the samples should be mentioned and discussed in the context of our results. To this end, we now included a passage in the first paragraph of the results section to point out this shortcoming.
Major points
The following points should be addressed in revisions in order for this paper to be acceptable for publication.
19.
Overall, this is a very impressive and comprehensive description of the epigenetic regulatory "landscape" of CLL as contrasted with normal mature B cells. However, in the absence of hypothesis-driven experiments, the authors seem to stretch their model and the literature excessively in some areas in order to construct mechanistic explanations for all of their major findings and their purported relevance to CLL pathogenesis and therapy. The paper could be improved by de-emphasizing these pat explanations, and instead suggesting broad categories of future hypothesis-driven investigation that could be enhanced by using these impressive data sets as a resource.
We feel that the paper is indeed a valuable resource and we thank the reviewer for his/her appreciation of our collected data (see GC 4). Though still including some hypotheses directly derived from our work in the paper, we have now made it clear that our data do not provide detailed mechanistic explanations of CLL pathogenesis. Rather, we have revised the discussion to put more emphasis on findings directly derived from our study and on conceptual advancements that could guide further experimental validation of CLL disease features.

20.

Figure 1b requires far more explanation for the reader to understand. Four different heatmap legends are provided, but I only see corresponding data for the top two (which have very similar color schemes).

We thank the reviewer for pointing this out. We have removed the additional 2 legend items, changed the color scale of the emission heatmap, made the panel less cluttered to make it easier to understand, and revised the figure legend.

21.

Figure 3 / S3 - Broadening of H3K4me3 domains is a phenomenon that has been studied in a number of developmental and cancer contexts, and has been linked to altered transcriptional elongation and enhancer-dependent regulation, for example. Here, the authors seem to link this phenomenon to alternate TSS / promoter usage, but it's not clear that these phenomena are related. The authors should not rely on chromatin state (HMM) calls alone to claim that alternate TSS's are being used, but should support this by demonstrating the presence of transcripts with alternate 5' ends, either through RNA-Seq analysis or more targeted experiments, to disambiguate true alternate TSS usage from broadening H3K4me3. For example, it is not clear in figure 3F whether the gene on the bottom is truly using the lower TSS, or if it merely has H3K4me3 extending into the area of that annotation. Definitions of "active promoter" used in each sub-panel should be explicitly stated, and should be based on demonstration of alternate RNA transcripts for at least some of these figures.
We have clarified this section, stating that the observed H3K4me3 extension of 1-2 nucleosomes is not directly linked to alternative promoter usage, which was in turn identified from the emergence of H3K4me3 at additional transcription start sites.
22.
With regards to claims of TF motif enrichment at broadened H3K4me3 promoters (Fig S3C) and loss of promoter bivalency in CLL (Fig 3J), much more information needs to be provided about these analyses (merely showing the HOMER library PWM motif logos is not informative). Were these findings supported by both de novo and known motif analyses? What background regions were used? What was the statistical significance and frequency of these motif associations, and how much more significant were they than other enriched motifs (it is assumed that these were the MOST enriched motifs in these analyses, but that is not explicitly stated).
Thank you for pointing out the missing information. We have updated the figure to include the statistical significance and frequency as requested. The background, which in this case contains all non-extended promoters, is now stated in the text.
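For reference, enrichment against such an explicit background can be sketched with a hypergeometric test (the motif counts below are hypothetical; only the total of 2785 extended promoters is taken from our data):

```python
# Sketch of a motif enrichment test with an explicit background: extended
# promoters are the foreground, all non-extended promoters the background.
from scipy.stats import hypergeom

n_fg, n_fg_with_motif = 2785, 620        # extended promoters / with motif (toy count)
n_bg, n_bg_with_motif = 20000, 2400      # non-extended promoters / with motif (toy)

N = n_fg + n_bg                          # all promoters
K = n_fg_with_motif + n_bg_with_motif    # all promoters carrying the motif
# P(X >= n_fg_with_motif) when drawing n_fg promoters from N with K successes
p = hypergeom.sf(n_fg_with_motif - 1, N, K, n_fg)
fold = (n_fg_with_motif / n_fg) / (K / N)
print(f"fold enrichment = {fold:.2f}, p = {p:.2e}")
```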
23.

The authors' hypothesized association between MEF2 factors and MLL enzymes that generate H3K4me3 is poorly supported by the single citation provided, a review of MEF2 factors that in turn cites a single paper that linked MEF2C/D phosphorylation to recruitment of KMT2D, which is generally thought to be an enhancer-associated generator of H3K4me1 (not a promoter-associated generator of K4me3). Figure 3K uses the deprecated gene symbols for the MLL genes - this is ambiguous and should be avoided since "MLL2" has been used for both KMT2B and KMT2D (though generally the latter in humans). It is unclear why "MLL4" (presumably KMT2B) is not included in this figure. Given the very weak literature support for a direct connection between MEF2 factors and H3K4 trimethylation, this figure might better be omitted, and the corresponding claims significantly modified.

We agree with the reviewer that the naming of histone modifiers is ambiguous and have clarified the naming in the figure and included KMT2B as suggested. Still, we believe that the expression levels of histone modifiers should be displayed in the manuscript. As the reviewer rightly commented, we can only speculate about the relationship between MEF2C/D and KMTs, but we have now included more references that would further support our hypothesis. The text was significantly changed as recommended and the heatmap moved to the supplements.

24.

Figure S4A is not mentioned in the text, and needs to be better explained with respect to which ENCODE TF ChIP-Seq datasets were analysed (B-lymphoblastoid lines only? Other cell types?) and how the TFBS clusters were generated.

We apologize for the lack of clarity. This figure has been moved to Figure EV4B. We have amended the manuscript, mentioning that transcription factor binding sites are enriched in the TSS, enhancer active and bivalent chromatin states. The heatmap was simplified into clusters of transcription factors as originally there were too many TFs to meaningfully visualize. The clusters and the description of how they were generated are now given in Appendix Table S4, and the figure is now referenced in the main text.

25.

Figure 4B requires much better explanation of what is being depicted. Each row seems to represent the regions in a given state in CLL that had a different state in normal B cells, correct? And then whether they were enriched for that same state (or for p300) in ENCODE or FANTOM cell lines? It is surprising that so many of the different FANTOM and ENCODE lines showed enrichment for the enhancer state in CLL enhancers, given how cell type specific enhancers are. I would have expected significant enhancer state enrichment for ENCODE B cell lines (e.g. GM12878), but not others - it would be helpful to label the cell lines. It is unclear what "inactivation of enhancers in CLL occurred mostly via the bivalent state" means - does this reflect the fact that the authors' ChromHMM "bivalent" state shows more H3K4me1 / H3K27me3 bivalency (a variant "enhancer bivalent state" - Fig 1B), rather than the classic K4me3 / K27me3 usually described at promoters? In its current state, it is unclear what points are being made in this figure.
We thank the reviewer for pointing out the lack of clarity regarding Fig. 4B. The so-called "bivalent" chromatin state (also referred to as a "poised" state) is transcriptionally silent and involves the presence of the repressive mark H3K27me3 alongside an active mark of H3K4me1/2/3. The earlier chromatin segmentation models used in the ENCODE study originally showed the bivalent states being primarily a combination of H3K27me3 and H3K4me1 (Segway segmentation, state "EnhancerPoised") but also H3K27me3 with mainly H3K4me2 and H3K4me3 (ChromHMM segmentation, state "PromoterPoised"). Thus, the concept of the combination of H3K27me3 and H3K4me1 to describe bivalent marks in enhancers is not new. This was recapitulated in the Roadmap Project 15 state segmentation, which also had a specific "Bivalent Enhancer" chromatin state. Figure 4B shows the overlap enrichment of other enhancer sets with our identified chromatin states using the ChromHMM OverlapEnrichment tool, demonstrating enrichment of these enhancer calls in our active enhancer and TSS states as well as in our bivalent state. From this we would infer that since the bivalent state is essentially inactive, bivalency is the most enriched state for inactive enhancers. To clarify what is being shown, we have revised Fig 4B (now Fig. EV4C), its legend and the methods description. Like the reviewer, we were expecting many of the FANTOM5 enhancers to be mainly enriched in repressed chromatin due to the cell specificity of these enhancers. However, there are still many overlaps between different FANTOM enhancer sets as, for example, 109 of the 209 FANTOM large intestine enhancers overlap with FANTOM blood enhancers.
26. 4G -What background was used for the motif analysis of DMR-associated enhancers? Are enriched motifs in these enhancers significantly different from motifs enriched in a control set of CLL enhancers that are not associated with DMR's or differential acetylation?
We apologize for this omission. We have now improved our TF-binding site analysis of DMRs by focusing on the ATAC-seq sites overlapping with the DMRs. All the remaining ATAC-seq sites not overlapping with the DMRs were treated as the background. We have modified the figure legend of Figure EV2 to clarify this point.
27. 4H -How were "NFATC1 binding sites" identified? Are these ChIP-Seq-proven binding sites (e.g. from an ENCODE GM12878 dataset) or simply enhancers that contained the NFAT motif? This figure is barely mentioned in the text and needs to be better explained.
These are binding sites identified using the HOMER tool, which in turn uses the NFATC1 binding motif; we have modified the text to reflect this. We attempted to experimentally validate the results by ChIP-seq of NFATC1 in our patient samples, but the signal-to-noise ratio of the sequencing results was not of sufficient quality to draw any meaningful conclusions, most likely due to the quality of the antibodies tested.
28. 5E -To my eye, there doesn't seem to be any relationship between the scATAC-Seq correlation matrix and the overlayed Hi-C contact domains. Perhaps a different locus, different scale, or genome-wide statistical measure would make this point better? As it stands, the author's claim that ATAC-Seq peak co-regulation correlated with topological features is not supported by the data presented.
We have rephrased the text to make clear that the scATAC-seq correlations indeed do not fully reproduce the Hi-C contact domains and have removed this from the figure. This difference could be present for two reasons: First, the Hi-C domains were from a lymphoid cell line and not from primary CLL cells. We also investigated the recently published Hi-C dataset for CLL cells but found its resolution insufficient for our purposes (Beekman et al, 2018). Second, correlations in accessibility derived from the scATAC-seq reflect co-regulation of promoter/enhancer sites. These can involve physical contact as in Hi-C but can also arise from co-regulation in trans, for example by the overexpression of a TF within a given cell that binds two co-regulated sites. Accordingly, some TAD boundaries show a good overlap with the scATAC-seq pattern, while other correlated open sites retrieved from scATAC-seq do not arise from physical interactions within a TAD. In the revised manuscript we have clarified this issue. Furthermore, we find that 68% of the promoter-enhancer pairs identified from scATAC-seq are also found in the 4D nucleome database (Teng et al, 2015). It is noted that the pair-wise correlations derived in our analysis make no assumptions about the underlying molecular mechanism and are conceptually equivalent to the pair-wise correlations used to describe the B cell GRN. Accordingly, these pair-wise correlations can be used to extend the GRN to include enhancer-gene linkages, as done in our study.
29. S5E -The depiction of co-regulated scATAC-Seq regions on chromosome 1 is perplexing. Enhancer-promoter loops do not operate on the scale of 10's to 100's of megabases as shown by the arcs in this figure. While the text briefly points out that these correlations could be driven by coregulation in trans, the figure legend seems to describe them as enhancer-promoter pairs, which is highly unlikely. Needs clarification.
We agree with the reviewer that the previous Fig. S5E (now removed) should probably not be mentioned in the context of promoter-enhancer connections. We do not want to make the claim that the long-range correlations shown in the figure are promoter-enhancer connections (which are indeed expected to be more short-range); they are more likely to occur in trans via binding of the same TF (see response to point 28). This has been addressed by limiting the maximal distance between promoters and enhancers to 100 kb. The length distribution of wired promoters and enhancers is now also included in the manuscript, clearly showing a preference for shorter distances within the 100 kb window (Appendix Figure S5G).
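A minimal sketch of this distance-constrained pairing (with toy binary accessibility vectors; the actual analysis used the scATAC-seq count matrix and the correlation measure described in the Methods):

```python
# Sketch: promoter and enhancer accessibility vectors across single cells are
# correlated only if the two elements lie on the same chromosome within 100 kb.
import numpy as np

MAX_DIST = 100_000
promoters = {("chr18", 52_900_000): np.array([1, 0, 1, 1, 0, 1])}  # per-cell open/closed
enhancers = {("chr18", 52_960_000): np.array([1, 0, 1, 0, 0, 1]),
             ("chr18", 60_000_000): np.array([0, 1, 0, 1, 1, 0])}

pairs = []
for (p_chr, p_pos), p_vec in promoters.items():
    for (e_chr, e_pos), e_vec in enhancers.items():
        if p_chr == e_chr and abs(p_pos - e_pos) <= MAX_DIST:
            r = np.corrcoef(p_vec, e_vec)[0, 1]
            pairs.append(((p_chr, p_pos, e_pos), r))
print(pairs)  # only the 60 kb-distant enhancer passes the distance filter
```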
30.
The main text (page 10) also describes an enhancer-promoter pair analysis within windows of 200 kb, followed by identification of "re-wired" pairs in CLL versus normal cells, but no specific figures or tables are associated with this analysis (other than simple lists of features). CTCF site chromatin accessibility near these "re-wired" pairs is supposedly lost in CLL, but no analysis is presented as to whether this is a statistically significant association vs appropriate comparator genes. It is unclear whether the authors hypothesize that these CTCF pairs directly facilitate enhancer-promoter looping in the normal B cells, or rather represent topological domain boundaries (e.g. insulators) that are selectively lost in CLL and thus facilitate "re-wiring" across TAD boundaries. This is a complex topic, and should either be investigated in detail with appropriate statistical analysis and figures presented, or perhaps omitted entirely and explored in a future paper.
Since we have performed CTCF ChIP-seq, we included these datasets in the context of the re-wiring analysis. We found that 90% of re-wired enhancers are stably bound by CTCF within the enhancer in both malignant and non-malignant cells. This suggests that CTCF is needed for promoter-enhancer contacts and that the re-targeting is driven either by variable CTCF binding sites outside of enhancers or by other factors such as YY1 or cohesin.
31.
The authors mention several times (discussion and introduction) that BCR signaling pathway genes are not recurrently altered by mutations in CLL, but this is not true in an important sense. They should refer to the extensive literature on immunoglobulin gene VDJ stereotypy in CLL, which results in genetically encoded B cell receptors capable of autonomous signaling (e.g. PMID 22885698), or remove this claim.
We have now included the landmark paper by Hassan Jumaa and colleagues and have in addition pointed out in the introduction that the BCR itself is subject to biased usage of IGHV genes and that somatic hypermutation is strongly correlated with prognosis, underlining its relevance in the pathomechanism of CLL.

32.

We apologize for not referencing the correct papers here. We have now included additional references that would further support our hypothesis.
33.
In the HDAC / enhancer section of the discussion, the authors' claim that MEF2 factors are connected to HDACs "via BCOR" is also unusual (no reference is provided). MEF2 factors bind directly to Class II HDACs (HDAC4/5/7/9) via a well-characterized interface - crystal structures of this interaction are available. This interaction is mutually exclusive with MEF2 binding to p300 acetyltransferase or the CABIN1 corepressor and is likely controlled by post-translational modifications. The authors should incorporate a more conventional understanding of MEF2 factor function into their model.
The panobinostat section has been moved to our general enhancer section and has also been completely revised. We now point to changes in the reactivation of enhancers by panobinostat and the affected pathways, rather than investigating the involvement of specific transcription factors in the changes of histone acetylation, since this is, as suggested by the reviewer, indeed very complex.
Minor points
The following points suggest areas where improvements would enhance the paper's acceptability for publication.
34.
The introduction could benefit from some re-writing for clarity and coherence.
We have simplified sentences and the structure of the introduction to improve readability.

35.

Figure 1a - The authors provide sample tracks for one CLL and one normal sample at the TCF4 locus, which is helpful for gauging the quality of their many datasets, although inclusion of their peak calls for each track might provide more insight into their genome-wide analysis. The figure legend should contain an appropriate reference to the chromHMM color code & corresponding states (e.g. chromHMM states are color coded as in Figure 1B).

The figure legend of Figure 1A is now revised to include a reference for the chromHMM color codes. To address the valid point about peak calls, all the track files together with the peak files are made accessible from the NCBI GEO and CancerEpiSys website (see GC 4). Thus, it is possible to overlay these tracks locally for any gene of interest using IGV.

36.

Figure 1C - Here, the authors appear to be using their data to reproduce a previously published result regarding DNA methylation-defined CLL subtypes. It's not clear whether there is functional evidence for a "developmental block" as the authors claim, except perhaps as explored in that prior paper.

We agree with the reviewer in that we reproduce the correlative results of previous publications by showing the similarity of DNA methylation patterns of CLL cells with non-malignant B cells of different stages. Although we strongly believe that this is representative of a developmental block, at least for the IGHV non-mutated, more aggressive subtype of CLL, we have no functional proof and have thus removed this statement.

37.

Figure 2 / S2 - The characteristics described for the CLL PMDs (100's of kb to Mb scale, enrichment for repressive chromatin marks, boundary enrichment for CTCF or active promoter marks) are reminiscent of topologically associated domains (TADs) as defined by Hi-C studies, which the authors discuss later in the paper. If the CLL PMDs in fact correspond to repressed TADs (or perhaps series of TADs within larger structures such as Hi-C "B" compartments) this would be an interesting point to discuss.
We are thankful for this suggestion. To address this, we calculated the overlap of Hi-C B compartments with PMD regions. As shown in Figure EV2, we found both Hi-C B compartments and lamina-associated domains to be enriched in PMD regions. We revised the manuscript to include a section discussing these findings.
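The overlap statistic can be illustrated with a small sketch (toy intervals and genome size; the real calculation used the genome-wide PMD and compartment calls):

```python
# Sketch: fraction of PMD base pairs falling inside Hi-C B compartments,
# compared with the genome-wide B-compartment fraction.
def covered_bp(regions, domains):
    total = 0
    for r_start, r_end in regions:
        for d_start, d_end in domains:
            total += max(0, min(r_end, d_end) - max(r_start, d_start))
    return total

pmds = [(1_000_000, 3_000_000), (10_000_000, 11_000_000)]  # single-chromosome toy set
b_comp = [(500_000, 2_500_000), (10_200_000, 10_800_000)]
genome_size = 50_000_000

pmd_bp = sum(e - s for s, e in pmds)
frac_in_b = covered_bp(pmds, b_comp) / pmd_bp
expected = sum(e - s for s, e in b_comp) / genome_size
print(f"observed = {frac_in_b:.2f}, expected = {expected:.2f}")  # enrichment if observed >> expected
```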
4B
The analysis of bidirectional transcription is quite interesting, but it's unclear how this was achieved. The methods describe separation of small and large RNAs prior to ribodepletion and library construction from the latter. Was sequencing of short RNAs performed (which would presumably contain most eRNAs), or is the bidirectional signal coming from the long RNA pool? Please clarify in methods and results text.
Indeed, we agree with the reviewer that the analysis of the bidirectional transcription is interesting. We have improved our methods description of how this was achieved. In short, we have conducted separate sequencing of both long and short RNAs and used the combined data to map bidirectional transcripts. We have also added an additional supplementary figure (Appendix Fig. S4A) which describes the chromatin states at the identified bidirectionally transcribed loci.
39. 4C -I would expect a null signal along the diagonal for the matrix of state transitions. How is it possible to transition to the same state?
We would like to thank the reviewer for pointing this out. The zero diagonal is now present due to an additional modification: only the transition between the consensus non-malignant state and the consensus CLL state is considered, whereas the previous version included the matrix of all non-malignant to CLL transitions. Thus, there is now the constraint that the consensus CLL and non-malignant states have to be different. We expect the current version of the panel to be more intuitive.
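For illustration, the modified counting scheme can be sketched as follows (a minimal toy example with simulated state assignments; all names and values are invented and do not reflect the actual segmentation data):

```python
import numpy as np

# Consensus chromatin state per genomic bin in non-malignant B cells and
# in CLL, encoded as integers 0..n_states-1 (simulated toy data).
n_states = 4
rng = np.random.default_rng(0)
normal_state = rng.integers(0, n_states, size=1000)
cll_state = rng.integers(0, n_states, size=1000)

# Count transitions between the two consensus states, skipping bins where
# the state is unchanged; this forces the diagonal of the matrix to zero.
transitions = np.zeros((n_states, n_states), dtype=int)
for a, b in zip(normal_state, cll_state):
    if a != b:
        transitions[a, b] += 1

assert np.all(np.diag(transitions) == 0)
print(transitions)
```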
40.
4D/E -There seems to be a typo in the legend, as the "number of sites with the motif" is not found on the y axis. What is the significance of dot size and color intensity? Many of the motifs in the HOMER library are highly redundant (e.g. ETS factors, which make up most of the significant signals in the downregulated chart, or different ATF factors). It is unlikely that these very similar motifs accurately distinguish between these different members of the same TF family. Perhaps a more parsimonious analysis could be shown for unique motifs detected in de novo HOMER motif analysis of the union enhancers, which would highlight more distinct motifs rather than many variants of the same top motifs.
The typo has been removed. Since we always show the results of the known-motif search, we do the same for the enhancers for consistency. It is true, however, that we cannot discriminate between very similar motifs, but we have focused this type of analysis by using our ATAC-seq data and color-coded TFs from the same family with similar binding motifs for clarity (compare Figures 5A and 5C).
41.
S4D/E - The legend mentions depiction of TFs with enriched motifs in super-enhancers, but most of the genes highlighted in the hockey plots are not TFs but rather selected loci that are discussed in the text as "particularly relevant" for various reasons. It's not entirely clear why the TCRA locus is of interest (T cell receptors are not expressed in CLL), and the BCL2 super-enhancer, and those of several other genes highlighted in the discussion, are not indicated in the hockey plots.
The hockey plots have been removed from the manuscript.
42.
4I -It is mentioned in the text that 6.3% of CLL super-enhancers were "unique to this study", but no citations are provided as to the prior super-enhancer papers or databases from which this comparison is made.
We have now mentioned the database used for this comparison, "dbSuper", in the methods section.

Figure 5/S5 - While the "footprinting" analysis shown in 5B is intriguing, the fine-scale pattern may be largely driven by the sequence preferences of the Tn5 transposase, which should be mentioned as a caveat. The overall differential signal levels are also not informative, since the sites were selected on that basis. The motif enrichment plots in S5C are much more compelling evidence for a specific role for these TFs.
43.
We now mention the Tn5 sequence preference as a caveat in the manuscript, and the motif analysis has been moved to the main figure panel to highlight the more convincing finding, as suggested by the reviewer.
44. 5D - The discussion of the TF motif heterogeneity analysis is difficult to follow: the list of highly heterogeneous motifs is very enriched for Fos/Jun sites, which are not mentioned in the text, while SP1 and MYC (not in the list of heterogeneous motifs) are discussed in the text as possible interactors of NFYA/B. The text should probably follow the data more closely, or be better explained.
We agree with the reviewer and the text has now been fully restructured to follow the findings presented.
Figures 6 and S6 provide an integrative framework that ties together the differential epigenetic and transcriptional features of CLL versus normal B cells into a model of TF-driven gene dysregulation in CLL. It is interesting that there is almost no overlap between TFs frequently mutated in CLL and the TFs identified as driving differential chromatin and transcriptional regulation in this study. For example, gain-of-function mutations in the TF NOTCH1 are the most common recurrent gene mutations in CLL, and Notch dysregulation has also been demonstrated in most NOTCH1 wild-type CLL samples, but no signature of Notch/RBPJ differential activity was described in this work. Does this reflect a limitation of the approaches used, or does the network model provide a possible explanation?
As the reviewer pointed out, the majority of patients with or without NOTCH1 mutations show activation of the NOTCH1 signaling cascade. Consistent with this expectation, we find NOTCH1 as a deregulated gene in our study and it is also present in the CLL-specific GRN that we derive to integrate our findings (see GC1 and Fig. R2). Furthermore, it is noted that we find a loss of EBF1 at enhancers as one of our most prominent features, which was validated. As discussed previously, EBF consensus binding sites overlap with high-affinity RBPJ sites and it has been concluded that RBPJ complexes can bind at these sites only after EBF1 dissociation (Miele, 2011). It is noted that EBF1 can act as a repressor of NOTCH1/RBPJ genes (Banerjee et al, 2013). Thus, the activation of the NOTCH1 signaling cascade in the absence of NOTCH1 mutations could be linked to the loss of EBF1. These considerations are now mentioned in the discussion.

Figure 7D - This figure shows significantly increased H3K27ac at promoters enriched for SP1, E2F, and KLF family members. The corresponding text describes ETS motif enrichment, which is not shown in the figure.
46.
The claims described by the reviewer were removed from the manuscript.
Discussion.
47. CLL-specific large PMDs associated with repressive histone modifications and gene silencing are described, but confusingly are then associated with genes (IRF4, HIST1H1E, NOTCH1, IGLL5) that are mutated but also expressed in CLL. Are these two distinct classes of PMDs (associated with repression or alternately with expressed, recurrently mutated genes)? Needs clarification.

The reviewer's concern is valid; these example genes are rather exceptions within the PMDs. We investigated these genes in detail and found them to be only partially overlapping with the PMDs. We agree with the reviewer that these may not be the best representative cases. Therefore, we decided to remove this section from the discussion.
Supplementary data
47. Some of the supplementary data Excel sheets suffer from conversion of certain gene names to dates (e.g. SEPT11 has been converted to the date "Sep-11") Thanks for pointing this out. The gene names in the excel lists have been corrected.
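This type of corruption can be screened for programmatically; the sketch below (hypothetical gene list; pandas assumed available) flags date-like entries such as "Sep-11" or "1-Dec" so that they can be mapped back to the intended symbols:

```python
import pandas as pd

# Toy gene list containing names that Excel silently converts to dates,
# e.g. SEPT11 -> "Sep-11", DEC1 -> "1-Dec" (all values here are made up).
genes = pd.Series(["TP53", "Sep-11", "EBF1", "1-Dec", "NOTCH1"])

# Flag date-like cells so they can be mapped back to gene symbols.
months = "Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec"
date_like = genes.str.fullmatch(rf"(\d{{1,2}}-)?({months})(-\d{{1,2}})?")
print(genes[date_like])

# Prevention: export such columns as explicit text (e.g. via a dedicated
# text format in ExcelWriter) rather than letting Excel guess cell types.
```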
Reviewer #3:
General comments 48. The identification of partially methylated domains in CLL is a very interesting observation, but the manuscript is crowded with other minor and sometimes dubious observations. There is a lot of valuable data in this study that needs publishing once it is better analysed with more rigorous curation of the data, but it cannot be published in this form.
We are glad to learn that this reviewer sees a lot of valuable data in our study. We have carefully considered his/her criticism. In several instances we have improved the data analysis as described below and corroborated our conclusions with additional experiments (GC2).
49.
The methods used to identify high confidence regulatory elements are well below the required standard and need repeating with more stringent criteria for defining them.

Defining enhancers based on a ChromHMM segmentation of histone modifications is a well-established approach that has been used in many studies, for example by the Roadmap Epigenomics Consortium (Roadmap Epigenomics et al, 2015) and in a recently published study on chromatin features of CLL (Beekman et al, 2018). Accordingly, we disagree that this type of analysis would be "well below the required standard" but rather consider it an important reference approach for the annotation of chromatin states. It is also noted that no consensus exists about the best way to map the "real" enhancer sites. Other studies, like those by the FANTOM consortium, use neither ATAC/DNase I nor histone modifications but base their enhancer assignment on bidirectional transcription and p300 binding (Andersson et al, 2014). While we fully agree that defining active enhancers based on the enrichment of H3K4me1 and H3K27ac results in rather broad regions, we believe that it would not be justified to conclude that the resulting larger size of enhancers has no biological relevance. In fact, the concept of super-enhancers with a typical extension of ≥10 kb, while controversially discussed, explicitly claims that these particularly large enhancer tracts have specific functions by (i) increasing cooperative binding of transcription factors (Loven et al, 2013), (ii) the local accumulation of H3K27ac reader proteins like BRD4 (Loven et al, 2013) or (iii) inducing a liquid-liquid phase separation process to form functionally distinct nuclear subcompartments (Hnisz et al, 2017; Sabari et al, 2018). That being said, the reviewer's point is well taken in that it is also important to narrow down enhancers for certain types of analysis, especially for the TF motif analysis (see point 50 below), and to focus more on the ATAC-seq analysis. Accordingly, we now introduce the ATAC-seq analysis already in Fig. 1D and EV1D-F and extended our enhancer annotation with an ATAC-centric approach as follows: Based on the extension of enhancer profiles reported previously (Chen et al, 2018) and excluding sites within promoters, we select a +/-1 kb window around either the ATAC peak center or the bidirectional start sites (bidirectional transcription is from our strand-specific RNA-seq datasets of both short and long RNAs). The ATAC sites that overlap with an active ChromHMM enhancer state (E1 or E9) within this 2 kb region are used to define a set of "focused" active enhancer loci that represent 20-40% of the active ChromHMM enhancers. In this manner we now provide different enhancer sets, which we believe will all be valuable to other researchers. Furthermore, alternative strategies to map enhancers from a combination of DNA methylation, histone marks, ATAC peaks and bidirectional transcription can be implemented as desired with the data provided in our study.
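For illustration, the intersection step of this "focused" enhancer definition can be sketched as follows (invented toy coordinates; the real analysis runs on the full ATAC peak and ChromHMM segmentation files, with E1/E9 denoting the active enhancer states):

```python
import pandas as pd

# Toy ATAC peak centers and a toy ChromHMM segmentation.
atac = pd.DataFrame({"chrom": ["chr1", "chr1"], "center": [100500, 250000]})
chromhmm = pd.DataFrame({
    "chrom": ["chr1", "chr1"],
    "start": [99000, 300000],
    "end":   [102000, 310000],
    "state": ["E1", "E9"],
})

# Extend +/-1 kb around each ATAC peak center ...
atac["start"] = atac["center"] - 1000
atac["end"] = atac["center"] + 1000

# ... and keep windows overlapping an active enhancer state (E1 or E9).
active = chromhmm[chromhmm["state"].isin(["E1", "E9"])]
hits = atac.merge(active, on="chrom", suffixes=("", "_hmm"))
focused = hits[(hits["start"] < hits["end_hmm"]) & (hits["end"] > hits["start_hmm"])]
print(focused[["chrom", "start", "end", "state"]])
```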
50.
The authors could expand the ATAC motif analysis to get the most useful information from this study, and leave out more dubious stuff. Some of the most valuable information is effectively lost in fig S5C. More convincing footprints at these motifs would be good.
We would like to emphasize that the TFs listed in Table S2 in the previous and revised manuscript were based on a motif analysis within the regions of differential ATAC signal. The other readouts were used to assign these loci to different types of aberrant chromatin features in CLL. Thus, the requirement was that the motif was associated with both a differential ATAC signal and an aberrant chromatin feature, and that the given TF displayed gain/loss of activity/gene expression. Additional motif analyses presented in the manuscript in Fig. EV2F and Fig. 3E are intersections with the ATAC signal, while the analysis in Fig. EV4G and EV4H was conducted within the region of aberrant chromatin features, which is now clear from the figure legends.
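For reference, the target-versus-background logic of such known-motif enrichment (HOMER itself uses a binomial test by default) can be illustrated with a contingency test on invented counts (not our actual numbers):

```python
from scipy.stats import fisher_exact

# Motif occurrences in differential ATAC regions (target) versus
# unchanged ATAC regions (background); counts are invented.
motif_in_diff, diff_total = 420, 2000
motif_in_bg, bg_total = 3100, 40000

table = [
    [motif_in_diff, diff_total - motif_in_diff],
    [motif_in_bg, bg_total - motif_in_bg],
]
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio {odds_ratio:.2f}, p = {p_value:.2e}")
```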
51.
The study equates modified chromatin domains with enhancers, and then counts up TF motifs in these domains, which is something you can only do with enhancers. They need to separate these entities.
See our responses to points 49 and 50.
52.
The HDAC experiments are probably providing valuable information, but the gene regulation networks pointing to HDACs may not be real if the analyses were flawed. The authors may be trying too hard to make the analyses increasingly more intricate and more novel, but this runs the risk of losing sight of the basic biology, and moving away from our current definitions of what an enhancer is. An enhancer is more than a chromatin state as defined here. Overall there is a sense that the authors are using complex tools to extract more information than the data and the methods actually allow with any confidence. The end result is that the several good points of the study are lost in a morass of unreliable analyses and predictions based on these analyses.
To address this point for the HDAC experiment we have now simplified the analysis of this part and focused on the local increase of H3K27ac before and after inhibition. This is complemented by a differential RNA-seq analysis extracting genes that respond to treatment. The HDAC inhibitor-dependent TF motif analysis was removed and we now focus on making the following points: (i) HDAC activity is globally increased in CLL. (ii) It can be efficiently reduced to normal levels with panobinostat. (iii) The drug treatment leads to increased histone acetylation and a time-dependent gene expression response, where the initial effect is counteracted by changes of RNA metabolism and chromatin acetylation related activity. (iv) The loss of H3K27ac at some CLL enhancers can be reverted by panobinostat treatment.

Specific comments:

53. The manuscript contains too few examples of primary data showing specific gene loci and showing individual tracks for the individual patients. For 2 or 3 key genes it would be good to have e.g. the ATAC tracks for all samples where ATAC was performed. The one example shown in Fig 1A does not allow such a comparison, and as a 1 Mb window the resolution is too low to fully interpret the patterns. Finer resolution is needed to see what is being measured. Given the problems illustrated below, it would be helpful to have regions called as enhancers marked by bars under the tracks, as the Excel file suggests that many are not true peaks but broad regions of histone modification and not the open chromatin regions needed to pinpoint specific enhancers where factors are likely to be bound.
We have included several new tracks in the manuscript with finer resolutions as suggested. Enhancer / promoter regions are also marked as bars if appropriate (see Figures 1, 3-5, Fig. S2, S3, S5). In addition, we refer to the additional tracks at GEO or at our web page as described above in GC 4.

54. The TCF4 locus in Fig 1A has sharp peaks of H3K4me3 in both the transcribed and non-transcribed state. This is a little unusual, and might raise questions about the specificity of the antibody. This modification is normally associated with active promoters. Are the authors suggesting that these peaks are instead poised or bound by Polycomb?
TCF4 has multiple transcription start sites and is transcribed at significantly lower levels also in non-malignant cells, and thus the H3K4me3 peaks are consistent with that. The remaining point that the RNA track is hardly visible (see response to point 6) has been addressed by presenting the RNA tracks at different scales.
55.
The text refers to an enhancer-like region identified at the TCF4 locus in Fig 1A based on peaks of H3K4me1, but the profiles actually show several broad stretches of H3K4me1 and H3K27ac, and not sharp peaks. It is hard to see how this data can be used to identify a discrete enhancer. It would have helped if this putative element was labelled in Fig 1A.

The ChromHMM track is included in Fig 1A, giving the chromatin state and thus also the information about putative enhancer elements or active chromatin regions (in light and dark orange; please refer to the legend in panel Fig 1B). In addition, the element mentioned in the text has now been clearly labeled.
56. DNA regions should not be defined as "enhancers" based entirely on histone ChIP data. These "enhancer-like" regions can be defined as active chromatin regions, but should not be referred to as enhancers without more data, and not if there is no ATAC peak at the same site. E.g. One study found that only 26% of predicted enhancers had any actual enhancer activity (Kwasnieski JC, Fiore C, Chaudhari HG, Cohen BA. High-throughput functional testing of ENCODE segmentation predictions. Genome Res. (2014) 24:1595-602.). The text also refers to "genic enhancers" but it is unclear what distinguishes these as a group from poised and active enhancers. Does genic mean regions inside transcribed genes that also have H3K36me3? If so, it is a bit artificial to separate intergenic and exonic enhancers into 2 groups as they do the same things, and exon enhancers often do not regulate the gene where they reside, and may not then have H3K36me3.
The definition of "genic enhancer" is in line with definitions of chromatin states as defined by the ROADMAP consortia. It is the co-occurrence of the transcription elongation mark H3K36me3, the enhancer mark H3K4me1, and the active mark H3K27ac. These regions are not annotated by gene models and instead only use histone modification signals. Therefore, these are not annotated specifically to lie within genes, exon or introns or other gene features defined by gene models. For the other aspects of defining enhancers we refer to our response to points 49 and 50.
57.
The data in table 8 lists 141,000 enhancers present in B cells and 238,000 enhancers in the combined data set. This is vastly more than you would reasonably expect to find in one cell type if meaningful criteria were used to delimit the data. A stringent analysis might identify 30,000 high confidence peaks that have the potential to be enhancers. It has become a problem in the field that some studies greatly over-estimate the number of total enhancers in the genome (some claim more than 500,000) or in any one cell type. This arises by including too much low quality data or unsuitable definitions. It appears that the Roadmap consortium data cited here to validate the study also over-estimates numbers of enhancers, claiming 10% of the genome in B cells, so this is not helpful.
We agree with the reviewer that a focused enhancer set is needed, as described in our responses to points 49 and 50. The comparison to the Roadmap data, however, is still useful as the same histone marks in our study were used for the enhancer definition. Thus, the Roadmap data set confirmed our data quality per se, even if the enhancer definition has to be more stringent.
58.
In the present study the huge number of "peaks" indicates that insufficient culling of insignificant peaks has been performed, and the wrong criteria were used to define enhancers. The table also suggests that normal B cells have over 1000 enhancers greater than 10 kb in length. This means that the authors are defining entire active chromatin domains, not discrete enhancers. At the very least the authors should ensure that a distinct open chromatin region is present at the site where they define an enhancer. These should be discrete ATAC peaks, typically ~200-500 bp across, defined on the basis of a minimum meaningful threshold. My analysis of a random selection of the 140,000 enhancers identified in B cells raised many concerns. A good example highlighting the problems is a pair of enhancers defined by the authors at the NFATC2IP locus in hg19 as 16 28961200 28962000. If these are viewed on the ENCODE GC B cell and CD20 B cell DNaseI tracks it is clear that these represent modified histones on either side of a sharp DNaseI peak between these two coordinates. In this case the defined enhancers excluded the open chromatin region at the promoter in between.
Next to this promoter is a sharp DNaseI peak that is called as an enhancer at 16 28956400 28957800. In this case the called region is in open chromatin, but this is a discrete conserved CTCF binding site and is most likely an insulator, not an enhancer.
At the CD20 locus (MS4A1) the entire coding region and promoter is defined as a series of enhancers spanning 15 kb at
16 60220600 60226000? No:
11 60220600 60226000
11 60227000 60231600
11 60232000 60232800
11 60234600 60235400
At this locus a very probable enhancer exists as a very sharp DNaseI peak downstream but is called as a 3 kb enhancer at 11 60398800 60402200. While it is not useful to focus on just a few specific examples, this does leave the reviewer with the firm conviction that the entire enhancer data set needs redefining with more stringent criteria to include discrete regions of less than 1 kb, that are validated as open chromatin, with minimum peak heights defined on an empirical basis, so that they have a good chance of representing true regulatory elements and not just broad zones of active chromatin. In the current data set, 61,000 of the defined enhancers are greater than 1000 bp, whereas the typical enhancer should be ~300 bp.
It is unclear to us where the 300 bp size of a typical enhancer would come from. We have now included a more stringent list of enhancers based on an intersection of ATAC signal and active ChromHMM enhancer states. For the extension of "typical" enhancer regions we use the aggregated profiles for a set of crucial enhancer marks from a recent study (Chen et al, 2018), which yields regions of +/-1 kb around an ATAC peak or a bidirectional TSS (see responses to points 49 and 68).
59.
The enhancer data file should include at least some numerical data on peak values, so that it would at least be possible for the reader to interpret the significance of each peak, without having to re-process and upload the data on a genome browser. In this case, a link to a genome browser session would have been a good idea so that the reviewers and the end reader can see the data for themselves.
Bigwig files giving the enrichment of reads per region are available for further data analysis for all readers via the NCBI GEO accession number GSE113336. In addition, all the intermediate peak and ChromHMM segmentation files were shared for individual samples from the CancerEpiSys website (http://www.cancerepisys.org/).
60.
The Hidden Markov modelling of chromatin states may not be valid if the identification of chromatin features is over-stated to start with.
The ChromHMM analysis was done in accordance with other studies (e.g., Beekman et al, 2018; Roadmap Epigenomics et al, 2015). As outlined above in response to points 49 and 57, we consider it informative to relate our data in this manner to existing work as long as it is unclear what a "better" consensus approach would be.
61.
In contrast to the claim in the text relating to fig 1D, it is not obvious that there is a significant increase in enhancers in CLL.
We apologize for the incorrect statement, which has been removed from the manuscript. As apparent from Fig. 1D there is in fact a loss of ATAC signal at enhancer regions in CLL which is in line with a globally increased HDAC activity.
62.
Page 5 states that "we identified 1378 regulators (p < 0.05) with differential activity" which I think is far too many. This p value may mean that 5% of the 23,000 genes in the genome will be identified by chance, which is equivalent to 1150 genes, which could account for most of the regulators identified. I suggest that a lower p value cut-off would give more meaningful data. It would be better if they focused on e.g. the top 100 differential regulators for this to be meaningful.
We have expanded the relevant part of the Methods section to clarify this issue. For the construction of the B cell network we initially used a precompiled list of 5927 regulatory proteins (TFs, associated co-factors, signaling enzymes etc.) (Alvarez et al, 2016). From this list, 3862 proteins were present in the B cell network. With our RNA-seq data we could compute activities for 2804 of them and found 1378 regulators differentially active with an adjusted p-value < 0.05. Thus, our reference list does not involve all genes but only 2804 regulators, of which 5% would correspond to 140 proteins. Since we provide all relevant data in the xls file "Dataset_EV13-B-cell_network", other thresholds can be set as desired.

Fig 2D would benefit by adding an analysis centred on open chromatin regions near the histone peaks at the boundaries of PMDs. This may show that an active promoter or enhancer also blocks spreading of repressive marks, maybe more so than CTCF. It is not acceptable that ATAC is left out of the PMD analysis.
63.
We thank the reviewer for the suggestion. We revised Figure 2D to include open chromatin regions in addition to histone peaks (Fig. EV2D). As pointed out by the reviewer, open chromatin regions show a higher signal around PMD boundaries. In addition, we have conducted ChIP-seq of CTCF and confirm our previous conclusion that it is enriched at the PMD boundaries (Fig. 2D). Thus, the simplest interpretation of our data is that the enrichment of open chromatin at the PMD boundaries arises from bound CTCF.
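The boundary profiles in Fig. 2D/EV2D are aggregate plots of binned signal around each PMD boundary; the underlying logic can be sketched as follows (simulated data, not our coverage values):

```python
import numpy as np

# Binned signal (e.g. ATAC or CTCF coverage) in a fixed window around
# each PMD boundary; rows = boundaries, columns = bins (simulated here).
rng = np.random.default_rng(1)
n_boundaries, n_bins = 200, 41             # +/- 20 bins per boundary
signal = rng.poisson(5, size=(n_boundaries, n_bins)).astype(float)
signal[:, 18:23] += 4                      # pretend boundary enrichment

# The meta-profile is simply the column-wise mean across boundaries.
profile = signal.mean(axis=0)
offsets = np.arange(n_bins) - n_bins // 2
for off, val in zip(offsets, profile):
    if off % 10 == 0:
        print(f"bin {off:+d}: mean signal {val:.2f}")
```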
64.
In fig 2E the authors should indicate whether the increased mutations are occurring in PMDs that gained (or lost) meCG compared to normal B cells. If methylation is gained at these sites, it could be that meCG is transmuting to TG or CA during replication. How many of these mutations involve these bases?
Only 5% of the somatic mutations (Puente et al, 2015) were found to be on CG sites. In addition, PMD regions were mainly characterized by DNA methylation loss rather than DNA methylation gain. Overall, our analysis of dinucleotide frequencies revealed that the increased mutation rate is valid for all somatic mutations and is not specific to CpG dinucleotides.
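The dinucleotide check itself is straightforward; a toy sketch is shown below (invented mutation table; with real data the dinucleotide context would be taken from the reference genome, e.g. via pyfaidx):

```python
import pandas as pd

# Toy mutation table; for real data the reference dinucleotide context
# would be extracted from the genome, e.g. with pyfaidx:
#   ref = Fasta("hg19.fa"); ctx = ref[chrom][pos - 1:pos + 1].seq
muts = pd.DataFrame({
    "chrom": ["chr1"] * 6,
    "pos": [100, 205, 310, 420, 530, 640],
    "context": ["CG", "CA", "TT", "CG", "GC", "AT"],  # ref base + next base
})
frac_cpg = muts["context"].eq("CG").mean()
print(f"{frac_cpg:.1%} of mutations fall on CpG dinucleotides")
```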
65.
The extension of H3K4me3 domains in fig 3A is rather minor and may depend on how this data is normalised. This analysis does not add much and is difficult to interpret. The claims made in 3C and 3D about gain of nucleosomes would need a high-resolution map of average nucleosome spacing, e.g. for the first 5 nucleosomes from the TSS, or centred on the first nucleosome. Fig 3D is only one example and is too low resolution to see anything meaningful.

Figure EV3B has now been included that depicts the clustered nucleosome occupancy at all promoters. It can be seen that the bottom cluster shows an increased occupancy signal at the TSS, which reflects the profile of the promoters with the extended H3K4me3 signal depicted in Fig EV3A.

66. The link between the gain of one modified promoter nucleosome and the gain of alternate promoters is greatly over-stated given that alternate promoters are many kb away in fig 3F, not 200 to 400 bp away.
We agree with the reviewer and we have removed the claims linking these two points.

67. The whole discussion on page 8 becomes invalid once the reader realises that discrete rigorously defined enhancers were not identified in the first place, so I will not comment on it at length. You cannot do a motif analysis of chromatin regions 1 to 10 kb long. There are many of these, and some are much longer. Motifs concentrate within 100 to 150 bp of an ATAC peak summit, a much smaller region than most enhancers defined here. The background must be huge.
As described in our responses to points 49 and 50, the TF motif analysis was conducted within the regions that carry a differential ATAC signal. As background, unchanged ATAC-seq regions were used, as stated in the methods section.
68. I assume that some of the super-enhancers will also include the tracts of ~50 kb of modified chromatin that are in some places defined as single enhancers. This is not a great definition.
In our study we follow the approach introduced previously using the ROSE software tool, as stated in the manuscript (Loven et al, 2013; Whyte et al, 2013). This indeed leads to some super-enhancer regions of ~50 kb. We refer to our response to point 49 for this issue. We would also like to note that recent studies (Hnisz et al, 2017; Sabari et al, 2018) propose that enhancers assemble into a distinct nuclear subcompartment formed by a liquid-liquid phase separation process. Such a mechanism to form "chromatin bodies" would require large chromatin regions that establish multivalent interactions as opposed to bivalent bridging interactions between specific binding sites, as discussed recently (Erdel & Rippe, 2018).

Fig S5A cannot be interpreted without an inbuilt legend to the graphs. The ATAC data should show average profiles for CLL-specific ATAC peaks in CLL and in normal B cells. Fold change is not enough. We need an indication of peak height and width. The data here is over-processed to the point where you no longer know what it looks like. Where is some primary data? It needs stating if the X axis in S5A is a natural or a log2 scale. If they are using a 1.13 threshold, this has to be a log scale = ~2.3 actual difference.
69.
The figure has been moved and updated as requested to Fig EV1D. Primary ATAC-seq data can be found throughout the manuscript, e.g., in the new Fig EV1F and in Figs 3 and 4.

70. Fig 5B does not necessarily show TCF occupancy; it more likely shows sequence-specific cleavage around a central TCA motif. This pattern is characteristic of the sequence bias of nucleases. Footprints normally look very different and are wider with a deeper drop at the footprint. The EBF data is less clear but this may also be influenced by sequence specificity, which could explain the pattern. If we do not know how rigorous the calling is, it is hard to have confidence in 5C and 5D. A simple table of actual motifs identified, and the % occurrence, would help to make fig S5C more meaningful.
In the case of EBF1 we have performed ChIP-seq to validate our findings, as illustrated in Figure 6. The information requested by the reviewer about the motifs is now comprehensively displayed in Figures 4A and 4C, including the % of sequences with the corresponding motif. We have now stated in the text that the transposase has a sequence bias, which is a caveat of the method; nevertheless, we are confident that the differences depicted in the plot are valid.
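For illustration, the aggregation behind such footprint plots can be sketched as follows (simulated insertion counts; note the comment on the Tn5 bias, which in real data should be handled with a sequence-bias correction):

```python
import numpy as np

# Tn5 insertion counts in a window around each motif occurrence
# (simulated; rows = motif sites, columns = positions).
rng = np.random.default_rng(2)
n_sites, window = 500, 101
cuts = rng.poisson(2.0, size=(n_sites, window)).astype(float)
cuts[:, 45:56] *= 0.4        # simulate protection (a dip) at the motif

# Caveat from the text: Tn5 inserts with a sequence preference, so a raw
# profile around a fixed motif partly reflects transposase bias; real
# analyses should apply a hexamer-style bias correction.
profile = cuts.mean(axis=0)
center = window // 2
flank = profile[:20].mean()
motif = profile[center - 5:center + 6].mean()
print(f"flank {flank:.2f} vs motif {motif:.2f}")
```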
71. I find it hard to know whether the gene regulation pairs defined by co-occurrence of sequence reads in scATAC are reliable or not. This is an interesting approach, but it is hard to see how they can reliably get the pairing right. In a uniform population of cells, expressing the same genes, there is no reason why they would detect an active promoter/enhancer pair in one cell and not another. It seems there is a lot of room for error here. These pairings should be mostly within 100 kb, rarely more than 1 Mb apart, but the pairings shown in S5E are typically 100 Mb apart. This seems to confirm that the method is not very reliable.
We have extended the "quality control" of our scATAC-seq analysis in Appendix Fig S5. First, we are convinced of the high quality of our scATAC-seq data as reflected by the high number of integrations per cell and the excellent agreement of the aggregated data with the bulk ATAC-seq profile. As discussed above in the context of reviewer comment 28, scATAC-seq data are qualitatively different from chromosome conformation capture data because correlations can arise from both physical contacts in cis or from co-regulation in trans and reflect a bona fide active state in contrast to 3C based interactions. That being said, it is noted that 68% of the promoter-enhancer pairs identified from our scATAC-seq are also found in the 4D nucleome database (Teng et al, 2015), which further corroborates the validity of our analysis.
72.
On page 10, the authors seem to be using differential footprinting to assess CTCF occupancy. Differential occupancy analysis here would actually be much more reliable if they simply mapped ATAC sites at known CTCF sites or motifs. An insulator has one or more sharp ATAC peaks with a single CTCF motif in it. Footprinting is not needed here and is more likely to generate false differentials, because footprinting has more limitations than ATAC peak calling.
Our data set now includes EBF1 and CTCF ChIP-seq data, which validate our ATAC-seq analysis. As depicted in Figure EV5 and the associated text, the conclusions on CTCF occupancy derived from the ATAC-seq data are fully validated by the CTCF ChIP-seq results. Based on the CTCF ChIP-seq analysis, CTCF is lost at 5964 sites and gained at 441 sites in CLL, with an overlap of 55% and 47% of the sites, respectively, as identified also in the differential ATAC-seq analysis.
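The overlap percentages are obtained by plain interval intersection of the two differential site lists; a toy sketch of the logic (invented intervals) is:

```python
import pandas as pd

# Toy differential CTCF ChIP-seq sites and differential ATAC-seq sites.
ctcf = pd.DataFrame({"chrom": ["chr1"] * 3,
                     "start": [100, 5000, 9000],
                     "end":   [400, 5400, 9300]})
atac = pd.DataFrame({"chrom": ["chr1"] * 2,
                     "start": [250, 9100],
                     "end":   [700, 9250]})

# Pair sites on the same chromosome and keep overlapping intervals.
pairs = ctcf.merge(atac, on="chrom", suffixes=("", "_atac"))
overlaps = pairs[(pairs["start"] < pairs["end_atac"])
                 & (pairs["end"] > pairs["start_atac"])]
n_hit = overlaps[["start", "end"]].drop_duplicates().shape[0]
print(f"{n_hit / len(ctcf):.0%} of differential CTCF sites overlap differential ATAC sites")
```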
73. The motif analysis in fig 7E is not valid, because the enhancers were not correctly identified. Stretches of modified nucleosomes in coding regions should not have more motifs than random sequence. It is possible that the analysis ends up biased towards simple motifs that exist everywhere. To interpret this, a motif table is needed with the actual motifs and with % found, and the predicted % background values shown. P values become unreliable when dealing with what seems to be 10% of the genome because of the vast number of motifs found.
As stated in response to point 52, the TF motif analysis has been removed.
74.
On page 4 and on page 11 the authors seem to want the region downstream of TCF4 to be both an enhancer and the co-regulated LINC01929 gene. This may actually be an enhancer-derived transcript. Maybe this should be clarified, and the predicted enhancer better defined.
We have now marked the predicted enhancer downstream of TCF4 in Fig 1A, with only a small part overlapping with LINC01929. The weak RNA signal that is detected in the CLL rescaled RNA-seq track is also outside of this gene.
75.
For the above reasons, the 231 Mb of novel genic enhancers (8% of the genome) referred to on page 13 are more likely to be just active gene coding regions and not enhancers. The authors will need to redo their analyses of histone modifications and ATAC peaks to separate active genes (with e.g. H3K27Ac) away from promoters and enhancers defined as open chromatin regions.
The link to MACS seems to be the wrong address.
This error has been corrected.
78.
Hyphens are missing in the protein names for IL-4 and NF-kB, and kappa should be a Greek letter, and Sp1 and Sp2 have p in lower case.
We have corrected the spelling of NF-kB when we refer to the protein complex. For the individual proteins we use the HGNC nomenclature (all letters in uppercase, gene in italics, no hyphens).
79.
The methods text and the data files should include information as to which build of the genome is being used to define chromosomal coordinates. This seems to be hg19.
This information is now included in the methods section.

Thank you again for submitting your work to Molecular Systems Biology. We have now heard back from the two referees who agreed to evaluate your study. I apologize for the slow process, which was due to the late arrival of one of the reports and the need to perform a consultation with the reviewers regarding the remaining issues. As you will see below, the reviewers appreciate that the performed revisions have improved the manuscript. However, they think that several issues remain, and as such we would ask you to address them in a major revision of the manuscript. Typically, our editorial policy is to restrict major revisions to a single round. However, since in this case the reviewers do not request additional experiments, and considering that most of the requested additional analyses sound feasible, we would like to offer you a chance to revise the study.
In brief, the most fundamental issues that need to be addressed are the following:
- ChromHMM needs to be used with caution when defining enhancers, since strictly speaking it is a tool for detecting genomic states but cannot be used on its own to define enhancers. The reviewers provide constructive suggestions on how to use additional features to define enhancers.
-The motifs used for the motif analysis need to be better specified and shown in the manuscript. Moreover, the TF regulator analyses need to be performed with more stringent criteria and perhaps in more restricted genomic regions, to avoid false positives. Both reviewers provide constructive suggestions in this regard.
-The super-enhancer calling needs to be refined.
-Reviewer #1 also recommends performing further analyses to better support the proposed promoter-enhancer interactions. As the reviewer mentions, confirming the interactions by Hi-C or 4C would be rather challenging, but s/he proposes alternative options for analyses to address this point.
-As a side note, we would recommend further emphasizing the resource value of the study (which was also explicitly acknowledged by reviewer #2 in the initial round of review) e.g. by adding a sentence in the abstract.
All other issues raised by the reviewers would need to be convincingly addressed. Reviewer #3 raises strong concerns mainly regarding the enhancer calling and the motif analysis. We think that all these issues can be addressed by following the recommendations of reviewer #1. During our cross-commenting process (in which the reviewers are given the chance to make additional comments, including on each other's reports), reviewer #1 provided further comments and recommendations on how to address the concerns of reviewer #3 on these two aspects, which I have pasted below for your reference (see REVIEWER #1, comments during the referee cross-commenting process). Please do let me know in case you would like to discuss further any of the comments of the reviewers. We could also have a phone call if you think this would be useful.

Reviewer #1:
Main comments
The authors have substantially improved the organization of the manuscript and consecutive sections are now better connected. The integration of the scATAC-seq with the GRN is interesting, but the number of identified regulators is very high, suggesting that the integrated method introduces many false positives. The TPs in the ROC curve (shown in the rebuttal) are also based on motif analysis of the same data, rather than true known regulators from the literature, thereby providing little evidence for the predictive accuracy of the method.
The authors made an effort to validate the interactions derived from co-accessibility in scATAC-seq with data from the 4D nucleome database. Unfortunately, no 4C-seq experiments (as suggested) were included for validation. Also, ChIP-seq experiments for CTCF and EBF1 were performed in CLL patients and healthy controls. Taken together, the additional experiments are sufficient to validate some of the results, but were not used for a genome-wide verification. Instead, the ChIP-seq analysis is mainly limited to a few examples, and the analysis of the newly derived data raises some questions and is not described in the methods section.
The manuscript will substantially benefit from a genome-wide comparison of some of the model/epigenetic data derived hypothesis with the 4D interactions/CTCF and EBF1 ChIP-seq. This could be addressed in a few figure panels. See specific comments below.
Specific comments

Page 1) 1,378 differential regulators (TFs and chromatin modifiers) out of 2,804 were detected between CLL and healthy controls. This number is extremely high and likely includes many false positives. Can the authors compare these results with some of their collected data to support this claim? E.g. how many of these regulators are (highly) expressed or differentially expressed in/between CLL and healthy controls? How many of the TF motifs are enriched in the differential ATAC-seq peaks compared to all ATAC-seq peaks?
Page 8) Super-enhancers (SE) are called using the ROSE program by stitching neighboring TF binding sites that have overall enriched H3K27ac. This means that broad H3K27ac domains need to have multiple TF binding sites in order to be called a "super-enhancer". In the original manuscript (Whyte et al 2013), the authors used Med1 and H3K27ac. Here, the authors could use the ATAC-seq peaks and H3K27ac. By calling every H3K27ac region larger than 10 kb (which is not uncommon for some promoters) a SE, many false positives might be introduced. Furthermore, the authors may consider removing SEs that overlap a TSS, to make sure these are genuine enhancers. Lastly, are the 6.3% and 5.5% of the SEs unique to the study (compared to what then?) or unique to the CLL/healthy control? The latter makes more sense, and it would be of interest to perform a differential motif analysis in the ATAC-seq peaks of the differential SEs, using all (union CLL and healthy) ATAC-seq peaks within a SE as control.
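For reference, the stitching and ranking logic under discussion can be sketched as follows (toy coordinates and signals; the actual ROSE implementation can additionally exclude TSS-proximal regions and derives the super-enhancer cutoff from the inflection point of the ranked signal curve):

```python
import pandas as pd

# Toy enhancer peaks with an H3K27ac signal per peak, sorted by position.
peaks = pd.DataFrame({
    "chrom": ["chr1"] * 5,
    "start": [1000, 9000, 40000, 47000, 120000],
    "end":   [2000, 10000, 41000, 48000, 121000],
    "signal": [5.0, 7.0, 30.0, 28.0, 3.0],
}).sort_values(["chrom", "start"])

# Stitch peaks on the same chromosome that lie closer than 12.5 kb.
gap = 12_500
new_region = (
    (peaks["chrom"] != peaks["chrom"].shift())
    | (peaks["start"] - peaks["end"].shift() > gap)
)
peaks["region_id"] = new_region.cumsum()

# Rank stitched regions by total signal; super-enhancers are the regions
# above the inflection point of this ranked curve.
stitched = peaks.groupby("region_id").agg(
    chrom=("chrom", "first"), start=("start", "min"),
    end=("end", "max"), signal=("signal", "sum"),
).sort_values("signal", ascending=False)
print(stitched)
```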
Page 10) CTCF binding was lost at 5,964 sites and gained at 441 sites. Such a dramatic change in CTCF occupancy within the same cell type is a spectacular result, but may indicate a flaw in the analysis. Are these genuine CTCF binding sites (e.g. called with MACS2 and p < 1e-8) that have significant changes (e.g. Diffbind or DESeq2) in occupancy? How many CTCF binding sites were detected in total? Figure EV5B indicates sites with very low CTCF coverage, which are likely background. Please add a description of the analysis (similarly for EBF1) to the methods section.
Page 11) 90% of the enhancers had CTCF stably bound in both cell types. While CTCF certainly overlaps with promoters and enhancers, 90% is a lot. Are the authors sure that these are genuine CTCF binding sites (see comment above)?
Page 11) "We found that 68% of these pairs were also listed as spatial contacts in the 4D nucleome database (Teng et al, 2015), suggesting that many promoter-enhancer pairs involve physical contacts. In total 3955 promoter-enhancer pairs were identified, with most promoters being connected to one enhancer at mean and median distances of 32 kb and 20 kb (CLL) and 23 kb and 10 kb (non-malignant B cells), respectively". The average promoter-enhancer distances are very short and it will be hard to validate these with 4C-seq or Hi-C given the high background from proximity ligation at this distance from the promoter viewpoint. Therefore, it is surprising that 68% of these interactions could be validated. Furthermore, an upper limit of 100 kb might be too short. The authors should consider the significant interactions derived from a (B-cell specific) 4D nucleome data set and set their lower- and upper-bound distances to e.g. 10% and 90% of the significant interactions in the 4D nucleome data set. Finally, the authors should describe which datasets were taken from the 4D nucleome, whether these are B-cell specific (or closely related) and how the analysis and comparisons were performed (methods section).

The authors suggest that the correlation in scATAC-seq is predictive of (CTCF) looping, but provide only one example. Given that a relatively high number of differential CTCF sites were found, one would expect a coordinated down-regulation in the scATAC-seq signal of CTCF sites that interact and are downregulated in CLL patients relative to control. Can the authors e.g. show a scatter plot where the correlation in the ATAC-seq signal decrease of two interacting loci is associated with the loss of CTCF occupancy in these two loci, for every differential CTCF binding site overlapping the scATAC-seq signal? A genome browser view of the ATAC-seq and CTCF signal for an exemplary locus would be a nice illustration for the supplement to really reinforce this point.
Similarly, the EBF1 ChIP-seq was not/barely used to validate some of the results. How many peaks were found in CLL and healthy controls? How many were differential? Were these associated with gain/loss in ATAC-seq and H3K27ac at promoters or enhancers? Were these enhancers connected to the target genes that were predicted from the model, and to what extent? Fig 6G (referred to in the rebuttal) does not exist. The CTCF, EBF1 and 4D nucleome analyses are missing in the methods section.
Discussion "Our integrative analysis .. can explain more than 80% of the transactional variance of CLL cells". This statement indicates that e.g. a multivariate regression using all the epigenetic signals as predictors can explain the RNA-seq change between CLL and healthy subjects with an adjusted R^2 of 0.80. If this is the case, the authors should show that. However, such an accurate prediction of gene expression changes is typically not feasible, even with the high-quality integrative dataset the authors collected. In that case, the statement should be adjusted.
Transcription factor motif analysis "In addition, only TFs were included that showed a significant differential protein activity (or gene expression for network target genes) as computed from our B cell specific transcriptome network ( Fig 1D)" How do the authors compute protein activity from RNA-seq data?
Text/typos
• while the loss at enhancers might indicate a loss OF enhancer activity (page 1)
• "PMD's were enriched .. conformation capture". Please edit the sentence.
• Infexion point → Inflection point (legend Fig. EV1)

Reviewer #3:

The revised manuscript is improved, but not yet to the point of being acceptable. Some of the deficiencies relate to terminology and can be corrected by reclassifying chromatin features under a different name. However, there are many substantial fundamental defects in both the approach and the analysis in this manuscript, which relate primarily to:

Using an inadequate method to define enhancers: The ChromHMM pipeline is used extensively by the Roadmap Epigenetics Consortium, the authors of ChromHMM, but in my view this is a highly unreliable method to use to predict enhancers as it is not centred on any analysis of open chromatin regions. Based on the results of this manuscript it is highly likely that a large fraction of the predicted enhancers or changes in enhancer state relate more to changes in the activities of transcription units than enhancer activity. It may be relevant that ChromHMM was a default track on the hg18 UCSC genome browser, but not on the more recent hg19 and hg38 browsers, maybe because it has gone out of favour as a useful tool. The main flaw in this approach is that active genes are essentially covered in varying degrees of H3K4me1, me2 and me3 and H3K27ac, as follows: H3K4me3 concentrates +/- ~1 kb, K4me2 extends a few more kb and K4me1 extends over most of the gene. While H3K27ac is highest on either side of the promoter, it extends over the whole gene. H3K36me3 is concentrated on the 3' half of genes. This means that the three different ChromHMM enhancer states, especially genic enhancer and active enhancer 2, will often represent transcribed chromatin and not enhancers. Without alignment with well curated ATAC data, which excludes minor ATAC peaks associated with active nucleosomes, the ChromHMM data has no predictive value. This is abundantly clear in fig 1D, which does at least filter for overlap with ATAC data, but shows that the modifications predicted for active enhancers (enhancer 1 + H3K27ac) are in the minority. The biggest enhancer groups are defined on the basis of either "active enhancer 2", which is just H3K4me1, found at the 5' half of genes, or "genic enhancer", which is H3K36me3, K4me1 and K27ac, found at the 3' half of genes. My original perusal of the table of 238,000 enhancers suggested that these often comprised whole coding regions. The enhancer state 1, marked by K4me1 and K27ac, which should be found flanking enhancers but not at the enhancer itself, is the most useful group included. However, this is a tiny fraction of the population of peaks in fig 1D. For confirmation of these views, I suggest that the editor and authors look at Fig that shows one motif linked to one ATAC peak. For most of the paper, the criteria described on p22 for inclusion as an enhancer are just 2 of 6 patterns that do not have to include open chromatin, which should be obligatory. This leads to swamping of the data pool with active chromatin marks commonly found on active genes. Worse than that, most of the 238,000 active chromatin regions do not appear to contain ATAC peaks, meaning that any motif analysis is being performed on nucleosomes, not enhancers, and is meaningless. Furthermore, if 2/6 chromatin modifications are needed to define enhancers, this excludes enhancers at open chromatin regions where TFs are bound and there are no histones. This means that true enhancers and enhancer state 1 are mutually exclusive.
This is also evident in the table of 238,000 enhancers, where some actual potential regulatory elements are excluded. It is significant that a recent study published by Ott et al on the same subject this month in Nature Genetics used just ATAC and H3K27ac, and came to some different conclusions. It is at least apparent that the target genes identified by Ott et al seem to be different to those identified here.

Flawed motif analysis: From the outset, the construction of a gene regulation network relies on motifs, but the motif table in table S2 is full of errors. If these motifs were used in the motif analyses, then all these analyses are fatally flawed, as follows: (i) The NFAT motif listed is AAGAAGAAA. This will never bind NFAT, which binds (A/T)GGAAA. (ii) The authors are confused by the TCF nomenclature, whereby for example human TCF3 and TCF4 belong to the HLH E2A CACCTG E-box class whereas mouse Tcf3 and TCF1/LEF1 are HMG proteins that bind CAAAG. The authors imply that they are including both motifs in their E2A searches. (iii) The wrong RUNX motif is used. The right motif for RUNX proteins is TGTGGT(T/C)(T/A) without upstream CCCC, which will greatly skew the analyses, possibly making it 200x less likely to hit the basic motif. (iv) The FOX motif is not right, and at the least should start with G/A and not GA. (v) The NF-kB motif is better represented by GGGGAAATCCCC (which has an extra A). Another fatal flaw, explained above, is that most motifs defined in this study are defined based on groups of modified nucleosomes, where there are no TFs, and the analyses may actively exclude enhancers bound by TFs where there are no modified histones.

Specific comments:

(1) Regarding terminology, many objections are easily resolved by replacing every single reference to "enhancer regions" on every single occasion, without exception, with a reference to "active chromatin regions", and also in e.g. Fig 1B, 1D etc.
(2) The main persistent problem that remains is the underlying assumption throughout the manuscript that enhancers can be defined as broad regions of modified chromatin. They must by necessity be defined as small open chromatin regions.

(3) In their rebuttal, the authors say they look at motifs +/-1 kb from an ATAC peak (point 58). This will create a massive background as it adds 1700 bp of nucleosomal DNA to each 300 bp enhancer region, where most motifs are +/-100 bp, the window the authors use in Fig 5D. Why not use the same window to define motifs and thereby get better enrichments?

(4) All analyses included in this study should be restricted to those samples that included ATAC, which is most, and always use ATAC as one of the two criteria used to define regulatory elements. This still allows for 3 normal samples where both ATAC and ChromHMM are available, which should be sufficient given the differences between these and the 18 CLLs that can be included.

(5) The authors should show the matrix logos and provide the PWMs used in this study. If these differ to the same motifs annotated on HOMER, these need to be shown alongside.

(6) It would be preferable to simplify the number of motifs depicted in figures such as 3E by restricting the motifs depicted to a single representative example in cases such as ETV/ELK/ETS and KLF where they all bind the same motifs. This figure probably needs ETS, KLF, E-box, SP1, NFY, NRF and no more. Fig 5C only needs one AP-1 and one RUNX motif etc.

(7) As stated above, there is an over-reliance on the ChromHMM gene prediction tool, released in 2010, which has not really gained wide acceptance in the gene regulation field as a front line approach for identifying enhancers. Published evidence from groups such as the Hardison and Cohen groups suggests that enhancers should be defined on the basis of transcription factors or on function. Most studies measuring actual enhancer activity find ~10% of active chromatin regions have enhancer activity. While it is helpful to know about chromatin states in the context of gene regulation, over-use of this tool is likely to lead to misleading results. ChromHMM seems to tell us more about whether a region is transcribed or not. The ChromHMM browser tracks often show enhancers and promoters as 10 kb tracts of chromatin spanning open chromatin regions. As an example, ENCODE shows the first 50 kb of the B cell gene PAX5 at hg18 chr9:36,977,389-37,029,852 is covered by either H3K4me3 or me1 in lymphoblastoid cells and is shown as either enhancer or promoter region on the Broad Institute ChromHMM track in the same cells. This is highly misleading. Just because the current authors get the same patterns as the ChromHMM tracks loaded on the UCSC genome browser does not mean that this tool is suitable to use as the front line approach to defining enhancers. It would in fact be a disservice to the gene regulation field to give de facto acceptance of this methodology by prominently presenting it as a state of the art approach to be used to predict enhancers. It is not; it is an adjunct to define the neighbourhood of enhancers.
To illustrate another feature of histone profiles, some ChromHMM data nicely show a gap between the K4me1 and me3 marks, just where K4me2 would be predicted. This is what is expected in a transcription unit, but ChromHMM would show it as a gap between chains of enhancers. Page 8 line 15 refers to transitions from the quiescent state to the K4me1 E2 state, and from the transcribed K36me3 state to the K36me3/K4me1 genic enhancer state. Both represent gain of K4me1, which may just reflect differences in sensitivity of K4me1 detection, and in the second case depends on an ill-defined boundary between states seen at the 5' and 3' halves of genes. There is plenty of room for over-interpretation of the data here.
(8) The opening figure, Fig 1A, is poorly described and has inconsistencies. The sentence "the histone modifications identified a downstream enhancer, which was activated in CLL cells as judged from the enrichment of H3K4me1 and H3K27ac (Fig 1A)" should be turned the other way around to say that "a potential enhancer region was identified far downstream of the TCF4 gene as a cluster of ATAC peaks embedded within an ~60 kb region of chromatin enriched in H3K4me1 and H3K27ac in CLL cells". This approach needs to be taken throughout the manuscript, whereby any description of a potential regulatory region must begin with a description of the ATAC data. The 2 RNA-seq scales in Fig 1A should have some degree of equalisation across the genome, as it is not acceptable to show normal on a scale of 100 and upregulated on a scale of 8000, as it suggests the opposite of what it is. For example, a uniformly active region of the genome could be used to normalise the FPKM data. Alternatively the authors could adjust the scales to make them equivalent, and then maintain the same ratio in the scales. Furthermore, (i) some of the ATAC peaks at TCF4 look bigger in normal cells, and (ii) the whole TCF4 locus is plastered with H3K4me1 and H3K27ac in CLL. Does that mean the whole TCF4 locus is one big super-enhancer? Because the whole gene is involved, this data could actually be used to undermine the very concept of the "super-enhancer", a process that is already happening in other studies that dispute the relevance of this definition (e.g. at the alpha globin locus, and in Barakat et al recently in Cell Stem Cell).

(9) The significance of the PMDs is unclear. They were defined at 10 kb resolution, which is too coarse to be meaningful. Either 500 or 1000 bp would be better, as a 10 kb region can contain one 10% and one 90% methylated region. Fig 2A shows a highly variegated pattern of methylation, so maybe this is indeed the case. It would be better to show verification with single clones from bisulphite sequencing of a few hundred bp, where meCG is traditionally shown as black dots in a series of clones to reveal the trend. The 50% could also be a population average. Basic mechanisms of activation and repression tend to drive the unstable 50% meCG state one way or the other, so 50% is unexpected at one discrete region. On p8 line 11, when the authors say PMD loss and meCG loss correlate with loss of enhancers, I suggest these changes simply correlate with gain in transcription.

(10) The data on nucleosome changes at promoters could be left out. This section does not provide any clear insights. The data is not convincing.

(11) Some major conclusions in this study are based on the flawed motifs TCF4 (at the centre of the network) and NFAT (e.g. 1E, 5A, EV2F).
(12) The concept of "superenhancers" remains controversial and has not been widely accepted in the basic gene regulation community, even though many groups are now trying to define these regions.
Most problematic with this concept is that many of the superenhancers defined include whole transcription units and promoters flanked by enhancers and other regulatory elements. This configuration was originally defined as the "active chromatin domain", and the clusters of enhancers were defined as LCRs. In my view the redefinition of these as superenhancers is a retrograde step.

The profiles in Fig 5D do not resemble footprints but reflect sequence specificity within an ATAC peak centred on a sequence. Only CTCF shows a true footprint. This figure also highlights the error in the TCF4 motif. This should be the TCF3/TCF4 motif CACCTG, but the authors show the motif for TCF1/LEF1, which is CAAAG. This confirms that the wrong motifs are being assigned.

(16) Too many figures are far too small and could be doubled or trebled in size, especially in the supplement and EV where space is not an issue. As a general criticism, almost all of the genome browser views are at much too low a resolution. Gene regulation and active chromatin are best studied at the level of 100 kb or less.
(17) Although the authors have addressed some concerns, many of the problems highlighted in the first review remain. These have been discussed above, so I will not address the rebuttal letter in detail.
Overall, the authors repeatedly refer to studies from the creators of ChromHMM to validate the use of ChromHMM, which is hardly an independent validation. If other groups are already using better targeted approaches to define enhancers in CLL, and getting different answers, it does not justify publishing a less well-designed study that may contradict them.
REVIEWER #1 (comments during the referee cross-commenting process):

Enhancer definition and motif analysis

Rev #3 is correct that ChromHMM should not be used to define (active) enhancers; in particular, not if the follow-up purpose is to define enriched motifs in these enhancers. ChromHMM tiles the whole genome into bins with a binary signal (presence/absence) of the chromatin mark. From the individual chromatin marks, a user-defined number of compound "states" are assigned. Importantly, even the presence of a very low signal (e.g. low levels of H3K4me1, H3K27ac etc.) may cause such tiles to be defined as "enhancers", without any nucleosome-depleted region. The authors can easily verify this by including the ChromHMM segmentation in a genome browser. They will see that many enhancers with an active state overlap H3K27ac and ATAC-seq peaks, but that an even higher number of regions with a very low signal will share this state. In short, ChromHMM might be useful to segment the genome into different states, but will introduce many false positives when used as an "enhancer detection tool". Moreover, even ChromHMM segments that overlap the truly active enhancers are often large regions enriched in H3K4me1/H3K27ac from which the nucleosome-depleted regions cannot be easily inferred. This is indeed critical for a reliable motif analysis.
I suggest defining enhancers by calling peaks (e.g. with MACS2) on the ATAC-seq data and the H3K27ac data (the latter possibly with the -broad setting). Where multiple replicates are available, it is recommended to use the IDR framework to make sure peaks are reproducibly found in two replicates. It is recommended to upload the tracks of the ATAC and H3K27ac peaks to a genome browser and verify that many ATAC peaks are fully embedded within a broader H3K27ac region. There will also be ATAC-seq peaks without overlapping H3K27ac, which often correspond to CTCF binding sites or TFs related to repressive marks like H3K27me3. The intersection (e.g. with bedtools intersect) of the ATAC-seq and H3K27ac tracks is a good proxy for active promoters and enhancers.
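For concreteness, the suggested intersection can be sketched in a few lines of Python. This is a minimal illustration only, not the pipeline used in the manuscript; it assumes plain three-column BED files produced by a peak caller such as MACS2, and the file names are hypothetical.

```python
from collections import defaultdict

def read_bed(path):
    """Parse a plain three-column BED file into {chrom: sorted [(start, end)]}."""
    peaks = defaultdict(list)
    with open(path) as fh:
        for line in fh:
            chrom, start, end = line.split()[:3]
            peaks[chrom].append((int(start), int(end)))
    for chrom in peaks:
        peaks[chrom].sort()
    return peaks

def atac_within_k27ac(atac, k27ac):
    """Return ATAC peaks overlapping an H3K27ac region (both dicts from read_bed)."""
    kept = []
    for chrom, peaks in atac.items():
        regions = k27ac.get(chrom, [])
        i = 0
        for start, end in peaks:
            # both lists are sorted by start, so a single forward sweep suffices
            while i < len(regions) and regions[i][1] <= start:
                i += 1
            if i < len(regions) and regions[i][0] < end:
                kept.append((chrom, start, end))
    return kept

# usage (hypothetical file names):
# active = atac_within_k27ac(read_bed("ATAC_peaks.bed"), read_bed("H3K27ac_peaks.bed"))
```

This reproduces the behaviour of `bedtools intersect -u` for this simple case, keeping each ATAC peak that touches at least one H3K27ac region.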
Rather than inferring motifs in all the enhancers, the authors may want to look into differential enhancers between CLL and healthy cells. Such differential enhancers can be defined by significantly altered ATAC-seq levels AT the ATAC-seq peak and/or strongly altered levels of H3K27ac at the H3K27ac regions overlapping the ATAC-seq peaks. Altered enhancer activity is not always evident from an increased/decreased nucleosome depletion and so both methods are worthwhile to investigate. Homer can be used for motif enrichment and all enhancers defined in the above way should be used as a background set.
For the motif analysis, the authors may want to discard regions in close proximity to an annotated TSS, or treat these as active promoters. The ATAC-seq/H3K27ac intersecting regions will typically be several hundred base pairs and can be used for motif analysis. Alternatively, the ATAC-seq summit +/-150 bp can be used (see comments rev #3). Binding sites in the CTCF and EBF1 ChIP-seq data should also be defined using peak calling, e.g. by MACS2, and not by clustering reads over the whole genome. Altered binding levels should be assessed with an appropriate tool like DESeq or Diffbind. A correct implementation of these basic analyses is critical and should take place before more sophisticated (network) models are pursued.
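A minimal sketch of the summit-window preparation described here, with TSS-proximal peaks discarded; the input structures (summit positions and per-chromosome sorted TSS lists) and all names are hypothetical, and a real analysis would draw the TSS coordinates from an annotation such as RefSeq.

```python
import bisect

def summit_windows(summits, tss_by_chrom, flank=150, tss_exclusion=1000):
    """summits: [(chrom, summit_pos)]; tss_by_chrom: {chrom: sorted [tss_pos]}.
    Returns +/-flank windows around summits that are not TSS-proximal."""
    windows = []
    for chrom, pos in summits:
        tss = tss_by_chrom.get(chrom, [])
        i = bisect.bisect_left(tss, pos)
        neighbours = tss[max(0, i - 1):i + 1]   # closest TSS on either side
        if any(abs(pos - t) <= tss_exclusion for t in neighbours):
            continue                            # TSS-proximal: treat as promoter
        windows.append((chrom, max(0, pos - flank), pos + flank))
    return windows
```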
Nomenclature
The nomenclature around Tcf3/Tcf4 can be confusing and can refer to the Tcf/Lef family (official symbols Tcf7l1 and Tcf7l2) or the bHLH motif family (E2a). When the authors assign genes/TFs to motifs, official gene symbols should be used. The consensus motifs or logos shown in the paper should be the same as the ones used for the motif analysis. Homer provides these motifs: http://homer.ucsd.edu/homer/motif/HomerMotifDB/homerResults.html or they can be reconstructed from the PWMs that were used. The authors should use the official gene symbol when annotating a motif with a gene. In the case that the motif is highly degenerate (e.g. AP-1 or ETS motifs) it is helpful to include the RNA-seq data and show the expression pattern for the highest expressed/most differential members of that motif cluster.
Superenhancers
Superenhancers have been defined by the group of Richard Young (Whyte et al. 2013) as H3K27ac regions containing multiple Med1 binding peaks. They are also called stitched enhancers, which is probably a better name, as it better describes their nature: it is a cluster of individual enhancers, not one enhancer with a particular "super" capability of driving gene expression. If there is anything special about them, it is their proximity to each other and to lineage specifying/cell identity genes. In defense of Richard Young's group they have never claimed that interstitial regions are also enhancers and provide the locations of the individually stitched enhancers in their paper. Thus, it is not surprising that interstitial regions have no enhancer activity. In the context of this paper, the individual "super enhancer constituents" should be contained in the set of all enhancers. It is only useful to define superenhancers in the CLL context when these are located near/driving the transcription of key CLL genes. Also here, a differential motif analysis (using the SE embedded ATAC-seq peaks) between CLL and healthy subjects would be of interest. Are these same motifs enriched as in the "normal" enhancers?
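For orientation, the stitching step at the core of the ROSE approach described above can be sketched as follows. The 12.5 kb merge distance is the tool's published default; the subsequent ranking of stitched regions by H3K27ac load and the inflection-point cut-off are omitted, so this is only an illustration of the merging logic, not the actual tool.

```python
def stitch_enhancers(enhancers, gap=12500):
    """Merge same-chromosome enhancers lying within `gap` bp of each other into
    one stitched region, as ROSE does before ranking regions by H3K27ac signal.
    enhancers: iterable of (chrom, start, end)."""
    stitched = []
    for chrom, start, end in sorted(enhancers):
        if stitched and stitched[-1][0] == chrom and start - stitched[-1][2] <= gap:
            pchrom, pstart, pend = stitched[-1]
            stitched[-1] = (chrom, pstart, max(pend, end))   # extend previous region
        else:
            stitched.append((chrom, start, end))
    return stitched
```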
Response to reviewer comments for revised manuscript MSB-18-8339R
We appreciate the detailed comments of the reviewers and the editor to our revised study and are particularly grateful to reviewer #1 for the additional work he/she has put into the constructive cross-commenting of the various issues raised. In the General Comments 1-3 below we have summarized how we have addressed recurring issues. In addition, we have carefully considered, discussed and addressed all comments in the point-by-point response (reviewer comment highlighted in blue and renumbered). A number of additional analyses have been included as requested by the reviewers. For the enhancer annotation, different approaches exist as discussed below and we provide additional data and explanations for the workflow we have chosen. After having compared the different methods that have been suggested, we, as the authors of the paper, ask to be given some freedom to select what we consider the best suited method. We would like to emphasize that we make all data accessible so that other scientists can use them to conduct alternative analyses as they wish.
General comment 1, Enhancer annotations
In our opinion, the combination of ChromHMM states and ATAC signal provides additional information over an enhancer annotation based only on H3K27ac and ATAC-seq. It has been shown previously that H3K4me1 and H3K9ac are highly informative about the active enhancer state (Calo & Wysocka, 2013) and that active enhancers may also lack H3K27ac (Pradeepa et al, 2016). Furthermore, the addition of the H3K36me3 mark makes it possible to distinguish intragenic from intergenic enhancers. Accordingly, we consider it advantageous to annotate potential enhancers with ChromHMM from a combination of peaks called from H3K27ac, H3K4me1, H3K9ac and H3K36me3 data sets and then additionally consider the ATAC signal, bidirectional transcription or differential DNA methylation to annotate predicted enhancers, rather than only use H3K27ac and ATAC measurements. Furthermore, there are certain regions we would wish to exclude based on the ChromHMM context (e.g. unannotated promoters or regions where the H3K27ac signal occurs with repressive marks, both of which are identified in our analysis).
As proposed by the reviewers, we have rephrased the text to clarify that the ChromHMM states alone do not represent an "enhancer definition". To make this clear, we have also renamed the ChromHMM states in Fig. 1B as "Active 1" (state 10 with H3K4me3, predictive for TSS), "Active 2" (state 9 with H3K4me1, H3K27ac, predictive for active intergenic enhancers), "Active 3" (state 1 with H3K4me1, H3K27ac and H3K36me3, predictive for active intragenic enhancers), "Active 4" (state 11 with H3K27ac, H3K9ac, predictive for a weakly active enhancer state that lacks H3K4me1) and "Poised" (state 8 with H3K4me1 but no H3K27ac, predictive of a poised intergenic enhancer state). These states were derived from ChromHMM by using binarized data obtained after peak calling of the 7 histone modifications as input. Thus, peak calling is implicitly included in our ChromHMM chromatin state annotation. Additionally, we would like to note that the presence of an ATAC signal is not a gold standard for active enhancers, in line with comment #9 of reviewer 1: (i) Low nucleosome occupancy (for example at poly(dA:dT) tracts or unusual DNA structures like R-loops) can lead to accessible chromatin regions that are independent of TF binding. (ii) Increased accessibility could also reflect the binding of a repressive complex. (iii) TF binding can also occur without a concomitant gain of chromatin accessibility, as inferred previously for binding of some TFs like Oct4 and Sox2 to sites with high nucleosome occupancy (Soufi et al, 2015; Teif et al, 2012). Furthermore, some promoters are active but show no ATAC signal although they must be bound by multiple general and specific TFs to be transcribed.
General comment 2. Enhancer annotations used in our study
We have carefully considered and discussed the reviewer comments as described in the current revision letter and in the previous one. We selected the following approach for annotating different sets of enhancers in CLL cells and non-malignant B cells (NBCs):
• ChromHMM was used to annotate potential enhancer regions as states 1, 8, 9 and 11 (including putative poised or weakly active states) by using peak calling data of histone modifications as input. Different combinations of these states were used to derive different enhancer sets after intersection with the ATAC-seq signal (see below).
• The super-enhancers (SEs) were identified with the ROSE software from the Young lab (Loven et al, 2013;Whyte et al, 2013). It is the standard tool to call SEs and has also been used in a recent study of CLL SEs (Ott et al, 2018). Following the suggestions from both reviewers, we have refined the SE calling with the ROSE tool by using the H3K27ac read signal within the overlap region of ATAC and H3K27ac peaks.
• All promoter regions, as defined by a region of ±1 kb around the TSS according to the RefSeq database, were removed from all our enhancer annotations.
• In order to identify TF motifs gained or lost in CLL within the ATAC peak regions, the differential ATAC signal was intersected with either all four states 1, 8, 9 and 11 (Fig. 5A,C), or only with the predicted active enhancer states 1 and 9, which carry a strong H3K27ac signal (Fig. S5A), or with the predicted poised/weak enhancers from states 8 and 11 (Fig. S5B).
• For the lists of active enhancers in CLL and in NBCs, the ChromHMM states "Active 2" (state 9, H3K4me1, H3K27ac) and "Active 3" (state 1, H3K4me1, H3K27ac and H3K36me3) were intersected with regions of ±1 kb around the center of each ATAC-seq peak (see the sketch after this list). This size was selected based on the average extension of key enhancer marks (H3K27ac, H3K4me1, p300, bidirectional transcription) (Chen et al, 2018). We do not consider it excessively long, since the nucleosome-depleted regions can also comprise more than one nucleosome and definition of the histone modification state requires multiple nucleosomes flanking the nucleosome-depleted regions. The resulting list of predicted active enhancers comprises 10145 loci in CLL and 7312 in NBCs, of which 4771 are shared between the two groups (see Dataset EV09). These enhancer lists were used for further analysis and integration into the regulatory network described in Fig. 5 and 6 and the associated expanded view and supplementary figures.
• The predicted active intronic and intergenic enhancer ChromHMM states (1 and 9, see above) were intersected with sites of bidirectional transcription to yield a set referred to as "transcriptionally active enhancers".
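For illustration, the intersection step described in the bullets above can be sketched as follows. This is a minimal sketch, not our actual pipeline code; the input structures, the exact-coordinate matching used to count shared loci, and all variable names are simplifying assumptions.

```python
def overlaps(chrom, start, end, regions):
    """regions: {chrom: sorted [(start, end)]}. Simple scan with early exit."""
    for s, e in regions.get(chrom, []):
        if s >= end:
            break          # sorted input: no later segment can overlap
        if e > start:
            return True
    return False

def active_enhancer_loci(atac_peaks, active_states, flank=1000):
    """atac_peaks: [(chrom, start, end)]; active_states: dict as in overlaps().
    Keeps +/-1 kb windows around ATAC peak centers in active ChromHMM states."""
    loci = set()
    for chrom, start, end in atac_peaks:
        centre = (start + end) // 2
        if overlaps(chrom, centre - flank, centre + flank, active_states):
            loci.add((chrom, centre - flank, centre + flank))
    return loci

# cll = active_enhancer_loci(cll_atac, cll_states)   # reported: 10145 loci in CLL
# nbc = active_enhancer_loci(nbc_atac, nbc_states)   # reported: 7312 loci in NBCs
# shared = cll & nbc                                 # reported: 4771 shared loci
```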
The part of the manuscript associated with Figs 4, EV4, 5, S5 and EV5 has been rewritten to cover these points.
General comment 3, TF motif analysis
We consistently used the full HOMER transcription factor binding motifs. These motifs are experimentally determined and contain the weight of every nucleotide at each position in the motif sequence. Genome-wide binding sites were predicted based on the detection threshold expressed as a log-odds ratio. We would like to emphasize that the TF motif analysis with HOMER was always conducted only within ATAC peak regions of 327 bp median size throughout the manuscript (see point #10 below). This has been stated already (e.g., p. 10: "We identified TF binding motifs that were enriched at ATAC-seq peaks gained in CLL enhancers (Fig 5A)"), but we have further clarified this issue throughout the text. The TF motif analysis for different enhancer annotations is now shown in Fig. 5A,C and Fig. S5. As proposed by reviewer 1, an additional TF motif analysis was conducted by intersecting the differential H3K27ac peaks (excluding promoters) with ATAC peaks, either genome-wide (Fig S5C) or for SEs called as described above (Fig S5D).
It is emphasized that we consistently used the full Homer motifs, also for the set of core TFs with gained/lost binding in CLL in Appendix Table S2. The simplified binding motifs listed previously in Table S2 were only intended as a reference to the complete HOMER motifs. We apologize for the confusion this has created. These sequences have now been replaced with full HOMER motifs retrieved from the differential ATAC-seq analysis. For the results of the TF motif analysis, we now consistently used the HOMER motif names while the HGNC nomenclature is applied when referring to specific TFs that recognize a given motif. As described in the text, in cases where multiple TFs would recognize a motif identified by Homer we have selected those TF(s) that showed a change in activity as computed by VIPER between CLL and NBCs that corresponded to the differential ATAC signal.
Editor comments
1. ChromHMM needs to be used with caution when defining enhancers, since strictly speaking it is a tool for detecting genomic states but cannot be used on its own to define enhancers. The reviewers provide constructive suggestions on how to use additional features to define enhancers.
See General Comment 1.
2. The motifs used for the motif analysis need to be better specified and shown in the manuscript. Moreover, the TF regulator analyses need to be performed with more stringent criteria and perhaps in more restricted genomic regions, to avoid false positives. Both reviewers provide constructive suggestions in this regard.
See General Comment 3
3. The super-enhancer calling needs to be refined.
See General Comment 2.

4. Reviewer #1 also recommends performing further analyses to better support the proposed promoter-enhancer interactions. As the reviewer mentions, confirming the interactions by Hi-C or 4C would be rather challenging, but s/he proposes alternative options for analyses to address this point.
We have expanded this part as requested. When all interactions present in the 4D Genome database (https://4dgenome.research.chop.edu/Download.html) for Homo sapiens were considered, 72% (CLL) and 74% (non-malignant) of the interactions derived from our scATAC-seq analysis were confirmed by the 4D Genome database. These numbers changed to 58% (CLL) and 56% (NBCs) when only 4D Genome pairs for contacts within the 10%-90% distance range were considered, which corresponds to 7.23 kb to 429 kb. Most of the distances in the 4D Genome database are below 100 kb, with the distribution shown below (response to point #21) and similar to that derived for the scATAC-seq data (Appendix Figure S6G). We conclude that our 100 kb cut-off is reasonable.
5. As a side note, we would recommend further emphasizing the resource value of the study (which was also explicitly acknowledged by reviewer #2 in the initial round of review) e.g. by adding a sentence in the abstract.
We have emphasized the resource value of our study in the abstract and the end of the introduction. It is noted that reviewer 2 also commented positively on this aspect of our work: "Together, this represents one of the most impressive and comprehensive epigenetic profiling efforts I have seen for any cancer type, and these data sets will certainly be an invaluable resource for the CLL field, and as a model for future investigations in other cancers. The authors are also to be applauded for their efforts in making not only raw and processed data, but also their custom analysis scripts available to the community." We believe this comment captures an important aspect of our work that should be considered in the context of the discussion of the "best" way to analyze the data. For example, it is straightforward to use the data we provide to annotate predicted enhancer loci with a different approach and then discuss the results in relation to our findings.
6. On a more editorial level, we would like to ask you to make sure that the Author Contributions are correct. Currently SR is listed as a contributor; it seems that this is a typo and that it should be DR for Daniel Remondini?
Yes, this typo has been corrected.
Reviewer #1 (comments during the referee cross-commenting process)

7. Enhancer definition and motif analysis. Rev #3 is correct that ChromHMM should not be used to define (active) enhancers; in particular, not if the follow-up purpose is to define enriched motifs in these enhancers. ChromHMM tiles the whole genome into bins with a binary signal (presence/absence) of the chromatin mark. From the individual chromatin marks, a user-defined number of compound "states" are assigned. Importantly, even the presence of a very low signal (e.g. low levels of H3K4me1, H3K27ac etc.) may cause such tiles to be defined as "enhancers", without any nucleosome-depleted region. The authors can easily verify this by including the ChromHMM segmentation in a genome browser. They will see that many enhancers with an active state overlap H3K27ac and ATAC-seq peaks, but that an even higher number of regions with a very low signal will share this state. In short, ChromHMM might be useful to segment the genome into different states, but will introduce many false positives when used as an "enhancer detection tool". Moreover, even ChromHMM segments that overlap the truly active enhancers are often large regions enriched in H3K4me1/H3K27ac from which the nucleosome-depleted regions cannot be easily inferred. This is indeed critical for a reliable motif analysis.
The rationale for our approach to the enhancer annotation is given in General Comments 1 and 2. As described above, we have clarified that ChromHMM states are not used as an enhancer definition. Furthermore, all motif analysis is exclusively based on ATAC-seq peaks (General Comment 3). We would like to point out that peak calling also results in binarized data and that we have used the binarized data obtained after peak calling for the 7 histone modifications as input for our ChromHMM state annotation. Accordingly, only regions of chromatin with significant enrichment of histone modification signal are used for the segmentation. Thus, we disagree with the statement that our analysis would somehow artificially detect "a very low signal (e.g. low levels of H3K4me1, H3K27ac etc.)" as compared to using only peak-called H3K27ac data (see also below, response to point #8). With respect to the statement "chromHMM segments that overlap the truly active enhancers are often large regions enriched in H3K4me1/H3K27ac from which the nucleosome depleted regions cannot be easily inferred" we would like to reply as follows: Considering our General Comment 1, it is unclear to us how one would define "truly active enhancers" in the absence of data from additional experiments where these regions have been deleted or translocated and the effect on transcription was evaluated. The observation that H3K4me1/H3K27ac regions can cover 10 kb or more does not preclude that this feature is functionally relevant, as discussed in point #49 of our previous revision letter. Whenever a motif analysis was conducted in the manuscript we used the ATAC-seq data/peaks, which themselves comprise small, defined regions, and used the ChromHMM states only to annotate the ATAC peak context. Thus, an extension of the predicted enhancer region beyond the ATAC peak size is not an issue for the TF motif analysis.
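To make the distinction concrete, a minimal sketch of using ChromHMM states only as context for ATAC peaks, rather than as the enhancer definition itself, could look as follows; the input structures and names are hypothetical simplifications of our workflow.

```python
def annotate_peaks(atac_peaks, states):
    """atac_peaks: [(chrom, start, end)];
    states: {chrom: sorted [(start, end, state_label)]}.
    Each ATAC peak (the unit of analysis) is labelled by the ChromHMM
    segment covering its centre."""
    annotated = []
    for chrom, start, end in atac_peaks:
        centre = (start + end) // 2
        label = "state 12 (quiescent)"   # ChromHMM tiles the genome, so this
        for s, e, st in states.get(chrom, []):   # default rarely applies
            if s >= end:
                break
            if s <= centre < e:
                label = st
                break
        annotated.append((chrom, start, end, label))
    return annotated
```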
8. I suggest defining enhancers by calling peaks (e.g. with MACS2) on the ATAC-seq data and the H3K27ac data (the latter possibly with the -broad setting). Where multiple replicates are available, it is recommended to use the IDR framework to make sure peaks are reproducibly found in two replicates. It is recommended to upload the tracks of the ATAC and H3K27ac peaks to a genome browser and verify that many ATAC peaks are fully embedded within a broader H3K27ac region. There will also be ATAC-seq peaks without overlapping H3K27ac, which often correspond to CTCF binding sites or TFs related to repressive marks like H3K27me3. The intersection (e.g. with bedtools intersect) of the ATAC-seq and H3K27ac tracks is a good proxy for active promoters and enhancers.
We have conducted the analysis suggested by the reviewer. It yields a set of enhancers very similar to the one we derived from the intersection of the predicted active ChromHMM enhancer states (states 1 and 9) and the ATAC-seq signal (see General Comment 2). An exemplary data set for one patient is given in Fig. R1.

Figure R1. Overlap of enhancers called by intersecting ChromHMM states 1 and 9 with ATAC peaks (grey) vs. H3K27ac peaks and ATAC in patient sample CLL1. Averaged over all patients, 85 ± 8% of the ChromHMM enhancers overlap with those from the H3K27ac peaks.
The large overlap of 85 ± 8% (average and standard deviation over all patients, ChromHMM vs H3K27ac peaks) is to be expected because states 1 and 9 carry a strong H3K27ac signal that is obtained from the peak-called H3K27ac ChIP-seq data used as input for ChromHMM. Furthermore, the total number of enhancers is somewhat lower in all samples for the ChromHMM-based enhancer calling, with values of 4977 ± 1863 vs 6160 ± 2080 (average and standard deviation over all patients), due to the additionally required presence of certain other histone modifications. Thus, the statement made in point #7 that ChromHMM "will introduce many false positives when used as an enhancer detection tool" as compared to H3K27ac peaks is not correct for our study. Considering these results, we do not think that the enhancer definition from intersection of ATAC and H3K27ac would be an improvement, as also explained above in General Comments 1 and 2 and in the response to point #7. The approach of only using H3K27ac is typically applied if information on other relevant histone modifications is lacking.
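The per-patient comparison behind the 85 ± 8% figure amounts to the following computation (a sketch with hypothetical inputs, reusing the overlaps() interval test from the sketch given under General Comment 2):

```python
from statistics import mean, stdev

def recovered_fraction(chromhmm_enh, k27ac_atac_enh):
    """chromhmm_enh: [(chrom, start, end)];
    k27ac_atac_enh: {chrom: sorted [(start, end)]} from the H3K27ac/ATAC calling.
    Returns the fraction of ChromHMM enhancers recovered by the other set."""
    hits = sum(1 for c, s, e in chromhmm_enh if overlaps(c, s, e, k27ac_atac_enh))
    return hits / len(chromhmm_enh)

# fractions = [recovered_fraction(hmm[p], k27[p]) for p in patients]
# print(f"{100 * mean(fractions):.0f} +/- {100 * stdev(fractions):.0f} %")
```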
9. Rather than inferring motifs in all the enhancers, the authors may want to look into differential enhancers between CLL and healthy cells. Such differential enhancers can be defined by significantly altered ATAC-seq levels AT the ATAC-seq peak and/or strongly altered levels of H3K27ac at the H3K27ac regions overlapping the ATAC-seq peaks. Altered enhancer activity is not always evident from an increased/decreased nucleosome depletion and so both methods are worthwhile to investigate. Homer can be used for motif enrichment and all enhancers defined in the above way should be used as a background set.
All of our enhancer TF motif analysis is based on a differential analysis between CLL and NBCs using HOMER. We performed an analysis using differential ATAC-seq in combination with ChromHMM states (Fig. 5A, C, Appendix Fig. S5A, B), a differential H3K27ac signal at ATAC peaks (Appendix Fig. S5C) and also TF motifs at SE ATAC peaks (Appendix Fig. S5D).
We fully agree that altered enhancer activity is not always evident from an increased/decreased nucleosome depletion (see General Comment 2), which is accounted for by the analysis included in Fig. S5C. It is noted that there is a large overlap of the TF motifs retrieved with the different sets in Fig 5 and Appendix Fig S5.

10. For the motif analysis, the authors may want to discard regions in close proximity to an annotated TSS, or treat these as active promoters. The ATAC-seq/H3K27ac intersecting regions will typically be several hundred base pairs and can be used for motif analysis. Alternatively, the ATAC-seq summit +/-150 bp can be used (see comments rev #3).
See General Comments 2 and 3. In all of our enhancer analysis we have removed regions ±1 kb around the TSS given in the RefSeq database. The TF motif analysis was conducted only in the regions of ATAC peaks, which were called with MACS2, and differential ATAC-seq signals were calculated with DiffBind as described in the Methods section. Our distribution of ATAC-seq peaks had a median size of 327 base pairs, which is very similar to the 300 bp suggested by the reviewer and is shown below in Fig. R2. Since the nucleosome-depleted region can also comprise multiple nucleosomes, e.g., due to the binding of multiple TFs, some more extended nucleosome-depleted regions can arise after peak calling. Accordingly, we prefer to use the actual experimentally determined size of a given peak region rather than a fixed value for all peaks.

Figure R2. Histogram of the ATAC consensus peak list obtained from peak calling of the ATAC-seq data with MACS2. The median peak size is 327 bp.
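For reference, the peak-width distribution in Fig. R2 reduces to a simple computation over the peak-caller output (a sketch; the narrowPeak file name is hypothetical, and the format assumed is the standard tab-separated narrowPeak with start/end in columns 2-3):

```python
from statistics import median

# collect the width of every consensus peak from a MACS2 narrowPeak file
with open("ATAC_consensus_peaks.narrowPeak") as fh:
    widths = [int(f[2]) - int(f[1]) for f in (line.split("\t") for line in fh)]

print("median peak width (bp):", median(widths))   # reported in Fig R2 as 327 bp
```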
11. Binding sites for CTCF and EBV1 [typo, should be EBF1] ChIP-seq should also be defined using peak calling, e.g. by MACS2 and not by clustering reads over the whole genome. Altered binding levels should be assessed with an appropriate tool like DESeq or Diffbind. A correct implementation of these basic analyses is critical and should take place before more sophisticated (network) models are pursued.
There seems to be a misunderstanding. We have indeed used MACS2 to call peaks for both CTCF and EBF1 and applied DiffBind to the resulting consensus peak list for the identification of differential CTCF and EBF1 binding sites from the ChIP-seq data. This has been clarified in the text and in the method section.
12. Nomenclature. The nomenclature around Tcf3/Tcf4 can be confusing and can refer to the Tcf/Lef family (official symbols Tcf7l1 and Tcf7l2) or the bHLH motif family (E2a). When the authors assign genes/TFs to motifs, official gene symbols should be used. The consensus motifs or logos shown in the paper should be the same as the ones used for the motif analysis. Homer provides these motifs: http://homer.ucsd.edu/homer/motif/HomerMotifDB/homerResults.html or they can be reconstructed from the PWMs that were used. The authors should use the official gene symbol when annotating a motif with a gene. In the case that the motif is highly degenerate (e.g. AP-1 or ETS motifs) it is helpful to include the RNA-seq data and show the expression pattern for the highest expressed/most differential members of that motif cluster.
We agree that this issue was presented confusingly and have revised the manuscript accordingly as described in General Comment 3.

13. Superenhancers. Superenhancers have been defined by the group of Richard Young (Whyte et al. 2013) as H3K27ac regions containing multiple Med1 binding peaks. They are also called stitched enhancers, which is probably a better name, as it better describes their nature: it is a cluster of individual enhancers, not one enhancer with a particular "super" capability of driving gene expression. If there is anything special about them, it is their proximity to each other and to lineage specifying/cell identity genes. In defense of Richard Young's group they have never claimed that interstitial regions are also enhancers and provide the locations of the individually stitched enhancers in their paper. Thus, it is not surprising that interstitial regions have no enhancer activity. In the context of this paper, the individual "super enhancer constituents" should be contained in the set of all enhancers. It is only useful to define superenhancers in the CLL context when these are located near/driving the transcription of key CLL genes. Also here, a differential motif analysis (using the SE embedded ATAC-seq peaks) between CLL and healthy subjects would be of interest. Are these same motifs enriched as in the "normal" enhancers?

See General Comment 2. We agree with the reviewer that SEs are most interesting in light of their CLL context, i.e. when they are near key CLL genes, and have already described examples from Fig. 4D, E in the main text on page 9. In addition, exemplary regions of CLL-associated genes with putative SE activation are shown for CREB3L2 (Fig. EV4D) and FMOD (Fig. EV4E). With the refined SE calling (using ATAC peaks in combination with H3K27ac according to the reviewers' suggestion), the list of differential SEs now also includes gain of SE activity near ETV6 and CTLA4, and loss near CDKN1A. A TF motif analysis within ATAC peaks in SEs has been added to the manuscript (Fig. S5D). The motif calling in SEs shows that the top enriched motifs in lost SEs were NFkB-p65 and EBF1 (as seen in the normal enhancers). In the gained SEs we also find the E2A binding motif, in agreement with the other enhancer annotations. Thus, a significant overlap of motifs enriched in SEs and in "normal" enhancers is observed. The text has been revised to account for these changes.
14. Main comments. The authors have substantially improved the organization of the manuscript and consecutive sections are now better connected. The integration of the scATAC-seq with the GRN is interesting, but the number of identified regulators is very high, suggesting that the integrated method introduces many false positives. The TPs in the ROC curve (shown in the rebuttal) are also based on motif analysis of the same data, rather than true known regulators from the literature, thereby providing little evidence for the predictive accuracy of the method.
We thank the reviewer for the positive feedback on our revised manuscript. The integrated network of regulators, target genes and enhancers ("CLL-specific GREN", Dataset EV14) includes only factors that have a differential activity as computed with VIPER or, in the case of the target genes, were differentially expressed according to the DESeq2 analysis (see also the reply to point #17 on the number of identified regulators). Please note that, as described in Methods, our initial B cell gene regulatory network was computed from a completely independent gene expression data set of 264 publicly available samples including normal B cells and B cell lymphomas (Basso et al, 2010). The previously shown ROC curve compared our selection of TFs from the motif analysis at aberrant chromatin features with the TF ranking based solely on the differential activity in CLL computed with VIPER. Furthermore, the scATAC-seq data were only used to derive the promoter-enhancer connectivity for the network from the correlation analysis with our "RWire" R script.
15. The authors made an effort to validate the interactions derived from co-accessibility in scATAC-seq with data from the 4D nucleome database. Unfortunately, no 4C-seq experiments (as suggested) were included for validation. Also, ChIP-seq experiments for CTCF and EBF1 were performed in CLL patients and healthy controls. Taken together, the additional experiments are sufficient to validate some of the results, but were not used for a genome-wide verification. Instead, ChIP-seq analysis is mainly limited to a few examples, and the analysis of the newly derived data raises some questions and is not described in the methods section.
Both CTCF and EBF1 data have been used to validate several conclusions made in our initially submitted manuscript: (i) We predicted from publicly available CTCF data that CTCF is enriched at the PMD borders. This conclusion was now confirmed for our CLL samples by the CTCF ChIP-seq data as shown in the bottom panel of Figure 2D. (ii) The differential EBF1 and CTCF occupancy as inferred from ATAC-seq was validated by the EBF1 and CTCF ChIP-seq. For EBF1 this is shown in Figs. 6E, S8A, S8F. Called CTCF ChIP-seq peaks overlapped with 55% and 47% of gained and lost CTCF binding sites derived from ATAC-seq, respectively. This is now stated in the text. (iii) The CTCF ChIP-seq data reveal clear differences between CLL cells and NBCs (Fig. EV5A). Using DiffBind to extract differentially occupied regions from our CTCF ChIP-seq data we found that CTCF binding is lost in CLL cells at 5964 sites and gained at 441 sites (Fig. EV5B), with 93% of the lost sites overlapping with peaks from the ENCODE data set for the B-lymphocyte cell line GM12875 (GEO GSM749670). (iv) Following the reviewer's comments, we have now further validated that rewired enhancers are stably occupied by CTCF in both B cells and malignant cells. Thus, according to our data, differential CTCF binding does not control enhancer-promoter rewiring but rather has functions in direct regulation of transcription as shown in Figure 5E. This conclusion is in line with other reports showing that CTCF impacts transcriptional activity by stabilizing loops that insulate contact domains (Merkenschlager & Nora, 2016; Nora et al, 2017), while specific enhancer-promoter interactions might involve other factors like YY1 (Weintraub et al, 2017) (see also points #20 and #23). (v) We have included additional genes for the validation of predictions from the EBF1-dependent part of our network (Fig. 6D, E, Fig. S8E). We find that 3 out of 4 enhancers of EBF1 target genes displayed a CLL-specific loss of EBF1 binding as predicted from our network (Results, last paragraph, p. 13).
16. The manuscript will substantially benefit from a genome-wide comparison of some of the model/epigenetic-data-derived hypotheses with the 4D interactions and the CTCF and EBF1 ChIP-seq. This could be addressed in a few figure panels. See specific comments below.
See the response to point #15.
17. (page 1) 1,378 differential regulators (TFs and chromatin modifiers) out of 2,804 were detected between CLL and healthy controls. This number is extremely high and likely includes many false positives. Can the authors compare these results with some of their collected data to support this claim? E.g. how many of these regulators are (highly) expressed or differentially expressed in/between CLL and healthy controls? How many of the TF motifs are enriched in the differential ATAC-seq peaks compared to all ATAC-seq peaks?
This issue has already been discussed in point #63 of our rebuttal letter accompanying the first revised version of our manuscript. For the construction of the B cell network we initially used a precompiled list of 5927 regulatory proteins (TFs, associated co-factors, signaling enzymes etc.) (Alvarez et al, 2016). From this list, 3862 proteins were present in the B cell network. Using our RNA-seq data and the VIPER software we computed activities for 2804 of them and found 1378 regulators differentially active with an adjusted p-value/FDR < 0.05 (corresponding to 140 regulators). At a 0.01 threshold this number would only be slightly reduced, to 1239 regulators being differentially active. The relevant data are in the xls file "Dataset_EV13-B-cell_network" (and, for differential gene expression as calculated by DESeq2, in Dataset_EV11-RNAseq-dif-gene-expr). Please note that the activity values are calculated with VIPER from the gene expression of multiple target genes in the B cell network, with an average of 56 and a median of 45 targets per regulator. This makes the resulting activity values more robust than the differential expression signal of an individual gene and also captures activity changes that do not occur at the gene expression level of a given regulator but arise, for example, from posttranslational modifications. The methodology and the statistical methods are described in our manuscript and the cited references. Finally, we would like to emphasize that our study yields a core set of TFs, listed in Appendix Table S2, that have been derived from the ATAC-based TF motif analysis, are associated with aberrant chromatin features in CLL, and meet the additional requirement of displaying differential activity/gene expression.
18. Page 8) Superenhancers (SE) are called using the ROSE program by stitching neighboring TF binding sites that have overall enriched H3K27ac. This means that broad H3K27ac domains need to have multiple TF binding sites in order to be called a "super-enhancer". In the original manuscript (Whyte et al 2013), the authors used Med1 and H3K27ac. Here, the authors could use the ATAC-seq peaks and H3K27ac. By calling every H3K27ac region larger than 10kb (which is not uncommon for some promoters) a SE, many false positives might be introduced. Furthermore, the authors may consider removing SEs that overlap a TSS, to make sure these are genuine enhancers. Lastly, are the 6.3% and 5.5% of the SE unique to the study (compared to what then?) or unique to the CLL/healthy control? The latter makes more sense, and it would be of interest to perform a differential motif analysis in the ATAC-seq peaks of the differential SE's, using all (union CLL and healthy) ATAC-seq peaks within a SE as control.
We followed the reviewer's recommendations and now included a TF motif analysis within ATAC-seq peaks within SEs as Appendix Fig. S5D.
19. Page 10) CTCF binding was lost at 5,964 sites and gained at 441 sites. Such a dramatic change in CTCF occupancy within the same cell type is a spectacular result, but may indicate a flaw in the analysis. Are these genuine CTCF binding sites (e.g. called with MACS2 and p < 1e-8) that have significant changes (e.g. Diffbind or DESeq2) in occupancy? How many CTCF binding sites were detected in total? Figure EV5B indicates sites with very low CTCF coverage, which are likely background. Please add a description of the analysis (similarly for EBF1) to the methods section.
We agree that it is an exciting finding to detect differential CTCF binding at about 6,000 loci. We stand by our results and are convinced that this is not the result of a technical flaw in our analysis. The CTCF binding sites were called with MACS2 and differential occupancy was calculated by DiffBind using an FDR threshold of 0.01 with standard parameters. A description of the workflow is now included in the Methods section. In total about 35,000 CTCF binding sites were detected for both CLL and NBCs. Each dot in Figure EV5B represents the average normalized log2 value over all samples. Thus, only regions with an at least 10-fold higher coverage than the background were considered in the analysis. The majority of loci showed an enrichment between 16- and 128-fold. Furthermore, our data displayed an 85% overlap with peaks from the ENCODE data set for the B-lymphocyte cell line GM12875 (GEO GSM749670), which gives us confidence that we have identified genuine CTCF sites. Furthermore, 93% of the sites losing CTCF in CLL are also found in the GEO data set (now stated in the text, see also the response to point #15). We would like to note that ~40,000 CTCF binding sites per cell type have been identified in previous studies. It has also been reported that ∼20-50% of these sites show some cell-type-specific binding and are therefore considered "variable" (see our previous work (Teif et al, 2014) and references therein). Thus, considering the large number of different chromatin features between CLL and NBCs detected in our study (nucleosome positioning and DNA methylation are particularly relevant here), a total of ~6,000 or 17% sites with differential binding is well within the range of previously reported variable CTCF binding sites.
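The background filter described above corresponds to the following simple computation (a sketch with hypothetical data structures, not our actual pipeline code):

```python
import math

def enriched_loci(loci, min_fold=10):
    """loci: [(name, coverage, background)] -> [(name, log2_enrichment)].
    Keeps loci with at least `min_fold` coverage over background; most retained
    loci fall in the 16- to 128-fold range (log2 enrichment 4-7)."""
    kept = []
    for name, cov, bg in loci:
        fold = cov / max(bg, 1e-9)   # guard against zero background coverage
        if fold >= min_fold:
            kept.append((name, math.log2(fold)))
    return kept
```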
20. Page 11) 90% of the enhancers had CTCF stably bound in both cell types. While CTCF certainly overlaps with promoters and enhancers, 90% is a lot. Are the authors sure that these are genuine CTCF binding sites (see comment above)?
The 90% number refers to the percentage of rewired enhancers that were constitutively bound by CTCF (see also point #15 and #23). These enhancers were in active chromatin regions (or in other words accessible) in both cell types, which would be in line with the role of CTCF mentioned in point #15.
21. Page 11) "We found that 68% of these pairs were also listed as spatial contacts in the 4D nucleome database (Teng et al, 2015), suggesting that many promoter-enhancer pairs involve physical contacts. In total 3955 promoter-enhancer pairs were identified, with most promoters being connected to one enhancer at mean and median distances of 32 kb and 20 kb (CLL) and 23 kb and 10 kb (non-malignant B cells), respectively".
The average promoter-enhancer distances are very short and it will be hard to validate these with 4C-seq or Hi-C given the high background from proximity ligation at this distance from the promoter viewpoint. Therefore, it is surprising that 68% of these interactions could be validated. Furthermore, an upper limit of 100 kb might be too short. The authors should consider the significant interactions derived from a (B-cell specific) 4D nucleome data set and set their lower and upper bound distances to e.g. 10% and 90% of the significant interactions in the 4D nucleome data set. Finally, the authors should describe which datasets were taken from the 4D nucleome, whether these are B-cell specific (or closely related) and how the analysis and comparisons were performed (methods section).

We have expanded this part as requested. When all interactions present in the 4D Genome database (https://4dgenome.research.chop.edu/Download.html) for Homo sapiens were considered, 72% (CLL) and 74% (non-malignant) of the interactions derived from our scATAC-seq analysis were retrieved in 4D Genome. These numbers changed to 58% (CLL) and 56% (NBCs) when only 4D Genome pairs for contacts within the 10%-90% distance range were considered, which corresponds to 7.23 kb to 429 kb. These additional data are now mentioned in the legend to Fig. S6G. Most of the distances in the 4D Genome database are below 100 kb, with a distribution (Fig. R3) similar to that derived for the scATAC-seq data. Thus, the 100 kb cut-off seems reasonable to us.

Figure R3. Histogram of the distance dependence of interactions in the 4D Genome database. The frequency decays sharply above 20 kb, and a 100 kb threshold captures the majority of the interactions.
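The distance filtering applied to the 4D Genome pairs can be summarized by the following sketch. Matching pairs by identical anchor coordinates is a simplification of the actual overlap-based comparison, and all inputs and names are hypothetical.

```python
def quantile(values, q):
    """Simple empirical quantile (sorted-index approximation)."""
    s = sorted(values)
    return s[min(len(s) - 1, int(q * len(s)))]

def confirmed_fraction(scatac_pairs, db_pairs):
    """Pairs are (anchor1_pos, anchor2_pos) on the same chromosome.
    Restrict the database to its 10%-90% distance quantiles (reported here as
    7.23-429 kb), then score how many scATAC-derived pairs are found in it."""
    dists = [abs(a - b) for a, b in db_pairs]
    lo, hi = quantile(dists, 0.10), quantile(dists, 0.90)
    db_in_range = {p for p in db_pairs if lo <= abs(p[0] - p[1]) <= hi}
    return sum(p in db_in_range for p in scatac_pairs) / len(scatac_pairs)
```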
23. The authors suggest that the correlation in scATAC-seq is predictive of (CTCF) looping, but provide only one example. Given that a relatively high number of differential CTCF sites were found, one would expect a coordinated down-regulation in the scATAC-seq signal of CTCF sites that interact and are downregulated in CLL patients relative to control. Can the authors e.g. show a scatter plot where the correlation in the ATAC-seq signal decrease of two interacting loci is associated with the loss of CTCF occupancy in these two loci, for every differential CTCF binding site overlapping the scATAC-seq signal? A genome browser view of the ATAC-seq and CTCF signal for an exemplary locus would be a nice illustration for the supplement to really reinforce this point.
Our conclusions are different and we have clarified them in the revised manuscript. We tested whether differential CTCF binding was linked to loss/gain of correlation in the scATAC-seq signal between enhancers and promoters. However, we found that for 90% of these "rewired" enhancer-promoter pairs the CTCF binding site proximal to the enhancer shows stable binding of CTCF in both CLL and NBCs (Fig. EV5G, see also the responses to points #15 and #20). Thus, we conclude that changes in enhancer-promoter interactions are not dependent on variable CTCF binding.
24. Similarly, the EBV1 [typo, should be EBF1] ChIP-seq was not/barely used to validate some of the results. How many peaks were found in CLL and healthy controls? How many were differential? Were these associated with gain/loss in ATAC-seq and H3K27ac at promoters or enhancers? Were these enhancers connected to the target genes that were predicted from the model and to what extent?
Results from the EBF1 ChIP-seq analysis are presented in Fig. S8. The number of differential sites and the overlap with the ATAC-seq analysis are described in Fig. S8A. About 2/3 of lost EBF1 sites in CLL at ChromHMM enhancer states with an EBF1 motif also showed loss of the ATAC signal in CLL. The average number of EBF1 binding sites was 5358 for CLL and 6298 for non-malignant B cells, which is now stated in the legend to Fig. S8B. The number of differential sites is now stated in the main text (paragraph: "Enhancer-promoter network changes were identified from single cell analysis") and in the legend to Fig. S8 ("Analysis of EBF1 binding by ChIP-seq").
25. Fig 6G (referred to in the rebuttal) does not exist.
We apologize for this omission. This figure panel is now included as Figure EV5F.
26. The CTCF, EBF1 and 4D nucleome analyses are missing in the methods section.
The methods have been added to the analysis section of the manuscript on page 17 and in Appendix Table S6.

27. Discussion. "Our integrative analysis .. can explain more than 80% of the transcriptional variance of CLL cells". This statement indicates that e.g. a multivariate regression using all the epigenetic signals as predictors can explain the RNA-seq change between CLL and healthy subjects with an adjusted R^2 of 0.80. If this is the case, the authors should show that. However, such an accurate prediction of gene expression changes is typically not feasible, even with the high-quality integrative dataset the authors collected. In that case, the statement should be adjusted.

This statement refers to results shown in Fig. 6A. As suggested by the reviewer we have rephrased the sentence in the discussion. It now reads "Our integrative analysis of a comprehensive set of readouts revealed altered chromatin features at promoter/enhancer elements for more than 80% of the differentially regulated genes in CLL cells" (Discussion, page 14).
28. Transcription factor motif analysis. "In addition, only TFs were included that showed a significant differential protein activity (or gene expression for network target genes) as computed from our B cell specific transcriptome network (Fig 1D)". How do the authors compute protein activity from RNA-seq data?
The protein activities referred to here were calculated with VIPER using the B cell network constructed with ARACNE from the RNA-seq gene expression data. This is described in the context of Fig. 1E (Results, page 5) and in Methods in the section now entitled "Gene regulatory network construction and activity calculation" (page 23), to make clear that the activity calculation is also described in this part.
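To convey the intuition, not the actual VIPER implementation, the calculation can be caricatured as scoring each regulator by the differential expression of its regulon (its network targets) rather than by its own transcript level. The real aREA/VIPER algorithm additionally weights targets by interaction mode and likelihood; the sketch below, with hypothetical inputs, only illustrates the principle.

```python
from statistics import mean

def activity_score(regulon, z_scores):
    """regulon: {target_gene: +1 (activated by regulator) or -1 (repressed)};
    z_scores: {gene: differential-expression z-score, CLL vs NBCs}.
    A regulator is scored from its targets, which is why activity can change
    even when the regulator's own mRNA level does not."""
    signed = [mode * z_scores[t] for t, mode in regulon.items() if t in z_scores]
    return mean(signed) if signed else 0.0
```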
Text/typos
• while the loss at enhancers might indicate a loss OF enhancer activity (page 1)
• "PMD's were enriched .. conformation capture". Please edit the sentence (page 6).
• Infexion point → Inflection point (legend Fig. EV1)

Thank you very much for picking this up. These errors have been corrected.
30. The revised manuscript is improved, but not yet to the point of being acceptable. Some of the deficiencies relate to terminology and can be corrected by reclassifying chromatin features under a different name. However, there are many substantial fundamental defects in both the approach and the analysis in this manuscript, which relate primarily to the following points.

We have changed the terminology as described in General Comment 1 and addressed the criticism of our analysis as described in the responses to the specific points below.

31. Using an inadequate method to define enhancers: The ChromHMM pipeline is used extensively by the Roadmap Epigenetics Consortium, the authors of ChromHMM, but in my view this is a highly unreliable method to use to predict enhancers as it is not centred on any analysis of open chromatin regions. Based on the results of this manuscript it is highly likely that a large fraction of the predicted enhancers or changes in enhancer state relate more to changes in the activities of transcription units than to enhancer activity. It may be relevant that ChromHMM was a default track on the hg18 UCSC genome browser, but not on the more recent hg19 and hg38 browsers, maybe because it has gone out of favour as a useful tool.
These statements are not correct. For example, the ENCODE ChromHMM data sets for the UCSC genome browser hg19 build are listed under the "Regulation - ENCODE Histone Modification" tracks as "Broad ChromHMM" (https://genome.ucsc.edu/cgi-bin/hgTrackUi?hgsid=716060827_FwR3ZiQP0yusQe8kGQ7OS4ConR8z&c=chr21&g=wgEncodeHistoneSuper). The hg38 build of the UCSC genome browser currently has limited track availability and excludes, e.g., Vista Enhancers, ENCODE DNA methylation, and ENCODE transcription factor binding ChIP-seq. However, this is more likely due to a slow migration of data rather than these tracks having fallen out of favor. Furthermore, the current Ensembl genome browser for hg38 (release 95 from January 2019) includes "regulatory segmentation" that makes use of ChromHMM segmentation from other projects (https://www.ensembl.org/info/genome/funcgen/regulatory_segmentation.html). Finally, the loci in our lists of active enhancers in CLL and NBCs used in our analysis (General Comment 2) are centered around open chromatin regions detected by ATAC-seq (see also the response to point #32).
32. The main flaw in this approach is that active genes are essentially covered in varying degrees of H3K4me1, me2 and me3 and H3K27ac, as follows: H3K4me3 concentrates within +/- ~1 kb of the promoter, K4me2 extends a few more kb and K4me1 extends over most of the gene. While H3K27ac is highest on either side of the promoter, it extends over the whole gene. H3K36me3 is concentrated on the 3' half of genes. This means that the three different ChromHMM enhancer states, especially genic enhancer and active enhancer 2, will often represent transcribed chromatin and not enhancers. Without alignment with well-curated ATAC data, which excludes minor ATAC peaks associated with active nucleosomes, the ChromHMM data have no predictive value. This is abundantly clear in Fig 1D, which does at least filter for overlap with ATAC data, but shows that the modifications predicted for active enhancers (enhancer 1 + H3K27ac) are in the minority. The biggest enhancer groups are defined on the basis of either "active enhancer 2", which is just H3K4me1, found at the 5' half of genes, or "genic enhancer", which is H3K36me3, K4me1 and K27ac, found at the 3' half of genes. My original perusal of the table of 238,000 enhancers suggested that these often comprised whole coding regions. The enhancer state 1, marked by K4me1 and K27ac, which should be found flanking enhancers but not at the enhancer itself, is the most useful group included. However, this is a tiny fraction of the population of peaks in Fig 1D. For confirmation of these views, I suggest that the editor and authors look at Barski et al Cell 129, 823 (2007) and Wang et al, which were global studies of many histone modifications. The average gene profiles show K27ac, K4me1 and K36me3 on typical transcribed regions. This cannot be used to define enhancers. This is also why many "super-enhancers" are defined on regions that include transcribed genes.
See General Comments 1 and 2 and the responses to reviewer 1, points #7 and #8. We would like to emphasize that we provide different sets of enhancers, including lists of active enhancer loci in CLL and NBCs (ATAC-seq signal overlapping with or flanked by ChromHMM states 1 and 9). The latter comprises 10145 loci in CLL and 7312 in NBCs, of which 4771 are shared between the two groups (Dataset EV09). With respect to the distribution of histone modifications at active genes, we disagree with the conclusions made by the reviewer. Wang et al show in Fig. S4 that there is some H3K27ac in the gene body but also that H3K27ac levels are about 10x higher at the promoter. Barski et al show that the H3K4me1 signal continuously decays towards the end of the gene. Furthermore, it is noted that the levels of H3K27ac and H3K4me1 are reduced at exons. These features allow us to separate ChromHMM state 1 (predictive of intronic enhancers) from regions of low H3K4me1 and H3K27ac levels, since it is based on the co-occurrence of peaks called for H3K4me1, H3K27ac and H3K36me3. These points are apparent when comparing the end of the TCF4 gene containing a putative intronic enhancer with the transcription start site and exons in Fig. 1A.

33. Only one analysis links motifs to open chromatin in the paper, and that shows one motif linked to one ATAC peak. For most of the paper, the criteria described on p22 for inclusion as an enhancer are just 2 of 6 patterns that do not have to include open chromatin, which should be obligatory. This leads to swamping of the data pool with active chromatin marks commonly found on active genes. Worse than that, most of the 238,000 active chromatin regions do not appear to contain ATAC peaks, meaning that any motif analysis is being performed on nucleosomes, not enhancers, and is meaningless. Furthermore, if 2/6 chromatin modifications are needed to define enhancers, this excludes enhancers at open chromatin regions where TFs are bound and where there are no histones. This means that true enhancers and enhancer state 1 are mutually exclusive. This is also evident in the table of 238,000 enhancers, where some actual potential regulatory elements are excluded.
This description of our analysis is not correct. As stated in General Comment 3, the TF motif analysis was conducted only within ATAC peaks within regions that displayed a differential chromatin feature between CLL and NBCs. We clarified this in the Methods section "Transcription factor motif analysis" (p. 23). Our enhancer definition and the rationale for using it are given in General Comments 1 and 2. It is indeed correct that our annotation of predicted enhancers starts from chromatin states 1, 8, 9 and 11 and is histone modification based. Regions lacking any of the histone modifications studied here would be annotated as state 12 (quiescent). These regions were not included in our enhancer analysis. A histone modification-independent enhancer annotation would be via p300 ChIP-seq and bidirectional transcription as used by the FANTOM consortium (Andersson et al, 2014), for which we lack the p300 ChIP-seq. However, it is noted that the FANTOM enhancer annotation is implicitly linked to histone marks associated with active transcription and histone acetylation. Accordingly, the FANTOM enhancers overlap with H3K27ac and H3K4me1 modifications and are not found in regions that are devoid of histone modifications (Andersson et al, 2014). We also disagree with the statement that open chromatin regions alone would represent "true enhancers" where "TFs are bound". They might as well simply be biologically inactive regions with low nucleosome occupancy that have no TFs bound (see General Comment 1, last paragraph).
34. It is significant that a recent study published by Ott et al on the same subject this month in Nature Genetics used just ATAC and H3 K27ac, and came to some different conclusions. It is at least apparent that the target genes identified by Ott et al seem to be different to those identified here.
We assume that the reviewer refers to a recent study in Cancer Cell and not in Nature Genetics (Ott et al, 2018). Their TF analysis is very different from the one in our study. We conduct a differential TF motif analysis of CD19+ B cells in CLL vs. NBCs within ATAC peaks for different enhancer annotations (including SEs and yielding a large overlap) and identify TF motifs enriched or depleted at enhancers (Fig 5A, C, Fig S5). The set of 15 central TF motifs in CLL (including promoters) identified in our study is linked to different aberrant chromatin features and is independent of calling SEs (Appendix Table S2). In contrast, Ott et al only analyze SEs. While their analysis also includes a differential SE calling between CLL and NBCs based solely on H3K27ac, they derive master TFs from an "enhancer-based modeling of regulatory circuits and assessments of transcription factor dependencies" in conjunction with cell line survival studies. Based on this work they conclude "that the essential super enhancer factor PAX5 dominates CLL regulatory nodes and is essential for CLL cell survival." We do not find PAX5 as a top (super-)enhancer hit in our TF motif analysis, but we do find other TFs ranked high in the Ott et al study, namely IRFs, FOXP1 and RUNX2/3. Since two very different approaches are used, this is not surprising. It is thus unclear to us why the results of Ott et al should be a cause of concern with respect to the quality of our work. Rather, we note that our findings, in terms of NFAT, TCF4 and LEF1 becoming activated in CLL while factors like EBF1 and AP-1 become silenced, are in full agreement with two other studies (Beekman et al, 2018; Oakes et al, 2016).
35. Flawed motif analysis: From the outset, the construction of a gene regulation network relies on motifs, but the motif table in Table S2 is full of errors. If these motifs were used in the motif analyses, then all these analyses are fatally flawed, as follows: (i) The NFAT motif listed is AAGAAGAAA. This will never bind NFAT, which binds (A/T)GGAAA. (ii) The authors are confused by the TCF nomenclature, whereby for example human TCF3 and TCF4 belong to the HLH E2A CACCTG E-box class, whereas mouse Tcf3 and TCF1/LEF1 are HMG proteins that bind CAAAG. The authors imply that they are including both motifs in their E2A searches. (iii) The wrong RUNX motif is used. The right motif for RUNX proteins is TGTGGT(T/C)(T/A) without the upstream CCCC, which will greatly skew the analyses, possibly making it 200x less likely to hit the basic motif. (iv) The FOX motif is not right, and at the least should start with G/A and not GA. (v) The NF-kB motif is better represented by GGGGAAATCCCC (which has an extra A).
See General Comment 3. We have always used the full Homer motifs retrieved from the analysis and apologize for the misleading sequence representation in Appendix Table S2. The table has been revised to address the issues raised by the reviewer.

36. Another fatal flaw, explained above, is that most motifs defined in this study are defined based on groups of modified nucleosomes, where there are no TFs, and the analyses may actively exclude enhancers bound by TFs where there are no modified histones.
See General Comment 1 and 2 and response to point #32 and #33.
Specific comments:

37. Regarding terminology, many objections are easily resolved by replacing every single reference to "enhancer regions" on every single occasion, without exception, with a reference to "active chromatin regions", and also in e.g. Fig 1B, 1D etc.

This is a good suggestion and we have changed the ChromHMM state names as described in General Comment 1 to make clear that ChromHMM does not provide an enhancer definition but an initial step to annotate enhancer candidate regions.
38. The main persistent problem that remains is the underlying assumption throughout the manuscript that enhancers can be defined as broad regions of modified chromatin. They must by necessity be defined as small regions of 200-500 bp of open chromatin bound by TFs. The authors' methodology sidesteps the most important fundamental concept of gene regulation, which is that the genome is reprogrammed by TFs which establish open chromatin regions. The modifications came later. In the case of changes in DNA methylation, sometimes much later.
See General Comment 1 and 2 and response to point #32 and #33. We disagree with the statement that "TFs … establish open chromatin regions. The modifications came later." Given the much higher stability of a nucleosome (complex lifetimes in the range of hours to days) as compared to TFs (chromatin-bound state typically on the minute time scale and below), most TFs are unable to simply displace a nucleosome by competitive binding. Rather, several lines of evidence support the view that histone modifications (or other signals) guide chromatin remodeling to subsequently open up chromatin and to allow TF binding (Erdel et al, 2011). This mechanism might be particularly relevant at enhancers, with H3K4me1 targeting the activity of the BAF chromatin remodeling complexes (Local et al, 2018) to create an open chromatin region. Active chromatin remodeling might also contribute to differential CTCF binding (Teif et al, 2014) at the large fraction of sites that show cell type specific CTCF binding (see point #19).

39. Following on from above, any analyses of motifs should focus on +/-200 bp at most of an ATAC peak. In their response to this suggestion in the original review, the authors question where my previous definition of enhancers as ~300 bp regions comes from (point 58). The answer is very simple and based on our fundamental understanding of mechanisms of gene regulation. For essentially every enhancer defined in the genome, enhancers are occupied by nucleosomes in the inactive or poised state, and are nucleosome-free once TFs have evicted 1 or 2 nucleosomes. 1 nucleosome + 2 linkers occupies 40 + 160 + 40 = 240 bp. This is why DHSs are typically 250-300 bp of open chromatin bound by TFs. The TFs do not as a whole bind outside this zone where chromatin is inaccessible. This is why motif analyses must be of ATAC peaks and not H3 peaks.
In their rebuttal, the authors say they look at motifs +/-1 kb from an ATAC peak (point 58). This will create a massive background, as it adds 1700 bp of nucleosomal DNA to each 300 bp enhancer region, whereas most motifs lie within +/-100 bp, the window the authors use in Fig 5D. Why not use the same window to define motifs and thereby get better enrichments?
See General Comment 2 and 3 and response to point #10. All of our TF motif analysis was conducted only within the ATAC-seq peak regions (median 327 bp). The extension to ±1 kb from the ATAC peak center was used to define active enhancer loci according to the extension of their characteristic marks (Chen et al, 2018) but not for the TF motif analysis.
40. All analyses included in this study should be restricted to those samples that included ATAC, which is most, and always use ATAC as one of the two criteria used to define regulatory elements. This still allows for 3 normal samples where both ATAC and ChromHMM are available, which should be sufficient given the differences between these and the 18 CLLs that can be included.

See General Comment 3. For the differential analysis of a given readout we have used the full number of samples available that met the quality criteria. From this sample set a consensus feature list was constructed, which was then used with DiffBind to compare CLL and non-malignant samples. In the case of ATAC-seq we have acquired data for 7 NBC pools (4 in replicates) and 19 CLL B cell patient samples (all in replicates).

41. ATAC should be at the top of the flow diagram in fig S1B, and not on the fourth tier where it sits now as an afterthought.
As explained in General Comments 1 and 2, we used a workflow that starts with the ChromHMM annotation. We then intersected these regions with the ATAC peaks and conducted the TF motif analysis or the prediction of active enhancer loci.

42. A major technical deficiency in this manuscript is the inadequate analysis of the TF motifs underlying the chromatin patterns which are presented in great detail. Analyses of motif occurrence are restricted to metadata showing probabilities etc. but not the primary data. To have confidence in the enhancer definitions it is necessary to plot motif positions on the plots of ATAC peaks in the style of the analysis shown in fig 5B. To verify enhancers adequately, the same figure should include the profile of ATAC peaks centred on a window +/-1 kb, ranked by ATAC enrichment, with flanking histone modifications, and confirming the presence of motifs at the centre of the more enriched ATAC peaks. None of this has been done. Fig 5B is aligned on motifs, not ATAC peaks, and even then uses a flawed NFAT motif.
As described in General Comment 3, the motif representation is now consistently giving the Homer motifs identified in the analysis. It would be interesting to also further analyze the motif position within the ATAC peaks, e.g. by plotting the motif location with respect to the peak center for all TFs of interest. However, given that we have already expanded the TF motif analysis to address other reviewer requests in the text and added an additional figure (Appendix Fig. S5) we refrain from adding further figures. Our manuscript is already at the size limit. We would like to note, however, that we provide all data to conduct such and other analyses.
43. The authors should show the matrix logos and provide the PWMs used in this study. If these differ to the same motifs annotated on HOMER, these need to be shown alongside.
See General Comment 3. The Homer motif logos and exact Homer motif names are now provided in Appendix Table S2. From these the PWM can be retrieved.
44. It would be preferable to simplify the number of motifs depicted in figures such as 3E by restricting the motifs depicted to a single representative example in cases such as ETV/ELK/ETS and KLF where they all bind the same motifs. This figure probably needs ETS, KLF, E-box, SP1, NFY, NRF and no more. Fig 5C only needs one AP-1 and one RUNX motif etc.
See General Comment 3. The motif names used in the TF motif enrichment plots are those used by HOMER for a consistent assignment, to address the criticism of the reviewer above. The reduction to selected TFs based on motif recognition and differential activity is done in the revised Appendix Table S2.

45. As stated above, there is an over-reliance on the ChromHMM gene prediction tool, released in 2010, which has not really gained wide acceptance in the gene regulation field as a front-line approach for identifying enhancers. Published evidence from groups such as the Hardison and Cohen groups suggests that enhancers should be defined on the basis of transcription factors or on function. Most studies measuring actual enhancer activity find ~10% of active chromatin regions have enhancer activity. While it is helpful to know about chromatin states in the context of gene regulation, over-use of this tool is likely to lead to misleading results. ChromHMM seems to tell us more about whether a region is transcribed or not.
See General Comment 1 and 2 and response to points #32 and #33. As stated above in response to point #37, we have changed the terminology to make clear that we do not consider ChromHMM an enhancer identification tool and also have intersected the ChromHMM states with ATAC peaks in our further analysis. The reviewer states that "Published evidence from groups such as the Hardison and Cohen group suggests that enhancers should be defined on the basis of transcription factors or on function." It is, however, unclear to us how their work would be related to better approaches for the annotation of enhancers in our study. Zhang and Hardison have published a chromatin annotation approach termed IDEAS and compared it to ChromHMM (Zhang & Hardison, 2017). The software appears to perform somewhat better in terms of predicting "validated" enhancers from the FANTOM set. However, on an overall scale many chromatin states including "enhancers" and "bivalent enhancers" were "commonly identified with similar proportions in the genome." The Cohen group has conducted work on the effect of the location of enhancer sequences in the genome. They conclude that the regional chromatin context strongly affects the activity of cis-regulatory sequences (Maricque et al, 2019), which is the reason why we use the ChromHMM annotation.
46. The ChromHMM browser tracks often show enhancers and promoters as 10 kb tracts of chromatin spanning open chromatin regions. As an example, ENCODE shows the first 50 kb of the B cell gene PAX5 at hg18 chr9:36,977,389-37,029,852 covered by either H3K4me3 or me1 in lymphoblastoid cells, and it is shown as either enhancer or promoter region on the Broad Institute ChromHMM track in the same cells. This is highly misleading. Just because the current authors get the same patterns as the ChromHMM tracks loaded on the UCSC genome browser does not mean that this tool is suitable to use as the front-line approach to defining enhancers. It would in fact be a disservice to the gene regulation field to give de facto acceptance of this methodology by prominently presenting it as a state-of-the-art approach to be used to predict enhancers. It is not; it is an adjunct to define the neighbourhood of enhancers.
As stated in General Comment 1 and in response to point #37, we have renamed the ChromHMM states and rephrased the text to clarify that the ChromHMM states are not an "enhancer definition". Nevertheless, we consider them useful to predict different sets of potential enhancer loci when combined with ATAC peaks, as discussed above.
47. To illustrate another feature of histone profiles, some ChromHMM data nicely show a gap between the K4me1 and me3 marks, just where K4me2 would be predicted. This is what is expected in a transcription unit, but ChromHMM would show it as a gap between chains of enhancers.
For our enhancer analysis we have always removed a region of ±1 kb around the TSS (see General Comment 2). Thus, any dependence of the ChromHMM segmentation on the H3K4me1, H3K4me2 and H3K4me3 distribution in this region is irrelevant for the enhancer assignment in our analysis.
48. Page 8 line 15 refers to transitions from the quiescent state to the K4 me1 E2 state, and from the transcribed K36me3 state to the K36me3/K4me1 genic enhancer state. Both represent gain of K4me1, which may just reflect differences in sensitivity of K4me1 detection, and in the second case depends on an ill-defined boundary between states seen at the 5' and 3' halves of genes. There is plenty of room for over-interpretation of the data here.
We have rephrased this section of the text with the revised ChromHMM terminology to make the distinction between ChromHMM states and enhancers. With respect to the genic enhancer state, which has now been renamed to "Active 3" (state 1), we would like to point out that it contains H3K4me1, H3K36me3 and H3K27ac. Thus, for a transition from only H3K4me1 (state 8) or H3K36me3 (state 2), it is additionally required to gain H3K27ac (see also response to point #8).
49. The opening figure, Fig 1A, is poorly described and has inconsistencies. The sentence "the histone modifications identified a downstream enhancer, which was activated in CLL cells as judged from the enrichment of H3K4me1 and H3K27ac (Fig 1A)" should be turned the other way around to say that "a potential enhancer region was identified far downstream of the TCF4 gene as a cluster of ATAC peaks embedded within an ~ 60 kb region of chromatin enriched in H3K4me1 and H3K27ac in CLL cells". This approach needs to be taken throughout the manuscript whereby any description of a potential regulatory region must begin with a description of the ATAC data.
As stated in response to point #41 we disagree that we "must begin with a description of the ATAC data".
50. The 2 RNA-Seq scales in Fig 1A should have some degree of equalisation across the genome, as it is not acceptable to show normal on a scale of 100 and upregulated on a scale of 8000, which suggests the opposite of what is actually the case. For example, a uniformly active region of the genome could be used to normalise the FPKM data. Alternatively the authors could adjust the scales to make them equivalent, and then maintain the same ratio in the scales. Furthermore, (i) some of the ATAC peaks at TCF4 look bigger in normal cells, and (ii) the whole TCF4 locus is plastered with H3K4me1 and H3K27ac in CLL. Does that mean the whole TCF4 locus is one big super-enhancer? Because the whole gene is involved, these data could actually be used to undermine the very concept of the "super-enhancer", a process that is already happening in other studies that dispute the relevance of this definition (e.g. at the alpha globin locus, and in Barakat et al recently in Cell Stem Cell).
RNA-seq data were normalized to RPKM values. In response to the previous reviewer request (see previous revision letter, response to point #6) we have introduced different scales for CLL vs NBCs in Fig. 1A as stated in the figure legend. In this manner it can be seen that TCF4 is already lowly expressed in NBCs, as also evident from the presence of the H3K36me3 mark, but that the expression level in CLL is considerably higher. This is stated in the figure legend. The large enhancer region linked to TCF4 is indeed also called as a differential SE in the analysis (Fig. 4D). This is now stated in the text.

51. The significance of the PMDs is unclear. They were defined at 10 kb resolution which is too high to be meaningful. Either 500 or 1000 bp would be better, as a 10 kb region can contain one 10% and one 90% methylated region. Fig 2A shows a highly variegated pattern of methylation, so maybe this is indeed the case. It would be better to show verification with single clones from bisulphite sequencing of a few hundred bp, where meCG is traditionally shown as black dots in a series of clones to reveal the trend. The 50% could also be a population average. Basic mechanisms of activation and repression tend to drive the unstable 50% meCG state one way or the other, so 50% is unexpected at one discrete region. On p8 line 11, when the authors say PMD loss and meCG loss correlate with loss of enhancers, I suggest these changes simply correlate with gain in transcription.
PMDs are defined in the literature as large regions (>100 kb) containing highly variable, disordered DNA methylation patterns. To identify PMDs in CLL, we used the approach described by Berman et al. (Berman et al, 2011), which is detailed in the methods section. PMDs were defined at single-CpG resolution by computing the average methylation value at each CpG within a sliding window of 10 kb size. Thus, our resolution is better than 10 kb. Due to the large region size of >1 Mb needed to display PMDs as shown in Fig 2A, it is impossible to indicate individual CpG methylation sites. The impact of PMDs on gene expression is shown in Figure 2C. We also report separately on meC loss at individual differentially methylated regions (DMRs) in Figure EV2E, F and Appendix Figure S2G. These regions are provided in Dataset EV3. The meC loss at DMRs was correlated with enhancer activation as shown for NFAT binding sites in Fig. 5B.

52. The data on nucleosome changes at promoters could be left out. This section does not provide any clear insights. The data is not convincing.
We believe that the nucleosome analysis provides an additional layer of information and points to B cell receptor signaling as a central player in the pathomechanism of CLL. It is unclear to us what the reviewer means with the generalizing comments "does not provide any clear insights" and "The data is not convincing" without any specific suggestions for improvement. Rather, based on the experience from our previous work on nucleosome positioning (Teif et al, 2014; Teif et al, 2012), we would argue that the nucleosome positioning data provided in our current study are of exceptionally high quality and unique, as they are obtained with primary tumor samples. They make it possible to detect the gain or loss of individual nucleosomes, as illustrated in Fig. EV3C. Thus, we are confident that this part of our work will be appreciated by other scientists and will prove to be a valuable resource.
53. Some major conclusions in this study are based on the flawed motifs TCF4 (at the centre of the network) and NFAT (e.g. 1E, 5A, EV2F).

See General Comment 3 and response to point #35. All motifs used in our analysis were taken from HOMER and are now included in Appendix Table S2. We apologize for the confusion.

54. The concept of "superenhancers" remains controversial and has not been widely accepted in the basic gene regulation community, even though many groups are now trying to define these regions. Most problematic with this concept is that many of the superenhancers defined include whole transcription units and promoters flanked by enhancers and other regulatory elements. This configuration was originally defined as the "active chromatin domain", and the clusters of enhancers were defined as LCRs. In my view the redefinition of these as superenhancers is a retrograde step. In the original paper, Whyte et al defined 231 superenhancers as a small group of highly specialised LCRs. Even their prime examples depicted in the figures include whole transcription units. This is also true of two of the three prime examples shown in the present study in fig 4I, where the CDKN1a and KLF13 genes make up a big part of the superenhancers. The current authors may also be a bit over-zealous as they defined 829 superenhancers, 4x the expected number.
In their rebuttal (points 49 and 68) the authors use recent descriptions of "liquid phase compartments" from the Sharp lab as evidence that superenhancers are meaningful. I would suggest that this new definition is another way of restating the fact that active genes reside within active chromatin domains, which are more accessible, dynamic and interactive. This concept has been around for decades, and like LCRs and superenhancers, has been reinvented. The study by Barakat et al also shows that most regions within a "superenhancer" do not have enhancer activity. The authors can claim that active chromatin domains enhance TF recruitment at localised enhancers, but not that TFs bind cooperatively all across domains > 10 kb. This is a fallacy.
In contrast to the Ott et al study mentioned above by the reviewer, our work has neither a focus on SEs nor do we claim that SEs are particularly important for deregulated epigenetic signaling in CLL. We simply consider them as another subset of putative enhancers that are assigned according to a specific genome annotation used in the literature (General Comment 2). As pointed out by the reviewer the concept is controversially discussed. Thus, we consider it informative to provide also a SE analysis in our study as we expect it to be of interest to the readers of our study. Finally, as discussed in our previous rebuttal letter, it seems premature to conclude that the "functional" part of enhancers must be restricted to the nucleosome depleted region and that large extensions of chromatin features like H3K27ac are irrelevant for enhancer function.
55. Fig 6A confirms that most upregulated "enhancers" are marked by H3 K4me1 and not K27ac. This should be seen as a warning sign that these are poorly defined.

The figure shows differential chromatin signals linked to gene expression rather than the appearance of the signals per se. Thus, it cannot be concluded from the figure that the upregulated H3K4me1 sites are enhancers not marked by H3K27ac.
56. In fig EV4a the authors need to refer to Mb of active chromatin, not enhancers.
We agree with the reviewer and have revised the figure accordingly.
57. As mentioned in the original review, the patterns for TCF4 and EBF1 in Fig 5D do not resemble footprints but reflect sequence specificity within an ATAC peak centred on a sequence. Only CTCF shows a true footprint. This figure also highlights the error in the TCF4 motif. This should be the TCF3/TCF4 motif CACCTG, but the authors show the motif for TCF1/LEF1 which is CAAAG. This confirms that the wrong motifs are being assigned.
While the footprints indeed include a significant contribution from the Tn5 sequence preference, this is independent of the strong gain/loss of protection towards Tn5 integration in CLL versus NBCs. The error with the TCF4/LEF1 labeling has been corrected (see General Comment 3 and response to point #35).
58. Too many figures are far too small and could be doubled or trebled in size, especially in the supplement and EV where space is not an issue. As a general criticism, almost all of the genome browser views are at much too low a resolution. Gene regulation and active chromatin are best studied at the level of 100 kb or less.
The readability of figures was improved wherever possible. In order to increase the size of the figure labels we have introduced the abbreviation "NBC" for "non-malignant B cell".

59. Although the authors have addressed some concerns, many of the problems highlighted in the first review remain. These have been addressed above so I will not address the rebuttal letter in detail. Overall, the authors repeatedly refer to studies from the creators of ChromHMM to validate the use of ChromHMM, which is hardly an independent validation. If other groups are already using better targeted approaches to define enhancers in CLL, and getting different answers, it does not justify publishing a less well-designed study that may contradict it.
The statement that "other groups are already using better targeted approaches to define enhancers in CLL" appears to be without merit (see responses to points #34 and #45). Furthermore, our findings that NFAT, TCF4 and LEF1 were activated in CLL while other factors like EBF1 and AP-1 were silenced are in full agreement with two other studies (Beekman et al, 2018; Oakes et al, 2016). Based on our comprehensive analysis of aberrant chromatin features in CLL and their assignment to 15 central TF motifs (Fig 6B, Appendix Table S2) we have developed an integrated, enhancer-containing gene regulatory network, for which we have validated predictions for EBF1. Our network reveals a number of new links to the CLL pathophenotype to be tested in further studies. In addition, we provide a rich multi-readout resource of CLL chromatin features which, we are sure, will help to understand the molecular pathology of CLL with integrative approaches.

Thank you again for sending us your revised manuscript. We are now satisfied with the modifications made and I am pleased to inform you that your paper has been accepted for publication.
"year": 2019,
"sha1": "446c697521938d315b2db24f388f428304fa996f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.15252/msb.20188339",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8148c29f69318779a3ca037e8316907e294f255c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Research on the Measurement and Convergence of Technological Innovation Efficiency of New Energy Enterprises under the Target of “Double Carbon”
The new energy industry helps to meet the challenges of energy exhaustion and environmental pollution. Based on panel data from 2015 to 2021, this paper combines the BCC model and the Tobit model within a three-stage DEA framework to measure the technological innovation efficiency of listed new energy enterprises, and uses a convergence model to test efficiency differences. The results show that: (1) Under the traditional BCC model, the technological innovation level of new energy enterprises is insufficient; the overall comprehensive efficiency and pure technical efficiency are 0.220 and 0.302 respectively and need to be improved. (2) When the three-stage DEA model is used to control for environmental factors and statistical errors, the overall technological innovation efficiency of new energy enterprises increases: comprehensive efficiency and scale efficiency rise to 0.541 and 0.896 respectively. (3) The technological innovation efficiency of new energy enterprises presents temporal and spatial heterogeneity. The coefficient of variation of technological innovation efficiency ranges from 0.084 to 1.000, with significant differences, and the technological innovation efficiency of new energy enterprises in eastern, central and western China shows a U-shaped fluctuation.
Introduction
Traditional energy consumption leads to increased carbon emissions and environmental pollution. An energy structure that is highly dependent on traditional petrochemical raw materials has made the development of the new energy industry a key direction for the transformation and upgrading of the industrial structure. The coal-dominated energy model has led to growing carbon dioxide emissions, posing a challenge to climate governance. The Chinese government attaches great importance to the issue of carbon emissions: in 2020, President Xi Jinping proposed the goals of "carbon peaking by 2030" and "carbon neutrality by 2060" at the 75th session of the United Nations General Assembly. At present, the problem of global warming is becoming more and more serious and has become a common concern of people all over the world. China still has a high proportion of traditional fossil fuels; by 2026, the proportion of non-fossil energy sources is projected to reach 21% [1]. The "double carbon" goal is a medium- and long-term national strategy put forward by China to cope with the changing environment [2]. In order to achieve this goal, both new energy development and energy conservation and emission reduction need full attention. In order to reduce the high production cost of new energy, the government should promote economies of scale, provide financial support to enterprises and encourage them to carry out technological innovation. Compared with traditional energy, new energy has the advantages of large reserves, low pollution and wide distribution [3]. New energy enterprises are enterprises that use renewable resources such as water energy, electric energy and solar energy for industrial production. At present, common new energy enterprises include new energy vehicle makers, new energy power plants, and electronics enterprises that produce intelligent products such as photovoltaics and lithium batteries [4]. The "low carbon" agenda driven by global climate change is gradually affecting the decision-making of enterprises.
New energy enterprises started late and have developed slowly, so they should improve their enthusiasm for innovation and take the initiative to shoulder social responsibility. The challenges new energy companies face include a lack of access to institutional funding; the price of renewable energy technologies; a lack of skilled labor; underdeveloped physical infrastructure and logistics; insufficiently dominant incumbents; and insufficient government or policy support [5]. In this context, new energy enterprises should establish a diversified new energy technology supply system in the future, based on diversified supply security, vigorously promote the application of new energy technologies, and combine other high and new technologies with new energy application technologies in a green and low-carbon oriented manner. On the other hand, international cooperation should be strengthened to realize energy sharing under open conditions [6].
Innovation is an important symbol of a country's core competitiveness [7]. Innovation is the primary source of power for the development of a country, nation and province, and innovation capacity determines a country's ability to cope with sudden changes. Especially after the "double carbon" goal was put forward, the demand for innovation has been increasing [8]. Since 2008, new energy enterprises have been developing rapidly with the support of national policies. According to the analysis of China Energy News, the operating revenue of China's new energy enterprises now accounts for 63.41%, indicating that the innovative development of new energy enterprises has made some achievements. Since the beginning of the 21st century, China's economy has realized leap-forward development and achieved remarkable results: GDP grew from 10,028 billion yuan in 2000 to 114,367 billion yuan in 2021. However, this extensive growth model has led to the problem of environmental pollution. According to statistics, China's CO2 emissions increased from 4.025 billion tons in 2000 to 11.71 billion tons in 2018, putting great pressure on emission reduction [9].
Innovation efficiency is an indicator of the quality of a company's innovation investments and of the efficiency of converting innovation inputs into outputs in technological activities. Domestic and foreign scholars have produced rich results in innovation efficiency evaluation, providing technical support for the subsequent innovation and development of enterprises [10]. Models for estimating innovation efficiency include the data envelopment analysis (DEA) model and the stochastic frontier analysis (SFA) model [11]. The advantage of the SFA model is that it evaluates the influence of related factors on efficiency by constructing production functions. However, a parametric-function model carries the risk of model misspecification: once the model assumption is wrong, not only do the estimated results differ from expectations, but the conclusions may even be meaningless. Therefore, more researchers have since chosen non-parametric methods, such as DEA, to measure innovation efficiency. DEA is a non-parametric method and an effective way to evaluate the relative efficiency of decision-making units (DMUs) [12]. To a certain extent, this approach can improve the accuracy of research results and solve the problems existing in the traditional DEA model. Fried published two papers in a series discussing the use of the DEA model to estimate efficiency [13]. The DEA method fundamentally avoids erroneous results caused by incorrect model selection; meanwhile, the weights of the indicators are calculated automatically from the input-output data, which makes the whole research process more objective [14]. To sum up, the DEA method has been widely applied in the measurement of innovation efficiency.
Although the classical DEA method overcomes the deviation of parameter estimation, it ignores the influence of external environmental factors and random bias. The three-stage DEA method innovatively integrates the advantages of the two models, excludes the influence of external and random factors, and obtains more objective results.
Schumpeter's innovation theory is of monumental significance. In addition to research on how to innovate, the technical requirements for innovation, the conditions for its realization, and the innovation cycle, research on the efficiency of innovation has been added. Research on innovation efficiency is not only related to the development of enterprises but also directly affects a country's technological innovation and economic development. Meng and Xu (2021) estimated the innovation efficiency of China's provinces and regions using empirical data; after excluding external environmental factors, the results showed that the innovation efficiency of China's central and western regions was lower than that of the eastern regions [15]. On the whole, China's innovation efficiency is still developing slowly. At the same time, research on innovation efficiency also faces many constraints, such as environmental conditions, capital input and the degree of marketization [16]. Some scholars studied the innovation efficiency of China's high-tech zones in 2012 and found that the environment was an important factor restricting the improvement of innovation efficiency in the central and western regions. Hu found through empirical research that industrial structure and enterprise scale have a significant influence on technological innovation efficiency, whereas the influence of the enterprise system is not significant; in addition, regional higher education also has a significant impact on regional technological innovation efficiency [17].
Different from the classical one-stage DEA model, the three-stage DEA model can eliminate the interference of environmental factors and has therefore attracted attention. Domestic and foreign scholars have used the three-stage DEA model to study the tourism eco-efficiency of China's coastal cities [18] and the resource allocation efficiency of elderly care services [19]. By constructing a three-stage DEA model, this paper measures the innovation efficiency of new energy enterprises at the national and regional levels, setting environmental variables and random factors and integrating and processing the corresponding data.
To sum up, the DEA model is an effective method to evaluate innovation efficiency. Drawing on relevant domestic and foreign work, this paper selects panel data of new energy enterprises to analyze their technological innovation efficiency. The contributions of this paper are as follows: on the one hand, the application scope of the DEA model is expanded and a three-stage estimation of the innovation efficiency of China's new energy enterprises is constructed; by removing the interference of the external environment and random bias, the accuracy of innovation efficiency evaluation is improved. On the other hand, a convergence model is introduced to discuss the convergence of the innovation efficiency of new energy enterprises.
Model Construction
Since the three-stage DEA model proposed by Fried effectively avoids the influence of environmental factors and random errors on the efficiency value, this paper adopts the three-stage DEA model, combined with the characteristics of new energy enterprises, when measuring their innovation efficiency. In the first stage, the input-oriented BCC model is selected. Different from the classical three-stage model, and considering that the efficiency value of the BCC model is no more than 1 and therefore truncated, the Tobit regression model is used in the second stage of this paper.
(1) The first stage: initial calculation with the BCC model. In terms of enterprise innovation efficiency, the innovation of new energy enterprises is uncertain. Input variables are determined and controlled by the enterprises themselves, while output variables are difficult for enterprises to determine or accurately predict. In addition, the CCR model can only show comprehensive efficiency, while the BCC model can show comprehensive efficiency, pure technical efficiency and scale efficiency separately. Therefore, when measuring the innovation efficiency of new energy enterprises, this paper uses the input-oriented BCC model in the first stage, with variable returns to scale. According to Charnes' research, the general expression of the BCC model can be written as follows [20]. Here ρ represents the efficiency target value of the decision-making unit under evaluation, x_ij (j = 1, 2, ..., k) are its input factors and y_ir (r = 1, 2, ..., s) its output factors; i = 1, 2, ..., n indexes the decision-making units, n is the number of decision-making units, k the number of inputs and s the number of outputs.
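In standard form — a reconstruction consistent with the symbols above, with the subscript 0 marking the DMU under evaluation — the input-oriented BCC envelopment model reads:

$$
\begin{aligned}
\min\;& \rho \\
\text{s.t.}\;& \sum_{i=1}^{n} \lambda_i x_{ij} \le \rho\, x_{0j}, \quad j = 1,2,\dots,k,\\
& \sum_{i=1}^{n} \lambda_i y_{ir} \ge y_{0r}, \quad r = 1,2,\dots,s,\\
& \sum_{i=1}^{n} \lambda_i = 1, \qquad \lambda_i \ge 0 .
\end{aligned}
$$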
The comprehensive technical efficiency (TE) obtained in this stage can be decomposed into the product of scale efficiency (SE) and pure technical efficiency (PTE): TE = PTE × SE.
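For concreteness, the following sketch (not the authors' DEAP 2.1 implementation) shows how TE, PTE and SE can be obtained by solving the input-oriented DEA linear programs with scipy; the toy data are hypothetical:

```python
# Minimal sketch of input-oriented DEA via linear programming.
# TE comes from the CRS (CCR) model, PTE from the VRS (BCC) model,
# and SE = TE / PTE.
import numpy as np
from scipy.optimize import linprog

def dea_input_oriented(X, Y, vrs):
    """X: (k inputs x n DMUs), Y: (s outputs x n DMUs). Returns one score per DMU."""
    k, n = X.shape
    s = Y.shape[0]
    scores = []
    for o in range(n):
        # Decision variables: [theta, lambda_1 ... lambda_n]; minimize theta.
        c = np.r_[1.0, np.zeros(n)]
        # Inputs:  X @ lam - theta * x_o <= 0
        A_in = np.hstack([-X[:, [o]], X])
        # Outputs: -Y @ lam <= -y_o  (i.e. Y @ lam >= y_o)
        A_out = np.hstack([np.zeros((s, 1)), -Y])
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(k), -Y[:, o]]
        # VRS (BCC) adds the convexity constraint sum(lambda) = 1.
        A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1) if vrs else None
        b_eq = np.array([1.0]) if vrs else None
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(None, None)] + [(0, None)] * n, method="highs")
        scores.append(res.x[0])
    return np.array(scores)

# Toy data: 2 inputs, 1 output, 4 DMUs (hypothetical, for illustration only).
X = np.array([[2.0, 4.0, 3.0, 5.0],
              [3.0, 2.0, 4.0, 6.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.5]])
te = dea_input_oriented(X, Y, vrs=False)   # comprehensive technical efficiency
pte = dea_input_oriented(X, Y, vrs=True)   # pure technical efficiency
se = te / pte                              # scale efficiency
```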
Assume that the environment variable of the ith input factor is Z_i and that of the rth output variable is Z_r; the corresponding influence coefficients are α_i and α_r, and the random disturbance terms are μ_ij and μ_rj, respectively.
At the same time, it is assumed that the influence coefficients of the environmental variables are the same across all decision-making units, namely α_i and α_r, which helps maintain the consistency of the evaluation [21].
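In Fried-type three-stage designs, the stage-two estimates are typically used to place all DMUs in a common environment by adjusting the inputs, e.g.

$$x'_{ij} = x_{ij} + \Big[\max_{j}\big(Z_j\hat{\alpha}_i\big) - Z_j\hat{\alpha}_i\Big] + \Big[\max_{j}\hat{\mu}_{ij} - \hat{\mu}_{ij}\Big],$$

where x′_ij is the adjusted input of DMU j for factor i. Note that this is the generic form of the adjustment; the variant in this paper estimates the stage-two relationship with a Tobit regression rather than SFA.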
Suppose EG = ρ′/ρ. When EG > 1, the adjusted efficiency is higher than the original efficiency value; when EG = 1, environmental factors have no influence on the efficiency value of the DMU; and when EG < 1, the adjusted efficiency value is lower than the original efficiency value.
Model Construction of δ Convergence Analysis
In order to deeply explore the differences in technological innovation efficiency and their sources, the Dagum Gini coefficient method was introduced in this study. With reference to relevant studies [22], the Dagum Gini coefficient is calculated as in Formula (9), in which k represents the total number of regions under investigation, j and h are regional subscripts, n represents the total number of enterprises under investigation, i and r are enterprise subscripts, n_j (n_h) is the number of enterprises in region j (h), y_ji (y_hr) is the innovation efficiency of enterprise i (r) in region j (h), G is the Gini coefficient, and μ represents the mean innovation efficiency of all enterprises under investigation.
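In its standard form, the overall Dagum Gini coefficient consistent with these symbols is

$$G = \frac{\displaystyle\sum_{j=1}^{k}\sum_{h=1}^{k}\sum_{i=1}^{n_j}\sum_{r=1}^{n_h}\big|y_{ji}-y_{hr}\big|}{2 n^{2}\mu}. \tag{9}$$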
In order to analyze the distribution position and shape of the variables, this paper introduces the Gaussian kernel function.
The density function of the random variable x is denoted f(x), where N, X_i, h and K(·) are the number of observations, the independent and identically distributed observed values, the bandwidth and the kernel function, respectively; see Equations (10) and (11).
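In standard form, the kernel density estimator and the Gaussian kernel consistent with these symbols are

$$f(x) = \frac{1}{Nh}\sum_{i=1}^{N} K\!\left(\frac{x - X_i}{h}\right), \tag{10}$$

$$K(u) = \frac{1}{\sqrt{2\pi}}\exp\!\left(-\frac{u^{2}}{2}\right). \tag{11}$$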
Through the convergence test of the above equations, the evolution of the technological innovation efficiency gap of new energy enterprises in different regions can be verified. Therefore, the δ convergence analysis of technological innovation efficiency can be conducted according to Equations (9) and (10). Common convergence studies include α convergence, δ convergence and club convergence. δ convergence represents the process of the sample deviation decreasing over time, which is described by the coefficient of variation (cv) in this study, as shown in Equation (12):

$$cv_j = \sqrt{\sum_{i=1}^{n_j}\big(F_{ij}-\bar{F}_j\big)^{2}\big/\,n_j}\;\Big/\;\bar{F}_j, \tag{12}$$

where i and j are the subscripts of enterprises and regions respectively, n_j is the number of enterprises in region j, and F̄_j is the mean value of innovation efficiency in region j.

Variable Selection and Data Sources

The construction of an indicator system is an important method for quantitative research and an important basis for selecting indicator variables that fit the research object. In this regard, the strengths and weaknesses of the indexes and their relevance to the target population can be assessed by screening a large body of literature and comparing the selected indexes. Following this approach, highly cited literature on related topics in CNKI and some English literature [23] were retrieved and their index variables were sorted out.

(1) Input and output index selection: Referring to the research of domestic scholars [24,25], most studies are based on C-D production function theory, and the commonly selected input indicators include labor, capital stock and energy output. Based on the availability, scientific soundness and rationality of the data, and considering the specific situation of the new energy industry, the input indicators selected in this paper are the number of R&D personnel and R&D investment, and the output indicators are the number of patents and net profit.

(2) Selection of environment variables: Technological innovation efficiency refers to the input-output ratio of technological innovation resources. Most of the existing literature only considered the efficiency value of inputs and outputs and merely analyzed the environmental factors affecting technological innovation efficiency, without controlling for these factors when calculating efficiency, so the results are not rigorous enough. Different degrees of environmental regulation have different impacts on technological innovation [26]. In this paper, the total export trade of the region where an enterprise is located is used as an indicator of the degree of openness to the outside world. There is a certain relationship between the economic development differences among China's eight economic regions and the regional differences in enterprises' technological innovation efficiency, so per capita GDP is used to evaluate the regional economic level. Apart from factors such as physical capital and human capital, the technology market has a significant influence on innovation [27]; therefore, the technology market turnover of the region where the company is located is taken as an indicator of the technology market environment. The definitions of all the variables are shown in Fig. 1.

Fig. 1. Input-output index system.

(3) Data source: The data studied in this paper are mainly from the China Statistical Yearbook, the State Intellectual Property Office, the RESSET (Ruisi) financial database, etc. In terms of sample selection, listed companies that meet the requirements were selected based on the new energy concept sector of Huaxi Securities (including the new energy concepts of the main board and the small and medium-sized board). In order to ensure the continuity of the research, the time span selected in this paper is 2015-2021. Considering the stability of the model, companies with serious data deficiencies, such as Silver Star Energy, Bowei Alloy and Zhongmin Energy, were excluded. Finally, after data screening and cleaning, 72 new energy listed companies were retained. When partial data for individual samples were missing, linear interpolation was used to complete the values.
Results and Discussion
Following the ideas and principles of the three-stage DEA model, DEAP 2.1 software is used in the first and third stages, and STATA software is used in the second stage for data processing and analysis.
Analysis of DEA Model Results in the First Stage
In the first stage, the DEA-BCC model was used to measure the panel data of 72 listed new energy enterprises from 2015 to 2021.
As can be seen from Table 1, the overall technical efficiency values of the sample lie between 0.013 and 1.000, with large overall differences. Among them, only 3% of enterprises (Beixin Building Materials, Energy-Saving Wind Power) have a comprehensive innovation efficiency equal to 1, meaning their innovation efficiency and management efficiency are relatively effective, while the remaining enterprises are not efficient in technological innovation. Among the research objects, the technological innovation efficiency of 49 enterprises is below the average and that of 23 enterprises is above the average. Under the interference of environmental factors and random errors, the average comprehensive innovation efficiency, pure technical efficiency and scale efficiency of the samples from 2015 to 2021 are 0.220, 0.302 and 0.796, respectively. The comprehensive innovation efficiency of listed new energy enterprises is generally not high, indicating that the returns on innovation inputs in new energy are relatively low and development needs to be strengthened. Combined with Fig. 2, the innovation efficiency of new energy enterprises is on a slow rising trend, and the trend of pure technical efficiency closely tracks that of comprehensive innovation efficiency. Scale efficiency is relatively stable, and pure technical efficiency is at a lower level than scale efficiency. However, due to the influence of environmental factors and random errors, the results of the traditional BCC model cannot accurately reflect the technological innovation efficiency of Chinese listed new energy enterprises, so further analysis is necessary.
Analysis of Technological Innovation Efficiency of New Energy Enterprises in Different Regions
In order to further explore the technological innovation efficiency of the sample enterprises as representatives of various provinces in China, the distribution of new energy technology innovation efficiency in 2015, 2017, 2019 and 2021 was drawn in Fig. 3. As can be seen from Fig. 3, after adjusting for environmental variables, the technological innovation efficiency of Xinjiang and other provinces with relatively backward market environments is higher, indicating that their enterprises have strong technological innovation ability.
The technological innovation efficiency performance of the listed new energy enterprises in the 9 eastern provinces and cities covered in this paper can be divided into four types: (1) Dual-optimal, i.e., provinces where both pure technical efficiency and scale efficiency are above 0.9, represented by Hebei Province, home to new energy enterprises such as Great Wall Motor. (2) Optimal-low, represented by Jiangsu Province, with a high scale efficiency of 0.92 but a pure technical efficiency of only 0.386. (3) Medium-low, represented by Zhejiang, where scale efficiency is 0.84 and pure technical efficiency is only 0.39, indicating that the development focus of new energy enterprises in Zhejiang Province should be enhancing their own technological innovation ability and improving pure technical efficiency. (4) Double-medium, where both pure technical efficiency and scale efficiency are above 0.65, represented by Shanghai; these provinces and cities should both expand enterprise scale and focus on improving technological innovation capacity in subsequent development. The above analysis shows that the current situation of new energy enterprises in the eastern provinces is one of low pure technical efficiency, leading to a low comprehensive efficiency level.
The new energy innovation efficiency of the six western provinces can be divided into three types: (1) Dual-optimal, i.e., provinces where both pure technical innovation efficiency and scale efficiency are above 0.9, including two provinces at the technological frontier (Xinjiang and Ningxia) as well as Guizhou and Gansu; the new energy technology innovation efficiency of these provinces needs little improvement. (2) Double-medium, i.e., pure technical innovation efficiency and scale efficiency both above 0.7, including Sichuan Province and Chongqing. (3) Medium-low, represented by Shaanxi Province, with a low pure technical efficiency (0.431) and a high scale efficiency (0.88). The above analysis shows that all provinces and cities have good momentum in new energy technology innovation, but there is room for further optimization.
Analysis of Tobit Regression Model Results in the Second Stage
In this paper, the technology market environment, the degree of regional openness to the outside world and the regional economic level are taken as environmental variables to establish the Tobit model, which is then used to explain the factors influencing enterprises' technological innovation efficiency; the relevant results are shown in Table 2.
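The paper does not print the regression specification; a standard Tobit form for an efficiency score censored on [0, 1] — with hypothetical labels Tech (technology market turnover), Open (degree of openness) and Econ (per capita GDP) for the three environmental variables named above — would be

$$\theta_j^{*} = \beta_0 + \beta_1\,\mathrm{Tech}_j + \beta_2\,\mathrm{Open}_j + \beta_3\,\mathrm{Econ}_j + \varepsilon_j,\qquad \varepsilon_j \sim N(0,\sigma^2),$$

$$\theta_j = \min\{\max\{\theta_j^{*},\,0\},\,1\}.$$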
As can be seen from Table 2, the regression coefficient of the technology market is 0.0724, significant at the 0.01 level (z = 5.24), indicating that the technology market has a significant impact on efficiency. In other words, as the technology market in which new energy enterprises are located develops and improves, their technological innovation efficiency gradually improves. The higher the development level of the technology market, the more intensive technology trading is, which is conducive to enterprises' technological innovation.
At the 1% level, the degree of openness to the outside world has a significant promoting effect on efficiency, indicating that the higher the degree of openness, the higher the technological innovation efficiency of enterprises. This may be because greater openness gives enterprises more opportunities to introduce advanced technologies and capital.
The regional economic level has a negative influence on the technological innovation efficiency of new energy enterprises at the 1% level, indicating that a higher regional economic level inhibits rather than improves their technological innovation efficiency. The reason may be that the main objectives of the enterprise and the region are not consistent: enterprises carry out technological innovation in order to form patents and ultimately obtain economic value, that is, corporate profits.
The above results show that the influence of environmental factors on decision-making units is not completely consistent. Next, we apply Equation (4) to form a new environmental factor and add it as a new input variable, so as to control the impact of environmental factors.
Analysis of the Results after Adjustment of Input in the Third Stage
Table 3 shows the results of the re-measured efficiency. As can be seen from the table, the adjusted comprehensive innovation efficiency, pure technical efficiency and scale efficiency are 0.541, 0.606 and 0.896, respectively. The overall technological innovation efficiency lies between 0.120 and 1.000. Compared with the values before adjustment, the comprehensive indicators of the enterprises have changed: the technological innovation efficiency of only 23 enterprises was higher than the overall first-stage average (0.220), while that of 33 enterprises was higher than the overall third-stage average (0.541). This also shows that environmental factors have different impacts on enterprise efficiency.
Convergence Analysis of Technological Innovation Efficiency of New Energy Enterprises
In order to explore the differences in technological innovation efficiency, the δ convergence test was used. The main purpose is to explore the effects and change trends of the above new energy enterprises' technological innovation efficiency for practical application.
Convergence theory originated from neoclassical growth theory. Intuitively, in a relatively closed economic system, backward or underdeveloped countries or regions grow faster, with the aim of catching up with relatively developed regions and countries — that is, of reaching the mean value and constantly narrowing the gap. In the mathematical sense, this is reflected as convergence to the average level, finally approaching a stable state of economic development. Otherwise, a growing gap between regions would pose a challenge to social stability and rapid social development.
In general, δ convergence means that the dispersion of the distribution of per capita income between countries or regions at different levels of economic development decreases over time. In short, income no longer disperses to a large extent but tends toward the mean, that is, converges to the mean. A further test of δ convergence is introduced here: by tracking the dispersion of per capita income in a country or region and analyzing how the coefficient of variation of income changes over time, convergence can be measured more appropriately and the corresponding test conducted. This method is known as the Friedman test.
Applying this test, the δ convergence analysis of new energy enterprises' technological innovation efficiency is carried out along three dimensions — overall level, regional level and time — covering the national level and the eastern, central and western parts of the country. δ convergence is thus one of the methods used here to evaluate the technological innovation efficiency of new energy enterprises.
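A minimal sketch of the δ convergence statistic as described — the coefficient of variation of efficiency scores within each region, tracked year by year — is given below; the numbers are hypothetical placeholders:

```python
# Minimal sketch of the delta-convergence test described above: the
# coefficient of variation (std/mean) of efficiency scores within each
# region, tracked year by year. Values below are hypothetical placeholders.
import statistics

efficiency = {  # region -> year -> list of firm-level efficiency scores
    "east": {2015: [0.31, 0.55, 0.22], 2021: [0.45, 0.80, 0.30]},
    "west": {2015: [0.60, 0.66, 0.70], 2021: [0.72, 0.75, 0.78]},
}

for region, years in efficiency.items():
    for year, scores in sorted(years.items()):
        # Population standard deviation matches Equation (12), which
        # divides the squared deviations by n_j.
        cv = statistics.pstdev(scores) / statistics.mean(scores)
        print(f"{region} {year}: cv = {cv:.3f}")

# A cv that falls over time indicates delta convergence; a rising cv
# (as reported for the full national sample) indicates divergence.
```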
In this paper, following the convergence-model method, the δ convergence results of technological innovation efficiency are obtained according to the geographical location of new energy enterprises, as shown in Table 4. The results show that during 2015-2021, the δ coefficients of the eastern, central and western regions are all lower than 0.4 and the coefficients of variation are lower than 0.6, with significant differences among regions.
According to the convergence analysis, the coefficient of variation lies between 0.084 and 1.000, indicating that there are significant differences in technological innovation efficiency among Chinese new energy enterprises.
As shown in Table 5, δ convergence analysis was carried out for the four aspects respectively, and no obvious δ convergence was found for the whole country or for the central, western and eastern parts. On the whole, the technological innovation efficiency level of the national sample is not high and there is still large room for improvement, and the development and innovation gap between enterprises in the eastern region is large. The national coefficient of variation presents an upward trend. Meanwhile, the coefficient of variation in the eastern region is significantly higher than in the other regions, while that in the western region is relatively small; the central and western regions show a good upward trend after an initial decline.

Conclusions

In order to analyze the technological innovation efficiency of listed new energy enterprises, this paper takes 72 listed new energy enterprises as the research object, uses panel data from 2015 to 2021, builds a three-stage DEA model and a convergence analysis model, and empirically tests their technological innovation efficiency. The results show that if environmental factors are not controlled, the average enterprise technological innovation efficiency measured by the classical DEA model is 0.220. If environmental factors are controlled, the technological innovation efficiency of listed new energy enterprises measured by the three-stage DEA rises to 0.541. However, the overall average technological innovation level is still low, and there is still large room for improvement; the lack of pure technical efficiency is the key constraint. This indicates that the technological innovation development of China's new energy enterprises mainly depends on technological innovation capability, not simply on scale effects.
There are significant differences between the results of the classical DEA model and the three-stage DEA model, indicating that environmental variables affect the technological innovation efficiency of new energy enterprises; in particular, the regional economic level among the environmental variables does not help reduce the redundancy of human and financial investment in the technological innovation process. Since the pure technical efficiency of most listed new energy enterprises in this sample is low, enterprises need to focus on improving their technological innovation capability by increasing R&D investment and raising the proportion of technical talent.
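For readers unfamiliar with the first-stage measurement, the following is a minimal sketch of the classical input-oriented BCC (variable-returns-to-scale) DEA program solved per decision-making unit with scipy. The toy inputs and outputs are invented, and the paper's full three-stage procedure, which additionally adjusts inputs for environmental variables and random errors before re-running DEA, is not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

def bcc_efficiency(X, Y, o):
    """Input-oriented BCC (VRS) efficiency of DMU o.
    X: (n, m) inputs, Y: (n, s) outputs, rows = DMUs."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision vector: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(n + 1)
    c[0] = 1.0                           # minimise theta
    A_ub = np.zeros((m + s, n + 1))
    b_ub = np.zeros(m + s)
    # Input constraints: sum_j lambda_j x_ij <= theta * x_io
    A_ub[:m, 0] = -X[o]
    A_ub[:m, 1:] = X.T
    # Output constraints: sum_j lambda_j y_rj >= y_ro, written as <=
    A_ub[m:, 1:] = -Y.T
    b_ub[m:] = -Y[o]
    # VRS convexity constraint: sum_j lambda_j = 1
    A_eq = np.zeros((1, n + 1))
    A_eq[0, 1:] = 1.0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun  # theta in (0, 1]; 1 means efficient

# Toy data: 4 DMUs, 2 inputs (R&D spend, staff), 1 output (patents).
X = np.array([[2.0, 3.0], [4.0, 2.0], [8.0, 1.0], [4.0, 4.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0]])
print([round(bcc_efficiency(X, Y, o), 3) for o in range(4)])
```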
Finally, the convergence analysis of the technological innovation efficiency of new energy enterprises in different regions shows that the innovation gap among enterprises in the eastern region is large and innovation efficiency is uneven across enterprises, whereas the gap in the western region is small; regional technological innovation efficiency thus shows clear temporal and spatial heterogeneity.
(1) The average values of comprehensive technical efficiency, pure technical efficiency and scale efficiency of the samples from 2015 to 2021 are 0.220, 0.302 and 0.796, respectively. The comprehensive innovation efficiency of listed new energy enterprises is generally not high, indicating that the return on input-output in new energy innovation is relatively low and development needs to be strengthened. Combined with Fig. 2, the innovation efficiency of new energy enterprises follows a slow rising trend, and the trend of pure technical efficiency closely tracks that of innovation efficiency. Scale efficiency remains relatively stable, and pure technical efficiency sits at a lower level than scale efficiency. However, because of the influence of environmental factors and random errors, the results of the traditional BCC model cannot accurately reflect the technological innovation efficiency of Chinese listed new energy enterprises, so further analysis is necessary. The distribution of new energy technology innovation efficiency in 2015, 2017, 2019 and 2021 is shown in Fig. 3.
Fig. 3. Distribution of new energy technology innovation efficiency.
Table 1. Measurement results of DEA model in the first stage.
Fig. 2. Innovation efficiency of new energy enterprises and its decomposition (Crste: comprehensive technical efficiency; Vrste: pure technical efficiency; Scale: scale efficiency).
Table 2. Results of Tobit model.
Table 3. Measurement results of DEA model in the third stage.
Table 4. δ coefficient and variation coefficient of different regions.

Conclusions

In order to analyze the technological innovation efficiency of listed new energy enterprises, this paper takes 72 listed new energy enterprises as the research object, uses panel data from 2015 to 2021, builds a three-stage DEA model and a convergence analysis model, and empirically tests their technological innovation efficiency.
Table 5. Technology innovation efficiency zones of new energy enterprises. | 2023-05-12T15:04:52.368Z | 2023-05-10T00:00:00.000 | {
"year": 2023,
"sha1": "2b662b19542931a893c082a6059927bb76242e76",
"oa_license": null,
"oa_url": "http://www.pjoes.com/pdf-161662-91613?filename=Research%20on%20the.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "401b4227ac07d27cf42f4ddf5788001a176734c6",
"s2fieldsofstudy": [
"Environmental Science",
"Economics",
"Engineering"
],
"extfieldsofstudy": []
} |
5496018 | pes2o/s2orc | v3-fos-license | Signal Transducer and Activator of Transcription 3 (STAT3) Degradation by Proteasome Controls a Developmental Switch in Neurotrophin Dependence*
Background: As neonatal hippocampal neurons mature, STAT3 levels increase via an unknown mechanism, which makes neurons neurotrophin-independent. Results: STAT3 protein levels are low in neonates because of calcineurin-dependent proteasomal degradation of STAT3. Conclusion: Calcineurin-dependent STAT3 degradation regulates the survival of neonatal neurons. Significance: Proteasomal degradation controls the developmental switch of neurotrophin dependence. Neonatal brains develop through a program that eliminates about half of the neurons. During this period, neurons depend on neurotrophins for their survival. Recently, we reported that, at the conclusion of the naturally occurring death period, neurons become neurotrophin-independent and, further, that this developmental switch is achieved by the emergence of a second survival pathway mediated by signal transducer and activator of transcription 3 (STAT3). Here I show that calcineurin plays a key role in controlling the developmental switch in mouse hippocampal neurons. Calcineurin promotes the degradation of STAT3 via the ubiquitin-proteasome pathway. Inhibition of calcineurin acutely increases total levels of STAT3 as well as its activated forms, resulting in decreased levels of the tumor suppressor p53 and its proapoptotic target, Bax. In vivo and in vitro, calcineurin regulates levels of STAT3 and neurotrophin dependence. TMF/ARA 160 (TATA element modulatory factor/androgen receptor co-activator 160), the key mediator of STAT3 ubiquitination, is required for calcineurin-dependent STAT3 degradation. Thus, these results show that the ubiquitin-proteasome pathway controls the critical developmental switch of neurotrophin dependence in the newborn hippocampus.
Protein levels can be regulated not only by transcription and translation but also by the rate of degradation. The proteasome is a large protease complex ubiquitously found in all types of cells, including neurons (1), where proteins tagged with ubiquitin are destined for destruction (2). The proteasomal degradation of misfolded proteins (3) appears to be especially critical in neurons, and its failure is thought to underlie a number of neurodegenerative diseases (4). However, besides its role in quality control, the ubiquitin-proteasome pathway also regulates important cellular functions, such as cell division (5) and stress response (6). Many signaling pathways, including nuclear factor κB (7), Wnt (8), and hypoxia-inducible factor (9), utilize the inhibition of proteasome-dependent degradation as a type of signal transduction. In developing neurons, a precise balance between ubiquitination and deubiquitination is required for the formation of proper neural circuitry (10).
In early postnatal life, the central nervous system (CNS) goes through a specific developmental stage during which approximately half of its neurons are eliminated (11). In rodent hippocampus, this wave of developmental death occurs during the first 10 postnatal days (12,13). During this period, neurons depend on neurotrophin signaling for their survival (14). Recently, we reported that, after this period of vulnerability, neurons became resistant to the lack of neurotrophin-signaling and that this developmental switch was achieved by the emergence of a second survival pathway mediated by signal transducer and activator of transcription 3 (STAT3) (15). STAT3 mediates cytokine signaling (16) and plays a critical role in embryonic brain development (17). We found that Ser(P)-727 STAT3 is required for the resistance to the lack of neurotrophin signaling in hippocampal neurons and that the developmental switch from the neonatal to adult survival pathway was achieved not simply by changing the phosphorylation state of but rather by changing the total protein levels of STAT3 (15). However, the mechanism responsible for increasing STAT3 protein during development is unknown.
Calcineurin is a calcium/calmodulin-dependent Ser/Thr phosphatase (18). Although its roles in T-cell activation and cytokine production are well known (19), calcineurin was originally purified from brain tissue (20) and has been implicated in several aspects of neuronal development (21)(22)(23) and brain function (24). Here I show that the developmental switch in neurotrophin dependence in postnatal hippocampal neurons is regulated by calcineurin. In early postnatal hippocampal neurons, calcineurin effectively promotes the proteasomal degradation of STAT3, whose survival signal later allows mature neurons to achieve resistance against neurotrophin deprivation. Thus, this work shows that the ubiquitin-proteasome system controls a critical developmental step in newborn hippocampus.
with the blocking solution. Samples were incubated for 2 h with antibodies.
For the in vivo injection analyses, pups were perfused with 4% PFA 2 days after the injection. Consecutive 50-μm-thick coronal slices were made with a Leica VT100S vibrating microtome (Leica, Allendale, NJ) and were immunostained with a neuronal marker, NeuN, and an apoptotic marker, c-cas3. Slices were compared with respect to distance from the injection site. The analysis was done blind with respect to the content of the injections.
Cell Quantification-Fluorescent images were taken with a Zeiss confocal microscope (LSM-510) equipped with a ×10, ×25, or ×40 lens. Z-stacked images from eight sections (1-μm intervals) were used for the analyses. Images were analyzed with ImageJ. For cell survival assays, images were taken from five fields: one from the center of the coverslip and two vertically and two horizontally 400-3000 μm from the center. Because the densities of neurons were higher near the rim of coverslips than in other regions, we avoided sampling the coverslips' edges. The mean number of neurons in the five fields was then calculated. Each coverslip was defined as an individual culture. Numbers given represent means ± S.E. For immunofluorescence intensity analyses, ROI Manager in ImageJ was used to select soma areas of Z-stacked images, and mean intensities were measured. The TATA element modulatory factor (TMF)/androgen receptor co-activator 160 (ARA 160) distribution assay followed the Golgi distribution assay (26); the position where the apical dendrite emerges was defined as 0°, and quadrants 1-4 were defined as 315°-44°, 45°-134°, 135°-224°, and 225°-315°. The background was subtracted in each image. All analyses were done blind.
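As an illustration of the quadrant assignment described above, the following sketch bins puncta angles (measured relative to the apical dendrite at 0°) into the four quadrants. The example angles are hypothetical, and the handling of quadrant boundaries is one reasonable convention, not necessarily the exact one used in the study.

```python
import numpy as np

def quadrant_fractions(puncta_angles_deg):
    """Fraction of TMF/ARA 160 puncta falling in each quadrant.
    Angles are measured relative to the apical dendrite (0 degrees);
    Q1: 315-44, Q2: 45-134, Q3: 135-224, Q4: 225-315 degrees."""
    angles = np.mod(np.asarray(puncta_angles_deg, dtype=float), 360.0)
    # Shifting by 45 degrees turns the wrap-around bin (315-44) into 0-89,
    # so integer division by 90 yields quadrant indices 0..3 (= Q1..Q4).
    quadrants = (np.mod(angles + 45.0, 360.0) // 90.0).astype(int)
    counts = np.bincount(quadrants, minlength=4)
    return counts / counts.sum()

# Hypothetical puncta angles (degrees) from one neuron.
print(quadrant_fractions([10, 350, 80, 200, 270, 30]))
```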
Transfection-Transfection was performed using Lipofectamine 2000 (Invitrogen). Cells were transfected with 1.6 μg/ml pEGFPC1 vector (Clontech, Mountain View, CA), 8 μg/ml constitutively activated calcineurin (ΔCaN)/pSRa vector (gift from Dr. U. Siebenlist, NIAID/National Institutes of Health), 30 pM mouse ARA 160 siRNA or mouse TMF/ARA 160 siRNA (Santa Cruz Biotechnology, Inc.), and/or wild-type STAT3 internal ribosome entry site enhanced green fluorescent protein/pMX in OPTI-MEM (Invitrogen) for 15 min, and then the medium was replaced with NeuroBasal medium. Cells were cultured in NeuroBasal medium (osmolarity adjusted to 290 mOsm with sucrose) before and after the transfection in order to maximize the transfection efficiency.
Immunoprecipitation-Cultures were incubated for 15 min with 40 μl of lysis buffer per well (150 mM NaCl, 1% Nonidet P-40, and 50 mM Tris-HCl (pH 8.0)) containing a protease inhibitor mixture (Roche Applied Science) and then collected and centrifuged at 12,000 × g for 10 min. Supernatants were preabsorbed with 10% (v/v) protein A-conjugated Sepharose beads (Amersham Biosciences) for 1 h and then centrifuged at 3000 × g for 3 min. The supernatant was incubated with 1% (v/v) STAT3 antibody for 2 h followed by 10% (v/v) protein A-conjugated Sepharose beads for 1 h. The beads were then washed twice with the lysis buffer. Proteins were eluted with 10× (v/v) SDS sample buffer. The procedure was done at 4°C.
Chromatin Immunoprecipitation (ChIP)-Chromatin immunoprecipitation assays were performed as described by Ballas et al. (27). Cultures were fixed with 4% paraformaldehyde, permeabilized in 0.5% Triton X-100, and collected with 40 μl/well of cell lysis buffer (5 mM Hepes, pH 8, 85 mM KCl, and 0.5% Triton X-100) containing 1 mM phenylmethylsulfonyl fluoride (PMSF) and then centrifuged at 3000 rpm for 2 min at 4°C, and the pellet was resuspended in cell lysis buffer with PMSF and centrifuged at 3000 rpm for 2 min at 4°C two times. The pellet was then resuspended in nuclear lysis buffer (50 mM Tris-HCl, pH 8, 10 mM EDTA, 1% SDS) with 1 mM PMSF, was sonicated on ice to yield 100-1000 bp of DNA, and was centrifuged at 12,000 rpm for 15 min at 4°C. The nuclear lysate was preabsorbed with recombinant protein G-agarose (Invitrogen) preincubated with 200 μg/ml yeast tRNA and 200 μg/ml salmon sperm (Invitrogen) for 1 h at 4°C. The chromatin suspension was diluted with ChIP dilution buffer (0.01% SDS, 1.1% Triton X-100, 1.2 mM EDTA, 16.7 mM Tris-HCl, pH 8, 167 mM NaCl) and then immunoprecipitated with 5 μg/ml monoclonal mouse anti-STAT3 overnight at 4°C. The chromatin suspension was incubated with recombinant protein G-agarose pretreated with 3% BSA and yeast tRNA and salmon sperm for 4 h at 4°C. Agarose beads were washed with a series of solutions as follows at room temperature: ChIP dilution buffer, dialysis buffer (2 mM EDTA, 50 mM Tris-HCl, pH 8, 0.2% Sarkosyl), TSE-500 (0.1% SDS, 1% Triton X-100, 2 mM EDTA, 20 mM Tris-HCl, pH 8, 500 mM NaCl), LiCl detergent (100 mM Tris, pH 8, 500 mM LiCl, 1% Triton X-100, 1% deoxycholic acid), and TE (10 mM Tris-HCl, pH 8, 1 mM EDTA). To change the solution, the beads were centrifuged at 3000 rpm for 1 min, and the supernatant was aspirated. The samples were eluted from the beads with 300 μl of elution buffer (50 mM NaHCO3, 1% SDS). Samples were incubated overnight at 65°C to reverse PFA cross-links, following the addition of 20 μl of 5 M NaCl. DNA was then purified from the eluted samples using the Qiagen PCR purification kit (Qiagen, Valencia, CA). PCR was performed to analyze the STAT3 binding site in the mouse p53 promoter using the following DNA primers: 5′-GGGCCCGTFTTGGTTCATCC-3′ and 5′-CCGCGAGACTCCTGGCACAA-3′. Conditions for PCRs were 30 cycles of 94°C (30 s), 60°C (30 s), and 72°C (1 min). The PCR products were separated in a 1.5% agarose gel.
Calcineurin Assay-The enzymatic activity of calcineurin was determined using a colorimetric calcineurin assay kit (Calbiochem). Cultures and brain tissue were homogenized in lysis buffer with protease inhibitor mixture as described above. Phosphatase assays were performed following the manual provided by the vendor. The activity was measured with 150 μM RII phosphopeptide as a substrate, incubated at 30°C for 30 min. Absorbance at 620 nm was measured using a Beckman DU600 spectrophotometer (Beckman, Brea, CA). The endogenous phosphate concentrations were separately measured in the assay buffer without calmodulin, and the values were subtracted. Protein concentration was determined using a Pierce BCA protein assay kit (Thermo Scientific, Rockford, IL). For each reaction, 5 μl of sample lysate, which contained ~90 μg of total protein, were used.
In Vivo Injection-In vivo injection into CA1 was described previously (28). Briefly, Sprague-Dawley rat pups (P2) of either sex were anesthetized by hypothermia (in ice for 5 min) prior to the surgery. The anesthetized animal was placed on ice in a stereotaxic instrument. The stereotaxic coordinates from the bregma are as follows: AP +1.5, ML ±1.8, VD −1.8 mm. 0.3 μl of reagents were delivered at a rate of 0.1 μl/min using a Hamilton needle and syringe attached to a microsyringe pump controller (World Precision Instruments, Sarasota, FL). Concentrations of reagents were as follows: FK506, 30 μM; anti-TrkB antibody, 0.5 mg/ml. PBS was used as a control. The incision was closed using a polyglycolic acid suture (CP Medical, Portland, OR). Animals were allowed to recover at 37°C for 1-2 h.
Statistical Analyses-Statistical significance between two groups was determined with a two-tailed paired Student's t test. For multiple groups, statistical comparisons were made by ANOVA followed by individual group tests with the Bonferroni correction made for multiple comparisons.
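A minimal sketch of this testing workflow with scipy is shown below; the group names and measurements are hypothetical placeholders, not data from this study.

```python
import itertools
from scipy import stats

def anova_with_bonferroni(groups, alpha=0.05):
    """One-way ANOVA followed by pairwise t tests with Bonferroni correction.
    groups: dict mapping condition name -> list of measurements."""
    f_stat, p_anova = stats.f_oneway(*groups.values())
    print(f"ANOVA: F = {f_stat:.3f}, p = {p_anova:.4f}")
    pairs = list(itertools.combinations(groups, 2))
    corrected_alpha = alpha / len(pairs)   # Bonferroni correction
    for a, b in pairs:
        t, p = stats.ttest_ind(groups[a], groups[b])
        verdict = "significant" if p < corrected_alpha else "n.s."
        print(f"{a} vs {b}: p = {p:.4f} ({verdict} at alpha' = {corrected_alpha:.4f})")

# Hypothetical STAT3 intensity measurements under three treatments.
anova_with_bonferroni({
    "control": [1.0, 1.2, 0.9, 1.1],
    "FK506":   [4.8, 5.3, 4.1, 5.6],
    "MG-132":  [4.5, 5.0, 4.9, 4.2],
})
```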
STAT3 Increases during Postnatal Development via Post-transcriptional Regulation-To examine if STAT3 transcription is regulated during the first 10 days postnatal, mRNA levels in tissue samples were analyzed by RT-PCR. Despite the notable increase in protein level detected by Western blot analysis during this period, mRNA levels remained unchanged (Fig. 1A). The in vitro system exhibited the same phenomenon, an increase in protein level while the mRNA level stayed constant (Fig. 1B). Immunostaining showed that MAP2+ cells increased STAT3 between 7 days in vitro (DIV7) and DIV14 (Fig. 1C).
These results indicate that the increase in neuronal STAT3 during development was caused by post-transcriptional regulation.
Calcineurin Is Involved in Regulating the STAT3-p53-Bax Pathway-What regulates the STAT3 protein level in developing neurons? Inhibition of calcineurin by FK506 acutely up-regulated STAT3 protein levels (Fig. 2, A and B); within 15 min of adding FK506 to young neurons (DIV7), STAT3 protein dramatically increased (5.0 ± 1.3-fold, n = 4, p = 0.025, Student's t test), as did its phosphorylated forms (Ser(P)-STAT3 and Tyr(P)-STAT3). A lesser but notable increase also occurred in more mature neurons (DIV14) that are not dependent on neurotrophin TrkB signaling for survival (15). Phosphorylation of STAT3 induces it to dimerize and translocate to the nucleus, where it binds to its target DNA (29). Immunostaining for STAT3 revealed an accumulation of STAT3 in nuclei (Fig. 2B), consistent with the increase in active forms of STAT3. STAT3 is known to negatively regulate the tumor suppressor gene p53 (30). To check if inhibition of calcineurin affects the binding of STAT3 to the p53 promoter, chromatin immunoprecipitation (ChIP) analysis was performed. A brief (15-min) exposure to FK506 resulted in an increase in STAT3 binding to the p53 promoter (Fig. 2C). After a 2-h incubation with FK506, both the mRNA level and the protein level of p53 decreased (assessed by RT-PCR and Western blot with immunostaining, respectively; Fig. 3, A and B). Bax, a proapoptotic target of p53 (31), was also decreased by FK506 (Fig. 3A). To test if STAT3 is involved in the regulation of p53 levels, STAT3 was knocked down by siRNA. Following a 2-h treatment with FK506, neurons cotransfected with siRNA for STAT3 showed strong staining for p53, whereas neurons transfected only with GFP expression vector showed much weaker staining for p53 (Fig. 3C). These data suggest that calcineurin down-regulates p53 via STAT3. Significant reductions in levels of p53 and Bax in hippocampus were observed during neonatal development in vivo (Fig. 3A), suggesting that these changes may be developmentally relevant.
Calcineurin Regulates Neurotrophin Dependence through STAT3-Next, the effect of calcineurin on developmental neuronal death was examined. We reported previously that neurons during the developmental death period depend on brain-derived neurotrophic factor (BDNF) for survival (14). Consistent with our previous observation, the number of neurons transfected with GFP-expressing vector decreased during the death period (DIV5-DIV8), and the number of transfected neurons further decreased when treated with the function-blocking anti-TrkB antibody (Fig. 4A). Treatment with FK506, the inhibitor of calcineurin, blocked the spontaneous neuronal death, and the number of surviving neurons did not decrease following co-incubation with the anti-TrkB antibody (Fig. 4A), suggesting that neurons became neurotrophin-independent and that the survival effect of FK506 is not achieved by inhibition of TrkB signal down-regulation. As we previously reported, these young neurons during the death period contain very low levels of STAT3 (15). Concomitantly, STAT3 knockdown by siRNA did not increase the amount of death; however, treatment of these neurons with FK506 did not block their death (Fig. 4A), suggesting that the survival effect of FK506 is STAT3-dependent.
Calcineurin becomes calcium-independent and constitutively activated when the C terminus of its catalytic subunit is deleted (32). When neurons were co-transfected with a vector expressing ΔCaN, the spontaneous death of neurons increased. However, when calcineurin activity was blocked by FK506 in these neurons, the death was completely attenuated (Fig. 4A). Co-transfection with a STAT3-expressing vector also blocked the neuronal death. These results suggest that calcineurin controls survival of neurons through regulation of STAT3.
To examine whether calcineurin activity is developmentally regulated, the phosphatase activity levels of immature (DIV7) and older (DIV14) neurons were compared. A significant decrease in calcineurin activity was observed between primary cultures maintained at DIV7 and DIV14 (Fig. 4B). A similar result was obtained when the phosphatase activity was measured using acutely prepared hippocampal tissue (Fig. 4B). Whether calcineurin is responsible for TrkB independence in the mature neurons (DIV14) was checked next. As reported previously (15), treatment with anti-TrkB antibody did not cause death in the GFP-expressing mature neurons (Fig. 4C). Neurons co-transfected with a vector expressing ΔCaN showed only a tendency toward reduced numbers, presumably due to higher levels of endogenous BDNF, but a significant number of neurons died following the addition of the function-blocking anti-TrkB antibody, suggesting that the neurons became TrkB-dependent. The effect of overexpressing ΔCaN was completely reversed by adding FK506 or overexpressing STAT3 (Fig. 4C), suggesting that STAT3 is also responsible for calcineurin-induced TrkB dependence in the mature neurons. Taken together, these results suggest that calcineurin regulates TrkB dependence in immature and more mature neurons through STAT3.
Calcineurin Induces STAT3 Degradation via the Ubiquitin-Proteasome Pathway-Next, the post-transcriptional mechanism that allows calcineurin to regulate the STAT3 protein level was explored. Preincubation with anisomycin, an inhibitor of translation, did not block the effect of FK506 on STAT3 protein levels (Fig. 5A), suggesting that calcineurin regulates the stability of STAT3 protein. Moreover, when neurons were incubated in MG-132, an inhibitor of the proteasome, STAT3 protein increased to a level similar to that in neurons incubated in FK506 (Fig. 5A). To further investigate the role of calcineurin in proteasome-dependent STAT3 degradation, immunoprecipitation analyses were performed (Fig. 5B). Immunoprecipitation using the antibody against STAT3 gave a single band at around 90 kDa, and no bands were observed in the sample treated without the antibody in the immunoblotting with STAT3 antibody, showing that the antibody specifically immunoprecipitated STAT3 (Fig. 5B). The sample treated with the proteasome inhibitor, MG-132, showed a high smeared signal in addition to an increased level of the 90-kDa species (Fig. 5B). Treatment with FK506 increased the level of the 90-kDa species without the higher molecular species (Fig. 5B). When these samples were analyzed by ubiquitin immunoblot, a reduction in the 90-kDa band was observed in the FK506-treated sample as compared with the control, whereas the MG-132-treated sample showed a smeared signal both higher and lower than 90 kDa (Fig. 5B). These results suggest that calcineurin controls the ubiquitination of STAT3.
A previous study showed that ubiquitinated Tyr(P)-STAT3 was not detected (33), suggesting that only the inactive form of STAT3 is subjected to degradation. Therefore, whether calcineurin changed the phosphorylation state of STAT3 was tested next. In the presence of the proteasome inhibitor MG-132, neither Ser(P)-STAT3 nor Tyr(P)-STAT3 was increased by the addition of FK506, whereas the general Tyr phosphatase inhibitor, orthovanadate, dramatically increased Tyr(P)-STAT3 without affecting Ser(P)-STAT3 levels (Fig. 5C), suggesting that calcineurin does not dephosphorylate Tyr(P)-STAT3 in these neurons. Then the effect of Tyr phosphorylation on STAT3 stability using orthovanadate was further examined. Exposure to orthovanadate caused only a small increase in the total STAT3 as compared with the increase induced by MG-132 (Fig. 5D). Together, these results suggest that calcineurin affects the stability of STAT3 mainly by promoting proteasome-dependent degradation and that the stabilization effect by dephosphorylating STAT3 is limited.
For a protein to become ubiquitinated, it has to be directed to the E3 ubiquitin ligase complex (34,35). TMF (36)/ARA 160 (37), originally identified through the TATA element of human immunodeficiency virus 1, has been reported to interact with elongin C in the E3 ligase complex through the BC-box motif and to mediate degradation of STAT3 (38). Therefore, whether TMF/ARA 160 is involved in the regulation of STAT3 levels in hippocampal neurons was checked next. Immunostaining for TMF/ARA 160 confirmed its expression in immature neurons. Applying siRNA for TMF/ARA 160 effectively knocked down this expression and resulted in a large increase in STAT3 (Fig. 6A). Whether or not a similar mechanism operates in more mature neurons was tested next. When older neurons (DIV14) were transfected with ΔCaN-expressing vector, STAT3 levels were significantly reduced (Fig. 6B, p < 0.001). However, when neurons were co-transfected with siRNA for TMF/ARA 160, STAT3 levels were not affected (Fig. 6B). These results suggest that TMF/ARA 160 is involved in regulating STAT3 levels.
To examine if TMF/ARA 160 is sensitive to calcineurin activity, the distribution of TMF/ARA 160 was analyzed. Immunostaining for TMF/ARA 160 in control neurons showed scattered puncta throughout the soma area, whereas in FK506-treated neurons, puncta were accumulated on one side of the perinuclear region (Fig. 7A). Quadrant distribution analyses showed that TMF/ARA 160 in the cell body was evenly distributed in the control condition, whereas FK506-treated neurons showed higher TMF/ARA 160 signals on the side of the apical dendrites (Fig. 7A). These results suggest that the distribution of TMF/ARA 160 is sensitive to calcineurin activity.
To further investigate how calcineurin affects STAT3 degradation, the effects of FK506 and MG-132 on other proteins that are degraded by the ubiquitin-proteasome system were compared. A tyrosine kinase, Fer, is known to interact with TMF/ARA 160 (39). Treatment with FK506 increased levels of Fer protein to levels similar to those induced by MG-132 treatment (Fig. 7B). In contrast, p35, the activator of cyclin-dependent kinase 5 (Cdk5), and hypoxia-inducible factor 1α (HIF1α), whose degradation is regulated by a tumor suppressor protein, VHL (40), were sensitive to MG-132 but not to FK506 (Fig. 7B). In fact, no effect on total ubiquitin levels was observed when neurons were treated with FK506 (Fig. 7B). These results suggest that the effect of calcineurin is specific to TMF/ARA 160 rather than a general regulation of proteasome activity. Also, total ubiquitin levels were found not to change during postnatal development (data not shown). Therefore, it is unlikely that the developmental increase in STAT3 protein levels is caused by changes in proteasome activity.
Calcineurin Regulates Survival of Newborn Hippocampal Neurons in Vivo-Next, whether or not the regulation of STAT3 by calcineurin controls the survival of neonatal hippocampal neurons in vivo was examined. When FK506 was injected into CA1 of P2 hippocampus, STAT3 protein levels increased without a concomitant increase in mRNA levels (Fig. 8A), consistent with the results obtained in vitro. Apoptotic hippocampal neurons in these pups (P4) were analyzed by immunostaining for c-cas3 and NeuN (Fig. 8B). c-cas3+ neurons showed fragmented nuclei revealed by DAPI staining (Fig. 8B), a feature characteristic of apoptotic cells. Inhibiting calcineurin decreased the numbers of apoptotic neurons in the entire hippocampus, including CA3 and stratum oriens (SO), as well as CA1 (Fig. 8C). Injecting anti-TrkB antibody, on the other hand, increased the numbers of apoptotic neurons in these regions of hippocampus (Fig. 8C). Consistent with experiments performed in vitro, when FK506 and anti-TrkB antibody were co-injected into the pups, the numbers of apoptotic neurons were significantly lower than in those injected with vehicle, indicating that treated neurons could survive independent of TrkB signaling (Fig. 8C). These experiments confirm that calcineurin indeed regulates the survival of newborn hippocampal neurons in vivo.
DISCUSSION
Calcineurin Promotes Proteasome-dependent Degradation of STAT3 in Neonatal Hippocampal Neurons-As we reported previously, as neurons pass through the developmental death period, they lose their early dependence on neurotrophin when a new survival pathway mediated by STAT3 signaling arises to take its place (15). Results presented here, summarized in Fig. 9, show that this developmental change is caused by a decrease in the calcineurin activity that promotes degradation of STAT3 through the ubiquitin-proteasome pathway. These results establish that 1) calcineurin inhibitor rapidly increases the amount of STAT3 protein in the presence of anisomycin; 2) proteasome inhibitor increases STAT3 with a similar rate and magnitude as the calcineurin inhibitor; and 3) immunoprecipitation of STAT3 reveals the accumulation of ubiquitinated STAT3 by a proteasome inhibitor, an effect that is attenuated by the calcineurin inhibitor. Further, TMF/ARA 160, the protein responsible for translocating STAT3 to the proteasome (38), was present in hippocampal neurons, and knocking down its gene by siRNA effectively up-regulated STAT3 protein levels. Overexpressing ΔCaN caused a striking decrease in STAT3 levels, an effect that was attenuated by co-expressing TMF/ARA 160 siRNA, suggesting that calcineurin affects TMF/ARA 160-mediated degradation. Thus, this study assigns an important new role to calcineurin in regulating ubiquitin-proteasome-dependent STAT3 degradation.
Overall levels of ubiquitinated protein were unchanged during the postnatal period, and FK506 did not affect total ubiquitinated protein levels either. The observation that FK506 affected the distribution of TMF/ARA 160 suggests that the association of the STAT3-E3 ligase complex is calcineurin-dependent. Moreover, FK506 increased the levels of another protein, Fer, that also interacts with TMF/ARA 160, but other proteins whose ubiquitination is mediated by different systems were not affected. These results suggest that calcineurin affects TMF/ARA 160 specifically, rather than acting indirectly by regulating proteasome activity. TMF/ARA 160 was originally identified as a DNA-binding protein (36,37); however, it was later shown that TMF/ARA 160 mainly localizes in the Golgi apparatus (41). A scattered distribution of TMF/ARA 160 is also observed in myoblast cells when they are starved of serum (38), and it is thought that dispersion of TMF/ARA 160 enhances accessibility to STAT3 and thus increases the degradation of STAT3. The precise mechanism by which calcineurin causes dispersion of TMF/ARA 160 remains to be determined; understanding the trafficking of TMF/ARA 160 may provide some clues.
Calcineurin Regulates Neurotrophin Dependence by Promoting STAT3 Degradation-The calcineurin inhibitor increased both Ser(P)- and Tyr(P)-STAT3 species in amounts proportional to the total amount of STAT3 protein, and, as a result, STAT3 accumulated in neuronal nuclei and bound to the promoter of p53. Calcineurin inhibitor decreased the levels of p53 and its target gene product, Bax, in a STAT3-dependent manner. Consistent with this decrease in the proapoptotic protein Bax, calcineurin inhibitor promoted the survival of immature neurons. This survival effect of calcineurin inhibitor is not achieved by blocking the negative regulation of Trk signaling because these neurons become resistant to anti-TrkB treatment. In a complementary finding, overexpressing constitutively activated calcineurin caused mature neurons to re-express a regressive sensitivity to the blockade of neurotrophin signaling; mature neurons otherwise do not require neurotrophin to survive. Gain-and-loss of STAT3 function experiments suggested that the effect of calcineurin on survival depends on STAT3. In vivo and in vitro, immature neurons contained low levels of STAT3, high levels of p53 and Bax, and high calcineurin activity as compared with more mature neurons after the death period.
Fig. 9. In an immature neuron, calcineurin stimulates degradation of STAT3 by the ubiquitin-proteasome pathway. TMF/ARA 160 is required for STAT3 degradation. In a mature neuron, diminished calcineurin activity results in stabilization of STAT3, which effectively represses a tumor suppressor, p53. The STAT3 survival signaling allows mature neurons to achieve resistance against neurotrophin deprivation. Ub, ubiquitin.
The phosphorylation state of the proteins plays a key role in the induction of proteasome-dependent degradation (7)(8)(9). Although calcineurin is a Ser phosphatase (42), calcineurin has been reported to promote dephosphorylation of Tyr(P)-STAT3 (43), and the Tyr(P)-705 STAT3 species has not been detected among ubiquitinated proteins (33) in non-neuronal cells. This raises the possibility that the survival effect of calcineurin inhibitor might be mediated by the inhibition of dephosphorylation of Tyr(P)-STAT3 in young neurons. However, whereas orthovanadate, the general Tyr phosphatase inhibitor, effectively up-regulated Tyr(P)-STAT3, the calcineurin inhibitor did not change levels of Tyr(P)-STAT3 in the presence of the proteasome inhibitor. Moreover, FK506 increased STAT3 protein levels in a similar manner as the proteasome inhibitor, whereas orthovanadate only slightly increased STAT3 in young neurons. These results establish that, in these neurons, any contribution of phosphorylation state to the stability of STAT3 is minimal; rather, the calcineurin inhibitor promotes survival by reducing STAT3 degradation by proteasome.
Calcineurin responds to neuroactivity and mediates synaptic plasticity (44-46). Therefore, the survival effect of FK506 might be achieved through neuroactivity; in fact, as we demonstrated previously (14), neuronal activity plays a critical role in the survival of neonatal neurons. However, this is unlikely because knockdown of STAT3 completely attenuated the survival effect of FK506. Calcineurin is known to influence cell survival by regulating transcription factors (47-49). However, the results reported here reveal an important new function of calcineurin. Calcineurin also mediates developmental changes in synaptic properties (50). Therefore, inhibiting calcineurin in these young neurons may lead to a delay of normal synaptic maturation.
Calcineurin in newborn hippocampus showed much greater enzymatic activity than that in older hippocampus that had already gone through the developmental death period. The mechanism regulating calcineurin has not yet been determined. When young neurons (DIV7) were briefly (2 min) depolarized with 50 mM KCl in the presence of anisomycin, STAT3 protein levels significantly decreased within 2 h (52.7 ± 19.1% of the control, n = 4, p = 0.012), suggesting that neuroactivity induced degradation of STAT3. However, blocking spontaneous activity induced death instead of mimicking the survival effect of calcineurin inhibition (14). Moreover, elevation of excitability did not decrease the total levels of STAT3 (15). These results suggest that spontaneous activity may not be the major pathway to the activation of calcineurin. Calcineurin is regulated by anchoring proteins or regulators (51-53). Intriguingly, the spatial and temporal expression patterns of the regulators of calcineurin are dramatically reconfigured during the postnatal period (54). Although calcineurin activity declines during the postnatal period, it nonetheless remains important for normal hippocampal function in the adult (55), suggesting that its activity needs to be well titrated. In fact, both dysregulated calcineurin activity (56) and a marked decrease in STAT3 (57) have been observed in the aged brain. Therefore, the connections between calcineurin activity and neuronal survival signaling reported here are likely to contribute to novel approaches to treat neurodegenerative diseases.
"year": 2013,
"sha1": "bb05f6f335307e5cbef1b63c2fb8e7aafc0c9104",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/288/28/20151.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "7bdd1ce306d434667702400908142389c82166d9",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
229678018 | pes2o/s2orc | v3-fos-license | Blackwell Online Learning for Markov Decision Processes
This work provides a novel interpretation of Markov Decision Processes (MDP) from the online optimization viewpoint. In such an online optimization context, the policy of the MDP is viewed as the decision variable while the corresponding value function is treated as payoff feedback from the environment. Based on this interpretation, we construct a Blackwell game induced by the MDP, which bridges the gap among regret minimization, Blackwell approachability theory, and learning theory for MDP. Specifically, from the approachability theory, we propose 1) Blackwell value iteration for offline planning and 2) Blackwell $Q$-learning for online learning in MDP, both of which are shown to converge to the optimal solution. Our theoretical guarantees are corroborated by numerical experiments.
I. INTRODUCTION
Sequential decision making under uncertainty lies at the heart of many real-world problems, ranging from portfolio management to robotic control. Many models and methods have been proposed to study the dynamic decision-making process, among which Markov decision processes (MDP) [1] and online optimization [2], together with their variants, are the most popular and well-studied ones. Based on the Bellman principle [3], solving an MDP problem relies on backward induction using dynamic programming, where the future is considered when making decisions. For online settings, where the transition kernel and/or the reward function are unknown, reinforcement learning (RL) algorithms [4] come into play, which are more or less based on the dynamic programming idea. Combined with linear or nonlinear function approximators [5], [6], [7], dynamic programming-based RL methods such as Q-learning [8] and actor-critic [9] have brought about many empirical successes.
On the other hand, online optimization operates in a forward fashion, where decisions are made based on history and no information regarding the future is revealed during the process. In online optimization, the solution concept rests on optimality in hindsight, widely referred to as the no-regret property, as we shall introduce in the background. Closely related to the no-regret idea, Blackwell approachability [10] gives a geometric interpretation of how regret vanishes over time in online decision-making. As pointed out in [11], [12], approachability and no-regret are equivalent, and we can develop no-regret algorithms based on the geometric intuition of approachability. As we will show in this paper, such a connection can be made explicit by considering a Blackwell game with vector-valued payoffs measuring the regret.
Instead of studying MDP from an online optimization perspective, most prior works focus on the online version of MDP, where transition and/or reward are time-varying, referred to as online MDP [13], [14] or non-stationary RL [15], [16]. Under this setting, the no-regret idea plays an important role, and we see no difficulty in extending our approachability framework to these problems, as we adopt the online learning viewpoint. Related to our work, [17] also applies the no-regret idea to MDP problems, providing theoretical guarantees for offline settings. As shown in that paper, the convergence of the proposed method relies on a no-absolute-regret algorithm, such as follow-the-regularized-leader (FTRL) with a linear cost. We argue that such a no-regret method is a special case of our Blackwell approachability-based framework.
In this paper, we take a step toward understanding MDP from the perspective of online optimization. We construct an auxiliary Blackwell game for MDP, so that we can leverage online optimization methods based on regret minimization. Our main contributions include: 1) we give a no-regret value iteration algorithm, based on Blackwell approachability, which we term Blackwell value iteration. We show that this method provides the same asymptotic convergence guarantee as classical value iteration in discounted MDPs; 2) we extend this idea to the RL domain with unknown transition and reward, which accounts for online learning problems. Similar to Q-learning, our proposed method, Blackwell Q-learning, requires neither prior information nor access to a state sampling distribution. Hence, rather than being an asynchronous version of value iteration [18], our Blackwell Q-learning is indeed an RL algorithm based on the approachability idea. To the best of our knowledge, this is the first work that interprets an MDP as a Blackwell game, which leads to provably convergent learning algorithms.
The rest of the paper is organized as follows. We first introduce preliminaries, including Blackwell approachability and no-regret learning, in Section II. We then present our proposed methods based on Blackwell approachability in Section III, where we give both value-iteration-like and Q-learning-like algorithms for offline planning and online learning problems. Our theoretical analysis is supported by the numerical examples presented in Section IV. Finally, we conclude the paper in Section V. Due to space limits, all proofs are omitted from the paper and can be found in the supplementary material.1
II. BACKGROUND
A. Markov Decision Process
An infinite-horizon discounted MDP can be characterized by a tuple $\langle S, A, P, r, \gamma\rangle$, where $S$ is the finite state set; $A$ is the finite action set; $P: S \times A \to \Delta(S)$ is the transition probability, with $\Delta(S) \subset \mathbb{R}^{|S|}$ denoting the simplex over $S$; $r: S \times A \to \mathbb{R}$ is the reward function; and $\gamma \in (0,1)$ is the discounting factor.
For a given policy $\pi: S \to \Delta(A)$, the total expected reward starting from an initial state $s \in S$ is defined as $V^\pi(s) = \mathbb{E}_{P,\pi}\big[\sum_{k=1}^{\infty} \gamma^k r(s_k, a_k)\big]$. If we denote by $\pi(s,a)$ the probability of choosing $a$ at state $s$, then, by the Bellman principle [3], $V^\pi$ can also be written as $V^\pi(s) = \sum_{a \in A} \pi(s,a) Q^\pi(s,a)$, where we denote $Q^\pi(s,a) = r(s,a) + \gamma \mathbb{E}_{s' \sim P(\cdot|s,a)}[V^\pi(s')]$, known as the Q function or Q table. The goal is to find an optimal policy $\pi^*$ such that $V^{\pi^*}(s) \ge V^\pi(s)$ for all $s \in S$.
Since we focus on finite cases throughout this paper, all functions introduced above are of finite dimensions. To better present our work, we use the following notation. For $Q \in \mathbb{R}^{|S||A|}$, $Q(s) := [Q(s,a)]_{a \in A}$ denotes the corresponding vector in $\mathbb{R}^{|A|}$; similarly, for $\pi \in \mathbb{R}^{|S||A|}$, $\pi(s)$ denotes the corresponding vector in $\Delta(A)$. Finally, we assume that for every $s \in S$ there exists an action $a \in A$ such that the Markov chain is aperiodic and irreducible, which is a common assumption in reinforcement learning [4].
B. Blackwell Approachability
Blackwell approachability theory [10] was developed for studying repeated games between two players with vector-valued payoffs. In such a game, which we refer to as a Blackwell game, at the $k$-th round both Player 1 and Player 2 select their actions $x_k \in X$ and $y_k \in Y$, and then Player 1 incurs the vector-valued payoff $u(x_k, y_k) \in \mathbb{R}^m$, where $u: X \times Y \to \mathbb{R}^m$ is a bi-affine function. We assume that the action sets $X, Y$ are compact and convex. The objective of Player 1 is to guarantee that the average payoff converges to a desired closed convex target set $D \subset \mathbb{R}^m$. We let $d(x, D) := \inf_{z \in D} \|x - z\|$ denote the distance between a point $x \in \mathbb{R}^m$ and the set $D$ under the norm $\|\cdot\|$. For the Blackwell game $\langle X, Y, u, D\rangle$, we can define an approachable set for Player 1 as follows.
Definition 1 (Approachable Set [10]). A set $D$ is said to be approachable for Player 1 if there exists an algorithm $\sigma_k(\cdot): X^k \times Y^k \to X$ which chooses an action at each round based on the history of play, $x_{k+1} = \sigma_k(x_{1:k}, y_{1:k})$, such that the average payoff $\bar{u}_k := \frac{1}{k}\sum_{i=1}^{k} u(x_i, y_i)$ satisfies $d(\bar{u}_k, D) \to 0$ as $k \to \infty$, regardless of the actions taken by Player 2.

A key concept in Blackwell approachability theory is the approachable halfspace: a halfspace $H = \{z \in \mathbb{R}^m : \langle w, z\rangle \le c\}$ is approachable if there exists $x \in X$ such that $\langle w, u(x, y)\rangle \le c$ for all $y \in Y$.
Blackwell's approachability theorem states that D is approachable if and only if all halfspaces H that contain D are approachable. Based on this theorem, we can construct a Blackwell strategy that guarantees the approachability, as shown in [10].
Recall the average payoff $\bar{u}_k$ up to time $k$, and let $P_D(x) := \arg\min_{z \in D} \|x - z\|$ denote the projection onto $D$; since we deal with closed convex sets, $P_D(x)$ returns a singleton. If $D$ is approachable, then the halfspace $H$ defined by $H := \{z : \langle z - P_D(\bar{u}_k),\, \bar{u}_k - P_D(\bar{u}_k)\rangle \le 0\}$ is approachable. Therefore, there exists $x^* \in X$ such that $u(x^*, y) \in H$ for all $y$, and hence, if we let $x_{k+1} = x^*$, then $u(x_{k+1}, y_{k+1})$ falls into the same halfspace as the set $D$ does. By doing so, we make $\bar{u}_{k+1}$ closer to the set, as shown in Fig. 1, and, repeating the same procedure at each round, the average payoff converges to $D$.
To better present our idea of leveraging Blackwell approachability and the no-regret idea to solve RL problems, we first consider an example of online learning, which is a repeated game between the player and the nature. At each time $k$, the player chooses an action $x_k \in \Delta_m \subset \mathbb{R}^m$, a simplex in $\mathbb{R}^m$, while the nature chooses a payoff vector $y_k \in \mathbb{R}^m$, which evaluates the action through the revealed payoff $\langle x_k, y_k\rangle$. Here, $m$ is a positive integer. The regret for not having played action $e_i \in \Delta_m$ at time $k$ is given by $y_k(i) - \langle x_k, y_k\rangle$, measuring the difference between the counterfactual outcome of $e_i$ and the received payoff, where $y_k(i)$ is the $i$-th element of the vector $y_k$. Naturally, one would like a sequence $\{x_k\}_k$ that achieves the best possible result in hindsight:

$$\limsup_{n \to \infty} \frac{1}{n}\Big(\max_{i} \sum_{k=1}^{n} y_k(i) - \sum_{k=1}^{n} \langle x_k, y_k\rangle\Big) \le 0, \quad (1)$$

showing that the sequence yields the same average performance as the best action in hindsight; a sequence is said to achieve no regret if it satisfies (1). One way to construct such no-regret sequences is to leverage Blackwell approachability, as we show in the following. We consider the Blackwell game $\langle \Delta_m, \mathbb{R}^m, u, \mathbb{R}^m_-\rangle$ with the vector payoff $u(x_k, y_k) := y_k - \langle x_k, y_k\rangle \mathbf{1}_m$, where $\mathbf{1}_m \in \mathbb{R}^m$ is an all-ones vector. We note that such $u$ measures the change in regret incurred at time $k$. To adopt the Blackwell strategy, we aim to find $x \in \Delta_m$ such that, for all $y$, $u(x, y)$ lies in the halfspace $\{z : \langle \bar{u}_k - P_{\mathbb{R}^m_-}(\bar{u}_k),\, z\rangle \le 0\}$. One such choice is

$$x_{k+1}(i) = \frac{[\bar{u}_k(i)]_+}{\sum_{j=1}^{m} [\bar{u}_k(j)]_+}, \quad \text{(RM)}$$

with an arbitrary $x_{k+1} \in \Delta_m$ when the denominator vanishes. We then obtain that, for $x_{k+1} = \mathrm{RM}(x_{1:k}, y_{1:k})$, $\langle \bar{u}_k - P_{\mathbb{R}^m_-}(\bar{u}_k),\, u(x_{k+1}, y)\rangle \le 0$ for all $y$, showing that (RM) is indeed a Blackwell strategy. Intuitively, this strategy outputs the next action $x_{k+1}$ proportional to the positive part of the current cumulative regret $\bar{u}_k$: actions with larger regret shall be played more frequently, as they bring better payoffs. Hence, it is also referred to as regret matching (RM) and has been studied in various contexts, including game theory [19], [20] and online optimization [11].
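As a quick sanity check of (RM), the following sketch runs regret matching against randomly drawn payoff vectors and reports the average regret in (1), which should approach zero; the payoff distribution and all numbers are arbitrary illustrations.

```python
import numpy as np

def regret_matching_play(payoffs):
    """Run the regret-matching Blackwell strategy (RM) on a sequence of
    payoff vectors y_k in R^m revealed by the nature.
    Returns the sequence of mixed actions x_k on the simplex."""
    m = payoffs.shape[1]
    cum_regret = np.zeros(m)           # running sum of u(x_k, y_k)
    x = np.full(m, 1.0 / m)            # arbitrary initial action
    actions = []
    for y in payoffs:
        actions.append(x.copy())
        # Regret payoff: u(x, y) = y - <x, y> * 1_m
        cum_regret += y - np.dot(x, y)
        positive = np.maximum(cum_regret, 0.0)
        if positive.sum() > 0:
            x = positive / positive.sum()  # proportional to positive regret
        else:
            x = np.full(m, 1.0 / m)        # any action works in this case
    return np.array(actions)

# Toy run: nature favours arm 1; average regret should approach zero.
rng = np.random.default_rng(0)
Y = rng.normal(loc=[0.0, 0.5, 0.1], scale=0.2, size=(5000, 3))
X = regret_matching_play(Y)
avg_regret = Y.sum(axis=0).max() / len(Y) - (X * Y).sum(axis=1).mean()
print("final action:", X[-1].round(3))
print("average regret:", round(avg_regret, 4))
```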
III. BLACKWELL Q-LEARNING
In this section, we present how to incorporate the Blackwell approachability framework into the MDP problem through the Blackwell game introduced above.
A. Blackwell Value Iteration: Offline Planning
We first address the planning problem of MDP, where the transition kernel and reward function are known. From the dynamic programming perspective, solving such an MDP problem requires either value iteration or policy iteration, both of which rely on the stationarity of the environment. However, as we have mentioned before, online learning (optimization) methods operate in a forward fashion for non-stationary or time-varying systems. In this subsection, we show that under stationary environments, online learning methods also guarantee the optimality of the solutions.
As for implementation, we first initialize a Q-value table $Q_0$ and an initial policy $\pi_0(s)$ for each $s \in S$. We run $|S|$ copies of the algorithm, one for each state $s \in S$, and we iteratively reveal the rewards $r(s,a)$ for all $a \in A$, which further translate into the payoffs in the online learning problem induced by the MDP. In this algorithm, for each state $s \in S$, we view the policy $\pi_k(s) \in \Delta(A)$ as the decision variable and $Q_k(s) = (Q_k(s,a))_{a \in A}$ as the payoff vector, which is obtained by an expected SARSA [21] update:

$$Q_k(s,a) = r(s,a) + \gamma \mathbb{E}_{s' \sim P(\cdot|s,a),\, a' \sim \pi_{k-1}}[Q_{k-1}(s', a')]. \quad (2)$$

In this case, the payoff of the decision $\pi_k(s)$ is given by $\langle \pi_k(s), Q_k(s)\rangle$. Similar to the Blackwell game in Section II, we can also construct an approachability game for MDP. For the decision $\pi_k(s)$ and the feedback $Q_k(s)$, we define the cost (regret) vector $R: \Delta(A) \times \mathbb{R}^{|A|} \to \mathbb{R}^{|A|}$ by $R(\pi(s), Q(s)) := Q(s) - \langle \pi(s), Q(s)\rangle \mathbf{1}_{|A|}$. We note that for given $\pi, Q$, the $i$-th entry of $R(\pi(s), Q(s))$ measures the quality of the policy $\delta(a_i)$, i.e., simply choosing $a_i$, compared with the current policy $\pi$. Intuitively, a larger $R_i$ implies that $\delta(a_i)$ could have given a better payoff, had it been implemented.
It is not surprising that, when using a Blackwell strategy such as (RM), $\pi_{k+1} = \mathrm{RM}(\pi_{0:k}, Q_{0:k})$, we drive the averaged regret $\bar{R}_n = \frac{1}{n}\sum_{k=0}^{n-1} R(\pi_k(s), Q_k(s))$ to the nonpositive orthant $\mathbb{R}^{|A|}_-$ in the limit; i.e., no action can produce a positive regret, showing that the limiting point achieves optimality. Since we consider the average regret under the Blackwell framework, our convergence result in Proposition 1 also concerns the averaged Q tables.

Proposition 1. Let $\bar{Q}_n = \frac{1}{n}\sum_{k=0}^{n} Q_k$ and let $Q^*$ be the Q table under the optimal policy; then $\lim_{n \to \infty} \bar{Q}_n = Q^*$.

A similar result has also been shown in [17] when applying FTRL. Compared with their approach, our Blackwell approachability-based method is in fact more generic. We argue that for the linear cost considered in that paper, i.e., $\langle \pi(s), Q(s)\rangle$, it can be shown that FTRL is equivalent to the RM proposed here. The details are included in the supplementary material, based mainly on the connection between Blackwell approachability and online linear optimization studied in [12]. On the other hand, though RM is probably the most natural Blackwell strategy, it is by no means the only one. In the supplementary material, we show that various online linear optimizers, including online gradient descent and mirror descent algorithms, can all be leveraged to construct Blackwell strategies, offering much freedom in designing algorithms.
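A minimal sketch of this procedure on a known tabular MDP is given below; the toy MDP is invented, and since Proposition 1 concerns the averaged Q tables, a production version would also track the running average of Q alongside the iterates shown here.

```python
import numpy as np

def blackwell_value_iteration(P, r, gamma, n_iters=2000):
    """Offline planning via (RM) per state; a sketch of Blackwell value iteration.
    P: (S, A, S) transition tensor, r: (S, A) rewards."""
    S, A = r.shape
    Q = np.zeros((S, A))
    pi = np.full((S, A), 1.0 / A)
    cum_regret = np.zeros((S, A))
    for _ in range(n_iters):
        # Payoff feedback: expected-SARSA backup under the current policy, eq. (2).
        V = (pi * Q).sum(axis=1)        # V(s) = <pi(s), Q(s)>
        Q = r + gamma * P @ V           # Q(s,a) = r(s,a) + gamma * E_s'[V(s')]
        # Regret vector R(pi(s), Q(s)) = Q(s) - <pi(s), Q(s)> 1, accumulated.
        cum_regret += Q - (pi * Q).sum(axis=1, keepdims=True)
        positive = np.maximum(cum_regret, 0.0)
        sums = positive.sum(axis=1, keepdims=True)
        pi = np.where(sums > 0, positive / np.where(sums > 0, sums, 1.0), 1.0 / A)
    return Q, pi

# Two-state, two-action toy MDP.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.7, 0.3], [0.1, 0.9]]])
r = np.array([[1.0, 0.0], [0.0, 2.0]])
Q, pi = blackwell_value_iteration(P, r, gamma=0.9)
print(np.round(pi, 2))
```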
B. Blackwell Q-learning: Online Learning
Though intuitive and provably convergent, Blackwell value iteration only applies to the offline setting where full information regarding the MDP is known. However, in standard RL problems, the agent is required to find the optimal policy without any access to the transition probability and reward functions. Hence, we need an online version of such approachability-based algorithms.
As pointed out in [17], developing such online learning schemes is not straightforward, and there are two major challenges. The first concerns reward revelation: in Blackwell value iteration, we require that at each iteration the rewards $r(s,a)$ for all actions $a \in A$ at a certain state $s \in S$ be revealed for updating the Q table according to (2). However, in RL, since the reward function is unknown, we only have access to the feedback corresponding to the actual action executed at each iteration. Apparently, such bandit feedback cannot produce the regret vector $\bar{R}_k$. One workaround proposed in [17] is to use the importance sampling technique from multi-arm bandit problems [22]. The update rule becomes $Q_{k+1}(s,a) = (r(s,a) + \gamma \mathbb{E}_{\pi_k}[Q_k(s', a')]) / \pi_k(s,a)$ if $a$ is the action sampled from $\pi_k(s)$ at state $s$, and $Q_{k+1}(s,a) = 0$ otherwise. Unfortunately, as in bandit problems, the incorporation of importance sampling can only ensure convergence in expectation, a weaker guarantee than the almost sure convergence of Q-learning, which is less desirable in practice.
The other challenge is that the asynchronous update in online learning is more involved than the synchronous one. In Blackwell value iteration, we update every state at every iteration, whereas in an online setting this synchronous update is impossible, which introduces additional complexity to the convergence analysis. Different from the synchronous update, by the current iteration at time $k$, $Q_k(s)$ may have been updated at different time instances for different states, and it is highly likely that some states are visited more frequently than others. One straightforward remedy is to require all states to be visited with the same frequency, i.e., the state to be updated at each iteration is chosen uniformly from $S$. With this additional condition, the asynchronous version of value iteration still guarantees convergence, as shown in [17], though it still falls within the realm of offline planning, since in true online settings the state transition is not freely chosen but is influenced by the executed actions.
In this subsection, we propose an online learning scheme based on the Blackwell approachability. We address these issues by the two-time scale asynchronous stochastic approximation [23]. By leveraging the Lyapunov stability theory of differential inclusion developed in [24], [25], we show that adopting the Blackwell strategy in asynchronous update gives a provably convergent online learning scheme for tackling RL problems.
As we have discussed above, in the online setting the state transition is influenced by the executed actions, which are sampled from the policy, and this in turn influences the update of the Q table. These coupled dynamics of the policy update and the Q-table update make it difficult to directly extend value iteration to online learning. One way to decouple the two dynamics is to adjust their timescales. Simply put, we update the Q table on the faster timescale and the policy on the slower one, where the faster timescale sees the slower as quasi-static while the slower timescale sees the faster as equilibrated. Specifically, we consider the following learning scheme based on the regret matching introduced above.
Let $s_{k+1} \in S$ and $a_{k+1} \in A$ be the state and action visited at time $k+1$, upon which the agent receives a noisy reward $R_{k+1}$ from the environment. We assume that $R_{k+1}$ is unbiased in the sense that, for the $\sigma$-fields $\mathcal{F}_{k+1} = \sigma\{a_{0:k+1}, s_{0:k+1}, R_{0:k+1}\}$, $\mathbb{E}[R_{k+1}|\mathcal{F}_{k+1}] = r(s_{k+1}, a_{k+1})$. Since states and actions are visited asynchronously, the agent needs the asynchronous counters $\phi_k(s,a)$, recording the number of times the pair $(s,a)$ has been visited up to time $k$, together with the step sizes $\{\alpha(k)\}_{k \in \mathbb{N}}, \{\beta(k)\}_{k \in \mathbb{N}}$, to determine the learning rates. Based on all the above, the agent estimates the Q function by (3) and then updates its policy by (4). Finally, the agent chooses an action based on the regret matching idea in (RM), i.e., sampling an action with probability proportional to $[R(\pi_k(s), Q_k(s))]_+$, which we denote by $\mathrm{RM}(\pi_k(s), Q_k(s))$ with abuse of notation. We summarize the scheme as follows: for every $s \in S$ and $a \in A$,

$$Q_{k+1}(s,a) = Q_k(s,a) + \alpha(\phi_k(s,a))\, \mathbb{1}\{(s,a) = (s_{k+1}, a_{k+1})\}\, \big(R_{k+1} + \gamma V_k(s') - Q_k(s,a)\big), \quad (3)$$

$$\pi_{k+1}(s) = \pi_k(s) + \beta(\nu_k(s))\, \mathbb{1}\{s = s_{k+1}\}\, \big(e_{a_{k+1}} - \pi_k(s)\big), \quad a_{k+1} \sim \mathrm{RM}(\pi_k(s), Q_k(s)), \quad (4)$$

where $s'$ denotes the successor state observed after taking $a_{k+1}$ at $s_{k+1}$, $\nu_k(s) := \sum_{a \in A} \phi_k(s,a)$ counts the visits to state $s$, $V_k(s) = \sum_{a \in A} \pi_k(s,a) Q_k(s,a)$, and $e_a$ is the unit vector in $\mathbb{R}^{|A|}$. We note that, different from Blackwell value iteration, we here do not rely on the full histories of $\pi_k$ and $Q_k$, as the stochastic approximation schemes (3) and (4) already return averaged results.
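The sketch below implements these coupled updates following the reconstruction of (3)-(4) above; the step-size schedules, the env interface (reset/step returning the next state, reward, and a done flag), and the episode loop are illustrative assumptions rather than the paper's exact protocol.

```python
import numpy as np

def blackwell_q_learning(env, S, A, gamma, episodes=500):
    """Two-timescale Blackwell Q-learning sketch.
    Assumed interface: env.reset() -> s; env.step(a) -> (s', reward, done).
    alpha (fast, Q) and beta (slow, policy) satisfy beta(k) = o(alpha(k))."""
    Q = np.zeros((S, A))
    pi = np.full((S, A), 1.0 / A)
    visits = np.zeros((S, A), dtype=int)   # asynchronous counters phi_k(s, a)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Regret-matching action selection: proportional to positive regret.
            regret = np.maximum(Q[s] - pi[s] @ Q[s], 0.0)
            probs = regret / regret.sum() if regret.sum() > 0 else np.full(A, 1.0 / A)
            a = np.random.choice(A, p=probs)
            s_next, reward, done = env.step(a)
            visits[s, a] += 1
            alpha = 1.0 / visits[s, a] ** 0.6         # fast timescale
            beta = 1.0 / (1.0 + visits[s].sum())      # slow timescale, beta = o(alpha)
            # (3) fast: expected-SARSA-style Q update toward r + gamma * V_k(s').
            V_next = pi[s_next] @ Q[s_next]
            Q[s, a] += alpha * (reward + gamma * V_next - Q[s, a])
            # (4) slow: move pi(s) toward the unit vector of the sampled action.
            pi[s] += beta * (np.eye(A)[a] - pi[s])
            s = s_next
    return Q, pi
```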
It is clear that (3) and (4) are coupled, as $V_k$ involves both $\pi_k$ and $Q_k$. Technically speaking, in order to analyze the limiting behavior of the coupled dynamics, we must "decouple" them, and one possible approach, as proposed in [23], [9], is to adjust the timescales. Specifically, in our case, we require that $\beta(k) = o(\alpha(k))$, meaning that the Q update (3) operates on a faster timescale than the policy update (4). Intuitively speaking, when the synchronous update (2) becomes impossible in an online setting, in order to produce a feedback $Q_k$ that can approximately evaluate the current policy $\pi_k$, we must wait until $Q_k$ stabilizes before we update the policy. By running the policy update on a slow timescale, the Q update sees $\pi_k$ as quasi-static, hence (3) can be viewed as expected SARSA [21], while the policy update sees $Q_k$ as stabilized, serving as an approximation of $Q^{\pi_k}$. In what follows, we show that the two-timescale stochastic approximation indeed converges to the optimal Q function and policy.
1) Convergence of the fast timescale: To analyze the Q-learning update in (3), we resort to the stochastic approximation framework of [24], [25], and we first rewrite it in a more concise form. We define an operator $T: \mathbb{R}^{|S||A|} \times \mathbb{R}^{|S||A|} \to \mathbb{R}^{|S||A|}$ whose $(s,a)$ entry is given by $T_{(s,a)}(\pi, Q) := r(s,a) + \gamma \mathbb{E}_{s' \sim P(\cdot|s,a)}\big[\sum_{a' \in A} \pi(s',a') Q(s',a')\big]$, and we define a vector $\Gamma_{k+1} \in \mathbb{R}^{|S|}$ whose $s$ entry collects the martingale-difference noise of the update at state $s$. We further define the asynchronous step size $\bar{\alpha}_k$ and the relative step sizes $\mu_k(s,a)$ as $\bar{\alpha}_k := \max_{(s,a)} \alpha(\phi_k(s,a))$ and $\mu_k(s,a) := \alpha(\phi_k(s,a)) / \bar{\alpha}_k$.
Note that (3) admits a compact form: letting $M_k$ be the $|S||A| \times |S||A|$ diagonal matrix whose $(s,a)$ entry is $\mu_k(s,a)$, the asynchronous update (3) can be rewritten as

$$Q_{k+1} = Q_k + \bar{\alpha}_k M_k \big(T(\pi_k, Q_k) - Q_k + \Gamma_{k+1} \otimes \mathbf{1}_{|A|}\big), \quad (5)$$

where $\otimes$ denotes the Kronecker product.
Denote by {Q_t(s)}_{s∈S} the interpolated version of the stochastic approximation update above (see Definition 2.2 in [26]), where t is the continuous time index.
Proposition 2 ([26]). There exists 0 < η < 1 such that, almost surely, the interpolated stochastic approximation {Q_t(s)}_{s∈S} is an asymptotic pseudo-trajectory of the differential inclusion

Q̇_t ∈ { M (T(π, Q_t) − Q_t) : M ∈ Ω_η },   (6)

where

Ω_η := { diag(µ) : µ(s, a) ∈ [η, 1] for all (s, a) ∈ S × A }.   (7)

Proposition 3. When π is fixed, Q^π is the unique global attractor of (6).
2) Convergence of the slow timescale: From the discussions above, the sequence of Q-tables {Q_k}_k converges to Q^π when π is fixed, and Q^π is Lipschitz continuous in π [26]. Therefore, we can study the limiting behavior of (4) by analyzing its continuous counterpart (8), where we replace the Q-table in (4) with the attractor Q^{π_t} given the current policy π_t, as the slow timescale views the Q updates as stabilized.
where Ω_η is defined similarly to (7). It remains to show that the differential inclusion (8) has a global attractor, which we prove by a standard Lyapunov argument; moreover, we shall show that this global attractor is indeed the set of optimal policies. In the following lemma, we identify the Lyapunov function associated with (8).

Lemma 1. For every s ∈ S and any fixed ω(s) ∈ Ω_η, the value V^{π_t}(s) is non-decreasing along any solution π_t of (8).

With this lemma, we now construct the Lyapunov function for (8), which further leads to the global convergence of the algorithm. First, given an optimal policy π*, we define L(π) = Σ_{s∈S} ( V^{π*}(s) − V^π(s) ). Clearly, L(π) is a positive semidefinite function, since optimality gives V^{π*}(s) − V^π(s) ≥ 0 for all s ∈ S, and L(π) = 0 only if π is an optimal policy. Then, with Lemma 1, for any t > 0 we have (d/dt) L(π_t) ≤ 0. This implies that L(π) is a Lyapunov function for the differential inclusion (8), with global attractor Π = {π : π is an optimal policy}, showing that π_t given by (8) converges almost surely to the attractor. Therefore, from the convergence result of the continuous dynamics, we claim the convergence of the coupled dynamics (3), (4).

Proposition 4. The sequence {Q_k, π_k}_k given by the coupled recursive scheme (3) and (4) converges almost surely to (Q^{π*}, π*), where π* is an optimal policy and Q^{π*} is the associated optimal Q function.
IV. NUMERICAL EXPERIMENTS
In this section, we present experimental results from applying our Blackwell Q-learning to MDP problems. Since our proposed method resembles expected SARSA [21], we consider the cliff-walking task from that paper, where the agent has to find its way from the start to the goal in a grid world. The agent can take any of four movement actions: up, down, left, and right, each of which moves the agent one square in the corresponding direction. Each step results in a reward of -1, except when the agent steps into the cliff area, which results in a reward of -100 and an immediate return to the start state. The episode ends upon reaching the goal state.
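A minimal environment matching this description, assuming the usual 4x12 grid of the classic benchmark (the grid dimensions are our assumption; the paper does not state them here):

```python
class CliffWalk:
    """Cliff-walking grid world: -1 per step, -100 and reset for the cliff."""
    def __init__(self, rows=4, cols=12):
        self.rows, self.cols = rows, cols
        self.start, self.goal = (rows - 1, 0), (rows - 1, cols - 1)
        self.pos = self.start

    def reset(self):
        self.pos = self.start
        return self.pos

    def step(self, action):  # 0: up, 1: down, 2: left, 3: right
        dr, dc = [(-1, 0), (1, 0), (0, -1), (0, 1)][action]
        r = min(max(self.pos[0] + dr, 0), self.rows - 1)
        c = min(max(self.pos[1] + dc, 0), self.cols - 1)
        self.pos = (r, c)
        if r == self.rows - 1 and 0 < c < self.cols - 1:  # cliff cells
            self.pos = self.start
            return self.pos, -100.0, False
        return self.pos, -1.0, self.pos == self.goal
```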
We evaluate the performance of Q-learning, SARSA, expected SARSA, and our Blackwell Q-learning. Note that our Blackwell Q-learning does not need any hyperparameter for encouraging exploration, as (RM) always retains some probability for actions that yield positive regret. Hence, our method is less aggressive in terms of exploitation compared with the others. In our experiments, we adopt an ε-greedy policy for the first three, with ε = 0.1 for Q-learning and SARSA; for expected SARSA, we run the algorithm with two different exploration rates, ε = 0.1 and ε = 0.5. We test these algorithms over 2000 episodes and average the results over 200 independent runs. The numerical results are shown in Fig. 2. At first glance, both expected SARSA with ε = 0.1 and our Blackwell Q-learning give the best final performance, though expected SARSA converges faster, due to the greedy policy. However, we note that the success of expected SARSA relies on a carefully crafted exploration rate: if we set ε = 0.5, then the performance is even worse than that of SARSA. This observation highlights one merit of Blackwell Q-learning: it is hyperparameter-free for exploration.
Though in our experiments Blackwell Q-learning does not seem to outperform expected SARSA in terms of convergence rate, because of the difference in action selection, we argue that such conservative action selection is actually more desirable for online learning problems where the environment is nonstationary. One prominent example is learning in games [27], where the payoff is jointly determined by all players' actions. In this case, if one player only seeks the best response based on his own Q function, he may not achieve any equilibrium in the end, as observed in [28]. Due to limited space, we fully develop these arguments in the supplementary material.
V. CONCLUSION
We have introduced a novel approach for tackling MDP problems based on the Blackwell approachability theory. By constructing an auxiliary Blackwell game, we use its geometric interpretation to solve MDP problems by deriving no-regret learning from the Blackwell strategy, which provides an alternative to dynamic programming for MDP. Specifically, we have discussed one simple Blackwell strategy, regret matching, and how it can be incorporated into both offline planning methods (e.g., Blackwell value iteration) and online learning schemes (e.g. Blackwell Q-learning). Both are provably convergent. Related numerical results have been used to corroborate our results. As for future work, we would like to extend our Blackwell approachability-based idea to online (adversarial) MDP [14] and multi-agent systems, where the environment is not stationary from any player's perspective, hence imposing difficulties on applying dynamic programming. | 2020-12-29T02:15:51.701Z | 2020-12-28T00:00:00.000 | {
"year": 2020,
"sha1": "5435b741bdc271971de607defd85ec2d1cac9bc6",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2012.14043",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "5435b741bdc271971de607defd85ec2d1cac9bc6",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
31423712 | pes2o/s2orc | v3-fos-license | Combination biologic therapy for the treatment of severe palmoplantar pustulosis
IL: interleukin
INTRODUCTION
Psoriasis is a common chronic disease and may be the most prevalent autoimmune disease. The estimated prevalence is 2% to 4% of the global population. 1 Fortunately, providers have a plethora of medications to choose from, but even so, some cases can be very difficult to treat. Herein, we present a challenging case of palmoplantar pustulosis primarily affecting the soles of the feet. Successful treatment required the use of a novel combination of biologic therapy. To our knowledge, this is the first published case highlighting adalimumab and ustekinumab in combination for the treatment of severe palmoplantar pustulosis.
CASE REPORT
A 33-year-old, 170-pound, fit white male auto mechanic presented with severe palmoplantar inflammation characterized by deep-seated pustules, discrete vesicles, and weeping, eroded plaques within a background of erythema and scale (Fig 1, A). The feet were significantly more involved than the hands. Aside from left great toe dactylitis, no other skin or nail findings were present. At the time of presentation, the differential diagnosis favored palmoplantar pustular psoriasis but included dyshidrotic eczema, allergic contact dermatitis, and dermatitis secondary to hyperhidrosis. Initial treatment interventions included topical steroids, narrow-band ultraviolet B therapy, and oral prednisone (maximum of 60 mg/d), but all failed to provide improvement. Botox injections, to treat underlying hyperhidrosis, were attempted but could not be tolerated. Subsequently, cyclosporine, 150 mg twice daily (approximately 4 mg/kg/d), and methotrexate, 10 mg weekly, were initiated, resulting in 85% to 90% disease clearance after 4 weeks. Despite his initial response, we were unable to reduce the dosage of cyclosporine. Addition of mycophenolate (1 g twice daily) was also inadequate to taper the cyclosporine dose. Consequently, we opted for a trial of adalimumab, 40 mg, initially taken subcutaneously every other week. Methotrexate was discontinued on initiation of adalimumab; however, mycophenolate was continued. After 4 months, the adalimumab was increased to 40 mg weekly, and the patient was on this dose, along with mycophenolate, for 6 months. Although this treatment helped, it did not clear his condition nor did it facilitate tapering of cyclosporine. Finally, we opted to add the higher dosing regimen of ustekinumab, 90 mg, to his treatment plan (day 1, day 28, then every 3 months) given the severity of disease. The addition of ustekinumab facilitated the removal of all other immunosuppressants by the third dose with the exception of adalimumab. After 3 doses of ustekinumab in combination with adalimumab (40 mg weekly), the patient obtained 95% clearance and has remained clear for more than 4 months (Fig 1, B). Importantly, no adverse effects occurred.
DISCUSSION
Palmoplantar pustulosis, felt by many to be a variant of psoriasis, is a complex disorder that is often recalcitrant to single-agent therapy. It is likely that the pathophysiologies of psoriasis and palmoplantar pustulosis overlap and result from a combination of genetic, epigenetic, and environmental factors leading to unregulated hyperproliferation of keratinocytes. 2 Traditional therapies including corticosteroids, systemic immunosuppressants, and phototherapy can be effective in treating palmoplantar pustulosis, including palmoplantar psoriasis and many of the inflammatory conditions in our patient's differential diagnosis. In our case, however, topical therapies and phototherapy were ineffective. Systemic agents, including methotrexate, mycophenolate, and cyclosporine (notably in combination), were effective but insufficient in facilitating tapering of cyclosporine, which over time can lead to hypertension and renal toxicity.
Unlike the above therapies, biologic therapies target individual cytokines and pro-inflammatory proteins. Given the inconclusiveness of the diagnosis in our case, biologic therapies were only considered after multiple more broadly effective therapies failed. Even among the biologics, options abound, and our choice to start with adalimumab was not done without reason. Tumor necrosis factor-α has a role as a regulator of the interleukin (IL)-23/T-helper-17 axis in that it promotes dendritic cell production of proinflammatory IL-23, and thus has a broader inflammatory target compared with ustekinumab, which only targets cytokines primarily derived from dendritic cells, namely IL-12 and IL-23. 1,3 Although adalimumab kept our patient's condition stable, it did not provide full relief or adequately manage his condition, as he was still cyclosporine dependent. It was at this point that ustekinumab was added as a therapy. Despite limited literature on the safety of combination biologic therapy, 4 the concept of adding a biologic in another class is logical, as targeting multiple components of the inflammatory cascade might be therapeutically superior. Indeed, it was this combination therapy that afforded our patient near complete clearance and a sustained response.
When deciding whether to initiate any combination therapy, in particular, combinations of biologic agents, both safety and cost are important factors to consider. In addition to the potential for risks associated with individual agents, such as malignancy and infection, other yet unknown adverse effects could occur. Thus, close monitoring, including more frequent follow-up visits and additional laboratory tests, may be warranted. In our case, after starting the therapy, we regularly communicated with the patient to assess his tolerability to the medication including any adverse effects, and none were reported.
Cost is obviously also a significant hurdle when deciding on combination therapies. All of the newer biologic therapies are expensive. 5 It is important to note, however, that patients with severe inflammatory skin disease accumulate costs related to loss of productivity and missed workdays because of the daily impact disease has on their lives. Costly yet effective drug treatments also may have the potential to reduce long-term overall cost through reduction in other aspects such as laboratory monitoring and office visits. 6 A limitation of our case is the lack of a skin biopsy. Because the patient was on multiple therapies, he preferred not to suspend treatment solely for diagnostic confirmation. Additionally, the patient's insurance afforded us the ability to use multiple biologics, a luxury likely not applicable to many patients at this time.
As the pathophysiology of psoriasis becomes more fully understood, new therapeutic agents and combinations of existing therapies are likely to become options for refractory cases. In this case, we found successful treatment with combination therapy using adalimumab and ustekinumab. Although long-term safety and cost are strong considerations, this approach may be an option for other patients with similar or more extensive palmoplantar disease resistant to standard treatments. | 2018-04-03T06:25:05.425Z | 2017-05-01T00:00:00.000 | {
"year": 2017,
"sha1": "e5a55e96ce5b72ba21c04fc33f289fdc9eddf99a",
"oa_license": "CCBYNCND",
"oa_url": "http://www.jaadcasereports.org/article/S2352512617300589/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e5a55e96ce5b72ba21c04fc33f289fdc9eddf99a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119584886 | pes2o/s2orc | v3-fos-license | The Probability of Intransitivity in Dice and Close Elections
Intransitivity often emerges when ranking three or more alternatives. Condorcet paradox and Arrow's theorem are key examples of this phenomena in the social sciences, and non-transitive dice are a fascinating aspect of games of chance. In this paper, we study intransitivity in natural random models of dice and voting. First, we follow a recent thread of research that aims to understand intransitivity for three or more $n$-sided dice (with non-standard labelings), where the pairwise ordering is induced by the probability, relative to 1/2, that a throw from one die is higher than the other. Conrey, Gabbard, Grant, Liu and Morrison studied, via simulation, the probability of intransitivity for a number of random dice models. Their findings led to a Polymath project studying three i.i.d. random dice with i.i.d. faces drawn from the uniform distribution on $1,\ldots,n$, and conditioned on the average of faces equal to $(n+1)/2$. The Polymath project proved that the probability that three such dice are intransitive is asymptotically 1/4. We study some related models and questions. Among others, we show that if the uniform dice faces are replaced by any other continuous distribution (with some mild assumptions) and conditioned on the average of faces equal to zero, then three dice are transitive with high probability, in contrast to the unique behavior of the uniform model. We also extend our results to stationary Gaussian dice, whose faces, for example, can be the fractional Brownian increments with Hurst index $H\in(0,1)$. We study analogous questions in social choice theory, where we define a notion of almost tied elections in the standard voting model, and show that the probability of Condorcet paradox for those elections approaches 1/4, in contrast to the unconditioned case. We also explore voting models where methods other than simple majority are used for pairwise elections.
Introduction
Intransitive dice and Condorcet paradox are two examples of phenomena featuring an unexpected lack of transitivity. In this paper we present some quantitative results regarding their frequency in probabilistic settings. We introduce and present our results for dice and voting separately, making comparisons between the two settings where appropriate.
Intransitive dice
For purposes of this paper, we call an n-sided die (think of gambling dice) any vector a = (a_1, ..., a_n) of real numbers. The face-sum of a die a is Σ_{i=1}^n a_i. We say that die a beats die b, denoted a ≻ b, if a uniformly random face of a has greater value than a uniformly random face of b. In other words, a ≻ b if Σ_{i,j=1}^n 1[a_i > b_j] > n^2/2. We call a finite set of n-sided dice intransitive if the "beats" relation on the set cannot be extended to a linear order. That is, a set of dice is intransitive if it contains a subset a^(1), ..., a^(k) such that a^(1) ≻ a^(2) ≻ ... ≻ a^(k) ≻ a^(1). A well-known and simple example with three sides is a = (2, 4, 9), b = (1, 6, 8) and c = (3, 5, 7). One checks that a ≻ b ≻ c ≻ a. If a set of dice forms a linear ordering, then we call it transitive. Because of ties there can be sets that are neither transitive nor intransitive, but they occur with negligible probability in the models we are studying.
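The three-sided example can be verified directly (a short sketch; the counting follows the definition of the "beats" relation above):

```python
def beats(a, b):
    """Die a beats die b if a random face of a exceeds a random face of b."""
    wins = sum(ai > bj for ai in a for bj in b)
    return wins > len(a) * len(b) / 2

a, b, c = (2, 4, 9), (1, 6, 8), (3, 5, 7)
assert beats(a, b) and beats(b, c) and beats(c, a)  # the intransitive cycle
```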
Note that even though Theorem 1.1 is proved for three dice, a union bound implies that k random dice are transitive with high probability for any fixed k. The proof considers the random variables W^{(kk′)} := Σ_{i,j=1}^n 1[k_i < k′_j] and their normalized versions W̄^{(kk′)} := (W^{(kk′)} − n^2/2) / √(αn^3). Note that the dice are intransitive if and only if sgn(W̄^{(ab)}) = sgn(W̄^{(bc)}) = sgn(W̄^{(ca)}). We show that the sum W̄^{(ab)} + W̄^{(bc)} + W̄^{(ca)} has small variance and, at the same time, each W̄^{(kk′)} is fairly anti-concentrated. This implies that with high probability the signs of the W̄^{(kk′)} cannot all be equal.
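To see Theorem 1.1 numerically, a Monte Carlo sketch (parameter values are ours; for jointly Gaussian faces, conditioning on face-sum zero is the same as centering by the empirical mean, as recalled in Section 2):

```python
import numpy as np

rng = np.random.default_rng(0)

def conditioned_die(n):
    # Gaussian faces conditioned on face-sum zero: subtract the empirical mean.
    x = rng.standard_normal(n)
    return x - x.mean()

def is_intransitive(n):
    a, b, c = (conditioned_die(n) for _ in range(3))
    w = lambda u, v: np.sum(u[:, None] < v[None, :])  # W = #{(i,j): u_i < v_j}
    s = [np.sign(w(a, b) - n * n / 2),
         np.sign(w(b, c) - n * n / 2),
         np.sign(w(c, a) - n * n / 2)]
    return s[0] == s[1] == s[2] != 0

# The fraction of intransitive triples should decay as n grows (Theorem 1.1).
print(np.mean([is_intransitive(200) for _ in range(500)]))
```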
To understand the differing behavior of normal and uniform dice implied by Theorem 1.1 and the Polymath result, we first note that, as shown by Polymath [Pol17a], for unconditioned dice with faces uniform in (0, 1), the face-sums determine whether a beats b with high probability. Without conditioning on face-sums, the distribution of the random variable W = Σ_{i,j=1}^n 1[a_i < b_j] is the same regardless of whether the faces are Gaussian or uniform in (0, 1) (this is because the value of W is preserved by applying a strictly increasing function, in particular the Gaussian CDF Φ(·), to the faces). Furthermore, under the conditioning, the transformed faces are close to uniformly distributed and (globally) weakly dependent, which suggests that with high probability the corresponding face-sum expression determines the winner. Experiments for the Gaussian and exponential distributions support this picture, which we record as Conjecture 1.2. Note that in the conjecture, the distribution function depends on n through the conditioning, though for large n, the marginal effect of the conditioning is small. One hopes that the proof strategy for unconditioned dice from [Pol17a] can be extended to prove Conjecture 1.2, at least for the Gaussian distribution, but we were unable to do so.
Condorcet paradox: Social chaos for close majority elections
The Condorcet paradox is a famous intransitive phenomenon in social choice theory. Consider n voters trying to decide between k alternatives. Each voter has a ranking (linear ordering) of the alternatives and we would like to aggregate the n rankings into a global one. A natural approach is as follows: Given a pair of alternatives a and b, we say that a beats b if a majority of voters put a ahead of b in their rankings (we always assume n is odd to avoid dealing with ties). Aggregating these majority elections for all K := k(k−1)/2 pairs of alternatives, we obtain a tournament graph on k vertices, that is, a complete graph where each edge is directed.
If this tournament is transitive (i.e., it induces a linear ordering), or if there exists a Condorcet winner (i.e., an alternative that beats all others), we might conclude that there is a clear global winner of the election. However, the Condorcet paradox says that the pairwise rankings need not produce a Condorcet winner. For example, we might have three voters with rankings a ≻ b ≻ c, b ≻ c ≻ a and c ≻ a ≻ b, respectively. Majority aggregation results in a beating b, b beating c and c beating a. Assume a probabilistic model with n voters and k alternatives, where each voter samples one of the k! rankings independently and uniformly. This is called the impartial culture assumption and is the most common model studied in social choice. Despite the example above, one might hope that under impartial culture the paradox is unlikely to arise for a large number of voters. However, it was one of the earliest results in social choice theory [Gui52, GK68] that it is not so. In particular, letting P_Cond(k, n) denote the probability of a Condorcet winner for n voters and k alternatives, and P_Cond(k) := lim_{n→∞} P_Cond(k, n), we have

P_Cond(3) = 3/4 + (3/(2π)) arcsin(1/3) ≈ 91.2%.   (1.1)

For k ≥ 4 there is no simple expression, but the numerical values up to k = 50 were computed by Niemi and Weisberg [NW68]; for example, P_Cond(10) ≈ 51.1% and P_Cond(27) ≈ 25.5%. The asymptotic behavior is given by May [May71] in (1.2), which shows that lim_{k→∞} P_Cond(k) = 0. If one is interested in the probability of a completely transitive outcome, the best asymptotics known [Mos10] are exp(−Θ(k^{5/3})). We note that there is a vast literature on different variants and models of voting paradoxes. See Gehrlein [Geh02] for one survey of results in related settings.
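A quick Monte Carlo check of (1.1) under impartial culture (a sketch; the number of voters and trials are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def has_condorcet_winner(n, k=3):
    # Impartial culture: each voter draws a uniformly random ranking.
    ranks = np.argsort(rng.random((n, k)), axis=1)
    pos = np.argsort(ranks, axis=1)   # pos[v, c]: rank of candidate c for voter v
    for c in range(k):
        if all(np.sum(pos[:, c] < pos[:, d]) > n / 2 for d in range(k) if d != c):
            return True
    return False

# Should approach 3/4 + (3/(2*pi))*arcsin(1/3) ~ 0.912 for k = 3.
print(np.mean([has_condorcet_winner(101) for _ in range(2000)]))
```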
Motivated by the dice models studied in [CGG+16] and [Pol17b], we look at the probability of Condorcet paradox under impartial culture, conditioned on all pairwise elections being close to tied. More precisely, for each pair of alternatives {a, b}, define the random variable S^{(ab)} to be the number of voters that prefer a to b, minus the number of voters preferring b to a. In other words, the sign of S^{(ab)} determines the alternative that wins the pairwise election. Let Y^{(ab)} := sgn(S^{(ab)}) and let Y be a random tuple encoding the K pairwise winners via the Y^{(ab)}, having K entries with values in {−1, 1}. Furthermore, for d ≥ 1, let E_d be the event that |S^{(ab)}| ≤ d for every pair {a, b}. We think of the event E_d as "the elections are d-close", with d = 1 corresponding to almost perfectly tied elections.
Our main result for voting uses a multidimensional local limit theorem to show that the probability of a Condorcet winner for almost tied elections goes to zero much faster than in (1.2). Actually, we prove the following stronger result.

Theorem 1.3. Let n be odd, d ≥ 1 and y ∈ {−1, 1}^K. Then,

Pr[Y = y ∧ E_d] = α_k (d^K / n^{K/2}) (1 + o_k(1)),   (1.3)

where α_k > 0 depends only on k and o_k(1) denotes a function that, for every fixed k, goes to zero as n goes to infinity. In particular,

Pr[Y = y | E_d] = 2^{−K} (1 + o_k(1)),   (1.4)

Pr[there is a Condorcet winner | E_d] = (k / 2^{k−1}) (1 + o_k(1)).   (1.5)

Comparing Theorem 1.3 to intransitivity of random uniform dice conditioned on their sum, first note that for almost tied elections and k = 3, the asymptotic probability of a Condorcet winner computed from (1.5) is 3/4, which is the same as the probability of transitivity for dice. On the other hand, there is a difference in the transition between the transitive and chaotic regimes. Assuming dice with faces uniform in (−1, 1), the model is chaotic when conditioned on face-sums equal to zero, but, as shown by Polymath [Pol17a], it becomes transitive as soon as we condition on face-sums of absolute value at most d for d = ω(log n). However, the voting outcomes behave chaotically for d-close elections for any d = o(√n) and transition into the "intermediate", rather than transitive, regime given by (1.1). Furthermore, (1.3) means that the tournament on k alternatives determined by Y is asymptotically random. [CGG+16] conjectured that k random dice also form a random tournament; however, [Pol17b] report experimental evidence against this conjecture. We also note that the proof of Theorem 1.3 can be modified such that its statement holds even when conditioning on only K − 1 out of K pairwise elections being d-close.
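Theorem 1.3 for k = 3 can be illustrated by rejection sampling on E_d (a sketch with our own parameter choices; the six tuples below are the transitive pairwise-choice patterns (x_ab, x_bc, x_ca) that a single ranking can induce):

```python
import numpy as np

rng = np.random.default_rng(2)
TUPLES = np.array([(1, 1, -1), (1, -1, -1), (-1, 1, -1),
                   (-1, 1, 1), (1, -1, 1), (-1, -1, 1)])

def paradox_given_close(n, d, trials=200_000):
    hits = paradoxes = 0
    for _ in range(trials):
        votes = TUPLES[rng.integers(0, 6, size=n)]
        S = votes.sum(axis=0)                    # (S_ab, S_bc, S_ca)
        if np.max(np.abs(S)) <= d:               # the event E_d
            hits += 1
            y = np.sign(S)
            paradoxes += (y[0] == y[1] == y[2])  # cyclic outcome
    return paradoxes / hits if hits else float('nan')

# For d-close elections the paradox probability should approach 1/4.
print(paradox_given_close(n=51, d=3))
```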
Work by Kalai [Kal10] calls the situation when Y is a random tournament social chaos. He considers the impartial culture model (without conditioning) and an arbitrary monotone odd function f : {−1, 1}^n → {−1, 1} for pairwise elections (the setting we considered so far corresponds to f = Maj_n). Under these assumptions, he proves that social chaos is equivalent to the asymptotic probability of a Condorcet winner for three alternatives being equal to 3/4. [Kal10] contains another equivalent condition for social chaos, stated in terms of noise sensitivity of the function f for only two alternatives. It is interesting to compare it with the reduction from three to two dice in Lemma 2.1 of [Pol17b].
Condorcet paradox: Generalizing close elections -A case study
It would be interesting to extend Theorem 1.3 to other natural pairwise comparison functions such as weighted majorities and recursive majorities, similar to the electoral college in the USA. However, in order to formulate such a result, it is first necessary to define d-close elections for an arbitrary function. We explore the difficulties around this issue in a simple example. Let us assume that there are three candidates a, b, c and a number of voters n that is divisible by three, letting m := n/3. We take f : {−1, 1}^n → {−1, 1} given by

f(x) := sgn( Σ_{i=1}^m sgn(x_{3i−2} + x_{3i−1} + x_{3i}) ).

In words, f is a two-level majority: the majority of the votes of m triplets, where the vote of each triplet is decided by majority.
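A direct implementation of this two-level majority, for concreteness (a short sketch; x is a ±1-vector of length n = 3m, with m odd to avoid ties):

```python
import numpy as np

def f(x):
    """Two-level majority: the majority vote of the m triplet majorities."""
    w = x.reshape(-1, 3).sum(axis=1)       # w_i = x_{3i-2} + x_{3i-1} + x_{3i}
    return int(np.sign(np.sign(w).sum()))  # m odd ensures a nonzero outcome
```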
The function f possesses many pleasant properties: It is odd, transitive symmetric and is a polynomial threshold function of degree three. We would like to devise a natural notion of d-close elections according to f and see if it results in chaotic behavior for small d, similar to Theorem 1.3.
To start with, let w_i := x_{3i−2} + x_{3i−1} + x_{3i}. In the following we will sometimes treat f as a function of w := (w_1, ..., w_m): f : {±1, ±3}^m → {±1}, with the distribution of w induced by the distribution of x, i.e., w_i = ±3 and w_i = ±1 with probabilities 1/8 and 3/8, respectively. A CLT argument as in Theorem 1.3 implies chaotic elections for f if we define "d-close" as "|Σ_{i=1}^m sgn(w_i^{(kk′)})| ≤ d" for every pair of candidates (kk′). However, this is not very satisfactory, for at least two reasons. First, it does not accord well with our intuition of closeness, the problem becoming more apparent when considering the analogous condition for other two-level majorities, say √n groups of √n voters each. Second, it does not seem to extend to other functions that do not have such an "obvious" summation built into them. Another idea is to define "d-close" the same way as in Theorem 1.3, that is, as |S^{(kk′)}| ≤ d for every pair of candidates. Clearly, this is not a good closeness measure for an arbitrary comparison method (e.g., a weighted majority with large differences between weights), but one could argue that it is relevant at least for transitive symmetric functions. Using another CLT argument, we find that for this definition of closeness, the behavior of o(√n)-close elections under f is not chaotic: the asymptotic Condorcet paradox probability is slightly less than 25%. Note that for three candidates, the Condorcet paradox happens if and only if f(x^{(ab)}) = f(x^{(bc)}) = f(x^{(ca)}).

Theorem 1.4. Under the notation above, with the event E_d as defined in Section 1.2 and d = o(√n), the probability of the Condorcet paradox conditioned on E_d converges to a constant strictly smaller than 1/4.
For comparison, without conditioning the Condorcet paradox probability is ≈ 12.5% when the elections are according to f and ≈ 8.8% according to majority.
The idea for the proof of Theorem 1.4 is to use a multivariate Berry-Esseen theorem for the (normalized) random variables A^{(kk′)}, given by the sums Σ_{i=1}^n x_i^{(kk′)}, and B^{(kk′)}, given by the sums Σ_{i=1}^m sgn(w_i^{(kk′)}), for kk′ ∈ {ab, bc, ca}.
We are looking at sign patterns of B^{(kk′)} conditioned on small absolute values of A^{(kk′)}. A^{(kk′)} and B^{(kk′)} are not perfectly correlated, and it turns out that part of the (negative) correlations between B^{(ab)}, B^{(bc)} and B^{(ca)} is not attributable to correlations between A^{(ab)}, A^{(bc)} and A^{(ca)}. Hence, even after conditioning on small A^{(kk′)}, there remains a small constant correlation between the B^{(kk′)} which prevents completely chaotic behavior. Another promising definition of closeness involves the noise operator T_ρ from the analysis of Boolean functions (see, e.g., [O'D14] for more details). Let ρ ∈ [−1, 1] and x ∈ {−1, 1}^n. Define a probability distribution N_ρ(x) over {−1, 1}^n such that y_1, ..., y_n are sampled independently with y_i = −x_i with probability ε := (1−ρ)/2 and y_i = x_i otherwise. Note that E[x_i y_i] = ρ; hence we say that a pair (x, y) sampled as uniform x and then y according to N_ρ(x) is ρ-correlated. Given ρ and x, the noise operator T_ρ is defined as

T_ρ f(x) := E_{y∼N_ρ(x)}[ f(y) ].

For ρ ∈ (0, 1) one can think of N_ρ(x) as a distribution over {−1, 1}^n with probabilities decreasing in the Hamming distance to x. Furthermore, for majority, T_ρ Maj_n(x) is, up to normalization, a monotone function of the vote margin Σ_i x_i. This suggests that it may be fruitful to define "d-close" via a bound on |T_ρ f(x^{(kk′)})| for every pair of candidates. The idea becomes even more appealing when considering a Fourier-analytic Condorcet formula discovered by Kalai [Kal02]. He showed that for an odd function g : {−1, 1}^n → {−1, 1} and 1/3-correlated vectors (x, y), the probability of Condorcet paradox without conditioning is equal to

1/4 − (3/4) E[g(x) g(y)] = 1/4 − (3/4) E[g(x) T_{1/3} g(x)].   (1.6)

Another feature of the T_ρ operator is that for noise sensitive functions (which [Kal10] proved to be exactly those that result in chaotic elections without conditioning) the value |T_ρ f(x)| is o(1) with high probability over x. A possible interpretation of this fact is that elections according to a noise sensitive function are almost always close. Recall our "majority of triplets" function f and define the event F_{ρ,d} as the event that |T_ρ f(x^{(kk′)})| ≤ d/√m for every pair of candidates. At first sight, (1.6) suggests that the event F_{ρ,d}, with ρ = 1/3 and d = o(√m), should cause the expectation term in (1.6) to vanish and the probability of Condorcet paradox to approach 1/4. Surprisingly, this is not the case for f: by Theorem 1.5, conditioned on F_{ρ,d} with d = o(√m), the asymptotic paradox probability remains bounded away from 1/4. The proof of Theorem 1.5 is a complication of the proof of Theorem 1.4. We observe that, just as for majority the value of T_ρ Maj(x) is proportional to the number of ones in x minus n/2, for f the value of T_ρ f(w) is proportional to a certain linear combination of the counts V_b(w). This allows us to proceed with an argument identical to that of Theorem 1.4 with appropriately redefined random variables A^{(kk′)}.
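As an illustration of the definition just given, a Monte Carlo estimate of T_ρ f at a point (a sketch; f can be any ±1-valued function, such as the triplet majority above):

```python
import numpy as np

rng = np.random.default_rng(3)

def noise_op(f, x, rho, samples=20_000):
    """Estimate (T_rho f)(x) = E[f(y)] with y ~ N_rho(x): each coordinate
    of x is flipped independently with probability (1 - rho) / 2."""
    flips = rng.random((samples, x.size)) < (1.0 - rho) / 2.0
    ys = np.where(flips, -x, x)
    return float(np.mean([f(y) for y in ys]))
```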
Some more recent results show that, without conditioning, majority in fact maximizes the probability of Condorcet winner among "low-influence functions" (see [MOO10] for three voters and [Mos10, IM12] for general case). This contrasts with Theorems 1.4 and 1.5 for different definitions of close elections.
Arrow's theorem for dice
To further consider the parallels between dice and social choice, we also ask if there is a dice analogue of Arrow's theorem (and its quantitative version). We obtain a rather generic statement that does not use any properties of dice and a quantitative version which is a restatement of a result on tournaments by Fox and Sudakov [FS08].
Organization of the paper The proofs of our main theorems are located in Sections 2 (Theorem 1.1), 3 (Theorem 1.3) and 4 (Theorems 1.4 and 1.5). Section 5 contains the discussion of Arrow's theorem for dice.
Gaussian dice
We are going to prove Theorem 1.1, which says that three random dice a, b and c with standard Gaussian faces, conditioned on the face-sums being equal to zero, are transitive with high probability. One possible approach would be to generalize the proof of transitivity for unconditioned uniform dice in [Pol17a]; however, we follow a slightly different idea.
We rely considerably on the Hilbert space structure of jointly Gaussian random variables (see, e.g., [Jan97] for an exposition of this view). In particular, we use the fact that we can write a random Gaussian face-sum-zero die a = (a_1, ..., a_n) as a_i = x_i − (1/n) Σ_{j=1}^n x_j, where x_1, ..., x_n are n independent standard Gaussians. This implies that a_1, ..., a_n are jointly Gaussian with Var[a_i] = 1 − 1/n and Cov[a_i, a_j] = −1/n for i ≠ j.
Note that E[W^{(kk′)}] = n^2/2. We start by computing the covariance structure of the W^{(kk′)}.
Let us look at another case: W^{(ab)}_{11} and W^{(ab)}_{12}. Take A := √(n/(2(n−1))) (b_1 − a_1) and B := √(n/(2(n−1))) (b_2 − a_1). This time A and B are jointly standard Gaussian, with the correlation computed in (2.5). Putting (2.4) and (2.5) together, and using bilinearity to decompose Var[W^{(ab)}] into three classes of terms, we obtain (2.6). We move on to the covariances of W^{(ab)} and W^{(bc)}. There are two cases to consider, analogous to the first part of the proof: first, W^{(ab)}_{11} together with W^{(bc)}_{11}; the final case is W^{(ab)}_{11} together with W^{(bc)}_{21}, leading to (2.7).
First, take W (ab) 11 and W (bc) The final case is W (ab) 11 and W (bc) (2.7) Combining (2.6) and (2.7), In particular, by Chebyshev's inequality, One hopes that, since there is only a modest amount of correlation in the random vectors a and b, the distribution of W := W (ab) converges to a standard Gaussian. We prove a weaker statement: It is at least as anti-concentrated as a Gaussian of constant variance.
Lemma 2.3. For every C ∈ R and ε > 0, we have

Pr[ C ≤ W̄ ≤ C + ε ] ≤ O(ε) + O(n^{−1/2}),

where the constants in the O(·) notation are absolute, in particular independent of C or ε.
Proof. For ease of presentation, we assume that n is even and define m := n/2. Let us consider the die a first. Let x_1, ..., x_n be i.i.d. standard Gaussians such that a_i = x_i − (1/n) Σ_{j=1}^n x_j. By standard Chernoff-Hoeffding bounds, all three "bad" events considered below have exponentially small probability. Moreover, conditioned on the values of the pair sums u_k := b_{2k−1} + b_{2k}, the m pairs (b_1, b_2), (b_3, b_4), ..., (b_{n−1}, b_n) become independent.
Recall the unnormalized random variable W := W^{(ab)} and write it as W = Σ_{k=1}^m Y_k, where Y_k := Σ_{i=1}^n ( 1[a_i < b_{2k−1}] + 1[a_i < b_{2k}] ). By an argument very similar to the one that led to (2.8), we get that u_1, ..., u_m are "good" with high probability, in the sense of (2.9). Recall the assumptions we made on a in (2.8). We consider Y_k as a function of s_k and make some ad-hoc computations concerning Y_k(s_k) and Φ(·); these imply that any such Y_k has (conditional) variance Var[Y_k] ≥ Ω(n^2).
Putting everything together, for every good a and u_1, ..., u_m, we get independent random variables Y_1, ..., Y_m such that σ^2 := Var[Σ_{k=1}^m Y_k] ≥ Ω(n^3); the Ω(·) does not depend on a or u_1, ..., u_m. Furthermore, clearly, E|Y_i − E Y_i|^3 ≤ 8n^3. Let N be a standard Gaussian independent of everything else, C ∈ R and ε > 0. The Berry-Esseen theorem for non-identically distributed variables gives (2.10). Recall that W̄ = (W − n^2/2)/√(αn^3) and observe that σ/√(αn^3) = O(1). Continuing (2.10), we obtain, still conditioned on good a and u_1, ..., u_m, the desired interval estimate, where all constants in the O(·) notation are absolute. Finally, averaging over all a and u_1, ..., u_m and applying a union bound over those that are not good, we get the desired conclusion.

We can now finish the proof of Theorem 1.1. First, by a union bound, for δ > 0,

Pr[dice are intransitive] ≤ Pr[ |W̄^{(ab)} + W̄^{(bc)} + W̄^{(ca)}| ≥ δ ] + Pr[ |W̄^{(ab)}| ≤ δ ],   (2.11)

since intransitivity forces the three signs to agree, in which case the absolute value of the sum is at least that of any single term. We set δ := 1/n^{1/3} and bound both right-hand side terms of (2.11): by Lemma 2.2, the first term is O(n^{−1/3}), and by Lemma 2.3, so is the second.
Condorcet paradox for close elections: Majority
This section contains the proof of Theorem 1.3.
Notation
We start with recalling and extending the model and notation.
There are n voters (where n is odd) and each of them independently chooses one of the k! rankings of the alternatives uniformly at random. For voter i, such a random ranking gives rise to a random tuple x_i of pairwise choices (according to some fixed ordering of pairs). We call each of the k! tuples in the support of x_i transitive. Any other tuple is intransitive. We say that a tuple has a Condorcet winner if it has an alternative that beats everyone else.
We denote aggregation over voters by boldface. Therefore, we write x = (x 1 , . . . x n ) for the random vector of voter preferences (where each element is itself a random tuple of length K).
Given voter preferences, we say that the voting outcome is intransitive if the aggregated tuple Y is intransitive. Similarly, we say that there is a Condorcet winner if tuple Y has a Condorcet winner.
We are interested in situations where elections are "almost tied" or, more precisely, "d-close" for d ≥ 1. Specifically, we define E_d to be the event that ‖S‖_∞ ≤ d, i.e., |S^{(j)}| is at most d for every j ∈ [K].
Local CLT
We use a theorem and some definitions from the textbook on random walks by Spitzer [Spi76]. In accordance with the book, we define: Definition 3.1. A k-dimensional random walk (X i ) i∈N is a Markov chain over Z k with X 0 = 0 k and a distribution of one step Z i+1 := X i+1 − X i that does not depend on i.
Writing (S_i)_{i∈{0,...,n}} for the partial sums of the voters' tuples, note that it is a random walk over Z^K and that we want to compute Pr(sgn(S_n) = y | E_d) for y ∈ {−1, 1}^K. There is one technicality we need to address before applying a local CLT: since the steps of our random walk are in {−1, 1}^K, the values of (S_i) lie on a proper sublattice of Z^K; namely, S_i^{(j)} always has the same parity as i. To deal with this, we define T_i^{(j)} := (S_{2i+1}^{(j)} − 1)/2. Note that (T_i) is still a random walk over Z^K, with one catch: the starting point T_0 is not necessarily the origin, but rather one of the k! points in {−1, 0}^K corresponding to a transitive tuple picked by the first voter.
Before we state the local CLT, we need another definition.

Definition 3.2 ([Spi76], D1 in Section 5). A random walk over Z^K is strongly aperiodic if for every t ∈ Z^K, the subgroup of Z^K generated by the points that can be reached from t in one step is equal to Z^K.

Now we are ready to state the theorem:

Theorem 3.3 (Local CLT, Remark after P9 in Section 7 of [Spi76]). Let (T_i)_{i∈N} be a strongly aperiodic random walk over Z^K, starting at the origin, with a mean-zero single step Z with finite covariance matrix Σ. Then

sup_{t∈Z^K} | n^{K/2} Pr[T_n = t] − c_Σ exp( −⟨t, Σ^{−1} t⟩ / (2n) ) | = o(1),

where c_Σ > 0 depends only on Σ and the o(1) function depends on n, but not on t.
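For intuition about Theorem 3.3, a one-dimensional sanity check (our own toy example with K = 1 and ±1 steps; the limit constant √(2/π) is the lattice-adjusted Gaussian density at 0):

```python
import numpy as np

rng = np.random.default_rng(4)

n, t = 400, 0                                   # t must share the parity of n
S = 2 * rng.binomial(n, 0.5, size=500_000) - n  # samples of the walk at time n
est = np.sqrt(n) * np.mean(S == t)
print(est, np.sqrt(2 / np.pi))                  # both close to 0.7979
```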
Our main lemma states that the joint distribution of T_n, conditioned on ‖T_n‖_∞ being small, is roughly uniform.
Lemma 3.4. For the random walk (T_i) defined above and t ∈ Z^K, d ≥ 1 such that ‖t‖_∞ ≤ d, there are some α_k, β_k > 0 such that

| n^{K/2} Pr[T_n = t] − α_k | ≤ β_k ( d^2/n + o(1) ).   (3.1)

Proof. We first deal with the technicality that we mentioned before: the starting point T_0 of the random walk is itself a random variable. In the proof below we proceed conditioning on T_0 = 0^K. After reading the proof it should be clear how to modify it for other starting points in {−1, 0}^K; (3.1) is then obtained from those conditional results by the triangle inequality. We need to check that the random walk (T_i) satisfies the hypotheses of Theorem 3.3. First, note that the "step" random variable Z for (T_i) has the same distribution as (X_1 + X_2)/2, i.e., two steps of our original random process.
To show that (T i ) is strongly aperiodic, let (e (1) , . . . , e (K) ) be the standard basis of Z K . Note that it is enough to show that for each z ∈ Z K , all of z, z + e (1) , . . . , z + e (K) are reachable from z in one step. But this is so: • It is possible to stay at z by choosing a permutation (ranking) τ for X 1 and then its reverse τ R for X 2 .
• We explain how one can move from z to z + e (j) on an example and hope it is clear how to generalize it. For k = 5 and e (j) corresponding to the b vs. d comparison, one can choose a ranking b > d > a > c > e for X 1 followed by e > c > a > b > d for X 2 .
Since Theorem 3.3 applies, we obtain the local estimate for n^{K/2} Pr[T_n = t], and it remains to bound the deviation of the exponential factor from 1. Finally, we observe that t = d t′ for some t′ with ‖t′‖_∞ ≤ 1, so the exponent is O(d^2/n), which gives (3.1), as we needed.
Proof of Theorem 1.3
Recall that we want to prove (1.3), i.e., the asymptotic formula for Pr[Y = y ∧ E_d]. Once we have (1.3), the bounds (1.4) and (1.5) easily follow by the triangle inequality. For y ∈ {−1, 1}^K, let S_y := { s ∈ (2Z + 1)^K : sgn(s) = y and ‖s‖_∞ ≤ d }. Furthermore, note that |S_y| = |S_{y′}| for every y, y′. Set M := |S_y| as the common cardinality of the S_y sets.
First, we use Corollary 3.5 to show that the probability Pr[Y = y | E_d] must be close to a quantity q defined via M and the constant α_k from Corollary 3.5. The value of q depends on k, n and d, but not on y. The implication is that the conditional probabilities must be almost equal for every pair y, y′. But this is all we need: since the 2^K conditional probabilities sum to 1, each of them must equal 2^{−K}(1 + o_k(1)).
Remark 3.6. A similar bound with an explicit error term (implying chaotic behavior for n^{1/2−1/K} ≪ d ≪ n^{1/2}) can be achieved using a multidimensional Berry-Esseen theorem instead of the local CLT.
Remark 3.7. As we mentioned in Section 1.2, the proof of Theorem 1.3 can be modified to give a similar bound when only K − 1 of the K pairwise elections are conditioned to be d-close. The reason for this is that if we remove the conditioning from just one S^{(a_0 b_0)}, there are still no covariance factors in the CLT computation that would steer the distribution of Y away from uniform.
Condorcet paradox for close elections: Majority of triplets
Recall that we are considering an odd number n = 3m of voters, alternatives a, b, c, and the random vote vectors x^{(ab)}, x^{(bc)}, x^{(ca)} ∈ {−1, 1}^n. This section contains proofs of the non-chaotic behavior of f under certain conditionings. Section 4.1 contains the proof of Theorem 1.4, dealing with conditioning on small |Σ_{i=1}^n x_i^{(kk′)}|. In Section 4.2 we prove Theorem 1.5, which considers conditioning on small |T_ρ f(x^{(kk′)})|.
Proof of Theorem 1.4
For i ∈ [m], we take the random tuple Z_i collecting, for each pair kk′ ∈ {ab, bc, ca}, the triplet sum w_i^{(kk′)} and its sign sgn(w_i^{(kk′)}). Note that Z_1, ..., Z_m are i.i.d. Let us compute the first two moments of the single-coordinate distribution Z = (A^{(ab)}, A^{(bc)}, A^{(ca)}, B^{(ab)}, B^{(bc)}, B^{(ca)}). For this, keep in mind that Cov[x_i^{(kk′)}, x_i^{(k′k″)}] = −1/3 and refer to Table 1 for the joint distribution of w^{(kk′)} and w^{(k′k″)}. Let M^{(kk′)} and N^{(kk′)} be jointly standard Gaussian with the same covariance structure as the A^{(kk′)} and B^{(kk′)}, respectively. After checking that our six by six covariance matrix is not singular, by the multidimensional Berry-Esseen theorem (see the statement, e.g., in [Ben05]), we can move to the Gaussian space, obtaining (4.1). Using the covariance structure of the M^{(kk′)} and N^{(kk′)} and the geometry of joint Gaussians, we can conclude that each N^{(kk′)} can be written as

N^{(kk′)} = λ M^{(kk′)} + √(1 − λ^2) R^{(kk′)},  λ ∈ (0, 1),   (4.2)

where the variables R^{(kk′)} are standard Gaussians independent of the M^{(kk′)}, such that Cov[R^{(kk′)}, R^{(k′k″)}] = −1/27. To continue the computation in (4.1), first note that by considering the probability density function of the M^{(kk′)}, we have control over the conditioning event. Fix some values of M^{(kk′)} such that ‖M^{(kk′)}‖_∞ ≤ 1/√(3 ln n). From (4.2) and evaluating the probability Pr[R^{(ab)} ≥ 0 ∧ R^{(bc)} ≥ 0 ∧ R^{(ca)} ≥ 0] in a computer algebra system, we see that after this conditioning the probability of each cyclic sign pattern is bounded away from 1/8, which gives (4.3). Putting together (4.1), (4.2) and (4.3), we can finally conclude the statement of Theorem 1.4 for n big enough.
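The three-dimensional orthant probability invoked here can be checked against the classical closed form for exchangeable standard Gaussians with pairwise correlation ρ, namely 1/8 + (3/(4π)) arcsin ρ (the formula itself is standard; its relevance to (4.3) is our reading of the proof):

```python
import numpy as np

rho = -1.0 / 27.0
p = 1.0 / 8.0 + 3.0 / (4.0 * np.pi) * np.arcsin(rho)
print(p)  # ~ 0.1162, matching the ~11.6% figure quoted in Section 4.2
```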
Proof of Theorem 1.5
The proof of Theorem 1.5 is an unpleasant complication of the proof of Theorem 1.4, which is recommended preliminary reading. In particular, we will use the notation that was developed there. From now on, the constants in the O(·) notation are allowed to depend on ρ. Recall that for x ∈ {−1, 1}^n and w ∈ {±3, ±1}^m we have defined, for b ∈ {±3, ±1}, the counts W_b(w) and their centered versions V_b(w). Note that T_ρ f(w) = 2(Pr[f(z) = 1] − 1/2), where z is generated from y, which in turn is generated according to N_ρ(x). Therefore, in light of (4.5) and (4.6), we obtain (4.7), where the sum under the probability sign is over four independent random variables with binomial distributions. It should be possible to apply a CLT argument to conclude that, for most values of the W_b(w), the value of T_ρ f(w) is proportional to a fixed linear combination of the V_b(w), with coefficients determined by q_3 := p_3 − 1/2 and q_1 := p_1 − 1/2. We will state a precise lemma now and continue with the proof, proving the lemma at the end.

Lemma 4.1. Let σ be as above and take C := √(π/2) σ. Define the events G_1 and G_2, expressing the closeness condition in terms of T_ρ f(w) and in terms of the linear statistic t(w), respectively, and let ∆ stand for the symmetric difference of events. Then,

Pr[ G_1 ∆ G_2 ] = O(ln^{−5} m).

Assuming Lemma 4.1, we can continue along the lines of the proof of Theorem 1.4. Same as there, for i ∈ [m] let Z_i collect the relevant triplet statistics for the three pairs; the random variables Z_1, ..., Z_m are i.i.d., and for CLT purposes we can compute (again Table 1 is helpful) the six by six covariance matrix Q of the distribution of Z := Z_1, recorded in (4.8) and (4.9). Applying Lemma 4.1 and the multidimensional Berry-Esseen theorem (using a computer algebra system to check that the covariance matrix is invertible for every ρ ∈ (0, 1)), we arrive at (4.10). Computations in a computer algebra system lead to expressing N^{(kk′)} as the linear combination

N^{(kk′)} = γ M^{(kk′)} + √(1 − γ^2) R^{(kk′)},   (4.11)

where γ > 0, the random tuples M^{(kk′)} are independent of each other, and each R^{(kk′)} is a standard Gaussian. Some more computation shows the correlation bound (4.12). Interestingly, the bound in (4.12) is independent of ρ and approaches −1/27 as ρ goes to zero. (4.11) and (4.12) imply that, after conditioning on fixed values of the M^{(kk′)} of small norm,

Pr[ R^{(ab)} ≥ 0 ∧ R^{(bc)} ≥ 0 ∧ R^{(ca)} ≥ 0 ] ≈ 11.6%.   (4.13)

At the same time, we have the estimate (4.14), and (4.10), (4.13) and (4.14) lead to the statement of the theorem, as we wanted. It remains to prove Lemma 4.1.
Proof of Lemma 4.1. Recall the definitions of W_b(w) and V_b(w). We begin by estimating T_ρ f(w) for a fixed w. In the following we will sometimes drop the dependence on w (writing, e.g., W_b instead of W_b(w)) in the interest of clarity. Recall equation (4.7) and let Z := Σ_{i=1}^m Z_i be the sum of m independent random variables arising out of the four binomial distributions featured there. We obtain the representation (4.15), for t := t(w) as above. Since the random variables Z_i are bounded, we can apply the Berry-Esseen theorem and get (using Φ(x) = 1/2 + (1/2) erf(x/√2)) a Gaussian approximation of T_ρ f(w) in terms of t(w). From now on we consider a random election with random vote vectors x^{(ab)}, x^{(bc)}, x^{(ca)} that induce random vectors w^{(ab)}, w^{(bc)}, w^{(ca)}. First, consider the marginal distribution of w. Since σ^2 m t(w) = Σ_{i=1}^m t_i(w_i) is a sum of m i.i.d. random variables with E[t_i] = 0 and |t_i| ≤ 1, a standard concentration bound gives (4.16). We turn to estimating the symmetric difference Pr[G_1 ∆ G_2]. We will use a union bound over a small number (twelve) of cases and show that each of them has probability O(ln^{−5} m). We proceed with two examples, noting that the rest are proved by symmetric versions of the same argument. First, due to (4.15) and a Taylor expansion, we obtain (4.17), as well as symmetric versions of (4.17) for the reverse inequality and for ±1/ln m. We use (4.17) and (4.16) to estimate the first example, coming from Pr[G_1 ∧ ¬G_2]. Using the Berry-Esseen theorem as in the proof of Theorem 1.5, we get jointly normal centered random variables M^{(ab)}, M^{(bc)}, M^{(ca)} with covariances given by (4.8) and (4.9), for which the required anti-concentration estimate holds. Finally, the second example, stemming from Pr[¬G_1 ∧ G_2], is bounded in a similar manner.
Arrow's theorem for dice
Arguably the most famous result in social choice theory is Arrow's impossibility theorem [Arr50, Arr63]. Intuitively, it states that the only reasonable voting systems based on pairwise comparisons that never produce a Condorcet paradox are "dictators", i.e., functions whose value depends only on a single voter. There are also quantitative versions, proved by Kalai [Kal02] for balanced functions and by Mossel [Mos12] for general functions (with tighter bounds obtained by Keller [Kel12]). For simplicity we consider three alternatives and the impartial culture model. Then, the quantitative Arrow's theorem says that a reasonable pairwise comparison function f that is ε-far from every dictator (in the sense of normalized Hamming distance) must be such that the probability of Condorcet paradox is at least Ω(ε^3).
There is an analogous question about transitive dice: What are the methods for pairwise comparisons of k dice that always produce a linear order? In particular, we know that comparing two dice a and b by using the "beats" relation is not one of them.
We restrict ourselves to k = 3. Assume that we look at dice with n sides labeled with distinct numbers, and write D_{m,n} for the set of dice under consideration. We ask for which comparison functions f on D_{m,n} there are no dice a, b, c such that f(a, b) = f(b, c) = f(c, a).
A little thought reveals that the answer is somewhat trivial. Let O be a linear order on D_{m,n}; we think of O as an injective function O : D_{m,n} → R. If we define f by f(a, b) := sgn(O(a) − O(b)), then f is transitive. On the other hand, every transitive f must be of this form. To see this, consider a directed graph with vertex set D_{m,n} where there is an edge from a to b if and only if f(a, b) = −1. This graph is a tournament, and transitivity of f means that it does not contain a directed triangle. But a triangle-free tournament does not contain a directed cycle and, therefore, induces a linear order on its ground set.
We can extend this reasoning to a quantitative result. It seems easiest to assume a model where a set of three dice is sampled u.a.r. from D m,n .
There is a result about tournaments due to Fox and Sudakov [FS08]. A tournament on n vertices is called ε-far from transitive if at least εn^2 of its edges must be reversed to obtain a transitive tournament.
Theorem 5.1 ([FS08]). There exists c > 0 such that if a tournament on n vertices is ε-far from transitive, then it contains at least cε^2 n^3 directed triangles.
Theorem 5.1 can be restated as a quantitative Arrow-like statement for dice.
Corollary 5.2. There exists c > 0 such that if a comparison function f on D_{m,n} with m, n > 1 is ε-far from transitive, then the probability that a random triple of dice is intransitive is at least cε^2.
Acknowledgement
We thank Timothy Gowers for helpful discussions of [Pol17b] and Kathryn Mann for asking if there is an "Arrow's theorem" for dice. | 2018-08-29T00:16:30.000Z | 2018-04-02T00:00:00.000 | {
"year": 2020,
"sha1": "45192c2301e38bf066b7b934bfccc2f471860797",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00440-020-00994-7.pdf",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "45192c2301e38bf066b7b934bfccc2f471860797",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
201070209 | pes2o/s2orc | v3-fos-license | Charged Taub-NUT solution in Lovelock gravity with generalized Wheeler polynomials
Wheeler's approach to finding exact solutions in Lovelock gravity has been predominantly applied to static spacetimes. This has led to a Birkhoff's theorem for arbitrary base manifolds in dimensions higher than four. In this work, we generalize the method and apply it to a stationary metric ansatz. Using this perspective, we present a Taub-NUT solution in eight-dimensional Lovelock gravity coupled to Maxwell fields. We use the first-order formalism to integrate the equations of motion in the torsion-free sector. The Maxwell field is presented explicitly with general integration constants, while the background metric is given implicitly in terms of a cubic algebraic equation for the metric function. We display precisely how the NUT parameter generalizes Wheeler polynomials in a highly nontrivial manner.
I. INTRODUCTION
Higher-dimensional theories appear in different contexts of theoretical physics. For instance, an important open problem is the question about the enormous difference between the Planck and the electroweak scale. An attempt to deal with this hierarchy problem consists in considering field theories with extra spatial dimensions [1,2]. For the additional dimensions, the corresponding field equations must be generalized by including higher-curvature terms in the action. These terms appear also in the renormalization approach of quantum field theory in curved spacetimes [3] or in the low-energy limit of string theory [4]. The AdS/CFT correspondence [5][6][7], on the other hand, is an additional motivation to study gravity in higher dimensions, since it provides a non-perturbative approach to strongly coupled systems by means of a weakly coupled gravitational dual within an extra-dimensional spacetime. This evidence indicates that gravitational theories with extra dimensions possessing higher-order curvature terms may have important applications in the context of quantum field theory and theoretical physics, in general.
In the case of gravity, the Lanczos-Lovelock theory is the natural generalization of General Relativity (GR) in higher dimensions [8,9]. The corresponding action principle is endowed with higher-curvature terms, while sharing some of the main features of GR, namely: (i) it is invariant under local Lorentz transformations and diffeomorphisms, (ii) it is torsion free, and (iii) it yields second-order field equations for the metric. This theory is free of ghosts [10] and it has the same degrees of freedom as the Einstein-Hilbert action in any dimension [11]. The first non-trivial term of the Lovelock series, i.e. the Gauss-Bonnet term, appears as a low-energy correction of string theory [12], modifying the field equations in dimensions higher than four. In fact, several static exact solutions have been found in this scenario [13][14][15][16][17][18][19][20][21][22][23], some of which have not been studied from a thermodynamic viewpoint or any of its more recent extensions. Although in four dimensions the Gauss-Bonnet term does not contribute to the field equations since it is a topological invariant proportional to the Euler characteristic class, its inclusion becomes relevant in the regularization of conserved charges in asymptotically locally AdS spacetimes [24] and in the context of holographic renormalization [25]. Moreover, the AdS/CFT correspondence has been used to impose bounds on the shear viscosity to entropy ratio for supersymmetric CFT by considering the Lanczos-Lovelock theory as its gravitational dual [26,27]. Quantum anomalies, on the other hand, have been computed from the holographic principle in Lanczos-Lovelock gravity, showing that the Weyl and a particular non-Abelian asymptotic symmetry are broken at the quantum level on the dual CFT [28]. Remarkably, when the theory has a unique AdS vacuum, there exists a gauge fixing that leads to a finite Fefferman-Graham expansion [29].
One aspect of higher-dimensional gravity which is interesting for the present investigation is the (non)uniqueness of static black holes [30,31]. Indeed, consider the line element

ds^2 = −f(r) dt^2 + dr^2/f(r) + r^2 dΣ^2,   (1)

where dΣ^2 is the metric of an arbitrarily chosen codimension-two submanifold, henceforth referred to as the base manifold. In fact, the only static black hole in higher-dimensional GR which is asymptotically flat is given by the Schwarzschild-Tangherlini metric, whose base manifold is a round hypersphere. However, non-asymptotically flat solutions are obtained for different base manifolds, although the field equations imply that it must be an Einstein manifold. In Eq. (1), the geometry of the Einstein manifold is parametrized so that its Ricci scalar coincides with that of a hypersphere of the same dimension; this fact is closely related to the higher-dimensional Birkhoff's theorem. The Lovelock version of this result also imposes conditions on the base manifold. Nevertheless, they no longer need to be Einstein manifolds and a number of new geometries come into the fold. Returning to Eq. (1), when spherical symmetry is assumed, Wheeler devised an approach to determine the metric function f [32]. The differential equation for the metric function is integrated in an elementary way. Remarkably, this method yields an algebraic equation for f = f(r). Moreover, defining an auxiliary function by F = (1 − f)/r^2, the general result is that

P(F) := \sum_{i=0}^{p} a_i F^i = M/r^{D-1}.   (2)

This polynomial in F has constant coefficients a_i determined by the Lovelock coupling constants, D is the spacetime dimension, p = [(D − 1)/2] is the highest-order curvature term contributing to the field equations, and the squared brackets denote the integer part. In Eq. (2), M is the integration constant, which is later related to the black hole mass. The function P is what has been dubbed the Wheeler polynomial of the solution. This showcases how incredibly restrictive spherical symmetry is: a family of p spacetimes is uniquely determined by the roots of Eq. (2). Of course, in higher dimensions the exact solutions are increasingly more complex, and the lack of a closed form begins in eleven dimensions for Lovelock gravity. Nonetheless, some general results have been proven to hold for this set of solutions. For instance, a solution always exists for at least one value of the sign of M [32]. Moreover, the extension of the asymptotic solution increases monotonically as r decreases, until it ends for small values of r in one of the following two possibilities: either a curvature singularity at the origin is surrounded by exactly one event horizon, or the singularity happens at a finite value of r, where at most one event horizon is present. Notice that this includes the possibility of a naked singularity. All maximally symmetric spaces are equally restrictive. The topological versions of these solutions are determined by Wheeler polynomials as well, but with the auxiliary function redefined as F = (κ − f)/r^2, where κ is the space's sectional curvature. However, the most general admissible base manifolds require the use of an analogue of Wheeler's polynomial, defined by [31]

\sum_{k=0}^{p} b_k A_k(U)/r^{2k} = M/r^{D-1},   (3)

where the constants b_k depend on the geometry of the base manifold and the auxiliary function is defined as U = −f/r^2. Wheeler polynomials (2) are rearrangeable as just above; e.g., spaces of constant sectional curvature κ have constants b_s = κ^s. The polynomials A_k are of order p − k and are defined by

A_k(U) := \sum_{i=k}^{p} \binom{i}{k} a_i U^{i-k},   (4)

where the a_i are, as before, the coefficients in Eq. (2).
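To make the algebraic nature of Eq. (2) concrete, a sketch that solves it for the Gauss-Bonnet truncation p = 2 with symbolic coefficients (the variable names and the use of sympy are our choices):

```python
import sympy as sp

r, M = sp.symbols('r M', positive=True)
a0, a1, a2, D = sp.symbols('a_0 a_1 a_2 D', positive=True)
F = sp.Symbol('F')

# Wheeler's polynomial (2) truncated at p = 2 (Einstein-Gauss-Bonnet):
wheeler = sp.Eq(a0 + a1 * F + a2 * F**2, M / r**(D - 1))
branches = [sp.simplify(1 - root * r**2) for root in sp.solve(wheeler, F)]
print(branches)  # the two metric-function branches f = 1 - F r^2
```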
Notice that the highest order polynomial is A_0 = P(U) and that the polynomials comply with the recurrence relation A_k′ = (k + 1) A_{k+1}. Even outside the context of determining exact solutions, Wheeler polynomials provide a remarkable theoretical tool to investigate gravitational physics. Equation (3), for instance, provides a way for black hole thermodynamics to be carried out even when a closed form for f is not available [33,34]. In essence, this can be done because black holes have event horizons characterized by the vanishing of the metric function. Hence, the Wheeler polynomial may be evaluated on the null hypersurface to yield an important algebraic relation. Differentiating the polynomial and restricting to the horizon determines the Hawking temperature. Of course, it is crucial to relate the integration constant with the physical parameters of the solution, especially the mass of the black hole. The relation between gravitational parameters and thermodynamical ones allows for a vast class of scenarios to be explored in this direction. However, staticity need not limit this line of research.
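For instance, in the spherical case (κ = 1) with a simple horizon at r = r_h, so that f(r_h) = 0 and F_h := 1/r_h^2, evaluating and implicitly differentiating Eq. (2) gives the following sketch (the normalization T = f′(r_h)/4π is the standard one and is assumed here):

```latex
M = r_h^{D-1}\, P(F_h), \qquad
T = \frac{f'(r_h)}{4\pi}
  = \frac{1}{4\pi}\left[\frac{(D-1)\, r_h\, P(F_h)}{P'(F_h)} - \frac{2}{r_h}\right],
\qquad F_h = \frac{1}{r_h^{2}}.
```

As a check, in the Einstein case P(F) = F this reduces to T = (D − 3)/(4π r_h), the Schwarzschild-Tangherlini temperature.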
The original Taub and Newman-Tamburino-Unti metrics [35,36] (hereon Taub-NUT) have motivated a plethora of investigations in gravitational physics. A particular research area is spacetime thermodynamics, where the similarities between Taub-NUT metrics and black holes have been studied through Euclidean techniques. Relying on the methods of finite-temperature quantum field theory, an analytic continuation of the metric is performed, and the period of the Euclidean time circle is chosen in such a way that no conical singularity is present. The action of the U(1) isometry group has, in general, a set of fixed points inherited from the Killing horizon on the Lorentzian sheet. If this set is zero-dimensional, the analytically continued sheet is called Taub-NUT; otherwise, it is dubbed Taub-Bolt. Possible observational signatures of these spacetimes have been studied in [37].
Higher-dimensional Taub-NUT and Taub-Bolt metrics are a special type of inhomogeneous geometry on complex line bundles over a Kähler manifold [38-40]. Thus, they exist only in even dimensions. These metrics have Lorentzian counterparts which, in the static limit, coincide with Eq. (1); in this case, the base manifold is Kähler. In fact, Taub-NUT geometries in Boyer-Lindquist coordinates closely resemble the line element (1). This, in turn, implies that the Wheeler approach is applicable to these stationary spacetimes as well; of course, the method is blind to whether the metric is Lorentzian or not. These spacetimes carry a gravitational charge which is in many ways analogous to a magnetic monopole moment (for a recent discussion see [41]). An important example is the famous Kaluza-Klein monopole [42], where the Euclidean Taub-NUT space is used as a seed manifold. Both the Taub-NUT solution and the Kaluza-Klein monopole have a rich geometric structure which has led to applications in GR [43,44] and string theory [45], as well as to insights in differential geometry [46,47]. In a complementary manner, the Taub-Bolt space has a very interesting topological structure which closely resembles that of Euclidean black holes. This resemblance has allowed for the construction of holographic heat engines [48]. It also allows the space to support electromagnetic fields which generalize the Dirac monopole field [49]. Taub-NUT metrics have been found to exist in a wide range of vacuum and electrovacuum gravitational theories, including, but not limited to, the Lanczos-Lovelock-Maxwell theory [50-56].
In this work, we revisit the eight-dimensional Lovelock theory, where closed-form solutions for arbitrary coupling constants are already intractable [57]. This framework is extended by considering arbitrary coefficients of the Lovelock series and by adding minimally coupled Maxwell fields with general integration constants. For the sake of comparison, we use the same ansätze of Ref. [58] for the metric and Maxwell fields, which can be found in Eqs. (23) and (26), respectively. The charged Taub-NUT solution in cubic Lovelock gravity, the main result of this work, is presented as a root of a generalized Wheeler polynomial in the variable $U(r) = -f(r)/r^2$, where f(r) is the metric function appearing in Eq. (23). The latter is determined by a generalization of Eq. (3), that is,

$$Q_n \equiv \sum_{k=0}^{p} \frac{b_k}{r^{2k}}\, B_k(U), \qquad (5)$$

where $B_k(U)$ is a deformation of $A_k(U)$ by warping functions which depend on the NUT parameter n. When the latter vanishes, we recover Eq. (3), i.e., $Q_0 = Q$. Notice that in eight dimensions the polynomial is cubic, namely p = 3. The field equations are solved by means of the first-order formalism, focusing on the torsion-free sector of the space of solutions. To the best of the authors' knowledge, this result represents the first Wheeler-like polynomial for Taub-NUT spacetimes in Lovelock gravity.

The article is organized as follows. In Sec. II we present the eight-dimensional Lanczos-Lovelock theory coupled to Maxwell fields and the corresponding field equations. In Sec. II A, we restrict ourselves to lower orders in the Lovelock series and write the (analogue) Wheeler polynomials for spherical and complex projective base manifolds. This explicitly shows that, although one may freely parametrize the base manifold to set $b_1 = 1$, the other $b_k$ coefficients cannot be arbitrarily fixed for higher-order theories, in contrast to the Einstein case. In Sec. III, the higher-dimensional ansatz is presented together with lower-dimensional Taub-NUT Wheeler polynomials which generalize Eq. (3). In Sec. IV, we report and discuss the charged Taub-NUT solution with arbitrary coefficients of the Lovelock series. Finally, conclusions and further discussion are given in Sec. V. Appendix A includes additional details of the computation. In our notation, Greek and Latin characters denote spacetime and Lorentz indices, respectively, the Minkowski metric is $\eta_{ab} = \mathrm{diag}(-,+,\ldots,+)$, and the language of differential forms is used from here on.
II. EIGHT-DIMENSIONAL LANCZOS-LOVELOCK GRAVITY
In this work, we use the first-order formalism to treat Lovelock gravity [59]. This is done by considering the vielbein $e^a = e^a{}_\mu dx^\mu$ and the Lorentz connection $\omega^{ab} = \omega^{ab}{}_\mu dx^\mu$ 1-forms as independent gravitational fields. The former is related to the spacetime metric through $g_{\mu\nu} = \eta_{ab}\, e^a{}_\mu e^b{}_\nu$, where $\eta_{ab}$ is the Minkowski metric, while the latter allows us to perform the parallel transport of Lorentz-valued p-forms over the spacetime manifold. The curvature and torsion 2-forms are defined through the Cartan structure equations

$$R^{ab} = d\omega^{ab} + \omega^a{}_c \wedge \omega^{cb}, \qquad (6)$$
$$T^a = de^a + \omega^a{}_b \wedge e^b, \qquad (7)$$

where $\wedge$ is the wedge product, d is the exterior derivative, and D is the Lorentz-covariant exterior derivative with respect to $\omega^a{}_b$. These fields satisfy the Bianchi identities $DT^a = R^a{}_b \wedge e^b$ and $DR^{ab} = 0$.

The eight-dimensional Lovelock theory coupled to U(1) gauge fields $A = A_\mu dx^\mu$, the theory we are interested in throughout this work, is described by the action principle

$$S = S_g + S_m, \qquad (8)$$

where the gravity and matter actions are denoted by $S_g$ and $S_m$, respectively, and they are considered to be minimally coupled. The Lovelock action functional is given by

$$S_g = \int \sum_{k=0}^{4} \alpha_k\, \epsilon_{a_1\cdots a_8}\, R^{a_1 a_2}\wedge \cdots \wedge R^{a_{2k-1} a_{2k}} \wedge e^{a_{2k+1}} \wedge \cdots \wedge e^{a_8}, \qquad (9)$$

with $\epsilon_{01234567} = 1$ (for a discussion of Lovelock gravity in terms of spacetime components see [9]). Notice that in eight dimensions this theory admits a quartic term in the curvature 2-form; however, it represents the dimensional continuation of the Euler density and does not contribute to the vielbein dynamics in the bulk. The Lovelock action is composed of a series of dimensionally continued Euler densities; for a given dimension, the series terminates according to the differential form of maximum degree. In addition to the gravitational sector, we write the Maxwell action functional as

$$S_m = -\frac{1}{2}\int F \wedge \star F. \qquad (10)$$

Here, $\star$ denotes the Hodge dual and F = dA is the field strength of the U(1) gauge fields. The field equations of this theory are obtained by performing stationary variations with respect to the vielbein, Lorentz connection, and U(1) gauge fields, leading to

$$\sum_{k=0}^{3} (8-2k)\,\alpha_k\, \epsilon_{a b_1\cdots b_7}\, R^{b_1 b_2}\wedge \cdots \wedge R^{b_{2k-1} b_{2k}} \wedge e^{b_{2k+1}} \wedge \cdots \wedge e^{b_7} = \tau_a, \qquad (11)$$
$$\sum_{k=1}^{3} k\,(8-2k)\,\alpha_k\, \epsilon_{a b c_1\cdots c_6}\, R^{c_1 c_2}\wedge \cdots \wedge R^{c_{2k-3} c_{2k-2}} \wedge T^{c_{2k-1}} \wedge e^{c_{2k}} \wedge \cdots \wedge e^{c_6} = 0, \qquad (12)$$
$$d\!\star\! F = 0, \qquad (13)$$

respectively, where we have defined the energy-momentum 7-form of the gauge fields as

$$\tau_a = \frac{1}{2}\left( i_a F \wedge \star F - F \wedge i_a \star F \right), \qquad (14)$$

with $i_a$ being the inner contraction along the vector field $E_a = E^\mu{}_a \partial_\mu$, such that $e^a{}_\mu E^\nu{}_a = \delta^\nu{}_\mu$ and $e^a{}_\mu E^\mu{}_b = \delta^a{}_b$. The Noether theorem associated with invariance under diffeomorphisms [60-62] implies that the energy-momentum 7-form in Eq. (14) satisfies the on-shell conservation law

$$D\tau_a = 0. \qquad (15)$$

Invariance under local Lorentz transformations, on the other hand, implies a conservation law that is trivially satisfied for Maxwell fields. It is worth mentioning that the Bianchi identities impose severe restrictions on the torsion components when arbitrary coefficients of the Lovelock series are considered in vacuum [63]. These restrictions can be avoided if the coefficients are chosen in such a way that the action principle can be written as the Chern-Simons form for the (A)dS group in odd dimensions or as Born-Infeld gravity in even dimensions; this implies that the theory has the maximum number of degrees of freedom [63]. Here, we consider arbitrary coefficients of the Lovelock series and focus our attention on the torsion-free sector of the space of solutions, namely $T^a = 0$, which automatically solves Eq. (12). This condition allows one to solve the Lorentz connection in terms of the vielbein, reducing it to the standard Levi-Civita connection. Thus, the solution presented here belongs to the Riemannian branch of Lovelock theory, even though vacuum solutions with nontrivial torsion have been reported for different isometry groups in Refs. [64-69].
A. Lower-order Wheeler polynomials
Before going on to compute the Wheeler polynomial for the Taub-NUT solution in eight-dimensional Lovelock-Maxwell theory, it is useful to summarize the lower-order solutions in the static limit. They portray how the original Wheeler polynomials [32], which consider spherical symmetry, are generalized to the Kähler case. In the next section, we explain why we specialize to the case where the base space is complex projective.
Let us focus on vacuum Einstein-Gauss-Bonnet theory with a cosmological constant $\Lambda$ and a Gauss-Bonnet coupling constant $\alpha_{\rm GB}$. This fixes the coupling constants in Eq. (9) in terms of these two parameters and, in particular, sets $\alpha_3 = 0$. In arbitrary spacetime dimension D, the Wheeler polynomial (2) reduces to the quadratic

$$a_2\, F^2 + F + a_0 = \frac{M}{r^{D-1}}, \qquad (16)$$

with $a_2$ proportional to $\alpha_{\rm GB}$ and $a_0$ fixed by $\Lambda$. This equation yields the Boulware-Deser solution [13] and, setting $\alpha_{\rm GB} = 0$, leads to the familiar Schwarzschild-Tangherlini result

$$f(r) = 1 - \frac{M}{r^{D-3}} - \frac{2\Lambda\, r^2}{(D-1)(D-2)}. \qquad (17)$$
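The two branches of the quadratic (16) can be exhibited symbolically. Below is a minimal sympy sketch, under the schematic normalization assumed above ($a_1 = 1$, $a_0 = -2\Lambda/((D-1)(D-2))$, $a_2 = \alpha$); it solves for F and confirms that one branch reduces to the Schwarzschild-Tangherlini(-AdS) metric function as the Gauss-Bonnet coupling is switched off, while the other branch diverges.

```python
import sympy as sp

r, M = sp.symbols('r M', positive=True)
Lam = sp.symbols('Lambda', real=True)
alpha = sp.symbols('alpha', positive=True)   # a_2, proportional to alpha_GB
D = 8
F = sp.symbols('F')

# Quadratic Wheeler polynomial (16), with a_1 = 1 and a_0 = -2*Lam/((D-1)(D-2)):
P = alpha*F**2 + F - 2*Lam/((D - 1)*(D - 2))
roots = sp.solve(sp.Eq(P, M/r**(D - 1)), F)

# f(r) = 1 - r^2 * F for each branch; the GR branch has a finite alpha -> 0
# limit (Schwarzschild-Tangherlini-AdS), the Boulware-Deser branch blows up.
for root in roots:
    print(sp.simplify(sp.limit(1 - r**2*root, alpha, 0)))
```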
For comparative purposes we rewrite this result in the form of Eq. (3), which in eight dimensions (with $b_k = 1$ for the round hypersphere) reads

$$A_0(U) + \frac{A_1(U)}{r^2} + \frac{A_2(U)}{r^4} = \frac{M}{r^{7}}, \qquad (18)$$
with polynomials $A_k(U)$ given by

$$A_0(U) = a_2 U^2 + U + a_0, \qquad A_1(U) = 2a_2 U + 1, \qquad A_2(U) = a_2, \qquad (19\text{-}21)$$

recalling that $U = -f/r^2$. If we now replace the base manifold, substituting the hypersphere $S^6$ by a complex projective space $\mathbb{CP}^3$, the previous polynomials remain unchanged, but the equivalent of Eq. (18) becomes

$$A_0(U) + \frac{A_1(U)}{r^2} + \frac{b_2\, A_2(U)}{r^4} = \frac{M}{r^{7}}, \qquad (22)$$

with $b_2 \neq 1$. Recall that the coefficients $b_k$ in Eq. (3) depend on the geometry of the base manifold. Since the complex projective spaces with the Fubini-Study metric are Einstein manifolds, the results of Ref. [30] imply that Eqs. (18) and (22) differ only in the coefficient $b_2$, once the parametrization convention of that reference is adopted. Additionally, we mention that the Taub-NUT solution found in Ref. [70] has as its static limit the black hole determined by Eq. (22). In the next section, we discuss how the NUT parameter generalizes polynomials such as the ones presented above.
III. HIGHER-DIMENSIONAL TAUB-NUT GEOMETRY
The definition of a higher-dimensional Taub-NUT space we consider here is given by the family of inhomogeneous geometries built over complex line bundles presented in [39], that is,

$$ds^2 = f(r)\left(d\tau + 2n\,B\right)^2 + \frac{dr^2}{f(r)} + \left(r^2 - n^2\right) d\Sigma^2, \qquad (23)$$

where $\tau$ is the Euclidean time coordinate and n is the NUT parameter. This parameter sources the magnetic part of the Weyl tensor and is, in general, related to the magnetic mass of the geometry [71,72]. Notice that, for $n \to 0$, we recover a metric equivalent to (1), i.e., a static metric modulo a Wick rotation. The line element $d\Sigma^2$ is Kähler and its associated symplectic form is given by $\omega = dB$. The original Taub-NUT solutions are the special case where the base manifold is a sphere $S^2$, which coincides with the complex projective line $\mathbb{CP}^1$; thus, the static limit leads to a spherically symmetric spacetime. This situation is particular to four dimensions, since no higher-dimensional hypersphere admits a Kähler structure [73].

We will specialize to higher-dimensional Taub-NUT solutions with hyperspherical boundary conditions. These are the only ones which admit non-singular Euclidean sheets with nuts [38]. This, in turn, implies that they are the only conditions under which Hawking-Page-like [74] phase transitions are possible [75]. As for Eq. (3), the only loss of generality lies in the variation of its coefficients. These boundary conditions imply a Hopf fibration of the Euclidean time direction over a complex projective space. Hence, we fix the geometry of the base manifold to that of Fubini-Study. For the complex projective space of real dimension 2k, our notation is $B = \mathcal{A}_k$, and we add a subscript k to the line element in (23) to indicate that it is the Fubini-Study metric on $\mathbb{CP}^k$. An iterative construction of the Fubini-Study metric using explicitly real expressions is useful [76]; we write the corresponding recursion relation in Eqs. (24) and (25). Notice how the metric on the $\mathbb{CP}^k$ manifold is built on top of the one on the $\mathbb{CP}^{k-1}$ submanifold. This submanifold is in fact totally geodesic, or extrinsically flat, and in these coordinates $\psi_k = \pi/2$ corresponds exactly to this special submanifold. This fact is commented on further below.

The four-dimensional charged Taub-NUT solution [50,51] possesses a Maxwell field whose null directions are aligned with the repeated principal null directions of the Weyl tensor. In this spirit, we choose

$$A = h(r)\left(d\tau + 2n\,B\right) \qquad (26)$$

as the ansatz for the gauge potential. This form of the gauge field was used in a higher-dimensional setting for the first time in Ref. [55]. Moreover, even without an explicit form of the metric function f in Eq. (23), we notice that the Maxwell Eq. (13) can be solved independently. In other words, Maxwell's equations together with the ansatz (26) yield a differential equation for h, where primes denote derivatives with respect to the coordinate r. This equation admits a general solution with two integration constants, q and v, in which $W_k$ denotes a series that resembles the binomial expansion of $(r^2 - n^2)^k$. The function $W_k$ may be generated, if so desired, by an integral formula; it may also be written in terms of Legendre polynomials or as a hypergeometric function by setting the appropriate parameters.

To illustrate how Wheeler polynomials are generalized by NUT parameters, we present the special cases of Lovelock Taub-NUTs in four dimensions and in six dimensions. Recall that $Q_n$ has been defined in Eq. (5). Equations (29) and (30) are, in fact, the deformation elements of the Wheeler polynomials (3) when the NUT parameter is turned on. It should be noted that, when $n \to 0$, both series become unity.
Recall that in four dimensions the base manifold is the complex projective line, while in six dimensions it is the complex projective plane.
IV. CHARGED EIGHT-DIMENSIONAL SOLUTION
We are now in a position to present the charged eight-dimensional solution which fits within the ansatz (23). To this end, we use a generalized Wheeler polynomial. The base manifold is the complex projective space $\mathbb{CP}^3$. The Euclidean time direction is Hopf fibered over this base space, resulting in r = constant hypersurfaces which are hyperspheres $S^7$. The isometry algebra of the total space is $su(4) \oplus u(1)$, and the topology is either Euclidean, if the space has a nut, or complex projective minus a point, if it possesses a bolt.
The explicitly real Fubini-Study metric on the base manifold may be found from Eqs. (24) and (25); thus

$$d\Sigma_2^2 = 6\, d\psi_2^2 + \sin^2\psi_2 \cos^2\psi_2\, d\phi^2 + \cdots$$

We choose the vielbein basis as shown in Appendix A. Since we are looking for torsion-free solutions, the Lorentz connection can be solved in terms of the vielbein by solving $de^a + \omega^a{}_b \wedge e^b = 0$. The curvature 2-form associated with this connection can be computed from the first Cartan equation (6); however, due to the cumbersome nature of its components, we report them in Appendix A. Moreover, we write the field strength in the following manner:

$$F = F_I\, e^0 \wedge e^1 + F_{II}\left(e^2\wedge e^3 + e^4\wedge e^5 + e^6\wedge e^7\right),$$

with

$$F_I = -h'(r), \qquad F_{II} = \frac{2n\, h(r)}{r^2 - n^2}.$$

Here, h(r) is the function defined in Eq. (26). The $so(1,7)$-valued energy-momentum 7-form (14) for this ansatz is diagonal, characterized by an energy density $\rho$ and a pressure p [Eq. (44)], where $\bar a = 2, \ldots, 7$ are indices of $d\Sigma_3^2$ such that $\epsilon_{234567} = 1$. Although we know the solution for the Maxwell field beforehand, we mention that the Maxwell equation takes the form

$$F_I'\left(r^2 - n^2\right) + 6\left(r\, F_I + n\, F_{II}\right) = 0, \qquad (45)$$

whose explicit solution is [cf. Eq. (28)]

$$h(r) = \frac{1}{\left(r^2 - n^2\right)^3}\left[q\,r + v\left(r^6 - 5n^2 r^4 + 15 n^4 r^2 + 5 n^6\right)\right]. \qquad (46)$$

In our notation, this corresponds to the explicit field-strength components in Eqs. (47) and (48). Examining the asymptotic behavior of the field strength reveals q to be the electric charge, up to some rescaling. The other integration constant, v, can be interpreted as the value of the electric potential at infinity [55]. For the gauge potential to be regular at the nut (bolt, respectively), where the Euclidean time direction degenerates, it must vanish there; so v is, in fact, a potential difference across the entire manifold.

Furthermore, there is a topological interpretation of v which endows it with a magnetic flavor [41]. This Maxwell field naturally lives in a principal U(1) bundle over the Euclidean background. The bundle's connection is locally represented by the gauge potential. This circle bundle is classified by a single topological index, which can be calculated by integrating over the background. If the background has a nut, then the index vanishes; in the complementary case, the integral yields an index proportional to the product nv [Eq. (49)]. So we see that v and n are related to a topological invariant of the underlying bundle space. However, the circle bundle just described possesses a principal U(1) subbundle defined over the unique totally geodesic sphere that lies at the asymptotic boundary. This subbundle is isomorphic to the Dirac monopole bundle and has Chern number 8nv, which must be an integer. In the Dirac monopole, the Chern number is twice the magnetic charge; this carries over to this eight-dimensional Taub-Bolt. For the gauge potential (46), this means that the magnetic charge p is given by 4nv. This is also consistent with an asymptotic examination such as the one carried out for the electric charge.
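As a consistency check, the solution (46) can be tested against the Maxwell equation (45) symbolically. The sketch below assumes the component identifications $F_I = -h'(r)$ and $F_{II} = 2n\,h(r)/(r^2-n^2)$ used above, which are fixed only up to an overall orientation convention; the combination in Eq. (45) then vanishes identically.

```python
import sympy as sp

r, n, q, v = sp.symbols('r n q v', positive=True)

# Eq. (46):
h = (q*r + v*(r**6 - 5*n**2*r**4 + 15*n**4*r**2 + 5*n**6)) / (r**2 - n**2)**3

# Field-strength components (our assumed identification, up to orientation):
F_I = -sp.diff(h, r)
F_II = 2*n*h / (r**2 - n**2)

# Maxwell equation (45): F_I' (r^2 - n^2) + 6 (r F_I + n F_II) = 0
maxwell = sp.diff(F_I, r)*(r**2 - n**2) + 6*(r*F_I + n*F_II)
print(sp.simplify(maxwell))   # -> 0
```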
On the other hand, the functions $\rho$ and p can be read off from Eqs. (47) and (48) by using their definition in Eq. (44). The field equation (11) then reduces to Eqs. (50) and (51), where $R_I, \ldots, R_{VI}$ have been defined in Appendix A. It is worth mentioning that these equations are not linearly independent, since differentiating the former yields the latter after some algebraic manipulation. Thus, the equation of motion admits a solution given in terms of a generalized Wheeler polynomial, Eq. (52), where M is an integration constant, with the deformed polynomials $B_k(U)$ listed in Eq. (53). In Eq. (52), the coefficients are $b_0 = 1$, $b_1 = 1/5$, $b_2 = 1/20$, and $b_3 = 1/40$. Notice that we have not set $b_1 = 1$, which is the convenient choice in the setting of Ref. [30]; this may be imposed by a reparametrization of r, under which the left-hand side of Eq. (52) is completely invariant except for the values of the $b_k$ coefficients. Moreover, P(r) is the Maxwell contribution; it is shorthand for

$$P(r) \equiv -\frac{1}{12\,r\left(r^2 - n^2\right)^3}\Big[300\, v^2 n^{10}\left(r^2 - n^2\right) + 280\, n^6 v^2 r^4\left(r^2 - 5n^2\right) - 4\, v^2 r^8 n^2\left(r^2 - 25 n^2\right) + \cdots\Big].$$

To evaluate the static limit, we first replace v by its equivalent p/4n and then take $n \to 0$. After this limit has been taken, the vielbein component $e^0$ points only along the Euclidean time direction. Careful evaluation yields two parts of the gauge potential, which we write in the following manner:

$$A = \frac{q}{r^5}\, d\tau + 2p\left(\sin^2\psi_3\, d\phi_3 + \sin^2\psi_2\, d\phi_2 + \sin^2\psi_1\, d\phi_1\right).$$
The Wheeler polynomial (52) then reduces to

$$\sum_{k=0}^{3} \frac{b_k}{r^{2k}}\, A_k(U) = \frac{M}{r^{7}} + P(r)\Big|_{n\to 0},$$

with polynomials $A_k(U)$ given by

$$A_0(U) = \sum_{i=0}^{3} a_i\, U^{i}, \qquad A_1(U) = a_1 + 2a_2 U + 3a_3 U^{2}, \qquad A_2(U) = a_2 + 3a_3 U, \qquad A_3(U) = a_3.$$

As far as the Wheeler polynomial is concerned, the static limit amounts to setting the warping functions in the Taub-NUT solution to unity. Moreover, the appearance of warping functions (29) and (30) is recurrent: the Gauss-Bonnet case ($\alpha_3 = 0$) shows a change of warping function from six dimensions to eight, cf. Eqs. (32) and (56). Notice that, in eight dimensions, the coefficients that appear in the polynomials just above recur throughout Lovelock gravity.
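The combinatorial structure of these polynomials is easy to verify: with $A_k(U) = \sum_{i\ge k}\binom{i}{k} a_i U^{i-k}$, one has $A_0 = P(U)$, the recurrence $A_k' = (k+1)A_{k+1}$, and the rearrangement of P(F) into the form of Eq. (3) for a constant-curvature base. A short sympy check for the cubic case (p = 3):

```python
import sympy as sp

U, F, r, kappa = sp.symbols('U F r kappa')
a = sp.symbols('a0:4')          # a_0, ..., a_3: cubic Lovelock coefficients
p = 3

P = sum(a[i]*F**i for i in range(p + 1))                    # Wheeler polynomial
A = [sum(sp.binomial(i, k)*a[i]*U**(i - k) for i in range(k, p + 1))
     for k in range(p + 1)]                                 # A_k(U)

# A_0 = P(U) and the recurrence A_k' = (k+1) A_{k+1}:
assert sp.expand(A[0] - P.subs(F, U)) == 0
for k in range(p):
    assert sp.expand(sp.diff(A[k], U) - (k + 1)*A[k + 1]) == 0

# Rearrangement of P(F) with F = (kappa - f)/r^2 = kappa/r^2 + U:
lhs = P.subs(F, kappa/r**2 + U)
rhs = sum(kappa**k / r**(2*k) * A[k] for k in range(p + 1))
print(sp.expand(lhs - rhs))   # -> 0
```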
V. CONCLUSIONS
In this work, we considered the eight-dimensional Lanczos-Lovelock-Maxwell theory within the first-order formalism of gravity. By focusing on the torsion-free sector of the space of solutions, we have generalized Wheeler's approach of integrating the equations of motion of Lovelock theory by reducing them to an algebraic equation. This generalization allows us to investigate stationary spacetimes, in addition to the static case considered so far in the literature. In particular, we focus on Taub-NUT geometries with different higher-curvature terms of the Lovelock series, to pave the way toward more general situations. The application of the method is novel, since previous cases were limited to static manifolds. Taub-NUT spacetimes are stationary yet considerably more tractable than rotating spacetimes such as the Kerr solution. Considering inhomogeneous geometries on complex line bundles over Kähler manifolds has proven to be a nontrivial generalization of the approaches used for static manifolds [31,32]; however, the geometries resemble static metrics closely enough that the generalization is straightforward.
Using the extended version of Wheeler's methodology, we presented a new solution of Lanczos-Lovelock theory supplemented by Maxwell sources in a rather compact form. Arbitrary parameters of the Lovelock series are allowed, so gravity theories such as Born-Infeld or pure Lovelock can be analyzed for the corresponding values of the couplings. The warping functions in the Wheeler polynomial are independent of rescalings of the base manifold, except through the coefficients which encode its geometry. The Taub-Bolt branch of the solution presented here is a generalization of the Dirac monopole which includes self-gravity [41]. It has a unique Chern index [cf. Eq. (49)] which completely classifies all possible configurations and renders an electromagnetic parameter a topological charge.
Interesting questions remain open. For instance, given the recent development of Lorentzian thermodynamics for Taub-NUT spacetimes [77-79], a higher-dimensional treatment including the example presented here is certainly desirable. The Euclidean method can be applied to the generalized Wheeler polynomial we provide in Eqs. (52) and (57). We stress that this thermodynamic exploration does not require the explicit solution for the metric function, as the Wheeler polynomial suffices. The black hole limit may also deserve a thermodynamic study in extended black hole mechanics, where the Lovelock coupling constants are treated as thermodynamic entities. Interpreting them as thermodynamic variables which are held fixed in the action (and so in the ensemble associated with them) naturally leads to their variation in the associated thermodynamic potential. We expect to address this task in future work.
"year": 2019,
"sha1": "0048cf710267690d41cfd358257f1e338c572a97",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1908.06908",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "416e73adc906b9849668ecd10f7e1484e98f69f0",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Pathogenicity of the Entomopathogenic Fungus Metarhizium anisopliae Var. Major on Different Stages of the Sunn Pest Eurygaster integriceps
The sunn pest, Eurygaster integriceps Puton (Hemiptera: Scutelleridae), is the most important insect pest of wheat and barley. The population management of this pest is of major concern to wheat producers. One of the potential control strategies is to use entomopathogenic fungi. This study evaluates the pathogenicity of the fungus Metarhizium anisopliae var. major (Metchnikoff) Sorokin (Hypocreales: Clavicipitaceae) on the sunn pest, E. integriceps. Five concentrations of the fungus were utilized, ranging from 1 × 10⁴ to 1 × 10⁸ conidia/mL, accompanied by controls. Fifth instar nymphs and adults (a migratory summer population and a diapausing population) previously exposed to fungi were cultured to re-isolate the fungi, and the growth parameters were analyzed. A direct spray technique was used to expose E. integriceps to the isolates. The experiment was repeated four times, and mortalities of the insects in all treatments were recorded daily. The results showed that the mortality of infected nymphs was significantly higher than the mortality of control nymphs. Also, the longevity of infected adults was lower than that of the controls. The results also showed that diapausing adults of the sunn pest were much more susceptible to infection than the summer adults. Estimated LC50 values for the M14 isolate were 1.4 × 10⁶, 1.4 × 10⁵, and 2.3 × 10³ spores/mL against the aestivating population, the diapausing population, and 5th instar nymphs, respectively. Estimated LT50 values using 10⁸ spores/mL of the M14 isolate on the aestivating and diapausing populations were 11.9 and 5.11 days, respectively. The results demonstrated that M. anisopliae was effective on all stages of E. integriceps. In addition, the nymphal stage was more susceptible than the adults.
Introduction
In the regions of Central and West Asia and North Africa, the human population is steadily rising, while rainfall and arable land are limited, and a food security crisis is looming (El-Beltagy 2000). Cereals are an extremely important food commodity in most countries in this region, and wheat, Triticum aestivum L., provides over 40% of the per capita dietary supply of calories and protein (FAO 2013). Wheat and barley, Hordeum vulgare L., are attacked by several species of pests. The sunn pest, Eurygaster integriceps Puton (Hemiptera: Scutelleridae), is the most damaging and economically important pest that predominantly attacks these plants, feeding on the leaves, stems, and grains, reducing yield, and injecting chemicals into the grain that destroy the gluten and greatly reduce the baking quality of the resulting flour (Javahery 1995; Hariri et al. 2000; Parker et al. 2003).
Eurygaster integriceps lives for one year and is univoltine (produces one generation annually). Around three months are spent actively on graminaceous plants. The rest of the year is spent overwintering in vegetation and litter and undergoing diapause, which includes two phases: aestivation during the hot, dry months of late summer and autumn, and diapause during the cold winter months. In the southern area of its range (Iran, Iraq, and Turkey), E. integriceps overwinters at altitudes between 1000 and 2000 m a.s.l. As soon as temperatures rise above ~12° C, the insects migrate back into the lower plains, where the crops are (Critchley 1998). The pest affects around 15 million ha annually (Moore and Edgington 2006). There are many different methods for controlling E. integriceps; however, all the methods aim to control its outbreaks, not to eradicate it. The methods utilized for this purpose include chemical, biological, mechanical, and cultural control.
One of the research priorities identified by the Food and Agriculture Organization of the United Nations was to evaluate the potential of naturally occurring pathogens in E. integriceps overwintering sites (Jordan and Pascoe 1996). Entomoparasitic fungi (e.g., Beauveria bassiana) have demonstrated their potential to kill bugs when other biological agents do not, i.e., during diapause (Parker et al. 2003; El Bouhssini et al. 2004). Entomopathogenic fungi such as B. bassiana and Metarhizium anisopliae (Metchnikoff) Sorokin (Hypocreales: Clavicipitaceae) have shown great potential for managing various insect pests (Inglis et al. 2001). In determining the effectiveness of introducing fungi to the overwintering localities of E. integriceps, scientists recorded significantly increased mortality in plots treated with B. bassiana and M. anisopliae compared to control plots (Parker et al. 2003). Metarhizium anisopliae, the agent of green muscardine disease, attaches its spores to the epidermis of the insect; after germinating, the hyphae pierce the epidermis and infiltrate the insect. This entomopathogenic fungus penetrates the haemolymph of the insect, grows and proliferates inside the body of the host, interacts with the insect's defense mechanisms, and finally sporulates on the host cadaver (Hajek and St. Leger 1994; Sedighi 2011).
The objectives of this study were to: (1) calculate the LC50 and LT50 of Iranian M. anisopliae isolates from different sources and regions, (2) select the most virulent isolate, and (3) compare the susceptibility of different developmental stages (nymphs, and migratory summer and diapausing adults) of E. integriceps to infection by the entomopathogenic fungus M. anisopliae.
Maintenance of field-collected insects
Adult E. integriceps were collected from their resting sites in the Varamin region (Tehran, Iran) in December 2009 (diapausing population) and in September 2009 (migratory summer population). The samples were maintained at 25 ± 2° C, 60% RH, and a 16:8 L:D photoperiod on wheat seeds. A piece of cotton soaked with water was used as a water source. The insects were kept for one week under these conditions, and dead insects were removed before the bioassays. The E. integriceps 5th instar nymphs were collected from wheat fields of the Varamin region. Nymphs were kept under the same conditions as the adults.
Fungal isolates
The isolates of M. anisopliae var. major used in the experiments are preserved at the Department of Agricultural Entomology, Iranian Research Institute of Plant Protection, Tehran, Iran (Table 1). Fungi were cultured on Sabouraud dextrose agar media, maintained at 23 ± 2° C under a 16:8 L:D photoperiod, and prepared using a two-step method in which hyphae were first produced in a shaking Erlenmeyer flask and then transferred to solid medium to produce conidia.
Biphasic production: liquid phase
A liquid stage in the production system encourages rapid mycelial growth of the fungal culture, which can then be used to inoculate the second, solid stage of the production process. The liquid medium used in the first stage contained potato extract (200 g in 1000 mL distilled water), which is essential for growth. The liquid media were autoclaved in polypropylene bags at 121° C for 20 min (Jenkins et al. 1998). One-liter Erlenmeyer flasks containing 250 mL of liquid media were inoculated with spores of M. anisopliae and incubated in a shaking incubator (US-848DSRNL, Vision Scientific, www.visionbionex.com) at 100 rpm and 24° C for five days.
Biphasic production: solid phase
The grain (300 g barley) was added to boiling distilled water (700 mL) and allowed to parboil for 1 hr. After cooling, it was autoclaved in polypropylene bags at 1 atm and 120° C for 1 hr and then transferred to a laminar flow cabinet. The liquid culture was diluted by 50% with sterile cold water, and the resulting liquid inoculum was added to the bags (150 mL/kg barley) (Jenkins et al. 1998). Conidia were harvested from 14-day-old sporulating cultures by scraping the surface of the barley with a spatula and suspending the conidia in sterile 0.04% aqueous Tween 80 (Merck, www.merck.com). Different concentrations of spores were prepared as required after several preliminary tests.
Bioassay: adults
Batches of 10 E. integriceps adults were placed on filter paper in dishes (5 × 10 × 16 cm). Spore suspensions (4 mL) of M. anisopliae at different concentrations (10⁴, 10⁵, 10⁶, 10⁷, 10⁸, and 10⁹ conidia/mL) were sprayed onto the adults with a hydraulic hand sprayer fitted with a cone nozzle. Dishes were incubated at 25° C and 75 ± 5% RH under a 16:8 L:D photoperiod. Control adults were sprayed with 4 mL of sterile distilled water containing 0.04% Tween 80. Dead insects were counted daily for 22 and 13 days for the migratory summer population and the diapausing population, respectively. Dead insects were surface sterilized by immersion in a solution of 50% sodium hypochlorite for 2 min, rinsed once in 70% ethanol and three times in sterile distilled water, and then placed in sterile Petri dishes with wet cotton and incubated at 25° C and 75 ± 5% RH to allow fungal growth on the cadavers and confirm infection. In total, 280 engorged adults were treated with various concentrations of the different strains.
Bioassay: nymphs
The effects of the isolates M14, IRAN437c, and IRAN715c on the 5th instar nymphs were studied. Spore suspensions (3 mL) of M. anisopliae were sprayed on nymphs at different concentrations (10³, 10⁴, 10⁵, 10⁶, 10⁷, and 10⁸ conidia/mL). Nymphs were maintained at 25 ± 2° C and 75 ± 5% RH under a 16:8 L:D photoperiod on wheat grains in plexiglass dishes (5 × 10 × 16 cm). There were 10 nymphs in each treatment, with four replicates per treatment dose. Control nymphs were sprayed with 3 mL of sterile distilled water containing 0.04% Tween 80. Nymphs were monitored for seven days, and mortality was determined by larval wasting and immobility.
Statistical analysis
Mortality was corrected using Abbott's formula (Abbott 1925). All experiments used a completely randomized factorial design. Values of LC50, LC99, and their 95% fiducial limits were calculated using Probit analysis (SAS Institute 2004). Graphs were plotted using Microsoft Excel (www.microsoft.com).
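For readers reproducing this kind of analysis outside SAS, the sketch below applies Abbott's correction and then a classical probit-transform regression to estimate an LC50. The dose-response numbers are hypothetical, and the least-squares fit on probits is only a simple approximation of the full maximum-likelihood Probit analysis used here.

```python
import numpy as np
from scipy import stats

def abbott(treated, control):
    """Abbott (1925) correction for natural mortality in the controls."""
    return (treated - control) / (1.0 - control)

# Hypothetical bioassay: five concentrations, 5% control mortality.
doses = np.array([1e4, 1e5, 1e6, 1e7, 1e8])      # conidia/mL
raw = np.array([0.25, 0.45, 0.65, 0.80, 0.95])   # observed mortality fractions
p = abbott(raw, 0.05)

# Probit-transform regression: Phi^{-1}(p) = a + b * log10(dose);
# the LC50 is the dose at which the probit is zero (p = 0.5).
x = np.log10(doses)
res = stats.linregress(x, stats.norm.ppf(p))
lc50 = 10 ** (-res.intercept / res.slope)
print(f"LC50 ~ {lc50:.2e} conidia/mL")
```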
Results
All three studied isolates of M. anisopliae were pathogenic to E. integriceps (Table 2), although variations in their pathogenicity were noticed. Estimated LC50 values for the Iran715c and Iran437c isolates on 5th instar nymphs were 1.4 × 10³ and 2.3 × 10³ conidia/mL, respectively. For the summer adults, the LC50 values of the M14, IRAN437c, and IRAN715c isolates were 1.4 × 10⁶, 3.2 × 10⁶, and 4.3 × 10⁷ conidia/mL, and for the winter adults they were 6.5 × 10⁴, 7.1 × 10⁴, and 3.6 × 10⁵ conidia/mL, respectively. The mortality of E. integriceps upon M14 treatment was the highest (58% and 92.5% after 10 days for summer and winter adults, respectively) when compared to treatments with the other two fungal strains (Table 3). The susceptibility of the nymphal stage of E. integriceps to local strains of M. anisopliae is shown in Table 4. The times at which 50% lethality occurred (LT50, at 10⁸ conidia/mL) for the M14, Iran715c, and Iran437c isolates were 5.11, 6.31, and 9 days (diapausing adults) and 11.9, 17.52, and 29.64 days (summer adults), respectively. For the 5th instar nymphs, these values were 3.45 and 3.12 days for the Iran715c and Iran437c isolates, respectively (Table 5). Strain Iran437c was less virulent to the adult stages of E. integriceps in comparison with the other strains. Nymphs were highly susceptible to all the isolates (Table 4): isolates Iran437c and Iran715c caused 32% and 43% mortality, respectively, in 5th instar nymphs by day three, and 95% and 100% by day five, and these values reached 100% by day eight. The nymphs that were inoculated but not killed were not able to emerge as adults. The longevity of the adults was significantly affected, being shorter after treatment. The most susceptible stage was the nymphal stage, followed by the diapausing and then the summer population (Figure 1). The percentage of cumulative mortality in the diapausing population treated with the M14 strain of M. anisopliae increased with increasing concentration: 25%, 57%, 85%, 92%, and 100% cumulative mortality were recorded in the 10⁴, 10⁶, 10⁷, 10⁸, and 10⁹ spore/mL treatments, respectively (Figure 2). Similar results were found for the other strains (Figures 3 and 4). Insects that died within the first 24 hours were not considered in the analysis, because the fungus needs more than 24 hours to kill an insect.
When the treated insects died, they showed typical symptoms of infection and were covered with fungus after incubating in an initially sterile Petri dish in conditions with high humidity. Infected bodies of nymphs and adults of E. integriceps also turned hard and dry. The fungal outgrowth and sporulation on treated nymphs and adults were observed for multiple days after treatment. At first the color of the fungal outgrowth and sporulation, which was observed on the surface of pests' cadavers, was white. After some days, the color of the fungus slowly changed to green.
Mortality rates of the different growth stages of E. integriceps were significantly different. Average mortality of the 5th instar nymphs after treatment with the fungal suspension was 100% six days after spraying, while the average mortality percentages for the diapausing and summer adults were 54.58% and 52.50%, respectively, after 12 days.
Discussion
Eurygaster integriceps has been reported to be susceptible to infection by several fungi in overwintering sites (Parker et al. 2003). Some of these naturally infecting pathogens, such as B. bassiana and M. anisopliae, are among the most virulent pathogens of the pest, with M. anisopliae causing the highest mortality rate for E. integriceps. Parker et al. (2003) suggested the use of this entomopathogenic fungus as a biological control agent for the management of E. integriceps in wheat fields and overwintering sites. The results of our study revealed that isolate M14 was more effective than the Iran437c and Iran715c isolates at combating E. integriceps.
Two previous studies on the efficacy of M. anisopliae against E. integriceps were considered. One study (Bandani et al. 2006) indicated that the LC50 values of two isolates (M189 and 4556) of M. anisopliae on adults (with an immersion method) were 7.704 × 10⁴ and 3.38 × 10⁵ conidia/mL, respectively; the LT50 values for 1 × 10⁸ and 1 × 10⁹ spores/mL test suspensions of the M189 and 4556 isolates were 14.8 and 11.2 days, respectively. The other study (Kivan 2006) was a laboratory pathogenicity experiment on the effects of isolate 3540 of M. anisopliae from Galleria mellonella on adult E. integriceps: in a single exposure concentration (1 × 10⁶ conidia/mL) assay, the adults were immersed in 10 mL of a fungal suspension, and mortality was 100% at eight days after treatment. In our study, the estimated LC50 values of the isolates M14, Iran715c, and Iran437c for the summer adults were 1.4 × 10⁶, 3.2 × 10⁶, and 4.3 × 10⁷ conidia/mL, and for the diapausing adults they were 6.5 × 10⁴, 7.1 × 10⁴, and 3.6 × 10⁵ conidia/mL, respectively. The results of our study showed that different isolates of M. anisopliae had different effects on E. integriceps adults, so identification, application, and screening of different isolates in bioassays will provide promising potential for finding efficient isolates to be used in field studies as biopesticides. Generally, direct immersion of the insects in fungal suspensions resulted in high percentages of overall mortality (Kivan 2006). Our study showed that the probability of placing conidia on the insect is higher with the direct immersion method than with spraying, but the direct immersion method is not applicable in the field.
The results from the insect bioassays clearly indicate that E. integriceps is susceptible to infection by M. anisopliae. However, comparing the LC50 values showed differences in the mortality rates at different stages of life, as the nymphal stages were more susceptible than the adult populations. Different developmental stages of insects have different sensitivities to fungi, so to apply fungi most effectively it is useful to know the developmental stage of the insect being treated.
In this study, the effects of the fungal isolates were evaluated on the various nymphal and adult stages of E. integriceps so that they can be used in the field against the most susceptible stages. Because the hardness and resistance of the insect cuticle play an important role in the incidence of fungal infections, the susceptibility of the nymphal stages could be attributed to their reduced cuticle thickness (Ghazavi 2002). There were also significant differences between the diapausing and summer adult populations: diapausing adults of E. integriceps were more susceptible than the summer population. Reduction of food leads to increased pest sensitivity to fungi, and conversely, adequate feeding can reduce infection efficiency (Sarami and Zand 2003). So, insects in the diapausing stage, due to lack of nutrition and a reduction in the amount of reserved energy (fat) in their bodies, will be more susceptible than insects in the summer population. In their experiments on winter adults of E. integriceps in hibernation sites and wheat fields, Moore and Edgington (2006) concluded that, because of exposure to cold, wet soil and a lack of nutrition during the four months, winter E. integriceps adults were weaker and faced energy shortages, thus providing better conditions for infection by pathogenic fungi; they therefore recommended applying fungi in the wintering sites of E. integriceps before the insects move to the wheat fields. These results are consistent with the results of our study, and the following characteristics can be attributed to the diapausing stage: 1) reduced food storage, 2) weakness due to more varied environmental conditions, and 3) reduced body fat.
"year": 2013,
"sha1": "11fd06a8dcfe88454e3bbffb31d0eb0dbd743e89",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/jinsectscience/article-pdf/13/1/150/18153653/jis13-0150.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "11fd06a8dcfe88454e3bbffb31d0eb0dbd743e89",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Egalitarian and Just Digital Currency Networks
Cryptocurrencies are a digital medium of exchange with decentralized control that renders the community operating the cryptocurrency its sovereign. Leading cryptocurrencies use proof-of-work or proof-of-stake to reach consensus, and thus are inherently plutocratic. This plutocracy is reflected not only in control over execution, but also in the distribution of new wealth, giving rise to "rich get richer" phenomena. Here, we explore the possibility of an alternative digital currency that is egalitarian in control and just in the distribution of created wealth. Such currencies can form and grow in a grassroots and sybil-resilient way. A single currency community can achieve distributive justice by egalitarian coin minting, where each member mints one coin at every time step. Egalitarian minting results, in the limit, in the dilution of any inherited assets and in each member having an equal share of the minted currency, adjusted by the relative productivity of the members. Our main theorem shows that a currency network, where agents can be members of more than one currency community, can achieve distributive justice globally across the network by joint egalitarian minting, where each agent mints one coin in only one community at each timestep. Equality and distributive justice can be achieved among the people who own the computational agents of a currency community, provided that the agents are genuine (unique and singular). We show that currency networks are sybil-resilient, in the sense that sybils affect only the communities that harbour them. Furthermore, if a currency network has a subnet of genuine currency communities, then distributive justice can be achieved among all the owners of the subnet.
Introduction
Money is nothing but a piece of paper; or a string of bits, perhaps. In modern history, fiat money is issued and controlled by rulers and governments. Following Bitcoin [15], many blockchain-based cryptocurrencies were introduced [16]. Their technology and distributed protocol render the community operating the currency its sovereign as, unlike in standard computer systems, there is no third party that may exert control over the system, e.g., shut it down. In existing cryptocurrencies, however, most control and benefit lie in the hands of the few: their founders, early adopters, and large stakeholders (e.g., large "mining pools") [11].
Our goal here is to explore the possibility of a digital currency that may be issued by all, where both control and benefit are distributed in an egalitarian way among the people participating in the creation and use of the currency. This can be achieved if the parties to the currency are genuine (unique and singular) agents of the participating people [22], thus excluding sybils. Such a currency implements distributive justice in the sense that each person enjoys an equal share of the created currency. As we wish our medium to be scalable, our further goal is to build this digital currency in a grassroots way. The key, high-level differences between our proposed digital currency and most existing cryptocurrencies are outlined below: • Equality: Leading cryptocurrencies employ either proof-of-work (PoW) or proof-of-stake (PoS) systems [17]. As such, they are inherently plutocratic, since control over the behavior of the system is positively correlated with the computing power or amount of currency available to different parties. A cryptocurrency is egalitarian if control over the execution and modification of the currency system is shared equally among the parties to the currency. Such equality can be guaranteed using digital social contracts [2] over genuine identifiers [22].
• Distributive justice: Leading cryptocurrencies do not aim for justice, distributive or otherwise. Newly minted coins are allocated to parties with superior computing power (PoW) or larger amounts of currency (PoS). A cryptocurrency satisfies distributive justice if each agent enjoys an equal share of the newly created value of the currency. Here, we spell out conditions that give rise to such distributive justice. A single currency community can achieve distributive justice by egalitarian coin minting, where each member mints one coin in every time step. Assuming the community has only genuine members and no sybils, egalitarian minting results, in the limit, in the dilution of any inherited assets and in each member having an equal share of the minted currency, adjusted by the relative productivity of the members. In a currency network, where people can be members of more than one currency community, an egalitarian minting regime in which each person mints one coin in only one community at each timestep allows market forces to achieve distributive justice globally across the network, under conditions we discuss.
• Grassroots Sybil-Resilience: Leading cryptocurrencies are monolithic, in that there is one community using the cryptocurrency (e.g., one blockchain in which the bitcoin transactions are recorded). Here, we aim at a grassroots architecture that allows currency communities to form independently, allowing people from different communities to trade and exchange their currencies, and eventually form a currency network that serves as a joint, grassroots medium of exchange. Our method is sybil-resilient in that sybils in a currency network affect only the currency communities that harbour them.
In this paper we study the possibility of designing an egalitarian and just digital currency that may form currency networks in a grassroots manner. One key challenge in this task is the presence of fake and duplicate identities, aka sybils, that may be employed by their operators in order to tilt control and wealth in their favor. We first observe that sybils cannot penetrate small communities of people who know and trust each other and that, indeed, trust communities can grow in a sybil-resilient way by employing graph-based properties [19] of genuine identifiers [22], using various mechanisms as admission rules to the community [23], or utilizing machine learning algorithms [9]. In particular, this paper may be viewed as a means for a joint, safe scale-up of such communities, concentrating on the aspect of distributive justice, as we rely on the infrastructure of digital social contracts [2] for equality in execution, and on techniques such as mutual sureties [22] and sybil-resilient community expansion [19] for defending against sybils. Note that, even though digital social contracts [2] assume an asynchronous model of computation, here, for simplicity, we assume a synchronous model.
We begin with a single currency and provide a formal definition of a just distribution among its agents. Intuitively, distributive justice is satisfied if every member of the currency community is granted, initially, an equal share of the currency, and may trade its portion as it pleases. Formally, at every time step, the diluted balance of every agent amounts to its equal share plus its diluted cash-flow up to this point. We then present a richer notion of asymptotic justice, where distributive justice is reached in the limit. With this notion, distributive justice can be reached even if agents begin with different initial amounts of the currency; as such, it models distributive justice in the face of unequal inheritances. To achieve asymptotic justice, the difference between the diluted balance and cash-flow must converge to an equal share of the currency, but these quantities need not match at all times. We show that this notion of justice may be realized via egalitarian coin minting, which provides a form of Universal Basic Income (UBI). That is, each community member minting an equal amount of coins in every time step results in asymptotic justice, regardless of the initial balances of the agents and the differences in their times of joining the community.
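A toy simulation illustrates the dilution effect behind this claim. In the sketch below, four agents with arbitrarily chosen unequal endowments each mint one coin per time step and do not trade; their diluted balances all converge to the equal share 1/4.

```python
# Minimal sketch: k agents, arbitrary initial balances, one coin minted by
# each agent per time step, no trade. Diluted balances converge to 1/k.
initial = [100, 10, 0, 5]                  # hypothetical unequal endowments
k = len(initial)
balances = list(initial)
total = sum(balances)

for _ in range(10_000):
    balances = [b + 1 for b in balances]   # egalitarian minting
    total += k

diluted = [b / total for b in balances]
print([round(d, 4) for d in diluted])      # -> all close to 1/k = 0.25
```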
Envisioning different currency communities emerging independently, each employing its own egalitarian minting regime as described above, we then analyze the conditions under which multiple communities, each employing its own independent currency, may inter-operate in such a way that, jointly, all genuine agents in all communities get an equal share of the joint created value of all currencies. In other words, we set out to investigate the possibility of achieving global distributive justice in a situation where many independent currencies are used at once. To this end, we define the notion of a currency network, in which several currency communities operate simultaneously. The formal definition of a currency network is given below; in essence, it is a tuple of communities that employ independent currencies (each coin belongs to a single currency). The network structure arises from chain payments via agents that are members of multiple communities simultaneously. This model is a direct generalization of credit networks [24,8,20,4,5]. In order to analyze the dynamics of such networks and the economic consequences of these dynamics, we apply the free exchange economy model [14] for the emergence of exchange rates among the different currencies. Based on these rates, we extend the definition of distributive justice to a currency network, and provide sufficient conditions under which distributive justice is satisfied. Importantly, these conditions rely on the currency volumes being in perfect balance with the marginal rates of substitution among the currencies. This balance requires recalibration with every alteration in the network structure (e.g., the admission of a new member), and is thus hard to maintain without frictionless and efficient trade among the currencies.
With these assumptions, we extend the notion of asymptotic justice to currency networks. Our main result in this setting provides sufficient conditions under which asymptotic justice is achieved under an egalitarian minting regime. That is, in order to obtain distributive justice in the limit, the substantial collaboration among the different communities is expressed in jointly ensuring that every agent may mint one coin of only one currency at every time step. Agents may choose which coin to mint from among the currency communities in which they are members. Specifically, our main result shows that exchange rates between all communities converge to 1:1 and asymptotic justice follows, as long as the following conditions hold: 1. Agents behave myopically, in that each agent mints the highest-valued coin at every time step; 2. The network is efficient, in that agents trade coins in order to maximize their utilities, causing equilibria to be reached infinitely often; 3. The intersection between any two communities is sufficiently large to compensate for the productivity gap between them.
Our focus in this paper is on the economic analysis of such currency networks. Ultimately, we aim to implement such currencies using digital social contracts, and we show elsewhere [2] social contract schemes for single- and multi-currency egalitarian minting. Our analysis shows how distributive justice can be achieved globally in a network of egalitarian and grassroots digital currencies. Finally, we explore the connection between people and their agents, and show that if a currency community is genuine then it can achieve distributive justice among its owners. In a currency network with a genuine (sybil-free) subnet, distributive justice can be achieved among all owners of the subnet.
Organization
After reviewing related work, we proceed with the notion of a single currency community in Section 2, where we define initial and asymptotic distributive justice and discuss means for achieving them. We then address currency networks in Section 3, where we discuss the emergence of exchange rates via the free exchange economy model and extend the definitions of justice to this richer setting. Then, in Section 4, we analyze sufficient conditions for asymptotic justice in a network under an egalitarian minting regime.
Related Work
Mathematically, the main predecessors of personal currency networks are credit networks [4,5,20,6,10,8], and some of the results and analyses of credit networks carry over to personal currency networks. The key difference between credit networks and our newly proposed digital currency networks is that credit networks assume the existence of an objective measure of value, namely an outside currency, whereas currency networks aim to create an objective measure of value.
While credit networks inspired some cryptocurrencies, including Ripple [21] and Stellar [13], they all had to choose an external currency to peg credit to: Ripple has chosen to provide its own cryptocurrency, XRP, the production of which is controlled by the Ripple Foundation (which owns the majority of minted XRP coins), while Stellar chose to be a "stablecoin", pegging the credit to a basket of fiat currencies.
Practically, the most related cryptocurrencies are the trust-based currencies of Circles [3] and Duniter [7]. Both create money as a Universal Basic Income (UBI) for their members. Circles is a smart contract on top of Ethereum and is still a concept under development. Duniter is a cryptocurrency with an active community of mostly French users; it anticipated the idea of egalitarian coin minting presented here and has a mechanism of sybil-resilience, an indication that the conceptual and mathematical framework presented here may be viable.
A UBI-based currency community is a possibility, as demonstrated by Duniter, and is consistent with our mathematical model. Here, in particular, we study joint-UBI regimes, supporting the grassroots formation of multiple currencies; so we do not concentrate only on a single currency community (like Duniter and Circles), but anticipate a network consisting of many such currencies, and study their joint economic behavior. Indeed, Duniter is not grassroots in the sense that it does not provide a conceptual or architectural foundation for multiple independent Duniter-like currency communities to form and interoperate, as we do.
A Currency Community
Here we first describe a cryptocurrency community that is equal and just, provided it is sybil-free. We expect people to participate in a currency community via a computational agent; we assume a one-to-one correspondence between people and their agents and refer to a computational agent as "it". Such a sybil-free community may be simply a small-scale community in which all agents know and trust each other, or a larger-scale community that grows in a sybil-resilient way [22,19]. We first define such a currency community formally and analyze the economic properties of its dynamics, showing in particular that distributive justice can be achieved in the limit using an egalitarian minting regime in which each agent mints a single coin in each timestep. A digital social contract for egalitarian minting is described elsewhere [2].
Definition 1 (Currency Community). A currency community is a tuple C = (V, C, h), where V is a set of agents, C is a set of coins, and h : C → V is the holding function, mapping each coin to the agent holding it.
Coins are fungible in the sense defined below. We shall also use the inverse h⁻¹(v) ⊆ C to denote the set of coins held by agent v ∈ V. We regard the currency as a medium of exchange for goods and services. The fundamental operation in a currency is a payment, i.e., the transfer of a coin from a payer to a payee.
Definition 2 (Payment). Let C = (V, C, h) be a currency community and let u, v ∈ V. A payment from u to v is a transfer from u to v of a coin c ∈ C, initially held by u. The result of such a payment, denoted by C' = (V, C, h'), is the currency community in which h'(c) = v and h'(c') = h(c') for every other coin c' ∈ C, c' ≠ c. We observe that payments are reversible.
Observation 1 (Reversibility in a Single Currency). If C is a currency community, then any payment in C can be reversed: a payment of a coin c from u to v, followed by a payment of c from v back to u, results in C itself. Proof. By Definition 2, exchanging a coin back and forth results in the initial configuration.
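The payment operation and its reversibility are simple enough to capture directly in code. The following minimal Python sketch is ours, not the authors'; the class and method names are illustrative:

# Minimal sketch of a currency community (Definitions 1-2) as a
# coin-to-holder map; a payment reassigns a single coin.
class CurrencyCommunity:
    def __init__(self, agents, holder):
        self.agents = set(agents)   # V
        self.holder = dict(holder)  # h : coin -> holding agent

    def pay(self, payer, payee, coin):
        """Transfer `coin` from `payer` to `payee` (Definition 2)."""
        assert self.holder[coin] == payer and payee in self.agents
        self.holder[coin] = payee

# Reversibility (Observation 1): paying a coin back restores the state.
cc = CurrencyCommunity({"u", "v"}, {"c1": "u"})
before = dict(cc.holder)
cc.pay("u", "v", "c1")
cc.pay("v", "u", "c1")
assert cc.holder == before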
A Currency Community History
We wish to better understand the economic properties of a currency community, in particular to explore the possibility of achieving distributive justice within the community. To this end, and since we envision a digital currency built with the currency community model at its core, we take the following approach: as the economy of a currency community takes place in a dynamic setting, where agents trade coins with each other for goods and services, we consider currency community dynamics.
We assume a dynamic setting with discrete time steps, where coins may be minted periodically by the agents. We mention that this can be implemented by a digital social contract [2] among the participants. We note that while the formal model of digital social contracts, as well as any feasible realization of it, is asynchronous, we nevertheless assume a synchronous setting as a simpler first step; in particular, a notion of time is needed for egalitarian coin minting.
Definition 3 (Currency Community History). A currency community history is a sequence of currency communities C_0, C_1, C_2, . . ., C_t = (V_t, C_t, h_t), t ≥ 0, with the following monotonic attributes: V_t ⊆ V_{t+1} and C_t ⊆ C_{t+1} for every t ≥ 0. That is, intuitively, we assume that the coin configuration may vary and that new agents and new coins may be added over time. We leave natural extensions and generalizations of these dynamics (e.g., to accommodate agent departures, coin burns, etc.) for future research.
For the analysis of currency community histories, we employ the notation V := ⋃_t V_t to denote all agents throughout history, and define the following.
Definition 4 (Balance, Income, Revenues and Expenses). Let C_0, C_1, C_2, . . . denote a currency community history. Then, we define the following:
• Balance: The balance of agent v at time t is the number of coins held by v at that time, denoted by b_t(v) := |h_t^{-1}(v)|.
• Income: The income of agent v at time t is the number of newly minted coins held by v, denoted by m_t(v) := |{c ∈ C_t \ C_{t-1} : h_t(c) = v}|.
• Revenue: The revenue of agent v at time t is the number of coins in C_{t-1} that were added to v's account due to trade, denoted by r_t(v) := |{c ∈ C_{t-1} : h_t(c) = v, h_{t-1}(c) ≠ v}|.
• Expenses: The expenses of agent v at time t are the number of coins subtracted from v's account due to trade, denoted by e_t(v) := |{c ∈ C_{t-1} : h_{t-1}(c) = v, h_t(c) ≠ v}|.
The relations between these notions are formally expressed in the following:
Observation 2. For every t > 0 and v ∈ V_t, b_t(v) = b_{t-1}(v) + m_t(v) + r_t(v) − e_t(v). Proof. Every coin held by v at time t is either a coin held by v at time t−1 and not paid away, a newly minted coin, or a pre-existing coin received in trade; the corresponding counts are b_{t-1}(v) − e_t(v), m_t(v) and r_t(v). This finishes the proof.
Summing up, we conclude the following:
Corollary 1. For every t ≥ 0 and v ∈ V_t, b_t(v) = b_0(v) + Σ_{s=1}^{t} m_s(v) + Σ_{s=1}^{t} (r_s(v) − e_s(v)).
That is, the balance of an agent equals its initial endowment plus its income and cash-flow up to this point.
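This accounting identity is easy to sanity-check numerically. The sketch below is illustrative (ours, not the paper's; it simplifies the definitions slightly by counting every transfer, including transfers of same-step coins, in the cash-flow):

# Numerical sanity check of the balance decomposition above.
import random

random.seed(0)
b0 = {"a": 5, "b": 0, "c": 2}                # initial endowments b_0(v)
balance = dict(b0)                           # b_t(v)
income = {v: 0 for v in b0}                  # sum_s m_s(v)
cashflow = {v: 0 for v in b0}                # sum_s (r_s(v) - e_s(v))

for t in range(1000):
    for v in balance:                        # every agent mints one coin
        balance[v] += 1
        income[v] += 1
    for _ in range(5):                       # five random payments per step
        u, w = random.sample(sorted(balance), 2)
        if balance[u] > 0:
            balance[u] -= 1; cashflow[u] -= 1    # expense of the payer
            balance[w] += 1; cashflow[w] += 1    # revenue of the payee

for v in balance:
    assert balance[v] == b0[v] + income[v] + cashflow[v]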
Justice in a Single Currency
Given the above definitions and observations, we now formally define our desired property of distributive justice, in which, intuitively, every agent is granted an equal share of the currency. We then demonstrate monetary regimes which realize distributive justice. The fundamental definition of a just currency is the following:
Definition 5 (Distributive Justice). A currency community history C_0, C_1, C_2, . . . is just if for every t ≥ 0 and v ∈ V_t:
b_t(v)/|C_t| − Σ_{s=1}^{t} (r_s(v) − e_s(v))/|C_t| = 1/|V_t|.
That is, the diluted balance of each agent equals its diluted cash-flow plus an equal share of the currency.
Intuitively, a just currency grants an equal share of the currency to every community member, regardless of their inputs, while allowing them to do with their share as they please. This results in a socially just allocation of the currency, which is offset from equality only by voluntary trade.
Observation 3 (Equal Birth Grant). Consider a currency community history where each agent receives a fixed number of coins when it joins the community.
Such an equal birth grant regime is just, as it satisfies the equality of Definition 5 at every time step. Next, we define a relaxed notion of distributive justice.
Definition 6 (Asymptotic Justice). A currency community history is said to be asymptotically just if, for every v ∈ V:
lim_{t→∞} ( b_t(v)/|C_t| − Σ_{s=1}^{t} (r_s(v) − e_s(v))/|C_t| ) = 1/|V|.
That is, the difference between the diluted balance of each agent and its accumulated diluted cash-flows converges to an equal share of the currency's equity.
Intuitively, Definition 6 aims to capture justice "in the limit". We note that Definition 6 is weaker than Definition 5; that is, a currency community history that satisfies distributive justice is also asymptotically just.
Remark 1. Importantly, we note that both Definitions 5 and 6 rely heavily on the currency history being monotone (see Definition 3). A formal definition of justice in the (very realistic) case of non-monotone histories, as well as the means for achieving it in a setting where agents may die or depart from a community, would be more subtle. In this paper we refrain from these questions, which include community taxes and inheritance issues, and leave them for future research.
As demonstrated in Observation 3, coin minting may serve as a means to achieve distributive justice. In the context of asymptotic justice, we discuss a natural minting regime, termed the egalitarian minting regime, in which all agents obtain an equal income in the form of new coins minted periodically.
Definition 7 (Egalitarian Minting). A currency community history is said to employ egalitarian minting if at every step every agent mints the same number of coins. Formally, m_t(v) = m_t(u) for every t > 0 and u, v ∈ V_t. Note that egalitarian minting might be realized using a simple digital social contract, as demonstrated by Cardelli et al. [2].
The following lemma specifies sufficient conditions under which egalitarian minting is asymptotically just.
Lemma 1. A currency community history that employs egalitarian minting with |C_t| → ∞ and |V| = N < ∞ is asymptotically just.
Proof. By Corollary 1, b_t(v)/|C_t| − Σ_{s=1}^{t} (r_s(v) − e_s(v))/|C_t| = b_0(v)/|C_t| + Σ_{s=1}^{t} m_s(v)/|C_t|. As |C_t| → ∞, the first summand approaches zero when t → ∞. We now focus on Σ_{s=1}^{t} m_s(v). Assume that v joined the community at time t', i.e., v ∈ V_{t'} \ V_{t'−1}, and fix t ≥ t'. By Definition 7, we have m_s(v) = (|C_s| − |C_{s−1}|)/|V_s| for every t' ≤ s ≤ t. On the other hand, consider a time step t'' with |V_{t''}| = N and fix t ≥ t''. We then have Σ_{s=t''}^{t} m_s(v) = (|C_t| − |C_{t''−1}|)/N. As |C_{t'}|, |C_{t''}| are constant and |C_t| → ∞, we conclude that Σ_{s=1}^{t} m_s(v)/|C_t| → 1/N. It now follows that lim_{t→∞} ( b_t(v)/|C_t| − Σ_{s=1}^{t} (r_s(v) − e_s(v))/|C_t| ) = 1/N = 1/|V|. The claim follows.
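Lemma 1 can also be observed numerically. In the illustrative simulation below (ours, not the authors'), one agent joins late, every member mints one coin per step, and coins are traded at random; the diluted balance net of cash-flows approaches 1/N = 1/3 for every agent:

# Numerical illustration of Lemma 1 with a late-joining agent.
import random

random.seed(1)
join_time = {"a": 0, "b": 0, "c": 50}        # agent c joins late
balance, cashflow = {}, {}
total_coins = 0

for t in range(20000):
    for v, tj in join_time.items():
        if t == tj:
            balance[v], cashflow[v] = 0, 0
        if t >= tj:                          # egalitarian minting: one coin each
            balance[v] += 1
            total_coins += 1
    u, w = random.sample(list(balance), 2)   # one random payment per step
    if balance[u] > 0:
        balance[u] -= 1; cashflow[u] -= 1
        balance[w] += 1; cashflow[w] += 1

for v in balance:                            # each ratio approaches 1/3
    print(v, (balance[v] - cashflow[v]) / total_coins)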
To summarize, we showed above that a single, sybil-free currency community that employs egalitarian minting is asymptotically just: namely, as time advances, each member indeed approaches being awarded an equal share of the currency, offset only by its voluntary trades. This result is a first step towards the goal of the next section, in which we study the economic relationship between several such currency communities.
Currency Networks
The egalitarian minting currency described in Section 2 indeed satisfies equality and distributive justice, but only for a single, sybil-free community. Recall that our goal in this paper is a digital currency that is not only equal and just but also grassroots, in that it can support the bottom-up formation of multiple currency communities that can interoperate. Indeed, we envision that many such currency communities may form independently, and we wish to analyze conditions under which all agents in a network of such currency communities will jointly enjoy distributive justice.
To study the economic interactions between different currency communities, the novel mathematical structure we study here is a currency network. In this section we define currency networks and consider some of their important special cases, including ones that capture credit networks in particular. Similarly to credit networks, currency networks are based on trust between agents; in particular, we show that they extend and generalize the well-established models of debt and credit networks [8,4,5,20].
Definition 8 (Currency Network). A currency network is a set of currency communities CN = {C_1, . . ., C_k}, where C_i = (V_i, C_i, h_i) and the coin sets C_i are pairwise disjoint.
In this model, agents may be members in several communities simultaneously. In order to grasp the network structure, it is useful to think of a currency network as a labeled hypergraph CN = (V, {V_i}_{i=1}^{k}, h), where agents V = ⋃_i V_i are the vertices, currency communities {V_i}_{i=1}^{k} are the hyperedges, and each vertex v ∈ V is labeled by the coins it holds from all the communities it is a member of, h^{-1}(v). See Figure 1 for a visual example. We also note that the special case in which all currency communities are of size 2 corresponds to credit networks, where the resulting hypergraph is in fact a graph, as every community is manifested as an edge.
As in a single currency, the fundamental operation in a currency network is a (direct) payment, i.e., a transfer of a coin from a payer to a payee (Definition 2); however, a payment of a coin of a currency can only be made between two members of the coin's currency community. Still, agents in a currency network may be able to transact with each other via chain payments, defined below.
Figure 1: The hyperedge ({v_1, v_2, v_5}) represents the vertices V_1 of community C_1, the red hyperedge at the bottom represents the vertices V_2 of C_2, and the green hyperedge on the right represents the vertices V_3 of C_3. The agent corresponding to v_5 holds the coin c_1 of C_1 as well as the coin c_2 of C_2, while the agent corresponding to v_4 holds the coin c_3 of C_3.
Definition 9 (Chain Payment). Let CN = {C_1, . . ., C_k} be a currency network and let u, v ∈ V. A chain payment from u to v is a sequence of payments along a path of agents u = v_0, v_1, . . ., v_m = v, in which the j-th payment, 1 ≤ j ≤ m, transfers a coin from v_{j−1} to v_j within a currency community to which both belong.
Note that it is not the same coin that is transferred among the agents participating in a chain payment.
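Finding agents through whom a chain payment can be routed amounts to a path search over the hypergraph of communities. The sketch below is illustrative (ours): it checks only the structural condition that consecutive agents share a community, ignoring whether each payer actually holds a coin of the shared currency, and uses breadth-first search:

# Illustrative search for a chain-payment path: consecutive agents must
# share a currency community (per-hop coin availability is ignored here).
from collections import deque

def chain_path(communities, u, v):
    """communities: list of agent sets; returns a path u -> ... -> v or None."""
    neighbours = {}
    for members in communities:
        for a in members:
            neighbours.setdefault(a, set()).update(members - {a})
    queue, parent = deque([u]), {u: None}
    while queue:
        a = queue.popleft()
        if a == v:
            path = []
            while a is not None:
                path.append(a); a = parent[a]
            return path[::-1]
        for b in neighbours.get(a, ()):
            if b not in parent:
                parent[b] = a; queue.append(b)
    return None

print(chain_path([{"v1", "v2", "v5"}, {"v3", "v4", "v5"}], "v1", "v4"))
# -> ['v1', 'v5', 'v4']: v1 pays v5 in one community, then v5 pays v4 in another.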
Justice Within a Currency Network
Our main aim is to explore the possibility of distributive justice within a currency network. In order to do so, we must first address the issue of exchange rates among the different currencies. For now, we defer the intricate question of the emergence of exchange rates to the next section, and provide a formal definition of exchange rates in this setting, denoting by EX_ij the number of coins in C_j that may be traded in CN for a single coin in C_i.
Definition 10 (Coin Exchange Rates). The coin exchange rates of a currency network CN = {C_1, . . ., C_k} are given by a matrix EX ∈ R^{k×k} that satisfies:
• Currency fungibility: EX_ii = 1 for all 1 ≤ i ≤ k.
• Arbitrage-free trade: EX_ij · EX_jl = EX_il for all 1 ≤ i, j, l ≤ k.
That is, coins within the same currency have equal value, and exchanging c ∈ C_i to C_j and then to C_l yields the same rate as a direct exchange from C_i to C_l.
Corollary 2 (Reciprocal Rates). Let EX ∈ R^{k×k} denote a coin exchange matrix of a currency network CN = {C_1, . . ., C_k}. Then every pair of indices 1 ≤ i, j ≤ k satisfies EX_ij = 1/EX_ji. Proof. Straightforward from Definition 10, taking l = i in the arbitrage-free trade property.
Given exchange rates of coins and the total number of coins of each currency, we now define the equity of an agent as the value of its coins as a fraction of the total value of all currencies within the network.
Definition 11 (Fractional Equity of Agent). Let EX ∈ R^{k×k} denote a coin exchange matrix of a currency network CN = {C_1, . . ., C_k}. The fractional equity of agent v ∈ V is given by
eq(v) := ( Σ_{i=1}^{k} b_i(v) · EX_ij ) / ( Σ_{i=1}^{k} |C_i| · EX_ij ),
for any index 1 ≤ j ≤ k, where b_i(v) denotes the number of coins of C_i held by v.
That is, the equity of an agent is the fraction of its assets out of the total value of the network, as may be realized in currency C_j.
Remark 2. We note that Definition 11 is independent of the choice of the index j. To see this, multiply both the numerator and the denominator by EX_jl and apply the arbitrage-free trade property (see Definition 10).
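Both properties of Definition 10, the reciprocity of Corollary 2, and the j-independence of Remark 2 are easy to verify numerically. In the illustrative sketch below (ours), rates are generated from a hypothetical per-coin price vector, which automatically yields a valid exchange matrix:

# Illustrative check of Definition 10, Corollary 2 and Remark 2. Any
# matrix of the form EX[i][j] = p[i] / p[j], for hypothetical per-coin
# prices p, is fungible and arbitrage-free.
import math

p = [1.0, 2.5, 0.4]                               # hypothetical coin prices
k = len(p)
EX = [[p[i] / p[j] for j in range(k)] for i in range(k)]

for i in range(k):
    assert EX[i][i] == 1                          # currency fungibility
    for j in range(k):
        assert math.isclose(EX[i][j] * EX[j][i], 1)             # reciprocity
        for l in range(k):
            assert math.isclose(EX[i][j] * EX[j][l], EX[i][l])  # no arbitrage

# Fractional equity (Definition 11) is independent of the index j.
b = [3, 0, 10]                                    # agent's balance per currency
volumes = [100, 40, 250]                          # |C_i|
equity = [sum(b[i] * EX[i][j] for i in range(k)) /
          sum(volumes[i] * EX[i][j] for i in range(k)) for j in range(k)]
assert all(math.isclose(e, equity[0]) for e in equity)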
As in the case of a single currency community, our interpretation of distributive justice relies on the dynamics in the network over time. We thus provide the notion of a currency network history, defined below.
Definition 12 (Currency Network History). A currency network history is a sequence of currency networks CN_0, CN_1, CN_2, . . ., CN_t = {C_t^1, . . ., C_t^k}, such that C_0^i, C_1^i, C_2^i, . . . is a currency community history for all 1 ≤ i ≤ k. We employ the notation V := ⋃_{i,t} V_t^i and C := ⋃_{i,t} C_t^i to denote all agents and all coins in the network throughout history.
In short, a currency network history is nothing but a synchronized set of distinct community histories. We mention that the coin exchange rates may vary over time, and thus apply the notation EX(CN_t) to differentiate between exchange rates at different time periods throughout history.
With the notion of network history at hand, we now extend the notion of distributive justice to a network setting as follows.
Definition 13 (Distributive Justice in a Network). A currency network history CN_0, CN_1, CN_2, . . . is said to be just if for every t ≥ 0 and v ∈ V_t:
( Σ_{i=1}^{k} b_t^i(v) · EX_ij(CN_t) − Σ_{i=1}^{k} Σ_{s=1}^{t} (r_s^i(v) − e_s^i(v)) · EX_ij(CN_t) ) / ( Σ_{i=1}^{k} |C_t^i| · EX_ij(CN_t) ) = 1/|V_t|.
That is, the difference between all assets of an agent and its current cash-flow, exchanged to currency C_j and diluted properly, results in each agent's equity being an equal share of the entire currency network's equity, at every time step throughout history. We note that this is a straightforward extension of Definition 5, which corresponds to the special case k = 1.
Next, we present the notion of asymptotic justice, extended to a network setting.
Definition 14 (Asymptotic Justice within a Network). A currency network history CN_0, CN_1, CN_2, . . . is said to be asymptotically just if for every v ∈ V:
lim_{t→∞} ( Σ_{i=1}^{k} b_t^i(v) · EX_ij(CN_t) − Σ_{i=1}^{k} Σ_{s=1}^{t} (r_s^i(v) − e_s^i(v)) · EX_ij(CN_t) ) / ( Σ_{i=1}^{k} |C_t^i| · EX_ij(CN_t) ) = 1/|V|.
Definitions 14 and 13 for currency networks relate to each other similarly to the way Definitions 6 and 5 for a single currency community relate to each other. Asymptotic justice in a network requires that the difference between all assets of an agent and its current cash-flow, exchanged to some currency C_j and diluted properly, converges to an equal share of the currency network's equity. Note that Definition 14 reduces to Definition 6 in the special case k = 1.
Justice via Joint Egalitarian Coin Minting
Achieving distributive justice within a currency network requires a joint coin minting regime that is agreeable to all communities in the network. Indeed, the admission of an agent to one community in a just network must affect the distribution of wealth in another, and exchange-rate volatility requires joint efforts in order to maintain distributive justice over time. The joint minting regime required to achieve this is a natural extension of egalitarian coin minting to the network setting.
Definition 15 (Joint Egalitarian Minting). A currency network history is said to employ joint egalitarian minting if at every time step, every agent mints exactly one coin among all currencies in the network.
Formally, Σ_i m_t^i(v) = 1 for every t > 0 and v ∈ V_t, where m_t^i(v) denotes the income of agent v in currency C_i at time t.
We demonstrate elsewhere a digital social contract for joint egalitarian minting in a currency network [2]. In the following, we explore sufficient conditions under which joint egalitarian minting naturally gives rise to asymptotic justice among all agents participating in multiple currencies within the same currency network.
Myopic Agents
We begin with the natural question each agent shall ask at each timestep: Which coin should I mint next? Indeed, there are many possibilities. Here we consider a simple answer: Always mint the highest-valued coin.
Definition 16 (Most Valued Coin). Let CN = {C_1, . . ., C_k} be a currency network with coin exchange rates EX ∈ R^{k×k}. A most valued coin in this setting is an index i that maximizes EX_ij over all indices 1 ≤ i ≤ k, for an arbitrary fixed index j (by arbitrage-free trade, the maximizing i is independent of the choice of j). Given an agent v ∈ V, a most valued v-coin is an index i with v ∈ V_i that maximizes EX_ij among the communities v is a member of.
The next definition formalizes the notion of myopic behaviour under egalitarian minting in a network.
Definition 17 (Myopic Agents). Let CN_1, CN_2, . . . be a network history that employs joint egalitarian minting. We say that the agents in the network are myopic if in every time step t, every agent v ∈ V_t mints a most valued v-coin (ties are broken arbitrarily).
Where do Exchange Rates Come From?
The relations and interactions among the currencies within a network are inherent to the currency network setting. In the following, we present a conceptual and mathematical framework for the analysis of these interactions which result in exchange rates among the different currencies. We reason that any relation among independent currencies is based upon what the currencies represent, namely actual commodities (e.g., goods and services) that may be purchased from agents that accept these currencies as payment. Specifically, our analysis focuses on the exchange rates that emerge at equilibrium, with respect to individual preferences over these underlying commodities. Note that the commodities are not represented explicitly in our model; we assume their existence solely to induce preferences on currencies, which we then take into account.
Formally, given a currency network CN = {C_1, . . ., C_k}, it will be convenient to view the balances of all agents as a matrix b ∈ R^{n×k}, where b_i(v) is the balance of agent v ∈ V in currency C_i (i-balance, for short). We denote the diluted balances by b_i(v)/|C_i|, and assume that every agent v has preferences over diluted balances; a diluted balance corresponds to a fractional ownership in each currency in the network.
This setting is generally known as a pure exchange economy (see, e.g., [12,14,25]). We follow standard practice and assume that the preferences of agent v are expressed via a convex, continuous, and monotone linear order ≽_v over [0, 1]^k. Given an initial endowment b ∈ [0, 1]^{n×k}, and assuming that agents may freely trade currencies with each other, the standard solution concept in this model is a competitive equilibrium b* with respect to the preferences {≽_v}_{v∈V} that Pareto dominates b. Importantly, a competitive equilibrium establishes not only an allocation (which is reflected in the balances), but also marginal rates of substitution among currencies [18]: a matrix MRS ∈ R^{k×k}, where MRS_ij denotes the quantity of currency C_j that an agent can exchange for one (infinitesimal) unit of currency C_i while maintaining the same level of utility under the equilibrium b*.
The normalization of the marginal rates of substitution among currencies by the currency volumes naturally gives rise to exchange rates among coins within these currencies. As these rates are induced by individual preferences, we term them preferences-based exchange rates, formally defined below.
Definition 18 (Preferences-Based Rates). Let CN* be a currency network in which the diluted balances matrix b* forms an equilibrium under the agents' preferences over the currencies. The preferences-based rate between coins in C_i and C_j is given by EX_ij := MRS_ij · |C_j|/|C_i|.
Remark 3. Note the difference between the marginal rate of substitution among currencies (denoted by MRS), which relates the effective values of the two economies underlying the two compared currencies, and the exchange rate between coins (denoted by EX). In essence, preferences-based coin exchange rates (EX) are the currency rates (MRS), normalized by the number of coins in circulation.
The following observation asserts that preferences-based rates are valid coin exchange rates as specified in Definition 10.
Observation 6. Preferences-based rates satisfy currency fungibility and arbitrage-free trade.
Proof. As marginal rates of substitution arise in equilibrium, these rates must satisfy both MRS_ii = 1 and MRS_ij · MRS_jl = MRS_il, or else agents would benefit from further trade. Applying Definition 18 to these equations completes the proof.
A key merit of using coins as a medium of exchange (rather than direct trade in fractions of currencies) lies in the degree of freedom manifested in currency volumes, as an increase in money supply causes inflation [1]. Put simply, if more coins are issued for a single currency, this linearly impacts the exchange rate of this currency with other currencies. Roughly speaking, our general approach builds upon the observation that agent choices in coin minting affect and control the ratios |C_j|/|C_i|, which in turn affect the coin exchange rates. We say that the volumes of all currencies are in perfect balance if the ratio between the numbers of coins of any two currencies exactly equals the marginal rate of substitution among them in equilibrium, namely |C_i|/|C_j| = MRS_ij. We claim next that if the volumes of a pair of currencies are in perfect balance, then a fixed 1:1 coin exchange rate follows.
Observation 7. Let CN* be a currency network in which the diluted balances matrix b* forms an equilibrium under agents' preferences, and let EX denote preferences-based coin exchange rates. Then, if two currencies C_i, C_j satisfy |C_i|/|C_j| = MRS_ij, it holds that EX_ij = EX_ji = 1. Proof. Straightforward from Definition 18.
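The normalization of Definition 18 and the perfect-balance condition of Observation 7 can be illustrated numerically. In this sketch (ours, using the rate EX_ij = MRS_ij · |C_j|/|C_i| as reconstructed above, with a hypothetical MRS matrix), coin rates are 1:1 exactly when volumes match the marginal rates of substitution:

# Illustrative check of Definition 18 and Observation 7. MRS is generated
# from hypothetical equilibrium currency values q, so MRS[i][j] = q[i]/q[j].
import math

q = [3.0, 1.5]                                   # value of each whole currency
MRS = [[q[i] / q[j] for j in range(2)] for i in range(2)]

def coin_rates(volumes):
    # EX_ij = MRS_ij * |C_j| / |C_i|  (preferences-based rates)
    return [[MRS[i][j] * volumes[j] / volumes[i] for j in range(2)]
            for i in range(2)]

EX = coin_rates([100, 80])                       # volumes out of balance
print(EX[0][1])                                  # prints 1.6, not 1

# Perfect balance: |C_0| / |C_1| = MRS_01 = 2, e.g. volumes 200 and 100.
EX = coin_rates([200, 100])
assert math.isclose(EX[0][1], 1) and math.isclose(EX[1][0], 1)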
All Together Now
Following Observation 7, our aim is to establish 1:1 exchange rates by reaching perfect balance among currency volumes. Our approach builds upon the dynamics of the trade within the network, as reflected in the network's history. While individual preferences may potentially vary in time, in the following we consider the simple scenario of fixed agents' preferences, where {≽_v}_{v∈V} is eventually fixed, namely after some finite prefix of the currency history in which it may fluctuate.
Finally, we rely on the efficiency of the network, namely, the tendency to reach equilibria with respect to the agents' preferences via voluntary mutual trade. Indeed, not all configurations throughout history necessarily form an equilibrium: in particular, it might take several time steps for agents to perform all profitable coin trades and arbitrages. We thus define an efficient history as one that gives rise to equilibria infinitely often.
Definition 19 (Efficient History). Let CN_0, CN_1, CN_2, . . . be a currency network history with agents' individual preferences over its currencies. Such a network history is said to be efficient if there exists an (infinite) subsequence t_1 < t_2 < t_3 < . . . such that CN_{t_i} is in equilibrium with respect to these preferences.
Following that line, we now extend the notion of marginal rates of substitution (and consequently, also preferences-based rates) to all time periods (possibly excluding a finite prefix) by defining the rate at time t as the exchange rate at t*, where t* is the most recent equilibrium that precedes t. That is, we assume constant rates that are updated infinitely often, whenever the network reaches equilibrium.
With the above notions at hand, we can now state our main theorem.
Theorem 1. Let CN_0, CN_1, CN_2, . . . be a currency network history with 2 communities C_1, C_2 that employs joint egalitarian coin minting with myopic agents. Assume:
• Fixed agents' preferences over the currencies.
• Preference-based coin exchange rates.
• Network history is efficient.
Then, if the intersection V_1 ∩ V_2 contains sufficiently many agents, the network history is asymptotically just. Furthermore, it also follows that the coin exchange rate converges to 1:1, namely EX_12(CN_t) → 1. The proof follows the observation that the agents in the intersection V_1 ∩ V_2 are the only agents that can choose which coin to mint, and, with myopic joint egalitarian minting, they would choose the more valuable coin; thus, if there are relatively enough agents in the intersection, then, together, they would mint enough coins to set the coin exchange rate right, and asymptotic justice then follows.
In order to establish asymptotic justice, it is enough to note that for sufficiently large t: (1) the initial endowment of each agent v (or the exact time of joining each community) is negligible, and (2) approximate 1:1 rates hold (EX_12(CN_t) ∼ 1). It follows that the quantity of Definition 14 converges to 1/|V| for every agent, as required.
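The mechanism behind Theorem 1 can be seen in a small simulation (ours, under the same rate reconstruction as above): agents in the intersection myopically mint the currently more valuable coin, which increases its volume and drives the coin exchange rate toward 1:1.

# Illustrative simulation of myopic joint egalitarian minting in a
# two-community network. The MRS between the currencies is held fixed at 2
# (fixed preferences); intersection agents mint the most valued coin.
n1_only, n2_only, shared = 4, 4, 6               # community sizes
MRS12 = 2.0
vol = [1.0, 1.0]                                 # initial coin volumes

for t in range(10_000):
    vol[0] += n1_only                            # members of C1 only
    vol[1] += n2_only                            # members of C2 only
    ex12 = MRS12 * vol[1] / vol[0]               # coin rate (Definition 18)
    for _ in range(shared):                      # myopic: mint most valued coin
        if ex12 >= 1:                            # a coin of C1 is worth more
            vol[0] += 1
        else:
            vol[1] += 1

print(MRS12 * vol[1] / vol[0])                   # approaches 1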
Agents and People
All the analysis above was done for computational agents. Here we aim to relate the analysis to people, define the notions of genuine agents and sybils, and explain in what sense the framework proposed is sybil-resilient.
Definition 20 (Agent Ownership). We assume a domain of people P, a domain of (computational) agents V, and an ownership relation among them, owns ⊆ P × V, and use owns(p, v) for (p, v) ∈ owns. Given v ∈ V, p ∈ P is an owner of v if owns(p, v), and given a set of agents V ⊆ V, its owners are owners(V) := {p ∈ P : owns(p, v) for some v ∈ V}.
Our previous work on sybils refers to genuine personal identifiers [22], which are cryptographic key pairs. To connect it with the current work, assume each computational agent is associated with a unique key pair, where the public key identifies the agent and the private key is used to sign agent transactions.
Definition 21 (Genuine Agent and Community). An agent v ∈ V is unique if owns(p, v) and owns(p', v) imply p = p'; singular if owns(p, v) and owns(p, v') imply v = v'; and genuine if it is unique and singular. A community, or set of agents, V is genuine if every agent v ∈ V is genuine.
Hence, in a genuine community there is a one-to-one correspondence between agents and the people that own them.
We have investigated elsewhere [19,22] how a community of genuine agents may grow without letting too many sybils in. The method described in [22] relies on mutual sureties among agents regarding their genuineness. Importantly, the method does not specify the implications of violating a surety. A currency community with egalitarian minting provides a natural answer: an agent that gave a surety to a sybil in a currency community is liable, at the very least, for the coins minted by the sybil in this currency. We will explore the implications of integrating the results of this paper with the results of [19,22] in subsequent work. Here we make some preliminary observations on the relation between people and their computational agents in a currency network.
Clearly, a genuine currency community that achieves distributive justice in the limit provides distributive justice in the limit also to the people who own it. A non-genuine agent v in a community V may hamper distributive justice among the owners of the community in two ways. If v is not singular in V, namely its owner p also owns another agent v' ∈ V, then p gets more than her fair share in V. This situation corresponds to real-world situations in which people operate fake identities, possibly in addition to their "true" identity. But even if |V| = |owners(V)|, where all agents are singular in V, this does not guarantee distributive justice. If agent v is singular in V but is not unique, and is owned by two people p and p', then these two people together will have a share equal to that of people who own genuine agents in the community. This may correspond to a real-world situation where a dependent person p is being exploited by a person p' on whom she depends, who unfairly extracts value that should belong to p. Hence, for a currency community to provide distributive justice in the limit to its owners, it must be genuine.
We now consider a currency network that employs joint egalitarian minting, and make two observations. First, if a currency network achieves distributive justice in the limit, then each genuine community within the network achieves distributive justice among its owners. In other words, the damage that a sybil causes to a currency network is local to the community that harbours it. The reason is as follows. Consider two currency communities, a green currency and a blue currency, where the green community is genuine and the blue community is infested with sybils. Now consider an agent v at the intersection of the two communities. By assumption, v is genuine since it is a member of the green community. Joint egalitarian minting implies that all agents of the green community will mint the same number of coins, although members in the intersection of the green and blue communities may mint some blue coins. However, since in the limit the exchange rate between the green and blue currencies will be 1:1, the fractional equity of the agents in the intersection will be as if they had minted only green coins. Since the green community is genuine, its owners will also reach distributive justice in the limit, despite the fact that some members of the genuine green community are also members of the non-genuine blue community.
Second, consider a currency network that achieves distributive justice in the limit. If a subnetwork of it is genuine, namely all agents in the subnetwork are genuine, then this subnetwork achieves distributive justice in the limit among its owners. The reasoning is similar, with the additional note that no person can own two agents in distinct currency communities in the subnetwork, as otherwise these agents would not be genuine.
Outlook
Here we analyzed the possibility of a digital currency that realizes equality - there is not a single entity controlling the currency, but all genuine agents equally control the system; distributive justice - all genuine agents (that is, not including sybils) enjoy an equal share of the value of the digital currency; and grassroots - several independent communities may freely trade while satisfying joint distributive justice. Indeed, as we envision bottom-up growth of communities, our analysis, modeled via currency networks, paves the way for interoperability and offers the possibility of equality and justice at scale.
In particular, our main result shows that joint egalitarian coin minting (which is possible to implement using digital social contracts [2], and in which each agent mints only a single coin in each timestep) may indeed lead to pairwise 1:1 exchange rates, and thus to joint distributive justice among genuine identifiers [22], on currency networks satisfying certain conditions, most importantly sufficient intersections between different currency communities.
Next we discuss some future research directions.
Other Regimes
We analyzed joint egalitarian minting with myopic agents. Here we mention other possibilities:
• Egocentric minting: Here, every agent mints the coin that maximizes her private preferences. (Note that this coin depends both on the agent's preferences and on the global exchange rate between coins.)
• Strategic minting: Here, agents are rational and sophisticated, in that each agent may mint the coin that maximizes its private preferences, taking other agents' choices into account.
• Defensive minting: Here, in each iteration, each agent mints the coin of which it currently holds the least among all currencies it is a member of. (This regime can be specified, and thus enforced on its parties, via a digital social contract.)
We leave a detailed study of such possibilities for future work. In particular, studying - analytically or via computer simulations - which of these possibilities give rise to a 1:1 exchange rate, and what the rate of convergence is, are natural future research directions.
In particular, issues of liquidity in such networks, which could be the main motivation for community mergers, should be studied, as well as the extension of Theorem 1 to networks with more than 2 communities.
Most important is the integration of the two approaches - achieving sybil-resilient growth [19,22] of a currency community and a currency network, using the notion of joint egalitarian coin minting developed here. | 2020-06-01T01:00:40.530Z | 2020-05-29T00:00:00.000 | {
"year": 2021,
"sha1": "8c98738a0af1ff9b19ced793acc2490ca17cdf55",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "94c779ec8b815545582814430ccf16d79a83af3e",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Computer Science",
"Economics"
]
} |
265086817 | pes2o/s2orc | v3-fos-license | Short-term dentoalveolar effects of aesthetic lip bumper appliance: A longitudinal cases-series and pilot study
Highlights
• One of the most used orthodontic appliances is the lip bumper (LB), a semi-functional, removable orthodontic appliance.
• The ALBAa, a modified version of a traditional LB appliance, designed to be more aesthetically acceptable to patients, is proposed and tested clinically.
• The ALBAa allows predictable expansion of the mandibular arch in each dimension investigated.
Introduction
Management of transverse dimensions is of paramount importance in orthodontics (Haas, 1970). Indeed, transverse deficiency is a common issue, and orthodontic treatments primarily focus on achieving proper transverse dimensions in both arches to prevent significant issues like crowding, teeth impaction, cross-bite, mandibular functional shift, and difficulty in nasal breathing (Babacan et al., 2006).
Although rapid skeletal maxillary expansion is a well-known method for increasing the perimeter and diameter of the maxillary arch, with both dentoalveolar and skeletal effects reported, the mandibular arch can only be expanded through the expansive dentoalveolar effects of orthodontic appliances (Solomon et al., 2006). In order to achieve adequate maxillary skeletal expansion in a single phase, thereby increasing the efficiency of orthodontic treatment, McNamara recommends expanding first the mandibular arch and then the maxilla (McNamara, 2000). One of the orthodontic appliances most used with this aim is the lip bumper (LB), a semi-functional, removable orthodontic device that applies active forces on the lower first molars and redirects oral forces. By utilizing an extended resin shell, it effectively eliminates the centripetal pressure caused by the perioral muscles. This, in turn, allows the centrifugal forces exerted by the tongue to passively expand the mediolateral sectors, leading to the proclination of the anterior teeth. At the same time, the lip forces are directed towards the molars, facilitating their uprighting and distal inclination. Additionally, mechanical overexpansion forces are applied through a rounded stainless-steel wire on the lower first molars. Although effective, the LB depends strictly on patient compliance, as it should be worn for about 18-20 h per day (Raucci et al., 2016; Soo and Moore, 1991). In turn, patient compliance with an orthodontic appliance is closely influenced by how well it is accepted. This depends on both the comfort and aesthetics of the appliance (Bonnick et al., 2011), as well as the level of patient motivation (Sergl et al., 1998; Sergl et al., 2000; Doll et al., 2000). As highlighted by recent studies, young patients, like adults, are uncomfortable wearing unsightly orthodontic appliances, especially when they are visible (Rivera et al., 2000). The literature shows that aesthetic appliances are very much appreciated by patients as young as 12 years old (Walton et al., 2010), with clear aligners being the preferred option among 12-15-year-olds with respect to conventional fixed appliances (Walton et al., 2010). With this in mind, the ALBAa, a modified version of a traditional LB appliance, designed to be more aesthetically acceptable to patients, has been proposed. It consists of an anterior bumper made of fibreglass and two shells made of polyethylene terephthalate glycol (PET-G) to completely cover both lower first molars and deciduous molars (Ormco, Glendora, USA) (Bagden, 2003). The purpose of this study was to investigate the dentoalveolar effects of this appliance.
Study design and description of sample
The study was designed as a prospective longitudinal experimental study without a control group. It was approved by the University of Ferrara Postgraduate School of Orthodontics Ethics Committee and registered as protocol n° 3/2015. The study sample comprised University of Ferrara Postgraduate School of Orthodontics patients recruited prospectively and consecutively who met the following inclusion criteria: 1. young patients aged between 7 and 11 in late mixed dentition; 2. the need for transverse lower arch decompensation before rapid maxillary skeletal expansion, with transverse deficiency identified as a McNamara index ≤ 31 mm in the maxillary arch (McNamara, 2000) and a Wilson curve of ≥ 1.5 mm in the mandibular arch (Chung, 2019). Patients with craniofacial syndromes and previous orthodontic treatment were excluded.
The study sample thereby comprised 23 patients (13 boys and 10 girls, with a mean age of 9.5 ± 1.8 years). For each patient, initial records were acquired. Data recorded were demographic characteristics (sex and age), digital models, intra- and extra-oral photos, panoramic radiograph and cephalogram.
Treatment protocol
Each patient recruited for the study was treated primarily with an ALBAa to decompensate the mandibular Wilson curve. The ALBAa used is composed of a round 0.045-inch SS archwire whose anterior part is encased in a bumper made of quartz crystal microbalance (QCM) fibreglass (Tecnident, São Carlos, Brazil) (Fig. 1A-B), 1.5 mm away from the vestibular surfaces of the lower incisors (Fig. 1C). The posterior part of the stainless-steel arch is U-shaped at the deciduous lower second molars, with the final portion being embedded in two shells made of PET-G (1.5 mm, Scheu-dental, Iserlohn, Germany), completely covering the lower first molars and both deciduous lower first and second molars (Fig. 1D).
All patients were treated by the same orthodontist (LL). They were asked to wear the appliance for 20 h per day, except during meals and oral hygiene procedures. Follow-up visits were scheduled monthly. The appliance, passive upon delivery, was activated by about 1.5 mm transversely for each hemiarch at each appointment to ensure an adequate amount of transversal expansion.
Patients were monitored after delivery of the appliance (T0), with follow-ups at 3 (T1), 6 (T2) and 9 (T3) months, when alginate impressions of the lower arch were acquired and then digitized in STL format using a 3Shape D800 extraoral scanner (3Shape, Copenhagen, Denmark).
Digital measurement protocol
The following measurements were made on each digital scan of the mandibular arch at each time point:
Transverse linear measurements
• Inter-deciduous canine width: the distance between the cusps of the lower deciduous canines (C-C)
• Inter-deciduous first molar width: the distance between the occlusal fossae of the deciduous lower first molars (D-D)
• Inter-deciduous second molar width: the distance between the occlusal fossae of the deciduous lower second molars (E-E)
• Inter-permanent first molar width: the distance between the occlusal fossae of the lower first molars (6-6) (Fig. 2)
Mandibular teeth tip
Vestibulolingual crown tipping values were recorded for all lower teeth. All linear mandibular transverse distances and anterior mandibular crowding (Little's irregularity index) were measured using Orthoviewer software (3Shape, Copenhagen, Denmark).
To assess crown tip values on mandibular teeth, we utilized digital models for each observed time point. These models were imported into VAM software (Vectra, Canfield Scientific, Fairfield, NJ, USA), enabling the identification of 100 reference points on each tooth.
Each point was assigned three coordinates, which were recorded on an Excel spreadsheet named "master.xls". This approach was employed to automate the calculation of crown tip values for all mandibular teeth, specifically assessing their labiolingual inclination (Fig. 3C) relative to the occlusal reference plane (Fig. 3D). This measurement method relies on the occlusal plane as a crucial point of reference (Huanca Ghislanzoni et al., 2013). This plane is defined by passing through the mesiovestibular cusps of the first molars and the centroid of the FACC (Facial Axis of the Clinical Crown) lines of all other teeth, excluding canines. Utilizing the occlusal plane as a reference helps minimize measurement errors caused by tooth movement during orthodontic treatment.
The resulting measurements, encompassing both linear and angular aspects, were then subjected to comparative analysis across the different time points (T0, T1, T2, T3).
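As an illustration of the geometry underlying this measurement (this is not the study's actual VAM/Excel pipeline; the coordinates and the exact angular convention are our own assumptions), the inclination of a crown's FACC relative to the occlusal plane can be computed as follows:

# Illustrative computation of crown tip relative to an occlusal plane.
import numpy as np

def plane_normal(p1, p2, p3):
    """Unit normal of the plane through three landmark points."""
    n = np.cross(np.asarray(p2, float) - p1, np.asarray(p3, float) - p1)
    return n / np.linalg.norm(n)

def crown_tip(facc_top, facc_bottom, normal):
    """Inclination (degrees) of the FACC away from the plane normal."""
    d = np.asarray(facc_top, float) - np.asarray(facc_bottom, float)
    d /= np.linalg.norm(d)
    return np.degrees(np.arccos(abs(np.dot(d, normal))))

# Hypothetical coordinates (mm): molar cusps plus centroid define the plane.
n = plane_normal([20, 0, 0], [-20, 0, 0], [0, 25, 1])
print(crown_tip([5, 24, 8], [6, 25, 0], n))   # labiolingual tip of one tooth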
Statistical analysis
The data were analysed using R software. The same operator conducted all measurements, including repeated measures on eight randomly selected patients to test intra-operator reliability (repeatability). This yielded no statistically significant variations (1.2° for angular measurements and 0.13 mm for linear measurements), thereby confirming the repeatability of the measurements performed in this study.
Descriptive analysis, recording the mean and standard deviation (SD) of every measurement, was conducted. ANOVA for repeated measures and post-hoc tests were performed to evaluate the statistical significance of measurements at each time point investigated. A p-value < 0.05 was considered statistically significant. ANOVA considering the "time effect" was calculated for each repeated measure, and in all cases yielded a statistically significant result (p < 0.05). Therefore, post-hoc tests considering different time points were performed for each type of measurement analysed.
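For readers who prefer a concrete template, the sketch below reproduces this type of analysis in Python rather than R (illustrative only; the file and column names are hypothetical):

# Repeated-measures ANOVA with post-hoc paired comparisons, mirroring the
# analysis described above; "measurements.csv" and its columns are hypothetical.
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("measurements.csv")  # columns: patient, time (T0..T3), value

# "Time effect" via repeated-measures ANOVA, with patient as the subject.
res = AnovaRM(df, depvar="value", subject="patient", within=["time"]).fit()
print(res)

# Post-hoc paired t-tests between the time points compared in the study.
for a, b in [("T0", "T1"), ("T1", "T2"), ("T0", "T2"), ("T2", "T3")]:
    x = df[df.time == a].sort_values("patient").value.to_numpy()
    y = df[df.time == b].sort_values("patient").value.to_numpy()
    t, p = stats.ttest_rel(x, y)
    print(f"{a} vs {b}: t={t:.2f}, p={p:.4f}")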
Transverse distances
The variation in mandibular intra-arch linear measurements and the respective post-hoc test results across the observational period are reported in Table 1. Positive values indicate an increase due to the expansion effect and arch development in the medial and posterior sectors. A statistically significant increase in each measurement was observed when comparing the T0 vs T1, T1 vs T2 and T0 vs T2 time points (p < 0.05), although no significant variations were found for the T2-T3 interval (p > 0.05) (Table 1).
Crown tip measurements
Variations in crown tip were recorded for all mandibular teeth at each time point, and the respective post-hoc test results are reported in Table 2. In particular, statistically significant increases were observed when comparing time points T0 vs T1, T1 vs T2 and T0 vs T2 (p < 0.05). No significant variations were reported for the interval T2-T3 (p > 0.05), except at the lower incisors (teeth 3.1, 4.1, 3.2, 4.2) and the left mandibular premolars (teeth 3.4 and 3.5) (p < 0.05) (Table 2).
Crowding
Table 3 shows the mean values for Little's irregularity index and the respective SD at each time point, together with post-hoc testing results. A statistically significant decrease in the amount of crowding was observed across the observation period (p < 0.05). This effect was equally distributed across the various time intervals, reaching statistical significance at each time point (Table 3).
Discussion
All patients treated with the ALBAa displayed both anterior and transverse expansion of the mandibular arch, in line with that reported in the literature (Grossen and Ingervall, 1995). In the posterior sectors, the ALBAa exploits the principle of the super-elastic plate and allows transverse expansion. The average increase in width between the first deciduous molars was greater (4.38 ± 0.24 mm) than that recorded at the level of the second deciduous molars (4.23 ± 0.2 mm). In contrast, the smallest transverse expansion occurred at the deciduous canines, with a net increase of about 2.54 ± 0.12 mm, a value greater than that recorded by Grossen and Ingervall (1995). The inter-permanent first molar width values recorded in this study are also higher than those reported in the literature (Grossen and Ingervall, 1995; Osborn et al., 1991; Werner et al., 1994; Moin and Bishara, 2007; O'Donnell et al., 1998; Bjerregaard et al., 1980), although lower than those reported by Cetlin and Ten Hoeve (Cetlin et al., 1983). Intra-arch expansion was not evenly distributed over time, with 53 % of the total gain occurring in the first 3 months of treatment, 39 % during the following 3 months, and only 8 % in the last 3 months of therapy. The gains were statistically significant in the first six months of observation, but not in the last time interval. Crown tip in the posterior sectors increased significantly, with the greatest variation observed at the deciduous first molars, which displayed an average increase in crown tip values that was approximately 2° greater than that recorded at the deciduous second molars (12.24° ± 0.47° for the left and 12.6° ± 0.48° for the right deciduous first molars, and 10.83° ± 0.37° for the left and 10.85° ± 0.51° for the right deciduous second molars, respectively). The permanent first molars, on the other hand, displayed an average angular increase of 8.43° ± 0.31° on the left and 8.48° ± 0.34° on the right, respectively. Indeed, as highlighted in the literature, one of the side effects of the lip bumper is the distal inclination of the first permanent molar, which could lead to eruptive alterations of the second permanent molars (Ferro et al., 2011).

Fig. 3. Points in the lower arch on VAM 3D software. Graphical representation of Little's index measurements between anatomical contact points of anterior mandibular teeth. The index is a sum of all measurements (A). Marking of mandibular anatomical points on the lower arch using VAM software (Vectra, Canfield Scientific, Fairfield, NJ, USA) (B). Graphical representation of crown-tip measurement values (C) and the occlusal plane used as a reference (D). The latter is defined by passing through three key points: the mesiovestibular cusps of the lower first permanent molars and the centroid point (0), which is, in turn, calculated based on all the FACC lines of the other teeth, with the exception of the canines.

Table 1
Means (mm) and SD (mm) of inter-arch linear measurements at each time point investigated. Statistical comparison was performed between time points T0, T1, T2 and T3. P < 0.05 was considered significant.
In contrast, the smallest variation in crown tipping was recorded at the canines, with increases of 5.97° ± 0.31° on the left and 6.04° ± 0.32° on the right. To the best of our knowledge, no other recent studies have analysed the variation in crown tipping values of the lateral teeth in mixed dentition after LB treatment. At the central incisors, a proclination of 6.3° ± 0.63° on the left and 5.98° ± 0.62° on the right was recorded, while 6.78° ± 0.86° of proclination was recorded for the left lateral incisor and 7.13° ± 0.99° for the right.
Regarding the proclination effect on the anterior teeth, 34.5 % occurred in the first 3 months of treatment, 29.5 % in the next 3 months, and 36 % in the last 3 months. However, 53.25 % of the total posterior transverse expansion occurred in the first 3 months of treatment, 39.5 % in the following 3 months and only 7.25 % in the last 3 months of therapy. Similarly, crown tipping values for the posterior teeth increased significantly across the various time intervals, except for that pertaining to the last three months of observation.
This contrasts with the effects of ALBAa therapy in the anterior arch, as gains were equally distributed between the various time points, and invariably statistically significant. The effect of proclination of the anterior teeth is to cause a reduction in anterior crowding, as evaluated by Little's index (Little, 1975). Little's index is reduced thanks to a combination of two factors: both the increase in the inter-canine width, and the elimination of the labial pressure on the anterior sector of the arch. This allows the incisors to procline through lingual action, increasing the perimeter of the arch in this area consistently throughout therapy (Moin and Bishara, 2007). Indeed, our analysis revealed that about 37 % of the total reduction in crowding occurred in the first 3 months of treatment, 30 % in the following 3 months, and 33 % in the last 3 months of therapy. This reduction in Little's index is in line with that reported in the study by Werner et al. (Werner, 1994), whereas Ferris et al. reported a greater reduction (Ferris et al., 2005).
There are some limitations associated with this study: it only involved patients treated with the ALBAa, and it would be interesting to carry out further research to compare its effectiveness with respect to a control group treated with a traditional LB. Therefore, it is not feasible to directly ascribe these study findings to either the enhanced ALBAa design or the significant patient compliance with its usage. In addition, the variation in verticality and divergence was not analysed, and this could be a key factor. Furthermore, no data regarding long-term stability of results are available at this time, so the effects of the ALBAa on the incidence of lower second molar impaction could not be assessed. Hence, this should be considered a pilot study, but nonetheless one that prompts further investigation.
Conclusions
The ALBAa allows predictable expansion of the mandibular arch in each dimension investigated. All intra-arch measurements increased during therapy, especially the distance between the deciduous molars. This effect was accompanied by an increase in crown tipping values and a reduction in crowding, by way of anterior teeth proclination.
The greatest changes were seen in the first six months of therapy, especially those based on a mechanical mechanism, whereas the functional effect guaranteed by the resin bumper provided a more gradual, evenly distributed change.
Table 3
Mean and SD of Little's irregularity index at each time point investigated. Statistical comparison between Little's index measurements (mm) at T0, T1, T2 and T3. P < 0.05 was considered significant.
Fig. 1. Aesthetic Lip Bumper appliance (A-B). Detail of the anterior part of the appliance with the QCM fibreglass bumper spaced 1.5 mm from the vestibular surfaces of the lower incisors (C). Detail of the posterior part, with the U-shaped stainless-steel archwire and the shells completely covering the lower first molars and deciduous lower first and second molars (D).
Fig. 2. Arch width measurements. Graphical representation of linear intra-arch measurements in the lower arch: between the canine cusp tips (C-C), the occlusal fossae of the deciduous lower first molars (D-D), and those of the deciduous lower second molars (E-E) and lower first molars (6-6). | 2023-11-10T16:16:54.991Z | 2023-11-01T00:00:00.000 | {
"year": 2023,
"sha1": "160e1cd53c97b3b713e0b96626312a42dfd963f8",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.sdentj.2023.11.007",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b5f571e1196d8e445eb2deb0cf74a8445e03ac89",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
210702101 | pes2o/s2orc | v3-fos-license | Psych Socs: student-led psychiatry societies, an untapped resource for recruitment and reducing stigma
Medical recruitment and retention are national problems. Psychiatry has been more affected than many specialties, as a result of stigma from the public and other healthcare professionals. The Royal College of Psychiatrists has undertaken several initiatives to redress this, notably the ‘Choose Psychiatry’ campaign. In this editorial we argue that student-led university psychiatry societies are a wonderful but frequently untapped resource to help attract the brightest and best medical students to our profession. We describe the activities of three ‘Psych Socs’ across the UK and propose next steps to continue this work.
2019 saw the best psychiatry core trainee recruitment for some years, with 92% of places filled. 1 However, the figures for higher training remain a concern, with a cumulative fill rate of 52% and considerable regional variation in both core and higher training. 1 Staff recruitment and retention remain key challenges across the National Health Service, 2 with overall vacancies predicted to double by 2030. 3 Psychiatry has historically faced unique difficulties, not least stigmatising attitudes from the public, other doctors and medical students, 4 and we need to remain active and focused on attracting the brightest and best to our profession. The time at medical school is a key period when attitudes and beliefs about psychiatry are most susceptible to change, 5 and students' personal experience of psychiatry has been described as the 'critical variable' in recruitment rates. 6 A survey by Curtis-Barton and Eagles 7 reported 'push' factors that discouraged students from choosing psychiatry as a career, including a perceived lack of scientific evidence underpinning diagnoses, the perception that patients generally have a poor prognosis, the amount of 'paperwork' in the specialty and general stigma about mental illness. A qualitative study conducted at the University of Bristol echoed these findings, adding that the emotional burden of seeing patients at the lowest point of their lives and the focus on 'non-medical' social issues were also reasons reported by students choosing not to pursue psychiatry. 8 Psychiatry and general practice have been shown to attract the most negative comments or 'bashing' from academic staff, doctors and students. 9 Halder et al 10 found that across 18 medical schools, 16% of medical students considered psychiatry as a future career upon entering medical school; by the final year 17% reported still seriously considering it, but only 3% had decided to actually pursue the specialty. A total of 27% of students reported that they had changed their future specialty choice as a result of 'direct bashing'. 9 By the latter years of undergraduate study, the point at which many students begin direct psychiatry teaching, many more medical students hold negative beliefs about the specialty, and these beliefs are less adaptable to change. 11,12 Conversely, a positive experience of a psychiatry rotation and extracurricular enrichment 'pull factor' opportunities, such as engaging electives and special study modules, can ignite an interest and have been found to be among the strongest predictors of career choice. 5,10
Royal College strategies
The Royal College of Psychiatrists (RCPsych) has attempted to tackle both push and pull factors. In 2016, it ran the 'Anti-BASH' campaign, 12 utilising the Twitter hashtag '#banthebash' to identify and address badmouthing, attitudes and stigmatisation in healthcare, in particular from clinicians in other specialties.
More recently, the College has emphasised positive pull factors, offering free Student Associate membership 13 of the College and providing free access to network events and College journals. 14 The 'Choose Psychiatry' campaign 15 has adopted a strategy of demonstrating the many positive aspects of a career in psychiatry, with the use of short videoclips encompassing real-life patient stories and the impact that psychiatry has made on their lives, as well as pieces by psychiatrists explaining why they chose and enjoy their careers. This encourages viewers to join the conversation on social media, using the hashtag #choosepsychiatry.
The 1-year 'Psych Star' scheme supports medical students through mentoring and financial support. 16 A 2-year 'Foundation Fellowship' offers a parallel route for foundation year doctors, with both schemes supporting candidates to act as local ambassadors for promotion of the specialty. Although it is not possible to causally link these efforts and the recent improved recruitment, we should still acknowledge the College's work to achieve this result.
Psychiatry societies
A less-explored area is the momentum generated by university psychiatry societies (Psych Socs), which are led by students with support from clinicians and the RCPsych. These bottom-up initiatives host diverse local events with the aims of raising the profile of mental health issues, challenging stigmatising attitudes by increasing understanding of the central role that psychiatry has in medicine and inspiring students to choose psychiatric careers. They also come together annually for a national Psych Soc conference, hosted by one of the organisations. Here we describe the types of activities undertaken across three UK societies, in London, Birmingham and Belfast.
Augmenting the syllabus: guest lectures and exam practice
The societies host diverse free guest lectures across the range of psychiatric subspecialties complementing and extending the undergraduate curriculum. Mukherjee et al 5 argued that placing a particular focus on liaison psychiatry during undergraduate teaching allows students to appreciate that psychiatry has a central role in the aetiology and outcome of many medical disorders. In our Psych Socs, we have had particularly positive experiences when collaborating with other university societies to promote this understanding (see Box 1 for more detail): for example, gastroenterology societies to discuss eating disorders; paediatrics and obstetrics and gynaecology societies to discuss perinatal psychiatry, autism and neurodevelopmental disorders; emergency medicine societies to learn about patients presenting in crisis; and oncology societies to discuss psycho-oncology. Some Psych Socs also organise additional examination practice, for example via mock OSCEs and history-taking workshops, as well as providing more links and discussion around psychiatry electives and research opportunities.
Talks on novel fields not typically covered in lectures are usually very popular, such as evolutionary psychiatry, psychosexual medicine and cutting-edge research (for example, therapeutic use of psychedelics). These have the additional value of attracting a wider range of medical students who might not attend more 'standard' psychiatry talks, and indeed are often enriched by pulling in students from different disciplines, such as philosophy and the arts, and members of the local community. This reinforces a message of mental health at the centre of medicine and society, and challenges stigmatising attitudes.
Crucially, as membership is open to all students, these events are great opportunities to attract pre-clinical medical students several years before their psychiatry teaching and placements, and potentially before more significant exposure to any 'psychiatry bashing'.
Tackling stigma and discussing student mental health
Brown and Ryland 17 emphasised the importance of involving people with mental health disorders in student education, particularly those who have recovered, as placements are often too short for students to experience this. Psych Soc speakers are encouraged to explore relevant case studies, and we endeavour to invite speakers with lived experience. One Psych Soc has published a single-arm pre-post comparison study, which demonstrated statistically significant reductions in student stigma in the domains of knowledge, attitude and behaviour following a perinatal event at which a mother spoke of her personal journey. 18 Students can feel less able to disclose their own mental health problems because of perceptions of peers' negative views, 12 and successful Psych Soc events have also discussed and promoted resources for student well-being, especially during examination periods. Psychiatrists have helped with this, with events on 'Mental Health in Healthcare' and 'Bipolar Disorder: Don't Believe Everything You Hear' hosted by healthcare professionals who have themselves recovered from psychological problems. 19 This also addressed psychological challenges and pressures students might face once qualified.
Work in the arts
Broader Psych Soc initiatives involving the arts have proved very popular. These have included a student film and book club (one in conjunction with the local psychiatry trainees' book group) and exploring the perception of mental illness in popular literature and media. Popular talks have discussed the portrayal of psychopathology in historical literature, such as Othello syndrome in 'The Winter's Tale' and wider psychiatric themes in 'Don Quixote'. The 'MedFest' film festival is a popular international event for Psych Socs and mental health more broadly, displaying and discussing short films that touch on pertinent issues in mental health.
Dissemination through new tools: social media
Psych Socs successfully use a range of social media, from Facebook to Twitter and Instagram, as well as more 'old-fashioned' email, to reach students. These channels regularly share information regarding wider opportunities, such as summer schools (unlike many parallel schemes in other specialties, most of these are free), RCPsych events, prizes and bursaries, student-selected components in psychiatry, research and elective opportunities and so forth. They also provide guidance and encouragement to students on becoming Associate Members of the College, and advertise College resources, articles and podcasts. Anecdotally, many students have informed us that Psych Soc posts on social media have alerted them to opportunities of which they had previously been unaware.
In October 2019, Queen's University Belfast 'Mind Matters' Psych Soc hosted a highly successful 1-hour 'Twitter Takeover'. Numerous psychiatrists and other Psych Socs across the country participated, answering questions on how medical students can get involved with psychiatry early, personal reasons for choosing psychiatry, upcoming events and interesting books and articles relevant to students. Twitter in particular affords an opportunity to engage and connect with the many psychiatrists and medical students online, unhindered by distance.
Starting a Psych Soc
Medical students and psychiatrists interested in starting a Psych Soc at their own local university should firstly endeavour to recruit a core committee of students for the academic year. The committee should attempt to make contact with the Undergraduate Lead for Psychiatry at their university, the RCPsych regional division and other local psychiatrists. Such contacts may be called upon to act as speakers at evening lecture events, mock OSCE examiners and mentors.
Psych Socs should also contact the RCPsych to receive funding for events, as each university society receives a grant of £500 per annum.
Box 1. Examples of well-received Psych Soc events
• 'Evolution and the brain', discussing how brain functioning and psychopathology can be understood using evolutionary perspectives.
• 'Real people sharing real stories', five students shared their personal experiences of living with mental illness.
• 'Time to put the psychedelics back into psychiatry?', a discussion on psychedelics in modern psychiatry.
• 'Trauma and violence' with trauma surgeons, a psychiatrist and young victims of knife crime discussing post-traumatic stress disorder.
• 'Through the lens' mental health photography workshop with the Health and Humanities society, discussing the portrayal of borderline personality disorder in the arts.
• Psychiatric themes in 'Don Quixote' and Othello syndrome in 'The Winter's Tale'.
• 'Homelessness and healthcare' with individuals who had been street homeless, describing how this impacted their ability to access care, and their experiences of living on the streets.
• 'Disfigurement and quality of life', with maxillo-facial surgeons and psychiatrists discussing the impact of facial surgery on perceived quality of life.
• 'Mental health in developing countries' hosted by psychiatry trainees and 'Students for Global Health', discussing different practices in other countries, and career opportunities in international assistance.
• 'Not guilty by reason of insanity', exploring the roles of forensic psychiatrists.
• 'Mental disorder and autonomy: classical and romantic perspectives', a seminar co-hosted with a Philosophy Society discussing varying philosophical views of mental illness across time.
• 'Sex and psychiatry' seminar with the university 'Sexpression' group, discussing psychiatric bases for dyspareunia, tocophobia and so forth.
The College also offers free promotional material such as pens, key rings and leaflets, which can be handed out as 'freebies' during events. The RCPsych website includes detailed advice for setting up a local Psych Soc, event ideas and contact details for useful stakeholders. 20
Psychiatrists' perspectives and next steps
As senior clinicians, we recall the difference that enthusiastic and passionate trainers, teams and rotations made to our career choices at all stages, from medical school through to our own training. 21 Sadly, we have also all experienced the negative effect of 'bashing' of psychiatry and our patients by other medical students and doctors. All psychiatrists need to remain proud advocates for our profession and remember that every contact counts. The recent College initiatives for recruitment appear to be paying dividends, with the positive message of 'Choose Psychiatry' particularly pleasing. The Psych Socs, however, speak to students in a way we cannot, and it is heartening to see the positive energy they generate. Enthusiastic medical students deliver the compelling message that psychiatry is a mainstream part of medicine and offers a diverse and rewarding career and a flexible work-life balance. Their bottom-up initiatives relevant to their local teaching and training, their identification of gaps and novel areas they wish to explore, and their fun, interesting and culturally broader events have in turn refreshed us. The Psych Socs typically offer compensation to speakers by covering their expenses, but in our experience the real payment is the pleasure of sharing and contributing to their enthusiasm.
Several next steps can be recommended both locally and nationally. Students require enthusiastic engagement from local psychiatrists: as guest speakers, mock OSCE examiners and mentors via 'buddy schemes'. The relationship should be reciprocal: assisting students with areas they identify as needing attention, but also using our contacts and experience to suggest and link up additional input. Students often need discreet guidance in organising events and in making sure that these are well balanced in the views that are expressed.
Nationally, the RCPsych has created a supportive linking webpage to share ideas and learning; this and the annual National Student Psychiatry Conference need to be nurtured and grown. In a time of austerity, there are inevitable challenges about 'who funds' travel and attendance, but medical schools and the College need to continue to encourage and maximise subsidised student engagement, including through poster presentations, oral presentations, student sections and prizes. This is not just a 'central' issue; it falls to all divisions and faculties to review their engagement. We propose that Psych Socs are an excellent opportunity for outreach to attract the best future colleagues. As a College we need to be better at recognising, celebrating and sharing what is working with our medical students. A recently published RCPsych report 22 makes explicit recommendations for a range of initiatives on enhancing interest in psychiatry, including developing medical student psychotherapy schemes and Balint groups, and better working with Psych Socs. The College's Choose Psychiatry Committee has an initiative to ensure that each Psych Soc for the next academic year has a link senior member of the Committee to help support local initiatives.
We believe that university Psych Socs are a powerful but as yet not fully exploited tool to improve recruitment into psychiatry, as well as promoting respect for the profession and mental health amongst those who do not become psychiatrists. They offer a valuable opportunity for students and psychiatrists to work together, and for us to continue to encourage the brightest and best to join what we know to be the most rewarding of medical specialties.
"year": 2020,
"sha1": "d607c309f8d619313645929f6c0b750c1c31072e",
"oa_license": "CCBY",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/C8DB00A7F6D2AADBFED18E09234DE517/S2056469419000883a.pdf/div-class-title-psych-socs-student-led-psychiatry-societies-an-untapped-resource-for-recruitment-and-reducing-stigma-div.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8bfd9b27f7827331855292be227160723d6b1ffc",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Sociology"
]
} |
One Health concept for strengthening public health surveillance and response through Field Epidemiology and Laboratory Training in Ghana
The lack of highly trained field epidemiologists in the public health system in Ghana has been known since the 1970s, when the Planning Unit was established in the Ghana Ministry of Health. When the School of Public Health was started in 1994, the decision was taken to develop a one-academic-year general MPH course. The persisting need for well-trained epidemiologists to support the public health surveillance, outbreak investigation and response system made the development of the Field Epidemiology and Laboratory Training Programme (FELTP) a national priority. The School of Public Health and the Ministry of Health therefore requested the technical and financial assistance of the United States Centers for Disease Control and Prevention (CDC) in organizing the programme. The collaboration started by organizing short courses in disease outbreak investigations and response for serving Ghana Health Service staff. The success of the short courses led to the development of the FELTP. By October 2007, the new FELTP curriculum for the award of a Master of Philosophy in Applied Epidemiology and Disease Control was approved by the Academic Board of the University of Ghana, and the programme started that academic year. Since then, five cohorts totalling 37 residents have been enrolled in the two tracks of the programme: 12 physicians, 12 veterinarians and 13 laboratory scientists. The first two cohorts of 13 residents have graduated. The third cohort of seven has submitted dissertations and is awaiting the results. The fourth cohort has started the second year of field placement, while the fifth cohort has just started the first semester. The field activities of the graduates have included disease outbreak investigations and response, evaluation of disease surveillance systems at the national level and analysis of datasets on diseases at the regional level. The residents have made a total of 25 oral presentations and 39 poster presentations at various regional and global scientific conferences. The Ghana FELTP (GFELTP) has promoted the introduction of the One Health concept into FELTP. It hosted the first USAID-supported workshop in West Africa to further integrate and strengthen collaboration of the animal and human health sectors in the FETP model. GFELTP has also taken the lead in hosting the first AFENET Center for Training in Public Health Leadership and Management, through which the short course on Management for Improving Public Health Interventions was developed for AFENET member countries. The GFELTP pre-tested the Integrated Avian Influenza Outbreak and Pandemic Influenza course in preparation for introducing the materials into the curricula of other FELTPs in the network. The leadership positions to which graduates of the programme have been appointed in the human and animal public health services, the improvement in disease surveillance, outbreak investigation and response, and the testimony of the health authorities at various fora about their appreciation of the graduates' outputs are strong indications that the GFELTP is meeting its objectives.
Introduction
At the request of the Ghana Ministry of Health, the University of Ghana established the School of Public Health (SPH) in October 1994. The School offers a one-year course in general public health which awards a Master of Public Health (MPH) degree. The SPH was one of the beneficiaries of the Rockefeller Foundation support to the network of Public Health Schools Without Walls (PHSWOW) in the Africa Region [1].
Graduates of the SPH were found to meet the expectations of the Ministry of Health, as they took up leadership roles at district level. It was, however, realized that a cadre of highly trained epidemiologists with competencies and skills in applied epidemiology and disease control was needed to manage the existing complex of public health emergencies and emerging and re-emerging diseases, such as Severe Acute Respiratory Syndrome (SARS) and avian influenza. During the early stages of implementation of the Global Programme for the Control of Malaria, HIV/AIDS and Tuberculosis, the lack of highly trained field epidemiologists became more apparent as the demand for expert management, interpretation and use of disease surveillance data increased. Unfortunately, the MPH programme did not make provisions for the training of this cadre of professionals. A process was initiated to establish a Field Epidemiology and Laboratory Training Program (FELTP) to address the identified need. The Ghana FELTP (GFELTP) evolved from an initial collaboration with the United States (U.S.) Centers for Disease Control and Prevention (CDC), through cooperative agreements with the SPH. Activities supported by this cooperation included the organization of short courses on disease surveillance, outbreak investigations and response. More than 60 serving district health staff (frontline health workers) and MPH graduates benefited from these short courses over a three-year period (2003-2005) [2]. Parts of the short course materials were later incorporated into the MPH curriculum of the School of Public Health. When, in 2005, the decision was taken to start an FELTP, the task of designing the curriculum was spearheaded by the faculty under the guidance of CDC staff, including staff from the Sustainable Management Systems Development Program (SMDP) [3].
The FELTP curriculum was adapted from CDC's core FETP curriculum [3]. GFELTP graduates receive a Master of Philosophy (MPhil) in Applied Epidemiology and Disease Control upon completing all university requirements. In addition, graduates receive a certificate of competency in field epidemiology. The programme was approved by the University Academic Board and the National Accreditation Board in 2007. The programme started with an initial cohort of three physicians, one laboratory scientist and one veterinarian. In keeping with the One Health concept, to mitigate the increasing threat of outbreaks of zoonotic diseases and to further strengthen the laboratory's key role in public health surveillance and response in the country, the trainees/residents were selected from serving staff nominated by the Ghana Health Service, Ministry of Health (physicians and laboratory scientists) and the Veterinary Services Directorate, Ministry of Food and Agriculture (veterinarians).
The vision of GFELTP is to improve the health of the people in Ghana. The mission is to contribute to addressing Ghana's public health needs and priorities through training and service provision in applied epidemiology and public health laboratory management.
The objectives of GFELTP are to: 1) strengthen public health capacity by developing a cadre of health professionals with applied skills in epidemiology and laboratory management; 2) contribute to research activities on priority public health problems; 3) improve national capacity to respond to public health emergencies such as disease outbreaks, natural disasters and unusual public health events, including those that could result from chemical or biological terrorism; 4) strengthen national surveillance systems through a team approach (physicians, laboratory scientists and veterinarians); 5) improve communications and networking of public health practitioners in the country and throughout the Africa Region.
Brief outline of the course
The GFELTP is a two-calendar-year programme with about 30% course work and 70% field work covering two tracks (i.e., the epidemiology track and the laboratory track). During the first academic year, residents study core courses that cut across the two tracks in the first semester. In the second semester, residents take courses in their prescribed track (i.e., epidemiology for medical and veterinary professionals or laboratory for laboratory scientists) and some selected electives to make up the required 36 credits of course work. In addition, residents are required to be involved in 16 weeks of field activities, made up of 8 weeks at the end of the first semester to undertake evaluation of surveillance systems of selected diseases and 8 weeks at the end of the second semester for analysis of available large datasets on diseases at national or regional levels. In the second year, residents develop their research topics under the guidance of their academic supervisors and mentors. A further requirement is the organization of at least one seminar prior to going for the field work. Ten months of the second year are devoted to field practice and collection of data while providing services to the district/region of assignment. The last two months are used for data analysis and write-up of theses. During the two-year period of training, especially when on field postings, residents of the programme join the staff of the Ghana Health Service and Veterinary Services Directorate to investigate and respond to disease outbreaks and public health emergencies. Being mid-career professionals in public service, the residents sometimes lead these investigations, conduct public health interventions and present written and verbal reports to stakeholders with the support of their supervisors and mentors.
Residents
Five cohorts have so far been admitted into the residency programme. The breakdown is as shown in Table 1. The distribution of residents by professional background and sex is shown in Figure 1.
As part of the collaboration between the Disease Surveillance Department (DSD) and the School of Public Health (SPH), a needs assessment to determine the gaps in disease surveillance, with emphasis on disease outbreak investigations and response, data analysis and interpretation, and capacity development, was conducted in five districts in Ghana, namely the Asuogyaman, Ketu, Kassena-Nankana, Wassa West and Berekum Districts. Needs assessment tools were developed and discussed by DSD and SPH at an orientation before the exercise commenced. The reports from these assessments were compiled into a composite document for the implementation of sensitization workshops for the districts. The workshop had 19 participants, made up of disease control officers, nurses, a statistician, midwives, the Medical Superintendent of the Volta River Authority (VRA) Hospital, a medical assistant and the District Director of Health Services. The general objective of the workshop was to give health workers in the district the appropriate knowledge and skills to identify cases of priority diseases and also to process the data and use it for public health action. In addition, core stakeholders such as district assembly members, immigration and customs officers, teachers, information officers, the police and the media were also involved.
The specific objectives were to enable participants to: detect priority diseases, analyze and interpret data on priority diseases, investigate and respond to suspected outbreaks, be prepared for disease epidemics, investigate and respond to other priority diseases, supervise and provide feedback, and monitor and evaluate Integrated Disease Surveillance and Response (IDSR) implementation.
The workshop employed methods including presentations on the IDSR training modules, role-playing, group work and field exercises. Similar workshops were organized in the Ketu, Upper East and Berekum districts. As a result of these workshops, participating districts reported improvements in their disease detection, investigation and public health response.
Major activities undertaken by GFELTP residents over the years are summarized as follows.
Disease outbreak investigations: A total of 23 disease outbreak investigations were conducted by GFELTP residents between 2007 and 2011. These included outbreaks of meningitis, influenza (type A), human rabies, food-borne diseases, measles, gastrointestinal diseases, yellow fever, pertussis, cholera and herpes B. The investigation of an outbreak of herpes B virus infection in May 2011 in Techiman and adjoining districts of central Ghana reported this virus as the probable cause of zoonotic encephalitis in Ghana for the first time. The large number of disease outbreak investigations and the timely responses that residents of the programme have been able to carry out alongside other Ghana Health Service or Veterinary Services staff to date have appreciably enhanced disease surveillance and response capacity in the country. In particular, the role that the GFELTP team of physicians, veterinarians and laboratory scientists played in the investigation of and response to the avian influenza (AI) outbreak in Ghana in 2007, the multiple outbreaks of rabies in 2009-2011, and the monkey-associated herpes B encephalitis outbreak in 2011 demonstrated the great value of the One Health concept and the multi-disciplinary team approach which the GFELTP has adopted.
Disease surveillance and field studies: As part of end-of-year-one field requirements, 31 evaluations of various disease surveillance systems were conducted between 2008 and 2011. They included both communicable and non-communicable diseases. Residents have also analyzed available large datasets for 28 selected diseases at the regional health directorates.
Cape Town, South Africa (December 2010): At the 6th TEPHINET Global Scientific Conference, nine GFELTP residents gave three oral and six poster presentations. One of them, Ms. Joyce Der, a cohort-II laboratory track resident, was the overall winner in the oral presentation category. She presented the epidemiological and laboratory investigation of a food poisoning outbreak at a popular urban-area food center in the Eastern Region of Ghana.
African Regional Conferences
Accra, Ghana (December 2005): GFELTP hosted the 1st AFENET Regional Scientific Conference following the birth of AFENET in August of the same year in Accra, Ghana [4]. The residents made five oral presentations and six poster presentations.
Kampala, Uganda (December 2007): Nine presentations were made by GFELTP residents at the 2nd AFENET Regional Scientific Conference in Kampala. Four were oral and five were poster presentations.
Mombasa, Kenya (August 2009): A total of 15 poster and 6 oral presentations were made at the 5th TEPHINET African Regional/3rd AFENET Scientific Conference by GFELTP residents. One of them, Dr Paul Polkuu, a veterinarian and cohort-II epidemiology track resident, received the runner-up award for the best poster presentation. The presentation was on the investigation of an influenza-like illness (ILI) outbreak at a coeducational high school in the mountains of the Eastern Region of Ghana.
4) Residents' publications
A paper by a cohort-II epidemiology track resident, "Community-wide outbreak of cholera following unhygienic practices by small-scale unregistered gold miners, East-Akim District, Ghana - 2010", was accepted for publication by the Ghana Medical Journal in September 2011. Four public health articles by residents have been published in two veterinary bulletins and two national daily newspaper columns.
5) Onsite field supervision, mentorship and public health advocacy
In addition to SPH faculty members, selected Regional Directors of Health Services and District Directors of Health Services were oriented from the start of GFELTP to serve as supervisors and mentors for residents at various field sites. In May 2009, a Resident Advisor was appointed for GFELTP. Since then, in collaboration with the Ghana Health Service Public Health Division, he has conducted periodic rounds of visits to residents' field sites. The aims of the visits are to 1) provide mentorship, supervision and tutoring to residents during their field training, 2) conduct local stakeholders' feedback and public health consensus seminars, and 3) conduct programme advocacy and sensitization meetings with key stakeholders at regional and district levels. Multiple visits have been made to the Eastern, Central, Brong-Ahafo, Greater Accra, Northern, Upper West, Upper East, Volta and Western regions. There have been 10 regional stakeholders' seminars where residents made presentations on projects they undertook in various regions or districts to stakeholders from the community, the Ghana Health Service and the Veterinary Services Directorate. These fora provided opportunities for feedback and inter-sectoral discussions leading to consensus on public health action, and for sharing of information on GFELTP activities and opportunities. This novel approach of collaborative training and service at the local level has enhanced public health decision-making and action, and GFELTP visibility, at the health system frontline level.
6) Improving Management of Public Health Interventions Workshops
The GFELTP has hosted three workshops on Improving Management of Public Health Interventions (IMPHI). This followed an introductory course in 2008 to train proposed trainers, who were Deputy Directors in charge of Public Health at the regional level in Ghana; there were 17 participants, and the training was facilitated by CDC, Ghana Health Service (GHS) and GFELTP staff. The first workshop was held from June 22 to July 17, 2009 and was targeted at health practitioners in the African sub-region. Twenty-two health officials from four African countries attended the course: 19 Ghanaians, 1 Kenyan, 1 Tanzanian and 1 Ugandan. All 19 Ghanaian participants were staff of the Ghana Health Service. The course was divided into four modules designed to touch on all aspects of health management. A unique feature of the course was that, at the beginning of the four-week period, participants presented project proposals on the management of public health interventions. They were helped to develop the proposals and to implement them over the three months following the course. All participants were visited by a facilitator once during the three months of implementation. The Ghanaian participants came back for a day to present the results of what they had implemented before receiving their certificates. The regional participants were visited by the coordinator and the AFENET focal person for the course in their various countries. Participants made their presentations at a meeting of stakeholders before they were awarded their certificates.
The workshop with the field component was evaluated six months after the first four-week IMPHI course ended. The goal of the evaluation was to determine whether the four-week training led to the application of skills on the job as outlined in the curriculum and programme objectives. It was a joint evaluation by CDC-SMDP and stakeholders at the School of Public Health in Ghana.
Six months after the four-week IMPHI course ended, all 12 participants who were interviewed for this evaluation reported implementing a change in management practice at their places of work. Only one participant interviewed could not provide hard evidence for any of the changes she implemented.
7) Integrated Avian Influenza outbreak and pandemic influenza course
In collaboration with the USAID/STOP AI Programme, the GFELTP, in May 2010, pre-tested a newly developed set of modules on Integrated Avian Influenza Outbreak Response and Pandemic Influenza in a special two-week training workshop. The purpose of the workshop was to determine the usefulness of these modules in the African setting, with a view to introducing them in other FELTPs. The GFELTP has since adapted materials from the modules into its curriculum, and the course has been organized yearly with facilitators from the Veterinary Services, the School of Veterinary Medicine, the National Disaster Management Organization (NADMO) and the SPH.
GFELTP graduates strengthening public health workforce in Ghana
GFELTP collaboration with the Veterinary Services Directorate, Ministry of Food and Agriculture in Ghana has led to the strengthening of the regional epidemiology capacity of the service. Two GFELTP graduates currently serve as the regional veterinary epidemiologists in the Brong-Ahafo and Upper West regions. Two others are awaiting appointment letters to serve as regional epidemiologists in the Central and Volta regions. Similarly, the Ghana Health Service is finalizing formal plans to deploy GFELTP graduates to fill such positions in the regions. Currently, two of the graduates serve as deputy national programme managers for malaria and non-communicable diseases respectively, one as deputy head of the national public health and reference laboratory, and three as district directors of health services and de facto regional epidemiologists (Table 2).
GFELTP Steering Committee
A steering committee made up of representatives from stakeholders (MOH, GHS, Veterinary Services, Laboratory Services, NADMO, CDC, the Noguchi Memorial Institute for Medical Research and SPH) guides the management of the GFELTP to achieve the objectives of the programme. The committee is chaired by MOH/GHS and meets every quarter. The meetings are well documented and shared with all members/partners. The committee follows up on plans and recommendations through designated members with the support of the GFELTP secretariat.
GFELTP assessment
A Matrix Tool for FELTP Assessment was used to carry out an internal evaluation of the programme, and the results were presented to the GFELTP Steering Committee. The Ghana FELTP has also been assessed by AFENET and was awarded a Quality Assurance Certificate for 2010.
Placement of Graduates
Table 2 shows the placement of graduates, pre- and post-certification.
Discussion
The genesis and evolution of GFELTP is an example of the national identification of a workforce capacity need and the use of multi-sectoral collaboration with international technical and financial assistance to institutionalize indigenous capacity development in applied epidemiology. The SPH at the University of Ghana is a well-established constituent member of the College of Health Sciences of the University. The MPH programme, which is the flagship programme of the School, is thriving, with enrolment from Ghana, the African Region and beyond. The GFELTP was developed as a special programme based in the Epidemiology Department of the School. The contribution of the FELTP to the strengthening of the epidemiology curriculum of the MPH programme in the SPH has been acknowledged by both graduates and the Ministry of Health at several of the School's annual dissemination fora. The current policy of the Ministry of Health and the Veterinary Services Directorate of deploying graduates of the GFELTP to strategic posts in the national public health service clearly shows the appreciation of the competencies and skills of the graduates.
The outputs of the residents of the GFELTP have demonstrated the scientific rigor that has characterized the field investigations and dissertations produced. Two members of the initial cohort had submitted their upgraded dissertations for the award of a PhD in epidemiology as of 2011. The emphasis on scientific writing and communication is also reflected in the oral and poster presentations that residents from the programme have made at regional and global scientific conferences. The graduates of the programme have all returned to positions with an evolving career structure that is likely to motivate them to remain in the public health service. As part of the new public health institute model facilitated by the International Association of National Public Health Institutes (IANPHI) initiative in Ghana, the Ghana Health Service is developing a core public health technical or expert team career path that uses the GFELTP graduates to fill the critical role of epidemiologists at the subnational and national levels, as well as along specific disease control or public health programme lines. Crossover to the public health administration track at the top of the path is an option, and a defined promotion track in keeping with the national public health service policy has been proposed. There is ample evidence of improved public health surveillance and response, as well as evidence-based decision making, taking place in the national health service following the joint evaluation of surveillance systems, disease dataset analyses, outbreak investigations and public health interventions, with more regular reports, information sharing and periodic stakeholders' public health seminars at all levels. There has been a definite strengthening of the public health workforce and increased networking between programmes in Ghana and with other countries [5]. The prospect of increasing support from local stakeholders should see increasing enrolment in the programme, as demonstrated by the 2011 enrolment of nine service professionals, the highest number so far of the five cohorts. This should hasten the attainment of the vision and mission of the programme.
Challenges
The major challenge of the GFELTP has been the slow follow-up on pledges by the major national stakeholders of the programme in honouring their funding commitments as specified in the Memorandum of Understanding (MOU). This has limited the number of qualified residents that could be admitted into the programme. However, from the testimonies that all stakeholders have given on various occasions about the value they place on the service provided by graduates of the programme, it is expected that their support will be forthcoming. At the formal public launching and first certification ceremony of the programme on 2 June 2011, the Minister of Health and the Director of Veterinary Services both emphasized their new policy of utilizing the graduates of the GFELTP in strategic positions in the public health system of the country in order to improve the response to existing public health threats and emerging zoonotic diseases. These pronouncements encourage our optimistic view regarding the programme's sustainability based on continuing support from these key indigenous stakeholders.
Prospects for the future
There is no doubt that the establishment of the SPH and the subsequent addition of the GFELTP in Ghana have contributed significantly to addressing the competency and skills needs of the public health workforce. This is evidenced by the large number of disease outbreak investigations and the timely responses that residents of the programme have been able to carry out to date. In particular, the role that the GFELTP team of physicians, veterinarians and laboratory scientists played in the investigation of and response to the AI outbreak in Ghana in 2007 demonstrated the great value of the One Health concept and the team approach which the GFELTP has adopted. The unique feature of the GFELTP that permits trainees to provide service to the public health service even while still in training has made the outputs of the trainees well appreciated by relevant employers. Consequently, the demand for the course has been growing. As more local stakeholder support comes on board, it is expected that larger numbers of trainees will be admitted into the programme in order to respond to the increasing challenges of a growing complex of public health emergencies in the country and the sub-region.
Wurapa F, Afari E, Ohuabunwo C, Sackey S: Contributed to the development and design of the concept, writing the article and providing important intellectual content, reviewed several drafts and gave final approval of the version to be published. Clerk C, Kwadje S, Yebuah N, Amankwa J, Amofah G, Appiah-Denkyira E: Contributed to revising the article for important intellectual and factual content from the perspective of service partners, and approval of the version to be published.
"year": 2011,
"sha1": "84db97911a7abce47876c5732ac570d78f253aaa",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "84db97911a7abce47876c5732ac570d78f253aaa",
"s2fieldsofstudy": [
"Education",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
ESTRO ACROP guideline on prostate bed delineation for postoperative radiotherapy in prostate cancer
Highlights
• There is important variability in existing contouring guidelines for prostate bed radiotherapy.
• We developed a contemporary ESTRO-ACROP guideline for prostate bed contouring to improve consistency in delineation.
• It aimed at optimizing the therapeutic ratio by reducing the unnecessary irradiation of normal tissues.
• These recommendations are generally supported by local progression patterns on MRI and PET imaging.
Introduction
Postoperative radiotherapy to the prostate bed is a potentially curative treatment for prostate cancer patients at increased risk of local recurrence due to high-risk pathologic features (adjuvant radiotherapy) or due to biochemical, clinical, or radiological evidence of disease relapse (salvage radiotherapy) [1,2].
Several technological advancements in radiotherapy such as image guidance and intensity modulation have allowed more precise treatments with better sparing of adjacent normal tissues. An important part of the radiotherapy planning process is the clinical definition and delineation of the target volume. In prostate cancer patients treated with prostatectomy, accurate target volume definition can be challenging given the absence of the prostate boundaries and a variable postoperative anatomy. Despite the existence of various prostate bed contouring guidelines [3][4][5][6][7], the recommendations remain inconsistent, resulting in considerable inter-observer variability in treatment target delineation [8][9][10]. A contouring guideline on target volume delineation encourages a consistent application of prostate bed treatments across clinical trials, institutions, and individual clinicians. A practical guideline that is easily applicable and disseminated can improve the accuracy of radiation treatment and the reproducibility required for reliable reporting, and ultimately enhance oncological outcomes with reduced treatment-related toxicity.
The ESTRO-ACROP decided to develop a new consensus guideline to add to and refine the existing publications, accounting for more current practices that include the use of novel imaging modalities. Therefore, this work primarily aimed at developing a contemporary consensus guideline for a standardized delineation of the prostate bed for postoperative prostate radiotherapy.
Methods
An ESTRO contouring consensus panel consisting of eleven European radiation oncologists (AD, PD, VK, CC, CC, VF, PG, AGI, AZ, AB, TW) and one radiologist (VP), all with known subspecialty expertise, performed a contouring exercise and delineated the required postoperative clinical target volumes (CTVs) of the prostate and seminal vesicles in the setting of three independent clinical scenarios. These cases focused on three clinical factors likely to impact on delineation: presence of a positive surgical margin, extracapsular extension, and seminal vesicles involvement. The consensus-generating process consisted of the contouring of three CTVs by all participants. We considered that all three cases had no evidence of macroscopic local recurrence on the postoperative imaging. Recommendations for Organs at Risk (OAR) delineation have been previously published [11]; therefore, OAR contouring was not required. The clinical cases were as follows:
Case 1 (CTV1): 60-year-old man with a preoperative clinical T2a, Gleason 9 (5 + 4), PSA 16 ng/mL adenocarcinoma of the prostate. He underwent robotic-assisted radical prostatectomy and was found to have pathologic T3b, pN0 (12 lymph nodes removed), Gleason score 9 (5 + 4, ISUP 5) disease. Pathology revealed extracapsular extension at the right base with a focally positive surgical margin at this level (R1), 3 mm extension, and infiltration of the seminal vesicle on the right side. The patient was referred to radiation oncology 90 days after surgery with an undetectable PSA, < 0.1 ng/mL. The patient denied urinary incontinence but complained of erectile dysfunction.
Case 3 (CTV3): 70-year-old man with a preoperative clinical T3a, Gleason 8 (4 + 4), ISUP 4, PSA 20 ng/mL adenocarcinoma of the prostate. He underwent robotic-assisted radical prostatectomy and was found to have pathologic T3a, pN0 (15 lymph nodes removed), Gleason score 8 (4 + 4), ISUP 4 disease. Pathology revealed extracapsular extension at the apex with positive surgical margin at this level, 5 mm extension. PSA was detectable at 0.1 ng/mL approximately 2 months after surgery and then rose to 0.2 ng/mL 2 months later. Postoperative MRI and PSMA-PET scan did not show any evidence of disease and he was referred for assessment of salvage radiotherapy.
The computed tomography (CT) as well as postoperative magnetic resonance imaging (MRI) datasets were shared via the FALCON platform (Fellowship in Anatomic deLineation and CONtouring) from ESTRO (European SocieTy for Radiotherapy and Oncology), and the software EduCase TM (RadOnc eLearning Center, Inc., Fremont, CA, USA) was used. This is a web-based contouring and analysis tool that has a graphical user interface for the management, storage, and publishing of contouring of the clinical cases. The software allows image fusion of the simulation CT scan with MRI, as well as an integrated analysis of contouring proficiency.
Multiple methods to assess the data and reach consensus were used, which included quantitative and qualitative analysis of the contours. Contour analysis was performed using the Sorensen-Dice similarity coefficient to compute the degree of association or overlap between the sets of images [12]. These metrics were calculated and compared with the average contour of the group for each case.
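To make the metric concrete, the sketch below shows how the Sorensen-Dice coefficient can be computed for two delineations rasterized as binary masks on the same CT grid. This is a minimal illustration of ours in Python (the language the authors report using elsewhere in the Methods), not the study's analysis code; the function and variable names are our own.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Sorensen-Dice similarity of two binary contour masks.

    Dice = 2 * |A intersect B| / (|A| + |B|);
    1.0 means perfect overlap, 0.0 means no overlap.
    """
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:  # both masks empty: treat as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example: two observers' CTVs rasterized on the same 3-D grid
observer_1 = np.zeros((4, 4, 4), dtype=bool)
observer_2 = np.zeros((4, 4, 4), dtype=bool)
observer_1[1:3, 1:3, 1:3] = True
observer_2[1:3, 1:3, 2:4] = True
print(f"Dice = {dice_coefficient(observer_1, observer_2):.2f}")  # 0.50
```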
Heat maps of all collected contours for each clinical scenario were created, which provided a visual assessment of controversial regions of the prostate and seminal vesicles bed. Each heatmap was created by overlaying each observer's contours on the CT scan and counting, for every image voxel, the number of observers whose contour set included that voxel. Iso-surfaces with different colours were then generated according to these counts. This method allows a "qualitative" analysis of the geographic locations of higher and lower agreement for the identification of specific regions of controversy and subsequent discussion among the experts. All images were generated in Python version 3.7.4 (Python Software Foundation, USA). Two-dimensional (2-D) images for the transverse, coronal, and sagittal views were created by overlaying the contour maps and plotting every cross-section along the X, Y, and Z axes using Matplotlib's Pyplot module. Three-dimensional (3-D) images were created using Plotly's graph_objects 3-D scatter function.
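The voxel-counting step behind such a heatmap can be sketched as follows. This is a hypothetical reconstruction of ours, not the authors' published code: the jittered-box masks merely stand in for real observer contours, and only a 2-D transverse view with Matplotlib is shown.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical stack of binary CTV masks, one per observer, all rasterized
# on the same planning-CT grid (shape: observers x Z x Y x X).
rng = np.random.default_rng(0)
n_observers, grid = 11, (16, 64, 64)
masks = np.zeros((n_observers, *grid), dtype=bool)
for i in range(n_observers):
    # Crude stand-in for real contours: a box jittered per observer.
    z0 = 4 + rng.integers(-1, 2)
    y0 = 20 + rng.integers(-3, 4)
    x0 = 20 + rng.integers(-3, 4)
    masks[i, z0:z0 + 8, y0:y0 + 24, x0:x0 + 24] = True

# Voxel-wise count of observers whose contour includes each voxel
# (0..n_observers); high counts mark agreement, low counts controversy.
agreement = masks.sum(axis=0)

# Plot one transverse slice of the agreement map as a heatmap.
mid_slice = agreement.shape[0] // 2
plt.imshow(agreement[mid_slice], cmap="hot", vmin=0, vmax=n_observers)
plt.colorbar(label="number of observers including voxel")
plt.title("Transverse slice of contour-agreement heatmap")
plt.show()
```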
After all contours were submitted, 20 case-specific questions addressing detailed recommendations on target volume delineation were formulated by the leading authors (AD, PD, TW) and electronically sent to participants for discussion and consensus development. The survey also encompassed questions including, but not limited to, image guidance, delineation of the bladder neck, required imaging methods, planning target volume (PTV), use of rectal spacers and endorectal balloons (Supplementary material). The twelve participants discussed discrepancies in the recommendations in multiple informal discussions by electronic mail and videoconference. For each question, the quality of consensus in terms of percentage of agreement was measured and documented. Consensus was defined as 75% or more agreement for each recommendation, as per the German S3 guidelines [13].
Results
This work was carried out by a group of eleven genitourinary radiation oncologists and one genitourinary radiologist. The radiologist's input was not directly included in the quantitative/qualitative analysis of the contours but was included in the remaining methods for consensus.
The mean CTV1 (adjuvant RT) volume was 76 cc (SD = 26.6), CTV2 (salvage radiation with PSA progression) was 51.80 cc (SD = 22.76), and CTV3 (salvage radiation with persistently elevated PSA), 57.63 cc (SD = 25.2). Compared with the median, the mean Sorensen-Dice similarity coefficient for CTV1 was 0.60 (SD = 0.10), CTV2 was 0.58 (SD = 0.12), and CTV3 0.60 (SD = 0.11). Overall, each expert contoured CTV2 and CTV3 in a similar fashion. CTV1 presented the largest volumes. This case represented the patient with pT3b disease and extra-prostatic extension (EPE) at the prostate base, and the larger volume was attributed to the clinicopathological characteristics of the case. Therefore, the group agreed to proceed with a recommendation that considers the possible clinical variations that can occur, independent of the radiotherapy timing, but with the possibility of tailoring the volume delineation according to case-specific risk factors.
A heatmap of all contours was generated for each clinical scenario to identify areas of disagreement.
Supplementary Video 1 is a visual representation of consensus development, with areas of agreement in "warmer" colours (red and orange) and controversial areas in "cooler" colours (blue). Several controversial areas of the prostate bed CTV were identified based on both the heatmaps and the questionnaires. This formed the basis for discussions via videoconference, where the panel achieved consensus on the prostate bed CTV to be used as a novel guideline for postoperative prostate cancer radiotherapy. Fig. 1 shows a heat map with the three areas of greatest variability, consisting of (1) the superior-most aspect at the seminal vesicles level, (2) the inferior part at the prostate apex level, and (3) anteriorly, next to the pubic symphysis.
The questionnaires used in our case-specific surveys (Supplementary data) led to the recommendations summarized below.
Inferior border
The group agreed to use the vesico-urethral anastomosis (VUA) as a landmark to delineate the inferior border of the prostate bed, and the CTV should start inferiorly 8-12 mm below the VUA. The VUA can be defined as the slice below the last slice where fluid (urine) is seen (Fig. 2a). The group does not recommend the routine use of contrast for the definition of the inferior border of the CTV. When available, a postoperative MRI can be used for the identification of the VUA (Fig. 2b). VUA positioning on MRI is reliable, with a strong correlation between readers [14]. If the VUA cannot be identified, the group agreed that the CTV should start at the slice right above the penile bulb, which is usually visible on both CT and MRI.
Anterior border
Cranially, at the level of the seminal vesicles bed, the CTV should cover 1-2 cm of the posterior bladder wall (Fig. 3a). However, the group agreed that, if daily Image-Guided Radiation Therapy (IGRT) is used and consistent reproducibility of the bladder is ensured, the CTV may be contoured up to the posterior margin of the bladder wall (Fig. 3b), which would avoid unnecessary irradiation of normal bladder tissue.
Caudally, the anterior border should stop at the posterior margin of the pubic bone, extending up to one-half to two-thirds of the symphysis pubis (Fig. 3c). Of note, the antero-cranial border should not extend to the top of the symphysis pubis, to avoid unnecessary irradiation of the bladder. However, if the cranial limit is located at the midpoint of the symphysis pubis on sagittal view, proper coverage of the VUA must be ensured.
Posterior border
Posteriorly, the CTV should stop at the anterior rectal wall (Fig. 4a). However, at the level of the seminal vesicles bed, if the meso-rectal fascia is clearly visualized, it can be used as the posterior border of the CTV. Of note, the group agreed to include the existing surgical clips within the prostate bed CTV and the antero-lateral angles of the rectum, known to be an area at risk for recurrence (Fig. 4b).
Lateral border
Cranially, the lateral borders of the CTV are the internal margins of the internal obturator muscles, bilaterally (Fig. 5a). The group agreed that the CTV should not be extended antero-laterally toward the region of the obturator lymph nodes to spare normal bladder irradiation. Caudally, the lateral borders are the internal margins of the internal obturator muscles or internal borders of the levator ani muscles (Fig. 5b).
Superior border
Superiorly, the group agreed to include a "bridge" of 3-5 mm between the seminal vesicles bed or seminal vesicles remnants. In the absence of seminal vesicles invasion on pathology (pT2-pT3a), the CTV should include the region of the seminal vesicles base (lower third), i.e., up to the level of the cut end of the vas deferens (Fig. 6a). In the presence of seminal vesicles invasion, the CTV should ensure the inclusion of the entire seminal vesicles bed, i.e., include cranial surgical clips if present and considered relevant. When available, the use of a preoperative MRI or diagnostic CT is recommended, as it can help guide the delineation of the preoperative seminal vesicles bed.
A slice-by-slice overview of the prostate bed CTV delineation is presented in the Supplementary data.
Use of postoperative MRI
Although CT is recommended for the delineation of the prostate bed CTV, a postoperative MRI, with or without a preoperative MRI, can be helpful for guidance. The pelvic anatomy, and specifically the prostate bed, differs depending on the surgical technique [15]. The prostate bed is better visualized on postoperative MRI than on CT due to better soft tissue resolution; therefore, MRI can be of value for a more accurate target definition [16][17][18]. If available, the co-registration of a preoperative MRI may give an insight into the extent and location of the disease, which could help adjust the final prostate bed volume. The postoperative MRI acquisition protocol should include T2-weighted imaging in three orthogonal planes, diffusion-weighted imaging with high b-values and dynamic contrast-enhanced imaging. Images should include the VUA, the prostatic bed, the bladder base, the levator ani, the rectum, and the residual seminal vesicles, as these are common sites of relapse [19]. Of note, these prostate bed CTV recommendations can be applied to patients treated with MR-guided radiotherapy.
Expansion of the CTV at the area of positive margin
Previous guidelines [4] have recommended a supplementary 5 mm CTV expansion in the direction of microscopically involved tumor margins as reported by the pathologist (except the rectal wall). The current work did not reach consensus on the expansion of the CTV at the area of positive margin; therefore, the judicious expansion of the volume will be left to the discretion of the radiation oncologist.
PTV margins
There is no uniform recommendation for PTV margins to the prostate bed [20]. The group agreed on a minimum 5 mm isotropic expansion of the CTV.
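For illustration, an isotropic CTV-to-PTV expansion can be implemented as a morphological dilation of the binary CTV mask with a structuring element approximating a 5 mm sphere in physical units. The sketch below is our own example under assumed voxel spacing; it is not part of the guideline, and the function name is ours.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def isotropic_expansion(ctv, margin_mm, spacing_mm):
    """Expand a binary CTV mask isotropically by margin_mm.

    spacing_mm is the voxel size (z, y, x) of the planning CT; the
    structuring element approximates a sphere of radius margin_mm
    in physical units (an ellipsoid in voxel units).
    """
    radii = [int(np.ceil(margin_mm / s)) for s in spacing_mm]
    zz, yy, xx = np.ogrid[-radii[0]:radii[0] + 1,
                          -radii[1]:radii[1] + 1,
                          -radii[2]:radii[2] + 1]
    sphere = ((zz * spacing_mm[0]) ** 2 +
              (yy * spacing_mm[1]) ** 2 +
              (xx * spacing_mm[2]) ** 2) <= margin_mm ** 2
    return binary_dilation(ctv, structure=sphere)

# Toy example: 2 mm slice thickness, 1 mm in-plane resolution, 5 mm margin.
ctv = np.zeros((20, 40, 40), dtype=bool)
ctv[8:12, 15:25, 15:25] = True
ptv = isotropic_expansion(ctv, margin_mm=5.0, spacing_mm=(2.0, 1.0, 1.0))
print(ctv.sum(), "->", ptv.sum(), "voxels")
```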
IGRT
The group agreed that the preferred IGRT method is cone-beam CT (CBCT), ideally daily or at a minimum of 3 times a week, with alignment to the soft tissue and/or surgical clips. Absence of image guidance, or CBCT with alignment to bony anatomy, is not recommended. Adequate training of radiation therapists to perform online soft tissue matching with CBCT scans is critical. The group does not recommend the use of implanted fiducial markers for postoperative radiation treatment of the prostate bed. To our knowledge, no specific guideline for postoperative IGRT is available; however, the authors refer to the ESTRO-ACROP guideline on IGRT for localized prostate cancer [21] for further guidance. Variations in rectum and bladder filling have a greater impact on prostate bed treatment compared with primary radiotherapy of the prostate. Therefore, a protocol to consistently maintain an empty rectum and a comfortably full bladder is recommended.
Endorectal balloons
Endorectal balloons have been used to stabilize the internal anatomy of the rectum, decreasing inter- and intra-treatment target motion for prostate cancer treatment [22][23][24][25]. The group does not recommend the routine use of rectal balloons in the postoperative radiation treatment of the prostate bed.
Rectal spacers
The use of a biodegradable substance in the anterior perirectal fatty space allows displacing the prostate away from the rectal wall reducing the rectal volume exposed to high level doses [26,27]. Data is limited on the use of rectal spacers for postoperative radiation treatment of the prostate bed; therefore, the group does not recommend the use of rectal spacers in this setting.
Discussion
Postoperative radiotherapy is the only potentially curative treatment after radical prostatectomy, whether delivered for failure after surgery or as a measure to prevent tumour recurrence. Prostate bed irradiation is associated with improved long-term oncologic outcomes compared with observation and overall has a favorable therapeutic ratio [1,2,28,29]. An accurate and precise delineation of the radiotherapy target volume is of particular importance to provide reproducible and reliable reporting, maximize oncological outcomes and minimize treatment-related toxicity. Variation in contour delineation among radiation oncologists is common and can affect the resulting treatment delivery and patient outcomes [9,[30][31][32]. Several studies have shown that the use of guidelines and contouring atlases can reduce inter-observer variability in both the target volume and OAR, with additional evidence that these improvements in contour delineation can improve predicted tumour control and normal tissue complication probability [31][32][33][34].
Although other consensus guidelines are available in the literature, our consensus guidelines provide a few distinct recommendations, as shown in Supplementary Table S1. We have attempted to refine some of the previous recommendations and optimize the therapeutic ratio by reducing the unnecessary irradiation of normal tissues. Several advancements in image guidance and radiotherapy delivery have been seen in the last decades; the current recommendations are intended to adapt to these improvements. One of the main innovations of this work is the recommendation of postoperative MRI for better soft tissue outlining, as well as to rule out residual macroscopic recurrence [18,35]. The use of MRI possibly increases observer agreement and decreases unnecessary dose to the organs at risk in the post-operative setting [16]. Comparable to what is observed with prostate CTV delineation, there is a reduction of the overall post-operative CTV on planning MRI compared with CT. The inferior border, especially, can be more clearly visualized on T2-weighted MRI sequences, leading to a shift of the CTV in the caudal direction. Additionally, the anterior rectal wall is more clearly marked on MRI, resulting in minimal discrepancies between observers compared with CT-only delineation. A central review of a phase 3 trial for salvage radiotherapy using CT-based delineation showed that limited adherence to treatment protocol recommendations (e.g., overlap of the CTV with the rectal wall) was associated with a higher risk of toxicity and a trend toward worse biochemical control [9]. When MRI is used to aid CT-based CTV contouring, however, it is important to ensure an appropriate image fusion, which can often be challenging due to differences in bladder and rectum volumes between the imaging modalities. It is important to consider and mitigate variations in bladder and rectum filling between CT and MR imaging, and to reproduce treatment conditions. Prior to the delineation of the prostate bed, both image sets should be aligned. This alignment can be initially set on bony structures, for example the pubic symphysis, but subsequently adjusted at the level of the external urethral sphincter.
Regarding the anatomical landmarks, previous guidelines [3,5] recommend that the anterior border of the CTV should extend superiorly to the top of the symphysis pubis, consequently covering an important portion of the normal bladder. In the absence of radiologic evidence of macroscopic disease, the normal bladder at the level of the top of the symphysis pubis is unlikely to harbor microscopic disease and therefore should not be considered part of the prostate bed CTV. This work, therefore, recommends that the prostate bed CTV should extend up to half or two-thirds of the symphysis pubis on sagittal view (Fig. 3c). Anatomical patterns of progression in the prostatic fossa on MRI and PET imaging have shown that recurrences at the cranial aspect of the posterior edge of the pubic bone are rare [36,37]. Two recent 68Ga-PSMA datasets [38,39] led to similar insights for developing new contouring guidelines. Together, more than 250 patients with local recurrences on 68Ga-PSMA PET/CT after prostatectomy showed the importance of including the antero-lateral angles of the rectum and extending more than 5 mm below the VUA, as well as the possibility of excluding from the CTV the region at the cranial aspect of the posterior edge of the pubic bone. However, it is important to note that some recurrences at the anterior portion of the CTV may not be well visualized on 68Ga-PSMA PET/CT due to urinary activity in the bladder.
Yet, postoperative variations of the supero-inferior position of the VUA exist, and radiation oncologists must use their best judgment to ensure adequate coverage of the VUA, which is a common site of prostate bed recurrences [40,41]. In addition, patterns of recurrence on PET scans have shown that the anterolateral angles of the rectum are a common site of disease; we therefore concur with the Groupe Francophone de Radiothérapie Urologique (GFRU) [42] recommendation on extending the posterior border of the contour to include this area.
Considering the anterior margin of the CTV cranially, we recommend that the CTV should be contoured up to the outer margin of the bladder wall because, by definition, intravesical tissue is not at risk for recurrence due to micro-metastatic disease. However, it is important to note that the postero-superior part of the CTV has greater motion, and a larger anteroposterior PTV expansion of the upper prostate bed has been previously suggested [43]. Therefore, daily IGRT and consistent reproducibility of bladder and rectum volumes should be ensured to safely reduce the anterior coverage of the CTV more cranially. This is the first guideline for postoperative RT to use a qualitative analysis through heatmaps, which provided a visual assessment of controversial regions of contouring. It is important to mention that these guidelines were developed in the setting of adjuvant or early salvage radiotherapy without evidence of local disease on mpMRI, PET-CT or PET-MRI. The delineation of target volumes in the setting of visible disease is the subject of a separate work by the same ESTRO-ACROP team. These guidelines were developed in the context of conventionally fractionated treatment, and recommendations on alternate fractionation schedules (e.g., moderate or extreme hypofractionation) are beyond the scope of this work. We believe our recommendations would still be applicable irrespective of the fractionation regime.
Future research should focus on validating the reproducibility of these recommendations and on their potential impact on treatment planning and clinical outcomes. Moreover, these recommendations could be an opportunity for the development and deployment of knowledge-based planning and artificial intelligence contouring solutions.
Conclusion
This work showed variability in prostate bed delineation in a group formed by radiation oncologists and a radiologist, all with expertise in prostate cancer. A single contemporary ESTRO-ACROP consensus guideline was developed to address areas of dissonance, promote standardization, improve previous recommendations, and increase consistency in prostate bed delineation, independent of the indication.
Disclaimer
ESTRO cannot endorse all statements or opinions made on the guidelines. Regardless of the vast professional knowledge and scientific expertise in the field of radiation oncology that ESTRO possesses, the Society cannot inspect all information to determine the truthfulness, accuracy, reliability, completeness or relevancy thereof. Under no circumstances will ESTRO be held liable for any decision taken or acted upon as a result of reliance on the content of the guidelines.
The component information of the guidelines is not intended or implied to be a substitute for professional medical advice or medical care. The advice of a medical professional should always be sought prior to commencing any form of medical treatment. To this end, all component information contained within the guidelines is provided solely for educational and scientific purposes. ESTRO and all of its staff, agents and members disclaim any and all warranties and representations with regard to the information contained in the guidelines. This includes any implied warranties and conditions that may be derived from the aforementioned guidelines.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 2023-05-11T15:03:00.650Z | 2023-05-01T00:00:00.000 | {
"year": 2023,
"sha1": "75256e5faf3f9a8ada3895fc433d11ac549f0bc1",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.ctro.2023.100638",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ba38a3f92517304f06d7a2a7ad7f2e214ed351f3",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
231717610 | pes2o/s2orc | v3-fos-license | Providing a Model of Agritourism In Rural Development Case Study: Masal County, Guilan Province, Iran
Purpose: The main purpose of this study is to present a model of agritourism for rural development. Based on a quantitative approach and the principles of sustainable tourism theory, it tries to present the optimal model of agritourism for the rural areas of Masal county. An attempt is also made to identify the elements of agritourism and discover their internal relations in order to evaluate its development. Research Method: This is a descriptive-analytical study with a mixed research method, i.e. a combination of quantitative and qualitative methods. The research tools in this study include several types of questionnaires and interview cards. For the purpose of data analysis, a variety of statistical tests are used in the quantitative and qualitative sections. Findings: The results show that the optimal model of agritourism for rural development in Masal county will be possible in each of the three elevation zones in the presence of five factors, namely tourists, farmers, rural environments, facilitators and common agricultural activities in the region. However, in this study the role of the two factors of facilitators and rural environments in the Masal area is far greater than that of the other three factors. It is also found that, depending on the prevailing conditions in the studied rural areas, the foothill, plain and mountain areas are suitable for agritourism activities, in that order. Originality/Value: For the first time in Iran, this research has addressed the issue of an agritourism model. Also, all the elements of agritourism have been studied together for the first time.
INTRODUCTION
The main function of rural areas in the third world is agriculture, which, due to its special effect on employment, poverty and income adjustment, food security, and self-sufficiency, is of great importance (Shayan and Bouzarjomehri, 2012: 151). The experiences of developed countries show that rural development can be achieved by exploiting agriculture and related activities in rural areas. In this regard, in order to provide villagers with sustainable livelihoods, complementary agricultural activities can be used in the villages. Accordingly, agritourism offers the possibility of increasing the income of rural households, creating jobs, and preventing rural migration by improving the quality of life and the balanced distribution of services and welfare facilities in rural areas.
Agritourism can be defined as any work and tourist attraction activity that is based on agriculture (Athar, 2013: 36). Agritourism means tourism based on farms or vacationing on farms (Kuhen et al., 1998). Agritourism, which is sometimes called farm tourism, is a type of rural tourism that is directly related to agriculture in rural areas (Moradi et al., 2012: 6; Javan and Saghaei, 2004: 128). This type of tourism can be considered a combination of the natural setting and the process of cultivating and harvesting agricultural products, offered as a tourism experience (Torabi, 2016; Yazdi and Saghaei, 2003). Agritourism has no adverse effect on the environment, is education-oriented, and is well known for its recreational activities; it is a subset of rural tourism activities (Mahaliyanarachchi, 2017: 16). Agritourism can be explained as an interactive activity among agricultural producers, visitors, agricultural products, and the facilities of agricultural producers that benefits both groups (Malkanti, 2012).
In fact, agritourism is a type of tourism in which tourists live with rural households and farmers and learn about agricultural activities in specific fields and agricultural areas (terraced farming, sugarcane farms, cocoa gardens, pineapple orchards, etc.). In this way, tourists interact with or participate in traditional agricultural activities without negative consequences for the ecosystem of the host areas. In return, the hosts provide a series of activities and services to tourists and, while satisfying them and creating peace of mind, earn money themselves. Agritourism is not a new phenomenon; it has grown significantly over the last ten years and is expected to have a bright future (Gil Arroy et al., 2013: 39). Agritourism is one of the strategies proposed in recent decades to diversify the rural economy and support sustainable rural development (Su, 2011: 36). It can serve as a solution to improve livelihoods in rural areas, ease economic stagnation, and reduce out-migration. The nature of this tourism is such that it can generate positive economic benefits, including diversifying the local economy, increasing employment, developing the tax base, and increasing income (Vermeziari, 2013: 3; Jalag, 1996). Sznajder (2009) considers the effects of agritourism in rural areas to include spatial-environmental, economic, and socio-psychological effects. He also believes that monetary exchanges and increased sources of income, followed by surplus income, will reduce the migration of villagers and rural elites to the city, stabilize the rural population, and drive development in these areas (Sznajder, 2009). Bir (1997) carried out a study entitled "Agritourism and Its Formation Conditions" and proposed that the conditions for rural tourism, especially agritourism, include a small-scale, authentic natural setting rich in cultural structure; areas with suitable landscapes and large-scale single-product attractions; good transportation for accessibility; proper infrastructure; and stable political conditions. In a study entitled "explaining the factors affecting the tendency of villagers to tourism," Anabestani and Mozaffari (2018) considered education and skills, government support and policy, non-developmental ideas, farmers' viewpoints, access to the city, self-confidence, and risk-taking as factors affecting the tendency of villagers toward agritourism. They also showed that there is a relationship between the tendency toward agritourism and the variables of age, job, level of education, amount of income, amount of work in farms and gardens, and the frequency of travel to the city during the week.
In the study of Amiri et al. (2017), entitled "presenting a conceptual model to investigate the impact of agritourism on rural entrepreneurship development," agritourism is assumed to have four dimensions (village, farmer, farm and tourist) that affect rural entrepreneurship development through economic, socio-cultural, and environmental effects.
Karimi (2014) conducted a study entitled "agritourism entrepreneurship, a new strategy for rural development" and believed that agritourism can play an important role in sustainable agricultural and rural development as a new strategy. Economically, it can diversify agricultural activities. Environmentally, it can help protect the environment, ecosystems and agricultural lands, and reduce environmental damage and agricultural pollution. Socioculturally, this type of tourism can preserve rural culture and traditions, improve farmers' social status, and empower women farmers.
A coherent framework of agritourism can be considered at two levels. At the first level, it is important to consider the actors of this type of tourism and their role in tourism activities. At the second level, there are the issues around which the action takes place, i.e. tourism activities considered in the form of main and side activities (Moradi, 2012: 30).
Studies have shown that agritourism actors include tourists, hosts (farmers-villagers), facilitators (governmental and non-governmental organizations, associations), and the farm itself and its related activities, each of which affects this type of tourism in some way.
One of the areas that can be studied in connection with the discussion of agritourism is Masal county, Gilan province. In this county, more than 60% of rural households are dependent on agricultural activities. On the other hand, it receives more than 20,000 native and non-native tourists annually. It seems that the combination and use of tourism and agricultural activities under the name of agritourism can be a very effective driving factor in achieving the development of rural areas in this county.
In this regard, the present study relies on the theory of sustainable development: while identifying and studying the elements of agritourism, it examines the rural typology of Masal county (plains, foothills and mountains) to investigate the possibility of developing various types of agritourism. The results are then presented in the form of an analytical-experimental model.
RESEARCH METHODOLOGY
Based on the theoretical foundations and conceptual model of the study, agritourism has been studied based on five factors: farm, village, tourist, farmer and facilitators. Masal county has 108 rural points. According to the elevation index, these points are located at three levels: plain (up to 100 meters), foothill (100 to 500 meters), and mountainous (above 500 meters) (Guilan Land Management Plan, 2017). These areas include 57 villages in the plains, 27 villages in the foothills, and 24 villages in the mountains.
Out of a total of 108 rural points, based on the sampling method in descriptive studies (Hafeznia, 2010: 163-164), 20 villages, including 9 villages from the plains, 6 villages from the foothills, and 5 villages from the mountainous areas, were randomly selected. Cochran's formula was used to determine the number of samples for farmers and farms. According to this formula, 338 households were selected as the sample size. Also, 338 farms were selected as the sample of the farm population. Cohen's sampling formula was used to determine the sample size of the tourist population. As a result, the sample size for tourists, taking into account the initial standard deviation, was 140 people over one year across four different seasons. Table 1 shows the population and sample sizes.
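To make the sample-size step concrete, the following is a minimal sketch of Cochran's formula with the finite-population correction. The population size of 2,800 households and the parameter values (z = 1.96, p = 0.5, e = 0.05) are illustrative assumptions only, since the paper does not report the exact inputs used; with these values the corrected sample size happens to equal 338.

```python
import math

def cochran_sample_size(population_size, z=1.96, p=0.5, e=0.05):
    """Cochran's sample-size formula with finite-population correction.

    z: z-score for the confidence level (1.96 for 95%)
    p: estimated proportion of the attribute (0.5 maximizes variance)
    e: desired margin of error
    """
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)        # infinite-population size
    n = n0 / (1 + (n0 - 1) / population_size)     # finite-population correction
    return math.ceil(n)

# Hypothetical household population of 2,800:
print(cochran_sample_size(2800))  # -> 338
```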
Data collection methods and tools
Data were collected through both library (desk) research and fieldwork; at each step of the research, one or both of these methods were used as required. The most important data collection tools were the researcher-made questionnaire and the interview card, which were designed for each component.
Features of the questionnaires used in the research
In this research, four questionnaires with combined (open- and close-ended) questions have been used. These include the farmers' questionnaire, the farm or agricultural activities questionnaire, the village questionnaire, and the tourists' questionnaire.
In the case of facilitators, the interview card was mostly used in the form of face-to-face and telephone interviews. Table 2 shows the elements, indicators and variables examined in this study.
Data analysis
Due to the nature and purpose of the research, quantitative methods were used to analyze the data. After data collection, MATLAB, EXCEL and SPSS software, statistical tests and appropriate descriptive analyses were used for the final analysis. In the descriptive section, descriptive statistics such as frequency and frequency percentage were used. In the statistical tests section, tests such as regression-interaction analysis, DEMATEL-ANP, MICMAC, and the importance-performance analysis (IPA) model were used. In addition, environmental-climatic, economic, social, cultural and biological factors were used to study and classify the rural typology based on the components of agritourism. (Sources of the indicators in Table 2: Barbieri and Tew, 2009; Lack, 1995; Rezaei, 2016; Imani, 2011; Motallebi Varkani, 2012; Tork Choran, 2015; Taheri, 2011; Jensen et al., 2006; Galhate, 2010; Tocu, 2007; Sangam, 2013; Malkanthi and Routry, 2011; Parzych, 2013; Mahaliyanaarachchi, 2016.)
Study area
Masal county, in Gilan province, one of the northern provinces of Iran, is located on the southern coast of the Caspian Sea and covers an area of 465 square kilometers. The main activity of the rural sector in this county is agriculture. The county also has rich natural resources and diverse animal life. The most important activities of Masal county are listed separately, based on 2016 performance data, in Table 3.
This county is also a tourist destination in Iran, and especially in Gilan province. According to the latest available statistics, it receives more than 25,000 visitors annually from all over Iran and other parts of the world.
According to Table 4, the number of tourist arrivals in Masal county has been increasing since 2010, with a growth rate of 31.55% in 2016. This trend accelerated in the following years, with the growth rate approaching 50% in 2019 (Guilan Cultural Heritage, Handicrafts and Tourism Office: assistance of Statistics and Information, 2017).
As mentioned, agritourism is formed as a result of the relationship between five factors. In this regard, in order to understand the role and position of each of these factors in affecting, and being affected by, the development of agritourism, a group decision-making method based on paired comparisons and expert judgment was used. This method, known as a diagram-based technique for discovering causal relationships, is the DEMATEL technique.
As a result, in connection with the role of the five factors in the development of agritourism in Masal county, it was found that the two components of facilitators and the rural environment will have the greatest impact on attracting tourists to Masal county and strongly influence the components of tourists, farmers and farms. The components of the rural environment and facilitators are thus the "cause" in this research, and the components of tourists, farmers and farms are the "effect". The MICMAC analytical-structural method was also used to investigate the relationships between the variables of each of the five elements. This model likewise showed that the components of facilitators and the rural environment have the greatest influence, while the variables of tourists, farmers and farms are strongly influenced. The effect of each of the five components on the development of agritourism in the study area was investigated with the help of a regression test.
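For readers unfamiliar with DEMATEL, the sketch below shows the core computation: the expert direct-influence matrix is normalized, the total-relation matrix T = D(I - D)^-1 is formed, and the row/column sums of T yield the prominence (r + c) and the cause-effect index (r - c). The 0-4 influence scores in the example matrix are hypothetical, not the study's actual expert judgments; they are merely chosen so that the rural environment and facilitators emerge as "cause" factors, mirroring the finding described above.

```python
import numpy as np

def dematel(direct_influence):
    """Return the total-relation matrix and the (r + c, r - c) indices.

    Factors with a positive r - c act as 'causes'; negative values
    mark 'effect' factors.
    """
    A = np.asarray(direct_influence, dtype=float)
    # Normalize by the largest row/column sum so the power series converges.
    s = max(A.sum(axis=1).max(), A.sum(axis=0).max())
    D = A / s
    T = D @ np.linalg.inv(np.eye(len(A)) - D)   # total-relation matrix
    r, c = T.sum(axis=1), T.sum(axis=0)
    return T, r + c, r - c

# Hypothetical 0-4 scores for (tourists, rural environment, farmers,
# farm, facilitators); rows influence columns.
A = [[0, 1, 2, 2, 1],
     [3, 0, 3, 3, 1],
     [1, 1, 0, 3, 1],
     [1, 1, 2, 0, 1],
     [3, 3, 3, 3, 0]]
T, prominence, relation = dematel(A)
print(np.round(relation, 2))  # positive entries mark the 'cause' factors
```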
Tourists and agritourism
One of the factors affecting the formation of various tourism activities is the presence or absence of tourists in a place and their attitude toward the spatial structure of the destination environment. In Masal county, tourists with different social and economic characteristics will have different effects on the formation of various types of agritourism activities. The results of the field questionnaire showed that the quality of the environment of Masal county villages (3.187%), the development of infrastructure facilities (3.080%), and the agritourism components (2.873%) have the greatest impact on attracting tourists. Tourists also considered the quality of the environment and the resulting landscape an important factor in attracting them to Masal county as a tourist destination. They further pointed to the importance of indicators such as tourism infrastructure, agritourism components, welfare facilities (with an impact of 1.836), environmental security (with an impact of 1.909), and cultural, historical and religious characteristics (with an impact of 2.280).
Rural environment and agritourism
Regarding the role of the rural environment in the development of agritourism in Masal county, it was found that, according to rural experts, components such as accommodation facilities and services (3.00%), infrastructure facilities (2.940%), a touristic rural environment (2.860%), the natural landscape (2.780%), historical-cultural monuments (2.010%), and the human landscape (1.920%) had the greatest impact. In other words, according to rural experts, six indicators of the rural environment affect the development of agritourism: accommodation and catering services, infrastructure, a tourist-friendly environment, the natural landscape, historical and cultural monuments, and the human landscape. On this basis, the villages of the Masal foothills rank first, the villages of the plains second, and the villages of the mountainous areas third.
Farmers and agritourism
Regarding the role of farmers in the development of agritourism in Masal county, in addition to farmers' individual characteristics, capital and assets, other components are involved. Among these, farmers' level of interest and willingness to provide services related to livestock activities (2.284%), agriculture and horticulture (2.130%), entertainment and recreation (1.788%), accommodation and catering (1.668%), and cultural products (1.717%) are of great importance. In addition, the motivation and skill of farmers (2.003) in accepting new changes in agritourism and their attempts to diversify agricultural activities are noteworthy.
Farm and agritourism
Farm refers to components, such as general characteristics of the farm, its products and services, marketing of its products, its workforce, and its level of being tourist-friendly. These components were studied in rural farms of Masal county and the results showed that the level of being tourist-friendly (2.020%) is a very important indicator in the development of agritourism, with marketing components of products (1.849%), farm products and services (1.837%), farm labor (1.767%), and general farm specifications (1.702%) ranked next.
Facilitators and agritourism
Regarding the role of facilitators in the development of agritourism, documentary and field studies showed that the current management of all sectors of tourism in Iran follows a state-run model of tourism management. This means that all tourism sectors are under the full supervision of the central government. In this model, the management structure at the regional and local levels has no role in decision-making and is in fact the executor of the orders and decrees of the central government. The gap between the levels of political space management is wide, and there are many overlapping rules that sometimes disrupt the organizational tasks of institutions. In the current situation, this factor interferes with, and is an obstacle to, development in the rural and agritourism sector, which makes it necessary to review the structure of tourism management.
In general, based on expert opinion, the environmental conditions of the rural areas of Masal county, their capacity for the formation and development of agritourism, the available statistics and information, and field studies and direct observations, the rural areas located in the foothills and plains were prioritized in planning. This is because they have some of the components required for various types of tourism, including accommodation and food supply centers, medical and transportation services, security and welfare, and access, and can therefore host agritourists. The typology of the studied villages shows that the general landscape of the villages of Masal county has the capacity for agritourism development, with the foothills in first place, the plain areas in second place, and the mountainous areas in third place. The texture of these villages is gradually moving away from its traditional state, and more urban elements can be seen in the architecture of the buildings. Vegetation is often shaped by cultivation patterns, and forest and rangeland vegetation is negligible.
Characteristics of the rural environments of the plain and foothill areas include the following:
- Rural areas are closely linked to urban areas and have relatively good construction infrastructure.
- Agricultural lands are cultivated in spring and summer and are very beautiful and eye-catching. A second cultivation is common in early autumn and winter, and in the cold seasons the lands are sometimes rented to cattle breeders for grazing livestock (cattle and sheep).
- The prevailing land use is agricultural. The rural household economy in these areas relies on rice cultivation, a second cultivation is common, and some villages have fish farms.
- The vegetation of the rural areas is a combination of forest areas, shrubs, meadows, arable lands and gardens.
- The combination of agricultural activities with forested areas and gardens has created a beautiful landscape.
- Rural life, rural culture and rural work tools form a special landscape that can be attractive to the viewer.
- The historical background of the villages and the existence of historical symbols, such as monuments and antiquities, is another factor that makes the villages of these areas attractive. Tolabdareh is among these villages.
Overall, based on the existing conditions and potentials of each of the three regions, conditions are suitable for the development of attached or secondary agritourism, i.e. a form of agritourism that takes place alongside agricultural activities. In this form of agritourism, farmers still rely on agricultural activities as their main source of income, but in addition use the components of tourism, especially agritourism, in some seasons to supplement the family income. This can gradually lead to complementary and, eventually, primary agritourism.
Examination of the documents, field studies, interviews and consultations with experts showed that the optimal model of agritourism for Masal county is the design of an agritourism package as a horizontal profile from the plains to the mountains, according to the role and position of each of the five factors: the rural environment, the farm, farmers, tourists, and facilitators. This model should be designed in such a way that the effect of each of the five factors is taken into account.
The typology table lists the sampled mountain villages (Asbariseh, Chesli, Salimabad, Latasht and Khoyadool) and the characteristics of their rural environments:
- The economy relies on livestock, and the rural settlements are scattered.
- Access to these areas is relatively difficult due to the condition of the roads, but they are attractive because of their cultural-natural attractions.
- These areas have a cold mountain climate, often accompanied by snow and rain in winter.
- The forest cover and the vast pastures provide suitable conditions for the presence of ranchers.
- The presence of small and large flocks of sheep and cattle, as well as wild horses, creates a special attraction.
- The villages of the mountainous areas narrate an example of the ancient culture and history of the Talesh ethnicity.
- The special architecture of the buildings, the coexistence with and adaptation to the climate, and the continuity of life based on the principles and beliefs of the past have formed a distinctive type of rural settlement.

Diversifying income sources is a factor that encourages farmers to engage in agritourism. Also, agritourism tourists like to enter rural farms and see all kinds of agricultural activities, shop, participate in agricultural activities, and learn.
Finally, agritourism facilitators should be mentioned. These are groups, organizations, institutions, laws, etc. related to agritourism that facilitate its creation and development in various ways. Therefore, with the presence of these factors, agritourism will be formed and will be a factor in rural development.
Research model
Tourists, with their behavioral, social and economic characteristics, drive the prosperity and development of tourism activities through their attitudes toward existing facilities and services, welfare-recreational facilities, the rural landscape, the quality of the destination environment, environmental security, and the acceptance of agritourist attractions.
"Rural", in the sense of the rural environment, refers to the totality of rural life and culture in agritourism. Agritourism tourists think about their destination from the moment they set out to travel, because they would like to see an attractive, high-quality place with facilities that meet their daily needs. Accordingly, the rural environment should offer infrastructure, accommodation and welfare facilities, historical-cultural assets, and natural and agritourist attractions so that it can attract tourists and keep them for a while.
But undoubtedly, the most important factor in the formation of the agritourism system in rural environment is farmers, a group that is directly related to agricultural activities and the nature in rural environment. This group is responsible for supply in the agritourism services market.
Verification of the optimal model of agritourism
After compiling the optimal model, 30 tourism experts were consulted to validate the model. The most important features measured in the model included transparency, regularity, flexibility, coherence, continuity, legitimacy, effectiveness and appropriateness.
Based on the opinion of the experts, the score of each model dimension is presented here. The model scored 8.86 out of 10 in terms of appropriateness. In terms of effectiveness, legitimacy, continuity, flexibility, and transparency, its scores were 8.60, 8.30, 8.11, 8.85, and 8.63, respectively. Overall, the average score of the proposed model was 8.76 out of 10, indicating that the model is acceptable to the experts.
DISCUSSION AND CONCLUSION
The results showed that Masal County is one of the important tourism target areas in Guilan, where the dominant type of tourism is mass tourism. Field surveys showed that this type of tourism cannot be of much use to the county, especially to rural areas, because it is not possible for all villagers to participate in tourism activities. It therefore seems necessary to provide conditions in which all villagers interested in working in the field of tourism can participate. In this regard, agritourism can be an effective and adaptable solution. Regarding the optimal model of agritourism in Masal county, the role of the five factors influencing the formation of agritourism should be mentioned. Studies have shown that in this county the villages of the plains and foothills can be considered the main centers of agritourism and accommodation for providing agritourism services. In these areas, tourists can be encouraged to stay longer through the provision of accommodation, gas stations, parking, medical service centers, food and handicraft stores, restaurants, local exhibitions, local markets, and so on. These areas should be planned in such a way that, after arriving in Masal, tourists can spend time, rest and eat, and then be directed to different areas to visit agritourist attractions. In this regard, villages such as Shalma, Taskooh, Gonzar, Sheikh Nishn, Bitam, Tabarsara and Imamzadeh Shafi are in a good position. Rural families should be encouraged to provide accommodation for tourists that meets a specific standard, and farmers engaged in agri-activities can be trained and prepared to receive tourists on the farm.
Local transportation should be provided for tourists visiting agritourism attractions in the villages of Masal. Its benefits include reducing the entry of personal cars into rural areas and the resulting traffic load, reducing human casualties (deaths from accidents) and financial losses, and providing a source of income for the villagers. This is especially suitable for villages in mountainous and forested areas, such as Chesli, Salimabad, and Khoidel. The idea of "skill learning houses" can be used to educate the rural community. In these houses, the villagers can learn any skill. Facilitating institutions, including banks, insurance companies and the departments of agriculture, livestock and fisheries, can provide such houses to promote and educate farmers in creating the components of agritourism.
According to the results of this research, the optimal model for agritourism in the study area is to design an agritourism package from the plains to the mountains, which can create many economic, social and environmental opportunities.
From an economic perspective, benefits such as increased income, job creation and start of small businesses, increased profits from the sale of agricultural products, creation of a source of supplementary income, and eradication of economic poverty can be achieved.
From the environmental perspective, it makes effective use of natural resources, protects natural habitats and ecosystems (natural heritage), uses surplus and unused farms, and also creates space to increase knowledge and awareness about the environment and farming through education and experience.
From a social point of view, it can lead to improvement of farmers' quality of life (creating social welfare), improvement of social security, empowerment of women farmers, maintenance of lifestyle, preservation of local rituals and traditions (cultural heritage), and social interaction with guests.
In fact, this form of tourism, as a key strategy for the revitalization and sustainable development of rural areas with a variety of uses in farms, helps maintain employment, and strengthen agricultural resources and lifestyle in villages and low-income areas. In addition, it prevents the migration of villagers to the cities of Masal and Bazaar Jomeh and other cities of Guilan province.
As a result, agritourism is an interactive, participatory, people-centered, entrepreneurial, environmentally oriented, green, and high-yielding strategy with executive guarantees and high economic benefits that can create complementary income sources for rural farmers in Masal county while preserving and adding value to the land. It can also be a good alternative to the region's current tourism model, namely mass tourism, because it can reduce the pressure to sell land by generating revenue and directing farms toward production and profitability.
Due to the nature of agritourism, most farming villagers will be able to participate in tourism activities in the form of this type of tourism, but this requires providing the infrastructure and conditions that are offered as below:
1-Identifying the trustee or the main trustees of agritourism in Masal county and determining their duties.
2-Unification of rules and regulations in line with the defined model of agritourism for Masal county.
3-Coordinated tourism management of the county and preventing tourists from invading the Ulusbelangah summer area by creating tourist centers in the plains and foothills and guiding tourists in different places in order to prevent excessive pressure on the biological capacity of environmental resources.
4-Comprehensive rural education in Masal county in the form of skill training houses.
5-Setting up local agritourism tours and using rural communities to guide and teach cultural and traditional principles to tourists (training local leaders) in the villages of Masal county.
6-Identifying the main centers of agritourism attractions in Masal county and creating a spatial network in order to properly manage agritourists.
7-Preparing a cadastral map, determining the boundaries of land ownership, and separating the lands of the villagers from the national lands.
8-Promoting the tourism model of agriculture, creating different attractions, and managing the behavior of tourists in order to stop the mass and unplanned tourism cycle in the region.
Accordingly, the appropriate model of agritourism in Masal county, Guilan province, is formed as a result of the relationship between five elements: the tourist, the rural environment, the farm, the farmer, and the facilitators. The role and function of each of these elements in the model is as follows:
Tourist: his "need" causes the formation of various attractions and services in agricultural tourism.
Rural environment: All agricultural tourist attractions are formed in the rural environment.
Farm: includes all agricultural, livestock and rangeland activities.
Farmer: includes the individual, socio-economic and cultural characteristics of the farmer.
Facilitators: Including all institutions, organizations, groups, celebrities, tools, laws and regulations, etc. that help agricultural tourism activities to be formed faster.
In other words, agritourism activities in Iran will be successful when the above five elements play their role well. | 2021-01-07T09:06:56.861Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "e576d59ed56ec8f562b92604bd440eb0c5abca58",
"oa_license": "CCBYNCSA",
"oa_url": "http://jas.sljol.info/articles/10.4038/jas.v16i1.9187/galley/6537/download/",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8d33eaf01e14303ff5c3b6693178604a28697331",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
3850121 | pes2o/s2orc | v3-fos-license | Occupational Exposure and Health Impairments of Formaldehyde on Employees of a Wood Industry
Background: Occupational exposure to formaldehyde may decrease white blood cell counts and change blood concentration. In this study, the influence of occupational exposure to formaldehyde on the number of white blood cells and on blood concentration was studied. Methods: This case-control study was conducted in June 2012 at the North Wood Factory, Golestan Province, Iran. The US-NIOSH method No. 2541 was used to determine the occupational exposure to formaldehyde of 30 production line workers (case group) and 30 administrative staff (control group). The number of white blood cells and the blood concentration were determined using the normal blood count method and related indices. Demographic features as well as symptoms of exposure to formaldehyde were collected using a standard questionnaire. Results: The occupational exposure of the case group ranged from 0.50 ppm to 1.52 ppm. The prevalence of all studied symptoms of formaldehyde exposure in the workers (2<median<5; range 1 to 5) differed significantly (P<0.001) from that in the administrative staff (median 1; range 1 to 4). The number of white blood cells in production line workers was not significantly different from that in administrative staff. The average blood concentration in the case group was significantly different from that in the control group (mean difference = 0.9 [95% CI: 0.40-1.39]; P=0.007). Conclusion: Occupational exposure to formaldehyde changed the blood concentration of the studied workers but did not change the number of their white blood cells.
Introduction
Exposure to toxic agents in various occupational environments is one of the most important issues in occupational health. 1 Formaldehyde, a single-carbon compound with the chemical formula H2C═O, is a highly reactive compound and a potent irritant. 2 Exposure to low levels (0.1 ppm) of formaldehyde irritates the eyes, nose and upper respiratory airways. 3,4 Exposure to high concentrations may result in nerve palsy, impairment of pulmonary function and asthma. 5 With prolonged exposure, cases of nasopharyngeal cancer 5 as well as leukemia 6,7 have been observed in humans. The known effect of formaldehyde in human cancer increases significantly if, in addition to nasopharyngeal cancer, leukemia is also considered. 8 A higher risk of lymphohematopoietic cancers, especially myeloid blood cancer, has been observed in industrial workers exposed to formaldehyde. 9 A possible relationship between lymphohematopoietic cancers and the level of exposure to formaldehyde has been reported. 10 The prevalence of myeloid leukemia is higher than that of other leukemias. 10,11 The question remains whether formaldehyde, with such a reactive profile, reaches the bone marrow and the primitive hematopoietic cells that are the targets in the case of cancer. 12,13 One of the clinical consequences of damage to primitive hematopoietic cells is a reduction in circulating red and white blood cells (WBC) as well as in the number of platelets. 14 Formaldehyde can mutate primitive cells, leading to gene mutations or broken chromosomes, which may result in cancer. 15-17 Inhalation of formaldehyde may damage the liver of some animal species. 18 In 1995, the U.S. Environmental Protection Agency (US-EPA) 19, and in 2004, the International Agency for Research on Cancer (IARC), classified formaldehyde as a group A "human carcinogen". 20,21 In 2011, the 12th Report on Carcinogens announced that prolonged exposure to formaldehyde can lead to nasopharyngeal cancer and a type of leukemia. 22,23 The American Conference of Governmental Industrial Hygienists (ACGIH) has not announced an 8-h occupational exposure limit for formaldehyde but has recommended a ceiling level of 0.3 ppm. 24 Previous studies have reported the exposure of workers in particleboard production lines to be from 0.1 ppm to 1 ppm 25,26, which is higher than the levels determined for the prevention of leukemia. Thus, in processes such as particleboard and plywood production, workers are likely to experience changes in the number of white blood cells and in their blood concentration. In this study, we examined the association between the WBC count and blood serum concentration of workers and their occupational exposure to formaldehyde in a wood factory.
Study participants
This case-control study was conducted in June of 2012 at North Wood Factory, Golestan Province, northern Iran. Cochran's sample size formula was used to calculate the sample size.
The α was set at 0.05 and the level of precision was 0.1. Thirty workers of the production line (case group) and 30 administrative staff (control group) participated in the study according to a set of criteria. Non-smoking, full-time workers with at least one year of experience in the studied factory and no infectious diseases affecting the WBC count or blood concentration were included in the study. Participants who did not fully cooperate and those unwilling to continue the study were excluded. Maintenance personnel and those who were not permanently exposed to formaldehyde were also excluded. Eight samples were excluded from the study because of confounding variables: three cases due to smoking, one case due to lung disease, one case due to renal disease, and three cases due to severe stress of the participant during the test.
To eliminate possible confounding parameters, the administrative staff of the same company, who were largely similar to the case group in terms of lifestyle and diet, were chosen as the control group.
Demographic variables, including age, work experience and smoking habits, health condition regarding the WBC count and blood serum, and symptoms related to formaldehyde exposure were collected with a standard questionnaire. The objectives of the study were explained to the participants before they began completing the questionnaire.
Sampling of formaldehyde
US-NIOSH method No. 2541 was applied for air sampling. For this purpose, personal SKC (UK) sampling pumps connected to a special sorbent tube (ST) through a polyethylene tube were used. All sampling pumps were calibrated at a flow rate of 0.1 L/min with a rotameter prior to air sampling. Sampling was conducted with volumes of 1 to 36 liters at various stations according to the recommended method. Air sampling was performed from 8:30 to 14:30 in the morning shift, when the maximum workload was expected. 27 A total of 60 air samples were taken from the workers' breathing zones. The sorbent tubes were removed from their circuit and capped after sampling. All samples were kept in covers and a special holder (ice jar), protecting them from moisture, shock and light while being transferred to the laboratory. Gas chromatography (GC 6890) with a flame ionization detector (FID) was used to analyze the samples. 27
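NIOSH method 2541 reports the mass of formaldehyde recovered from the sorbent tube; converting that mass and the sampled air volume into an exposure concentration is a short calculation. Below is a minimal sketch assuming standard conditions (25 °C, 1 atm, molar volume 24.45 L/mol); the example mass and volume are hypothetical, not measured values from this study.

```python
MW_FORMALDEHYDE = 30.03   # g/mol
MOLAR_VOLUME_L = 24.45    # L/mol at 25 degrees C and 1 atm

def concentration_ppm(mass_ug, air_volume_l):
    """Convert analyte mass (ug) and sampled air volume (L) to ppm."""
    mg_per_m3 = mass_ug / air_volume_l           # ug/L equals mg/m^3
    return mg_per_m3 * MOLAR_VOLUME_L / MW_FORMALDEHYDE

# Hypothetical sample: 55 ug of formaldehyde collected in 30 L of air
print(round(concentration_ppm(55, 30), 2))       # -> 1.49 ppm
```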
Blood sampling
Blood samples were taken one hour before the end of the working shift in the factory's health care center by a nurse (supervised by a medical doctor) from the workers' arm veins. Blood samples were refrigerated until tested. To measure the peripheral blood parameters, a Sysmex cell counter (Japan) was used. For the standard investigation of peripheral blood parameters, the common blood cell count method and the related indices were used. The blood volume required for cell counts was 2 ml. The blood was collected on ethylenediaminetetraacetic acid (EDTA) to avoid coagulation. 28 Moreover, to check white blood cell morphology, a blood smear was prepared on a slide from each sample, stained with Giemsa, and examined using a Zeiss microscope.
Ethical Issues
Ethical aspects were considered due to the invasive nature of some tests. For this purpose, after an explanation of the tests, including the purpose of the study plan, written consent was obtained from each participant prior to blood sampling. The provisions of the Declaration of Helsinki for medical research involving human subjects were observed. The proposal of the study was approved by the Ethical Committee of the Research and Science Branch of Islamic Azad University. The results of the tests were kept completely confidential.
Statistical analysis
Statistical analyses were performed using SPSS 16 for Windows (SPSS Inc., Chicago, IL, USA). The t-test was used to compare the mean age and working experience of workers in the case and control groups. The Mann-Whitney U-test was applied to compare the education levels of workers in the case and control groups. The prevalence of irritating symptoms in the case and control groups was compared with the Friedman test. Multiple regression analysis was performed to assess the statistical distribution of factors influencing the white blood cell counts and blood concentration in the studied groups. In all analyses, P<0.05 was considered significant.
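As an illustration of the two main comparisons, the sketch below runs an independent-samples t-test and a Mann-Whitney U-test in Python with SciPy. The data are synthetic samples drawn from normal distributions parameterized with the WBC means and standard deviations reported in the Results; they are not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic WBC counts (x10^3/ul) for 30 workers per group, using the
# reported group means and SDs as distribution parameters.
case = rng.normal(6.78, 1.49, 30)
control = rng.normal(7.07, 1.29, 30)

# Independent-samples t-test (used here for age, experience, blood concentration)
t, p_t = stats.ttest_ind(case, control)

# Mann-Whitney U-test (used for ordinal data such as education and symptom scores)
u, p_u = stats.mannwhitneyu(case, control)

print(f"t-test: t={t:.2f}, p={p_t:.3f}; Mann-Whitney: U={u:.0f}, p={p_u:.3f}")
```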
Demographic characteristics
The age of the case group ranged from 27 to 55 yr, with a mean (SD) of 37.23 (9.07) yr. The control group ranged from 25 to 58 yr, with a mean (SD) of 41.53 (7.86) yr. The largest number of participants in the case group (11 workers) was in the 20-30 age group. Among subjects in the control group, 14 were in the 40-50 age group. The t-test showed no significant difference between the average age of the case and control groups (P=0.055).
Working experience of the case group ranged from 2 to 22 years, with a mean (SD) of 10.57 (6.91) yr. The working experience of the control group ranged from 3 to 26 yr, with a mean (SD) of 15.27 (7.24) yr. The largest number of participants in the case group (12 workers) had less than 5 years of work experience. Among subjects in the control group, 9 had 15-20 years of work experience. The t-test showed a significant difference between the average working experience of the case and control groups (P=0.013).
The education levels of the case and control groups were compared with the Mann-Whitney U-test. The largest number of participants in the case group (16 workers) had a diploma. Among subjects in the control group, 11 had a Bachelor of Science (BS) degree. The results showed that the education levels of the groups were significantly different (P=0.001).
Symptoms of exposure to formaldehyde
The Mann-Whitney U-test showed that the average scores for the prevalence of symptoms of formaldehyde exposure in the workers (2<median<5; range 1 to 5) were significantly different (P<0.001) from those of the administrative staff (median 1; range 1 to 4). The average scores for the prevalence of symptoms in the case group were higher than those of the control group (Figure 1). The Friedman test also showed that the prevalence of irritating symptoms in both the case and control groups was significantly different (P<0.001). Figure 1 shows the significant differences in the prevalence of irritating symptoms in the case and control groups. The prevalence of irritating symptoms is compared in Table 1.
Exposure to formaldehyde
The occupational exposure of the control group to formaldehyde was 0.00 ppm, while the exposure of the case group ranged from 0.50 ppm to 1.52 ppm, with a median of 1.30 ppm. According to the results of the Mann-Whitney U-test, the average occupational exposure to formaldehyde in the control and case groups was significantly different (P=0.001).
The exposure level of all control samples (administrative staff) was lower than the 0.1 ppm level recommended for preventing carcinogenesis. All workers in the production line experienced exposure levels above the ceiling level of 0.3 ppm recommended by US-ACGIH. About 57% of the workers had an exposure level of 1.25-1.50 ppm, while about 17% and 13% had exposure levels of 1.00-1.25 ppm and 0.50-0.75 ppm, respectively.
The number of white blood cells
The number of WBC in the control group ranged from 4.90×10³/µl to 10.80×10³/µl, with a mean ± standard deviation of 7.07 ± 1.29×10³/µl. The white blood cell counts in the case group ranged from 4.70×10³/µl to 10.70×10³/µl, with a mean ± standard deviation of 6.78 ± 1.49×10³/µl. The statistical test showed no significant difference between the mean white blood cell counts in the case and control groups. The Pearson correlation coefficient calculated for the case group also showed no meaningful relationship between the exposure levels and the number of white blood cells (r=0.128; P=0.498). Table 2 shows the possible factors affecting the white blood cell counts in the case and control groups, including age, work experience and education level, evaluated using a multiple regression test. There was no significant relationship between these factors and the average number of WBC in either the case or the control group.
Blood concentration
The minimum, maximum and mean ± standard deviation of the blood concentration in the control group were 12.30 g/100 cm³, 15.60 g/100 cm³, and 14.25 ± 0.99 g/100 cm³, respectively. The minimum, maximum and mean ± standard deviation of the blood concentration in the case group were 12.70 g/100 cm³, 16.00 g/100 cm³, and 15.00 ± 0.91 g/100 cm³, respectively. The t-test showed that the average blood concentration in the case group was significantly different from that in the control group (mean difference = 0.9 [95% confidence interval (CI): 0.40-1.39]; P=0.007). The Pearson correlation coefficient calculated for the case group also showed a meaningful relationship between the exposure levels and blood concentration (r=-0.473; P=0.015). The statistical distribution of the factors influencing blood concentration, including age, work experience and education level, is shown in Table 3. Multiple regression showed no significant relationship between these factors and blood concentration in the studied groups.
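For clarity on how a mean difference and its 95% confidence interval are obtained from group summary statistics, the sketch below uses the pooled-variance (Student) approach. The input means are hypothetical round numbers chosen so the difference equals 0.9 g/100 cm³; they are not the study data.

```python
import math
from scipy import stats

def mean_diff_ci(m1, s1, n1, m2, s2, n2, conf=0.95):
    """Mean difference and CI from summary statistics (pooled variance)."""
    diff = m1 - m2
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)  # pooled variance
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    t_crit = stats.t.ppf(1 - (1 - conf) / 2, n1 + n2 - 2)
    return diff, diff - t_crit * se, diff + t_crit * se

# Hypothetical inputs: means 15.0 vs 14.1 g/100 cm^3, SDs 0.9 and 1.0,
# 30 participants per group.
print(mean_diff_ci(15.0, 0.9, 30, 14.1, 1.0, 30))  # ~ (0.9, 0.41, 1.39)
```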
Discussion
Participants in this study all worked in the same factory and were largely similar in terms of social and demographic characteristics such as gender, age, lifestyle and diet. However, the work experience and education level of the case and control groups were significantly different; the education level of the control group was significantly higher than that of the production line workers.
The workers of the production line (case group) experienced significantly higher levels of formaldehyde than the administrative staff (control group). The formaldehyde exposure level of all participants in the case group was higher than the ceiling level of 0.3 ppm recommended by US-ACGIH 24, and 90% of them were exposed to levels higher than 0.5 ppm. The number of WBC decreases with age. 29 Since work experience represents age in some way, the number of white blood cells is expected to decrease as work experience increases as well. In industrial plants, more highly educated personnel usually have supervisory and administrative duties and are expected to have less exposure to air pollutants. Thus, age, work experience and education level may be considered factors influencing the blood cell counts of workers exposed to formaldehyde.
Previous studies have reported the exposure level of workers in particleboard production lines to be from 0.1 ppm to 1 ppm 25,26, which is lower than the exposure levels of the workers measured in this study (0.50 ppm to 1.52 ppm). The measured levels of formaldehyde exposure for the case group were well above the level (0.1 ppm) determined for the prevention of leukemia. 28,30 Moreover, the high solubility of formaldehyde in water, as well as its irritation of the respiratory tract, eyes and mucous membranes, causes symptoms such as watery eyes, runny nose, coughing and wheezing when it is inhaled. In this study, these irritation symptoms were also examined. The prevalence of all irritation symptoms in the case group (workers of the production line) was significantly higher than in the control group.
Tearing had the highest, and chest pain the lowest, prevalence among the production line workers. These results are consistent with earlier results showing a high prevalence of eye irritation symptoms among exposed workers. 31 The number of white blood cells in adults ranges from 4.5×10³/µl to 10×10³/µl. Compared with this range, the number of white blood cells in the case group is shifted down slightly but is not significantly reduced compared with the control group.
The number of WBC could be considered an indicator of formaldehyde exposure level 32, but this is difficult to confirm at low exposure levels similar to those experienced in the present study. No significant relationship was observed between the numbers of WBC of the case and control groups.
The results of this study indicated that long-term exposure to low levels of formaldehyde does not change the number of WBC. This finding is in agreement with previous findings. 33 Exposure to formaldehyde significantly (P=0.0016) reduced the number of WBC of Chinese workers 34, which contradicts the results of the present study. In order to eliminate possible confounding factors, 8 cases with confounding variables were excluded from the analysis.
This study shows that formaldehyde exposure significantly increased the blood concentration of the workers, which is consistent with a previous study in China. 4 An increase in occupational exposure to formaldehyde has been associated with reduced blood viscosity. 4 Inhaled formaldehyde causes hypoxia, which decreases tissue oxygen. In response to this condition, the kidneys secrete the hormone erythropoietin, which stimulates the production of red blood cells in the bone marrow and consequently increases the amount of hemoglobin; as blood viscosity rises, the observed difference is created. 35,36 With higher exposure, however, the reactive power of the body is expected to decrease, leading to lower blood concentration, as formaldehyde exerts its effect mainly at the beginning of contact. 36 A limitation of this study is that the numbers of cases and controls were relatively small, and the findings are limited by the use of a cross-sectional design. This research has raised many questions in need of further investigation. Further work needs to be done to establish whether occupational exposure to formaldehyde affects human bone marrow cells. More broadly, research is also needed to determine the effect of formaldehyde exposure on differential white blood cell counts.
Conclusion
According to the results of this study, the white blood cell count is not an appropriate indicator of long-term exposure to formaldehyde. However, additional testing is necessary to select a reliable biological indicator of exposure to formaldehyde. One of the more significant findings to emerge from this study is that low-level formaldehyde exposure caused an increase in blood concentration.
"year": 2015,
"sha1": "0a2188b9375b10b866be0eff0df66a96df02eda8",
"oa_license": "CCBY",
"oa_url": "https://hpp.tbzmed.ac.ir/PDF/HPP_2940_20150330233149",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0a2188b9375b10b866be0eff0df66a96df02eda8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Exercise for sarcopenia in older people: A systematic review and network meta-analysis
Abstract Background Sarcopenia is a serious public health concern among older adults worldwide. Exercise is the most common intervention for sarcopenia. This study aimed to compare the effectiveness of different exercise types for older adults with sarcopenia. Methods Randomized controlled trials (RCTs) that examined the effectiveness of exercise interventions on patient‐important outcomes for older adults with sarcopenia were eligible. We systematically searched MEDLINE, Embase and Cochrane Central Register of Controlled Trials via Ovid until 3 June 2022. We used frequentist random‐effects network meta‐analyses to summarize the evidence and applied the Grading of Recommendations, Assessment, Development, and Evaluations framework to rate the certainty of evidence. Results Our search identified 5988 citations, of which 42 RCTs proved eligible with 3728 participants with sarcopenia (median age: 72.9 years, female: 73.3%) with a median follow‐up of 12 weeks. We are interested in patient‐important outcomes that include mortality, quality of life, muscle strength and physical function measures. High or moderate certainty evidence suggested that resistance exercise with or without nutrition and the combination of resistance exercise with aerobic and balance training were the most effective interventions for improving quality of life compared to usual care (standardized mean difference from 0.68 to 1.11). Moderate certainty evidence showed that resistance and balance exercise plus nutrition (mean difference [MD]: 4.19 kg) was the most effective for improving handgrip strength (minimally important difference [MID]: 5 kg). Resistance and balance exercise with or without nutrition (MD: 0.16 m/s, moderate) were the most effective for improving physical function measured by usual gait speed (MID: 0.1 m/s). Moderate certainty evidence showed that resistance and balance exercise (MD: 1.85 s) was intermediately effective for improving physical function measured by timed up and go test (MID: 2.1 s). High certainty evidence showed that resistance and aerobic, or resistance and balance, or resistance and aerobic exercise plus nutrition (MD from 1.72 to 2.28 s) were intermediately effective for improving physical function measured by the five‐repetition chair stand test (MID: 2.3 s). Conclusions In older adults with sarcopenia, high or moderate certainty evidence showed that resistance exercise with or without nutrition and the combination of resistance exercise with aerobic and balance training were the most effective interventions for improving quality of life. Adding nutritional interventions to exercise had a larger effect on handgrip strength than exercise alone while showing a similar effect on other physical function measures.
Introduction
In geriatric research and clinical settings, sarcopenia is a major public health issue among older adults. 1 The prevalence of sarcopenia increases with age, ranging from 5-13% in those aged 60-70 years to 11-50% in those 80 years and older. 2 A systematic review reported that the prevalence of sarcopenia varies by sex and among different settings: 12.9%, 26.3% and 29.7% for men and 11.2%, 33.7% and 23.0% for women in the community, nursing homes and hospitals, respectively. 3 According to a conservative estimate, more than 50 million people are now affected by sarcopenia, which is predicted to rise to 200 million in the next 40 years. 4 In recent years, the direct expense of sarcopenia has accounted for 1.5% of overall medical costs. Sarcopenia is associated with poor quality of life. 5,6 Older adults with sarcopenia are at a higher risk of many adverse outcomes, including falls, 7,8 fractures, 8 disability, 7 hospitalization 9 and death. 10,11 There are no specific drugs approved to treat sarcopenia, and physical exercise is the most effective intervention for sarcopenia. [12][13][14] Evidence-based clinical practice guidelines usually provide strong recommendations for physical activity as a primary treatment for sarcopenia. 15 In practice, exercise is the fundamental intervention for sarcopenia, but evidence for the most effective type of exercise is conflicting. [16][17][18][19][20] However, exercise programmes for sarcopenia vary widely in type (resistance, aerobic, balance training or multicomponent, etc.), and the best types of exercise for this population have not been established because the effect sizes of different exercise types on patient-important outcomes are unclear. 21 For example, one systematic review proved that resistance training had positive effects on body fat mass, handgrip strength, knee extension strength, gait speed and the timed up and go (TUG) test, 22 whereas another review reported that aerobic exercise was most effective to improve muscle strength and physical performance. 23 Network meta-analysis (NMA), also known as mixed-treatment comparison or meta-analysis of multiple treatment comparisons, provides methods to compare and rank the effect sizes of different exercise types for sarcopenia by estimating direct and indirect comparisons. 24 Although we identified two previously published NMAs, 23,25 one review reported the effects of mixed exercise interventions, without further classification of the specific type of exercise. 25 The two reviews and NMAs did not provide the overall quality of evidence. 23,25 There is one large randomized controlled trial (RCT) 26 including 1519 older adults with sarcopenia available but was not included in the two NMAs. Moreover, our team previously conducted an umbrella review of systematic reviews trying to summarize the evidence for exercise as a treatment for sarcopenia and found that the quality of existing systematic reviews was low and, crucially, did not report on quality of life or all-cause mortality. 27 Therefore, the objective of this study was to conduct a systematic review and NMA of RCTs to compare the effect of different types of exercise on patient-important outcomes among older adults with sarcopenia. This information is critical for informing clinical practice guidelines on the optimal exercise interventions for older people living with sarcopenia.
Protocol registration
This systematic review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 reporting guideline and its extension statement for NMA (PRISMA-NMA) 28,29 and was registered in PROSPERO (CRD42021278038).
Guideline panel involvement
This study supported a clinical practice guideline for the diagnosis and treatment of sarcopenia. A guideline panel composed of geriatricians, endocrinologists, kinesiologists, general internists, dietitians, cardiologists and methodologists provided critical oversight for this study. The panel reviewed the protocol, identified the population, formulated the clinical questions and selected and ranked patient-important outcomes.
Search strategy
We searched MEDLINE, Embase and Cochrane Library (Cochrane Central Register of Controlled Trials) via Ovid until 3 June 2022. We cross-checked the reference lists of the key reviews. We formed the searching strategy by combining the keywords and Medical Subject Headings (MeSH) terms, including sarcopenia, exercise, physical activity and RCTs (see details in Appendix S1).
Eligibility criteria
We included RCTs with parallel arms if they compared any type of exercise with any type of nutrition, placebo or usual care in older adults (age ≥60 years) with sarcopenia. We did not restrict the diagnostic assessment to a specific criterion and followed the study-reported definition of sarcopenia, including but not limited to the European Working Group on Sarcopenia in Older People (EWGSOP) and the Asian Working Group for Sarcopenia (AWGS). The criteria for sarcopenia at least included low muscle mass or low muscle strength/poor physical performance. We decided to include only studies published in English for feasibility, and because most high-quality RCTs are published in English. Studies published in some Chinese journals did not provide details of randomization, and most of these studies are likely not real RCTs 30 ; moreover, a systematic review 31 noted that restricting to the English language probably would not introduce systematic bias in treatment effect estimations in the conventional medicine field. We also excluded cross-over trials. Two reviewers independently performed the title/abstract screening and then screened the full-text manuscripts using EndNote X9 (Clarivate Analytics, Philadelphia, PA, USA). We resolved discrepancies by discussion and, if needed, by consulting a third reviewer.
Data extraction
Two researchers independently extracted the data using a standardized form, with a third researcher further checking the extracted data. We resolved disagreements by discussion and extracted the following information: (1) study characteristics (publication year, author, region and setting, sample size, diagnostic criteria and severity of sarcopenia, follow-up and treatment duration, and treatment strategy); (2) patients' characteristics (age and sex); and (3) outcome data (mean and standard deviation of results for all continuous outcomes, proportion or event rates for binary outcomes). When multiple follow-up times for each outcome measure were recorded in eligible studies, we used the data with the longest follow-up period. We privileged intent-to-treat (ITT) analysis results over per-protocol results. When an ITT analysis was not available for most continuous outcomes, we used data from the per-protocol analysis. For missing data in reported outcome measures, we calculated the required effect size in our analysis. For example, if eligible studies had reported the mean and standard deviation before and after the intervention, we followed the formulas recommended by the Cochrane Handbook to estimate the missing absolute outcome change (mean difference [MD]) using the baseline and post-treatment data. 32 When we could not obtain all the information required by the formula, we excluded that study from the outcome-specific analysis.
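As a concrete illustration of the imputation step just described, the Cochrane Handbook change-from-baseline formulas can be written as below; note that the correlation Corr between baseline and post-treatment scores is an assumed input (typically borrowed from similar trials), not a value reported by the included studies:

```latex
\mathrm{MD} = \bar{X}_{\mathrm{post}} - \bar{X}_{\mathrm{base}},
\qquad
SD_{\mathrm{change}} = \sqrt{SD_{\mathrm{base}}^{2} + SD_{\mathrm{post}}^{2}
 - 2\,\mathrm{Corr}\, SD_{\mathrm{base}}\, SD_{\mathrm{post}}}
```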
Outcomes
The guideline panel judged and rated the patient-important outcomes as follows: (1) critical outcomes: all-cause mortality, quality of life, falls, any adverse events, muscle strength (handgrip strength) and physical performance (usual gait speed, TUG test and five-repetition chair stand test) and (2) important but surrogate outcomes: knee extension strength, maximal gait speed and muscle mass (appendicular skeletal muscle mass index [ASMI], skeletal muscle mass index [SMI], appendicular skeletal muscle mass [ASM], fat-free mass and fat mass, and skeletal muscle mass [SMM]). We adopted the study-reported definition for these outcomes.
Risk-of-bias assessment
Two reviewers independently assessed risk of bias, with adjudication by a third reviewer using a modified Cochrane risk-of-bias tool 33 for assessing risk of bias in randomized trials, which includes random-sequence generation, allocation concealment, blinding, missing outcome data and selective reporting of outcomes. Each domain was answered with 'definitely yes' (low risk of bias), 'probably yes', 'probably no' or 'definitely no' (high risk of bias). The major reason to choose the modified version of the Cochrane risk-of-bias tool is that existing instruments frequently include items that do not address the risk of bias. 34 We judged an overall high risk of bias if any domain had a high risk of bias.
Data analysis
This study performed a frequentist NMA with a graph-theoretical method using the R package netmeta. If eligible studies reported outcomes (quality of life and knee extension strength) measured by different scales or instruments/units, we used the standardized MD (SMD) to pool the improvement following the intervention. For other outcomes, where effect sizes were available in the same unit, we used the MD to pool the effects of the interventions. Treatment effect heterogeneity was defined by the generalized method of moments estimate of variance. We used forest plots and league tables to display the network estimates and the P score to rank the interventions. Cochran's Q was used to assess global and local statistical heterogeneity. To examine the network loop structure and assess inconsistency, we used the node-splitting method. We assessed potential incoherence by calculating the ratio or difference of direct and indirect estimates and the corresponding 95% confidence intervals (CIs) as well as the P value for inconsistency. We also evaluated whether there was a clinically important difference between direct and indirect estimates by comparing the overlap of the point and interval estimates. We used the comparison-adjusted funnel plot and Begg's and Egger's tests to detect publication bias.
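To sketch how the P score ranks interventions, the toy example below averages, for each intervention, the one-sided probabilities of beating every competitor. The effect sizes, standard errors and the independence assumption for contrasts are illustrative placeholders only; the netmeta package computes these quantities from the full network covariance:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical network estimates vs. usual care (SMD) and standard errors.
treatments = ["usual care", "resistance", "resistance+nutrition"]
theta = np.array([0.00, 1.11, 1.07])  # placeholder effect sizes
se = np.array([0.00, 0.29, 0.43])     # placeholder standard errors

n = len(theta)
p_better = np.zeros((n, n))
for j in range(n):
    for k in range(n):
        if j == k:
            continue
        # SE of the contrast, assuming independent estimates (a
        # simplification of the true network covariance structure).
        se_jk = np.sqrt(se[j] ** 2 + se[k] ** 2)
        p_better[j, k] = norm.cdf((theta[j] - theta[k]) / se_jk)

# P score: mean probability of being better than each competitor;
# higher is better when a larger SMD means greater benefit.
p_score = p_better.sum(axis=1) / (n - 1)
for name, p in zip(treatments, p_score):
    print(f"{name:>22}: P score = {p:.2f}")
```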
We also conducted six subgroup analyses, including (1) sex (female and male; large effect in female group), (2) setting (hospital and community; large effect in hospital group), (3) co-obesity or not (obesity and non-obesity; large effect in obesity group), (4) duration (intervention over 6 months and within 6 months; large effect in over 6 months group), (5) nutrition (antioxidants vs. amino acid supplements vs. protein supplements vs. comprehensive nutrition; large effect in comprehensive nutrition group) and (6) diagnostic criteria for sarcopenia (AWGS, EWGSOP or other criteria; large effect in AWGS or EWGSOP criteria). If there was a positive result, we used the Instrument for assessing the Credibility of Effect Modification Analyses (ICEMAN) 35 to assess the credibility of the subgroup effect.
To assess whether the study's effect was important for patients, we used the minimally important difference (MID) for important sarcopenia outcomes. The MID for grip strength, usual walking speed, the five-repetition chair stand test and the TUG test was 5.0 kg (grip strength), 36 0.10 m/s (usual walking speed), 37 2.3 s (chair stand test) 38 and 2.1 s (TUG test), respectively.
Assessment of evidence certainty
We followed the Grading of Recommendations, Assessment, Development, and Evaluations (GRADE) method to rate the certainty of the evidence for direct, indirect and network estimates as high, moderate, low and very low certainty. Seven issues were considered for rating down the certainty, including the risk of bias, inconsistency, indirectness, publication bias, intransitivity, incoherence and imprecision. 40,41 This study adopted the minimally contextualized framework to rate the imprecision and draw conclusions from an NMA. [42][43][44] We used the null effect as the decision threshold and usual care as the reference group. The interventions were categorized into three groups: among the most effective, intermediately effective and among the least effective, as well as the high/moderate certainty and low/very low certainty groups. 42
Role of the funding source
The funder of this study had no role in the study's design, data collection, data analysis, data interpretation, report writing or decision to submit for publication.
Description of included studies
We identified 5988 records for initial screening and 120 records for full-text screening. Of them, 42 RCTs that included 3728 older adults proved eligible (Figure 1). The median age was 72.9 (inter-quartile range [IQR]: 69-79.5) years, the median female proportion was 73.3% (50-100%), the median length of follow-up was 12 (IQR: 12-16) weeks and the duration of treatment in the trials ranged from 8 to 144 weeks (Appendix S3). Nine studies (20.9%) had a high risk of bias. Figure 3 shows the league table for quality of life, and Table 1 shows the summary of findings. Figure 4 presents the categorization of interventions from among the best to among the worst when compared with usual care and the certainty of the evidence on main outcomes; other outcomes are in Appendix S5.7. We presented the results of the subgroup analyses in Appendix S7 and the certainty of evidence for direct, indirect and network estimates in Appendix S6.1.
All-cause mortality
One study 26 reported on all-cause mortality.

Quality of life

The network estimates for quality of life compared with usual care (SMD, 95% CI), with the certainty labels reported alongside them, were as follows:
• Resistance: the quality-of-life score was on average 1.11 SDs (0.54 to 1.68) higher than in the usual-care group (high certainty); resistance plus nutrition (indirect estimation): 1.07 SDs (0.23 to 1.91) higher.
• Resistance and balance (1 RCT; 54 participants): 0.02 SDs (−0.55 to 0.58) higher (moderate certainty); resistance and balance plus nutrition (indirect estimation): 0.36 SDs (−0.26 to 0.98) higher.
• Resistance and aerobic (1 RCT; 77 participants): −0.07 SDs (−0.52 to 0.38) (moderate certainty); resistance and aerobic plus nutrition (1 RCT; 73 participants): 0.12 SDs (−0.34 to 0.58) higher.
• Resistance and aerobic and balance (2 RCTs; 130 participants): 0.68 SDs (0.32 to 1.04) higher.

Physical function

Moderate certainty evidence showed that resistance and balance exercise was intermediately effective for improving physical function measured by the TUG test (MD: −1.85 s, 95% CI: −3.22 to −0.49), and the CIs of the effect size crossed the MID threshold (2.1 s). Four trials, including 227 patients, reported on the five-repetition chair stand test. High certainty evidence showed that resistance exercise combined with balance or aerobic training was intermediately effective for improving physical performance measured by the chair stand test. The 95% CIs of these effect sizes (MD: around −1.70 s for exercise alone and −2.28 s for adding nutrition to resistance and aerobic exercise) cross the pre-set MID threshold (2.3 s) (Figure 4 and Appendix S5.3).
Any adverse events
Seventeen studies reported no adverse events associated with the intervention. Falls were recorded in 80 of the 605 (13.2%) participants in the multicomponent intervention group and 49 of the 600 (8.2%) participants in the lifestyle education group (RR: 1.62, 95% CI: 1.16 to 2.27). 26 One study 45 reported no fall associated with the intervention. A study published in the BMJ 26 in 2022 reported that 337 of the 605 (55.7%) participants in the intervention group and 297 of the 600 (49.5%) participants in the lifestyle education group experienced at least one adverse event (including any adverse event defined by the trial) during the trial (RR: 1.13, 95% CI: 1.01 to 1.25).

Figure 4: Summary of effects of interventions on critical outcomes. We categorized the interventions and rated the certainty of outcomes by whether the intervention was better or worse than usual care and some other interventions (the 95% confidence interval [CI] not crossing the null effect). The best, intermediate and worst categories show the effect for each intervention, whereas the certainty of evidence shows whether the effect is trustworthy or not. Bold text represents statistical significance. MD, mean difference; SMD, standardized mean difference.
See Appendix S5.7 for other outcomes.
Subgroup analyses
We used meta-regression to examine the effects of subgroups and did not identify any subgroup effects except for settings and sex for some outcomes (Appendix S7.1).
Principal findings
This systematic review and NMA is the most thorough examination of currently available evidence on exercise interventions in sarcopenic older adults. We analysed direct and indirect comparisons from 42 RCTs that compared multiple exercise intervention arms in ~3728 older people with sarcopenia. We found that adding nutritional interventions to exercise had little additional effect on quality of life and physical performance (such as usual gait speed, the TUG test and the chair stand test). Still, adding nutritional interventions improved handgrip strength compared with exercise alone in terms of both quality of evidence and effect size. With respect to the optimal type of exercise, resistance exercise alone has the largest effect on quality of life; however, it is better to add balance or aerobic training to resistance exercise to improve other physical function measures.
Strengths and limitations
In this review, we conducted a broad search that included the most comprehensive synthesis of evidence to date on exercise for older adults with sarcopenia. A nationwide multidisciplinary guideline panel contributed to formulating the review questions, subgroup analyses and identifying patient-important outcomes. This review included a considerable sample size of older adults with sarcopenia. We used the GRADE framework to assess the overall quality of evidence and presented our main findings according to GRADE guidance for NMA. 42,46 The major limitations of this review are the limited currently available evidence on all-cause mortality and the inconsistency in the definition of adverse events across trials. Although we included a considerable sample size of older adults with sarcopenia, only a few eligible studies were included in the analyses for some specific interventions and outcomes. For example, although nine studies in total reported quality of life, only one study provided direct comparisons for almost every intervention. Four studies in total reported the chair stand test, and only one study provided direct comparisons of each intervention. In this review, we found 42 eligible studies with 3728 participants, but we did not further explore grey literature and contact experts to review the search strategy, which is one of the limitations of this review. When interpreting the results, we should consider the heterogeneity in participants across eligible studies, which include various diagnostic criteria for sarcopenia and some participants also with osteoporosis or receiving dialysis as comorbidity. In addition, we used a modified Cochrane risk-of-bias tool to assess the risk of bias, which was not formally validated. A number of credible alternatives are available for assessing risk of bias. For many years, Cochrane RoB 1.0 was widely used. It does, however, have an important limitation: the 'unclear' option was widely used and is uninformative.
As it turns out, information is usually available that demonstrates that, for blinding, reviewers can make accurate inferences even when authors' statements regarding blinding are not completely explicit. 47 Thus, response options that include 'probably yes' and 'probably no' are desirable and are included in the revised RoB 1.0 we used in our study. Cochrane RoB 2 recognized this issue and includes the 'probably yes' and 'probably no' options. However, the revised Cochrane RoB 2 has shown low interrater reliability and challenges in its application, with no demonstration that it improves the validity of risk-of-bias assessment beyond RoB 1.0. For these reasons, we chose the revised RoB 1.0 to address risk of bias in our study.
Comparison with other studies
Recently, two other systematic reviews with NMAs on interventions for sarcopenia were published, in 2021 25 and 2022. 23 Wu et al. reported that a comprehensive exercise intervention has beneficial effects on muscle strength (handgrip strength) and physical performance (dynamic balance). 25 This study was limited to participants over the age of 65, which yielded 26 eligible studies with 2561 participants. In their review, the definition of sarcopenia was based on only one criterion (only muscle mass, only muscle strength, muscle mass and muscle strength, or physical performance). Furthermore, this study used a broader classification of interventions, which makes it difficult to draw conclusions about specific exercise types; comprehensive exercise included whole-body vibration, resistance exercise, mixed exercise and other types of exercise, with substantial clinical heterogeneity. In the review by Negm et al., 23 aerobic exercise was the most effective intervention to improve muscle strength and physical performance, and resistance combined with aerobic exercise was suggested as the most effective intervention for improving muscle mass, muscle strength and physical performance, which yields conclusions different from our review. Our review only included studies with older adults. In contrast, the review conducted by Negm et al. 23 did not restrict its population to older people, with some participants aged <55 years, which may be one of the reasons for the discrepancies between the results of the two reviews.
In addition, the clinical practice guideline published by Dent et al. 15 recommended that prescribed exercise with resistance-based training improved muscle strength, skeletal muscle mass and physical function (grade: strong recommendation, moderate certainty of evidence). However, most of the evidence behind this recommendation comes from two background meta-analyses published in 2014 18 and 2017, 16 which included relatively few RCT studies. Our study, with expanded sample size, found that both resistance exercise and resistance plus nutrition were the most effective intervention for improving quality of life and handgrip strength and the intermediately effective intervention for usual gait speed with effect sizes that may exceed the MID threshold. Further, we found that adding balance training to resistance exercise is the most effective method for improving most physical function measures, such as usual gait speed, TUG test and chair stand test. The finding is consistent with single RCTs. For example, Liang et al. 45 conducted an RCT of patients with sarcopenia aged 80-99 years and confirmed that balance exercise plus resistance exercise significantly improved usual gait speed, handgrip strength and short physical performance battery (SPPB) scores compared to resistance exercise alone. Runge et al. 48 explored the effects of a balance training programme alone compared to a strength training programme. The results showed that balance training was more effective in increasing muscle strength as well as achieving muscular equalization, which may partially explain why adding balance training to resistance exercise seems more favourable than resistance alone.
Muscle mass decreases with age, and strength and power decrease as well. After age 30, the rate of muscle mass decline is ~3-8% per decade, and it is even more rapid after age 60. 49,50 Muscle loss, strength loss and function loss in older people are fundamental causes of disability. A large, randomized trial 26 demonstrated that a multicomponent intervention (exercise and nutritional counselling) could reduce the incidence of mobility disability for people aged 70 years or older with frailty and sarcopenia. Participants in the multicomponent intervention arm also experienced handgrip strength or muscle mass reductions over 36 months of follow-up. These results indicate that multicomponent interventions may not be able to compensate for the loss of muscle and function that occurs over years in older adults. Early targeted interventions (such as resistance exercise alone or combined with balance or aerobic exercise) may be necessary for mitigating later-life muscle and functional loss among older adults.
Conclusion
In conclusion, high or moderate certainty evidence shows that resistance exercise with or without nutritional intervention and the combination of resistance and balance or aerobic exercise are the most effective interventions for improving quality of life in older adults with sarcopenia. Adding nutritional interventions to exercise had a larger effect on handgrip strength than exercise alone, while showing an effect on other physical function measures similar to exercise alone. Moderate certainty evidence showed that adding balance training to resistance exercise was the most effective intervention for improving physical function measures. These findings can be used to guide the optimal exercise prescription for improving patient-important outcomes among older adults with sarcopenia.
"year": 2023,
"sha1": "89a0cd9fdea648f70ad5e4b0d61c24d0f14a80d0",
"oa_license": "CCBYNCND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/jcsm.13225",
"oa_status": "GOLD",
"pdf_src": "Wiley",
"pdf_hash": "37c60cfaea44810f46e26fd2441c318a5b760c38",
"s2fieldsofstudy": [
"Medicine",
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Experimental and numerical investigation of flow patterns in shallow rectangular reservoirs with symmetrically positioned inlet and outlet channels
Shallow flows correspond to turbulent flows whose horizontal dimensions are considerably larger than the vertical one. In Hydraulic Engineering, they refer, e.g., to stormwater basins, stabilization ponds for wastewater treatment and aquaculture tanks. Since they involve low flow velocities, a continuous settling process often affects such shallow reservoirs. Therefore, it is important to expand the knowledge about the influence of reservoir geometry on the hydrodynamic behavior and the sedimentation tendency. This paper aims to analyze flow patterns in a rectangular reservoir with symmetrically positioned upstream and downstream channels, taking into account three different flow rates under steady flow regime (0.50, 1.25 and 3.40 L/s). Experimental tests were performed in a laboratory prototype, consisting of a 3.0 m long and 2.0 m wide reservoir, with a maximum depth of 0.30 m. In addition, the WOLF 2D software was applied for the numerical modeling of all variants. Experimentally, a symmetrical hydrodynamic behavior was observed only for the lowest flow rate, while the flow pattern was asymmetrical for the other cases. On the other hand, the numerical model indicated hydrodynamic symmetry for all scenarios.
INTRODUCTION
Shallow flows are defined as layered turbulent flows bounded by a bottom surface and by another surface in contact with the atmosphere (free surface), characterized by horizontal hydraulic dimensions significantly greater than the vertical one (JIRKA; UIJTTEWAAL, 2004).
The study of shallow flows involves a range of applications in the Hydraulic Engineering domain and its respective areas of interest. Among them, the study of the hydrodynamics and morphodynamics of shallow reservoirs can be highlighted, such as: (1) detention basins, often employed in urban drainage projects for flood control (PERSSON; WITTGREN, 2003; JANSONS; LAW, 2007); (2) stabilization ponds of sewage systems, whose sedimentation efficiency must be high so that they perform adequately (OLUKANNI; DUCOSTE, 2011; CAMNASIO, 2012; LI et al., 2013); (3) aquaculture tanks, which demand a correct understanding of their hydraulic behavior, since they are associated with the development of fish, algae and crustacean cultures (MASALÓ, 2008); as well as many other water storage systems susceptible to sedimentation.
All these examples of shallow reservoirs are inevitably subject to some degree of sedimentation, since they alter the morphodynamic balance of the original watercourse, modifying the hydraulic characteristics of the flow and, consequently, its sediment transport capacity (KANTOUSH, 2008; CAMNASIO et al., 2013).
The definition of the design criteria for the conception of shallow reservoirs depends on their objectives. When their use is related to irrigation, flood control or hydroelectric generation, the sedimentation process should be minimized. On the other hand, if they are used as hydraulic sedimentation structures, the decantation of solids must clearly be maximized (DUFRESNE et al., 2010b).
As highlighted by Dufresne et al. (2010b), the prediction of sediment deposition in such reservoirs is not yet fully understood, given the complex influence of reservoir geometry, hydraulic conditions, sediment characteristics and solid inflows (sediment supply). Thus, it is a challenge for Hydraulic Engineering to understand objectively how the flow developed in such reservoirs interferes with sedimentation and, consequently, with the operation of these systems.
Visual investigation tests using dye injection, carried out in a 10.40 m long and 0.985 m wide glass flume, indicated the existence of four flow patterns (as well as an unstable, transitional one), as shown in Figure 1 (DUFRESNE et al., 2010a). For this purpose, a 2.00 m long and 0.285 m wide contraction was installed at the entrance of the channel (positioned inside it), simulating an inlet channel. In addition, another 1.00 m long and 0.285 m wide contraction was installed on the downstream side to characterize the outlet channel. These contractions were movable along the longitudinal direction, thus allowing adjustment of the length of the central part, which represented the reservoir. Basin lengths of 1.80, 2.00, 2.20 and 7.00 m were evaluated.
As outlined in Figure 1, the flow patterns were identified as a function of hydrodynamic symmetry versus asymmetry, the number of recirculation zones and the location of reattachment points. According to the experimental observations of Dufresne et al. (2010a,b), flow pattern S0 (Figure 1d) was observed for comparatively short reservoirs. A straight jet from upstream to downstream, with two symmetrical recirculation zones along either side of the jet, characterizes this flow pattern. In that experiment, this flow pattern occurred when the length of the channel (basin) was adjusted to 1.80 m (parameter L), considering that its width remained fixed in all evaluated scenarios (0.985 m).
When the length of the reservoir was extended, an asymmetrical flow field was observed, resulting in flow pattern A1 (Figure 1c). For the tests performed, this flow pattern was observed when the channel length was changed to 2.20 m. For intermediate reservoir lengths between the two above, a transition flow pattern could be seen (A1/S0), oscillating between the symmetrical flow pattern (S0) and the asymmetrical flow pattern (A1) (DUFRESNE et al., 2010b; CAMNASIO; ORSI; SCHLEISS, 2011). In the case of Dufresne et al. (2010b), the A1/S0 pattern was verified when the channel length was set to 2.00 m. However, for longer reservoirs, the experiments showed an asymmetrical flow pattern with two reattachment points (A2, Figure 1b). This occurred in the Belgian experimental studies when the channel length reached 7.00 m. Finally, for even longer reservoirs, an additional recirculation zone developed in the downstream part of the reservoir (flow pattern A3, Figure 1a). This latter flow pattern, however, was not evaluated experimentally in that study, although it has been cataloged.
This paper aims to characterize flow patterns using physical and numerical experiments to analyze the effect of different boundary conditions in a particular flat and shallow rectangular reservoir with symmetrically positioned upstream and downstream channels.
Physical modeling
Laboratory tests have been carried out in the Hydraulic Research Center of the Federal University of Minas Gerais (UFMG), Brazil.A 3.0 m long, 2.0 m wide and 0.30 m deep flat rectangular reservoir (Figure 2) was constructed of sheet metal covered by epoxy coating, in order to protect it against oxidation.
The upstream and downstream channels were also constructed of sheet metal. Both channels are 1.0 m long, 0.125 m wide and 0.30 m deep, corresponding to the same depth as the reservoir.
The tests contemplated three different hydraulic conditions, under steady flow regime (Table 1). For tests 1, 2 and 3, the aforementioned two channels were positioned aligned with the longitudinal basin axis, as seen in Figure 2 (downstream channel).
In order to control the water depth within the reservoir, a sill was installed at the end of the outlet channel, screwed to the structure, with a thickness of 2 mm and height adjusted according to the required water depth.
Two pumping systems were assembled: one to discharge the inflow from an auxiliary reservoir and another to pump the outflow back to the same reservoir. The first of these pumping systems was a Schneider three-phase centrifugal motor pump, model BC-92, 1.0 HP, with a maximum rotational velocity of 3500 rpm. The adjustment of the reference flow rates in each test was made possible by the installation of a Weg frequency inverter, model CFW 09, which made it possible to regulate the rotational velocity of the motor pump assembly. For this purpose, the pump rotation corresponding to each desired flow rate was calibrated. Flow rates were measured at the outlet section of the downstream channel and obtained by volumetric measurement, using a bucket and a stopwatch. To ensure the correct establishment of the pump rotational velocity to liquid flow rate ratio, the discharge was only checked after a permanent flow condition was reached, i.e. when there was no further variation of the water level in the basin. This was verified by means of successive measurements on millimeter rulers installed in different places inside the reservoir. The time required to reach this stationary condition was usually over 5 minutes. The rotational velocity of the pumping system corresponding to each required flow rate was only defined after repeated measurements. During the experiments, the outflow at the outlet section of the downstream channel was returned to an auxiliary reservoir using the aforementioned second pumping system, a Weg motor pump with 5.0 HP and a rotational velocity of 3485 rpm.
It is important to mention that a few 20.0 mm diameter PVC pipes were placed side by side within the upstream channel to uniformly distribute the flow at the entrance of the basin, thereby reducing the turbulence that could be generated in that area of interest.
All experiments were filmed (usually for 8 to 12 minutes) for subsequent capture and processing of frames for the determination of surface velocities based on the LSPIV (Large Scale Particle Image Velocimetry) technique (WEITBRECHT; KÜHN; JIRKA, 2002; KANTOUSH, 2008). For this purpose, a Logitech webcam, model C920, was coupled to a 4.0 m high metallic stand, positioned over the test basin to record high-resolution (HD) videos with up to 1080 pixels.
This installation height of the camera was properly calculated to enable the capture of representative images of the entire rectangular area of the reservoir. The filming used a portable computer and the AMCap software, version 9.22.
To determine the superficial flow velocity by means of the LSPIV technique, tracer particles were released continuously (one particle injected every second for about 3 to 4 minutes) and uniformly within the middle portion of the upstream channel. These tracers are essential for the calculation of velocity vectors by the algorithm compatible with the measurement technique adopted. Thus, PET plastic bottle caps (black, to contrast with the blue bottom of the reservoir) were used, due to their ease of handling and the fact that they floated on the liquid surface, as recommended by Meselhe, Peeva and Muste (2004). Moreover, the bottom of the basin was entirely squared using silver tape, composing a grid of elements with 25.0 cm sides (Figure 3). The purpose was to facilitate the identification of the so-called interrogation and search areas demanded by the software adopted.
The FUDAA-LSPIV software (JODEAU; HAUET; LE COZ, 2013) was used to transform the recorded images into surface velocity vectors. For that, a fixed time step Δt between subsequent images was defined according to the flow velocity. As recommended by other studies and by the FUDAA-LSPIV manual itself, the time step Δt adopted was between 1 and 2 seconds. A shorter time step (1 second) was selected for the highest flow rate (Q = 3.40 L/s), due to the faster displacement of the tracers. On the other hand, the lower flow velocity for the other flow rates (Q = 0.50 L/s and 1.25 L/s) favored the acquisition of images at a larger time step (every 2 seconds, for example), which reduced the software processing time.
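The core of this processing is a cross-correlation of interrogation areas between image pairs separated by Δt. A minimal sketch of that single step is shown below; the pixel scale is an assumed value, and FUDAA-LSPIV's actual implementation additionally performs orthorectification, sub-pixel peak fitting and vector filtering:

```python
import numpy as np
from scipy.signal import correlate2d

def surface_velocity(area_a, area_b, dt=2.0, metres_per_pixel=0.005):
    """Estimate one surface velocity vector (vx, vy) in m/s from a pair
    of grayscale interrogation areas taken dt seconds apart."""
    # Subtract the mean so the correlation responds to tracer patterns
    # rather than to overall image brightness.
    a = area_a - area_a.mean()
    b = area_b - area_b.mean()

    corr = correlate2d(b, a, mode="same")
    peak_y, peak_x = np.unravel_index(np.argmax(corr), corr.shape)

    # Displacement of the correlation peak from the area centre (pixels).
    dy = peak_y - a.shape[0] // 2
    dx = peak_x - a.shape[1] // 2
    return dx * metres_per_pixel / dt, dy * metres_per_pixel / dt
```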
Numerical modeling
For the numerical simulations, the finite volume academic code WOLF 2D, developed by the University of Liège (ULg, Belgium), was used (DEWALS et al., 2008; DUFRESNE et al., 2011). This computational program is based on the two-dimensional depth-averaged equations of volume and momentum conservation, known as the shallow water equations (SWE). The WOLF 2D software is also coupled to a k-ε type turbulence model so that three-dimensional turbulent processes are partly taken into account (DEWALS et al., 2008; ERPICUM et al., 2009; DUFRESNE et al., 2011).
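For reference, a minimal conservative form of these depth-averaged equations (omitting the turbulent stresses supplied by the k-ε closure and other source terms handled internally by the code) can be written as:

```latex
\frac{\partial h}{\partial t}
 + \frac{\partial (hu)}{\partial x}
 + \frac{\partial (hv)}{\partial y} = 0,
\qquad
\frac{\partial (hu)}{\partial t}
 + \frac{\partial}{\partial x}\!\left(hu^{2} + \tfrac{1}{2}\,g h^{2}\right)
 + \frac{\partial (huv)}{\partial y}
 = g h \left(S_{0x} - S_{fx}\right),
```

with an analogous equation for the y-momentum hv, where h is the water depth, u and v are the depth-averaged velocity components, S₀ is the bed slope and S_f is the friction slope.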
To construct the characteristic mesh of the UFMG's experimental facility, 327 elements were established along the abscissa (Δx), parallel to the flow, and 166 elements along the ordinate (Δy), perpendicular to the flow. The mesh resolution was adjusted to 0.0125 m. For each test, a uniform water level was considered as the initial simulation condition, equivalent to what was expected to be measured at the inlet section of the downstream channel. Regarding boundary conditions, the specific discharge was prescribed at the entrance of the basin (final section of the upstream channel) and the required depth at the initial section of the downstream channel. Also with regard to the upstream boundary condition, slightly disturbed distributions of the specific discharge at the inflow were considered, as recommended by Dewals et al. (2008, 2012). Otherwise, there is no possibility, from the numerical point of view, of obtaining an asymmetric result. On the other hand, the existence of this slightly disturbed upstream boundary condition does not prevent a simulation from converging to a symmetric result, since the calculation attenuates this dissymmetry. The time step Δt assigned in all simulations was 0.1 s.
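A sketch of how such a slightly disturbed inflow distribution might be generated is given below; the function name, perturbation amplitude and discretization are hypothetical and do not reproduce WOLF 2D's actual input format:

```python
import numpy as np

def perturbed_inflow(q_total=1.25e-3, width=0.125, n_cells=10,
                     eps=0.01, seed=0):
    """Specific discharge [m^2/s] per boundary cell, with a small random
    disturbance so that an asymmetric solution can develop numerically."""
    rng = np.random.default_rng(seed)
    q_uniform = q_total / width  # uniform specific discharge [m^2/s]
    q = q_uniform * (1.0 + eps * rng.standard_normal(n_cells))

    # Rescale so the disturbed profile still carries exactly q_total.
    cell_width = width / n_cells
    q *= q_total / (q.sum() * cell_width)
    return q
```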
The simulations were interrupted as soon as the steady flow condition was reached. WOLF 2D has two tools indicative of this stationarity: the first is the possibility of tracking the numerical solution for the velocity in a mesh element of the user's choice; from the moment this graphically displayed value becomes constant, or varies only slightly, the solution is assumed to be stable. The other tool consists of the algebraic difference of the velocity values of each element between the last two simulated time steps; when the velocity variations become relatively small, of the order of 10⁻⁵ m/s or even smaller, the error is considered acceptable for the numerical model.
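The second stationarity check amounts to monitoring the largest velocity change between successive time steps; a minimal sketch under that reading (the solver step is a hypothetical placeholder) is:

```python
import numpy as np

def is_steady(v_prev, v_curr, tol=1e-5):
    """True when the largest absolute change of the velocity field
    between two successive time steps falls below tol (m/s)."""
    return np.max(np.abs(v_curr - v_prev)) < tol

# Schematic use inside a time loop:
# while not is_steady(v_prev, v_curr):
#     v_prev, v_curr = v_curr, advance_one_step(v_curr)  # hypothetical step
```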
Physical modeling
The flow regime in the final portion of the upstream channel was subcritical in all experiments, as well as in the reference studies cited above. It should be noted that this information validates the adoption of the boundary conditions in the numerical simulations.
Otherwise, if the flow were supercritical at the entrance of the reservoir, it would be necessary to take into account both the water depth and the flow rate as upstream boundary conditions. Table 2 presents the parameters correlated with the determination of the Froude number (Fr), which resulted in values between 0.09 and 0.10.
In order to determine the flow regime according to the flow path, the calculation of the Reynolds number (Re) was required for all experiments (Table 3). The calculation of the Reynolds number considered the hydraulic radius of the upstream channel cross section as the characteristic length.
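As a worked illustration of both dimensionless numbers for the 0.125 m wide upstream channel (the water depth and kinematic viscosity below are assumed values for illustration, not the measured entries of Tables 2 and 3):

```python
import math

g = 9.81      # gravitational acceleration [m/s^2]
nu = 1.0e-6   # kinematic viscosity of water at ~20 °C [m^2/s] (assumed)

Q = 1.25e-3   # flow rate [m^3/s]
b = 0.125     # channel width [m]
h = 0.10      # assumed water depth in the channel [m]

A = b * h             # cross-sectional area [m^2]
V = Q / A             # mean velocity [m/s]
Rh = A / (b + 2 * h)  # hydraulic radius of the rectangular section [m]

Fr = V / math.sqrt(g * h)  # Froude number: subcritical if < 1
Re = V * Rh / nu           # Reynolds number with Rh as characteristic length

print(f"V = {V:.3f} m/s, Fr = {Fr:.3f}, Re = {Re:.0f}")
```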
It is important to highlight that, at the Reynolds numbers obtained, instabilities appeared during the experiments, with the formation of complex flow patterns, which are indicative of turbulent flows. Such instabilities and complex flow patterns in turbulent flows have already been observed by Jirka and Uijttewaal (2004) and Kolyshkin and Nazarovs (2005).
In the sequence, the results obtained through the LSPIV technique (Figures 4 to 6) are presented. They refer to the mapping of the mean surface velocity fields, for each flow rate, throughout the area of the rectangular reference reservoir.

For the flow rate of 0.50 L/s (Figure 4):
• The maximum surface velocity identified in the reservoir was 0.05 m/s, observed only along the main jet and in the initial portion of the recirculation zone on the right side of the basin, close to the downstream sidewall;
• There was a clearly defined flow recirculation zone on the left side of the basin, relative to the right one, denoting an apparent preference of the tracer for recirculating on that side of the basin. However, the limitation of the LSPIV technique for vectorization of the velocity field near the sidewalls might have resulted in a partial characterization of the vortex formed to the right of the main jet.

For the flow rate of 1.25 L/s (Figure 5):
• The formation of two small vortices was verified in the upstream portion of the basin, one to the left and one to the right of the location where the inlet channel was installed. Both vortices had very low velocities (around 0.02 m/s) and appeared to circulate predominantly clockwise, with the left one practically imperceptible;
• The maximum surface velocity observed was 0.10 m/s, at the entrance of the basin and along the main jet. In general, the higher velocities were concentrated within the path formed by the flow, decreasing gradually until reaching the downstream channel and forming an even slower recirculation, with maximum velocities of 0.03 m/s;
• The central parts of the three vortices identified represented stagnation zones, where it was not possible to observe the presence of surface velocity vectors.
Finally, with regard to the flow rate of 3.40 L/s, the mapping of surface velocities is shown in Figure 6. The main observations of this experimental scenario are presented below:
• The flow pattern in this case was of type A1, asymmetrical, similar to the one mapped for the flow rate of 1.25 L/s;
• The maximum surface velocity recorded through the LSPIV technique was 0.14 m/s, once again concentrated along the main jet (diverted to the right). Velocities decreased gradually from upstream to downstream, notably in the recirculation branch of the larger vortex derived from that jet;
• The larger vortex, counterclockwise, appeared to occupy an area larger than the corresponding one for the flow rate of 1.25 L/s. However, this impression is probably due to the higher velocity magnitudes over the entire basin area for Q = 3.40 L/s, whose vectors could be better characterized by the FUDAA-LSPIV software;
• A small vortex formed clockwise, with reduced velocities (not exceeding 0.05 m/s), in the upstream portion of the rectangular reservoir, to the right of the main jet. The left-side vortex could not be identified as it was for the 1.25 L/s flow rate.
It is important to note that an unavoidable limitation of the LSPIV results was observed, e.g. in Figure 6: there was an undue indication of the direction of some vectors around the two crossbars of the experimental apparatus. Because the test basin was constructed of metal sheets, it was necessary to brace the longer sidewalls by installing these bars, which were inevitably projected onto the bottom of the basin during filming.
In summary, the flow pattern was symmetrical for the flow rate of 0.50 L/s and asymmetrical for the flow rates of 1.25 and 3.40 L/s. These results are consistent with Kantoush's (2008) experiments in what can be compared, that is, essentially for the flow rate of 1.25 L/s and supposedly for the highest one. However, for the higher flow rates, the results presented here differ from what was expected according to Dufresne et al. (2010a), for whom the flow pattern would supposedly be symmetrical.
The experiments of Dufresne et al. (2010a) were carried out within a 1 m wide channel, whereas the reservoir of this study was 2 m wide and that of Kantoush (2008) was 4 m wide. It is hypothesized that there is some scale effect of the reservoir studied by Dufresne et al. (2010a) in relation to those of Kantoush (2008) and of this research.
Thus, since the three models used the same fluid (i.e. water) and the surface tension is identical in all three models, the ratios between the Froude numbers (representative of the ratio between inertial and gravitational forces acting on the flow) and the Weber numbers (here representing the ratio between surface tension forces and inertial forces) differ. Froude similarity between the three models implies that this Weber number is inversely proportional to the square of the geometric scale. Thus, the Weber number for the Kantoush (2008) model is 1/16 times that of Dufresne et al. (2010a) and 1/4 times that of the CPH/UFMG; consequently, the relationship between gravitational forces and surface tension forces is more evident in the model of Dufresne et al. (2010a).
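Making the arithmetic explicit under the paper's definition of the Weber number as the ratio of surface tension to inertial forces, and assuming Froude similarity:

```latex
Fr = \frac{V}{\sqrt{gL}} = \text{const.} \;\Rightarrow\; V^{2} \propto L,
\qquad
We^{*} = \frac{\sigma}{\rho V^{2} L} \;\propto\; \frac{1}{L^{2}},
```

so that, with basin widths of 4.0 m (Kantoush), 2.0 m (UFMG) and 1.0 m (Dufresne et al.), We* for the Kantoush facility is (1/4)² = 1/16 of that of Dufresne et al. (2010a) and (2/4)² = 1/4 of that of the UFMG basin, matching the ratios quoted above.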
According to Martin and Pohl (2000 apud HELLER, 2011), the influence of surface tension can be neglected for most prototypes in hydraulic engineering, except for small water depths (as is the case here), air entrainment, and a few other applications.
Numerical modeling
In general, the simulations demanded about 10-12 hours of uninterrupted computational processing, corresponding to at least 11,000 iterations, with results recorded every second of simulated time. The simulations therefore represented the insertion of clear water into the basin for a minimum time equivalent to 3 real hours before being interrupted. It is important to note that each iteration corresponded to 10 simulation time steps Δt, which kept the output file smaller than it would otherwise have been.
In the following, Figures 7 to 9 are presented, corresponding to the numerical results for the reference flow rates of 0.50, 1.25 and 3.40 L/s.
The combined analysis of Figures 7 to 9 highlights a symmetrical flow, representative of the flow pattern S0 defined by Dufresne et al. (2010a,b), regardless of the flow rate. In all three cases, the main jet did not show significant lateral deflection and was therefore practically straight and aligned with the longitudinal axis of the basin. The development of two relatively large vortices of approximately similar proportions, one to the left and another to the right of the main jet, could also be seen. However, it was not possible to identify from the related figures the direction of rotation of these recirculation zones.
The highest mean flow velocities occurred along the main jet. For the flow rate of 0.50 L/s (Figure 7), it was noted that this velocity reached 0.08 m/s. With respect to the flow rate of 1.25 L/s (Figure 8), the maximum numerical value of the mean flow velocity was around 0.10 m/s.
Finally, for the flow rate of 3.40 L/s (Figure 9), the maximum mean velocity was 0.14 m/s. Around the two vortices formed in all three experiments, this value decreased abruptly to about 0.01 to 0.02 m/s for the lower flow rates and to between 0.01 and 0.03 m/s for the largest of them.
Comparison between experimental and numerical results
From the experimental point of view, the flow symmetry condition was verified only for the flow rate of 0.50 L/s. Concerning the other tested flow rates (1.25 and 3.40 L/s), the flow pattern was asymmetrical, with the main jet diverting towards the right lateral wall of the basin and giving rise to the formation of a large vortex in the counterclockwise direction.
According to Shapira, Degani and Weihs (1990) and Dewals et al. (2008), the main jet diversion is a result of the increase in velocity on one side of the jet, usually the one closest to a sidewall, with a consequent decrease in local pressure, amplification of the flow deflection and flow asymmetry. This behavior is known as the Coanda effect, the main reason for the observed hydrodynamic asymmetry.
On the other hand, with respect to the numerical results obtained through the WOLF 2D software, it was verified that there was no significant deviation of the main jet for any flow rate tested. This flow moved throughout the basin practically aligned with its longitudinal axis, as shown by the analysis of Figures 7 to 9. That is, convergence of numerical and experimental results was observed only for the flow rate of 0.50 L/s.
Since the WOLF 2D software is a two-dimensional mathematical model, it is not fully capable of reproducing the instability of the observed flow. In this case, the shallow water equations (SWE) are not able to model the problem, because the flow is three-dimensional. On the other hand, using ultrasonic velocity profilers, Kantoush (2008) found that the vertical component of velocity was comparatively low relative to the horizontal component. For this reason, he used two-dimensional numerical models in his study, as did other authors, such as Dufresne et al. (2011). Concerning the comparison of the velocities obtained in the laboratory tests and through the WOLF 2D program, it should be emphasized that the numerical values correspond to depth-averaged velocities over the vertical profile, whereas the technique used in the physical modeling to determine velocities through image acquisition (LSPIV) yields surface velocity values.
Therefore, the values obtained by the WOLF 2D model would be expected to be lower than those based on the LSPIV technique. According to Table 4, the maximum numerical and experimental velocities were identical for the flow rates of 1.25 and 3.40 L/s, whereas the WOLF 2D computational model seemed to overestimate the respective velocity by 60% for the flow rate of 0.50 L/s, in relation to the experimental data.
It is worth noting that in-depth velocities, in spite of the great convenience of data acquisition by the LSPIV (surface velocity) technique, should not be neglected or discarded during the experimental tests. On the contrary, both forms of data acquisition should be used simultaneously, if possible, to enrich the experimental analysis.
If the velocity data had been measured over the depth, it would have been possible to evaluate the ability or limitation of the WOLF 2D software in determining the mean flow velocity, since data related to the same parameter could then be compared. A strongly recommended alternative when performing this type of experiment is the use of Vectrino profilers and/or UVP (Ultrasonic Doppler velocity profiler) probes.
Threshold between symmetrical and asymmetrical patterns
In order to investigate precisely at which flow rate the flow pattern changes from symmetrical to asymmetrical, three complementary experimental tests were carried out (tests 4 to 6). The flow rate was varied between 0.80 L/s and 1.25 L/s, as shown in Table 5, without, however, changing the height of the sill installed in the final section of the downstream channel. That is, the aim was to evaluate the sensitivity of the flow pattern to the inflow rate alone, despite the small variation of the depth in the reservoir naturally caused by the variation of the discharge. The water depths in all experiments were obtained at the inlet section of the downstream channel.
All the complementary experimental tests were also filmed so that it was possible to analyze the flow pattern corresponding to these scenarios. The FUDAA-LSPIV software was used again to generate the mean surface velocity fields for each reference flow rate. Figures 10, 11 and 12 correspond, respectively, to the results obtained using the LSPIV technique for flow rates of 0.80, 1.00 and 1.25 L/s; unlike the 1.25 L/s scenario of Test 2 (Table 2), the latter, related to Test 5, had a water depth of 0.066 m (Table 5).
For evaluation of the flow rate from which the flow pattern became asymmetrical, the result synthesized by Figure 4 for the flow rate of 0.50 L/s was taken as a reference. In this case, the flow was symmetrical, with a rectilinear main jet aligned with the longitudinal axis of the basin and two large vortices rotating in opposite directions, one counterclockwise to the left and another clockwise to the right of that jet.
Figure 10 shows that the flow pattern remained symmetrical even for the flow rate of 0.80 L/s. The only noticeable differences were a slight increase in the maximum surface velocity (from 0.05 to 0.06 m/s) with increasing flow and a slight shortening of the two vortices near the upstream portion of the basin.
Concerning the flow rate of 1.00 L/s (Figure 11), the flow pattern remained practically symmetrical, even though the vortex to the right of the main jet was not well resolved by the software. This is possibly a transition flow between the two patterns, that is, from symmetrical to asymmetrical.
According to Figure 12, the flow pattern became asymmetrical when the flow rate was increased to 1.25 L/s, with deviation of the main jet to the right side. In addition, the formation of a large counterclockwise vortex can be seen, similar to that observed in Figure 5 but with smaller velocities, no higher than 0.08 m/s against 0.10 m/s in that scenario. In this case, there was no clear evidence of the two other, smaller recirculation zones in the upstream portion of the rectangular reservoir.
Therefore, it was concluded that the change of the flow pattern from symmetrical to asymmetrical occurred for a flow rate between 1.00 and 1.25 L/s, associated with a water depth of around 6 cm.
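The bracketing of the transition flow rate described above can be viewed as a simple bisection over the tested interval. The sketch below is only a schematic of that search, assuming a hypothetical classifier `flow_is_symmetric` that stands in for running a test at a given flow rate; it is not part of the actual experimental procedure.

```python
# Sketch of how the symmetry threshold could be bracketed by successive
# tests, in the spirit of tests 4-6. `flow_is_symmetric` is a hypothetical
# stand-in for running an experiment (or simulation) at flow rate q and
# classifying the resulting LSPIV velocity field.

def flow_is_symmetric(q_lps: float) -> bool:
    # Placeholder consistent with the observations reported in the text:
    # symmetrical up to ~1.00 L/s, asymmetrical at 1.25 L/s.
    return q_lps <= 1.00

def bracket_threshold(q_low: float, q_high: float, tol: float = 0.05):
    """Bisect the flow-rate interval until the symmetric/asymmetric
    transition is bracketed within `tol` (L/s)."""
    assert flow_is_symmetric(q_low) and not flow_is_symmetric(q_high)
    while q_high - q_low > tol:
        q_mid = 0.5 * (q_low + q_high)
        if flow_is_symmetric(q_mid):
            q_low = q_mid
        else:
            q_high = q_mid
    return q_low, q_high

print(bracket_threshold(0.80, 1.25))  # narrows the interval around ~1.0 L/s
```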
With regard to the quality of the flow field results, especially in Figures 10 to 12, it should be emphasized that it could be significantly improved if a larger amount of plastic tracer particles were used. A uniform distribution of such tracers along the entire upstream wall would also be preferable to their insertion only within the upstream channel. This would likely yield a denser velocity vector mapping.
CONCLUSIONS
This paper presented a qualitative-quantitative characterization of the hydrodynamic behavior of a specific rectangular reservoir under clear water condition.
Considering the data obtained experimentally, the following aspects can be highlighted: (1) hydrodynamic symmetry, corresponding to the development of two vortices of similar proportions rotating in opposite directions, occurred only for the flow rate of 0.50 L/s. The two vortices formed on either side of the main jet, which showed no lateral deflection and therefore moved in a straight line aligned with the longitudinal axis of the basin; (2) for the other flow rates (1.25 and 3.40 L/s), the flow was asymmetrical, with the formation of a large counterclockwise vortex originating from the deflection of the main jet to the right side of the basin; (3) according to the considerations of Dufresne et al. (2010a), the flow pattern would supposedly be symmetrical for the largest flow rates tested in the UFMG laboratory; the results of the present research were instead consistent with the experimental observations of Kantoush (2008).
With respect to this last aspect, it is possible that there was some scale effect of the reservoir studied by Dufresne et al. (2010a) in relation to those of Kantoush (2008) and of this research. While the tests of the former authors were performed within a 1.0-meter-wide channel, Kantoush's (2008) experiments were conducted in a 4.0-meter-wide rectangular reservoir. In turn, the UFMG basin was 2.0 m wide.
It was also observed experimentally that the change of the flow pattern from symmetrical to asymmetrical occurred for a flow rate between 1.00 and 1.25 L/s, considering the water depth of around 6.0 cm.
Regarding the numerical results, a hydrodynamically symmetrical flow pattern was observed regardless of the flow rate.
Comparing the numerical results with the experimental ones, a divergence in the hydrodynamic behavior was found for the flow rates of 1.25 and 3.40 L/s: while the WOLF 2D simulations pointed to a symmetrical flow pattern, the flow observed experimentally was asymmetrical.
Because WOLF 2D is a two-dimensional computational program, it is possible that its coupled turbulence model could not adequately reproduce the experimental observations. In order to properly model the behavior observed in the experiments, the use of the Navier-Stokes equations without simplifications, together with turbulence models, is recommended, in a discrete domain with a mesh refined enough to capture the instabilities.
Figure 3. Detail of the checkered bottom of the reservoir (25.0 cm side).
Figure 4. Mean surface velocities for the 0.50 L/s flow rate. The following aspects were observed from the interpretation of this figure and from visual perception during the experiment: the flow pattern corresponds to type S0 (DUFRESNE et al., 2010a,b; CAMNASIO; ORSI; SCHLEISS, 2011), characterized by a main jet aligned with the longitudinal basin axis and by the formation of two large vortices in opposite directions, a counterclockwise vortex to the left and a clockwise one to the right of the jet; the two vortices had similar dimensions.
Figure 5. Mapping of the mean surface velocities for the flow rate of 1.25 L/s, under steady flow regime. The flow pattern, according to the theoretical-experimental approach of Dufresne et al. (2010a,b) and Camnasio, Orsi and Schleiss (2011), corresponds to the asymmetrical type A1, contrary to what was observed for the lower flow rate; the main jet deviated toward the right lateral wall of the basin, forming a large counterclockwise vortex.
Figure 10. Mean surface velocities obtained experimentally for Test 4.
Table 1. Hydraulic parameters related to the tests (test number; flow rate Q (L/s); water depth h (m)).
Table 2. Determination of the Froude number and the associated flow type.
Table 3. Determination of the Reynolds number and the associated flow type.
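As a hedged illustration of how the dimensionless numbers in Tables 2 and 3 can be obtained, the sketch below uses the standard open-channel definitions with the 2.0 m basin width cited in the text; the flow rate, depth and kinematic viscosity chosen here are assumptions for the example, not the tabulated values.

```python
# Illustrative computation of Froude and Reynolds numbers for the basin.
# B = 2.0 m is the UFMG basin width quoted in the text; Q, h and nu are
# assumed values for the sketch, not the data of Tables 1-3.
from math import sqrt

g = 9.81          # gravitational acceleration, m/s^2
Q = 0.50e-3       # assumed flow rate, m^3/s (0.50 L/s)
B = 2.0           # basin width, m
h = 0.06          # assumed water depth, m
nu = 1.0e-6       # kinematic viscosity of water at ~20 C, m^2/s

V = Q / (B * h)               # bulk (depth-averaged) velocity
Fr = V / sqrt(g * h)          # Froude number
Rh = (B * h) / (B + 2 * h)    # hydraulic radius of the rectangular section
Re = V * Rh / nu              # Reynolds number based on hydraulic radius

print(f"V = {V:.4f} m/s, Fr = {Fr:.4f} (subcritical if Fr < 1)")
print(f"Re = {Re:.0f} (laminar by the usual open-channel criterion Re < 500)")
```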
Table 4. Comparison of maximum velocities between physical and numerical modeling under clear water condition. | 2019-04-27T13:09:56.337Z | 2018-05-28T00:00:00.000 | {
"year": 2018,
"sha1": "473055cf49c96d3e22f0c7b7786af751cc8dd120",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/rbrh/v23/2318-0331-rbrh-23-e19.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "473055cf49c96d3e22f0c7b7786af751cc8dd120",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Geology"
]
} |
229032990 | pes2o/s2orc | v3-fos-license | Influence of Crop Geometry and Intercropping on Growth Characters and Light Interception in Pearlmillet [Pennisetum glaucum (L.)]
Pearl millet is an important coarse grain cereal generally grown as rainfed crop on marginal lands under low input management condition. It is adapted to drought and poor soil fertility, but responds well to good management and higher fertility levels. It is a dual-purpose crop; its grain is used for human consumption and its stalk for cattle. Crop geometry refers to the shape of space available for individual plants. Crop geometry is altered by changing inter and intra row spacing. Optimum crop geometry is one of the important factors for higher productivity by efficient utilization of resources and also harvesting as much as solar radiation for better photosynthate formation.
Introduction
Limited availability of land resources and declining soil fertility, both globally and locally, have heightened concerns about agriculture's ability to meet the demands of an ever-increasing population. To increase and sustain agricultural productivity, the available land and resources must be used more effectively than in the past. This objective can be achieved by intercropping, an effective practice to augment the total productivity per unit area of land per unit time by growing more than one crop in the same field and altering the crop geometry.
Materials and Methods
The field experiment was conducted in field No. 37F at the Eastern Block farm, Department of Agronomy, Tamil Nadu Agricultural University, Coimbatore, during the Kharif season (July-October) of 2019. The farm is located in the Western Agro Climatic Zone of Tamil Nadu at 11.01701 N latitude, 76.93504 E longitude and at an altitude of 438 m above MSL. The experimental plot was slightly alkaline in nature with low soluble salts and a medium range of organic carbon content. The water source used for irrigation was slightly alkaline (pH 8) with high soluble salts (EC 5.7 dS m⁻¹).
The experiment was laid out in a split plot design with three replications. Crop geometry treatments [M1 - pearlmillet at 45 x 15 cm, M2 - pearlmillet at 60 x 15 cm, M3 - pearlmillet in paired row sowing 30/60 x 15 cm and M4 - pearlmillet in paired row sowing 30/90 x 15 cm] were allotted to the main plots, and intercropping treatments [S1 - greengram, S2 - sesame and S3 - no intercrop] to the subplots. Pearlmillet (CO 10) was the main crop, and the intercrops were greengram (CO (Gg) 8) and sesame (TMV 7). Five plants were selected randomly from the net plot area of each plot for observing growth parameters.
The photosynthetically active radiation (PAR) was measured above the canopy (I0) and below the canopy (Ib), adjacent to the soil surface, using an AccuPAR ceptometer (Model LP-80). The measured PAR values were expressed in micromoles per square meter per second (µmol m⁻² s⁻¹). The light interception (LI), expressed as a percentage, was calculated by the following equation:
LI (%) = [(I0 - Ib) / I0] x 100
Where LI is the light interception, I0 is the PAR measured above the canopy, and Ib is the PAR measured below the canopy.
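A worked numeric example of this calculation is sketched below; the PAR readings used are hypothetical placeholders, not measurements from the experiment.

```python
def light_interception(par_above: float, par_below: float) -> float:
    """Light interception LI (%) from PAR above (I0) and below (Ib) the canopy:
    LI = (I0 - Ib) / I0 * 100."""
    return (par_above - par_below) / par_above * 100.0

I0, Ib = 1500.0, 450.0  # hypothetical PAR readings, umol m-2 s-1
print(f"LI = {light_interception(I0, Ib):.1f} %")  # -> LI = 70.0 %
```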
Effect of crop geometry and intercropping on plant height and number of tillers
Plant height was recorded at 30 and 60 DAS and at the harvest stage (Table 1). Similarly, the number of tillers hill⁻¹ was recorded at various growth stages (Table 2). Tiller production decreased as the crop approached harvest. Tiller production was significantly influenced by crop geometry at all crop stages: M4 - pearlmillet in paired row sowing 30/90 x 15 cm recorded the maximum number of tillers per hill at all stages.
With respect to intercropping, there was no significant difference at 30 DAS or at the harvesting stage, but a significant difference was found at 60 DAS. A higher number of tillers hill⁻¹ was recorded under S3 - no intercrop than under the intercropping systems. These results are similar to those of Baldevram et al. (2005).
Effect of crop geometry and intercropping on light interception
Light interception (%) was significantly influenced by the inter-row spacing of pearlmillet. Higher LI was found in M1 (45 cm x 15 cm) at all crop stages, and lower LI was observed in M4 (pearlmillet in paired row sowing 30/90 x 15 cm) (Table 3). The narrowly spaced crop covered the soil better than the other crop geometries, which explains the better LI; similar results were obtained by Mohan Kumar et al. (2019) and Steiner (1986). The subplot treatments showed significant differences in LI owing to the growth behaviour of the intercrops.
It is concluded that plant height was not influenced by crop geometry or intercropping. Pearlmillet in paired row sowing 30/90 x 15 cm produced a greater number of tillers, and sole pearl millet recorded greater plant height and tillers hill⁻¹. Light interception was lowest for pearlmillet in paired row sowing 30/90 x 15 cm. From these results, both the main crop and the intercrop may grow well in paired row sowing 30/90 x 15 cm because the intercrops receive more solar radiation. Since the findings are based on research done in one season, the experiment may be repeated for further confirmation. | 2020-11-12T09:07:37.729Z | 2020-09-20T00:00:00.000 | {
"year": 2020,
"sha1": "8ad0cf322bd9ee1f4ee14f49e6baa6bcd3c88724",
"oa_license": null,
"oa_url": "https://www.ijcmas.com/9-9-2020/K.%20Nagarajan,%20et%20al.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e6993d1c9cd0d8aac1e2ee633e8a7332d7699c7c",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
10972751 | pes2o/s2orc | v3-fos-license | Methanol masers in environments of three massive protostars
We present the first EVN maps of 6.7 GHz methanol masers toward three high-mass protostar candidates selected from the Torun unbiased survey of the Galactic plane. A variety of linear and arc-like structures was detected. A number of maser clusters with projected sizes of 20-100 AU show monotonic velocity gradients. Some of them are roughly perpendicular to the major axes of these structures and may arise behind shock fronts.
Introduction
Observations of the 6668.519 MHz methanol maser transition, first detected by Menten (1991), are a powerful tool to identify massive early-type stars still embedded in their parental dense molecular clouds. The high brightness of this line enables us to investigate structures at milliarcsecond (mas) scales (a few hundred AU at distances of a few kpc). Phillips et al. (1998) analyzed 45 methanol maser sources in star-forming regions and divided them into five groups on the basis of their morphology: linear (curved), elongated, pair, complex and simple. The linear masers were outstanding in their survey and showed a monotonic or near-monotonic velocity gradient along the source major axis, consistent with a model of masers embedded in a rotating disk (radius of a few thousand AU) seen edge-on or nearly edge-on around a high-mass (up to 120 M⊙) protostar or OB star (Minier et al. 2000). However, Walsh et al. (1998) found only 12 sources showing velocity gradients in their sample of 97 methanol sites. They proposed a scenario in which the methanol masers appear before the UC H II phase, associated with embedded non-ionizing stars: dense knots of gas are compressed and accelerated by the passage of a shock, and local conditions produce the different geometries of maser spots. Dodson et al. (2004), based on VLBI data, proposed that the 6.7 GHz methanol masers arise behind low-speed (<10 km s−1) planar shocks. A shock propagating nearly perpendicular to the line of sight produces a linear spatial distribution of maser components, while interaction of the shock with density perturbations in the star-forming region disrupts the linearity of the maser structures.
The VLBI results presented below are the first in a series for a large sample of methanol masers detected in the unbiased Torun survey of the Galactic plane (Szymczak et al. 2002). Our aims are twofold: to investigate the mas-scale structure of 6.7 GHz methanol masers in order to test the above-mentioned hypotheses, and to search for relationships between the methanol emission and other tracers of star-forming activity. We improved the positions of 30 newly detected sources in the Torun survey using a single baseline of MERLIN (Mark II and Cambridge antennas).
Observations and data reduction
The observations of the 6668.519 MHz methanol maser emission from the three star-forming regions G33.64−0.21, G35.79−0.17 and G36.11+0.55 were carried out with the European VLBI Network (EVN) on 2003 June 08. Useful data were obtained from four antennas (Cambridge, Effelsberg, Jodrell Bank and Onsala). In the single-dish survey these sources showed complex methanol spectra with three or more clearly visible features of flux densities higher than 10 Jy. All of them are OH emitters (Szymczak & Gérard 2004). The coordinates of the target sources and their errors, as measured with the Mark II − Cambridge baseline of MERLIN, are given in Table 1. The targets were observed for a total of 12 hr, together with observations of the continuum source 3C345 for the purpose of bandpass, delay and rate calibration. The source J1907+0127 (0.2 Jy at 6.7 GHz) was used as a phase calibrator for all three targets. The cycle time between each target and the phase calibrator was 5.5 min + 3.5 min. We used a spectral bandwidth of 2 MHz, resulting in a velocity coverage of 100 km s−1, divided into 1024 channels at correlation to give a velocity resolution of 0.09 km s−1. The bandwidth was centred at the local standard of rest (LSR) velocity of 65 km s−1. Observations were made in left- and right-hand circular polarization, but since the methanol emission is not strongly circularly polarized, both polarizations were averaged in order to improve the signal-to-noise ratio.
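The quoted spectral set-up can be checked with the radio Doppler relation Δv = c Δν/ν0; the short sketch below (standard constants only) reproduces the 0.09 km s−1 channel width, with the total coverage coming out close to, though slightly below, the round 100 km s−1 quoted above.

```python
# Check of the quoted spectral set-up: a 2 MHz band split into 1024
# channels at the 6668.519 MHz methanol line, using dv = c * dnu / nu0.
C_KMS = 299792.458          # speed of light, km/s
NU0_MHZ = 6668.519          # rest frequency of the maser line
BW_MHZ, N_CHAN = 2.0, 1024

coverage = C_KMS * BW_MHZ / NU0_MHZ   # ~89.9 km/s (quoted as ~100 km/s)
per_channel = coverage / N_CHAN       # ~0.088 km/s, i.e. the 0.09 km/s quoted
print(f"velocity coverage: {coverage:.1f} km/s; resolution: {per_channel:.3f} km/s")
```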
The data were correlated on the EVN Mk IV Data Processor operated by JIVE. We carried out the data reduction with standard procedures for spectral line observations using the Astronomical Image Processing System (AIPS). To make the final maps we used 1 mas pixel separations and an elliptical restoring beam of 14×6 mas with a PA of −1°, and applied uniform weighting. The rms noise level was 7−10 mJy beam−1 in emission-free Stokes I maps.
We created fringe rate maps of the brightest channels of the targets but still failed to determine the absolute positions of our sources. The target sources lie near zero declination (from +0.°5 to +3.°1). The phase calibrator J1907+0127, the nearest one from the VLBA Calibrator Survey, was about 3° away from all three targets. Furthermore, due to the use of only the four EVN telescopes listed above, the uv-plane coverage was poor for N-S baselines. It is likely that these factors, together with a too long phase-referencing cycle time, precluded a proper phase calibration. However, the positions of the strongest components of each target are known with accuracies better than 0.″6 in RA and 4″ in Dec (Table 1).
Results and Discussion
3.1. G33.64−0.21

Fig. 1 shows the spectrum and the overall distribution of maser emission in G33.64−0.21. The emission is seen in the velocity range from 58.5 to 63.3 km s−1, and 94 maser components brighter than 40 mJy b−1 (5σ) are detected. These form 12 clusters distributed along an arc-like structure of overall size about 150 mas, corresponding to 585 AU for the assumed near kinematic distance of 3.9 kpc. We do not note any regularity in the velocity of the whole structure, as would be expected for a rotating disk. However, monotonic velocity gradients are found within six clusters, named A, C, F, G, J and K (Fig. 1). Angular sizes of individual clusters range from 5 to 18 mas, i.e. their projected linear sizes range from 20 to 70 AU. We notice that for most clusters the velocity gradients are roughly perpendicular to the arc. In summary, the methanol maser emission in G33.64−0.21 does not show a large-scale velocity gradient, but about half of the maser clusters exhibit internal velocity gradients, some of which are roughly perpendicular to the arc. This suggests that the maser structure forms in a shock front, as postulated by Dodson et al. (2004).
The inset in Fig. 1 shows that the amplitude of feature L in the EVN spectrum is comparable (within 30%) with that of the single-dish spectrum. For the remaining features the amplitudes observed with the EVN are a factor of 3-4 lower. This implies, even though variability and/or errors in flux density cannot be ruled out, that a large fraction of the maser flux was missed in the interferometric observations. Therefore, G33.64−0.21 appears to be a good candidate for mapping low-intensity, diffuse maser emission.
G35.79−0.17
The spectrum of G35.79−0.17 is composed of four features within the velocity range from 59.9 to 62.7 km s−1 (Fig. 2). The shape of the spectrum is the same as observed in 2000 (Szymczak et al. 2002), but the amplitude is about a factor of 2.5 lower. Variability of the source and calibration errors cannot fully account for such a difference; we suggest that there is diffuse emission missed during our EVN observations. We found the emission in 33 spectral channels, and its distribution appears as a 9.5 mas linear structure (Fig. 2). Analysis of the position−velocity diagram revealed a linear velocity gradient along the NE (red-shifted) to SW (blue-shifted) direction. Such a characteristic structure can arise from masers in an edge-on rotating disk or in an accelerating outflow. In the case of an edge-on rotating Keplerian disk we can estimate the lower limit of the mass of the central star (e.g. Phillips et al. 1998). Assuming the near and far kinematic distances of 4.6 kpc and 10.3 kpc, we infer that the enclosed mass is 0.4 and 0.9 M⊙, respectively. These values are certainly underestimated; we have assumed that there is no inclination and that the disk size equals the extent of the maser emission. Minier et al. (2000) found similar structures in W 51 and G 29.95−0.02. They also derived sub-solar masses and concluded from a larger sample that they had detected only a small fraction of the disk, which typically extends over 1000 AU. Indeed, the maser structure in G35.79−0.17 is similar in size to the individual clusters observed in G33.64−0.21 (e.g. cluster A). We cannot exclude that the observed structure arises from an outflow from a massive protostar signposted by the infrared source IRAS18547+0223. Proper motion studies would help to settle the origin of the linear methanol maser structure in G35.79−0.17.
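The enclosed-mass estimate can be reproduced, as a sketch, from the circular-orbit relation M = v²R/G, assuming that R equals the full 9.5 mas maser extent and v the full 2.8 km s−1 velocity spread; these two conventions are our assumptions, adopted because they recover the quoted 0.4 and 0.9 M⊙.

```python
# Rough reproduction of the enclosed-mass estimate for G35.79-0.17.
# Assumptions for this sketch: orbital radius = full maser extent (9.5 mas),
# orbital speed = full velocity spread (62.7 - 59.9 km/s).
GM_SUN_KM2S2_AU = 887.1   # G*M_sun expressed in km^2 s^-2 AU

def enclosed_mass(theta_mas: float, d_kpc: float, v_kms: float) -> float:
    r_au = theta_mas * d_kpc           # 1 mas at 1 kpc subtends 1 AU
    return v_kms**2 * r_au / GM_SUN_KM2S2_AU

for d in (4.6, 10.3):                  # near and far kinematic distances
    print(f"d = {d:5.1f} kpc -> M ~ {enclosed_mass(9.5, d, 2.8):.1f} M_sun")
# d =   4.6 kpc -> M ~ 0.4 M_sun ; d = 10.3 kpc -> M ~ 0.9 M_sun
```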
G36.11+0.55

Fig. 3 shows the spectrum and the distribution of the 6.7 GHz methanol maser emission in G36.11+0.55. We found two active regions, north-western and south-eastern, separated by 1.″1. The maser components in both regions are elongated along a position angle of ∼45°. These structures are nearly perpendicular to the major axis. The NW region is composed of eight clusters, labelled from A to H, within the LSR velocity range from 81.1 to 84.5 km s−1. The SE region is composed of six maser clusters, labelled from I to N, with LSR velocities from 73.0 to 76.2 km s−1. 4 out of 14 clusters exhibit monotonic velocity gradients; two of them (in the NW region) are perpendicular to the major axis. For the assumed near kinematic distance of 5.3 kpc, the two clumps NW and SE are separated by 5840 AU.
The methanol emission of G36.11+0.55 lies 5.″5 SE of the centre of the giant molecular clump Mol 77 (IRAS18527+0301), which hosts embedded high-mass protostars of type B2.5−O8.5 (Brand et al. 2001). Since the absolute position of the methanol masers is known to within 0.″2 and 1.″4 (RA and Dec), this positional offset may be significant. The velocities of the thermal emission of CS, CO and HCO+ range from 71.9 to 82.1 km s−1, with peaks at about 75.5 km s−1, and match well the velocity range of the methanol emission. We note that the overall structure delineated by the methanol masers is nearly perpendicular to the axis of an outflow traced by the blue- (71.0−73.8 km s−1) and red-shifted (79.5−83.2 km s−1) emission of the 13CO line, as reported by Brand et al. (2001). Therefore it is very likely that the 6.7 GHz maser emission mapped with the EVN is physically related to a high-mass protostar.

Fig. 1. The coordinates are relative to the brightest maser component (Table 1). Each triangle corresponds to a single component found in the channel maps. Maser clusters A, C, F, G, J and K show clear velocity gradients traced by arrows (pointing toward the blue-shifted velocities). The spectrum (inset) is composed of the emission profiles of maser clusters labelled from A to L. The thin line shows the single-dish spectrum taken in 2000, scaled with the right-hand ordinate.

Fig. 2. The EVN 6.7 GHz methanol spectrum (left) and the distribution of maser components (right) in G35.79−0.17. The coordinates are relative to the brightest maser component (Table 1). Each circle corresponds to a single component found in the channel maps. The sizes of the circles are proportional to the logarithm of the brightness. The numbers indicate the LSR velocities (km s−1) of the components.

Fig. 3. The coordinates are relative to the brightest maser component (Table 1). Enlarged areas of the NW and SE emission are shown on the left side. Each triangle corresponds to a maser component found in the channel maps. Arrows mark velocity gradients as in Fig. 1. The spectrum shown by the thin line is the sum of the spectra of the individual maser clusters (thick line).
Conclusions
The 6.7 GHz methanol maser emission towards three star-forming regions was imaged with milliarcsecond resolution. Linear and arc-like structures were detected, and a considerable number of maser clusters showed internal velocity gradients roughly perpendicular to the major axis. These geometrical structures can arise in outflows and behind shock fronts. | 2014-10-01T00:00:00.000Z | 2004-11-30T00:00:00.000 | {
"year": 2004,
"sha1": "ac39e802c739f81986229819b2ae7025bf8efd73",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "fb55ba20cc550e9008b33ec0dd2de1d86ae4df21",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
264206821 | pes2o/s2orc | v3-fos-license | Lutein Attenuates Both Apoptosis and Autophagy upon Cobalt (II) Chloride-Induced Hypoxia in Rat Műller Cells
Retinal ischemia/reperfusion injury is a common feature of various retinal diseases such as glaucoma and diabetic retinopathy. Lutein, a potent anti-oxidant, is used to improve visual function in patients with age-related macular degeneration (AMD). Lutein attenuates apoptosis, oxidative stress and inflammation in animal models of acute retinal ischemia/hypoxia. Here, we further show that lutein improved Műller cell viability and enhanced cell survival upon hypoxia-induced cell death through regulation of intrinsic apoptotic pathway. Moreover, autophagy was activated upon treatment of cobalt (II) chloride, indicating that hypoxic injury not only triggered apoptosis but also autophagy in our in vitro model. Most importantly, we report for the first time that lutein treatment suppressed autophagosome formation after hypoxic insult and lutein administration could inhibit autophagic event after activation of autophagy by a pharmacological approach (rapamycin). Taken together, lutein may have a beneficial role in enhancing glial cell survival after hypoxic injury through regulating both apoptosis and autophagy.
Introduction
Ocular diseases associated with retinal ischemia/reperfusion (I/R) injury lead to irreversible retinal cell death. In rodent models of retinal I/R injury induced by blockade of the internal carotid artery, a remarkable cell loss was found in both the ganglion cell layer (GCL) and the inner nuclear layer (INL) of the retina [1]. Another study, using an animal model of I/R injury induced by elevating the intraocular pressure, also demonstrated an increase of apoptotic nuclei in the INL [2].
Autophagy is an evolutionarily conserved mechanism that allows the cell to degrade damaged proteins and intracellular organelles, maintaining cell homeostasis against nutrient deprivation and cellular stress [3]. Autophagy appears to be protective at the early onset of stress conditions but can lead to cell death when excessively up-regulated. Produit-Zengaffinen et al. and Piras et al. reported that autophagy was triggered after I/R injury and resulted in further damage to retinal neurons [4,5].
Lutein is a member of the xanthophyll family of carotenoids and is found in dark leafy vegetables such as kale and spinach [6,7]. Lutein cannot be synthesized by the human body; therefore, it has to be obtained from the daily diet. Lutein contains two hydroxyl groups, making it react more strongly with singlet oxygen than other carotenoids [8,9]. Lutein is also an efficient pigment for absorbing high-energy blue light and protects photoreceptors from phototoxicity [10,11]; therefore, lutein is known as a potent anti-oxidant and oxygen free radical scavenger. Clinically, lutein has been found to improve visual function and macular pigment optical density in patients with age-related macular degeneration (AMD) [12-14]. In addition, lutein has been shown to be neuroprotective in different retinal disease models including endotoxin-induced uveitis (EIU), light-induced retinal degeneration and retinal ischemia/reperfusion injury [1,15,16].
Műller cells are the principal glia of the retina; they protect retinal neurons from excitotoxic damage as well as from reactive oxygen species (ROS) induced by ischemia [17]. Műller cell gliosis in response to I/R injury results in retinal cell death [18]. We have previously shown that lutein administration protects retinal neurons from I/R injury in vivo and from oxidative stress in vitro [1,19]. In vitro hypoxia can be achieved by chemically induced hypoxia or by oxygen-glucose deprivation (OGD) [20]. Cobalt (II) chloride (CoCl2), a common reagent used to mimic the hypoxic/ischemic condition, induces the generation of reactive oxygen species (ROS) and in turn increases oxidative stress, resulting in cell death. It has been reported that ROS is induced in retinal ischemia and eventually leads to retinal cell death [17]. We previously used CoCl2 to induce chemical hypoxia and demonstrated that lutein treatment attenuated the release of pro-inflammatory cytokines in a cultured rat Műller cell line (rMC-1) [21]. In the present study, we aim to further evaluate the anti-apoptotic effects of lutein in rMC-1 cells against CoCl2-induced hypoxic injury. In addition, as autophagy and apoptosis have been shown to be co-activated upon CoCl2 insult [22], we hypothesize that lutein exerts a protective role against hypoxia-induced autophagy in rMC-1 cells.
Cell culture
An immortalized rat Műller cell line (rMC-1) was routinely maintained in Dulbecco's modified Eagle's medium (Gibco, Carlsbad, CA) supplemented with 10% fetal bovine serum (FBS, Hyclone, Logan, UT, USA), 100 U/ml penicillin and 100 µg/ml streptomycin (Gibco) [23]. Cells were grown in a humidified incubator of 95% air and 5% CO2 at 37˚C and passaged when they reached 80% confluence. Chemical hypoxia was induced using cobalt (II) chloride (CoCl2) as described previously [21]. Briefly, rMC-1 cells were seeded in 6-well plates at a density of 2 × 10⁵ cells per well in DMEM and incubated for 24 hours before treatment. Next, the cells were starved in DMEM with 1% FBS for 4 hours before inducing hypoxia. For the dose-dependent study, CoCl2 (300 µM) was used to induce chemical hypoxia together with different doses of lutein (2.5, 5, 10 and 20 µM) or vehicle (0.01% DMSO) for 24 hours. For the time-dependent study, CoCl2 (300 µM) was used to induce chemical hypoxia together with lutein (20 µM) or vehicle (0.01% DMSO) for the designated time points. To examine the involvement of autophagy in CoCl2-induced cell death, rMC-1 cells were treated with 3-MA (1 mM) for 2 hours before CoCl2 treatment for 24 hours. To assess the autophagic flux, rMC-1 cells were incubated with CoCl2 and ammonium chloride (NH4Cl) for 24 hours. To induce mTOR-mediated autophagy, rMC-1 cells were exposed to rapamycin and chloroquine for 16 hours.
Lactate dehydrogenase (LDH) cytotoxicity assay
CoCl2-induced cytotoxicity in rMC-1 cells, leading to leakage of the cytoplasmic enzyme lactate dehydrogenase (LDH) into the culture medium, was measured with the Cytotoxicity Detection Kit (Roche) according to the manufacturer's instructions. CoCl2-induced hypoxia in rMC-1 cells was performed as described above. After treatment, both the cell culture medium and the cell lysate were incubated with the substrate containing diaphorase/NAD+ as well as INT/sodium lactate for 15 minutes. Absorbance of LDH activity was measured at 490 nm. The amount of LDH released from apoptotic cells was calculated as a percentage of the total LDH activity. The results were obtained from five individual experiments with duplicate samples.
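The released-LDH percentage described above reduces to a simple ratio; a minimal sketch, with hypothetical absorbance readings rather than study data:

```python
def ldh_release_percent(a490_medium: float, a490_lysate: float) -> float:
    """Released LDH as a percentage of total LDH activity
    (medium + lysate), following the ratio described in the text."""
    total = a490_medium + a490_lysate
    return 100.0 * a490_medium / total

# Hypothetical absorbance readings at 490 nm (not data from the study):
print(f"{ldh_release_percent(0.42, 0.98):.1f} % LDH released")  # -> 30.0 %
```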
Terminal Deoxynucleotidyl Transferase dUTP Nick End Labeling (TUNEL) assay
In situ detection of apoptotic cells was performed using the DeadEnd™ Fluorometric TUNEL System (Promega, Madison, WI) in accordance with the manufacturer's protocol. Briefly, rMC-1 cells were grown on chamber slides and treatments were given as mentioned above. The cells were then fixed with 4% paraformaldehyde for 15 minutes at 4˚C. After permeabilization with 0.1% Triton X-100 for 2 minutes, cells were incubated with the reaction mix containing terminal deoxynucleotidyl transferase (TdT) and nucleotides for 1 hour at 37˚C. 4',6-diamidino-2-phenylindole (DAPI) was used to stain the nuclei. Five non-overlapping fields in each well were captured using a light microscope (Eclipse 80i, Nikon, Tokyo, Japan) equipped with a digital camera (Diagnostic Instruments, Inc., Sterling Heights, MI). More than 250 cells in each field were counted using ImageJ software (National Institute of Mental Health, Bethesda, MD). Cells with TUNEL labeling co-localized with DAPI staining were counted as TUNEL-positive. The results were expressed as the percentage of TUNEL-positive cells over the total number of cells counted, from five individual experiments with duplicate samples.
Analysis of CoCl2-induced autophagy in rMC-1 cells
Assessment of autophagic activity in rMC-1 cells was performed using the Cyto-ID® Autophagy Detection kit (Enzo Life Sciences). In brief, cells were grown on a chamber slide and treated with CoCl2 (300 µM) as well as lutein (20 µM) for 24 hours. Cells co-treated with 0.5 µM rapamycin and 120 µM chloroquine for 16 hours were used as the positive control to induce autophagy and to identify the autophagic vesicles in rMC-1 cells. After washes with assay buffer, cells were incubated with Cyto-ID® Green Detection Reagent for 45 minutes at 37˚C. Hoechst 33342 was used as a nuclear counterstain. Eight non-overlapping fields were photographed in each group using a light microscope (Eclipse 80i, Nikon, Tokyo, Japan) equipped with a digital camera (Diagnostic Instruments, Inc., Sterling Heights, MI) and analyzed with ImageJ software (National Institute of Mental Health, Bethesda, MD). At least three independent experiments were performed.
Immunocytochemical analysis of autophagosomal-associated LC3 expression in rMC-1 cells
Assessment of autophagosomes in rMC-1 cells was performed by immunocytochemistry for LC3 expression. Briefly, cells were grown on a chamber slide and treated with CoCl2 (300 µM) and NH4Cl (20 mM) as well as lutein (20 µM) for 24 hours. The cells were then fixed with ice-cold methanol for 15 minutes at −20˚C. After washes with PBS, cells were permeabilized with 0.3% Triton X-100 for 2 minutes. After blocking with 5% serum for 1 hour, the cells were incubated with LC3 antibody (1:400; Cell Signaling Technology, Beverly, MA) overnight at 4˚C. After incubation with an anti-rabbit IgG FITC-conjugated secondary antibody, cell nuclei were counterstained with 4',6-diamidino-2-phenylindole (DAPI). Eight non-overlapping fields in each well were captured using a light microscope (Eclipse 80i, Nikon, Tokyo, Japan) equipped with a digital camera (Diagnostic Instruments, Inc., Sterling Heights, MI) and analyzed with ImageJ software (National Institute of Mental Health, Bethesda, MD). At least three independent experiments were performed.
Statistical Analysis
Data were presented as mean ± SEM and analyzed using Prism v4.0 (GraphPad Software Inc., San Diego, CA). ANOVA followed by Bonferroni multiple comparison tests was used for all data analysis. A P value of less than 0.05 was set as statistically significant.
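A minimal sketch of the equivalent analysis outside Prism, assuming hypothetical viability data, is shown below; it runs a one-way ANOVA and then Bonferroni-corrected pairwise t-tests with SciPy and statsmodels.

```python
# Sketch of the statistical workflow described above (ANOVA followed by
# Bonferroni-corrected pairwise comparisons). The data arrays are
# hypothetical placeholders, not measurements from the study.
from itertools import combinations
from scipy.stats import f_oneway, ttest_ind
from statsmodels.stats.multitest import multipletests

groups = {
    "control":        [98, 101, 97, 102, 100],
    "CoCl2":          [52, 49, 55, 50, 48],
    "CoCl2 + lutein": [78, 81, 75, 80, 79],
}

F, p = f_oneway(*groups.values())
print(f"one-way ANOVA: F = {F:.2f}, p = {p:.3g}")

pairs = list(combinations(groups, 2))
raw_p = [ttest_ind(groups[a], groups[b]).pvalue for a, b in pairs]
reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method="bonferroni")
for (a, b), q, r in zip(pairs, adj_p, reject):
    print(f"{a} vs {b}: adjusted p = {q:.3g}, significant = {r}")
```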
Lutein ameliorated cobalt (II) chloride (CoCl2)-induced hypoxic cell death in cultured rat Műller cells (rMC-1)
To mimic the hypoxic/ischemic condition, 300 µM of CoCl2 was used to induce chemical hypoxia in rMC-1 cells [19,21]. The morphology of vehicle-treated rMC-1 cells started to change 6 hours after the CoCl2 treatment when compared with that in the normal control (Fig 1E and 1F): cells became round in shape and cytoplasmic vacuoles were observed (Fig 1E and 1F). On the other hand, the morphology of lutein-treated cells remained similar to that in the normal control despite the exposure to CoCl2 for 24 hours (Fig 1I). rMC-1 cells treated with lutein alone (20 µM) did not show any significant difference in cell viability when compared with the normal control at all time points tested. In contrast, the viability of vehicle-treated rMC-1 cells started to decline at 6 hours after CoCl2 treatment; there was approximately 50% loss of cell viability after hypoxic challenge for 24 hours (Fig 1J). Lutein-treated rMC-1 cells were also affected by CoCl2-induced injury, with around a 20% reduction of cell viability; however, their viability was not further affected upon prolonged CoCl2 treatment (Fig 1J). CoCl2-induced cytotoxicity in rMC-1 cells also led to release of lactate dehydrogenase (LDH) at 24 hours after hypoxic challenge; yet, LDH release was attenuated in lutein-treated rMC-1 cells (Fig 1K). To further assess whether CoCl2-induced cytotoxicity was associated with apoptosis, TUNEL staining was performed to identify apoptotic nuclei in rMC-1 cells (Fig 2A), and their numbers were counted. The percentage of apoptotic nuclei started to rise at 2 hours after incubation with CoCl2 and continued to increase until 24 hours in vehicle-treated rMC-1 cells, indicating that CoCl2 induced apoptosis-related cell death in a time-dependent manner (Fig 2B). The percentage of apoptotic nuclei was decreased after lutein treatment for 24 hours when compared with the vehicle-treated rMC-1 cells. This finding is consistent with the cell viability assay showing that lutein rescued rMC-1 cells from CoCl2-induced hypoxia at 24 hours.
Lutein protected rMC-1 cells from hypoxia-induced cell death by altering the expression of apoptosis-associated proteins
The role of Bcl-2 family proteins in CoCl2-induced apoptosis and lutein-mediated protection was studied. Exposure of rMC-1 cells to CoCl2 significantly down-regulated the expression of two anti-apoptotic proteins, Bcl-2 and Bcl-XL, when compared with the normal control, resulting in more than 50% reduction in their protein levels at 24 hours (Fig 3A and 3B; Fig 4A and 4B). In contrast, expression of the pro-apoptotic protein Bax showed a trend of increase upon CoCl2 treatment (Figs 3C and 4C). The ratio of Bax to Bcl-2 protein expression and the level of cleaved caspase 3 were significantly up-regulated after incubation with CoCl2 for 24 hours in vehicle-treated cells (Fig 3D and 3E; Fig 4D and 4E). The protein level of Bcl-2, but not Bcl-XL, was enhanced by 20 µM of lutein at 24 hours, while that of Bax remained unaffected (Fig 3A, 3B and 3C; Fig 4A, 4B and 4C). Furthermore, the Bax/Bcl-2 ratio as well as cleaved caspase 3 was remarkably attenuated by 20 µM of lutein upon CoCl2-induced hypoxia for 24 hours (Fig 3D and 3E; Fig 4D and 4E). Together, these data suggested that the Bcl-2 protein family is involved in lutein-mediated protection against CoCl2-induced apoptosis in rMC-1 cells.
Lutein was involved in an anti-autophagic mechanism in CoCl2-induced autophagy in rMC-1 cells
It has been reported that hypoxia-inducible factor 1α (HIF-1α) is activated by CoCl2 and triggers hypoxic cell death and autophagy [22,24]. We therefore further investigated whether CoCl2 induced autophagy and caused the loss of viability in rMC-1 cells. rMC-1 cells were treated with 3-methyladenine (3-MA), a selective autophagy inhibitor that suppresses the activity of class III PI3K (an autophagy initiator) and inhibits the early stage of autophagosome formation [25], before CoCl2 treatment for 24 hours. Viability of 3-MA pre-treated cells was higher when compared with the vehicle control upon hypoxic injury, while 3-MA treatment alone without hypoxia did not cause any significant difference in cell viability, indicating that CoCl2-induced autophagy contributed to hypoxic cell death in rMC-1 cells (S1 Fig). To further monitor the activation of autophagy, immunofluorescence analysis of autophagosomes and immunoblotting of microtubule-associated protein light chain 3 (LC3) were performed [26]. LC3 comprises two forms, the cytosolic LC3I and the lipidated LC3II. During the onset of autophagy, cytosolic LC3I is modified into LC3II and localized on the double membrane of the autophagosome; thus, levels of LC3II are closely associated with the abundance of autophagosomes [27]. In our experiments, protein expression of LC3II was determined after CoCl2 treatment; it was activated at 2 hours after hypoxic injury and increased gradually until 24 hours (Fig 5B). Yet, this was reversed by the application of 20 µM lutein at 24 hours (Fig 5A and 5B), indicating that lutein administration was able to attenuate the autophagic event upon CoCl2 treatment. Next, we examined autophagosome formation after 24 hours of CoCl2 exposure. Immunofluorescence analysis revealed green punctate vacuoles in vehicle-treated cells, and the decreased number of autophagosome-related vacuoles upon lutein administration is in agreement with the attenuated LC3II expression. Accordingly, these data illustrate that CoCl2 induced autophagy, as demonstrated by increased LC3II expression and autophagosome formation, and that both could be reversed by 20 µM of lutein at 24 hours.
Lutein attenuated autophagosome formation through regulating the mTOR-mediated pathway
Up-regulation of the autophagic marker by hypoxic injury can be due either to an increase of autophagic flux or to reduced autolysosome degradation. Treatment of rMC-1 cells with the lysosomotropic agent ammonium chloride (NH4Cl) resulted in an increased protein expression of LC3II, showing that blockade of autophagosome-lysosome fusion led to accumulation of autophagosomes (Fig 6A, lane 2). When cells were co-treated with CoCl2 and NH4Cl for 24 hours, LC3II expression was further elevated when compared with cells treated with CoCl2 alone (Fig 6A, lane 4). These data show that the increased level of LC3II upon CoCl2 is due to enhancement of the autophagic flux rather than reduced clearance of autophagosomes. Furthermore, co-treatment with CoCl2 and NH4Cl for 24 hours produced an accumulation of the punctate form of LC3 in rMC-1 cells (Fig 6B). To further investigate whether lutein is involved in attenuating the flux of autophagosome formation, rMC-1 cells were treated with lutein together with NH4Cl and CoCl2 for 24 hours. LC3II expression as well as the percentage of LC3-punctate cells was significantly decreased when compared with those without lutein treatment, suggesting that lutein reduced the autophagosomal flux in CoCl2-induced autophagy in rMC-1 cells (Fig 6A, lane 5 and 6C).
A previous study reported that treatment with CoCl2 triggered an inhibition of the mTOR pathway, leading to autophagy in cardiomyoblasts [22]. To examine the involvement of the mTOR-associated pathway upon CoCl2-induced hypoxia, the protein expression of phosphorylated AMP-activated protein kinase (AMPK), a key energy sensor that functions as a negative regulator of mTOR, was detected [28]. Phosphorylation of AMPK was remarkably up-regulated upon CoCl2 treatment for 24 hours, suggesting that AMPK phosphorylation is triggered upon hypoxia in rMC-1 cells (Fig 7A). Furthermore, activation of AMPK caused by CoCl2 treatment for 24 hours led to suppression of the phosphorylation of mTOR, p70S6K and ULK1 (Ser757), strengthening the notion that inhibition of the mTOR pathway was induced upon CoCl2 treatment (Fig 7B, 7C and 7D). However, the phosphorylation of both mTOR and p70S6K was restored in lutein-treated cells (Fig 7B and 7D), depicting the mTOR pathway as a possible route of the lutein-mediated anti-autophagic effect. To further validate this finding, rMC-1 cells were treated with rapamycin, which has been used to inhibit mTOR and induce autophagy [26]. Activation of rapamycin-induced autophagy was confirmed by the protein expression of LC3II (Fig 7E). When rMC-1 cells were exposed to both rapamycin and chloroquine in the presence of lutein, LC3II expression was remarkably attenuated to nearly the basal level of chloroquine-induced LC3II (Fig 7E, lane 7). Together, these results suggest that lutein attenuates the autophagic flux of mTOR-mediated autophagy in rMC-1 cells.
Discussion
Retinal ischemia/reperfusion (I/R) injury is associated with different ocular diseases such as diabetic retinopathy, leading to irreversible neuronal death. Lutein belongs to the family of xanthophylls and is known as a potent anti-oxidant; it protects the macula from the damage of high-energy blue light [12]. Our previous studies demonstrated the neuroprotective effects of lutein in a retinal ischemia animal model [1,21,29]. We found that lutein protected retinal neurons and attenuated Műller cell gliosis after I/R injury through its anti-oxidative and anti-inflammatory properties [1,21]. Moreover, lutein could improve the viability of retinal neurons under CoCl2-induced hypoxia in vitro [19], and its protective effects were also observed in other in vitro models [30-32]. In the present study, lutein increased cell viability and decreased LDH release after CoCl2 treatment. Lutein also attenuated the number of TUNEL-positive cells upon hypoxia, in agreement with our previous study showing that lutein reduced apoptotic nuclei after retinal I/R injury [1].
CoCl2 has been commonly used in vitro to mimic the hypoxic/ischemic condition. It induces reactive oxygen species (ROS) generation and leads to cell death, and it has been reported that ROS induced by retinal ischemia results in retinal cell death [17]. Although the CoCl2 model only partly mimics the hypoxic condition, our earlier study clearly showed the protective effects of lutein in Műller cells in vitro [21]. Another commonly used hypoxic model is oxygen-glucose deprivation (OGD), achieved by culturing cells in glucose-free medium under a low-oxygen environment [20].
Several studies have shown that CoCl2 can alter the protein expression of the Bcl-2 family [33] and trigger caspase-cascade apoptosis [34]. In our study, CoCl2 treatment suppressed the expression of the anti-apoptotic proteins Bcl-2 and Bcl-XL. Although the expression of the pro-apoptotic protein Bax remained unaffected, there was an increase of the Bax/Bcl-2 ratio, an indicator of the intrinsic apoptotic pathway [35], followed by activation of cleaved caspase 3, suggesting that CoCl2-induced hypoxia triggered this pathway. Our previous findings demonstrated that 20 µM of lutein effectively suppressed the release of pro-inflammatory factors upon CoCl2-induced hypoxic challenge in rMC-1 cells [21]. In the present study, we showed that treatment with 20 µM of lutein also enhanced Bcl-2 protein expression and abrogated the Bax/Bcl-2 ratio after 24 hours of hypoxic injury. More importantly, lutein attenuated the cleavage of caspase 3, consistent with our previous finding using an animal model of retinal detachment [36], further strengthening the notion that lutein mediates cytoprotection through regulating the caspase-associated cascade.
The beneficial or detrimental effects of autophagy in ischemic injury remain controversial. However, activation of autophagy in the ganglion cell layer after retinal I/R injury has been documented, leading to ganglion cell death [4,37]. Most importantly, there is increasing evidence that inhibition of autophagy is protective in various in vivo I/R models [38-40]. Moreover, several in vitro studies have also demonstrated that suppression of both apoptosis and autophagy enhances cell survival [22,41]. Apart from apoptosis, CoCl2-induced hypoxia also triggered autophagy in rat cardiomyoblasts (H9c2) and a retinal ganglion cell (RGC-5) line [22,42]. One study reported that the anti-apoptotic protein Bcl-2 acts as a pro-survival protein and negatively regulates autophagy, indicating crosstalk between apoptosis and autophagy [43]. Here, we also showed that CoCl2 induces not only apoptosis but also autophagy in rMC-1 cells. Inhibition of autophagy using a pharmacological approach with 3-MA prevented the loss of viability against CoCl2-induced hypoxic injury, suggesting that the autophagic event contributed to CoCl2-induced cell death. Our findings also revealed that the protein level of Bcl-2 was suppressed by CoCl2 treatment but enhanced in lutein-treated cells, suggesting that restoration of the pro-survival protein Bcl-2 might partly contribute to the protective effects of lutein against both apoptosis and autophagy. In addition, expression of the lipidated form LC3II and of autophagic vacuoles was attenuated in lutein-treated rMC-1 cells, suggesting that lutein might be involved in alleviation of autophagy, given that LC3II abundance is associated with the formation of autophagosomes [27]. Moreover, accumulation of LC3-punctate cells was diminished in the lutein-treated groups, further supporting that lutein reduced autophagosome formation by reducing the autophagic flux.
It is known that mTOR is an important regulator of cell growth and metabolism; disruption of mTOR activity might contribute to cell death [44]. More importantly, mTOR negatively regulates autophagy through the phosphorylation of ULK1 (Ser757), preventing the formation of the ULK complex and subsequently inhibiting autophagosome formation [28,45]. AMPK is an essential energy sensor that helps to maintain cellular homeostasis, and it negatively regulates mTOR and hence induces autophagy [28]. Our findings clearly showed that the phosphorylated form of AMPK was up-regulated in CoCl2-treated rMC-1 cells while the phosphorylation of mTOR and its substrate p70S6K was inhibited, suggesting that abrogation of mTOR activity is associated with CoCl2-induced autophagy. Phosphorylation of ULK1 (Ser757) was also suppressed upon CoCl2 treatment, indicating that CoCl2 triggers autophagy through inhibition of the mTOR pathway and de-phosphorylation of ULK1. Lutein administration restored the phosphorylation of mTOR and p70S6K, indicating that reactivation of the mTOR pathway may be associated with the lutein-mediated anti-autophagic protection. Rapamycin is commonly used as an inhibitor of the mTOR pathway; it suppresses the inhibitory function of mTOR, leading to the formation of the ULK complex, and activates autophagy. Here, rapamycin treatment triggered LC3II expression in rMC-1 cells, and the autophagosome-associated LC3II further accumulated after impairment of lysosomal degradation by chloroquine, as indicated by the increased number of green fluorescence-labeled autophagic vesicles. We further found that lutein effectively attenuated the protein expression of LC3II in rapamycin and chloroquine co-treated cells, suggesting that lutein is involved in mTOR-associated autophagy.
We, together with others, have reported the anti-apoptotic, anti-oxidative and anti-inflammatory effects of lutein in various in vivo and in vitro models of retinal diseases [16,21,46-49].
Here, we further strengthened the evidence for the anti-apoptotic effects of lutein and its possible mechanistic pathway in protecting retinal cells using a chemically induced hypoxic cell model. Lutein may therefore be beneficial in eye diseases in which apoptotic cell death has been reported, such as glaucoma, diabetic retinopathy and retinopathy of prematurity [50-52]. Most importantly, we discovered a novel protective effect of lutein in preventing autophagosome formation upon hypoxia-induced autophagy. Suppression of the flux of autophagosome formation can thus protect retinal cells from hypoxic damage. In fact, modulation of autophagy has become a therapeutic target for the treatment of ocular diseases such as glaucoma, diabetic retinopathy and age-related macular degeneration (AMD) [4,53,54]; however, the safety and side effects of anti-autophagic drugs have raised concerns [55]. Novel anti-autophagic drugs that are more specific to the autophagic pathway and clinically safer are therefore emerging. Several studies have documented that mTOR can be a therapeutic target in various ischemic diseases [56-58]. A recent proteomics study reported that the mTOR pathway was suppressed after retinal I/R injury, further delineating the protective role of mTOR [59]. It has been reported that increasing mTOR phosphorylation was neuroprotective against ischemic brain injury [58], which prompts us to speculate that lutein protects retinal Műller cells not only from hypoxia-induced apoptosis but also from autophagy, possibly by targeting the mTOR-associated pathway and improving cell survival against hypoxic injury.
Műller cells are the major glial cell type in the retina; they maintain homeostasis and protect retinal ganglion cells from neurotoxicity [60]. Our previous studies using a retinal ischemia animal model demonstrated that lutein protected against retinal neuronal cell death as well as Műller cell gliosis [1,21]. Together with our earlier studies [1,19,21], the present study prompts us to speculate that lutein protects against ganglion cell loss, possibly through attenuating apoptosis and autophagy in glial cells. Taken together, lutein has been shown to be safe as a daily supplement for ocular health with minimal side effects, and its therapeutic potential as a clinically safe protectant is strongly indicated.
Fig 5. Lutein protected rMC-1 cells from CoCl2-induced autophagy. Western blot analysis of the expression of an autophagic marker, LC3II, along with densitometric quantification (normalized by β-actin). (A) rMC-1 cells were exposed to CoCl2 (300 µM) together with various concentrations of lutein for 24 hours. LC3II expression was up-regulated upon CoCl2-induced hypoxia and attenuated by 20 µM of lutein. (B) Expression of LC3II was up-regulated 2 hours after CoCl2 treatment and continued to increase at the remaining time points. Lutein was able to attenuate LC3II expression at 24 hours after hypoxic challenge. (C) Autophagosome formation was monitored in rMC-1 cells 24 hours after CoCl2 treatment. Both normal control and treated cells were stained with Cyto-ID® green dye and Hoechst 33342. Representative fluorescence microscopy images exhibited a decreased number of green fluorescence-labeled autophagosomes in lutein-treated cells when compared with that in vehicle-treated cells. Rapamycin and chloroquine co-treatment of rMC-1 cells was used as the positive control to identify the presence of autophagosomes in the cells. (D) Quantification of Cyto-ID® green-positive cells. Data are presented as the percentage of Cyto-ID® green-positive cells over the total number of cells counted. n = 4 in each group. ** P < 0.01, *** P < 0.001 versus normal control group; # P < 0.05, ## P < 0.01 versus vehicle-treated group. Scale bar, 50 µm.
Fig 6. Lutein protected rMC-1 cells from CoCl2-induced autophagy through reduction of autophagic flux. rMC-1 cells were treated with CoCl2 in the presence or absence of an autophagic flux inhibitor, ammonium chloride (NH4Cl), for 24 hours. (A) Western blotting and densitometric quantification revealed that LC3II expression was activated after CoCl2 treatment in the presence of NH4Cl (lane 4), while lutein was able to attenuate this LC3II accumulation (lane 5). (B) Representative fluorescence microscopy images showed an increased number of LC3-punctate cells in CoCl2 and NH4Cl co-treated cells. Administration of lutein alleviated the punctate cells upon CoCl2 treatment. (C) Quantification of the number of LC3-punctate cells. Data are presented as the percentage of LC3-punctate cells over the total number of cells counted. n = 5 in each group. * P < 0.05, *** P < 0.001 versus normal control group; ## P < 0.01 versus NH4Cl and CoCl2 co-treatment group; # P < 0.05 versus vehicle-treated group. Scale bar, 50 µm.
Fig 7. The anti-autophagic property of lutein involved the mTOR-mediated autophagy pathway. (A) Western blotting and densitometric analysis showed that phosphorylated AMPK was up-regulated in CoCl2-treated cells. (B-D) Protein levels of mTOR-associated proteins including P-mTOR, P-p70S6K and P-ULK1 (Ser757) were measured by Western blotting (normalized by β-actin) and quantified by densitometry. 20 µM of lutein was able to restore the phosphorylation levels of P-mTOR and P-p70S6K upon CoCl2-induced hypoxia. (E) Rapamycin was used to induce autophagy, and chloroquine was added to block the formation of autolysosomes upon rapamycin-mediated autophagy. Densitometric analysis showed that LC3II protein expression was up-regulated in rapamycin-induced autophagy and accumulated in the presence of chloroquine (lane 6). Lutein treatment was able to decrease LC3II expression in rMC-1 cells upon rapamycin and chloroquine co-treatment (lane 7). n = 5 in each group. * P < 0.05, ** P < 0.01, *** P < 0.001 versus normal control group; # P < 0.05, ## P < 0.01 versus lutein-treated group; # P < 0.05 versus rapamycin and chloroquine co-treatment group. | 2018-04-03T01:40:35.560Z | 2016-12-09T00:00:00.000 | {
"year": 2016,
"sha1": "cde7847f78ffb8fc528c4a6350c6c2bda762e849",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0167828&type=printable",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "8a689879778a5a6a1948dfed1842f03d1176cb61",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
257046023 | pes2o/s2orc | v3-fos-license | Circulator function in a Josephson junction circuit and braiding of Majorana zero modes
We propose a scheme for the circulator function in a superconducting circuit consisting of a three-Josephson-junction loop and a trijunction. In this study we obtain the exact Lagrangian of the system by deriving the effective potential from the fundamental boundary conditions. We subsequently show that we can selectively choose the direction of the current flowing through the branches connected at the trijunction, which performs a circulator function. Further, we use this circulator function for a non-Abelian braiding of Majorana zero modes (MZMs). In the branches of the system we introduce pairs of MZMs which interact with each other through the phases of the trijunction. The circulator function determines the phases of the trijunction and thus the coupling between the MZMs to give rise to the braiding operation. We modify the system so that the MZMs can be coupled to external ones to perform qubit operations in a scalable design.
In a scalable architecture the trijunctions need to couple with each other to form a lattice structure. We thus consider an improved design where the trijunction is located outside of the loop, as shown in Fig. 1b, which is topologically equivalent to the design in Fig. 1a. Further, we can use the circulator function to realize the braiding of Majorana zero modes (MZMs) 9,10 for topological quantum computing 11,12 . Topologically protected quantum processing is expected to provide a path towards fault-tolerant quantum computing. Since quantum states are susceptible to environmental decoherence, protection from local perturbation is an emergent challenge for quantum information processing. Non-Abelian states are the building block of topological quantum computing, carrying nonlocal information. The nonlocally encoded quantum information is resilient to local noises and, if the temperature is smaller than the excitation gap, the thermal excitation rate is exponentially suppressed. Majorana zero modes, γ, are predicted to exhibit non-Abelian exchange statistics, and they are self-adjoint, γ† = γ, in contrast to ordinary fermion operators. Theoretical proposals have attracted a great deal of attention to realizing MZMs in condensed matter systems. MZMs are predicted to emerge in ν = 5/2 fractional quantum Hall states 11,13 , p-wave superconductors 14,15 , and one- or two-dimensional semiconductor/superconductor hybrid structures 16 . The branches in our scheme for braiding contain semiconductor/superconductor hybrid structures with p-wave-like superconductivity induced from s-wave superconductors via the proximity effect.
In two-dimensional spinless p + ip topological superconductors, MZMs are hosted in vortices or in the chiral edge modes as localized zero-energy Andreev bound states at the Fermi energy. The p-wave-like superconductivity can be induced from s-wave superconductors via the proximity effect in a hybrid structure 17 . A semiconductor thin film with Zeeman splitting and proximity-induced s-wave superconductivity has been expected to be a suitable platform for hosting MZMs 18 . On the other hand, the one-dimensional semiconducting nanowire has also been shown to provide MZMs at the ends of the nanowire 19 . The MZMs should be prepared, braided, and fused to implement qubit operations. In a one-dimensional wire the braiding is not well defined, which can be overcome in a wire network with a trijunction. However, the original scheme 17 with a Josephson trijunction has not yet been explored.
Recently, experimental evidence of an MZM in a trijunction has been reported 20 . The nanowire trijunctions are manipulated by the chemical potential 21 , the charging energy 22 , and the phase 23 . In the present study a pair of MZMs can be introduced in each branch near the trijunction of Fig. 1b. Three MZMs, one from each pair, are coupled through the Josephson junctions with phase differences ϕ′_1, ϕ′_2, and ϕ′_3 in the system. The three-Josephson-junction loop controls the selective coupling among the three MZM pairs. By applying a threading flux into one of the loops of Fig. 1b we can use the circulator function to control the phases ϕ′_i, and thus the couplings among the MZMs in the trijunction, to perform the braiding operation and, further, quantum gate operations. In contrast to the previous phase-modulation scheme 23 , which tries to switch off the current mediated by MZMs located inside the loop, the present proposal uses the circulator function to perform braiding operations. Further, our scheme enables the interaction between MZMs outside the loop, so that we may provide a scalable design in a one- or two-dimensional lattice system for coupling between MZMs that belong to different trijunctions.
Results
Three-Josephson junction loop with a trijunction. The precise fluxoid quantization condition of a superconducting loop reads −Φ_t + (m_c/q_c) ∮ v_c · dl = nΦ_0, with v_c the average velocity of the Cooper pairs, q_c = 2e the Cooper-pair charge, and m_c = 2m_e the Cooper-pair mass 24,25 . The total magnetic flux Φ_t threading the loop is the sum of the external and the induced flux, Φ_t = Φ_ext + Φ_ind. With the superconducting flux quantum Φ_0 = h/2e we introduce the reduced fluxes f_t = Φ_t/Φ_0 = f + f_ind, with f = Φ_ext/Φ_0 and f_ind = Φ_ind/Φ_0, expressing the fluxoid quantization condition as kl = 2π(n + f_t), with l the circumference of the loop, k the wave vector of the Cooper-pair wavefunction, and n an integer.
The scheme in Fig. 1a consists of a three-Josephson-junction loop and three small loops with threading fluxes f_i = Φ_ext,i/Φ_0. The fluxoid quantization conditions around the three loops, including the phase differences ϕ_i and ϕ′_i across the Josephson junctions, are represented as periodic boundary conditions 26,27 , where k_i, l, and l′ are the wave vectors of the Cooper pairs, the length of the three-Josephson-junction loop, and the length of the three branches, respectively, and the m_i are integers. Here the ϕ_i are the phase differences of the Josephson junctions in the three-Josephson-junction loop and the ϕ′_i are the phase differences of the trijunction, whose positive direction, we choose, is clockwise as shown in Fig. 1a. Which branches carry current, while the others do not, is determined by threading a flux f_i into a specific loop.
The induced flux f_ind,1, for example, can be expressed in terms of the circulating currents so as to recast the boundary conditions. In the system of Fig. 1a the three Josephson junctions with phases ϕ′_i compose a trijunction which satisfies the periodic boundary condition ϕ′_1 + ϕ′_2 + ϕ′_3 = 2πn′ with an integer n′. By using this condition and summing the three boundary-condition equations we can check that the boundary condition for the three-Josephson-junction loop can be expressed with an integer n, which can also be derived directly from the fluxoid quantization condition. For the superconducting branches the effective inductances are defined as L_eff ≡ L_K + L_s and L′_eff ≡ L_K + L_s + 9(L′_K + L′_s). Here and after, the indices i are modulo 3; for example, i + 1 = (i + 1) mod 3.
The dynamics of a Josephson junction is described by the capacitively-shunted model, where the current relation is given by I = −I_c sin ϕ − C V̇ = −I_c sin ϕ − C(Φ_0/2π)ϕ̈, with the critical current I_c, the capacitance C of the Josephson junction, and the voltage-phase relation V = (Φ_0/2π)ϕ̇, with the Josephson coupling energy E_J = Φ_0 I_c/2π and the current I = −(n_c A q_c/m_c)ℏk. From the Lagrangian, by using the quantum Kirchhoff relation, the equation of motion can then be derived; the effective potential consists of the inductive energies of the loops and the Josephson junction energies, with E′_J the Josephson junction energy of the trijunction. We can easily check that the effective potential U_eff({ϕ_i, ϕ′_i}) satisfies the equation of motion in Eq. (11) for φ_i = ϕ_i with the k_i in Eq. (9). The kinetic inductance L_K is much smaller than the geometric inductance L_s. For the usual parameter regime of the three-Josephson-junction qubit, L_K/L_s ∼ O(10⁻³) 30 , so that we can approximate the effective inductances as L_eff ≈ L_s; the phase variables ϕ′_i should also satisfy the quantum Kirchhoff relation. In Fig. 1a we consider the currents Ĩ_i across the Josephson junctions with phases ϕ′_i and the currents I′_i flowing in the branches, where the direction of Ĩ_i is counterclockwise (see the Supplementary Information), together with the current conservation relation at the nodes.

Limiting case. In the system of Fig. 1a we can consider the limit in which the length of the branches goes to zero, l′ → 0, so that the two nodes at either end of a branch collapse to a point. As a result, we have three loops with geometric inductance L_s/3 which meet at the trijunction. In this limit Eq. (12) reduces to a form that describes the inductive energies of three loops with geometric inductance L_s/3 and the Josephson junction energies 25,31,32 , complying with the intuitive picture.
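As a quick numerical illustration of the capacitively-shunted junction dynamics above, the sketch below integrates the dimensionless phase equation of motion for a single current-biased junction. This is a minimal toy model, not the full equation of motion of Eq. (11); the bias value and initial conditions are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Dimensionless capacitively-shunted junction (time in plasma periods):
# phi'' = i_b - sin(phi). Undamped toy model; the paper's full equation
# of motion couples several phases through the effective potential.
i_b = 0.5  # bias current in units of I_c (illustrative assumption)

def rhs(t, y):
    phi, dphi = y
    return [dphi, i_b - np.sin(phi)]

sol = solve_ivp(rhs, (0.0, 50.0), [0.0, 0.0], max_step=0.05)
# For i_b < 1 and these initial conditions the phase stays trapped,
# oscillating around arcsin(i_b): a zero-voltage state of the junction.
print("final phase:", sol.y[0, -1])
```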
Circulator function. In order to perform NISQ computing we need to construct a scalable design with the circulator function, where the trijunctions are connected to others and the current directions can be controlled in situ in the circuit. However, in the design in Fig. 1a the trijunction is inside the loop, so it is not possible to couple the branches with others outside. Hence we consider an improved design where the trijunction is located outside of the loop, as shown in Fig. 1b. In the Supplementary Information we show an archetype for a scalable design. Actually the inner branches and the trijunction are turned over, but the design is topologically equivalent to the design in Fig. 1a. Here the length l̄ of the central branch is no longer equal to the others. We then introduce more general boundary conditions for the scheme in Fig. 1b, including the phase differences across the Josephson junctions, with integers m_i. The boundary condition in Eq. (15) describes the outmost loop containing the Josephson junctions with phase differences ϕ_1 and ϕ′_1, and the conditions in Eqs. (16) and (17) the left and right loops in Fig. 1b. With the geometric and kinetic inductances L_s and L_K = m_c l̄/(A n_c q_c²) for the central branch, respectively, the induced fluxes give rise to relations similar to those in Eqs. (5), (6) and (7), where k′_1 l′ is replaced with k′_1 l̄. From these relations, in conjunction with the relations in Eq. (8), we can similarly calculate k_i and k′_i with i = 1, 2, 3 in terms of ϕ_i and ϕ′_i (see the Supplementary Information). In order to induce a current flowing between the branches across ϕ′_1, we initially apply the flux Φ_ext,1 so that f_1 = Φ_ext,1/Φ_0 = f, but f_2 = f_3 = 0. We can then easily check that the resulting effective potential satisfies the equations of motion in Eqs. (11) and (13), reducing to that in Eq. (12) for the system in Fig. 1a with f_1 = f and f_2 = f_3 = 0. Figure 2 shows the effective potential for the design in Fig. 1b, which is qualitatively similar to that for the model in Fig. 1a.
We introduce a coordinate transformation to the variables (ϕ_p, ϕ_m). The effective potential in Eq. (19) can then be expressed in these coordinates. Figure 2a shows the effective potential U_eff as a function of (ϕ_p, ϕ_m) for m_1 = m_2 = m_3 = n = n′ = 0, minimized with respect to ϕ′_p, ϕ′_m and ϕ_1. If the value of the external flux is f = 0.5, two degenerate current states, clockwise and counterclockwise, are superposed, so that we cannot determine the direction of the current. We thus set the value of the external flux to f = 0.42 to obtain a stable minimum. The effective potential U_eff(ϕ_p, ϕ_m) along the dotted line in Fig. 2a is shown in Fig. 2b, where U_eff(ϕ_p, ϕ_m) has a minimum at ϕ_p/2π ≈ 0.124. Figure 2c shows the profile of the effective potential U_eff(ϕ_p, ϕ_m) as a function of ϕ_m for ϕ_p/2π ≈ 0.124. Here the effective potential has its minimum at ϕ_m = 0, i.e., ϕ_2 = ϕ_3. Figure 2d shows that ϕ′_m = 0, i.e., ϕ′_2 = ϕ′_3, at the minimum of the effective potential U_eff(ϕ_p, ϕ_m). From Eqs. (4) and (9) we can see that k_2 = k_3 and thus I_2 = I_3, and from Eq. (10) that k′_1 = 0 and thus I′_1 = 0, which is consistent with the current conservation I_3 − I_2 = I′_1 = 0 in Eq. (8). Hence, in Fig. 1b we can determine the direction of the current such that I′_3 = −I′_2 ≠ 0 and I′_1 = 0. The analogous cases follow by threading the flux into the other loops. Hence we can selectively determine the direction of the currents flowing through a trijunction by threading a magnetic flux into a specific loop in the design of Fig. 1b, which can realize the circulator function in a scalable design.
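A minimal numerical sketch of this flux-controlled current selection is given below for a bare three-junction loop: the fluxoid constraint fixes the sum of the phases, and minimizing the Josephson energy then yields a circulating current whose direction follows the sign of the applied flux. The reduced potential used here is a simplified stand-in for the full U_eff of Eq. (19), and the symmetric junction parameters are assumptions of the toy model.

```python
import numpy as np
from scipy.optimize import minimize

def loop_currents(f):
    """Minimize the Josephson energy (units of E_J) of a bare three-junction
    loop with fluxoid constraint phi1 + phi2 + phi3 = 2*pi*f (toy model),
    and return the junction currents in units of I_c."""
    U = lambda x: -(np.cos(x[0]) + np.cos(x[1])
                    + np.cos(2 * np.pi * f - x[0] - x[1]))
    p1, p2 = minimize(U, x0=[0.1, 0.2]).x
    p3 = 2 * np.pi * f - p1 - p2
    return np.sin([p1, p2, p3])

print(loop_currents(0.42))   # uniform circulating current in the loop
print(loop_currents(-0.42))  # direction reversed with the flux sign
```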
Braiding of Majorana zero modes.
We can use the circulator function for the braiding of Majorana zero modes (MZMs) for topological quantum computing. As shown in Fig. 4a, we introduce three pairs of MZMs in the semiconducting nanowire with p-wave-like superconductivity induced from the s-wave superconducting branch via the proximity effect. For quantum computing, a scheme for quantum gate operations should be provided. Hence we consider the system of Fig. 1b, because in the system of Fig. 1a the MZMs are inside the loop, so that they cannot interact with MZMs outside 23 .
In Fig. 3a we show the currents I′_1 = I_3 − I_2, I′_2 = I_1 − I_3, and I′_3 = I_2 − I_1 of the system in Fig. 1b as a function of f_1 − f_2. If f_1 = f = 0.42 with f_2 = f_3 = 0, the current direction is determined such that I′_1 = 0 but I′_3 = −I′_2 ≠ 0. In this case the current flows between the branch with γ_2 and the branch with γ_3. This is the initial state of the system shown in Fig. 4b, where the three MZMs γ′_1, γ′_2 and γ′_3 are tunnel-coupled with each other through a Hamiltonian 22,23 with Majorana Josephson energy E_M and coupling energy α. The current carried through the MZMs across the trijunction is then given by Eq. (21), with a 4π-periodic behavior 33 . Actually we have ϕ′_1/2π ≈ 0.246 and ϕ′_2/2π = ϕ′_3/2π ≈ −0.123 at the minimum of the effective potential U_eff(ϕ_p, ϕ_m) in Fig. 2a. Then the current I_1 has a larger amplitude than I_2 = I_3, as shown in Fig. 3b, which is denoted by a solid (dotted) line for I_1 (I_2 and I_3) in the trijunction of Fig. 4b. As shown in Eq. (22), the current mediated by the MZMs is I_i ∝ sin(ϕ′_i/2), while the Cooper-pair current is Ĩ_i ∝ sin ϕ′_i. If we consider a simplified model in which the Josephson junctions in the three-junction loop of Fig. 1a are removed, as in the previous study 23 , the boundary condition becomes approximately ϕ′_i − 2πf_i ≈ 0. Here, even if we set f_i = 0.5 and thus ϕ′_i ≈ π, we cannot switch off the current mediated by the MZMs, since I_i ≠ 0 while Ĩ_i ≈ 0. Hence, instead of switching off I_i, we change the current direction by using the circulator function to perform the braiding operation.
In general, for f_i = 0.42 with f_{i±1} = 0 we have ϕ′_i/2π ≈ 0.246 and ϕ′_{i±1}/2π ≈ −0.123. The different phases are due to the current direction, resulting in the asymmetry in the amplitudes of I_i at the trijunction. In the next stage we adiabatically apply the flux f_3 while decreasing f_1 (see Eq. (S27) of the Supplementary Information for general f_i). In Fig. 3a, then, |I′_1| increases while |I′_3| decreases. Meanwhile, |I′_2| decreases to zero and then grows to its maximum value. Finally, for f_3 = 0.42 with f_1 = f_2 = 0, we have I′_3 = 0 but I′_1 = −I′_2 ≠ 0. Hence the current direction is changed: the current flows between the branch with γ_1 and the branch with γ_2, but there is no current in the branch with γ_3, as shown in Fig. 4c; meanwhile, the green MZM loses its weight in γ_3 and gains weight in γ_1. Here the current I_3 has a larger amplitude than I_1 = I_2, and thus the asymmetry in the amplitudes of I_i is changed. In this way, between t = τ and t = 2τ, the yellow MZM loses its weight in γ_2 and gains weight in γ_3, as shown in Fig. 4d. At the last stage the green MZM loses its weight in γ_1 and gains weight in γ_2. As a result, the green and yellow MZMs are exchanged with each other, as shown in Fig. 4e, completing the braiding operation.
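The weight transfer described above can be made concrete with a small sketch. For three Majoranas with pairwise couplings, H = (i/4) Σ_kl A_kl γ_k γ_l with A real and antisymmetric; a 3×3 antisymmetric matrix always has a null vector, and that null vector gives the weights of the surviving zero mode on γ_1, γ_2, γ_3. The coupling values and the linear ramp below are illustrative assumptions standing in for the flux-controlled couplings of the text, not the actual flux-to-coupling map of the device.

```python
import numpy as np

def zero_mode(d1, d2, d3):
    """Zero-mode weights on (gamma1, gamma2, gamma3) for pairwise couplings
    d1 (gamma2-gamma3), d2 (gamma3-gamma1), d3 (gamma1-gamma2)."""
    A = np.array([[0.0,  d3, -d2],
                  [-d3, 0.0,  d1],
                  [ d2, -d1, 0.0]])
    _, s, vt = np.linalg.svd(A)   # null vector = singular vector of the
    v = vt[np.argmin(s)]          # (numerically) zero singular value
    return v / np.linalg.norm(v)  # analytically v is proportional to (d1, d2, d3)

# Ramp the couplings: start with the (gamma2, gamma3) pair dominant, so the
# zero mode sits mostly on gamma1; end with (gamma1, gamma2) dominant, so
# the weight has migrated to gamma3.
for t in np.linspace(0.0, 1.0, 5):
    w = zero_mode(d1=1.0 - t, d2=0.2, d3=0.2 + 0.8 * t)  # illustrative ramp
    print(f"t={t:.2f}  weights on (g1, g2, g3):", np.abs(w).round(3))
```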
In Fig. 5 we show an architecture for a scalable design of a superconducting circuit with MZMs. Two MZMs belonging to different trijunctions (the green box in Fig. 5) can be coupled or fused to perform quantum gate operations and quantum measurements. For the green-box operation, for example, we can introduce a gate voltage applied to the sector between the two MZMs to control the chemical potential of the nanowire 34 . Though the system in Fig. 5 is one-dimensional, we can extend it to a two-dimensional lattice straightforwardly.
Discussion
In conclusion, we proposed a scheme for the circulator function in a superconducting circuit consisting of three small loops and branches which meet at a trijunction. Usually the effective potential in the Hamiltonian of a superconducting circuit is obtained phenomenologically. In this study, however, we obtained the boundary conditions from the fundamental fluxoid quantization condition for the superconducting loop to derive the effective potential of the system analytically, which is required for accurate and systematic studies of quantum information processing applications. We expect that this kind of study can be applied to other systems.
At the minimum of the effective potential we can see that two branches carry current while the other does not. By applying a magnetic flux into one of the loops we can determine which branches among the three carry the current, achieving the circulator function. For NISQ computing we need to perform the circulator function in a scalable design. We thus introduced an improved model where the trijunction is extracted from the outmost loop to interact with other external branches. For the improved design we obtained the ground state of the system from the effective potential and showed that it can perform the circulator function in the trijunction loop.
Instead of switching off the current mediated by MZMs, as in the previous study, in this study we selectively choose the current directions to give rise to MZM braiding. We thus use the circulator function to achieve a non-Abelian braiding operation by introducing three pairs of MZMs in the branches that meet at a trijunction in the improved model, where the MZMs are introduced outside of the loop. The circulator function determines the phases of the trijunction and thus the coupling between the MZMs. Initially we apply a magnetic flux into one of the three loops to selectively couple two pairs of MZMs. By adiabatically applying a flux into another loop while decreasing the previous flux, we are able to transfer the weight of an MZM from one branch to another. Consecutive executions in this way can perform the braiding operation between two MZMs. This scheme could be extended to a scalable design to implement braiding operations in one- or two-dimensional circuits. | 2023-02-21T14:52:27.718Z | 2021-01-19T00:00:00.000 | {
"year": 2021,
"sha1": "c77f1e70b636e5a1ee536b009840167ccc3af69d",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-021-81503-1.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "c77f1e70b636e5a1ee536b009840167ccc3af69d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
8959199 | pes2o/s2orc | v3-fos-license | Field enhancement in a circular aperture surrounded by a single channel groove
Numerical analysis of diffraction by a single aperture surrounded by a circular shallow channel in a metallic screen shows the possibility of a 50-fold increase of the electric field intensity inside the central aperture, when compared to the incident field. A detailed analysis of the cavity modes and their coupling through the surface plasmon wave determines the parameters leading to maximum field enhancement. This effect can be used in high-efficiency single-molecule fluorescence analysis in attoliter volumes.
Introduction
Light transmission through single or periodically arranged apertures has attracted the interest of scientists, at first due to the possibility to confine the electromagnetic field into a relatively small region, smaller than the diffraction limit in free space, thus generating evanescent wavevectors and leading to increased resolution in optical microscopy [1][2][3]. Even the early studies have shown [2] that the transmission values significantly exceed scalar theoretical predictions [4,5]. The observation made by Ebbesen et al. [6] that in a periodically arranged array of apertures the transmission enhancement can be almost 100 times stronger was sufficient to attract the special attention of the scientific community. Although the role of surface plasmon excitation by the periodicity of the geometry was already invoked in this work, it generated many theoretical and numerical studies aiming to explain this phenomenon. While the first attempts were made by use of models with one-dimensional (1-D) periodicity (classical lamellar gratings), it was necessary to apply a two-dimensional periodicity analysis to better understand the complexity of the process. In a few words, while the periodical arrangement of the apertures is responsible for the surface plasmon excitation on both sides of the metallic sheet, the tunnelling of the field inside the apertures is made through the fundamental waveguide mode supported by the hollow metallic waveguide inside the holes [7,8]. While 1-D slits can support the TEM mode, which has no cut-off, this mode cannot propagate in 2-D apertures (coaxial apertures being the exception), and even the fundamental mode has a cut-off; thus the optical field is evanescent inside small apertures. A new application was born [9,10] that profits from the small cross-section area of single apertures; combined with the exponential decrease of the electromagnetic field inside them, this gives the possibility to significantly reduce the investigated volume (down to a few attoliters) for single-molecule probing in solutions with micromolar concentrations. In addition, and quite helpfully, when the fundamental mode is close to its cut-off, there is a substantial (6-7 fold) increase of the local field intensity inside the aperture [10,11] that contributes to enhancing the detected signal. This can be explained by the fact that close to the cut-off the mode group velocity tends to zero, so that its field is "accumulated" at the aperture opening.
As already mentioned for hole arrays, the periodicity of the structure can lead to significant local field enhancement by resonantly exciting surface plasmons. A single aperture can have the surrounding metal surface structured in a specific manner in order to better excite the surface plasmon [12,13]. Instead of simple periodicity, the structuring has to follow rules established in cylindrical geometry, where Bessel functions naturally substitute for the exponentials that serve as basis functions in Cartesian coordinates [14]. The first problem is that one ends up with relatively complex structures of quasi-periodic concentric grooves, which are difficult to reliably fabricate and control.
The second problem is that, from the point of view of practical applications, only the final excitation intensity per surface unit matters. With the aperture surrounded by concentric grooves [12,13], it is necessary that both the surface structure and the incident wave cover a diameter of about 4 times the wavelength. This limits the focusing of the incident light and the incoming intensity per surface unit (for a fixed input power). Therefore, even with large enhancements, the excitation intensity in the central aperture may not be significantly higher than the intensity reached with a tightly focused laser beam without the nanostructure. For instance, in the case of a bull's eye structure [12,13], the relative electromagnetic enhancement is about 100 (computed for an incoming plane wave), but the structure radius and thus the laser beam focus spot are about 4 times the diffraction limit (minimum focus size). If we compare the net intensity reinforcement to the case of a diffraction-limited spot, the intensity reached with the bull's eye is increased by 100 x (1/4)² = 6.25, which is about the reinforcement reached with a single (bare) aperture. Still, more complex nanostructures can further improve practical realizations by increasing the emission rate of molecules or modifying the emission radiation pattern to improve the detection efficiency.
The aim of this paper is to study the possibility of field enhancement by using the combined effect of simultaneous excitation of cavity resonances inside the aperture and in a surrounding channel groove, coupled through the surface plasmon wave. This triple interaction results in a 30-50 fold field enhancement in the opening of the central aperture. To this aim, it is sufficient to use a single channel of about one wavelength in diameter, enabling strong focusing of the incident beam down to the diffraction limit. Therefore, the net excitation intensity reinforcement in the central aperture is 30-50 fold as compared to the maximum intensity reached in a diffraction-limited focused spot. Moreover, a single channel is much simpler to fabricate than large-area quasi-periodic channels.
For the sake of completeness, we review in Sec.2 the waveguide mode excitation in a single aperture and its effect on the field enhancement. The modes in the coaxial channel are described in Sec.3, and their effect on the field in the central region is described in Sec.4, together with a theoretical and numerical analysis of the role played by the surface plasmon that can propagate along the metal-glass surface.
Numerical analysis is made using the differential method formulated in cylindrical coordinates in a Fourier-Bessel function basis [15], with an eigenvalue/eigenvector technique due to the piecewise invariance of the geometry along the z-axis.

Figure 1. Schematic representation of a screen with a circular aperture and a circular channel around it, made in an aluminum screen with thickness t = 200 nm. The cladding is glass and the substrate is water. The channel width is R_3 − R_2, its depth h, and it is filled with glass.
Intensity enhancement - effect of the channel
The structure under study is presented schematically in Fig. 1. It represents a 200 nm thick aluminum screen with a circular aperture filled with water, as is the substrate as well. The cladding is glass, which also fills a surrounding circular channel with depth h. The structure is illuminated by a plane wave incoming from the glass side, with wavelength λ = 488 nm and linear polarization along the x-axis. We chose that configuration in order to have the maximum intensity enhancement in the epi-configuration (illumination and detection from the glass side), with the metal structure restricting the observation volume to the inside of the aperture. This corresponds to the configuration leading to the best results in single-molecule fluorescence analysis [10]. Throughout this paper, we will consider the averaged intensity I_S = (1/S) ∫_S |E|² dS over the central aperture opening S, in a plane located at z = −5 nm inside the aperture, normalized in such a way that the incident electric field amplitude is equal to unity. For a single circular aperture, a sharp maximum is observed [10,11] when R_1 ≈ 75 nm, a value lying below the cut-off dimensions for the fundamental waveguide mode supported inside the hollow metallic waveguide formed by the aperture. The introduction of a channel around the aperture can multiply I_S by a factor of 6-9, as observed in Fig. 2, which presents the influence of the channel depth h on the field intensity inside the axial aperture, for R_2 = 200 nm and R_3 = 280 nm. The R_1, R_2 and R_3 values were chosen to obtain the maximum effect. The dependence of the enhancement factor on R_2 for three different values of the channel width R_3 − R_2 is presented in Fig. 3, and one observes a well-defined maximum around R_2 = 200 nm, a value explained in Sec. 4.1. In order to keep the investigated volume small, the depth of the channel has to be kept relatively small compared to the screen thickness; otherwise it is possible to enhance the transmission by tunneling through the channel bottom. When h = 0 (i.e., no channel), the enhancement is about 6-fold as compared to free space, while the channel can enhance it substantially. The large enhancement brought by the channel groove can seem strange, taking into account that there is no possibility that the excitation of the coaxial cavity resonances could directly influence the field in the central aperture by tunnelling through the metal walls, 125 nm thick in our case. Such coupling can happen only through the surface plasmon wave propagating along the metal-glass interface, as discussed in Sec. 4. The small depth of the channel leading to maximum field enhancement can be understood in analogy with 1-D periodical diffraction gratings, where deeper channels substantially modify the plasmon propagation constant and increase its losses [16,17]. The optimal excitation of surface plasmon waves along metallic gratings happens when the groove depth is about 10% of the wavelength [18], a value observed also for the circular channel in Fig. 2.
Given the optimal channel depth, the dependence of the enhancement factor on R_2 and R_3 is presented in Figs. 3 and 4. The optimal value of 200 nm for the radius R_2 of the inner channel wall is discussed in Sec. 4, and is due to the optimal coupling between the incident wave and the plasmon surface wave, on one side, and between the coaxial modes and the plasmon, on the other.
Cavity resonances inside the channel
When the outer radius R_3 of the channel increases, one observes several maxima and minima in Fig. 4. To understand this behavior, we also present in Fig. 4 the propagation constants of the modes inside the channel, defined later in eq. (2). The determination of the propagation constants is rather straightforward for a circular waveguide with perfectly conducting walls, coaxial or not. Introducing cylindrical coordinates (ρ, ϕ, z), with the z-axis lying along the aperture axis, the elementary solutions in the Fourier-Bessel basis are the solutions with ϕ-dependence following that of the incident field, which is represented for normal incidence and x-polarization in the last column of Table 1, with the notations defined in eqs. (2) and (3).
A coaxial channel can have electromagnetic cavity resonances, which represent the modes guided by the corresponding infinitely long coaxial waveguide. It can support a TEM mode that has no cut-off, a property that could enhance the transmission through coaxially structured small apertures [19]. However, in normal incidence this mode cannot be excited with linearly polarized light, because of the different symmetry of the electromagnetic fields of the linearly polarized wave and the TEM mode, the latter having only a radial (ρ) component of the electric field, which is invariant in ϕ. The other modes are of two types [20]: H (or TE, for transverse electric) modes with zero axial electric field component, and E (or TM, transverse magnetic) modes with zero axial magnetic field component. The ρ-dependence of these modes (having the same ϕ-dependence of the electric field as the incident field) is listed in Table 1, together with the components of the incident electric field. The z-dependence is given by exp(iγk_0 z), with a normalized propagation constant γ in the z-direction defined through k² = k_ρ² + (γk_0)². Here k is the wavenumber inside the waveguide, k_0 is the free-space wavenumber, and n is the refractive index of the channel-filling medium.
In the case of a single aperture containing the axis, the coefficients a_{E,H} are put equal to zero, because the Bessel functions Y_1 diverge at the origin. The same is valid for the plasmon surface wave that can be excited along the surface of the metallic layer, and its field dependence is the same as for the E-modes. In the case of a highly conducting layer material, the plasmon surface wave has a propagation constant in the ρ-direction only slightly exceeding the wavenumber in the cladding, and its magnetic field vector is almost parallel to the surface (corresponding to TM polarization in Cartesian coordinates); thus its field is close to the field of an E-mode close to its cut-off. The coaxial modes are determined so as to fulfil the boundary conditions at the vertical walls, where E_ϕ and E_z have to vanish. The cut-off appears when γ = 0, i.e., k_ρ = nk_0, so that for the central aperture the first root of eq. (5) gives R ≈ 108 nm when λ = 488 nm and n = 1.33. The smallest cut-off radius of the E-mode is almost twice that of the H-mode and is of less interest for field localization in small volumes. Finitely conducting walls allow for field penetration inside them, and thus the cut-off radius is reduced. For the case of aluminum, the cut-off radius of the central aperture becomes R_1 = 85 nm, and the maximum value of I_S is reached slightly below this cut-off, thus the choice of R_1 = 75 nm.
In coaxial waveguides, the mode propagation constants of the H and E modes are rather different when the inner diameter is small compared to the wavelength. The fundamental H_11 mode has a large cut-off wavelength, given approximately by the relation λ_c ≈ π(R_2 + R_3)n [21], and thus the cut-off radius is quite small; in the case of λ = 488 nm and n = 1.5, its value is approximately equal to 55 nm, smaller than the channel radii considered here. Thus, this mode is always propagating in our conditions.
Higher modes behave in a completely different manner. For the modes H_1m and a perfectly conducting metal, the cut-off wavelength λ_c is determined by the width of the channel. In addition, the difference between the E and the H modes diminishes with the growth of the inner diameter and becomes almost negligible, as can be observed in Table 2 for n = 1.5. As can be observed, already at R_2 = 200 nm we are quite close to the limit value of the cut-off width, equal to λ/2n, shown in the last column of Table 2. The cut-off position in the case of finite conductivity is not as well-defined as for perfect conductivity, because there is a gradual transition from evanescent (small radii) to propagating (larger radii) character, as can be clearly observed in Fig. 4, where the dependence of the real and imaginary parts of γ of the cavity resonances inside the coaxial channel is plotted as a function of R_3.
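The cut-off numbers quoted above can be reproduced in a few lines. This is a sketch for perfectly conducting walls (as discussed in the text, finite conductivity shifts the cut-offs downward):

```python
import numpy as np
from scipy.special import jnp_zeros

lam = 488e-9  # vacuum wavelength

# Central circular aperture, fundamental H11 mode: cut-off at k*R = 1.8412,
# the first root of J1', with k = 2*pi*n/lam and n = 1.33 (water filling).
x11 = jnp_zeros(1, 1)[0]
R_cut = x11 * lam / (2.0 * np.pi * 1.33)
print(f"aperture H11 cut-off radius: {R_cut * 1e9:.0f} nm")  # ~108 nm

# Wide coaxial channel, higher H_1m modes: in the parallel-plate limit the
# cut-off width tends to lam/(2n), with n = 1.5 (glass filling).
print(f"coax cut-off width (limit): {lam / (2 * 1.5) * 1e9:.0f} nm")  # ~163 nm
```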
In the case of aluminum, the first mode cut-off can be approximately located at R_3 − R_2 = 135 nm (compared with 164 nm for perfect conductivity) using the data in Fig. 4, the value of R_3 being taken where the imaginary part of γ almost vanishes. However, one observes the position of maximum field enhancement at lower radii. In addition, the excitation of the second cavity resonance is accompanied by a minimum of I_S, contrary to the excitation of the first and the third modes (Fig. 4). There are several factors that complicate the simple link between the cavity resonances in the channel and the field enhancement inside the central aperture.
First, as already discussed, the surface plasmon that propagates along the aluminum/cladding interface is the only channel of coupling between the cavity modes in the central aperture and in the circular channel. However, the interaction between the plasmon surface wave and the cavity modes will modify both of them [16]. Thus the exact role of the cavity resonances has to be determined by taking into account the finite depth of the channel, which is done numerically in the calculations of I_S but cannot be evaluated analytically.
Second, the maximum field enhancement can be expected not at the mode cut-off radius but below it, at radii corresponding to the minimum of the real part of γ, where the group velocity of the cavity resonance in the z-direction is null, because this corresponds to an accumulation of energy at the channel opening. This can explain why local maxima of I_S can be observed below the cavity mode cut-offs (Fig. 4). However, due to the small channel depth, neither the cut-offs nor their role in the field enhancement in the central aperture can be determined without taking into account the next point.
Third, the interaction between the plasmon surface wave and the cavity modes is determined by the coupling integral of their fields, which depends strongly on R_2 and R_3, as discussed in the next section.
Fourth, the plasmon surface wave exhibits its own resonances due to the scattering on the channel walls, which have their influence on the field in the central region.
The last two factors are discussed in detail in the next section.
The choice of the channel inner wall
In the approximation of infinitely conducting walls, the cavity modes have vanishing ϕ- and z-components of the electric field at R_2 and R_3; thus the coupling between the surface plasmon and the coaxial cavity modes will be maximal when the plasmon field E_pl satisfies the same conditions. From Table 1 it is evident that this condition is equivalent to a zero of the azimuthal and axial components. Fig. 5 presents the dependence of eq. (7), abs[J_0(k_ρ,pl R_2) + J_2(k_ρ,pl R_2)], on R_2 (red curve), compared with the same dependence of the enhancement factor I_S when R_3 − R_2 = 100 nm, with k_ρ,pl = k_0(1.5578 + i 0.01425). As can be observed, the zero of eq. (7) is located around 185 nm, a value smaller than the 200 nm corresponding to the maximum of I_S. This difference can be easily understood taking into account that for finitely conducting walls the electric field penetrates 10-15 nm inside them, so that the zeros of eq. (7) must appear at values smaller than R_2. Another factor that plays an important role for the field excitation is the direct coupling between the incident wave and the plasmon surface wave due to the scattering on the channel walls. The coupling integral at ρ = R_2 is given by eq. (8), where the overbar means complex conjugation. Its maxima appear at the zeros of the same combination of Bessel functions, i.e., the zeros of eq. (7).
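A short sketch of the inner-wall criterion of eq. (7), using the complex plasmon wavevector quoted in the text. Note that the scan only brackets the optimum: the field penetration into the finitely conducting walls, discussed above, shifts the effective zero a little below the ideal-wall value.

```python
import numpy as np
from scipy.special import jv

lam = 488.0                           # nm
k0 = 2.0 * np.pi / lam
k_pl = k0 * (1.5578 + 1j * 0.01425)   # plasmon wavevector (from the text)

R2 = np.linspace(100.0, 300.0, 2001)  # nm
crit = np.abs(jv(0, k_pl * R2) + jv(2, k_pl * R2))  # eq. (7)
# The minimum lies near the first nonzero root of J1 (J0 + J2 = 2*J1(x)/x);
# the text's Fig. 5 places the corresponding zero around R2 ~ 185 nm.
print(f"criterion minimized at R2 ~ {R2[np.argmin(crit)]:.0f} nm")
```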
If we increase the index of the cladding, this will lead to an increase of k_ρ,pl by almost the same factor, and thus the optimal value of R_2 will decrease. And indeed, for n_cl = 2 the optimal value of R_2 is 160 nm, and the corresponding enhancement of I_S is given in Fig. 6 as a function of R_3, together with the dependence of eq. (7) with k_ρ,pl = k_0(2.1423 + i 0.037). As in Fig. 5, the maximum of I_S appears close to the zero of eq. (7).
The role of the channel external wall
Like the internal channel wall, the external one also serves as a perturbation that couples the incident wave into the plasmon surface wave, but the sign of this perturbation is opposite to that of the perturbation at R_2 [14], so that the cumulative effect of the two walls is given by applying eq. (8) twice (eq. (9)). This result as a function of R_3 when R_2 = 200 nm is presented in Fig. 7 together with the numerical values of the amplitude of the surface plasmon electric field, both curves presenting qualitatively similar behavior. A more detailed analysis requires taking into account the cavity modes and their coupling with the surface plasmon. The difficulty is that the model of a coaxial channel with perfectly conducting walls is not valid for quantitative analysis. On the other hand, taking into account the influence of the finite conductivity on the mode propagation constant and field distribution, together with its coupling with the surface plasmon, requires a detailed electromagnetic analysis, which is done using the numerical code based on a rigorous electromagnetic method.
A qualitative understanding of the link between the field enhancement in the central aperture and the cut-off radius of the channel groove can be found using the following argument. As is well known, the real part of the propagation constant k_ρ,pl of the plasmon surface wave along a finitely but highly conducting surface is slightly higher than the free wavenumber in the cladding, as observed from the numerical values given in the previous subsection. This leads to small but almost purely imaginary values of the wavenumber along the z-axis (i.e., the surface plasmon field is evanescent in the cladding). Such values of γ correspond to cavity modes that are just slightly below their cut-off. Thus, the surface plasmon will couple more efficiently with the cavity modes that lie close below their cut-off. This explains the maxima of the enhancement factor I_S observed in Fig. 4 for values of R_3 lying just below the mode cut-off positions (R_3 = 325, 475, and 630 nm). The minimum around R_3 = 500 nm has already been explained in connection with Fig. 7 and is due to the interaction between the incident wave and the surface plasmon. The first maximum in Fig. 4 at R_3 = 250 nm is probably due to the excitation of the mode H_11, but we were not able to find a simple direct link.

Figure 8. Same quantities as in Figs. 4 and 7, as a function of R_3, for the structure with cladding index n_cl = 2 and R_2 = 160 nm: (a) I_S and γ; (b) incident wave - surface plasmon coupling integral according to eq. (9).

Similar conclusions can be drawn for another configuration, where the cladding index is increased to n_cl = 2, which changes the plasmon propagation constant to k_ρ,pl = k_0(2.1423 + i 0.037). As already shown at the end of Sec. 4.1 (Fig. 6), this change decreases the optimal value of R_2 to approximately 160 nm. The dependence of the enhancement factor on the channel width is presented in Fig. 8, compared with the values of the cavity mode propagation constants (Fig. 8a), given in the vicinity of their cut-off radii. One observes that the enhancement factor has maxima lying below the mode cut-off radii. The minimum around 325 nm can be explained by the coupling strength between the incident wave and the surface plasmon, as given by eq. (9) and presented in Fig. 8b.
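The "slightly below cut-off" argument can be checked directly with the numbers quoted above: the normalized z-wavenumber of the plasmon, γ_pl = sqrt(n_cl² − (k_ρ,pl/k_0)²), comes out small and dominated by its imaginary part, which is the signature of a mode just below cut-off. A two-line check (a sketch using the n_cl = 1.5 values from the text):

```python
import numpy as np

n_cl = 1.5                   # glass cladding
kpl = 1.5578 + 1j * 0.01425  # k_rho,pl / k_0, from the text
gamma_pl = np.sqrt(n_cl**2 - kpl**2 + 0j)
print(gamma_pl)  # ~0.05 - 0.42j: small, almost purely imaginary (evanescent)
```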
Conclusion
The simultaneous excitation of cavity modes in the central aperture and in the surrounding coaxial channel can lead to an almost 50-fold increase of the electric field intensity in the central aperture, compared to the intensity of the incident field. The coupling between the two cavities is made through the surface plasmon wave that propagates along the metal-cladding interface. The optimal conditions for maximum field enhancement require that the cavity mode inside the central aperture is below its cut-off. As can be expected from general physics considerations, the cavity-plasmon coupling is strongest when the field of the channel cavity mode matches the field of the plasmon surface wave, i.e., when the channel modes are just below their cut-off. In addition, it is necessary to ensure maximum excitation of the surface plasmon by the incident wave. The results are useful in optimizing the optogeometrical parameters of circular apertures in metallic screens leading to strong field enhancement.
This work has been funded by the grant ANR-05-PNANO-035-01 "COEXUS" of the French National Research Agency. | 2017-06-11T04:57:58.331Z | 2008-02-04T00:00:00.000 | {
"year": 2008,
"sha1": "82c92414fdb6abfe9a7084f25b91e42abaeffc27",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1364/oe.16.002276",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "0e0d9bb79b0aec3eeddfd7741cfc0ca842bde892",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
229416027 | pes2o/s2orc | v3-fos-license | Composting of Pig Effluent as a Proposal for the Treatment of Veterinary Drugs
Pig farming currently occupies a prominent place in the southern states of Brazil, which hold approximately 50% of the national herd, estimated at 42 million pig heads. However, swine production contributes significantly to the generation of environmental impacts. Recently, the growing need for animal protein has exerted pressure on the current animal production system, and one of the responses has been the use of veterinary medicines, whose uses range from therapeutic and preventive treatment of various diseases to growth promotion. Their indiscriminate and uncontrolled use is currently endangering the environmental balance of producing sites through effluent contamination. Many producers have been using contaminated slurry as a biofertilizer. In this sense, further studies on techniques and processes for the treatment of organic effluents contaminated by veterinary drugs are necessary. Alternative low-cost and environmentally viable treatment systems are needed to minimize the entry of these contaminants into the environment. Therefore, the composting process, which can be defined as a process of aerobic microbial decomposition of organic matter and nutrient recycling, can be an alternative for the treatment of effluents contaminated by veterinary drugs.
Composting
The composting technique emerged around the year 1920, when a researcher named Albert Howard developed the process in India [1]. The process happens naturally in the environment, where the biological degradation of organic compounds occurs [2]. The technique can also be characterized as a treatment process for residues of different types and origins (urban, industrial, forestry and agricultural), in which a diverse population of microorganisms (bacteria, fungi) acts [3].
Physical-chemical composting parameters
For the composting process to succeed, aspects such as heat transfer, airflow, steam and, finally, moisture balance must be observed [6]. The main parameters to be evaluated during the composting process are oxygenation/aeration, pH, moisture content, temperature and the C/N ratio, in addition to the presence of microorganisms [6].
Oxygen is directly linked to microbial activity, since the composting process is aerobic. Oxygen can be supplied to the composting system by mechanical, physical or forced aeration [7]. Its consumption and supply are tied to the humidity of the composting system, whose optimum is around 55% [7]. The higher the microbial activity, the higher the demand for oxygen and the lower the humidity in the composting system [7]. According to the same author, humidity below 40% can inhibit the activity of bacteria, with fungi becoming predominantly active.

Figure 1. Composting phases according to the temperature in the system. Source: adapted from [3].
Aeration is equally important in the process, because with greater turning and aeration of the system the rate of decomposition of organic matter is increased. In an experiment on mechanized composting, [5] performed aeration during the injection of effluents into the windrows and again after 2 days, when only the turning of the organic matter was done.
Moisture in the composting system is of fundamental importance, being correlated with the rate of decomposition of organic matter by aerobic microorganisms. As mentioned above, the ideal level varies between 50% [3] and 55% [7]. It can be controlled by turning, because at high humidity anaerobiosis may occur in the system [3]. It can also be controlled through the relationship between effluent injection and the amount of dry mass (DM); the author of [5] used an initial rate of 1.47 liters of slurry per kg (DM) in the first days, down to a rate of 0.21 liters of slurry per kg (DM) at the end of the experiment, as sketched below. Other aspects that need to be observed are the phases of the composting process, the mesophilic and thermophilic phases, both related to the rise and fall of temperature and to water evaporation.
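The effluent-dosing figures above amount to a simple moisture mass balance. A minimal sketch (the material values in the example are illustrative assumptions, not measurements from [5]):

```python
def liters_to_target(mass_kg, moisture, target=0.55):
    """Liters of liquid (density ~1 kg/L) to raise a batch from its current
    moisture fraction to the target: solve (water + x)/(mass + x) = target."""
    water = mass_kg * moisture
    x = (target * mass_kg - water) / (1.0 - target)
    return max(x, 0.0)

# e.g. 100 kg of shavings at 10% moisture (illustrative values)
print(f"{liters_to_target(100.0, 0.10):.0f} L of effluent to reach 55%")  # 100 L
```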
The state of organic matter decomposition can be tracked through the hydrogen potential (pH) of the composting system, since the pH varies with each phase of the process. The initial pH values can be on the order of 4-5 [7], due to the release of mineral acids and carbon dioxide; at this stage acidophilic bacteria and fungi act in the decomposition of cellulose [3]. Subsequently, the pH can reach values between 7 and 12, due to the consumption of organic acids by alkalophilic bacteria [7], when stabilization of the compost occurs [3].
Being easy to measure throughout the process, temperature plays a crucial role in composting. The temperature can range from 10 to 70°C, in the mesophilic and thermophilic phases respectively [3]. Its rise and fall are linked to humidity and microbial activity, as demonstrated in the temperature profile in Figure 1.
Another important role of this parameter is the inertization and maturation of the compost over time. Pathogenic microorganisms are inactivated after three days at temperatures higher than 55°C [8], but in composting cells where turning occurs frequently the minimum is 15 days at temperatures above 55°C [4]. Aspects such as the raw material, the composting system configuration, the presence of microorganisms, moisture and oxygen directly affect the temperature. The temperature can also define the amount of turning of the composting system. The author of [1] observed the temperature variation in a process of composting of cattle slaughter residues, and another study [4] did so in a process of composting of pig effluents; in both studies the maximum temperatures did not exceed 55°C. In relation to Figure 3(a), it is also noted that when the composting system is turned, the internal temperature increases significantly. Temperature profiles vary from study to study: [5] observed 72°C in the composting system at 22 days of experiment, [2] observed values not exceeding 52°C, while [9] obtained a maximum temperature of 61°C.
To prepare the composting system it is necessary to consider the carbon/nitrogen (C/N) ratio. Both elements are important in the process: carbon acts as an energy source and nitrogen as a respirometric source for the microorganisms [3], also acting in cell growth and in the formation of proteins, amino acids and nucleic acids [2]. The C/N ratio has been examined by several authors [1-4, 6, 9-11], who found that this ratio should not exceed 30/1 in the initial phase, because at high ratios the degradation of the compounds is delayed; at the end, when the compost is mature, the ratio may reach 10/1. In this process, much of the carbon is released into the atmosphere in the form of CO2 [6], and when the ratio is low nitrogen can be released in the form of ammonia, causing bad odor [2].
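The 30/1 starting point is a mixing calculation over the materials' carbon and nitrogen contents. The sketch below solves for the mass of a carbon-rich bulking agent per unit of slurry solids; the C/N ratios and nitrogen fractions used are illustrative textbook-range assumptions, not data from the studies cited above.

```python
def bulking_mass(m_slurry, n_slurry, cn_slurry, n_bulk, cn_bulk, target=30.0):
    """Mass of bulking agent so the blend hits the target C/N ratio.
    n_* are nitrogen mass fractions, cn_* are C/N ratios; the blended ratio
    is sum(m_i * cn_i * n_i) / sum(m_i * n_i)."""
    return (m_slurry * n_slurry * (target - cn_slurry)
            / (n_bulk * (cn_bulk - target)))

# Illustrative values: slurry solids ~3% N at C/N 7; sawdust ~0.1% N at C/N 400
m = bulking_mass(1.0, 0.03, 7.0, 0.001, 400.0)
print(f"~{m:.2f} kg sawdust per kg slurry solids for C/N = 30")  # ~1.86 kg
```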
Use of veterinary drugs
The indiscriminate use of veterinary medicinal products in animal husbandry, especially in pig farming, has become the gateway of these pollutants to the environment [12]. Antibiotics are used for prevention and therapy, helping in the treatment of diseases such as infections and diarrhea, and can also act as growth promoters [13-16]. Currently, there is great concern in the academic community about residues derived from veterinary drugs, due to their contaminant potential and to their incomplete absorption by the animal organism; several authors have addressed this issue [15-19]. In addition to soil contamination by heavy metals [35] and contamination of water resources [36], the application of biofertilizers, directly or indirectly, to the soil [34] has caused changes in the soil biotic community [37]. Another, more significant, point that draws attention is the resistance of microorganisms to antibiotics; several authors have found evidence of such resistance [38-42], thus increasing the risk to human health [34, 38, 43].
The availability and use of pig effluent are explained by the size of the production chain. Brazil is among the four largest producers of pigs in the world, behind China, the United States and the European Union, with an estimated herd of 42 million heads, representing US$ 1 billion annually in meat sales [44,45]. The Southern Region of the country, where the states of Paraná, Santa Catarina and Rio Grande do Sul are located, accounts for 49.3% of the national production [46]. With a well-established production chain, a large amount of waste is generated. It has been pointed out that a pig in the finishing phase can produce up to 7.6 liters of manure per day [47], often causing failures and overload in effluent treatment systems, which consist mostly of biological ponds.
Many of the existing environmental problems and pressures are due to the traditional organic effluent treatment systems widespread in pig-producing units, which are not efficient in the treatment of these pollutants [48]. Compliance with environmental laws, as well as the feasibility of managing the waste produced on farms, generated the need for alternatives for the treatment of effluents.
In this context, composting emerged as a treatment proposal: a natural process of nutrient recycling through aerobic microbial decomposition of organic matter [4], under favorable conditions of temperature, pH, oxygen, humidity, presence of chemical substances, raw material and C/N ratio [3,47], resulting in a material with relative stability and quality [49]. The advantages range from reducing the volume of effluents by about 90% [4] to reducing the emission of greenhouse gases and the proliferation of vectors. Another important factor is its technical feasibility for expanding current pig production systems [2-5], showing itself to be a practical, low-cost proposal [23, 25, 50], also classified as a clean and viable method [28] for the correct management of waste. A further advantage is the inactivation and immobilization of pathogens, nutrients and veterinary drugs [13, 48, 51, 52], making it a potential proposal for the treatment of veterinary antibiotics [23, 25, 48, 53].
Composting as a proposal for treatment of swine effluents
Technologies that seek to reduce residues of veterinary medicinal products (RMV), mainly veterinary antibiotics (AVs), found in organic and industrial effluents disposed of as fertilizer in the soil are a necessity to minimize the environmental impacts generated by these compounds [25]. Traditional organic effluent treatment systems, widespread in pig-producing units, are not efficient in the treatment of these pollutants [48].
Among the various technologies and treatment systems for organic effluents of different origins and compositions, including swine effluents, composting has been shown to be a practical, low-cost proposal [23, 25, 50], classified as a clean and viable method [28] for the correct management of waste. This technique can be developed as an alternative for the treatment of effluents on small properties located in regions with a high concentration of pigs and little agricultural area available for final disposal [4], as well as a proposal for the treatment of veterinary antibiotics [23, 25, 48, 53].
Composting can be defined as a process of aerobic microbial decomposition of organic matter, a natural process of nutrient recycling used since ancient civilizations [4], under favorable conditions of temperature, pH, oxygen, humidity [47], presence of chemicals, raw material and C/N ratio [3], resulting in a material with relative stability and quality [49]. Treatment by composting reduces the volume of effluents, inactivates and immobilizes pathogens, nutrients and veterinary drugs [13, 48, 51, 52], and finally produces a by-product (substrate) with economic and agronomic value [49, 50]. Figure 2 shows the cycle of inputs and outputs during the composting process.
This treatment proposal has been shown to be effective in the management of organic waste from the confined production of pigs, poultry and cattle, and has the potential to treat emerging organic pollutants (POEs) [57]. The decay of veterinary medicinal product concentrations during composting has been researched by several authors [13,48,49,51,54-56] for different types of effluents and organic residues.
A decline of 27% of chlortetracycline (CTC) was observed in swine effluents [57] and of 92% in poultry manure in a composting system run for 42 days. When the decline of 4 antibiotics (florfenicol, sulfadimethoxine, sulfamethazine and tylosin) was analyzed during the composting of domestic effluents [51], approximately 95-99% of the antibiotics were degraded after 21 days of testing. Antibiotic decline was also evaluated during composting [53] (tetracycline 96%, sulfonamides 99% and macrolides 95%), and after 35 days of bench-scale composting [48], antibiotics of the sulfonamide group (sulfamethazine (SMZ) and sulfamethoxazole (SMX)) were no longer detected. Figure 3 shows one of the existing composting processes for pig effluents, the mechanized one, which consists of mixing the waste produced by pigs in the rearing systems with shavings, sawdust or straw in beds/windrows [50].
However, this process as a proposal for the treatment of veterinary medicines of different classes has yet to be developed in the country, which justifies the present proposal. Having shown good results in research already carried out, its application becomes important in the search for new alternatives to minimize the potential environmental risks caused by these contaminants, since in contact with environmental matrices they can accumulate in the soil and be leached into water resources [56].
Use of composting in the treatment of veterinary drugs
One of the determining points for the development of research, and for its scientific relevance, is the potential for contamination by veterinary antibiotics. Currently, the pig production chain in the south of the country is estimated at 20.5 million heads. Consider only the state of Rio Grande do Sul (7 million heads), and assume that each animal is medicated with the main antibiotic group, tetracycline [15,21], at a dose on the order of 400 mg/animal/80 kg, of which 70% is excreted in urine and feces [18]. If this residue passes through current treatment systems, which can reduce its concentration by 50%, the result is 0.140 g of tetracycline per animal, representing 0.98 tons of antibiotics dumped into the soil annually along with the effluent in the form of biofertilizer [59].
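A minimal sketch reproducing the arithmetic behind the 0.98 t figure above; all numbers come from the text, while the assumption of one 400 mg dose per animal per year is implied rather than stated, and the variable names are illustrative only.

```python
# Rough re-derivation of the tetracycline load estimated in the text.

HERD_RS = 7_000_000          # heads, state of Rio Grande do Sul
DOSE_G = 0.400               # g tetracycline per animal (400 mg / 80 kg animal)
EXCRETED_FRACTION = 0.70     # share of the dose excreted in urine and feces [18]
TREATMENT_REMOVAL = 0.50     # assumed removal by current treatment systems

residue_per_animal_g = DOSE_G * EXCRETED_FRACTION * (1 - TREATMENT_REMOVAL)
total_tonnes = residue_per_animal_g * HERD_RS / 1e6  # grams -> tonnes

print(f"residue per animal: {residue_per_animal_g:.3f} g")  # 0.140 g
print(f"annual load to soil: {total_tonnes:.2f} t")         # 0.98 t
```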
The search for technical alternatives for the treatment of pig manure contaminated with residues of veterinary drugs was decisive for carrying out this work, considering the size of the production chain in the country and the lack of research at the national level, but mainly the aim of minimizing the potential damage these residues can cause to the environment. The results observed in the research point to a potential for chronic contamination and very significant disturbances at the environmental and social levels, and they also point to the need for research into technological alternatives for the treatment of these residues, which are often left aside.
Figure 3. Mechanized composting system of swine effluents (a) [58]; shaving bed after effluent injection and turning (b), municipality of Concórdia, SC.
Based on the results obtained [58], composting proved effective in the degradation of 19 veterinary drugs divided into 8 groups. The decay/degradation rate ranged from 33.7% to 100% over 150 days. The antibiotics sulfathiazole, tetracycline and chlortetracycline showed 100% decay. The mean degradation of antibiotics was 97.2%, confirming composting as a technique for treating swine effluents contaminated by antibiotics; however, at the end of composting, some antibiotics still left residues on the order of milligrams per kilogram in the final compost. Therefore, further research on the behavior of these compounds during composting would elucidate whether they are actually degraded or whether they generate metabolites or other substances.
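If one assumes first-order decay kinetics, an assumption the chapter itself does not make, the mean degradation figure above can be turned into an approximate rate constant and half-life. The sketch below is only that assumption made explicit, not a model fitted to the actual data.

```python
import math

def first_order_k(removal_fraction: float, days: float) -> float:
    """Rate constant k (1/day) for C(t) = C0 * exp(-k t), given the
    fraction removed after `days`. Undefined for 100 % removal."""
    return -math.log(1.0 - removal_fraction) / days

# Mean antibiotic degradation reported for the 150-day composting run [58]
k = first_order_k(0.972, 150.0)
half_life = math.log(2.0) / k
print(f"k = {k:.4f} 1/day, half-life = {half_life:.1f} days")  # ~0.024, ~29 days
```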
Regarding the communities of bacteria and fungi [58], great diversity at the phylum and genus level was observed in both kingdoms throughout composting. For bacteria, 7 phyla and more than 70 genera were observed over time (0, 15, 30, 45, 60, 75, 90, 120 and 150 days). Fungal diversity was 2 phyla and 16 genera. This abundance and diversity may be related to the identification methodology used, next-generation sequencing, which proved able to identify a wide range of the microbiota found during composting. In this context, a correlation between environmental variables and antibiotics and the microorganisms was also observed: redundancy analysis showed that the main factor with significance for the bacterial community was humidity, which did not influence the fungal community. The veterinary antibiotics tilmicosin and ciprofloxacin showed a positive correlation with the vast majority of bacterial genera, an effect not observed in fungi. Among the fungal genera, tilmicosin correlated positively with Apiotrichum and Penicillium, and ciprofloxacin with Tricosporium, Parascedosporium, Petriella and Cryptococcus.
Microbial communities of the composting process
The application of animal waste contaminated with residues of veterinary medicinal products has become the gateway to the spread of several types of antibiotic resistance genes, driven by the indiscriminate use of antibiotics in animal protein production [60,61]. The composting process harbors different types of microorganisms, predominantly bacteria, fungi and actinomycetes, divided into aerobic, thermotolerant and mesophilic groups [48], which are responsible for about 95% of microbial activity [3]. One of the most important parameters for the proliferation of these microorganisms is temperature: it should not exceed 65°C for fungi and actinomycetes, while for thermophilic bacteria temperatures should be higher than 40°C [48].
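The temperature thresholds quoted above can be captured in a trivial helper. The sketch below uses only the two figures cited from [48] and should not be read as a complete model of composting microbiology, which is far more graded in reality.

```python
def favored_groups(temp_c: float) -> list[str]:
    """Microbial groups the text says are favored at a given windrow
    temperature; the thresholds are the two figures quoted from [48]."""
    groups = []
    if temp_c <= 65:              # fungi/actinomycetes tolerate up to ~65 C
        groups += ["fungi", "actinomycetes"]
    if temp_c > 40:               # thermophilic bacteria need > 40 C
        groups.append("thermophilic bacteria")
    return groups

for t in (30, 45, 60, 70):
    print(t, "C ->", favored_groups(t))
```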
Another important aspect is the presence of microorganisms capable of contaminating the environment [9], including E. coli and other pathogens. According to the same authors, during an experiment with composting of swine effluents, an average of 2 to 5 (log10 MPN g−1) total coliforms was found.
Thirty-nine species of fungi were found in the composting process [50], many of which were identified only at the beginning or at the end of the experiment (Table 1).
Table 1. Fungi identified during the composting of swine effluents with residues of treated seeds. Source: [50]. *Genera that occurred throughout the process.
Of the total fungi, five species were found from the beginning of the process (including Alternaria alternata*, Aureobasidium floccosum*, Fusarium oxysporum* and Helminthosporium spp.*). In a poultry waste composting system, 3 phyla were found in greater quantity: Betaproteobacteria, Firmicutes and Bacteria [49].
In an evaluation of microbial resistance genes to the antibiotic oxytetracycline (OTC) [13], the following phyla predominated among 95.3% of the bacteria found: Actinobacteria, Bacteroidetes, Chloroflexi, Firmicutes and Proteobacteria. Also regarding the resistance of microorganisms to antibiotics, it has been stated that CTC inhibited the growth of 12 soil bacteria at different concentrations [57]. Beyond the increased input of antibiotics into the environment, this can pose risks to human health, such as increased allergy to antibiotics and increased antibiotic resistance, since many foods are produced in places with inadequate effluent disposal and can carry a contaminated load.
Microorganisms such as E. coli have shown antibiotic resistance in several studies [58,62-64]. E. coli resistance to 24 antibiotics, distributed across 6 classes (penicillins, cephalosporins, quinolones, aminoglycosides, sulfonamides and tetracyclines), was tested in isolates from wastewater and a wastewater treatment system [64]. The results showed that the antibiotic groups with the highest resistance were penicillins, cephalosporins, quinolones, sulfonamides and tetracyclines.
In another study [14], 14 tetracycline resistance genes and three sulfonamide resistance genes were found, encoding ribosomal protection proteins and enzymatic inactivation proteins. These results are consistent with the high persistence and accumulation capacity of antibiotics in environmental matrices, especially soils. The persistence of 5 antibiotics in soil (tetracycline, sulfamethazine, norfloxacin, erythromycin and chloramphenicol) was evaluated, and the order of antibiotic adsorption in the soil was: tetracycline > norfloxacin > erythromycin > chloramphenicol > sulfamethazine, thus increasing the risk to the environment [61].
Conclusions
In the end, we can conclude that the composting process presents itself as an alternative to current treatment systems, since it combines the treatment of swine effluent with the capacity to degrade the antibiotic residues found in it, minimizing their impacts on environmental matrices (soil and water), and in the end generates a product (compost) with agricultural potential superior to the direct use of effluents in the soil. | 2020-12-17T09:09:10.787Z | 2020-12-02T00:00:00.000 | {
"year": 2021,
"sha1": "df9e38f303932d23226b344baa5ebfba03088ea2",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/74170",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ceba83ec4d29bd3450dfe7904f32ed614e4e14f1",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
54720599 | pes2o/s2orc | v3-fos-license | Beam dynamics design of the main accelerating section with KONUS in the CSR-LINAC
The CSR-LINAC injector has been proposed at the Heavy Ion Research Facility in Lanzhou (HIRFL). The linac mainly consists of two parts, the RFQ and the IH-DTL. The KONUS (Kombinierte Null Grad Struktur) concept has been introduced into the DTL section. In this paper, the re-matching of the main accelerating section is completed for the 3.7 MeV/u scheme, and the new beam dynamics design up to 7 MeV/u is also presented. Through the beam re-matching, the relative emittance growth has been greatly suppressed along the linac.
Introduction
The Heavy Ion Research Facility in Lanzhou (HIRFL) was upgraded with the HIRFL-CSR project at the end of 2007 and supplies 7000 hours of operation time annually [1]. The injector of CSR consists of two cyclotrons, the Sector Focusing Cyclotron (SFC) and the Separated Sector Cyclotron (SSC). However, the linear accelerator has become very popular as an injector for subsequent accelerators in recent years, for example at GSI-HIS, TRIUMF-ISAC, CERN-LINAC3, HIMAC and HIT. Due to larger beam acceptance, higher transmission and higher accelerating gradient, a linac injector can supply higher intensity and better beam quality. As the number of applications increases rapidly and the experimental requirements rise accordingly, a new injector has become essential for HIRFL-CSR. The proposed new injector, called CSR-LINAC, will achieve double-terminal operation in parallel, increase the operation time by more than 5000 hours per year, and attract more comprehensive physics experiments, as shown in Fig.1.
The CSR-LINAC injector can supply all kinds of heavy ion beams at 7 MeV/u for HIRFL-CSR. The charge-to-mass ratio ranges from 1/7 to 1/3 and the design beam intensity is 3 emA. Both the RFQ and the DTL are essential in the whole linac scheme, and the layout of this injector is shown in Fig.2. The beam is accelerated to 300 keV/u in the RFQ and then brought to 7 MeV/u in the main accelerating section. The main parameters of the CSR-LINAC are summarized in Table 1. The KONUS concept is introduced into the main accelerating section in order to obtain a higher accelerating gradient. The physics design of the 3.7 MeV/u scheme was proposed by the Institute for Applied Physics (IAP) [2], so only the beam dynamics design of the downstream section remains as a new objective. In this paper, the re-matching of the 3.7 MeV/u main accelerating section is first completed, and then the 3.7 MeV/u to 7 MeV/u beam dynamics scheme is presented in full.
Re-matching of the 3.7 MeV/u scheme
The 3.7 MeV/u scheme of CSR-LINAC was proposed four years ago. The LORASR code is applied to the beam dynamics of the KONUS concept in the main accelerating section. However, this scheme can be further optimized to obtain better beam quality and larger error tolerance. As seen in Fig.3(left), the beam matching is not good along the whole DTL section, especially in the first DTL cavity. A non-symmetric beam will cause emittance growth and beam coupling in the symmetric RF electric field. To ease the matching from the exit of the RFQ into the DTL section, the 5-quadrupole scheme in the MEBT section is replaced by a 6-quadrupole scheme. The symmetric beam matching method is adopted to reduce beam coupling in the RF field. The emittance growth evolution is shown in Fig.4. After the re-matching, the relative RMS emittance growth is greatly reduced in all three phase spaces. The maximum envelope is decreased by 3 mm, which is beneficial for alignment and for suppressing nonlinear beam effects, as shown in Fig.3.
Up to 7 MeV/u beam dynamics
The KONUS concept is applied to the 3.7 to 7 MeV/u beam dynamics scheme. LORASR is the only code used for KONUS beam dynamics [3]. The KONUS design is based on a period structure concept. A KONUS period is composed of three sections with separate functions. The first section consists of a few gaps with a negative synchronous phase, typically from -25° to -35°, and acts as a rebuncher. Then the beam is injected into the main accelerating section with surplus energy and phase relative to the synchronous particle. Finally, the multi-gap section is followed by transverse focusing elements, such as magnetic quadrupole triplets [4].
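The cell lengths of such a multi-gap structure scale with βλ. As a rough orientation, the sketch below computes the relativistic β and the βλ/2 synchronism length at the RFQ exit, intermediate, and final energies, using standard kinematics for kinetic energy per nucleon; the paper does not quote these cell lengths, so the numbers are illustrative only.

```python
import math

M_U_MEV = 931.494   # atomic mass unit in MeV, used as rest energy per nucleon
C = 299_792_458.0   # speed of light, m/s

def beta(t_mev_u: float) -> float:
    """Relativistic beta from kinetic energy per nucleon (MeV/u)."""
    gamma = 1.0 + t_mev_u / M_U_MEV
    return math.sqrt(1.0 - 1.0 / gamma**2)

f = 108.48e6                      # main-section frequency, Hz
lam = C / f                       # RF wavelength, m
for t in (0.3, 3.7, 7.0):         # RFQ exit, intermediate, final energy
    b = beta(t)
    print(f"{t:4.1f} MeV/u: beta = {b:.4f}, beta*lambda/2 = {b*lam/2*100:.1f} cm")
```

At 7 MeV/u this gives β of roughly 0.12, i.e. a βλ/2 of about 17 cm at 108.48 MHz.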
In the beam dynamics design of the KONUS period structure, it is very important to choose the key parameters [5], such as:
The choice of the effective voltage distribution
The effective voltage per gap should be chosen to ensure that sparking does not appear during commissioning and operation. The effective voltage per cell mainly depends on the operating frequency, the tube radius, the gap length and the geometry of the pole. The operating frequencies are 108.48 MHz and 216.96 MHz in the main accelerating section, corresponding to sparking electric fields of 21.05 MV/m and 27.37 MV/m respectively, when 1.8 times the Kilpatrick electric field is chosen. The CST STUDIO software is applied to study the relation between the peak electric field (Ep) and the maximum surface electric field (Es,max). The optimized ratio Ep/Es,max depends on the tube radius, the gap length and the geometry of the pole. Thus the maximum peak electric field corresponding to sparking can be given for different tube radii and gap lengths, as shown in Fig.5 (line), and the peak electric field distribution per DTL section is also exhibited in Fig.5 (dots). The effective voltage should be chosen such that the peak electric field per cell stays below the maximum peak electric field, to avoid sparking on the pole. As can be seen in Fig.5, the choice of the effective voltage distribution per section is reasonable in the 3.7 MeV/u scheme, which verifies the validity of this study once more. Based on this Ep-Es,max database, the effective voltage distribution can also be chosen for the last three cavities.
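The 21.05 MV/m and 27.37 MV/m values quoted above can be reproduced numerically. The sketch below assumes the standard Kilpatrick criterion f = 1.64 E^2 exp(-8.5/E) (f in MHz, E in MV/m), which the paper uses implicitly but does not write out.

```python
import math

def kilpatrick_field(f_mhz: float) -> float:
    """Solve the Kilpatrick criterion f = 1.64 * E^2 * exp(-8.5/E)
    for E (MV/m) by bisection; f is given in MHz. The left-hand side
    is monotonically increasing in E, so bisection converges."""
    lo, hi = 1.0, 100.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if 1.64 * mid**2 * math.exp(-8.5 / mid) < f_mhz:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for f, factor in ((108.48, 1.8), (216.96, 1.8)):
    ek = kilpatrick_field(f)
    print(f"{f} MHz: E_K = {ek:.2f} MV/m, 1.8*E_K = {factor * ek:.2f} MV/m")
# -> about 21.05 and 27.37 MV/m, matching the values quoted in the text
```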
The choice of the gap number per section
The rebuncher section is used to bunch the beam in the longitudinal phase space. Generally, the synchronous phase is chosen as -35°, and the gap number of this section depends on the initial focusing state in the transverse and longitudinal phase spaces. At the end of the rebuncher section, the beam should be focused simultaneously in all three phase planes so that it can be transported through the 0° section effectively. In order to reduce the gap number of the rebuncher section, the phase spread is set to about 45° and the relative energy spread is chosen to be around 1.5%. The cell number of the rebuncher section is set to 10, 5 and 8 in DTL4, DTL5 and DTL6, respectively.
The choice of the starting phase and energy depends on the phase space distribution at the entrance of the 0° section. A good choice brings smaller emittance growth and a reasonable output phase space distribution, which is beneficial for beam transport in the downstream DTL sections. In addition, simultaneous matching in the transverse and longitudinal phase spaces is an important criterion, which can determine the gap number of the 0° section. The cell number of the 0° section is set to 24, 24 and 16 in DTL4, DTL5 and DTL6, respectively.
The evolution of the reference particle in the longitudinal phase space is shown in Fig.6. As can be seen, the trajectory of the central particle is anomalous in the longitudinal phase space of the fourth DTL. Because no consideration was given to upgrading to higher energy when the last DTL was designed, the outgoing beam is difficult to match into a new KONUS period in the 3.7 MeV/u scheme. A modification of the beam dynamics in the third DTL may be a good choice in the future. The aperture of the tube is chosen as 22 mm and the beam pipe in the triplet section is 26 mm, which helps to control the nonlinear effects caused by the RF electric field. The gap length distribution per section determines the peak electric field distribution and the transit time factor. The gap length distribution per section is first checked to ensure a transit time factor above 0.8. The peak electric field distribution along the main accelerating section is shown in Fig.7. In this design, the maximum quadrupole gradient reaches 90 T/m, which approaches the limit for a conventional magnetic quadrupole, according to the present status at IMP.
End-to-end beam dynamics
The end-to-end beam dynamics simulation has been accomplished. The beam envelope evolution is shown in Fig.8. As can be seen, there is good beam matching and a small envelope along the linac. The maximum envelope appears at the end of DTL3, and the beam envelope stays below 90% of the tube radius. Figure 9 exhibits the phase space distributions at the input and output ends of the DTL section. The distribution at the entrance is rebuilt from the phase space at the exit of the RFQ. As can be seen from the distribution at the exit, the longitudinal phase space shows some filamentation caused by the nonlinear field, which results in longitudinal emittance growth. As exhibited in Fig.10, the relative RMS emittance growth is 6.3% and 11.7% for the x-x' and y-y' phase spaces, and reaches 28.1% in the longitudinal phase space.
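The quoted growth figures refer to the statistical RMS emittance. A minimal sketch of how such numbers are computed from a particle distribution is given below, on synthetic data rather than the actual LORASR output.

```python
import numpy as np

def rms_emittance(x: np.ndarray, xp: np.ndarray) -> float:
    """Statistical RMS emittance of a (x, x') particle distribution:
    sqrt(<x^2><x'^2> - <x x'>^2), taken about the distribution means."""
    x = x - x.mean()
    xp = xp - xp.mean()
    return np.sqrt(x.var() * xp.var() - np.mean(x * xp) ** 2)

# Toy check with synthetic input/output distributions (not LORASR data)
rng = np.random.default_rng(0)
x_in, xp_in = rng.normal(size=(2, 10_000))
xp_out = xp_in + 0.05 * x_in**3        # mimic a nonlinear RF kick
growth = rms_emittance(x_in, xp_out) / rms_emittance(x_in, xp_in) - 1
print(f"relative RMS emittance growth: {growth:.1%}")
```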
Conclusion
The beam re-matching of the 3.7 MeV/u beam dynamics scheme proved advantageous for reducing the RMS emittance growth. Keeping the beam symmetric during transport through the RF electric field is a good choice. A reasonable effective voltage distribution can be chosen with the method described above, which was validated against the 3.7 MeV/u beam dynamics design proposed by IAP. The 3.7 to 7 MeV/u scheme, which uses only 3 cavities, shows that the KONUS structure provides a high accelerating gradient. The end-to-end simulation shows that the whole beam dynamics design is reasonable without any changes to the 3.7 MeV/u scheme. | 2013-06-20T01:24:22.000Z | 2013-06-20T00:00:00.000 | {
"year": 2013,
"sha1": "73010aeb7060732b1b3e739415ec019ac29dfb14",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "73010aeb7060732b1b3e739415ec019ac29dfb14",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
264789129 | pes2o/s2orc | v3-fos-license | Progress in the Correlation Between Inflammasome NLRP3 and Liver Fibrosis
Liver fibrosis is a reversible condition that occurs in the early stages of chronic liver disease. To develop effective treatments for liver fibrosis, understanding the underlying mechanism is crucial. The NOD-like receptor protein 3 (NLRP3) inflammasome, which is a part of the innate immune system, plays a crucial role in the progression of various inflammatory diseases. NLRP3 activation is also important in the development of various liver diseases, including viral hepatitis, alcoholic or nonalcoholic liver disease, and autoimmune liver disease. This review discusses the role of NLRP3 and its associated molecules in the development of liver fibrosis. It also highlights the signal pathways involved in NLRP3 activation, their downstream effects on liver disease progression, and potential therapeutic targets in liver fibrosis. Further research is encouraged to develop effective treatments for liver fibrosis.
Introduction
Liver disease is a serious public health problem, accounting for about 2 million deaths worldwide each year. 1 Chronic liver injuries are caused by a range of stimuli including viral hepatitis, alcoholic and nonalcoholic liver disease, and autoimmune liver disease. These conditions lead to liver inflammation and fibrosis, ultimately progressing to cirrhosis. In China, liver cirrhosis accounts for 11% of all the deaths from liver diseases worldwide. 2 Constant or repeated inflammation and necrosis of liver cells lead to an enhanced repair response, triggering massive production of fibrous substances such as collagen, proteoglycans, etc. Insufficient degradation of fibrous substances results in the formation of liver fibrosis. If timely interventions are taken, the possibility of liver fibrosis evolving into cirrhosis, liver failure, and liver cancer can be reduced.
Inflammasomes comprise a variety of protein complexes assembled with the involvement of cytoplasmic pattern recognition receptors, and are a key component of the innate immune system. 3 Inflammasome components are found in various cells, including immune and nonimmune cells, such as macrophages, neutrophils, monocytes, hepatic stellate cells (HSCs), and fibroblasts/myofibroblasts. Those components are expressed in multiple intracellular locations including mitochondria, Golgi apparatus, and nucleus. 4,5 Inflammasomes recognize damage-associated molecular patterns (DAMPs) and pathogen-associated molecular patterns (PAMPs), and subsequently activate caspase-1. This triggers the release of interleukin (IL)-18 and IL-1β, which contributes to the progression of fibrosis. To date, several types of inflammasomes have been revealed, including NOD-like receptor protein 1 (NLRP1), NLRP3, and NOD-like receptor C4. Among them, NLRP3 has been studied extensively and is known to play a crucial role in antibacterial immunological responses. 6,7 Abnormal activation of NLRP3 has been linked to various diseases, including Alzheimer's disease, arthritis, atherosclerosis, and cancer. 8,10,11 This review discusses current research on the role of NLRP3 in liver fibrosis.
Activation of NLRP3
NLRP3 is a typical NLR protein that contains the innate immune receptor NLRP3, caspase-1, and apoptosis-associated speck-like protein containing a caspase-recruitment domain (ASC). In the activation of NLRP3 (Fig. 1), the first step is the initiation of NLRP3, involving the upregulation of NLRP3, IL-18, and IL-1β. PAMPs bind to Toll-like receptors (TLRs) and activate the transcription factor nuclear factor-kappa B (NF-κB), which subsequently mediates the transcription of NLRP3, IL-1β precursor (pro-IL-1β) and IL-18 precursor (pro-IL-18). Meanwhile, damaged cell DAMP signals such as uric acid crystals, cholesterol crystals, reactive oxygen species (ROS), and oxidized mitochondrial (mt)DNA activate and oligomerize NLRP3. 12,13 After NLRP3 inflammasomes are triggered, the components are recruited and assembled, promoting the cleavage of procaspase-1 into active caspase-1. This process facilitates the maturation of IL-18 and IL-1β. Additionally, activated caspase-1 cleaves gasdermin D and releases its N-terminal domain, which induces pyroptosis and the subsequent release of cellular contents. 8 Caspase-11 activates NLRP3 inflammasomes through pyroptosis. 14 In addition, NLRP3 can be activated by a noncanonical activation pathway that involves the activation of caspase-4 and caspase-5 in humans or caspase-11 in mice. The interaction between caspase-4/5/11 and lipopolysaccharide (LPS) along with lipid A results in their transformation into an active form. 15,16 Recent research suggests that the orphan receptor Nur77 combines with mtDNA and LPS to mediate nontypical activation of NLRP3. However, Nur77 association with intracellular LPS does not depend on caspase-11 or gasdermin D. 16
K+ efflux
K+ efflux is one of the upstream signals that activates NLRP3. For example, extracellular ATP triggers K+ efflux through the ATP-gated P2X purinoceptor 7 channel and the two-pore domain weakly inward rectifying potassium channel 2, which then triggers the activation of NLRP3. 17,18 Moreover, particles like calcium pyrophosphate crystals, cholesterol crystals, and silica can also induce potassium efflux, activating NLRP3. 19 A study has shown that NLRP3 is activated when the K+ content of cells drops below 80%. 20 Moreover, caspase-11 triggers the noncanonical inflammasome pathway, involving the activation of the pannexin-1 channel, and leads to K+ efflux and NLRP3 activation. 21
Lysosome rupture
Under some pathological conditions, such as the phagocytosis of particulate matter, lysosome damage can activate the NLRP3 inflammasome. Phagocytosed crystals lead to lysosome acidification, swelling, and loss of lysosomal membrane integrity over time. Upon damage, lysosomal contents leak into the cytoplasm and trigger NLRP3. 19,22 Release of lysosome contents into the cytoplasm is also related to the activation of caspase-1. 23
ROS and mitochondria
Cells under harmful stimuli produce ROS and reactive nitrogen species that cause physiological and pathological responses in cells and tissues. Excess ROS can result in oxidative stress. Oxidative stress can increase liver inflammation and activate HSCs, thereby enhancing the production of extracellular matrix (ECM), ultimately leading to fibrosis. 24 Hepatocytes damaged by various factors such as alcohol abuse, hepatitis virus infection, and chronic cholestasis may generate ROS and participate in the assembly and activation of NLRP3. ROS is one of many important NLRP3 inflammasome activators. 25 Conversely, ROS inhibitors (e.g., diphenyl iodine and n-acetyl-l-cysteine) can suppress NLRP3 transcription. 5 In the early stage of an inflammatory response, ROS activate the NF-κB pathway. 27,28 A study reported that the activation of NLRP3 occurred via the ROS-TXNIP axis. 29 Furthermore, both O2− and H2O2 in some cells have been shown to participate in NLRP3 activation. 29,30 Mitochondria are the main source of ROS. Therefore, mitochondrial dysfunction can trigger inflammatory responses through the inflammasome signaling pathway. The production of mtROS during mitochondrial injury is a known activator of NLRP3. 31 A study showed that excessive free fatty acids in the livers of high-fat/calorie diet mice led to mitochondrial damage, leading to ROS generation and NLRP3 activation. 32 Under long-term ethanol stimulation, mouse macrophages or human peripheral blood mononuclear cells were shown to induce the release of mtROS, activating NLRP3. 33 The 66 kDa isoform of Shc, a redox enzyme, can mediate the generation of mitochondrial ROS and activate the NLRP3 inflammasome, hence promoting HSC activation. ROS can also induce the oxidation of mtDNA. 34 Oxidized mtDNA is capable of binding and directly activating NLRP3, which triggers caspase-1 activation and promotes the release of IL-18 and IL-1β. In addition, mtDNA amplifies the activation of NLRP3. 35 Notably, most NLRP3 agonists lead to mitochondrial malfunction, ROS generation, and mtDNA oxidation, all of which encourage NLRP3 activation. 36 Similarly, the activation of NLRP3 also leads to mitochondrial damage and mtROS production. Recent evidence suggests that mitochondrial homeostasis largely depends on the removal of damaged mitochondria. 37 Inhibition of mitochondrial autophagy can increase the accumulation of ROS, thus activating NLRP3 inflammasomes. 38 Activation of NLRP3 involves complex and diverse mechanisms, such as K+ efflux, lysosome rupture, oxidative stress, etc. K+ efflux functions with many NLRP3 activators but is not necessary for NLRP3 inflammasome activation. For instance, CL097 and imiquimod directly target mitochondria without involving K+ efflux to induce NLRP3 inflammasome activation. 39 An ethanolic extract of Artemisia anomala has a lysosome-protective function by inhibiting the TAK1-JNK pathway, thus preventing activation of NLRP3. 40 However, it neither inhibits mitochondrial damage nor affects the efflux of K+ and chloride ions. 40 In addition, multiple cell signaling events sometimes overlap and function with each other. For instance, lysosome damage and K+ efflux together participate in NLRP3 inflammasome activation driven by polybrominated diphenyl ethers. 41 Similarly, K+ efflux-induced mtDNA release activates NLRP3 inflammasomes. 42 Apilimod relies on lysosome-mediated mitochondrial damage and ROS production to activate NLRP3. 43 Overall, the activation signals can act independently or together. Such complexities make the activation mechanism of the NLRP3 inflammasome more multifaceted and diversified. Therefore, a precise NLRP3 activation mechanism under specific conditions remains unknown.
Fibrosis of liver
Under various chronic stimuli, chronic inflammation and necrosis of hepatocytes trigger an enhanced repair response, resulting in massive proliferation and insufficient degradation of fibers. This causes a massive deposition of fibrous materials in the liver tissue, i.e. liver fibrosis. Numerous cellular pathways participate in fibrosis, and HSCs play a significant role. Many stimuli act on HSCs to promote their activation, resulting in a significant buildup of ECM progressing to fibrous scar tissue. Systemic inflammation driven by immune cells is another key factor in the progression of cirrhosis. Macrophages ensure immune balance in the liver and also participate in inflammation. Inflammatory responses in the liver mediate hepatocyte damage, cause cell differentiation and proliferation, perpetuate chronic liver inflammation, promote fibrous tissue growth, and worsen liver fibrosis. Both Kupffer cells (KCs) and HSCs have high levels of NLRP3 inflammasome activation, which is critical in the development of liver fibrosis. 44
HSCs and NLRP3
In healthy livers, about 15% of resident cells are HSCs, which is about one-third of the population of nonparenchymal cells. After activation, HSCs can transform into myofibroblasts, which secrete ECM, generate fibrous scars, and participate in the process of liver fibrosis. HSCs are the main source of myofibroblasts, but other sources include resident liver cells, portal vein fibroblasts, and bone marrow-derived cells. 45,46 Under normal conditions, HSCs are quiescent. When the liver is damaged, HSCs are activated by inflammatory mediators or other stimulatory factors. Activated HSCs proliferate and move toward the injured liver tissue. Apart from producing alpha smooth muscle actin (α-SMA), activated HSCs produce tissue inhibitors of metalloproteinases (TIMPs) that inhibit the activity of matrix metalloproteinases. This reduces ECM degradation, causing excessive deposition of ECM and thereby the formation of fibrotic scars. 45,47 A wide range of factors are involved in HSC activation, such as platelet-derived growth factor, transforming growth factor beta (TGF-β), IL6, IL8, and inflammasomes (NLRP1, NLRP3, etc.). NLRP3 is closely related to hepatic fibrosis and acts on HSCs to promote liver fibrosis. NLRP3, along with the main proinflammatory factor NF-κB, promotes profibrotic molecules (IL-1β and IL-18) to activate HSCs. 26,49,50 The NLRP3 inflammasome is a downstream effector of DAMPs, and it has been reported that DAMPs released from dead hepatocytes may directly or indirectly promote HSC activation and fibrosis (Fig. 2). 51,52 Notably, NLRP3 mutant mice had significantly higher expression of connective tissue growth factor and TIMP 1 than wild-type mice. This implies that NLRP3 inflammation can induce HSC activation and collagen deposition. 51
Macrophages and NLRP3
Hepatic macrophages mainly include resident macrophages (KCs) and monocyte-derived macrophages, all of which ensure immunological homeostasis in the liver. In the steady state, resident macrophages derived from the yolk sac predominate. Under injury stimulation, monocyte-derived macrophages are recruited, which differentiate from circulating monocytes in the liver. 53 Macrophages can be grouped into M1 and M2 macrophages. M1 macrophages produce inflammatory cytokines with a proinflammatory role. M2 macrophages have healing and anti-inflammatory functions that regulate inflammation. The balance of M1 and M2 macrophages may mediate the advancement and regression of liver fibrosis. 47,54 Upon liver damage, a large number of bone marrow-derived monocytes aggregate in the liver and differentiate into macrophages, producing proinflammatory and profibrotic cytokines that promote inflammatory responses and HSC activation. Activated HSCs express α-SMA and collagen I, which promotes ECM deposition and progression of liver fibrosis. 47,52 Studies have shown that when HSCs are cocultured with KCs, the KCs promote the proliferation and activation of HSCs. Furthermore, HSCs cocultured with KCs secrete more intracellular and extracellular collagen I, as well as TIMP 1. 55 NLRP3 is mainly expressed in macrophages. 56 In contrast to HSCs, KCs were shown to express higher levels of NLRP3, NLRP1, and Absent In Melanoma 2 in a mouse model of hepatic fibrosis. 13 After binding to the membrane receptors on KCs, PAMPs activate NLRP3 in KCs through the NF-κB signaling pathway (Fig. 2), causing the production of its related components (NLRP3, caspase-1, and IL-1β). 4 In addition, the macrophage X-box binding protein-1 (XBP1) gene can induce M1 macrophage polarization and activate macrophage NLRP3. 57 Activation of macrophage NLRP3 has a significant impact on liver fibrosis. A study suggested that the activation of macrophage NLRP3 can promote disease progression in cholestasis-induced liver damage. 58 Zhang et al. 44 reported that NLRP3 inflammasomes play a vital role in S. japonicum-induced liver fibrosis through the NF-κB signaling pathway. They also revealed that NLRP3 inflammasomes in both KCs and HSCs contributed to the development of liver fibrosis in S. japonicum-infected mice, and that NLRP3 activation was mainly caused by KCs. In addition, s100a8-mediated NLRP3-dependent macrophage pyroptosis was shown to promote the activation of human HSCs. 59 NLRP3 in mouse macrophages participated in ECM deposition by activating HSCs (Fig. 2). 60
Gut microflora and NLRP3
Due to the existence of the gut-liver axis, risk factors originating from the intestine have become one of the contributing factors in the development of liver diseases. The gut microbiota is a group of microorganisms that are present in the human intestine and affect health. In addition to participating in digestion and absorption, it also has a role in immune regulation. NLRP3 is widely distributed in epithelial cells and immune cells. In the intestine, PAMPs bind to pattern recognition receptors to activate NLRP3, triggering an inflammatory response to maintain intestinal immune homeostasis. 61 External stimuli such as infection, trauma, drugs, poor diet, etc., can disrupt the gut microbiota, increasing the proportion of harmful bacteria. Metabolites and toxins secreted by harmful bacteria can cause intestinal inflammation and intestinal barrier damage. Damage to the intestinal barrier allows intestinal LPS entry into the liver through the portal vein, where LPS binds to TLR and activates NLRP3, causing liver inflammation (Fig. 3). 62-66 Recently, it was shown that ursolic acid inhibited the NOX4/NLRP3 inflammasome signaling pathway, reduced the abundance of harmful gut bacteria, and increased the abundance of beneficial gut bacteria, all of which helped to reverse liver fibrosis. 67 Tylophora yunnanensis Schltr can regulate the gut microbiota by inhibiting the activation of NLRP3 to improve nonalcoholic steatohepatitis (NASH). 68 Astragaloside IV can regulate gut microbiota imbalance, improve intestinal barrier function, inhibit the NLRP3/caspase-1 inflammatory signaling pathway, and alleviate alcohol-induced liver inflammation. 69 Additionally, probiotics can enhance the intestinal mucus barrier by increasing the secretion of specific mucins. Probiotic intervention can help rebalance the gut microbiota and regulate the intestinal barrier, thereby alleviating liver damage. 64,70,71
NLRP3 downstream molecules
IL-1β is a key inflammatory cytokine. It is an active version of IL-1 that is mainly produced by macrophages. 72 PAMPs and DAMPs participate in the release of mature IL-1β and IL-18 by triggering NLRP3. IL-1β and IL-18 have biological activity and participate in fibrosis. 4,72 IL-1β can regulate the expression of TIMPs and matrix metalloproteinases, which have an impact on fibrosis and tissue regeneration. 73 The NLRP3/IL-1β secretory axis is also present in HSCs. 72 In vitro studies have demonstrated that IL-1β can directly activate HSCs, promoting their proliferation and differentiation into myofibroblasts. The myofibroblasts increase the release of fibrosis markers such as collagen and TGF-β. 13 IL-1β promotes fibrous tissue development by binding to cell surface IL-1β receptors. 74 Endogenous inhibitors of IL-1β receptors were shown to improve liver fibrosis in a mouse model of alcoholic hepatitis. 75 The multifunctional cytokine IL-18 has proinflammatory and fibrosis-promoting activity. IL-18 has previously been linked to the progression of fibrosis in the lungs, heart, and kidneys. 76 It also has a key role in the progression of liver injury and liver fibrosis. A significant increase of IL-18 plasma levels has been observed in chronic liver disease and hepatosclerosis. 77 Increased IL-18 expression was found in the livers of NASH patients, with involvement in liver fibrosis. 78 IL-18 can activate HSCs, promoting their differentiation into myofibroblasts, upregulating the expression of collagen genes, and the production of connective tissue growth factor and α-SMA. 76 As liver cells do not have IL-18 receptors, IL-18 cannot directly act on hepatic cells. However, IL-18 can activate CD4+ T cells. The CD4+ T cells secrete various cytokines that exacerbate liver inflammation, progressing to liver fibrosis. In conjunction with this, anti-IL-18 therapy can reduce liver inflammation and noticeably delay liver fibrosis. 79
NAFLD
NAFLD includes a range of liver changes, starting with nonalcoholic fatty liver and potentially progressing to NASH. In advanced cases, NASH can lead to cirrhosis, liver failure, and liver cancer. 80 The occurrence and progression of NAFLD supposedly involve multiple parallel attacks involving different events such as lipid toxicity, chronic inflammation, and oxidative stress that simultaneously participate in the development of NAFLD (Table 1). 81 Abnormal activation of NLRP3 is a major driver of liver injury, steatosis, inflammation, and fibrosis (Fig. 3). 82,83 The role of abnormal activation of NLRP3 in NAFLD has been extensively studied (Table 1). In NAFLD patients and NASH mouse models, activation of NLRP3 exacerbates liver inflammation and progression of liver fibrosis. 9,82 In NASH patients, XBP1 promotes lipid accumulation and expression of proinflammatory factors in hepatocytes by activating NLRP3 in macrophages, thereby exacerbating the progression of steatohepatitis. On the contrary, XBP1 knockout in macrophages inhibited the expression of TGF-β and HSC activation. 57 Mitochondria-derived risk signals (ROS and mitochondrial dysfunction) promote expression of inflammatory factors and activate HSCs (Fig. 3), driving liver fibrosis in mice and NASH patients. 84,85 Disturbed mitophagy was shown to activate NLRP3 inflammasomes, which was associated with the progression of nonalcoholic steatosis to nonalcoholic steatohepatitis. 32 The above examples demonstrate the close relationship of NLRP3 with NAFLD. Many studies have suggested that inhibiting NLRP3 reduces liver inflammation and fibrosis. For instance, blocking NLRP3 inflammasome activation with echinatin can improve NASH and lessen liver inflammation and fibrosis. 86 The NLRP3 inhibitor MCC950 was shown to reduce the severity of liver inflammation. 9 Although MCC950 is an effective inhibitor of NLRP3, it was found to be hepatotoxic in phase II clinical trials of rheumatoid arthritis, which prevented further evaluation. 87-91 Although targeting the inflammasome pathway can inhibit the development of NAFLD, the studies are still at an early stage, which limits clinical application.
ALD
ALD, which ranges from early steatosis to alcoholic fatty liver, cirrhosis, and liver cancer, is the result of liver damage brought on by long-term ethanol toxicity and a complex immunological reaction. 23 Long-term ethanol consumption activates the innate immune system, producing proinflammatory and anti-inflammatory cytokines. It induces an inflammatory cascade in the liver and in the whole body. 23 Long-term exposure to ethanol increases neutrophil and macrophage recruitment, which promotes the activation of the NLRP3/caspase-1/ASC inflammasome and the release of proinflammatory cytokines (Table 1, Fig. 3). Mice lacking caspase-1, ASC, and IL-1 receptors had a reduction in ethanol-induced hepatic steatosis and inflammation. 75,92 This suggests that NLRP3 activation in ALD is closely related to the inflammatory response and liver injury. Correspondingly, inhibiting the activation of NLRP3 can improve the prognosis of alcoholic liver disease. For instance, diallyl disulfide was shown to inhibit the activation of ethanol-induced mouse liver NF-κB signaling and NLRP3, slowing disease progression. 93 Zeaxanthin dipalmitate inhibited hepatic inflammatory infiltration and fat droplet accumulation in a rat ALD model by restoring mitophagy that was impaired by ethanol poisoning, and suppressed NLRP3. 94 A traditional Chinese medicine magnolol extract can inhibit NLRP3, preventing alcohol-induced liver injury. 95 Ethanol inhibits the breakdown of fatty acids, which promotes fat accumulation in liver cells, making them prone to lipid peroxidation and oxidative damage (Fig. 3). ROS production by dysfunctional mitochondria and oxidative stress are key causes of ALD. Oxidative metabolism of alcohol damages mitochondria, which produce ROS and activate NLRP3, causing inflammatory responses in the liver (Table 1). 80 Ginsenoside Rg1 was shown to suppress NLRP3 activation by preventing oxidative stress, which alleviated pathological changes in the liver tissue of mice and rats on alcohol. 96 Oroxylin A can reduce the accumulation of mitochondrial superoxide and intracellular ROS in hepatocytes induced by ethanol, thus mediating the inactivation of NLRP3. 97 The inhibition of NLRP3 signaling can restrain the oxidative stress response in ALD, thus improving ALD. The traditional Chinese medicine extract astragaloside IV was shown to inhibit the NLRP3/caspase-1 inflammatory signaling pathway, alleviating alcohol-induced liver inflammation and oxidative stress in the liver. 69 Moreover, hepatocytic pyroptosis is closely associated with NLRP3 activation in the pathogenesis of ALD. Diallyl trisulfide alleviates alcohol-induced hepatocyte apoptosis by downregulating the accumulation of intracellular ROS and inhibiting NLRP3. 98 In conclusion, NLRP3 plays a pivotal role in the pathogenesis and progression of ALD, and suppression of NLRP3 activation can ameliorate the prognosis of alcoholic liver disease.
Viral hepatitis
Viral hepatitis is an infectious disease mainly caused by multiple hepatitis viruses (hepatitis A, B, C, D, and E viruses).
The most common are hepatitis B and C. Viral infection activates the host immune response system, causing inflammatory responses that activate NLRP3 (Table 1, Fig. 3). An excessive and ongoing inflammatory response causes chronic inflammatory disorders that lead to liver fibrosis. The expression levels of NLRP3, ASC, and IL-1β in the cytoplasm are lower in hepatitis B virus (HBV)-negative patients and increased in HBV-positive patients. 99 The severity of HBV-induced liver inflammation is proportional to the expression levels of NLRP3, gasdermin D, caspase-1, IL-1β, and IL-18. 100 Therefore, therapeutic targeting of NLRP3 can potentially suppress excessive inflammatory responses and alleviate inflammatory damage caused by viral hepatitis. HBV infection induces hepatic injury through the actions of HBV-associated proteins. Hepatitis B core antigen upregulates NLRP3 by promoting the phosphorylation of NF-κB, thereby promoting liver injury. 101 Hepatitis B virus X protein activates NLRP3 under oxidative stress, enhancing NLRP3 inflammasome-mediated inflammation and pyroptosis by enhancing the generation of mtROS in liver cells. 99 Investigating the activation mechanisms of NLRP3 in hepatitis B virus infection can aid the development of NLRP3-directed antiviral therapies. Hepatitis C virus (HCV) infection can activate NLRP3 inflammasomes, thus increasing the expression of NLRP3-related components in HCV-infected liver cells. 12,102 NLRP3 can influence macrophage activation and promote the regulation of the immune response. HCV activates NLRP3 in liver macrophages or KCs, driving liver inflammation. HCV core protein activates NLRP3, promoting the production and release of IL-1β by macrophages. 103 HCV infection activates NLRP3 in KCs by inducing potassium efflux, resulting in production of IL-1β. The secretion of IL-1β drives chemokines, proinflammatory cytokines, and immunoregulatory genes that are associated with the severity of HCV disease. 104 NLRP3 is activated in HCV infection through the NF-κB signaling pathway. In addition, HCV infection can induce endoplasmic reticulum stress, which increases the release of intracellular ROS and subsequently activates NLRP3. 102 Although the aberrant activation of NLRP3 is important in viral hepatitis pathogenesis, the specific regulatory mechanism remains to be further explored. 105
Autoimmune hepatitis
Autoimmune hepatitis (AIH) is an autoimmune inflammatory reaction in liver tissue, involving the action of innate immune cells such as macrophages, T cells, and natural killer T cells (Fig. 3). 106,107 NLRP3 is a component of the innate immune system that participates in the occurrence and development of AIH. In the pathogenesis of AIH, T helper (Th)0 lymphocytes differentiate into Th1 and Th2 cells. Th1 cells can activate macrophages by secreting IL2 and interferon gamma, thereby releasing IL-1. 106 TLRs 2, 4, and 9 can mediate the activation of inflammasomes in AIH and de novo autoimmune hepatitis, suggesting that inflammasome activation has a role in the pathogenesis of AIH. 108 NLRP3 inflammasomes are known to contribute to concanavalin A (Con A)-induced hepatitis (an AIH model). NLRP3 and ASC expression levels are upregulated in Con A-induced hepatitis. NLRP3 inflammasome activation, IL-1β production, and pyroptosis were significantly increased in Con A-induced AIH mice. 10 Recombinant human IL-1 receptor antagonists can inhibit NLRP3 in AIH by inhibiting ROS production and mitochondrial dysfunction in liver tissue. 10 The activation of NLRP3 may involve the NF-κB (Table 1) and protein kinase A (PKA) signaling pathways. Formononetin inhibits NLRP3 activation by inhibiting the NF-κB pathway and protects the liver against Con A-induced liver injury in mice. 109 Dimethyl fumarate can inhibit the activation of the NLRP3 inflammasome by regulating the PKA signaling pathway and prevent Con A-induced hepatitis. 110 The regulatory mechanism of NLRP3 is extensive, and its relationship with the PKA and NF-κB signaling pathways in the pathogenesis of AIH should be intensively studied.
Primary biliary cholangitis
Primary biliary cholangitis (PBC) is a chronic inflammatory autoimmune cholestatic liver disease characterized by immune-mediated bile duct injury and accompanied by chronic cholestasis. 111,112 However, the specific pathogenesis of PBC is still unclear. Bile stasis can trigger TLR 4 signaling and enhance NF-κB activation, activating NLRP3 and thereby aggravating liver fibrosis (Table 1, Fig. 3). 113 NLRP3 is involved in liver inflammation and fibrosis. It is expressed not only in immune cells but also in liver cells and bile duct cells. 114 Studies have shown that NLRP3 expression is significantly increased in the livers of PBC patients and mice. 115 Moreover, in a mouse PBC model, galectin-3 directly stimulated the activation of NLRP3, causing autoimmune cholangitis and fibrosis. 116 MCC950, an NLRP3 inhibitor, can dramatically lessen bile duct ligation-induced liver injury by inhibiting the activation of NLRP3. 117 Paeoniflorin can reduce the degree of liver injury and liver fibrosis in PBC mice by inhibiting NLRP3 and related cascade inflammatory pathways. 115 Therefore, inhibiting NLRP3 and related cascading inflammatory pathways may be a new approach to the prevention and treatment of PBC.
Primary sclerosing cholangitis
Primary sclerosing cholangitis (PSC) is also a chronic cholestatic liver disease. Inflammation and fibrosis result in multifocal biliary strictures and end-stage liver disease. The etiology of PSC is not clear, and so far there is no specific and effective treatment. Elevated markers of NLRP3 inflammasome activation have been detected in liver biopsies of PSC patients. 118 NLRP3 immunostaining was positive in reactive bile duct cells in the livers of PSC patients and mouse PSC models, suggesting that activation of NLRP3 may have a role. 119 Although NLRP3 does not affect the proliferation of bile duct cells, it can destroy the integrity of the bile duct epithelium, increasing epithelial permeability. 114,120 It has been established that the primary pathogenesis of PSC in Mdr2−/− mice (a common PSC mouse model) includes loss of the integrity of the bile duct epithelial cell layer. 120 Furthermore, NLRP3 was found significantly activated in human PSC and Mdr2−/− livers. The extent of liver fibrosis in PSC patients positively correlates with the levels of NLRP3 and IL-1β. 65 Therefore, targeting NLRP3 is a new direction in the treatment of PSC.
Conclusion
This article provides a comprehensive review of the relationship between NLRP3 and liver fibrosis. The process of liver fibrosis involves the interaction of multiple cells and molecules. The mechanisms of these interactions need to be further studied in the future. Several NLRP3 inflammasome-related molecular inhibitors have been studied in liver diseases and have shown good results in reducing inflammation, fibrosis, and other tissue damage. Many traditional Chinese medicines have lipid-metabolism-regulating, anti-inflammatory, and antioxidant effects, and alleviate hepatitis and liver fibrosis by inhibiting the NLRP3 inflammatory pathway. Furthermore, traditional Chinese medicines can slow down the progression of chronic liver disease by regulating the gut microbiome. 68,69 These experimental studies provide a preliminary foundation for clinical practice and new strategies for the development of drugs or treatments targeting NLRP3. In the future, the pharmacological effects of NLRP3-related molecular inhibitors and the synergistic effects of other drugs, especially traditional Chinese medicine preparations, are worthy of further exploration. Although some preliminary clinical trials and animal studies have shown the potential efficacy of targeted NLRP3 therapy, it has not yet been widely applied in clinical practice. The activation mechanism of the NLRP3 pathway is not fully understood, and therefore targeted NLRP3 therapy needs deeper evaluation.
Fig. 3. Activation of NLRP3 inflammasome in chronic liver disease. High-fat diet and long-term alcohol consumption damage hepatocytes, leading to the accumulation of lipid in hepatocytes and causing an inflammatory response. Lipid-accumulated liver cells are prone to lipid peroxidation and oxidative damage, activating NLRP3. Viral proteins activate NLRP3, resulting in inflammatory reactions and activating HSCs. Abnormal expression of autoantigens may lead to an abnormal immune response in the liver, causing the activation of NLRP3. Immune-mediated bile duct injury results in intrahepatic bile duct narrowing and bile stasis. Bile stasis, in turn, activates NLRP3 through the NF-κB pathway. High-fat diet, alcohol, virus infection, and cholestasis lead to dysbiosis of the gut microbiota and increase intestinal permeability. Gut-derived PAMPs enter the liver and then activate NLRP3. ECM, extracellular matrix; HSCs, human hepatic stellate cells; HBV, hepatitis B virus; HCV, hepatitis C virus; IL-1β, interleukin 1β; IL-18, interleukin 18; LPS, lipopolysaccharide; NLRP3, NOD-like receptor protein 3; NF-κB, transcription factor nuclear factor-kappa B; ROS, reactive oxygen species. | 2023-11-01T15:08:07.764Z | 2023-10-30T00:00:00.000 | {
"year": 2023,
"sha1": "0621eac6a45a1e672e5c009ffe897615882b0018",
"oa_license": "CCBYNC",
"oa_url": "https://publinestorage.blob.core.windows.net/journals/JCTH.2023.0(0).0.00231.Meihua%20Sun.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7d3cdfc8c0a61bdc8a16ac98d77f6484c6a51ee6",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": []
} |
232291567 | pes2o/s2orc | v3-fos-license | T Cells Plead for Rejuvenation and Amplification; With the Brain’s Neurotransmitters and Neuropeptides We Can Make It Happen
T cells are essential for eradicating microorganisms and cancer and for tissue repair, have a pro-cognitive role in the brain, and limit Central Nervous System (CNS) inflammation and damage upon injury and infection. However, in aging, chronic infections, acute SARS-CoV-2 infection, cancer, chronic stress, depression and major injury/trauma, T cells are often scarce, exhausted, senescent, impaired/biased and dysfunctional. People with impaired/dysfunctional T cells are at high risk of infections, cancer, other diseases, and eventually mortality, and become a multi-level burden on other people, organizations and societies. It is suggested that “Nerve-Driven Immunity” and “Personalized Adoptive Neuro-Immunotherapy” may overcome this problem. Natural Neurotransmitters and Neuropeptides: Glutamate, Dopamine, GnRH-II, CGRP, Neuropeptide Y, Somatostatin and others, bind their well-characterized receptors expressed on the cell surface of naïve/resting T cells and induce multiple direct, beneficial, and therapeutically relevant effects. These Neurotransmitters and Neuropeptides can induce/increase: gene expression, cytokine secretion, integrin-mediated adhesion, chemotactic migration, extravasation, proliferation, and killing of cancer. Moreover, we recently found that some of these Neurotransmitters and Neuropeptides also induce a rapid and profound decrease of PD-1 in human T cells. By inducing these beneficial effects in naïve/resting T cells at different times after binding their receptors (i.e. NOT by a single effect/mechanism/pathway), these Neurotransmitters and Neuropeptides by themselves can activate, rejuvenate, and improve T cells. “Personalized Adoptive Neuro-Immunotherapy” is a novel method for rejuvenating and improving T cells safely and potently by Neurotransmitters and Neuropeptides, consisting of personalized diagnostic and therapeutic protocols. The patient’s scarce and/or dysfunctional T cells are activated ex vivo once by pre-selected Neurotransmitters and/or Neuropeptides, tested, and re-inoculated into the patient’s body. Neuro-Immunotherapy can be actionable and repeated whenever needed, and allows other treatments. This adoptive Neuro-Immunotherapy calls for testing its safety and efficacy in clinical trials.
T cells are essential for eradicating infectious organisms and cancer, for the immune response to injury and organ repair, and for various other health-promoting missions (1). T cells also have various beneficial and important roles in the healthy brain, where they have pro-cognitive as well as neuroprotective roles (2,3).
T cell exhaustion refers to deterioration of T cell function. Exhausted T cells (Tex) are characterized by low proliferation in response to antigen stimulation, progressive loss of effector function (cytokine production and killing function), expression of multiple inhibitory receptors such as PD-1, Tim3, and LAG3, and metabolic alterations from oxidative phosphorylation to glycolysis (4, 9-16). Exhausted T cells are further discussed below in the section Severe Chronic Viral Infections.
T cell senescence or biological aging is a process that results from a variety of stresses, and leads to a state of gradual deterioration of functional characteristics and irreversible growth arrest (4, 16-18).
T cell stemness is characterized by the capacity of T cells to self-renew and be multipotent. T cell stemness combines the ability of a T cell to perpetuate its lineage and give rise to differentiated cells, and to interact with its environment to maintain a balance between quiescence, proliferation, and regeneration (4, 19).
Individuals who have scarce and/or exhausted, senescent, altered and dysfunctional T cells are very susceptible and at high risk of morbidity and mortality, and become a harsh multi-level clinical, physiological, and economic burden on many other people, organizations, and societies (specified only in Figure 1B-right and in further detail in its legend, due to word limit of this paper).
T CELLS OF NUMEROUS HUMAN BEINGS ARE IMPAIRED AND DYSFUNCTIONAL
Aging
T cells undergo characteristic deterioration with age, leading to an increased incidence of cancer and infectious diseases, reduced immunogenicity and efficacy of many vaccines, and problematic recovery after injuries, surgeries, and transplantations (16, 20).
The T cell impairments in elderly people are due to the decreased output of lymphoid cells from the bone marrow and the involution of the thymus (16). Concomitantly, there is an accumulation of highly differentiated T cells that previously encountered antigens, and this leads to a diminished T cell receptor (TCR) repertoire.
T cells of aging people share some similarities with senescent cells (17), i.e. cells that have undergone a permanent proliferative arrest. Senescent cells stop dividing due to various stress-inducing factors, among them environmental and internal damaging events, abnormal cellular growth, oxidative stress, autophagy factors, and others (16, 17). The similarities between T cells of elderly people and senescent cells include shorter telomeres, accumulated DNA damage, and metabolic changes (16, 17).
CD8+ T cells are important for protective immunity against intracellular microorganisms and tumors, but in chronic infections or cancer, CD8+ T cells are exposed to persistent antigen and/or inflammatory signals, leading to a gradual deterioration of their function, a state called "exhaustion". Tex are characterized by progressive loss of effector functions (cytokine production and killing function), expression of multiple inhibitory receptors (such as PD-1 and LAG3), dysregulated metabolism, poor memory recall, impaired homeostatic self-renewal and proliferation, and distinct transcriptional and epigenetic programs (11-20).
These altered functions are closely related to an altered transcriptional program and epigenetic landscape that clearly distinguish Tex from normal effector and memory T cells. T cell exhaustion represents a continuous spectrum of cellular dysfunction induced during chronic viral infection, facilitating viral persistence and associated with poor clinical outcome. Several types of modulation of T cell exhaustion can restore function in Tex, promoting viral clearance (14).
Several recent studies document a dramatic reduction in the total number of T cells, and specifically in CD4+ and CD8+ T cells, with functional exhaustion of T cells, in SARS-CoV-2-infected patients, especially in patients requiring intensive care (21, 22). Counts of total T cells, CD8+ T cells or CD4+ T cells lower than 800, 300, or 400 cells/μl, respectively, negatively correlate with survival of SARS-CoV-2-infected patients (12). T cell numbers are negatively correlated with serum IL-6, IL-10, and TNF-α levels. Patients in the disease resolution period show reduced levels of these cytokines and restored T cell counts. T cells from SARS-CoV-2-infected patients have significantly higher levels of PD-1 (12). Moreover, increased PD-1 and Tim-3 expression on T cells is seen as patients progress from prodromal to overtly symptomatic stages (12).
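As a purely illustrative aside, the count thresholds quoted above lend themselves to a simple screening rule. The minimal Python sketch below encodes them; the function name, the dictionary layout, and the per-microliter units are assumptions made for illustration, not part of the cited study or of any clinical library.

```python
# Minimal sketch: flag T cell compartments falling below the thresholds cited
# in the text (total T < 800, CD8+ < 300, CD4+ < 400 cells per microliter).
# All names here are illustrative; this is not a validated clinical tool.

def flag_t_cell_risk(total_t: float, cd8: float, cd4: float) -> dict:
    """Return which compartments fall below the cited survival-associated cutoffs."""
    thresholds = {"total_t": 800.0, "cd8": 300.0, "cd4": 400.0}  # cells/uL
    counts = {"total_t": total_t, "cd8": cd8, "cd4": cd4}
    below = {name: counts[name] < cutoff for name, cutoff in thresholds.items()}
    return {"below_threshold": below, "any_flag": any(below.values())}

# Hypothetical patient: 650 total T, 250 CD8+, 350 CD4+ cells per microliter
print(flag_t_cell_risk(650, 250, 350))
```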
Cancer
Many cancer patients have a low number of effector T cells (Teffs), and these cells suffer from anergy, exhaustion, senescence, and impaired stemness (4, 5, 11, 14, 15). T cell impairments in cancer patients are caused by several factors that can be grouped into several broad categories, among them: Category 1: the adverse effects imposed on T cells by the cancer cells; Category 2: the anti-T cell deleterious side effects of various cancer treatments: chemotherapy, immunosuppressive drugs, other drugs, radiation and surgery; Category 3: psychosocial factors including ongoing severe stress, anxiety, depression, fear, pain, and sleep disturbances; Category 4: cancer-related malnutrition.
Thus, the T cells of cancer patients are often dysfunctional and unable to eradicate the cancer cells efficiently, due to many different reasons. On top of the above-mentioned division into general categories, the reasons and factors that limit and/or impair the T cells of cancer patients can be divided into distinct families, each containing multiple specific members, as discussed in detail in (11). Among these are the following: Factor family 1: lack of antigen processing, T cell recognition and TCR signaling; Factor family 2: negative immune modulation; Factor family 3: non-accessibility of the tumor to T cells; Factor family 4: immune editing, equilibrium and escape (11). Due to their low numbers and multiple impairments, the cancer patient's T cells are unable to effectively eradicate both the cancer cells and the infectious organisms that infect the patients, and are also unable to perform proper tissue repair and their other essential and beneficial tasks in the periphery and brain.
On these grounds, an effective and safe rejuvenation of scarce, exhausted and dysfunctional T cells, and the induction of multiple beneficial T cell functions in cancer patients, remain an urgently desired clinical goal.
Chronic Stress, Severe Depression, and Sleep Disturbances
Chronic stress is associated with immune dysfunction and various peripheral T cell abnormalities, including an increased frequency and suppressive function of CD4+CD25+ and CD4+FoxP3+ regulatory T cells (Tregs), synergistically decreased function of effector T cells (Teffs) and antigen-presenting cells (APCs), and a shift from Th1 to Th2 cytokine responses (23). The increased percentage of Tregs is associated with inflammatory and neuroendocrine responses to acute psychological stress and poorer health status in older men and women (23).
A subset of individuals with major depressive disorder (MDD) has impaired adaptive immunity characterized by a greater vulnerability to viral infections and deficient responses to vaccinations, along with decreased number and/or activity of T cells and natural killer cells (NKCs) (7).
MDD patients also have a significantly increased percentage of CD127(low)/CCR4(+) Tregs and memory Tregs, and reduced CD56(+)CD16(-) (putative immunoregulatory) NKC counts (7). Moreover, CD4+ T cells of MDD patients are characterized by higher frequencies of CD4(+)CD25(high)CD127(low/-) cells, higher FOXP3 mRNA expression, and less diverse TCR Vβ repertoires (6). T cells from MDD patients also suffer from significantly lower surface expression of the chemokine receptors CXCR3 and CCR6, which are central to T cell differentiation and trafficking.
FIGURE 1 | (A) T cells of numerous human beings, in a kaleidoscope of abnormal conditions (left), are scarce, i.e. decreased in their numbers, and/or exhausted, senescent, impaired, defective, biased, and dysfunctional (right). Some of the multiple published evidence for these T cell alterations in human beings is cited in the paper. (B) Some of the envisioned multi-level detrimental implications of altered, scarce, exhausted, senescent and dysfunctional T cells. The various T cell impairments, in people in a kaleidoscope of abnormal conditions (left), do not allow effective eradication of infectious organisms and of cancer cells, and prevent, limit or impair other absolutely essential T cell functions in the periphery and in the brain. Some of the negative consequences and implications of these T cell impairments (right) are the following: 1. People whose T cells are suboptimal in number and function are more susceptible to infectious organisms (viruses, bacteria, fungi, and parasites) and cannot eradicate them in an optimal manner. 2. People carrying infectious organisms for prolonged periods, on top of suffering themselves from chronic infectious diseases, also become contagious and can infect many other people. This high risk creates a need for, and leads to, the very difficult and problematic mission of social distancing and even complete isolation of the infected people, as seen in the SARS-CoV-2 (Covid-19) pandemic. 3. The T cell impairments do not allow effective prevention and eradication of cancer. Needless to remind the readers that cancer kills, and that so do certain harsh complications and side effects of some anti-cancer treatments. 4. The T cell impairments can limit, and even distort, the person's responses to some vaccinations, drugs and medical procedures, and as such may lead to unexpected side effects which would not occur in people with normal and properly functioning healthy T cells. Due to that, I would argue that it is very important to perform routine and periodic tests for the entire population, to analyze the T cells of each person for their total counts, as well as their phenotype, subpopulations, condition, and overall functionality. Such recommended routine personalized tests may have profound impact and implications, as they could indicate whether the person's T cells are in normal number and condition, or rather scarce, exhausted, senescent and dysfunctional. I further claim that such tests must become obligatory prior to any administration of vaccination or drugs, performance of surgery, or other medical procedure. 5. Immunocompromised people whose T cells are scarce and dysfunctional, and who become increasingly and chronically ill, often for many years, become a physical, physiological, and economic burden on hospitals, in-hospital medical staff, emergency and intensive care units, critical medical instruments, out-of-hospital healthcare providers, insurance companies, and other people and organizations. 6. The low number and abnormal T cell function of elderly people, and of patients with various diseases, and the increased susceptibility and risk of the respective people, can induce severe ongoing/chronic stress, depression, fear and anxiety. These psychological problems can later also affect their family members and other relatives and colleagues. This increased stress, often becoming chronic, has by itself multiple very severe and well-documented detrimental effects on the health of all these people. 7. People whose T cells are scarce and dysfunctional become a heavy clinical, social, physiological, and economic problem of, and a heavy burden on, their society and country, due to all the above.
Sleep disturbance, which is a core symptom of MDD, is by itself associated with alterations in lymphocyte distributions. Within the MDD group, self-reported sleep disturbance was associated with an increased percentage of effector memory CD8+ cells, but with a lower percentage of CD56+CD16− NKCs (7).
Moreover, sleep deprivation severely disturbs the functional circadian rhythm of natural Treg counts, and reduces CD4+CD25− T cell proliferation (7). MDD and associated sleep disturbances also increase effector memory CD8+ and Treg pathways (8).
Together, these findings indicate that the T cell phenotype and TCR utilization are skewed on several levels in MDD patients and in people with sleep disturbances, and as such may increase the patient's susceptibility to infectious diseases and cancer. It is hypothesized that these T cell abnormalities may even contribute to abnormal cognitive brain function and deficient CNS protection.
Interestingly, CD3+ T cells were shown to be critical for resolution of comorbid inflammatory pain and depression-like behavior in a model of peripheral inflammation (24). Indeed, it was shown that T cells were required for resolution of the comorbid persistent mechanical allodynia, spontaneous pain and depression in this model of peripheral inflammation, indicating that the immune system can contribute to both the onset and the resolution of these comorbidities (24). It was suggested that the pro-resolution effects of T cells may have a major impact on treating patients with comorbid persistent pain and depression (24).
Acute Wound, Injury and Trauma
Lymphocytes are essential for wound healing and tissue repair (25). For example, in response to a wound, on top of the first recruitment of neutrophils and the later recruitment of monocytes to the clot, as a first line of defense against bacteria, the adaptive immune system, comprising Langerhans cells, dermal dendritic cells and T cells, is also activated to combat self and foreign antigens (25).
Serious injury in humans and in experimental animals is associated with severe decreases in T cell-dependent immune functions, leading to generalized immunosuppression, which, in turn, increases host susceptibility to infections and sepsis (26,27).
Following severe injury, there is a diminished production of IL-12, a loss of Th1 function and cytokine production, and a shift to a Th2 phenotype, with increased production of IL-4 and IL-10, cytokines known to inhibit Th1 function. Moreover, the interactions between the innate and adaptive immune systems are disturbed following injury (26).
These T cell impairments are associated with decreased resistance to infections, and can impair multiple essential T cell-mediated activities in both the peripheral organs and the nervous system. Some immunomodulatory strategies have had success in animal models in ameliorating the diminished resistance to infection commonly seen after major traumatic or thermal injury (26, 27). In their review entitled The effects of injury on the adaptive immune response, Lederer et al. discuss the immunomodulatory strategies that succeeded in animal models in improving resistance to infection after major traumatic or thermal injury, but emphasize that immunomodulatory treatments that are successful in preventing infection may be contraindicated once infection is manifest (26).
GENERAL SUGGESTED CRITERIA FOR AN OPTIMAL SOLUTION TO REJUVENATE AND IMPROVE SCARCE AND DYSFUNCTIONAL T CELLS OF NUMEROUS PEOPLE IN A KALEIDOSCOPE OF ABNORMAL CONDITIONS
The scarce, and often impaired, exhausted and dysfunctional T cells of so many people (Figure 1A-right, Figure 1B-left), and their severe multi-level consequences (Figure 1B-right), create an urgent global need for new out-of-the-box solutions to overcome the problem.
I humbly suggest, without underestimating or ruling out any other proposed solution, that the optimal desired cellular method for the rejuvenation and amplification of beneficial T cell functions should hopefully meet all of the following 20 criteria.
1. Be personalized.
2. Be safe and free of any detrimental side effects, and in fact lead only to positive side effects on the person's overall wellbeing, owing to the improved T cell function.
3. Be as natural/physiological as possible, by using natural signaling molecules which bind and activate their natural receptors, and by doing so improve the patient's own T cells.
4. Be very effective, by inducing and improving, simultaneously or sequentially, multiple beneficial T cell features and functions (rather than only a single effect).
5. Be patient-friendly and painless, not necessitate hospitalization or any other procedure or treatment, and allow continuation of everyday life during therapy.
6. Be applicable to many people of very different ages and abnormal conditions, who suffer from scarce, sub-optimal and dysfunctional T cells.
7. Allow repeated and timely therapy whenever needed, and for as long as needed, and as such allow even prolonged periodic treatment over months and years, for people with chronic T cell dysfunction.
8. Be flexible and contain several built-in options and modalities, allowing a higher degree of freedom and action.
9. Contain preliminary personalized ex vivo diagnostic cellular functional tests that can be done at any time, using a small quantity of the patient's own T cells (purified from a small quantity of peripheral blood), and reliable biomarkers. These ex vivo tests should be able to evaluate the present condition of the patient's T cells, and measure their ex vivo responsiveness to the planned adoptive cellular therapy. As such, these diagnostic tests could evaluate and predict the person's potential benefit from the tentative treatment, and help tailor a personalized regimen.
10. Be independent of a prerequisite of knowing a priori the antigen/s expressed on the disease-causing cells or microorganisms. Therefore, the method should not be limited to, or suitable only for, cases in which either the tumor antigens or the antigens of infectious organisms (e.g. the spike protein of Covid-19) are known and can be used.
11. Not depend on, and not utilize, the T cell receptor (TCR) and its associated proteins, to avoid further Activation-Induced Cell Death (AICD), T cell exhaustion, and the risk of autoimmunity.
12. Not manipulate the T cells genetically.
13. Not park/culture the T cells in vitro for prolonged periods, so as not to lose some essential traits/capabilities. It is especially important that the therapeutic method does not impair T cell ability to migrate and home to, and penetrate into, various organs and tissues which either contain disease-causing cells or infectious organisms, or are injured and require T cell help for healing.
14. Not use T cells whose in vivo activity is dependent on cytokines or growth factors and that need such subsequent support in vivo. Thus, the therapeutic method should preferably be free of any detrimental side effects of systemically-administered cytokines (e.g. IL-2), and of any cytokine storms.
15. Not perform any preconditioning procedure prior to the therapy itself (e.g. chemotherapy- or biologics-based lymphoablation, radiation or others), and not inject any drug before, during, or after the infusion of the adoptive transfer of the patient's rejuvenated and improved T cells.
16. Not change, inactivate or even transiently suppress any natural receptors, ion channels, or other proteins expressed on the T cell surface during the ex vivo process of T cell rejuvenation, activation and amplification, prior to the inoculation of the patient's improved T cells into his body.
17. Not necessitate single infusions of billions of improved T cells each time, to avoid in vivo cytokine storms and competition over natural ingredients and space. Settle for far fewer T cells, injected repeatedly over a few weeks or months.
18. For conditions which are NOT cancer, use and adoptively transfer rejuvenated and improved T cells which can be "friendly" to, and communicate with, other cells, rather than very aggressive T cells which may damage other cells, or compete with them for natural resources and space, in lymphoid organs and in other tissues and organs, and which may also cause autoimmunity.
19. The new immunotherapy should stand on its own, as a monotherapy, for saving, prolonging, and improving life, but must not interrupt or compete with any other efficient treatment/drugs from which the person can benefit, and which may overcome T cell exhaustion and allow better T cell function. Thus, any other prior, simultaneous, in-between, or later safe and effective treatment would be possible, as long as it does not harm T cells and does not induce immunosuppression.
20. On top of being used on their own, it would be advantageous if the diagnostic and therapeutic methods and protocols of the new immunotherapy could be used also as "add-on technologies", and allow improvement of other adoptively-transferred therapeutic T cells.
T CELL IMMUNOTHERAPIES: "THE MEDICAL EQUIVALENT OF SPLITTING THE ATOM"
T cell immunotherapies (34, 35) have revolutionized medicine and were even called "the medical equivalent of splitting the atom" by The New York Times (28).
In line with this scientific and clinical revolution, there is an enormous number of scientists and clinicians working on T cell immunotherapies, and a meteoric rise of companies developing and utilizing them, primarily for cancer, but also for some infectious diseases. Current adoptive/cellular T cell therapies include mainly the following types: donor lymphocyte infusions, tumor-infiltrating lymphocytes, T-cell receptor-engineered T cells, chimeric antigen receptor (CAR) T cells, and virus-specific T cells. These T cell immunotherapies are reviewed in many papers, among them (34,35).
While each of these potent T cell immunotherapies has its own clear focus, advantages, and successes, primarily for some types of cancer, to the best of my knowledge none can rejuvenate and rescue T cells from exhaustion and senescence whenever needed and for whoever needs it, or manages to improve multiple T cell functions simultaneously, among them increased migration, homing and extravasation into tissues (absolutely essential for penetrating and combating "cold tumors").
I'm also not aware of adoptive T cell immunotherapies that fulfil all, or at least most, of the 20 suggested criteria specified in the preceding section, and that are suitable for broad use in all, or at least most, conditions of scarce, exhausted, suboptimal, and dysfunctional T cells.
HYPOTHESIS AND SUGGESTION: "NERVE-DRIVEN IMMUNITY" MAY REJUVENATE, ACTIVATE, AND AMPLIFY BENEFICIAL T CELLS IN A DIRECT, SAFE, AND EFFECTIVE MANNER
How can we safely and potently rejuvenate, activate, and improve scarce, exhausted and/or senescent T cells whenever needed, and fulfil the 20 suggested criteria for an optimal solution defined in the previous section?
My idea and suggestion are to try to mimic, and translate into therapeutic terms, what I hypothesize the nervous system normally does in everyday life: i.e. it "talks" directly to T cells in various parts of the body via Neurotransmitters and Neuropeptides, which in turn bind to their specific receptors in T cells (as well as in many other cells) and induce on their own multiple direct, rapid, potent, timely and beneficial effects. I further hypothesize that T cells need Neurotransmitters, Neuropeptides and their receptors, and the direct, rapid and potent effects they induce, for performing multiple health-guarding T cell tasks, and for communicating with the brain and with other body systems and organs.
Definitions and Characteristic Features of Neurotransmitters and Neuropeptides in the Nervous System
Neurotransmitters are traditionally defined as endogenous chemical substances used by the nervous system to transmit messages, either between neurons, or from neurons to muscles, or from neurons to gland cells. In addition, many Neurotransmitters induce direct effects on T cells and other immune cells, as well as on different cell types which express their receptors. Neuropeptides are traditionally defined as small protein-like molecules, i.e. peptides, produced and released by neurons through the regulated secretory route, and acting on neural substrates. Once again, T cells and other immune cells ought to be added as target cells that are affected directly by Neuropeptides. Figure 2A summarizes the classical features of Neurotransmitters and Neuropeptides in the nervous system. Yet, these definitions and characteristics ignore completely the direct and very potent effects of many Neurotransmitters and Neuropeptides on T cells and other immune cells, each via its own functional Neurotransmitter/Neuropeptide receptors that are highly expressed in these cells (29-33). Most organs are innervated, and the nerve endings secrete Neurotransmitters and Neuropeptides. The innervated body organs include: the brain, muscle, spinal cord, skin, gut, blood vessels, all the lymphoid organs and tissues (47-50), and almost all other body organs.
Interestingly, with regard to the gut, the gut microbiota has been shown to produce mammalian Neurotransmitters and Neuropeptides and/or consume them (51-53). It was also shown that Neuropeptides and Neurotransmitters contribute to the mutual microbiota-host interactions (52, 53). Thus, I envision that microbially-derived Neurotransmitters and Neuropeptides (51) can also bind and affect some T cells in a direct and powerful manner.
T cells and other immune cells express on their cell surface functional and important receptors of most, if not all, types of Neurotransmitters and Neuropeptides, among them Dopamine receptors (33, 37, 38, 54-56), Glutamate receptors (36, 39, 40, 46, 57-61), Acetylcholine receptors (62, 63), GABA receptors (64, 65), VIP receptors (66) and others. I would therefore propose that most/all the Neurotransmitters and Neuropeptides secreted from nerve endings in the vicinity of T cells can affect the T cells that express their receptors, in most if not all body locations in which they reside and/or patrol through, in a direct, safe, and potent manner (30-32).
FIGURE 2 | (A) Neurotransmitters and Neuropeptides: definitions and characteristic features in the nervous system. Neurotransmitters are traditionally defined as endogenous chemical substances used by the nervous system to transmit messages either between neurons, or from neurons to muscles, or from neurons to gland cells. The communication between two neurons happens in the synaptic cleft (the small gap between the synapses of neurons), where electrical signals that have travelled along the axon are briefly converted into chemical ones through the release of Neurotransmitters, causing a specific response in the receiving neuron. A Neurotransmitter influences a neuron in one of three ways: excitatory, inhibitory or modulatory. An excitatory Neurotransmitter promotes the generation of an electrical signal called an action potential in the receiving neuron, while an inhibitory Neurotransmitter prevents it. Whether a Neurotransmitter is excitatory or inhibitory depends on the receptor it binds to. Neuropeptides are traditionally defined as small protein-like molecules, i.e. peptides, produced and released by neurons through the regulated secretory route, and acting on neural substrates and additional ones. The Neuropeptides are derived from precursor molecules that must be post-translationally processed to yield the active peptides. The Neuropeptides may diffuse for long distances within the extracellular space before binding to their specific receptors, which are almost exclusively G protein-coupled receptors. The Neuropeptides and their receptors modulate many diverse functions of the central nervous system, including sleep, arousal, reward, feeding, pain, cognition, stress responses, and emotions. (B) Basic features of a few Neurotransmitters and Neuropeptides that induce many direct, potent, beneficial and therapeutically-relevant effects on resting/naive human T cells. The figure includes basic information about Dopamine, Glutamate, GnRH-II, Somatostatin, CGRP and Neuropeptide Y, and about their receptors. These Neurotransmitters and Neuropeptides induce very potent effects on various "classical" well-known target cells throughout the body, and also induce multiple direct, potent, beneficial and therapeutically-relevant effects in resting/naive human T cells, and in some other immune cells. Each of these Neurotransmitters and Neuropeptides has a family of its own receptors, expressed at different levels and compositions in its target cells, T cells included. (C) Nerve-Driven Immunity & 'Braining' T cells: concept and main findings so far. The figure shows the principal elements of 'Nerve-Driven Immunity' (30, 31), allowing direct communication between the brain and T cells, via Neurotransmitters and Neuropeptides secreted by nerve endings, and their receptors expressed on the cell surface of T cells. The figure also shows a partial list of the direct effects that some Neurotransmitters and Neuropeptides, primarily Dopamine, Glutamate, GnRH-I, GnRH-II, CGRP, Somatostatin and Neuropeptide Y, induce in resting/naive human T cells, revealed in our experiments so far. Most of these effects are described in our published papers (29-33, 36-45), but some are still unpublished data, or submitted for publication. (D) Personalized Adoptive Neuro-Immunotherapy. This is a new mode of personalized adoptive/cellular immunotherapy, developed on the basis of the direct, rapid, potent and beneficial effects that selected Neurotransmitters and Neuropeptides induce on their own in resting/naive human T cells (29-33, 36-45). The aim and vision of the Personalized Adoptive Neuro-Immunotherapy is to rejuvenate, activate and improve scarce, exhausted, senescent, impaired and dysfunctional T cells, in any person having such T cells and in need of immunotherapy. The Personalized Adoptive Neuro-Immunotherapy was designed to meet all the 20 criteria specified in the text of this paper. It has two stages and corresponding protocols: a personalized diagnostic protocol (ex vivo only), and a personalized therapeutic protocol (ex vivo + in vivo). The diagnostic protocol consists of a few in vitro tests, performed in parallel, on a small number of the person's own T cells, soon after their separation from a small blood sample. These tests can be performed for any human individual, at any time, and repeated at any stage. The person's T cells are separated from a small quantity of blood and subsequently tested in vitro, during a few days only, for their: (A) total number, and also the number of a few T cell subsets; (B) viability and overall condition; (C) beneficial functional responses to selected Neurotransmitters and Neuropeptides (single exposure). The functional responses are judged simultaneously in several tests, which measure the levels of a few well-defined T cell features and functional responses (mainly some of those we tested successfully already, listed in Figure 2C, which serve as our well-defined reliable biomarkers for T cell activation, rejuvenation, and improvement). At the end of the diagnostic phase, valuable results are obtained, and personalized decisions are made with regard to the chances that the given patient, at that specific time point of his life and health condition, would benefit from the Personalized Adoptive Neuro-Immunotherapy. The results of the diagnostic tests also teach which Neurotransmitter and/or Neuropeptide induce the best effects in the patient's own T cells. Of note, we have already performed such diagnostic tests successfully, with either resting/naïve normal human T cells of healthy subjects, or scarce and abnormal T cells of a small number of cancer patients. The therapeutic regimen will be administered only to patients whose T cells were found to be responsive in the pre-clinical diagnostic tests, and who are therefore viewed as people that can benefit from this treatment at this specific time point of their life. The therapeutic procedure will take place weekly, for several months, and the entire therapeutic package can be repeated whenever, and for as long as, needed. During this anticipated treatment, the candidates for the therapeutic procedure will first undergo leukapheresis, and then their T cells will be separated and frozen in aliquots. By doing so, a unique and very valuable personalized T cell bank is created, which can be used both for the Personalized Adoptive Neuro-Immunotherapy and for various other diagnostic or therapeutic purposes. Then, once or twice a week, portions of the patient's own T cells are thawed, activated, "rejuvenated", and improved ex vivo by single exposure to selected natural Neurotransmitters and Neuropeptides (those found to be the best in the diagnostic phase), and inoculated soon afterwards into the patient's body. It is envisioned that the people receiving the Personalized Adoptive Neuro-Immunotherapy will not be hospitalized, and will not need any additional treatment before, during, or afterwards. The person's own T cells, rejuvenated and activated ex vivo by the Neurotransmitters and Neuropeptides, are expected to have substantially improved abilities to reach and eradicate cancer and infectious organisms in his body, as well as to perform all their other essential T cell tasks. By inducing all these effects, it is envisioned and hoped that the Personalized Adoptive Neuro-Immunotherapy will improve significantly the patient's condition in various ways and at various levels, and even save the patient's life. Yet, words of caution and modesty are absolutely required here, since the therapeutic protocol has not yet been tested in clinical trials, and is of course not an approved therapy yet. The Personalized Adoptive Neuro-Immunotherapy was invented and patented by the author of this paper: Dr. Mia Levite, Israel. Currently, while the technology is IP protected, attempts are being made to bring it closer to the patient's bedside. A final important note: I think that repeated injection of Neurotransmitters and Neuropeptides into the patient's body cannot replace the cellular/adoptive therapy, primarily since injected Neurotransmitters and Neuropeptides would most probably induce various detrimental side effects. Thus, I fear that if a person were to get repeated continuous/prolonged infusion, over a few weeks or months, of any of the Neurotransmitters and Neuropeptides we used so far to rejuvenate, improve and activate T cells, these would bind their respective receptors in the various cells that express them, and subsequently induce various side effects within multiple organs and tissues. Such side effects would of course not happen if only T cells are exposed to Neurotransmitters and Neuropeptides, and if this exposure is only ex vivo, as in the suggested Personalized Adoptive Neuro-Immunotherapy.
Moreover, T cell-derived Neurotransmitters can affect neural cells, as shown for example for Acetylcholine-synthesizing T cells that relay neural signals in a vagus nerve circuit (73) and induce other potent effects on other cells (62, 63). Two other examples are: 1. T cell-derived glutamate that endows astrocytes with a neuroprotective phenotype (71), and 2. Glutamate produced and secreted to the extracellular milieu by stimulated Th17 cells, using a vesicular release pathway mediated by β1 integrins and Kv1.3 channel signaling, and involving the SNARE protein machinery complex (72).
Furthermore, immune-derived Neurotransmitters have strong autocrine and paracrine effects on T cells, as demonstrated for example for catecholamines (70,74,75).
Taken together, I hypothesize that Neurotransmitters and Neuropeptides can convey rapid, potent and precise functional "messages" in at least four different routes: 1. Neuro -> Neuro route: Within the nervous system; 2. Neuro -> Immune route: From the nervous system to the immune system; 3. Immune -> Neuro route: From the immune system to the nervous system; 4. Immune -> Immune route: Within the immune system.
Neurotransmitters and Neuropeptides should therefore be considered as "NeuroImmunotransmitters" and "NeuroImmunopeptides." To these four routes, a few additional ones can be added, for example the microbiota-host route (52, 53).
Interestingly, some of the disease conditions described in the previous parts of this paper are associated with changes in the levels and/or function of certain Neurotransmitters and/or Neuropeptides, or of their receptors. Yet, to the best of my knowledge, in many cases the direct consequences of these changes on immune cells have not been studied systematically so far. One exception is the consequences of alterations in the dopaminergic system. Curious readers may read several publications on this topic written by different authors, including my book chapter entitled Dopamine in the Immune System: Dopamine Receptors in Immune Cells, Potent Effects, Endogenous Production and Involvement in Immune and Neuropsychiatric Diseases (37), and my later review on Dopamine and T cells (38). The readers are also referred to the other chapters in the book entitled Nerve-Driven Immunity: Neurotransmitters and Neuropeptides in the Immune System, each chapter dedicated to a different Neurotransmitter or Neuropeptide (32).
The T cells which we tested so far, and in which we found such effects, include: A. Normal resting/naive (i.e. as is) CD3+ T cells of healthy subjects, purified from their blood, which are in normal number and condition (36, 41-44, 56); B. CD3+ T cells of a few patients with Head and Neck cancer (Head and Neck Squamous Cell Carcinoma, HNSCC) (45) and of a few liver cancer (Hepatocellular Carcinoma, HCC) patients (submitted paper), purified from their blood, which are abnormally low in number and in a small and granular condition; C. Mouse resting/naive Th0, Th1, and Th2 antigen-specific cell lines and clones (29).
Together, by inducing all these direct and beneficial effects (i.e. not by a single effect/mechanism/pathway), evident at various time points after their binding to their receptors in T cells, the above-mentioned Neurotransmitters and Neuropeptides can, on their own, dramatically improve many essential T cell functions.
Having said that, a word of caution is noteworthy: inter-individual variability, defined as intrinsic differences between people, evident with regard to responses or sensitivity to almost any factor, drug or procedure, is also seen with regard to the effects of Neurotransmitters and Neuropeptides on T cells. I envision that such variability may be due to inter-individual differences in the expression level and functional status of the receptors for Neurotransmitters and Neuropeptides expressed in people's T cells, due to their present physiological, physical, psychological or pathological condition, or even due to their nutrition or the medication they take.
Footnotes
*Our solid yet still unpublished data; research ongoing and new papers either already submitted or in preparation.
**CD3zeta is a TCR-associated chain essential for eradication of cancer, but downregulated pathologically in many patients with multiple cancers. Therefore, elevating CD3zeta in T cells of cancer patients is a therapeutic goal.
***CD147 is an extracellular matrix metalloproteinase inducer, needed for cell penetration and extravasation into solid organs. Therefore, elevating CD147 in T cells can augment T cell extravasation.
****PD-1 is a physiological and beneficial checkpoint protein, a type of "off switch" protein, that delivers important negative immunoregulatory signals to T cells, instructing them to remain silent when T cell activation is not needed, and to avoid autoimmunity. However, cancer cells use the T cell's PD-1 for their own harmful purposes. Indeed, cancer cells express the PD-1 ligand (PD-L1), which binds to the T cell's PD-1 and by doing so keeps T cells in an inactive/suppressed mode, preventing T cell reactivity against the cancer cells themselves. Monoclonal antibodies that target either PD-1 or PD-L1 can block this binding and boost the immune response against cancer cells. These drugs have shown a great deal of promise and success in treating certain cancers.
Our Most Recent Findings: Neurotransmitters and Neuropeptides Decrease PD-1 in T Cells of Both Healthy Subjects and Elderly Liver Cancer Patients, and Induce Their Rejuvenation, Proliferation and Killing of Liver Cancer Cells
In our most recent study (submitted paper) we studied the direct effects of Dopamine, Glutamate, GnRH-II, CGRP or Neuropeptide Y (Figure 2B) on CD3+ peripheral T cells of a few elderly people, aged 79-86 years, suffering from HCC and a kaleidoscope of comorbid conditions. We also tested the effects of these Neurotransmitters and Neuropeptides on CD3+ naïve/resting T cells of additional healthy subjects. In all cases, the CD3+ T cells were purified from small blood samples.
We found all the following significant findings: 1. The HCC patients had 5-10 fold fewer T cells than healthy subjects; 2. The patients' T cells were abnormal, i.e. very small and granular; 3. The human T cells express all dopamine receptors (DR1-5), and the glutamate receptors AMPA-R and NMDA-R; 4. Dopamine, Glutamate, GnRH-II, Neuropeptide Y, and CGRP (each at a low concentration of 10−8 M) induced the following effects: A. Significantly decreased both the percentage of PD-1+ T cells and the level of PD-1 expression per cell (up to 60% decrease, within 1 hr only); B. Increased, up to seven fold, the number of live T cells that proliferated in vitro in response to human HCC cells (either the HepG2 or Huh7 cell line); C. Significantly increased killing of human HCC cells in vitro by the T cells (up to a 2-fold increase).
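For readers who want the arithmetic behind these summary figures, here is a minimal Python sketch of how a percent decrease and a fold increase are computed; the input numbers are invented for illustration and are not the study's raw data.

```python
# Effect-size arithmetic behind the reported findings; example values are
# made up and do not come from the study's dataset.

def percent_change(before: float, after: float) -> float:
    """Signed percent change from 'before' to 'after'."""
    return 100.0 * (after - before) / before

def fold_change(control: float, treated: float) -> float:
    """Ratio of treated to control, i.e. the fold increase."""
    return treated / control

# A drop in PD-1+ T cells from 40% to 16% of cells is a 60% decrease:
print(percent_change(40.0, 16.0))  # -60.0
# 7000 proliferating T cells vs 1000 in the control is a 7-fold increase:
print(fold_change(1000, 7000))     # 7.0
```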
Moreover, we found that a few unexpected combinations of Neurotransmitters and Neuropeptides induced even stronger effects than the single Neurotransmitters/Neuropeptides, and that Dopamine D1-5R agonists, of which the D4R agonist was the best, also decreased PD-1 and increased T cell numbers.
Together, these findings demonstrate that Dopamine, Glutamate, GnRH-II, Neuropeptide Y and CGRP, each on its own or in combinations, can activate, rejuvenate, and improve T cells, even when these are the scarce and suboptimal T cells of elderly people with cancer and several comorbid diseases, and that they do so at a low physiological concentration, after a single exposure, and in a direct manner, via their own receptors in T cells, by inducing multiple beneficial effects. Yet, once again, we found significant inter-individual variability with regard to the effects of these Neurotransmitters and Neuropeptides on human T cells.
I propose that the "Personalized Adoptive Neuro-Immunotherapy" has the ability to rejuvenate, activate and improve T cell number, condition, migration and function. By doing so, this novel type of adoptive cellular therapy may hopefully increase survival, improve quality of life, and prevent multiple harsh multi-level implications (Figure 1B). I hypothesize that all these could be gained due to the ability of selected natural Neurotransmitters and Neuropeptides to decrease PD-1 in human T cells, and to increase all/most of the following beneficial T cell features and functions: T cell adhesion to extracellular matrix glycoproteins, chemotactic migration, homing, extravasation into solid organs, gene expression, cytokine secretion, expression of key proteins (e.g. CD3zeta, the CD147 metalloproteinase inducer, the laminin receptor and others), proliferative response to cancer, killing of cancer cells, the ability to recruit other immune cells to the site of disease or injury, and most probably additional advantageous effects not yet revealed.
The Personalized Adoptive Neuro-immunotherapy is composed of two stages and respective protocols (Figure 2D): 1. Personalized diagnostic protocol: in vitro evaluation of the overall condition of the candidate patient's T cells, and the performance of several parallel quantitative in vitro tests, to measure the functional responsiveness of the patient's T cells to several Neurotransmitters and/or Neuropeptides. These diagnostic tests aim to reveal whether the patient's T cells, at that specific time point and present health condition, respond favorably to the Neurotransmitters and Neuropeptides, and if indeed so, to select the best Neurotransmitter and/or Neuropeptide for activating and improving these T cells. This protocol, using a relatively small number of the patient's own T cells soon after their purification from his blood sample, can be applied to anyone, at any time. We have already performed such personalized diagnostic tests successfully on T cells of many healthy people, a few HNSCC patients and a few HCC patients, and obtained very good and encouraging results [(38, 40, 54-64), and new submitted paper]; 2. Personalized therapeutic protocol: the actual Neuro-immunotherapy, applied only to patients whose T cells passed the first pre-clinical diagnostic tests successfully.
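To make the decision step at the end of the diagnostic protocol concrete, the sketch below shows one way the "best responder" selection could be coded; the candidate list, the composite response scores, and the responsiveness cutoff are all hypothetical placeholders, not values taken from the protocol itself.

```python
# Hypothetical sketch of the diagnostic decision step: pick the candidate
# neurotransmitter/neuropeptide that best activates the patient's T cells
# ex vivo. Scores and the cutoff below are invented placeholders.

responses = {  # composite ex vivo activation score per candidate (0-1 scale)
    "Dopamine": 0.72,
    "Glutamate": 0.41,
    "GnRH-II": 0.65,
    "CGRP": 0.18,
    "Neuropeptide Y": 0.55,
}

CUTOFF = 0.5  # assumed minimal responsiveness for proceeding to therapy

eligible = {name: score for name, score in responses.items() if score >= CUTOFF}
if eligible:
    best = max(eligible, key=eligible.get)
    print(f"Proceed to therapy; best candidate: {best} ({eligible[best]:.2f})")
else:
    print("T cells unresponsive at this time point; therapy not indicated.")
```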
The Personalized Neuro-immunotherapy is designed to meet all the 20 criteria specified in a preceding section, and to be safe, effective, painless, patient-friendly, and, importantly, without requiring hospitalization. However, it should be emphasized that this therapeutic protocol has not yet been tested in clinical trials, and is of course not yet approved for clinical use.
In principle, the "Personalized Adoptive Neuro-immunotherapy" may be applied to all people with scarce, exhausted, and suboptimal T cells ( Figure 1A-right).
Having said that, I'm fully aware of the realistic possibility that this Neuro-Immunotherapy will not be applicable to, or sufficiently effective and beneficial for, all people in all conditions. Only clinical trials can teach us which human beings benefit from it the most.
Currently, the "Personalized Adoptive Neuro-immunotherapy" is IP protected, and various actions are being taken to further test its safety and efficacy, and to bring it closer to the patient's bedside.
CONCLUDING REFLECTIONS: NEUROTRANSMITTERS AND NEUROPEPTIDES TALK DIRECTLY AND BENEFICIALLY WITH T CELLS, AND ALL SIDES CAN GAIN
My opinion and vision are that the direct communication between the nervous system and T cells and other immune cells, via Neurotransmitters and Neuropeptides (on top of other signaling molecules), is essential and beneficial for all involved sides, and can be translated into therapeutic terms (Figure 2D).
Healthy ongoing bi-directional communication between the nervous system and the immune system (76), and T cells being activated and improved by Neurotransmitters and Neuropeptides, are expected to contribute substantially to a wide range of essential health-guarding T cell activities in peripheral organs, to better function and protection of the brain (2,3,76), and even to better resolution of comorbid persistent stress, depression, and pain (24).
In fact, I envision that all sides and parties can gain and benefit from direct activation of T cells by Neurotransmitters and Neuropeptides in multiple aspects and levels, as specified in the coming sentences.
What Could the T Cells and the Entire Immune System Gain?
What Could the Nervous System Gain?
Whole-body control, orchestration, coordination and adaptation, enabled by the direct real-time information conveyed to, and received from, T cells and other immune cells, in either health or disease.
What Could Medicine Gain?
I humbly propose that we may have the ability to protect, improve, and even save many people's lives by using, and actually mimicking, the natural "language" by which the brain "talks" to T cells, for a new and potentially very safe and effective mode of therapy: the "Personalized Adoptive Neuro-Immunotherapy" (Figure 2D). This Neuro-Immunotherapy may be beneficial for a wide spectrum of very different pathological conditions in which T cells are scarce, exhausted, impaired and dysfunctional.
Who Could Gain?
If the "Personalized Adoptive Neuro-Immunotherapy" turns out to be indeed safe, effective and patient-friendly, an enormous number of people whose T cells are malfunctioning, as well as their relatives, health caregivers, health services, hospitals, insurance companies, and entire societies, could benefit from it (Figure 1B).
Moreover, I foresee that repeated periodic strengthening of the T cells of people at risk, especially in older age, can become an almost routine, generic, and broad-spectrum method of immune protection, not limited each year to only a few selected microbial organisms. The current Covid-19 pandemic teaches all of us a devastating and warning lesson: relying only on vaccinations against already identified viruses and other infectious organisms is not sufficient, and can lead to disastrous worldwide implications.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article. Further inquiries can be directed to the corresponding author.
AUTHOR CONTRIBUTIONS
The author confirms being the sole contributor of this work and has approved it for publication.
ACKNOWLEDGMENTS
The author is very grateful to Prof. Robert Dantzer, Director of the Dept. of Symptom Research, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA, and Editor-in-Chief of Psychoneuroendocrinology, for his critical reading of the manuscript, knowledgeable and excellent comments and advice, and stimulating discussions.
The author is also very thankful to Prof. Eithan Galun, Director of Gene Therapy Institute, Hadassah University Medical Center, Jerusalem, Israel, for ongoing fruitful collaboration, encouragement, enlightening conversations and smart ideas. | 2021-03-22T13:25:58.884Z | 2021-03-22T00:00:00.000 | {
"year": 2021,
"sha1": "69ce241e3c7110f4dceb049c4d790beb3c9fdf5b",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2021.617658/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "69ce241e3c7110f4dceb049c4d790beb3c9fdf5b",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
203051549 | pes2o/s2orc | v3-fos-license | Curiosity as Brazilian tourist motivation in visiting Europe
Although the theme of push and pull motivations has received increasing attention in the tourist behavior literature, little attention has been devoted to the investigation of the interaction between single push motivations and visitor loyalty and other relevant variables influencing tourist behavior. Given its undoubtable relevance in motivating human behavior, we propose curiosity as a single push motive by examining its causal relationships with destination attributes (evaluated in a holistic way), attitude toward destination, and loyalty. In particular, we tested a new research model on a sample of 273 potential Brazilian travelers to Europe by using a structural equation modeling approach. The sample size is in line with the state-of-the-art in the literature (Ciasullo et al., 2017). The data fitted the "curiosity model" moderately well, and the findings highlighted that curiosity plays a crucial role in shaping attitude and pull motivation, and in influencing tourist loyalty. Consequently, destination managers or European Union institutions should magnify the role of curiosity, attitude toward destination, and pull motivations in terms of marketing policies.
INTRODUCTION
The World Travel & Tourism Council's (WTTC) annual report indicates that the growth of the travel and tourism sector in 2015 (2.8%) overtook that of the global economy (2.3%) for the fifth successive year, generating 9.8% of global GDP and supporting 284 million jobs. Similarly, despite the many challenges faced by travel and tourism in Europe starting from the end of 2015 (e.g., terrorist attacks, the economic crisis, Brexit, etc.), the sector is still expected to grow by 3.1%, confirming tourism as one of the service industries remarkably resilient in times of economic recession (WTTC, 2016).
Among foreign destinations, Europe is the continent with the highest tourism demand (Sheth, 2011), especially from emerging countries and, particularly, South America. Departures from South America to Europe, in fact, amount to 26% of the total, compared with 23% to North America, 32% to other South American nations, and 19% to the rest of the world. Among South Americans, Brazilians are the most likely to visit Europe (Euromonitor International, 2012).
In the current hyper-competition among tourist destinations, a thorough analysis of tourist motivation and its relationships with loyalty and attitude toward destination is crucial for developing adequate policies able to sustain tourism flows within a destination. In particular, research on tourist decision-making that examines the behavior of emerging-market travelers visiting Europe (such as Brazilians) could represent an interesting marketing challenge, since it can contribute to increasing loyalty, intercepting new tourism segments, and designing adequate tourism policies in line with a sustainable vision.
Scholars (Crompton, 1979; Dann, 1977; Uysal & Jurowsky, 1994) demonstrated that tourists travel because they are "pushed" to adopt a specific behavior toward the destination by their psychological factors; at the same time, they are "pulled" within the destination by its characteristics. An analysis of the literature shows that the push-pull dichotomy of motives in explaining tourist behavior has been generally accepted (Chen & Chen, 2015; Prayag & Ryan, 2011; Yiamjanya & Wongleedee, 2014; Yoon & Uysal, 2005). Thus, the adoption of the push and pull framework requires a simultaneous analysis of both visitors' internal desires and core destination attributes (Caber & Albayrak, 2016).
Although numerous studies have investigated push and pull factors holistically and globally, very few have analyzed the impact of an individual push (or pull) motivation on tourist behavior. Moreover, previous studies on tourist behavior have rarely analyzed the relationship between the level of the visitor's inner cognitive stimulation (i.e., cultural knowledge gap) and consumer behavior (Botti et al., 2015a; 2015b). Among the numerous push factors capable of explaining either the creation of a specific feeling (i.e., attitude), the adoption of a precise conduct, or the relation with other reasons for an individual's interest in traveling (i.e., pull motivations), undoubtedly one of the possibilities may be represented by curiosity and the curiosity knowledge gap. In fact, psychologists have underlined that, when we feel curious to discover something, we also feel driven to engage with novel stimuli or to adopt a certain behavior (Kashdan et al., 2009; Dalli, 2015). More clearly, it has been well pointed out that "It is hard to deny the power of curiosity as a force for motivating human behavior" (Hardy et al., 2017, p. 230).
To address the aforementioned gaps, the goal of this study is to investigate the causal relationships among curiosity (a well-outlined push motivation), destination attributes (pull motivations), attitude toward destination (a well-known way of thinking that affects a person's behavior) and loyalty. In particular, we developed a new research model (the Curiosity model of Tourist Behavior, CTB) to examine the relationships among these constructs in the context of potential Brazilian travelers to Europe, by conducting a survey and analyzing the data using Structural Equation Modeling (SEM).
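For readers unfamiliar with SEM, the snippet below illustrates, in Python with the semopy package, how a CTB-like model could be specified and estimated; the latent-variable names, indicator columns, path structure, and input file are assumptions made for illustration, not the authors' actual specification or data.

```python
# Illustrative only: specifying and fitting a CTB-like structural equation
# model with semopy (lavaan-style syntax). Indicator columns cur1..loy3 and
# the CSV file are hypothetical placeholders for the survey items.
import pandas as pd
import semopy

CTB_DESC = """
Curiosity =~ cur1 + cur2 + cur3
PullMotivation =~ pull1 + pull2 + pull3
Attitude =~ att1 + att2 + att3
Loyalty =~ loy1 + loy2 + loy3
PullMotivation ~ Curiosity
Attitude ~ Curiosity
Loyalty ~ Attitude + PullMotivation + Curiosity
"""

data = pd.read_csv("survey_items.csv")  # one row per respondent (n = 273)
model = semopy.Model(CTB_DESC)
model.fit(data)                          # maximum-likelihood estimation
print(model.inspect())                   # path estimates and p-values
print(semopy.calc_stats(model))          # fit indices such as CFI and RMSEA
```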
The objective of this research is twofold: first, we propose a theoretical advance in tourist motivation research, from which future in-depth research streams with curiosity as their centerpiece could be explored. Second, the study helps destination marketing organizations of the European Union or of individual European countries to effectively use curiosity in their marketing campaigns. Practically, the study summarizes the impact of curiosity on the most relevant destination attributes (evaluated in a holistic way, as recently proposed by Leong et al., 2015), on attitude toward destination, and on behavioral intention.
Tourist Motivation
Motivation, understood as an altered state leading to behavior directed toward a specific goal, represents a widely debated topic in the tourism literature (Su et al., 2018;Fieger, Prayag & Bruwer, 2019) and, specifically, in marketing studies, since the 1940s (Albayrak & Caber, 2018). It consists of needs, feelings, and desires driving people to a certain behavior. Motivation is the starting point for consumers' decision process and an important construct for understanding tourist behavior (Pereira & Gosling, 2019).
In tourism research, motivation is an important area of study because it represents a fundamental construct for understanding tourist behavior, lying at the basis of the tourist's decision-making process. Furthermore, motivation is an important predictor in evaluating tourists' attitudes (Huang & Hsu, 2009; Lee, 2009a). According to Murray (1964, p. 7), a motive is "an internal factor that arouses, directs, and integrates a person's behavior". A definition of motivation in the tourism and travel context was offered by Dann (1981, p. 205): "a meaningful state of mind which adequately disposes an actor or group of actors to travel and which is subsequently interpretable by others as a valid explanation for such a decision".
Studies on tourist motivation provide various frameworks and scales for measuring motivation. In this regard, Valls et al. (2018) point out that the influence exerted by psychological factors on tourists when choosing a destination has long been studied and acknowledged in the literature. Consistently, Park et al. (2019) highlight that the analysis of tourist motivation is fundamental for the understanding, explanation, and conceptualization of travel behaviors, since travel motivation influences tourists' attitudes, perceptions, and involvement. Plog (1974, 2001) proposed an allocentric/psychocentric model explaining why different people tend to travel to different destinations: allocentric people are venturesome and self-assured, while psychocentrics share certain personality tendencies, such as territory boundedness, generalized anxieties, and a sense of powerlessness (Hsu & Huang, 2008). Iso-Ahola (1982) and Mannell and Iso-Ahola (1987) proposed a social psychological model of tourism motivation based on the escape-seeking dichotomy (Matheson et al., 2014).
Based on Maslow's (1970) hierarchy of needs, Pearce (1988) developed the Travel Career Ladder model. Its main argument is that, as people accumulate travel experience, their needs ascend to higher levels of the career ladder and they become motivated by increasingly sophisticated factors.
In studies on tourist motivation, the push and pull framework, elaborated first by Dann (1977; 1981) and then extended by Crompton (1979), perhaps represents the most widely accepted paradigm (Jang et al., 2009; Jang & Cai, 2002; Kim & Lee, 2002; Kim et al., 2003; Prayag & Hosany, 2014) for understanding tourists' needs and willingness to travel. Push factors reflect the psychological drivers of behavior (Wu & Pearce, 2014), such as the desire to escape from the everyday environment, adventure, relaxation, and prestige; they represent the reasons for and direction of behavior (Iso-Ahola, 1982) and the tourist's generic desire to travel. Pull factors include specific destination features and attributes influencing when, where, and how people travel (Mill & Morrison, 2002; Prayag & Ryan, 2011); in this regard, they stimulate the consumer to travel toward a specific destination (Yang et al., 2011).
Research in tourism has used the push-pull paradigm for three main purposes. The first is to explore the personal motivations that direct people toward specific behaviors; in this context, some studies attempt to clarify motivational differences in relation to demographics (Kim et al., 2003). The second is market segmentation (Frochot & Morrison, 2001), in which the most implemented criteria are: segmenting tourists from a specific source market, tourists to a specific destination, tourists traveling for a specific product within a destination, or any combination of the three. Finally, researchers have investigated the relationships between motivations and satisfaction (Huang et al., 2014; Yoon & Uysal, 2005). In particular, Yoon and Uysal (2005) found that tourist satisfaction, in turn connected to loyalty, is directly related to authentic experiences.
Research on tourist motivation in emerging markets is still in its infancy. While Western tourists' perceptions of Western destinations are well researched (e.g., Beerli & Martin, 2004; Chi & Qu, 2008; Prayag & Hosany, 2014), the study of travel motivations and perceptions of tourists from emerging markets toward Western destinations is fairly recent (Li & Stepchenkova, 2012; Ryan & Mo, 2002).
Curiosity as push motive
Litman and Spielberger (2003, p. 75) define curiosity "as a desire for acquiring new knowledge and new sensory experience that motivates exploratory behavior". Building on Berlyne's earlier work (1954, 1960), Voss and Keller (1983, p. 17) similarly stated that "curiosity is a motivational prerequisite for exploratory behavior". Daniel Berlyne (1954, 1960), perhaps the most authoritative scholar of exploratory behavior, in fact distinguished between two types of curiosity (perceptual and epistemic) and two types of exploratory behavior (diversive and specific). More recently, curiosity has been closely linked with knowledge: Loewenstein (1994) argued that exploratory behavior increases when a manageable knowledge gap exists, and Menon and Soman (2002) made clear that a knowledge gap indicates a difference between what people know and what they want to know. Although the theoretical foundations of curiosity highlighted here are not always invoked, numerous studies use the push and pull framework considering curiosity, knowledge gap, novelty, or need for cognition among the push motivations (e.g., Wong et al., 2013; Chen & Chen, 2015; Bansal & Eisel, 2004; Wang et al., 2016). In other words, many tourist studies discuss motivations to travel using different terms that lead back to the conceptualization of curiosity outlined so far. For instance, Kim and Lee (2002), analyzing motivations for attending festivals, enumerate five categories including "curiosity". These results are in line with the earlier work of Scott (1996), which proposed "curiosity" as one of the motives pushing people toward festivals and demonstrated that curiosity discriminates between first-time and repeat visitors. Dunn Ross and Iso-Ahola (1991), evaluating the motivation dimensions of a sightseeing tour, employ a "general knowledge" motive with items such as "To see the famous sites" or "To visit the places I especially want to see". Similarly, Cha et al. (1995), studying the motivations of Japanese overseas travelers, include "knowledge", by which they mean the need to experience a foreign destination or travel to historical places, the willingness to see as much as possible, and the desire to learn new things. Xu et al. (2013), using a qualitative method to explore the motivations of tourist players, identify six factors (namely curiosity, exploring the destination, socialization, fun and fantasy experience, challenge, and achievement), placing curiosity alone at the base of the motivational pyramid. Curiosity emerged as the most popular factor: in their study, several groups of respondents mentioned curiosity as the first motivation influencing what they would do at the destination in terms of shopping, food, and so on.
Recently, curiosity has been analyzed for its impact on sport consumer behavior and, more generally, on purchase motivation (Hill et al., 2016).
Although several studies include curiosity among the push motivations, and curiosity has been investigated individually in terms of its influence on consumer loyalty, to the best of the authors' knowledge no study has focused on the relationship between curiosity and consumer behavior in the tourism field.
Destination attributes as pull motive
Pull motivations are considered external, situational, or cognitive drivers influencing consumer behavior (Dann, 1981; Devesa et al., 2010), deriving from the perception of destination characteristics. Since this kind of construct is strongly correlated not only with the destination type (big city, historical town, sun-and-beach village, etc.) but also with the specific destination, authors tend to generate highly detailed lists of specific destination attributes (Table 1). Consequently, a general theoretical construct of pull motivation is still lacking. Without aiming to propose an exhaustive categorization of pull motivations, but wishing to avoid biases that could overshadow the focus of the study, we selected the most relevant ones. Indeed, some common macro-areas of pull motivation emerge from the traveler behavior literature.
The construct "pull motives" was measured through items drawn from different studies, with the dimensions differentiated from each other via discriminant validity. To do so, we assessed construct validity by estimating a confirmatory factor model (Wong & Cheung, 2005). Moreover, exploratory factor analysis (EFA) provided further preliminary indications for the purification of the measurement scale: factor loadings, extracted variance, and the possible factorial structure underlying the several dimensions. EFA was applied to analyze the relationships between observed variables and to identify a latent structure; the objective was to summarize a number "m" of items into "n" factors (or components), with m > n. In the process of developing a measurement scale, EFA allows a first estimate of the factor loadings and a check on whether further purification of the scale is needed (Tan, 2001). Applying EFA made it possible to assess the factorial structure of interest, purifying the measurement scales and excluding indicators with low factor loadings on the expected factor or substantial cross-loadings. In this way, a parsimonious structure could be submitted to the confirmatory test, as in the sketch below.
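As an illustration only (the original analysis used SPSS/AMOS-style tools), this purification step can be sketched with the Python factor_analyzer package; the item matrix survey_df, the four-factor choice, and the 0.5 loading cut-off are assumptions for the example, not values from the paper:

```python
# EFA-based scale purification sketch: keep items with a strong primary
# loading and no substantial cross-loading.
import pandas as pd
from factor_analyzer import FactorAnalyzer

def purify_scale(items: pd.DataFrame, n_factors: int, cutoff: float = 0.5):
    """Return (purified item DataFrame, loading matrix)."""
    fa = FactorAnalyzer(n_factors=n_factors, rotation="promax")
    fa.fit(items)
    loadings = pd.DataFrame(fa.loadings_, index=items.columns)
    keep = []
    for item, row in loadings.iterrows():
        sorted_abs = row.abs().sort_values(ascending=False)
        primary, secondary = sorted_abs.iloc[0], sorted_abs.iloc[1]
        if primary >= cutoff and secondary < cutoff:  # no cross-loading
            keep.append(item)
    return items[keep], loadings

# usage (survey_df is a hypothetical item-level response matrix):
# purified_df, loads = purify_scale(survey_df, n_factors=4)
```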
In particular, four categories of destination attractiveness can be distinguished: 1) a qualitative dimension (regarding the service offering proposed by accommodations); 2) a cultural dimension (related to the relevance of cultural heritage in choosing a destination); 3) a leisure activities dimension (namely nightlife and entertainment, shopping, and how to spend free time during the stay); and 4) an accessibility and transportation convenience dimension (Yoon & Uysal, 2005; Prayag & Ryan, 2011).
In detail, the first category refers to the set of service infrastructure offered by a destination (accommodation, food, shopping, recreation) that influences the consumer decision-making process (Ritchie & Crouch, 2003). In particular, studying the Mauritian case, Kassean and Gassita (2013) find that accommodation services are the main factors leading tourists to visit the destination.
The second dimension, on the other hand, represents the cultural, historical, and natural resources that increase destination attractiveness (Casarin & Iasevoli, 2012). Several studies stress the relevance of cultural motivation mechanisms in influencing tourists' needs, wants, and preferences. For instance, in a study on Korean national parks, Jeong (1997) reveals that visitors perceived natural, historical, and cultural resources as the most important attractions; these resources constitute a key driver of each park's distinctiveness and identity. Similarly, Kim et al. (2003) confirm that cultural and historical resources drive visitors' decisions, whereas Yoon and Uysal (2005), analyzing tourist pull motivations, focus on customers' willingness to experience a "different culture".
Regarding the third dimension (nightlife and entertainment), some fragmentation emerges. Although nightlife is one of the most common sub-dimensions adopted in defining the pull motives of this category, Yoon and Uysal (2005), determining the pull motivations attracting tourists to Northern Cyprus, adopt the unified dimension of "nightlife and local cuisine", whereas other authors refer only to entertainment. In any case, a common reference to nightlife, entertainment, and how to spend free time seems to characterize the most relevant literature.
Finally, the fourth pull motivational factor, representing a common thread among the different works on traveler motivation, is accessibility to the destination. Kim et al. (2003) find a significant correlation between "accessibility and transportation" and push motivations, particularly the "natural resources and health" dimension, showing that good destination accessibility influences visitors' willingness to experience nature. Among others, Sung et al. (2015) identify a cluster of travelers whose primary travel motivations are convenience and ease of travel.
The four categories of destination attractiveness, their abbreviations and the references used to elaborate measurement items are shown in Table 2.
Attitude toward destination and loyalty
Tourist attitude qualifies the psychological orientation expressed by the favorable or unfavorable evaluation tourists make when engaging in certain behaviors (Ajzen, 1991; Schiffman & Kanuk, 2004; Lee, 2009a; Sparks, 2007). It has been underlined that attitude toward destination is characterized by three components: cognitive, affective, and behavioral (Unger & Wandersman, 1985; Vincent & Thompson, 2002). The cognitive component refers to the way in which the attitude is formed; the affective component reflects psychological traits in terms of tourist preference; the behavioral component captures the tourist's intention.
Given its relevance, attitude toward destination could be significantly predictive of tourist loyalty. In fact, it has been maintained that tourist attitude is an effective predictor of tourist's decision to travel to a certain destination (Ragheb & Tate, 1993;Jalilvand & Samiei, 2012).
Loyalty has been conceptualized through one of three main approaches: behavioral, attitudinal, and composite (Jacoby & Chestnut, 1978). The behavioral approach is based on the analysis of the purchase process or purchase volume, using repeat visits as a measurement indicator; it has been criticized for its inability to explain the factors affecting customer loyalty (Yoon & Uysal, 2005). Many empirical studies have demonstrated that behavioral intention, rather than actual behavior, is an effective indicator of loyalty (Horng et al., 2011; Kaplanidou & Gibson, 2010). The attitudinal approach estimates tourists' intention to revisit a destination or to recommend it to other potential tourists. Studies have established a positive correlation between tourists' intention to recommend and the image components of a destination, including overall image (Bigné et al., 2001), affective image (Lee et al., 2005), and cognitive image (McDowall & Ma, 2010). The composite approach advances the integration of the behavioral and attitudinal approaches (Backman & Crompton, 1991; Iwasaki & Havitz, 1998). In this regard, tourists who demonstrate behavioral loyalty toward specific destinations tend to have a positive attitude toward those destinations (Zhang et al., 2014). Studies have found that a positive attitude toward a destination leads to a higher level of composite loyalty (Bosque & Martín, 2008; Lee, 2009a) and affects future tourist behavior (Lee, 2009b).
THE PROPOSED MODEL AND RESEARCH HYPOTHESES
The aim of this study is to investigate the causal relationships between a single push motive (namely, curiosity), pull motives, attitude toward the destination, and loyalty. The proposed model is represented in Figure 1. Pull motivations are measured as a second-order factor assessed by four specific destination attributes, which are combined to propose an integrated and holistic dimension, as suggested by Leong et al. (2015).
Source: The authors
The study hypothesized that curiosity (push motive) positively influences the holistic destination attributes (pull motive), more precisely a combination of the night life and entertainment (NL & E), accommodation quality (AQ), accessibility to the destination (AD), and relevance of cultural heritage (CH) sub-dimensions, as well as attitude toward destination and loyalty. Moreover, both curiosity and pull motivations affect loyalty, and pull motivation influences attitude. Table 3 summarizes the hypotheses and reports a selection of supporting references derived from the literature review and from the discussion in the previous paragraphs.
METHODOLOGY
Data were gathered through a pen-and-paper survey. A first group of subjects was personally contacted in some of the most relevant Brazilian educational institutions by three field researchers from September 5, 2016 to September 12, 2016.
Among the various Brazilian cities, Rio de Janeiro was chosen for the survey for several reasons. First, it is the second-most populous municipality in Brazil and the sixth-most populous in the Americas. Second, in line with the economic development of the entire nation, Rio de Janeiro's middle class has expanded considerably over the last ten years, driving an increase in gross domestic product (UNWTO Tourism Highlights, 2015). This emerging middle class, characterized by high spending power, represents the driving force of outbound tourism demand and of the economy in general, and Brazilian tourist arrivals in Europe are continuously growing (European Travel Commission, 2015).
Moreover, the research has been conducted in different universities and cultural public institutions. The involved institutions are the Cultural Institute of the Italian Consulate in Rio de Janeiro and several Brazilian Universities, such as: UNISUAM (Centro Universitário Augusto Motta); Estácio de Sá; Federal University of Rio de Janeiro (UFRJ). Finally, the survey has been administered to the students and professors of SUESC School (Sociedade Unificada de Ensino Superior e Cultura), to the students and managers of Maestro Lorenzo Fernandes School and to the employees of Brazilian satellite television "Nossa Tv".
In total, the researchers contacted 290 visitors using a convenience sampling approach; 17 questionnaires were incomplete and were therefore eliminated from the study. Thus, 273 questionnaires were accepted for the final analysis, a usable response rate of 94.1%.
Data analysis
All constructs in this study were measured with multiple items, as recommended by Churchill (1979) and Kline (2005). A preliminary list of measurement items was generated after an extensive review of the literature on the push-pull framework, including the conceptualizations of attitude and loyalty. The questionnaire was elaborated in Italian and then translated into Portuguese. A pre-test was conducted with 10 graduate students majoring in Economics at the UFRJ, and items identified as ambiguous were reformulated for greater clarity. The final list of measurement items, presented in Table 2, was adapted from previous studies. For all constructs, a seven-point Likert scale was adopted, ranging from 1 ("strongly disagree") to 7 ("strongly agree").
To empirically test the proposed research hypotheses, structural equation modeling (SEM) was employed using AMOS 22.0 and the maximum likelihood method of estimation. Structural equation modeling is commonly adopted in marketing literature in general (Lee et al., 2004; Bosque & Martin, 2008; Nowacki, 2009) and specifically in tourism studies (Chi & Qu, 2008; Wang et al., 2016; Yoon & Uysal, 2005). This technique makes it possible to statistically test multiple relationships among variables measured with multiple items; unlike multiple regression, it simultaneously estimates the relationships among multiple dependent and independent variables, including unobserved (latent) ones (Gefen et al., 2000). A sketch of such a model specification is given below.
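As an illustration only (the study itself used AMOS 22.0), the CTB model could be written in lavaan-style syntax with the open-source Python package semopy; the item labels cur1, nle1, etc. are hypothetical placeholders, since the paper's item codes are not reproduced here:

```python
# SEM specification sketch for the CTB model (measurement + structural parts).
import pandas as pd
import semopy

MODEL_DESC = """
# measurement model (first-order factors)
CUR  =~ cur1 + cur2 + cur3
NLE  =~ nle1 + nle2
AQ   =~ aq1 + aq2
AD   =~ ad1 + ad2
CH   =~ ch1 + ch2
ATT  =~ att1 + att2 + att3
LOY  =~ loy1 + loy2
# second-order pull factor (holistic destination attributes)
PULL =~ NLE + AQ + AD + CH
# structural paths (H1-H6)
PULL ~ CUR
ATT  ~ CUR + PULL
LOY  ~ CUR + PULL + ATT
"""

data = pd.read_csv("survey_items.csv")   # hypothetical item-level data file
model = semopy.Model(MODEL_DESC)
model.fit(data)                          # maximum likelihood is the default
print(semopy.calc_stats(model).T)        # chi2, CFI, RMSEA, and related indices
```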
Following the procedure recommended by Anderson and Gerbing (1988), a two-stage testing procedure was adopted: in the first stage, confirmatory factor analysis (CFA) was used to estimate the measurement model; in the second stage, the structural model was assessed and the hypothesized relationships among the variables were tested.
Regarding model fit, the chi-square, Root Mean Square Error of Approximation (RMSEA), and Standardized Root Mean Square Residual (SRMR) were computed to assess how the research model fits the data without comparison with other models. Moreover, the Comparative Fit Index (CFI) was used to assess how the research model fits the data relative to the null model, which hypothesizes that all variables are uncorrelated. Scholars suggest that when the CFI exceeds 0.9 and the RMSEA and SRMR do not exceed 0.08, adequate fit has been achieved, while when the CFI exceeds 0.95 and the RMSEA does not exceed 0.06, the model shows good fit (Hair et al., 2010; Hooper et al., 2008).
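These cut-offs can be restated as a small helper function; the function itself is our own restatement of the thresholds cited above, not part of the original analysis:

```python
# Fit-index screening per the Hair et al. (2010) / Hooper et al. (2008) cut-offs.
def assess_fit(cfi: float, rmsea: float, srmr: float) -> str:
    if cfi > 0.95 and rmsea < 0.06:
        return "good fit"
    if cfi > 0.90 and rmsea <= 0.08 and srmr <= 0.08:
        return "adequate fit"
    return "poor fit"

# the measurement model reported below: CFI = 0.954, RMSEA = 0.054, SRMR = 0.061
print(assess_fit(cfi=0.954, rmsea=0.054, srmr=0.061))  # -> "good fit"
```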
Demographic characteristics of respondents
Concerning the demographic characteristics of the respondents, as shown in Table 4, the sample is composed of 141 females (51.8%) and 131 males (48.2%) and contains slightly more married (51.2%) than single (48.8%) travelers. The largest age groups are respondents between 26 and 35 years old (33.3%) and between 19 and 25 years old (20.6%).
With reference to occupation, most respondents belong to the middle class: civil servants represent 40.9% of the sample, and the second most common category is students (22.8%), followed by businesspeople, showing that the sample has acceptable heterogeneity in terms of social class. Moreover, respondents have a high educational level: half hold a bachelor's degree (50.8%) and 38.3% have completed a postgraduate degree (see Table 4).
Measurement validity
The measurement model derived from the CFA reveals satisfactory levels for all fit indices, with χ²/df = 1.869, SRMR = 0.061, CFI = 0.954, and RMSEA = 0.054 (p-close = 0.099).
Additionally, all constructs demonstrate adequate psychometric properties: Composite Reliability coefficients are above the cut-off point of 0.7, indicating a high level of reliability for each construct (Nunnally & Bernstein, 1994). Similarly, as indicated in Table 5, all average variance extracted (AVE) values for the multi-item scales are above the minimum level of 0.5 (Hair et al., 2010), indicating an acceptable level of convergent validity for all the proposed constructs (Garbarino & Johnson, 1999). The AVE of each construct is also greater than the variance shared between that construct and the other constructs in the model, which demonstrates satisfactory discriminant validity. To confirm discriminant validity, the square roots of the AVEs were calculated. Table 6 lists the correlation matrix for all first-order constructs, with the square roots of the AVEs on the diagonal. In all cases, the square root of the AVE for each construct is larger than that construct's correlations with all the other constructs in the model, indicating satisfactory discriminant validity (Fornell & Larcker, 1981).
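For reference, the reliability and validity quantities behind Tables 5 and 6 can be computed directly from standardized loadings; the formulas are the standard ones (Fornell & Larcker, 1981), while the numeric loadings in this sketch are placeholders, not the study's estimates:

```python
# Composite reliability, AVE, and the Fornell-Larcker discriminant check.
import numpy as np

def composite_reliability(loadings: np.ndarray) -> float:
    errors = 1.0 - loadings**2            # standardized loadings assumed
    return loadings.sum()**2 / (loadings.sum()**2 + errors.sum())

def ave(loadings: np.ndarray) -> float:
    return float(np.mean(loadings**2))

def fornell_larcker_ok(ave_i: float, ave_j: float, corr_ij: float) -> bool:
    # sqrt(AVE) of each construct must exceed its inter-construct correlation
    return np.sqrt(ave_i) > abs(corr_ij) and np.sqrt(ave_j) > abs(corr_ij)

lam = np.array([0.78, 0.81, 0.74])        # placeholder loadings, one construct
print(composite_reliability(lam) > 0.7)   # reliability criterion
print(ave(lam) > 0.5)                     # convergent validity criterion
print(fornell_larcker_ok(0.61, 0.58, 0.44))
```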
Hypothesis testing and discussion
In light of the satisfactory results of the measurement model, the structural model was assessed to test the overall relationships among the constructs.
First, the structural model shows adequate levels for all fit indices, with χ²/df = 1.869, SRMR = 0.0617, CFI = 0.954, and RMSEA = 0.057 (p-close = 0.099). All indices indicate a good structural model.
Second, the estimated results of the proposed research model show that five of the six proposed hypotheses are supported (see Table 7). The result for H1, which states that an individual's curiosity positively influences their attitude toward the destination, is significant (path coefficient = 0.32; p < 0.001). Furthermore, curiosity positively influences the individual's response to the holistic destination attributes (path coefficient = 0.358; p < 0.001) and the individual's future visit intention and willingness to recommend (path coefficient = 0.392; p < 0.001). Thus, H1, H2, and H3 demonstrate that curiosity has: (1) a crucial role in amplifying tourists' loyalty and, consequently, word of mouth and revisit intention; (2) a strong impact in shaping visitors' attitude toward the destination; and (3) a robust influence on the motives by which a tourist is pulled to the destination. In terms of policy implications, these results underline that it is fundamental for tourism organizations to satisfy travelers' need for cognition and curiosity prior to departure for the selected destination. In other words, the purposive feeding of destination information, for instance through the Internet and well-designed, well-organized institutional destination websites, is a key factor in developing and maintaining high levels of curiosity.
Surprisingly, the holistic destination attributes (pull motives) are not statistically significant in predicting the individual's future visit intention and willingness to recommend (loyalty), the p-value being higher than 0.05; moreover, the standardized coefficient was negative. Although not significant, the negative relationship indicates that the holistic destination attributes are not sufficient, on a stand-alone basis, to create loyal behavior. Thus, H4 is not supported, a result in line with Yoon and Uysal (2005). In the research context of this study, such a result is likely connected to the current European situation. Besides terrorist attacks, European institutions (European Parliament, European Commission, etc.) are experiencing a moment of great difficulty because of nationalist forces in many countries (e.g., Brexit): nowadays it is very difficult to program any kind of policy in any field (transportation, cultural heritage, tourism, etc.), and a much-criticized bureaucracy prevails in the European institutions. Consequently, in the absence of a broader European tourism policy, visitors from outside the European Union do not perceive Europe as a single destination, reducing the impact of destination attributes on willingness to recommend and revisit intention.
This last observation is consistent with H5, which is statistically supported: the individual's response to the holistic destination attributes positively influences their attitude toward the destination (path coefficient = 0.159; p < 0.05). Europe as a destination undoubtedly has many features and attributes resulting from its history and from the tourism policies that the individual countries composing the European Union have arranged in the past and continue to arrange today. Thus, in the absence of a common European Union tourism policy, the attributes of Europe as a destination fail to affect loyalty and only minimally impact visitors' attitude toward Europe as a destination.
Finally, the individual's attitude toward the destination produces a significant and positive effect on their future visit intention and willingness to recommend (path coefficient = 0.324; p < 0.001), supporting H6. This finding is in line with previous tourism studies showing that attitude influences willingness to recommend (Lee et al., 2008), confirming the link between attitude and loyalty in destination management and emphasizing the critical role of tourist attitude toward destination in mediating the relationships between destination attributes and tourist loyalty and between curiosity and tourist loyalty. Thus, it remains critical for European Union institutions, and for the destination management organizations of each European country, to promote a positive attitude among potential travelers' referent groups, particularly travel agents in Brazil. In other words, organizations and institutions should emphasize the uniqueness of Europe in terms of history and cultural and natural heritage, and as a melting pot of peoples with their own traditions (Bertoli & Resciniti, 2013). These social and cultural environments can be considered unique and appealing to Brazilian travelers, who may be pulled to the destination to satisfy their curiosity about Roman and Greek civilization, German organization, their own Portuguese origins, and so on.
The analysis of squared multiple correlation (SMC) values offers additional information consistent with the previous assessment. The proposed model explains a substantial amount of the variance of loyalty (SMC = 0.313) and a more modest amount of the variance of attitude toward destination (SMC = 0.164). Finally, curiosity, the single push motive, explains only 12.8% (SMC = 0.128) of the variance in the destination pull motives. These results are consistent with the multidimensional nature of motivation (Reiss, 2012), confirming that willingness to recommend, revisit intention, and attitude toward destination are the consequence of different motives, among which curiosity plays a crucial role.
The measurement of destination attributes as a holistic phenomenon, pursued by means of the second-order factor, deserves discussion here. As the SMC values show, accessibility to the destination (AD) is the main component (SMC = 0.860) of the overall pull motive, followed by accommodation quality (AQ) (SMC = 0.568). This finding suggests that even though Europe is famous for its culture, its cultural and natural heritage, its cities, and the possibility of walking through historical city centers, the ability of European institutions to facilitate direct and convenient flights, and more generally the ease of reaching and traveling within Europe, is crucial, given the dominant role of these two pull forces. Europe's pull motives probably act mainly as mediators between curiosity and attitude toward destination. In this direction, policy makers should address strategic initiatives for valuing cultural heritage, for instance by stimulating visits through bundled services in an ecosystem perspective of value co-creation (Barile & Polese, 2010; Barile et al., 2012; Barile et al., 2014; Pencarelli & Splendiani, 2008; Tommasetti et al., 2017; Wieland et al., 2012).
CONCLUSION, LIMITS AND FUTURE RESEARCH
The purpose of this study was to investigate the causal relationships between curiosity, destination attributes, attitude toward destination, and loyalty in the context of potential Brazilian visitors willing to travel to Europe. The originality of the work lies in the mediation perspective of the model, which, as argued above, is still unexplored in the literature.
In this regard, we found that curiosity represents the starting point of the potential Brazilian tourist's decision-making, directly and strongly influencing visitor attitude toward the destination, the individual's response to the holistic destination attributes, and the willingness to recommend or revisit Europe (see Figure 2). From a theoretical point of view, the identification of a specific push motivation that plays a role in modeling pull motivations, shaping attitude, and influencing loyalty represents a general advance in tourist motivation research, improving the existing understanding of the key role that psychological factors and destination attributes play in shaping tourist assessment and decision-making. Additionally, by examining the destination attributes measurement, the study provides a better understanding of the pull motives specifically attracting Brazilian tourists to Europe. In terms of policy and managerial implications, the study shows how visitors' curiosity to travel to Europe can be exploited, considering its direct effects on loyalty, pull motives, and attitude. In order to generate loyalty, it will be crucial for Europe to develop a common tourism policy, promote direct flights to Europe, make it easy to travel across Europe, and propose an adequate quality standard of accommodation across all the countries of the European Union. Moreover, given its strong influence, curiosity should be stimulated through promotional campaigns implementing a purposive feeding of destination information from an ecosystem perspective of value co-creation.
Source: The authors
The study examined Brazilian tourists only. This limitation offers opportunities for future research in other emerging countries (e.g., other Latin American or Asian countries). Furthermore, the study should be verified in relation to other tourist destinations: what other destinations might involve curiosity as a push motive, and how would they compare with Europe?
The model was aimed at starting a debate on curiosity as a push motivation to travel in emerging tourist markets; in order to reduce the potential complexity of the model, we intentionally refrained from exploring curiosity in all its components (e.g., perceptual, epistemic, diversive). Future research might introduce these constructs for a more refined version of the model developed here. | 2019-09-17T02:40:20.125Z | 2019-09-01T00:00:00.000 | {
"year": 2019,
"sha1": "fdfa00515402870a6c541c3bc0a53b0174d361a8",
"oa_license": "CCBY",
"oa_url": "https://rbtur.org.br/rbtur/article/download/1596/1328",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "0388615f3d9df797a0bfbf1fd529ae717661765a",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Sociology"
]
} |
53031916 | pes2o/s2orc | v3-fos-license | Perioperative genitourinary infection associated with sodium-glucose co-transporter 2 inhibitor use
ABSTRACT Context: Sodium-glucose co-transporter 2 (SGLT-2) inhibitors are a novel treatment approved for type 2 diabetes mellitus that lower hyperglycemia and systolic blood pressure and promote weight loss. Commonly reported serious adverse events include increased mycotic urogenital infections, orthostatic hypotension, and normoglycemic ketoacidosis. Case report: We present the case of a 47-year-old man with a history of type 2 diabetes mellitus who was started on the SGLT-2 inhibitor canagliflozin preoperatively before a penile implant and who presented with late postoperative MRSA bacteremia and a scrotal abscess requiring implant extraction. Conclusion: As SGLT-2 inhibitors gain in popularity, prescribers must be aware of the potential adverse genitourinary infectious outcomes. Providers should use caution and avoid initiating SGLT-2 inhibitors in the perioperative setting, and may even consider holding or discontinuing this medication in the setting of impending GU surgery.
Introduction
Sodium-glucose co-transporter 2 (SGLT-2) inhibitors are a relatively new class of oral hypoglycemic medications used for type 2 diabetes mellitus that act via an insulin-independent mechanism [1]. On 29 March 2013, canagliflozin became the first FDA-approved SGLT-2 inhibitor for type 2 diabetes mellitus [2]. Since its approval, it has been shown to lower hemoglobin A1c in patients who have not achieved adequate control with diet and exercise [3]. In addition, SGLT-2 inhibitors have been shown to lower blood pressure and promote weight loss. However, an increased risk of genital mycotic infections, orthostatic hypotension, and normoglycemic ketoacidosis has been reported [4]. Unfortunately, the specific incidence of urinary tract infections has not been well established in the setting of genitourinary (GU) surgery.
Case presentation
A 47-year-old man with a history of type 2 diabetes mellitus presented with acute-onset progressive scrotal swelling, pain, and fever three weeks after penile implant surgery. Medications on admission included canagliflozin-metformin, added immediately before surgery to improve his perioperative glucose control. On physical examination, the patient was febrile, with scrotal swelling and tenderness to palpation. Laboratory evaluation was unremarkable, including a WBC count of 7,500 cells/mm3 and a serum lactate of 1.0 mEq/L. Urinalysis was positive for glucose and trace ketones but negative for bacteria and white cells. A CT of the pelvis identified scrotal fluid consistent with an abscess, and he was brought to the operating room for penile prosthesis explantation. Wound cultures grew MRSA and gram-negative rods, and blood cultures demonstrated MRSA bacteremia on the second day of hospitalization. The patient was eventually discharged on IV vancomycin and amoxicillin-clavulanic acid for 14 days, and his SGLT-2 inhibitor, canagliflozin, was discontinued. The incident was reported to the FDA's postmarketing surveillance system.
Discussion
SGLT-2 inhibitors are a novel treatment option for type 2 diabetes mellitus that lower hyperglycemia and systolic blood pressure and promote weight loss [5], with reported adverse events including infections of the GU tract attributable to the medication's mechanism of action [6]. Approximately 180 g of glucose is filtered through the renal system daily, with almost 90% reabsorbed in the proximal tubule by the glucose transporter protein SGLT-2 [7]. Canagliflozin is a selective SGLT-2 inhibitor that increases the urinary excretion of glucose but is associated with an increased rate of GU infections [8]. The incidence of GU infections on an SGLT-2 inhibitor in the setting of GU surgery has not been described, but caution should be taken with perioperative use of this class in light of this risk.
This case highlights the potential risks of hyperglucosuria in the setting of genitourinary surgery. Although this is a single case report, numerous other reports of genitourinary infections in non-surgical patients suggest that the association may be causal.
Providers should use caution and avoid initiating SGLT-2 inhibitors in the perioperative setting, and may even consider holding or discontinuing this medication in the setting of impending GU surgery. If perioperative glucose control is needed in patients undergoing GU surgery, other hypoglycemic agents should be considered. Further studies are needed to determine if providers should avoid SGLT-2 inhibitors in the perioperative period for GU surgeries, and if so, for how long. | 2018-11-10T06:29:28.198Z | 2018-09-03T00:00:00.000 | {
"year": 2018,
"sha1": "8e0c0f1b380c411c537d9afd429cf7f94a5d14e1",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/20009666.2018.1527667?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8e0c0f1b380c411c537d9afd429cf7f94a5d14e1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
211836988 | pes2o/s2orc | v3-fos-license | Isolation and Screening of Phosphate Solubilizing Bacteria from Paddy Rhizosphere Soil
Rice (Oryza sativa L.) is one of the most important cereal crops in the world and belongs to the family Poaceae. Rice is an important food grain and the staple food for billions of people in many countries, particularly in East Asia. It is a major part of a balanced diet and a rich source of energy and carbohydrates. It is an extensively grown food crop in 114 countries across the world, occupying about 163.20 million hectares of farmland, with an annual production of over 758.9 million tonnes and a productivity of 4448 kg ha-1.
Introduction
Rice (Oryza sativa L.) is one of the most important cereal crops in the world and belongs to the family Poaceae. Rice is an important food grain and the staple food for billions of people in many countries, particularly in East Asia. It is a major part of a balanced diet and a rich source of energy and carbohydrates. It is an extensively grown food crop in 114 countries across the world, occupying about 163.20 million hectares of farmland, with an annual production of over 758.9 million tonnes and a productivity of 4448 kg ha-1.
Phosphorus is one of the essential macro-minerals for the growth and development of plants (Schachtman et al., 1998). It is a major component of ATP, the molecule that provides energy to the plant for processes such as photosynthesis, protein synthesis, nutrient translocation, nutrient uptake, and respiration. In addition, P has been observed to increase root growth and to influence early maturity, straw strength, crop quality, and disease resistance (Deepak Kumar, 2011). The available phosphorus in soils of India is generally poor, and the efficiency of P fertilizer is only around 10-25% throughout the world (Isheword, 1998).
The main aim of this study was to isolate phosphate solubilizing bacteria (PSB) and screen them for solubilization efficiency on Pikovskaya's agar and in Pikovskaya's broth at different incubation periods. In total, 40 PSB were isolated from paddy rhizosphere soils of the Raichur and Koppal districts. The PSB isolates were studied for zone of solubilization, solubilization efficiency, solubilization index, pH change, titrable acidity, and phosphatase activity. The solubilization zone, efficiency, and index were highest on the 6th day of incubation. Ten isolates were found to be good phosphate solubilizers; among them, isolate PPSB-21 showed the highest zone of solubilization (18.4 mm), phosphate solubilization efficiency (253.84%), and solubilization index (3.53) on Pikovskaya's medium. These efficient isolates can be used as biofertilizers.
In India, the majority of phosphorus is provided in the form of chemical fertilizer. Inorganic P occurs in soil mostly in insoluble mineral complexes, and these insoluble, precipitated forms cannot be absorbed by plants (Rengel and Marschner, 2005). A large amount of the phosphorus applied as fertilizer enters the immobile pool through chelation with the highly reactive Al3+ and Fe3+ present in acidic soils (Gyaneshwar et al., 2002). In nature, a wide range of microbial biosolubilization mechanisms exists, and these are necessary to maintain the global P cycle (Whitelaw, 2000).
Phosphate solubilizing bacteria, a group of beneficial bacteria, play a key role in soil phosphate solubilization (Abd-Alla, 1994), thereby increasing the bioavailability of soil P for plants (Zhu et al., 2011). Phosphate solubilization occurs through the production and release of organic acids by microorganisms; this release also decreases soil pH (Rodriguez et al., 2006). Organic acids solubilize insoluble P either by decreasing the pH or by complexing the cation bound to the P (Vassilev et al., 2006). Organic acids such as succinic acid, malic acid, propionic acid, and oxalic acid are released by phosphorus solubilizing bacteria (Panhwar et al., 2012); through their hydroxyl and carboxyl groups they chelate the cations bound to phosphate, which is thereby converted to soluble forms. A large number of bacteria, including species of Pseudomonas, Azospirillum, Azotobacter, Klebsiella, Enterobacter, Alcaligenes, Arthrobacter, Burkholderia, Bacillus, Rhizobium, and Serratia, have been reported to enhance plant growth through their different plant growth promoting activities, including phosphate solubilization (Kumar et al., 2012).
Collection of soil sample
A total of forty soil samples from the rhizosphere of paddy were collected from different locations in the Raichur and Koppal districts of Karnataka by adopting the standard soil sampling methods described by Jackson (1973). Soil samples were collected in sterilized polythene bags; the bags were properly tied and labeled, and utmost care was taken to avoid contamination. Soil samples were stored in a refrigerator at 4°C for the isolation of phosphate solubilizing bacterial isolates.
Analysis of soil chemical properties
Air-dried samples were analysed for pH, EC, OC, N, P, K, and Zn content by following standard procedures.
Isolation of PSB from paddy rhizosphere soil
One gram of soil from each sample was suspended in a 9 ml blank of sterile distilled water and serially diluted up to 10-6; homogenization of the soil was carried out by keeping the suspension on a shaker. The dilutions were plated on Pikovskaya's agar medium. Pikovskaya's medium (Pikovskaya, 1948) contains (grams/liter): glucose, 10.0 g; Ca3(PO4)2, 5.0 g; (NH4)2SO4, 0.5 g; NaCl, 0.2 g; MgSO4.7H2O, 0.1 g; pH 7.3 ± 0.2. The plates were kept at 28 ± 2°C for 7 days in order to isolate the PSB. Bacterial colonies exhibiting a clearance zone around them were selected, purified, sub-cultured, and stored on slants of Pikovskaya's agar for further studies.
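For completeness, the culturable PSB density implied by such a dilution-plating series can be back-calculated with the generic CFU formula; the colony count and plated volume below are illustrative assumptions, not data from this study:

```python
# CFU/g of soil = colonies counted / (dilution factor x volume plated in ml)
def cfu_per_gram(colonies: int, dilution: float, plated_ml: float = 0.1) -> float:
    return colonies / (dilution * plated_ml)

# e.g. 42 colonies on a 10^-6 plate spread with 0.1 ml -> 4.2e8 CFU/g
print(cfu_per_gram(colonies=42, dilution=1e-6, plated_ml=0.1))
```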
Screening of PSB
Each isolate was screened for its ability to solubilize the tricalcium phosphate (TCP) present in Pikovskaya's medium. A loopful of bacterial culture was placed on the centre of the agar plate and incubated at 28 ± 2°C for 6 days. The solubilization zone, efficiency, and index were calculated at different incubation periods; the phosphate solubilization efficiency was calculated following Kannapiran and Ramkumar (2011).
The solubilization index on the solid medium was calculated as the ratio of the total (colony + halo zone) diameter to the colony diameter (Edi-Premono et al., 1996), as in the sketch below.
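Assuming the usual definitions behind these two plate indices (SE as halo width relative to colony diameter, and SI as the Edi-Premono ratio cited above, so that SE = (SI - 1) x 100), they can be computed as below; treating the reported 18.5 mm as the total diameter and back-calculating a hypothetical colony diameter reproduces the reported pair SE = 253.84% and SI = 3.53 for PPSB-21:

```python
def solubilization_efficiency(total_d_mm: float, colony_d_mm: float) -> float:
    """SE (%) = (total diameter - colony diameter) / colony diameter * 100."""
    return (total_d_mm - colony_d_mm) / colony_d_mm * 100.0

def solubilization_index(total_d_mm: float, colony_d_mm: float) -> float:
    """SI = total (colony + halo) diameter / colony diameter."""
    return total_d_mm / colony_d_mm

colony, total = 5.24, 18.5   # mm; the colony diameter here is hypothetical
print(round(solubilization_efficiency(total, colony), 1))  # ~253.1 %
print(round(solubilization_index(total, colony), 2))       # ~3.53
```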
Quantitative estimation of available phosphorus in Pikovskaya's broth in vitro
The PSB isolates were tested in vitro by estimating the available phosphorus in Pikovskaya's broth supplemented with TCP as the substrate. Cultures of PSB were inoculated into 100 ml of Pikovskaya's broth in 150 ml conical flasks in triplicate, with uninoculated controls, incubated for 10 days at 30°C, and centrifuged at 10,000 rpm for 10 minutes to separate the supernatant from the cell growth and insoluble phosphate. The clear supernatant was collected in 100 ml volumetric flasks, and the available phosphorus in the filtrate was estimated by the method of Olsen (1954).
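A sketch of the concentration read-out step: available P is typically obtained by interpolating sample absorbances on a KH2PO4 standard curve (molybdenum-blue chemistry read near 882 nm is the common choice for Olsen-type estimations). Every number below is an illustrative placeholder, not a value from this study:

```python
# Fit a linear standard curve and convert a sample absorbance to mg P/L.
import numpy as np

std_conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])       # standards, mg P/L
std_abs  = np.array([0.00, 0.11, 0.22, 0.45, 0.89])  # their absorbances

slope, intercept = np.polyfit(std_abs, std_conc, 1)  # conc = f(absorbance)

def p_mg_per_l(absorbance: float, dilution: float = 1.0) -> float:
    return (slope * absorbance + intercept) * dilution

print(round(p_mg_per_l(0.86, dilution=10), 1))       # e.g. ~38.6 mg/L
```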
pH change
The change in pH of the broth due to the growth of PSB was measured with a pH meter at incubation periods of 3, 6, and 9 days (Parimal et al., 2015).
Titrable acidity
In order to study the titrable acidity of the culture medium, 5-day-old cultures were centrifuged at 1000 rpm for 10 min. Ten ml of culture filtrate was taken in a 50 ml conical flask; 5 ml of supernatant was mixed with a few drops of phenolphthalein indicator and titrated against 0.01 N NaOH solution, the end point being indicated by a pink colour. The titrable acidity was expressed as ml of 0.01 N NaOH consumed per 5 ml of culture filtrate (Ponmurugan and Gopi, 2006).
Phosphatase activity
In order to study phosphatase activity in response to phosphorus enrichment, the culture filtrates were centrifuged and phosphatase activity was estimated following the standard procedure (Tabatabai and Bremner, 1969).
Results and Discussion
In total, 40 PSB were isolated from 40 rhizospheric samples of rice from different paddy-growing regions of the Raichur and Koppal districts. Among the 40 isolates, ten showed a remarkable zone of solubilization (Table 1). Based on the clear zone formed around colonies on Pikovskaya's medium, the solubilization efficiency and index were calculated (Table 1). The halo zone of solubilization ranged from 12.5 mm to 18.5 mm (Plate 1 shows the maximum zone of solubilization). There was a correlation between incubation time and zone size: increasing the incubation time increased the zone size for each isolate. The percent solubilization efficiency ranged from 80.95% to 253.84% from the 3rd to the 9th day of incubation, increasing with incubation period. The solubilization index based on colony diameter and halo zone for each PSB isolate is presented in Table 1; among the 10 efficient isolates, the solubilization index varied from 1.80 to 3.22 and likewise increased with incubation period. The diameter of the clear halo zone formed by the bacterial isolates is directly correlated with the phosphate solubilization efficiency.
All ten isolates were able to solubilize the insoluble phosphorus in Pikovskaya's broth at the different incubation periods (Table 2). Among the ten isolates, the available phosphorus content ranged from 31.92 to 171.84 mg/L. All isolates showed a decrease in pH with increasing incubation period; the final pH varied from 6.12 to 3.15, the drop being due to the production of organic acids.
The titrable acidity of the culture medium was measured; all ten isolates showed values in the range of 0.30% to 0.65%, and the titrable acidity increased with incubation period. The phosphatase activity of the soil ranged from a minimum of 16.28 µmoles/g/hr to a maximum of 42.40 µmoles/g/hr with the application of PPSB-21 and PPSB-5, respectively (Table 3). There was a positive correlation between phosphate solubilizing capacity and phosphatase activity. As noted above, phosphorus is one of the essential macro-minerals for plant growth and development (Schachtman et al., 1998) and a major component of ATP (Deepak Kumar, 2011). However, after application, a considerable amount of the applied P is rapidly transferred into less available forms by forming complexes with Fe, Ca, and Al cations before roots have a chance to absorb it (Alam and Ladha, 2004).
Under such conditions, PSB play a fundamental role in biogeochemical phosphorus cycling in natural and agricultural ecosystems. The extensive use of chemical fertilizers improves plant health and productivity but disturbs the ecological balance of the soil and results in nutrient depletion. This has necessitated the search for alternative sources of this element. The use of PSB in agricultural practice not only offsets the high cost of manufacturing phosphatic fertilizers but also makes the insoluble P already present in soils and fertilizers available to plants.
The maximum halo zone was found with PPSB-21, with a zone diameter of 18.5 mm. Among the 10 isolates, PPSB-21 showed the maximum solubilization efficiency and index, 253.84% and 3.53 respectively, on the 9th day of incubation. The solubilization efficiency increased with incubation period, and the diameter of the clear halo zone formed by the bacterial isolates is directly correlated with the phosphate solubilization efficiency. Similarly, Ngomle et al. (2014) isolated nineteen PSB; P-solubilization was found to be highest in UBPS-22 (28 mm), followed by UBPS-19 (25 mm) and UBPS-18 (24 mm), within 72 hrs of incubation on Pikovskaya's medium. Similar outcomes have been reported by many workers, e.g., Frateme et al. (2014) and Deepak et al. (2018).
PPSB-21 showed the highest reduction in pH, to 3.15, on the 9th day of incubation. The drop in pH was due to the production of organic acids; other causes, such as microbial respiration, may also be involved. Tensingh and Jemeema (2015) observed a pH change down to 4.6 by a Bacillus sp., and Amit et al. (2017) isolated 8 PSB isolates, among which 3 showed lower pH values ranging from 3.08 ± 0.08 to 3.82 ± 0.12. Similar results were obtained by Oliveira et al. (2009) and Buddhi and Min-Ho (2013). The bacteria that produced halo zones around colonies on PVK medium were able to produce organic acids in broth culture, in accordance with Ogut et al. (2010) and in agreement with Mehta et al. (2001), Chen et al. (2006), and Ponmurugan and Gopi (2012). Studies on the production of organic acids have shown that citric and oxalic acids are the two major organic acids produced by PSB (Alam et al., 2002).
Among all ten isolates, the highest titrable acidity, 0.65%, was expressed by PPSB-21. This might be due to the reduction in pH and the secretion of organic acids; indeed, a strong positive correlation was found between titrable acidity and P solubilization. PSB produce phosphatase enzymes in soil, and the activity of these enzymes leads to phosphate solubilization; a positive correlation was observed between phosphate solubilizing capacity and phosphatase activity. Phosphate solubilization occurs through the production and release of organic acids by microorganisms, which also decreases soil pH (Rodriguez et al., 2006). Organic acids solubilize insoluble P either by decreasing the pH or by complexing the cation bound to the P (Vassilev et al., 2006). Organic acids such as succinic, malic, propionic, and oxalic acid are released by phosphorus solubilizing bacteria (Panhwar et al., 2012); through their hydroxyl and carboxyl groups they chelate the cations bound to phosphate, converting it to soluble forms. The phosphate-solubilizing bacterial strains were identified by biochemical tests; the isolates PPSB-21 and PPSB-5 were identified as Pseudomonas sp. and Bacillus sp., respectively.
In conclusion, our study demonstrated that many of the bacteria had P-solubilizing properties and that this ability was not exclusive to specific genera, underlining the importance of preliminary in vitro screening of a wide range of bacteria to characterize their P-solubilizing or mineralizing traits. The PSB population was higher in rhizosphere soil than in non-rhizosphere soil. The isolated PSB strains were able to solubilize P and to produce organic acids and enzymes. These beneficial characteristics make the isolated PSB potential biofertilizers for rice production.
"year": 2020,
"sha1": "03ba146374a35a8a0053dbf736c183996fb920d0",
"oa_license": null,
"oa_url": "https://www.ijcmas.com/9-2-2020/Eramma,%20et%20al.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "03ba146374a35a8a0053dbf736c183996fb920d0",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
240068954 | pes2o/s2orc | v3-fos-license | A Rock Material Micro-Strength Calibration Model for Bonded Models in Particle Flow Code
Distinct Element Method (DEM) models can accurately simulate the large-deformation and crack-propagation phenomena of soft rocks. When employing the DEM via the Particle Flow Code, an initial issue is to calibrate the parameters used in the model. Typically, calibration is achieved by trial and error; however, its lack of accuracy and its time and effort requirements severely limit its application. Based on the Parallel Bonded Model (PBM), we researched the bond's failure criterion in this study. Then, the influence of the micro-strength parameters (tension strength σc and cohesion strength c) on the rock's macro strength and failure mechanisms was studied. Two different test groups were considered by changing the two parameters: (1) c remains constant; (2) σc remains constant. The results show that rock macro strength is positively correlated with its PBM micro-strength parameters and gradually stabilizes at an upper boundary equal to 3.3c or 3.3σc. With increasing c, the rock failure mechanism varies from block fragmentation to shear failure. In general, the macro strength depends on the coupling effect of c and σc, expressed through an exponential relationship. Using this relationship, only two steps are needed to calibrate the two micro-strength parameters. This research proposes a new way to calibrate PBM micro-strength parameters and provides insight into building correlations between other micromechanical parameters in the DEM.
Introduction
The Distinct Element Method (DEM) can deal with discontinuous-medium problems and analyze rock's large deformation and crack propagation [1][2][3][4]. DEM has been widely applied in the simulation of rock mechanics [5], with the Particle Flow Code (PFC) being one of the most used tools [6]. However, the method is not yet mature for large-scale engineering problems; one of the obstacles is the low efficiency of calibrating the micro-strength parameters [7][8].
Calibrating parameters is the initial step in building a DEM model [9]. However, the traditional method, trial and error, is time-consuming and inaccurate [10]. Researchers have conducted several studies on micro parameter calibration. First, the influence of particle states on the mechanical properties has been studied. Guo and Zhang et al. [11] studied the effect of particle shapes on mechanical properties by creating an anisotropic rock model with different clump components. Research on particle size showed that the macro strength and elastic modulus are positively correlated with the ratio of model length to particle diameter [12][13][14][15]. The influence of the contact model's parameters on macro mechanical properties has also been widely studied. Xu and Wu et al. [16] studied the effects of the micro-strength parameters of the Bonded Model on rock's tension strength by designing Brazilian tensile tests in DEM. Zhou and Xu et al. [17] proposed a systematic approach to calibrate the micro parameters for the Flat Jointed Model. Huang and Zhang et al. [18][19] studied the effects of the micro parameters of the Smooth Joint Model on the macro mechanical properties. Xia and Zhao et al. [20] investigated the relationship between the Clump Bonded Model's micro and macro mechanical parameters. In addition, many researchers have focused on the properties of the Parallel Bonded Model (PBM). These works revealed that the model's peak strength is affected mainly by the micro cohesion strength c when the ratio of c to the tension strength σc is less than 2 [21][22][23][24]. However, these research results only show the qualitative effects of c and σc. A great deal of "trial and error" work is still required when calibrating those micro parameters.
Hence, we designed an orthogonal DEM simulation test for the PBM to efficiently calibrate the micro-strength parameters when building DEM models. In addition, we studied the relationship between the micro parameters and the macro strength. Two kinds of tests were conducted: (1) c remains constant; (2) σc remains constant. Figure 1 shows the components of PBM, from which we can see that bonds act on the contacts between the particles. These bonds can resist tension and shear forces. The specific physical models are shown in figure 1. PBM involves two states (unbonded and bonded), exhibiting linear elastic behavior in the unbonded state. Applying PBM is similar to bonding the particles with cement, which resembles a rock material. PBM can be regarded as a combination of two pairs of parallel springs with normal and tangential stiffness. However, the bonded parts can be destroyed under ultimate load, and the failure criterion is shown in figure 2. Note that there is no specific constitutive relation in the DEM system. To meet the needs of researchers, many micro parameters of DEM particles and bonds must be calibrated. Typically, this is accomplished by a trial and error process, which is time-consuming. Because the strength is critical to the simulation results, we mainly focused on the micro-strength parameters. Yet the model's failure is closely related to bond damage. Therefore, it is essential to establish the criterion of bond failure in the simulation process. The bonds can be damaged when subjected to tension or shear forces. The calculation process in PFC is as follows:
Parallel Bonded Model (PBM)
(1) Updating the basic physical parameters: the model computes the minimum bond radius R̄ of the two contacting particles by equation (1) and uses it in equations (2) and (3) to obtain A, the area of the bonded cross-section, and I, its moment of inertia.
(2) Updating the normal force: F̄n = F̄n + k̄n A Δδn (4), where Δδn is the normal displacement increment, F̄n is the normal force and k̄n is the normal stiffness. (3) Updating the shear force: F̄s = F̄s − k̄s A Δδs (5), where Δδs is the shear displacement increment and k̄s is the tangential stiffness.
(4) Updating the bending moment M̄ in the cross-section (equation (6)). (5) Updating the normal stress and tangential stress: in this step, the normal and tangential stresses are calculated using equations (7) and (8), σ̄ = F̄n/A + |M̄|R̄/I (7) and τ̄ = |F̄s|/A (8). (6) Determining bond breakage according to the criterion shown in figure 2: in this step, the code first ascertains whether the bond is broken under tension (σ̄ > σc). Then, the tangential stress is compared with the shear strength given by τc = c − σ̄ tan ∅.
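The per-step bond update and failure check described above can be sketched in Python; this is a minimal illustration assuming 3D parallel-bond geometry (A = πR̄², I = πR̄⁴/4) and the sign conventions written here, not PFC's actual API:

```python
import math

def bond_update_and_check(Fn, Fs, M, dun, dus, kn, ks, R_bar,
                          sigma_c, cohesion, phi_deg):
    """One step of the parallel-bond update and failure check (sketch).

    Fn, Fs, M : current normal force, shear force, bending moment
    dun, dus  : normal / shear displacement increments this step
    kn, ks    : normal / tangential bond stiffness (per unit area)
    Returns updated forces and a failure mode ('tension', 'shear' or None).
    """
    A = math.pi * R_bar ** 2          # bonded cross-section area, eq. (2)
    I = math.pi * R_bar ** 4 / 4.0    # moment of inertia, eq. (3)

    Fn += kn * A * dun                # eq. (4)
    Fs -= ks * A * dus                # eq. (5)
    # (bending-moment update of eq. (6) omitted; M assumed updated elsewhere)

    sigma = Fn / A + abs(M) * R_bar / I   # eq. (7)
    tau = abs(Fs) / A                     # eq. (8)

    if sigma > sigma_c:                                   # tensile breakage
        return Fn, Fs, "tension"
    tau_c = cohesion - sigma * math.tan(math.radians(phi_deg))
    if tau > tau_c:                                       # shear breakage
        return Fn, Fs, "shear"
    return Fn, Fs, None
```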
The abovementioned steps constitute the bond-damage criterion, which initiates the failure of the DEM model. The bond failure process is closely related to the parameters c and σc. However, there are more than two parameters in the DEM. Besides c and σc, other parameters such as stiffness, friction angle, particle size, model size, etc., are also crucial to the mechanical behavior of DEM rock models. Therefore, in the next section, the specific assignment of those parameters is discussed.
The schemes of DEM simulation experiments
In this section, we explain the schemes of the experiments and give the specific parameters. A DEM model is composed of many particles and bonds with many associated parameters. The ratio of the normal stiffness to the tangential stiffness influences the crack propagation and failure mode but has little effect on the macro strength [20]. Regarding the scale effect, the influence of particle size on macro strength is low if the ratio of the model's length to the particle radius is larger than 50 [12]. The friction coefficient affects the mechanical properties of sand-like materials, but it has minimal effect on rock models. In general, only c and σc influence the macro strength, which can also be explained by the bond failure criterion in section 2.1. Therefore, in this study, we mainly investigated the influence of the cohesion and tension strengths on the macro strength. This study focused on soft rock engineering, with the given basic parameters based on experimental results for typical soft rocks in southern China. The experiments were carried out in a previous study [25], and the calibration result is shown in figure 3. Figure 3(a) shows the stress-strain curves of the simulation and experimental results. Except for the strain error in the compaction stage, this calibration is consistent with the soft rock's mechanical behavior. In addition, figure 3(b) shows the crack results of the simulation and experiments, with the main crack's direction being similar. Finally, table 1 shows the calibration details. Figure 4 shows the experimental scheme; note that the value of σc in the fixed-c tests varied from 0 to about 3.3c, and the value of c in the fixed-σc tests also varied from 0 to about 3.3σc.
Proposal of the calibration model.
According to the failure criterion of PBM, we know that the bonds may fail under tension or shear forces. Generally, when a load is applied, the two kinds of forces co-exist in the sample. Therefore, the values of c and σc yield a coupling effect on the final strength, and the smaller one restricts the macro-strength. We have done extensive work on obtaining a relation that describes the coupling effect of c and σc.
The variate is controlled between c and σc in this paper according to the experimental schemes shown in figure 4. We increased σc under a constant c in the first group of experiments, where the values of c are 5 MPa, 10 MPa, 15 MPa and 20 MPa, respectively. Similarly, we executed the simulation tests by increasing the value of c under a fixed value of σc in another group of experiments. Series of strengths were obtained. We found that the sample's peak strength increased rapidly during the early stage, with its rate of increase gradually vanishing to 0 in the experiments. There is a coupling effect between c and σc: when the ratio c/σc is less than 2, the macro strength is affected mainly by c, and when the ratio c/σc is greater than 2, the value of σc restricts the final macro strength. We can use an exponential relation (9) to describe the abovementioned coupling effect, where σ(c, σc) represents the macro peak strength of the sample and the remaining constant is the shape parameter of the relation. In the following sections, we will use the sign σ to replace σ(c, σc) for simplicity.
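The exact closed form of relation (9) is not recoverable from this extraction; as a sketch only, a saturating exponential of the assumed form σ(x) = σ_ult·(1 − e^(−x/b)), with x the varied strength parameter, σ_ult the upper boundary and b a shape parameter, can be fitted to measured peak strengths with SciPy (the data points below are hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

def saturating_exp(x, sigma_ult, b):
    """Assumed saturating form for relation (9): rises fast, then levels off."""
    return sigma_ult * (1.0 - np.exp(-x / b))

# Hypothetical peak strengths (MPa) measured while increasing sigma_c at fixed c.
sigma_c = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
peak = np.array([8.0, 14.0, 22.0, 28.0, 31.5, 32.8])

(sigma_ult, b), _ = curve_fit(saturating_exp, sigma_c, peak, p0=(30.0, 5.0))
print(f"upper boundary ~ {sigma_ult:.1f} MPa, shape parameter b ~ {b:.2f}")
```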
When σc is the variate, the partial differential equation (10) shows the law of strength evolution, and we can easily find that the derivative σ′ tends to 0 when σc tends to infinity.
When c is the variate, the partial differential equation (11) likewise shows the law of strength evolution. When c tends to infinity, equation (11) simplifies so that σ′ reduces to the sum of the two remaining derivative terms.
In the next section, we will show the performance of this exponential relation based on the orthogonal test results, along with some verifications. Figure 5 shows the crack patterns of the DEM model when c is 10 MPa. The crack propagation patterns vary with the increase in σc. When σc is less than 5 MPa, many fully developed wing cracks are observed in the failure state. On the contrary, when σc is greater than 5 MPa, only one main crack with a few wing cracks is seen in the final state. This phenomenon is caused by the coupling effect of c and σc. When σc is much smaller than c, even a small tension force will damage the bonds and produce many wing cracks. Therefore, the impact of c is restricted in the failure process. Figure 6 shows the macro strengths from the simulations. The curves show a similar increasing trend. When c remains constant, the macro-strength first increases rapidly and then its rate of increase decreases until the strength becomes constant. We can easily fit those macro strengths by equation (9), and the results are shown in equations (12), (13), (14) and (15), respectively. Figure 7 shows the crack patterns of the DEM model when σc is 10 MPa, similar to the results of fixing c. We find that the crack propagation patterns vary with increasing c. When c is less than 4 MPa, many fully developed wing cracks are observed in the failure state. On the contrary, when c is greater than 4 MPa, one main crack with a few wing cracks is shown in the failure mode. This phenomenon is also caused by the coupling effect of c and σc. When c is much smaller than σc, even a small shear force will damage the bonds and produce many wing cracks. Therefore, the effect of σc is limited in the failure process. In section 2.3, we provided relation (11) to describe the variation law of the macro strength with changing c, which is more complicated than relation (10). This can be seen in figure 8, which shows the test results when σc remains constant. Those curves show similar trends. When c increases, the strengths of the samples first increase linearly and then stabilize at a final value. However, these curves are not smooth, and it is hard to give a relation when the variate is c. Figure 8. Test results of fixing σc. As shown in figure 8, those curves finally stabilize at upper boundaries, which are limited by σc. The strength data are positively correlated with the value of c before those critical points. We can obtain these critical points from figure 8. The values of c/σc at the four critical points are 2.2, 1.9, 1.87 and 1.8, respectively. These values are close to 2, similar to the conclusions derived by other researchers [21][22][23][24]. It is believed that the macro-strength is mainly affected by c when c/σc is less than 2, and mainly controlled by σc when c/σc is greater than 2. Therefore, in combination with equation (11), the following expression can be derived:
Experimental results of fixing c.
In addition, the sum of the two derivative terms equals 0 according to equation (16), which is consistent with the curves shown in figure 8. According to the strength results, the values of σ/σc at the critical points were derived, and the average value is 3.3, which reveals the value of the upper boundary. We can also obtain a similar result by analyzing the ultimate strength in figure 6, expressed as σ = 3.3c. Therefore, when applying PBM in DEM, the ultimate macro-strength of models will be restricted to 3.3σc or 3.3c. Equation (16) holds when the variate is c. If we apply specific values of c and σc to equation (16), the strength can be calculated. Therefore, we can verify the feasibility of this equation using the fixed-σc results in section 3.2. We selected the four critical points in figure 8 to verify the calibration relation in this study. Table 3 shows the strength errors calculated by equation (16), which are 5.9%, 1.8%, 3.4%, and 6.2%, respectively. The average error is 4.3%; hence the calibration relation given in equation (16) can be considered reliable within an error of 5%. Figure 9. The calibration process. According to equation (16), to calibrate the model's strength, we only need to pick the specific micro-strength parameters in two steps (see figure 9). Therefore, using the exponential relation proposed in this study allows us to conveniently and effectively calibrate the micro-strength parameters in two steps instead of by trial and error.
Conclusions
• The exponential relation proposed in this study can well describe the coupling effect of c and σc; the error between the calibration results and the target strength is less than 5%, so the relation can be applied to calibrate rock and soil materials.
• In this study, the influence of the micro-strength parameters on the macro-strength and deformation was studied through groups of orthogonal experiments. The upper-boundary macro-strength in the two experiments can be expressed as 3.3σc or 3.3c. • Using the exponential mathematical model to establish relationships between DEM strength parameters provides insight into dealing with other micro parameters. | 2021-10-28T20:10:59.501Z | 2021-10-01T00:00:00.000 | {
"year": 2021,
"sha1": "28783d15cf05b85464e437de8b82e391d8cb4e7b",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/861/4/042095",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "28783d15cf05b85464e437de8b82e391d8cb4e7b",
"s2fieldsofstudy": [
"Geology",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
224850033 | pes2o/s2orc | v3-fos-license | Intravenous fluids in acute pancreatitis: a prospective study
AP can be categorized into mild, moderately severe, and severe. It is important not only to diagnose AP but also to grade its severity. Mild acute pancreatitis (MAP) usually resolves by itself, but severe acute pancreatitis (SAP) carries a considerable risk of mortality due to fatal complications. SAP is seen in 20% of patients. There is a wide variation in morbidity and mortality between MAP and SAP (mild <5% versus severe 20-25%). Determinants of outcome are the presence of organ failure or local complications. To improve the clinical outcome in AP, an accurate assessment of severity and an appropriate management plan are essential.
INTRODUCTION
Acute pancreatitis (AP) is a disease with a substantial burden on the health system. AP can be due to various etiological factors; in the Indian population it is commonly due to alcohol and gallstones. The incidence of AP is increasing steadily, with an increase of 2.7% per year between 1988 and 2010. 1 AP can be categorized into mild, moderately severe, and severe. It is important not only to diagnose AP but also to grade its severity. Mild acute pancreatitis (MAP) usually resolves by itself, but severe acute pancreatitis (SAP) carries a considerable risk of mortality due to fatal complications. SAP is seen in 20% of patients. 2 There is a wide variation in morbidity and mortality between MAP and SAP (mild <5% versus severe 20-25%). 3,4 Determinants of outcome are the presence of organ failure or local complications. To improve the clinical outcome in AP, an accurate assessment of severity and an appropriate management plan are essential.
The pathophysiological changes in acute pancreatitis are most marked in the first 24-72 hours of the illness. Supportive therapy is considered the most important therapeutic strategy in the management of acute pancreatitis. Early fluid resuscitation is believed to play an important role in the prevention of complications like pancreatic necrosis and organ failure by preserving the pancreatic microcirculation. 5 However, the evidence of benefit of early aggressive fluid therapy on the prognosis of acute pancreatitis is derived from indirect data. Recent studies show that patients who received a small amount of fluids during the initial 24 hours did not have a worse outcome, and that the administration of a large amount of fluid during the initial 24 hours was independently associated with organ failure and local complications. 6 The failure to clearly demonstrate the superiority of one fluid strategy over another may stem from the great variability of individual responses to volume expansion and the specific hemodynamic status of each patient at a given time.
This study aims to analyze the persistence or occurrence of SIRS and organ failure in patients with acute pancreatitis receiving normal-volume versus high-volume fluid therapy in the initial 24 hours.
METHODS
This was a single-centered prospective observational study conducted in the Department of General Surgery and the Department of Gastroenterology, St. John's Medical College Hospital, Bangalore, a tertiary-level teaching hospital. The study period was from June 2016 to July 2017. The sample size was calculated using nMaster software with a confidence interval of 80% and an alpha error of 5%. Sixty patients admitted with AP as per the definition of the modified Atlanta criteria were included in the study. Exclusion criteria were: patients with congestive cardiac failure or chronic renal disease; patients who had already received treatment from other hospitals; patients presenting after 48 hours of onset of symptoms; and pregnant women. This study was approved by the St. John's Medical College and Hospital institutional ethics committee.
At admission, haematocrit, haemoglobin, blood counts, arterial blood gas analysis, liver function tests, serum amylase and serum lipase values were obtained. Patients who received intravenous fluids at a rate of 100-150 cc/hour in the first 24 hours were included in the normal-volume group, and those who received intravenous fluids at a rate of 150-250 cc/hour were included in the high-volume group. The systemic inflammatory response syndrome (SIRS) score and modified Marshall score were assessed at admission. Patients were reassessed at 24 and 48 hours for the persistence or worsening of SIRS, organ failure and local complications.
All analysis was performed using Statistical Package for the Social Sciences (SPSS) version 2.15.0.
RESULTS
The study population consisted of 39 males (65%) and 21 females (35%) (Table 1). The etiology of acute pancreatitis was most commonly alcohol-related (n=29; 48.33%) and gallstone-related (n=24; 40%). The other seven cases were due to drugs and post-endoscopic retrograde cholangiopancreatography (ERCP) pancreatitis (Figure 1). Forty-one patients (68.33%) presented with classical symptoms of AP. Most of the patients (51.66%) presented more than 24 hours after the onset of symptoms (Figure 2). In the normal-volume group, the mean amount of fluid given in the initial 24 hours was 2495±273.34 ml and the mean rate of fluid administration was 105.833±10.75 ml/hour. In the high-volume group, the mean amount of fluid given in the initial 24 hours was 5793±884.71 ml and the mean rate of fluid administration was 259.33±39.29 ml/hour. The fluids used were normal saline and Ringer's lactate.
Persistence/worsening of SIRS at 48 hours was more frequent in the normal-volume fluid group than in the high-volume fluid group (p=0.076). Organ failure at 48 hours was also more frequent in the normal-volume fluid group than in the high-volume fluid group (p=0.074). The incidence of local complications was equal in both groups (Table 2). In the normal-volume group with organ failure, the system involved was the renal system in one patient and the respiratory system in the other. In the high-volume group, the organ system involved was the renal system.
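As a sketch of how such a between-group comparison of proportions can be computed (the paper does not state which test produced its p-values, and the 2×2 counts below are hypothetical placeholders, not the study's exact figures):

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table (placeholders): rows = normal-volume, high-volume group;
# columns = organ failure at 48 h (yes, no), assuming 30 patients per group.
table = [[2, 28],
         [1, 29]]

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```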
DISCUSSION
In our study, 65% of the patients with AP were males. A few other studies have also found a male predominance in AP, suggesting a significant association between gender and the etiology of AP. Alcohol is the primary cause of both acute and chronic pancreatitis in most countries, both being more common in men. 7 In our study, 48% of AP was secondary to alcohol abuse and was seen only in male patients, while 40% of AP was secondary to gallstone disease with a female predominance, a gender-etiologic pattern seen in other studies too. In our study, 61.66% of patients were between the ages of 18 and 40 years (Figure 3). Gallstone pancreatitis is more common in female subjects, and alcoholic pancreatitis is more common in middle-aged male subjects. 8,9 Other etiological factors for AP observed in the study included post-ERCP pancreatitis, drugs and idiopathic causes. Two patients developed post-ERCP acute pancreatitis. The drugs which caused AP were steroid and valproate. This observation gains importance in view of the fact that most patients who develop AP are in the younger, productive age group. Morbidity, mortality, and the diagnostic and treatment costs associated with AP will have an adverse health and socioeconomic impact on these patients at the individual and societal levels.
In the study, 41.66% of patients had a normal body mass index (BMI), but 53.32% had a BMI of more than 23. Various studies have shown that obesity is associated with an amplified systemic inflammatory response in acute pancreatitis and is a prognostic factor for mortality, local and systemic complications, and severity in AP. 10 Most of the patients in our study group (51.66%) presented more than 24 hours after the onset of symptoms. This observation is important, as the pathological changes in AP develop much earlier, before the serum tests become positive. This delay in presentation to hospital and in the initiation of fluid therapy may have an influence on the outcomes of the disease.
In the normal-volume group, patients received intravenous fluids at an average rate of 105.833±10.75 ml/hour. The fluids used were normal saline and Ringer's lactate solution. In the normal-volume group at admission, 97% of patients had mild pancreatitis and 3% had moderately severe pancreatitis according to the revised Atlanta criteria. After 24 hours of normal-volume intravenous fluid therapy, 90% had mild pancreatitis, 7% had moderately severe pancreatitis and 3% had severe pancreatitis. At the time of presentation, 7 patients in the normal-volume group had SIRS and 1 patient had organ failure. At the end of 48 hours, 4 patients had SIRS, 2 patients had organ failure and 1 patient developed a local complication in the form of an acute fluid collection. Of the organ failures, the system involved was the renal system in one patient and the respiratory system in the other (Figure 4). In the high-volume group, 13 patients had SIRS and 4 patients had organ failure at the time of admission. After 48 hours, 4 patients had SIRS, 1 had organ failure and 1 developed pancreatic ascites (Figure 5). The organ system involved was the renal system.
Persistence/worsening of SIRS at 48 hours was more frequent in the normal-volume fluid group than in the high-volume fluid group (p=0.076). Organ failure at 48 hours was also more frequent in the normal-volume fluid group (p=0.074). The incidence of local complications was equal in both groups. However, none of the above observations reached statistical significance. Many recent prospective studies suggest that early aggressive fluid therapy is not associated with improved outcomes in patients with AP. 11,12 These studies have also shown an association between aggressive fluid resuscitation and increased organ failure, acute pancreatic fluid collection, renal and respiratory insufficiency, intensive care unit admissions, sepsis and mortality. There are a few observational studies which support aggressive fluid management in AP. 13,14 However, most of the randomized trials provide evidence in favour of non-aggressive fluid therapy.
At present, aggressive fluid therapy is recommended for the initial management of AP. However, our study did not show any significant difference in outcomes between patients with AP receiving normal or high volumes of fluid in the initial 24 hours. One main limitation of our study was that most of the patients presented more than 24 hours after the onset of symptoms. This delay in presentation to hospital and in the initiation of fluid therapy may have influenced the outcomes.
CONCLUSION
To conclude, we did not find any statistically significant difference in the clinical outcomes of AP patients receiving normal- or high-volume fluid resuscitation in the initial 24 hours. Our study shows the need for a multi-centric randomized controlled trial with a larger study population to determine the optimal rate and type of fluid resuscitation in the initial management of patients with AP. | 2020-10-08T09:27:26.645Z | 2020-09-23T00:00:00.000 | {
"year": 2020,
"sha1": "d84d91903dce9c32b6e8dc030a94bc1ab96f5a78",
"oa_license": null,
"oa_url": "https://www.ijsurgery.com/index.php/isj/article/download/6470/4132",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d84d91903dce9c32b6e8dc030a94bc1ab96f5a78",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
261898134 | pes2o/s2orc | v3-fos-license | Trajectory Prediction for Robot Navigation using Flow-Guided Markov Neural Operator
Predicting pedestrian movements remains a complex and persistent challenge in robot navigation research. To achieve accurate predictions, we must evaluate several factors, such as pedestrian interactions, the environment, crowd density, and social and cultural norms. Accurate prediction of pedestrian paths is vital for ensuring safe human-robot interaction, especially in robot navigation. Furthermore, this research has potential applications in autonomous vehicles, pedestrian tracking, and human-robot collaboration. Therefore, in this paper, we introduce FlowMNO, an Optical Flow-Integrated Markov Neural Operator designed to capture pedestrian behavior across diverse scenarios. Our paper models trajectory prediction as a Markovian process, where future pedestrian coordinates depend solely on the current state. This problem formulation eliminates the need to store previous states. We conducted experiments using standard benchmark datasets like ETH, HOTEL, ZARA1, ZARA2, UCY, and the RGB-D pedestrian dataset. Our study demonstrates that FlowMNO outperforms some state-of-the-art deep learning methods, such as LSTM-, GAN-, and CNN-based approaches, by approximately 86.46% when predicting pedestrian trajectories. Thus, we show that FlowMNO can seamlessly integrate into robot navigation systems, enhancing their ability to navigate crowded areas smoothly.
I. INTRODUCTION
Pedestrian trajectory prediction is an essential aspect of robotics research. It enables us to predict how pedestrians will move in various contexts. This feature is critical in robotics, particularly in guaranteeing safe interactions between robots and humans in complex and chaotic settings. Several applications make use of pedestrian trajectory prediction. In autonomous robot navigation [2]- [5], for example, self-driving cars employ a mechanism for predicting pedestrian trajectories to estimate future pedestrian coordinates and avoid collisions with pedestrians on the scene [9]- [14], thus ensuring safe navigation. Social robotics [15]- [18], [22] enhances humanrobot collaboration, particularly in crowded spaces, by using trajectory prediction to teach robots to avoid interrupting pedestrians or groups and maintain a safe distance, thereby avoiding any impact on their mental state. Robots tasked with crowd management at public events or transportation hubs benefit from trajectory prediction, as it optimizes traffic flow and enhances safety.
There are quite a few challenges in this domain. The complexity of human behavior, influenced by individual choices, social norms, and environmental factors, forms a formidable obstacle to achieving precise predictions. Furthermore, the inherent uncertainty in human movements presents a particularly demanding aspect of the problem. In crowded environments, the prediction of pedestrian trajectories becomes even more intricate due to the close proximity and interactions among individuals. Additionally, emergency situations necessitate rapid and accurate trajectory predictions, introducing an added layer of complexity, as pedestrian behavior can drastically change under such circumstances.
Numerous deep-learning models have been explored to predict the trajectories of objects in a scene. Convolutional Neural Network-based models are used to predict trajectories from a sequence of frames [30], [32]. However, convolution filters cannot capture temporal relationships between the frames; images are treated as independent entities. LSTMs are another family of neural network architectures that have been used for trajectory prediction [44], [45], [57]. They are designed to capture temporal relationships. Some models have explored scaling the number of LSTM networks with the number of entities present in the scene [43], which is computationally expensive. Furthermore, random chaotic motion cannot be modeled from temporal data alone.
In our work, we model the trajectory of crowds as a Markov process, where the positions of the entities at a given time step only depend on the previous time step. The system is modeled using dissipative dynamics, where the model predicts the flow of the entities within the system. Rather than representing the system with discrete entities, we represent the scene with the flow of the entity with respect to the previous time step, which is analogous to modeling the flow of a fluid.
Neural Operators have shown promise in modeling various types of fluid behavior and dissipative chaotic systems [1]. In this setting, we present a neural operator-based model FlowMNO to model the flow of entities, captured using optical flow estimation techniques.
Our main contributions can be summarized as follows:
1) Optical Flow Generation: FlowMNO incorporates optical flow as a key component of input data generation. Optical flow is generated using the Farneback method in combination with a pedestrian detection algorithm.
2) Modeling trajectories as a Markov process: FlowMNO adopts a Markovian process model for pedestrian trajectory prediction. This model relies on the assumption that the future state of a pedestrian depends primarily on their current state.
3) Comprehensive Evaluation: The model's performance is evaluated on multiple datasets, including ETH and HOTEL [26], ZARA1, ZARA2, UCY [27], and the RGB-D pedestrian dataset [71]. The evaluation includes a comparative study of FlowMNO against other deep-learning models commonly used in pedestrian trajectory prediction, using the average displacement error and final displacement error as quantitative measures of prediction accuracy. We show that FlowMNO outperforms various state-of-the-art deep learning models by approximately 86.46%.
II. RELATED WORKS
In the realm of pedestrian trajectory prediction research, various deep learning models have been explored to address the intricate challenges inherent in this task. Among these models are CNN-Based Approaches, which encompass diverse techniques and exhibit unique technical characteristics. Noteworthy examples include Yi et al. [30], who introduced a deep neural network framework for understanding and predicting pedestrian behavior. Varshneya and Srinivasaraghavan [32] proposed spatially aware deep attention models, enhancing spatial perception in human trajectory prediction. Yu et al. [35] advanced the field with spatio-temporal graph transformer networks, adept at capturing complex pedestrian dynamics. Dan [36] innovated by integrating a spatial-temporal block and LSTM network for trajectory forecasting, introducing temporal dependencies. Jain et al. [37] presented a discrete residual flow model, adding probabilistic elements to pedestrian behavior prediction. Ridel et al. [38] proposed a scene-compliant trajectory forecast model employing agent-centric spatio-temporal grids, enhancing predictive accuracy. Meanwhile, Zhang et al. [39] introduced the Social-IWSTCNN model, strategically incorporating social interactions into predictions. Zhao and Liu [40] presented STUGCN, a social spatio-temporal unifying graph convolutional network, revolutionizing trajectory prediction approaches. Zamboni et al. [41] focused on pedestrian trajectory prediction using convolutional neural networks. Mohamed et al. [42] unveiled Social-STGCNN, an intricate social spatio-temporal graph convolutional neural network tailored for precise human trajectory prediction. It is worth noting that while these models exhibit impressive performance, they share common limitations, including challenges with generalization beyond their training data, computational intensity, and limited consideration of environmental factors and interactions with unpredictable agents. Ethical concerns, such as privacy and bias, also necessitate careful consideration in real-world deployments in diverse urban environments.
LSTM-Based Models have gained prominence for their ability to capture temporal dependencies and interactions in trajectory data. Alahi et al. presented the "Social LSTM" method, focusing on predicting human trajectories in crowded spaces using LSTM networks [44]. SG-LSTM utilizes Social Group LSTM with group detection to enhance robot navigation in dense crowds, as proposed by R. Bhaskara et al. [43]. Lee et al. introduced "DESIRE," a framework for distant future prediction in dynamic scenes with interacting agents [45]. Fernando et al. proposed an LSTM framework with soft and hardwired attention mechanisms to predict trajectories and detect abnormal events [46]. Moreover, works like "Trajectron" [55], "STGAT" [57], and "Spatio-temporal Attention Model" [58] have leveraged spatiotemporal modeling and attention mechanisms to enhance prediction accuracy. Monti et al. introduced "Dag-net," a double attentive graph neural network for trajectory forecasting [59]. Finally, "SSeg-LSTM" [60] and "Multi-agent Tensor Fusion" [61] incorporate semantic scene segmentation and multi-agent fusion, respectively, to improve contextual trajectory prediction in diverse scenarios. Despite their advancements, these models often encounter challenges like accurate long-term trajectory forecasting, sensitivity to initial conditions, computational inefficiencies, handling multimodal predictions, limited context awareness, and interpretability issues. These challenges can impact their real-world applicability and require further research and development.
Recent advances in pedestrian trajectory prediction have witnessed the development of various Generative Adversarial Network (GAN) based models. Gupta et al. [67] introduced a GAN-based model for generating socially plausible pedestrian trajectories. Meanwhile, Huang et al. introduced STI-GAN, a multimodal pedestrian trajectory prediction model using spatiotemporal interactions within a GAN framework [68]. Despite their progress, these GAN-based models face challenges such as data dependency, subjective behavior definitions, adherence to physical constraints, handling contextual diversity, computational complexity, metric standardization, and generalization across diverse scenarios.
In conclusion, various trajectory prediction approaches, including CNN-based, LSTM-based, and GAN-based models, exhibit distinct limitations, ranging from generalization issues to computational complexities and ethical concerns. However, FlowMNO, as a Markovian process, addresses some of these challenges and provides more accurate long-term trajectory forecasts.
III. PROBLEM FORMULATION
The problem of trajectory prediction of multiple moving entities within the environment observable to a mobile robot is modeled as a Markov process, where the positions of these entities are predicted solely based on their current positions. We make the following assumptions for the model:
1. The observable scene has multiple entities.
2. The motions of the entities are independent of each other.
3. We only consider the motion of the entities within the frame of reference.
4. The overall motion of all entities within the scene is considered random.
5. The system as a whole is chaotic.
6. The system as a whole is modeled as a Markov process.
A. Modeling Randomness
Markov Process: A Markov process characterizes a sequence of events or states in which the probability of transitioning from one state to another depends solely on the current state, exhibiting the Markov property. This property implies that the future state of the process is conditionally independent of its past states given the current state, making it memoryless. Markov processes are extensively employed [6]-[8] for modeling systems with inherent randomness and uncertainty, enabling the analysis and prediction of future states or events based on historical observations. The problem of trajectory prediction for a system of independent entities can be treated as a Markov process because the system as a whole is too random to be governed by deterministic models. For instance, a person may suddenly change walking direction, and the previous states cannot be used to predict this phenomenon.
Transition Dynamics: Probabilistically, the transition dynamics can be represented as P(s_{t+1} | s_t, a_t), where s_t is the state of the system at time step t and a_t signifies the actions taken by an agent a within the system, such as a robot or a person, at time step t.
Sobolev Norms and Solution Operators: To train the solution operator Ŝ_h : f(t) → f(t + h), which learns to approximate the operator mapping the solution from the current step to the next, we use the Sobolev norm. The model approximates the solution operator S_h, an element of the underlying continuous-time semigroup {S_t : t ∈ [0, ∞)}, using a neural operator. A neural operator, as presented in Li et al. [1], is a neural network that can learn infinite-dimensional functions. Neural operators have been shown to be powerful in modeling ergodic systems, dissipative dynamics, and chaotic systems. In this problem, we model the motion of crowds as a stochastic system, with groups of entities flowing in and out of the observable frame of the robot. Crowds can be modeled on a Riemannian 3n-manifold in 3D Euclidean space [31]. However, from the point of view of a robot, entities get bigger as they move closer and smaller as they move away, denoting motion along the z-axis. They can also move in the left-right direction. Movement in the up-down direction is minimal unless there are slopes in the environment. This entire system can be modeled as a dissipative stochastic process or an open thermodynamic system. If the inertial forces of an individual entity are weaker than the overall force of the system, the system can be thought of as very crowded. Dissipative systems have a global attractor set, or an invariant measure, that can be learned using the Sobolev norm. However, in a stochastic process the attractor set is more random, representing a probability distribution over where the entities can converge. Human motion is in most cases constrained by traffic rules; i.e., most people tend to walk along a sidewalk, and this trajectory is always in the system. While there are infinitely many paths that people can take, they generally do not take them and instead move along a countably finite set of trajectories.
Given the ground truth operator S_h and the learned operator Ŝ_h, along with the residual r = Ŝ_h(f) − S_h(f), the neural operator computes the step-wise loss in the Sobolev norm W^{k,p}, as shown in [1]:

||r||_{W^{k,p}} = ( Σ_{|α| ≤ k} ||D^α r||_{L^p}^p )^{1/p}.

Furthermore, for p = 2, the Sobolev norm can be efficiently computed in Fourier space as

||r||_{W^{k,2}}^2 = Σ_ξ (1 + |ξ|^2)^k |r̂(ξ)|^2.

Here, r̂ represents the Fourier series of r and f represents the optical flow inputs.
Long-Term Predictions:
The neural operator exhibits the capability of making long-term predictions, leveraging the semigroup properties of the solution operator Ŝ_h. By repeatedly composing Ŝ_h with its own output, long-time pedestrian trajectories can be approximated efficiently. Thus, for any n ∈ ℕ, f(nh) is computed as

f(nh) ≈ (Ŝ_h ∘ Ŝ_h ∘ ... ∘ Ŝ_h)(f(0)) = Ŝ_h^n(f(0)).

Theoretical Foundation: The neural operator is a solution operator that approximates the solution operator of a dynamical system that is locally Lipschitz [1]. On a compact set K, the neural operator Ŝ_h estimates the system, for any n ∈ ℕ, within a specified error ε:

sup_{f ∈ K} ||Ŝ_h^n(f) − S_{nh}(f)|| ≤ ε.
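A sketch of this rollout-by-composition, assuming a generic learned one-step operator (the names are placeholders, not the authors' code):

```python
def rollout(step_operator, f0, n):
    """Approximate f(n*h) by composing the learned one-step operator n times,
    using the semigroup property S_{nh} = (S_h)^n."""
    f = f0
    for _ in range(n):
        f = step_operator(f)   # one Markov step: f(t) -> f(t + h)
    return f
```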
IV. FLOWMNO ARCHITECTURE A. Optical Flow Estimation and Integration with MNO
In FlowMNO, the input frame at time t − 1 and time t, denoted as I t−1 and I t respectively, are used to generate the optical flow using the Farneback Optical Flow estimation method. Optical flow estimation is a crucial component of our pedestrian trajectory analysis methodology. It serves as a fundamental tool for quantifying the motion of objects across consecutive frames in a video sequence. We adopt the Farneback method [28], a widely used technique for optical flow computation. The process begins by initializing the YOLOv3-tiny model [72], which is pre-trained with weight parameters and configurations to identify pedestrians within each frame. Subsequently, detected pedestrian bounding boxes, along with their associated confidences and class IDs, are extracted from the frames. To enhance the precision of pedestrian localization, we apply non-maximum suppression (NMS) based on confidence scores, ensuring that only the most reliable bounding boxes are retained.
The core of our optical flow calculation relies on the Farneback method, which estimates, for each pixel, a displacement (u, v) consistent with the brightness-constancy constraint

I(x, y, t) = I(x + u, y + v, t + 1),

where I(x, y, t) represents the intensity of a pixel at coordinates (x, y) and time t, and (u, v) denotes the optical flow vector.
This method estimates the optical flow between the two frames I t−1 , I t and generates the optical flow O t at time step t for frame I t . The optical flow results are visualized using a color map, and flow lines are drawn within the pedestrian bounding boxes to illustrate the motion patterns. Finally, the computed optical flow frames are saved for subsequent analysis or visualization. This iterative process is executed for each frame in the video sequence, enabling a comprehensive examination of pedestrian trajectories, their direction, and speed. This analysis provides valuable insights into crowd dynamics and behavior.
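A minimal sketch of this flow-generation step using OpenCV's Farneback implementation; the detector stage is omitted for brevity, and the file names and Farneback parameters are illustrative assumptions, not the authors' exact settings:

```python
import cv2
import numpy as np

prev = cv2.cvtColor(cv2.imread("frame_t-1.png"), cv2.COLOR_BGR2GRAY)
curr = cv2.cvtColor(cv2.imread("frame_t.png"), cv2.COLOR_BGR2GRAY)

# Dense Farneback flow: one (dx, dy) vector per pixel.
flow = cv2.calcOpticalFlowFarneback(
    prev, curr, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

# Hue encodes flow direction, value encodes magnitude, for visualization.
mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
hsv = np.zeros((*prev.shape, 3), dtype=np.uint8)
hsv[..., 0] = ang * 180 / np.pi / 2
hsv[..., 1] = 255
hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite("optical_flow_t.png", cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR))
```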
The generated O_t is then used as input to the Markov Neural Operator (MNO), which estimates the optical flow at time step t + 1, denoted O_{t+1}. The FlowMNO pipeline, shown in Fig. 2, can thus be summarized as O_t = Farneback(I_{t-1}, I_t) followed by O_{t+1} = MNO(O_t). The optical flow information that the MNO generates, O_{t+1}, is later utilized for estimating pedestrian trajectories from time step t to t + 1.
B. Trajectory Estimation
In the Trajectory Estimation stage of FlowMNO, we focus on estimating the trajectories of pedestrians from time step t to t + 1 using the optical flow information provided by the O t+1 frame. To achieve this, we leverage the centroid coordinates x and y of detected pedestrians within the bounding boxes in frame I t .
For each pedestrian in frame I t , we compute the displacement based on the optical flow information in O t+1 . Specifically, we use the optical flow value at the centroid (x, y) of the pedestrian's bounding box in frame I t , denoted as (dx,dy). This optical flow value indicates the displacement of the centroid from frame I t to frame I t+1 .
To estimate the trajectory for a pedestrian at time step t + 1, we calculate the new centroid coordinates (x_{t+1}, y_{t+1}) in frame I_{t+1} from the optical flow vectors as x_{t+1} = x_t + dx and y_{t+1} = y_t + dy, allowing us to estimate the trajectory for the next time step. This trajectory estimation is valuable for tracking pedestrians in dynamic scenarios and is a key component of FlowMNO's functionality.
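A sketch of this centroid-displacement step, assuming a dense flow array indexed as flow[y, x] = (dx, dy); the function and variable names are illustrative:

```python
import numpy as np

def step_centroids(centroids, flow):
    """Advance pedestrian centroids one frame using a dense flow field.

    centroids : list of (x, y) pixel coordinates at time t
    flow      : HxWx2 array from the flow estimator, flow[y, x] = (dx, dy)
    Returns the estimated (x, y) coordinates at time t+1.
    """
    h, w = flow.shape[:2]
    advanced = []
    for x, y in centroids:
        xi = int(np.clip(x, 0, w - 1))   # guard against boxes at the border
        yi = int(np.clip(y, 0, h - 1))
        dx, dy = flow[yi, xi]
        advanced.append((x + dx, y + dy))
    return advanced
```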
C. Integration of FlowMNO Trajectories with GVO Framework for Robot Navigation
In this integration, FlowMNO's predictions for pedestrians' future positions at time t + 1 (x t+1 and y t+1 ) serve as dynamic obstacles within the Generalized Velocity Obstacles (GVO) framework, enhancing robot navigation in dynamic environments. GVO [70], a widely-used navigation technique, calculates velocity obstacles considering both robot and obstacle dynamics. FlowMNO, our innovative pedestrian trajectory prediction model, provides these predictions based on optical flow information, enabling the robot to anticipate pedestrian movements. The robot's constraints are defined by a state transition function R(t, u), where u represents control inputs, including steering angle (u ϕ ) and speed (u s ), and t is time. Equation (3) characterizes the robot's position evolution.
To incorporate the predicted pedestrian positions (x t+1 and y t+1 ), we treat them as additional moving obstacles, modifying the GVO equations to account for their positions and velocities. The key to effective navigation within the GVO framework lies in calculating the relative velocity (v ri ) between the robot and each pedestrian (i). This relative velocity is pivotal in assessing the potential collision risk and determining the robot's desired velocity (v d ) for collision avoidance.
The relative velocity v_ri between the robot and pedestrian i is computed using the following equation: v_ri = v_robot − v_pedestrian_i, where v_robot represents the velocity of the robot relative to a stationary frame and v_pedestrian_i represents the velocity of pedestrian i relative to the same stationary frame. For more information, please refer to [70].
By calculating v ri for each pedestrian i, the GVO framework evaluates the dynamics of robot-pedestrian interactions. This information is then utilized to determine the robot's desired velocity (v d ), which ensures safe navigation while avoiding potential collisions. The desired velocity (v d ) is typically computed based on optimization criteria, considering factors such as minimum separation distance and collision avoidance strategies, and is an essential component of the robot's motion planning process within dynamic environments.
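A toy sketch of using this relative velocity for a velocity-obstacle-style collision check; the circular-agent geometry and fixed time horizon are simplifying assumptions, not the GVO formulation of [70]:

```python
import numpy as np

def collision_risk(p_robot, v_robot, p_ped, v_ped, radius_sum, horizon=3.0):
    """Return True if the current velocities bring robot and pedestrian
    within radius_sum of each other inside the time horizon (seconds)."""
    p_rel = np.asarray(p_ped, dtype=float) - np.asarray(p_robot, dtype=float)
    v_rel = np.asarray(v_robot, dtype=float) - np.asarray(v_ped, dtype=float)
    speed2 = v_rel @ v_rel
    if speed2 < 1e-9:                       # effectively no relative motion
        return float(np.linalg.norm(p_rel)) < radius_sum
    t_cpa = float(np.clip((p_rel @ v_rel) / speed2, 0.0, horizon))  # closest approach
    closest = p_rel - v_rel * t_cpa
    return float(np.linalg.norm(closest)) < radius_sum

# Candidate robot velocities are kept only if they are collision-free.
candidates = [np.array([vx, vy]) for vx in (-1, 0, 1) for vy in (-1, 0, 1)]
safe = [v for v in candidates
        if not collision_risk((0, 0), v, (2, 1), (-0.5, 0), radius_sum=0.6)]
```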
V. TRAINING
FlowMNO underwent intensive training on the RGB-D pedestrian dataset [71], spanning 60 epochs. The dataset was split into 70% training, 20% validation, and 10% testing sets. A batch size of 5 was used for efficient training. Key parameters included a learning rate of 0.0005 and a learning rate scheduler (step size: 10, gamma: 0.5) for stability, together with the Adam optimizer. The loss function utilized during training was Mean Squared Error (MSE), a standard measure for regression tasks. The process harnessed the computational power of an Nvidia A30 GPU, significantly expediting performance optimization.
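A sketch of this training configuration in PyTorch; the model and data below are stand-ins, since the paper's MNO architecture and data pipeline are not public:

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

# Stand-in model: a small CNN mapping a flow field O_t to O_{t+1}
# serves as a placeholder for the (unpublished) MNO architecture.
model = nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.GELU(),
                      nn.Conv2d(32, 2, 3, padding=1))

# Placeholder data: random (flow_t, flow_{t+1}) pairs shaped like 64x64 flow fields.
flows = torch.randn(100, 2, 64, 64)
loader = DataLoader(TensorDataset(flows[:-1], flows[1:]), batch_size=5, shuffle=True)

criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=5e-4)            # lr = 0.0005
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)

for epoch in range(60):
    for flow_t, flow_t1 in loader:
        optimizer.zero_grad()
        loss = criterion(model(flow_t), flow_t1)               # one Markov step
        loss.backward()
        optimizer.step()
    scheduler.step()                                           # halve lr every 10 epochs
```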
A. Evaluation Metrics
FlowMNO's performance is assessed using two key evaluation metrics:
1) Average Displacement Error (ADE): ADE measures the average Euclidean distance between the predicted and ground-truth pedestrian positions over multiple time steps. It quantifies the accuracy of FlowMNO's trajectory predictions by considering the entire prediction horizon.
2) Final Displacement Error (FDE): FDE quantifies the accuracy of FlowMNO's predictions at the final time step. It calculates the Euclidean distance between the predicted position at the last time step and the ground-truth position, providing insight into the model's ability to make accurate long-term predictions.
These evaluation metrics offer a comprehensive assessment of FlowMNO's performance across various datasets and scenarios, enabling a quantitative analysis of its prediction accuracy.
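These two metrics have standard definitions; a small sketch, assuming trajectories stored as (T, 2) arrays of (x, y) coordinates:

```python
import numpy as np

def ade(pred, gt):
    """Average Displacement Error: mean Euclidean error over all time steps."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))

def fde(pred, gt):
    """Final Displacement Error: Euclidean error at the last time step."""
    return float(np.linalg.norm(pred[-1] - gt[-1]))

pred = np.array([[0.0, 0.0], [1.0, 0.9], [2.1, 2.0]])
gt   = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
print(ade(pred, gt), fde(pred, gt))
```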
B. Results
The evaluation results of FlowMNO on datasets such as ETH and HOTEL [26], as well as ZARA1, ZARA2, and UCY [27], using the established metrics, are presented in Table I. Our observations indicate that modeling pedestrian trajectory prediction as a Markovian process yields promising results. Additionally, we conducted experiments on the RGB-D pedestrian dataset [71], where we achieved notable performance with an ADE of 0.03 and an FDE of 0.04. These findings underscore the effectiveness of our approach in various real-world scenarios.
To further emphasize FlowMNO's performance, we calculated the reduction in Average Displacement Error (ADE) compared to other state-of-the-art models. FlowMNO significantly outperforms these models, achieving substantial reductions in ADE. Table II provides a clear comparison of FlowMNO's performance against several other state-of-the-art models in predicting pedestrian trajectories. The "ADE Reduction (%)" column indicates how much FlowMNO reduces the ADE compared to each model, with higher percentages representing better performance. On average, FlowMNO outperforms these models by approximately 86.46%. These significant improvements highlight FlowMNO's capability to provide highly accurate pedestrian trajectory predictions, making it a valuable tool for enhancing safety and efficiency in various trajectory prediction applications.
VII. CONCLUSION
In this study, we introduced FlowMNO, a novel framework combining Markov Neural Operators (MNOs) and the Farneback optical flow estimation method for pedestrian trajectory prediction. FlowMNO predicts future pedestrian positions (t+1) based solely on the current time step (t), reducing the need for extensive historical data storage. This characteristic, coupled with FlowMNO's ability to model pedestrian movement as a chaotic system, holds great promise for real-time applications, such as autonomous navigation and crowd management.
Nevertheless, FlowMNO faces certain challenges. The computational demands of MNOs may pose obstacles for resource-constrained robots, demanding exploration of real-time optimization and hardware acceleration. Additionally, reliance on optical flow estimation may introduce inaccuracies in scenarios with obstructions or complex scenes, impacting prediction accuracy. To improve FlowMNO's practicality, future research should focus on strategies like real-time optimization, sensor fusion to overcome optical flow limitations, model interpretability enhancements, data augmentation for generalization, and hybrid model integration to handle diverse scenarios effectively. Online learning mechanisms can further empower FlowMNO to adapt dynamically, solidifying its role in autonomous systems for enhanced safety and efficiency in urban environments. Thus, future research endeavors should focus on refining FlowMNO's robustness, adaptability, and real-time capabilities to fully harness its potential. | 2023-09-19T01:01:28.143Z | 2023-09-17T00:00:00.000 | {
"year": 2023,
"sha1": "58e8b22237503ad6c8f81e1405ceb8cc7b4b762b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "61bbb3764bc54bf76908e180cdf879668a186253",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
257024442 | pes2o/s2orc | v3-fos-license | KRAS(G12C)–AMG 510 interaction dynamics revealed by all-atom molecular dynamics simulations
The first KRAS(G12C)-targeting inhibitor in clinical development, AMG 510, has shown promising antitumor activity in clinical trials. On the molecular level, however, the interaction dynamics of this covalently bound drug-protein complex have remained undetermined. Here, we disclose the interaction dynamics of the KRAS(G12C)-AMG 510 complex by long timescale all-atom molecular dynamics (MD) simulations (75 μs in total). Moreover, we investigated the influence on this complex of the recently reported post-translational modification (PTM) of KRAS' N-terminus, removal of the initiator methionine (iMet1) with acetylation of Thr2. Our results demonstrate that AMG 510 does not entrap KRAS in a single conformation, as one would expect based on the crystal structure, but rather in an ensemble of conformations. AMG 510 binding is extremely stable regardless of the highly dynamic interface of KRAS' switches. Overall, the KRAS(G12C)-AMG 510 complex partially mimics the native dynamics of GDP-bound KRAS; however, AMG 510 stabilizes the α3-helix region. N-terminally modified KRAS displays similar interaction dynamics with AMG 510 as when Met1 is present, but this PTM appears to stabilize the β2-β3-loop. These results provide novel conformational insights at the molecular level into KRAS(G12C)-AMG 510 interactions and dynamics, providing new perspectives for RAS-related drug discovery.
In the KRAS(G12C)-AMG 510 structure, the atomic displacement parameter, also known as the B-factor, which indicates the atomic fluctuations in the crystal 18 , displays the highest values in switch-II and at the end of helix-α3, which is connected to a disordered loop region (D105-E107) (Fig. 1C). Conversely, based on the B-factor, the end region of switch-I, the interswitch region and the beginning of switch-II appear relatively stable. Crystal contacts, however, may influence switch conformations and stability, which is in fact a commonly shared characteristic among KRAS structures that display ordered switches 15 . Indeed, a closer inspection revealed crystal contacts on top of the stabilized switch regions (Fig. 1D). This suggests the possibility that the crystal contacts play a decisive role in the switch conformations observed in the KRAS(G12C)-AMG 510 structure.
Ligand binding may change the free energy landscape of its target protein, which defines the conformations, and their frequencies, that the protein populates 19 . A discrepancy in KRAS conformational dynamics between G12C-targeting covalent inhibitors was revealed by hydrogen/deuterium-exchange mass spectrometry (HDX MS) 16 . (Figure 1 caption, continued: (B) AMG 510 (green) binds to the SII-P and exploits a cryptic pocket formed by H95/Y96/Q99. (C) B-factor values of the KRAS(G12C)-AMG 510 structure; a thick red ribbon indicates a high B-factor value, a thin blue one a low value. Cα-atoms of the start and end residues of higher B-factor regions are highlighted with cyan spheres (regions: G0-T2; F28-T35; Y40-S51; A59-R68; H95-P110; L120-V125; I163-H166). The disordered region of the structure, D105-E107, is indicated with a dashed red line. (D) Observed crystal contacts in the KRAS(G12C)-AMG 510 structure; residues within 4 Å of the protein are shown as sticks together with their molecular surface (yellow). In (B, D), the KRAS switch regions switch-I (residues 25-40) and switch-II (residues 58-72) are highlighted in red and blue, respectively; C12 is highlighted in orange.) KRAS' N-terminus can be post-translationally modified by excision of the initiator methionine together with acetylation of Thr2 [20][21][22] . Based on the published structures by Dharmaiah et al. 20 , acetylation of T2 is crucial for stabilizing the KRAS conformation when the initiator M1 is excised from the structure. The acetyl group displays a water-mediated contact to residues located in the β2-β3 sheets in the interswitch region. No MD simulations of KRAS with this PTM have been reported; therefore, its influence on KRAS dynamics remains somewhat unclear.
Here, we utilized long timescale MD simulations to investigate the dynamic interactions of the KRAS(G12C)-AMG 510 complex. Our simulations suggest, in contrast to what is observed in the crystal structure of the KRAS(G12C)-AMG 510 complex, that the switches are not in a closed, defined configuration but rather appear in an ensemble of conformations. Furthermore, we investigated for the first time the effect of the N-terminal PTM on KRAS dynamics by MD simulations. Interestingly, our data suggest that this PTM further stabilizes KRAS in the interswitch β2-β3-loop compared to KRAS with M1. Overall, the interactions observed in the crystal structure (Fig. 1B; Supplementary Fig. S1) are well maintained throughout the MD simulations of the Full (M1-H166) systems (Fig. 2A). A similar interaction profile is also displayed by the N-terminally modified NAc (AcT2-H166) systems (Fig. 2B). However, dissimilarities in interactions between the crystal structure and the MD simulations exist.
KRAS
In the energy-minimized crystal structure, the carbonyl group of the azaquinazoline displays interactions with two water molecules, one forming a putative water bridge to Q61. However, throughout the simulations no conserved interactions are observed for this carbonyl oxygen. Similarly, among the polar heteroatoms of AMG 510, the aromatic nitrogens at the solvent interface do not display any conserved interactions during the simulations, whereas the pyridine nitrogen interacts with water in the crystal structure. Nevertheless, their role in interacting with water is quite evident due to their location at the solvent interface. These short-lived, non-conserved interactions indicate that the solvent is disordered at this AMG 510 interface.
Despite the increased solvent exposure compared to the crystal structure, AMG 510 is extremely stable throughout the simulations (Fig. 2C,D, Supplementary movies). This implies that a closed switch configuration is not decisive for the stability of the compound at the SII-P binding site.
AMG 510 influences on KRAS dynamics. Next, we ascertained whether AMG 510 has an influence on KRAS protein dynamics. Based on the protein backbone root-mean-square fluctuation (RMSF) values, the overall dynamics of the KRAS-AMG 510 complex appear similar to those observed previously for GDP-bound KRAS in long timescale simulations 2 (Fig. 3). However, differences are also evident. Switch-II region fluctuations are reduced compared to the values observed for KRAS bound to GDP only, indicating a stabilization of this switch by AMG 510 (Fig. 3A). Yet, this switch-II stabilization is not as evident with the NAc systems (Fig. 3B). Residues forming the cryptic H95/Y96/Q99 pocket are remarkably stabilized in both systems. Moreover, AMG 510 stabilizes not only these pocket residues with direct interactions, but also the whole α3-helix and part of the loop after the helix (residues N86-E107) (Fig. 3C). This stabilization is lower in the NAc systems compared to the Full systems, but still exists. Also, the NAc systems exhibit a trend of higher fluctuation in the end-part of switch-I (D33-S39) compared to GDP-only or Full systems. In contrast, the NAc systems display remarkable stability in the loop between the β2 and β3 sheets (β2-β3-loop) (Fig. 3D). This observation suggests a clear influence of this PTM on the dynamics of the β2-β3-loop in the interswitch region.
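For readers who wish to reproduce this type of analysis, a minimal per-residue backbone RMSF computation could look as follows. This is only a sketch: the file names are hypothetical placeholders, and the paper itself performed its analyses with Maestro tools rather than the MDAnalysis library used here.

```python
# Minimal RMSF sketch with MDAnalysis (hypothetical input files).
import MDAnalysis as mda
from MDAnalysis.analysis import align, rms

u = mda.Universe("kras_amg510.pdb", "kras_amg510.xtc")

# Remove global rotation/translation by aligning on the C-alpha atoms.
avg = align.AverageStructure(u, select="protein and name CA").run()
align.AlignTraj(u, avg.results.universe, select="protein and name CA",
                in_memory=True).run()

calphas = u.select_atoms("protein and name CA")
rmsf = rms.RMSF(calphas).run()
for resid, value in zip(calphas.resids, rmsf.results.rmsf):
    print(resid, value)  # per-residue backbone fluctuation in Angstrom
```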
AMG 510 does not stabilize the KRAS switches in the crystal conformation. Based on the RMSF values, the switch regions of KRAS appear highly flexible. Next, we further examined the locations of these regions in the simulations compared to the crystal structure. To this end, we monitored the distances from C12 to selected reference residues: F28, D33 and T35 in switch-I, and Q61, E62 and Y64 in switch-II (Fig. 4). Compared to the KRAS(G12C)-AMG 510 crystal structure, the simulations indicate clearly larger distances, especially at the connection interface of the switches: at the end of switch-I and the beginning of switch-II. This is demonstrated by the larger C12-T35 and C12-Q61 distances (Fig. 4C,D). In contrast, the distances from C12 to the beginning of switch-I (F28) and to the end of switch-II (Y64) are in better agreement with the crystal structure. These results indicate that the switches clearly prefer more open conformations, particularly at the connection interface of the switches.
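A distance analysis of this kind can be scripted directly; the sketch below assumes hypothetical trajectory files and measures Cα-Cα distances from residue 12 to the reference residues named above.

```python
# Sketch: Calpha distances from residue 12 to the reference residues
# (hypothetical input files; distances in Angstrom).
import numpy as np
import MDAnalysis as mda

u = mda.Universe("kras_amg510.pdb", "kras_amg510.xtc")
ref_ids = [28, 33, 35, 61, 62, 64]  # F28, D33, T35, Q61, E62, Y64
c12 = u.select_atoms("protein and name CA and resid 12")
refs = {i: u.select_atoms(f"protein and name CA and resid {i}") for i in ref_ids}

series = {i: [] for i in ref_ids}
for ts in u.trajectory:
    for i in ref_ids:
        series[i].append(np.linalg.norm(c12.positions[0] - refs[i].positions[0]))

for i in ref_ids:
    print(i, np.mean(series[i]), np.std(series[i]))
```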
The KRAS(G12C)-AMG 510 complex populates an ensemble of conformations. The switch regions display highly dynamical characteristics in the MD simulations that deviate from the conformation observed in the KRAS(G12C)-AMG 510 crystal structure. Based on the data presented above, the switches prefer more open states compared to the crystal structure, however with ambiguous configurations. To obtain more comprehensive information on their conformational distribution, we utilized the Markov state modelling (MSM) approach 24,25 . MSM is an optimal method to disclose the relevant conformational states of a biomolecule from long timescale simulation data.
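For orientation, the MSM construction described in Methods could be sketched with PyEMMA 2 roughly as follows. The file names, lag times, TICA step and cluster count are hypothetical placeholders, not the settings used in this work; the paper specifies the featurization and the Bayesian estimation, but not these auxiliary choices.

```python
# Sketch of a Bayesian MSM with PyEMMA 2 (hypothetical files and parameters).
import pyemma

# Featurize: backbone torsions of the switch regions and AMG 510 contact residues.
feat = pyemma.coordinates.featurizer("kras_amg510.pdb")
feat.add_backbone_torsions(
    selstr="resid 25 to 40 or resid 58 to 72 or resid 95 or resid 96 "
           "or resid 99 or resid 103",
    cossin=True)
data = pyemma.coordinates.load(["run1.xtc", "run2.xtc"], features=feat)

tica = pyemma.coordinates.tica(data, lag=100)             # dimension reduction
clusters = pyemma.coordinates.cluster_kmeans(tica, k=200, max_iter=50)

# Validate the lag time with implied timescales, then estimate the MSM.
its = pyemma.msm.its(clusters.dtrajs, lags=[10, 50, 100, 200], errors="bayes")
msm = pyemma.msm.bayesian_markov_model(clusters.dtrajs, lag=100)

msm.pcca(4)                                               # four metastable states
print(msm.stationary_distribution)
print(msm.mfpt(msm.metastable_sets[0], msm.metastable_sets[1]))  # mean first passage time
```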
The MSM of the Full (M1-H166) systems revealed four metastable states (S1-S4) for the KRAS(G12C)-AMG 510 complex (Fig. 5). Overall, it is quite evident that all these states deviate from the crystal structure conformation (Fig. 5C). In the metastable states S1 and S2, switch-II forms a helical conformation that is oriented perpendicular to the α2-helix. This type of helical switch-II configuration is observed in many published KRAS crystal structures (Supplementary Fig. S4) (e.g. PDB IDs: 6quu 26 ; 6gj7 27 ; 6mqg 28 ; 6bof 29 ; 6qux 26 ; 6quw 26 ; 6quv 26 ). Overall, the S1 and S2 conformations appear similar; however, in S2 switch-I adopts a more open conformation at the beginning of the switch (residues D30-D33) and the α2-helix is more stable in this state.
In the second most populated state, S3, the end-part of switch-I (starting from T35) exists in a fully open conformation (Fig. 5C). This conformation resembles the KRAS conformation observed in the SOS1-KRAS complex (Supplementary Fig. S5) (PDB ID: 6epl) 30 . In fact, this might be a native pre-bound configuration of switch-I for GDP-bound KRAS that is required for SOS1 binding and facilitates the nucleotide exchange. Furthermore, S3 displays a specific helical configuration at the beginning of switch-II (D57-Q61).
The most populated metastable state, S4, exhibits a loop-like configuration of switch-II. The switch-I conformation displays higher variability within this metastable state compared to the other states, as both closed and open conformations are observed. These variable switch-I configurations probably reflect the multiple energy minima that are included within this state (Fig. 5A).
Transitions between these metastable states occur on the microsecond timescale, with mean first passage times of the transitions varying between 1.76 μs (from S3 to S4) and 16.79 μs (from S4 to S2) (Supplementary Fig. S6).
Discussion
Our simulations reveal novel atomic scale insights into a KRAS(G12C)-targeting inhibitor in clinical development. Surprisingly, AMG 510 does not fix the KRAS switches in a closed conformation, as would be expected based on the crystal structure. Instead, in the simulations this complex exists in an ensemble of open conformations, as confirmed by MSM. In fact, the KRAS(G12C)-AMG 510 complex appears to generally follow the native dynamics of GDP-bound KRAS. The conformations observed for the switches in the KRAS(G12C)-AMG 510 crystal structure are most likely induced by neighbouring proteins in the crystal environment (crystal contacts). This is indeed a common feature observed in all KRAS structures that display ordered switch conformations 15 . These open states of the KRAS switches have also been observed in other long timescale simulations with a membrane, where a different force field was used 31 .
Regardless of the switch fluctuations, AMG 510 appears extremely stable throughout the simulations. This occurs even though the switches do not fully shield AMG 510 from the solvent, implying that an inhibitor bound to the SII-P needs to be optimized for a dynamic environment. In fact, the observed switch fluctuations, which occur even when an inhibitor is bound, may provide a putative explanation for difficulties in the interpretation of structure-activity relationship (SAR) data for KRAS-binding ligands. An additional interesting feature is that, through a water-bridged interaction to D33, AMG 510 can have a direct connection to and influence on switch-I. This adds an additional layer of complexity to KRAS-inhibitor interactions.
Although the switch regions of KRAS are highly dynamic, other parts of the protein appear more fixed. Especially the cryptic pocket that is exploited by AMG 510 is extremely stable throughout the simulations. Interestingly, one metastable state of KRAS(G12C)-AMG 510, state S3, resembles a pre-nucleotide-exchange conformational state. Although the switch-II conformation in this state, with an inhibitor bound to the SII-P, disallows binding of SOS1, it may be important for the efficacy of an inhibitor to allow KRAS to visit these native conformations.
To the best of our knowledge, we report here for the first time MD simulations with N-terminally processed KRAS. As shown by Dharmaiah et al. 20 , N-acetylation of T2 stabilizes KRAS dynamics upon the excision of M1. Here, with AMG 510, the dynamics of the full N-terminus with iMet and of the N-acetylated (T2) form also appear comparable, agreeing with the reported structural observations. Furthermore, the interaction profile of AMG 510 is similar with or without this PTM (Fig. 2). Therefore, the N-terminal modification of KRAS is not expected to affect the binding of SII-P inhibitors, as is clearly demonstrated by the observed efficacy of AMG 510 12,23 . Nevertheless, this PTM appears to stabilize KRAS dynamics in the loop region that connects beta-sheets β2 and β3 (residues 45-50). This β2-β3-loop may play a crucial role in KRAS dynamics at the membrane, as based on the NMR-data-driven structure of the 'exposed' configuration, this loop exists in direct contact with the membrane (PDB ID: 2msc) 35 . Moreover, in a long timescale simulation study of KRAS at the membrane, this orientation was frequently populated 31 . Recent NMR-data-driven models of KRAS-RAF on nanodiscs suggest that this loop is in contact with the CRD domain of RAF (PDB IDs: 6pts, 6ptw) 36 . In a study with HRAS, mutations of residues in this region (D47A, E49A) were shown to enhance RAS nanoclustering and its signalling activity 37 . Also, the NRAS T50I mutation, which leads to increased signalling, is observed in Noonan syndrome 38 . These results highlight the fact that it is important to take into account the possibility that N-terminally processed KRAS may behave differently at the membrane compared to full-length KRAS with M1 present, which has been used in most studies. Therefore, future research should clarify whether this N-terminal PTM affects KRAS dynamics at the membrane.
The dynamics observed here generally follow what has previously been reported for KRAS. As always in MD simulations, the selected methodology could, however, bias the observed dynamics (see details in Methods). For instance, the force field applied here is not a polarizable force field 39 , which could influence the results.
Overall, these results provide novel atomic-level insights into the KRAS(G12C)-inhibitor complex. First, they suggest that KRAS exists in an ensemble of conformations when AMG 510 is bound. Second, they indicate that a KRAS-targeting inhibitor should be optimized for a more solvent-exposed, dynamic environment than what a crystal structure suggests, as the switches appear dynamic even when an inhibitor is present in the SII-P. Finally, the potential influence of the N-terminal PTM of KRAS at the membrane needs to be clarified in the future. A proper understanding of KRAS and its conformational dynamics is crucial, especially when targeting other mutant forms of this oncoprotein which lack the cysteine for covalent inhibition.

Methods

MD simulations. The systems were modelled with the OPLS3e force field 41,42 . OPLS3e-generated parameters were applied for the ligands, GDP and AMG 510. For the simulations, we utilized the complex of AMG 510's lead molecule with KRAS (PDB ID: 6p8y 43 ). Engineered mutations of the structure were reverse mutated back to the native wild-type form (C51S; C80L; C118S). The ligand was manually changed to AMG 510 and the complex was optimized and energy-minimized with the Protein Preparation Wizard 44 . Additionally, for the PTM NAc systems, M1 was deleted and the resulting terminal T2 was acetylated. Simulations were run using the Desmond MD engine 45 . The minimized KRAS(G12C)-AMG 510 complexes were solvated in cubic boxes with a minimum distance of 15 Å from the protein. Water was described with the TIP3P water model 46 . K+ and Cl− ions were added to obtain 0.15 M ionic strength with a neutral total net charge. MD simulations were run in the NpT ensemble (T = 310 K, Nosé-Hoover method; p = 1.01325 bar, Martyna-Tobias-Klein method) with default Desmond settings. A RESPA integrator with 2 fs, 2 fs and 6 fs time steps was used for bonded, near and far interactions, respectively. The default value of 9 Å was used for the Coulombic cutoff.
Five replicate simulations for each system were run using different seed numbers. The Full (M1-H166) KRAS(G12C)-AMG 510 systems were simulated for 10 μs each and the NAc systems for 5 μs each, resulting in totals of 50 μs and 25 μs of aggregate simulation, respectively. A default Desmond relaxation protocol was applied before the production simulations.
Maestro tools were used for all of the analyses, except for the MSM (see MSM details below). Visualization of the structures was done with PyMOL 47 .
Markov state model generation. The MSM generation was conducted with PyEMMA 2 48 . A Bayesian MSM was estimated following the general recommendations 49 . The individual trajectories of the Full (M1-H166) systems were used as input for the MSM generation. For featurization, we used the backbone torsions of the residues from switch-I and switch-II together with selected residues in contact with AMG 510 (switch-I residues 25-40, switch-II residues 58-72, and residues 95, 96, 99, 103). During the decision process of which residues were selected to be included in the final model, we closely monitored the implied timescales of the resulting models.

[Figure 5 caption fragment: Each metastable state (Si) is illustrated with three representative structures (coloured cartoons) and the crystal structure conformation of the KRAS(G12C)-AMG 510 complex is shown as a reference in grey. The equilibrium probability (πi) for each state is indicated below the conformations, together with circles whose area is proportional to the state probability.]

| 2023-02-20T15:09:38.011Z | 2020-07-20T00:00:00.000 | {
"year": 2020,
"sha1": "07b915a4f49643e253f3ed55f80af4006b4a74b1",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-020-68950-y.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "07b915a4f49643e253f3ed55f80af4006b4a74b1",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": []
} |
258331792 | pes2o/s2orc | v3-fos-license | Curvature effects in the spectral dimension of spin foams
It has been shown in [1] that a class of restricted spin foam models can feature a reduced spectral dimension of space-time. However, it is still an open question how curvature affects the flow of the spectral dimension. To answer this question, we consider another class of restricted spin foam models, so called spin foam frusta, which naturally exhibit oscillating amplitudes induced by curvature, as well as an extension of the parameter space by a cosmological constant. Numerically computing the spectral dimension of $1$-periodic frusta geometries using extrapolated quantum amplitudes, we find that quantum effects lead to a small change of spectral dimension at small scales and an agreement to semi-classical results at larger scales. Adding a cosmological constant $\Lambda$, we find additive corrections to the non-oscillating result at the diffusion scale $\tau\sim 1/\sqrt{\Lambda}$. Extending to $2$-periodic configurations, we observe a reduced effective dimension, the form of which sensitively depends on the values of the gravitational constant $G$ and the cosmological constant $\Lambda$. We provide an intuition for our results based on an analytical estimate of the spectral dimension. Furthermore, we present a simplified integrable model with oscillating measure that qualitatively explains the features found numerically. We argue that there exists a phase transition in the thermodynamic limit which crucially depends on the parameters $G$ and $\Lambda$. The dependence on $G$ and $\Lambda$ presents an exciting opportunity to infer phenomenological insights about quantum geometry from measurement of the spectral dimension, in principle. [1]: S. Steinhaus and J. Th\"urigen, Emergence of Spacetime in a restricted Spin-foam model, Phys. Rev. D 98 (2018) 026013
Introduction
Deriving observable consequences is crucial for understanding the structure of any quantum theory of gravity and of quantum space-time itself, especially when formulated using discrete structures, such as in the approaches of loop quantum gravity [2,3], spin foams [4], tensorial group field theories [5][6][7], causal dynamical triangulations (CDT) [8,9] and causal set theory [10,11]. The spectral dimension, derived from the spectrum of the Laplacian on quantum space-time, has proven to be an informative phenomenological quantity.
This observable provides a notion of an effective dimension of emergent space-time on different scales. On the one hand, it allows for a consistency check that the space-times obtained from quantum gravity exhibit the observed dimension of D = 4 at large length scales. This is already highly non-trivial, as various approaches only yield fractal or two-dimensional geometries [12,13]. On the other hand, new small-scale effects beyond classical continuum gravity, sourced by the quantum nature of space-time, can be examined. Most prominently, a dimensional flow to values 0 < D < 4 at small scales is an effect present in many quantum gravity approaches [1,[14][15][16][17][18][19][20][21][22][23][24][25][26][27][28], possibly leaving observable traces in gravitational wave astronomy [29]. In any case, this phenomenon is interesting as it allows one to compare conceptually different approaches at small scales. Investigating the existence and structure of such a dimensional flow in spin foam quantum gravity [4], specifically using the restricted setting of spin foam frusta models [30,31], is the main objective of this article.
Spin foam models provide a path integral quantization of discretized geometries, where the microscopic gravitational degrees of freedom are encoded in group representation labels and intertwiners. Although the model-defining amplitudes are defined rigorously and consistently on the quantum level, challenges remain in turning spin foam quantum gravity into a consistent computational formalism. While there is a clear relation to semi-classical discrete gravity on a local level [32][33][34][35], exploring the semi-classical regime of extended discrete structures is a subject of active research [36][37][38][39][40][41][42]. Moreover, a consistent quantum theory of gravity needs to be independent of any discretization chosen in the quantization procedure. Discretization independence of spin foams can be attained either in the group field theory formalism [5][6][7] or via a spin foam renormalization procedure [43][44][45]. Furthermore, in recent years there has been promising progress in numerical methods [46] that tackle the challenge of performing calculations in spin foams from different angles [35-39, 41, 42, 47-51]. Defining and explicitly computing the spectral dimension in spin foam models makes contact with these open questions and serves as a coarse characterization of the quantum space-times obtained therein.
A first attempt to determine the spectral dimension D_s of spin foams was made in [1] in the setting of spin foam cuboids [51][52][53][54], a restricted subclass of the Euclidean Engle-Pereira-Rovelli-Livine/Freidel-Krasnov (EPRL-FK) model [55,56] defined within the Kaminski-Kisielowski-Lewandowski extension to general 2-complexes [57]. Therein, a restriction to N-periodic geometric configurations proved crucial for numerically feasible calculations. At finite N, the spectral dimension shows an intermediate value which depends on the choice of a face amplitude parameter α. In the limit N → ∞, the analytical results of [1] suggest that there is a phase transition from D_s = 0 to D_s = 4 at the point α* where the amplitudes become scale-invariant.
Although an important step towards control of the spectral dimension of spin foam quantum gravity, the results on spin foam cuboids [1] come with two major limitations. First, the spectral dimension is computed utilizing semi-classical amplitudes which capture only the large scale behaviour correctly. Second, and most importantly, spin foam cuboids are inherently flat, reflected in the fact that the semi-classical action vanishes on critical points. As a consequence, cuboid amplitudes exhibit a simple semi-classical scaling behaviour without curvature-induced oscillations. Overcoming these limitations is the main purpose of the present work.
Since the computation of the spectral dimension for the full EPRL-FK model currently exceeds computational capacities, we choose the so-called spin foam frusta model, introduced in [30] and further examined in [31,51,58], to include curvature effects and a non-vanishing cosmological constant. Though the underlying combinatorial structure is a hypercubic lattice, the intertwiners of the theory are restricted in such a way that the 4-dimensional building blocks semi-classically describe 4-frusta. These building blocks can be understood as a 4-dimensional generalization of regular trapezoids, with 3-cubes as base and top connected by six boundary 3-frusta. It has been shown in [30] that semi-classical spin foam frusta exhibit curvature if the 3-cubes are of different size. Due to their strongly restricted geometry, frusta exhibit a discrete analogue of spatial homogeneity. Thus, frusta are promising candidates for a cosmological subsector of spin foam models [30]. At the same time, this homogeneity leads to simplifications of the Laplace operator, providing a feasible setting for numerical computations of the spectral dimension.
Extrapolating 1-periodic frusta quantum amplitudes and computing the spectral dimension therewith, we discover a non-trivial flow of the spectral dimension, exhibiting an intermediate value which is controlled by the face amplitude parameter α. Compared to a semi-classical analysis, the quantum amplitudes lead to an additive correction to the spectral dimension at small scales. Adding a cosmological constant Λ and thus introducing oscillations to the amplitudes, we find additive corrections to the spectral dimension at diffusion scales τ ∼ 1/ √ Λ. Supported by analytical calculations, we can explain these effects from quantum amplitudes and Λ-oscillations in terms of an effective scaling behaviour of the amplitudes. For 2-periodic frusta amplitudes we observe an intricate dependence of the dimensional flow on Newton's constant G and Λ, showing the significant role of curvature. On these grounds, we argue for a phase transition in the large-N limit that crucially depends on the parameter values of α, G and Λ.
We begin this article by setting up the spin foam frusta model and its semi-classical limit, and by defining the Laplace operator on frusta geometries. Thereafter, we present the concept of N-periodicity, which has already been introduced in [1], and provide the formulas for computing the quantum spectral dimension. In Sec. 3, we provide a method to extrapolate 1-periodic quantum amplitudes. Using the resulting extrapolated amplitudes, we compute the 1-periodic spectral dimension in Sec. 4.1. Proceeding with semi-classical amplitudes, we extend the analysis of the spectral dimension to include a cosmological constant in Sec. 4.2 and finally to 2-periodic configurations in Sec. 4.3. Based on the analytical estimate of the spectral dimension presented in Sec. 4.4, we provide a broader discussion of our results in Sec. 5 and conclude our work in Sec. 6 with an outlook on possible future studies.
Spin foam hyperfrusta and the spectral dimension
In this section we provide a brief introduction to the setting of the present work and, in particular, define the spectral dimension as a spin foam observable. Giving first the relevant formulas for classical hyperfrusta, we introduce spin foam frusta as restrictions of the Euclidean EPRL-FK model. Thereafter, we set up the classical spectral dimension in a discrete setting and translate these ideas to spin foam hyperfrusta.
Spin foam frusta
In the present work, we employ the Euclidean EPRL-FK model [55,56] with a Barbero-Immirzi parameter γ bi < 1. Originally defined on a 2-complex Γ dual to a triangulation, this model was later generalized to arbitrary 2-complexes [57,59], which includes in particular the hypercuboidal structure we intend to work with. In the following, we introduce all the defining ingredients of the EPRL-FK model.
Definition of spin foam frusta
Given a 2-complex Γ with vertices v, edges e and faces f, we associate to it the following partition function

Z_Γ = Σ_{j_f, ι_e} Π_f A_f(j_f) Π_e A_e(j_f, ι_e) Π_v A_v(j_f, ι_e) .    (2.1)

The variables j_f denote irreducible SU(2)-representations, frequently referred to as spins, and are assigned to the faces f of Γ. The ι_e are SU(2)-intertwiners, i.e. elements in the invariant subspace of the tensor product of representations meeting at the edge e. A_f, A_e and A_v are the face, edge and vertex amplitudes, respectively, and we define them in the following. Characteristic for Euclidean gravity, the local gauge group is SO(4), or its double cover Spin(4) ≅ SU(2) × SU(2), with SU(2) being a subgroup. A distinctive property of the Euclidean EPRL-FK model is the simplicity constraint, which provides an embedding Y_γbi of SU(2)- into Spin(4)-representations [55]. For γ_bi < 1, the explicit relation between SU(2) spins j and Spin(4)-labels (j⁺, j⁻) is given by

j^± = (1 ± γ_bi) j / 2 ,    (2.2)

with the condition that the j^± are half-integers. For this map to be non-empty, γ_bi is required to be rational. Given a certain range of spins with N_spin configurations in total, only a certain number N(γ_bi) is allowed. 1 A plot of the ratio of N(γ_bi) and N_spin is given in Fig. 1. Different face amplitudes are proposed in the literature 2 , the choice of which has direct consequences for the critical behaviour of the partition function [61], as well as for the spectral dimension [1]. We parametrize this ambiguity by α ∈ R as

A_f(j_f) = [(2 j_f^+ + 1)(2 j_f^- + 1)]^α .    (2.3)

1 The ratio N(γ_bi)/N_spin for γ_bi = p/q ∈ Q is either 1/q or 1/2q, depending on whether q ± p is even or odd, which explains the asymmetry in Fig. 1 (e.g. for q = 5 we have ratio 1/5 for p = 1, 3, while 1/10 for p = 2, 4). More precisely, let q ∈ N and p ∈ {1, 2, ..., q − 1} with p ⊥ q (coprime; otherwise there are smaller p′, q′ with γ_bi = p/q = p′/q′ to be considered). Then the EPRL condition, Eq. (2.2), yields (q ± p) n / (2q) ∈ N. Write (q ± p)/(2q) = r/s with r ⊥ s. Then the condition is fulfilled if n/s ∈ N, that is, for every s-th spin. Thus, the ratio is N(γ_bi)/N_spin = 1/s, and there are two cases for given q: either s = q or s = 2q. To see this, note first that q + p ⊥ 2q iff 2q − (q + p) = q − p ⊥ 2q; thus the two cases ±γ_bi give the same s. Then, q + p and 2q have a common divisor iff q + p is even, in which case s = q, and this happens for every second p for a given q.
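The counting in footnote 1 can be verified directly; the following minimal sketch uses exact rational arithmetic to evaluate the fraction of spins satisfying Eq. (2.2) for a given γ_bi = p/q (the spin cutoff is an arbitrary illustrative choice).

```python
# Fraction of SU(2) spins j with j± = (1 ± γ)j/2 both half-integers, for γ = p/q.
from fractions import Fraction

def allowed_fraction(p, q, j_max_twice=2000):
    gamma = Fraction(p, q)
    allowed = total = 0
    for twice_j in range(1, j_max_twice + 1):   # j = 1/2, 1, 3/2, ...
        j = Fraction(twice_j, 2)
        jp = (1 + gamma) * j / 2
        jm = (1 - gamma) * j / 2
        total += 1
        # "half-integer" means 2*j± must be an integer
        if (2 * jp).denominator == 1 and (2 * jm).denominator == 1:
            allowed += 1
    return allowed / total

print(allowed_fraction(1, 3))   # gamma = 1/3: every 3rd spin -> ratio ~ 1/3
print(allowed_fraction(2, 5))   # gamma = 2/5: ratio ~ 1/10, as in footnote 1
```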
2 Following [60], the two most common choices for the face amplitude in the Riemannian setting are either the dimension of the Spin(4)-representation or the dimension of the SU(2)-representation. Viewing the Euclidean EPRL model as a constrained Spin(4)-BF theory, the dimension factor is a result of expanding δ-distributions on Spin(4) in terms of representations. Choosing the SU(2)-dimension instead can be motivated physically, as argued in [60]. The two choices correspond to α = 1 and α = 1/2, respectively. Clearly, the parametrization of Eq. (2.3) constitutes a continuous interpolation between these two choices.

The edge amplitude of the model is introduced as a normalization factor,

A_e(j_f, ι_e) = 1 / ||Y_γbi ι_e||² ,    (2.4)

and the vertex amplitude is defined as

A_v = tr( ⊗_{e ⊃ v} Y_γbi ι_e ) ,    (2.5)

where the trace is understood so that either (Y_γbi ι_e) or (Y_γbi ι_e)† are contracted, depending on the edge e being ingoing or outgoing at the vertex v, respectively.
With the general definition of the model set up, we now introduce two restrictions on the partition function, of combinatorial and of geometrical kind. First, we assume that the 2-complex Γ, and thus also the discretization, is hypercubic. As we will see later in Sec. 2.2.1 and Sec. 2.2.2, respectively, the choice of a hypercubic lattice is convenient for defining a discrete Laplace operator as well as for considering periodic configurations. Notice that this choice of combinatorics does not imply a reduction to hypercubic geometries.
On the geometrical side, we introduce the restriction to regular 4-frustum geometries [30]. The boundary of such configurations consists of two cubes of a priori different size and six equal 3-frusta. It is therefore reasonable to impose a 3-frustum shape on the Livine-Speziale coherent intertwiners [62], since these objects are naturally peaked on the geometry of 3-dimensional polyhedra. 3 An explicit expression for 3-frustum intertwiners is given by

ι_{j₁ j₂ j₃} = ∫_{SU(2)} dg  g ▷ ( |j₁, −ê₃⟩ ⊗ |j₂, ê₃⟩ ⊗ ⊗_{l=0}^{3} |j₃, r̂_l⟩ ) ,    (2.6)

where the case j₁ = j₂ = j₃ reduces ι_{j₁ j₂ j₃} to a cubical intertwiner [63]. In Eq. (2.6), "▷" denotes the action of an SU(2)-group element on coherent states in representation space, ê₃ is the unit vector in R³ along the 3-axis, and r̂_l = e^{−i (π/4) l σ₃} e^{−i (φ/2) σ₂} ▷ ê₃ for l ∈ {0, 1, 2, 3}. A depiction of a regular 3-frustum is given in Fig. 2.

3 Notice that the frustum symmetry reduction is a restriction on the quantum level, as it is imposed on the coherent intertwiners. Such a procedure is distinct from a classical symmetry reduction followed by quantization. We remark further that lifting this restriction is unfortunately not straightforward. As elaborated in Sec. 6, a triangulation offers a more manageable setting to study models with fewer symmetries.

A semi-classical 3-frustum can be understood as the generalization of a regular trapezoid to three dimensions, characterized by the three areas j₁, j₂ and j₃ of the base and top squares and of the bounding trapezoids, respectively. Another convenient parametrization is given by j₁, j₂ and the slope angle φ. The relation between the spins and φ,

cos φ = (j₁ − j₂) / (4 j₃) ,    (2.7)

follows from the closure condition. Clearly, for the slope angle to be well-defined, the condition −1 ≤ (j₁ − j₂)/(4 j₃) ≤ 1 is required to hold. Gluing two cubes and six 3-frusta as indicated above, one obtains a 4-dimensional hyperfrustum. The two 3-cubes lie in separate spatial hypersurfaces at the base and top, connected by the six 3-frusta. A visualization of the unwrapped boundary of a single 4-frustum is given in Fig. 3.
As a whole complex, the discretization can be understood as a slicing, where the nth thick slice is bounded by two spatial hypersurfaces cubulated by 3-cubes with areas j_n and j_{n+1}, respectively, and connected by 3-frusta with spatio-"temporal" areas given by k_n. Due to the gluing conditions, the spins j_n, j_{n+1} and k_n are constant throughout a whole slice. Therefore, spin foam configurations of hyperfrusta are translation invariant in spatial directions. This implies that the volume of cubes changes only between different spatial hypersurfaces, i.e. in the temporal direction. This makes hyperfrustum spin foams well-suited for the description of discrete classical cosmology, for which we refer to [30] for further details.
Asymptotics of spin foam frusta
In general, the quantum amplitudes defined by Eq. (2.5) and Eq. (2.4) are highly involved functions of the representation labels. To find a suitable approximation and to obtain an analytical expression for later purposes, we present in the following the results of an asymptotic analysis [32,33] of the frustum quantum amplitudes [30]. The regime of large representation labels is interpreted as the semi-classical sector of the theory, thereby connecting it to classical discrete gravity [64].
Introducing a uniform rescaling of the spins, j_i → λj_i, and sending λ → ∞, the quantum face amplitude in Eq. (2.3) is approximated by

A_f(λ j_f) ≈ λ^{2α} ( (1 − γ_bi²) j_f² )^α .    (2.8)

To find the asymptotics of the vertex and the edge amplitude A_i, we notice the factorization A_i = A_i⁺ A_i⁻ into SU(2)-amplitudes evaluated on the spins j^±. This is a result of the Y_γbi-map entering Eq. (2.4) and Eq. (2.5), and holds for γ_bi < 1. We rewrite an amplitude A_i^± as an integral of a complex exponential of an action which scales linearly in λ. Then one can apply an extended stationary phase approximation [32] in the limit λ → ∞, which consists of evaluating the integrand on the critical points of the action, multiplied by the inverse square root of the Hessian and some factors of π. Following this procedure for the SU(2)-vertex amplitude, one finds the asymptotic expression of Eq. (2.9) [30], where D is the determinant of the Hessian of the action and G is the gravitational constant, which has been added by hand as in [31]. 4 Given Eq. (2.9), the semi-classical Spin(4)-vertex amplitude follows as Eq. (2.13), where we have introduced a cosmological constant Λ, following the work of [66] and [67]. Therein, the quantum amplitudes are deformed in a heuristic fashion by a real parameter which, in a semi-classical limit, can be related to a cosmological constant of either sign. It enters with the 4-volume V⁽⁴⁾ of a hyperfrustum, defined in Eq. (2.28) below, which is strictly positive and thus does not correspond to a signed volume. 5 Repeating the analysis, the semi-classical SU(2)-edge amplitude is given by Eq. (2.14), and consequently the semi-classical approximation to the Spin(4)-edge amplitude, Eq. (2.4), follows as Eq. (2.15). For general discretizations, an edge is shared by two vertices. Special to a hypercubic lattice, every face is shared by four vertices. These two facts allow one to define a dressed vertex amplitude,

Â_v = A_v Π_{e ⊃ v} A_e^{1/2} Π_{f ⊃ v} A_f^{1/4} ,    (2.16)

such that the amplitude of the whole complex can be written as the product of dressed vertex amplitudes. Even in the presence of oscillations, originating from a non-vanishing cosmological constant or a non-vanishing Regge curvature, the dressed vertex amplitudes behave under uniform rescaling as

Â_v(λ j_n, λ j_{n+1}, λ k_n) ∼ λ^{12α − 9} .    (2.17)

4 In units where ℏ = c = 1, the gravitational constant has the dimension of area. Considering the j_n as mere representation labels, only G j_n has the interpretation of an area. In the majority of the spin foam literature, the spins are implicitly understood to have the dimension of area and therefore implicitly depend on G [31].

5 Notice that this positive quantity is the same quantity that enters the definition of the Laplace operator in Eq. (2.26). The cosmological constant term enters with a cosine in the quantum amplitudes, since the two critical points correspond to the two orientations.
Comparing Â_v to the dressed vertex amplitude of spin foam cuboids [63], we see that this scaling is the same. However, Â_v is not a homogeneous function in the spins, due to the two types of oscillations in Eq. (2.16), which are not present in the case of cuboids. As a consequence, we cannot deduce that α = 3/4 is a point of scale-invariance of the frusta model.
Spectral dimension of spin foam frusta
In this section, we introduce the notion of spectral dimension, first classically on semi-classical frusta geometries. After restricting to so-called N-periodic configurations, we set up the spectral dimension of spin foam frusta as a quantum expectation value.
Spectral dimension and semi-classical geometry of hyperfrusta
The spectral dimension serves as an effective measure of the dimension of a space. In the continuum, consider a Riemannian manifold (M, g) together with the heat kernel K(x, x₀; τ), which is a solution of the heat equation [14]

∂_τ K(x, x₀; τ) = ∆K(x, x₀; τ) .    (2.18)

Here, x, x₀ ∈ M and τ provides a measure of the size of the probed region, often referred to as diffusion time. The classical spectral dimension D_s^cl(τ) is then extracted from the scaling of the return probability

P(τ) = (1/Vol(M)) ∫_M dx √g K(x, x; τ) ,    (2.19)

namely

D_s^cl(τ) = −2 ∂ ln P(τ) / ∂ ln τ .    (2.20)

Clearly, the return probability implicitly depends on the metric g and therefore is a functional of the geometry. Thus, in a quantum gravity path integral picture with partition function

Z = ∫ Dg e^{−S_EH[g]} ,    (2.21)

where S_EH[g] is the Einstein-Hilbert action, the expectation value of the return probability is

⟨P(τ)⟩ = (1/Z) ∫ Dg P_g(τ) e^{−S_EH[g]} .    (2.22)

Consequently, one defines the spectral dimension as

D_s(τ) = −2 ∂ ln⟨P(τ)⟩ / ∂ ln τ .    (2.23)

Notice that we do not compute ⟨D_s^cl(τ)⟩ but define the quantum spectral dimension as the scaling of ⟨P(τ)⟩.
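As a quick sanity check of these definitions, the logarithmic derivative in Eq. (2.20) applied to the flat-space heat trace P(τ) ∝ (4πτ)^{−D/2} returns the topological dimension D at all scales; a minimal numerical sketch:

```python
# Sanity check: D_s(tau) = -2 dlnP/dlntau recovers D for P ~ (4*pi*tau)^(-D/2).
import numpy as np

D = 4
tau = np.logspace(-2, 2, 200)
P = (4 * np.pi * tau) ** (-D / 2)

Ds = -2 * np.gradient(np.log(P), np.log(tau))  # log-derivative on a log grid
print(Ds.min(), Ds.max())                      # both ~ 4.0 for all tau
```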
To translate these continuum notions to the context of spin foams, we now introduce a discrete formulation of the Laplace operator, the return probability and ultimately the spectral dimension. Assuming that the space M is discretized on a hypercubic lattice, we denote vertices in the dual graph Γ by ⃗n ∈ Z⁴. Interpreting the return probability in Eq. (2.19) as the trace over the heat kernel, the discrete return probability is simply given by [68,69]

P(τ) = Σ_{⃗n ∈ Γ} K(⃗n, ⃗n; τ) .    (2.24)

Then, one can rewrite the return probability as [69]

P(τ) = Σ_{ω ∈ spec(∆)} e^{τ ω} ,    (2.25)

where spec(∆) is the spectrum of the discrete Laplacian. Consequently, to obtain the spectral dimension, knowledge of the full spectrum on the whole lattice is required. Following [68], the notion of an exterior derivative and a Laplace operator can be defined on general cellular complexes, using methods of discrete exterior calculus [70]. To that end, one introduces a scalar test field 6 discretized on the complex, which can be placed either on the vertices of the discretization or on its dual vertices. The strategy we follow here is to consider the scalar field φ_⃗n on the dual vertices 7 of the complex Γ. From the continuum perspective, this case can be obtained by integrating a smooth scalar field over the region of a 4-cell. Finally, the discrete Laplacian is defined by its action on the scalar test field φ_⃗n [69],

(∆φ)_⃗n = −(1/V⁽⁴⁾_⃗n) Σ_{⃗m ∼ ⃗n} (V⁽³⁾_⃗m⃗n / ℓ_⃗m⃗n) (φ_⃗n − φ_⃗m) ,    (2.26)

where the sum runs over all adjacent vertices ⃗m, indicated by "∼". The ∆_⃗n⃗m are the coefficients of the discrete Laplacian, which can be split into a diagonal part and a part which is proportional to the adjacency matrix of the complex Γ. V⁽³⁾_⃗m⃗n is the 3-volume dual to the edge connecting ⃗m and ⃗n, ℓ_⃗m⃗n is the corresponding dual edge length, and V⁽⁴⁾_⃗n is the same 4-volume that accompanies the cosmological constant in the vertex amplitude of Eq. (2.13). Going beyond the scope of this work, we do not consider the possibility of working with oriented spin foam amplitudes and with signed 4-volumes in the Laplace operator.
As discussed in detail in [69], the definition of volume and dual edge length and thus that of a discrete Laplacian is not unique in general. For constructing the dual 2-complex, we pursue the following strategy. In 4-dimensional Euclidean space, we orient a 4-frustum along the t-axis such that the two cubes lie in a constant-t hypersurface. Vertices of the dual lattice are then obtained by forming the average of the corner points of the 4-frustum. 6 Importantly, test fields should not be confused with physical fields, minimally coupled to spin foams [54].
In particular, a test field does not have to satisfy spatial homogeneity even if the spin foam is fixed to spatially homogeneous configurations. Also, N -periodicity, which we are going to introduce down below does not need to be fulfilled by ϕ ⃗ n . 7 Other choices work equally well. E.g. in. [54], a scalar field is placed on the vertices of the discretization.
Beyond scalar fields, other tensor or p-form fields might yield different results for "generalized" spectral dimensions, though [25].
Consequently, these points lie on the t-axis at half of the 4-frustum height. Dual edges, "spacelike" and "timelike", are chosen to be orthogonal to the faces of the 4-frustum such that their lengths are minimal. 8 Notice that these definitions do not correspond to a barycentric construction. The distance between barycentric vertices would yield the dual edge lengths we obtain below, rescaled by a constant factor. For the return probability, such a constant pre-factor can be absorbed into the scale τ and thus has no physical effect. In the following, we define the geometric quantities that enter Eq. (2.26), based on the construction of the dual lattice discussed above. Thereby, we adopt the notation in which, for the nth "slice", the spins j_n, j_{n+1} refer to the areas of the lower and upper cubes, respectively, and k_n labels the spatio-temporal area of the boundary 3-frusta.
First, we notice that the four-dimensional dihedral angles imply the following restriction on the possible spin values for each slice [51],

−1/√2 ≤ (j_n − j_{n+1}) / (4 k_n) ≤ 1/√2 ,    (2.27)

posing a stronger condition than closure in Eq. (2.7). The volume and the height of a 4-frustum in the nth slice are given by Eqs. (2.28) and (2.29), respectively. Since the boundary of a 4-frustum consists of two types of building blocks, cubes and 3-frusta, we make a distinction in the following. This can be interpreted as separating the cases of ⃗n and ⃗m having "spacelike" or "timelike" separation. 9

"Timelike" dual edges. The 3-dimensional volumes between the vertices ⃗m and ⃗n, which connect the (n−1)th and nth slices, are simply given by the volume of cubes with area j_n,

V⁽³⁾_⃗m⃗n = j_n^{3/2} .    (2.30)

Following the construction of the dual lattice outlined above, the length of dual edges is given as half of the sum of the heights of the "past" and "future" hyperfrusta,

ℓ_⃗m⃗n = (H_{n−1} + H_n) / 2 .    (2.31)

A visualization of the dual edge and its length in a 3-dimensional analogue is given in Fig. 4.

Figure 4: Three-dimensional representation of a "timelike" dual edge, drawn in red, connecting the midpoints of two timelike separated frusta. From this representation, the duality of timelike edges and cubes (here squares) is immediate.

This defines all the ingredients of the Laplace operator ∆_⃗m⃗n for neighbouring vertices ⃗m, ⃗n which have a "timelike" separation, and we use the notation ∆_{n−1,n} in the following.
"Spacelike" dual edges In "spacelike" direction, so within a given slice n, 4-frusta are connected with each other via boundary 3-frusta. The corresponding 3-volumes are given by (2.32) To obtain the length of "spacelike" dual edges, for which a visualization is given in Fig. 5, it suffices to project the geometry onto the plane spanned by the t-axis and the dual edge. In this picture, the dual edge connects two glued trapezoids and is orthogonal with respect to the connecting face. For Θ the dihedral angle between the (n + 1)-cube and the boundary 3-frustum, defined as [30] Θ n = arccos 1 tan(ϕ n ) , (2.33) the dual edge length is then given by We denote the components of the Laplacian ∆ ⃗ n ⃗ m with vertices ⃗ n and ⃗ m having a "spacelike" separation by ∆ nn .
Note that the definition of the discrete Laplacian requires a semi-classical interpretation of the geometry, as interpreting the spins j n , k n as areas is only valid in a semi-classical regime. However, in the computation of the spectral dimension, we assume that the definition ∆(j n , k n ) holds for arbitrarily small spins, which can be seen as a continuation of the semi-classical Laplacian and provides one possible definition of ∆ in the quantum regime. While there may be many valid definitions of microscopic Laplacians, they must converge to the semi-classical definition provided above.
N -periodicity
Following the discussion in [1], evaluating the spectral dimension even in the setting of restricted spin foams is challenging for two main reasons. First, the computation of the Laplacian's spectrum for many classical configurations is very costly. Second, evaluating the spin foam partition function scales exponentially with the number of lattice sites. We elaborate on these numerical challenges in the following.
Let us first consider the spectrum of the Laplacian for a given spin configuration. As Eq. (2.25) shows, the return probability, and therefore the spectral dimension, requires full knowledge of the spectrum of the discrete Laplacian. 10 Given that the lattice contains L sites in each direction, this amounts to setting up and diagonalizing an L⁴ × L⁴ matrix, the cost of which grows rapidly with L. At the same time, L determines the scale τ_comp at which boundary effects become dominant. To avoid fixed data on the boundary, we henceforth assume periodic boundary conditions, equipping the lattice with a toroidal topology. The resulting compactness leads to a fall-off of the spectral dimension for τ > τ_comp [69]. Consequently, a large lattice size is required to observe a non-trivial spectral dimension between the smallest lattice scale τ ∼ j_min and τ_comp, which is in clear conflict with computational feasibility.
Apart from the computational effort to diagonalize the discrete Laplacian for a given spin configuration, the evaluation of the partition function becomes increasingly difficult with larger lattice size. That is because the number of spin configurations scales exponentially with the number of lattice sites. Due to spatial homogeneity there are in total 2L different spins, each of which takes 2(j_max − j_min) + 1 values. Here, j_min and j_max are the lower, respectively upper, cut-offs of the spins, the meaning of which we discuss in detail in Sec. 4.1. Imposing the Riemannian EPRL condition of Eq. (2.2) and the inequality of Eq. (2.27), N allowed values for the spins (j_n, k_n) remain. The total number of configurations on the whole lattice is then N^{2L}. Consequently, the Laplacian needs to be diagonalized for each configuration to compute the return probability. This part is the most demanding in numerical resources. Once the return probability for every spin configuration is obtained, the expectation value can be computed rather efficiently. To do so, one writes the amplitudes as a vector and the return probabilities as a matrix, with one index representing the configurations and the other one the diffusion time τ. The expectation value of the return probability is then obtained by a simple matrix-vector multiplication, which is a highly optimized numerical operation.
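The bookkeeping described in this last step can be sketched as follows, with random placeholders standing in for the actual return probabilities and amplitudes:

```python
# Sketch: <P>(tau) as a matrix-vector product over stored configurations.
import numpy as np

rng = np.random.default_rng(0)
n_configs, n_tau = 1000, 50
P = rng.random((n_tau, n_configs))   # P[t, c]: return probability of config c at tau_t
A = rng.random(n_configs)            # A[c]:   spin foam amplitude of config c

expectation = (P @ A) / A.sum()      # <P>(tau_t) for all t in one operation
print(expectation.shape)             # (n_tau,)
```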
To solve both of the above problems simultaneously, we adopt the assumption of N-periodicity proposed in [1], based on results of [71]. The key idea is to assume that the geometry of the spin foam is N-periodic in every direction, meaning that the geometric labels repeat after N steps in every direction. As a consequence, one can perform a Fourier transform of the discrete Laplacian in the Brillouin zone, which amounts to diagonalizing an N⁴ × N⁴ matrix. The spectrum of the Laplacian is then given in terms of four momenta p_µ, which are either discrete or continuous, depending on whether the lattice is finite or infinite, respectively. The return probability in Eq. (2.25) is then obtained as a sum, respectively an integral, over the momenta. Also at the level of the amplitudes, periodicity reduces the computational complexity, as the number of independent spin variables reduces to 2N.
Spectrum of the discrete Laplacian
The first step in deriving the spectrum of the discrete Laplacian is to introduce the Fourier transform of the test field and to notice that the homogeneity of the geometry effectively reduces the Laplace coefficients ∆_⃗n⃗m to an L × L matrix. To make this explicit, let us first introduce some notation: we indicate the "spatial" component of ⃗n ∈ Z⁴_L as n ∈ Z³_L. Taking N-periodicity into account, we indicate a slice as n₀ + zN, where n₀ ∈ Z_N and z ∈ Z_{L/N}. Thus, the variable z labels the N-cell in which the slice is located, and n₀ denotes the n₀-th slice within a given N-cell. Using this notation, the scalar test field is written as φ_{(n₀+zN, n)}. To write Eq. (2.26) in Fourier space, we consider an ansatz similar to the proposal of [71], given by

φ_{(n₀+zN, n)} = c_{n₀} e^{i p₀ z} e^{i p · n} ,    (2.35)

where c_{n₀} is an N-dimensional vector. A phase with spatial momentum p_i is picked up whenever changing one lattice site in spatial direction i. In contrast, a phase with temporal momentum p₀ is only picked up when changing to another N-cell. This pattern of picking up phases will be reflected in Eqs. (2.37), (2.38) and (2.39). For a finite lattice of size L⁴ with periodic boundary conditions, the momenta p_µ take the values

p₀ = 2π N k₀ / L ,  p_i = 2π k_i / L ,    (2.36)

with integers k_µ. In the limit L → ∞, the momenta lie in the Brillouin zone, p_µ ∈ [−π, π]. Inserting the ansatz of Eq. (2.35) into Eq. (2.26) then yields Eqs. (2.37), (2.38) and (2.39).
Here, ∆_{n,n+1} are the components of the Laplace operator on dual edges connecting the slices n and n+1, defined by Eqs. (2.28), (2.30) and (2.31). ∆_{nn} are the components of the Laplace operator within a slice, defined by Eqs. (2.28), (2.32) and (2.34), associated to "spacelike" separated vertices. Due to spatial homogeneity, ∆_{nn} is independent of the spatial direction and therefore factorizes from the spatial momenta. For the slices n = 0, N−1 connecting neighbouring N-cells, exponential factors of e^{±ip₀} are picked up, where for brevity we have introduced the notation of Eq. (2.40). Exploiting N-periodicity and spatial homogeneity, we observe that the action of the Laplace operator reduces to a vector equation in N dimensions. Altogether, Eqs. (2.37), (2.38) and (2.39) are captured by

(∆φ)_{n₀} = ( M(p₀, p₁, p₂, p₃) c )_{n₀} ,    (2.41)

with the matrix M being defined in Eq. (2.42). For a given spin configuration, the spectrum in momentum space is then given by the eigenvalues of the matrix M(p₀, p₁, p₂, p₃), which must be computed for every combination of momenta.
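As an illustration of this structure, the sketch below builds an N × N matrix with "timelike" couplings on the off-diagonals, a phase e^{±ip₀} on the corner entries connecting neighbouring N-cells, and a "spacelike" diagonal term carrying the spatial momenta. The precise entries of Eqs. (2.37)-(2.42) are not reproduced here; the diagonal combination and the lattice factor Σ_i 2(1 − cos p_i) are assumptions standing in for the exact coefficients.

```python
# Sketch of a Fourier-space Laplace matrix M(p) (entry structure assumed).
import numpy as np

def laplace_matrix(a, d, p0, p):
    """a[n]: 'timelike' couplings; d[n]: 'spacelike' coefficients; p: (p1,p2,p3)."""
    N = len(a)
    M = np.zeros((N, N), dtype=complex)
    for n in range(N):
        # spacelike part on the diagonal: lattice factor sum_i 2(1 - cos p_i)
        M[n, n] -= d[n] * sum(2.0 * (1.0 - np.cos(pi)) for pi in p)
        m = (n + 1) % N
        hop = a[n] * (np.exp(1j * p0) if m == 0 else 1.0)  # phase across N-cells
        M[n, m] += hop
        M[m, n] += np.conj(hop)
        M[n, n] -= a[n]
        M[m, m] -= a[n]
    return M

w = np.linalg.eigvalsh(laplace_matrix([1.0, 0.8], [0.5, 0.7], 0.3, [0.1, 0.2, 0.3]))
print(w)  # N real branches omega_i(p); the acoustic branch vanishes as p -> 0
```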
Expectation value of the return probability
Following Eq. (2.25), the return probability of a given spin configuration is obtained by either summing or integrating over the spectrum of the Laplacian. Let ω_i({p_µ}) denote the N momentum-dependent eigenvalues of the Laplace matrix in Eq. (2.42). Then, the N-periodic return probability P_N(τ) on a lattice of length L is given by

P_N(τ) = Σ_{{k_µ}} Σ_{i=1}^{N} e^{τ ω_i({p_µ})} ,    (2.43)

where the integers k_µ and the lattice momenta p_µ are related by Eq. (2.36). In the limit L → ∞, the summation is replaced by an integration over the Brillouin zone, yielding

P_N(τ) = (1/(2π)⁴) ∫_{[−π,π]⁴} d⁴p Σ_{i=1}^{N} e^{τ ω_i({p_µ})} .    (2.44)

Notice that the eigenvalues ω_i({p_µ}) depend on the geometry of the entire lattice, turning the return probability into a highly non-local quantity. These different eigenvalues are usually called branches, e.g. the "acoustic" branch, in which ω_i → 0 for all p_µ → 0.
Already for the 2-periodic case, the summation, respectively the integration, in Eq. (2.43) and Eq. (2.44) cannot be performed momentum by momentum, since the eigenvalues ω_i({p_µ}) do not split into a sum of terms for each momentum component p_µ. This severely affects the computational effort to compute the return probability for a given configuration. The summation is implemented by four nested for-loops, one per momentum component. For the numerical integration, on the other hand, one cannot perform a product of one-dimensional integrals but needs to consider instead a single 4-dimensional integration. Using the Cuba package 11 , such higher-dimensional integrations are possible but more costly, and issues of convergence due to an exponential decay are more likely to arise.
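For illustration, the Brillouin zone integration of Eq. (2.44) can be sketched with a simple midpoint grid instead of the Cuba library. As a toy stand-in for the frusta branches, we use the flat hypercubic dispersion ω(p) = −Σ_µ 2(1 − cos p_µ), which is an assumption for illustration only; for it, P(τ) ∼ τ^{−2} at large τ, i.e. D_s → 4.

```python
# Sketch of Eq. (2.44) on a midpoint momentum grid with a toy single branch.
import numpy as np

def return_probability(tau, n_grid=32):
    ps = -np.pi + 2.0 * np.pi * (np.arange(n_grid) + 0.5) / n_grid
    P0, P1, P2, P3 = np.meshgrid(ps, ps, ps, ps, indexing="ij")
    omega = -2.0 * ((1 - np.cos(P0)) + (1 - np.cos(P1))
                    + (1 - np.cos(P2)) + (1 - np.cos(P3)))
    return np.exp(tau * omega).mean()   # Brillouin zone average of the heat trace

for tau in (2.0, 8.0, 32.0):
    print(tau, return_probability(tau))
```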
The expectation value of the return probability for a finite N-periodic lattice in this setting is

⟨P_N(τ)⟩ = (1/Z) Σ_{{j_i, k_i}} A({j_i, k_i}) P_N(τ; {j_i, k_i}) ,    (2.45)

where A({j_i, k_i}) denotes the amplitude of the full lattice configuration, Z the corresponding normalization, and the spin labels are in the range j_min ≤ j_i, k_i ≤ j_max. 12 Note that, a priori, A denotes the full quantum amplitude of 4-frusta. However, as we argue in Sec. 4.1, the asymptotics of the dressed vertex amplitude already captures the behaviour of the spectral dimension sufficiently for τ ≳ 10².
As argued in Sec. 2.2.2, the lattice size L is required to be large in order to resolve a non-trivial spectral dimension between the minimal scale τ ∼ j_min and the compactness scale τ_comp ∼ L. Consequently, the amplitudes enter Eq. (2.45) with large powers, requiring the utilization of arbitrary-precision floating point numbers. Arithmetic in this format is costly in memory and computation time. To circumvent this issue, we truncate the total number of amplitudes by assuming that the amplitudes of a single N-cell sufficiently capture the relevant information of the whole spin foam. Within this approximation, the expectation value of the return probability is finally given by

⟨P_N(τ)⟩ = (1/Z_N) Σ_{{j_n, k_n}} A_{N-cell}({j_n, k_n}) P_N(τ; {j_n, k_n}) ,    (2.46)

where A_{N-cell} denotes the amplitude of a single N-cell and Z_N the corresponding normalization. In Sec. 5.2 we discuss the limit N → ∞, corresponding to an infinite lattice with infinitely many degrees of freedom.
To summarize, we compute the return probability of the spin foam quantum space-time by summing up the return probabilities of all possible spin foam frusta geometries, weighted by the quantum or semi-classical spin foam amplitudes, for various diffusion times τ. From this expectation value, we derive the spectral dimension D_s as

D_s(τ) = −2 ∂ ln⟨P_N(τ)⟩ / ∂ ln τ .    (2.47)

Notice that we do not consider D_s as an observable itself. Rather, the "quantum spectral dimension" characterizes the scaling of a quantum expectation value, here the heat trace.
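Numerically, Eq. (2.47) amounts to a logarithmic derivative of sampled ⟨P_N(τ)⟩ values; the sketch below uses a synthetic profile (an assumption standing in for Eq. (2.46)) that flows from D_s ≈ 2 at small τ to D_s ≈ 4 at large τ.

```python
# Sketch: extracting D_s(tau) = -2 dln<P>/dln tau from sampled <P>(tau).
import numpy as np

tau = np.logspace(-2, 3, 200)
P = tau**-1.0 * (1.0 + tau)**-1.0           # synthetic <P>(tau), for illustration

Ds = -2.0 * np.gradient(np.log(P), np.log(tau))
print(Ds[0], Ds[-1])                        # ~2.0 and ~4.0, respectively
```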
1-periodic quantum amplitudes from extrapolation
The quantum amplitudes of frusta spin foams, introduced in Sec. 2.1.1, are the necessary ingredients to determine expectation values. In essence, there are two ways to compute these. First, the intertwiners in Eq. (2.6) can be explicitly computed via SU(2)-integrations. Then, as the formulas in Eq. (2.4) and Eq. (2.5) suggest, quantum amplitudes are straightforwardly computed by contracting 3-frustum and 3-cube intertwiners accordingly. Notice that, due to the higher valence of the intertwiners, the frustum computation is more costly than an analogous computation on a triangulation. Another conceivable strategy is to derive a spin network expression of the vertex amplitude via SU(2)-recoupling theory, in the spirit of [34]. The result is then to be contracted with the overlap of intertwiners in the spin network and coherent state bases. Still, in this case the computation is more costly than in a triangulated setting. Choosing to follow the first strategy, we face the following numerical challenge. Since the range of magnetic indices grows with the spin size, numerical contractions of intertwiners either demand increasing memory when using the TensorOperations 13 package, or increasing time using nested for-loops. This sets numerical limits on the computation of the vertex amplitude already at low spins j ∼ 4 [51], in conflict with resolving a non-trivial flow of the spectral dimension, which requires j_max/j_min ≫ 1, as argued below in Sec. 4.1. Consequently, the number of configurations for which the exact quantum amplitudes are available is insufficient to compute ⟨P(τ)⟩.
While resorting to semi-classical amplitudes as done in [1] is a sensible choice, the approximation deviates significantly for small spins j ≲ 10 [51]. In order to find amplitudes which are closer to the actual quantum amplitudes for small spins, while still converging to the semi-classical amplitudes in the limit of large spins, we present in this section a method to extrapolate quantum amplitudes in the simplest case of periodicity N = 1, using the FindFit function in Mathematica [72]. The restriction to 1-periodic spin foam frusta reduces the model to a specific subclass of spin foam cuboids which satisfy geometricity [1,51,63]. A particular feature of cuboidal amplitudes is that they exhibit a pure scaling behaviour without any oscillations, therefore simplifying the analysis drastically.
As mentioned above, the amplitudes of the Riemannian EPRL model factorize into SU(2)-amplitudes for γ_bi < 1. Thus, we first present an extrapolation method for the SU(2)-vertex and edge amplitudes in Sec. 3.1. Thereafter, we combine these results to set up an extrapolation of the dressed quantum vertex amplitude in Sec. 3.2, which will be used later on to compute the spectral dimension.
SU(2)-quantum amplitudes
One-periodic spin foam frusta are characterized by the amplitudes of a single 4-frustum, the initial and final 3-cubes of which carry the same spin j. The connecting 3-frusta are uniquely characterized by the area of the trapezoidal faces, labelled by k. Hence, the pair (j, k) fully determines the geometry. In this case, the numerical limitations allow a computation of the exact quantum vertex and edge amplitudes for spins in the range 1/2 ≤ j, k ≤ 4 [51], thus yielding 64 data points.
Best suited for fitting, we compute the relative error with respect to the semi-classical amplitudes, which are given in Eq. (2.9) and Eq. (2.14),

ε_i(j, k) = ( A_i(j, k) − A_i^scl(j, k) ) / A_i(j, k) ,    (3.1)

where A_i denotes the exact quantum amplitude and A_i^scl the semi-classical one. Here, i indicates the relative error of the vertex (i = v) and the edge (i = e), respectively. In a homogeneous limit (λj, λk) with λ → ∞, the relative error ε_i converges to zero. Importantly, when keeping one variable fixed and only scaling the other one, we expect ε_i to converge to a small but non-zero value. 14 Still, this convergence ensures that we obtain the convergence to the semi-classical amplitude in a homogeneous limit, as we demonstrate below. Now, for every fixed j, we find a fit of the relative error ε_i in the k-direction. The resulting fitting function is then used to extrapolate ε_i up to a chosen value of k = 5000 for every j. With these results, we proceed similarly to extrapolate in the j-direction for fixed k. Ultimately, we obtain a 10000 × 10000 matrix for the relative error ε_i(j, k), from which the amplitudes can be recovered via Eq. (3.1). Since the fitting procedure is only performed with one-dimensional functions, where either k or j is assumed to be fixed, it is a non-trivial consistency check to show the convergence of the quantum to the semi-classical amplitude in a homogeneous limit (λj, λk) with λ → ∞. In the following, we show that this is indeed the case.
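The two-step fit-and-extrapolate idea can be sketched in Python as follows; the power-law-plus-offset ansatz and all numerical values are hypothetical illustrations (the paper uses Mathematica's FindFit and does not specify its fit model here).

```python
# Sketch of the one-dimensional fit and extrapolation of the relative error.
import numpy as np
from scipy.optimize import curve_fit

def ansatz(k, a, b, c):
    return a * k**(-b) + c                     # decays towards an offset c

rng = np.random.default_rng(1)
k_data = np.arange(1, 9) / 2.0                 # spins k = 1/2, 1, ..., 4
eps_data = 0.3 * k_data**-1.2 + 0.01 + 1e-3 * rng.standard_normal(k_data.size)

params, _ = curve_fit(ansatz, k_data, eps_data, p0=(0.3, 1.0, 0.0))
k_extrap = np.arange(1, 10001) / 2.0           # extrapolate up to k = 5000
eps_extrap = ansatz(k_extrap, *params)
print(params, eps_extrap[-1])
```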
Extrapolated vertex amplitude
On the left panel of Fig. 6, we show that the relative error ε_v(j, k) converges to zero in the limit where (j, k) = (λ, λ) are homogeneously scaled up. For values of λ ≥ 13, the relative error is smaller than 1%, indicated by the black line in Fig. 6. Notice that we have plotted ε_v only up to λ = 35 since, from this value on, the difference between the two amplitudes is already so small that it cannot be resolved with usual double-precision numbers. Notice also that the two outlying points are a consequence of the fitting procedure and occur when there is a sign change in the fitting parameters. A side-by-side comparison of the extrapolated and the semi-classical amplitude is presented on the right panel of Fig. 6. Clearly, the semi-classical amplitude is an over-estimate of the extrapolated quantum amplitude for small spins, which is in alignment with the results of [1].
These results indicate that the extrapolated amplitude shows the correct behaviour for large spins. Also for small spins, the extrapolated amplitude provides a better approximation of the actual quantum amplitude than the semi-classical one. To show this explicitly, we present in Fig. 7 a plot of the relative error of the extrapolated and of the semi-classical amplitude, each taken with respect to the quantum amplitude. As is visible there, the extrapolated amplitude is closer to the quantum amplitude by several orders of magnitude, as measured by the relative error.
Extrapolated edge amplitude
The relative error ε_e between the extrapolated quantum and the semi-classical SU(2)-edge amplitudes is depicted on the left panel of Fig. 8. Similar to the vertex amplitude, we observe a rather fast convergence, with ε_e < 1% at spins λ > 13. The plot is only drawn for λ < 254, since for larger spins the difference between the extrapolated and the semi-classical amplitude cannot be resolved. On the right panel of Fig. 8, we see that the semi-classical edge amplitude under-estimates the extrapolated amplitude for small spins. This is because the edge amplitudes in Eq. (2.4) are defined as the inverse of the intertwiner norm.
Following this consistency check, we show in Fig. 9 that the extrapolated edge amplitude serves as a better estimate for the exact quantum amplitude at low spins. Indeed, the relative error between the extrapolated and the quantum amplitudes is below 1%, while it is of order one for the semi-classical amplitude.
To summarize, the extrapolation of quantum amplitudes in the simplified case of 1-periodic frusta spin foams provides a good approximation to the exact quantum amplitudes. For small spins in particular, the extrapolated amplitudes do not deviate as much from the quantum amplitudes as the semi-classical approximation does. We emphasize that the whole procedure hinges on the assumption of 1-periodicity. For higher periodicities N > 1, the vertex amplitudes oscillate. Since only a few data points of the quantum amplitudes are available, it is to be expected that a simple fitting procedure cannot capture the behaviour of the amplitudes sufficiently. To do so, one would probably have to assume the semi-classical oscillation behaviour which, for small spins, yields the wrong phase [51]. Furthermore, this assumption might not be valid for a non-homogeneous scaling, where all but one spin are kept fixed.
Dressed quantum vertex amplitude
In this section, we utilize the extrapolated SU(2)-amplitudes of the previous section to compute an approximation of the quantum dressed vertex amplitude according to Eq. (2.16).
Depending on the Barbero-Immirzi parameter γ_bi, the SU(2)-spins j are mapped to different Spin(4)-representations (j_+, j_-). Consequently, the components Â(j, j, k) are composed of different components of the SU(2)-amplitudes A_i(j_±, j_±, k_±), where i indicates either face, edge or vertex. Here and in the following section, we choose the least excluding value of γ_bi = 1/3 (see Fig. 1 for details) unless indicated otherwise. In this case, j ∈ (3/2)ℕ is mapped to (2j/3, j/3) ∈ ℕ × (1/2)ℕ, satisfying the EPRL-condition in Eq. (2.2). We first analyse the convergence of the resulting amplitude to the semi-classical approximation in the limit λ → ∞. Thereafter, we show that the extrapolated dressed vertex amplitude provides a better approximation than the semi-classical amplitude, which is to be expected from the results of Sec. 3.1. We conclude by presenting the effective scaling of the extrapolated dressed vertex amplitude, which is an important factor for the spectral dimension [1].
While the semi-classical SU(2)-amplitudes provide a good approximation already at small spins λ ≳ 15, the semi-classical Spin(4)-amplitude is expected to show a slower convergence for two reasons. First, since various powers of the edge and vertex SU(2)-amplitudes enter the dressed amplitude, defined in Eq. (2.16), the relative errors ε_e and ε_v add up. Second, the face amplitude, given in Eq. (2.3), introduces a third deviation ε_f(α), which depends on the α-parameter. By the definition of the semi-classical and the quantum face amplitude, a larger α leads to a stronger deviation. The plots of Fig. 10 support these arguments; there, we consider the relative error between the extrapolated and the semi-classical amplitude for α ∈ {0.5, 0.75, 1.0}. We find a good agreement of the two amplitudes, ε < 1%, for spins λ ∼ 1000.
To get a more detailed picture of its behaviour and its dependence on α, Fig. 11 shows the extrapolated dressed amplitude in comparison to the semi-classical approximation for α ∈ {0, 0.5, 0.75, 1.0}. Fig. 11a depicts the dressed amplitudes for a trivial face amplitude. Here, the semi-classical amplitude is an over-estimate for small spins. Increasing α to 0.5, which is a value of interest in Sec. 4, the extrapolated amplitude is in fact larger than the semi-classical one, as Fig. 11b shows. A short numerical check reveals that the transition from the amplitude being smaller to being larger than the semi-classical approximation takes place at α ≈ 0.24. At the value α = 0.75, the semi-classical 1-periodic amplitude becomes scale invariant. Following Fig. 11c, the extrapolated dressed amplitude reaches the scale-invariant behaviour asymptotically from above. For α above the scale-invariant value, the dressed amplitude diverges in the limit λ → ∞. As visualized in Fig. 11d, the extrapolated amplitude is then larger than the semi-classical amplitude, with a large relative error at small spins. Only a few components of the dressed Spin(4)-amplitude can be computed exactly, since the exact SU(2)-quantum vertex amplitude is only available for spins j, k ≤ 3. Consequently, we can compute 3 × 3 entries of the exact dressed amplitude, listed in Fig. 12. There, the relative errors of the extrapolated and the semi-classical dressed amplitudes with respect to the exact quantum dressed amplitude are depicted for α = 0.5. As the plots indicate, the extrapolated dressed amplitude provides a better approximation to the quantum amplitudes for all tuples (j, k).
As a last point, we compute the effective scaling γ of the extrapolated dressed amplitude, defined as the logarithmic derivative along the homogeneous diagonal,

γ(λ) := -∂ ln Â(λ, λ, λ) / ∂ ln λ .

As demonstrated for cuboids in [1], the scaling of the amplitudes is a quintessential factor for the behaviour of the spectral dimension. As we show later on in Sec. 4.1 and Sec. 4.4, this holds true for semi-classical as well as for extrapolated frusta amplitudes. A change in scaling directly translates into a change of the spectral dimension. Our results for the effective scaling are presented in Fig. 13. For comparison, the semi-classical scaling exponents are depicted as black horizontal lines. These are constant because the semi-classical amplitudes exhibit a simple polynomial decay. Since the extrapolated amplitudes approach the semi-classical limit from above for α > 0.24, the corresponding effective scaling is below the constant semi-classical value. As expected from the arguments given above, exactly the opposite behaviour is observed for α = 0.
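Numerically, the effective scaling can be extracted from any tabulated amplitude as such a logarithmic derivative. A minimal sketch, with a pure power law standing in for the actual amplitude:

```python
import numpy as np

def effective_scaling(lam, amp):
    """Effective scaling gamma(lambda) = -d ln A / d ln lambda,
    computed by finite differences on a tabulated amplitude."""
    return -np.gradient(np.log(np.abs(amp)), np.log(lam))

# Illustrative stand-in: a pure power law A ~ lambda^-(9 - 12*alpha)
# recovers the constant semi-classical exponent.
alpha = 0.5
lam = np.arange(1, 1001) / 2.0
amp = lam**(-(9 - 12 * alpha))
gamma = effective_scaling(lam, amp)  # ~ 3.0 for all lambda
```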
Spectral dimension from spin foam frusta
With the ingredients introduced in Sec. 2 and Sec. 3, we present in this section our numerical and analytical studies of the quantum spectral dimension for spin foam frusta. In Sec. 4.1, we present numerical results using quantum and semi-classical 1-periodic amplitudes. Proceeding with semi-classical amplitudes, we introduce a cosmological constant in Sec. 4.2 and thereafter generalize to 2-periodic configurations in Sec. 4.3. We close this section with an analytical estimate of the spectral dimension in Sec. 4.4.
1-periodic spectral dimension
Assuming that the geometry of frusta is 1-periodic, the momentum space Laplace operator, defined in Eq. (2.42), reduces to a single-component matrix which decomposes into components ω^(µ)(p_µ). Consequently, the classical return probability on an infinite lattice, given in Eq. (2.44), can be written as a product of one-dimensional integrals,

P(τ) = ∏_{µ=0}^{3} ∫ dp_µ e^{-τ ω^(µ)(p_µ)} = ( ∫ dp e^{-τ ω^(1)(p)} )³ ∫ dp_0 e^{-τ ω^(0)(p_0)} ,

where in the last step we exploited spatial homogeneity. The factorization into one-dimensional integrals is advantageous, as the convergence of numerical integration deteriorates with increasing dimensionality.
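A minimal sketch of this factorized integration, assuming for illustration the lattice dispersion ω(p) = (1 - cos p)/j in the spirit of the toy model of Eq. (4.22); the actual ω^(µ) of Eq. (2.42) is not reproduced here.

```python
import numpy as np
from scipy.integrate import quad

def return_probability(tau, j_spatial, j_time):
    """Classical return probability on the infinite lattice,
    factorized into four one-dimensional momentum integrals.
    Assumes an illustrative dispersion w(p) = (1 - cos p) / j."""
    def one_dim(j):
        integrand = lambda p: np.exp(-tau * (1.0 - np.cos(p)) / j)
        val, _ = quad(integrand, 0.0, np.pi)
        return val
    # Spatial homogeneity: the three spatial directions contribute
    # the same factor.
    return one_dim(j_spatial)**3 * one_dim(j_time)

P = return_probability(tau=10.0, j_spatial=2.0, j_time=1.5)
```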
Employing the extrapolated amplitudes of the previous section, we compute the expectation value of the return probability via Eq. (2.46). In the case of N = 1, the formula reduces to

⟨P(τ)⟩ = ( Σ_{j,k} Â(j, j, k) P_(j,k)(τ) ) / ( Σ_{j,k} Â(j, j, k) ) .

The numerical results for the expectation value of the return probability and the spectral dimension are presented in Fig. 14 for different values of α. Probing space-time at scales below the lowest lattice scale, τ ≪ j_min, D_s is zero. Above the largest scale, i.e. τ ≫ j_max, every classical configuration exhibits a spectral dimension of four, and hence the quantum spectral dimension is four as well. Similar to the findings of [1], we observe a non-trivial dimensional flow between 0 and 4 for α in a certain interval [α_min, α_max]. We discuss the various factors influencing the spectral dimension in detail in the following paragraphs.
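The following sketch illustrates this computation, with a power-law stand-in for the dressed amplitude and the exact heat trace of the (1 - cos p)/j dispersion in place of the actual lattice data; the EPRL spin condition is ignored here for simplicity.

```python
import numpy as np
from scipy.special import i0e  # i0e(x) = exp(-x) * I_0(x)

# Half-integer spins between illustrative cut-offs.
spins = np.arange(1, 201) / 2.0
alpha = 0.5

def amplitude(j, k):
    # Stand-in for the dressed vertex amplitude A(j, j, k): a pure
    # power law mimicking the semi-classical scaling, not the actual
    # amplitude.
    return ((j + k) / 2.0)**(-(9 - 12 * alpha))

def p_config(tau, j, k):
    # Heat trace per site of the hypercubic lattice with dispersion
    # (1 - cos p)/j: each direction contributes exp(-x) I_0(x).
    return i0e(tau / j)**3 * i0e(tau / k)

def expectation_P(tau):
    num, den = 0.0, 0.0
    for j in spins:
        for k in spins:
            w = amplitude(j, k)
            num += w * p_config(tau, j, k)
            den += w
    return num / den

# Spectral dimension D_s = -2 d ln<P> / d ln(tau) by finite differences;
# it flows from 0 (tau << j_min) to 4 (tau >> j_max).
taus = np.logspace(-2, 4, 100)
logP = np.log([expectation_P(t) for t in taus])
D_s = -2.0 * np.gradient(logP, np.log(taus))
```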
α-parameter The most salient factor driving the spectral dimension is the value of α. As Fig. 14 shows, an intermediate plateau of the spectral dimension only becomes visible if the range of admissible spins is sufficiently large. To be more precise, it is the ratio j_max/j_min that is required to be sufficiently large. As numerical tests have shown, the ratio of cut-offs must be at least of order ∼ 10² in order to resolve the first sign of a plateau. This signature in D_s presents itself as two points of inflection, which are absent if fewer configurations are taken into account.
Due to the inherent minimal length scale of the theory, geometry cannot be probed below τ ∼ j_min, such that this regime, where the spectral dimension flows to zero for τ → 0, is not of physical interest. Similarly, for scales τ ≫ j_max, a dimension of four is inevitably reached, since all superimposed spin geometries possess a spectral dimension of four. While the existence of a minimal spin is a quintessential feature of most spin foam models, the upper cut-off is introduced for numerical purposes. 15 In order to recover a physical interpretation of our results, we therefore need to consider the limit j_max → ∞. As numerical tests with different j_max as well as the results of [1] show, the intermediate regime extends to infinity in the limit of an infinitely large upper cut-off.
Barbero-Immirzi parameter In the case of 1-periodic spin foam frusta, the Barbero-Immirzi parameter γ_bi controls the spacing of the allowed SU(2)-spins according to the Y_γbi-map defined in Eq. (2.2). Thus, changing the value of γ_bi results in a rescaling of the allowed spins, which in turn can be absorbed into the diffusion scale τ. This holds for quantum as well as for semi-classical amplitudes. 16 In contrast, the value of γ_bi has a non-trivial effect on amplitudes of N ≥ 2. As the semi-classical vertex amplitude in Eq. (2.13) suggests, γ_bi controls the relative phase of the oscillations. Therefore, it is in general to be expected that the spectral dimension is non-trivially affected by the value of γ_bi.

15 Following [66,67,73], apart from what is considered in this work, a cosmological constant can be added to spin foams by replacing the group SU(2) by its quantum deformation SU(2)_q. Consequently, an upper cut-off j_max is introduced, related to the cosmological constant via j_max ∼ π/(Λ l_P²) [64]. However, this value is expected to be much larger than what could be numerically implemented. For instance, a cosmological constant of order ∼ 10^-122 would imply a maximal cut-off of order j_max ∼ 10^122.
16 For j ∈ ℕ/2 that do not satisfy the EPRL-condition, the amplitudes are zero. We exploit this exclusion in the computation of the return probability in that we only compute it for the allowed configurations, given a value of γ_bi. All other components would be multiplied by zero in the expectation value.
Compactness effects Considering a finite lattice of length L with periodic boundary conditions rather than an infinite lattice, compactness effects are introduced to the spectral dimension, setting in at scales τ > τ_comp(L). 17 Following [69], the return probability of a classical configuration with spins j is constant for τ > j and thus, the spectral dimension reaches zero at τ ∼ j. For the quantum spectral dimension, this implies that for L sufficiently large, i.e. such that τ_comp(L) ≫ j_max, a dimensional flow between zero, a possible intermediate value and four will be resolved. At τ close to the compactness scale, D_s will then flow to zero and remain zero for all τ > τ_comp. For j_min ≲ τ_comp ≲ j_max and an existing intermediate regime D_s^α, the spectral dimension will flow to this intermediate value and then back to zero once the compactness scale is reached. For τ_comp ≪ j_min, the spectral dimension is zero everywhere.
Semi-classical amplitudes Employing the semi-classical amplitude for computing the expectation value of the return probability and the spectral dimension, we observe a behaviour similar to that obtained with extrapolated amplitudes. A direct comparison is presented in Fig. 15 for α = 0.5. With semi-classical amplitudes, the spectral dimension is constant in the intermediate regime. In particular, in the limit j max → ∞, where the upper cut-off is removed, this plateau extends to τ → ∞.
Considering the results of Sec. 3.2 and the analytical explanations of [1], the deviation between the two curves is a consequence of the different scaling behaviours of the amplitudes at small spins. From Fig. 13, it follows that a larger effective scaling γ of the amplitudes implies a larger spectral dimension. Also, since the effective scaling of the extrapolated amplitudes is non-constant, there is a non-constant flow of the spectral dimension towards the semi-classical constant value at larger scales. 18 As this flow is visible at scales τ > 10, it is not a mere discreteness artifact but a physical effect.
The different behaviour of the spectral dimension due to the different amplitudes appears in the regime 10^-2 < τ < 10^2 and is of a quantitative nature. Although the semi-classical amplitude provides an increasingly poor approximation at low spins, this suggests that it is sufficient for extracting the spectral dimension on large scales. In particular, there is agreement with the quantum amplitude results for scales τ > 10^2, even in the limit of an infinite upper cut-off. Therefore, we are going to employ semi-classical amplitudes for the rest of this work.
Using the semi-classical amplitudes comes with the following three advantages. First, having an analytical expression for the amplitudes available allows for numerical integration, which is based on treating the spins as continuous variables. In that way, the results we have obtained so far can be compared to the findings of [1], which are based on continuous integration. Second, the semi-classical setting allows for a straightforward inclusion of a cosmological constant via an ad hoc deformation of the amplitudes [66,67]. Third, the extrapolation method for quantum amplitudes, discussed in Sec. 3, strongly relies on the assumption of 1-periodicity. Since we do not expect this method to be straightforwardly applicable for N > 1, quantum amplitudes beyond small spins are out of reach for N > 1. Due to these technical limitations, resorting to semi-classical amplitudes allows the study of the spectral dimension at higher periodicities. In the following, we take advantage of the possibilities that the semi-classical amplitudes offer and discuss these cases in greater detail.
Discrete summation vs. numerical integration Up to this point, we computed the expectation value of the return probability via a discrete sum over all spin configurations in the range j_min ≤ j, k ≤ j_max. In the case where j_max/j_min ≫ 1, the sum can be replaced by an integral, which is the strategy employed in [1]. As a consequence, the expectation value of the return probability is obtained as an integral over the configurations, with the amplitudes and the return probability being continuous functions of the spins. Note that an analytical expression of the amplitudes is required for this strategy. Determining the corresponding functions for quantum amplitudes is currently out of reach. Therefore, one needs to resort to the semi-classical approximation, where the spins are simply understood as continuous variables. 19 We have numerically checked that both methods yield very similar results. For small and large τ, we observe a convergence, while the spectral dimension differs quantitatively in the regime where the intermediate value D_s^α is reached. This is because, according to [69], continuous configuration variables lead to a smoothening of discreteness peaks.
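As an illustration of this comparison, the sketch below evaluates the same toy expectation value once as a discrete sum over half-integer spins and once as a continuous integral; the power-law weight and the heat trace are stand-ins, not the actual amplitudes.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0e

alpha, tau = 0.5, 50.0
weight = lambda j: j**(-(9 - 12 * alpha))  # stand-in amplitude
P = lambda j: i0e(tau / j)**4              # 4d lattice heat trace per site

# Discrete summation over half-integer spins.
spins = np.arange(1, 401) / 2.0
P_sum = np.sum(weight(spins) * P(spins)) / np.sum(weight(spins))

# Continuous integration, treating j as a continuous variable.
num, _ = quad(lambda j: weight(j) * P(j), 0.5, 200.0)
den, _ = quad(weight, 0.5, 200.0)
P_int = num / den
# Both agree well when j_max/j_min >> 1, cf. the discussion above.
```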
Cosmological constant
A way to add a cosmological constant to the simplicial EPRL-FK model was introduced in [66] and generalized to arbitrary 4-dimensional polyhedra in [67]. In essence, the vertex amplitudes of the model are deformed while keeping the boundary Hilbert space fixed. Relating the deformation parameter to the cosmological constant, the asymptotic vertex amplitude yields the Regge action with a cosmological constant. 20 Given that the Regge curvature, defined in Eq. (2.12), vanishes for 1-periodic configurations, the introduction of a cosmological constant allows us to consider oscillations even when N = 1. In this setting, the sign of the cosmological constant is irrelevant for the amplitudes, as Eq. (4.4) below shows. Following from the definition of the Spin(4)-vertex amplitude in Eq. (2.13), these oscillations are of a particular type compared to those induced by non-vanishing Regge curvature. First, Λ-oscillations only take a simple cosine shape. In contrast, the Regge term describes a superposition of cosines with different phases controlled by the Barbero-Immirzi parameter. Notice that this superposition is a peculiar feature of the Riemannian EPRL model, absent in the Lorentzian setting [33]. Second, since Λ enters the action via a 4-volume, these oscillations scale quadratically in the spins, whereas curvature terms scale linearly. Despite its simple form, we consider the cosmological constant in order to get a first glimpse of the effects of oscillations on the spectral dimension.
Explicitly, the oscillating term is given by

Re{D}/|D| + cos( Λ V_4(j, k)/G ) ,   (4.4)

where D is the determinant of the Hessian, defined in Eq. (2.10), and V_4 is the 4-volume, which scales quadratically in the spins. Consequently, for a given upper cut-off j_max, the amplitudes and hence the return probability are not altered for Λ so small that no oscillation is completed below j_max. The strongest deviation from the Λ = 0 case is localized at the scale at which the first oscillation takes place, marked by a green vertical line. If this scale lies in the intermediate regime, the spectral dimension is larger than in the case of vanishing Λ. To be more precise, it is the regime below the first zero λ_0^(Λ) of the amplitudes, determined by the first root of Eq. (4.4), which we expect to be most influential. To visualize this, Fig. 17 shows the first oscillation of the (rescaled) amplitudes as well as the effective scaling γ of the amplitude for all λ < λ_0^(Λ). Since the shift in Eq. (4.4) is constant, the oscillations are not symmetric with respect to the λ-axis. Still, since Re{D}/|D| < 1, the amplitudes attain negative values when crossing λ_0^(Λ). Beyond λ_0^(Λ), the scaling oscillates rapidly, which however does not affect the spectral dimension. Similar to the comparison of quantum and semi-classical amplitudes, a scaling γ > 9 - 12α larger than the semi-classical one implies a larger spectral dimension. This effect is in particular resolved in the regime λ < λ_0^(Λ). Notice that an increase of D_s is a feature present for all values of Λ ≠ 0, since the amplitudes exhibit a scaling γ > 9 - 12α for λ < λ_0^(Λ).

20 Another, mathematically more rigorous way of introducing a cosmological constant to spin foams is to replace the group SU(2) by its quantum deformation SU(2)_q [73-79]. The deformation parameter q is related to Λ via q = exp(2πi/(k + 2)) with k = 1/(ℏG√Λ), see e.g. [77]. For more methods of implementing a cosmological constant in 4d spin foams see e.g. [80,81].
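Given the shifted-cosine form reconstructed in Eq. (4.4), the scale of the first zero can be located numerically. In the following sketch, the shift s and the quadratic oscillation rate ω are illustrative placeholders for Re{D}/|D| and the ratio Λ/G.

```python
import numpy as np
from scipy.optimize import brentq

s = 0.3        # constant shift Re{D}/|D| < 1 (illustrative value)
omega = 1e-4   # quadratic oscillation rate, standing in for Lambda/G

def correction(lam):
    # Shifted cosine with quadratically scaling phase, cf. Eq. (4.4).
    return s + np.cos(omega * lam**2)

# The cosine first reaches -s when its argument equals pi - arccos(s),
# so bracket the first root around that scale and refine with brentq.
arg0 = np.pi - np.arccos(s)
lam_guess = np.sqrt(arg0 / omega)
lam_0 = brentq(correction, 0.5 * lam_guess, 1.5 * lam_guess)
```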
2-periodic spectral dimension
The results of the previous section have shown that the semi-classical amplitude serves as a sufficient approximation to extract the qualitative behaviour of the spectral dimension in the 1-periodic case. It is therefore reasonable to employ the semi-classical amplitudes to compute the spectral dimension at higher periodicity, here N = 2. We discuss possible quantum corrections to our results at the end of this section.
Following Eq. (2.42), the Laplace operator at N = 2 becomes a 2 × 2 matrix in momentum space, where W_0 and W_1, defined in Eq. (2.40), depend on the spatial momenta p_i. The corresponding eigenvalues are given in Eq. (4.7). Compared to the 1-periodic case in Eq. (4.2), this expression is more involved due to the intermingling of the p_0 and p_i terms. As a consequence, the return probability from momentum integration cannot be written as a product of 1-dimensional integrals. Instead, a full 4-dimensional integration is required to compute P_2, leading to larger numerical computation times. Under the assumption that the amplitudes of a single N-cell, here a 2-cell consisting of 16 hyperfrusta, already capture the relevant information, the full expression for the expectation value of the return probability is given by Eq. (4.9). Again, the range of all of the spins is given by j_min ≤ j_i, k_i ≤ j_max. For the numerical results presented below, we have chosen j_min = 1/2 and j_max = 201. Since in the 2-periodic case the 3-cubes are not restricted to equal size, frustum geometries arise which lead to non-vanishing Regge curvature. Consequently, the vertex amplitudes exhibit oscillations even in the case of a vanishing cosmological constant, Λ = 0, which we assume from now on if not stated otherwise. Given three fixed spins (j, j′, k), a single dressed vertex amplitude scales as in Eq. (4.10), where S_R is the Regge action and φ is the phase of the determinant D of the Hessian, both being evaluated on the spins (j, j′, k). Notice that φ is invariant under a homogeneous scaling of all spins. Since the expectation value of the 2-periodic return probability, Eq. (4.9), contains high powers of cosine functions, the amplitudes are narrowly peaked on the maxima of oscillation. Furthermore, since the powers in Eq. (4.9) are even, no negative values occur.
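Since Eq. (2.40) is not reproduced here, the following numpy sketch only illustrates the structure of such a computation: a Hermitian 2 × 2 Bloch-type Laplacian with placeholder couplings W_0, W_1 and an assumed e^{-2ip_0} phase across the periodic time direction, diagonalized numerically.

```python
import numpy as np

def bloch_laplacian(p0, p_spatial, W0, W1):
    """Hermitian 2x2 momentum-space Laplacian of a lattice that is
    2-periodic in the time direction.  W0, W1 are placeholder couplings
    of the two sub-cells; the exp(-2i*p0) Bloch phase across the
    periodic direction is an assumption of this sketch."""
    d = W0(p_spatial) + W1(p_spatial)
    off = W0(p_spatial) + W1(p_spatial) * np.exp(-2j * p0)
    return np.array([[d, -off], [-np.conj(off), d]])

# Illustrative couplings depending on the spatial momenta (p1, p2, p3).
W0 = lambda p: 1.0 + sum(1.0 - np.cos(q) for q in p)
W1 = lambda p: 0.5 + sum(1.0 - np.cos(q) for q in p)

L = bloch_laplacian(0.3, (0.1, 0.2, 0.4), W0, W1)
omega_minus, omega_plus = np.linalg.eigvalsh(L)  # the two real branches
```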
Large Newton's constant As a first computation, we consider the limiting case G → ∞, where curvature oscillations become negligible. In Fig. 18 we present the results for G = 10^10. This already captures the large-G behaviour for the chosen j_max, as we have checked that the results no longer change for larger G.
Just as in the 1-periodic case, we observe that there exists an intermediate flow of the spectral dimension for α-values in a certain interval α ∈ [α_min, α_max], where α_min ≈ 0.68 and α_max ≈ 0.7. From the results of Sec. 4.1, we conclude that the intermediate spectral dimension is again a result of the amplitudes exhibiting a pure scaling behaviour. Compared to the 1-periodic case, the size of the interval of α is smaller, which is in accordance with the findings of [1]. As we are going to discuss in more detail in Sec. 4.4, this is expected. In the following we discuss the effects of a finite value of G.
Regge curvature oscillations Given that in principle an intermediate spectral dimension D_s^α can be observed in the case of N = 2, we study next the influence of oscillating amplitudes by varying G. Following from the form of the amplitudes in Eq. (4.10), Regge curvature oscillations are expected to become relevant for G comparable to S_R. In the light of the results of Sec. 4.2, we expect that the oscillations, now induced by Regge curvature, have an effect on the spectral dimension. Our numerical results show that the spectral dimension is indeed sensitive to the value of G. Since small changes in G lead to very different flows of D_s, we conclude that the spectral dimension is in fact highly sensitive to Newton's constant. In Fig. 19, we present the spectral dimension at fixed α for three exemplary values of G, showing that one can have either a positive or a negative correction to the case G → ∞, or no correction at all.
In contrast to 1-periodic amplitudes oscillating with a cosmological constant, the flow of the 2-periodic spectral dimension is not straightforwardly understood by considering the scaling behaviour of the amplitudes. The main reason for this is the strong dependence of the amplitudes on the spins (j, j′, k), given explicitly in Eq. (4.10).

G-dependence of intermediate regime Given the plots of Fig. 19, the blue and red curves might indicate that the interval [α_min, α_max] in which an intermediate spectral dimension occurs depends on the value of G. We test this possibility by computing the spectral dimension for G = 10^-0.5 and G = 10^2 and a wide range of α. Our results are depicted in Fig. 20. Indeed, we observe that the boundaries α_min and α_max change for different values of G. While at G = 10^-0.5 these are α_min ≈ 0.67 and α_max ≈ 0.69, their values at G = 10^2 are α_min ≈ 0.69 and α_max ≈ 0.71.
An additional feature we observe is the following. Within the interval [α_min, α_max], the spectral dimension of purely scaling amplitudes is approximately a decreasing linear function of α. In contrast, Fig. 20 suggests that the slope of D_s as a function of α can be positive or negative, depending on G. We will discuss interpretations of these phenomena and their consequences for the limit of large periodicities N in Sec. 5.2.
Cosmological constant Since the Regge curvature does not vanish for N > 1, the effects of a non-vanishing cosmological constant, Λ ≠ 0, on the amplitudes are not as apparent as in the 1-periodic case. Following Eq. (2.13), Λ introduces a phase shift to the cosine term containing the parameter γ_bi. Entering via the 4-volume of the frusta, such a term scales quadratically in the spins, leading to an intricate behaviour in combination with the Regge curvature contributions. Fixing G = 1 and α = 0.69, we present the 2-periodic spectral dimension for a selection of Λ-values in Fig. 21. Effects of Λ become important only for Λ not much smaller than G, as expected from the form of the amplitudes. The resulting phase shift in the oscillating part of the amplitudes has a significant impact on the flow of the spectral dimension, as Fig. 21 shows. In comparison to the 1-periodic case discussed in Sec. 4.2, we observe the following additional features. First, high-frequency oscillations due to large values of Λ do not appear negligible, at least in the small range of τ that we can observe. Second, the region of τ's where Λ leads to a deviation in the flow of D_s is not as clearly localized as in Fig. 16. This is apparent from the observation that in the region τ ∈ [10^-2, 10^-0.6], the spectral dimension is affected for various orders of magnitude of Λ. Third, the direction in which the spectral dimension is corrected by the presence of Λ, i.e. towards smaller or larger values than for Λ = 0, is obscured compared to the 1-periodic case. This is because for N = 2, Λ does not solely control the position of the first root of the amplitudes but leads to a phase shift, the consequences of which are not as straightforward to analyse.
In the presence of non-vanishing Regge curvature, the amplitudes are not invariant under a sign change of Λ. However, numerical tests have shown that, at least for the small region of τ's depicted in Fig. 21, different signs of Λ have a negligible effect on D_s^α. Hence, the magnitude of the cosmological constant appears to be the significant quantity. Similar to the above, we find that the range of α for which an intermediate dimension exists depends on the value of Λ. In summary, α_min and α_max are functions of both Newton's constant G and the cosmological constant Λ. We will pursue this discussion in Sec. 5.2.
Analytical estimate of the spectral dimension
Building on the ideas of [1], we present in this section a strategy to extract information on the spectral dimension of oscillating amplitudes using analytical methods. Tackling first the spectral dimension of general N-periodic spin foam frusta, we obtain a qualitative expression for the spectral dimension which is, however, still too intricate to compute explicitly. Nevertheless, it serves as a support for the numerical results as well as a guide towards the limit N → ∞, discussed in Sec. 5.2. In the second part of this section, we consider an explicitly integrable model which qualitatively explains the cosmological constant results of Sec. 4.2.
An analytical argument
For the analysis of the spectral dimension, it is advantageous to introduce the average spin variable

r² := (1/n) Σ_f j_f² ,

where n is the total number of degrees of freedom, being n = 2N in the case of N-periodic spin foam frusta. Technically, the variable r is a radial coordinate in the space of configurations j_f. Likewise, the remaining variables can be seen as an angular part, and we therefore denote them by Ω in the following. As the definition of the Laplace operator in Sec. 2.2.1 and the arguments of [71] and [1,27] show, it is reasonable to assume under the spin foam measure that

∆(j_f) ≃ (1/r) ∆ ,   (4.11)

where ∆ is the Laplace matrix on the equilateral hypercubic lattice. Within this assumption, let us have a closer look at the expectation value of the return probability with respect to semi-classical amplitudes. As we have argued in Sec. 4.1, for j_max/j_min ≫ 1, the summation over configurations can be approximated by an integral. Following that and performing a change to spherical coordinates as described above, we obtain

⟨P(τ)⟩ ∝ ∫ dr ∫ dΩ r^(n-1) ( ∏_v A_v(r, Ω) ) Tr e^(τ∆/r) .   (4.12)

Forming the logarithmic derivative of this expression yields the spectral dimension D_s(τ) = -2 ∂ ln⟨P(τ)⟩/∂ ln τ. Since Tr e^(τ∆/r) is in fact a function of the ratio τ/r, we can trade the derivative with respect to τ for an r-derivative,

τ ∂_τ Tr e^(τ∆/r) = -r ∂_r Tr e^(τ∆/r) .   (4.14)

Using the r-derivative, we can integrate by parts. The resulting expression contains a boundary term, denoted ∂I, which is explicitly given by the integrand of Eq. (4.12) evaluated at the cut-offs r = j_min and r = j_max. The other terms inside the brackets stem from the r-derivative acting first on r^n and then on the product of amplitudes. Notice that -(r/A_v) ∂A_v/∂r is exactly the effective scaling γ of A_v which we have discussed previously in Sec. 3.2.
Before tackling more general cases, we consider the simplified scenario in which the amplitudes satisfy a uniform scaling behaviour, i.e. A_v = h_v(Ω) r^(-γ_cons). For a scaling γ_cons = 9 - 12α, this describes N-periodic cuboids (and thus 1-periodic frusta) with a vanishing cosmological constant, Λ = 0. Importantly, the radial and angular parts factorize as a consequence of the scaling assumption on ∆, simplifying the following equations significantly. As the effective scaling of A_v with respect to the radial coordinate is the constant γ_cons, the spectral dimension consists of the pure scaling contribution 2(γ_cons V - n) plus the contribution of the boundary term ∂I. Our results from 1-periodic semi-classical frusta, together with the findings of [1], suggest that, if α allows for an intermediate spectral dimension 0 < D_s^α < 4, the boundary term vanishes there. Consequently, D_s^α is given by

D_s^α = 2 (γ_cons V - n) = 2 ( (9 - 12α) V - n ) .   (4.18)

If the value of D_s^α lies outside the interval [0, 4], the boundary term counteracts to yield either zero or four. Due to the spatial homogeneity of frusta as well as the assumptions we introduced, the number of vertices V and the number of configurations n are respectively given by V = N⁴ and n = 2N. Plugging these expressions into Eq. (4.18) for N = 1, the analytical estimate is compatible with the numerical findings of Sec. 4.1 as well as the previous results in [1].
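As a consistency check of Eq. (4.18), the scaling argument can be carried out explicitly; the following derivation assumes the factorized form of Eq. (4.12) with a pure power-law amplitude.

```latex
% Pure scaling ansatz at each of the V vertices: A_v = h_v(\Omega)\, r^{-\gamma_{\mathrm{cons}}}.
% Insert into the radial integral of Eq. (4.12) and substitute r = \tau \tilde{r}:
\langle P(\tau)\rangle
  \;\propto\; \int_{j_{\min}}^{j_{\max}} \mathrm{d}r \; r^{\,n-1-\gamma_{\mathrm{cons}} V}\,
  \operatorname{Tr} e^{\tau\Delta/r}
  \;=\; \tau^{\,n-\gamma_{\mathrm{cons}} V}
  \int_{j_{\min}/\tau}^{j_{\max}/\tau} \mathrm{d}\tilde{r} \;
  \tilde{r}^{\,n-1-\gamma_{\mathrm{cons}} V}\, \operatorname{Tr} e^{\Delta/\tilde{r}} .
% For j_min << tau << j_max the remaining integral is effectively tau-independent, so
D_s \;=\; -2\,\frac{\partial \ln \langle P(\tau)\rangle}{\partial \ln \tau}
   \;=\; 2\left(\gamma_{\mathrm{cons}} V - n\right)
   \;\xrightarrow{\;N=1:\;V=1,\;n=2\;}\; 14 - 24\alpha .
```

For N = 1 this lies in the interval (0, 4) precisely for α ∈ (5/12, 7/12), consistent with the intermediate regime found numerically around α = 0.5.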
Quantum amplitudes as well as semi-classical amplitudes for N > 1 do not show a simple scaling behaviour. However, all of these more general cases have in common that they factorize into a scaling part and a non-trivial residual part. To capture this, we write

A_v(r, Ω) = h_v(Ω) r^(-γ_cons) C_v(r, Ω) ,

where C_v(r, Ω) can be understood as a correction term to the scaling part with constant γ_cons. Splitting the amplitudes into this form allows us to re-express the spectral dimension as

D_s(τ) = 2( (9 - 12α)V - n ) + 2V ⟨ -(r/C_v) ∂C_v/∂r ⟩_τ + (boundary term) ,   (4.20)

where ⟨·⟩_τ denotes the average with respect to the measure of Eq. (4.12). For oscillating correction terms that attain many zeros, and therefore lead to divergences of the effective scaling, the integration domain of the above needs to be restricted accordingly. Aspects of well-definedness and convergence need to be addressed for each given C_v individually. Put into this form, Eq. (4.20) suggests that the pure scaling value D_s^α = 2((9 - 12α)V - n) is corrected by a term arising from the effective scaling -(r/C_v) ∂C_v/∂r of the correction term C_v as well as by the boundary term. Assuming that the boundary term is negligible in the regime of an intermediate spectral dimension, we compare the analytical estimate of Eq. (4.20) with the results of 1-periodic quantum amplitudes, a cosmological constant for N = 1 and finally the 2-periodic case.
For α ≳ 0.24, the extrapolated quantum amplitudes exhibit an effective scaling larger than the semi-classical value, as Fig. 13 shows. This implies that

-(r/C_v) ∂C_v/∂r > 0 ,   (4.21)

and hence that the spectral dimension is corrected to a larger value. This is exactly what we observe in Fig. 15.
In the presence of a cosmological constant Λ, the correction term in Eq. (4.4) is a shifted cosine with a quadratically scaling phase. Since the constant shift is smaller than one, the correction term hits many zeros, such that the effective scaling diverges at these points. Consequently, Eq. (4.20) is only applicable on the restricted range r ∈ [j_min, r_0^(Λ)]. This suggests that for growing τ, the first correction to the spectral dimension is to a larger value, which can be observed numerically in Sec. 4.2. For the remaining integration range r ∈ [r_0^(Λ), j_max], we conjecture that the rapidly changing scaling behaviour is averaged out within the scales that are emphasized by Tr e^(τ∆/r). However, it is currently not within reach to substantiate this statement further.
Frusta amplitudes of periodicity N > 1 exhibit a highly non-trivial correction term C_v(r, Ω), as Eq. (4.10) indicates. Given that C_v hits many zeros, leading to divergences of its effective scaling, the integral in Eq. (4.20) has a highly restricted domain of validity which depends sensitively on the values of (G, γ_bi, Λ). Due to these obstacles to understanding the correction term, let us first consider a regime where C_v is approximately constant, obtained for large values of G, and comment on the more general case afterwards. In the absence of the correction and the boundary term, the intermediate spectral dimension is given by Eq. (4.18). Our numerical results of Sec. 4.3 are in partial accordance with this prediction. Qualitatively, we find an intermediate spectral dimension which is controlled solely by α, as Fig. 18 shows. Also, we find that the window of admissible α for such a regime is smaller compared to the 1-periodic case. However, the quantitative predictions of the analytical arguments, notably Eq. (4.18), do not agree with the numerical results. In particular, the values of α_min and α_max as well as the values of D_s^α of the analytical derivation appear to be shifted with respect to the numerical ones. Recall that the whole analytical argument hinges on the assumption that the Laplacian exhibits a scaling behaviour with a negligible angular dependence, given in Eq. (4.11). While this might be justified for N = 1, the angular dependence of ∆ for N ≥ 2 is more intricate. Therefore, we suspect this assumption to be the main source of the discrepancy between the analytical predictions and the numerical results. As a result, changes of the intermediate dimension which are not captured by Eq. (4.18) are conceivable. Still, we recall that the window of intermediate scales j_min < τ < j_max for the results of Sec. 4.3 is small, necessitating further analysis of the quantitative inconsistency between analytical and numerical results.
When the correction terms cannot be neglected, i.e. when the value of the action S_R is of order G, the spectral dimension is sensitive to the values of G and Λ, as the numerical results of Sec. 4.3 show. In the intermediate regime j_min < τ < j_max, one observes corrections of D_s^α to smaller as well as to larger values, depending on (G, Λ). Moreover, the window [α_min, α_max] is shifted depending on G and Λ. This suggests that for higher periodicities the correction term is predominant, introducing an intricate dependence of the intermediate dimension on the parameter values (α, G, Λ).
An integrable model with oscillations
Faced with the obstacle of computing the corrections of oscillating terms explicitly, we consider in the following a simplified integrable system. The results we derive support the intuition gained in the preceding part of this section.
The simplified model is an equilateral lattice. We find closed expressions for the heat trace in one dimension, though the results should extend to hypercubic lattices of any dimension, since the lattice heat trace factorizes [69]. On the one-dimensional lattice, it is possible to integrate the heat trace,

P_1d(x) := ∫_0^π dp e^(-x(1-cos p)) = π e^(-x) I_0(x) ,   (4.22)

where I_0 is the modified Bessel function. For this model, integration with a purely scaling measure (constant γ = γ_cons) gives a closed expression in terms of the generalized hypergeometric functions pF_q(a_1, ..., a_p; b_1, ..., b_q; z). 21 In the regime j_min ≪ τ ≪ j_max, this example gives the expected spectral dimension D_s = 2(γ - 1), presented in Fig. 22. In particular, one finds this value as the exact constant result for the integral with j_min = 0 and j_max = ∞. The example can still be integrated with an oscillating measure over the positive reals,

⟨P(τ)⟩_{γ,ω} := ∫_0^∞ dj j^(-γ) cos(ωj) P_1d(τ/j) .

Interestingly, the result turns out to be a function solely of the combined variable ωτ, up to a factor ω^(γ-1). As a consequence, the spectral dimension is also a function of ωτ. Surprisingly, even though there are no boundary terms, this spectral dimension shows a flow from 2(γ - 1) to D_s = 1, with an intermediate maximum at τ ≈ 1/ω directly followed by a local minimum already close to D_s = 1, see Fig. 22. For a finite upper integration boundary j_max > ωτ we would therefore expect no significant difference. The spin foam measure in Eq. (4.4) that we want to model has a correction which is a linear combination of a constant term and an oscillating one. Thus, we have to consider the spectral dimension of the combination a⟨P(τ)⟩_γ + ⟨P(τ)⟩_{γ,ω}, that is, the quantity

D_s(τ) = -2 ∂/∂ ln τ  ln( a⟨P(τ)⟩_γ + ⟨P(τ)⟩_{γ,ω} ) .

From earlier work [69] we know that a linear combination of two heat trace expectation values ⟨P(τ)⟩_γ and ⟨P(τ)⟩_γ′ with different scalings γ > γ′ leads to a spectral dimension of value 2(γ - 1) followed by a value 2(γ′ - 1) at larger scale τ. Something similar happens in the linear combination here when j_max ≫ 1/ω: as Fig. 23 shows, we see the oscillation peak of the ⟨P(τ)⟩_{γ,ω} part at ωτ ≈ 1, but after that the ⟨P(τ)⟩_γ part dominates. Together, this yields a spectral dimension which looks like the purely scaling part with an extra local maximum superposed at scale τ ≈ 1/ω. This is exactly the qualitative behaviour found in Fig. 16. In this way, the simplified example provides an explanation of the mechanism underlying the effect of a cosmological constant on the spectral dimension of spin foams.

21 For an explicit definition of the generalized hypergeometric function pF_q, see for instance [82].
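The closed form of Eq. (4.22) makes this model directly computable. A minimal sketch, with the radial integration performed in a logarithmic variable for numerical stability; the plateau value 2(γ - 1) is recovered numerically.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0e  # i0e(x) = exp(-x) * I_0(x)

def P1d(x):
    # Heat trace on the 1d equilateral lattice, Eq. (4.22):
    # integral_0^pi dp exp(-x(1 - cos p)) = pi * exp(-x) * I_0(x).
    return np.pi * i0e(x)

def P_scaling(tau, gamma, j_min=1e-3, j_max=1e6):
    # <P(tau)>_gamma with a purely scaling measure j^(-gamma);
    # substitute u = log(j) to tame the wide integration range.
    f = lambda u: np.exp(u * (1.0 - gamma)) * P1d(tau * np.exp(-u))
    val, _ = quad(f, np.log(j_min), np.log(j_max), limit=200)
    return val

# Spectral dimension from the log-derivative; in the regime
# j_min << tau << j_max it plateaus at D_s = 2(gamma - 1).
gamma = 1.75
taus = np.logspace(-1, 4, 120)
logP = np.log([P_scaling(t, gamma) for t in taus])
D_s = -2.0 * np.gradient(logP, np.log(taus))  # ~ 1.5 on the plateau
```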
Renormalization and the spectral dimension
From the perspective of classical general relativity, return probability and spectral dimension are reasonable observables. The return probability is coordinate independent since it is a trace of the heat kernel over the entire space-time. Equivalently, it is given by the exponentiated integral of the spectrum of the Laplace operator. Therefore it should be a suitable observable in the context of quantum gravity. Diffeomorphism invariance is typically broken in spin foam models, yet we expect its restoration to be tied to (a notion of) discretization independence [43-45, 52, 53, 83-86]. This is vital if we interpret the spin foam 2-complex as a fiducial object, providing a regularization of the theory. Then, predictions of the theory must be consistent for different regulator choices, i.e. the regulator can be removed and the observables remain well defined in a suitable refinement limit. In principle this logic should apply to the spectral dimension as well, yet it is more subtle.
To point out the particularities of renormalization in a background-independent setting, it is helpful to first consider the spectral dimension on two different discretizations of the same manifold, where one is a refinement of the other. For diffusion times much larger than the typical length scale of the triangulation, their spectral dimensions will agree with the continuum result. By definition, this scale is larger for the coarser triangulation; therefore its spectral dimension will deviate from the continuum result up to larger diffusion times compared to the finer triangulation. This is perfectly expected, as the coarser triangulation is ignorant of dynamics below its discretization scale. In the context of spin foams, however, this question is more intricate.
In a spin foam setting, consider two 2-complexes, where the coarser one arises from coarse graining the finer one. Let us additionally assume that we can compute the coarse graining flow of spin foam amplitudes, such that we can assign a theory to the fine 2-complex and its coarse-grained version to the coarse one. Crucially, in a background-independent theory, the discretization scale is not a parameter but part of the variables we are summing over. Thus, instead of the scale, the difference between the coarse and the fine theory is that the fine theory features more variables and can thus capture more configurations than the coarse one. Some of these fine configurations will correspond to representations of coarse configurations on the finer discretization, which one could relate via embedding maps, yet some configurations cannot be captured in the coarse case. Therefore, one would expect differences to occur when the spectral dimension probes these fine configurations, assuming one is using the same definition of the Laplace operator. This might be avoided by also coarse graining the Laplace operator, such that the coarse version effectively reflects how the scalar field probes the fine configurations, leading to a modified spectrum of the renormalized operator. 22 However, this procedure is ambitious and goes beyond the scope of this article.
Another method to employ a refinement limit is to study the same observable on finer and finer 2-complexes, with the same theory assigned to each complex. This is closer to the setup of this article, but strictly speaking not a renormalization procedure. The idea is to send the number of building blocks to infinity with the goal of identifying whether the observable converges under refinement, and further whether there are indications for a phase transition, if it is indeed an order parameter for this transition. In this way one might determine the set of parameters for which the original theory approximates a potential fixed point. Close to such a fixed point, discretization independence would be approximately satisfied. 23 Beyond the technical challenges, one must first define how to systematically refine the 2-complex in order to define such a limit (if it indeed exists). In our setup of N-periodic spin foams on hypercubic lattices, this is straightforwardly the limit N → ∞, which simultaneously removes the assumption of periodicity. To distinguish this limit from the refinement limit as determined by a renormalization procedure 24, we call the limit N → ∞ together with j_max → ∞ the thermodynamic limit. We discuss the implications of our analytical results for the thermodynamic limit in the next section.

22 The interpretation is that the coarse theory arises as an effective dynamics from the fine one after integrating out fine degrees of freedom. This would apply to the scalar field even though it is unphysical.
23 Conceptually, the ideas outlined here can be straightforwardly applied to 2-complexes without a boundary. For 2-complexes with a boundary, one still needs to relate boundary states in different Hilbert spaces to describe the same transition.
24 It is not straightforward to implement N-periodicity in a coarse graining procedure: imagine coarse graining an N-periodic spin foam such that the number of degrees of freedom is halved. Unfortunately, without additional restrictions the resulting spin foam will not be an N/2-periodic spin foam.
The thermodynamic limit
Several assumptions underlie the numerical results of Sec. 4, the strongest of which is clearly that of N-periodicity of the geometric configurations. Furthermore, we truncated the number of dressed vertex amplitudes to N and introduced an upper cut-off j_max for a feasible numerical implementation. Both the cut-off and the periodicity need to be removed for physically viable results, in a limit j_max → ∞ and N → ∞, respectively. In the following, we discuss these limits and the resulting interpretation of the spectral dimension. As shown, the cut-offs j_min and j_max mark the boundaries of the scales τ between which an intermediate spectral dimension is possible; outside these values it flows to the values zero and four, respectively. In the limit j_max → ∞, we therefore expect an intermediate regime extending to arbitrarily large scales τ, provided α lies in the interval [α_min, α_max]. Clearly, in the limit N → ∞, this interval shrinks to a single point α*, corresponding to the value at which the amplitudes are scale invariant. This is due to the fact that we are considering higher and higher powers of the amplitudes. Following [1], in the context of cuboids, this point marks a phase transition, since the spectral dimension changes discontinuously from 0 to 4 at α*. Since the degrees of freedom as well as the combinatorial length L = N are taken to infinity while keeping their ratio fixed, N → ∞ corresponds to a thermodynamic limit. Numerical results for 1-periodic frusta, presented in Sec. 4.1, are in alignment with the analytical formula in Eq. (4.18). Also, the interval [α_min, α_max] of admissible α-values for observing such an intermediate regime is supported by our results. Generalizing to 2-periodic or higher configurations, the parameter space of the theory extends, as the amplitudes now depend on (α, G, γ_bi, Λ). Qualitatively, for fixed (G, γ_bi, Λ), we still observe that α controls the value of the intermediate dimension and that the interval between α_min and α_max is smaller in comparison to the 1-periodic case. This suggests that a similar argument as in [1] applies: in the limit N → ∞, the spectral dimension exhibits a non-trivial flow only when α is tuned to some fixed value α*. Notice that, in comparison to cuboids, frusta amplitudes are, strictly speaking, never scale invariant. Consequently, α* is in general not given by α* = 3/4. Crucially, we observe that (G, γ_bi, Λ) impact the spectral dimension only in the intermediate regime, which is controlled by α. Conversely, outside the relevant α-interval, we did not observe these parameters inducing a change in the spectral dimension.

(Figure 24: Sketch of a critical surface Σ* in the parameter space of the theory with the γ_bi-direction suppressed. Given that α is fixed (blue plane), the intersection with Σ*, indicated by the red curve, marks the critical values of G and Λ.)
If we vary (G, γ_bi, Λ), the value α* might in principle change and become a function of these parameters. Geometrically, this would in general imply a non-trivial surface Σ* of co-dimension one in the 4-dimensional parameter space (α, G, γ_bi, Λ) which marks the phase transition in the limit N → ∞. As demonstrated in Sec. 4.3, the values of α_min and α_max are indeed dependent on G and Λ. 25 In the limit N → ∞, the interval of α's will shrink to a point, the value of which depends on G, γ_bi and Λ. Consequently, α*(G, γ_bi, Λ) defines a non-trivially embedded surface Σ* which, tentatively speaking, marks the critical surface of a phase transition.
Taking the perspective that γ_bi and α are fixed and non-flowing parameters 26, the intersection of the α- and γ_bi-hypersurfaces with Σ* yields the lines of (G, Λ) which we interpret as critical lines. A graphical intuition for this is presented in Fig. 24, with the γ_bi-direction suppressed. This intersection might also be empty, though, depending on the values of α and γ_bi.
In light of the previous discussion on renormalization, we emphasize that the large-N limit is not to be understood as a renormalization flow but rather as a removal of the N-periodicity assumption. Defining a spin foam renormalization flow requires a refinement of the combinatorial structure as well as a relation between coarse and fine geometric variables. In contrast, the large-N limit simply considers the addition of building blocks and does not affect the variables or the states. Consequently, the RG fixed points are in general not related to the "critical values" (α*, G*, γ_bi*, Λ*) of the limit N → ∞, but the latter might provide indications for such a fixed point.

25 We fixed γ_bi to render the simulations feasible. However, if not fixed by other arguments, such as black hole entropy matching [87], we expect that γ_bi plays a similar role as G and Λ, influencing the value of α*. We therefore assume in the following that α* depends also on γ_bi.
26 Matching the black hole entropy from LQG to the Bekenstein-Hawking formula requires γ_bi to take a specific value [87]. We remind the reader that α parametrizes an ambiguity in the face amplitudes. Choosing a specific model with a given face amplitude therefore corresponds to a fixed, non-flowing α.
Information about G, Λ and γ_bi from the spectral dimension
As demonstrated in Sec. 4.2 and Sec. 4.3, Newton's constant G and the cosmological constant Λ have an immediate effect on the flow of the spectral dimension. It is furthermore to be expected that γ_bi plays a similar role for frusta of N > 2. In principle, this could open the possibility of extracting the values of G, γ_bi and Λ in a given regime from a determination of the effective spectral dimension. In this section we discuss this possibility and its limitations. Consider first the 1-periodic case discussed in Sec. 4.2, where the Regge curvature vanishes and only the ratio Λ/G becomes relevant. Given that an intermediate regime exists, the spectral dimension will locally flow to a larger value in the vicinity of the scale at which the amplitude completes the first oscillation. This scale is directly related to the value of Λ/G. In principle, a given 1-periodic spectral dimension flow therefore provides insight into the value of Λ/G. This picture changes drastically when going to higher periodicities, where S_R ≠ 0. In these cases the flow of the spectral dimension is highly sensitive to the three parameters (G, γ_bi, Λ), and the effects do not seem to be localized around a certain scale. Again, a given measurement of the spectral dimension could, in principle, be used to attain partial knowledge of the values of these parameters. These suggestions come with various caveats and limitations, the most important of which we discuss in the following.
First, for computing the 2-periodic spectral dimension we have employed semi-classical amplitudes. Therefore, quantum effects have been neglected on small scales τ ∼ j_min. Such effects present themselves in two ways. As discussed in Sec. 4.1, quantum amplitudes show a non-constant, modified scaling behaviour. Following [51], oscillating quantum amplitudes also show a phase shift with respect to the semi-classical ones on low scales. Clearly, both of these quantum effects need to be taken into account when considering the spectral dimension as an observable quantity.
Second, the results of Sec. 4.3 suggest that the map between the parameters (G, γ_bi, Λ) and the corresponding spectral dimension is not injective, such that one cannot extract a unique triple (G, γ_bi, Λ) from a measured flow of D_s. Moreover, for N > 1, the amplitudes exhibit an intricate oscillatory behaviour which leads to a spectral dimension that is highly sensitive to the values of (G, γ_bi, Λ). Indeed, we expect that it is necessary to measure several observables to determine these parameters accurately. Nevertheless, knowing D_s for all scales contains a great deal of, albeit coarse-grained, information about quantum space-time, which should allow us to constrain the range of admissible parameters, e.g. to distinguish whether G is large or small.
Third, the triple (G, γ_bi, Λ) entering the semi-classical amplitudes in Eq. (2.13) consists of bare parameters. 27 Thus, under a renormalization flow, and under the assumption that we project back onto the original theory space, e.g. as in [31], it is in general to be expected that (G, γ_bi, Λ) flow as well. As stressed above, for physical viability, the spectral dimension must be considered at a fixed point of the renormalization group flow to ensure discretization independence of the result. The values of (G, γ_bi, Λ) at this fixed point are then the quantities that can in principle be observed.
An observable effect of the parameters on the spectral dimension requires the existence of an intermediate regime. As discussed in Sec. 5.2, in the thermodynamic limit j_max → ∞ and N → ∞, the values of (α, G, γ_bi, Λ) for which such a regime exists shrink to a point. At this transition point, the spectral dimension is expected to show a non-trivial behaviour. However, with the assumptions of a finite upper cut-off j_max and periodicity N, necessary for the computations in Sec. 4, determining the spectral dimension at the transition point is currently out of reach.
As a last caveat, we remind the reader that the model we consider here presupposes a Euclidean space-time signature. The kinematics of physical scalar fields is governed by the d'Alembert operator and not the four-dimensional Laplace operator. Hence, deviations between predictions based on Euclidean models and results from measurements are in general to be expected. In addition to Lorentzian effects, we recall that the Laplace operator was defined via its action on scalar test fields. Physical fields used for actual measurements couple non-trivially to the geometry of space-time and lead to back-reactions on the gravitational field, see e.g. [54]. As a result, the spectrum of the Laplace operator and therefore the return probability as well as the scaling behaviour of spin foam and matter amplitudes are modified.
Conclusions
In this work, we have studied the effects of quantum amplitudes and oscillations on the spectral dimension within the EPRL-FK model, restricted to N-periodic frusta geometries. This marks a significant expansion of previous work [1] on flat, non-oscillatory and purely semi-classical cuboid geometries [52]. To bypass the steep numerical costs of computing quantum frusta amplitudes already at small spins [51], we have presented in Sec. 3 a method to extrapolate quantum amplitudes for N = 1 to large spins, serving as an improved approximation compared to the semi-classical amplitudes, in particular at low spins. This marks the first result of our work.
Computing the spectral dimension with respect to extrapolated amplitudes, we find additive corrections at low scales compared to the semi-classical results. Supported by analytical computations, these quantum corrections can be traced back to a modified effective scaling behaviour of the extrapolated amplitudes, constituting our second result.
As a third result, we have found that curvature induced by a cosmological constant Λ yields additive corrections to the 1-periodic spectral dimension at scales τ ∼ 1/√Λ. We have shown that such effects of Λ can also be understood qualitatively by considering the effective scaling of the amplitudes. Furthermore, we have given an explanation of the mechanism underlying these results in terms of a simplified integrable model with such an oscillating measure.
Finally, we found that 2-periodic amplitudes with an intricate oscillatory behaviour lead to a flow of the spectral dimension which depends on the full set of parameters (α, G, γ_bi, Λ). Summarizing, our numerical and analytical results show that curvature is an essential factor for the spectral dimension and requires further study.
In an overarching perspective, the results of the present work are an intermediate step towards understanding the spectral dimension of more general quantum geometries. Spin foam frusta, with their inherently high degree of symmetry, present a strong restriction of the quantum geometry compared to the general case. Retaining hypercubic combinatorics, a feasible scenario would be to construct a more general restricted model, which however quickly becomes cumbersome, while the numerical challenges in the quantum regime remain. Furthermore, there is evidence that the EPRL-FK model for higher-valent vertices differs from the one defined on triangulations in the implementation of the simplicity constraints [89,90], and that geometric critical points with torsion and non-metricity exist [52,91]. Therefore, if all restrictions on the geometry are lifted, it is advantageous to work directly on triangulations, where the semi-classical amplitudes are well studied [32,33,92-96] and more numerical methods are available and in development [35-37, 41, 47].
In the following, we briefly discuss the challenges one faces when defining the spectral dimension on unrestricted spin foams on a triangulation.
• N-periodicity on a triangulation: The introduction of N-periodicity [1] is highly beneficial for reducing the computational effort. In particular, formulating the spectrum of the Laplace operator in momentum space is far more efficient than directly diagonalizing the Laplace operator and can be straightforwardly generalized to larger 2-complexes (see the sketch after this list). The notion of N-periodicity is tied to combinatorial "directions", which are naturally defined on a hypercubic lattice. On 4-dimensional triangulations these intuitions are not applicable and N-periodicity is not straightforwardly defined.
• Vector geometries in the semi-classical limit: The semi-classical amplitudes for a 4-simplex exhibit different types of critical points, depending on the boundary data [32]. While Regge geometries present one class of solutions, so-called vector geometries [32] contribute with the same degree of polynomial decay. Since such configurations do not correspond to geometric 4-simplices, their 4-volume is not defined, and it is not obvious how to generalize the definition of a Laplace operator.
• Non-matching simplices / complex critical points: The restriction to cuboid or frusta geometries is special, as each 4d building block is evaluated on a critical point and glued along matching 3d polyhedra. In recent years, there has been growing evidence that configurations beyond real critical points must be taken into account for large, but finite spins. In effective spin foams [37,40] and the hybrid algorithm representation [36] these are parametrized as geometric but non-matching simplices, i.e. the shared 3d building blocks have different shapes as seen from their 4d building blocks. Alternatively, such configurations are called complex critical points [41,50]: such configurations can be interpreted as exhibiting non-vanishing torsion and play an important role in understanding and circumventing the flatness problem [37,50,97-100]. Following our definition, the Laplace operator for such a non-matching configuration might not be symmetric any more.
• Pre-geometric configurations: Beyond a modified scaling behaviour, the deep quantum regime of spin foams additionally features pre-geometric configurations which are not peaked (as coherent states) on the shape of semi-classical polyhedra. It is an intriguing question how a scalar test field can probe such a quantum geometry.
• Numerical challenges in and beyond the quantum regime: Although the computation of quantum amplitudes is more feasible in the general simplicial case, the mere number of ten spin configurations per vertex presents serious numerical challenges.
• Lorentzian signature: The studies of the spectral dimension presented here essentially examine a diffusion process on Euclidean quantum space-time. It will be interesting to see whether these concepts can be generalized to Lorentzian test fields probing a Lorentzian quantum space-time, see e.g. [101] for an implementation in causal set theory. For the gravitational side, choosing the Lorentzian EPRL-FK model might be beneficial compared to the Euclidean one: semi-classical Lorentzian 4-simplices feature oscillations proportional to γ_bi.
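To illustrate the momentum-space strategy mentioned in the N-periodicity bullet above, the following minimal sketch computes the return probability and spectral dimension for a flat, unit hypercubic 4d lattice with periodic boundary conditions. It deliberately omits all spin foam amplitudes and geometric weights (the quantities that make the frusta computation expensive); the lattice size, the diffusion times and the unit spacing are arbitrary illustrative choices.

import numpy as np

# Spectral dimension of a flat, unit hypercubic 4d lattice (torus),
# computed via the momentum-space spectrum of the lattice Laplacian.
N, dim = 32, 4
k = 2.0 * np.pi * np.arange(N) / N            # lattice momenta per direction
lam_1d = 2.0 * (1.0 - np.cos(k))              # 1d Laplacian eigenvalues

taus = np.logspace(-2, 4, 400)
# The flat Laplacian is a sum over directions, so the return probability
# factorizes: P(tau) = [ (1/N) sum_k exp(-tau * lam_1d(k)) ]**dim.
P1 = np.exp(-taus[:, None] * lam_1d[None, :]).mean(axis=1)
P = P1 ** dim

# Spectral dimension D_s = -2 dlnP/dln(tau), via finite differences.
Ds = -2.0 * np.gradient(np.log(P), np.log(taus))

# Expected behaviour: D_s near 0 at lattice-scale tau (discreteness),
# a plateau near 4 at intermediate tau, and a drop back to 0 once the
# diffusion wraps the torus (the zero mode dominates).
for i in range(0, len(taus), 80):
    print(f"tau = {taus[i]:9.3f}   D_s = {Ds[i]:5.2f}")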
We expect that several of the features mentioned above will have an impact on the spectral dimension, which we currently cannot estimate. In particular, it will be interesting to see whether the effective scaling of the amplitudes can still explain the behaviour of the spectral dimension if pre-geometric configurations are considered. Therefore, it is imperative to lift the restrictions of the frusta model and work towards exploring the spectral dimension on unrestricted spin foam quantum geometries.
A possibility to define N -periodicity in a simplicial context could be to triangulate N -periodic cubical lattices. There exist several inequivalent options to realize that, two examples of which are given in [102] and [103], respectively. Geometrically, these configurations correspond to different triangulations of the torus. Explicitly setting up N -periodic triangulations and computing the spectral dimension thereof is left for future work.
Within the context of spin foam frusta, a conceivable method to compute the spectral dimension for higher periodicities and larger cut-offs j_max is to restrict to configurations with small dihedral angles, e.g. via a Gaussian. Previous work [1] supports the conjecture that this restriction still captures the relevant geometric information. If valid, this reduction of configurations would greatly simplify the numerical effort required to compute D_s, enabling exploration of regimes currently out of reach. Also for less restricted geometries on a triangulation, a linearization around flat configurations could be advantageous, as this simultaneously simplifies the form of the amplitudes and the Laplace operator.
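To make the idea concrete in formulas, one hypothetical realization (our illustrative choice, not fixed by the model) would be to insert a damping factor

w(θ) = exp(−(θ − θ₀)² / (2σ²))

for each dihedral angle θ appearing in the sum over configurations, with θ₀ the flat value and σ a width controlling how sharply the state sum is concentrated on small-curvature geometries; σ → 0 recovers the flat sector, while σ → ∞ removes the restriction.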
Going beyond small periodicities is numerically challenging regardless of the specific model at hand. This is even more true for triangulations without geometric restrictions, where we must consider vastly more variables compared to the symmetry-restricted frusta cases. To reliably compute expectation values of observables, it is therefore imperative to use a numerical method that does not scale exponentially with the number of variables of the system. In many areas of physics, Monte Carlo methods serve this role, yet they cannot be readily applied in spin foams. Due to the oscillatory nature of spin foam amplitudes, the spin foam partition function might not be naively usable as a probability distribution to sample from. A way around this might be to define Markov Chain Monte Carlo on Lefschetz thimbles of the spin foam partition function [39], where the integration contour is changed such that the imaginary part of the action is constant and thus non-oscillatory. Alternatively, a potential strategy might be to propose a new probability distribution to sample configurations with. Recently, random sampling of bulk quantum numbers was applied to approximate spin foam amplitudes with many bulk faces in [49].
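The obstruction to naive sampling can be made tangible with a toy example. The sketch below reweights samples drawn from the modulus of an oscillatory "amplitude" and records the residual average phase, the quantity controlling the signal-to-noise ratio of any reweighted observable; the Gaussian weight, the value of lam and the dimensions are arbitrary stand-ins with no spin foam meaning.

import numpy as np

# Toy sign problem: estimate the average phase of the oscillatory weight
# W(x) = exp(-|x|^2/2 + 1j*lam*|x|^2) by sampling from |W| (a Gaussian).
# The exact average phase has modulus (1 + 4*lam**2)**(-d/4), which decays
# exponentially with the dimension d, so reweighted estimates drown in
# Monte Carlo noise long before d gets large.
rng = np.random.default_rng(1)
lam, n = 2.0, 100_000
for d in (1, 4, 16, 64):
    x = rng.normal(size=(n, d))
    est = np.exp(1j * lam * (x**2).sum(axis=1)).mean()
    exact = (1.0 + 4.0 * lam**2) ** (-d / 4.0)
    # For d = 16 and d = 64 the estimate sits at the noise floor
    # ~ 1/sqrt(n), far above the true value -- exactly the failure
    # described in the text.
    print(f"d = {d:3d}   |estimate| = {abs(est):.2e}   exact = {exact:.2e}")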
To conclude, the spectral dimension remains an intriguing observable in spin foam models that is far from understood. Our results show that its value at small scales depends on all the parameters α, G, γ_bi, Λ, though the specific relations are sensitive to the restricted model under consideration. Many effects such as pre-geometric configurations might alter its behaviour. Hence, our work shows how the spectral dimension of spin foam geometry can provide us with a deeper, coarse-grained understanding of quantum space-time while at the same time allowing us to connect to other approaches of quantum gravity as well as continuum physics.
"year": 2023,
"sha1": "9737918816ed9304f32e8bba06dae72813e27fe6",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "9737918816ed9304f32e8bba06dae72813e27fe6",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Understanding the Digital News Consumption Experience During the COVID Pandemic
During the COVID-19 pandemic, people sought information through digital news platforms. To investigate how to design these platforms to support users' needs in a crisis, we conducted a two-week diary study with 22 participants across the United States. Participants' news-consumption experience followed two stages: in the seeking stage, participants increased their general consumption, motivated by three common informational needs, namely to find, understand and verify relevant news pieces. Participants then moved to the sustaining stage, in which coping with the news emotionally became as important as their informational needs. We elicited design ideas from participants and used these to distill six themes for creating digital news platforms that provide better informational and emotional support during a crisis. Thus, we contribute, first, a model of users' needs over time with respect to engaging with crisis news, and second, example design concepts for supporting users' needs in each of these stages.
Such threats to users' well-being may be exacerbated by a crisis. A large and diverse body of literature has investigated people's experiences with news media in many contexts, documenting, for example, where people get their news [90], what news they find credible [31], what they share on social media in times of crisis [62], and how news about disasters spreads [43]. Prior work has also shown that specific patterns of news consumption during a crisis are linked to anxiety and depression [14,74,85]. Here, we build on this substantial foundation by: 1) eliciting user needs with respect to consuming news in times of crisis, and 2) identifying design principles for meeting these needs.
Specifically, we examine people's experiences using news platforms during a two-week window in the early stages of the COVID-19 crisis in the United States. Americans' consumption of news and current events increased 215% in response to COVID-19 [76], signaling a clear demand for designs to support news consumption in this context. To better understand how designers can best support this scenario, we ask three research questions:
RQ1. How, if at all, did people's patterns of engaging with digital news change during the COVID-19 crisis?
RQ2. What user needs drove these patterns of engagement?
RQ3. What designs are likely to support these needs?
Our ultimate goal was to inform the design of interactive news platforms, such as news apps and websites (RQ3). To do so, we examined people's information-seeking experiences holistically (RQ1, RQ2), drawing on the construct of "digital journalism" [32,70], which includes news websites, alternative journalism sites, blogs, social media posts, news applications, and individual and group announcements, and looked at their engagement patterns broadly.
To answer these research questions, we conducted a two-week diary and interview study in April of 2020 with 22 participants, distributed geographically across the United States. All participants lived in areas with severe rates of COVID-19 disease activity at the time of the study, including New York, Michigan, and Washington. The study began with semi-structured interviews, in which we probed participants' news consumption experiences during the pandemic.
Participants then completed a two-week diary study in which they recorded their daily news consumption behaviors, reflected on how these behaviors made them feel, and documented design ideas that occurred to them in response to their in situ experiences. Drawing on themes that emerged across diary entries, the research team created prototype platform designs. Participants then provided feedback on these design concepts and shared reflections in a final exit interview.
From our findings, we derived a formative conceptual model representing two stages of crisis news consumption: a seeking stage and a sustaining stage. In the initial seeking stage, participants sought more information, invested increased time, and strove for greater variety of sources than they had in the past. They also relied more on local news. In this stage, informational needs (the need to find, understand, and verify relevant news pieces efficiently) drove participants' news-engagement behaviors. As the crisis developed, people became exhausted and overwhelmed by the unrelenting inflow of information, especially negative news. Their news-engagement behaviors then shifted to a sustaining stage, in which they tried to reduce their consumption, form intentional and bounded news-checking habits, and find a balance between "being informed" and "getting too much." In this stage, emotional needs (including the need to contain the time they spend with the news, cope with the negative sentiments it provoked, and connect with their family and friends around news items) became as important as informational ones.
Based on this conceptual model and on design feedback from participants, we created example interface components that designers might leverage to support the informational and emotional needs that arose for participants during this prolonged crisis. These components include, for example, a bounded sandbox for crisis-related news, visuals foregrounding the time a user spends reading the news, spatial separation of opinion pieces, and sentiment tags for articles. In short, we contribute:
• Empirical data about users' in situ experiences with news media over time during the COVID-19 crisis, which informed a taxonomy of common needs with respect to interactive news media. This in turn led to a conceptual model of digital news consumption during the COVID-19 crisis.
• Exemplar designs, developed iteratively with users, illustrating how interactive news platforms might support people in each of the dimensions documented in the model.
Emerging literature (e.g., [23,24,74]) has begun to document people's information seeking and engagement with mainstream media during the COVID-19 pandemic. This work has shown, for example, that people increased their news consumption during the pandemic [23] and checked more local news [24]. However, this prior work provides only point-in-time snapshots of people's experiences with the news, rather than following the arc of their reflections and needs as the crisis wore on. It also does not attempt to draw design implications based on these experiences. Thus, we corroborate and expand on these past findings, and we contribute the model of user needs that emerged over time along with user-centered design guidance for creating interactive news media platforms to meet these needs.
RELATED WORK
One prominent characteristic of current news consumption is the shift from traditional paper and television formats to interactive online sources, including social media platforms [15,70,75]. Interacting with news content (such as commenting on and sharing a news post) has become part of the consumption experience and broadened the way people engage with news. Although these new platforms provide more timely and varied news compared to traditional media, users struggle to extract useful articles from the huge amount of available information. In this section, we review previous work on people's digital news consumption, negative experiences with online news, and news consumption during times of crisis.
Understanding Online News-Reading Behaviors
Over the past decade, the internet has steadily become more entrenched as a first-class news source for many people. In 2012, the average American browsed five news pages per month [37], but by 2019, internet news consumption had increased significantly, and a series of surveys conducted by the Pew Research Center [90] showed that over 53% of Americans got their news from online sources like websites and social media platforms. In 2019, Bentley et al. [15] collected browser logs from 174 participants, finding that 23% of all web-browsing sessions included views of news pages, and 23% of this traffic originated from social media platforms. Most participants took in news from sources that presented differing perspectives.
A rich body of work has also investigated people's engagement patterns with online news in general. In one of the earliest investigations in this space, Aikat [4] studied online news-reading behaviors on two news websites in 1998, describing patterns that continue to apply to today's news environment. Readers visited news websites more often during working hours than outside of work, and they usually skimmed headlines with short dwell times. More recently, Diakopoulos and Naaman [29] studied the online news comments on sacbee.com, finding that people had different motivations for writing comments, including to share information, to express opinions, to debate or entertain, and to socially interact with others. Morgan et al. [71] investigated news-sharing behaviors on Twitter, and found that the more tweets a user sent, the more ideologically diverse the shared news items were.
Despite the public's eager engagement with digital news, this mode of consumption has also introduced new threats to users' mental wellbeing.
For example, negativity bias [99] describes people's tendency to focus more on negative news than positive news. This bias, coupled with algorithms that prioritize content that attracts users' attention, has led to systems that are more likely to surface highly emotional content and make sensationalized content viral [106]. Similarly, the huge amount of available news on digital platforms can lead to information overload, wherein a taxing amount of information erodes mental health and inhibits decision-making [21]. When overloaded with news, people feel less confident about whether they have found the news they want [80], tend to shut down cognitively, will deny the need to continue to consume news [5], and will eventually cut back on news engagement [80]. Given the negative ways in which digital news platforms can impact users' wellbeing, we sought to understand not only how people used these platforms in a crisis, but also how the experience made them feel. Specifically, we found that as the crisis wore on, news engagement became more emotionally taxing, leading people to create coping strategies, such as forming new habits and seeking out positive content.
Digital News in Times of Crisis
Every crisis is different, and the way in which people experienced the COVID-19 pandemic may or may not align with people's experiences of other crises. According to the sociology of disaster research (p. 50 in [82]), crisis events can be categorized along several dimensions [77], including the type of hazard (e.g., natural vs. human-induced), temporal development (instantaneous vs. progressive), and geographic spread (focalized vs. diffused). Previous research has found that information flow varies substantially across these different crisis types [55,77]. However, during crisis events, social media is often an important information source, and has been identified as a primary news resource [7]. The practice of using social media to actively engage with news information is also referred to as "citizen journalism" [9], where public citizens "[play] an active role in the process of collecting, reporting, analyzing, and disseminating news and information" [18]. For instantaneous and natural crises such as earthquakes and floods, researchers have identified six main ways in which people leverage social media platforms such as Twitter and Weibo during a crisis, including: providing emotional support, identifying affected individuals, sharing information about donations and opportunities to volunteer, providing caution and advice, informing others about infrastructure and utilities, and reporting useful information [77,84].
Official organizations have also increasingly adopted social media platforms for making announcements in such crises [28,52]. In those types of crises, previous research demonstrated two roles of social media platforms: 1) because of the immediacy of social media, people tend to rely on information from these platforms to understand the situation and obtain resources; and 2) because everyone can engage and participate, social media supports collaborative sense-making and the co-creation of knowledge during a crisis [61,64,79,94], such as information sharing and seeking, the talking cure, and understanding the "why" [43]. Inspired by these previous studies, we sought to understand participants' sense-making experiences during their news consumption in COVID-19, and we identified several informational needs in the process.
For progressive crises such as pandemics (including COVID-19), people also used social media and other news platforms to support their risk assessment and long-term decision making. For example, Gui et al. [40] investigated how people used social media to make travel-related decisions during the Zika-virus outbreak. They found that local residents and previous travelers offered crucial resources that were absent from the news and other formal channels.
People also began to form new consumption behaviors during long-term crises, such as proactively looking for actionable news items [77,83] and shifting their news interests [24]. On the other hand, the long-lasting and intense news reports around the crisis were also negative in tone [87], and could easily lead to information overload [3,10] and mental health struggles like depression and anxiety [14,25,33,34,74,85] compared to consumption in normal times [23]. In this project, we specifically looked into the negative effects caused by news consumption and how these experiences revealed participants' emotional needs.
To date, this prior literature has primarily included surveys and interviews to understand people's experiences consuming COVID-19 news and information. We expand on this foundation, first, by capturing in situ data about news consumption using diary methods and by deriving a conceptual model that can potentially be applied to a broad spectrum of news-consumption behaviors during times of crisis. We then draw upon this foundation to investigate how to design a positive digital news consumption experience in such a crisis.
Designing Supportive Information Platforms in Times of Crisis
Given the problems and unique patterns of information consumption during crises, researchers have proposed designs and systems to improve information platforms, including digital news and social media platforms, to support users' needs in times of crisis. One major focus in this area is combating misinformation: prior work has examined people's trust in online news specifically in times of wildfire crisis, finding that family and friends were a more trusted information source than mass media [92], indicating that platforms should support group- and community-based information-sharing mechanisms. Researchers have also investigated the design elements that affect users' perceptions of the credibility of a news source [31,36,49,103]. Specifically, Bhuiyan et al. [17] proposed design practices such as presenting appropriate details in news articles (e.g., showing the number and nature of corrections made to an article) to promote transparency and trust.
Previous research has also focused on design mechanisms to deliver high-quality news content. For example, functionalities such as comments and upvoting allow users to perform network gatekeeping and to organize and validate breaking news feeds on Reddit [58]. Zhang et al. [105] reviewed 668 visualizations in COVID-19 news articles and synthesized themes that crisis visualizations should focus on. However, most of the previous work focused on designing specific components (such as visualizations or commenting) for digital news platforms, and mainly on improving the informational quality (such as clarity or credibility) of the news content. In this paper, by examining how news-consumption behavior changed over time, we identified both informational and emotional needs during a crisis, and we worked with participants to develop design ideas spanning the whole spectrum of digital news consumption.
METHOD
To understand people's online news consumption experience during the COVID-19 pandemic, we conducted a two-week diary study with 22 participants geographically distributed across the United States. The diary study was preceded by an initial semi-structured interview and followed by a final design-feedback interview. The study was approved by our university's Institutional Review Board.
Participants and Recruitment
We conducted the study from April to June, 2020, a period that roughly corresponded to the first peak in COVID-19 disease activity in the United States. We aimed to include participants with a mix of different backgrounds (e.g., based on ethnicity, age, and location) to elicit a wide variety of news-reading practices. Thus, the research team posted recruitment advertisements on multiple city and state-specific subreddits and on several institutional mailing lists.
Inclusion criteria were that the participant be over 18 years old, fluent in English, and use at least one technology device daily. We did not add specific criteria regarding news consumption, as we wanted to include participants with a wide range of news-reading habits.
All participants lived in places where COVID-19 disease activity was very high and "stay-at-home" orders had been issued by the state government. All participants owned a smartphone and most used smartphones and computers regularly. All participants gave their consent to participate in the study.
Procedure
Each participant first attended a one-hour semi-structured entrance interview remotely via Zoom, focusing on research questions RQ1 and RQ2. After explaining the procedure and the purpose of the study, we asked the participant about their technology use and news consumption behavior before the pandemic, including how they accessed the news, how frequently they read the news, whether they engaged with the news or not (for example, by sharing or commenting on articles), and how they felt about these experiences before the pandemic. In keeping with the definition of digital journalism, we gave participants the freedom to bring up any "online news" experiences that were salient to them [30]. We then asked questions about their online news consumption during the pandemic, including changes to the news sources they depended on (if any), changes to their engagement (if any), their feelings about these changes (if any), and the challenges they encountered (if any). Finally, we asked them to pick one news-related application they used daily and demonstrate the steps they went through when checking news. A complete interview script is provided in Appendix B. During this walk-through demonstration, participants were encouraged to describe how the features of the app affected their experience, either in a positive or negative way. Interviews were video-recorded, and participants were compensated with a US $20 Amazon gift card for completing this portion of the study.
After the entrance interview, all participants took part in the diary study, in which they completed a daily survey for two weeks. Survey questions are shown in Appendix C. The questions focused on three topics: 1) the participant's news consumption, 2) how technology affected the participant's feelings about their news consumption, and 3) how they wanted to improve their news-consumption experience. Participants could also attach a screenshot to document features they liked or disliked, to illustrate design ideas upon which they could elaborate. To encourage high-quality answers and retention, we compensated participants only if they completed the survey more than five days per week (all participants met this threshold in our study). Participants received Amazon gift cards worth US $20 for the first week and US $40 for the second, with a gift card worth US $10 as a bonus for each week where they provided particularly detailed answers.
Based on the data from the entrance interviews and the diary study, we derived design themes for improving the crisis-time news consumption experience. We then created interface sketches demonstrating those themes and evaluated them in the exit interview of the diary study. Participants were compensated with a $20 gift card for the one-hour interview. In all, participants could be compensated with a minimum of $100 and a maximum of $120 for completing all three stages.
Data Analysis
In all, we collected data from 22 interview sessions and 275 diary entries. We transcribed all interviews and coded transcripts and diary responses using an inductive analysis approach [27]. For each participant, we also examined changes in habits and in feelings throughout the two weeks. After individually reviewing all data, three members of the research team met collaboratively online to discuss themes that emerged. The team refined these themes by reviewing specific examples together, leading to an initial codebook. Each entry in the codebook included the category name, an explanation, a list of defined subcategories, and example quotes for each subcategory. The first author then coded the data using this preliminary codebook and refined its categories. The final codebook contained four categories (behaviors, feelings, coping strategies, design-ideas), and 41 nested subcategories (e.g., under behavior were increase in consumption time, increase in resource diversity).
We conducted a separate analysis of the design ideas we collected from the participants through entrance interviews and diary entries. We reviewed all designs and then constructed affinity diagrams [48] to organize similar designs into coherent themes. The diagrams were created via Miro, an online collaboration platform. The final diagram contained 36 subcategories organized into six themes (e.g., filtering and customization, organizing information, social factors). We used these themes to create interface sketches, which we presented to participants for feedback in the exit interview (see Section 7.7). We conducted exit interviews with 17 participants, and their feedback was also coded via thematic analysis and affinity diagramming.
A CONCEPTUAL MODEL FOR CRISIS NEWS CONSUMPTION
Across the themes that emerged from our data, we found that participants' news consumption clustered into two different temporal stages (see Figure 1). At the beginning of the crisis, participants expressed a common set of behaviors and needs that reflect what we term the Seeking Stage. In this stage, participants increased their consumption time and the diversity of the sources they sought out, and they focused on local news and sought out actionable activities. As time went on, participants shifted into what we term the Sustaining Stage, wherein new emotional needs emerged as a result of the prolonged crisis and participants' new news-consumption behaviors. Feeling saturated with an excess of crisis-related news (news that was often negative, irritating, or sad), participants said they began to look for positive news pieces for emotional relief. In this stage, emotional needs became the main focus, as participants wanted to feel connected to others and in control of their information consumption. In the following sections, we present the data that informs this model, describing the behavior changes (Section 5) and the main needs (Section 6) of the two stages in more depth.
NEWS-CONSUMPTION BEHAVIORS DURING THE PANDEMIC
We first describe how our participants changed their news consumption behaviors during the pandemic. Along with these changes, participants described what they valued and what they found challenging when engaging with news platforms, and we describe these user needs in Section 6.
In the Seeking Stage, participants reported three common ways in which their news consumption changed in response to the pandemic: an initial spike in consumption time, an increase in the variety of news sources, and a greater focus on local news. After consuming enough news in this first stage, participants moved to the Sustaining Stage.
The most salient change in this stage was increasing positive engagement and resisting negative engagement. We elaborate on each of these themes below.
Seeking Stage: An Initial Spike in News Consumption
All participants mentioned increasing their time with digital news as the pandemic took hold. Due to stay-at-home orders, people spent more time in their homes and on technology devices, giving them greater opportunity to check the news just as the crisis became urgent. At the beginning of the pandemic, participants mentioned seeking information actively, in contrast to their prior, passive approach to engaging with the news. One reason for this change was that at the start of the crisis, they wanted to learn as much as possible about this completely novel situation created by a new pathogen. For example, P3 set a dashboard of COVID cases in the U.S. as her homepage so that she could track the numbers all the time. P12 explained that she had become more likely to "seek out" content proactively: "before COVID my husband and I, at night, we would watch, like, a TV show on Netflix. But now when he's done with work, you know, he wants to see what is going on in the world." People also explained that they began actively checking the news to find resources and plan their personal lives. For example, P16 described "planning my life around it (the news)" and started to hoard toilet paper when he saw the related tweets. P11 started to check news about flights and hotels, as she had future travel plans. P5 constantly checked for pandemic-related updates on local government websites to "plan out, like, if I have to go to the grocery store and it's in a certain city". They also described being more likely to share content, such as sharing resources with older family members, sharing official information to combat the spread of misinformation, and sharing articles to raise awareness about the pandemic.
P4 mentioned reading news and sharing resources with her older relatives who did not use technology: "I was also checking a lot of the news websites because I wasn't just purchasing for myself. I was also trying to make some purchases for my relatives in another state. Because they are old, they don't know how to order anything online."
Seeking Stage: Increases in the Variety of News Sources
We also found that participants increased the variety of their news sources. At the time of the study, all participants were using more than one platform to access news about COVID-19. According to diary responses, 18 participants used news websites and services, such as The New York Times and Apple News, and 18 participants used social media platforms such as Reddit and Twitter. Although text was the most common format for the news participants consumed, nine participants also listened to podcasts, and eight participants watched online videos. For example, P10 mentioned that she started to listen to podcasts when she was exercising, as it did not require her full attention. P18 started watching videos on YouTube because she could "see what's going on and feel the emotions." Participants also increased their use of social media, especially Twitter: eight participants mentioned starting to use Twitter to access the news or using it more often than they had in the past. Participants explained that Twitter offered a convenient way to access local news and updates from official organizations in real time: "I also started to use Twitter to follow a whole bunch of local government accounts to get local news. I never used Twitter for news before, but now on Twitter, like, I followed the Seattle Times and the local Bellevue..." Six participants also started to subscribe to multiple media platforms with different perspectives. Other participants said that they started checking the news from "Governor Cuomo" (P15), "certain countries like Italy because of friends" (P13), "state websites like Way County" (P5) and numerous other locally specific sources.
Sustaining Stage: Increasing Positive Engagement and Resisting Negative Engagement
All participants said that after an initial spike in news consumption to learn about the virus and ways to prevent it, they intentionally began to take steps to manage their news consumption and to engage with the news in more bounded, more positive, and more interpersonal ways. For example, three participants mentioned volunteering to help others by answering their questions on social platforms. P10, an attorney who did a lot of research on employment policies during the COVID-19 crisis, started to answer questions about unemployment on Reddit. P1 provided answers on Reddit to feel connected with others.
In other instances, people said they made changes to cut down on or manage their usage. Overall, sixteen participants reduced the time they spent consuming news (i.e., their second week average time was less than the first week), three participants maintained similar levels of consumption time, and three increased their consumption time. Similarly, three participants mentioned that they made an intentional shift to form regular news-checking habits instead of endlessly scrolling, as a way to stay "in control" (P7) of their news consumption. And P11 started to listen to more broadcasts when cooking and exercising, so that she could focus on other activities while also getting informed.
Participants also shifted the content they consumed, more actively seeking positive content as time went on. P10 and P17 both mentioned that they started to look for positive news, even though they were interested in "depressing news" before COVID. P3 and P7 started to consume more "research"-style news, such as lectures and talks about COVID-19, that were less sensationalized and emotionally charged.
At the same time, participants actively tried to reduce patterns of engaging with the news that affected them negatively. Participants mentioned experiencing information overload and started to decrease their consumption intentionally. For example, six participants mentioned that they tried to avoid political and opinion-based news. P16 explained in one diary entry that he found he had "spent entirely too much time arguing about COVID-19 and debating with people today," something he said would have been unusual before the pandemic. In response, he tried to consume less news by not using his computer. P5 also found that she was irritated by her friends on Facebook sharing opinions she did not agree with. At first, she tried to post long and thoughtful comments, finding that she "felt more dread today because I encountered more resistance-type mentality." As the pandemic went on, she reflected on her engagement and started to step back from the comments and posts. At the end of the diary study, she wrote "I felt okay because my engagement was lighter and more carefree. So long as I don't take others' opinions seriously, I won't be as affected."
NEWS-CONSUMPTION NEEDS DURING THE PANDEMIC
Participants reported a variety of benefits and challenges arising from their experience with digital news. These illuminated several underlying needs, which clustered into two broad categories: informational needs and emotional needs. Specifically, informational needs were the main drivers of users' behaviors in the Seeking Stage, as the crisis began and participants tried to form a comprehensive understanding of the crisis and how they might respond. As time wore on and participants continued to face an exhausting stream of largely negative news, their emotional needs began to emerge and, along with informational needs, became a primary driver of their behaviors in the Sustaining Stage.
Seeking Stage: Informational Needs
Participants wanted to efficiently get information that mattered to them in the Seeking Stage. Although participants said they had more time to spend checking the news during the lockdown, the explosion of information and constant updates created enormous barriers to finding and digesting useful and truthful information efficiently. P12 mentioned that she kept the news playing on her television all day so she would not miss anything. As participants confronted this flood of information through digital platforms, they consistently struggled with 1) finding, 2) understanding, and 3) verifying the information they encountered.
6.1.1 Finding. Fourteen participants explained that they struggled to sift through the sea of possible articles, and they found that it was not always easy to find the news that mattered to them. As they used multiple platforms for digital news, they frequently encountered redundant content. For example, although P17 intentionally cycled through different websites to ensure she engaged with content from different perspectives, she found that achieving this well-informed view required wading through repetitive coverage: "Most of the sites have similar news stories, like they might be in a headline position on one site and down lower in another website. I'm always hoping to find some information on one site that's not anywhere else." (P17) Participants also struggled to find content that was useful and actionable, something many said they were seeking.
They explained that they felt informed after reading the news, but they also felt helpless, as they did not know what to do about the upsetting information they encountered. To combat this, participants continued to look for news, but curated what they saw; for example, P16 muted many news media accounts on Twitter because they only reported the death counts without providing useful resources. Some participants went directly to government websites trying to find the relevant resources they were looking for. However, they also found that these websites were "clumsy" and not updated frequently; P10, for example, described this frustration when checking for policies on paid sick leave on a government website. When participants turned to social media for news, they continued to struggle to sift through content and prioritize information they were interested in. They explained that the lack of coherent organization and categorization of posts made it difficult to zero in on particular topics or make sense of the landscape of information holistically. P11 followed many accounts on Twitter, but she found that it was hard to track them, as there was no way to prioritize and organize the feed by topics.
Collectively, these examples illustrate participants' common desire to easily separate the information they are looking for from everything else. The difficulty they encountered in doing so led them to spend a frustrating amount of time digging online, and in some instances, to avoid consuming news in a noisy environment.
6.1.2 Understanding. Even when participants found a news article they thought would be useful, they (n=11) often struggled to understand its contents. For news on Twitter, content was limited to 280 characters. Without further context, this information was often incomplete or even misleading. On the other hand, news pieces containing too much content also caused participants frustration, because the useful points were buried in a huge amount of information.
P21 mentioned that he "hated long articles" because he lacked patience; P7 mentioned that he preferred clips rather than the whole segment when watching video news.
Participants frequently cited statistics and diagrams as elements that improved their understanding. Many participants mentioned the usefulness of statistics, as it gave them direct access to the phenomenon being described in the article.
Diagrams, especially geographic ones, were preferred by many participants, although sometimes their meanings could be challenging to understand. For example, P18 described the difficulty of understanding a scientific diagram, saying: "I feel it's just like a long (diagram) and it stays the same there, and the value versus log - I don't even understand, what does that mean."
6.1.3 Verifying. As information about the pandemic exploded, most participants (n=19) said assessing the credibility of the news was vital but sometimes difficult. Several participants expressed concern about accessing credible information on social media platforms, as there were too many opinions and too few facts: "I think that it (Twitter) can get into a pretty toxic space, especially, sharing things without actually reading it and just going off the headlines... It's interesting to see where people are - especially politicians - like if they're pro or against something. But I don't find it really useful in terms of its actual content." (P3) Participants also realized that even credible information could be biased. For example, P12 mentioned that she needed to "switch up the channels here and there" to get a balanced view of the news, but different sources also had different types of news "they chose to put forward," which could lead to bias.
Three participants also utilized the comment section as a tool to validate the credibility of an article. P1 mentioned that someone might post the original source behind a news piece, which helped her to validate the content. However, participants like P17 found that the comments were not useful, because most of them were opinions: "They're just people's opinions. Some of them are just very inflammatory, and I don't put any credence in them. Although a couple of them make you think."
Sustaining Stage: Emotional Needs
Although reading the news makes one feel informed, it can also negatively affect mental health. This problem became more salient for participants in the Sustaining Stage. We found that almost all participants expressed frustration and a sense of being overwhelmed by the news, especially in the later part of the diary responses. Participants surfaced three broad categories of emotional needs, which we labeled: 1) containing, 2) coping, and 3) connecting.
6.2.1 Containing. Twelve participants expressed a sense of "being a slave to the news," saying they could not resist checking continually. For example, P16 spent most of his time refreshing and searching for news about COVID, because he was afraid of missing something important. Although aware of the anxiety caused by his consumption habits, he said that checking news was "the only way I have control over to keep informed".
Even those who did not spend much time consuming news felt overwhelmed at the beginning of the crisis, because everyone was talking about the event. P20 mentioned that she "did not even have to go looking for the information, just because every single person on the Instagram was posting stuff about the COVID."
The feeling of being overwhelmed gradually decreased for some participants as the pandemic wore on, and most participants decreased their news consumption relative to the beginning of the crisis. However, diary responses show that many participants were still trying to find a balance between "getting informed" and "consuming too much."
6.2.2 Coping. Not only did the quantity of news stories explode at the start of the pandemic, a large portion of the news was also negative. Moreover, unlike negative news stories before the pandemic, COVID-19 was of direct personal relevance to participants and their loved ones. Fourteen participants reported struggling with the overall negative sentiment of the news. P20 described being more emotionally affected by the news than before, saying: "Now when I'm looking at the news, it's more emotional. And I would be stressing more if I'm reading a title that is shocking. But beforehand I knew nothing bad is going on in the world necessarily. It was not a stressful thing, to be honest, to read the news every day."
When participants realized that the news came with an emotional tax, they often wanted to take a step back. But by then, they found that COVID had already penetrated every aspect of their lives. For example, P7 used to listen to fitness and cigar podcasts, and they all began to talk about the pandemic. P9 was exposed to COVID-related news on her Instagram feed, which left her feeling frustrated, because she used the platform for entertainment. As a result of encountering negative crisis-related news at every turn, participants wrote that they felt, for example, "exhausted" (P4) and "numb" (P13).
6.2.3 Connecting. Because of the "stay-at-home" orders, many participants spent their time at home without physically interacting with others. Seventeen participants mentioned that reading news helped them feel connected with the outside world. When these attempts were successful, they were emotionally rejuvenating: P5 mentioned that reading news articles helped her establish common topics with her friends when talking over the phone. P10 mentioned feeling "isolated" and that reading and sharing news was a coping strategy to stay virtually connected with her family.
However, in other instances, participants' attempts to connect with others failed. P13 tried to engage with his relatives who did not care about the virus by sharing the news, but "it just doesn't work. So it's probably best to just let it come and not respond back because it just fuels the fire a little bit more." When participants encountered situations like this, they usually ended up passively decreasing or stopping the interaction.
Collectively, these findings illustrate a variety of struggles with respect to news consumption during the COVID-19 crisis. Participants were more motivated than usual to look for personally relevant news and resources, and they felt frustrated when this information was not accessible. Yet, due to the flood of news about the pandemic, they often found they consumed both too much news and not enough. Emotionally, they found it exhausting to check the news, even though they sought out this information voluntarily. And when they tried to shut it out, they found it was all around them. After identifying these needs, we sought to explore designs to improve these experiences, as we describe in the next section.
DESIGNING A BETTER DIGITAL NEWS CONSUMPTION EXPERIENCE
After modelling participants' consumption behavior and identifying the needs driving these behaviors, we generated design ideas that digital news platforms could apply to support these needs. The design ideas were generated via a user-centered approach: during the diary survey, we asked participants to reflect on their daily engagement with the news and to write down ideas for improving these experiences. We analyzed the data using affinity diagrams, organizing the ideas into the six design themes presented below.
Customization and Filtering
Providing more granular news-sorting and filtering mechanisms helps users find relevant information efficiently.
For example, participants suggested the following design concepts, all of which call for more granular customization and filtering. All examples address the informational need of finding.
Sorting Feeds by Popularity. Five participants mentioned using Reddit for getting news as it aggregated the news and sorted the information based on human interest. This crowd-sourced approach of sorting by popularity reflects the concept of collective attention [50]: helping the user grasp important information with limited attention resources by leveraging the input of the crowd. For example, P8 mentioned that by skimming through popular posts he can quickly get a sense of what is happening. Providing alternative sorting options, such as by views or comments, helps the user to identify important information more efficiently. Prior work has shown that crowd-sourced content sorting and recommendation mechanisms have better performance than automated algorithms, and they are more effective in reducing misinformation [6].
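To make the concept concrete, the sketch below shows one way a feed could expose alternative crowd-based orderings. It is a minimal illustration only; the Article fields, the decay constant, and the scoring formula are hypothetical choices, not drawn from any particular platform studied here.

import time
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    upvotes: int      # hypothetical engagement counters
    comments: int
    published: float  # unix timestamp

def popularity_score(a: Article, now: float, half_life_h: float = 12.0) -> float:
    # Crowd interest (votes weighted above comments), decayed so that
    # older items fall down the feed; the weights are illustrative.
    interest = a.upvotes + 0.5 * a.comments
    age_h = (now - a.published) / 3600.0
    return interest * 0.5 ** (age_h / half_life_h)

def sort_feed(feed: list[Article], key: str = "popular") -> list[Article]:
    now = time.time()
    if key == "popular":
        return sorted(feed, key=lambda a: popularity_score(a, now), reverse=True)
    if key == "comments":  # alternative sort option, as participants suggested
        return sorted(feed, key=lambda a: a.comments, reverse=True)
    return sorted(feed, key=lambda a: a.published, reverse=True)  # newest first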
Displaying News by Location. We found that during the crisis, people were more likely to focus on local news than they had been previously. Having the option to filter local news, or to display the news on a geographic map, would support this informational need. This design would have the added benefit of helping people efficiently check for news of relevance to their family and friends, which was of great importance to our study participants. Five participants mentioned checking the news in the locations of their family and friends. P11 wished there was a way to "specify regions of the countries that I would care about most." Moreover, when combined with sorting functions, this design would enable users to find important local news more easily.
Distinguishing Opinion Pieces from Factual Articles. Of the informational needs raised by participants, verifying was the most prevalent (mentioned by 19 people). Providing credibility clues could help the reader determine whether to read an article, and how to digest the information. Furthermore, the coping need suggests an opinion tag could also help readers avoid potentially negative sentiments. Eight participants mentioned that they would like an indicator to tell them whether a news piece is opinion or fact, such as a tag after the title. They found most opinions and quotes unhelpful for making decisions. On the other hand, P6 enjoyed reading opinion pieces to understand "how other people might have viewed an incident, " which helped him to form his own opinion.
Organizing News in One Place
Participants said that news aggregation and categorization not only improved efficiency, it also helped them avoid news they did not want to see. Many participants suggested designs for improving categorization, as illustrated below. These supported users' needs to find and verify information, and their emotional need to contain it.
Creating a Hub for Crisis-related News. During a crisis, the deluge of crisis-related news can overwhelm users. To help with the containing and coping needs expressed by participants, a news hub unifying all related resources could organize the consumption experience and help users regain a sense of control over what they choose to engage with. Three participants mentioned that they wanted to maintain a personal space when using social media, because they all experienced being exposed to an enormous number of COVID-related posts unexpectedly. Creating a hub for crisis-related news offers the user a sense of control over what they consume instead of feeling overwhelmed. P2 said, "I think a lot of people are just trying to focus on other things right now and actually don't want more news directly put in their face," and she explained that she appreciated the daily briefings from the New York Times, because they offered a dedicated section for COVID news, as well as news on other topics. This design idea has already been adopted by many social media and news websites, which have created COVID-19 hubs on their pages.
Aggregating News on the Same Topic. When users described their struggles finding information, they often mentioned encountering repetitive content across different news sources. Aggregating news on a common topic has the potential both to improve organization and to increase diversity of coverage, and aggregation has been shown to lead to more balanced coverage in other contexts [17]. Several participants used Google News, which aggregates information from different sources (similar coverage is grouped under one headline, with an expander giving the user the option to click for "full coverage" to reveal more reports), and they found this to be extremely helpful.
Providing Context and Resources
Participants said that providing related context and resources improves users' understanding of an event, whereas limited content can be misleading or not useful at all. Below are examples of participant designs that expand on the context of a news article and tie in related resources. Collectively, these designs address users' needs to understand and verify content, and to help them cope with it.
Providing Related Context. News information on social media is usually short and concise due to character limits and users' short attention spans. Adding contextual information could support the understanding need: all participants found a news piece useful and understandable when it attached related factual reports, links or official statistics. Consuming a single piece without knowing its background can lead to bias, as participants mentioned when they raised their need to verify information. For example, P10 mentioned reading an article claiming that a Belgian study had found walking could spread the virus, but the study itself did not draw that conclusion. Adding the original research article could prevent the results from being misrepresented or misinterpreted.
Providing Actionable Resources. For the coping need, many participants described feeling burned out by consuming so much negative news. Constructive journalism [41] offers a pathway to address this problem: instead of only focusing on exposing issues, the news could also offer resources and actions a reader can take. For example, P18 used an app showing case statistics in her local area that also allowed her to submit information by filling out a survey. Making this contribution was rewarding and left her feeling good about contributing to the community.
Adding Summaries for Long Articles. As content has grown increasingly abundant during a crisis, attention becomes the limiting factor in the consumption of information. A long article (or video) might contain valuable resources, but it might take too much time to fully digest the piece (it might also require certain expertise to digest).
Providing a gist to long articles or videos addresses the understanding need: it helps the readers extract important takeaways, without hurting their ability to dive into the details. Ten participants mentioned that most of the time they just scan the headlines of the news without clicking to read articles, and would skip paragraphs that were too long.
Three of them suggested that an article should provide a summary or overview section at the beginning to help the reader grasp the core content easily.
Creating a Positive News Experience.
Participants also surfaced design ideas that they thought would help them keep a positive outlook in the face of so much negative news. These interventions supported people in meeting their needs to contain crisis news and to cope with it.
Consuming Non-Crisis News. As the containing and coping needs reveal, participants felt overwhelmed by the dominance of COVID-19 as a news topic, giving them little opportunity to escape. P4 mentioned that she had to search for cute animal pictures to take a break, and that she started taking online courses just to get back to normal life.
Participants suggested intentionally providing diverse content in the news feed, to give people the opportunity to periodically direct their attention to other topics.
Showing Sentiment Clues. The negativity of COVID-related news caused many participants to feel helpless and anxious, as the coping need shows. Participants mentioned searching for positive information "on purpose": P6 found himself watching more "heartwarming" stories, which he had not previously done. Participants came up with several design ideas to help them find more positive news: P10 proposed a "good news section" when she found it was harder to find "light-hearted bits"; P18 suggested adding emojis as sentiment tags for each article to help people decide whether to engage with it or not. For example, foxnews.com has a "Good News" section, and the Citizen app, which sends location-based alerts and updates, regularly displays positive news as a counterweight to the overall negative sentiment. Many platforms enable users to react with emojis.
Tracking Consumption Time. Self-monitoring tools for tracking device use have generally been shown to be helpful for promoting digital wellbeing [26,45]. Similar tools, such as time tracking and reading status (for example, how many articles the user has read), can be embedded in interactive news platforms to raise readers' awareness and support their need to contain the experience and reduce endless scrolling. P5 proposed time tracking on media to raise her awareness of her news consumption and help her spend more time in "the real world." P16 also mentioned setting a timer to remind himself to stop reading tweets. In diary responses, we also found that participants generally felt more positive when they decreased their news consumption time.
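A minimal sketch of such a tracker is shown below; it is a hypothetical illustration of the feature P5 and P16 described, not an existing tool, and the class name and default budget are our own assumptions.

```python
# Hypothetical sketch of a news-consumption time tracker: it accumulates
# session durations and surfaces a reminder once a daily budget is exceeded.
import time

class NewsTimeTracker:
    def __init__(self, daily_budget_minutes=30):  # budget is an assumption
        self.daily_budget = daily_budget_minutes * 60  # seconds
        self.total_seconds = 0.0
        self._session_start = None

    def start_session(self):
        self._session_start = time.monotonic()

    def end_session(self):
        self.total_seconds += time.monotonic() - self._session_start
        self._session_start = None
        if self.total_seconds > self.daily_budget:
            print("You've reached your news budget for today; "
                  "time for the real world?")
```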
Making Statistics Interactive and Self-explanatory
All participants preferred tracking the progress of the pandemic via numbers and visualizations, such as using websites [1,98] to track case numbers. Interactive maps were the most preferred visualization, because participants could check on the reported statistics using different scales, for example, switching from information about a city to information about a nation. Participants also wanted designs that provided explanations alongside statistics. For example, when P3 was viewing a prediction graph from the New York Times, she did not understand the prediction model. She felt that adding an explanation about how the data was collected and how to interpret it would improve her understanding.
Bringing Social Factors into Digital News
Other designs brought social interactions into the digital news consumption experience to promote feelings of connectedness. In addition to meeting users' needs for connection, these interventions also supported users in finding and validating information.
Viewing Friends' Content. Many participants said they get information from their families and friends, and they trusted a news article more if it was recommended by someone close to them. Showing friends' news-consumption activity could help address the finding and connecting needs. This kind of "friendsourcing" [16] activity personalizes the user's newsfeed by eliciting pieces that one's friends have interacted with. P18 brought up a feature that displayed how many friends had read an article on the app WeChat, and commented that seeing the number was really useful because it helped her to decide whether an article might be important. P6 also suggested showing the articles read by their friends, as this could help identify common topics with friends, fostering a feeling of connection. One example of this idea is the "Top Stories" section of WeChat, where the user can find articles that their friends have read and recommended.
Recommend Function. Another strategy to interact with friends through reading the news is to "passively" recommend, rather than "actively" share, a news piece. As mentioned in the connecting need, sharing news with other people might result in a negative response, either because they have different points of view, or because they have already consumed the same information. P12 suggested having a "recommend function" to lower the bar for sharing in a less proactive way: "It might be nice to include a recommended-by-friends article section in the Facebook COVID-19 Information Center. Rather than a friend putting this on their wall (where everyone sees it in their feed), this would allow one to recommend articles for those who are interested in the topic." (P12)

Voting Mechanism on Comments. Many information systems use user comments to provide complementary information to the main content [101], and some provide visualization tools to support digesting comments [86,104].
While most participants did not comment on news, they did view the comments as an information source. Compared to the official content, comments sometimes "provide other people's points of view with their personal experiences" (P2). To better utilize the comment section, many participants proposed a Reddit-like voting mechanism: insightful comments would be up-voted and stay on top; rude ones would be down-voted and hidden or collapsed.
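The proposed mechanism can be sketched in a few lines; the fragment below is our own hypothetical illustration (the collapse threshold is an assumption, and real systems such as Reddit use more elaborate ranking formulas).

```python
# Hypothetical sketch of the proposed comment-voting mechanism: comments are
# ranked by net votes, and low-scoring ones are collapsed rather than deleted.
COLLAPSE_BELOW = -3  # assumed threshold

def render_comments(comments):
    """comments: list of dicts with 'text', 'ups', and 'downs' keys."""
    ranked = sorted(comments, key=lambda c: c["ups"] - c["downs"], reverse=True)
    for c in ranked:
        score = c["ups"] - c["downs"]
        if score < COLLAPSE_BELOW:
            print(f"[collapsed comment, score {score}]")  # hidden but expandable
        else:
            print(f"({score:+d}) {c['text']}")

render_comments([
    {"text": "Here is the underlying CDC report.", "ups": 42, "downs": 3},
    {"text": "This is all a hoax!", "ups": 1, "downs": 17},
])
```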
An Imaginary News App
In order to gather participants' feedback on the design ideas, we sketched a low-fidelity news platform prototype, combining the interface-related designs that could be reflected in a sketch (such as tracking consumption time). The interface and explanation of the prototype are presented in Appendix D. The prototype was presented to the participants in the exit interview: 17 of the 22 participants took part in the exit interview (12 women, 5 men).
Most designs received positive feedback from all participants, including all features from Customization and Filtering and Organizing News in One Place. For the filter function, many participants mentioned the helpfulness of the location option: P12 commented that she would use the feature "because I can find if something is directly impacting me locally in my town." P19 would like to see an option to filter by political leaning, such as showing news from left-leaning versus right-leaning sources.
However, attitudes toward the social feature Showing Sentiment Clues were split. In the interface, the sentiment clues of a news piece were displayed via emoji reactions. Although some participants found the sentiment cue to be an interesting concept and reported they might use it, several people raised concerns about objectivity and the risk of emotional manipulation. P10 mentioned that different user groups could have different reactions to an article, so the perceived sentiment of the news could vary from user to user. Hence, a single emoji might not be a meaningful way to reflect an article's sentiment. P19 raised the issue of emotional manipulation, saying that viewing the reactions before reading the news could introduce bias.
Overall, participants offered positive feedback on all the design ideas and expressed that they would like to use those features to help improve their news consumption experience.
DISCUSSION
We found that the COVID-19 crisis changed what users need from digital news platforms. Their information-seeking became more action-oriented and personal, as they sought out news with an end-goal of informing their own decisions.
Participants encountered a new emotional cost to reading the news, as it described a high-stakes, personally relevant crisis they were actively living through. Although they found themselves struggling to cope with the burden of facing the news, they also felt an obligation to stay informed, leading to a tension wherein users were both hungry for and burdened by new information.
Thus, participants' informational needs and emotional needs both became highly relevant to their use of digital news platforms. Under ordinary circumstances, the purpose of a digital news platform might be primarily to inform the public [102], but during the COVID-19 crisis, participants wanted digital news platforms that not only informed them but also 1) empowered them to take action, and 2) helped them cultivate resilience to the difficult emotional work of staying informed. Achieving these aims entails prioritizing both informational needs and emotional needs as first-class design considerations.
Participants' feedback further illustrates a concrete set of ways in which designers of digital news platforms can create experiences that address both types of needs. Here, we outline this design agenda, as surfaced by users' reactions to both existing systems and novel design ideas. Our results suggest that these design approaches can support users' interest in: finding news that informs their decision-making, avoiding unnecessary emotional costs, and bounding engagement with the news.
Designing for Action-Oriented Information Seeking
Participants' perspectives suggest several ways in which designers can support users' informational needs during times of crisis. First, we saw that participants' goal in reading the news shifted from an abstract interest in "being informed" to a concrete need to take action. As the crisis introduced great uncertainty and new risk, participants became dependent on the news for information to make vital decisions. They began to seek out more content specific to their local area, and they sought news that coupled descriptive information with actionable resources. They were frustrated when they spent time on articles that turned out to be opinion pieces, and they expressed broad exasperation with the vast sea of articles about COVID-19 they had to sift through to find the small subset that could inform their decision-making.
Collectively, these themes point to a need for crisis-sensitive news platforms to support an action-oriented approach to news consumption that assumes users want to act on what they learn. Participants suggest designers move toward this goal by adding customization and filters to segment the massive landscape of news, and aggregation to preorganize information into subtopics. Although we see potential benefits to these proposed designs, complex questions remain about how to implement them and how to mitigate problematic side effects, should they arise. For example, deciding how to support filtering, aggregation, and segmentation is no easy task, and algorithmic decisions organizing content have the potential to come with negative consequences [11]. An enormous amount of literature continues to examine the echo chambers and filter bubbles that arise from personalization (e.g., [35]), complicating participants' recommendation to aggregate similar content and filter for what is personally relevant. And although participants expressed interest in designs that allow users to vote on others' comments, prior research reports that human-interest and opinion content were more likely to be upvoted than news reports [57]. These examples all illustrate the need for nuance in implementing designs for this space. For example, one alternative could be dividing the general vote into several metrics, such as informativeness, usefulness, and interestingness, to reflect the value of a comment from different perspectives.
Finally, participants' need to take action led them to a three-stage process of finding → understanding → verifying information. Although each of these stages is relevant to news consumption in ordinary times, in moments of crisis people need to be successful in each of them to make decisions with confidence. Designers of digital news platforms can use this model to interrogate their interfaces, examining how to design for each of these distinct phases. For example, designers might organize content by topic to improve users' ability to find information, provide summary bubbles to help users understand important take-aways, and add automated confidence scores that show agreement across diverse sources to help users verify whether those take-aways are reliable. A large body of prior work offers designs that support each of these goals. For example, showing multiple sources on a single topic can help users find the diverse set of content they are looking for [81], providing summaries and statistical visualizations can reduce readers' cognitive load and improve understanding [47], and priming users with specific strategies can improve their ability to spot misinformation and verify the credibility of an article [39]. Designers can provide comprehensive support across this pipeline by drawing on these and other evidence-based techniques for each of the stages we identify.
Designing for Emotionally Resilient News Consumption
When participants' lives were turned upside down by the COVID-19 crisis, they suddenly found themselves in a double bind: checking the news to stay informed suddenly became both far more necessary and far more emotionally taxing.
This challenge is reflected in their shift from seeking news to sustaining their ability to endure the process of seeking news. Many participants felt that they could not afford to shut out unpleasant news, because of its potential to be of direct relevance to their personal welfare or that of their loved ones. And yet, they often felt that engaging with digital news platforms took an unnecessary toll at a time when they were already under acute strain. A minority of participants found that they were unable to endure this emotional burden and did choose to shut out the news, but this came with the cost of being under-informed at a time when being informed mattered a great deal. Many participants said they felt trapped in their own endless doomscrolling (i.e., continuing to browse negative news and feeling unable to stop), unable to look away even as digital news environments left them feeling helpless, confused, and overwhelmed. Prior work has shown that information overload can compromise mental health [80,89], even in ordinary circumstances. In times of crisis, the potential for information overload is even greater, and our participants described feeling that COVID news is "everywhere." Simultaneously, collective mental health is compromised by crisis itself [38,67], creating challenging conditions for maintaining mental well-being and strong justification for designers to attend to these needs.
Reading about a personally relevant, actively unfolding crisis is certain to take an emotional toll that is beyond the control of the designer. But participants explained that design decisions could either help users manage this burden or needlessly exacerbate it. First, designers can work to ensure the news-consumption experience is only as emotionally taxing as is necessary to meet users' informational needs. This might mean, for example, refraining from manufacturing anxiety with inflammatory headlines or viral content. Second, designers can optimize the amount of control the user has over engaging with emotional news. As users' design ideas suggest, this might mean including upfront summary and sentiment details or allowing users to organize and filter content by its emotional gravity. The needs that users expressed arose from elements of the COVID-19 pandemic that are common across many crises. A deluge of information [44], content of questionable authenticity [95], feelings of isolation [22], and overwhelmingly negative sentiment [13] have all marred people's experiences living through past disasters. This suggests that platform features that participants say would sustain them as they seek to understand the COVID-19 pandemic would sustain them in other crisis contexts as well.
Finally, both the two-stage model of users' needs and participants' distinct emotional needs to cope with crisis content, contain it, and connect with others around it may offer useful structure to the design process. Platforms can be both created and evaluated with consideration for interface elements that support each of these distinct needs.
For example, resilience theory outlines many concrete ways in which people can develop more sophisticated and successful meta-cognitive skills for coping with adversity, uncertainty, and trauma [51]. Similarly, mindfulness practices teach evidence-based techniques for cultivating focused attention that can enable an individual to set aside a specific concern [20,54]. And post-crisis coping theory [46] documents the importance of self-and community-connectedness in times of crisis, suggesting specific social mechanisms designers might incorporate into digital news platforms.
Designing for Bounded Engagement
One of the most commonly suggested and well-received design concepts was a time-tracker surfacing to the user how much time they had spent checking the news. This was consistent with the fact that many participants said they struggled to strike a balance between "getting enough information" and "consuming too much news." For many people, news-checking is a compulsive habit and source of self-frustration, even under ordinary circumstances [8,56,59,68].
Participants told us this struggle was compounded by the crisis: because they had good reason to spend time with the news, they engaged in repetitive checking behaviors that are known to be habit-forming [78].
The monetization of users' attention [69] poses challenges to designing for bounded engagement. News media platforms profit from attention and thus also use it to define their success metrics [73]. If endless scrolling, driven by feelings of dread, increases advertising impressions and time-on-task, products are likely to optimize for it. This puts emotionally sensitive designs that encourage bounded attention at odds with many companies' profit models. However, some prior work has found that engaging with the attention economy increases traffic but not revenue for news media companies [72]. And other work has shown that people dislike and often abandon experiences that do not respect their attention [97]. These and other studies (e.g., [63]) suggest there is potential for news platforms that help users manage the emotional toll of the news to disrupt the status quo.
However, there is also good reason to question whether companies creating news media platforms are able or willing to self-regulate their monetization of user attention. Public opinion increasingly favors greater regulation and oversight from government [19], which may be necessary to achieving broad change.
Limitations
While the research comprised three stages spanning two weeks, the timeline covered only the first peak of the pandemic in the United States and did not capture long-term news consumption habits. We also focused mostly on participants' digital news behaviors, and thus did not ask about their engagement with other forms of news such as newspapers and radio. All participants were in the U.S., and design preferences vary across cultures and regions [65], as do media infrastructure, government oversight, and disaster response. However, users in many countries struggle with challenges like information overload, a sense of technology dependence, and overwhelming negativity in news articles (e.g., [53,66,88]), suggesting the design insights we share here are likely to have some degree of cross-cultural transferability. In this study, we use the term "crisis" to refer to the COVID-19 pandemic, during which all participants were located in the United States, in cities with "stay-at-home" orders. Prior work shows that the parameters of crisis events can vary dramatically, and news consumption habits may differ accordingly. The diary study's prompts to reflect may have affected participants' daily consumption behaviors. Finally, participants' evaluations of the design sketches were speculative, as participants had no opportunity to experience them directly.
CONCLUSION
In this paper, we investigated how Americans consumed digital news during the COVID-19 pandemic and how digital news platforms can support users during times of crisis. Through the interviews, a two-week diary study, and an analysis of participant design ideas, we found that participants moved through two different stages: an initial seeking stage followed by a sustaining stage. The seeking stage was characterized by: 1) an initial spike in news consumption, 2) increases in the variety of news sources people sought out, 3) increases in consumption of local news, and 4) a shift away from negative patterns of engagement to more altruistic ones. During this stage, users' primary needs were to find, understand, and verify information, all of which were systematically supported by some designs (like filters, map-based aggregators, and interactive visualizations) but not others.
The intensity of the seeking stage left participants feeling overwhelmed and soon gave way to a second sustaining stage, characterized by deliberate boundary-setting and an active pursuit of positive and actionable news items. This stage was driven by a common set of emotional needs (specifically, to contain crisis news, cope with crisis news, and connect with others around crisis news). This model provides structure for creating platforms that present news and information in times of crisis. We contribute this organizing scheme along with user-generated design examples aligned with each of its categories.

Table 1. Demographics of the participants in the study. Gender and ethnicity were self-reported and participants had the option to not disclose the information.
• Generally speaking, how much time do you spend on technology products?
• And what tasks/applications do you spend time on?
• Before the COVID-19 pandemic, how frequently did you read news?
• Did you have a regular habit of checking the news?
- Can you describe what that was like?
• Where did you get news from?
• What kind of news are you interested in?
• When reading the news, did you also engage with the content actively, such as sharing it with others or commenting on it?
- (If yes) Describe what you did or how frequently you did it.
• Thinking back before the Coronavirus outbreak, how did checking the news usually make you feel?
… to customization and filter features). Lastly, there are two new sections in the feed that users can browse: "good news" and "recommended by friends." | 2022-02-14T06:48:01.853Z | 2022-02-10T00:00:00.000 | {
"year": 2022,
"sha1": "723cddcbeee0e0e82b9b8241e890504373c8f247",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "723cddcbeee0e0e82b9b8241e890504373c8f247",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
79611995 | pes2o/s2orc | v3-fos-license | A Histopathological Study on Invasive Ductal Carcinoma Breast with Respect to ER/PR Status and Mast Cell Distribution
Introduction: When a carcinoma develops, our immunity plays a major role in fighting against tumor cells. Mast cells, which play a major role in our innate immunity, are also found to be activated in a developing tumor. So far, only a few studies have been conducted on the role of mast cells in carcinoma of the breast; these have shown the cytolytic activity of mast cells on tumor cells and their relationship to hormone receptor status. Aim: To study the histopathological characteristics of invasive ductal carcinoma of the breast and the mast cell count in these tumors, and their relationship with estrogen and progesterone receptor status. Materials and Methods: We performed a descriptive histopathological study on the specimens of modified radical mastectomy received in the Department of Pathology, Government Medical College, Thiruvananthapuram from January 2012 to January 2014. The histopathological characteristics of each case were studied with respect to ER/PR status. Mast cells were stained using special stains and counted. Data were statistically analysed with SPSS software using univariate analysis and the chi-square test to detect the significance of mast cell presence with respect to receptor status in invasive ductal carcinoma of the breast. Results: Out of a total of 150 cases, mast cells were present in 56 cases (37.3%) and absent in 94 cases (62.7%). The relationship between the presence of mast cells and estrogen and progesterone receptor positivity was found to be statistically insignificant, as the p-value was more than 0.05. Thus there was no significant relationship between estrogen and progesterone receptor status and the presence of mast cells. Conclusions: The present study showed that there is no significant relationship between the presence of mast cells in peritumoral tissue and the hormonal status of the patient. So, as per this study, mast cell presence cannot be suggested as a definitive, cheap, easily assessable prognostic factor in carcinoma of the breast. Newer modalities for the detection of new prognostic indicators can help in the implementation of adjuvant therapies in patients with ductal carcinoma of the breast.
Introduction
Among the various cancers, carcinoma of the breast is the most common cancer among women in developed countries and the second most common cancer among Indian females. Developmentally, the female breast is under the control of estrogen and progesterone, so these hormones play a strong role in the causation of breast cancer. Breast carcinoma can be of two types, ductal and lobular, of which invasive ductal carcinoma (IDC) is the commonest form. When a carcinoma develops, our immunity plays a major role in fighting against tumor cells. Mast cells, which play a major role in our innate immunity, are also found to be activated in a developing tumor. So far, only a few studies have been conducted on the role of mast cells in carcinoma of the breast; these have shown the cytolytic activity of mast cells on tumor cells and their relationship to hormone receptor status. Various studies have been conducted to find out the relationship between hormone receptor status and cellular status in the tumor microenvironment. Invasive breast carcinomas are graded histologically as per the Nottingham modification of the Scarff-Bloom-Richardson grading system, which grades breast carcinomas by adding up scores for tubule formation, nuclear pleomorphism, and mitotic count. The scores for each of these are added together to give an overall final score and a corresponding grade for IDC. Clinical grading uses TNM staging; the lower the stage, the better the prognosis. ER and PR status is assessed as per the Allred score; positivity for these receptors favors a good prognosis. This study was planned to find out the histopathological characteristics of invasive ductal carcinoma of the breast in our setup and the relationship of mast cell distribution in invasive carcinoma of the breast with respect to various grades and ER/PR status.
Observations and Results
The present study was done on mastectomy specimens from carcinoma of the breast received in the Department of Pathology, Government Medical College, Thiruvananthapuram during a period of two years from January 2012 to January 2014. A total of 150 cases were studied.
Age Distribution
In this study all the cases were divided into three groups based on age: <40 years, 40-60 years, and >60 years. The majority of cases were between 40 and 60 years (68%), followed by 23% of cases above 60 years. The youngest patient was 30 years old and the oldest was 78 years old (Table 1, Figure 1).
Table 1: Age Distribution
Tumor Size
Tumor size was graded into three groups: (a) less than or equal to 2 cm, (b) more than 2 cm and up to 5 cm, and (c) more than 5 cm. The majority of cases, 110 (73.4%), had a tumor size between 2 and 5 cm.
Mast cells and Hormonal Status
The relationship between the presence of mast cells and estrogen receptor positivity and progesterone receptor positivity was analysed separately using the chi-square test. The result was found to be statistically insignificant, as the p-value was more than 0.05. Thus there was no significant relationship between estrogen and progesterone receptor status and the presence of mast cells (Table 12, Table 13).
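The authors ran this test of independence in SPSS; the fragment below is a minimal Python equivalent using SciPy's chi2_contingency. The 2x2 counts are placeholders only, not the study's data, which are reported in Tables 12 and 13.

```python
# Sketch: chi-square test of independence for mast cell presence vs. receptor
# status. Counts are PLACEHOLDERS, not the study's data (see Tables 12-13).
from scipy.stats import chi2_contingency

#                 ER-positive  ER-negative
table = [[20, 15],   # mast cells present (placeholder counts)
         [30, 25]]   # mast cells absent  (placeholder counts)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")  # p > 0.05: no significant association
```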
Tumor Size
In the present study it was seen that the majority of cases had a tumor size in the 2 cm to 5 cm category, which is comparable with the observations made by Rojananin et al. [11] and Tan PH et al. [12]. However, Baxter et al. [6] found that the predominant tumour size was less than 2 cm. Rusby JE et al. [13] categorised tumour size into ≤ 2 cm and > 2 cm and observed a higher proportion of tumours in the ≤ 2 cm category (63%).
Oestrogen Receptor Status
In the present study, which was done with controls, positivity for the oestrogen receptor was observed in 58% of cases. This proportion is comparatively lower than the observations noted in other studies.
Progesterone Receptor Status
In the present study, which was done with controls, positivity for the progesterone receptor was observed in 49.3% of cases. This proportion is comparatively lower than the observations noted in other studies.
Comparison of Hormonal Status and Presence of Mast Cell in Breast Carcinoma
In the present study, on analysing the relationship between the presence of mast cells and estrogen receptor status by the chi-square test, it was found that there is no significant association between the presence of mast cells in the peritumoral tissue and estrogen receptor status. This was similar to the findings of Shahriar Dabiri et al. [13], whose study of 348 cases of invasive ductal carcinoma of the breast showed no significant relationship between estrogen receptor status and mast cells. It was dissimilar to the findings of Heidrapour et al. [16], which showed that stromal mast cells correlated with estrogen receptor positivity and hence are a good prognostic indicator. The present study showed no relationship between progesterone receptor status and mast cells, which was similar to the findings of Mitra Heidrapour et al. [16] and Shahriar Dabiri et al. [13], which also showed no correlation between the presence of stromal mast cells and PR positivity.
Conclusion
The present study showed that there is no significant relationship between presence of mast cells in peritumoral tissue and the hormonal status of the patient. So as per this study mast cell presence cannot be suggested as a definitive cheap easily assessable prognostic factor in carcinoma breast. Newer modalities for detection of new prognostic indicators can help in implementation of adjuvant therapies in a patient with ductal carcinoma breast. | 2019-03-17T13:08:22.469Z | 2017-10-31T00:00:00.000 | {
"year": 2017,
"sha1": "1a69cddc50b10209e05d21d905af6debd456556a",
"oa_license": null,
"oa_url": "https://doi.org/10.18535/jmscr/v5i10.172",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "7929a1bc56da6c4e08b7974a6ea17c5fb0dc638d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
6838952 | pes2o/s2orc | v3-fos-license | A Method for Quantifying, Visualising, and Analysing Gastropod Shell Form
Quantitative analysis of organismal form is an important component for almost every branch of biology. Although generally considered an easily-measurable structure, the quantification of gastropod shell form is still a challenge because many shells lack homologous structures and have a spiral form that is difficult to capture with linear measurements. In view of this, we adopt the idea of theoretical modelling of shell form, in which the shell form is the product of aperture ontogeny profiles, in terms of the aperture growth trajectory, quantified as curvature and torsion, and of the aperture form, represented by size and shape. We develop a workflow for the analysis of shell forms based on the aperture ontogeny profile, starting from data preparation (retopologising the shell model), via data acquisition (calculation of the aperture growth trajectory, aperture form and ontogeny axis) and data presentation (qualitative comparison between shell forms), and ending with data analysis (quantitative comparison between shell forms). We evaluate our methods on representative shells of the genera Opisthostoma and Plectostoma, which exhibit great variability in shell form. The outcome suggests that our method is a robust, reproducible, and versatile approach for the analysis of shell form. Finally, we propose several potential applications of our methods in functional morphology, theoretical modelling, taxonomy, and evolutionary biology.
Introduction
Empirical and theoretical approaches in the study of shell form
The external form diversity of organisms is the most obvious evidence for their evolution, and thus is a key element in most branches of biology. The molluscan shell has been a popular example in morphological evolution studies because it is geometrically simple, yet diverse in form. The shell form is controlled by the shell ontogenetic process, which follows a simple accretionary growth mode where new shell material is accumulatively deposited onto the existing aperture. The evolution of shell forms has been studied either by using empirical approaches that focus on the quantification of actual shell forms or by using theoretical approaches that focus on the simulation of shell ontogenetic processes and geometric forms.
Notwithstanding the active development of both empirical and theoretical approaches to the study of shell form, there has been very little integration between the two schools. For the empirical approach, the quantification methods of shell form have evolved from traditional linear measurement to landmark-based geometric morphometrics and outline analyses (for an overview see [1]). At the same time, for the theoretical approach, the simulations of shell form have evolved from simple geometry models that aimed to reproduce the form, to more comprehensive models that simulate shell ontogenetic processes (for an overview see [2]). Hence, each of the two approaches has been moving forward but away from the other, making synthesis between the two schools of shell morphologists ever more challenging.
In empirical morphological studies, shell form, expressed either as heights and widths in traditional morphometrics or as Procrustes distances in geometric morphometrics, is quantified by a set of homologous reference points or landmarks on the shell, which can be easily obtained from the fixed dimensions of the shell. Thus, both methods can abstract the shell form in terms of the size and shape of particular shell dimensions, and the between-sample variation of shell size and shape can be assessed (in most cases only within one study). On the other hand, it is not possible to reconstruct the actual shell form from these quantitative measurements, because the shell's accretionary growth mode and spiral geometry cannot be quantified on the basis of arbitrary reference points or fixed dimensions [3]. Nevertheless, traditional and geometric morphometric methods have been accepted widely as standard quantification methods for shell form in many different fields of research.
In contrast to empirical morphometrics, in which the aim is to quantify the actual shell, theoretical morphologists focus on the simulation of an accretionary growth process which produces a shell form that is similar to actual shells. This field was established with the theoretical shell model of D.M. Raup [4,5]. Within the first two decades after these publications, only a few different versions of shell models were proposed [6][7][8][9][10]. In the subsequent two decades, thanks to the popularity and power of desktop computing, many more theoretical shell models were published [11][12][13][14][15][16][17][18][19][20][21][22][23][24][25]. Finally, further improvements to the published theoretical models have appeared in recent years. These recent models simulate shell forms that more accurately resemble actual shells because of improved programming software, more complex algorithms, and advancement of 3D technology [2,[26][27][28][29][30][31][32][33]. Here, we will not further discuss the details of the at least 29 published shell models, but refer to the comprehensive overviews and descriptions of these models in Urdy et al. [2] and Dera et al. [34].
In brief, the latest theoretical shell models are able to simulate irregularly-coiled shell forms and ornamentations that resemble actual shells, whereas the earlier models could only simulate the regular and general shape of shells. The major refinements made during almost five decades of theoretical shell model development are the following modifications of the algorithm: (1) from a fixed reference frame to a moving reference frame system; (2) from modelling based on numerical geometry parameters to growth-parameter-based modelling (e.g. growth rates); (3) from three parameters to more than three parameters, which has made fine-tuning of the shell simulation (e.g. aperture shape) possible. The key element of the theoretical modelling of shells is the generation of shell form by simulating the aperture ontogeny, in terms of growth trajectory and form along the shell ontogeny. Hence, this has an advantage over the empirical approach in the numerical representation of the shell's geometric form, in terms of the 3D quantification and the actual shell ontogenetic processes [35].
Since empirical and theoretical researchers study shell form with two totally different quantification methods, our understanding of shell evolution cannot progress solely by using either empirical morphometrics or theoretical models. Ideally, theoretical models need to be evaluated against empirical data of shell morphometrics and, vice versa, empirical morphometric methods need to be improved to obtain data that better reflect the actual shell form and morphogenesis, which can then be used to improve the theoretical models (see also [36]). In this dilemma lies the central problem of shell form quantification, and it urgently needs to be addressed in order to integrate and generalise studies of shell form evolution.
Empirical studies rarely use theoretical shell models
Despite the fact that, since the 1980s, many shell models have been published that are more complex and versatile, the first theoretical shell model of Raup still remains the most popular. There have been many attempts by empirical morphologists to use the original or a modified version of Raup's parameters to quantify natural shell forms [37][38][39][40][41][42][43][44][45][46][47][48][49][50][51][52][53]. Surprisingly, all the other shell models, many of which produce more realistic forms, have received very little attention as compared to Raup's model (see [35,[54][55][56] for exceptions). This ironic situation might be explained by the elegance and generality of Raup's model, which is intuitively and mathematically simple enough to be used by empirical morphologists (mostly biologists) with limited mathematical and programming experience.
As discussed above, most of the theoretical models can simulate a shell that has a form resembling the actual shell in realistic 3D geometry, based on shell ontogeny processes. In contrast, empirical morphometrics can only quantify and compare certain dimensions of actual shells. Clearly, the theoretical approach is better than the empirical approach in its accuracy of shell form quantification, because accurate morphological quantification is essential for functional, ecological and evolutionary studies of shell form. Below, we identify and discuss a few impediments that have prevented, and still prevent, empirical morphologists from adopting the theoretical approach to shell form quantification.
First, the requirement for computational resources was an impediment in the past. These theoretical models can only be implemented in a computational environment. As mentioned above, advances in computational speed and 3D graphics technology have promoted the development of more complex theoretical shell models. For example, the current speed and storage of a desktop computer are at least four orders of magnitude greater than those used by Cortie [20] only two decades ago. Clearly, computation hardware is no longer an impediment (e.g. [57]) to the application and development of theoretical shell models.
Notwithstanding the hardware development, programming skills are still a prerequisite for the implementation of theoretical models. Many of the early models that were published between the 1960s and 1990s used third-generation programming languages such as Fortran and C++, which essentially lack easily accessible graphics APIs. This situation has improved now that the simulation of theoretical shell models can be done in fourth-generation programming languages such as Mathematica [28,55,56,58] and MATLAB [2,32,59]. Most of these shell models were described with intensive mathematical notation, at least from a biologist's point of view, in the publication; and some of these were published together with information on the algorithm's implementation. However, the actual programming code is rarely published together with the paper, though it may be available from the authors upon request (but see [28,55,58]). Only two theoretical modelling software packages based on Raup's model have a graphical user interface that is comparable to contemporary geometric morphometric software [36,58]. However, both of these software packages cannot be used for irregularly coiled shells. The rest of the modern theoretical models are far less approachable than the morphometric software for empirical morphologists. This is because those advanced theoretical models have not been delivered in a form that allows empirical morphologists to have "hands-on experience" with them without extensive mathematical literacy [57,60].
Second, theoretical shell models simulate the shell form based on the input of a set of parameters, which may be non-biologically and/or biologically meaningful. Non-biologically meaningful parameters are counter-intuitive for empirical morphologists because these parameters are not intrinsic shell traits. Nevertheless, many of these non-biological parameters are required for the model to fit the shell form schematically [61]. When the biological parameters do represent shell traits, they are often difficult to obtain accurately and directly from the actual shell because of its three-dimensional spiral geometry [12,14,19,36,[61][62][63][64][65]. Since the development of theoretical shell models, almost all simulated shell models have been made by an ad hoc approach, where the parameters are chosen for the model and then the simulated shells are compared with the actual shells. In almost all cases, the correct parameters are chosen after a series of trials and errors, and the parameters are selected when the form of the simulated shell matches the actual shell. Okamoto [12] suggested that this ad hoc approach based on pattern matching was easier than obtaining the parameters empirically from the shell.
Third, although the overall forms of the simulated shells resemble the actual shells, the simulated shell is not exactly the same as the actual shell [41,66]. For many models, the original parameters are not sufficient to simulate the shell form exactly [17,64]. These simulated general shell forms are adequate for theoretical morphologists' interest in exploring general shell forms. However, the subtle features on a real shell, or the subtle differences among the shell forms of real species, that cannot be simulated by theoretical models may have significant functional implications that are important for empirical morphologists.
In brief, it is clear that the implementation of current theoretical shell models is less accessible to empirical shell morphologists. This has caused the utility of growth models for descriptive and discriminatory purposes to be underappreciated. Yet, empirical morphologists are using traditional and geometric morphometrics as routine methods for shell quantification.
Popularity of traditional and geometric morphometrics in empirical studies
In addition to the impediments arising from the theoretical shell models themselves that limit their popularity among empirical morphologists, the theoretical approach faces competition from geometric morphometric methodology. The popularisation of desktop computing that led to the flourishing of theoretical shell models in the late 1980s also promoted the development of morphometric methods, such as Elliptical Fourier Analysis (EFA) and geometric morphometrics (GM). Rohlf and Archie [67] set a benchmark for the quantification of an organism's form by EFA, building on the work of Kaesler and coworkers [68,69]. Rohlf and Slice [70] and Bookstein [71] developed a complete standard protocol for GM. In fact, geometric morphometrics was developed mainly by Bookstein [72,73] based on the idea of Thompson (see Chapter 17 in [74]). Soon after these pioneering papers, various software packages with graphical user interfaces (GUIs) were developed for the application of EFA and GM ([75], see http://life.bio.sunysb.edu/morph/). In contrast to the application of theoretical shell models, an understanding of mathematics and programming languages is not a prerequisite for the user of these morphometric tools. Thus, EFA and GM have been well received by biologists and have been adopted in the morphometric study of shell form. To our knowledge, GM was first applied in a shell morphometric study by Johnstone et al. [50], when they placed the varix-suture intersection landmark along the spiral. On the other hand, EFA was first used by Costa et al. [76].
These geometric morphometric software packages have standard and interactive workflows that help empirical morphologists in every step: obtaining morphometric data (e.g. placing landmark coordinates), analysing data (e.g. Procrustes superimposition), statistical analysis (e.g. ANOVA, PCA), and visualising shape and shape changes (e.g. thin-plate splines, PCA plots). This has made geometric morphometrics approachable to empirical morphologists who want to examine the similarities and differences among shell forms. However, geometric morphometrics is essentially a statistical analysis of shape calculated from Cartesian coordinate data for a sample of objects [75]. Hence, it is not an exact quantification of form and is not particularly suitable for the comparison and quantification of shell form, for the following two reasons.
Geometric morphometrics can be practically useful when the shape comparisons are made among taxa or shells that are similar in shell form, usually within a narrow taxonomic range. However, geometric morphometrics that is strictly based on homologous landmarks has little use for shape comparison among a wide range of taxa or shell forms. In most cases, 2D landmarks are chosen at the shell apex, suture, and aperture or whorl outline, which can be identified from a 2D image taken in the standard apertural view of a shell. This is especially the case in our study, where great variation in shell form exists among the species within a very narrow taxonomic range. As reviewed in Liew et al. [77], these taxa are extremely hard to compare because of their unconformity in shell coiling regime and the fact that the typical standard apertural view cannot be applied to these shells; hence it is not possible to obtain sufficient biologically homologous landmarks.
GM was formalised and developed by Bookstein [72,73] based on the conceptual idea of Thompson (see Chapter 17 in [74]). In the same publication, Thompson did not use the conceptual GM, but used a logarithmic spiral approach to compare shells (see Chapter 11 in [74]). Furthermore, shells were not included in Bookstein [71], despite its various examples of different organisms showing the effectiveness of GM in comparative shape analysis. Biologically meaningful homologous landmarks are absent from some of the shells. Second, the results of separate, independent studies of shell forms cannot be integrated, even though these studies use the same GM method. Statistical analysis of the Cartesian coordinate data that abstractly represent the shell form is adequate for quantifying the variation of a shell within the context of other shells included in a single study, or within similar taxa where similar landmarks are obtained. However, the raw coordinate data and analysed shape variation from one study are incomparable and incompatible with the data from other studies [78]. For example, the raw data (coordinates) from two studies cannot be combined if they use different landmarks, and the shape variables (e.g. PCA scores) from one study cannot be compared and analysed together with those of other studies.
Despite the fact that geometric morphometrics has been widely used by empirical morphologists, it is not an ideal tool for the quantification of shell form, for the reasons given above. Hence, it is important to return to the core question: what do biologists want to learn from the study of shell form? Clearly, in addition to quantitatively comparing shell forms, biologists want to know more about the general characteristics and physical properties of the shell form that are key elements in gaining insight into functional and ecological aspects of the shell [79]. However, functional and ecological aspects of shell form can only be determined if the shell form can be exactly quantified.
Using 3D technology to quantify shell form based on aperture ontogeny profiles
In this paper, we propose an interactive approach to the quantification and analysis of shell forms based on state-of-the-art 3D technology, integrating the theoretical principles of shell modelling and the empirical principles of morphometric data handling. There are no theoretical models that can simulate all existing shell forms. However, the theoretical background of the theoretical models is biologically sound: simulating the shell form by simulating the shell ontogenetic process. On the basis of this shell-ontogenesis principle, we used state-of-the-art X-ray microtomography (micro-CT scanning) and 3D modelling software to obtain a series of shell aperture changes from the shell in an interactive workflow that is similar to empirical morphometric analysis. All our procedures were implemented using open-source and free software, with the exception of the 3D scanning instrumentation and software.
First, a series of shell aperture outlines was digitised directly from the reconstructed 3D shell model obtained from micro-CT scanning by using the open-source 3D-modelling software Blender ver. 2.63 (www.blender.org). Then, the growth trajectory and form of the shell aperture outline were quantified and extracted with our custom scripts that run in Blender through its embedded open-source Python interpreter (http://www.python.org/). The changes of aperture size and shape, and aperture growth trajectory in terms of curvature and torsion along the shell ontogeny axis length, were obtained (hereafter "aperture ontogeny profiles", see [80]). The final aperture ontogeny profiles take the form of multivariate time series data, which consist of a number of instances (i.e. the number of quantified apertures, which depends on the length of the whorled shell tube) and attributes that represent the growth trajectories, aperture form, and size. These aperture ontogeny profiles can be plotted when each shell is examined individually. In addition, the differences between shells can be assessed quantitatively by calculating the dissimilarity of aperture ontogeny profiles among shells. Furthermore, the dissimilarity matrix can be used to plot a dendrogram. A detailed step-by-step manual and a video tutorial are available as S1 Protocol and S1 Movie. Finally, we discuss some possible applications and implications of these shell form quantification methods in theoretical morphology, functional morphology, taxonomy and shell shape evolutionary studies.
Computation software and hardware
Various commercial 3D modelling and statistical software packages exist for visualising, manipulating, and understanding morphology, such as Amira® (Visage Imaging Inc., San Diego, CA) and Autodesk Maya (San Rafael, CA) (reviewed by [81]). However, in this study, we used only two open-source 3D data modelling and processing software packages, namely Blender ver. 2.63 (www.blender.org) and MeshLab ver. 1.3.2 ([82], http://meshlab.sourceforge.net/). Both have been used in biology to visualise and model morphology (for MeshLab: [83][84][85]; for Blender: [86][87][88][89][90][91][92][93]). However, these programs have not been used to their full extent in the morphological quantification and analysis of 3D data for organisms. For quantification of morphology, we used the open-source Python interpreter ver. 3.2 that is embedded in Blender 2.63. In addition, we also used an extension to the Python programming language, NumPy [94], which provides high-level mathematical functions.
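To illustrate the kind of computation involved, the fragment below is a minimal NumPy sketch, not the authors' published script (see S1 Protocol for that), of how curvature and torsion can be approximated from a discretely sampled 3D growth trajectory using the standard Frenet formulas κ = |r′ × r″| / |r′|³ and τ = (r′ × r″) · r‴ / |r′ × r″|².

```python
# Minimal sketch (not the published S1 script): discrete curvature and torsion
# of a sampled 3D growth trajectory via the Frenet formulas, with derivatives
# approximated by finite differences (numpy.gradient).
import numpy as np

def curvature_torsion(points):
    """points: (n, 3) array of positions sampled along the ontogeny axis."""
    d1 = np.gradient(points, axis=0)   # r'
    d2 = np.gradient(d1, axis=0)       # r''
    d3 = np.gradient(d2, axis=0)       # r'''
    cross = np.cross(d1, d2)
    cross_norm = np.linalg.norm(cross, axis=1)
    speed = np.linalg.norm(d1, axis=1)
    kappa = cross_norm / speed**3
    tau = np.einsum("ij,ij->i", cross, d3) / cross_norm**2
    return kappa, tau

# Sanity check: a circular helix has constant curvature and torsion.
t = np.linspace(0, 6 * np.pi, 400)
helix = np.column_stack([np.cos(t), np.sin(t), 0.3 * t])
kappa, tau = curvature_torsion(helix)
print(kappa[200], tau[200])  # approx. 1/1.09 and 0.3/1.09
```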
All the morphological data were explored and analysed with the open-source statistical programming language R version 3.0.1 [95] in the RStudio environment [96]. We installed two additional packages in R, namely "lattice" (Lattice Graphics) [97] and "pdc" (Permutation Distribution Clustering) [98,99].
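The pdc package clusters time series by comparing their permutation (ordinal-pattern) distributions. For readers working in Python, the fragment below is a minimal sketch of the underlying dissimilarity; the embedding dimension of 3 and the squared-Hellinger-type divergence are our own illustrative assumptions, and the sketch is not a substitute for the published R package.

```python
# Minimal Python sketch of a permutation-distribution dissimilarity of the
# kind used by the pdc R package. Embedding dimension m = 3 and the
# squared-Hellinger-type divergence are illustrative assumptions.
import itertools
import numpy as np

def permutation_distribution(series, m=3):
    patterns = list(itertools.permutations(range(m)))
    counts = np.zeros(len(patterns))
    for i in range(len(series) - m + 1):
        window = series[i:i + m]
        code = tuple(int(k) for k in np.argsort(window))  # ordinal pattern
        counts[patterns.index(code)] += 1
    return counts / counts.sum()

def pdc_dissimilarity(x, y, m=3):
    p = permutation_distribution(np.asarray(x), m)
    q = permutation_distribution(np.asarray(y), m)
    return float(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))
```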
All the computational analyses were carried out on a regular laptop computer with the following specifications: Intel® Core™ i7-3612QM @ 2.1 GHz, 8 GB memory (RAM), NVIDIA® GeForce GT 630M with 2 GB memory.

Procedures
1. Obtaining digital 3D models from actual shells. The scan conditions were as follows: voltage, 80 kV or 100 kV; pixels, 1336 rows × 2000 columns; camera binning, 2 × 2; image pixel size, 3-6 μm; rotation step, 0.4° or 0.5°; and rotation, 360°. Next, volume reconstruction from the acquired images was done in NRecon. The images were aligned to the reference scan and reconstruction was done with the following settings: beam hardening correction, 100%; reconstruction angular range, 360 degrees; minimum and maximum for CS to image conversion (dynamic range), ca. 0.12 and ca. 20.0; and result file type, BMP. Finally, 3D models were created from the reconstruction images in CT Analyser with the following settings: binary image index, 1 to 255 or 70 to 255; and were saved as digital polygon mesh objects (*.PLY format).
2. Pre-processing digital shell models. The 3D models were then simplified by quadric edge collapse decimation implemented in MeshLab [82] to reduce computational requirements. The raw polygon mesh shells in PLY format have millions of faces and a file size between 20 and 80 Mbytes. Thus, we reduced the number of faces for all models to 200,000-300,000, giving files of between 3 and 6 Mbytes. In addition, for the sake of convenience during the retopology process, all 3D models were repositioned so that the shell protoconch columella was parallel with the z-axis. This was done by using the manipulator tools in MeshLab.
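The same decimation step can also be scripted rather than performed through the MeshLab GUI. The fragment below is a sketch assuming a recent PyMeshLab release in which the filter is exposed as meshing_decimation_quadric_edge_collapse; the file names and the target face count are illustrative.

```python
# Sketch: scripted equivalent of the quadric edge collapse decimation done
# here in the MeshLab GUI. Assumes a recent PyMeshLab release exposing the
# filter as meshing_decimation_quadric_edge_collapse; file names illustrative.
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh("shell_raw.ply")       # raw scan: millions of faces
ms.meshing_decimation_quadric_edge_collapse(
    targetfacenum=250000,               # within the 200,000-300,000 range used
    preservenormal=True,
)
ms.save_current_mesh("shell_simplified.ply")
```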
3. Creating a reference: tracing aperture outlines and the ontogeny axis from shell models. (S1 Movie: from 00:40 until 22:00 of the video). The digital shell 3D model in PLY format consists of 3D Cartesian coordinate vertices in which each set of three vertices constitutes a triangular face, and all faces are connected through a complex network. In other words, these vertices and faces are not biologically meaningful structures, and it is not possible to extract aperture outline data directly from a raw PLY digital shell model. Monnet et al. [100], for example, attempted to extract aperture outlines automatically from a digital 3D model by making plane cross-sections of the shell model, but the resulting outlines do not reflect the form of the actual aperture outlines. Hence, we retopologised the raw 3D mesh models according to the aperture ontogeny for later data extraction.
We used Blender, which is more flexible than the commercial software used by Monnet et al. [100]. For the sake of convenience, we describe the following workflow in terms of the tools or functions (e.g. "Import PLY") that can be called by hitting the SPACE bar in the Blender environment. However, this workflow may be modified by the user.
To begin, we imported a PLY shell model into the Blender environment ("Import PLY"). Then, we resized the model 1000× ("Resize") so that a scale of 1 Blender unit equalled 1 mm. After that, we examined the traces of aperture outlines (i.e. growth lines, ribs, spines) (Fig 1A) and of the ontogeny axis (i.e. spiral striation, ridges, colour lines) (Fig 1B) on the actual shells. It is not possible, however, to trace apertures on the shell protoconch, because the protoconch is an embryonic shell that may not grow accretionarily and usually has no growth lines. In many cases, the aperture of the overlapping whorls cannot be traced from the outer shell wall. One way to deal with this situation is to trace the aperture at the inner shell wall; the obscured aperture outline can then be inferred by studying conspecific juvenile specimens (see video tutorial 05:00-08:00 of S1 Movie). It does not really matter whether the aperture outline is traced from the outside or the inside; when it is traced from the inside, the subsequent retopologising stage needs to take the shell thickness of the overlapping whorl into account.
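For illustration, the import and rescale steps can also be issued directly from the embedded Python interpreter rather than through the SPACE-bar menu; the sketch below assumes a placeholder file path and that the imported shell is the active object:

```python
# Sketch of the "Import PLY" and "Resize" steps via the embedded Python
# interpreter; the file path is a placeholder.
import bpy

bpy.ops.import_mesh.ply(filepath="/path/to/shell.ply")
obj = bpy.context.active_object          # the freshly imported shell

# Rescale 1000x so that 1 Blender unit corresponds to 1 mm.
obj.scale = (1000.0, 1000.0, 1000.0)
bpy.ops.object.transform_apply(scale=True)
```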
After these aperture traits were identified, we selected the 3D model (by clicking the right mouse button) and traced all these traits on the surface of the raw 3D mesh model in Blender with the "Grease Pen Draw" tool. After that, the grease-pen-traced aperture traits were converted to Bezier curves with "Convert Grease Pencil" (Fig 1C). We would like to emphasise that this is the most critical step, the one that determines the efficiency of this shell quantification method. The key lies in a good understanding of the way the aperture is structured, which is essential for tracing the aperture outlines accurately. The orientation of the shell when the aperture is digitised, however, does not influence the aperture ontogeny data.
4. Retopologising aperture outlines from the reference and generating retopologised shell models. (S1 Movie: from 22:01 until 53:00 of the video; and File 4). For each shell, we created a set of new Non-Uniform Rational B-Spline (NURBS) surface circles ("Add Surface Circle") and modified these ("Toggle Editmode") according to the aperture outlines. We created a 16-point NURBS surface circle and aligned the circle to the aperture outline by translation ("Translate"), rotation ("Rotate"), and resizing ("Resize") (Fig 1D). After the NURBS surface circle was roughly aligned, each of its 16 points was selected and adjusted by translation ("G") one by one, so that the outline of the NURBS surface circle matched the aperture outline exactly. At the same time, the second point of the NURBS surface circle was aligned to the ontogeny axis (Fig 1B and 1C). For the shells used in this study, which have a relatively simple, almost circular aperture, 16 points are sufficient to capture the aperture's outline. For more complex aperture shapes, a greater number of NURBS surface circle points may be required. In any case, the number of chosen points will not affect the final surface model generated from these surface circles, as long as the aperture shape is properly represented by the NURBS surface circles.
After the first aperture outline was retopologised as a NURBS surface circle, the NURBS surface circle was selected by using a Python script (S2 Text), duplicated ("Duplicate Objects"), and aligned to the next aperture outline in the same way as the previous one. This step was repeated until all the aperture outlines were retopologised into NURBS surface circles (Fig 1D and 1E). Then the shell surface was created in the form of a NURBS surface based on the digitised aperture NURBS surface circles ("(De)select All" and "Make Segment" in "Toggle Editmode") (Fig 1F and 1G). Lastly, we made the surface meet the end points in the U direction and increased the surface subdivision per segment (resolution U = 8) through the properties menu of the object (Properties (Editor types) > Object Data > Active Spline).
After that, we converted the NURBS surface 3D model into a 3D mesh model consisting of vertices, edges, and faces ("Convert to"-"Mesh from Curve/Meta/Surf/Text"). The final retopologised 3D shell mesh consists of X aperture outlines, each with Y vertices, and thus a total of X × Y vertices. Each vertex is connected to its four nearest vertices with edges to form a wireframe shell and faces (Fig 1H).
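Because of this regular X × Y structure, the retopologised mesh can be read back as an array of aperture outlines. The sketch below assumes that the vertex order follows the aperture-by-aperture construction described above and that Y, the number of vertices per outline in the converted mesh (which depends on the chosen resolution), is known:

```python
# Sketch: reading the retopologised mesh back as an array of aperture
# outlines. The vertex ordering is an assumption, not guaranteed by Blender.
import bpy
import numpy as np

obj = bpy.context.active_object
Y = 16                                   # vertices per outline (example value)
verts = np.array([v.co[:] for v in obj.data.vertices])
apertures = verts.reshape(-1, Y, 3)      # apertures[i, j] = point j of outline i
print(apertures.shape)                   # (X, Y, 3)
```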
It is important to note that the NURBS surface circle is defined by a mathematical formula and does not in itself imply any biological interpretation of the shell. We chose the NURBS surface circle because the 3D aperture outline form can be digitised with a small number of control points, and because the shell surface can be recreated as a NURBS surface based on the digitised aperture NURBS surface circles. The final 3D polygon mesh model is more simplified than the raw PLY 3D model, and its vertex data resemble the actual accretionary growth process of the shell (Fig 1A and 1H).
5. Quantifying aperture growth trajectory. (S1 Movie: from 53:01 until 56:00 of the video). The aperture ontogeny profiles were quantified as described in Liew et al. [80], with slight modifications in that both the aperture growth trajectory and the aperture form were quantified directly from the retopologised 3D shell model. The aperture growth trajectory was quantified as a spatial curve, namely the ontogeny axis represented by the series of first points of the aperture outlines. We estimated two differential geometry parameters, namely curvature (κ) and torsion (τ), as well as the ontogeny axis length, for all apertures [12,29]. The local curvature and torsion, and the cumulative ontogeny axis length, were estimated from the aperture points along the growth trajectory by using weighted least-squares fitting and local arc-length approximation [101]. All the calculations were done with a custom-written Python script that can be run in the Python interpreter of the Blender ver. 2.63 environment. The whole workflow was: (1) select the retopologised 3D shell mesh (by clicking the right mouse button), (2) set the number of sample points "q = ##" in the Python script, and (3) paste the script into the Python interpreter (S1 Text). The final outputs, with torsion, curvature and ontogeny axis reference for each aperture, were saved as CSV files.
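Our script follows the weighted least-squares estimator of [101]; as a simplified, self-contained illustration of the quantities involved, the finite-difference sketch below computes discrete curvature and torsion with NumPy and verifies them against a circular helix, for which κ = a/(a² + b²) and τ = b/(a² + b²) are known analytically. It is not the script used in the study (S1 Text):

```python
# Finite-difference sketch of curvature and torsion for a spatial curve,
# verified on a circular helix (not the study's S1 Text script).
import numpy as np

def curvature_torsion(points):
    """points: (N, 3) ordered vertices along the ontogeny axis."""
    d1 = np.gradient(points, axis=0)                 # r'
    d2 = np.gradient(d1, axis=0)                     # r''
    d3 = np.gradient(d2, axis=0)                     # r'''
    cross = np.cross(d1, d2)
    cross_norm = np.linalg.norm(cross, axis=1)
    speed = np.linalg.norm(d1, axis=1)
    kappa = cross_norm / speed ** 3                  # |r' x r''| / |r'|^3
    tau = np.einsum('ij,ij->i', cross, d3) / cross_norm ** 2
    return kappa, tau

a, b = 2.0, 1.0                                      # helix radius and pitch
t = np.linspace(0.0, 6.0 * np.pi, 2000)
helix = np.column_stack([a * np.cos(t), a * np.sin(t), b * t])
kappa, tau = curvature_torsion(helix)
print(kappa[1000], a / (a ** 2 + b ** 2))            # ~0.4 vs 0.4
print(tau[1000], b / (a ** 2 + b ** 2))              # ~0.2 vs 0.2
```

Both formulas are invariant under reparametrisation of the curve, so the unit-index spacing assumed by np.gradient does not affect the estimates.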
We found a convergence issue in the curvature and torsion estimators (see also [36,101]). The accuracy of the curvature and torsion estimates depends on the number and density of the vertices along the ontogeny axis (i.e. the number of aperture outlines) and on the number of sample points. Nevertheless, the number of sample points can be adjusted until good (i.e. converged) curvature and torsion estimates are obtained. We used 10% of the total points of the ontogeny axis as the number of sample points, which gave reasonably good estimates for curvature and torsion.
Notwithstanding the algorithm issue, the curvature and torsion estimators are informative in describing the growth trajectory of the shell spiral geometry. Curvature is always larger than or equal to zero (κ ≥ 0). When κ = 0, the spatial curve is a straight line; the larger the curvature, the smaller the radius of curvature (1/κ), and thus the more tightly coiled the spatial curve. The torsion estimator, on the other hand, can be zero or take either negative or positive values (−∞ < τ < ∞). When τ = 0, the spatial curve lies completely in one plane (e.g. a flat planispiral shell); negative torsion values correspond to left-handed coiling and positive values to right-handed coiling. The larger the torsion, the smaller the radius of torsion (1/τ), and thus the taller the spiral.
6. Quantifying aperture form. (S1 Movie: from 53:01 until 56:00 of the video). We quantified the aperture outline size as the perimeter, and the form as normalised elliptic Fourier coefficients (normalised EFA), by using a custom-written Python script that can be run in the Python interpreter embedded in the Blender environment. The workflow was: (1) select the retopologised 3D shell mesh (by clicking the right mouse button), (2) set the parameter "number_of_points_for_each_aperture = ##" in the Python script, and (3) paste the script into the Python interpreter of Blender (S1 Text). The final outputs were saved as CSV files.
The aperture outline perimeter was estimated as the sum of the lengths (mm) of all the edges connecting the vertices (hereafter "aperture size"). For the aperture form analysis, we used 3D normalised EFA algorithms [102] and implemented these in the custom Python script. Although many algorithms exist for describing and quantifying the form of a closed outline [103], we used EFA because it is robust to unequally spaced points, can be normalised for size and orientation, and can capture complex outline forms with a small number of harmonics [67,102]. In this study, we used five harmonics, each with six coefficients, which were sufficient to capture the diverse aperture shapes of our shells. For a quantification of aperture shape that is invariant to size and rotation, we normalised the EFA of the aperture outlines for orientation and size. If needed for comparison with other studies, the normalised EFA can be repeated on the same dataset with a higher or lower number of harmonics.
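As a simplified illustration of the idea behind the 3D outline description, the sketch below resamples a closed outline to equal arc-length spacing and takes the first five discrete Fourier harmonics of each coordinate, giving 5 harmonics × 3 coordinates × 2 coefficients = 30 numbers. Note that this plain-DFT shortcut requires the resampling step, whereas the EFA of [102] used in the study handles unequally spaced points directly; the normalisation for size and orientation is also omitted here:

```python
# Simplified Fourier description of a closed 3D aperture outline
# (equal-arc-length resampling + plain DFT; not the normalised EFA of [102]).
import numpy as np

def outline_fourier(points, n_harmonics=5):
    """points: (N, 3) ordered vertices of a closed outline."""
    closed = np.vstack([points, points[:1]])             # close the loop
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])          # cumulative arc length
    u = np.linspace(0.0, s[-1], 256, endpoint=False)     # equal-arc-length grid
    resampled = np.column_stack(
        [np.interp(u, s, closed[:, k]) for k in range(3)])
    spec = np.fft.rfft(resampled, axis=0) / len(u)
    harm = spec[1:n_harmonics + 1]                       # drop the DC term
    return np.hstack([harm.real, harm.imag])             # (5, 6) coefficients

coeffs = outline_fourier(np.random.rand(40, 3))          # placeholder outline
print(coeffs.shape)                                      # (5, 6) -> 30 numbers
```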
After normalisation, we ran principal components analysis (PCA) to summarise the 30 normalised Fourier coefficients as principal components scores (hereafter "aperture shape scores"). After that, we selected the major principal components (explaining > 90% of the variance) for further analysis. The aperture shape scores of each selected principal component were plotted and analysed against the ontogeny axis.
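The PCA step can be reproduced with a singular value decomposition; in this sketch, `coeff_matrix` is a placeholder for the table of 30 normalised Fourier coefficients per aperture:

```python
# PCA of the 30 normalised Fourier coefficients via SVD; `coeff_matrix`
# (n apertures x 30) is filled with placeholder data here.
import numpy as np

coeff_matrix = np.random.rand(500, 30)                   # placeholder data
centered = coeff_matrix - coeff_matrix.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
scores = centered.dot(Vt.T)                              # principal component scores
explained = S ** 2 / np.sum(S ** 2)
n_keep = np.searchsorted(np.cumsum(explained), 0.90) + 1 # components for >90%
shape_scores = scores[:, :n_keep]                        # "aperture shape scores"
```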
7. Visualising aperture form and trajectory changes along the shell ontogeny. For data exploration, we used a graphical technique for representing changes in the aperture ontogeny profile along the shell ontogeny. For each shell, we made a vertical four-panel scatter plot in which each of four variables (namely curvature, torsion, aperture size, and the first principal component aperture shape score) was plotted against the ontogeny axis. When necessary, the second and third principal component aperture shape scores were also included. In addition, the axis of each variable was rescaled so that it was identical for the same variable across all shells. After this standardisation of the axes, the aperture ontogeny profiles of several shells could be quantitatively compared side by side.
8. Quantitative comparison between shell forms. In addition to the qualitative comparison between shell forms described above, the dissimilarity between different shells can be analysed quantitatively. We used Permutation Distribution Clustering (PDC), which finds similarities in time series datasets [98,99]. PDC can be used to analyse the changes of one variable along the shell ontogeny between different shells (i.e. a two-dimensional dataset: number of shells × number of apertures) as well as changes of multiple variables between shells (i.e. a three-dimensional dataset: number of shells × number of variables × number of apertures). We applied the most recent analysis developed by Brandmaier [98,99] because it is available as an R package and can quantify trend similarity. That said, the same data can always be analysed with other algorithms that may become available in the future.
Although PDC is robust to length differences between datasets, our preliminary analysis showed that the PDC output would be biased when there was a great (around two-fold) difference in total ontogeny axis length. As we compared the entire ontogeny profiles (i.e. from right after the protoconch until the final aperture) among the shells, larger shells would have a longer ontogeny axis. Thus, we resampled the complete ontogeny profiles at every 2% of the ontogeny axis of each shell; that is, we standardised the data as in procedure 7, but divided the ontogeny axis of each shell into 50 equal-length intervals and obtained the ontogeny profile data at these 50 points along the ontogeny axis. This standardisation allows the comparison of trends in variable changes along the shell ontogeny. In addition to the shape comparison, we obtained the shell size in terms of volume by using the "Volume" function in Blender, after the 3D shell model was closed at both ends by creating faces ("Make Edge/Face") on apertures selected at both ends ("Loop Select") in EDIT mode.
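The resampling step can be expressed compactly with linear interpolation; in the sketch below, `axis_len` and `values` are placeholder names for the per-aperture cumulative ontogeny axis length and the variable of interest from the CSV output:

```python
# Resample an ontogeny profile at every 2% of the total ontogeny axis
# length (50 points); `axis_len` and `values` are placeholder names.
import numpy as np

def resample_profile(axis_len, values, n_points=50):
    frac = np.asarray(axis_len, dtype=float) / axis_len[-1]  # 0..1 along axis
    grid = np.linspace(0.02, 1.0, n_points)                  # 2%, 4%, ..., 100%
    return np.interp(grid, frac, values)
```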
The aperture ontogeny profiles of all shells were combined into a three-dimensional data matrix consisting of n shells × four variables × 50 aperture data points. We ran five PDCs, one for each of five data matrices: 1) all four variables, 2) torsion, 3) curvature, 4) aperture size, and 5) aperture shape scores. The parameter settings for the PDC analysis were as follows: embedding dimension = 5; time-delay of the embedding = 1; divergence measure between discrete distributions = symmetric alpha divergence; and hierarchical clustering linkage method = single. The dissimilarity distances between shells were used to produce the dendrogram. The PDC analysis was performed with the "pdc" library [99] in R version 3.0.1 [95] (S3 Text).
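For readers without R, the core idea of PDC can be sketched in a few lines of Python: each series is codified as a distribution over the order patterns of its length-5 embeddings, and series are compared by a divergence between these distributions. In this illustrative version the squared Hellinger distance stands in for the symmetric alpha divergence of the "pdc" package, so the numbers will differ from the package output:

```python
# Illustrative Python analogue of PDC with embedding dimension 5; the
# squared Hellinger distance replaces the package's alpha divergence.
import itertools
import numpy as np

def permutation_distribution(series, m=5):
    counts = {p: 0 for p in itertools.permutations(range(m))}
    for i in range(len(series) - m + 1):
        pattern = tuple(int(r) for r in np.argsort(series[i:i + m]))
        counts[pattern] += 1
    freq = np.array([counts[p] for p in sorted(counts)], dtype=float)
    return freq / freq.sum()

def pdc_distance(series_a, series_b, m=5):
    p = permutation_distribution(series_a, m)
    q = permutation_distribution(series_b, m)
    return np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)    # squared Hellinger

a = np.sin(np.linspace(0, 10, 50))                   # placeholder profiles
b = np.cos(np.linspace(0, 10, 50))
print(pdc_distance(a, b))
```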
Worked example: Comparative analysis of Opisthostoma and Plectostoma species shell form and simulated shell form
We evaluated the shell form quantification method described above by using shells of Opisthostoma and Plectostoma, which exhibit great variability in shell form. Some of the species follow a regular coiling regime, whereas others deviate from regular coiling to various degrees. It remains a challenging task to quantify and compare these shell forms among species, with either traditional or geometric morphometrics, because a standard aperture view cannot be determined for the irregularly and openly coiled shells.
We selected four species, namely Plectostoma laidlawi Sykes, 1902 (Fig 2A), Plectostoma crassipupa van Benthem Jutting, 1952 (Fig 2B), Plectostoma christae Maassen, 2001 (Fig 2C), and Opisthostoma vermiculum Clements and Vermeulen, 2008 (Fig 2D), whose shell forms are, respectively: regularly coiled, slight distortion of the last whorl, strong distortion of the last whorl, and complete distortion of most of the whorls. Despite the narrow taxonomic range of the selected species, the range of shell forms of these four species does cover a very large diversity of shell form. We retopologised these four shells by following procedures 1 to 4 (S1 Dataset).
In addition to the four retopologised 3D shell models, we manually created another four shell models by transforming three of the four retopologised NURBS surface 3D shell models with the "Transform" function in Blender. These models are: 1) a Plectostoma laidlawi that was resized to half the original size and given a slight modification of the aperture size (Fig 2E); 2) a Plectostoma christae that was reshaped into an elongated form by reducing the model size (linear dimension) to one-half along the x and y axes and doubling the size along the z axis (Fig 2F); 3) a Plectostoma christae that was reshaped into a depressed form by multiplying the model size by 1.5 along the x and y axes and reducing it to one-half along the z axis (Fig 2G); and 4) a composite Opisthostoma vermiculum consisting of one original Opisthostoma vermiculum 3D model of which we connected the aperture to another, enlarged, Opisthostoma vermiculum (Fig 2H). Finally, we analysed all eight shell models by following procedures 5 to 8.
Results and Discussion
Retopologised 3D shell models

All the final retopologised 3D shell models are available in PLY ASCII mesh format (S2-S9 Datasets), with the raw data as a list of vertices followed by a list of polygons, which can be accessed directly without the need for any 3D software. Each vertex is represented by x, y, z coordinates, and each polygon face consists of four vertices. This simplified yet biologically informative 3D mesh shell model allows the quantification of aperture form and growth trajectory. Moreover, the 3D shell models and their raw vertex data could potentially be used in studies of functional morphology and in theoretical modelling of shell form, respectively.
Malacologists have traditionally focused on empirical shell morphological data, from which functional, ecological and evolutionary aspects were then extracted. The physical properties of a shell are determined by its form (e.g. [55,56]). By using the 3D models, shell properties and function can be analysed in silico. For example, thickness can be added to the 3D shell model (Fig 3E and 3F) in order to obtain the volume of the shell material, the shell's inner volume, its inner and outer surface areas, and its centre of gravity. We used the "build" function of the software, which can only "solidify" the model with a uniform thickness; if necessary, however, it is possible to write a custom Python script to add any desired thickness to the shell. Quantification of shell properties can then be done with the geometry tools in MeshLab or Blender, in contrast to the pre-3D era, when mathematical descriptions of the shell form were required [3,104,105]. Furthermore, it is possible to convert the 3D models to 3D finite element (FE) models, with which physical properties (e.g. strength) can be tested (e.g. [32]).
In addition to the potential use of 3D shell models in functional morphology, the coordinate data of the vertices of 3D shell models could be used directly by theoretical morphologists (see Figure 1 in [2]). For example, these data can be extracted in different formats that fit the data requirements of different types of theoretical shell models, namely generating curve models using a fixed or moving reference frame (Fig 3C), helicospiral or multivector helicospiral models using a fixed reference frame (Fig 3A, 3B and 3D), or growth vector models using a moving reference frame (Fig 3A and 3B).
The retopologising of the aperture ontogeny from a raw 3D shell model (procedures 1 to 4) is a time-consuming and tedious process compared with traditional and geometric morphometrics. There are no differences between GM and our method in the time required for data analysis; the only difference lies in the data acquisition. In our experience, two to three days are needed to collect the aperture data from a shell. For example, the four shell models were created by retopologising between 73 and 96 separate apertures (ca. 1500 points for 90 apertures). From the viewpoint of a short-term cost-benefit balance, this may be seen as a waste of time, because GM requires no more than a few dozen points for each shell to generate the shape variables for a study, even though these points are not comparable to points of other shells or other studies. In the long run, however, it is a good investment of time, since it allows the understanding of shell function, growth, and evolution, as the same set of data is obtained from different shell forms and can be accumulated and analysed together. Moreover, as with all newly developed techniques, improvements in efficiency and automation are possible and may remove these impediments in the future (e.g. [36]).
Comparing shell form from the view of shell ontogeny

Fig 4 gives an overview of the aperture ontogeny profile and shell volume for each species. The curvature, torsion, perimeter, and ontogeny axis are represented by true numerical values with the units mm−1 and mm, and can thus be interpreted directly. In contrast, the aperture shape scores are just statistics of the Fourier coefficients and are not absolute quantifications of aperture shape. The PCA score of an aperture shape depends on the shapes of the other aperture outlines, and thus it might change whenever other aperture outlines are added to the analysis. Nevertheless, the aperture scores will stabilise as data from more shells become available and most of the extreme aperture forms are included. In this study, the first principal component explained 92% of the total variance; the second and third principal components explained only 3% and 1% of the total variance, respectively. We showed that the shell form can be represented by the ontogenetic changes of the aperture growth trajectory, in terms of curvature and torsion, and of the aperture form, in terms of perimeter and shape.
Our first example evaluates this method in illustrating the differences between two shells that have the same shape but differ in size: the half-size Plectostoma laidlawi shell (Fig 4E) and the original Plectostoma laidlawi shell (Fig 4C). As revealed by their aperture ontogeny profiles, the size difference between the two shells affects the curvature, torsion, ontogeny axis length and aperture size. For the resized Plectostoma laidlawi shell, the values of curvature and torsion are twice as large as for the original, whereas the ontogeny axis length and aperture size are only half those of the original shell. The overall trends in the changes of these variables along the ontogeny axis are comparable between the two shells (Fig 5B).
Another example shows the ontogeny profiles of three shells, namely the elongated (Fig 4G), depressed (Fig 4H), and original (Fig 4A) versions of the Plectostoma christae shell. Comparison of the aperture profiles among these shows the most obvious discrepancy in the greater torsion values of the elongated shell, which change in a more dramatic trend along the shell ontogeny. In addition, each of the three shells has its unique aperture shape scores, although there are no big discrepancies in aperture size. The differences in ontogeny axis length, curvature and torsion are related to the differences in the aperture shape statistics among the three shells. However, our small dataset of only three shells is not sufficient to thoroughly disentangle the interplay between aperture size, shape, and growth trajectory in relation to shell form.
Our last example is the comparison between the original (Fig 4D) and the composite (Fig 4F) Opisthostoma vermiculum shell. It shows that our method has high sensitivity and robustness in the analysis of such bizarre shell forms. As shown in Fig 4F, the start of the aperture ontogeny profile of the composite shell is the same as for the original shell (Fig 4D). In addition, the later parts of the ontogeny profile trends are still comparable to the first part, but differ in value because of the scaling effect.
As we have shown in Fig 4, shell forms can be explored and compared qualitatively on the basis of aperture ontogeny profiles. Users might need some training in the interpretation of the plots, because these differ from both linear dimension measurement plots and geometric morphometric shape coordinate plots. Our evaluation suggests that this data visualisation method is sensitive and robust in capturing the aperture ontogeny profile of any shell form, and thus makes qualitative comparisons across gastropod taxa and studies possible.
This method could be applied in malacological taxonomy, the core business of which is the description of shell form. Despite hundreds of years of taxonomic history of shells, there has been little change in the way shell form is described. For example, shell form is usually described in terms of linear dimensions (shell width and height), number of whorls, shell shape (flat, depressed, globose, conical, or elongated), and whorl shape (from flat to convex). Here, we suggest that the aperture ontogeny profiles would be a great supplement to this classical approach to shell description. For example: (1) the size of the shell (its volume) depends on the ontogeny axis length and aperture size; (2) the shell shape depends on the growth trajectory in terms of curvature and torsion; and (3) the shape of the whorls depends on the shape of the aperture (Fig 4). In the case of our four shells (Fig 2A-2D), it is clear that the aperture size of each shell is constricted at roughly the same part of the respective shell ontogeny, namely between 70% and 85%, regardless of the dissimilar shell sizes and shapes (Fig 4A-4D). In fact, these decreases in aperture size during ontogeny are in accordance with the shell constriction, one of the shell characters that have been used in the taxonomy of the genera Opisthostoma and Plectostoma [77,106]. However, the shell constriction has not been quantified previously, and we show that it could also be an important developmental homology for the two genera. These preliminary results suggest that aperture ontogeny profiles could aid taxonomists in decision-making when grouping taxa based on homologous characters.

Fig 5A shows the hierarchical clustering of the eight shells based on all four aperture ontogeny profiles. In this dendrogram, the composite Opisthostoma vermiculum is completely separate from the other shells. The remaining seven shells are clustered into two groups: one consists of the more regularly coiled shells, namely Plectostoma christae with its two transformed shells and Plectostoma crassipupa; the other group consists of the shells that deviate from regular coiling, namely Plectostoma laidlawi with its transformed shell and Opisthostoma vermiculum. Nevertheless, there were high dissimilarities between shells within each group, as revealed by the long branch lengths in Fig 5A, except for the two Plectostoma laidlawi shells (Table 1). The aperture ontogeny profiles of the Plectostoma laidlawi shell and its reduced version are almost the same. The high dissimilarity among the other six shells can be explained when each of the variables in the aperture ontogeny profile is analysed separately, as shown in Fig 5B.

Fig 5B shows the dendrograms of the aperture ontogeny profiles for each of the four variables. All four dendrograms have a different topology from the one in Fig 5A. Among the variables, the aperture ontogeny profile of the curvature shows the smallest discrepancies among shells. The two Plectostoma laidlawi shells are the only pair that clusters together in all the dendrograms of Fig 5A and 5B, because they are identical in every aspect of the aperture ontogeny profile except torsion. Hence, the independent analysis of aperture ontogeny profile variables corresponds well to the overall analysis of aperture ontogeny profiles. The PDC analysis is based on the standardised ontogeny profiles and their trends; thus, it is useful for the comparative analysis of shell shape, but not of shell size.
Nevertheless, the size comparison between shells is rather straightforward.
Quantitative comparison between different shell forms
In this study, we quantified the shell size as shell volume, which can be estimated easily from retopologised 3D shell models (Fig 4). This quantification of shell size in terms of volume is more meaningful from the functional and developmental point of view because a snail should grow a shell in which its entire soft body can fit when the snail withdraws into the shell. We can then compare the form between shells when the dendrograms are interpreted together with shell size (volume) data. For example, the Plectostoma laidlawi shell has the same shape as, but is eight times larger than, the resized Plectostoma laidlawi.
In addition to the construction of morphospace, the dissimilarity matrix can be used in phylogenetic signal tests [107]. Furthermore, it can also be analysed together with other distance matrices, such as for geographical or ecological distance, to improve our understanding of the evolutionary biology of shell forms.
Conclusions, Limitations and Future Directions
We demonstrated an alternative workflow for the data acquisition, exploration and quantitative analysis of shell form. This method has several advantages: (1) robustness-this method can be used to compare any shell form: the same aperture profiles can be obtained from any form of shell, and these profiles from different shells and/or different studies can be analysed together. These parameters can be obtained from the aperture as long as the shell grows accretionarily at the aperture; (2) scalability and reproducibility-the data obtained from different studies and different gastropod taxa can be integrated: aperture ontogeny profiles are obtained from the aperture outlines, a trait that exists in every gastropod shell, and we believe that the aperture outlines obtained by different experienced malacologists, on different shells, would be highly similar; (3) versatility-the outputs of this method comply with the data standards required in taxonomy, functional morphology, theoretical modelling, and evolutionary studies: the raw 3D shell mesh models can be used for the visualisation of shells in taxonomic research (e.g. [76]), the coordinate data of the vertices can be used for theoretical modelling (e.g. [2]), the aperture ontogeny profiles can be used in shell functional studies [108], and the dissimilarity matrix between shell forms can be analysed together with a phylogenetic distance matrix. Yet, our method has its limitations. Firstly, our retopology procedure relies on a 3D shell model that requires CT-scan technology. In fact, although a CT-scanned 3D shell model can certainly facilitate the retopology process, it is not indispensable: the key to the retopology process is to digitise the aperture along the shell ontogeny, and thus a shell can be retopologised fully in Blender, even without a reference shell model, given a good understanding of the aperture ontogeny obtained by studying the real specimens. Secondly, the retopology procedure that is essential for our data acquisition is more time-consuming than traditional and geometric morphometrics, where the data can be obtained from a single image of a shell. Thirdly, our method is effective in the analysis of overall shell form, but not of shell ornamentation.
In the future, our method could be improved to accommodate the analysis of shell ornamentation. Parts of our method (i.e. procedures 1-6) can be used to obtain shell ornamentation data, such as radial ribs (i.e. commarginal ribs), but these data cannot be analysed with our qualitative and quantitative approaches, which focus on longitudinal growth (i.e. procedures 7-8). Finally, we hope this shell form quantification method will stimulate more collaboration among malacologists working in different research fields, and between empirical and theoretical morphologists.
Supporting Information

S1 Protocol. A step-by-step manual. (PDF)

S1 Movie. Video tutorial for procedures 3 and 4.
"year": 2016,
"sha1": "778da0026a9786217bbc035570f310c4474e489e",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0157069&type=printable",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "b2de18a6f7ff22169dc595bdefd6a9119b57696a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Peritumoral radiomics features predict distant metastasis in locally advanced NSCLC
Purpose

Radiomics provides quantitative tissue heterogeneity profiling and is an exciting approach to developing imaging biomarkers in the context of precision medicine. Normal-appearing parenchymal tissues surrounding primary tumors can harbor microscopic disease that leads to an increased risk of distant metastasis (DM). This study assesses whether computed tomography (CT) imaging features of such peritumoral tissues can predict DM in locally advanced non-small cell lung cancer (NSCLC).

Material and methods

200 NSCLC patients of histological adenocarcinoma were included in this study. The investigated lung tissues were the tumor rim, defined as 3 mm of tumor and parenchymal tissue on either side of the tumor border, and the exterior region, extending from 3 to 9 mm outside of the tumor. Fifteen stable radiomic features were extracted and evaluated from each of these regions on pre-treatment CT images. For comparison, features from expert-delineated tumor contours were similarly prepared. The patient cohort was separated into training and validation datasets for prognostic power evaluation. Both univariable and multivariable analyses were performed for each region using the concordance index (CI).

Results

Univariable analysis revealed that six out of fifteen tumor rim features were significantly prognostic of DM (p-value < 0.05), as were ten features from the visible tumor, whereas only one of the exterior features was. Multivariably, a rim radiomic signature achieved the highest prognostic performance in the independent validation sub-cohort (CI = 0.64, p-value = 2.4×10−5), significantly above a multivariable clinical model (CI = 0.53), a visible tumor radiomics model (CI = 0.59), and an exterior tissue model (CI = 0.55). Furthermore, patient stratification by the combined rim signature and clinical predictor led to a significant improvement over the clinical predictor alone and also outperformed stratification using the combined tumor signature and clinical predictor.

Conclusions

We identified peritumoral rim radiomic features significantly associated with DM. This study demonstrated that peritumoral imaging characteristics may provide additional valuable information over visible tumor features for patient risk stratification with respect to cancer metastasis.
Introduction
Lung cancer remains the leading cause of cancer-related mortality worldwide [1]. Histologically, adenocarcinoma represents the most common type of non-small cell lung cancer (NSCLC). Locally advanced NSCLC patients represent about 30% of newly diagnosed lung cancers [2]. These patients typically receive a combination of surgery, chemotherapy, and radiation therapy [3]. Despite these treatment approaches, the survival rate of these patients is limited to ~25% at five years due to disease progression [4,5]. The limitations of the current treatment approach necessitate novel prognosticators that allow for further stratification of different risk groups and more refined therapeutic strategies. Quantitative imaging has been increasingly employed to assess treatment response to cancer therapy. Especially for lung cancers, CT imaging remains the modality of choice, as it is noninvasive, renders anatomical details in high resolution, and can quickly capture patient thoracic anatomy so that artifacts due to respiratory motion are minimized. Planning CT images are routinely acquired in lung cancer patients prior to radiation therapy. Recently, numerous studies have shown that imaging-based radiomic features can quantify tumor heterogeneity and hold potential for application as clinical biomarkers for patient stratification [6-23]. In particular, radiomic studies have shown that CT-derived image features may be prognostic for distant metastasis (DM) and treatment responses in NSCLC [13,14,17,24].
Previous studies have predominantly investigated the association between clinical outcomes and radiomic features within the primary tumor volume [24-27]. However, recent cancer research has shown evidence that extratumoral lung parenchymal tissues surrounding the primary tumor can become involved as cancer infiltrates and metastasizes. Pathological studies have demonstrated that lung tumors can spread through the blood and lymphatic vasculature as well as through airspaces in the lung parenchyma [28-35], and that extratumoral cancerous presence may lead to worse clinical outcomes. In all the aforementioned modes of cancer spreading, study results have consistently found a significantly stronger association with distant or local recurrence for the extratumoral cancerous presence than for its intratumoral counterpart [28,31,33]. Thus, we hypothesized that tumor metastatic progression may manifest itself in the imaged peritumoral tissue characteristics and that the underlying relationship may be explored using radiomics-based profiling of the normal-appearing tissue beyond the identified tumor region. Given the lack of biomarkers for DM in NSCLC, an understanding of peritumoral tissue radiomics as an imaging biomarker may add to existing approaches, which only quantify characteristics within the visible tumor volume, for identifying patients at higher risk and facilitating improved treatment design.
In this study, we present a radiomics investigation of the association of DM with peritumoral tissues in a cohort of 200 adenocarcinoma NSCLC patients. For clinical utilization, the prognostic performance of the peritumoral features was compared to that of tumor-only radiomic features and clinical factors.
Patient characteristics
This study was conducted under a Dana-Farber/Harvard Cancer Center IRB protocol. As the study was retrospective and involved no more than minimal risk to the subjects, the need for patient consent was waived. Our study cohort included patients with pathologically confirmed lung adenocarcinoma and locally advanced NSCLC (overall stage II-III). Patients treated with surgery or chemotherapy prior to the CT simulation date were excluded from our analysis, as were patients receiving SBRT treatment. For unbiased validation, our 200-patient cohort was temporally split into two halves: the training Dataset A (n = 100) and the independent validation Dataset B (n = 100).
Clinical endpoints
The clinical outcome evaluated for this study was DM. Follow-up CT scans were performed every three to six months after treatment for tumor progression assessment. DM was defined as disease spread to sites outside of the lungs. Time to DM was defined as the time interval between the start date of radiation therapy and the first scan date of radiographically evident DM, censored at the date of the last negative scan in patients without recurrence. Time to OS was defined as the time between the radiotherapy start date and the date of death, censored at the last follow-up date.
CT image acquisition and segmentation
Planning CT images were acquired on GE LightSpeed RT16 CT scanners (GE Medical Systems, Milwaukee, WI, USA) following a clinical imaging protocol that mostly used 120 kVp, and were reconstructed using a standard convolution kernel. The most common voxel spacing of the CT images was 0.93 mm × 0.93 mm × 2.5 mm. Exceptions to this protocol were two cases scanned using 140 kVp and six cases reconstructed using 3.75 mm or 5 mm slice thickness. All patients received contrast injection unless there was a contraindication. The primary tumors were contoured using Eclipse software (Varian Medical Systems, Palo Alto, CA, USA) by an experienced CT imaging researcher. Tumor contours were reviewed by an expert radiation oncologist (R.H.M). To ensure that the imaged tumor regions were of good quality for our analysis, cases with motion artifacts were excluded.
Peritumoral contour preparation
Based on the proximity to the primary tumor, two peripheral tissue regions were designed, termed here the tumor rim and the tumor exterior. The tumor rim was defined as the region that included the outer 3 mm of the tumor and 3 mm of tumor-adjacent lung tissue on either side of the tumor contour boundary; the tumor exterior as the region of lung tissue extending from 3 mm to 9 mm outside of the tumor contour (Fig 1.1). The rationale for the tumor rim definition was to designate a "real invasive front" that accounts both for the aggressively invading front of the tumor tissue and for the adjacent normal lung layer, where cancerous islets can frequently be found [34]. The designation of the tumor exterior followed from the spatial extent of microscopic tumor presence as found in pathology studies [28,36,37]. The masks delineating the tumor rim and the exterior tissue were generated through the mathematical morphology operations of erosion and dilation, implemented using the SimpleITK toolbox [38]. Since we were interested in the lung parenchymal tissue characteristics that may show an association with cancer metastasis, care was taken to include only the tissue contours that lie within the lung masks.
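A minimal sketch of this mask construction is given below, assuming SimpleITK 2.x (where morphology radii can be passed as per-axis voxel tuples), binary masks on the 3 mm grid (so a radius of 1 voxel corresponds to 3 mm), placeholder file names, and a lung mask that covers the tumor region:

```python
# Sketch of the rim/exterior mask construction with SimpleITK morphology.
# File names are placeholders; radii are in voxels on a 3 mm grid.
import SimpleITK as sitk

tumor = sitk.ReadImage("tumor_mask.nii.gz")      # binary GTV mask
lung = sitk.ReadImage("lung_mask.nii.gz")        # binary lung mask

grown_3mm = sitk.BinaryDilate(tumor, (1, 1, 1))
shrunk_3mm = sitk.BinaryErode(tumor, (1, 1, 1))
grown_9mm = sitk.BinaryDilate(tumor, (3, 3, 3))

rim = sitk.And(grown_3mm, sitk.Not(shrunk_3mm))      # +/- 3 mm around border
exterior = sitk.And(grown_9mm, sitk.Not(grown_3mm))  # 3-9 mm outside the tumor

# Keep only voxels inside the lung mask, as described above.
rim = sitk.And(rim, lung)
exterior = sitk.And(exterior, lung)
```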
Radiomic feature extraction and selection
A comprehensive radiomics computational toolbox, PyRadiomics, was employed for feature extraction [39] (Fig 1.2). Designed to facilitate data reproducibility, the PyRadiomics platform provides open-source, standardized algorithms for radiomic feature computation. Its implementation includes built-in wavelet and Laplacian of Gaussian (LoG) filters for image processing, and it computed a total of 2175 radiomic features, including first-order statistics, shape, and texture classes. The texture classes included the gray level co-occurrence matrix (GLCM), gray level run length matrix (GLRLM), gray level size zone matrix (GLSZM), neighboring gray tone difference matrix (NGTDM), gray level dependence matrix (GLDM), and gray level distance zone matrix (GLDZM). Feature computation was performed at resampled voxel dimensions of 3 × 3 × 3 mm³ and an intensity bin width of 25 Hounsfield units. Radiomic feature extraction was performed for each of the three regions investigated in this study, i.e., tumor, tumor rim, and tumor exterior (data in S5-S10 Files). Feature selection followed a two-step procedure based on feature stability and relevance, as shown in Fig 1.3. The selection of stable features was performed on Dataset A for each tissue region using the external test-retest RIDER dataset [40], subject to an intraclass correlation coefficient criterion (ICC > 0.85) [41] (description in S1 File). Subsequently, the set of stable features was processed using the minimal-redundancy-maximal-relevance technique (mRMR) for dimension reduction, resulting in fifteen radiomic features for each region. Based on mutual information (MI), mRMR performs feature selection sequentially by determining the feature with the maximum MI with the target variable and the minimum MI with the already selected features (Bioconductor "survcomp" package [42]).
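A minimal extraction call with the settings reported above might look as follows; the file paths are placeholders, and the LoG sigma values are inferred from the feature names reported in the results:

```python
# Minimal PyRadiomics extraction sketch; file paths are placeholders.
from radiomics import featureextractor

settings = {
    'resampledPixelSpacing': [3, 3, 3],      # mm
    'binWidth': 25,                          # Hounsfield units
    'sigma': [0.5, 1.5, 2.5, 3.0],           # LoG filter widths (mm), inferred
}
extractor = featureextractor.RadiomicsFeatureExtractor(**settings)
extractor.enableAllFeatures()
extractor.enableAllImageTypes()              # original, wavelet, LoG, ...

features = extractor.execute("planning_ct.nii.gz", "rim_mask.nii.gz")
for name, value in features.items():
    print(name, value)
```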
The prognostic value of the peritumoral radiomic features was compared to that of the tumor features as well as conventional and clinical parameters. The conventional variables considered for this study were the maximal 3D tumor diameter and the volume of the gross tumor volume (GTV). The clinical model included gender, age, overall stage, T-stage, N-stage, performance status, and tumor size.
Univariable analysis
The prognostic value of the radiomic features was evaluated using the concordance index (CI) from the "survcomp" package [43]. Noether's test was applied to assess the statistical significance of the computed CI against random chance (CI = 0.5) [42]. To account for multiple testing, the false-discovery-rate procedure of Benjamini and Hochberg was applied to adjust the p-values [44]. Univariable analysis was performed using the fifteen features selected by the mRMR method for each of the tumor regions. All analyses were performed using R software (version 3.3.1) [45].
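The analysis was run in R; an approximate Python rendering of the same screen, using lifelines for the concordance index and statsmodels for the Benjamini-Hochberg adjustment (with synthetic placeholder data and placeholder p-values, since the Noether variance computation is not reproduced here), is sketched below:

```python
# Approximate Python version of the univariable screen (the study used R).
import numpy as np
import pandas as pd
from lifelines.utils import concordance_index
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
feature_df = pd.DataFrame(rng.normal(size=(100, 15)),
                          columns=['f%d' % i for i in range(15)])
time_to_dm = rng.exponential(20.0, size=100)     # placeholder months to DM
event = rng.integers(0, 2, size=100)             # 1 = DM observed, 0 = censored

# Negate the feature so that higher values mean higher risk (earlier DM).
ci = {c: concordance_index(time_to_dm, -feature_df[c], event)
      for c in feature_df.columns}

# Placeholder p-values; the study derived them from Noether's test
# against CI = 0.5 before the Benjamini-Hochberg adjustment.
p_values = rng.uniform(size=15)
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method='fdr_bh')
```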
Multivariable model construction and validation
Multivariable models were constructed using the Cox regression method, where each model was trained using Dataset A and its predictions were validated in Dataset B. The multivariable radiomics models were constructed with the mRMR-selected features, whose complementarity was explored for potential prediction enhancement. Based on the principle of parsimony, the fifteen features of each tumor region were added to the model incrementally according to their mRMR ranking, and 1000 cross-validations were performed on Dataset A for each intermediate model in order to assess its predictive power. The cross-validations were performed through random subset sampling with balanced event ratios of 70:30 using the caret package [46]. For each tumor region, the optimal feature set was the combination that rendered the highest mean CI value before performance began to decrease, and was termed the signature (Fig A in S2 File). To determine the improvement due to the radiomic features, clinical factors were incorporated into the individual radiomics models to generate combined clinical-radiomics models. The statistical significance of the difference in performance between a pair of multivariable models was assessed using the cindex.comp function ("survcomp" package). To assess clinical efficacy, we evaluated the statistical significance of our best performing combined clinical and peritumoral radiomic signature against the multivariable clinical model and the combined clinical and tumor radiomics signature.
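Again the modelling was done in R with the survcomp and caret packages; the following lifelines-based sketch, on synthetic placeholder data, shows the shape of the Cox fit and CI evaluation only, not the full 1000-fold cross-validation or forward selection:

```python
# Shape of the Cox fit and CI evaluation in lifelines, on synthetic data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(1)
df = pd.DataFrame(rng.normal(size=(100, 3)), columns=['f1', 'f2', 'f3'])
df['time'] = rng.exponential(20.0, size=100)
df['event'] = rng.integers(0, 2, size=100)

cph = CoxPHFitter()
cph.fit(df, duration_col='time', event_col='event')
risk = cph.predict_partial_hazard(df)            # higher = higher hazard
print(concordance_index(df['time'], -risk, df['event']))
```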
Assessing patient stratification by peritumoral radiomics model
Finally, to demonstrate the clinical efficacy of our proposed peritumoral radiomics model, a Kaplan-Meier analysis was performed. We investigated the performance of three models: the clinical model, the combined clinical and tumor radiomics signature, and the combined clinical and rim radiomics signature, where the Cox regression technique was used for combining separate models. For each model, the median prediction value from the training Dataset A was used to stratify the patients in Dataset B. A log-rank test was performed to determine the statistical significance of the difference in risk of developing DM between the two groups.
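A sketch of the median-split stratification and log-rank test in Python (lifelines), with synthetic placeholder risk scores and follow-up data, is given below; as described above, the threshold comes from the training cohort and is applied to the validation cohort:

```python
# Median-split Kaplan-Meier and log-rank test in lifelines, with
# synthetic placeholder risk scores and follow-up data.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2)
risk_train = rng.normal(size=100)                # model output, Dataset A
risk_valid = rng.normal(size=100)                # model output, Dataset B
time_valid = rng.exponential(20.0, size=100)
event_valid = rng.integers(0, 2, size=100)

high = risk_valid > np.median(risk_train)        # threshold from training set

km_high, km_low = KaplanMeierFitter(), KaplanMeierFitter()
km_high.fit(time_valid[high], event_valid[high], label='higher risk')
km_low.fit(time_valid[~high], event_valid[~high], label='lower risk')

res = logrank_test(time_valid[high], time_valid[~high],
                   event_observed_A=event_valid[high],
                   event_observed_B=event_valid[~high])
print(res.p_value)
```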
Clinical characteristics
A total of 200 NSCLC patients with adenocarcinoma were analyzed in this study. At the time of diagnosis, the median age was 64 years (range: 35-93 years). The median follow-up was 28.1 months (range: 1.8-142.1 months). The median time to DM was 13.5 months (range: 0.3-119.1 months), with 128 (64%) patients having developed DM versus 72 (36%) who did not. Patient characteristics and cancer progression information can be found in Table 1.

Fig 2 displays the results of the univariable analyses performed on Dataset A. In each tumor region, the full set of radiomic features was reduced to fifteen features based on relevance to DM events. In the tumor region (Fig 2A), ten features were significantly predictive of DM, with a CI range of 0.59-0.64 and p-value < 0.05; all were texture-based except for the 3D first-order median, a statistics-type feature (Table B in S3 File). The top tumor feature was GLCM Difference Entropy, which measures the variability in neighborhood intensity differences. In the rim region (Fig 2B), six significant features were found, with a CI range of 0.59-0.63 (Table D in S3 File). Four of these features were texture-based and two were statistics-based (first-order range and first-order minimum). The best performing feature in this region was GLRLM RunEntropy, a measure of the randomness in the distribution of run lengths and gray levels, with a higher value indicating more heterogeneity in the texture pattern. In the exterior region, the only significantly predictive feature was first-order Kurtosis, a descriptor of the intensity distribution, with a higher kurtosis value indicating a distribution concentrated more towards the tails (Table F in S3 File). More detailed descriptions of the selected features are provided (Tables A, C, E in S3 File). For comparison, the conventional features of tumor volume and maximum 2D axial and 3D diameters had CI values of 0.54, 0.55, and 0.54, respectively, and were not statistically significant. In addition, the maximum 3D diameter and the contour volume of the rim and exterior regions were also not predictive. The feature expression trends between DM and non-DM cases are shown in Figs A-C in S4 File.
Radiomic signature validation and inter-comparison
Multivariable models were generated based on the Cox proportional hazards method. The forward-selected radiomic signature from each tumor region was validated using Dataset B. The tumor radiomic signature consisted of LoG sigma 0.5 mm 3D GLCM DifferenceEntropy, which quantifies the intensity variability in neighboring voxels, and wavelet HLL GLRLM RunEntropy, which measures the variation in the distribution of run lengths and gray levels. The tumor rim radiomic signature consisted of LoG 1.5 mm 3D GLRLM RunEntropy and wavelet LHL NGTDM Complexity; these measure, respectively, the entropy of gray level runs and large intensity changes between neighboring voxels. The radiomic signature of the exterior region included LoG sigma 2.5 mm 3D first-order Kurtosis and LoG sigma 3.0 mm 3D NGTDM Strength, which, respectively, measure the fourth moment of the intensity distribution and the deviation from homogeneity. To account for the potential confounding effect of tumor size and volume, statistical significance was tested between our radiomic signatures and these factors. The prognostic performance of the tumor and rim radiomic signatures was determined to be significantly stronger than that of tumor dimension or volume.
The performance of the radiomic signatures was compared to a clinical model constructed using the Cox regression method. In Dataset B, this clinical model achieved a CI value of 0.53 (p-value < 0.44). Among the radiomics models, the multivariable rim signature achieved a CI value of 0.64 (p-value < 2.37×10−5) in Dataset B, compared to a CI value of 0.59 (p-value < 0.04) for the multivariable tumor signature and 0.55 (p-value < 0.15) for the multivariable exterior model. Incorporating the rim multivariable model into the clinical parameters yielded a CI value of 0.65 (p-value < 7.57×10−6). For comparison, this combined model was found to be significantly more predictive than the clinical model (p-value < 0.003). A composite radiomics model including the tumor, rim and exterior regions was also constructed (CI = 0.63), but was found to be less predictive than the tumor rim signature (p-value < 0.30). The prediction by the combined clinical and tumor radiomics signature was found to be not statistically different from the combined clinical and rim predictor (p-value < 0.13), nor was the composite clinical, tumor and rim predictor (p-value < 0.38). The results of the multivariable model validation and the inter-model comparison are displayed in Fig 3.

For the clinical model (Fig 4A), neither the log-rank test showed a significant p-value nor did the hazard ratio show significance in the validation cohort. Fig 4B shows the risk stratification using the combined clinical and tumor radiomics model, which had a significant log-rank p-value of 0.012. Fig 4C demonstrates the stratification that can be achieved using our proposed model combining the clinical and rim radiomic signatures. Using the patient risk scores derived from the training cohort, Kaplan-Meier analysis in the validation Dataset B showed a statistically significant difference (p-value < 1×10−3) in metastasis-free probability estimates. The lower risk group showed a hazard ratio of 0.44 compared to the higher risk group.

Fig 3. Comparison of prognostic performance across the different multivariable models in the validation cohort (n = 100). * indicates p-value < 0.05 and *** indicates p-value < 0.0001 versus random prediction (Noether test). From left to right, the compared multivariable models are: clinical, visible tumor, exterior, tumor rim, combined clinical and tumor, combined clinical and rim, and the combined radiomics model. Crossbars indicate comparisons between the CIs of two multivariable models, where * indicates a significant difference and ns not statistically significant. https://doi.org/10.1371/journal.pone.0206108.g003
Discussion
Advances in cancer biology research have provided insights into how the tumor proliferates through its interaction with the surrounding normal tissue [47-50]. Tumor invasion into the peripheral normal tissues at the cellular level may translate into tissue morphological changes, which, in turn, may inform us about the level of metastatic activity. Here, we leveraged radiomics analysis to quantitatively characterize the peritumoral imaging features on planning CT images and assessed their prognostic power for DM. To our knowledge, this is the first study that correlates radiomic features from normal-appearing peritumoral tissues with cancer metastasis in NSCLC.
The use of radiomic features to predict DM in NSCLC has been investigated in only a few studies [24,25]. Fried et al. [25] extracted 198 tumor features from averaged CT, 4DCT, and contrast-enhanced CT images and showed qualitatively, through Kaplan-Meier analysis, that the combination of radiomic features with clinical parameters may enable patient stratification for DM in a 91-patient stage III cohort. Coroller et al. [24] investigated intratumoral radiomic features for DM in a cohort of 182 NSCLC patients and demonstrated predictive power in a validation cohort (CI = 0.61). However, these studies were based on tumor-only radiomic features and did not account for the cancerous infiltration of the surrounding normal parenchymal tissues. This study explored the tumor rim and exterior radiomic features as potential DM prognosticators in a larger cohort of 200 patients with locally advanced NSCLC, using the quantitative metric of CI. Importantly, we discovered a higher prognostic value for the tumor rim radiomic signature than for the tumor-only one, where a comparison between the two showed statistical significance (p-value = 0.048).
Our data-driven approach led us to the discovery of a potentially important tumor rim radiomics signature for DM prediction, and also to the finding that the other tumor regions show relatively less prognostic power. In the exterior region, the lack of significance of the multivariable radiomics model was consistent with the overall weaker signal evident in the univariable analysis. As for the visible tumor region, its significant features were similar to those of the rim region in terms of feature type and prognostic power. However, while the selected features from the tumor showed similar univariable performance to the rim features, the rim radiomic signature showed superior multivariable performance over the tumor one. This may be due to the complementary effect achieved by combining the top performing feature, LoG 1.5 mm 3D GLRLM RunEntropy, with the moderately performing feature, wavelet LHL NGTDM Complexity. Interestingly, a larger value of either rim feature suggests a reduced risk of developing DM; i.e., tumor rims with more heterogeneity in run lengths and gray levels, as well as more rapid changes in gray level intensity, appear less prone to tumor metastatic activity. Moreover, despite the fact that the CI of the combined clinical and rim radiomics model prediction was not shown to be statistically different from that of the combined clinical and tumor radiomics one, the Kaplan-Meier analysis suggested that the former allows for better risk stratification in terms of the log-rank p-value (Fig 4B vs. 4C). Furthermore, our identification of the tumor rim as the crucial imaging biomarker for DM was consistent with pathology and tumor biology findings; it is well known that the periphery of the tumor harbors many activities of cancer invasion and metastasis, e.g. epithelial-mesenchymal transition [51], tumor-associated macrophages [50,52], tumor budding [53], and lymphovascular invasion [31,54]. While a correlation with biological processes at the tumor periphery was beyond the scope of the present study, ample evidence from the tumor biology literature supports our hypothesis that tissue features at the tumor-normal interface may indicate tumor aggressiveness towards DM. Thus, our findings are hypothesis-generating and may facilitate new discoveries in DM prognostication using information originating from the tumor rim.
Limitations of this study include the choice of a 6 mm shell of tissue around the tumor for our correlation analysis with DM. Given that there may be individual variation in the disease spread pattern and the location of extratumoral cancer colonies, we attempted to address this by taking a relatively wide 6 mm margin and expanding our region of interest radially outward from the tumor to mimic the disease spread pattern. Another limiting factor may be the variation in CT acquisition parameters, as patient CT simulation dates spanned from 2001 to 2014. We sought to mitigate this by removing cases with motion artifacts and by resampling the images to 3×3×3 mm³ voxels to reduce voxel noise. Lastly, our findings may be limited by our cohort size (n = 200) and by the fact that patient cases were collected at a single institution. We performed a temporal split of our data to generate an independent validation cohort, of similar patient and treatment characteristics, for model testing. Future investigations would involve testing our hypothesis by expanding our study to include patients of other histology types and evaluating the generalizability of our findings using multi-institutional image data. In spite of these limitations, our investigation demonstrated the differential predictiveness of imaging features between the tumor and its surrounding tissues for distant metastatic spread.
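As a rough illustration of how such peritumoral shells can be derived, the following sketch builds rim and exterior masks from a binary tumor mask by morphological dilation; with 3×3×3 mm³ voxels, a 6 mm shell corresponds to two dilation iterations. The function name and toy mask are hypothetical, and the authors' exact region definitions may differ.

```python
# Minimal sketch: tumor "rim" and "exterior" regions from a 3D binary
# tumor mask via morphological dilation, assuming isotropic 3 mm voxels
# so that a 6 mm shell equals two dilation iterations.
import numpy as np
from scipy.ndimage import binary_dilation

def peritumoral_shells(tumor_mask, shell_voxels=2):
    """Return (rim, exterior) boolean masks around a 3D tumor mask."""
    dilated = binary_dilation(tumor_mask, iterations=shell_voxels)
    rim = dilated & ~tumor_mask            # 6 mm shell hugging the tumor
    exterior = binary_dilation(dilated, iterations=shell_voxels) & ~dilated
    return rim, exterior

tumor = np.zeros((50, 50, 50), dtype=bool)
tumor[20:30, 20:30, 20:30] = True          # toy tumor volume
rim, exterior = peritumoral_shells(tumor)
print(rim.sum(), exterior.sum())
```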
Conclusion
In conclusion, we have demonstrated the strong prognostic value of peritumoral radiomic features for DM in patients with locally advanced NSCLC. The presented rim radiomic signature was independently validated and was shown to have better predictive power than the tumor radiomic signature. Such a pretreatment imaging predictor may benefit patients susceptible to developing DM in a precision medicine approach.
Supporting information S1 File. Fig A. Tumor regions generated for RIDER test/retest data. Subfigures a., c., and e. represent the contours of the tumor, tumor rim, and the tumor exterior, respectively, from the test dataset. Subfigures b., d., and f. represent the counterparts in the re-test dataset. | 2018-11-15T16:41:54.569Z | 2018-11-02T00:00:00.000 {
"year": 2018,
"sha1": "c850b6121ed4d921d8ff2a4f1a9a9af0fad06a28",
"oa_license": "CCBY",
"oa_url": "https://europepmc.org/articles/pmc6214508?pdf=render",
"oa_status": "GREEN",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "d9a7cb5bb9d4613a308ed5556d7403567d4e07b3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
219572531 | pes2o/s2orc | v3-fos-license | Compressive Behaviour of Steel Fiber Reinforced Concrete Exposed to Chemical Attack
In this modern age, civil engineering constructions have their own structural and durability requirements; every structure has its own intended purpose, and hence, to meet this purpose, modification of traditional concrete has become mandatory. It has been found that steel fibers added in a specific percentage to concrete improve the durability of the structure. The present study aims at comparing the durability, in terms of weight loss and reduction in compressive strength, of controlled concrete (with 0% steel fibers) and steel fiber reinforced concrete (with 3% steel fibers) when exposed to acid, sulphate and chloride attack. The grade of concrete designed for the study is M30. Hook-end steel fibers with aspect ratio 50 at 0% and 3% by weight of cement are used. Specimens of 150 × 150 × 150 mm cubes were cast to determine the weight loss and compressive strength of the concrete. The specimens were demoulded 24 hours after casting and kept under water for a period of 28 days, after which they were immersed in 5% concentrated solutions of H2SO4, MgSO4 and NaCl for periods of 30, 60, 90, 120, 150 and 180 days. The experimental studies revealed that the steel fiber reinforced concrete performed better than the controlled concrete after exposure to chemical attack.
Introduction
Concrete containing Portland cement, being highly alkaline, is not resistant to attack by strong acids. The most vulnerable part of the cement hydrate is Ca(OH)2, but C-S-H gel can also be attacked. Concrete can be attacked by liquids with a pH value below 6.5. The attack is severe only at a pH below 5.5 and very severe at a pH below 4.5. Acid rain, which consists mainly of sulfuric acid and nitric acid and has a pH value between 4 and 4.5, may cause surface weathering of exposed concrete. Acids such as H2SO4, nitric acid, hydrochloric acid and acetic acid are very aggressive, as their calcium salts are readily soluble and are removed from the zone of attack. Sulfuric acid is particularly damaging to concrete, as it combines an acid attack with a sulphate attack. HCl produces a mild attack, and the attack by Na2CO3 is negligible.
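The pH thresholds quoted above can be summarized as a simple lookup; the sketch below is merely a convenience encoding of those figures, not a formula from any design standard.

```python
# Minimal sketch encoding the pH thresholds for acid attack on Portland
# cement concrete quoted above; a convenience classifier only.
def acid_attack_severity(ph):
    if ph < 4.5:
        return "very severe"
    if ph < 5.5:
        return "severe"
    if ph < 6.5:
        return "attack possible"
    return "no significant acid attack"

for ph in (4.0, 5.0, 6.0, 7.0):
    print(ph, "->", acid_attack_severity(ph))
```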
Sea water generally contains 3.5% of salt by weight, its pH varies between 7.5 and 8.4, and it also contains some amount of CO2. It is commonly observed that deterioration of concrete in sea water is often not characterized by the expansion found in concrete exposed to sulphate action, but takes the form of erosion or loss of constituents from the parent mass without undue expansion. It is also found that the concrete will have lost part of its lime content due to leaching. Concrete exposed to sea water deteriorates through the attack of dissolved chemicals on the products of hydration, the crystallization of salts within the concrete under alternating conditions of wetting and drying, frost action, mechanical attrition and impact by waves, and corrosion of the embedded reinforcement.
Solid salts do not attack concrete but, when present in solution, they can react with hydrated cement paste; particularly common are the sulphates of sodium, potassium, magnesium and calcium, which occur in soil or in ground water. Sulphates in ground water are usually of natural origin but can also come from fertilizers or industrial effluents. The main cause of sulphate attack on concrete is the transport of sulphate ions in various concentrations in water together with cations, the most common of which are sodium, potassium, magnesium and calcium.
Literature Review
Nithin Dsouza et al. (2018) [12] studied the strength and durability aspects of steel fiber reinforced concrete. For their study, M25 grade concrete with hook-end steel fibers at dosages of 0.5%, 1% and 1.5% by weight of concrete and an aspect ratio of 60 was used. The specimens were tested at 7 and 28 days of age. It was observed that, compared to conventional concrete, steel fiber reinforced concrete was more resistant to acid attack and sulphate attack, leading to less loss of weight and compressive strength with the addition of steel fibers. Among the different percentages of steel fiber, the mix with 1.5% steel fibers by weight of concrete was the most resistant to acid attack and sulphate attack.
Vamsi Krishna and Srinivasarao (2016) [14] studied the durability of steel fiber reinforced concrete. The grade of concrete adopted was M30, with hook-end steel fibers randomly dispersed in the concrete at dosages of 0.5%, 1% and 1.5% by volume. The cube specimens were immersed in 3% sulfuric acid for curing periods of 28 and 56 days. Steel fibers were found to be effective for acid resistance. Under acid curing, the percentage loss of weight increased with increasing fiber dosage, and the compressive strength decreased with increasing fiber dosage compared to normal concrete. A fiber dosage of 0.5% showed the best results.
Basavaraj and Amaresh (2015) [3] studied the durability of steel fiber reinforced concrete. For their study, M40 grade concrete with steel fiber dosages of 0.75%, 0.1% and 1.25% by volume of concrete and an aspect ratio of 54 was used. After 28 days of curing, the cube specimens were removed from the curing tank and immersed in a 3% H2SO4 solution, with the pH maintained constant at 4 throughout the test, for periods of 5, 10, 15, 20, 25, 30, 35, 40 and 45 days. Steel fiber reinforced concrete was more resistant to acid attack than conventional concrete. The highest resistance to acid attack was observed in the case of 1.25% steel fiber reinforced concrete, as indicated by the lowest percentage loss in weight. The water absorption of conventional concrete was found to be higher than that of steel fiber reinforced concrete: conventional concrete absorbed 0.577% of water, whereas 1.25% SFRC absorbed 0.364%. The compressive strength of conventional concrete was lower than that of SFRC after 45 days of acid attack. The porosity of SFRC was considerably less than that of conventional concrete, with the lowest porosity observed for the 1.25% SFRC mix: conventional concrete had a porosity of 13.978%, whereas 1.25% SFRC had a porosity of 5.33%.
Velayutham and Cheah (2014) [15] studied the effect of steel fibers on the mechanical strength and durability of steel fiber reinforced high strength concrete subjected to normal and hygrothermal curing. Hygrothermal curing was performed by placing the specimens in a hot water bath at 70 °C for 24 hours after 7 days of normal curing, this being the optimum temperature under the curing condition. The steel fibers were added at volume fractions of 0.5%, 1%, 2% and 3%, and two curing regimes, normal curing and hygrothermal curing, were investigated. Steel fiber high strength concrete showed higher compressive and flexural strength with normal water curing than with hygrothermal curing, whereas normal strength concrete showed an increase in compressive and flexural strength with hygrothermal curing. It was found that steel fiber high strength concrete is not suited to hygrothermal curing, in contrast to normal strength concrete.
Srinivasarao et al. (2012) [13] studied the durability of steel fiber reinforced Metakaolin blended concrete. The grade of concrete adopted was M20. Crimped steel fibers with an aspect ratio of 60 at 0%, 0.5%, 1% and 1.5% by volume of concrete were used. Cubes of 150 × 150 × 150 mm were cast and cured for 28 days in water, then immersed in 5% concentrated H2SO4 and HCl solutions. The loss of compressive strength and the loss of weight were observed after 30, 60 and 90 days of immersion. The durability studies revealed that 10% replacement of cement with Metakaolin, along with 1.5% crimped steel fibers, is more durable than normal concrete after exposure to the HCl and H2SO4 solutions. The percentage loss of compressive strength and the loss of weight increase with the time of exposure to acid attack, and both are higher in the 5% H2SO4 solution than in the 5% HCl solution.
Experimental Programme
Casting and curing of specimens: Specimens of 150 × 150 × 150 mm cubes were cast to determine the compressive strength and weight loss of the concrete; in total, 108 cube specimens were cast for testing. The specimens were demoulded 24 hours after casting and kept under water for a period of 28 days, after which they were immersed in the solutions for periods of 30, 60, 90, 120, 150 and 180 days, respectively. Testing of specimens: After completion of the specified curing period, the specimens were removed from the curing tanks and tested for weight loss and compressive strength.
Weight loss: The weight of each cube before immersion in the solutions is measured using an electronic weighing machine and is considered the initial weight in kilograms. After the cubes are removed from the solutions at the specified duration, the weight of each cube is measured again and considered the final weight in kilograms. The difference between the two weights is the loss in weight.
Compressive strength: The cubes are tested in a 2000 kN capacity digital compression testing machine after removal from the chemical solutions. The testing is done as per IS: 516-1970. The load is applied until the specimen fails, and the maximum load at which the specimen failed is noted. The load divided by the cross-sectional area of the specimen gives its compressive strength. Each sample, comprising 3 cubes, was tested, and the average value is reported.
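Both measurements reduce to simple arithmetic; the sketch below illustrates the computations with made-up numbers (a 150 mm cube face gives a 22,500 mm² cross-section, so a load in kN converts directly to MPa).

```python
# Minimal sketch of the two measurements described above: percentage
# weight loss after chemical immersion, and cube compressive strength
# (failure load / cross-sectional area). The numbers are illustrative.
def weight_loss_pct(initial_kg, final_kg):
    return (initial_kg - final_kg) / initial_kg * 100.0

def compressive_strength_mpa(failure_load_kn, side_mm=150.0):
    area_mm2 = side_mm * side_mm                 # 150 x 150 mm cube face
    return failure_load_kn * 1000.0 / area_mm2   # kN -> N; N/mm^2 = MPa

print(f"{weight_loss_pct(8.40, 8.15):.2f}% weight loss")
print(f"{compressive_strength_mpa(850.0):.1f} MPa")
```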
Test Results and Discussions
Loss of weight and loss of compressive strength of specimens after immersion in 5% H2SO4, MgSO4 and NaCl solutions: Table 2 gives the percentage loss of weight of steel fiber reinforced concrete and controlled concrete after immersion in the H2SO4, MgSO4 and NaCl solutions. The compressive strength of steel fiber reinforced concrete (with 3% fibers) is 15.3% higher than that of controlled concrete (with 0% fibers) when the specimens are immersed in the H2SO4 solution, 3.65% higher when immersed in the MgSO4 solution, and 13.86% higher when immersed in the NaCl solution.
Conclusions
1. The percentage weight loss in the 5% H2SO4 solution is higher than in the 5% MgSO4 and 5% NaCl solutions for both controlled concrete and steel fiber reinforced concrete.
2. The percentage weight loss increases with the time of exposure to acid, sulphate and chloride attack in both controlled concrete and steel fiber reinforced concrete.
3. The percentage weight loss is lower in steel fiber reinforced concrete than in controlled concrete in all three solutions of H2SO4, MgSO4 and NaCl.
4. The loss of compressive strength in the 5% H2SO4 solution is higher than in the 5% MgSO4 and 5% NaCl solutions for both controlled concrete and steel fiber reinforced concrete.
5. The loss of compressive strength increases with the time of exposure to acid, sulphate and chloride attack in both controlled concrete and steel fiber reinforced concrete.
6. The compressive strength values of steel fiber reinforced concrete were higher than those of controlled concrete in all three solutions of H2SO4, MgSO4 and NaCl.
7. The durability studies revealed that steel fiber reinforced concrete with 3% steel fibers is more durable than controlled concrete with 0% steel fibers after exposure to the H2SO4, MgSO4 and NaCl solutions. | 2020-06-11T01:32:54.818Z | 2020-06-10T00:00:00.000 {
"year": 2020,
"sha1": "2f863100bcb00c5eef15fccd9295a0ed9e6a8332",
"oa_license": "CCBY",
"oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.ajcbm.20200401.15.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2f863100bcb00c5eef15fccd9295a0ed9e6a8332",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
265252189 | pes2o/s2orc | v3-fos-license | New healthy food product crafted from sweet potato flour and pasta: Brownies
Sweet potato (SP) provides good nutrition to maintain public health status. It has a high energy content, protein, fat, and considerable dietary fiber. Four SP varieties, i.e., Beta 1, Beta 2, Antin 2, and Antin 3, contain bioactive compounds such as anthocyanin and beta-carotene. In their fresh form, Antin 2 and Antin 3 contained more calories than Beta 1. Meanwhile, in flour form, Beta 1 had more beta-carotene (29.41±0.06 mg/100g) than Beta 2 (27.26±0.13 mg/100g), and Antin 3 contained more anthocyanin (47.66±0.05 mg/100g) than Antin 2 (29.04±0.01 mg/100g). The aim of this study was to test brownies made using SP pasta (SPP) and SP flour (SPF) and to evaluate the physicochemical and sensory properties of the resulting brownies. Results show that there were various differences in moisture content (2.35 – 3.99%), ash (2.67 – 3.66%), fat (11.50 – 24.62%), protein (2.46 – 3.65%), carbohydrate (65.07 – 79.52%), and energy (436.98 – 494.86 kcal), but the differences were not systematic. The sensory evaluation demonstrated that the product was well accepted by the panelists in terms of color, flavor, taste, texture, and overall acceptability, indicating that SP has potential as a raw material for food products and for further utilization to support sustainable agriculture.
Introduction
Besides rice, sweet potato (SP) is a main food crop in tropical areas [1], including Indonesia. It provides good nutrition and can be utilized as a carbohydrate source due to its high carbohydrate content. SP grows well in various types of soil. In addition, it needs less fertilizer than other food plants and is quite productive, with a short planting period of about 3 to 6 months [2]. There are several cultivars of SP, i.e., white SP, yellow SP, red (orange) SP, and purple SP [3]. These SP cultivars are rich in carbohydrate, polyphenols (in the form of anthocyanin in the purple-fleshed sweet potato) and carotenoids (in the orange-fleshed sweet potato), are low in fat, are good sources of dietary antioxidants, act as anti-inflammatory and anti-cancer agents [1], and can prevent vitamin A malnutrition [4]. They also contain a range of minerals beneficial for human health, including iron, phosphorus, calcium, magnesium, potassium, sodium, zinc, and manganese [3,5]. Potassium is the mineral with the greatest concentration in SP (averaging 396 mg/100 g), with phosphorus, calcium, magnesium, iron, and copper also present in significant amounts [5].
Purple-fleshed SP contains a large amount of anthocyanin, around 110.51 mg/100 g, varying with the intensity of the purple color [3]. Anthocyanin is also a potential coloring agent (dye) [2]. Many studies across the globe have investigated anthocyanin in purple-fleshed SP, reporting its benefits to human health, particularly in relation to its antioxidant capacity [1], [5][6][7][8][9].
Another nutrient-rich SP is the orange-fleshed SP. It is rich in beta-carotene and can act as a source of vitamins and dietary fiber [4]. Moreover, it is drought resistant and a cheap source of vitamin A [10]. Generally speaking, colored-flesh (orange, red) SP contains phytochemicals, pro-vitamins, and antioxidants beneficial for human health. Cooked sweet potato has a moderate glycemic index (around 63-66) [11], which helps maintain a reasonable blood glucose level. Even so, Astawan and Widowati [12] stated that the processing method affects the GI value of a food and that frying was the best method compared to roasting and boiling (the GI values of fried, boiled, and roasted SP were 47, 62, and 80, respectively).
An effort is needed to increase SP utilization. SP consumption can be promoted by creating more food products made of SP. Previous studies have developed various SP-based products such as noodles [13], bread [14], and spaghetti [15]. For this research, brownies were chosen as the food model because little research has been conducted on this modern food. According to Nakamura et al. [8], dehydrated powder and puree are the best forms of colored SP (such as orange-fleshed and purple-fleshed SP) for further processing into food products that need nutritional ingredients. Thus, we intended to first process SP into pasta and flour in order to obtain a flexible raw material that can then be mixed with other ingredients.
The aim of this research was to study the capability of SP, either in flour or pasta form, to produce a newly crafted food product, i.e., brownies. We chose brownies as a food model to test their nutrient composition as well as their acceptance by panelists through a sensory evaluation. Brownies are cakes with the appearance of solid chocolate, made by mixing wheat flour, eggs, fat, granulated sugar and chocolate followed by baking [16], with other ingredients added to improve nutrients [17]. The hypothesis was that panelists would accept brownies made of SP without the addition of wheat flour (which is commonly used to produce baked products that need maximum development) and that the products would have a good nutrient profile, serving not only as a carbohydrate source but also improving health thanks to bioactive compounds such as beta-carotene and anthocyanins (from the colored-flesh SP).
Materials and Methods
The materials used were SP from the cultivars Beta 1, Beta 2, Antin 2 and Antin 3, provided by the Indonesian Legumes and Tubers Research Institute (ILETRI). The equipment used included processing equipment and analysis equipment. The methods comprised the production of sweet potato pasta (SPP), sweet potato flour (SPF), and a sweet potato food product as the food model (brownies), followed by nutrient analyses of the resulting brownies.
Production of sweet potato pasta (SPP)
Sweet potato pasta was made using a method adopted from Mulyawanti et al. [18]. The sweet potato was first washed, drained, steamed for 20 min, peeled, blended, and stored in the freezer.
Production of sweet potato flour (SPF)
The process of SPF production followed Richana and Widaningrum [19]. Basically, SPF production involves washing and peeling the fresh SP, slicing, drying the chips, and grinding to produce flour, which is then sieved through 80-100 mesh sieves.
Food model (brownies) preparation
Brownies from SPP were made as follows: 100 g of margarine (as a butter replacement) and 100 g of dark chocolate were cooked until melted, then set aside to cool. Three chicken eggs and 75 g of refined sugar were mixed until well developed, then 25 g of sweet potato pasta, 25 g of mocaf (modified cassava flour), and 12.5 g of chocolate powder were added. After that, the previously melted margarine and chocolate were added to the dough and mixed well. The dough was then poured onto the tray, distributed evenly or sliced into a bar-like form, and baked in an electric oven at 170 °C for 50 min until done, and the sweet potato brownies were ready to serve. A similar procedure was applied to the preparation of brownies from sweet potato flour (SPF), with the SPP replaced by SPF at a different proportion, i.e., 35 g of flour from each SP and 15 g of modified cassava flour (mocaf) for every production batch.
Analyses
Physico-chemical characteristics (moisture, ash, fat, protein) were analyzed in the flour and pasta of SP. Beta-carotene content was analyzed for the Beta 1 and Beta 2 cultivars, while anthocyanin content was analyzed for Antin 2 and Antin 3. Moisture, ash, protein, fat, carbohydrate, and energy (proximate analyses) were analyzed for all cultivars using the methods of AOAC (2006) [20], and a sensory evaluation was conducted on the SP brownies made of SP flour and pasta as a food model, using the methods of Meilgaard et al. [21].
Nutrient composition of SPP
As ingredients for making the food model brownies, the nutrient compositions of the SPP are given in Table 1. From Table 1 it can be seen that Beta 1 SPP has the highest moisture content (78.32%), somewhat different from Beta 2 SPP (73.60%) and significantly different from Antin 2 SPP and Antin 3 SPP (61.91% and 61.97%, respectively). Factors affecting the nutrient composition of SP vary depending on the cultivar, growing conditions, maturity, and storage [5]. Overall, the distinct moisture contents across the samples might affect the physicochemical properties of the resulting food model, i.e., the brownies.
Table 1. Nutrient composition of SPP from four cultivars (Beta 1, Beta 2, Antin 2 and Antin 3)
Numbers in the same column followed by the same letter show no significant difference at the 5% level on Duncan's test.
In terms of ash content, even though there are small differences, the values are relatively comparable (0.85%-0.97%); meanwhile, for fat content, Beta 1 SPP has the highest value (0.72%), followed by Antin 3 SPP, which is not different from Antin 2 SPP (0.40% and 0.38%, respectively), both from the purple-fleshed SP, with the lowest belonging to Beta 2 SPP (0.15%), which is significantly different from all the others.
For protein, Antin 2 SPP has the highest content (1.74%) and is significantly different from the nearest value, i.e., Antin 3 SPP (1.59%), both from the purple-fleshed sweet potato. Beta 1 SPP ranks third (1.44%), significantly different (P<0.05) from Beta 2 SPP, which ranks last (0.15%); both come from the orange-fleshed sweet potato. For carbohydrate content, Antin 3 SPP and Antin 2 SPP have the highest values (35.20% and 35.06%), whilst Beta 2 SPP has only 24% carbohydrate, with the lowest belonging to Beta 1 SPP, which has 18.58% carbohydrate and is significantly different from all the others. For energy, due to its linearity with the carbohydrate, fat, and protein contents (the energy value is obtained by multiplying carbohydrate and protein by 4 kcal/g and fat by 9 kcal/g), the order is the same as for carbohydrate level (Table 1).
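As a check on the energy calculation described above, the following sketch applies the stated Atwater factors (4 kcal/g for carbohydrate and protein, 9 kcal/g for fat) to an illustrative composition; the sample values are hypothetical.

```python
# Minimal sketch of the Atwater-factor energy calculation stated above,
# applied per 100 g of product. The sample composition is illustrative.
def energy_kcal_per_100g(carb_pct, fat_pct, protein_pct):
    return 4.0 * carb_pct + 9.0 * fat_pct + 4.0 * protein_pct

# e.g. a brownie with 72% carbohydrate, 17% fat, 3.3% protein
print(f"{energy_kcal_per_100g(72.0, 17.0, 3.3):.1f} kcal/100 g")
```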
For bioactive compounds such as beta-carotene and anthocyanins, the values are listed in Table 2. Beta 1 SPP has 1.93 mg/100 g beta-carotene, while Beta 2 SPP has 1.59 mg/100 g. For anthocyanin, Antin 3 SPP has the highest content, 63.58 mg/100 g, far above that of Antin 2 SPP, which has only 13.72 mg/100 g.
Nutrient composition of SP brownies
The brownies made of Beta 2 and Antin 3 SP (after mixing with all the other ingredients) can be seen in Figures 1 to 4, as representative of all the doughs made in this research. Brownies are a popular cake, usually consumed as dessert, and their texture can be either fudgy or cakey, depending on individual preferences [22]. The products have a hard texture outside, but are spongy and compact inside. Apparently, there is no difference in color between brownies made of Beta 1 and Beta 2, or between Antin 2 and Antin 3. Nevertheless, regarding texture, brownies made of pasta are moister than those made of flour, which are solid and compact inside. The images shown here are only representatives of all the brownies produced. Regarding the nutritional composition of the sweet potato brownies (Table 5), the moisture content of the brownies ranges from 2.35% to 3.99%, significantly different from each other (P<0.05) but not different (P>0.05) from their initial ingredients (pasta and flour), while the ash content ranges from 2.64% to 3.66%. Yet, these differences are not systematic.
For fat content, the brownies made of Antin 2 SPF have the highest value (24.62%), which is not different from the brownies made of Antin 2 SPP (23.69%) and is quite similar to the brownies made of Beta 2 SPF (22.59%) and of Beta 1 SPP (20.72%). These are followed by the fat content of the brownies made of Antin 3 SPP, which is somewhat lower (17.36%), and the lowest values belong to the brownies made of Beta 2 SPP (13.01%), which is not different from the fat content of the brownies made of Beta 1 SPF (11.50%). A possible reason for this difference is the initial moisture content of SPP and SPF, which is higher in SPP than in SPF, resulting in lower fat content, except for Beta 1 SPP. The overall increase in fat content in the resulting brownies is most likely influenced by the fat contributed by the other ingredients in the dough, such as margarine (11% of fat) and dark chocolate (8% of fat).
Surprisingly, within the Beta 1 variety, the fat content of the SPF brownies is much lower (11.50%) than that of the SPP brownies (20.72%). A possible reason might be incomplete stirring of the Beta 1 SPP brownie dough during preparation, which may have produced a thick, fat-rich area on one side and a thin, fat-poor area on the other; that is, the dough may not have been quite homogeneous. Nevertheless, this needs further investigation.
Truong et al. [5] reported that orange-fleshed sweet potato has a low dry matter content (18-25%), which means that it has a high moisture content, as is also reflected in the present results (Beta 1 and Beta 2 SPP, Table 1, and Beta 1 and Beta 2 SPF, Table 3, in comparison with the Antin varieties). Therefore, brownies made of Beta 1 and Beta 2 SPP and SPF have a moister texture after cooking due to their high moisture content. For protein content, brownies made of Antin 2 and Antin 3, both from SPP and SPF, have higher protein values (3.25-3.60%) than those made of Beta 1 and Beta 2 (2.46-2.79%), except for Beta 2 SPF and Beta 1 SPF, whose resulting brownies have protein contents of 3.20% and 3.08%, respectively. Yet, the results are not systematic. The higher moisture content of the Beta varieties' SPP as a brownie ingredient might have affected the protein content of the resulting brownies by replacing protein with water. A second plausible reason is that these brownies are made of SPP and SPF with the addition of modified cassava flour (mocaf), which has low fat (0.38%) and low protein (3.43%) contents, making the resulting brownies have similar protein values. In any case, low fat and protein contents are typical for most tubers and roots.
For carbohydrate (by difference) content in the brownies, the higher values come from the Beta varieties (67.93%-79.52%), with one exception from the Antin varieties (72.85% for Antin 3 SPP brownies), although this is not very different from the Antin 3 SPF brownies (67.81%) (Table 5). The energy values show linearity with the carbohydrate content, ranging from 436.98 kcal for brownies made of Beta 1 SPF to 494.86 kcal for brownies made of Antin 2 SPP, all showing their capacity as a carbohydrate source in a modern cake like brownies.
Brownies as a newly crafted food model from orange-fleshed and purple-fleshed sweet potato have not been much investigated, unlike noodles. For noodles, the addition of purple-fleshed sweet potato flour (PFSPP) indeed increases the anthocyanin content of the resulting noodle; nevertheless, the more PFSPP added, the more the tensile strength and elasticity decreased [23]. Hence, it is important to get the right proportions with the other ingredients when developing any food product.
The SPP and SPF brownie formulas in this study do not use wheat flour, so the product can be categorized as gluten-free. Currently, many people are adopting a gluten-free consumption pattern, especially for children with autism and for individuals on a gluten-free diet. A similar gluten-free product formula has been developed by Darniadi et al. [23] in the processing of gluten-free biscuits from rice grit flour.
Sensory Evaluation of Brownies made of Beta and Antin SPP and SPF
Sensory evaluation was carried out on the SPP and SPF brownies to confirm consumer acceptance in terms of color, flavor, texture, taste, and overall acceptance. The results of the sensory evaluation are presented in Table 6. The panelists, representing consumers in general, determined their acceptance rate by giving marks for every sensory attribute of each product, i.e., brownies from sweet potato, both from pasta and flour. From Table 6 it can be observed that all of the parameters (color, flavor, texture, taste, and overall acceptance) are well rated; in other words, the panelists liked most of the products on each parameter, although some attributes scored lower, such as those of the brownies made of Beta 1 SPP, which were marked 3.43 for texture and 3.33 for taste, meaning 'neutral to dislike'. The brownies from Antin 2 SPP were marked 3.5 for texture and 3.5 for taste, meaning 'neutral to like', just the same as the brownies made of Antin 3 SPF, which were marked 3.40 for texture and 3.37 for taste. The other parameters of all the samples were marked above 3.51, close to 4, which means 'liked'. For overall acceptance, the brownies from Beta 1 SPF were marked 3.87, the highest among the samples. Nonetheless, there were no significant differences among the samples, indicating that the panelists could not really distinguish whether the brownies were made of pasta or flour, which was expected.
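A summary of such 5-point hedonic ratings can be produced in a few lines; the sketch below assumes a simple long-format table of panel scores, which is an illustrative layout rather than the authors' actual data file.

```python
# Minimal sketch: summarising 5-point hedonic ratings per product and
# attribute with pandas (mean scores as reported in Table 6). The data
# frame layout and the sample marks are illustrative.
import pandas as pd

ratings = pd.DataFrame({
    "product":   ["Beta1_SPF", "Beta1_SPF", "Antin2_SPP", "Antin2_SPP"],
    "attribute": ["texture", "taste", "texture", "taste"],
    "score":     [3.9, 3.8, 3.5, 3.5],      # illustrative panel marks
})
summary = ratings.groupby(["product", "attribute"])["score"].agg(["mean", "sem"])
print(summary)
```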
Results from Selvakumaran et al. [22] show that substitution of wheat flour with orange-fleshed sweet potato (OSP) puree increased moisture and fat. The higher amounts of OSP puree (50% and 75%) received higher scores for color, texture, flavor, and overall acceptance, further revealing the improvement of the sensory properties of the resulting brownies. In the present study, where wheat flour is replaced with modified cassava flour (mocaf), we hope this can also contribute to the utilization of other local carbohydrate sources such as cassava, sorghum, arrowroot, etc., which are abundantly available in Asia in general, and in Indonesia in particular.
Conclusion
Sweet potato is abundantly available in Indonesia and has no harvesting season. Its high nutritional content makes sweet potato a very attractive option for processing into intermediate products such as pasta and flour, which are flexible to store and transport. The present study has shown that a food model, namely brownies, was successfully produced from specialty Indonesian sweet potato flour and pasta (orange-fleshed Beta 1 and Beta 2 and purple-fleshed Antin 2 and Antin 3) with the addition of modified cassava flour but without the addition of wheat flour, so that the product can be called a 'gluten-free food', and it is sensorially acceptable. This further supports the potential of local carbohydrate sources to maintain sustainable agriculture in developing countries.
Table 2. Beta-carotene and anthocyanin content of SPP from four cultivars (Beta 1, Beta 2, Antin 2 and Antin 3). Remarks: Numbers in the same column followed by the same letter show no significant difference at the 5% level on Duncan's test; NA = Not Analyzed.
Table 5. Nutritional composition of brownies made of sweet potato (SP) pasta and flour. Remarks: Numbers in the same column followed by the same letter show no significant difference at the 5% level on Duncan's test.
Table 6. Sensory evaluation of SP brownies | 2023-11-17T16:36:23.908Z | 2023-01-01T00:00:00.000 {
"year": 2023,
"sha1": "4e028a3691f6e994ab7af563dfdc852b7bed1e89",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2023/81/e3sconf_iconard2023_01010.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "94bf097e039b1eeab83139dbba97f41b43d265ac",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": []
} |
263806400 | pes2o/s2orc | v3-fos-license | Modulation of Motor Awareness: A Transcranial Magnetic Stimulation Study in the Healthy Brain
Previous studies on the mechanisms underlying willed actions have reported that the premotor cortex may be involved in the construction of motor awareness; however, its exact role is still under investigation. Here, we investigated the role of the dorsal premotor cortex (PMd) in motor awareness by applying inhibitory rTMS to PMd before a specific motor awareness task (under three conditions: without stimulation, after rTMS, and after Sham stimulation). During the task, subjects had to trace straight lines to a given target, receiving visual feedback of the line trajectories on a computer screen. Crucially, in most trials the trajectories on the screen were deviated, and to produce straight lines, subjects had to correct their movements in the opposite direction. After each trial, participants were asked to judge whether the line seen on the computer screen corresponded to the line actually drawn. The results show that participants in the No Stimulation condition did not recognize the perturbation until 14 degrees of deviation. Importantly, active, but not Sham, rTMS significantly modulated motor awareness, decreasing the amplitude of the angle at which participants became aware of the trajectory correction. These results suggest that PMd plays a crucial role in action self-monitoring.
Introduction
Although many of the processes underlying motor programming and execution are not accessible to consciousness, we are aware that we are moving (motor awareness) and that we desire to act (motor intention). Blakemore et al. [1] proposed that, for intentional movements, motor commands are selected and sent to the muscles to perform the action, while at the same time a prediction is made about the sensory consequences of the movement. This prediction (called the forward model) is based on an efference copy of the intended motor act and is compared, by a comparator system, to the actual feedback of the executed movement. According to this proposal, the forward model to be compared with the sensory feedback is the neural signal on which motor awareness is built [2,3]. Therefore, motor awareness seems to precede, rather than follow, the actual execution of an intentional action, being, within certain limits, dissociated from it.
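The comparator logic can be sketched in a few lines of code; in the toy version below, a mismatch reaches awareness only when the discrepancy between the predicted and actual feedback exceeds a threshold. The vectors and the threshold value are illustrative assumptions, not parameters from the cited model.

```python
# Toy sketch of the comparator described above: a forward model predicts
# the sensory consequences of the efference copy, and awareness of a
# discrepancy arises only when prediction and feedback diverge beyond
# a threshold. All quantities are illustrative.
import numpy as np

def comparator(predicted, feedback, threshold=0.5):
    """Return True if the prediction-feedback mismatch reaches awareness."""
    error = np.linalg.norm(np.asarray(predicted) - np.asarray(feedback))
    return error > threshold

predicted = np.array([0.0, 1.0])     # intended straight trajectory
feedback  = np.array([0.15, 1.0])    # slightly deviated visual feedback
print(comparator(predicted, feedback))  # False: the deviation goes unnoticed
```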
In a seminal experiment, Libet demonstrated that subjects become aware of a hand movement before the actual onset of muscle contraction [4], whereas Haggard and Magno [5] found that interfering, through single-pulse TMS, with the activity of the left primary motor cortex (M1) resulted in a significant delay of right-hand movements but had little effect on the time at which the subjects perceived the movement (assessed by asking participants to indicate the position of a rotating clock hand). Conversely, single-pulse TMS of the anterior frontal areas (with the coil placed at the standard FCz site) resulted in smaller delays in actual Reaction Times (RTs) but larger delays in the assessment of the timing of the manual response, an ability related to motor awareness. This shows that motor awareness does not co-vary with the experimentally induced delay in motor response. In other words, once the intention to perform an action is formed, the motor response may be delayed, but the motor awareness, already triggered by the intentional stance, is not affected.
Motor awareness can also be reported in the absence of any intentional movement. Indeed, brain-damaged patients with anosognosia for hemiplegia (i.e., patients who deny their paralysis) subjectively report the feeling of having performed an action with the paralysed limb [6,7]. This phenomenal experience has a measurable counterpart in the fact that the pretended action with the paralysed hand actually affects the spatiotemporal parameters of the movements of the healthy hand [8][9][10][11][12][13]. Interestingly, on the basis of lesional data in anosognosic patients, Berti and colleagues [6,14] proposed that the right premotor cortex (PM, especially Area 6) is part of a neural circuit for motor monitoring and thus contributes to the operation of one of the comparator systems described by Blakemore et al. [1] (see also Haggard, 2005 [15]). In particular, previous studies have suggested the involvement of the premotor [6,16,17] and insular cortices [6,16,18] in conscious motor monitoring; of the basal ganglia and insulo-frontal, temporal and parietal structures in explicit and implicit motor awareness [19]; and of mesial-frontal [5, 20,21] and posterior-parietal areas [22] in the intentional component of the motor act.
More recently, non-invasive brain stimulation evidence [23] has shown that cathodal transcranial Direct Current Stimulation (tDCS) of the PM, but not of the Posterior Parietal Cortex (PPC), affects subjects' self-confidence about their contralateral (left) hand motor performance, consistent with the idea of a role for the right PM in the conscious monitoring of voluntary motor acts. On the contrary, tDCS over PM does not interfere with the monitoring of involuntary muscle contractions induced by TMS over the hand motor area [24].
Taken together, previous findings suggest a role of the right PM in the conscious control of voluntary action, but they do not provide direct evidence of its causal involvement. In the present study, we used a task in which participants were asked to draw a straight line either with the left hand (i.e., the hand contralateral to the stimulated side) or with the right (ipsilateral) hand. During the execution of the requested movements, the visual feedback of the subjects' actual motor performance was experimentally deviated from the real trajectory in most of the trials, creating a mismatch between the movement they executed and the movement they viewed on a computer screen. This mismatch led the subjects to correct their trajectory in the opposite direction in order to draw a straight line. Previous studies (e.g., [25][26][27][28]) showed that, within certain limits of deviation, subjects did not become aware of the modified trajectory they performed. In other words, in the manipulated trials, up to certain degrees of deviation, subjects still believed they were tracing a straight line, as required by the task. Therefore, in this experimental setting, subjects performed (erroneous) movements they were not aware of. In Fourneret and Jeannerod's (1998) interpretation, this finding demonstrated that the subjects became aware of the movement they intended to perform (a straight movement) rather than the movement they actually performed (a deviated movement). This is, again, consistent with the idea that motor awareness is mainly constructed on a predictive code and not exclusively on the actual sensory feedback.
In the present study, we aimed at exploring the role of the right PM in motor monitoring by means of repetitive Transcranial Magnetic Stimulation (rTMS), which allows drawing causal links between the stimulated brain regions and the observed behaviours [29,30]. In our experiment, in order to interfere with motor awareness, we applied 1 Hz rTMS over the right PMd cortex before the execution of the task [31]. The task was also performed after Sham rTMS stimulation of the same area.
Our first prediction was that, if the PM plays a pivotal role in motor awareness, then interfering with its activity using rTMS while the subjects perform their "deviated" trajectories would further affect their action monitoring.
However, it is worth noting that while inhibitory TMS usually worsens subjects' responses by decreasing the activity of the targeted areas [32][33][34], a few studies have unexpectedly shown that inhibitory rTMS of the right PMd can enhance subjects' performance [35][36][37]. Consequently, if rTMS has an inhibitory effect on motor monitoring, we should expect an increase in the angle at which subjects become aware of the deviated trajectory (i.e., a decrease in motor awareness), whereas if rTMS has an enhancing effect on motor monitoring, we should expect a decrease in the angle at which subjects become aware of the deviated trajectory (i.e., an increase in motor awareness).
As for the side of the body on which right rTMS may have an effect, if the right PMd controls only the contralateral hand, we should expect a modulation of motor monitoring only for left-hand actions. However, if the right PM controls awareness of the movements of both hands, we should expect a modulation of motor monitoring for both hands. Finally, we expected to observe a modulation of motor awareness in the active, but not in the Sham, rTMS condition.
Crucially, and independently from the outcome of the stimulation, a modulation by rTMS of subjects' capability to detect action deviations would be another fundamental step towards demonstrating the key role of the PMd in the construction of motor awareness.
Participants
Fourteen healthy right-handed volunteers (11 women, 3 men; age: 21-57 years, mean age 25.8; Standard Error, SE = 2.5) participated in the study. We based the choice of our sample size on previous TMS studies of motor cognition [5, 31,38] and/or cognitive studies using 1 Hz rTMS [32,33]. Participants were selected according to the TMS exclusion criteria [39], and they provided their informed consent to participate in the study, which was previously approved by the local Ethics Committee of the University of Turin (protocol number: 24001). The study was conducted in accordance with the Declaration of Helsinki for human participants. The Edinburgh Handedness Inventory-revised [40] was administered to ensure that all subjects were actually right-handed.
Set Up
The experimental set-up consisted of a 30 × 40 cm graphic tablet placed in a wooden box on a desk and connected to the computer. The computer was also connected to an LCD screen placed on top of the wooden box, 30 cm above the graphic tablet. A hole in the wooden box allowed the participant to insert one hand inside the box, thus excluding it from view (see Figure 1). The subject was seated on a comfortable fixed chair in front of the desk (both the graphic tablet and the screen were aligned with the subject's trunk midline) and could only see the screen below the chin. The chair was fixed so that the distance between the subject and the desk was kept equal across conditions (i.e., 60 cm). The hand inside the box held a pen stylus, while the other hand simply rested on the participant's leg.
Procedures
The task was to trace a straight line from the starting position to the target position. During task execution, while tracing the vertical line on the tablet, the subjects only saw the line appearing on the computer screen. In the Artificially Deviated (AD) trials, the output of the graphic tablet was processed by the computer using a simple algorithm that added a constant linear directional bias to the right or left of varying amplitude, so that the trajectory drawn by the subject appeared displaced to the left or right by an angle defined by the bias (i.e., Left Deviation, LD, from −1° to −25°, and Right Deviation, RD, from +1° to +25°). The trajectory on the screen was the only visual feedback of the actual movement available to the subject, and after the target was reached, subjects were asked to indicate with "yes" or "no" whether the trajectory they saw on the screen corresponded to the movement actually performed (see Figure 1). Subjects performed 51 trials in each experimental condition (i.e., No Stimulation, rTMS and Sham). There was also one trial with a deviation of 0°, indicating perfect coherence between the visual feedback and the actual movement. Participants performed the task in three different conditions: No Stimulation (NS); after real 1 Hz rTMS (rTMS) delivered to the right PMd (900 pulses, at 90% of resting Motor Threshold, rMT); and after Sham stimulation (SHAM), with the coil held perpendicular to the right PMd. For each condition, participants performed the task with the right (RH) and the left (LH) hand. NS, active and Sham stimulation measurements were collected on 3 different days with an interval of about one month between sessions in order to prevent potential learning. The order of the 3 conditions, as well as the order of the hands, was randomised between participants. Within participants, the order of the hand used for the task was maintained across sessions.
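A plausible implementation of such a directional bias is a rotation of the pen coordinates about the starting point before display; the sketch below illustrates the idea, with the sign convention for rightward versus leftward deviation being an assumption rather than the authors' specification.

```python
# Toy sketch of the on-screen deviation described above: the pen
# trajectory is rotated about the starting point by a fixed bias angle
# before display, so a straight drawn line appears tilted. The sign
# convention (which direction a positive angle tilts) is an assumption.
import numpy as np

def deviate(points, bias_deg):
    """Rotate Nx2 pen coordinates (origin at the start point) by bias_deg."""
    a = np.deg2rad(bias_deg)
    rot = np.array([[np.cos(a), -np.sin(a)],
                    [np.sin(a),  np.cos(a)]])
    return points @ rot.T

# A straight 22 cm vertical line sampled at 5 points
pen = np.column_stack([np.zeros(5), np.linspace(0.0, 22.0, 5)])
print(deviate(pen, 14.0))   # what the subject would see at a 14-degree bias
```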
Figure 1. Experimental set-up. A hole in the wooden box allowed the participants to insert one hand inside the box, so that it could not be seen. The subjects were seated on a comfortable fixed chair in front of the desk (both the graphic tablet and the screen were aligned with the subjects' trunk midline) and could only see the screen below the chin (Panel A). The subjects were instructed to reach, with the pen tip, a yellow target (4 × 4 mm) located on the sagittal axis 22 cm from the starting point, by drawing a continuous line as fast as possible. After the target was reached, they were asked to indicate with "yes" or "no" whether the trajectory they saw on the screen corresponded to the movement actually performed. Subjects performed 51 trials in each experimental condition (i.e., No Stimulation, rTMS and Sham). For each trial, the software randomly applied a trajectory deviation ranging from 25° to the left (LD, i.e., −25°, with negative values indicating a leftward perturbation) to 25° to the right (RD, i.e., +25°), with one trial for each degree of deviation. There was also one trial with a deviation of 0°, indicating perfect coherence between the visual feedback and the actual movement (Panel B).
Transcranial Magnetic Stimulation
In the rTMS and Sham conditions, the subject performed the task immediately after 15 min of real repetitive TMS or Sham stimulation, respectively. In the Sham stimulation, the coil was positioned over the same area as in the real stimulation, but it was held perpendicular to the subject's scalp. rTMS was performed with a Magstim Rapid2 system with a focal coil (70 mm figure-of-eight). The participants' resting Motor Threshold (rMT) was defined as the lowest pulse intensity able to elicit a visible twitch in the abductor pollicis brevis muscle of the right hand in at least five of ten consecutive stimulations of the motor hotspot [34]. The average resting Motor Threshold was 53.4% (SD = 5.38) of maximum machine output. Then, fifteen minutes of low-frequency rTMS (900 pulses, 1 Hz at 90% of rMT) was delivered over the right dorsal premotor cortex (PMd), defined as 2 cm anterior and 1 cm medial to the previously defined M1 hotspot [41]. Immediately after the end of the stimulation, participants were asked to look at the screen in order to start the experimental task.
Data Analysis
Statistical analyses were conducted using Statistica 6.0. Only data from trials with manipulated angles were used for the analysis, so trials without deviation (i.e., 0°) were excluded. In order to establish at which degree each subject became fully aware of the deviation, the angle at which the subjects recognised the presence of a mismatch between what they saw on the screen and what they had actually traced was recorded and analysed. The dependent variable was therefore the angle of deviation (toward the left and/or the right side) at which the subject started to consistently answer "no" (e.g., at 14 degrees), for at least two consecutive degrees. On the angles selected using the above criteria, a 3 × 2 × 2 ANOVA with three within-subject factors, COND (No Stimulation, NS; rTMS; Sham), SIDE (Right and Left Deviation), and HAND (left and right hand), was used to directly test the differential effects of NS vs. real and vs. Sham rTMS.
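The rule for extracting the dependent variable can be made explicit in code; the sketch below assumes each subject's responses are stored as a mapping from deviation angle to a 'yes'/'no' answer, which is an illustrative data layout.

```python
# Minimal sketch of the dependent-variable rule described above: the
# awareness threshold is the smallest deviation at which the subject
# answers "no" on at least two consecutive degrees. The input layout
# (angle -> answer) is an assumption.
def awareness_threshold(responses):
    """responses: dict mapping deviation angle (1..25) to 'yes'/'no'."""
    for a in sorted(responses):
        if responses.get(a) == "no" and responses.get(a + 1) == "no":
            return a
    return None

# Toy subject who stops believing the feedback from 14 degrees onward
resp = {d: ("no" if d >= 14 else "yes") for d in range(1, 26)}
print(awareness_threshold(resp))   # -> 14
```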
Results
Analyses revealed that the factor COND was significant [F(2, 50) = 7.3; p = 0.001; partial η² = 0.228]. Crucially, post hoc analyses (Duncan's test) showed that participants became aware of their deviation at a significantly smaller angle soon after rTMS (p = 0.0006; mean = 11.09, SE = 0.9) with respect to the NS condition (mean = 14.34, SE = 1.1) and the Sham condition (p = 0.01; mean = 13.3, SE = 1.1), independently of the hand used to perform the task and of the direction of the deviation (see Figure 2). This result shows that rTMS facilitated awareness compared to the other conditions.
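An equivalent omnibus test can be run outside Statistica; the sketch below uses the statsmodels repeated-measures ANOVA on a hypothetical long-format table, with file and column names that are illustrative assumptions.

```python
# Minimal sketch of the 3x2x2 within-subject ANOVA described above, using
# statsmodels instead of Statistica. The CSV and its columns (subject,
# cond, side, hand, threshold) are hypothetical; AnovaRM requires one
# observation per subject per cell (balanced design).
import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("awareness_thresholds.csv")
res = AnovaRM(df, depvar="threshold", subject="subject",
              within=["cond", "side", "hand"]).fit()
print(res.anova_table)
```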
Taken together, these results suggest that rTMS of PMd significantly affected participants' conscious self-monitoring, and that the deviation direction, as well as the hand used to perform the task, influenced subjects' action self-monitoring.
Discussion
In the present study, we investigated the role of the right PMd in action self-monitoring in healthy volunteers by evaluating the modulation of motor awareness through the application of inhibitory rTMS. To obtain a behavioural measure of motor awareness, we referred to the well-known paradigm of Fourneret and Jeannerod, in which healthy volunteers are requested to judge their actions in a task where self-generated movements are experimentally deviated [25][26][27][28].
More specifically, we investigated whether the application of low-frequency rTMS over the right PMd may affect the monitoring of trajectory deviations. Consistent with our expectations, we found a modulation of subjects' motor awareness, with the movement perturbation now detected at 11 degrees of deviation. Interestingly, although the results reported in previous studies usually show worsening performance when inhibitory rTMS is applied [34], we found that inhibitory rTMS over the right PMd improved participants' motor awareness (i.e., it decreased the angle at which they became aware of the deviation). This is in line with results reported in previous studies showing facilitatory effects of the low-frequency rTMS protocol [35][36][37].
One possible explanation for the observed facilitation following the 1 Hz rTMS protocol is the phenomenon known as paradoxical facilitation (PF), whereby behavioural facilitation may result from the disruption or inhibition of brain activity [42][43][44]. PF was first described in brain-damaged patients who performed better than normal subjects on specific tasks [45,46]. Recently, it has been reported that PF can be induced by low-frequency rTMS in healthy participants. For example, Buetefisch and co-workers [47] showed that participants' accuracy on a task requiring a high level of precision with both hands increased after low-frequency rTMS applied to the left M1. In a previous study, Avanzino et al. [48] demonstrated an improvement of ipsilateral motor accuracy following 1 Hz rTMS over M1 that outlasted the period of stimulation by up to 30 min. Similarly, Schwarzkopf et al. [49] demonstrated that low-intensity TMS over the visual cortex facilitated the detection of weak motion signals, while higher intensities resulted in impaired detection of stronger motion signals (see also Pascual-Leone et al. (2012) for a review [50]).
Discussion
In the present study, we investigated the role of the right PMd in action self-monitoring in healthy volunteers by evaluating motor awareness modulation through the application of inhibitory rTMS.To obtain a behavioural measure of motor awareness, we referred to the well-known paradigm of Fourneret and Jeannerod where healthy volunteers are requested to judge their actions in a task in which self-generated movements are experimentally deviated [25][26][27][28].
More specifically, we investigated whether the application of low-frequency rTMS over the right PMd may affect the monitoring of trajectory deviations. Consistent with our expectations, we found a modulation of subjects' motor awareness: after rTMS, participants detected the movement perturbation at about 11 degrees of deviation. Interestingly, although the results reported in previous studies usually show worsening performance when inhibitory rTMS is applied [34], we found that inhibitory rTMS over the right PMd improved participants' motor awareness (i.e., it decreased the angle at which they became aware of the deviation). This is in line with some results reported in previous studies showing facilitatory effects of the low-frequency rTMS protocol [35][36][37].
One possible explanation for the facilitation observed following the 1 Hz rTMS protocol is the phenomenon known as paradoxical facilitation (PF), whereby behavioral facilitation may result from disruption or inhibition of brain activity [42][43][44]. PF was first described in brain-damaged patients who performed better than normal subjects on specific tasks [45,46]. Recently, it has been reported that PF can be induced by low-frequency rTMS in healthy participants. For example, Buetefisch and co-workers [47] showed that participants' accuracy on a task that required a higher level of precision for both hands increased after low-frequency rTMS applied to the left M1. In a previous study, Avanzino et al. [48] demonstrated an improvement of ipsilateral motor accuracy following 1 Hz rTMS over M1 that persisted for up to 30 min after the stimulation period. Similarly, Schwarzkopf et al. [49] demonstrated that low-intensity TMS over the visual cortex facilitated the detection of weak motion signals, while higher intensities resulted in impaired detection of stronger motion signals (see also Pascual-Leone et al. (2012) for a review [50]). Therefore, in healthy subjects the effects of low-frequency rTMS on the brain can either worsen [34] or improve subjects' performance [35][36][37]. However, the specific mechanisms by which non-invasive brain stimulation induces PF in healthy individuals are not yet fully understood. One recently proposed explanation is the stochastic resonance model, which postulates that introducing small amounts of noise into a system may promote low-level signals, which, in turn, enhance functions within that system. Whatever the explanation, the crucial finding of our study is the modulation of conscious experience obtained by delivering rTMS over the PMd. This demonstrates a causal relationship between the premotor cortex and motor awareness, confirming that the premotor cortex can be considered an important hub of the circuit related to the construction of the conscious experience of actions.
As for the side of the body controlled by the right PM, our results show that stimulation of the right PMd affects motor awareness for both the contralateral (left) and ipsilateral (right) hands. These findings, showing right-hemispheric control of both hands, suggest a right-hemispheric specialization for motor monitoring mechanisms. It is worth noting, however, that studies from different experimental paradigms have suggested that the left premotor cortex also seems to be involved in motor action monitoring. For instance, there are a few cases of anosognosia for right hemiplegia (that is, anosognosia following left, instead of right, brain damage [51]), while Fornia et al. [52] found in an awake-surgery experiment that Direct Electrical Stimulation (DES) of PMC dramatically altered patients' motor awareness, making them unaware of the motor arrest induced by the same stimulation. Given all these results, we might suggest that while the right hemisphere may control motor actions executed by both hands, the left hemisphere may control only the right hand. This proposal is reminiscent of one of the theories put forward to explain the deployment of attention in space and the data on neglect [34,53]. Further investigation should consider this possibility.
Finally, we also found an unexpected result: an effect on participants' awareness of the deviation direction related to the hand used to perform the task. Indeed, in the No Stimulation condition, participants were less aware of deviating from a straight trajectory in the Left Deviated trials when the task was performed with the left hand, and in the Right Deviated trials when the task was performed with the right hand. We can speculate on this result by referring to the well-known Simon effect, an attentional effect described as a stimulus-induced bias in response selection [54], in which manual responses to a visual stimulus are facilitated when there is congruence between the stimulus side and the responding hand (the stimulus-response compatibility effect [55]). In our experimental setting, a condition of hand-space compatibility was realised when the subject had to perform the task with the right (or left) hand and the line projected on the screen deviated towards the right (or left) space. The hand-space contingency created by the artificial deviation facilitated and enhanced attention to that space, interfering with the generation of awareness of the movements toward the opposite space that the subjects had to perform in order to correct the trajectories (see Freud et al. 2015 for the presence of the Simon effect in the motor trajectory task [56]).
Limitations of the Study
The present study has three main limitations. First, we did not perform stimulation of the left PMd, which prevents formulation of any definite conclusion about the role of the right PMd in motor awareness. Considering that the left premotor cortex also seems to be involved in action control [52], it will be crucial to investigate the effect of stimulation of the left PMd on motor awareness. Second, in our task, we did not investigate the role of the right Parietal Cortex, which, according to some authors, is involved in motor intention [23]. Therefore, further studies targeting the Parietal Cortex are needed to clarify the role of different brain areas in action monitoring. Finally, it is possible that the medium effect size of the present study is due to the limited sample size (n = 14), although it is quite similar to the samples of previous TMS studies (see, for example, [32,34,38]).
Conclusions
Our results, showing a modulation of motor awareness by the application of rTMS to the right PMd, demonstrate that this region plays a crucial role in action self-monitoring. Although interference with its activity improved, rather than impaired, the subjects' motor awareness, our study suggests that one of the comparator mechanisms proposed by the Blakemore et al. [1] model, responsible for the conscious monitoring of motor acts, is located in the right PM [6]. Given the functional enhancement effect that we found when rTMS was administered to the PMd, it is worth considering this procedure as a possible treatment for motor awareness disorders. As already pointed out, a limitation of our study is that we did not test the effect of rTMS over the left PMC. This would be crucial to draw firm conclusions about the differential involvement of the two hemispheres in action self-monitoring. Therefore, further investigations, targeting additional areas in both the right and left hemispheres and increasing the number of participants, are needed to clarify the different components of the motor monitoring circuit and their specific roles in generating the conscious experience of action.
Figure 1.
Figure 1. Experimental set-up and procedure. The experimental set-up consisted of a 30 × 40 cm graphic tablet placed in a wooden box on a desk and connected to the computer. An LCD screen was placed on top of the wooden box. A hole in the wooden box allowed the participants to insert one of their hands inside the box, so that it could not be seen. The subjects were seated on a comfortable fixed chair in front of the desk (both the graphic tablet and the screen were aligned to the subjects' trunk midline) and they could only see the screen below the chin (Panel A). The subjects were instructed to reach, with the pen tip, a yellow target (4 × 4 mm) located on the sagittal axis at 22 cm from the starting point, by drawing a continuous line as fast as possible. After the target was reached, they were asked to indicate with "yes" or "no" whether the trajectory they saw on the screen corresponded to the movement actually performed. Subjects performed 51 trials in each experimental condition (i.e., No Stimulation, rTMS and Sham). For each trial, the software randomly applied the trajectory deviation, which ranged from 25° to the left (LD, i.e., −25° from 0, with negative values indicating a leftward perturbation) to 25° to the right (RD, i.e., +25° from 0), with one trial for each degree of deviation. There was also one trial with a deviation of 0°, indicating perfect coherence between the visual feedback and the actual movement (Panel B).
Figure 2.
Figure 2. Mean degree at which subjects became aware of the artificial deviation in the three conditions. Mean degree at which subjects became aware of the artificial deviation in the NS (mean = 14.34, SE = 1.1), soon after the rTMS (mean = 11.09, SE = 0.9) and in the Sham conditions (mean = 13.3, SE = 1.1). Error bars represent standard error of means; *, significant. NS = No Stimulation, rTMS = repetitive Transcranial Magnetic Stimulation, Sham = Sham stimulation. Results also showed a significant three-way COND × SIDE × HAND interaction [F(2, 50) = 3.79; p = 0.02; partial η2 = 0.228]. Post hoc analysis (Duncan's test) revealed that when the task was performed with the left hand, subjects were significantly more aware of their own performance soon after rTMS (p = 0.003, mean = 11.53, SE = 1.6) than in the NS (mean = 16, SE = 1) and Sham (p = 0.008, mean = 15.46, SE = 1.41) conditions in the Left Deviated trials. Conversely, when subjects performed the task with the right hand, they were significantly more aware of the deviation in the rTMS condition (mean = 11.28, SE = 1.46) compared to the NS (p = 0.0009, mean = 16.35, SE = 1.19) and the Sham (p = 0.01, mean = 14.92, SE = 1.3) conditions in the Right Deviated trials (see Figure 3). Taken together, these results suggest that rTMS of PMd significantly affected participants' conscious self-monitoring, and that the deviation direction, as well as the hand used to perform the task, influenced subjects' action self-monitoring.
Figure 3.
Figure 3. Mean degree at which subjects became aware of the artificial deviation in the three conditions and with the two deviations. Mean degree at which subjects became aware of the artificial deviation in the three conditions and with the two deviations when the task was performed with the left hand (Panel A) and with the right hand (Panel B). Error bars represent standard error of means; *, significant. NS = No Stimulation, rTMS = repetitive Transcranial Magnetic Stimulation, Sham = Sham stimulation. Taken together, these results indicate that rTMS of PMd significantly affected participants' conscious self-monitoring, and that the deviation direction, as well as the hand used to perform the task, influenced subjects' action self-monitoring. | 2023-09-15T13:03:18.303Z | 2023-10-01T00:00:00.000 | {
"year": 2023,
"sha1": "e247c111056c7f7d56e0813b31866c4fc3657cd7",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3425/13/10/1422/pdf?version=1696663133",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2c7ce42bf678daf95662cba8d6b133730fe53030",
"s2fieldsofstudy": [
"Psychology",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
258911529 | pes2o/s2orc | v3-fos-license | Meaningful coproduction with clinicians: establishing a practice-based research network with physiotherapists in regional Australia
Background The disconnect between research and clinical practice leads to research evidence that is often not useful for clinical practice. Practice-based research networks are collaborations between researchers and clinicians aimed at coproducing more useful research. Such networks are rare in the physiotherapy field. We aimed to describe (i) clinicians’ motivations behind, and enablers to, participating in a network, (ii) the process of network establishment and (iii) research priorities for a practice-based network of physiotherapists in the Hunter Region of New South Wales (NSW), Australia that supports research coproduction. Methods We describe the methods and outcomes of the three steps we used to establish the network. Step 1 involved consultation with local opinion leaders and a formative evaluation to understand clinicians’ motivations behind, and enablers to, participating in a network. Step 2 involved establishment activities to generate a founding membership group and codesign a governance model. Step 3 involved mapping clinical problems through a workshop guided by systems thinking theory with local stakeholders and prioritizing research areas. Results Through formative evaluation focus groups, we generated five key motivating themes and three key enablers for physiotherapists’ involvement in the network. Establishment activities led to a founding membership group (n = 29, 67% from private practice clinics), a network vision and mission statement, and a joint governance group (9/13 [70%] are private practice clinicians). Our problem-mapping and prioritization process led to three clinically relevant priority research areas with the potential for significant change in practice and patient outcomes. Conclusions Clinicians are motivated to break down traditional siloed research generation and collaborate with researchers to solve a wide array of issues with the delivery of care. Practice-based research networks have promise for both researchers and clinicians in the common goal of improving patient outcomes. Supplementary Information The online version contains supplementary material available at 10.1186/s12961-023-00983-x.
Background
The disconnect between research and clinical practice leads to a substantial amount of research that fails to improve practice or patient outcomes [1][2][3]. Typically, researchers set research agendas and design research questions in isolation. The failure to involve end-users in setting research agendas and designing research questions has resulted in proliferation of research that clinicians do not find relevant [4][5][6][7]. Translating findings into practice is also more difficult when researchers and clinicians work in silos [1][2][3].
Involving end-users (e.g., patients, community and clinicians) in research is one way to improve research translation [8][9][10][11][12]. End-user involvement, often called coproduction, refers to a variety of practices where researchers work with stakeholders to generate research evidence. Mounting evidence suggests that involving end-users in producing research improves its relevance and likelihood of implementation into practice [8][9][10][11][12]. Moreover, there is a strong ethical rationale for involving end-users with the ability to shape the research that is designed to assist them [13,14]. However, end-user involvement may require more effort than traditional research processes [15][16][17]. Power imbalances and structural barriers concerning how research is typically conducted can lead to tokenistic collaborative efforts and involvement arrangements [18,19].
Practice-based research networks are ongoing collaborations between researchers and clinicians that aim to support coproduction [20]. In practice-based research networks, clinicians are active in setting research agendas and conducting research [20,21]. Practice-based research networks have been proposed to conduct more relevant research, accelerate research findings into practice and connect clinicians to key care stakeholders such as policy-makers and funders of care [20,22,23]. Practice-based research networks are common in general practice [21,24], but there are only a few select examples of practice-based research networks for physiotherapists in Australia [25,26]. There is a lack of information available to guide clinicians or researchers in establishing their own practice-based research networks.
To address this lack of information, we aimed to describe (i) clinicians' motivations behind, and enablers to, participating in a network; (ii) the process of network establishment and (iii) research priorities for a practice-based network of physiotherapists in the Hunter Region of New South Wales (NSW), Australia, that supports research coproduction.
Design
We used program logic to design key activities and identify outputs in a three-step process (Fig. 1) [27]. First, we performed a qualitative study as part of a formative evaluation to assess problems with care provision and ways that a research network could address these problems (we refer to these as motivators). We also assessed enablers for participation in a practice-based research network. Second, we carried out two activities to develop and formally establish the network and used codesign to develop network governance [28]. Finally, we performed problem mapping through a face-to-face workshop and prioritization of research areas through an online survey.
Coproduction and codesign
Increasing enthusiasm for partnering with end-users has led to conflation and misappropriation of "co-" terms such as coproduction and codesign [13,18]. For the purposes of our study, we defined coproduction as researchers and clinicians collaboratively generating research [11][12][13][14]. We defined codesign as a method where we worked with clinicians in multiple steps to create an end-product [28,29]. We accept that involving patients in coproduction and codesign is key to success [8][9][10][11][12][13][18]. However, the focus of this paper was to establish a network for clinicians, and our use of the terms coproduction and codesign throughout this manuscript refers to involving only researchers and clinicians.
Ethical considerations
The entire study was approved by the Hunter New England Local Health District Human Research Ethics Committee (Reference number: 2020/ETH01029).
Step 1: scoping and formative evaluation
We performed an initial scoping consultation with local physiotherapy opinion leaders prior to formative evaluation, which involved an online survey and online focus groups. Online focus groups are reported here and represent one portion of the formative evaluation. More detail on formative evaluation is reported elsewhere [30]. Here, we only report focus group data that address physiotherapists' thoughts and perceptions of a network and its establishment. The group interaction of focus groups was deemed the ideal way for participants to share ideas about their problems with care delivery and thoughts about how the network might address these problems, and to propose ideas about how the network might function [29,31].
Participants
Initial scoping was targeted to local clinical opinion leaders, who were defined as either physiotherapists with extensive clinical experience (over 15 years) or people with a high degree of centrality in professional networks who were capable of being "change agents" [32,33]. To be eligible for the focus groups, participants had to be registered physiotherapists working in regional or rural NSW [34]. We purposely sampled physiotherapists from three settings: private practice, the public health system and physiotherapist researchers. We invited participants via individual emails, which included information sheets about the study (Additional file 3: Supplement 1).
Activities
Initial scoping: We consulted local physiotherapy opinion leaders to assist in the design of subsequent stages [32], validate the network concept and understand the general scope of the network.
Focus groups: The facilitation schedule (Additional file 1) explored four main questions: (1) How do we improve practice? (2) Where does research fit into improving practice? (3) What are the issues that this network could help with? (4) How do you see this network being successful?
We conducted four focus groups online using Zoom [35] between 29 July and 6 August 2020. Each focus group lasted approximately 60 minutes. Two researchers facilitated each group, composed of four to six physiotherapists (a maximum of eight participants in each focus group).
One researcher (CG) transcribed audio files verbatim from each focus group, cleaned transcripts, and deidentified and organized data in NVivo software [36]. CG descriptively coded focus group data, and codes were checked by a second author (SD) [37,38]. CG led the development of themes with inductive thematic analysis [38,39]. During analysis, CG grouped codes under "problems" and "solutions" and explored the relationships between them to develop overarching categories. Motivating themes, or "motivators", resulted from grouping similar categories of problems and solutions together. "Enablers" were developed by grouping together common categories of participant-reported ideas on what would make their participation in the network easier [37][38][39]. Themes were finalized through discussion among CG, KB, SK and CW. CG is a physiotherapist with lived experience of the challenges that were reported by participants. CG has professional relationships with many focus group participants. These factors have shaped data collection and analysis, and theme development.
(Fig. 1 note [27]: *Formative evaluation resulted in data used in two research reports. Motivators of the network and reported enablers to a successful network are reported here, and barriers and enablers to providing evidence-based care are reported elsewhere [29].)
Step 2: establishment activities
First, we developed a clinician-led vision and mission statement and formed a founding membership group through an online network establishment meeting. Second, we codesigned a joint governance group and held a launch event to raise local awareness of the group. We used codesign because it allows multiple iterations to create products in partnership with end-users, where the outcome of the design process better meets end-users' needs and respects their experience [16].
Participants
We determined eligibility for the network as being a registered physiotherapist or a professional who contributes to physiotherapy care (for example, a rehabilitation provider), and working in the greater Newcastle, Lake Macquarie and lower Hunter regions of NSW. This sample was selected to ensure that network members could meet and interact regularly with shared interests in improving local patient outcomes. Recruitment efforts can be described as convenience sampling. We focused recruitment efforts on a list of potential members that emerged from initial scoping (more information under results section). We primarily used face-to-face meetings, online meetings, email or phone calls to recruit participants. Supplementary recruitment strategies included social media posts, social media advertisements (targeted to regional physiotherapists in NSW), and a website landing page to advertise the network. We invited 45 potential members to both establishment activities.
Activities
Network establishment meeting: Prior to the meeting, CG thematically analysed data from step 1 to generate a draft vision and mission statement, which was provided as pre-reading for participants. We determined consensus for the vision and mission statement as all participants present were in agreement (100% agreement).
We invited potential members to an online meeting (6 November 2020). After receiving background information on practice-based research networks, participants discussed the vision and mission statement. Participants who disagreed suggested changes, and the groups discussed these changes until we reached 100% agreement. We provided participants with an opportunity to make small, grammatical edits to the vision and mission statement after the meeting through a live online document.
We then asked participants to consider being a founding member of the network and encouraged them to confirm their response through an online communications platform (Slack) [40]. Following the establishment meeting, we took suggestions for the network name and performed an online poll (through Slack) [40].
Launch event: Prior to the launch event, a small group of founding members (CG, AD, NM, BG, KD, CW) discussed different leadership frameworks over three successive meetings and made a shared decision on the final model. We invited network members and local care stakeholders to a launch event (3 December 2020) to create awareness about the network and gather expressions of interest to become part of the network's steering committee. Following the launch event, the steering committee held sequential meetings to design terms of reference and make key decisions about the network scope.
Step 3: problem mapping and prioritization
We first held a face-to-face workshop to list and explore the causes and effects of key problems with the delivery of care for patients with musculoskeletal conditions in the local community. We used systems thinking to design the workshop [41]. The network steering committee then prioritized problems that resulted from the workshop.
Participants
We wanted to gain perspectives from professionals who provide musculoskeletal care in different parts of the local health system [41]. Therefore, we invited network members plus local emergency department consultants and nurse practitioners, general practitioners, orthopaedic surgeons and sports physicians to the problem-mapping workshop. Network members and the steering committee were involved in the prioritization process.
Activities
Workshop: Patients with musculoskeletal conditions are managed within a health system composed of multiple interacting agents (different professionals in different health sectors). Hence, we based our workshop on general systems theory, which broadly deals with exploring the role of interacting agents and their connections [37]. During the workshop we encouraged participants to consider links between parts of the system, and how these links (or lack thereof) influence major problems. Prior to the workshop, all participants were asked about their perspectives on the main problems they face with care delivery for patients with musculoskeletal conditions and the criteria they would consider for setting priorities. CG analysed these responses and created a list of key problem areas and criteria for prioritizing problems by gathering similar responses under common categories. Criteria for prioritizing problem areas were used as a rough guide for participants to consider when making their decisions. In the first half of the workshop, participants reviewed and added to the problem list from the pre-workshop survey, then prioritized six key preliminary problem areas. In the second half of the workshop, participants split into groups (one group per key problem area) and discussed the causes, effects, what is known or unknown, and potential strategies to address the problem area. We captured responses on paper.
Prioritization: Workshop data and field notes were analysed to produce a report for the workshop participants. Network members were asked to reflect on this report and provide their feedback through a live online document. For pragmatic reasons (we wanted to demonstrate meaningful progress on generating research with our limited capacity), we chose to prioritize the final three problem areas through an online poll involving only steering committee members. Steering committee members were asked to vote for three areas, without ranking them. We did not ask steering committee members to consider any specific criteria when ranking problem areas.
Initial scoping led to a list of potential network members who were already part of a professional network with a large number of pre-existing connections or who had professional relationships with local opinion leaders. We chose this list of potential members to optimize the diffusion of information and innovation [34].
Motivators to becoming involved in the network
We report each theme as a "motivator": a solution that mapped to a problem with evidence-based care delivery reported by physiotherapists (except in the case of theme 5, making local impact) (Table 1).
Theme 1: improving research relevance
Most participants reported that researchers may be asking the wrong questions to guide clinical practice improvement. Participants noted that the network may be a way for researchers to make research questions more relevant to clinical practice. For example, one participant reported,
I sort of need researchers to better understand what clinic life is like. So that they're asking better questions. (Participant 16, focus group 4)
Some participants noted that treatments included in research may be difficult to implement in "real-world" clinical practice.
Theme 2: enabling connection and collaboration
Many participants shared that disconnected systems are a hindrance to care provision. Disconnections between research and practice, between care providers, and between funders of healthcare and clinicians were reported. Participants reported that a network may improve the chances of clinicians connecting with healthcare funders, by providing a united voice (in contrast to separate individuals approaching healthcare funders).
I think, collectively, if we had something that we could-you know-as a collective go, well, this is what we're doing and that would then allow us to have a bit of a seat at the table when those conversations are happening [with funders of healthcare]. (Participant 15, focus group 4)
Often participants noted that providing care in a market-driven primary care environment forces physiotherapists to compete for patients' business. However, participants reported that the network could enable collaboration among clinicians, by sharing information with the shared goal of improving patient outcomes.
Theme 3: improving accountability
Some participants reported that a problem in physiotherapy is the variability in care that a patient will receive between physiotherapists because of a lack of obligatory standards. Participants discussed that the network could reduce unnecessary care variability in their local region by enabling the measurement of care standards and establishing a culture of accountability.
Theme 4: promoting evidence-based care
Most participants reported that evidence-based care is difficult to market, because patients may not understand the difference between care that is underpinned by evidence and other care options with less empirical support. Participants noted that being a part of the network would be a marker of quality and a unique selling proposition for their services that are underpinned by evidence. Participants shared that their (evidence-based) services are currently indistinguishable from service providers that do not practice in an evidence-based manner. A network "affiliation, branding or stamp" may assist in competing for business and promoting their services to patients. For example, a participant shared that promoting evidence-based care can reflect well on the physiotherapy profession more broadly.
People are seeing that there is this high degree of quality coming out constantly and we say it's physiodriven. (Participant 14, focus group 4).
Theme 5: making local impact
Some participants shared that the desire to have an impact on the care of patients in their local region was important, and that the network could be a vehicle to achieve this impact.
Trying to make a difference in Newcastle and the care of patients with musculoskeletal, sporting problems. (Participant 3, focus group 1)
Participants also reported that promoting themselves and the local healthcare community through involvement in research was important.
Enablers to a practice-based research network Theme 1: "time-crunched"
Participants reported that ensuring that network activities fit around a busy clinic schedule ("time-crunched") would make it easier for them to partake in network activities.
Theme 3: motivation and commitment
Participants noted that those who are more motivated to engage in research are more likely to be involved. However, participants also reported that clinicians who are committed, as well as motivated, may make the network a success. One participant noted,
We need some really motivated individuals. (Participant 5, focus group 2)
And another shared, And a commitment, I suppose, as well. Behaviour change amongst clinicians is as important as patients. (Participant 3, focus group 1)
Step 2: network establishment
Details of the outputs from network establishment activities are provided in Table 2.
Members
There were 29 founding members of the network, which was 64% of potential members. Twenty-seven (93%) of the founding members are registered physiotherapists (the two founding members who are not physiotherapists include one rehabilitation provider and one medical doctor). Twenty (69%) members worked in private practice, representing 16 private practice clinics in Newcastle, Lake Macquarie and Maitland, NSW. Five members worked in senior clinical and managerial roles at the local health district (Hunter New England Local Health District). One member worked in a private health insurance company, and one worked for a rehabilitation provider. Three researchers were founding members of the network (CG, CW, KD).
Findings
Network establishment meeting: The network's vision and mission statement reached consensus (100% agreement of meeting participants) after key changes were made to emphasize the desire for network members to improve their own care quality (changing "the care that patients receive" to "the care that WE provide"). We established a founding network membership group and finalized the network name 1 week after the meeting.
Launch event: The final leadership model we designed was a steering committee consisting of clinicians, researchers and other care stakeholders capable of providing a unique perspective on the healthcare system. Following the launch event, we received 14 expressions of interest; 13 of these became steering committee members (AD, BD, BM, CD, CG, CW, DR, JM, KD, MB, NM, SL, TW). The steering committee has majority (9/13 [70%]) representation from physiotherapists who work full-time in private clinics, and minority (2/13 [15%]) representation from researchers (Table 2). To simplify the research agenda, the steering committee initially limited membership to physiotherapists, with a view to incorporating other disciplines in the coming years. The steering committee limited the network's scope to musculoskeletal conditions because the network membership predominantly practised in the musculoskeletal area of practice.
Step 3: problem mapping and prioritization
See Table 3 for responses from the first half of the workshop (listing problems and preliminary prioritization plus the criteria provided in the pre-workshop survey). The following three areas were prioritized by the network steering committee through an online poll to contend with clinicians' busy schedules: (i) public and patient perception of musculoskeletal conditions and what is effective to manage them, (ii) poor quality of care that patients with musculoskeletal conditions receive, (iii) lack of preventative focus from the health system.
Future research directions
A foundational research program has been developed based on the priority areas identified (Additional file 3). Initial projects completed include a rapid review and a consensus project [46]. Our research areas are intentionally broad, and we recognize that specific and answerable research questions are needed to undertake research with tangible clinical impact and translation potential (e.g. research-embedded clinical or implementation trials). Our process to refine project areas includes assessing the literature base for knowledge gaps and specifying a clear research question in an iterative process. To coproduce research, we used small working groups including experienced researchers and clinicians, with one-to-one mentoring to build clinicians' research capabilities (for example, performing a literature search on a scientific database).
The network faces several ongoing challenges to support and sustain activity. Maintaining network member engagement and securing adequate resources and funding for research are two ongoing challenges. To boost member engagement, we have established a clinically relevant professional development program that includes research capacity-building. For funding research, we have developed a detailed funding strategy. The strategy outlines our approach to identifying and applying for relevant grant opportunities, partnership options, the development of a track record to secure independent funding, and alternative forms of income (e.g. professional development events, memberships).
(Table 3 caption: Problem-mapping workshop problem areas and criteria for prioritization. a Criteria were used only to guide the decision-making for participants (participants did not score problem areas using these criteria). b The six preliminary problem areas were then prioritized by the network steering committee via online poll.)
Discussion
Our formative evaluation showed that physiotherapists were motivated to contribute to a research network to coproduce research that is more relevant to clinical practice and to improving patient outcomes more broadly. We achieved involvement from 29 founding members across 16 practices, who codesigned governance structures, mapped system-wide problems with care delivery and generated research priorities. The network was viewed as a vehicle to connect disconnected health systems through knowledge sharing, promote evidence-based care against non-evidence-based alternatives, improve the overall standard of care delivered by physiotherapists and have an impact in the local community. Physiotherapists reported that "time-crunched" network activities, research infrastructure support, and motivated and committed members would enable a successful network. Establishment activities led to key outputs such as a vision and mission statement to harness physiotherapists' motivations and a joint governance group. These data may help other researchers or clinicians who wish to form a practice-based research network and coproduce research.
Interpretation
Physiotherapists' motivations extended beyond generating research, suggesting that a network is viewed as a way to address an array of problems with care delivery. In our study, all motivating factors shared one common theme: the underlying link to improving patient outcomes. Evidence from similar initiatives aimed at connecting clinicians and researchers demonstrates a similar patient-centred motivation [43]. Ultimately, the integration of high-quality research and clinical experience, traditionally outlined in evidence-based practice, is a cornerstone of healthcare that aims to optimize patient outcomes [44]. Working together through a mechanism such as a practice-based research network may create additional value for researchers and clinicians in the common pursuit of better evidence-based practice and improving patient outcomes.
Practical takeaways
To achieve a clinically relevant research agenda, we used a joint governance model with majority representation from private practice physiotherapists [9/13 (70%)].
To allow maximal engagement from busy clinicians, we balanced pragmatism with rigour in establishment activities. For example, we used a brief online poll to prioritize research areas rather than following a Delphi process [45]. Activities were typically scheduled in the evening (after clinical hours) and were less than 2 hours in length.
We used a mix of online and face-to-face meetings based on the preferences of network members, and we codesigned aspects of the network through online documents to ensure time efficiency.
Strengths
Our study involved a stepped approach using principles of codesign. While iterative in nature, our transparent description allows others to replicate or adapt specific components to their settings. We are not aware of another study describing the approach and results of research network establishment. For our formative evaluation, we purposively sampled physiotherapists from a variety of backgrounds, which should improve the transferability of our findings. Our manuscript was written in partnership with network members, which ensures an accurate reflection of included clinicians' perspectives and the establishment process.
Limitations
For our formative evaluation, CG has experience with the motivators we present and had professional relationships with participants, which may impact the credibility and confirmability of our findings. However, codes were checked by a second author (SD) and themes were discussed among a group of authors. We have only presented motivations for physiotherapists to become involved in the network, and not researchers' motivations (which would presumably be quite different from clinicians' motivations). However, we assume that collaboration with clinicians is understood by researchers as advantageous for many normative and functional reasons. For the network establishment, some activities may be difficult to apply to other contexts without the pre-existing social infrastructure or support personnel to implement establishment activities. However, our transparent description was intended to serve as a framework for others to adapt to their context, not a recipe to be followed. Finally, patients have an integral role to play in coproducing research, and a limitation of our study is that we did not include them in formative evaluation or in network establishment activities. However, the network is already involving patients as research partners to ensure a meaningful partnership moving forward. | 2023-05-27T13:39:03.287Z | 2023-05-26T00:00:00.000 | {
"year": 2023,
"sha1": "0494d4b5c4573fde117ff8beb6dbda5310ec3759",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "0494d4b5c4573fde117ff8beb6dbda5310ec3759",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
7824502 | pes2o/s2orc | v3-fos-license | Log Linear Models for Religious and Social Factors affecting the practice of Family Planning Methods in Lahore, Pakistan
This is a cross-sectional study based on 304 households (couples) with wives aged less than 48 years, chosen from an urban locality (Lahore city). Fourteen religious, demographic and socio-economic factors of categorical nature, namely husband's education, wife's education, husband's monthly income, occupation of husband, household size, husband-wife discussion, number of living children, desire for more children, duration of marriage, present age of wife, age of wife at marriage, offering of prayers, political view, and religious decisions, were taken to understand the acceptance of family planning. Multivariate log-linear analysis was applied to identify association patterns and interrelationships among factors. The logit model was applied to explore the relationship between the predictor factors and the dependent factor, and to identify the factors on which acceptance of family planning depends most strongly. Log-linear analysis demonstrated that preference for contraceptive use was consistently associated with the factors husband-wife discussion, desire for more children, number of children, political view and duration of married life, while husband's monthly income, occupation of husband, age of wife at marriage and offering of prayers provided no statistical explanation of the adoption of family planning methods.
Introduction
Pakistan has been experiencing rapid population growth since the second half of the twentieth century due to the reduction in mortality and a persistently high birth rate. The country's population grew from nearly 33 million in 1947 to about 135 million to date (Mahmood and Ali 1997). Rapid growth in population has ranked Pakistan eleventh in the world and made it the third biggest contributor to world population growth.
To check the growth rate, it is necessary to understand the process through which fertility behavior is regulated in a given cultural set-up.
Religion has a great deal of influence on society in Pakistan. The acceptance of family planning in Pakistan is determined by various factors and may vary between different segments of the population according to various socio-economic, cultural and economic factors, and is generally associated with various aspects of economic, religious and social organization. Sabihuddin and Jamal (1993) built an equation relating women's total births over the reproductive career to proximate and deliberate fertility control variables on cross-sectional data. Finally, the explanatory variables used in previous stages were equated as dependent variables with the set of socio-economic and cultural variables which are traditionally thought to be the factors affecting fertility behavior. The impact of socio-economic modernisation on fertility was discussed. Socio-economic and cultural variables explained about 0.3 to 7 percent of the variation in the dependent variables. Zafar (1996) attempted to find the extent to which socio-economic and cultural variables, and additionally attitudinal variables such as husband's and wife's education, family income, husband's occupation, child mortality, exposure to mass media, and husband's authority and domination in the family, influence the decision-making process for family planning methods. The chi-square test was employed to explore the relationship between the predictor variables and the dependent variable. Multiple linear regression was also used to establish the relative importance of each of the predictor variables.
Hakim (1995) examined how the use of family planning methods in Pakistan is determined by various factors and may vary between different segments of the population according to various socio-economic, cultural and economic factors. The author used logit models to determine the effect of the desire for no more children on the current use of family planning methods. To see the independent effect of the desire for no more children on contraceptive use, variables like number of living children, women's education and women's work status were controlled one by one. The odds ratios indicated that the chances of current use of contraceptives are about seven times higher among women who desire no more children than among those who desire more.
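To make the odds-ratio interpretation above concrete, the short Python sketch below computes an odds ratio from a 2 × 2 table; the counts are invented for illustration and are not Hakim's data.

```python
# Odds ratio from a hypothetical 2x2 table:
# rows = desire for no more children (yes / no), columns = current contraceptive use (yes / no)
a, b = 140, 60   # desire no more children: users / non-users
c, d = 25, 75    # desire more children:    users / non-users
odds_ratio = (a / b) / (c / d)
print(round(odds_ratio, 1))  # 7.0 -- odds of current use about seven times higher
```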
Farooqi (1994) examined the importance of husband-wife communication in the adoption of family planning methods. A series of basic cross-tabulations were run separately for 6393 currently married women from the Pakistan Demographic and Health Survey data. After simple bivariate analysis, logistic regression analysis was used by the researcher.
Ali and White (2005) conducted a cross-sectional community-based survey from May to June 2000 in Khairpur District, Sindh, to determine the prevalence of, and sociodemographic factors associated with, family planning practices among currently married women in the district. A pre-tested structured questionnaire was used to interview 300 subjects from the study area. Stratified cluster sampling was done to collect information on knowledge and use of family planning methods and other sociodemographic factors from the respondents. In this sample, 62% of the women were illiterate. Nearly 45% of the women were in the age group of 25-34 years. Exposure to family planning messages was greater by television (66%) than by radio (55%). The prevalence of family planning methods among married women was 27%. Oral contraceptive pills were the predominant method used (32%). Regarding sociodemographic factors, having more than four living children, exposure to family planning messages on TV, and husband's approval were the main factors associated with the use of family planning methods.
Materials and Methods
The study was based on three sets of variables: (i) socio-economic and demographic conditions indicated by the background variables, (ii) religious variables, and (iii) the dependent variable, acceptance of family planning methods. A married couple in the reproductive age group was taken as the unit of study. The bivariate chi-square test was used to test the association between the various factors and the dependent variable under study. The sampled population was married couples of Lahore city. Due to the unavailability of an appropriate sampling frame, multistage sampling was used and data were collected on 304 husbands from Allama Iqbal Town on the fourteen factors: Husband's education (HE), Wife's education (WE), Husband's monthly income (M), Husband's occupation (O), Household size (H), Husband-Wife discussion on family planning issues (HW), Number of living children (N), Desire for more children (D), Duration of marriage (DU), Present age of wife (P), Age of wife at marriage (A), Offering of prayers for husband (OP), Political view of husband (PV) and Religious decisions by husband (RD), together with the dependent variable acceptance of family planning methods (FP).
Results and Discussion
The association of each socio-economic, demographic and religious factor with the acceptance of family planning was tested with the chi-square test (Agresti 1996). The results are shown in Table -. PV, N, D and HW were found to be highly significant factors and DU was found to be a significant factor; all other factors were found to be non-significant. In order to model the association pattern and to study the effect of all factors upon the decision-making process for the acceptance of family planning methods, multivariate log-linear analysis was applied. To overcome the difficulty of the over-dimensionality of a contingency table with fifteen variables, the explanatory factors were divided into four marginal groups, keeping the dependent variable in each group. This step helped to simplify the interpretation of the results by avoiding higher-order interaction terms, which are impractical (Afifi and Clark 1994). All the factors were distributed arbitrarily among the four marginal groups, keeping the factor under study; these groups were then rearranged to avoid higher-order interaction terms. Finally, the following marginal groups were constructed:
Group 1: FP, A, HW, HE, RD
Group 2: FP, N, O, M, OP
Group 3: FP, D, DU, WE
Group 4: FP, P, H, PV
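For illustration, the bivariate chi-square screening described at the start of this section can be reproduced in a few lines; the counts below are hypothetical, not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: husband-wife discussion (rows: yes / no)
# vs. acceptance of family planning (columns: accept / do not accept)
table = np.array([[90, 30],
                  [60, 124]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3g}")
```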
Group 1:
In order to search for a well-fitting model on FP, A, HW, HE and RD, simultaneous tests that the k-factor interaction terms are zero were applied. These tests indicated that the 5-factor, all 4-factor and all 3-factor interaction terms are insignificant. So, keeping all possible 2-factor interaction terms, the backward elimination procedure was applied to obtain the best-fitting model. The estimated D(1)*DU(1) odds ratio indicated that desire for more children is about 31 times more likely to occur for couples whose duration of marriage is 0-9 years compared to those whose duration of marriage was 20 or more years. The conditional D(1)*FP(1) odds ratio gave that desire for more children is only 5% as likely to occur for those husbands who have approved family planning as for those who have disapproved it. Similarly, the estimates of the parameters DU(1)*FP(1) and DU(2)*FP(1) gave that approval of family planning is less likely to occur for those couples whose duration of marriage was 20 years or more than for those whose duration of marriage was 0-9 years. Approval of family planning is 1/6.89 = 0.14, i.e. 14% as likely to occur for those couples whose duration of marriage was 10-19 years, at each combination of levels of the other factors.
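For readers unfamiliar with this workflow, the sketch below shows how a hierarchical log-linear model with all two-factor interactions can be fitted as a Poisson GLM and its goodness of fit assessed with the G2 (deviance) statistic, the same quantity reported for the best model in this paper. The cell counts and the three-way table are invented for illustration; this is a generic sketch, not the authors' analysis code.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import chi2

# Hypothetical 2x2x2 contingency table in long format:
# FP = acceptance of family planning, HW = husband-wife discussion, HE = husband's education
df = pd.DataFrame({
    "FP": [1, 1, 1, 1, 0, 0, 0, 0],
    "HW": [1, 1, 0, 0, 1, 1, 0, 0],
    "HE": [1, 0, 1, 0, 1, 0, 1, 0],
    "n":  [40, 25, 12, 18, 15, 20, 45, 60],
})

# Log-linear model with all two-factor interactions (three-factor term omitted)
fit = smf.glm("n ~ FP + HW + HE + FP:HW + FP:HE + HW:HE",
              data=df, family=sm.families.Poisson()).fit()

G2 = fit.deviance              # likelihood-ratio G^2 against the saturated model
p = chi2.sf(G2, fit.df_resid)  # a large p-value indicates the reduced model fits adequately
print(f"G2 = {G2:.3f}, df = {int(fit.df_resid)}, p = {p:.3f}")
```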
Group 4:
Four factors, FP, H, P and PV, were included in group 4. The estimate of the FP(1)*PV(1) parameter, expressed as a conditional odds ratio, indicated that approval of family planning methods is 2.485 times more likely to occur for husbands whose political view is democratic rather than Islamic, controlling for H and P. This gives a strong positive association between approval of family planning and a liberal democratic political view.
Logit model:
The factors associated with variable under study FP in the four groups were taken to develop the logit model in order to find the dependence of the factors associated with FP and their combined effect on the probability of adopting the contraceptives.HW, N, D, DU and PV were taken in logit model.If π is the probability for the approval of family planning methods, the logit model has form the importance of fitting the logit model is that it can analyze the dispersion in the dependent variable.Total dispersion of the binary dependent variable FP has been divided into dispersion explained by the model and residual dispersion (unexplained).From the analysis of dispersion in Table-6, we can calculate statistic similar to R 2 in regression analysis.The ratio explained by the model using entropy criterion was found to be 37.93 %.
In order to see the combined effect of the factors PV, N, D, DU and HW on FP, we found the odds of acceptance of family planning. For example, in serial no. 2 of Table 7, the probability of approving family planning is 41% less than the probability of disapproving of contraceptives, given the combined effect of the corresponding levels of the factors PV, N, D and HW. In order to investigate which factors the variable FP depends on most strongly, we found measures of association using the entropy and concentration criteria by dropping each factor one by one and observing the effect of dropping that factor on the dispersion explained by the logit model. Table 8 shows the details of this procedure.
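As a small worked example of the odds statement above, the sketch below converts odds of approval into a probability; the odds value 0.59 follows directly from the quoted "41% less" relation rather than from Table 7 itself.

```python
odds = 0.59                      # P(approve) = 0.59 * P(disapprove), i.e. 41% less
p_approve = odds / (1 + odds)    # convert odds to a probability
print(round(p_approve, 3))       # ~0.371
```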
Conclusion
It has been found that husband-wife discussion about family size is the most important factor, strongly influencing the decision to accept family planning. The educational attainment of the husband was found to be highly associated with the husband-wife discussion variable, so increasing the educational level of husbands may increase the acceptance of family planning. Moreover, we can conclude that, like other family matters, the decision about family size is highly influenced by the male's opinion. The factor number of living children at level N(1), i.e. 0-1 child, has a negative effect upon acceptance of family planning, while N(2), i.e. 2 or more children, has a positive effect upon the practice of family planning. Those husbands who have a democratic political view, which was considered the more liberal view in the context of this research, have a 2.48 times greater chance of adopting family planning methods compared to those who have an Islamic political view, which was more orthodox. Thus approval of contraceptives was more likely to occur for those couples whose religious view was less strict than for those who want a strict Islamic political system. Senior couples were less likely to believe in contraceptives compared to couples whose duration of marriage was less than 20 years; this factor may be significant because senior couples think that their family is now complete and they do not need more children.
(O), Household size (H), Husband-wife discussion on family planning issues (HW), Number of living children (N), Desire for more children (D), Duration of marriage (DU), Present age of wife (P), Age of wife at marriage (A), Offering of prayers by husband (OP), Political view of husband (PV), Religious decision-making by husband (RD) and acceptance of family planning methods (FP). Let the data comprise a set of counts n whose expected values μ are defined by a set of relations log μ = Xβ. This is the systematic component of the model, the random component being the assumed Poisson distribution. The (k × p) matrix X is called the design matrix. The best model has generating class (FE, FP*HW, HE*HW, A*HE) with G² = 29.284, d.f. = 37, p = 0.83. The structural form of the model is the log-linear expansion implied by this generating class.
Table 2:
Estimated log odds ratios corresponding to the parameters are given in Table 2. The remaining parameters are redundant and set to zero. The estimate of parameter A(1)*HE(1) in Table 2 is 1.004, and e^1.004 = 2.73; hence the conditional odds ratio is 2.73, i.e. the chance of marriage at a younger age is 2.73 times greater for women whose husbands' education is secondary or less, keeping other factors constant. The conditional odds ratio for parameter FP(1)*HW(1) is e^1.7585 = 5.80; hence, under the fitted model, the odds of acceptance of family planning methods for husbands who have discussed the number of children with their wives are 5.80 times the odds for those who have not. More generally, there is a 5.80 times greater chance of accepting family planning for husbands who have discussed the matter of the number of children with their wives. Table 2 also indicated that there is a negative association between low education and husband-wife discussion, at each combination of levels of A, FP and RD.
Table 3:
The absence of the two 2-factor interaction terms D*WE and FP*WE implies conditional independence of D and WE at each combination of levels of DU and FP. Parameter estimates for group 3 are given in Table 4.
Table 3 indicated that the conditional FP(1)*N(1) odds ratio is e^−0.4829 = 0.62, i.e. families with no children or one child are 38% less likely to adopt contraceptives than those with four or more children. Table 3 also indicated that low income of the husband is 40% less likely to occur for husbands who are businessmen, at each combination of levels of FP, N and OP.
Table 8:
On the basis of Table 8 we may group the explanatory variables into those upon which FP is highly dependent, moderately dependent and slightly dependent. D and HW are the factors upon which FP is most highly dependent.
"year": 2006,
"sha1": "832ca7570612766bca3c81a96478cec82c879f8d",
"oa_license": "CCBY",
"oa_url": "https://pjsor.com/pjsor/article/download/84/61",
"oa_status": "HYBRID",
"pdf_src": "CiteSeerX",
"pdf_hash": "832ca7570612766bca3c81a96478cec82c879f8d",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Psychology"
]
} |
Cancer patients participating in a lifestyle intervention during chemotherapy greatly over-report their physical activity level: a validation study
Background The short form of the International Physical Activity Questionnaire (IPAQ-sf) is a validated questionnaire used to assess physical activity (PA) in healthy adults and commonly used in both apparently healthy adults and cancer patients. However, the IPAQ-sf has not been previously validated in cancer patients undergoing oncologic treatment. The objective of the present study was to compare IPAQ-sf with objective measures of physical activity (PA) in cancer patients undergoing chemotherapy. Methods The present study was part of a 12-month prospective individualized lifestyle intervention focusing on diet, PA, stress management and smoking cessation in 100 cancer patients undergoing chemotherapy. During the first two months of the lifestyle intervention, participants were wearing an activity monitor (SenseWear™ Armband (SWA)) for five consecutive days while receiving chemotherapy before completing the IPAQ-sf. From SWA, Moderate-to-Vigorous intensity PA (MVPA) in bouts ≥10 min was compared with self-reported MVPA from the IPAQ-sf. Analyses both included and excluded walking in MVPA from the IPAQ-sf. Results were extrapolated to a wearing time of seven days. Results Sixty-six patients completed IPAQ-sf and wore the SWA over five days. Mean difference and limit of agreement between the IPAQ-sf and SWA including walking was 662 (±1719) min.wk−1. When analyzing time spent in the different intensity levels separately, IPAQ-sf reported significantly higher levels of moderate (602 min.wk−1, p = 0.001) and vigorous (60 min.wk−1, p = 0.001) PA compared to SWA. Conclusions Cancer patients participating in a lifestyle intervention during chemotherapy reported 366 % higher MVPA level from the past seven days using IPAQ-sf compared to objective measures. The IPAQ-sf appears insufficient when assessing PA level in cancer patients undergoing oncologic treatment. Activity monitors or other objective tools should alternatively be considered, when assessing PA in this population. Electronic supplementary material The online version of this article (doi:10.1186/s13102-016-0035-z) contains supplementary material, which is available to authorized users.
Background
The number of new cancer cases is continuously rising and is estimated to grow from 12.7 million new cases in 2008 to more than 22 million new cases worldwide in 2030 [1,2]. Parallel to this increase, the possibilities for surviving cancer have never been better, with 32.6 million cancer survivors worldwide in 2012 [3]. Many cancer survivors are expected to return to normal and productive lives following their diagnosis. However, cancer and its treatments are often associated with long-term impairment of physical, mental and psychosocial health, and survivors are at risk of developing co-morbidities [4,5].
Physical activity (PA) is recommended as a strategy both during and after chemotherapy to manage treatment-related symptoms, prevent early and late co-morbidities, improve quality of life, increase the rate of chemotherapy completion and possibly extend overall and disease-specific survival in cancer patients [6][7][8]. Consequently, the American Cancer Society has provided a set of general PA recommendations for cancer patients and survivors. Accordingly, cancer patients should avoid inactivity, try to return to normal daily activities as soon as possible following diagnosis, and follow the general PA guidelines for aerobic and strength exercise. This recommendation is >150 min of moderate PA per week combined with strength training two days per week. Further, the importance of individualizing these PA recommendations to the patient's condition and preferences is pointed out, and it is emphasized that cancer patients may need to exercise at a lower intensity and/or for a shorter duration during their treatment [6]. These global PA recommendations are largely based on self-reports of cancer patients' PA levels [6].
The International Physical Activity Questionnaire (IPAQ) is a validated questionnaire developed to monitor self-reported PA levels in healthy adults [9], and the most commonly used self-report tool of PA worldwide [10,11]. However, limitations of the IPAQ include its length, low compliance and difficulties in completing the questionnaire [12]. These difficulties may be of even greater magnitude for cancer patients experiencing disease and treatment-related side-effects like fatigue, loss of interest, and cognitive difficulties [13,14] when undergoing chemotherapy. A short form of the IPAQ (IPAQ-sf) is therefore preferred and previously used in cancer patients [15], but has not been validated in this group. Cancer is primarily a disease of the elderly [16], which can make the use of the IPAQ-sf challenging since the questionnaire is developed for adults aged 18-65 years [9]. Secondly, the IPAQ-sf defines moderate PA as activities that make you breathe somewhat harder than normal [17]. In this regard it is of major importance to be aware of the fact that cancer patients undergoing chemotherapy usually are fatigued, which may impair the patients' perceived level and intensity of PA. A validation study of the IPAQ-sf across 12 countries revealed large variations with respect to correlation between the IPAQ and objective measures of PA by activity monitors [9]. Thus, it is likely that the accuracy of the IPAQ varies when different populations are assessed, dependent on the populations' demographics, cultural backgrounds, physical fitness, level of physical functioning and disease status [12,18]. The objective of the present study was therefore to compare time in Moderate-to-Vigorous intensity PA (MVPA) recorded with the short form of the IPAQ with activity directly quantified using the SenseWear™ Armband (SWA) in cancer patients receiving chemotherapy with curative or palliative intent.
Methods
The present validation study is part of the I CAN-study [19]; a 12-month prospective feasibility intervention with the aim to increase population based adherence to healthy lifestyle behaviors including diet, PA, mental stress management and smoking cessation. The intervention was delivered to 100 cancer patients undergoing chemotherapy with curative or palliative intent through: 1) a grouped start-up course with patients and nearest relatives, 2) an information binder with recommendations, recipes and tips on how to manage possible disease and treatment-related symptoms themselves, and 3) monthly counseling with a lifestyle supervisor with recommendations individualized to the patients' abilities, barriers and preferences. The study is described in detail elsewhere [19].
Participants
All cancer patients receiving chemotherapy for all cancer types, with either curative or palliative intent, at one oncology center in Kristiansand, Norway, were considered for study participation against the following inclusion criteria: 1) age ≥ 18 years; 2) life expectancy ≥ 6 months; 3) Eastern Cooperative Oncology Group performance status (ECOG) ≤ 2; and 4) able to speak and read Norwegian. The only exclusion criterion was suspected anorexia cachexia syndrome. The study was conducted according to the guidelines of the Helsinki Declaration. The Regional Committee for Medical and Health Research Ethics, South-East approved the study (ref. no. 2012/1717/REK). Written informed consent was obtained from all patients before inclusion.
Procedures
Medical and demographic characteristics were collected via self-report and medical records. These included date of birth, height (collected by the physicians before start-up of chemotherapy), tumor type (later categorized into 1 = breast cancer; 2 = colorectal cancer; 3 = prostate cancer; 4 = other cancer types), tumor stage (I-IV), ECOG (0-2), treatment intention (curative or palliative), marital status, cigarette smoking status and education level. Weight was measured to the nearest 0.5 kg (Mechanical scale, Seca 761, Birmingham, United Kingdom) and body mass index (BMI) was calculated by dividing weight (kg) by height (m) squared. Self-reported PA was assessed using the IPAQ-sf. Objective quantification of PA was acquired via the SWA, either in conjunction with the participants' first or the second appointed visit in the I CAN-study. The participants had undergone chemotherapy for five to twelve weeks at this time point. Participants were instructed to wear the SWA for five consecutive days, including both work week and weekend days. Since the present study was part of a comprehensive lifestyle study, participants' diet, mental stress, cigarette smoking and quality of life were assessed in addition to their PA level; a total of 59 questions.
Short form of the International Physical Activity Questionnaire (IPAQ-sf)
The IPAQ-sf questionnaire assesses PA in bouts of ≥ 10 min as part of leisure time, domestic and gardening activities, and work and transportation activities over the past seven days. PA is classified as either Vigorous PA (VPA), Moderate PA (MPA) or walking [17]. All walking was included in MPA, as proposed by Craig et al. [9]. Additionally, MPA minus time spent on walking is presented. Total PA was defined as the sum of time in MVPA.
SenseWear Armband
SenseWear™ Armband Pro 3 and Mini (BodyMedia Inc., Pittsburgh, PA) has been shown to be valid compared to indirect calorimetry in cancer patients (underestimation of daily energy expenditure by 9%, r = 0.68; p < 0.01) [20] and doubly labeled water in healthy adults (underestimation of daily energy expenditure by 5%, r = 0.81; p < 0.01) [21]. The sensor array includes an accelerometer, heat flux sensor, galvanic skin response sensor, skin temperature sensor and a near-body ambient temperature sensor [21]. The SWA was worn on the triceps muscle halfway between the acromion and olecranon processes of the upper arm, as recommended by the manufacturer. Participants were instructed to remove the SWA during water-based activities, such as swimming or bathing, as the monitor is not waterproof. Data were downloaded with software developed by the manufacturer (SenseWear Professional Research Software V.6.1 for Pro 3 and V7.0 for Mini, algorithm V.2.2.4) after entering the necessary demographic characteristics (sex, age, height, weight, smoking status).
The SWA was programmed to record PA in 1-min epochs. The cut-points defined MPA as 3-6 METs and VPA as >6 METs in bouts ≥10 min. Total PA was defined as the sum of MVPA in bouts of ≥10 min. Min.wk−1 from the SWA for each participant was calculated by multiplying mean MVPA min.day−1 by seven. Complete measurements required an SWA wearing-time ≥19.2 h.day−1 on at least one day.
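A minimal sketch, not the SenseWear software itself, of how weekly MVPA could be derived from 1-min epoch MET values under the cut-points and the ≥10-min bout rule just described; the `mets` series is hypothetical.

```python
# Count minutes of MVPA that fall inside bouts of >=10 consecutive minutes.
def mvpa_minutes(mets, cut=3.0, min_bout=10):
    total, run = 0, 0
    for m in mets:
        if m >= cut:                 # MPA is 3-6 METs, VPA >6; both count as MVPA
            run += 1
        else:
            if run >= min_bout:      # only bouts of >=10 consecutive minutes count
                total += run
            run = 0
    if run >= min_bout:              # close any bout still open at the end
        total += run
    return total

# Hypothetical day: a 25-min moderate bout counts, the trailing 8-min bout does not.
mets = [1.2] * 30 + [3.5] * 25 + [1.0] * 10 + [4.0] * 8
daily = mvpa_minutes(mets)           # 25 min from the single valid bout
weekly = (daily / 1) * 7             # mean min/day over valid days, scaled to min/wk
print(daily, weekly)                 # 25 175.0
```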
Data analysis
Descriptive characteristics are presented as mean and standard deviation (SD). The PA data are presented as min.wk−1 in the present study. The mean difference (IPAQ-sf minus SWA) ± 1.96 SD was calculated according to Bland and Altman [22]. To test if the IPAQ-sf overestimated PA compared to the SWA, we applied a Wilcoxon signed-rank test to the absolute values for each activity monitor. Differences are presented as means with 95 % Confidence Intervals (CI). Additionally, the percentage discrepancy between the IPAQ-sf and the SWA was calculated for each individual within different intensity categories. A linear regression with MVPA from the SWA as the independent variable and the difference between the IPAQ-sf and SWA for MVPA as the dependent variable was applied to test for systematic over-reporting. The level of significance was set to 0.05. Statistical analysis was performed with SPSS statistical software version 22 (SPSS Inc., Chicago, IL, USA). A post hoc power analysis using G*Power [23] yielded a power of 0.99 to detect differences between means, based on an effect size of 0.5.
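A minimal sketch of the agreement statistics in this analysis plan: the Bland-Altman mean difference with ±1.96 SD limits of agreement, and the Wilcoxon signed-rank comparison. The arrays are hypothetical weekly MVPA values (min.wk−1); the study itself used SPSS rather than this code.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired weekly MVPA values per participant (min/wk).
ipaq = np.array([840, 300, 60, 600, 150, 420])
swa  = np.array([180, 120, 30, 200,  90, 140])

diff = ipaq - swa
mean_diff = diff.mean()               # bias: IPAQ-sf minus SWA
half_loa = 1.96 * diff.std(ddof=1)    # half-width of the limits of agreement
print(mean_diff, mean_diff - half_loa, mean_diff + half_loa)

# Paired non-parametric test of over-reporting, as described above.
stat, p = wilcoxon(ipaq, swa)
print(stat, p)
```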
Results
From the 100 I CAN participants, 16 participants dropped out before enrolment in the present validation study. The remaining 84 participants received a SWA and completed the IPAQ-sf. Of these, 18 were eliminated due to 1) SWA malfunction (n = 14) or 2) insufficient wearing-time of the SWA (n = 4). In total, 66 participants had valid registrations on the SWA and were included in the present validation study. There were no significant differences in self-reported MVPA between participants vs. non-participants at baseline (p = 0.414). Participants wore the SWA for an average of 3.6 days and 23.7 h.day−1 of all valid days. Demographic, medical and physical characteristics of the participants (n = 66) and non-participants (n = 34) are presented in Table 1.
Time in MPA, VPA and MVPA, including and excluding walking
The mean differences and limits of agreement between the IPAQ-sf and SWA from the Bland-Altman plots for time in MVPA were 662 (1719) min.wk−1 and 203 (1070) min.wk−1 with walking included and excluded in the analyses, respectively (Fig. 1a and b). Figure 1c and d depict the mean differences and limits of agreement between the IPAQ-sf and the SWA from the Bland-Altman plots for time in MPA with walking included and excluded from the analyses: 602 (1694) min.wk−1 and 143 (1009) min.wk−1, respectively. Furthermore, Fig. 1 shows that several of the participants also under-reported their PA compared to the SWA (plots under the solid line). From the IPAQ-sf, 23 participants reported VPA during the last week. The SWA only identified three participants conducting VPA during the same seven-day period, and VPA is thus not depicted in a Bland-Altman plot. Linear regression revealed no significant systematic over-reporting, indicating that those who had higher PA levels on the SWA did not over-report more.
Accommodation of PA guidelines
Participants were categorized into fulfilling the American Cancer Society PA guidelines of ≥150 min.wk−1 or not when comparing MVPA obtained from the IPAQ-sf vs. the SWA. Analyses revealed significant differences between the IPAQ-sf and the SWA when walking was included in the MVPA; the IPAQ-sf identified 58 of the participants as meeting the PA guidelines vs. 32 by the SWA (p = 0.001). After excluding walking from the analyses, 35 of the participants self-reported accommodating the PA guidelines (p = 0.532).
Discussion
In the present study, cancer patients participating in a comprehensive lifestyle intervention while undergoing chemotherapy over-reported time in MVPA compared to the SWA when completing the IPAQ-sf. Specifically, these cancer patients over-reported their moderate-to-vigorous PA by nearly 100 min.day−1. No differences in over-reporting were observed between patients undergoing chemotherapy with curative or palliative intent. Almost 90 % of the participants in the present study perceived themselves as meeting the PA guidelines of 150 min.wk−1 of MPA while less than 50 % actually met the PA guidelines according to the objective measures. Our findings are supported by previous studies, where more than half of the healthy adults included perceived themselves as accommodating the PA guidelines [24,25]. Objective measures in the same population revealed that only 15 % were reaching the guidelines [26]. The higher percentage of participants self-reporting accommodating the PA recommendations observed in the present study compared to previous studies [26] may be a result of drawing our participants from a lifestyle intervention focusing on the participants' diet, mental stress and smoking cessation in addition to their PA. In addition to measuring the participants' lifestyle behaviors, participants also received recommendations on how to maintain or adhere to a healthy lifestyle during their cancer treatment. It is a well-known phenomenon that individuals participating in a study often improve aspects of their behavior as a response to being studied, or believe they have improved more than they actually have (the Hawthorne effect) [27][28][29]. In the present study, the IPAQ-sf identified significantly more participants as accommodating the PA guidelines of 150 min.wk−1 of MVPA vs. the SWA. Six of the participants reported from 420 to 840 min.wk−1 of MVPA, while less than 60 min.wk−1 was registered on their SWA. These findings are not only statistically significant, but also of great clinical importance. Health care professionals should take these findings into consideration when delivering PA recommendations in this population and if using the IPAQ-sf as an assessment tool in the clinic. In light of this knowledge, individualized PA recommendations can be delivered with barriers such as fatigue, feeling sick, loss of interest and nausea in mind [13,30], so that cancer patients undergoing oncologic treatment can harvest the known health benefits from adhering to the developed PA guidelines [6].
Participants in the present validation study self-reported being in MVPA 662 min.wk−1 more than what was objectively registered by the SWA; an over-reporting that is supported in the literature [9,[31][32][33]. Of the 844 min.wk−1 MVPA reported on the IPAQ-sf in the present study, participants reported 459 min.wk−1 as walking activities. As reported previously, the IPAQ-sf may over-report MPA because it includes walking at any intensity [33]. In the present study, the IPAQ-sf over-reported time spent on walking by 283 min.wk−1, or 1.6 times, compared to MPA obtained by the SWA. When including MPA from the IPAQ-sf in the analysis, the IPAQ-sf over-reported walking by 602 min.wk−1, or 3.4 times, compared to MPA data on the SWA. Including time spent on walking in the MVPA without differentiating the intensity of the walking may be a potential source of over-reporting in cancer patients undergoing chemotherapy, and time spent on walking was thus both included in and excluded from MVPA in the present study. A significant amount of the walking performed by these patients may be objectively registered as light intensity [34], while being experienced and self-reported as moderate [35]. The IPAQ-sf defines MPA as activities that make you breathe somewhat harder than normal [17]. Importantly, cancer patients dealing with disease- and treatment-related side-effects such as reduced physical capacity, fatigue, pain, depression and anxiety [36][37][38][39] may feel short of breath at a much lighter intensity than they have previously, or compared to the population for which the questionnaire was developed. Consequently, the experienced side-effects are a great source of over-reporting since the patients might experience and report the PA as moderate, while the SWA assesses the PA as light intensity. This is important to have in mind when using self-reports in cancer patients; many PA self-reports, like the IPAQ-sf, are developed for use in a healthy population. Gil-Rey et al. [39] thus suggest cancer-specific PA guidelines to maximize the health benefits in this population.
The participants did not systematically over-report their PA levels when completing the IPAQ-sf; in other words, higher physical activity levels from the SWA were not associated with more over-reporting of MVPA on the IPAQ-sf. Our findings are in contrast to the findings of Johnson-Kozlow et al. [31], who revealed larger over-reporting in the breast cancer survivors who reported the highest PA levels on the long form of the IPAQ. Reasons for our conflicting findings may be the use of different accelerometers in the two studies and the use of the long versus the short form of the IPAQ in the study of Johnson-Kozlow and colleagues [31] and the present study, respectively. The long form of the IPAQ gives a total of 35 examples of PA across different activity domains, i.e. recreational sports, leisure time and housework PA. This provides many opportunities for "forward telescoping"; a recall bias occurring if the activities were recalled as taking place during the same seven-day period which is monitored, but actually took place previously [40]. Secondly, while the participants were undergoing chemotherapy in the present study and may not have been able to perform much vigorous PA, the participants in the study of Johnson-Kozlow et al. [31] were cancer survivors two years post diagnosis. Recall of vigorous PA is more likely to be subject to "forward telescoping" on PA self-reports, since these activities are often easier to remember due to strong, distinct physiological signals [40]. Importantly, the IPAQ has been criticized for being complicated to complete [41] due to difficulties in remembering which activities were performed the past seven days, distinguishing the intensity of the different activities and, last but not least, trying to identify whether or not the activities lasted ≥ 10 min [42]. These aspects may be even harder to recognize for cancer patients experiencing disease- and treatment-related side effects such as pain, mental stress, cognitive difficulties and fatigue [13,14,37]. When comparing the IPAQ-sf to the SWA, activities lasting ≥ 10 min from the SWA were applied in the analyses, which may lead to great over-reporting if the activity lasted < 10 min. Thus, post hoc PA comparisons assessed by the IPAQ-sf and minute-by-minute SWA data were conducted in the present study (data not shown). Analyses revealed that the IPAQ-sf still significantly over-reported VPA, but now significantly under-reported MPA by 44 % compared to the SWA. The difficulties in completing the IPAQ-sf are reasonably clear. Importantly, there is currently no gold standard for quantifying PA [43]; however, accelerometers are a precise and valid tool and thus commonly used in validating self-reports of PA [32]. Consequently, it is of concern that both the evidence regarding PA in cancer patients and the PA recommendations in this population are developed on the basis of self-reported PA data, which in turn has an impact on the validity of those recommendations [6]. Another question that arises in this context is why self-reports only address PA in bouts >10 min, when there is rapidly growing evidence on health benefits of shorter bouts of PA [44,45]. This is, however, an unexplored field in cancer patients, which needs further investigation, especially since activities of longer duration may be hard to complete for cancer patients with impaired physical capacity due to the oncologic treatment.
There are strengths and limitations of the present study. To our knowledge, this is the first time the IPAQ-sf has been compared to an objective PA monitor in cancer patients undergoing chemotherapy with either curative or palliative intent. One key aspect when designing the present study was to make the objective registrations feasible for the patients. Many different guidelines regarding the number of days wearing the activity monitors have previously been provided, with a minimum wearing time of four days recommended [46,47]. One weakness of the present study is that only 66 % of the eligible patients were included, despite wearing the SWA for only five days, due to voluntary dropout, SWA malfunction or insufficient wearing time, leading to a higher inclusion of breast cancer patients who were either married or living together with someone compared to the total sample. These observations may indicate that a shorter wearing time is preferred in this population with regard to feasibility. Secondly, the participants were instructed to remove the SWA during water-based activities, and activities such as swimming were thus not recorded. However, cancer patients at our clinic were advised to refrain from activities such as swimming in public pools due to reduced immune function and increased infection risk during chemotherapy. Further, the present study, like previous lifestyle interventions, is limited by including the healthier and fitter participants compared to the population from which they are drawn [19,30]. Unfortunately, no Bland-Altman plot was calculated for VPA since only three of the participants had bouts of ≥10 min of VPA recorded on the SWA. Further, the present study is limited by the number of questions asked. The present validation study was part of a comprehensive lifestyle intervention, which focused on the participants' diet, mental stress, smoking cessation and quality of life in addition to their PA level. The large number of questions asked, perhaps in combination with side effects from the disease and its treatment, may have affected the accuracy of the participants' answers.
Conclusion
Based on our findings, cancer patients participating in a lifestyle intervention while undergoing chemotherapy grossly over-report their PA level from the past seven days when using the IPAQ-sf. Thus, the IPAQ-sf seems to be insufficient when assessing PA level in cancer patients undergoing oncologic treatment. Activity monitors or other objective tools should be considered in this population as an attempt to bridge the gap between how physically active the cancer patients perceive themselves to be and how physically active they actually are.
"year": 2016,
"sha1": "f20b7fa5e1a61ab06ac388be896f4a8151b7cb6d",
"oa_license": "CCBY",
"oa_url": "https://bmcsportsscimedrehabil.biomedcentral.com/track/pdf/10.1186/s13102-016-0035-z",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f20b7fa5e1a61ab06ac388be896f4a8151b7cb6d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
TESTING THE APPLICATION OF THE INTEGRAL AQAL MODEL IN ENTREPRENEURIAL COACHING IN SOUTH AFRICAN BUSINESS INCUBATORS
Objective: Despite significant investment in entrepreneurial coaching in South Africa, the failure rate of small businesses remains high. This empirical study addresses this issue by creating an effective entrepreneurial coaching framework with the help of business incubators, entrepreneurs, coaches, and industry experts. Research Design & Methods: The study used a quantitative cross-sectional design, with a questionnaire distributed to 296 entrepreneurs and statistical analysis performed using SPSS and STATA software. Findings: The findings support the AQAL model of Ken Wilber in how the entrepreneurial outcomes relate to each other. The findings also align with the literature's emphasis on relationship building as a key factor for business growth, with relationship building emerging as the only statistically significant predictor of business growth. Implications and Recommendations: Relationship building should be prioritised over other coaching outcomes, such as self-efficacy, entrepreneurial resilience, and visionary skills, by entrepreneur development practitioners. Contribution & Value Added: These findings have implications for entrepreneurial coaches, policymakers, and professional bodies, urging them to understand entrepreneurs' contexts, take a systemic approach to entrepreneurial coaching, and provide guidance on industry developments and best practices. It is expected that by implementing these recommendations, the proposed entrepreneurial coaching framework will contribute to better outcomes for entrepreneurs and their incubated businesses.
INTRODUCTION
The high rate of unemployment in South Africa has driven individuals to pursue self-employment to alleviate poverty (Tengeh & Choto, 2015). SMMEs are critical in job creation and economic growth, aligning with the National Development Plan's objective of creating 11 million jobs (Msimango-Galawe & Hlatshwayo, 2021). However, the start-up failure rate of SMMEs in South Africa remains alarming, ranging from 70% to 90% (Van der Spuy, 2019). Limited access to finance, market challenges, lack of support, inadequate skills training, and limited infrastructure and technology access are identified as primary factors contributing to SMME failures (Lose et al., 2017). SMMEs are crucial in combating unemployment and poverty in South Africa (Msimango-Galawe & Hlatshwayo, 2021). However, they face challenges such as limited skills, crime, funding constraints, and restricted technology access. Business incubators, including coaching support, contribute to SMME success (Amamou & Ali, 2019; Schutte & Direng, 2019). To ensure sustainable growth, diverse Business Development Support Providers (BDSPs) offer entrepreneurial coaching in South Africa (Van der Spuy, 2019).
While entrepreneurial coaching is critical in the development of business owners around the world (Ben Salem & Lakhal, 2018; Saadaoui & Affess, 2015), there is inconsistency in the implementation by BDSPs, particularly because there are no guidelines, frameworks, or regulations on coaching practice in South Africa (Schutte & Direng, 2019). According to Schutte & Direng (2019), the profession faces a challenge because little is known about the approaches and methodologies of entrepreneurial coaching. Furthermore, the delivery of entrepreneurial coaching is still inconsistent. In the South African context, no entrepreneurial coaching framework guides BDSP implementation. As a result, different coaches take different approaches to measurement, resulting in replication issues for incubators and ineffective trial-and-error delivery (Schutte & Direng, 2019). Despite limited research, coaching consistently contributes to entrepreneurial success and business growth (Schutte & Direng, 2019). A specific focus on entrepreneurial coaching in the business incubator environment fills the gap in coaching literature. Therefore, this study aims to identify entrepreneurial coaching outcomes through self-reporting by entrepreneurs, focusing on four outcomes from the integral AQAL model: entrepreneurial self-efficacy, entrepreneurial resilience, visionary mindset, and stakeholder relationship building.
To resolve the above-stated research gap, the research questions are formulated as follows: 1) to what extent does entrepreneurial coaching mediate relationships between entrepreneurial self-efficacy, entrepreneurial resilience, being visionary, building relationships and business growth?; 2) how does entrepreneurial self-efficacy influence the relationship between entrepreneurial resilience and business growth?; 3) how does entrepreneurial resilience influence the relationship between being visionary and business growth?; 4) how does being visionary influence the relationship between building relationships and business growth?; and 5) what is the influence of building relationships on business growth?
LITERATURE REVIEW
Entrepreneurial Ecosystem in South Africa
Small business support policies in South Africa date back to 1994 (Tengeh & Choto, 2015). The Small Business Development Corporation (SBDC) pioneered the concept of business incubation in 1995, initially providing infrastructure and facilitating market access for entrepreneurs (Lose et al., 2017). Various types of Business Development Service Providers (DSPs) have emerged over time, such as virtual and physical incubators, as well as workshop and office space providers, each tailored to the specific needs of entrepreneurs and sponsors (Lose et al., 2017; Small Enterprise Development Agency (SEDA), 2019). This research focuses on business support organisations, specifically business incubators.
Business Incubators
Business incubators are both a process and a physical space for nurturing businesses in a secure environment (Schutte & Direng, 2019; Van der Spuy, 2019). They provide a supportive ecosystem where entrepreneurs can connect with business support services (Lose et al., 2017). Business incubation encourages entrepreneurship by providing different formats tailored to the needs of entrepreneurs, such as virtual and traditional incubators with physical infrastructure (Schutte & Direng, 2019). Finance, technology, workspace, skills, and networks are among the challenges that businesses face at various stages (Mamabolo & Myres, 2020). Incubators provide interventions, such as skills training, coaching, mentoring, advisory services, market access, and networking opportunities to entrepreneurs (Schutte & Direng, 2019). However, reducing small business failure rates remains a significant challenge for incubators (Msimango-Galawe & Hlatshwayo, 2021). Funding, external partnerships, and target markets all influence the format of an incubator (Mrkajic, 2017).
Figure 1 depicts the evolution of business incubation models, classified into three generations with a growing range of offerings (Bruneel et al., 2012). The first-generation model focuses on physical infrastructure, such as office space, while subsequent generations include intangible business services (Bruneel et al., 2012; Mrkajic, 2017). There is a mix of all three generations of business incubators in South Africa, with some limited to infrastructure provision and others embracing the third-generation model and beyond (Van der Spuy, 2019). However, not all incubators meet best practice standards because some do not provide comprehensive business development services or market reach development (Lose et al., 2017). The COVID-19 pandemic reinforced the importance of technological knowledge, innovation, and e-delivery during incubation, rendering the first-generation model obsolete (Lose & Kapondoro, 2020). In response to the pandemic, business incubators should modify their strategies, including using modern technology to facilitate small business model changes (Lose et al., 2020). This research focuses on second- and third-generation business incubators that offer business development services, particularly coaching, intending to increase the success rate of SMMEs by developing an entrepreneurial coaching framework.
Entrepreneurial Coaching and Entrepreneurial Outcomes
Entrepreneurial coaching is personalised support for entrepreneurs seeking to improve their entrepreneurial skills (Saadaoui & Affess, 2015). Entrepreneurial coaching is defined in this study as individualised assistance provided by a professional entrepreneurial coach who uses their own entrepreneurial experience and skills to facilitate the entrepreneur's business development and soft skills, resulting in business growth (Ben Salem & Lakhal, 2018; Coller-Peter & Cronjé, 2020; Kotte et al., 2021). This distinguishes entrepreneurial coaching from other coaching domains in which the coach's experience or skills may or may not be required (Mmaditla & Ndlovu-Hlatshwayo, 2022). The individual coachee and the entity experience these entrepreneurial coaching outcomes (Wiginton & Cartwright, 2020). Establishing the coaching outcomes reported by coachees can help to understand and refine coaching interventions for other stakeholders.
Theoretical Foundation
This study drew from multiple theories in gaining deeper insights into the entrepreneurial coaching ecosystem and testing the AQAL model. Systems theory provides a theoretical framework for understanding organisations and their environments as complex systems with interconnected parts (Lynch et al., 2021). It implies that changes in one part of the system can impact other parts and that the system can adapt and evolve to maintain stability and functionality (Soltanzadeh & Mooney, 2016). In the context of this study, systems theory is relevant for investigating the entrepreneurial ecosystem of small enterprise development in South Africa to develop the entrepreneurial coaching framework. For integrative and systemic coaching, several authors (Louis & Diochon, 2018) argue that entrepreneurial coaching is a systemic approach that integrates individuals with their entities. Wilber's Integral theory, specifically the AQAL model, provides a holistic perspective by considering both observable and non-observable aspects of individuals and their operating context (Landrum & Gardner, 2012; Wilber, 2000). The AQAL model was critical in achieving the objectives of this study. Isenberg (2010) developed the entrepreneurship ecosystem theory, consisting of six domains co-existing in the entrepreneurial environment. When considering the key components of an entrepreneurial ecosystem that enables entrepreneurship development, Fredin & Lidén (2020) emphasise the importance of a systemic perspective. To better understand the ecosystem in South Africa and achieve the study's objectives, this study focuses on the support domain within the entrepreneurship ecosystem, specifically business development support and incubators. The development of the entrepreneurial coaching framework used systemic coaching models to boost business growth through entrepreneurial coaching.
Against the above background, the following hypothesis is stated:
Hypothesis 1: Entrepreneurial coaching moderates the relationship between:
Hypothesis 1a: entrepreneurial self-efficacy and business growth.
Hypothesis 1b: entrepreneurial resilience and business growth.
Hypothesis 1c: being visionary and business growth.
Hypothesis 1d: building relationships and business growth.
Entrepreneurial Self-Efficacy and Business Growth
The interior quadrant of the individual focuses on unobservable aspects of the individual, such as self-motivation and intuition, which are important for entrepreneurs (Kerrin et al., 2017; Landrum & Gardner, 2012). Coaching interventions have increased entrepreneurial self-efficacy, or confidence in one's ability to complete tasks (Saadaoui & Affess, 2015). Entrepreneurial self-efficacy influences entrepreneurial outcomes as a foundation for behaviour and cognition (Gielnik et al., 2020). However, because entrepreneurs may overlook risks and limitations, it is important to consider the potential risks associated with excessively high entrepreneurial self-efficacy. Nonetheless, research has consistently identified entrepreneurial self-efficacy as a critical factor in business success (Saadaoui & Affess, 2015). Therefore, the following hypothesis is stated:
Hypothesis 2: Entrepreneurial self-efficacy:
Hypothesis 2a: as an individual interior entrepreneurial coaching outcome is positively related to business growth.
Hypothesis 2b: mediates the positive relationship between entrepreneurial resilience and business growth.
Entrepreneurial Resilience and Business Growth
The individual exterior quadrant focuses on observable and measurable aspects of the individual, such as behaviour, time management, resilience, emotional coping, passion, and assertiveness, all of which are important skills for South African entrepreneurs (Kerrin et al., 2017; Volckmann, 2002; Wilber, 2000). Resilience is an active state of dealing with challenges and has been found to positively impact business growth, including profits and sales, among South African entrepreneurs (Santoro et al., 2020). Therefore, the following hypothesis is stated:
Hypothesis 3: Entrepreneurial resilience:
Hypothesis 3a: as an individual exterior entrepreneurial coaching outcome is positively related to business growth.
Hypothesis 3b: mediates the positive relationship between being visionary and business growth.
Being Visionary and Business Growth
The collective interior quadrant includes unobservable aspects of the collective, such as collective thoughts, culture, norms, shared vision, and values, all of which are necessary skills for South African entrepreneurs (Kerrin et al., 2017; Volckmann, 2002; Wilber, 2000). Being visionary, defined as having a vision for the future of the business, is a collective interior quadrant characteristic because it extends beyond the individual and is not always externally observable (Kerrin et al., 2017). According to research, visionary characteristics are important for leaders and entrepreneurs, as successful entrepreneurs influence others to achieve desired goals or visions (Nimbodiya & Totala, 2019). Therefore, the following hypothesis is stated:
Hypothesis 4: Being visionary:
Hypothesis 4a: as a collective interior entrepreneurial coaching outcome is positively related to business growth.
Hypothesis 4b: mediates the positive relationship between building relationships and business growth.
Building Relationships and Business Growth
The collective exterior quadrant includes observable and externally measurable aspects of the collective, such as collective behaviours, communication, stakeholder relationship building, and responsible leadership, all of which are essential skills for South African entrepreneurs (Kerrin et al., 2017; Landrum & Gardner, 2012; Volckmann, 2002; Wilber, 2000). This research focuses on relationship building, extending beyond clients to other stakeholders. Poor relationship management has been linked to business failure (Xesha et al., 2014), so developing good stakeholder relationships is critical for business success. Coaching entrepreneurs can help them become more self-aware and understand their personalities, allowing them to effectively network and build relationships with key stakeholders. Therefore, the following hypothesis is stated:
Hypothesis 5: Building relationships as a collective exterior entrepreneurial coaching outcome is positively related to business growth.
METHODS
The positivism paradigm was used in the study, known for its value-free and objective approach to explaining phenomena through objective facts (Creswell & Creswell, 2018). Adopting an explanatory research design, which uses quantitative data to test hypotheses, resulted in improved comprehension, explanation, and prediction of the research topic (Creswell & Creswell, 2018). The target population included entrepreneurs who had worked with an entrepreneurial coach within the previous 36 months and owned a business that was less than five years old, which corresponded to provincial contributions to the national economy in South Africa (Statistics South Africa (STATSSA), 2021, 2022). Despite the estimated population of 2,400,000 SMMEs in South Africa (SEDA, 2021), no database of coached SMMEs exists, implying that the proportion of SMMEs receiving coaching and mentoring may be lower than reported (SEDA, 2021).
The research sample was drawn using convenience and snowball sampling to achieve a sample size that meets the assumptions of multivariate statistical techniques (Field, 2018). Entrepreneurs were recruited largely via the Business Development Support Providers' (BDSPs) programme managers, who sent the survey link to their alumni. In addition, the entrepreneurs were asked to share the survey link with other entrepreneurs who had been in an incubation programme within the stipulated period and had a coach in the programme. Based on the population stated above, regarding a minimum acceptable sample size, the current study aimed to acquire a sample size that generated a 95% confidence interval (Field, 2018; Lakens, 2022). Further, to estimate the proportion with plus or minus 5% precision, the resultant target sample size was statistically n = 385 (Field, 2018; Filho et al., 2013; Lakens, 2022). However, the achieved sample size was n = 296 (indicating a 77% response rate), surpassing the recommended 250 participants. For this study, data was gathered using an online self-administered questionnaire adapted from validated measurement scales in the literature, which investigated the outcomes of entrepreneurial coaching and their impact on business growth, specifically in the South African business incubator environment (Kerrin et al., 2017). A pilot study was conducted to ensure content validity and clarity of instruction (Creswell & Creswell, 2018). The survey aimed to determine the relationship between entrepreneurial coaching, reported outcomes, and business growth.
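The paper quotes the target n = 385 without showing the calculation; the figure is consistent with the standard formula for estimating a proportion at 95% confidence (z = 1.96) with ±5% precision, assuming maximal variance p = 0.5:

```latex
\[
n \;=\; \frac{z^{2}\,p(1-p)}{e^{2}}
  \;=\; \frac{1.96^{2}\times 0.5\times 0.5}{0.05^{2}} \;\approx\; 385
\]
```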
SPSS version 27 and STATA version 13 were used to analyse quantitative data collected through Qualtrics. To analyse the responses from entrepreneurs, descriptive statistics, correlation tests, reliability tests, and regression analysis were used (Field, 2018). The conceptual framework and relationships between variables were tested using Structural Equation Modelling (SEM) with path analysis (Mehmetoglu & Venturini, 2021). Cronbach's alpha was used to measure the overall reliability of the questionnaire to ensure rigour, with values greater than 0.7 considered acceptable (Field, 2018; George & Mallery, 2019). The reliability of the instrument used in this study ranged from acceptable to excellent, providing greater confidence in its validity (see Table 1).
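A minimal sketch of the Cronbach's alpha reliability check referred to above; the respondents × items score matrix is hypothetical, and the study itself computed alpha in SPSS rather than with this code.

```python
import numpy as np

# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)   # rows = respondents, cols = items
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical Likert responses for a 3-item scale from 5 respondents.
items = [[4, 5, 4], [3, 4, 3], [5, 5, 4], [2, 3, 2], [4, 4, 5]]
print(round(cronbach_alpha(items), 3))       # values > 0.7 are treated as acceptable
```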
FINDINGS
Data was gathered from 312 respondent entrepreneurs who had received entrepreneurial coaching for their businesses, representing a wide range of business sizes and from all provinces in South Africa.
From May to December 2021, the self-administered survey was distributed through Qualtrics. The final sample size was 296 after excluding incomplete surveys.
Demographic Characteristics of the Sample
Of the 296 respondents, 121 (41%) were male and 175 (59%) were female. The study looked at several pathways, including the impact of entrepreneurial coaching on self-efficacy, entrepreneurial resilience, vision, and relationship building, as well as the impact of these factors on business growth. A four-step approach was used to conduct mediation analysis to assess the conditions of mediation. The conceptual model (Figure 2) depicted the relationships between the independent variable (entrepreneurial coaching), the mediating variables (self-efficacy, resilience, visionary thinking, and relationship building), and the dependent variable (business growth). The relationship between the dependent variable and both the mediating and independent variables was examined using structural equation modelling (SEM). This step aimed to see if the mediator predicts the dependent variable significantly and if the previously significant coefficient of the independent variable is significantly reduced. The independent variable's non-significance indicates its lack of influence (MacKinnon et al., 2007; Mehmetoglu & Venturini, 2021). Only building relationships had a significant p-value of 0.000 among the four independent variables. On the other hand, entrepreneurial resilience, vision, and self-efficacy were not statistically significant, with p-values of 0.905, 0.824, and 0.445, respectively. Across all independent variables, there were complete, partial, and failed mediated effects for business growth. Table 3 shows that the self-efficacy abilities composite variable had a non-significant direct effect on the dependent variable (coefficient = .081; p = .372). However, it had a highly significant relationship with entrepreneurial resilience (p = 0.000). Entrepreneurial resilience, in turn, had a significant direct effect on the dependent variable (coefficient = .726; p = .000) and a highly significant relationship with being visionary (p = 0.000).
Certain mediation tests failed at various stages of the investigation. For example, the regression of self-efficacy on business growth produced non-significant results (p = .372), indicating failure at Step 1. Similarly, the relationship between visionary leadership and business success was insignificant (p = 0.735), also failing at Step 1. As a result, using ECO as a mediator enables a systematic investigation of the relationships between independent and dependent variables. Overall, the mediation tests revealed partial mediation between self-efficacy and entrepreneurial resilience, entrepreneurial resilience and being visionary, being visionary and building relationships, and building relationships and business growth.
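A minimal sketch of the four-step mediation logic described in the findings, using plain OLS regressions on simulated data; the variable names and effect sizes are illustrative, and the study itself fitted these paths with SEM in STATA rather than with this code.

```python
import numpy as np
import statsmodels.api as sm

# Simulated data: X = coaching exposure, M = candidate mediator (e.g.
# relationship building), Y = business growth.
rng = np.random.default_rng(0)
n = 296
x = rng.normal(size=n)
m = 0.6 * x + rng.normal(size=n)            # Step 2: X should predict M
y = 0.5 * m + 0.1 * x + rng.normal(size=n)  # Y driven mostly through M

def fit(dep, *preds):
    X = sm.add_constant(np.column_stack(preds))
    return sm.OLS(dep, X).fit()

step1 = fit(y, x)        # X -> Y must be significant to start
step2 = fit(m, x)        # X -> M
step3 = fit(y, m, x)     # M and X -> Y jointly
# Full mediation: M stays significant in step3 while X drops to non-significance;
# partial mediation: X shrinks but remains significant.
print(step1.pvalues[1], step2.pvalues[1], step3.pvalues[1], step3.pvalues[2])
```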
DISCUSSION
In the mediation test (Step 1), ECO serves as a mediator between business growth and entrepreneurial coaching, indicating a positive relationship (p < 0.05). Previous research backs up ECO's role in influencing business growth (Kerrin et al., 2017; Mele et al., 2010; Wiklund et al., 2009; Wilber, 2000; Xesha et al., 2014). The structural model, moderation and mediation test results support Hypotheses 1 and 2, indicating that entrepreneurial coaching leads to self-efficacy, vision, and relationship building (except for entrepreneurial resilience, which yielded non-significant results). Hypotheses 1, 2, 3, 4, and 5 can be accepted at the p = 0.002 significance level.
Entrepreneurial coaching is directly related to self-efficacy, vision, relationship building, and business growth, emphasising its effectiveness in developing critical entrepreneurial skills for South African entrepreneurs. The revised model's moderation, mediation analysis, and goodness of fit confirm that entrepreneurial coaching mediates the positive relationship between business growth and relationship building. This finding is consistent with previous research, such as Kerrin et al. (2017), Xesha et al. (2014) and Galvão & Pinheiro (2019). In the mediation tests, however, entrepreneurial coaching does not mediate the relationship between self-efficacy, entrepreneurial resilience, vision, and business growth.
The statistical analysis of the primary quantitative data aligns with previous studies, particularly Wilber's Integral theory and AQAL model (Wilber, 2000). The findings highlight the interconnectedness of the four quadrants representing entrepreneurial coaching outcomes: entrepreneurial self-efficacy, entrepreneurial resilience, being visionary, and building relationships.
Following the evidence provided above with regard to entrepreneurial coaching:
Hypothesis 1a, which reads: "Entrepreneurial coaching moderates the positive relationship between entrepreneurial self-efficacy and business growth", is not supported.
Hypothesis 1b, which reads: "Entrepreneurial coaching moderates the positive relationship between entrepreneurial resilience and business growth", is not supported.
Hypothesis 1c, which reads: "Entrepreneurial coaching moderates the positive relationship between being visionary and business growth", is not supported.
Hypothesis 1d, which reads: "Entrepreneurial coaching moderates the positive relationship between building relationships and business growth", is supported.
In its relationship with entrepreneurial coaching, self-efficacy produces significant results (p = 0.000) (Volckmann, 2002; Wilber, 2000). Furthermore, when regressed on entrepreneurial resilience, self-efficacy shows significant results (p = 0.000), confirming its direct relationship with both variables and its mediation of the relationship between entrepreneurial resilience and business growth (Volckmann, 2002; Wilber, 2000). This emphasises the significance of entrepreneurial self-efficacy as a result of entrepreneurial coaching and its importance for the development of entrepreneurs (Saadaoui & Affess, 2015).
Following the data provided above concerning entrepreneurial self-efficacy:
Hypothesis 2a, which reads: "Entrepreneurial self-efficacy as an individual interior entrepreneurial coaching outcome is positively related to business growth", is not supported.
Hypothesis 2b, which reads: "Entrepreneurial self-efficacy mediates the positive relationship between entrepreneurial resilience and business growth", is supported.
Entrepreneurial resilience had no direct relationship with entrepreneurial coaching (p = .258) or business growth (p = .905) (Corner et al., 2017). These findings may be influenced by differences in measuring entrepreneurial resilience across studies (Fisher et al., 2016). The revised model, on the other hand, indicates that self-efficacy mediates the relationship between entrepreneurial resilience and business growth, emphasising the interdependence of entrepreneurial coaching outcomes (Volckmann, 2002; Wilber, 2000). Furthermore, being visionary mediates the relationship between building relationships and business growth, which aligns with the AQAL model's concept of interconnected quadrants (Volckmann, 2002; Wilber, 2000).
Following the data provided above with regard to entrepreneurial resilience:
Hypothesis 3a, which reads: "Entrepreneurial resilience as an individual exterior entrepreneurial coaching outcome is positively related to business growth", is not supported.
Hypothesis 3b, which reads: "Entrepreneurial resilience mediates the positive relationship between being visionary and business growth", is supported.
Previous research has identified vision as an attribute associated with successful business growth (Kerrin et al., 2017; Mamabolo & Myres, 2020). The current study also discovered that being visionary is an outcome of entrepreneurial coaching (p = 0.000), supporting the notion that visionary skills benefit entrepreneurs (Nimbodiya & Totala, 2019). The study, however, did not find a significant relationship between being visionary and business growth (p = 0.735), contrary to previous findings.
Following the data provided above with regard to being visionary:
Hypothesis 4a, which reads: "Being visionary as a collective interior entrepreneurial coaching outcome is positively related to business growth", is not supported.
Building relationships is an essential skill for business growth (Galvão & Pinheiro, 2019). According to the study findings, entrepreneurial coaching leads to developing relationship-building skills (p = 0.000), and there is a direct relationship between building relationships and business growth (p = 0.000). These findings are consistent with previous research indicating that individuals with relationship-building skills are likelier to experience business growth than those without (Kerrin et al., 2017; Mamabolo & Myres, 2020; Xesha et al., 2014). As a result, the ability to build relationships has a significant impact on business growth (Galvão & Pinheiro, 2019).
Following the data provided above with regard to building relationships: Hypothesis 5, "Building relationships as a collective exterior entrepreneurial coaching outcome is an antecedent for business growth", is supported. As pictured in the revised conceptual model in Figure 3, the statistical analysis of the primary quantitative data aligns with previous studies, particularly Wilber's Integral theory and AQAL model (Wilber, 2000). The findings highlight the interconnectedness of the four quadrants representing entrepreneurial coaching outcomes: entrepreneurial self-efficacy, entrepreneurial resilience, being visionary, and building relationships. Furthermore, while important (Kerrin et al., 2017), entrepreneurial resilience has no direct impact on business growth. According to the study findings, a systemic approach is required, and self-efficacy mediates between entrepreneurial resilience and business growth. This finding supports the AQAL model, which emphasises the interconnectedness of the individual interior and exterior quadrants. Isolating entrepreneurial resilience or visionary leadership may not result in the desired business growth outcomes. However, as supported by the revised conceptual model and aligned with integral and systems theory, being visionary plays an important role in business growth (Volckmann, 2002; Wilber, 2000). The study's findings support Wilber's (2000) AQAL model, indicating that entrepreneurial resilience in the individual exterior quadrant influences the visionary collective interior quadrant. Empirical studies show that building relationships positively impacts business growth in the South African context (Kerrin et al., 2017; Mamabolo & Myres, 2020; Xesha et al., 2014). These findings are consistent with the systemic approach, in which changes in one element affect other elements (Bhatnagar, 2021).
This study's findings support the argument that entrepreneurial coaching outcomes mediate the relationship between coaching and business growth, emphasising the importance of coaching in achieving greater success in business development. The study emphasises the importance of increased entrepreneurial coaching to boost business growth in South Africa, as coaching is directly related to self-efficacy, entrepreneurial resilience, vision, relationship building, and business growth. These findings confirm that entrepreneurial coaching is a dependable intervention for developing the critical entrepreneurial skills required for business growth among South African entrepreneurs. According to the study findings, entrepreneurial coaching indirectly impacts business growth through its influence on entrepreneurial coaching outcomes. Increasing the availability of entrepreneurial coaching in small business development programmes can improve coaching outcomes for entrepreneurs. The study tested the outcomes of entrepreneurial self-efficacy, entrepreneurial resilience, vision, and relationship building. Incorporating entrepreneurial coaching into business incubator programmes can greatly benefit the entrepreneurial ecosystem by providing adequate resources and physical infrastructure for entrepreneurship. This supports the recommendation that more South African business incubators implement business coaching interventions.
According to the revised model, entrepreneurial resilience mediates the relationship between being visionary and business growth, whereas being visionary mediates the relationship between relationship building and business growth. The findings emphasise the interdependence of entrepreneurial coaching outcomes and the systemic coaching approach, which recognises the importance of addressing multiple factors for improved business growth. Increasing entrepreneurs' self-efficacy improves their entrepreneurial resilience, in turn improving their visionary mindset and ability to build business relationships.
Relationship building has been shown to have a positive impact on business growth. The systemic coaching approach and the AQAL model place a premium on the interdependence and interconnectedness of sub-factors in the coaching process. Changes in one factor cause changes in others, affecting the entire system.
Furthermore, the study discovered that entrepreneurial coaching moderates the relationship between relationship building and business growth, emphasising the importance of this skill for entrepreneurs.
On the other hand, entrepreneurial coaching had no effect on the relationships between self-efficacy, entrepreneurial resilience, vision, and business growth. These findings highlight the importance of entrepreneurial coaching and its various outcomes in driving business growth, and underscore the need for a comprehensive and integrated approach to coaching interventions. The revised conceptual framework is justified by its alignment with the AQAL model, which provides a novel and comprehensive perspective on the entrepreneurial coaching sector that has not previously been applied.
The systemic approach of the AQAL model emphasises the interconnectedness and interdependence of the various factors that influence human experience and outcomes. In the context of entrepreneurial coaching, the AQAL model promotes a holistic approach. It emphasises that coaching should not focus on a single factor to achieve success, because human experience is multifaceted and influenced by multiple dimensions. This integral approach is central to the philosophy of the AQAL model and extends to factors such as social and cultural influences, observable behaviours, and the broader socioeconomic context. This comprehensive viewpoint aligns with the core principles of the AQAL model, emphasising the need for a holistic understanding of the complex dynamics at work in entrepreneurial coaching. In summary, the revised conceptual model is justified by its adherence to the AQAL model's holistic approach, which encourages a more thorough and inclusive examination of the diverse elements that contribute to entrepreneurial coaching success. This alignment ensures that the model captures the integral nature of coaching and provides a more comprehensive analysis framework.
CONCLUSION
This study aimed to fill a knowledge gap regarding the mediating effects of business growth antecedents on entrepreneurial outcomes. The research questions, conceptual model, methodology, quantitative findings, and revised conceptual model were discussed. The study found that entrepreneurial coaching is directly related to entrepreneurial self-efficacy, entrepreneurial resilience, vision, and relationship building. It also moderates the link between relationship-building abilities and business growth. Relationship building was the only significant direct predictor of business growth, emphasising the importance of coaching interventions. Policymakers are urged to use the proposed framework to professionalise and differentiate entrepreneurial coaching from other types of coaching. Some limitations were observed in the study. The study relied on cross-sectional data to provide a view of current entrepreneurs, coaches, and programme managers, together with the opinions of industry experts. This limited the study's ability to identify causal links across factors. The findings and conclusions may not generalise to business entities that have passed the important three-year mark, at which the small-business failure rate is reported to decrease. The findings indicate that a longer-term study based on Integral theory's AQAL model would allow further validation of the findings through the model's practical implementation, which time and budget restrictions prevented in this study.
More studies are needed to holistically examine the effectiveness of entrepreneurial coaching, its outcomes, and business growth. Future studies should focus on the shortcomings and challenges in the competency of entrepreneurial coaches in South Africa to identify effective strategies for addressing them. This can help to design an appropriate competency framework for entrepreneurial coaching programmes that can be comprehensively integrated into a broad range of curricula and agendas within the existing entrepreneurial coaching field in South Africa. It can also assist in standardising practice and supporting procurers of entrepreneurial coaching services in business incubators (BIs) by monitoring and evaluating those services. There is limited research on the influence of entrepreneurial coaching on small enterprises in the post-incubation period. Longitudinal studies should examine entrepreneurs who have been coached while incubated and are in the post-incubation stage, comparing them with those who have not received entrepreneurial coaching in BIs. Such a study might benefit from researching the question: "To what extent does entrepreneurial coaching impact post-incubation business growth?" Future research could also benefit from using qualitative approaches to gain a more in-depth understanding of entrepreneurs' experiences with the entrepreneurial coaching process. Quantitative research with a larger sample of entrepreneurial coaches, BI programme managers, and industry experts would be valuable in determining success and growth trends and in continually testing and updating entrepreneurial coaching frameworks so they remain relevant.
Figure 2. Step Four of Mediation Tests: Conceptual Model. EC - Entrepreneurial Coaching, SE - Entrepreneurial Self-Efficacy, ER - Entrepreneurial Resilience, Vis - Being Visionary, BR - Building Relationships, BG - Business Growth. Source: Processed by Author.
Figure 3. Revised Conceptual Model. Source: Created by Author.
Figure 3 depicts partial mediation effects in the revised conceptual model. The findings show that self-efficacy, along with entrepreneurial resilience, contributes to business growth, while entrepreneurial resilience combined with vision, and vision combined with relationship building, also contribute to business growth. These findings are consistent with Wilber's (2000) AQAL model, which emphasises the influence of individual thoughts (entrepreneurial self-efficacy) on individual behaviour (entrepreneurial resilience), which in turn influences collective thoughts and culture (being visionary), ultimately influencing collective behaviour (building relationships). Addressing the individual interior, such as an entrepreneur's thoughts and beliefs about themselves and their entrepreneurial abilities (e.g., self-efficacy), is critical for stimulating business growth in the South African context. However, concentrating solely on individual factors may not produce the desired results. Gielnik et al. (2020) emphasise the importance of self-efficacy in influencing behaviour and cognition. By incorporating the AQAL model, the revised conceptual model recognises that entrepreneurial coaching should consider not only individual factors such as mindset or self-efficacy, but also external, contextual factors.
Table 1. Cronbach's Alpha for the Sub-scales
Table 2. Demographics and Sample Characteristics

Tests of Mediating Effects of Variables on Business Growth
This study sought to investigate the mediating effects of entrepreneurial coaching on self-efficacy, entrepreneurial resilience, visionary thinking, relationship building, and business growth in South Africa. Path analysis, a type of multiple regression, was used to test the research hypotheses and causal relationships.
Table 3. Results of Tests of Mediation: Conceptual Model

Values greater than 0.90 for CFI and TLI and less than 0.08 for RMSEA indicate an acceptable model fit; cut-off values of 0.95 for TLI and CFI and 0.06 for RMSEA are required for a reasonably good fit. The initial conceptual model fit poorly and was modified based on modification indices, resulting in the deletion of ten indicators. To maintain construct validity, cross-loadings were not allowed to be freed. Compared to the conceptual model, the revised model had better fit indices. The chi-square test and fit indices confirmed the revised model's superiority: the revised model had a chi-square value of 0.000, a TLI of 1.006, an RMSEA of 0.000, and a CFI of 1.000, all indicating a good fit for the data.
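Where these cut-offs are applied programmatically, the decision rule amounts to a simple threshold check. The helper below is an illustrative sketch (not code from the study); the threshold values are the ones quoted above.

```python
def assess_model_fit(cfi: float, tli: float, rmsea: float) -> str:
    """Classify SEM model fit using the cut-offs quoted in the text."""
    if cfi >= 0.95 and tli >= 0.95 and rmsea <= 0.06:
        return "reasonably good fit"
    if cfi >= 0.90 and tli >= 0.90 and rmsea <= 0.08:
        return "acceptable fit"
    return "poor fit"

# Revised model values reported in Table 3
print(assess_model_fit(cfi=1.000, tli=1.006, rmsea=0.000))  # reasonably good fit
```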
Tests of the Conceptual Model
Model fit was evaluated using chi-square, normed χ²/df values, and model fit indices (CFI, TLI, RMSEA). Previous research informed the recommended cut-off values for these indices.
"year": 2023,
"sha1": "ac04a8aacf00d1ed5e3e632172fc6a960a2e56d5",
"oa_license": "CCBY",
"oa_url": "https://journal.unisnu.ac.id/jmer/article/download/2023.12.04.2-41/376",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "728b19a475ea7e6c4956e58bf4f6dfef09f8e5bb",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
Vitamin D Modulates Hematological Parameters and Cell Migration into Peritoneal and Pulmonary Cavities in Alloxan-Diabetic Mice
Background/Aims. The effects of cholecalciferol supplementation on the course of diabetes in humans and animals need to be better understood. Therefore, this study investigated the effect of short-term cholecalciferol supplementation on biochemical and hematological parameters in mice. Methods. Male diabetic (alloxan, 60 mg/kg i.v., 10 days) and nondiabetic mice were supplemented with cholecalciferol for seven days. The following parameters were determined: serum levels of 25-hydroxyvitamin D, phosphorus, calcium, urea, creatinine, alanine aminotransferase, aspartate aminotransferase, alkaline phosphatase, red blood cell count, white blood cell count (WBC), hematocrit, hemoglobin, differential cell counts of peritoneal lavage (PeL), and bronchoalveolar lavage (BAL) fluids and morphological analysis of lung, kidney, and liver tissues. Results. Relative to controls, cholecalciferol supplementation increased serum levels of 25-hydroxyvitamin D, calcium, hemoglobin, hematocrit, and red blood cell counts and decreased leukocyte cell counts of PeL and BAL fluids in diabetic mice. Diabetic mice that were not treated with cholecalciferol had lower serum calcium and albumin levels and hemoglobin, WBC, and mononuclear blood cell counts and higher serum creatinine and urea levels than controls. Conclusion. Our results suggest that cholecalciferol supplementation improves the hematological parameters and reduces leukocyte migration into the PeL and BAL lavage of diabetic mice.
Introduction
Type 1 diabetes mellitus (T1D) is an autoimmune disorder that causes destruction of pancreatic β cells and insulitis, resulting in loss of insulin secretion [1].
Anemia is defined as a decrease in the amount of hemoglobin (Hb) or the number of red blood cells (RBC) in the blood. It is a common complication in patients with diabetic kidney disease and it increases mortality in diabetic individuals, but its mechanism of action is still not clear [9].
Albumin is the major plasma protein and plays a critical role in maintaining tissue oncotic pressure and in the transport of substances such as vitamin D into the bloodstream. Studies have suggested that hepatic albumin synthesis and secretion are reduced in diabetic rats [10].
Vitamin D is a hormone obtained from the diet and/or by endogenous synthesis. The vitamin D molecule is hydroxylated twice, once in the liver by the enzyme 25-hydroxylase and once in the kidney by the enzyme 1α-hydroxylase, to form calcitriol (1,25(OH)2D), which is the form of vitamin D that interacts with the vitamin D receptor (VDR) to perform its biological activity [7].
The classic activity of calcitriol is to regulate bone metabolism by maintaining a balance between calcium (Ca) and phosphorus (P) concentrations [8]. Studies have reported that vitamin D may affect the course of T1D by immunomodulation [11]. Although the mechanism of action is still unknown, serum 1,25(OH)2D levels appear to modulate the level of systemic cytokine production and to increase white blood cell (WBC) count [12,13].
Although vitamin D supplementation is an important aid in clinical practice, few studies have demonstrated the effects of vitamin D supplementation on the course of T1D in humans and animals. In addition, vitamin D supplementation studies are often contradictory due to the lack of precise information on regimen and dosage protocols [14].
Thus, using a well-established mouse model of T1D, we investigated the effect of vitamin D supplementation on histological, biochemical, and hematological parameters and on bronchoalveolar lavage (BAL) and peritoneal lavage (PeL) fluids in male (C57BL/6) diabetic and nondiabetic mice. The hypothesis is that vitamin D supplementation improves the hematological parameters and reduces inflammatory cell migration into the PeL and BAL fluid of diabetic mice.
Animals.
Thirty-one specific pathogen-free male C57BL/6 mice weighing 25 ± 2 g at baseline were used. The animals were maintained at 22 °C under a 12 h light-dark cycle. Food and water were provided ad libitum before and during the experimental period. This study was conducted in strict accordance with the principles and guidelines of the National Council for the Control of Animal Experimentation (CONCEA) and approved by the Ethics Committee on Animal Use (CEUA) at the School of Pharmaceutical Sciences (FCF), University of São Paulo, Brazil (protocol number: CEUA/FCF/389). Surgery was performed under ketamine/xylazine anesthesia and all efforts were made to minimize animal suffering.
Induction of Diabetes Mellitus.
Animals were separated into four groups: control (C), control supplemented with vitamin D (CV), diabetic (D), and diabetic supplemented with vitamin D (DV). Briefly, diabetes mellitus was induced by intravenous injection of 60 mg/kg alloxan monohydrate (ALX) (Sigma Chemical Co., St. Louis, MO, USA) dissolved in physiological saline (0.9% NaCl). Control mice were injected with physiological saline only. Ten days after diabetes induction, body weight was measured and the peripheral blood (tail vein) glucose level was determined using an Accu-Chek Advantage II blood glucose monitor (Roche Diagnóstica, São Paulo, SP, Brazil). Animals were considered diabetic if the glucose level was above 300 mg/dL [15].
Vitamin D Supplementation.
Animals in the CV and DV groups were supplemented orally with 800 IU of vitamin D (Sanofi Aventis, São Paulo, SP, Brazil) for seven days [14]. The last dose of vitamin D was administered 24 h before the experiment to minimize stress-induced endocrine changes caused by oral administration of the hormone [16]. Body weight and blood glucose were measured on the first, fourth, and seventh day of the supplementation period.
Blood Collection.
Animals were anesthetized by an intraperitoneal injection of 10 mg/kg xylazine hydrochloride (Ceva Santé Animale, Paulínia, SP, Brazil) and 90 mg/kg ketamine hydrochloride (Ceva Santé Animale). Blood was collected from anesthetized animals by cardiac puncture. Samples of EDTA-anticoagulated blood were used to determine the following hematological parameters: red blood cell (RBC) count, white blood cell (WBC) count, hematocrit (Htc), and hemoglobin (Hb). All analyses were performed using an ABC Vet veterinary hematology analyser (HORIBA, UK). Differential cell counts were determined on stained slides under oil immersion microscopy. Based on morphological criteria, 100 cells per sample were counted and classified as either mononuclear or polymorphonuclear. A sample aliquot without anticoagulant was centrifuged (10 min, 3500 rpm) at room temperature. Next, the serum was separated and used for measuring ionic calcium (Ca), phosphorus (P), albumin, alanine transaminase (ALT), aspartate aminotransferase (AST), alkaline phosphatase (ALP), urea, and creatinine by colorimetric assay according to the manufacturer's protocol (LabTest Diagnóstica, Lagoa Santa, MG, Brazil). Serum 25(OH)D was measured with a EUROIMMUN 25-OH-Vitamin D ELISA kit (Luebeck, Germany). Serum samples were stored at −80 °C.
Peritoneal and Bronchoalveolar Lavage.
The abdomen of all mice was exposed through a ventral midline incision. Peritoneal lavage (PeL) was performed by instillation of 10 mL (two 5 mL infusions) of phosphate buffered saline (PBS; 137 mM NaCl, 2.7 mM KCl, 4.3 mM Na2HPO4, and 1.47 mM KH2PO4; pH 7.4) at room temperature [17]. The trachea was exposed through a midline ventral incision in the neck. Bronchoalveolar lavage (BAL) was performed by instillation of 5 mL (five 1 mL infusions) of PBS at room temperature through a 16 G × 1.88 BD Angiocath polyethylene tube (BD, Franklin Lakes, NJ, USA) inserted into the trachea. Next, PeL and BAL samples were centrifuged (1500 rpm, 10 min, 4 °C), supernatants were discarded, and the cells were resuspended in PBS (1 mL). The cell suspension was diluted 1:2 (v:v) with Turk's solution. Total cell counts were determined using a Neubauer chamber, and differential cell counts were examined on stained slides under oil immersion microscopy. A total of 100 cells were counted and classified as neutrophils, lymphocytes, or macrophages based on morphological criteria.
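For the total-count step, the standard Neubauer haemocytometer conversion (mean cells per large square × dilution factor × 10⁴ = cells/mL) can be written as a small helper. The function below is an illustrative sketch, not code from the study.

```python
def cells_per_ml(counts_per_square: list[float], dilution: float = 2.0) -> float:
    """Neubauer chamber estimate: mean count per large square x dilution x 1e4.

    Each large square holds 0.1 uL of suspension, hence the 1e4 factor
    needed to scale the count up to cells per mL.
    """
    mean_count = sum(counts_per_square) / len(counts_per_square)
    return mean_count * dilution * 1e4

# Example: four large squares counted on a 1:2 diluted PeL sample
print(cells_per_ml([42, 38, 45, 40]))  # 825000.0 cells/mL
```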
Tissue Extraction and Histological Analysis.
Kidney, liver, and lung samples were extracted and subsequently fixed in formaldehyde solution (10%). After fixation, tissues were dehydrated in increasing ethanol concentrations (70-100%), diaphanized in xylol, and embedded in paraffin. Transverse sections (5 μm) obtained after inclusion were stained with hematoxylin and eosin (H/E). After staining, the material was dehydrated, diaphanized, and mounted with Entellan. Slides containing the tissue were observed under a light microscope (Nikon Eclipse 80i, Tokyo, Japan) and photographed using the NIS-Elements AR imaging software (Nikon).
Data and Statistical Analysis.
The data were analysed by analysis of variance (ANOVA) followed by the Tukey-Kramer multiple comparisons test (when appropriate) or by an unpaired t-test using GraphPad Prism 6.0 software (La Jolla, CA, USA). A two-tailed p value with a 95% confidence interval was computed. Data are presented as mean ± standard deviation (SD). Differences were considered significant when p < 0.05.
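As a rough open-source equivalent of this analysis pipeline (the study itself used GraphPad Prism), the group comparison could be sketched as below. The group values are illustrative placeholders; statsmodels' Tukey HSD routine applies the Tukey-Kramer adjustment for unequal group sizes.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Illustrative placeholder data for the four groups (C, CV, D, DV)
rng = np.random.default_rng(0)
groups = {
    "C": rng.normal(10.0, 1.0, 9),
    "CV": rng.normal(10.2, 1.0, 9),
    "D": rng.normal(7.5, 1.0, 6),
    "DV": rng.normal(9.5, 1.0, 7),
}

# One-way ANOVA across all four groups
f_stat, p_value = f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Post-hoc pairwise comparisons when the ANOVA is significant
if p_value < 0.05:
    values = np.concatenate(list(groups.values()))
    labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
    print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```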
Analysis of Vitamin D Supplementation.
Animals supplemented with cholecalciferol (CV and DV groups) exhibited higher serum 25(OH)D (ng/mL) levels than their respective controls (C and D groups). These results indicate that vitamin D supplementation was successful (Figure 1(a)). Diabetic mice had lower serum Ca (mg/dL) levels than nondiabetic animals. However, vitamin D supplementation restored serum Ca levels in the DV group (Figure 1(b)).
Serum phosphorus (mg/dL) levels were significantly higher in vitamin D-treated nondiabetic mice compared to nondiabetic controls (Figure 1(c)).
Temporal Variation of Blood Glucose Levels and Body Weight.
Body weight gain and blood glucose levels were measured during the supplementation period. Body weight was measured on the first and seventh days of the experiment. There was no significant difference in body weight variation between vitamin D3-supplemented animals (CV, 0.1 ± 0.3 g, n = 9; DV, −1.3 ± 1.0 g, n = 7) and their respective controls (C, 0.3 ± 0.3 g, n = 9; D, −1.1 ± 0.9 g, n = 6). Blood glucose was measured on the first, fourth, and seventh day of the experimental period (Figure 1(j)). There was no significant difference in glycemic evolution between vitamin D3-supplemented (CV and DV) groups and controls (C and D groups). However, blood glucose levels remained significantly higher in diabetic mice (D and DV groups) than in controls (C and CV).
Vitamin D3 Supplementation Improved Hematological Parameters.
Diabetic mice had lower RBC, Hb, Htc, WBC, and peripheral blood mononuclear cell counts than controls (Figure 2). Diabetic mice supplemented with vitamin D3 had higher RBC, Hb, Htc, WBC, and mononuclear cell counts than unsupplemented diabetic mice. In addition, diabetic mice supplemented with vitamin D3 had higher Hb and hematocrit levels than nondiabetic mice supplemented with vitamin D3 (Figure 2).
Vitamin D3 Supplementation in the Kidneys and Liver.
Diabetic mice had significantly higher serum levels of urea (Figure 1(d)) and creatinine (Figure 1(e)) than controls. In addition, diabetic mice had a significant reduction in serum albumin concentration compared to controls (Figure 1(f)). However, there were no significant differences in serum levels of ALP, AST, and ALT (Figures 1(g)-1(i)) between vitamin D3-treated mice and their respective controls. Compared to controls, diabetic mice exhibited a thickening of the Bowman's capsule; however, vitamin D supplementation did not restore this renal parameter (Figure 3). No significant differences in liver (Figure 4) and lung (Figure 5) morphology were observed between vitamin D3-supplemented animals (CV and DV) and their respective controls.
Cell Composition of Bronchoalveolar Lavage and Peritoneal Lavage Fluids.
The PeL fluid of vitamin D3-supplemented mice had 4.9-fold more neutrophils than that of the respective controls, but no significant difference in total cell counts was observed across groups. Total leukocyte counts in the PeL fluid were reduced by 47% in vitamin D3-treated diabetic mice compared with untreated diabetic mice. Similarly, a 5.7-fold decrease in total leukocyte count was observed in the BAL fluid of vitamin D3-treated diabetic mice (Table 1).
Discussion
Alloxan (ALX) is an effective diabetes-inducing agent widely used in experimental animal models. ALX administration produces hyperglycemia, which leads to polyuria and weight loss [18]. In our study, blood glucose and body weight were measured 10 days after ALX administration. During the experimental period, diabetic mice exhibited lower body weight gain and hyperglycemia than nondiabetic mice. Both metabolic alterations may occur due to increased lipolysis and oxidative degradation of amino acids that increase tissue energy expenditure and body weight loss [19].
Few studies have demonstrated the effects of vitamin D3 supplementation on T1D in animal models. Nevertheless, results are controversial due to the use of different animal strains, protocols, doses, periods, and routes of supplementation. In our study, diabetic and nondiabetic mice supplemented with 800 IU (40,000 IU/kg body weight/day) of vitamin D3 had higher serum 25(OH)D levels than controls. According to Takiishi et al. (2014), this dose causes no significant alterations in serum P or Ca levels in nonobese diabetic (NOD) mice [14]. The role of vitamin D3 in body weight control is controversial. The presence of 1α-hydroxylase and VDR in adipose tissue cells suggests the involvement of vitamin D in body weight regulation [20]. Weight loss has been associated with Ca metabolism, which is regulated by vitamin D3. The increase in Ca availability enhances oxidation, promotes apoptosis of adipocytes, and reduces the absorption of lipids due to the formation of insoluble molecules in the intestine [21]. However, Takiishi et al. found no difference in body weight variation between vitamin D3-treated NOD mice and controls [14]. Similarly, in our study, there was no significant difference in body weight variation between vitamin D3-treated mice and controls during the supplementation period.
The effects of vitamin D3 supplementation on glycemic control in animal models are controversial. Relative to controls, NOD mice treated with high doses of calcitriol (5 μg/kg on alternate days) showed lower insulitis and a reduction in T cell counts [22]. However, vitamin D supplementation had little effect in reverting overt diabetes in streptozotocin-induced diabetic mice [23]. In the current study, vitamin D supplementation did not reduce blood glucose levels in diabetic mice compared to controls.
The major role of vitamin D3 in the body is to regulate the concentrations of Ca and P [8]. Few studies have focused on how vitamin D and its analogues may affect Ca homeostasis in T1D [24,25]. In addition, diabetic individuals can exhibit low serum Ca and P levels due to renal disorders [26]. In the current study, serum concentrations of these ions were measured. Diabetic mice had lower serum Ca levels than controls, and vitamin D3 supplementation increased these levels in the former. High 1,25(OH)2D levels stimulate bone turnover, renal excretion, and intestinal Ca absorption [7]. However, serum P levels in our study were significantly higher in vitamin D3-treated nondiabetic mice than in controls. There was no significant difference in P levels between vitamin D3-treated and control diabetic mice. Vitamin D is transported by albumin and vitamin D-binding protein in the bloodstream to the liver and kidneys, where it is activated [7,8]. In the current study, liver enzymes (ALP, ALT, and AST) were measured to assess organ dysfunction in diabetic and nondiabetic mice. The concentrations of serum ALT, AST, and ALP were not significantly different between mice supplemented with vitamin D and controls. Measuring serum urea and creatinine levels is an alternative approach to estimate renal function [27]. In our study, diabetic mice showed low albumin levels, high serum levels of both creatinine and urea, and a thickening of the Bowman's capsule compared to nondiabetic mice. These results may indicate renal failure [4]. However, vitamin D supplementation did not restore these renal parameters.
Alterations in hematological parameters are common in diabetes patients [28]. Hyperglycemia-induced oxidative stress in diabetic conditions leads to functional and morphological alterations of the erythrocyte membrane [29]. The increased nonenzymatic glycosylation of RBC membrane proteins and hyperglycemia could be responsible for the low Hb levels in these patients [30]. Hyperglycemia and oxidation of these proteins increase the production of lipid peroxides that lead to hemolysis of RBC [31]. In our study, diabetic mice had lower RBC counts and Hb and Htc levels than controls. Conversely, vitamin D3-treated diabetic mice showed RBC counts and Htc and Hb levels similar to those of nondiabetic mice. These findings suggest that vitamin D may affect erythropoiesis. Some studies have shown that VDR is expressed in the bone marrow by specific cell subsets such as stromal and accessory cells [32]. In addition, Aucella et al. reported that patients with chronic kidney disease undergoing hemodialysis showed a significant increase in Hb concentration and Htc level after four months of treatment with vitamin D [33]. Increased WBC counts are common in acute and chronic diabetes complications [34]. However, Matough et al. observed lower WBC counts in diabetic mice as compared to nondiabetic animals [30]. In our study, diabetic mice showed lower mononuclear cell and WBC counts than control mice. Conversely, vitamin D3 treatment restored mononuclear cell and WBC counts in diabetic mice to levels similar to those of nondiabetic mice. Interestingly, vitamin D3-treated diabetic mice showed a reduction in pulmonary and peritoneal leukocyte counts compared to untreated diabetic animals. Some studies have shown that VDR expression on leukocyte subsets [35] could increase leukocyte migration from tissues to peripheral blood.
Taken together, our results suggest that vitamin D supplementation may improve hematological parameters and reduce cell counts of BAL and PeL fluids during the course of diabetes.
"year": 2017,
"sha1": "e2643b8a1a124739333110ff9c3daf52c18e2c89",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/bmri/2017/7651815.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "745052afb98dffe31d304427299babd92a8fd53f",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
New universal primers for genotyping and resistance detection of low HBV DNA levels
Abstract. HBV (hepatitis B virus) genotyping is important in determining the clinical manifestation of disease and treatment response, particularly in patients with low viral loads. Sensitive detection of HBV antiviral drug resistance mutations is also essential for monitoring therapy response. A sensitive direct sequencing method for genotyping and drug resistance mutation detection at low levels of HBV DNA in patients' plasma was developed by PCR amplification of the DNA with novel universal primers. The novel, common, and universal primers were identified by alignment of the RT region of all the HBV DNA sequences in databases. These primers could efficiently amplify the RT region of HBV at low DNA levels; directly sequencing the resulting PCR products and mapping them to the reference sequence made it possible to clearly obtain the HBV subtypes and identify the resistance mutations in samples with an HBV DNA level as low as 20 IU/mL. We examined the reliability of the method in clinical samples, and found it could detect the HBV subtypes and drug resistance mutations in 80 clinical HBV samples with low HBV DNA levels ranging from 20 to 200 IU/mL. This method is a sensitive and reliable direct sequencing method for HBV genotyping and antiviral drug resistance mutation detection, and is helpful for efficiently monitoring the response to therapy in HBV patients.
Introduction
Hepatitis B virus (HBV) is one of the most serious and prevalent health problems, affecting >2.4 billion people worldwide. [1] There are >350 million chronic HBV carriers, 75% of whom reside in the Asia-Pacific region, especially in China. People with hepatitis B are at an increased risk of developing hepatic decompensation, cirrhosis, and hepatocellular carcinoma (HCC). Patients with decompensated cirrhosis have a poor prognosis, with a 14% to 35% probability of surviving 5 years. [2,3] The estimated worldwide mortality caused by HBV infection is about 780,000 deaths a year.
Many drugs have been approved for the treatment of chronic hepatitis B. Antiviral therapy is an efficient way to prevent bad clinical outcomes of HBV infection. It has been shown to be effective in suppressing HBV replication, decreasing inflammation and fibrosis in the liver, and preventing progression of liver disease. [4] Currently, there are 2 types of anti-HBV drugs: interferon-alpha (IFN-α) and nucleoside analogs (NAs), [5,6] such as lamivudine (LAM), [7-10] telbivudine (LdT), [11] and entecavir (ETV). [12] The clinical efficacy of the treatment depends on various factors, such as severity of liver disease, serum ALT levels, and serum HBV DNA levels. The AASLD and APASL guidelines recommend antiviral therapy for patients with compensated cirrhosis and a serum HBV DNA level >2000 IU/mL regardless of ALT level. The EASL guideline recommends treatment of patients with any detectable level of serum HBV DNA. [13] Low levels of serum HBV DNA occur in patients during drug treatment, which is considered to be one of the indicators of drug resistance in HBV patients. [14,15] Drug treatment not only reduces HBV loads to a low level, but also results in resistance mutations in the patients. [14-16] Therefore, it is very important to develop a sensitive and clinically applicable method for genotyping and resistance detection of low-level HBV DNA in the plasma of patients.
Many quantitative real-time PCR (qPCR) methods have been developed to detect HBV DNA concentrations <200 IU/mL, even at 20 IU/mL, such as the COBAS TaqMan HBV Test, v2.0 (Roche Molecular Systems Inc.). However, the molecular methods for the analysis of HBV subtype or resistance are currently based on common PCR, the sensitivity of which is only ≥200 IU/mL of HBV viral load. [14,15] Detection by common PCR methods often fails for HBV viral loads between 100 and 200 IU/mL, and the repeatability of these methods is poor. Here, we report a direct sequencing methodology with newly identified common and universal primers for subtyping and resistance detection at low viral loads (from 20 to 200 IU/mL).
Ethics statement
This study was approved by the Institutional Review Board of Renmin Hospital, Wuhan University School of Medicine. Written informed consent was obtained from each participant in accordance with the Ethics Committee of the Renmin Hospital of Wuhan University.
Patients and plasma preparations
A total of 90 samples of 3 to 5 mL EDTA-anticoagulated peripheral blood were obtained from individuals for quantitative real-time PCR (qPCR) with the COBAS TaqMan HBV Test kit (Roche, Switzerland). HBV infection was confirmed in the Department of Infectious Diseases, Renmin Hospital of Wuhan University, and patients were treated with NAs, including lamivudine, adefovir, entecavir, tenofovir, or/and famciclovir, for >1 year. The HBV DNA titers for each patient were determined by qPCR as reported previously [16] (data not shown). The HBV DNA titers of 80 of the 90 patients were from 20 to 200 IU/mL, and those of the remaining 10 patients were from 200 to 10⁸ IU/mL in the peripheral blood. The patients' age ranged from 22 to 67 years (median, 42.9 years), and 43 patients had a history of interferon therapy for more than half a year. All samples were centrifuged for 5 minutes at 3000g, and the supernatants were collected and stored at −70 °C.
HBV DNA extraction
DNA was extracted from plasma samples with the UltraSens Virus Kit (QIAamp, Germany, Cat. No. 53706) using the designated buffers and reagents in the kit and following the manufacturer's manual. Briefly, 0.8 mL Buffer AC and 5.6 μL of carrier RNA solution were pipetted onto the top of 1 mL plasma. After the solutions were mixed and incubated at room temperature for 10 minutes, they were centrifuged and the supernatant was discarded. Next, 300 μL Buffer AR and 20 μL proteinase K were added and vortexed. Buffer AB (300 μL) was added, mixed thoroughly by vortexing, and transferred to a QIAamp spin column. The solutions were centrifuged once again and the tube containing the filtrate was discarded. The silica pellet was washed with 500 μL Buffer AW1 and 500 μL Buffer AW2, respectively. The nucleic acids were eluted in 30 μL Buffer AVE and stored at −70 °C.
Sanger sequencing
The purified amplification products were sequenced with an ABI PRISM BigDye 3.1 terminator cycle sequencing kit (Applied Biosystems). The upstream sequencing primer was the same as F1, and the downstream primer was the same as R1.
The common primers designed for amplification of almost all HBV subtypes
The RT region of HBV genomic DNA sequences was extracted from the NCBI HBV database (http://www.ncbi.nlm.nih.gov/projects/genotyping) and the HBV RT region database of Stanford University (http://hivdb.stanford.edu/HBV/releaseNotes/), respectively (Fig. 1). Phylogenetic analysis was used to identify the HBV subtypes. It could distinguish the subtype for each RT region sequence of the Stanford HBV database (Fig. 2A), similar to the NCBI HBV genomic database (Fig. 2B). These data indicated that the RT region sequence could be used for clinical HBV genotyping. Therefore, we aligned the sequences of the RT region from all the HBV subtypes with Vector NTI 11.0 software (Invitrogen, Carlsbad, CA) to design primers that could cover almost all HBV subtypes. The identified common primers (F1 and R1), shown in the red frame, could be used to amplify the RT region of the 23 HBV DNA subtypes from the HBV DNA database (Fig. 2C). We also designed the primers F2 and R2 as universal primers, which do not amplify any HBV DNA sequence, human genomic DNA sequence, or other nucleotide sequence. F2 and R2 were further linked to the ends of F1 and R1 to construct another pair of universal primers, F and R, respectively (Fig. 1B).
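The conserved-site search underlying such common-primer design can be illustrated with a small, library-free sketch (a hypothetical helper, not the authors' actual pipeline): given pre-aligned RT-region sequences, it reports windows that are identical across all subtypes and are therefore candidate universal primer sites.

```python
def conserved_windows(aligned_seqs: list[str], min_len: int = 18) -> list[tuple[int, str]]:
    """Return (position, sequence) of fully conserved windows of >= min_len bases.

    Assumes all sequences are pre-aligned and of equal length; gap characters
    ('-') break conservation.
    """
    ref = aligned_seqs[0]
    conserved = [all(s[i] == ref[i] != "-" for s in aligned_seqs) for i in range(len(ref))]
    windows, start = [], None
    for i, ok in enumerate(conserved + [False]):  # sentinel flushes the last run
        if ok and start is None:
            start = i
        elif not ok and start is not None:
            if i - start >= min_len:
                windows.append((start, ref[start:i]))
            start = None
    return windows

# Toy example with three short "aligned" sequences
seqs = ["ATGCATGCATGCATGCATGCAA",
        "ATGCATGCATGCATGCATGCTA",
        "ATGCATGCATGCATGCATGCGA"]
print(conserved_windows(seqs, min_len=20))  # [(0, 'ATGCATGCATGCATGCATGC')]
```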
The specificity of the designed primers
HBV DNA was extracted from the plasma of 10 HBV patients with HBV DNA levels from 200 to 10⁸ IU/mL. First, the primer pair F1/R1 was used to amplify the RT region in the 10 samples. The PCR products obtained with the F1/R1 primers had the correct size of about 1200 bp, as shown in Figure 3A. The resulting gel-purified PCR products were sequenced with the primer F1 or R1. A representative sequence is shown in Figure 3B. The resulting sequence was used for HBV subtyping by BLAST sequence comparison with the online NCBI HBV database, and representative data are shown in Figure 3C. We then used the primer pair F/R to amplify the RT region; the correct size of the PCR products for this pair of primers was about 1300 bp, as shown in Figure 3D. These PCR products were sequenced with the primer F2 or R2, but not F1 or R1, as shown in Figure 3E; the resulting sequence was blasted online to identify the subtypes, as representatively shown in Figure 3F. These data showed that the two pairs of primers could efficiently amplify the RT region of HBV. Moreover, the HBV DNA sequence amplified with the primer pair F1/R1 (Fig. 3B) was consistent with that obtained with the primer pair F/R (Fig. 3E), and the resulting HBV subtypes with the two pairs of primers (F1/R1 and F/R) were also identical (Figures 3C and 3F).
Sensitivity of the identified primers for PCR amplification
To examine the sensitivity of the identified primers, HBV DNA was serially diluted from 10⁵ to 10 IU/mL, and the primers were also serially diluted from 10 to 0.1 pmol/L. We tested the sensitivity of the PCR amplification for the primer pair F/R alone (FR only) and for the two pairs of primers F/R plus F2/R2 (FR + F2R2), respectively. The results showed that FR alone could only amplify HBV DNA >200 IU/mL at a final primer concentration of 0.5 pmol/L F and 0.5 pmol/L R (Fig. 4A). The FR + F2R2 primers could amplify HBV DNA at 20 IU/mL with each of the primers (F, R, F2, and R2) at a final concentration of no less than 0.5 pmol/L (Fig. 4B). With the FR + F2R2 primers, HBV DNA could be detected at levels as low as 20 IU/mL even when the concentration of the F or R primer was diluted to 0.005 pmol/L with 0.5 pmol/L F2 and 0.5 pmol/L R2 (Fig. 4C). The PCR products in Fig. 4C were purified and sequenced, and all of the sequences from a to f were identical (Fig. 4D). These data indicated that the designed primers, particularly the mixed primers (FR + F2R2), could efficiently amplify the RT region of HBV DNA with extremely high sensitivity.
Reliability of HBV subtyping and resistance mutation detection in clinical samples
HBV DNA was extracted from the plasma of 80 patients with low-load HBV DNA infection. The RT region of HBV was PCR amplified with the FR + F2R2 primers at final concentrations of 0.025 pmol/L for F and R and 0.5 pmol/L for F2 and R2. The primers successfully amplified the RT region with the correct size in all the samples; eight representative PCR products are shown in Figure 5A. After sequencing the purified PCR products for all the samples with low levels of HBV DNA, the accurate subtype for each sample was identified by mapping to NCBI reference sequences through BLAST sequence comparison with the online HBV database. Figure 5B shows representative mapped genotypes of B, C, D, and mixed B/C. The HBV mutations in the samples were also analyzed by comparing their sequences with NCBI reference sequences. Common resistance mutations such as rtI169, rtV173, rtL180, rtA181, rtT184, rtA194, rtS202, rtM204, rtN236, and rtM250 were found in the samples; some rare mutations, such as rtL80, rtV84, rtV214 and rtQ215, were also identified with this method (Fig. 5C). We identified 11.25% (9/80) HBV subtype B, 85.0% (68/80) HBV subtype C, 1.25% (1/80) HBV subtype D, and 2.5% (2/80) mixed HBV subtype B/C in the 80 clinical samples (Fig. 6A). The detected mutation frequencies for the most common mutations, rtM204 and rtL180, were 35.5% and 29.71%, respectively (Fig. 6B), whereas those for the rare mutations rtV214 and rtQ215 were 2.90% and 2.17%, respectively (Fig. 6B).
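The genotype and mutation frequencies reported above amount to a simple tally over per-sample calls; an illustrative sketch of that bookkeeping (the calls below are placeholders reproducing the reported counts) is:

```python
from collections import Counter

# Placeholder per-sample genotype calls (the study reported 9 B, 68 C, 1 D, 2 B/C)
genotype_calls = ["B"] * 9 + ["C"] * 68 + ["D"] * 1 + ["B/C"] * 2

counts = Counter(genotype_calls)
total = len(genotype_calls)
for genotype, n in counts.items():
    print(f"subtype {genotype}: {n}/{total} = {100 * n / total:.2f}%")
```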
Discussion
HBV genotypes are important for the clinical manifestation of disease and treatment response of the patients, particularly those with low HBV loads. Also, sensitive detection of HBV-DNA resistance is essential for monitoring response to therapy. So far there have no common PCR methods available for HBV subtyping or resistance detection for low levels of HBV DNA. We developed a sensitive method in this study for efficiently genotyping and detecting resistance mutations in patients with low levels of HBV DNA (20 to 200 IU/mL). Our result showed that this method is a precise and reliable method for clinical application. This method is characterized by the designed common and universal primers. We identified the common primer F1R1 that could cover all HBV subtypes in the databases. We also designed the universal F2R2 primers, and another universal primers FR by linking F2 and R2 to F1 and R1 primers, respectively. The new designed common primers F1R1 and FR could amplify all 23 HBV subtypes from databases and clinical samples. The F2R2 universal primer could be used as sequence primer for all amplified subtypes in the patients' samples. This pair of primers was also able to amplify all HBV subtypes if mixed with FR together, which increases the sensitivity of PCR amplification. Therefore, FR + F2R2 have the highest sensitivity for amplification, and we found that they can detect plasma HBV DNA as low as 20 IU/mL. Antiviral treatment is effective to suppress HBV replication, decreasing the level of serum HBV DNA in the patients. However, drug treatment also induces resistance mutations in the patients. [14][15][16] Therefore, sensitive detection of HBV-DNA resistance is essential for monitoring response to therapy. With this method, we could efficiently perform the subtyping and detection of resistance mutations in 90 patient samples including 80 samples with low levels of plasma HBV DNA (from 20 to 200 IU/mL). The ratio of HBV genotypes and resistance mutations is consistent with previous reports. These results indicated this method is a sensitive and reliable direct sequencing method for genotyping and resistance mutation detection in low levels of serum HBV DNA samples.
In summary, we developed a new direct sequencing method for genotyping and detecting resistance mutations in low levels of HBV DNA serum samples with the identified new common and universal primers. This method facilitates early identification of resistance mutation and monitoring therapy response in HBV patients. | 2018-04-03T03:11:04.585Z | 2016-08-01T00:00:00.000 | {
"year": 2016,
"sha1": "cfc6cee5fce381b20a0e6078be9313ffbbea9cfe",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1097/md.0000000000004618",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cfc6cee5fce381b20a0e6078be9313ffbbea9cfe",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Efficient Anomaly Detection Using Self-Supervised Multi-Cue Tasks
Anomaly detection is important in many real-life applications. Recently, self-supervised learning has greatly helped deep anomaly detection by recognizing several geometric transformations. However, these methods lack finer features, usually depend highly on the anomaly type, and do not perform well on fine-grained problems. To address these issues, we first introduce in this work three novel and efficient discriminative and generative tasks which have complementary strengths: (i) a piece-wise jigsaw puzzle task focuses on structure cues; (ii) a tint rotation recognition is used within each piece, taking into account the colorimetry information; and (iii) a partial re-colorization task considers the image texture. In order to make the re-colorization task more object-oriented than background-oriented, we propose to include the contextual color information of the image border via an attention mechanism. We then present a new out-of-distribution detection function and highlight its better stability compared to existing methods. Along with it, we also experiment with different score fusion functions. Finally, we evaluate our method on an extensive protocol composed of various anomaly types, from object anomalies and style anomalies with fine-grained classification to local anomalies with face anti-spoofing datasets. Our model significantly outperforms the state-of-the-art, with up to 36% relative error improvement on object anomalies and 40% on face anti-spoofing problems.
I. Introduction
One of the most fundamental challenges in machine learning is detecting an observation as anomalous compared to a normal baseline. Properly solving such a problem with high predictability and robustness is essential in many fields.
To mention a few: intrusion detection [1], where we wish to detect untrustworthy entries on a network; fraud detection [2], where a forged item or transaction must be rejected; medical imaging [3], where abnormalities in a captured image must be located; video surveillance [4], [5], where abnormal events are detected; and manufacturing defect detection [6], [7].
With the advent of deep learning, many tasks on image data, including binary classification and anomaly detection (AD), have greatly improved. Nevertheless, classical binary classification still generally lacks robustness and reliability outside its training domain. Many anomaly detection methods try to solve this problem by only learning the normal class boundary, rather than directly discriminating anomalies from normal samples. Any observation defined outside this boundary is then deemed anomalous. This decision rule is especially useful when the anomaly class boundary is ill-defined or continually evolving and only few anomalous training samples are available.
The recent explosion of self-supervision further improves unsupervised learning abilities and reduces the needed amount of labeled data. It enables discriminating anomalies from normal samples by learning to solve simple tasks such as geometric transformation classification. However, although deep anomaly detection can achieve interesting performance, it still suffers from limitations on more challenging problems with local and fine-grained differences between anomalies and normal samples. Indeed, existing self-supervised anomaly detection algorithms evaluated their performance on datasets like CIFAR-10 or CIFAR-100 but not on fine-grained ones like Caltech-Birds or face anti-spoofing. Moreover, these methods usually have a high inference time, making them impractical for real-life anomaly detection problems. For example, the state-of-the-art model GeoTrans [8] needs to apply 72 different transformations to the input during inference, making it around 10 times slower than our proposed method.
In this given context, our main contributions in this paper are the following: • We introduce a new way to efficiently exploit the benefits of discriminative and generative auxiliary tasks in self-supervised anomaly detection. Using the two-branch network, we are among the first to reach high-quality results with auxiliary tasks on fine-grained anomaly detection and face anti-spoofing in a one-class setting.
• We carefully design and optimize three novel specialized auxiliary tasks according to loss functions, anomaly scores as well as complexity. This allows our model to learn very rich and complementary representations which better encompass image structure (Section III-A), colorimetry (Section III-B) and texture (Section III-D). With these tasks, we also explore different out-of-distribution (OOD) detection methods and fusion functions.
• We compare our method with state-of-the-art using an exhaustive protocol for anomaly detection covering object, style and local anomalies, and even more challenging task of face anti-spoofing.
• The proposed method obtains high-quality results with up to 36% AUROC relative improvement on object anomalies and 53% on face anti-spoofing over state-of-the-art anomaly detection methods.
This paper follows the motivation of our work presented in [9]. In [9], we improved the anomaly detection by simultaneously solving in a self-supervised fashion a high-scale geometric task and a low-scale jigsaw puzzle task. It is worth noting that the differences of this paper compared to [9] are significant: all pretext tasks are novel and more efficient. In this paper, we address the inference complexity issue and considerably improve the anomaly detection performance.
First, we give an overview of anomaly detection related work in Section II. Then we present our new pretext tasks in Section III, and our study of OOD methods with fusion in Section IV. Our complete model is summarized in Section V, of which we give a general overview in Fig. 3. In a first stage, a jigsaw puzzle task with intra-piece tint rotation detection and a partial colorization are performed. Then, in a second stage, a set of OOD scores is computed for each task and is aggregated into a single anomaly score using a fusion function. In addition, we extensively compare our model with the state-of-the-art in Section VI, and provide several experiments on the influence of our model parameters in Section VII. Finally, we discuss future work in Section VIII.
II. Related work
We first review several common classical and deep anomaly detection methods in Section II-A and Section II-B. We then present self-supervised learning and how it is applied for AD in Section II-C and Section II-D, respectively. Readers are referred to [10]-[12] for more in-depth surveys on AD and self-supervised learning.
A. Classical anomaly detection
The main goal in anomaly detection is to classify a sample as normal or anomalous. Formally, we predict P(x ∈ Xnorm) for an observation x and a normal (or positive) class Xnorm. The anomalous (or negative) class is then defined implicitly as the complementary of the normal class in image space. We can generally categorize anomalies into three families: 1) Object anomaly: any object which is not included in the positive class, e.g., a cat is an object anomaly in regards to dogs. 2) Style anomaly: observations representing the same object as the positive class but with a different style or support, e.g., a realistic mask or a printed face represents a face but with a visibly different style. 3) Local anomaly: observations representing and sharing the same style as the positive class, but where a localized part of the image is different. Most of the time, these anomalies are the superposition of two generative processes, e.g., a fake nose on a real face is a local anomaly. Usually, we assume in anomaly detection that only normal samples are available during training, meaning that methods are in a one-class setting. Traditionally, the one-class Support Vector Machine (OC-SVM) [13] or its extension, the Support Vector Data Description (SVDD) [14], were used for anomaly detection. The anomaly score of an observation x is given by its distance to a parameterized boundary Ω. OC-SVM defines Ω as a hyperplane separating the origin from the normal samples with the maximum margin, whereas SVDD uses a hypersphere containing all normal samples with the minimum radius (see Fig. 1(a,b)).
Fully unsupervised methods, which learn from a set of unlabeled data containing both normal samples and anomalies, have also been used. Such non-deep methods include Robust Principal Component Analysis (RPCA) [15] and the Isolation Forest (IF) [16]. Rather than modeling the normal samples, the IF algorithm tries to isolate anomalies from normal samples via successive random partitions of the feature space. If a sample can be entirely isolated (i.e., be the only point in a region) in a few partitions, then it is more likely to be anomalous (see Fig. 1(c)).
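As a concrete baseline illustration (not part of this paper's method), both classical detectors are available in scikit-learn and can be fitted on normal samples only; the feature dimensions and data below are placeholders.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(500, 16))        # normal-class features only
X_test = np.vstack([rng.normal(0.0, 1.0, (10, 16)),   # normal samples
                    rng.normal(4.0, 1.0, (10, 16))])  # anomalous samples

# OC-SVM: learns a max-margin boundary around the normal data;
# nu upper-bounds the fraction of training points treated as outliers
ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_train)

# Isolation Forest: anomalies are isolated in fewer random partitions
iforest = IsolationForest(n_estimators=100, random_state=0).fit(X_train)

# predict returns +1 for inliers and -1 for outliers;
# score_samples gives a continuous score (higher means more normal)
print(ocsvm.predict(X_test))
print(iforest.score_samples(X_test))
```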
These classical methods have shown great success on low-dimensional data such as tabular data, but usually fail on higher-dimensional inputs such as images.
B. Deep anomaly detection
The introduction of neural networks as feature extractors gave birth to several hybrid methods where a pre-trained neural network is used to extract features, on which a classical algorithm such as OC-SVM or isolation forest is trained. It ultimately led to the first end-to-end anomaly detection neural network, the one-class Neural Network (OC-NN) [17], which integrates the OC-SVM loss in the network training. More recent methods include different dedicated approaches to anomaly detection. In [18]-[20], a binary classification is used with pseudo-negative images or latent vectors to represent the anomaly class. Another approach is to use the error of a generative model reconstruction [21]-[24], or the gradient of the error given that the image is normal [25]. Finally, the self-supervision framework can be used to learn normal class representations and subsequently form an anomaly score, as presented in Section II-D.
There have also been semi-supervised anomaly detection methods such as DeepSAD [26] or deviation networks [27], where we assume that some anomalies, representing a few modes, are available. These methods can achieve better accuracy on borderline cases given enough diverse anomalies, which is often less manageable in practice. In particular, these two methods directly learn representations by minimizing the distance of normal sample features to a hypersphere center, while maximizing the distance to the anomalies. This follows the compactness principle, where the normal class representation variance is minimized and the inter-class representation variance is maximized.
C. Self-supervised learning
Self-supervised learning (SSL) is a part of representation learning, where useful and general representations are learned from an unlabeled dataset. The learned features are then used through transfer learning for a different task such as classification.
In this manner, representations are learned by solving, from the data, an auxiliary task T which is often unrelated to the final one. The pretext task can either be discriminative, usually resulting in a multi-class classification setting, or generative, where a regression loss is often utilized. Any SSL method is defined by its pretext objective loss L and its pretext data generation function DG_T : P(X) → P(X × K), which yields a labeled set from an unlabeled set X. In the case of discriminative tasks, this is usually done via n image transformations T1, ..., Tn:

DG_T(X) = {(Tj(xi), j) | xi ∈ X, j ∈ {1, ..., n}},

where the xi are images from the unlabeled training dataset. In other words, SSL consists of two steps: (1) generating a labeled set X_T = DG_T(X); (2) training a classification or regression network on this generated labeled set. One of the final layers is then used as a feature extractor. Some commonly used tasks are: 90° rotation prediction [28], jigsaw puzzle [29], distortions [30], colorization [31], image inpainting [32], or relative patch prediction [33].
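A minimal sketch of such a pretext data generation function, instantiated with 90° rotation prediction as the transformation set (illustrative, framework-agnostic NumPy):

```python
import numpy as np

def rotation_pretext_set(images: list[np.ndarray]) -> list[tuple[np.ndarray, int]]:
    """DG_T for 90-degree rotation prediction.

    Maps each unlabeled image to four (transformed image, label) pairs,
    where label k in {0, 1, 2, 3} encodes a k * 90 degree rotation.
    """
    labeled = []
    for x in images:
        for k in range(4):
            labeled.append((np.rot90(x, k=k, axes=(0, 1)), k))
    return labeled

# Toy usage on two 8x8 grayscale "images"
images = [np.arange(64, dtype=np.float32).reshape(8, 8) for _ in range(2)]
dataset = rotation_pretext_set(images)
print(len(dataset), dataset[1][1])  # 8 labeled pairs; label 1 = 90-degree rotation
```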
More recently, the contrastive learning framework [34] has been extensively used for self-supervised representation learning. Unlike the methods above, it does not rely on an explicit pretext task and directly formulates losses on the representations. The most effective contrastive method is instance discrimination [35], [36], where the objective is to maximize similarity between augmented versions of the same image (positive samples) while minimizing similarity with any other image (negative samples). Instance discrimination can be seen as a pretext task whose data generation function maps samples to the set of positive and negative pairs, and whose objective is to discriminate positive from negative pairs using cosine similarity in representation space.
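The following is a minimal sketch of the instance discrimination objective (an NT-Xent-style loss) under the positive/negative pair construction described above; it omits the encoder, projection head and augmentations that full contrastive pipelines use, and all names are our own.

```python
# NT-Xent / InfoNCE loss for instance discrimination.
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """z1, z2: (B, D) representations of two augmented views of the same B images."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2B, D), unit-norm rows
    sim = z @ z.t() / tau                         # cosine similarity logits
    sim.fill_diagonal_(float("-inf"))             # exclude self-similarity
    B = z1.size(0)
    # positives: row i (view 1) matches row i+B (view 2), and vice versa
    targets = torch.cat([torch.arange(B) + B, torch.arange(B)])
    return F.cross_entropy(sim, targets)

loss = nt_xent(torch.randn(16, 128), torch.randn(16, 128))
```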
D. SSL anomaly detection
In this section, we first present how to apply SSL for AD and then discuss some state-of-the-art methods exploiting SSL for AD.
Very recently, SSL has been adapted to the one-class anomaly detection framework. First, the network learns to solve an auxiliary task in an SSL fashion. Then, a measure of how well the network solves the task on the generated dataset $DG_T(\mathcal{X})$ is used at inference time to classify an observation $x$ as anomalous or normal. The main assumption is that the network will perform relatively well on normal samples but will fail on anomalies. Note that the goals of representation learning and AD differ: representation learning tries to maximize the performance of the representation on as many downstream tasks and datasets as possible, whereas AD seeks a clear performance gap between normal and anomalous data.
Any SSL anomaly detector is composed of three steps (see Fig. 2):

1) Representation learning on the normal class, carried out in a self-supervised manner. In our case this is done by solving a pretext task $T$, but other methods employ other mechanisms such as contrastive learning.

2) During inference on an unseen sample $x$, an out-of-distribution (OOD) detection method is applied to the generated labeled samples $DG_T(\{x\})$. The goal of OOD methods is to detect whether or not an observation has been sampled from the same distribution as the training set. OOD detection is more low-level and general than AD, and aims at modeling the training distribution rather than the normal class; for example, contrary to AD, the CIFAR-100 dataset would be considered out of distribution with respect to CIFAR-10. Given a model $\Psi$ pre-trained on a distribution $F_{X_{train}}$, it estimates $P(x \sim F_{X_{train}})$. The normal training set is assumed to be close enough to the real distribution of normal samples, and since we have access to the correct task label $y$, the following approximations hold:

$$P(x \text{ is normal}) \approx P(x \sim F_{X_{train}}) \approx s_{OOD}((x, y); \Psi),$$

where $s_{OOD}((x, y); \Psi)$ is the OOD score for an image $x$ with its label $y$ given the pre-trained network $\Psi$.
3) The fusion of the OOD scores into a single anomaly score $s_a$ using a fusion function $M$.

In the rest of this section, we detail several state-of-the-art self-supervised anomaly detection algorithms that are most closely related to our work. In GeoTrans [8], the auxiliary task is to classify which geometrical transformation has been applied to the input, from a set $\{T_i\}$ of 72 random compositions of translations, rotations and symmetries. At the end of training, a Dirichlet distribution parameterized by $\alpha_i$ is fitted over the softmax responses $\hat{y}(T_i(x)) = \mathrm{smax}(\phi \circ f(T_i(x)))$ of each transformation on the normal class; its log-likelihood is then used during inference.
In MHRot [37], the task is to simultaneously classify 90° rotations, horizontal translations (HTrans) and vertical translations (VTrans), each modeled by a softmax head. Accordingly, the pretext data generation function is the composition $T_{r,s,t} = \mathrm{Rot}(r) \circ \mathrm{HTrans}(s) \circ \mathrm{VTrans}(t)$, where $r \in \{0°, 90°, 180°, 270°\}$, $s \in \{0, -t_x, +t_x\}$ and $t \in \{0, -t_y, +t_y\}$. During inference, the three softmax responses of the known transformations are summed over the 36 transformation compositions to form the anomaly score:

$$s_a(x) = \sum_{r,s,t} \left[ \mathrm{smax}_{rot}\big(T_{r,s,t}(x)\big)_r + \mathrm{smax}_{h}\big(T_{r,s,t}(x)\big)_s + \mathrm{smax}_{v}\big(T_{r,s,t}(x)\big)_t \right].$$

Another class of models, called two-stage anomaly detectors [38], does not use the representation learning task during inference, but rather directly applies OOD methods on the representation space [39]-[42]. For example, in SSD [40] the representation learning step is performed through contrastive learning, and OOD detection is then applied in the representation space induced by the encoder $\phi$. The training data representations are clustered around several centroids using K-means, and the Mahalanobis distance to the closest centroid is used as the anomaly score:

$$s_a(x) = \min_{c} \, (\phi(x) - \mu_c)^T \Sigma_c^{-1} (\phi(x) - \mu_c).$$

Similarly, DROC-contrastive (Deep Representation One-class Classification) [38] first learns self-supervised representations from one-class data, and then builds one-class classifiers on the learned representations: contrastive learning with distribution augmentation is used for the representation learning, and an OC-SVM for the one-class classification. Finally, it is worth noting that some SSL anomaly detectors solve the more specific task of anomaly segmentation, like CutPaste [43] and SOMAD [44]. Anomaly segmentation consists in predicting a heat map, with an anomaly score computed for each pixel of the input image. These methods usually target very small, local anomalies, such as defect detection, whereas in this work we focus on image-level anomaly detection.
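To make the two-stage idea concrete, here is a minimal sketch of SSD-style scoring under our reading of the description above: K-means centroids are fitted on training representations, and a test feature is scored by its smallest Mahalanobis distance to a cluster. The regularization constant and all names are our assumptions; features are assumed to come from a pre-trained encoder not shown here.

```python
# Two-stage scoring: K-means on training features, then Mahalanobis distance.
import numpy as np
from sklearn.cluster import KMeans

def fit_centroids(feats_train: np.ndarray, n_clusters: int = 5):
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(feats_train)
    stats = []
    for c in range(n_clusters):
        pts = feats_train[km.labels_ == c]
        mu = pts.mean(axis=0)
        cov = np.cov(pts, rowvar=False) + 1e-6 * np.eye(pts.shape[1])  # regularize
        stats.append((mu, np.linalg.inv(cov)))
    return stats

def anomaly_score(z: np.ndarray, stats) -> float:
    # minimum Mahalanobis distance over all centroids
    return min(float((z - mu) @ icov @ (z - mu)) for mu, icov in stats)
```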
III. Novel pretext tasks
In the rest of the paper, we consider an observation $z$, its label $y$ and a pre-trained network $\phi \circ f$. We gradually detail the proposed pretext tasks for anomaly detection, which focus on different visual cues: structure, colorimetry and texture. The piece-wise puzzle and tint rotation tasks and their combination are discriminative (Sections III-A, III-B, III-C), whereas the colorization task is generative (Section III-D). An overview of the loss function and anomaly score for each proposed task is given in Table I.
A. Piece-wise puzzle task
The puzzle task has been successfully used as a pretext task for representation learning [29], [45]. First, an image is separated into $n = n_w \times n_h$ pieces, with some random margin between them. Then, given an image generated by shuffling the pieces, a deep encoder is trained to predict which permutation has been applied. The task is therefore formulated as a classification problem where the prediction label corresponds to the index of the permutation among the $n!$ total possibilities. When the number of pieces becomes too large, the full task is no longer tractable and the model can only learn to classify a smaller random subset of all permutations. This formulation of the jigsaw puzzle task, used in our previous work [9] along with geometrical transformation recognition, enables the model to learn low-scale fine features. In the rest of the paper, we call this formulation the partial puzzle task. It is worth noting that, compared with our previous work [9], this paper reconsiders only the puzzle task, which is further optimized in terms of both time and performance, as described in the rest of this section; the other tasks, namely tint rotation and partial colorization, have, to the best of our knowledge, never been used for visual anomaly detection in the literature.
The partial puzzle task [9] has several limitations: (i) the quality of the representation highly depends on the chosen permutations; indeed, if the sampled permutations are too hard (e.g., swapping two corners) or too easy, the learned representations will suffer; (ii) from an anomaly detection perspective, all mispredicted permutations are penalized equally, regardless of the number of misplaced pieces.
To address these limitations, we propose here an improved piece-wise puzzle task. Rather than predicting the permutation index, we train a deep encoder to predict the original position of each piece. By assuming each piece is independent, we can now cover all the permutations with only $n^2$ outputs instead of $n!$. Thereby we separate the output layer $f$ into $n$ functions $f_1, \cdots, f_n$, each corresponding to a piece.
Let $\Pi$ be a random permutation; $\Pi(I)$ corresponds to the image $I$ where each piece has been moved according to $\Pi$, and $\Pi_i$ corresponds to the new position of the $i$-th piece. The task is learned using the cross-entropy loss $\mathcal{L}_{CE}$ on every piece prediction:

$$\mathcal{L}_{puzzle}(I) = \sum_{i=1}^{n} \mathcal{L}_{CE}\big(f_i(\phi(\Pi(I))),\; \Pi_i\big). \tag{6}$$

The full task is illustrated in Fig. 4. In practice, during every training epoch we sample a random subset of $n_{tsp}$ permutations for each normal image. In order to cover as many different permutations as possible over the whole training, we define $n_{tsp} = \frac{n!}{N_{train} \cdot ep}$, where $N_{train}$ is the size of the training set and $ep$ the number of training epochs.
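A minimal sketch of one training step of this piece-wise formulation is given below: pieces are shuffled by a random permutation and head $f_i$ predicts the slot $\Pi_i$ where the $i$-th piece landed, as in Equation 6. The toy encoder, head sizes and the absence of margins are simplifying assumptions, not the architecture used in our experiments.

```python
# One training step of the piece-wise puzzle task (toy 3x3 setup).
import torch
import torch.nn as nn
import torch.nn.functional as F

n = 9  # 3 x 3 pieces
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 96 * 96, 256), nn.ReLU())
heads = nn.ModuleList([nn.Linear(256, n) for _ in range(n)])  # one head per piece

def shuffle_pieces(img, perm, nw=3, nh=3):
    """Move piece i (row-major order) to slot perm[i]."""
    C, H, W = img.shape
    ph, pw = H // nh, W // nw
    pieces = [img[:, r*ph:(r+1)*ph, c*pw:(c+1)*pw]
              for r in range(nh) for c in range(nw)]
    out = torch.empty_like(img)
    for i, j in enumerate(perm.tolist()):
        r, c = divmod(j, nw)
        out[:, r*ph:(r+1)*ph, c*pw:(c+1)*pw] = pieces[i]
    return out

img = torch.randn(3, 96, 96)
perm = torch.randperm(n)                 # perm[i] = new position of piece i
z = encoder(shuffle_pieces(img, perm).unsqueeze(0))
# head i predicts the new position of piece i; sum the n cross-entropies
loss = sum(F.cross_entropy(heads[i](z), perm[i].unsqueeze(0)) for i in range(n))
```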
During inference we also consider a random subset of $n_{sp}$ permutations and compute an anomaly score for each of them:

$$s_k = s_{OOD}\big((\Pi^{(k)}(x), \Pi^{(k)});\; \phi \circ f\big), \quad k = 1, \cdots, n_{sp},$$

where $s_{OOD}$ is an OOD score function, presented in more detail in Section IV; we experiment with several OOD functions and select the best one. While the $n_{tsp}$ permutations are randomly drawn during training, it is important to note that the $n_{sp}$ permutations are fixed for all tests in the final model. With this new piece-wise puzzle task, lower anomaly detection errors can be reached while keeping the same inference complexity as the partial puzzle task (see results in Fig. 11).
B. Tint rotation task
High-scale object colorimetry is a simple but powerful cue for discriminating anomalies, especially in spoof detection. To exploit this rich information, which has not yet been considered in the literature, we present a novel tint rotation recognition task that focuses on the normal-class colorimetry. Given an RGB image $I$ and a transformation $\gamma$ where $\gamma(I, \theta)$ adds an offset $\theta$ to the hue channel (in HSV space) of $I$, we try to predict the distribution of $\Theta$ from $\gamma(I, \Theta)$. For practical reasons, we limit the possible tint rotation angles to $c$ evenly distributed angles, and the task becomes to distinguish angles that are multiples of $\frac{2\pi}{c}$. Tackling colorimetry with a rotation detection task allows us to discriminatively learn high-scale, general colorimetry cues while keeping a low computational cost. In addition, we note that contrary to the geometrical rotation recognition task, where a number of angles different from four would leave visual artifacts, our task does not impose any limitation on $c$.
Nevertheless, it is impossible to detect a tint rotation inside areas lacking any original color information. To prevent high anomaly scores on desaturated images, we need to give a lower weight to those regions. To this end, instead of working directly on the angle distribution, we use the expected L1 error in RGB space between the original image and the image de-rotated by the predicted angle. Since we are computing a pixel-wise RGB error, only large areas of colorful pixels will impact the anomaly score. The tint rotation training loss is:

$$\mathcal{L}_{tint}(I, \theta) = \frac{1}{W \cdot H} \sum_{k=1}^{c} \mathrm{smax}\big(\phi \circ f(\gamma(I, \theta))\big)_k \, \Big\| I - \gamma\big(\gamma(I, \theta), -\tfrac{2\pi k}{c}\big) \Big\|_1,$$

where $W \times H$ is the dimension of the image and $\mathrm{smax}(\cdot)$ is the softmax function. As the anomaly score, we use the same expected error as the loss function.
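For illustration, the sketch below generates tint rotation pretext data with torchvision's adjust_hue (whose hue_factor argument in [-0.5, 0.5] spans the full hue circle); the names are ours, and the expected-L1 training weighting is only summarized in a comment.

```python
# Tint rotation pretext data generation for c = 4 hue offsets.
import torch
import torchvision.transforms.functional as TF

c = 4  # tint rotation classes: hue offsets that are multiples of 2*pi/c

def tint_pretext(img: torch.Tensor):
    """img: (3, H, W) RGB in [0, 1] -> c hue-rotated copies with labels 0..c-1."""
    rotated = []
    for k in range(c):
        hue_factor = ((k / c + 0.5) % 1.0) - 0.5   # wrap k/c into [-0.5, 0.5]
        rotated.append(TF.adjust_hue(img, hue_factor))
    return torch.stack(rotated), torch.arange(c)

x_aug, y = tint_pretext(torch.rand(3, 64, 64))
# Training then weights the pixel-wise RGB L1 error between the original image
# and the image de-rotated by each candidate angle with the predicted softmax,
# as in the loss above, so desaturated regions contribute little.
```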
By introducing this task we force our encoder to fully represent the normal-class colorimetry, which could otherwise be ignored by the puzzle task in the presence of salient geometrical features.
Fig. 3. Method overview. Our model consists of discriminative (upper U-branch) and generative (lower L-branch) tasks. All the discriminative tasks share the same encoder.
TABLE I
Overview of the loss function and OOD score for each proposed task (Task (type), Loss, Anomaly score). The upper U-branch consists of the piece-wise puzzle and tint rotation tasks, while the lower L-branch consists of the partial colorization task; the losses and scores are those given in Sections III-A to III-D.

Fig. 4. Piece-wise puzzle task for 3 × 3 pieces, where $\Pi$ is a random piece permutation and $\hat{\Pi}_i$ is the prediction vector for the $i$-th piece (Section III-A).
C. Intra-piece tasks
On top of the piece-wise puzzle task, we further propose to add pretext sub-tasks inside each puzzle piece. Given an intra-piece task $T_{piece}$ and an image composed of $n$ piece images $R_1, \cdots, R_n$, we first sample a random augmented piece using the pretext data generation function on each piece, $(I_i^{(aug)}, y_i) \sim DG_{T_{piece}}(\{R_i\})$. Our network then tries to solve the puzzle task and the intra-piece tasks simultaneously by minimizing the loss

$$\mathcal{L}(I) = \mathcal{L}_{puzzle}(I) + \sum_{i=1}^{n} \mathcal{L}_{piece}\big(I_i^{(aug)}, y_i\big),$$

Fig. 5. Tint rotation task for $c = 4$ (Section III-B).
where the first term is the piece-wise puzzle loss defined in Equation 6 and $\mathcal{L}_{piece}$ is the loss of the intra-piece task. In our case, we choose the tint rotation task as the intra-piece task, thus $\mathcal{L}_{piece} = \mathcal{L}_{tint}$. We argue that a piece-wise tint rotation task is more suitable than a piece-wise geometrical rotation task since it mixes different modalities rather than only combining geometrical cues; besides, we have already studied the combination of the jigsaw puzzle task with geometric rotation in our previous work [9]. A summary of the intra-piece task model is given in Fig. 6. By adding these intra-piece tasks, we essentially consider $n$ new tasks during inference without increasing the number of forward passes through the encoder; the only cost is an additional specialized dense layer for the pretext task. Each intra-piece task allows the network to focus on a specific image patch.
One issue with this method is that we can potentially mix object pieces and background pieces. Solving tasks on background pieces would push the model to generalize to image distributions far from the normal-class object. As a result, we introduce a weight map for each piece, learned during training, where higher weights are given to pieces covering the object.

Fig. 6. Example of intra-piece tasks with tint rotation detection with $c$ possible rotations (Section III-C). The only additional cost of this task compared to the piece-wise task is a specialized dense layer.
We can see this map as a rough segmentation of the normal object in the image. It is computed in a similar fashion to visual attention mechanisms, which have previously been used successfully to learn per-pixel weight maps [46].
First, we compute from the encoder representation $z$ a weight map $(w_{ij})$ over the piece grid, which we normalize into attention weights using the L1-normalized sigmoid $P_{ij} = \frac{\sigma(w_{ij})}{\|\sigma(w)\|_1}$. This normalization produces smoother maps than the classical softmax activation, preventing very sparse maps where only one piece has a non-null activation. To further prevent such cases, we include an additional loss term encouraging spread attention matrices:

$$\mathcal{L}_{spread} = -\sum_{ij} P_{ij} \left\| \begin{pmatrix} i \\ j \end{pmatrix} - \mu \right\|^2, \quad \text{where } \mu = \sum_{ij} P_{ij} \begin{pmatrix} i \\ j \end{pmatrix}.$$

Our final loss for the intra-piece tasks, taking the attention map into account (upper branch in Fig. 3), is:

$$\mathcal{L}(I) = \mathcal{L}_{puzzle}(I) + \sum_{ij} P_{ij} \cdot \mathcal{L}_{piece}(R_{i,j}), \tag{12}$$

and the corresponding anomaly score is

$$s_a(I) = M\big(\{s_{OOD}((z, y); \phi \circ f) \mid (z, y) \in DG_T(\{I\})\}\big), \tag{13}$$

where $M$ is the fusion function detailed in Section IV.
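The following sketch shows one possible implementation of the L1-normalized sigmoid attention and of a spread penalty matching our reading of the (partially garbled) equation above; the exact form of the penalty in the original is an assumption.

```python
# L1-normalized sigmoid attention over puzzle pieces, with a spread penalty.
import torch

def attention_weights(w: torch.Tensor) -> torch.Tensor:
    """w: (nh, nw) raw scores -> weights P summing to 1 (smoother than softmax)."""
    s = torch.sigmoid(w)
    return s / s.sum()

def spread_penalty(P: torch.Tensor) -> torch.Tensor:
    """Negative spatial variance of the attention map: minimizing it spreads P."""
    nh, nw = P.shape
    ii, jj = torch.meshgrid(torch.arange(nh, dtype=P.dtype),
                            torch.arange(nw, dtype=P.dtype), indexing="ij")
    mu_i, mu_j = (P * ii).sum(), (P * jj).sum()
    var = (P * ((ii - mu_i) ** 2 + (jj - mu_j) ** 2)).sum()
    return -var

P = attention_weights(torch.randn(3, 3))
# In the full model the per-piece losses are weighted by P:
# loss = loss_puzzle + sum_ij P[i, j] * loss_piece(R_ij) + spread_penalty(P)
```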
We can see in Table IX that the attention mechanism increases anomaly detection performance.
D. Partial colorization task
We present in this section a novel generative pretext task for anomaly detection that is highly texture-oriented. In the colorization task commonly used in the literature [31], [47], [48], the main objective is to predict the $(A, B)$ color channels from the luminance channel $L$ of an image in LAB space.
One big challenge with this task is colorizing the background, since it can vary a lot within the normal training set; re-colorization will naturally be poorer for backgrounds unseen during inference on new observations. The object itself should therefore have more impact on the AD algorithm than the background, making the anomaly detector more object-oriented than scene-oriented. In addition, several issues arise when considering the typical framework of colorization through regression [47], where $\mathbb{E}[(A_{ij}, B_{ij}) \mid L]$ is directly estimated for each pixel $(i, j)$. First, the colorimetry of the normal class can be multi-modal: the normal-class objects can have several plausible sets of colors, called modes. For example, horses can have more than one fur color while still belonging to the same class. In this case, a regression network ends up predicting the mean of all modes, ignoring the multi-modality. Second, even if one of the object modes is correctly predicted, any error function will yield high values if the mode of the current observation is different.
To tackle these limitations, we establish a novel way to learn colorization well-suited to anomaly detection. First, we augment the available inputs with the color values of the image inside a border of size $\alpha$ to make background re-colorization easier. For a simple unified background, our model will be encouraged to color areas near the center object similarly to the border areas, mitigating the background's influence on AD. Our partial colorization task thus consists in predicting $(A, B)$ from the image with partial color channels $I_{part} = (L, A \odot M_\alpha, B \odot M_\alpha)$, where $M_\alpha$ is a binary mask equal to 1 on the border of size $\alpha$ and 0 in the center. Moreover, unlike existing regression methods, we estimate the posterior density $p(A_{ij}, B_{ij} \mid I_{part})$ of each pixel to capture any color multi-modality. For the density estimation, we explore two different ideas: (1) quantize the colors into a low-range discrete variable and perform multi-class classification; (2) parameterize the density with a Gaussian mixture model and perform maximum likelihood estimation.
1) Color bin classification: By quantizing each color value into $K$ bins and assuming the two color planes to be independent, we can define the resulting categorical variables by $2K$ probabilities: $P(A_{ij} = 1), \cdots, P(A_{ij} = K), P(B_{ij} = 1), \cdots, P(B_{ij} = K)$. We thus estimate a map $\hat{y}$ of dimension $W \times H \times 2K$. Inspired by the label smoothing idea [49], a Gaussian smoothing is applied to the output distributions in order to propagate the model's confidence to neighboring color bins; indeed, we do not want to fully penalize close color bins. The final estimated density $\hat{P}(A_{ij} \mid I_{part})$ for a network $\phi$ is

$$\hat{P}(A_{ij} \mid I_{part}) = G_\sigma * \mathrm{smax}\big(\phi(I_{part})_{ij}\big),$$

where $G_\sigma$ is the Gaussian kernel of standard deviation $\sigma$.
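A minimal sketch of building the smoothed bin targets is given below; the number of bins, the value range and the smoothing width are illustrative assumptions.

```python
# Quantize a color channel into K bins and Gaussian-smooth the one-hot targets.
import numpy as np
from scipy.ndimage import gaussian_filter1d

K, sigma = 32, 1.5

def bin_targets(a_channel: np.ndarray, lo=-128.0, hi=127.0) -> np.ndarray:
    """a_channel: (H, W) LAB A values -> (H, W, K) smoothed target distributions."""
    bins = np.clip(((a_channel - lo) / (hi - lo) * K).astype(int), 0, K - 1)
    onehot = np.eye(K)[bins]                          # (H, W, K)
    smoothed = gaussian_filter1d(onehot, sigma, axis=-1)
    return smoothed / smoothed.sum(axis=-1, keepdims=True)  # renormalize

targets = bin_targets(np.random.uniform(-128, 127, size=(8, 8)))
# Training minimizes the cross-entropy between the network's K-way softmax per
# pixel and these smoothed targets.
```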
2) Gaussian Mixture Model MLE: Our second approach is to parameterize the densities with Gaussian mixture models. Accordingly, we have for each pixel a sum of $K$ Gaussian densities:

$$p(A_{ij}, B_{ij} \mid I_{part}) = \sum_{k=1}^{K} \pi_{ij}^{(k)} \, \mathcal{N}\big((A_{ij}, B_{ij});\, \mu_{ij}^{(k)}, \Sigma_{ij}^{(k)}\big), \tag{16}$$

where $\pi_{ij}^{(k)} \in \mathbb{R}$ is the prior probability of the $k$-th cluster, $\mu_{ij}^{(k)} \in \mathbb{R}^2$ is the mean color of the $k$-th cluster and $\Sigma_{ij}^{(k)} \in \mathbb{R}^{2 \times 2}$ is the color covariance matrix of the $k$-th cluster.
Rather than predicting the full $2 \times 2$ matrix $\Sigma_{ij}^{(k)}$, we only predict its three free parameters $\sigma = (d, l)$, where $d \in \mathbb{R}^2$ and $l \in \mathbb{R}$. We can then reconstruct the positive-definite covariance matrix using the Cholesky decomposition [50]:

$$\Sigma = L \, \mathrm{Diag}(e^d) \, L^T, \quad \text{with } L = \begin{pmatrix} 1 & 0 \\ l & 1 \end{pmatrix}.$$

This decomposition ensures strictly positive eigenvalues through the exponential and a positive semi-definite matrix through the Cholesky form; all possible covariance matrices are thus parameterized by $(d, l)$. It also brings better numerical stability for determinant computation, with the simple formula $\log|\Sigma| = \log|\mathrm{Diag}(e^d)| = \sum_i d_i$.

To train this model, we could use as loss function the negative log-likelihood, considering all pixels independent:

$$\mathcal{L} = -\sum_{ij} \log p(A_{ij}, B_{ij} \mid I_{part}). \tag{18}$$

However, this function turns out to be very hard to optimize directly for each pixel and does not lead to any meaningful colorization. We use instead the classical Expectation-Maximization algorithm, which alternates the three following steps, with $x_{ij} = (A_{ij}, B_{ij})$:

(step 1) Compute the Mahalanobis distances:

$$m_{ij}(k) = \big(x_{ij} - \mu_{ij}^{(k)}\big)^T \big(\Sigma_{ij}^{(k)}\big)^{-1} \big(x_{ij} - \mu_{ij}^{(k)}\big). \tag{19}$$

(step 2) Compute the posterior cluster probabilities:

$$\gamma_{ij}(k) = \frac{\pi_{ij}^{(k)} \, \mathcal{N}\big(x_{ij}; \mu_{ij}^{(k)}, \Sigma_{ij}^{(k)}\big)}{\sum_{k'} \pi_{ij}^{(k')} \, \mathcal{N}\big(x_{ij}; \mu_{ij}^{(k')}, \Sigma_{ij}^{(k')}\big)}. \tag{20}$$

(step 3) Fix the $\gamma_{ij}(k)$ and minimize the loss (lower branch):

$$\mathcal{L}_{color} = -\sum_{ij} \sum_{k} \gamma_{ij}(k) \log \Big( \pi_{ij}^{(k)} \, \mathcal{N}\big(x_{ij}; \mu_{ij}^{(k)}, \Sigma_{ij}^{(k)}\big) \Big). \tag{21}$$

Once training is finished, we compute the anomaly score as the likelihood of the color channels under the predicted $\pi$, $\mu$ and $\Sigma$. In order to choose the number of Gaussians $K$, we first apply a K-means color clusterization [51] on the cropped, down-sampled images of the normal class; the elbow method then gives the optimal $K$ within $[\![1, 10]\!]$.
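The covariance parameterization is easy to verify numerically; the sketch below builds $\Sigma$ from $(d, l)$ and checks the log-determinant identity. Shapes are per pixel and names follow the text.

```python
# Build a 2x2 positive-definite covariance from the (d, l) parameterization.
import torch

def build_cov(d: torch.Tensor, l: torch.Tensor) -> torch.Tensor:
    """d: (2,), l: scalar -> Sigma = L diag(exp(d)) L^T with unit lower-triangular L."""
    L = torch.tensor([[1.0, 0.0], [float(l), 1.0]])
    D = torch.diag(torch.exp(d))
    return L @ D @ L.t()

d, l = torch.tensor([0.3, -0.1]), torch.tensor(0.7)
sigma = build_cov(d, l)
# |Sigma| = |L| * |D| * |L^T| = |D|, so log|Sigma| = d1 + d2.
assert torch.allclose(torch.logdet(sigma), d.sum())
```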
Advantages. The GMM approach has three advantages over bin classification: (i) its density support is unbounded and continuous, removing the need for Gaussian smoothing; (ii) it can fully model the dependence between the color channels through the full covariance matrix; and (iii) it can reach the same quality of colorization with fewer parameters, where quality is measured as the mean pixel color likelihood.
Fig. 8. Scheme of the partial colorization with GMM estimation and a UNet network (Section III-D). The model predicts $6K$ parameters per pixel: $\pi \in \mathbb{R}$, $\mu \in \mathbb{R}^2$ and $\sigma \in \mathbb{R}^3$ for each of the $K$ clusters.

Algorithm 1 Our model training (excerpt)
5: Transform batch to $n_{tsp}$ shuffled images $x^1, \cdots, x^{n_{tsp}}$ with piece-wise tint rotation
6: for $k = 1 \cdots n_{tsp}$ do
7: Apply encoder $z^k \leftarrow \phi(x^k)$
IV. OOD methods and fusion
We try two different out-of-distribution methods for each pretext task: the softmax truth and the Mahalanobis distance. In the case of a self-supervised classification task, the most commonly used OOD function is the likelihood of the label given that the image is normal, which we call the "softmax truth":

$$s_{OOD}((x, y); \Psi) = \mathrm{smax}(\Psi(x))_y.$$

However, this criterion takes into account only one component of the softmax vector. For easy tasks, we usually have a high probability on the correct class, but for harder, more ambiguous tasks, several classes can be highly activated in a way typical of the normal class. Another idea is therefore to look at the likelihood of the raw score vector given its label and given that the image is normal:

$$s_{OOD}((x, y); \Psi) = P\big(\Psi(x) \mid y,\; x \text{ normal}\big).$$

To approximate this conditional probability, the training dataset is first partitioned into samples sharing the same label value $l$, i.e. $\{(z, y) \mid (z, y) \in X_{train} \text{ and } y = l\}$. The distribution of the normal-class raw score vectors given $y$ can then be estimated separately on each partition after convergence of the network weights. For a given classification problem with $C$ classes and a training set $X_{norm}$, we estimate the mean scores $\mu_c$ and covariance matrices $\Sigma_c$ for each class $c$:

$$\mu_c = \frac{1}{|Z_c|} \sum_{z \in Z_c} z, \qquad \Sigma_c = \frac{1}{|Z_c|} \sum_{z \in Z_c} (z - \mu_c)(z - \mu_c)^T,$$

where $Z_c = \{z \mid (z, y) \in DG_T(X_{norm}) \text{ and } y = c\}$. The OOD score is then approximated by the Mahalanobis distance [52] to the mode corresponding to the true label:

$$s_{OOD}((z, y)) = (z - \mu_y)^T \Sigma_y^{-1} (z - \mu_y).$$

We also explore different fusion functions to combine all the OOD scores into a single anomaly score. We first use the mean, but observe heavy biases from outlier OOD scores (very easy or very hard sub-tasks). We then try different order statistics, including the median and the 25th percentile, and compare the results in Table VII.
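A minimal sketch of the per-class Gaussian fit and the resulting Mahalanobis OOD score is given below; the covariance regularization and all names are our assumptions. The final fusion into a single anomaly score (e.g., the median used in Section V) is shown as a comment.

```python
# Per-class Mahalanobis OOD score over raw score vectors.
import numpy as np

def fit_class_gaussians(Z: np.ndarray, Y: np.ndarray, n_classes: int):
    """Z: (N, D) raw score vectors of normal samples; Y: (N,) pretext labels."""
    stats = []
    for c in range(n_classes):
        Zc = Z[Y == c]
        mu = Zc.mean(axis=0)
        cov = np.cov(Zc, rowvar=False) + 1e-6 * np.eye(Z.shape[1])  # regularize
        stats.append((mu, np.linalg.inv(cov)))
    return stats

def mahalanobis_ood(z: np.ndarray, y: int, stats) -> float:
    mu, icov = stats[y]
    d = z - mu
    return float(d @ icov @ d)

# Final anomaly score: fuse the per-transformation OOD scores, e.g.
# s_a = np.median([mahalanobis_ood(z_k, y_k, stats) for (z_k, y_k) in pairs])
```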
V. Full method overview
This section summarizes our full method (Fig. 3). Our model is made of two independent branches. The first, discriminative branch (upper branch in Fig. 3) solves the piece-wise puzzle task with the intra-piece tint rotation detection task; all discriminative tasks, including the attention mechanism, share the same encoder network. The second, generative branch (lower branch in Fig. 3) performs the partial re-colorization task, modeled with a GMM, and we include the attention mechanism for the intra-piece task. To detect whether or not an observation $x$ is an anomaly, we produce the OOD scores of the re-colorization task and of the $n_{sp}$ sampled permutations along with their tint rotation tasks. The chosen OOD function for every task is the softmax truth. All of these scores are then combined into a single anomaly score using the median.
Our full training and inference algorithms are given in Alg. 1 and Alg. 2, respectively.

Algorithm 2 Our model inference
1: Input: image $x$
2: Transform the input to $n_{sp}$ shuffled images $x^1, \cdots, x^{n_{sp}}$ with piece-wise tint rotation
3: for $k = 1 \cdots n_{sp}$ do
4: Apply encoder $z^k \leftarrow \phi(x^k)$
VI. Experiments
A. Evaluation protocol
Our evaluation protocol comprises three types of anomaly detection challenges: object anomalies, fine-grained style anomalies, and face presentation attacks. First, to detect object anomalies we use general coarse object recognition datasets under the one-vs-all protocol, where one class of a multi-class dataset is considered the normal class and all the other classes are considered anomalous, yielding one run per possible normal class. For a given run, the training dataset is the normal class's training data, and the test dataset contains the original test data of the normal class plus the anomalous classes. The final reported result is the mean over all runs.
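For clarity, a minimal sketch of constructing one run of the one-vs-all protocol is given below; the array names are illustrative.

```python
# Build one run of the one-vs-all anomaly detection protocol.
import numpy as np

def one_vs_all_split(X_tr, y_tr, X_te, y_te, normal_class: int):
    """Training uses only the normal class; at test time, every other class
    is labeled as an anomaly (1) and the normal class as normal (0)."""
    train = X_tr[y_tr == normal_class]
    test_labels = (y_te != normal_class).astype(int)
    return train, X_te, test_labels

# The reported metric is the AUROC averaged over runs with each class as normal.
```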
However, these datasets have drifted away from real anomaly detection applications and may not suffice to fully evaluate AD methods. Thus we include a second evaluation group, where we detect style anomalies using fine-grained classification datasets. Fine-grained datasets were introduced to tackle the recognition of classes, usually within the same category, that differ only slightly. We use the one-vs-all protocol here as well.
Finally, we consider a real anomaly detection problem that incorporates object anomalies, style anomalies and local anomalies. In particular, we choose a dataset from face presentation attack detection (FPAD), where the goal is to discriminate real faces from fake representations of someone's face. Due to the constantly evolving frauds and high variability, anomaly detection is a very appealing solution to this problem.
We use the following datasets: (i) For object anomalies: • F-MNIST [53]: introduced as a harder version of MNIST, with 10 different classes of fashion items. All images are grayscale, meaning no color information can be used to discriminate anomalies.
(ii) For style anomalies: • FounderType-200 [56]: a font recognition dataset containing 200 fonts with 6700 images per class. It was introduced for novelty detection, and even though these images lie on a low-dimensional manifold compared to natural images, they still provide insight into how well a model can capture small shape hints.
(iii) For face presentation attack detection, we use the WMCA dataset [57], which contains more than 1900 short videos of real faces and presentation attacks. It includes several modalities such as infra-red and depth, but here we only use RGB. There are 72 real identities along with several types of attacks: paper prints, screen replays, masks and partial attacks where only a localized area of the face is fake. The masks comprise paper masks, rigid masks and flexible masks. An example of each type of attack is given in Fig. 10.
TABLE II
Summary of evaluation datasets, indicating for each dataset which anomaly types (object, style, local) it covers.
In all evaluations, the metric used is the area under the ROC curve (AUROC), or the error 1-AUROC, averaged over all possible normal classes in the case of one-vs-all datasets. For the anti-spoofing dataset, we additionally include metrics better suited to biometric presentation attack detection: • The equal error rate (EER [58]), which is the point on the ROC curve where the false rejection rate (or Bonafide Presentation Classification Error Rate, BPCER) equals the false acceptance rate (or Attack Presentation Classification Error Rate, APCER).
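For reference, a minimal sketch of computing the EER from anomaly scores with scikit-learn's ROC utilities is given below; the interpolation-free crossing search is a simplifying assumption.

```python
# Equal error rate: the point where FPR (attacks accepted) equals FNR (bona fide rejected).
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(y_true: np.ndarray, scores: np.ndarray) -> float:
    """y_true: 1 for anomaly/attack, 0 for normal; scores: higher = more anomalous."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    fnr = 1.0 - tpr
    idx = int(np.argmin(np.abs(fpr - fnr)))   # closest crossing point on the curve
    return float((fpr[idx] + fnr[idx]) / 2.0)

eer = equal_error_rate(np.array([0, 0, 1, 1]), np.array([0.1, 0.4, 0.35, 0.8]))
```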
B. Implementation details
For the piece-wise puzzle task, we use a margin of half the size of the pieces and find the best results with $n_{sp} = 18$. We generally use $n_w = n_h = 3$ pieces for most datasets, except face anti-spoofing where $n_w = 3$ and $n_h = 4$; we observe better results with more vertical pieces on faces, since they are always upright and need finer vertical analysis. For the tint rotation recognition we use $c = 4$, and for the re-colorization task we use a contextual border $\alpha$ of two pixels.
Regarding the network architecture, we use a 16-4 WideResNet [61] (≈ 10M parameters with a depth of 16) for the feature extractor $\phi$, along with three dense layers of size $n^2$ for the piece-wise puzzle task, $n \cdot c$ for the tint rotation task and $n$ for the attention, respectively. Each of these dense layers has a dropout rate of 0.3 [68]. For the re-colorization task, we use a UNet network [69]. It was originally introduced for image segmentation, using a down-sample/up-sample strategy that reinjects the intermediate maps of each step of the down-sample branch into the up-sample branch; it is in fact well suited to any prediction task where the output is aligned with the input pixels (in our case, a vector of GMM parameters for each pixel). Training is performed with the SGD optimizer with Nesterov momentum [70], a batch size of 32 and a cosine annealing learning rate scheduler [71].
C. Comparison to the state-of-the-art
A comparison of our method with other state-of-the-art (SOTA) anomaly detection models is performed on all three protocols. We include three families of SOTA methods: one-class learning methods, which learn using only the normal class; semi-supervised learning methods, where a small set of anomalies is used during training; and supervised learning. The considered one-class methods can be categorized into (1) reconstruction-error-based methods with ADGAN [59], GANomaly [22] and PIAD [23]; (2) hybrid methods with OC-SVM [13], IF [16] and OC-CNN [19]; (3) pretext-task-based methods with ARNet [60], GeoTrans [8], MHRot [37] and PuzzleGeom [9]; and (4) two-stage anomaly detection using contrastive learning with SSD [40] and DROC-contrastive [38]. GeoTrans uses various geometrical transformations as the SSL pretext task, MHRot adds 90° rotations on top, and our previous model PuzzleGeom [9] includes a basic jigsaw puzzle task. Regarding semi-supervised methods, we evaluate DeepSAD [26] trained on the same normal samples but with three different ratios of the anomaly sub-classes: 10%, 25% and 75%. For the fully supervised baseline, we simply use the same backbone as our one-class method (the 16-4 WideResNet) extended with a dense layer representing the two classes, normal and anomalous. Note that its training is performed with the classical binary cross-entropy loss on the normal/anomaly label, without any class balancing mechanism.
The experimental results are displayed in Table III, and a detailed evaluation on the CIFAR-10 dataset is included in Table IV.
TABLE III
Comparison with the state-of-the-art AUROC over several datasets; underline indicates the best result, bold the best one-class learning result. For the sake of fair comparison, we re-evaluated all methods ourselves, except the one-class methods in the first block (results are from [38], [59], [60]).
Our method achieves among the best accuracies on coarse object and fine-grained anomaly detection. It improves upon PuzzleGeom and closes the gap to semi-supervised performance, with a small AUC difference of 0.5% on CIFAR-100. Compared to previous pretext tasks such as rotation detection, our proposed tasks can better focus on local parts of the image: the re-colorization task targets fine-grained local textures, while the puzzle task and intra-piece tint detection work on higher-scale geometrical and colorimetric features. We also show that our method greatly improves anti-spoofing detection performance on WMCA; it even outperforms the supervised model and semi-supervised anomaly detection methods that have access to up to 75% of the anomalous data.
In general, we notice that hybrid methods, although efficient on smaller problems, do not extend well to high-dimensional data. The evaluated reconstruction-based methods also tend to fall behind pretext-task-oriented models. On the other hand, two-stage contrastive methods like DROC-contrastive are very competitive: this model combines several techniques, including contrastive representation learning, distribution augmentation and an OC-SVM. It performs slightly better than ours on the F-MNIST dataset and reaches the same AUC on CIFAR-10, but on the more challenging CIFAR-100 we obtain a gain of nearly 2%. Moreover, distribution augmentation and an OC-SVM could also be applied to the concatenation of our learned representations to reach better accuracy. Overall, our model keeps a good balance between coarse object anomaly detection and finer style anomaly detection, and even outperforms semi-supervised anomaly detection methods on CUB-200 and WMCA. It achieves a relative error improvement of 36% on CIFAR-10 and 40% on WMCA compared to PuzzleGeom.
Lastly, we compare in Table V our method with the two next-best self-supervised methods, MHRot and PuzzleGeom, on WMCA. With our method, the APCER@5%BPCER drops from 33.8% to 27.3%. This also shows the promise of anomaly detection methods for fraud detection.
VII. Parameter study
In this section, we evaluate the parametrization of the pretext tasks (Sections VII-A to VII-C) and the choice of OOD function (Section VII-D), and perform an ablation study (Section VII-E).
A. Puzzle task complexity
We start by comparing in Fig. 11 the two approaches to the jigsaw puzzle task introduced in Section III-A on the CIFAR-10 dataset. The piece-wise puzzle task greatly improves performance for all CIFAR-10 classes, even though the same permutations are tested during inference. Moreover, we confirm that the partial puzzle task is more sensitive to the choice of $n_{sp}$, since its representation quality also depends on this factor. We fix $n_{sp} = 18$ out of the $9!$ possible permutations as a good compromise between inference complexity and accuracy. The influence of the number of puzzle pieces $n_w$ and $n_h$ for $n_{sp} \in \{9, 18\}$ on CIFAR-10 is reported in Fig. 12. For both $n_{sp} = 9$ and $n_{sp} = 18$, the best value for the general one-vs-all problem is $n_w = n_h = 3$.
B. Tint rotation task complexity
We measure the AUC of the isolated tint rotation task for different numbers of tint rotations $c$ on the CIFAR-10 dataset in Fig. 13. The best value of $c$ across several normal classes is 4.
C. Colorization task parametrization
The two colorization parametrizations, using a Gaussian mixture model and bin classification, are compared on the full colorization task for the normal class. Our evaluation metric is directly the likelihood of the colorization: the mean pixel color likelihood $\hat{P}(A_{ij} \mid I_{part}) \cdot \hat{P}(B_{ij} \mid I_{part})$ for classification (27) and $p(A_{ij}, B_{ij} \mid I_{part})$ for the GMM (28). Overall, we can reach higher likelihoods with the GMM than with bin classification. Moreover, a better separation of the different modes can be achieved using the GMM, whereas bin classification usually mixes the different modes and produces dull colors (see Fig. 14).

Fig. 14. Colorization comparison on faces. The first row displays the original images, while the second shows the re-colorizations of the two methods. As we can see, the bin classification approach produces dull colors and mixes the skin color modes, producing grayish colors.
D. Choice of OOD and fusion functions
To evaluate the effect of the Mahalanobis distance as an anomaly score, we compare it with the softmax truth and with its improved form, the ODIN method [72], which adds temperature scaling and the input pre-processing $\hat{x} = x - \varepsilon \, \mathrm{sign}\big(-\nabla_x \log \mathrm{smax}(x; T)\big)$.
The results are presented in Fig. 15 for different numbers of puzzle pieces $n$ and $n_{sp} = 18$ tested permutations. The AUC increases with the number of pieces when using the Mahalanobis distance, whereas it decreases with the softmax truth. In addition, the AUC of the most difficult class is always higher when using the Mahalanobis distance; despite a lower average anomaly detection performance, it thus has less variance in its predictions and provides OOD scores that are more robust across normal classes. Even though the ODIN method provides a sensible improvement for more than 3 × 3 pieces, it greatly increases computational complexity during training and inference; in our tests, we observe an inference time increase of more than three times. We provide in Table VI further comparisons between the softmax truth and the Mahalanobis distance on the puzzle task with $n_{sp} = 9$. Finally, we evaluate the choice of different fusion functions on the WMCA dataset in Table VII. The evaluated fusion functions are simple order statistics commonly found among ensemble-learning decision fusion strategies. We observe overall better performance, in terms of AUC and APCER, with the median fusion function.
E. Ablation study
We evaluate the impact of each pretext task on the final anomaly detection AUROC. In Table VIII, we compare on CIFAR-10 the basic partial puzzle model with the addition of the piece-wise puzzle task, the colorization task, and the intra-piece tint rotation detection task with and without the attention map. While the piece-wise puzzle and colorization give our model strong discriminative power, with an AUC of 89.12, the intra-piece task with attention further refines the model.
TABLE VIII
Ablation study of each component on CIFAR-10 using the AUROC. The baseline is the partial puzzle task.

We also investigate on more datasets how the addition of attention to the intra-piece task improves anomaly detection, in Table IX. By including attention weights for each piece, we can further improve the mean AUC on all datasets, although the prediction variance across normal classes increases marginally. We also notice that the contribution of attention varies with the dataset. The main role of the attention in the intra-piece task is to prevent the task-specific model from generalizing too much on background pieces; attention thus helps most when the normal-class background is very diverse or the normal object is very small in the image.
VIII. Conclusion and Future Work
We explore in this paper more efficient pretext tasks and show that a combination of a colorization task and a puzzle task with intra-piece tint rotation sub-tasks provides the best anomaly detection performance. We also show the importance of the choice of out-of-distribution functions along with their fusion functions. Finally, we provide a more comprehensive evaluation protocol than the datasets previously used in the anomaly detection literature: it includes more challenging datasets and covers object, style and local anomalies. Our method outperforms the state of the art, including a semi-supervised method, on most of the fine-grained datasets.
For future work, we could explore other generative pretext tasks such as image reconstruction: as in the colorization task, only a part of the image, mostly covering the normal object, would be destroyed. Furthermore, generative tasks such as our current colorization could be used to locate anomalies using the pixel-wise error. Finally, we could reframe our method as two-stage anomaly detection: in a first step, representations would be learned by solving our pretext re-colorization, jigsaw puzzle and intra-piece tint rotation detection tasks; then we could separately train an OC-SVM on the concatenation of the representations from the puzzle and colorization encoders. We could further evaluate our model with differently sized backbones and measure the impact on each of our three pretext tasks.
"year": 2021,
"sha1": "63ec90a44b4b5db6c8f2925e492cb6b7bfb1c56a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "338c55f43bc6bce1174a766b9f6bd2fec4cf6c5e",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
G-quadruplex DNA targeted metal complexes acting as potential anticancer drugs
Although cisplatin and its analogues have been widely utilized as anticancer metallodrugs in the clinic, their serious side effects and damage to normal tissues cannot be avoided because cisplatin kills cancer cells by attacking genomic DNA. Thus the design of metallodrugs possessing different anti-cancer mechanisms of action is promising. G-quadruplex nucleic acid, which is formed by self-assembly of guanine-rich nucleic acid sequences, has recently been considered an attractive target for anticancer drug design. The basic unit of a G-quadruplex is a G-quartet, a planar motif generated from four guanine residues pairing together through Hoogsteen-like hydrogen bonds. DNA G-quadruplex (G4) structures exist in the chromosomal telomeric sequences and the promoter regions of numerous genes, including oncogenic promoters. Formation of G4 structures within the 3′-overhang of telomeric DNA can inhibit telomerase activity, which is silent in normal cells but up-regulated in most cancer cells, thus significantly shortening telomeres and preventing cancer cell proliferation and immortalization. Intramolecular G4 structures formed within oncogene promoter regions can effectively inhibit oncogene transcription and expression. Thus rational design of small molecular ligands to selectively interact with, stabilize or cleave G4 structures is a promising strategy for developing potent anti-cancer drugs with selective toxicity towards cancer cells over normal ones. This review will highlight the recent development of G4-interacting metal complexes, termed G4-ligands, discussing their binding modes with G-quadruplex DNA and their potential to serve as anticancer drugs.
Introduction
In 1978, cisplatin was approved by the FDA as an anticancer drug for clinical use. Nowadays, cisplatin and its analogues are very effective chemotherapeutic agents for treating testicular and ovarian cancers. 1 However, they have several serious side effects, including nephrotoxicity and ototoxicity, and their clinical efficacy is limited by cisplatin-resistant tumor cells. The main reason for this lies in their mechanism of action: cisplatin-based drugs attack genomic DNA, resulting in the dysfunction of transcription, translation and other processes, ultimately causing tumor cell death. However, such attacks do not distinguish between tumor and normal cells, and hence serious side effects occur due to damage to normal tissues. 2,3 Since then, the development of metallodrugs possessing different mechanisms of action has attracted great interest. 2,4 One promising strategy for the design of metallodrugs is to explore new potential biological targets other than genomic DNA.
G-quadruplex nucleic acids, which have structural features very different from the regular double helix, have recently gained increasing interest as targets for anticancer drugs. A G-quadruplex is formed by self-assembly of guanine-rich nucleic acid sequences, the basic unit of which is the G-quartet, a planar motif generated from four guanine residues pairing together through Hoogsteen-like hydrogen bonds. 5 The structures and topologies of G-quadruplex nucleic acids have been well investigated and widely reviewed. 6,7 As shown in Fig. 1, a series of planar G-quartets stack with each other and are connected by the intervening sequences (termed loops), forming the G-quadruplex (G4) structure. Both DNA and RNA G4 structures are inherently stabilized by the presence of alkali-metal cations (most often Na⁺ or K⁺ ions) coordinated by the guanine carbonyl oxygen atoms pointing towards the inner channel formed at the centre of the G-quartets. 8 G-quadruplex structures can be formed not only intramolecularly within single-stranded nucleic acid sequences but also intermolecularly from two or more individual strands. Depending on the distinct ways in which the exterior loops connect the G-quartets and on the relative orientation of the tetra-stranded helices, G-quadruplexes display a wide range of topologies (e.g., parallel, anti-parallel, hybrids of parallel and anti-parallel, etc.). Indeed, NMR and crystallography have been widely used to explore the structures of G-quadruplexes, some of which have been successfully reported either as native nucleic acids or as complexes of G4 nucleic acids bound to small molecules. 6,7 The distinctive structures of G-quadruplexes offer a great opportunity for specific molecular recognition. Recently, the existence of G-quadruplex DNA has been quantitatively visualized on chromosomes in human cells, 9 further motivating efforts to understand the biological implications and the structure-function relationships of G4 structures.
What are the biological implications of G4 and G4-ligands?
G-quadruplex DNA predominantly exists in the chromosomal telomeric sequences and the promoter regions of numerous genes, such as the oncogenes bcl-2, VEGF, c-myc and c-kit. [10][11][12][13][14][15] Human telomeric (hTel) DNA consists of the tandem d(TTAGGG) repeats and a single-stranded 3′-overhang of 100-200 bases. [16][17][18] In normal somatic cells, human telomeric DNA is shortened at a rate of 50-200 bp per cell division cycle, and the accumulated telomere shortening ultimately results in cell senescence and apoptosis. However, telomere maintenance and elongation, rather than telomere shortening, dominates in cancer cells, which is the basis for cancer cell proliferation and immortalization. 16 This is attributed to telomerase activity, which is silent in normal cells but up-regulated with high activity in 85-90% of cancer cells. 19 Active telomerase hybridizes with the 3′-overhang of the telomeric DNA to elongate the telomere, thus maintaining telomeric DNA integrity in cancer cells. Formation of G4 structures within the 3′-overhang of telomeric DNA blocks telomerase hybridization, resulting in inhibition of telomerase activity and thereby interfering with telomere maintenance in cancer cells. 20 G-quadruplex DNA formed in promoter regions also plays significant biological roles, especially in modulating gene transcription. [10][11][12][13][14] Prior to transcription, the duplex DNA transiently unwinds to release the G-rich single strand from the complementary C-rich strand. Once the single-stranded G-rich sequence folds into a stable G-quadruplex, its access to the promoter will be blocked, thereby down-regulating transcription. Thus rational design of small molecule ligands to selectively interact with and stabilize either or both the hTel G4 and the promoter G4 of oncogenes has been identified as a promising strategy for the development of anti-cancer drugs with selective toxicity towards cancer cells over normal ones. 21 Although much research has focused on G4 DNA, G-quadruplex RNA exists within the non-coding telomeric r(UUAGGG) repeats (TERRA) and the untranslated regions (5′-UTRs) of mammalian mRNA sequences. [22][23][24][25] G-quadruplex formation within the nascent RNA blocks the progress of ribosome complex formation, thereby down-regulating translation. 24 Compared to equivalent G4 DNA sequences, G4 RNA is much more stable, invariably folds into the parallel topology, and is widely distributed throughout the cell, including the cytoplasm, which makes it easier to access than DNA. Thus G-quadruplex RNA is also considered a potential target for anticancer drugs, and lately considerable research effort has focused on RNA-directed drug design that can selectively target G-quadruplex RNA. 26,27 Indeed, the development of small molecules that can recognize and bind to the G-quadruplex with high affinity and specificity over duplex nucleic acids, termed G4-ligands, has become a progressively large field of research with rapidly increasing numbers of reported molecules as well as excellent reviews. 28,29 The representative member is the first-in-class in vivo G4-ligand CX3543 (also known as Quarfloxin), which has reached phase II clinical trials for treating neuroendocrine and carcinoid tumors. 30 Besides purely organic G4-ligands, metal complexes acting as small-molecule G4-ligands have recently attracted a lot of interest, as they interact strongly and selectively with quadruplex nucleic acids. 28,31
Compared with organic molecules, metal complexes possess characteristic structural features, various charges, and additional electromagnetic properties, providing advantages for the construction of G4-ligands. For example, their synthesis is very regular and much easier, and their geometry is variable and controllable (e.g. planar, octahedral, tetragonal pyramidal, etc.), being predominantly determined by the coordination geometry surrounding the metal centre. Such a variety of geometries provides a greater number of action modes: planar molecules favour π-stacking with G-quartets, including end-stacking and intercalation, but alternative modes such as groove/loop binding, electrostatic interactions, and direct coordination to bases or the phosphate backbone are also possible. In addition, the central metal ions and suitably substituted ligands confer cationic character on the entire molecule, which is preferable for stronger electrostatic interactions with the electronegative nucleic acids and easier cell permeability. Certain metal complexes possess interesting optical, magnetic or catalytic properties and thus show additional functions and anticancer properties. For example, optical properties including large Stokes shifts, high quantum yields, long-lived luminescence and good photostability allow one to trace the behaviour of these metal complexes in real time in living cells, thereby simultaneously acting as theranostic agents and G-quadruplex probes. These unique properties of metal complexes make them ideal candidates for constructing novel G4-ligands.
This review highlights the recent development of G4-targeting metal complexes, termed G4-ligands, discussing their binding modes with the G-quadruplex, their inhibitory effect on telomerase activity, their interference with transcriptional and translational gene regulation, and their potential to act as anticancer drugs in the clinic. Although G4 RNA also plays significant roles in cancer biology, to the best of our knowledge the only report of a metal complex acting as an RNA G4-ligand is a bimetallic platinum(II)-modified perylene derivative reported by Bierbach et al. 32 Thus this review will only cover the reported metal complexes acting as DNA G4-ligands. Generally speaking, these compounds achieve their anticancer potential by inducing or stabilizing G-quadruplex formation through various binding modes, thus affecting gene transcription and expression (promoter G4) or inhibiting telomerase activity (telomere G4). In a few cases, metal-containing G4-ligands can even induce cross-linking or cleavage of G-quadruplex DNA, ultimately achieving anticancer properties.
Methods for studying ligand/G-quadruplex interactions
A wide variety of biophysical and biochemical techniques have been employed to investigate the interactions of G4-ligands and their effects on telomerase activity and gene expression regulation. 33 Usually the combination of more than one technique is required for a detailed understanding of G4-ligand interactions. Optical spectroscopy, including UV-visible (UV-Vis), fluorescence resonance energy transfer (FRET) and circular dichroism (CD) spectroscopy, is routinely used and capable of determining, e.g., the melting temperature of the G-quadruplex in the presence of G4-ligands via spectral changes, thus providing crucial information about G4-ligand interactions, such as stoichiometry, stabilization potency and, especially, the selectivity for the quadruplex in comparison to duplex DNA. These optical methods are rapid, non-destructive and require only small amounts of material. Moreover, the G-quadruplex has various topologies, and CD spectroscopy can monitor conformational transitions: for example, the characteristic spectrum of an antiparallel G4 structure shows a positive band at 295 nm and a negative band at 265 nm, while a negative band at 240 nm together with a positive band at 275 nm is the signature of a parallel G4 structure. Taking as an example a propeller-shaped trinuclear Pt(II) complex recently reported by our group 34 (complex 165 in Fig. 15), FRET melting curves and stabilization temperature (ΔTm) values obtained in the presence of 0.5 μM of the Pt(II) complex showed that this complex has little effect on duplex DNA (ΔTm = 0.4 °C) but induces an appreciable increase in thermal stability (ΔTm = 30.2 °C) in hTel G4 sequences, indicating excellent selectivity towards the G-quadruplex. Titration of this complex into the hTel d[AG₃(T₂AG₃)₃] sequence, in the presence of either K⁺ or Na⁺, induces the characteristic CD signatures of an antiparallel G4 structure even at low complex/DNA ratios (r = 0.2-1.5), indicating that this complex likely induces and stabilizes the antiparallel G4 topology.
To quantitatively analyse G4-ligand interactions, two complementary methods, isothermal titration calorimetry (ITC) and surface plasmon resonance (SPR), have been extensively used to determine the kinetic and thermodynamic parameters of the binding events, including binding affinity, dissociation constant, and the enthalpic and entropic contributions to the binding process. 33 These parameters provide crucial information for understanding the driving force of ligand/quadruplex complex formation, evaluating the selectivity of G4-ligands, and, to some degree, revealing the binding modes. However, all the biophysical techniques mentioned above cannot provide precise structural information. For the development of excellent G4-ligands with high affinity and specificity, it is important to determine the structural parameters of ligand/quadruplex interactions, including precise binding sites and binding modes at the molecular level. Such structural information can be obtained by X-ray diffraction and NMR spectroscopy, and until now a variety of resolved structures of ligand/G-quadruplex complexes have been reported and reviewed. 8,29,[35][36][37] Among them, the first crystal structure of an hTel G4 DNA bound to a metal-salphen complex (copper or nickel) was reported by Campbell et al. in 2012. 38 Besides the described biophysical methods, several in vitro biochemical methods have also been developed to assess the capability of G4-ligands to induce and stabilize the G-quadruplex, to inhibit telomerase activity, to accelerate telomeric shortening, and to inhibit oncogene expression. Here we focus on two commonly employed PCR-based methods, the PCR-stop assay and the in vitro TRAP (Telomere Repeat Amplification Protocol) assay. 33 In a PCR-stop assay, the G4-ligand-induced formation and stabilization of a G-quadruplex structure within the DNA sequence will reduce or even inhibit DNA synthesis, resulting in decreased formation or complete absence of the PCR product. The IC50 value obtained in a PCR-stop assay indicates the concentration of G4-ligand required to achieve 50% inhibition of the amplification reaction. Telomerase activity can be evaluated by the in vitro TRAP assay, which involves three steps: initial primer elongation by telomerase in the absence or presence of G4-ligands, removal of the G4-ligands, and PCR amplification of the telomerase elongation products. The obtained IC50-TRAP value is defined as the concentration of G4-ligand required to inhibit telomerase activity by 50%. Taking again the propeller-shaped {[Pt(dien)]₃(ptp)}(NO₃)₆ complex as an example (vide infra, complex 165 in Fig. 15), 34 increasing concentrations of the Pt(II) complex indeed decrease the amount of PCR product (dsDNA), with a complete PCR stop observed in the presence of 3.0 µM of the Pt(II) complex, confirming that this complex can induce and stabilize the hTel G4 structure. Simultaneously, this complex effectively inhibits telomerase activity in a concentration-dependent manner, with an IC50-TRAP value of 16.0 ± 0.40 µM.
In addition to the techniques described above for investigating metal complex/G-quadruplex interactions and the anticancer properties of G4-targeted metal complexes, other efficient and robust biophysical techniques and in vitro assays are also available to monitor ligand/G-quadruplex interactions, such as mass spectrometry, surface-enhanced Raman spectroscopy, single-molecule fluorescence imaging, equilibrium dialysis, gel electrophoresis and translational assays, as well as molecular modelling. These are not described in detail here, as their applications to metal complexes have been limited.
Cisplatin derivatives: platination of G-quadruplex
In spite of the disadvantages of its unselective binding to genomic DNA, cisplatin is still the most successful anticancer drug in clinical use. Thus the development of rationally designed cisplatin derivatives acting as specific DNA G4-ligands has also attracted a lot of attention. Similar to cisplatin, these derivatives contain labile groups (Cl, H₂O, etc.), making direct coordination of the platinum centre to G4 DNA nucleobases highly probable. This type of coordination is traditionally denoted platination and can occur either at a single site or lead to cross-linking of two nucleobases, in most cases guanines. For example, cross-linking of two guanine bases, or of an adenine and a guanine base, was observed after platination of a preorganized G-quadruplex with [Pt(NH₃)₂(H₂O)₂](NO₃)₂ (cis or trans), the action pattern of which is similar to cisplatin. 39 A dimetallic cisplatin derivative (1 in Fig. 2) can also cross-link two guanine bases, and the cross-linking position was located at the terminal G-quartets. 40 In addition, an interesting Pt-ACRAMTU complex 2 (Fig. 2) showed an abnormal kinetic preference for platination at adenine (N7 site) over guanine; more interestingly, HPLC analysis of the reaction between DNA and Pt-ACRAMTU showed that the amount of Pt-G4 adducts exceeds that of the corresponding Pt-duplex DNA adducts, indicating some binding selectivity of this complex towards G4 DNA. 41 Cisplatin derivatives containing a planar aromatic ligand suitable for π-stacking with the G-quartets allow interactions with the G-quadruplex through a double noncovalent/covalent binding mode. For example, a Pt(II)-MPQ complex 3 (Fig. 2) was constructed by linking cisplatin to a planar quinacridine aromatic moiety through a long hydrophilic linker, the length of which was suitable for spanning the length of the G-quartet stacks. 1,42 Such structural features allow the quinacridine plane to end-stack on one face of the G-quartets and the Pt(II) metal centre to platinate a tetrad guanine base on the opposite face, ultimately stabilizing the antiparallel topology of a 22-mer G4 DNA. Very recently, two organoplatinum complexes 4-5 (Fig. 2) have also been reported containing both a labile chloride and a π-conjugated planar aromatic ligand (1-azabenzanthrone or a 6-hydroxyloxoisoaporphine alkaloid), which allowed platination at guanine nucleobases of the G-quadruplex as well as non-covalent π-stacking with the G-quartets. 43 The in vitro and in vivo anticancer efficacies of these two Pt(II) complexes were also investigated in cisplatin-resistant tumor cells and xenograft models, respectively.

Fig. 2 Cisplatin derivatives reported as G4-ligands, the binding mode of which is mainly direct platination (single-site coordination or cross-linking of two nucleobases). [40][41][42][43] Complex 3 utilized a double noncovalent/covalent binding mode. 42
Metalloporphyrins and derivatives
In the examples presented above, the Pt(II) metal centre coordinates directly to G4 DNA bases. However, most of the reported metal-containing G4-ligands interact with G4 DNA through non-covalent binding. The earliest reported metal complexes acting as G4-ligands are metalloporphyrins, the binding mode of which was proposed to be π-stacking on top of the terminal G-quartets (termed end-stacking), similar to the free porphyrins.44-46 TMPyP4, a cationic meso-methylpyridinium-substituted porphyrin, was reported in 1998 to be capable of stabilizing the human telomeric G4 DNA by end-stacking on the terminal G-quartets and inhibiting telomerase activity (IC50-TRAP = 6.5 ± 1.4 μM).47 The cationic properties of TMPyP4 enable electrostatic interactions with the negative backbone of G4 DNA, and hence a higher binding affinity is observed compared to the unsubstituted porphyrin. Subsequently, a series of TMPyP4 complexes with various metal centres, including main-group metals and transition metals (Ni(II), Mn(III), Mn(V)=O, Mg(II), Cu(II), Zn(II), Pd(II), Pt(II), Fe(III), Co(II), In(III), etc.), have been synthesized and investigated. All these metal-TMPyP4 complexes (6-16 in Fig. 3) efficiently stabilize the hTel quadruplex and inhibit telomerase activity in vitro, owing to the cationic charge and the π-stacking abilities (either end-stacking or intercalation) described above. The stoichiometric ratio of ligand/G-quadruplex was found to be 2 : 1 in most cases.45,46 The metal centre plays a critical role in the ligand/G-quadruplex interactions, including binding affinity, specificity and telomerase-inhibiting activity. For example, although the nickel(II)-TMPyP4 complex 6 was reported as a potent inhibitor of telomerase with an IC50-TRAP value of 5 μM, its binding affinity for G4 DNA is unfortunately one order of magnitude lower than that for duplex DNA (10^6 M^-1 vs. 10^7 M^-1) in an SPR assay.48,49 In comparison, the manganese(III)-TMPyP4 complex 7 displays similar telomerase inhibition potency to the free TMPyP4 ligand itself, but shows at least 10-fold better selectivity for the quadruplex over duplex DNA.48,49 Changing the metal centre from Mn(III) to Zn(II) maintains the selectivity for the quadruplex as well as the telomerase inhibition potency.50 Interestingly, the Zn-TMPyP4 complex 11 was found to be capable of inducing the formation of a parallel G4 topology for some specific sections of single-stranded DNA.50 The Cu(II)-TMPyP4 complex 10 and its analogue 17 (Fig. 3) have also attracted attention because copper is an essential trace element in humans.51,52 Both 10 and 17 stabilize the hTel quadruplex and inhibit telomerase activity in vitro. Upon titration of the parallel G-quadruplex, a large hypochromic effect in the UV-Vis spectra accompanied by an induced negative band at 240 nm in the CD spectra was observed, indicating a binding mode via intercalation of the square-pyramidal complex between the G-quartets of the G-quadruplex. The gold(III)-TMPyP4 complex 19 was also reported as an hTel G4 stabilizer and a potent telomerase inhibitor, as proven by the effective inhibition of the PCR amplification of a G4 sequence in the PCR-stop assay and the observed 57% inhibition of telomerase. Moreover, this Au(III) complex showed low cytotoxicity (IC50 > 50 μM) against normal nasopharyngeal cells.
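To put the affinity figures quoted above into thermodynamic terms, the SPR association constants can be converted into binding free energies and a selectivity ratio. The snippet below is a minimal sketch in Python using the Ni-TMPyP4 values from the text; the temperature is an assumed 298 K, not a value taken from the cited studies.

```python
import math

R = 8.314      # gas constant, J mol^-1 K^-1
T = 298.15     # assumed room temperature, K (not from the cited studies)

def dG_from_Ka(Ka):
    """Binding free energy (kJ/mol) from an association constant (M^-1)."""
    return -R * T * math.log(Ka) / 1000.0

# Association constants quoted above for Ni-TMPyP4 (complex 6) from SPR:
Ka_g4, Ka_duplex = 1e6, 1e7
print(f"G4:     dG = {dG_from_Ka(Ka_g4):.1f} kJ/mol")
print(f"duplex: dG = {dG_from_Ka(Ka_duplex):.1f} kJ/mol")
print(f"G4/duplex selectivity = {Ka_g4 / Ka_duplex:.1f}x")
```

An order-of-magnitude difference in Ka corresponds to only about 5.7 kJ/mol in binding free energy at room temperature, which is why selectivity is so hard to engineer.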
Besides the metal centre, the choice of the meso substituents on the porphyrin is another critical parameter influencing the interaction with G4. For example, when one or two meso-methylpyridinium groups of TMPyP4 are replaced by one or two long 4-aminoquinoline moieties, new metalloporphyrins are obtained, probably enhancing the G4 affinity and cell penetration (Fig. 4). Such modified manganese(III) complexes 22 and 28 show greater telomerase inhibition than the unmodified Mn(III)-TMPyP4 complex 7, with lower IC50-TRAP values of 11.5 and 8.6 μM, respectively.49 The binding rate was found to be fast in kinetic experiments, suggesting stacking of the quinoline substituents with a G-quartet. When one meso-methylpyridinium group of TMPyP4 is replaced by a simple polyamine side chain, the Mn(II) complex 24 shows a relatively high affinity (nearly 10^8 M^-1) for quadruplex DNA, although this was not accompanied by improved selectivity.48 Upon replacement of all four meso-methylpyridinium groups of the TMPyP4 ligand by four flexible bulky cationic moieties, the most impressive pentacationic manganese(III)-porphyrin complex 20 was constructed, which has a high affinity (nearly 10^8 M^-1) for quadruplex DNA as well as 10 000-fold selectivity towards G4 over duplex DNA.54 Such a high selectivity can also be explained by the steric effect of the bulky side arms preventing intercalation of the complex into duplex DNA, as similarly described for the porphyrin-bridged tetranuclear platinum complexes below. In contrast, applying the same modifications to the Ni-TMPyP4 analogues 21, 23 and 25 does not increase the G4 affinity, the selectivity or the telomerase inhibition activity. These modified Ni-TMPyP4 analogues inhibit telomerase-mediated elongation of the telomere primer only at 7-39 μM in the TRAP assay, and hence the inhibitory effect is even lower than that of the unmodified Ni-TMPyP4 complex 6.49 In addition, binuclear manganese/nickel-porphyrin derivatives 26-27 were synthesized, in which a linker of appropriate length connects the symmetric porphyrin dimer. In theory, this design should allow a sandwich-type binding mode with the quadruplex DNA structure, but these complexes were shown to be inefficient in stabilizing or discriminating G4 over the duplex.48 Our group also reported two clover-like, porphyrin-bridged tetranuclear platinum complexes 29-30 (Fig. 5).55 Different from the complexes mentioned above, the Pt(II) ions are coordinated at the side arms, providing high positive charge in the compounds and probably increasing the overall steric hindrance, which favours end-on stacking and prevents intercalation into duplex DNA. Indeed, both of them were found to effectively stabilize various kinds of G-quadruplexes (hTel, c-myc, c-kit and bcl2) in the parallel topology with high affinity but displayed negligible effects on the duplex. The π-π end-stacking binding mode was proven by the hypochromic effect in UV/Vis spectra and unquenched fluorescence upon titration of the G4 sequence. Furthermore, the maximum binding ratio of 4 : 1 ([complex]:[G4]) indicates the presence of other binding modes such as groove interactions.
The clover-like platinum complexes showed excellent anticancer activity that was attributed to a dual effect, the inhibition of telomerase activity (IC50-TRAP = 1.46 μM and 0.25 μM) and the repression of oncogene expression, ultimately inducing apoptosis and G2/M phase arrest in HeLa cells. Another special case is the ruthenium(II) polypyridyl complex 31 having a pyridine ligand attached to a porphyrin.56 Although the Ru(II) metal core dictates the octahedral geometry of this compound, the planar porphyrin ligand results in a high affinity for the hTel G4, accompanied by hypochromic effects in the UV/Vis spectra.
These investigations indicate that the central aromatic core (the TMPyP4 moiety) might be predominantly responsible for the interactions between metalloporphyrins and G4. The side arms also play significant roles, helping to improve the binding affinity and the selectivity over duplex DNA when rationally designed.
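As a rough illustration of what such affinity and selectivity figures imply in practice, one can evaluate the simple 1 : 1 binding isotherm at a few ligand concentrations. The Kd values in the sketch below are back-calculated from the ~10^8 M^-1 affinity quoted for complex 20, with the full 10,000-fold selectivity factor applied to duplex DNA; they are illustrative assumptions, not measured values.

```python
def fraction_bound(L_free, Kd):
    """Occupancy of a single site for simple 1:1 binding, ligand in excess."""
    return L_free / (Kd + L_free)

Kd_g4 = 1e-8       # from the ~10^8 M^-1 affinity quoted for complex 20
Kd_duplex = 1e-4   # assuming the full 10,000-fold selectivity factor

for L in (1e-8, 1e-7, 1e-6):
    print(f"[L] = {L:.0e} M: G4 {fraction_bound(L, Kd_g4):.1%}, "
          f"duplex {fraction_bound(L, Kd_duplex):.3%}")
```

Under these assumptions, at sub-micromolar ligand concentrations the G4 sites approach saturation while duplex occupancy stays below one percent.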
Metallophthalocyanines and derivatives
Phthalocyanine, a porphyrin derivative featuring aromatic rings fused to each pyrrole moiety of the basic porphyrin skeleton, with nitrogen atoms in the meso positions, provides a more extended aromatic surface well suited for end-stacking on a terminal G-quartet. A series of metallophthalocyanines, especially with zinc(II) and nickel(II) as metal centres, have been reported as novel and potent telomerase inhibitors (Fig. 6).57-61 The phthalocyanine skeleton of these complexes was modified by introducing four or eight cationic oxygen/sulfur-armed quaternary ammonium groups on the fused aromatic rings (complexes 32-39), increasing the number of cationic charges and enhancing the steric hindrance, both of which make the complexes more favourable G4 binders. Compared to the corresponding metalloporphyrins, these metallophthalocyanines display enhanced binding affinities and selectivity for the G-quadruplex over duplex DNA. For example, the Zn(II) complex 38 with eight cationic quaternary ammonium groups exhibits strong electrostatic interactions with grooves or loops, contributing to an approximately 6-fold G-quadruplex selectivity over duplex DNA and very effective telomerase inhibition (IC50-TRAP = 0.23 μM). Moreover, this complex prefers to induce conformational transitions from antiparallel to parallel G-quadruplex even in alkali-metal-deficient buffer.60 By comparison, the Zn(II) analogue 47 with fewer positive charges (4+) and less steric hindrance prefers an antiparallel G4 topology in alkali-metal-deficient buffer.61 A series of Zn(II) complexes with amido-armed phthalocyanines (41-46) were also reported with excellent affinities for hTel G4. In addition, a guanidinium-substituted metallophthalocyanine reported in 2009 binds parallel G4 with high affinity: when interacting with c-myc G-quadruplex DNA its dissociation constant is only 2 nM, which is the strongest binding interaction among all reported small-molecule-based G4-ligands.57,58 These investigations indicate that both the planar phthalocyanine moiety and the highly positively charged side arms contribute to the favourable binding to the G-quadruplex as well as the telomerase inhibition potential.57,58
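The 2 nM dissociation constant quoted for the c-myc interaction can be put in perspective by asking what free-ligand concentration a simple 1 : 1 binding model requires for a given occupancy; the short sketch below is a back-of-the-envelope calculation, not an analysis of the cited data.

```python
# For 1:1 binding, f = [L] / (Kd + [L])  =>  [L] = Kd * f / (1 - f).
Kd = 2e-9  # 2 nM, the dissociation constant quoted for c-myc G4 above
for f in (0.50, 0.90, 0.99):
    print(f"{f:.0%} of G4 bound at free ligand ~ {Kd * f / (1 - f):.1e} M")
```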
Metallocorroles and derivatives
Another porphyrin derivative is the corrole ligand, which provides additional geometries and electronic properties for the effective stabilization of transition metal ions in high oxidation states. A series of metallocorroles, mainly with Cu(II) or Mn(III) as metal centres (Fig. 7), were reported as effective G4-ligands and potent telomerase inhibitors.62-64 Interestingly, these complexes typically have a saddle-type geometry, as opposed to the planar metalloporphyrins. One example is the water-soluble, saddle-shaped, meso-methylpyridinium-substituted Mn(III)-corrole complex 48.62,63,65 Due to its favourable shape and high electron deficiency, this Mn(III)-corrole complex exhibits 64-fold selectivity towards G4 over duplex DNA according to the binding constant values and prefers to induce a hybrid G4 topology according to the characteristic CD signatures. In a PCR-stop assay, this complex effectively induces and stabilizes hTel and c-myc G-quadruplex DNA with IC50 values of 2.37 and 1.52 μM, respectively. The Cu(II)-corrole complex with the same meso-methylpyridinium substitution as the Mn(III)-corrole complex 48 was reported to be as effective and selective in G4 stabilization, its activity being slightly lower than that of the corresponding Mn(III)-corrole complex.62 Its binding constant to G-quadruplex DNA was found to be 50-fold greater than that to duplex DNA, and the IC50 values obtained in the PCR-stop assay are 3.51 and 2.74 μM for hTel and c-myc, respectively. CD spectra showed that this complex also prefers to induce the parallel-to-hybrid conformational transition of the hTel G4 sequence. Further Cu(II)-corrole complexes (49-54), modified by introducing three meso-substituted benzene-ring-armed pyridinium or quaternary ammonium moieties through different linkers into the corrole skeleton, were also investigated as G4-ligands.62 Such modifications increase the number of cationic charges, which promotes electrostatic interactions with the negatively charged DNA backbone as mentioned above. Both CD spectra and the PCR-stop assay indicated that these complexes are good at stabilizing G4 in the presence of micromolar Na+ concentrations, and that some of them induce an antiparallel G4 topology.
Metal-salphen and metal-salen complexes
Besides macrocyclic metal complexes, planar metal complexes with non-macrocyclic polydentate ligands acting as effective G4-ligands have also attracted ample attention. A representative group of these are the metal-salen and metal-salphen complexes, which had previously been proven capable of intercalating into duplex DNA through π-π stacking.66,67 A series of comparative metal-salphen complexes with similar polydentate ligands but different metal ion centres were constructed (Fig. 8) and found to be highly selective for the G-quadruplex over duplex DNA, e.g. a 50-fold selectivity for the Ni-salphen complexes as reflected by equilibrium dissociation constants of 0.1-1.0 μM for the G-quadruplex vs. 2.0 μM for duplex DNA. The interaction mode of the planar metal-salphen complexes with the G-quadruplex was proposed based on molecular modelling: the planar aromatic surface of the complex stacks on the terminal G-quartet with the exterior cationic side chains inserted into the opposite grooves. It is worth mentioning that a co-crystal structure of a typical square-planar Ni-salphen compound 57 complexed with telomeric G4 DNA in the parallel topology was successfully obtained.68 The TRAP assay also proved the potent telomerase inhibition activity of these metal-salphen complexes, with IC50 values in the low micromolar or even nanomolar range. Despite the inability of the salphen ligand itself to π-stack with and stabilize G-quartets due to its flexible geometry, coordination with suitable metal ions often induces a planar geometry and arranges the aromatic rings around the central metal ion in an optimal structure to enhance π-π stacking between the metal-salphen complexes and G4 structures. Moreover, modification of the salphen ligand with, for example, positively charged quaternary ammonium, alkyl-imidazolium or cyclic amine-based side chains, which can be protonated under physiological conditions, enhances the binding affinity for G4 by electrostatic interactions of the protonated side arms with the loops and grooves of the DNA backbone. All these factors (overall geometry, charge, modification of the salphen ligand) also dictate the selectivity of the metal-salphen complexes towards G-quadruplexes. For example, the Pt(II)-salphen complex 72 with cyclic amine-based side chains was reported to effectively stabilize the c-myc promoter G4 structure by end-stacking with the terminal G-quartets, resulting in the inhibition of oncogene expression both in a cell-free system and in cultured cells.73 It is worth mentioning that this complex modified with cyclic amine-based side arms exhibited a 10-fold higher inhibition activity than the non-modified complex. Moreover, several Pt(II)-salphen complexes such as 72 possess additional fluorescence emissive properties, allowing their cellular uptake and localization in living cells to be followed by confocal microscopy.71 A series of Ni(II)-, Cu(II)- and Pt(II)-based metal-salen complexes 83-91 with suitably modified salen ligands were also reported to act as efficient G4-ligands, illustrating again the significance of the planar geometry and of the modifications on the side arms for improving G4 binding (Fig. 9).73-75 For example, a planar Ni(II)-salen complex 87 with quaternary ammonium side chains is capable of selectively stabilizing an oncogene promoter G4 over duplex DNA, as shown by UV/Vis, CD and FRET assays.74
Another two interesting Ni(II)-salen complexes 90-91 with cyclic amine-based side chains were also investigated, the salen ligand of which contains the meso-1,2-diphenylethylenediamine moiety.75 Compared with the corresponding Ni(II)-salphen complexes 63-64, the presence of the meso-1,2-diphenylethylenediamine moiety provides steric hindrance eliminating the possibility of intercalation into duplex DNA, thus offering a slightly better selectivity for either a unimolecular or an intermolecular G-quadruplex over duplex DNA.
Metal-phenanthroline complexes and derivatives
Another representative group of planar non-macrocyclic polydentate metal complexes are the square-planar Pt(II)-phenanthroline complexes, in which the phenanthroline moiety can be replaced by analogues such as bipyridine, phenylpyridine, dipyridophenazine or phenanthroimidazole (Fig. 10). The positively charged Pt(II) centre is essential for DNA binding, and the square-planar geometry of these complexes again promotes π-π stacking with G-quartets. Comparative studies illustrated that ligands possessing an extended π surface favour G4 interactions. For example, the Pt(II) complexes 94 with bis-phenanthroline76 or phenanthroline-ethylenediamine77,78 ligands stabilize G-quadruplex structures, the interaction being stronger than with the bis-bipyridine and bipyridine-ethylenediamine analogues 92-93.76-78 Upon replacement of the phenanthroline by a phenanthroimidazole moiety, the latter possessing an extended π-delocalized surface and an aromatic pendant, the selectivity of the platinum complexes 102-103 towards G4 improved and the affinity constants for G4 binding were two orders of magnitude larger than those for duplex DNA.77,79,80 Organoplatinum complexes usually exhibit excellent photophysical properties, e.g. the organoplatinum(II) dipyridophenazine complexes 96-99 with C-deprotonated 2-phenylpyridine ligands. Upon interaction with G4 DNA, the emission intensity of the complexes is greatly enhanced.81
Fig. 9 Metallosalens with various side arms reported as G4-ligands.73-75
These photophysical experiments also provide additional evidence supporting the end-stacking binding mode with G4 DNA. The binding affinity, on the order of 10^6-10^7 M^-1 for G4 DNA, is stronger than that of the phenanthroline complexes, and at the same time potent inhibition of telomerase activity is observed in in vitro TRAP assays.81 Similar phenanthroline complexes with other metal centres, 106-109, are also effective G4 DNA binders capable of distinguishing G4 from duplex DNA (Fig. 10).82,83 For example, the Mg(II) complex with bis-phenanthroline selectively increases the melting temperature of G4 DNA but has only a negligible effect on the melting temperature of duplex DNA, illustrating selective G4 stabilization.82 In the case of the Ni(II) complex with the phenanthroline-ethylenediamine ligand, the in situ formation of the complex/hTel G4 DNA adduct was directly observed by electrospray ionization mass spectrometry (ESI-MS).83 A penta-coordinated Au(III) pyrazolylpyridine complex 106 interacts strongly and selectively with quadruplex DNA, in particular with c-myc, through π-π stacking; this is the first example of an Au(III) complex acting as a G4-ligand.84
Metal-terpyridine complexes and derivatives
Metal-terpyridine complexes with Cu(II), Zn(II), Pt(II) and Pd(II) centres can act as effective G4-ligands (Fig. 11 and 12).85-88 The central metal ion, the number of aromatic rings, and the number and position of the substituents on the terpyridine scaffold play critical roles in G4 recognition and stabilization. A series of comparative studies of metal-terpyridine complexes demonstrated that the binding affinity and selectivity for G4 DNA depend mainly on the geometry of the complex: effective G4 stabilizers must exhibit at least one planar aromatic surface accessible for π-π stacking interactions with G-quartets. Thus the capability of the zinc(II)-terpyridine complexes 110-111 for G4 binding was poor due to their non-planarity, as reported by Teulade-Fichou et al.,85 whereas the similar Cu(II) complexes 113-114 and Pt(II) complexes 131-132, reported by Bertrand et al.85 and Wang et al.,86 respectively, are good G4 stabilizers due to their planar geometry.
Modification of the terpyridine by attaching additional aromatic rings, charged peralkylated ammonium groups or cyclic amine-based side chains can further improve G4 binding.85 Another example is a group of Cu(II)- and Pt(II)-terpyridines 118-121 constructed by Gama et al., in which the terpyridine is tethered to a planar anthracene moiety via different linkers.87 All these complexes are potent G4 stabilizers and telomerase inhibitors, exhibiting good affinity and selectivity for the G-quadruplex over duplex DNA. It is interesting to note that the strength of the ligand/G4 interactions increases with the linker size between the anthracenyl moiety and the terpyridine chelating unit. Extension of the planar aromatic surface of the terpyridine scaffold also enhances the binding affinity and specificity of metal complexes for G4 DNA. Several N-donor tridentate (terpyridine-like) ligands with a more extended planar aromatic surface than the simple terpyridines were designed, and the corresponding Pd(II), Cu(II) and Pt(II) complexes 125-129 exhibit enhanced binding affinity and specificity for G4 DNA (Fig. 11).89 It is interesting to note that the Pd(II) complex 129 exhibits a better stabilization effect on G4 DNA than the corresponding Pt(II) and Cu(II) complexes 128 and 130 according to the FRET assay. ESI-MS and UV/Vis measurements suggest that the Pd(II) complex has a greater ability to coordinate DNA bases at room temperature, whereas the Pt(II) complex predominantly binds non-covalently to G4 DNA.
Here we concentrate on platinum(II)-terpyridine complexes as these are the most widely studied (131-146 in Fig. 12).85,88,90,91 Besides very few examples with a sulfur-atom-armed cyclic amine as the monodentate ligand, most reported platinum(II)-terpyridine complexes can be divided into two main groups: the monodentate ligand is either a labile chloride or an alkynyl moiety, both ensuring a planar geometry of the complex optimal for G4 interactions through π-stacking. The presence of the labile chloride opens the possibility of direct coordination of the platinum(II)-terpyridine complex to G4 DNA.85,88,90,91 For example, the complexes 131-132 can interact with a telomeric G4 sequence with good affinity and selectivity via platination of the adenine residues in the loops.42 However, a labile chloride does not guarantee platination of G4: e.g. complex 146,89 carrying a bis-quinolino modification on the terpyridine, exhibits not only a larger π-delocalized surface but also extensive steric hindrance, resulting in no platination of G4 DNA. Members of the second group, 142-145, with an alkynyl moiety as the monodentate ligand, not only act as good G4 stabilizers with high affinity and selectivity but also exhibit excellent optical properties. Hence the Pt(II)-terpyridyl alkynyl complexes can additionally be used to monitor the G-quadruplex.86,92 Furthermore, the terpyridine ligands of all these Pt(II) complexes were modified with additional aromatic rings, or with cyclic amine- or peralkylated ammonium-based side chains, resulting in enhanced binding affinity and superior selectivity for G4 structures over duplex DNA.
Octahedral metal complexes with planar ligands
With few exceptions, all the metal complexes described above possess planar geometry due to the planar macrocyclic or non-macrocyclic polydentate ligands coordinated to various metal centres. Several octahedral metal complexes with Ru(II), Ir(III) or Fe(III) centres bearing planar ligands have also been investigated as potent G4-ligands (Fig. 13), e.g. ruthenium(II) polypyridyl complexes. Ruthenium complexes were initially developed as alternatives to platinum anticancer drugs because of their prominent DNA binding properties and outstanding anticancer activity.2,93 One of the most extensively examined complexes, [Ru(bpy)2(dppz)]2+ (147), is known as a molecular "light switch" for DNA that can intercalate between duplex DNA base pairs by π-π stacking.94,95 This observation prompted scientists to direct their efforts towards the construction of Ru(II) complexes as G4-ligands. Recent research shows that the complexes [Ru(bpy)2(dppz)]2+ (147) and [Ru(phen)2(dppz)]2+ (148) serve as prominent molecular "light switches" not only for duplex DNA but also for G4 DNA in Na+- or K+-containing buffer.96,97 However, the affinity and selectivity of these complexes towards G4 DNA are rather weak and their G4 stabilization effect is not prominent (ΔTm < 1°C at complex/G4 = 0.75 : 1). An enhancement of the stabilization effect and selectivity for the G-quadruplex could be achieved by suitably modifying the polypyridyl ligand of the Ru complexes. For example, the ruthenium complex 149, [Ru(bpy)2(dppz-idzo)]2+ with an imidazolone substituent on the dppz ligand, was constructed,98 exhibiting not only a remarkable "light switch" effect for G4 DNA in K+-containing solution (300-fold enhancement in emission) but also a powerful ability to induce and stabilize the formation of the antiparallel G4 structure in buffer solution without alkali-metal ions.98 Its powerful G4 stabilization ability was evident from the significant increase in the melting temperature of G4 DNA, stronger than that achieved by the classic [Ru(bpy)2(dppz)]2+ complex, implying its great potential to act as a telomerase inhibitor and an anticancer agent. Two other Ru(II) polypyridyl derivatives 150-151 bearing flexible cyclic amine-based side chains on the large planar aromatic ligand were also constructed by Chao et al.99 Measurements using CD spectroscopy, FRET and PCR-stop assays proved that these complexes can effectively bind and stabilize G-quadruplex structures (ΔTm = 5-15°C), thus exhibiting a concentration-dependent inhibitory effect on telomerase activity (IC50-TRAP = 100-500 nM), ultimately resulting in long-term anti-proliferation of cancer cells. Similarly, [Ru(dppz)2(bpy)]2+ derivatives 153-154, bearing two dppz ligands and one modified bpy moiety with two quaternary ammonium pendants, again exhibited stronger interactions with G4 DNA than the classic [Ru(bpy)2(dppz)]2+ and increased the melting temperature of G4 DNA by 7.0-9.4°C.100 Lately, a series of Ru(II) polypyridyl complexes 155-158 containing the phenyl-imidazo[4,5-f][1,10]phenanthroline ligand were also shown to stabilize the formation of G-quadruplex structures, such as hTel G4 and c-myc G4 (ΔTm = 9-18°C), via optical spectroscopy, FRET and PCR-stop assays, resulting in effective inhibition of telomerase in the TRAP assay.101-104
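Many of the ΔTm values cited in this section come from FRET melting assays. A minimal sketch of how such a stabilization ΔTm is typically extracted, as the shift of the normalized-signal midpoint between ligand-free and ligand-bound melting curves, is shown below; the curves and their parameters are synthetic, invented purely for illustration.

```python
import numpy as np

def melt_curve(T, Tm, width=5.0):
    """Idealized two-state melting sigmoid (0 = folded, 1 = melted)."""
    return 1.0 / (1.0 + np.exp(-(T - Tm) / width))

T = np.linspace(20.0, 95.0, 151)
f_free  = melt_curve(T, Tm=60.0)   # G4 alone (invented Tm)
f_bound = melt_curve(T, Tm=74.0)   # G4 + stabilizing ligand (invented Tm)

def estimate_tm(T, f):
    """Tm = temperature at which the normalized signal crosses 0.5."""
    return float(np.interp(0.5, f, T))   # valid because f increases with T

dTm = estimate_tm(T, f_bound) - estimate_tm(T, f_free)
print(f"ligand-induced stabilization: dTm = {dTm:.1f} degC")
```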
Moreover, cellular studies, including cytotoxicity assays and flow cytometric analysis of the mitochondrial membrane potential, showed that these Ru(II) complexes inhibit the growth of cancer cells by effectively promoting apoptosis, resulting in antiproliferative activities in the low micromolar range, comparable with cisplatin.
A series of octahedral cyclometallated Ir(III) complexes have been constructed as molecular "light switches" for G4 DNA.105 Upon binding to a G-quadruplex, these complexes exhibit a luminescence enhancement whose magnitude is higher than that upon binding to duplex DNA. The difference in the emission intensity of these Ir(III) complexes suggests a moderate selectivity for the G-quadruplex over duplex DNA, but their G4-mediated anticancer activity was not discussed.
The octahedral Fe(III) complex 159 contains two meloxicam ligands and also induces and stabilizes the hTel G4 structure with good affinity and selectivity, as shown by UV/Vis, fluorescence and CD spectroscopy as well as PCR-stop assays.106 Although the binding constant between 159 and hTel G4 (4.53 × 10^5 M^-1) is smaller than that of most Pt(II) complexes, it is an order of magnitude higher than that for the interaction with duplex DNA. According to molecular modelling, the binding mode between the Fe(III) complex and the G4 structure is end-stacking of the meloxicam ligand on the terminal G-quartet. Because of the octahedral geometry of these metal complexes, the metal centre and the molecule as a whole are unlikely to end-stack on or intercalate into the G-quartets. It is actually the aromatic ligand with its large π-delocalized planar surface, such as the meloxicam or the dppz ligand, that adopts the end-stacking or intercalation binding mode with the G-quartets, while the cationic character of the whole molecule, imparted by the metal centre, further enhances the affinity for the negatively charged DNA backbone at the grooves and loops.
Exceptions: introduction of metal centres reducing the binding capability of organic G4-ligands
In all cases discussed above, regardless of whether the initial ligand was planar or not, the introduction of the metal centre resulted in a stronger interaction with G4 DNA by providing a more optimal molecular geometry and cationic character, and rational modification of the ligand can further improve the affinity and selectivity towards G4 DNA. However, there are also adverse cases in which the introduction of the metal ion breaks the initially planar π-delocalized surface of the ligand, resulting in a dramatic weakening of the binding to G4 DNA.107,108 A representative example is the bisquinolinium ligand 160, which has a large π-delocalized surface and is one of the most selective G4 binders with the highest affinity. However, addition of Cu(II) to a solution of a G4-bisquinolinium mixture leads to unfolding of the G4 structure into a single strand, meaning that the stabilizing effect of the bisquinolinium ligand is dramatically weakened by the coordination of Cu(II) (Fig. 14).107 Presumably this effect is due to the change from the planar geometry of the free bisquinolinium ligand to a tilted arrangement upon coordination of Cu(II).
Multinuclear metal assemblies
Besides the monometallic complexes, a series of rationally designed multinuclear metal assemblies have been described as effective G4-ligands and potent telomerase inhibitors. Their binding modes can also be end-stacking, groove binding, loop binding and others, depending on the geometry of the supramolecule.
Multinuclear platinum complexes are the most widely investigated supramolecular G4-ligands. In earlier studies, dimetallic Pt(II) and Pd(II) complexes with bis-carboxamido pyridines were synthesized as potential G4-ligands because their planar surface offers a favourable geometry for stacking with G4 DNA. However, the dimetallic Pt(II) structure 172 only exists in the solid state and dissociates in solution to the monometallic complex; it is actually this monometallic complex that subsequently interacts with the G-quadruplex.109 The related dimetallic palladium(II) complex 173 with bis-carboxamido pyridines is not a good G4-ligand, probably because of its poor solubility.109 A number of monometallic variants of this dimetallic Pd(II) complex with various side arms were also described, but only the positively charged complex containing tertiary amine-modified side arms moderately stabilizes hTel G4 DNA.109 The first stable multimetallic G4-ligand and telomerase inhibitor described is the tetranuclear platinum(II) molecular square 162 reported by Kieltyka et al., a representative of the class of multinuclear metal assemblies with flat surfaces.110 This square arrangement has four [Pt(en)]2+ units at the corners and four 4,4′-bipyridyl bridging ligands. The complex has a flat surface for effective end-stacking on the terminal G-quartet and is highly positively charged for strong electrostatic interactions with the DNA backbone. The ethylenediamine ligands of the Pt(II) ions at the corners also allow hydrogen bonding with the DNA backbone, further improving the binding affinity for G4. As a result, this complex strongly stabilizes hTel G4 (ΔTm = 34.5°C, FRET assay) with high selectivity, the telomerase inhibition activity also being very high (IC50-TRAP = 0.2 μM, in vitro TRAP assay). Based on this representative compound, our group also synthesized a series of tetranuclear Pt(II) assemblies with different bridging ligands in order to understand the structure-activity relationship for G4 interactions.111,112 For example, the platinum supramolecular square 164 with pyridyl bridging ligands also stabilizes G4 DNA (hTel and the c-kit promoter) by end-stacking on the terminal G-quartets (binding ratio complex/DNA = 2 : 1), but its selectivity is not prominent: FRET assays showed a complex-induced increase in melting temperature of ca. 27.4°C for hTel and 8.5°C for duplex DNA, respectively.111 This effect was attributed to the relatively smaller flat surface compared to the previously reported 4,4′-bipyridyl-bridged analogue 162, whose dimensions perfectly match the size of the G-quartets (10.8 Å). Further studies showed that the tetranuclear Pt(II) quasi-cubes 163 exhibit enhanced selectivity towards G4 over duplex DNA.112 The complex-induced increase in melting temperature was ca. 33.5°C for hTel and less than 1°C for duplex DNA, with a binding constant for G4 two orders of magnitude higher than that for duplex DNA. The binding ratio between complex 163 and G4 was unexpectedly high (complex/DNA = 6 : 1), and molecular docking studies suggested both end-stacking and groove-binding modes. All these Pt(II) squares and quasi-cubes acted as effective telomerase inhibitors, with IC50-TRAP values ranging from 0.12 to 0.35 μM.
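The IC50-TRAP values quoted throughout this review are obtained by fitting telomerase activity against ligand concentration. The sketch below illustrates such a fit with hypothetical dose-response data and a brute-force least-squares search over a two-parameter logistic model; real analyses would use a proper nonlinear regression routine.

```python
import numpy as np

# Hypothetical TRAP dose-response: fraction of telomerase activity remaining
# at each complex concentration (uM). These numbers are invented.
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0])
activity = np.array([0.98, 0.93, 0.72, 0.41, 0.15, 0.05])

def logistic(c, ic50, hill):
    """Two-parameter inhibition model: activity = 1 / (1 + (c/IC50)^h)."""
    return 1.0 / (1.0 + (c / ic50) ** hill)

# Brute-force least-squares search over IC50 and Hill slope:
best = min(
    (float(np.sum((logistic(conc, ic50, h) - activity) ** 2)), ic50, h)
    for ic50 in np.logspace(-2, 1, 200)
    for h in np.linspace(0.5, 3.0, 51)
)
print(f"IC50-TRAP ~ {best[1]:.2f} uM (Hill slope {best[2]:.2f})")
```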
The first stable trinuclear and dinuclear Pt(II) complexes 165-166 acting as effective G4-ligands and telomerase inhibitors were synthesized by our group; their Pt(II) corners are bridged by a (quaternized) trigeminal chelating ligand.34,113,114 Both FRET and ITC assays indicated high affinity and excellent selectivity of these complexes for hTel G4 (ΔTm > 30°C) over the promoter quadruplexes c-myc and bcl-2 (ΔTm < 6.0°C) as well as duplex DNA (ΔTm < 1.5°C). The PCR-stop and in vitro TRAP assays indicate that these complexes effectively induce hTel G4 formation, thereby inhibiting telomerase activity with IC50-TRAP values in the micromolar range. Interestingly, the V-shaped dinuclear Pt(II) complexes 166 preferentially induce the hybrid parallel/antiparallel hTel G4 topology, while the propeller-shaped trinuclear platinum(II) complexes allow the recognition of different G4 topologies with different Pt(II) corners, possibly due to their flexibility.114 For example, the complex with diethylenetriamine Pt(II) corners stabilizes antiparallel G4, while the complex with bis-(2-pyridylmethyl)amine Pt(II) corners stabilizes parallel G4. Based on the V-shaped dinuclear Pt(II) complex, we reported the first nitroxide-tagged G4-ligand 167 as a new strategy for investigating the detailed interactions between small molecules and the G-quadruplex.113 Using the nitroxide moiety as a spin label, the inter-spin distance between the two G4-bound molecules of 167 could be measured by electron paramagnetic resonance (EPR) spectroscopy. Combining the EPR-measured distances with molecular docking revealed that this complex predominantly binds to neighbouring grooves when hTel adopts the antiparallel conformation. Very recently, four dinuclear Pt(II)-terpyridine complexes 168-171 were also reported to act as efficient G4-ligands, with high selectivity for G4 (ΔTm up to 17°C) over duplex DNA (ΔTm = 1°C).115 Multimetallic supramolecules based on non-platinum metal centres, such as Ru(II), Ni(II), Cu(II), Zn(II), Fe(II), Tb(III) or Ce(IV), were also investigated as G4-ligands (Fig. 16-19). Among them, the multimetallic Ru(II) complexes possess additional excellent fluorescence emissive properties, and their interaction with G4 DNA is in some cases accompanied by great changes in their optical properties. For example, the binuclear Ru(II) complexes 175-176 (Fig. 16) are virtually non-luminescent in aqueous solution; however, upon binding to the G4 DNA d[AG3(T2AG3)3] in the presence of K+ ions, the emission is significantly enhanced (150-fold), accompanied by a blue shift of 30 nm.97,116 It is worth mentioning that the emission enhancement of the Ru(II) complexes is only observed upon binding to G4 structures containing lateral loops that are at least three bases long. Thus it is proposed that the possible G4 binding mode is "end-pasting" or "threading" through the G4 lateral loops, in addition to partial intercalation. More interestingly, these dinuclear Ru(II) complexes can be obtained in enantiomerically pure form and thus offer novel enantioselectivity for interacting with hTel G4 DNA of antiparallel basket-like topology.117 Although the complexes discussed above moderately stabilize G4 DNA (ΔTm = 3.8-5.4°C in thermal melting experiments), their selectivity towards G4 over duplex DNA is not optimal: they also bind duplex DNA with high affinity, resulting in a moderate emission enhancement (50-fold) as well.
Changing the bridging ligand from tppz to obip gives a second-generation luminescent bimetallic complex 177, displaying relatively better selectivity towards G4 DNA. The emission enhancement of 177 with G4 is one order of magnitude higher than that with duplex DNA, and the complex/G4 interaction increases the melting temperature by 5.8-9.4°C in the presence of alkali metal cations.118 Chao et al. constructed a series of comparative dinuclear Ru(II) complexes 178-181 with larger bridging ligands, exhibiting high selectivity for binding to G4 over duplex DNA. These complexes can induce and stabilize antiparallel hTel G4 (ΔTm = 12-24°C) in the presence or absence of K+ with 1 : 1 stoichiometry, leading to promising inhibitory effects on telomerase activity and cancer cell proliferation.119 In addition, two trigeminal-ligand-bridged trinuclear ruthenium(II) complexes 182-183 were also reported to moderately stabilize G4 DNA (ΔTm = 14-19°C at complex/DNA = 5 : 1), but information about their selectivity between G4 and duplex DNA is not available.120 These investigations indicate that bridging ligands with appropriately sized planar surfaces are essential for the strong and selective interaction of multimetallic Ru(II) complexes with the G-quadruplex. All the multimetallic Ru(II) complexes described above are bridged by a rigid, planar bridging ligand with extensive aromaticity. In addition, a novel bimetallic Ru(II) complex 184 was reported in which two octahedral fluorescent Ru(II) monomers are connected by a partially flexible chain (non-cyclic crown ethers).121 Compared to other bimetallic Ru(II) complexes, this complex shows a highly distinct fluorescence upon binding to G4 compared to duplex DNA; the intensity difference can even be distinguished by the naked eye. This indicates a relatively high selectivity toward G4, the binding constant for which is about one order of magnitude higher than that for duplex DNA. This complex also moderately stabilizes the G4 structure (ΔTm = 12.7°C) and was also considered a potent telomerase inhibitor. Besides the multimetallic Pt(II) and Ru(II) complexes, several bimetallic terpyridyl-containing complexes with either homogeneous or heterogeneous metal centres were also reported as effective G4 binders (Fig. 17). For example, a dinuclear Cu(II)-terpyridine complex 185 effectively binds G4 DNA with very high affinity and stabilizes the antiparallel topology; its selectivity for G4 is 100-fold higher than that for duplex DNA.122 Bimetallic complexes 186-189 with either homogeneous or heterogeneous metal centres, e.g. Zn(II)/Zn(II), Cu(II)/Cu(II), Pt(II)/Zn(II) or Pt(II)/Cu(II), were also constructed based on a novel terpyridine-cyclen ligand, in which both the terpyridine and cyclen moieties possess efficient metal-chelating properties. These dinuclear complexes all exhibited higher binding affinities towards G4 than their monometallic counterparts.123 In addition, Morrow et al. reported two Zn(II) macrocyclic complexes 190-191, structurally similar to the monometallic counterparts of complex 186, bearing a non-planar dansyl pendant and an acridine pendant, respectively.124-126 Complex 190 shows 110-fold selectivity in binding to hTel G4 over duplex DNA, evidenced by an increase in fluorescence and a simultaneous shift in emission upon G4 interaction.
ITC, fluorescence and NMR spectroscopy give a complex/G4 stoichiometry of 2 : 1 and indicate that each complex binds to two spaced thymines within two separate loops of the hTel G4. Notably, this is the first reported metal-complex-based hTel G4-ligand utilizing the thymine residues as the primary mode of recognition. Although loop binding has also been found for several previously reported G4-ligands, it was mainly a secondary mode of interaction driven by aromatic stacking.127,128 In comparison, complex 191, containing an acridine pendant, shows stronger binding to the G-quadruplex but binds indiscriminately to duplex DNA as well.
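Binding stoichiometries such as the 2 : 1 complex/G4 ratio above are often corroborated by the method of continuous variation (Job plot), in which the total concentration is held constant while the mole fraction of ligand is varied. The toy model below uses an idealized tight-binding limit to show where the Job-plot maximum falls for 1 : 1 and 2 : 1 binding; it is a sketch, not a treatment of the cited data.

```python
import numpy as np

def job_signal(x, n):
    """Complex concentration in the tight-binding limit for n:1 (ligand:G4)
    binding, with total concentration fixed at 1 and ligand mole fraction x."""
    return np.minimum(x / n, 1.0 - x)

x = np.linspace(0.01, 0.99, 99)
for n in (1, 2):
    peak = x[np.argmax(job_signal(x, n))]
    print(f"{n}:1 binding -> Job-plot maximum at x = {peak:.2f} "
          f"(theory: {n / (n + 1):.2f})")
```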
A novel cylinder-like bimetallic Ni(II) complex 192 (Fig. 18) with triple diimine ditopic ligands was constructed by Qu and co-workers, exhibiting a novel chirality-based selectivity for the hTel G4 structure over the duplex as well as over other types of G4 DNA.129 Only the P-enantiomer of this supramolecular cylinder selectively stabilizes hTel G4 DNA and simultaneously induces an antiparallel-to-hybrid conformational transition in the presence of Na+. The complex/G4 stoichiometry was determined to be 1 : 1, and an S1 nuclease cleavage assay was performed to probe the binding mode; the results support end-stacking, with the cylinder stacking on top of the terminal G-quartet via its extensive hydrophobic exterior. In addition, the positively charged metal centres allow strong electrostatic interactions with loops and grooves, so that the termini of the hTel G4 are protected from S1 nuclease cleavage. This example paved the way towards the development of a new class of G4-targeted chiral binders and anticancer drug candidates. In addition, two bimetallic terbium(III)-amino acid complexes 193-194 are capable of binding hTel G4 DNA as well as i-motif DNA, although their binding constants and stabilization effects were not as good as those of the previously described metal complexes.130
Metal assemblies as G4 DNA-cleavage agents
In the examples presented above, the metal complexes effectively stabilize certain G-quadruplex structures, thus inhibiting telomerase activity or oncogene expression, ultimately interfering with telomere maintenance and preventing the proliferation of cancer cells. However, this strategy usually requires long-term treatment to significantly shorten the telomeres of cancer cells, because each round of cell division removes only 50-200 nt of telomere length. Thus several metal-containing G4-ligands have recently been reported to induce direct cleavage of G4 DNA as an alternative anticancer strategy. The first reported metal complex acting as a G4 DNA-cleavage agent was a highly reactive high-valent oxo-manganese porphyrin species 195, formed in situ from the Mn(III)-TMPyP4 complex 7 in the presence of the oxygen-atom donor KHSO5 (Fig. 19).131 During the redox reaction only one face of the porphyrin is solvent-accessible, which is consistent with partial stacking of the metalloporphyrin on the external side of the terminal G-quartet. The Mn(V)=O species formed in situ from the Mn(III)-TMPyP4/KHSO5 system can act as a nuclease mediating oxidative cleavage of the bound hTel G4 sequence. Such oxidative damage consists of both guanine oxidation within the interacting G-quartet and 1′-carbon hydroxylation of the deoxyriboses carrying thymidine residues located on the neighbouring single-stranded loop. However, this highly reactive Mn(V)=O species cleaves both hTel G4 and duplex DNA indiscriminately, indicating that improved targeting and selective cleavage of hTel G4 structures are still required.131 Subsequently, a water-soluble dimetallic Fe(II)-EDTA complex 196 bridged by the planar polyaromatic ligand perylene was constructed as a selective G4 DNA-cleavage agent (Fig. 19).132,133 The bridging perylene ligand is a known effective G4 binder, allowing π-stacking of the dimetallic complex on the terminal G-quartets, while the two Fe(II)-EDTA cores bind in the opposite grooves.
Upon interaction, this complex selectively cleaves the G-quadruplex over duplex DNA in the presence of the reducing agent dithiothreitol, probably through a hydroxyl radical mechanism.132,133 The dimetallic Ce(IV) complex 197 with ethylenediaminetetramethylenephosphonic acid (EDTP) has recently been reported as a selective DNA-cleavage agent for intermolecular G-quadruplex DNA (Fig. 19).134 This complex can induce the assembly of a highly stable (3 + 1) intermolecular G4 structure by covalently binding to the G-rich sequence, and such a Ce(IV)-EDTP-DNA conjugate can further induce sequence-specific hydrolytic cleavage at a specific phosphodiester site of the hTel G4 DNA backbone. However, this complex possesses some inherent disadvantages, e.g. low cellular uptake, instability toward natural nucleases and self-cleavage, which limit its further application and investigation in vivo.
Very recently, a novel DNA-cleaving agent 198 has been reported to be capable of selectively cleaving intramolecular hTel G4 DNA.135,136 This complex was constructed by coupling an amino-terminal copper-binding motif peptide, GGHK, to an acridine-based G4-ligand (Fig. 19). The acridine moiety promotes the selective binding of the complex to hTel G4 over duplex DNA, and helps to position the catalytic metallodrug moiety CuGGH in close proximity to the G4 DNA. Owing to this G4-targeting property, complex 198 can selectively and efficiently induce irreversible cleavage of the G4 structure, rather than other structural states of telomeric DNA, in the presence of redox co-reagents. The 3′-overhang cleavage product was generated through both hydrolytic and oxidative cleavage mechanisms. Data from molecular modelling and MALDI-MS suggested that the major selective cleavage sites are at the A1-G2 and T6-A7 nucleotides. As a result, complex 198 caused significant shortening of telomeric DNA, cellular senescence and apoptosis in MCF-7 cell lines.
Conclusions
G-quadruplex nucleic acids are formed by the self-assembly of guanine-rich sequences and can be found in telomeres, oncogene promoters and non-translated RNA regions. They possess unique structures distinct from the well-known double-helical structure adopted by most genomic DNA. Such distinct structures offer a great opportunity for selective molecular recognition of the G-quadruplex over genomic duplex DNA. Moreover, the formation of G4 nucleic acids interferes with numerous biological pathways, including the maintenance of telomeres and the regulation of oncogene transcription and translation. Once an intramolecular G4 is formed within the human telomeric DNA sequence, the activity of telomerase is indirectly inhibited due to the loss of its substrate, resulting in telomere shortening in cancer cells. Because telomerase is silent in normal cells but up-regulated in most cancer cells, contributing to their immortality, molecules capable of inhibiting or down-regulating telomerase activity have the potential for selective toxicity toward cancer cells over normal cells. Intramolecular G4 structures formed within oncogene promoter regions can likewise lead to the inhibition of oncogene expression, again conferring selective toxicity toward cancer cells. Thus G4 nucleic acids have attracted much attention as potential clinical targets for the development of new types of anticancer agents.
Small molecules that can recognize and interact with the G-quadruplex with high affinity and specificity, termed G4-ligands, have been systematically investigated over the past few decades. Their capabilities to act as G-quadruplex stabilizers, telomerase inhibitors, modulators of oncogene transcription and translation, as well as DNA-cleavage agents have been assessed, opening new avenues for cancer chemotherapy. This review summarizes the various families of metal complexes reported in the literature that have G4 DNA-targeting properties and thereby exhibit potent anticancer effects through selectively stabilizing or cleaving the G-quadruplex over duplex DNA. Most of this understanding concerns G4 DNA at telomeres and promoter regions, although G-quadruplex motifs are certainly present in other regions of the genome. These metal complexes possess particular three-dimensional geometries, square-planar in most cases, but also square-based pyramidal, cube-like or even octahedral, and the metal ions must arrange the scaffold into the optimal geometry for G4 binding. In fact, different metal centres coordinating the same organic ligand can generate metal complexes with distinctly different overall G4-binding properties.
The interactions between metal complexes and the G-quadruplex have been characterized by numerous biophysical methods, revealing the strength and specificity of these G4-ligand interactions. A summary of the current knowledge of metal-containing G4-ligands identifies three main binding sites: the G-quartets, the grooves and the loops of the G-quadruplex. The large majority of known G4-ligands exhibit a preference for the first binding site, usually through end-stacking of the metal complex scaffold onto one of the terminal G-quartets. To optimize this end-stacking binding mode, the metal complexes reported in the literature generally present a suitably sized planar surface, larger than that of typical DNA intercalators, so as to eliminate the possibility of an intercalative binding mode. In some cases the scaffold of the metal complex is flexible but attains planarity upon end-stacking onto the external G-quartet. On the other hand, the presence of bulky flexible side chains at the periphery of the central planar core increases the steric hindrance of the metal complexes, which can prevent intercalation of the G4-ligands between DNA base pairs. Moreover, positively charged or protonatable side chains not only enhance the strength of the electrostatic interaction with the negatively charged DNA phosphate backbone but also improve the fit of the molecule into the grooves, loop residues or cavities, resulting in an increase in the affinity of the G4-ligands. All these structural design considerations are beneficial for increasing the selectivity of metal complexes for G4 over duplex DNA.
Although the majority of G4-ligands reported so far recognize G-quartets and present an end-stacking binding mode, this interaction only allows a selective discrimination between G4 and duplex DNA; it can hardly distinguish one G4 structure from another, which limits its therapeutic usefulness. As a consequence, more promising binding modes, such as groove binding and loop binding, are favourable for enhancing selectivity. Because different G-quadruplex topologies endow the grooves and loops with various dimensions and accessibilities, G4-ligands presenting specific groove or loop binding modes have great potential to discriminate among the different G4 DNAs derived from various G-rich sequences. It is a big challenge to combine targeting of the grooves/loops with targeting of the G-quartets, and to date this has provided only limited success. In these examples multiple binding modes can occur simultaneously, but groove/loop binding is mainly a secondary mode of interaction, usually accompanying the primary end-stacking interaction. Only a few bimetallic Pt(II), bimetallic Ru(II) and macrocyclic Zn(II) complexes utilize the grooves and loops as the primary mode of recognition. This provides great inspiration for the design of the next generation of G4-ligands, which will depend less on general G-quartet stacking and more on specific groove/loop recognition.
Besides π-stacking and groove/loop binding, metal complexes containing labile groups (Cl, H2O, etc.) can also coordinate the metal centre directly to G4 DNA nucleobases, similar to the platination of duplex DNA by cisplatin. This type of coordination either occurs at a single nucleotide, in most cases a guanine, or causes cross-linking of two nucleobases, probably enhancing the affinity and specificity of the metal complexes for G4 DNA, as for compounds 131-132. Regardless of the binding mode, these metal-containing G4-ligands effectively stabilize certain G-quadruplex structures, thus inhibiting telomerase activity or oncogene expression, ultimately interfering with telomere maintenance and inhibiting the proliferation of cancer cells. On the other hand, several rationally designed metal complexes have very recently been reported to possess the ability to selectively cleave hTel G4 over duplex DNA through either an oxidative damage or a hydrolysis mechanism; the oxidative cleavage processes require the presence of redox co-reagents. This is considered an alternative anticancer strategy, one that can significantly shorten the telomeres of cancer cells in a much shorter time than other G4 stabilizers.
From all these studies, the rational design of metal complexes to selectively interact with, stabilize or cleave G-quadruplex structures has evolved into a promising strategy for the development of anticancer drugs with selective toxicity towards cancer cells over normal ones. However, G4-ligand development has been dominated by in vitro biophysical and biochemical investigations, and understanding G4-ligand activity and selectivity in the in vivo environment remains a major challenge. After many years, only one promising in vivo G4-ligand, Quarfloxin, has reached phase II clinical trials, for the treatment of neuroendocrine and carcinoid tumors.30 Upon its binding to the G-quadruplex structure in the ribosomal DNA (rDNA) template, the interaction between the nucleolin protein and rDNA is disrupted, resulting in the inhibition of rRNA biogenesis and ribosome synthesis in cancer cells. Similar ribosomal DNA G4-targeted metal complexes have not yet been reported; this may point the way towards the development of rDNA G4-targeted metal complexes for clinical use.
The structures of G4 sequences in vivo are highly dynamic: a given sequence may exist in its linear form or interact with proteins. In this sense, two critical questions need to be addressed for the development of G4-targeted drugs: what are the in vivo targets of these G4-ligands, and how can their accessibility be controlled? Thus it is important for G4-ligands not only to recognize already-folded G4 arrangements but also to reach the target and induce the formation of the G4 conformation. Several metal complexes have been reported to be capable of inducing the formation of certain G4 conformations, and those possessing distinct photophysical properties can be used as "light switches" for G-quadruplexes to monitor the occurrence of G4 structures even in living cells. Thus it may be possible to design G4-ligands capable of simultaneously inducing and tracing in real time the occurrence of G4 structures during cancer therapy. The chirality-based selectivity of metal complexes is also very attractive for designing G4-ligands: different enantiomers can recognize different G4 topologies and even show differential uptake mechanisms and cellular localization. The utilization of these metal complexes as in vivo probes for G4 DNA or as unique photosensitizers is also likely to be an active area.
Additionally, an important first step in designing selective G4 DNA-cleavage agents has been taken with the catalytic metallodrug strategy, in which metal complexes act as artificial nucleases mediating the hydrolytic or oxidative cleavage of telomeric DNA. The combination of metallodrugs and G4 DNA binding may also catalyze oxidation directed toward small substrates present in the bulk, thus becoming a DNAzyme. For example, the heme cofactor of natural enzymes known as iron(III) protoporphyrin IX (or hemin) is endowed with high-affinity G-quadruplex binding capability, and hemin-binding G4 DNA aptamers have been found to exhibit peroxidase-like activity, which can be used as a sensitive colorimetric probe for the identification of single nucleotide polymorphisms.137,138 This may open new avenues for the application of metal-containing G4-ligands as attractive catalytic labels in biosensing.
In summary, the development of metal-containing G4-ligands as novel anticancer drugs is a rapidly growing field. Despite some progress, the majority of metal-containing G4-ligands reported to date do not have realistic drug-like structures and their in vivo applications are still very rare. Thus further advancement in this field is expected to focus on medicinal studies, and in the next few years we hope to see new generations of metal-containing G4-ligands enter clinical trials.
"year": 2017,
"sha1": "b205180f8ba32942271d71682431ebf99addcd47",
"oa_license": "CCBY",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2017/qi/c6qi00300a",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "40a2c18e4d40900c199bbb6bd6b099ed59aff3d2",
"s2fieldsofstudy": [
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
231574554 | pes2o/s2orc | v3-fos-license | CJEM journal club: corticosteroids use for critically ill COVID-19 patients
1 Royal College Emergency Medicine Training Program, University of British Columbia, Vancouver, BC, Canada 2 Centre for Clinical Epidemiology and Evaluation, Vancouver Coastal Health Research Institute, Vancouver, BC, Canada 3 Department of Emergency Medicine, University of British Columbia, Diamond Health Care Centre 11th floor, 2775 Laurel Street, Vancouver, BC V5Z 1M9, Canada Full Citation: Sterne JA, Murthy S, Diaz JV, Slutsky AS, Villar J, Angus DC, Annane D, Azevedo LC, Berwanger O, Cavalcanti AB, Dequin PF. Association between administration of systemic corticosteroids and mortality among critically ill patients with COVID-19: a meta-analysis. JAMA. 2020 Sep 2.
Background
The role of corticosteroids in treating critically ill COVID-19 patients is unclear.
Objectives
To evaluate the effect of corticosteroids on 28-day mortality in critically ill COVID-19 patients.
Design
Prospective meta-analysis with analyses conducted prior to publication of trial data.
Eligibility criteria
Randomized controlled trials (RCTs) evaluating use of corticosteroids in critically ill COVID-19 patients compared to placebo or standard care.
Outcomes
The planned primary outcome was 30-day mortality; the secondary outcome was serious adverse events.
Main results
Seven RCTs including 1703 critically ill patients were included (678 randomized to corticosteroids and 1025 to placebo or standard care). Five RCTs reported mortality at 28 days, 1 at 21 days, and 1 at 30 days. In the primary analysis, the odds ratio (OR) for mortality was 0.66 (95% CI 0.53-0.82, P < 0.001; I² = 15.6%, P for heterogeneity = 0.31). For the 6 trials that reported serious adverse events, 64 occurred in the 354 patients randomized to corticosteroids and 80 in the 342 patients receiving placebo or standard care. Effect estimates for the pooled dexamethasone trials and the pooled hydrocortisone trials were similar; there was only one small, imprecise study evaluating the effect of methylprednisolone. Mechanical ventilation, duration of symptoms, and vasopressor use were not associated with an increased effect of corticosteroids in exploratory subgroup analyses. The GRADE assessment rated the quality of evidence as moderate.
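For readers who want to see the mechanics behind a pooled estimate of this kind, the following minimal Python sketch illustrates fixed-effect inverse-variance pooling of log odds ratios together with the I² heterogeneity statistic. The per-trial 2x2 counts are hypothetical placeholders, not the actual data from this meta-analysis.

import math

# Hypothetical counts per trial: (deaths_steroid, n_steroid,
# deaths_control, n_control). Illustrative only.
trials = [
    (95, 324, 283, 683),
    (11, 75, 20, 73),
    (26, 105, 33, 92),
]

weights, log_ors = [], []
for a, n1, c, n0 in trials:
    b, d = n1 - a, n0 - c                 # survivors in each arm
    log_or = math.log((a * d) / (b * c))  # log odds ratio
    var = 1 / a + 1 / b + 1 / c + 1 / d   # Woolf variance of the log OR
    weights.append(1.0 / var)             # inverse-variance weight
    log_ors.append(log_or)

pooled = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))
ci_lo, ci_hi = math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)
print(f"fixed-effect OR = {math.exp(pooled):.2f} (95% CI {ci_lo:.2f}-{ci_hi:.2f})")

# Cochran's Q and I^2 quantify between-trial heterogeneity
q = sum(w * (lo - pooled) ** 2 for w, lo in zip(weights, log_ors))
i2 = 100 * max(0.0, (q - (len(trials) - 1)) / q) if q > 0 else 0.0
print(f"I^2 = {i2:.1f}%")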
Strengths
• Prospective meta-analysis of pre-publication trial data coordinated by the World Health Organization with all active trial investigators.
• RCTs were mostly at low risk of bias using the Cochrane risk of bias tool 2.0; only one study had "some concerns" for the randomization process.
• No significant or substantial heterogeneity across study results.
• Results consistent across corticosteroid groups (dexamethasone and hydrocortisone).
• Prespecified subgroup analyses.
• Multiple analysis approaches (random-effects, risk ratios) with consistent estimates.
• GRADE assessment of quality of evidence performed [1].
Limitations
• Search limited to ongoing trials, which may miss unpublished negative trials.
• Small number of trials.
• The RECOVERY trial contributed 57% of the weight in the primary analysis [2].
• The random-effects meta-analysis of mortality was nonsignificant.
• Exploratory subgroup analyses were limited by the small number of trials.
• Only one small, imprecise study evaluated the effects of methylprednisolone.
• Serious adverse events could not be meta-analysed because of differing definitions across trials; the risk of harm was not clearly addressed.
Context
Following the RECOVERY trial, dexamethasone was viewed as the first RCT-proven treatment for COVID-19. However, it remained unclear whether other corticosteroids may be beneficial, which would ease access for patients.
Questions also remained about whether hypoxic nonventilated patients would benefit, and if the effect may be dependent on symptom duration. This meta-analysis showed that the mortality benefit is a class effect of corticosteroids overall and found no signal for increased effect in patients receiving mechanical ventilation or in those with prolonged symptoms. Following this publication, the WHO stated that corticosteroids are now standard of care for critically ill COVID-19 patients.
Bottom line
The results of this landmark meta-analysis have made corticosteroids standard of care for critically ill COVID-19 patients. Any negative trials, once published, should be included in updated analyses. Future research should address whether differences exist between high- versus low-dose steroids and between different types of steroids, and at what point in the progression of COVID-19 treatment with corticosteroids yields benefit. Standardization of adverse drug event documentation would also help clarify the benefits versus risks of treatment [3]. | 2021-01-12T05:13:18.322Z | 2021-01-11T00:00:00.000 | {
"year": 2021,
"sha1": "852e58a57a45d29f0cb6df2536b1893fcb2b2c77",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s43678-020-00056-w.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "852e58a57a45d29f0cb6df2536b1893fcb2b2c77",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
264433994 | pes2o/s2orc | v3-fos-license | Insulin Resistance and Truncal Obesity as Important Determinants of the Greater Incidence of Diabetes in Indian Asians and African Caribbeans Compared With Europeans
OBJECTIVE To determine the extent of, and reasons for, ethnic differences in type 2 diabetes incidence in the U.K. RESEARCH DESIGN AND METHODS Population-based triethnic cohort. Participants were without diabetes, aged 40–69 at baseline (1989–1991), and followed up for 20 years. Baseline measurements included fasting and postglucose bloods, anthropometry, and a lifestyle questionnaire. Incident diabetes was identified from medical records and participant recall. Ethnic differences in diabetes incidence were examined using competing risks regression. RESULTS Incident diabetes was identified in 196 of 1,354 (14%) Europeans, 282 of 839 (34%) Indian Asians, and 100 of 335 (30%) African Caribbeans. All Indian Asians and African Caribbeans were first-generation migrants. Compared with Europeans, age-adjusted subhazard ratios (SHRs [95% CI]) for men and women, respectively, were 2.88 (2.36–3.53; P < 0.001) and 1.91 (1.18–3.10; P = 0.008) in Indian Asians, and 2.23 (1.64–3.03; P < 0.001) and 2.51 (1.63–3.87; P < 0.001) in African Caribbeans. Differences in baseline insulin resistance and truncal obesity largely attenuated the ethnic minority excess in women (adjusted SHRs: Indian Asians 0.77 [0.49–1.42], P = 0.3; African Caribbeans 1.48 [0.89–2.45], P = 0.13), but not in men (adjusted SHRs: Indian Asians 1.98 [1.52–2.58], P < 0.001; African Caribbeans 2.05 [1.46–2.89], P < 0.001). CONCLUSIONS Insulin resistance and truncal obesity account for the twofold excess incidence of diabetes in Indian Asian and African Caribbean women, but not men. Explanations for the excess diabetes risk in ethnic minority men remain unclear. Further study requires more precise measures of conventional risk factors and identification of novel risk factors.
The global prevalence of type 2 diabetes continues to increase, with the Indian subcontinent predicted to contribute the greatest increase in the number of people with diabetes by 2030 (1). Indian Asian migrant populations also experience greater prevalence of diabetes than host white populations (2,3). Although prevalence of diabetes in sub-Saharan Africa remains low, the prevalence in African-origin populations in other areas of the world is elevated compared with that of white populations (4,5). Few studies have explored explanations for ethnic differences in diabetes incidence. The Atherosclerosis Risk in Communities (ARIC) study found that although adiposity, lifestyle, and socioeconomic factors accounted for nearly 50% of the excess diabetes risk in African American women, none of the excess could be explained in men (6), echoing findings from the National Health and Nutrition Examination Survey (NHANES) (7,8). However, previous studies did not explore the role of insulin resistance and ectopic fat distribution. Further, there are no longitudinal studies to explain the excess diabetes risk in Indian Asians compared with Europeans.
We have reported a threefold prevalence of diabetes in men and women aged 40-70 years of Indian Asian and African Caribbean origin compared with Europeans in the SABRE (Southall And Brent REvisited) cohort. We now report incidence of diabetes and potential explanations for ethnic differences in incidence in this unique cohort with a 20-year follow-up to ages 60-89 years.
RESEARCH DESIGN AND METHODS
SABRE is a community-based cohort of Europeans, Indian Asians, and African Caribbeans from north and west London. Details of the cohort have been published (9). Participants aged 40-69 years at baseline (1988-1991) were randomly selected from age- and sex-stratified primary care physician lists (n = 4,063) and workplaces (n = 795) in the London districts of Southall and Brent (Fig. 1). Because primary care registration is free and the gateway to all health services in the U.K., this forms a representative and comprehensive sampling frame. The study was designed to investigate cardiometabolic risk in different ethnic groups, primarily in men.
All Indian Asians and African Caribbeans were first-generation migrants. Ethnicity was interviewer-recorded based on parental origins and appearance and was subsequently confirmed by participants. Indian Asians originated from the Indian subcontinent. African Caribbeans originated from the Caribbean (91.5%) or from West Africa. At baseline, participants underwent fasting blood tests, blood pressure measurements, and anthropometry, and completed a health and lifestyle questionnaire. Those whose diabetes status was unknown underwent oral glucose tolerance testing (OGTT).
During 2008-2011, survivors were invited to participate in a morbidity follow-up, including a health and lifestyle questionnaire, primary care medical record review, and/or attendance at a clinic at St. Mary's Hospital, London. Clinic attendees fasted overnight and underwent measurements as at baseline, including OGTT.
All participants gave written informed consent. Approval for the baseline study was obtained from Ealing, Hounslow and Spelthorne, Parkside, and University College London research ethics committees, and at follow-up from St. Mary's Hospital Research Ethics Committee (reference 07/H0712/109).
Identifying baseline and incident diabetes
Physician diagnosis or World Health Organization 1999 criteria (10) for fasting and OGTT blood glucose measurements defined baseline diabetes. Incident diabetes was identified from a positive report from one of the following direct follow-up sources: participant recall of a diabetes diagnosis, the follow-up clinic assessment, or primary care medical record review (recorded diagnosis of diabetes or prescription of antidiabetic medications). Age at diagnosis was taken as the age at first report of diabetes in primary care medical records, or as age at follow-up for the 86 (15%) cases identified at the follow-up clinic.
Baseline risk factor measurements
Height was measured using a stadiometer. Body fat measures included circumferences around the waist (halfway between the costal margin and iliac crest), hip (over the greater trochanter), and midthigh, measured using a fiberglass tape with a spring balance set to a constant tension of 600 g. Holtain Harpenden calipers were used according to a standard protocol to measure skinfold thicknesses. Subcutaneous truncal fat was estimated by adding the subscapular and suprailiac skinfold thicknesses. For OGTT, plasma glucose and insulin were measured 2 h after 75 g oral glucose (9). Blood was analyzed at the same hospital laboratory (5). Glycated hemoglobin was measured in stored blood samples (Southall center only) using an immunoassay on a clinically validated automated analyzer (c311; Roche, Burgess Hill, U.K.); the high and low quality-control coefficients of variation were 2.9 and 3.3%. Baseline insulin resistance (IR), as a measure of hepatic IR, and percentage β-cell function (HOMA2-B) were approximated using the HOMA2 calculator (11). The formula derived by Matsuda et al. (12) incorporates both fasting and postload measures, thus approximating both hepatic and peripheral IR: √(fasting glucose × 2-h glucose [mg/dL] × fasting insulin × 2-h insulin [μU/mL]) / 10,000.
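As a concrete illustration of the two-point index described above, a minimal Python sketch follows; the participant values are hypothetical, and units are assumed to be mg/dL for glucose and μU/mL for insulin, as in the formula.

import math

def matsuda_ir(g0, g120, i0, i120):
    # Two-point Matsuda-type insulin resistance index:
    # sqrt(G0 * G120 * I0 * I120) / 10,000.
    # Higher values indicate greater insulin resistance; this is the
    # reciprocal of the usual Matsuda insulin sensitivity index.
    return math.sqrt(g0 * g120 * i0 * i120) / 10_000.0

# Hypothetical participant: fasting/2-h glucose of 95/140 mg/dL,
# fasting/2-h insulin of 8/60 uU/mL
print(f"Matsuda IR = {matsuda_ir(95, 140, 8, 60):.3f}")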
Sitting blood pressure was the mean of two resting measurements using a random zero sphygmomanometer (Hawksley, Lancing, U.K.).
Physical activity was assessed by questionnaire. Methods based on the Allied Dunbar Fitness survey (13) were used to calculate energy expenditure for leisure activity, giving a summary estimate of total weekly energy expenditure (MJ) in sport, walking, cycling, and strenuous activities. An index of physical activity at work was generated (Supplementary Data).
Statistical analyses
Primary analyses relate to data obtained from direct follow-up sources for people without baseline diabetes. Indirect data (HES) and death certification data were not used for primary analyses because date of diagnosis could not be established. We amalgamated direct data sources to identify incident diabetes and found no evidence of heterogeneity between sources using a meta-analysis approach. Baseline characteristics by incident diabetes status were compared using parametric (Student t) or nonparametric (Wilcoxon rank sum/χ²) tests, as appropriate.
Competing risks regression (competing risk = death) based on Fine and Gray proportional subhazards methods (14) was used to describe ethnic differences in diabetes incidence and to examine baseline characteristics representing a series of prespecified parameters (anthropometric, metabolic, blood pressure, lifestyle, socioeconomic position) as predictors of incident diabetes in univariate models. Those predictors that most substantially and significantly altered the subhazard ratio (SHR) for ethnic difference were included in multivariate models. Interactions between ethnicity and baseline risk factors chosen for multivariate models were examined.
We tested interactions between follow-up time and each covariate. The small number of covariates for which the proportional hazards assumption was violated were included as time-varying covariates. We plotted cumulative incidence curves for each ethnic-sex group and examined Schoenfeld-like residuals. All analyses are stratified by ethnicity and sex. We repeated analyses of associations between baseline risk factors and ethnicity using logistic regression combining HES, death certificate, and directly collected data. All analyses were conducted in STATA version 12. All statistical tests were two-sided and statistical significance was accepted as P < 0.05.
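Fine and Gray subhazard regression itself is typically fitted with specialized software (e.g., stcrreg in Stata or the cmprsk package in R), but the quantity plotted here, the cumulative incidence function in the presence of a competing risk, can be illustrated with a short self-contained Python sketch using the Aalen-Johansen estimator; the follow-up data below are hypothetical.

import numpy as np

def cumulative_incidence(times, events, cause=1):
    # Aalen-Johansen cumulative incidence for one event type.
    # times  : follow-up times
    # events : 0 = censored, 1 = diabetes, 2 = death (competing risk)
    # cause  : event code of interest
    # Returns a list of (time, CIF) pairs.
    times, events = np.asarray(times, float), np.asarray(events)
    surv, cif, out = 1.0, 0.0, []
    for t in np.unique(times[events != 0]):
        at_risk = np.sum(times >= t)
        d_any = np.sum((times == t) & (events != 0))
        d_cause = np.sum((times == t) & (events == cause))
        cif += surv * d_cause / at_risk   # increment for the cause of interest
        surv *= 1.0 - d_any / at_risk     # all-cause survival update
        out.append((float(t), float(cif)))
    return out

# Hypothetical follow-up times (years) and outcome codes
t = [2, 4, 4, 6, 9, 12, 15, 20, 20]
e = [1, 0, 2, 1, 1, 2, 0, 1, 0]
print(cumulative_incidence(t, e, cause=1))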
Diabetes risk and determinants in men
Indian Asians who had development of diabetes had lower BMI at baseline but were more centrally obese than Europeans (Table 1). Although baseline fasting and postload glucose did not differ, Indian Asians had higher fasting insulin, more adverse measures of insulin resistance, and higher calculated β-cell function than Europeans. Only a subset of these baseline measures were independent predictors (P < 0.05) (Table 2). Data on baseline HbA1c were available for 70% of Europeans and 77% of Indian Asians; its inclusion in the multivariable model did not explain ethnic differences in diabetes incidence (not shown).
Although only available for 2,011 surviving participants who completed follow-up questionnaires, family history of diabetes was more prevalent in ethnic minority groups and most prevalent in those with incident diabetes (Table 1). African Caribbean men, whether of West African or Caribbean descent, had twice the incidence of diabetes compared with European men, although median age at diagnosis was similar (median, 68 vs. 67 years; P = 0.51). Because African Caribbean men were less centrally obese and had more favorable lipids than European men, adjustment for these factors exaggerated the ethnic difference in diabetes incidence. Adjustment for truncal skinfolds attenuated the ethnic difference (SHR, 1.84; 95% CI, 1.31-2.58; P < 0.001). Adjustment for Matsuda IR or its components provided the next greatest attenuation, bringing the SHR to 1.98 or 1.96 (P < 0.001). Further multivariate adjustment or adjustment for family history provided no additional attenuation (Table 2).
Diabetes risk and determinants in women
Patterns for women were similar to those for men with the following exceptions (Table 1). African Caribbean women were the most centrally obese. The sex difference in truncal skinfolds differed markedly by ethnicity in those who had development of diabetes. Baseline truncal skinfold was 0.6 cm greater in European women than in European men, but 2.4 cm greater in Indian Asian women and 1.4 cm greater in African Caribbean women than in men of the same ethnicities. Fasting insulin and HOMA-IR were highest in African Caribbean women and lowest in European women. Asian Indians and African Caribbeans had similarly elevated levels of Matsuda IR compared with Europeans. Fasting HDL cholesterol was lower and triglycerides were higher in European and African Caribbean women who later had development of diabetes, whereas there was no significant diabetes-related difference in lipid profiles in Indian Asian women. Leisure time physical activity was lowest in Indian Asian women, whereas African Caribbean and Indian Asian women were more active at work, regardless of diabetes status. Age at diagnosis of diabetes did not differ by ethnicity in women.
The ethnic differential in incident diabetes was less marked between Indian Asian and European women and disappeared on adjustment for truncal skinfold thickness (adjusted SHR, 1.08; 95% CI, 0.61-1.89; P = 0.79). Waist-to-height ratio and Matsuda IR had a similarly marked attenuating effect (Table 2). Only truncal skinfold thickness (P = 0.041) and Matsuda IR (P < 0.001) were independent predictors.
African Caribbean women were 2.5 times more likely to have development of diabetes than European women. Adjustment for truncal or abdominal obesity partially attenuated the ethnic differential, but the Matsuda IR index had the greatest attenuating effect. In the multivariable model, the SHR was 1.48 (95% CI, 0.89-2.45) (Table 2). Only Matsuda IR (P < 0.001) and waist-to-height ratio (P = 0.016) were independent predictors.
No other baseline risk factor, including family history, physical activity, socioeconomic markers, medication use, and age at migration, further altered the excess diabetes risk in men and women of either ethnicity.
Sensitivity analyses
Hospital discharge (HES) data, based on 2,996 people without baseline diabetes, demonstrated incident diabetes in 11 and 8% of European men and women, 26 and 14% of Indian Asian men and women, and 21 and 15% of African Caribbean men and women. Findings from analyses of ethnic group differences in diabetes incidence were similar when we used logistic regression to compare HES data added to direct follow-up (n = 3,679) versus direct follow-up alone.
Smoking was unusual in Indian Asian and African Caribbean women; however, these analyses repeated in never-smoking women demonstrated similar ethnic differentials.
CONCLUSIONS
In this British population-based triethnic cohort with more than 20 years of follow-up, diabetes incidence is substantially elevated in people of African Caribbean and Indian Asian origin compared with Europeans. By age 80, 40-50% of British Indian Asian and African Caribbean men and women have diabetes, at least twice the prevalence observed in Europeans of the same age. Midlife measures of IR and of upper body fat deposition were already unfavorable in people who had development of diabetes approximately one decade later and were more adverse in the ethnic minorities. The Matsuda IR index contributed most to explaining the ethnic minority excess of diabetes in both sexes. Of the obesity measures, adjustment for truncal fat provided the most consistent and independent attenuation of the ethnic differentials in both sexes. Adjustment for these risk factors in women largely abolished the ethnic minority difference in diabetes incidence. In men, however, a twofold excess remained for both ethnic minority groups. ARIC (6) and NHANES (7,8) both suggest that their available lifestyle and adiposity measures could explain some of the African American excess of diabetes in women, but not in men. However, these previous studies did not use the OGTT, and a significant proportion of cases are diagnosed on postload values alone. Further, the role of measures of insulin resistance and ectopic fat deposition, beyond abdominal fat, was not explored, as we have done here. Inequalities in access to health care may adversely affect risks of incident diabetes in African Americans; this is not an issue in the U.K., where health care is free at the point of delivery. There are no previous longitudinal studies comparing Indian Asians with Europeans.
Direct measures of insulin sensitivity are not feasible for epidemiological studies, which therefore need to rely on surrogates. There are many surrogates and no clear consensus regarding which works best. This reflects the choice of clamp technique used, the population studied, methods of validation used, and the purpose of prediction. In general, surrogate measures of IR that incorporate both fasting and postload values are stronger predictors of future diabetes than fasting levels alone (15-17), although beyond that the choice is less clear. Of the surrogates at our disposal, the Matsuda index mapped most closely to the ethnic/sex gradient in diabetes incidence and provided the most complete explanation for ethnic differences in risk. Comparison of OGTT-derived values of IR with hyperinsulinemic euglycemic clamp values suggests that they are valid markers of clamp-derived IR within Indian Asians (18) and within people of Black African descent (19), although these largely tested HOMA and not Matsuda indices. Calculated β-cell function at baseline was particularly low in African Caribbean men and lowest in those who had development of diabetes during follow-up. It is tempting to speculate that inadequate β-cell function explains elevated postchallenge glucose in African Caribbeans not matched by equivalently elevated postload insulin. However, smaller studies report C-peptide levels, a direct measure of β-cell function, that are similar in people of Black African descent in the U.K., U.S., or Africa compared with those of European origin; further, the acute insulin response to a glucose load is greater (20-22), even in those with established diabetes (23). In African Caribbeans, plasma insulin levels may be particularly affected by reduced hepatic insulin extraction and reduced insulin clearance (20,22,24), which could explain differences between variation in β-cell function and insulin concentrations. The pathogenesis of type 2 diabetes involves a prolonged period of insulin resistance initially compensated by increased β-cell function but latterly involving progressive β-cell deficit (25). European and Asian Indian men who would have development of type 2 diabetes had relatively elevated β-cell function, whereas among African Caribbean men and women there was no such increase. These findings suggest differences in the stage of pathogenesis reached in the groups at baseline. Our findings imply that, overall, IR was the principal driver of diabetes; however, in the absence of more detailed evaluations of β-cell function, this conclusion is tentative.
The greater visceral obesity of Indian Asians is an obvious candidate to account for the observed excess diabetes (26). However, others have suggested that deep subcutaneous truncal fat with larger hypertrophic adipocytes may be of key importance (27). We extend this observation by showing that truncal fat plays an independent role in accounting for the Indian Asian male excess in diabetes. Adjustment for truncal skinfolds provided the most consistent attenuation in the ethnic minority excess of diabetes incidence for men and women (although not independent of the Matsuda index in the comparison between African Caribbean and European women). We confirm the excess truncal obesity in people of Black African descent despite, in men at least, less visceral fat (28), and its strong predictive ability for diabetes incidence (8). Excess truncal fat in both Indian Asians and people of Black African descent is observed in youth and, in both cases, the growth trajectory for truncal skinfolds appears more rapid for the ethnic minority groups (29,30). Whereas the greater truncal fat despite less abdominal fat in women has been previously noted (31), it is striking here that the sex difference in truncal obesity is more marked in the ethnic minority groups than in Europeans.
Adjustment for IR provided more complete attenuation in the ethnic difference in diabetes incidence in women compared with men. Others have reported sex differences in response to the OGTT such that, in some studies at least, women are more likely to be classified as having impaired glucose tolerance and men are more likely to exhibit impaired fasting glucose (32,33). Impaired fasting glucose may be more strongly associated with β-cell failure (33,34). Thus, the weaker ability of HOMA-IR and Matsuda IR indices to account for ethnic differences in diabetes incidence in men may, speculatively, reflect the imprecision of their characterization of β-cell function, which may play a more important role in the onset of diabetes in men.
Underlying explanations for the greater predisposition to IR in people of ethnic minority descent, or, perhaps more appropriately, the protection from IR in individuals of European origin who, despite escalating levels of obesity, remain at lowest risk of diabetes, are unknown. Changes in fat distribution over the life-course appear to differ by ethnicity (29,30). Although a genetic susceptibility may be an obvious explanation for ethnic differences in metabolic and obesity characteristics, and for the different trajectories over the life-course, it is notable that total energy intake is higher in largely first-generation Indian Asian migrants to the U.K. (35) and in Indian Asian children in the U.K. (36) than in British Europeans. Within India, migrants from rural to urban areas also have higher energy intakes compared with those of the rural population (37). This is associated with greater obesity, ectopic fat distribution, and IR. Further, age-related changes in adverse patterns of IR are more marked in Indian Asian migrant populations compared with Indian Asian nonmigrant populations (35). Such rapid changes in metabolic characteristics are likely to be environmentally rather than genetically driven. It is notable that dietary intake comparisons between rural and urban Cameroonians and British Jamaicans and Caribbeans do not suggest greater calorie or saturated fat intake in the latter (38); however, physical activity levels have been shown to be lower in migrant populations (39).
Strengths and limitations
To our knowledge, this is the largest triethnic cohort in the U.K. with lengthy (20-year) follow-up between middle and older age, thus providing unique prospective ethnicity-specific information on diabetes incidence. Loss to direct follow-up occurred in one-third of all ethnic groups. However, in sensitivity analyses, the addition of hospital discharge (HES) data to those from direct follow-up provides diabetes status for more than 91% of traced individuals. Results of analyses based on these more complete data were similar to analyses based on direct follow-up. Baseline measurements are limited to those performed on only one occasion 20 years ago, meaning that we cannot account for changes in risk factors during the follow-up period or in earlier life. We have addressed lifestyle and socioeconomic status using the available baseline data on smoking habits, physical activity, years of education, and occupational status, but we acknowledge that these cannot capture all the complexities of the nonmetabolic explanations for ethnic differentials in diabetes incidence.
In conclusion, Indian Asian and African Caribbean migrants to the U.K. have at least twice the risk of development of diabetes compared with British Europeans, even in older age, broadly reflecting patterns observed in younger populations worldwide. Given the increasing life expectancies for those with type 2 diabetes (40), this presents a public health challenge. Measures of insulin resistance and ectopic fat deposition, particularly truncal, account for excess diabetes risk in Indian Asian and African Caribbean women but only make a contribution to the excess risk in ethnic minority men. Strikingly, we show that despite our comprehensive measures, the ethnic minority excess of incident diabetes in men (both Indian Asians and African Caribbeans) cannot be explained, whereas it can be explained for women. We would have anticipated otherwise, and our findings in a different geographical setting than the U.S. and in two different ethnic groups strongly indicate that better assessment of risk factors and/or a search for novel factors are required if we are to understand why ethnic minority groups are at such high risk for diabetes. | 2017-05-01T21:57:15.563Z | 2013-01-17T00:00:00.000 | {
"year": 2013,
"sha1": "32b7cbcc870e4b8287f87a8b613978f0cfd5ddc8",
"oa_license": "CCBYNCND",
"oa_url": "http://care.diabetesjournals.org/content/36/2/383.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "830713de556fd1f571f680237133e2c624d119d9",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
244168109 | pes2o/s2orc | v3-fos-license | Proton to hydride umpolung at a phosphonium center via electron relay: a new strategy for main-group based water reduction
Generation of dihydrogen from water splitting, also known as water reduction, is a key process for achieving a sustainable hydrogen economy for energy production and use. The key step is the selective reduction of a protic hydrogen to an accessible and reactive hydride, which has proven difficult at a p-block element. Although frustrated Lewis pair (FLP) chemistry is well known for water activation by heterolytic H–OH bond cleavage, to the best of our knowledge, there has been only one case to date showing water reduction by a metal-free FLP system, in which a silylene (Si(II)) was used as the Lewis base. This work reports the molecular design and synthesis of an ortho-phenylene-linked bisborane-functionalized phosphine, which reacts with water stoichiometrically to generate H2 and phosphine oxide quantitatively under ambient conditions. Computational investigations revealed an unprecedented multi-centered electron relay mechanism offered by the molecular framework, shuttling a pair of electrons from the hydroxide (OH−) of water to the separated proton through a borane-phosphonium-borane path. This simple molecular design and its water reduction mechanism open new avenues for main-group chemistry in its growing role in chemical transformations.
Experimental section
Experimental Procedures
Instrumentation and Chemicals
All manipulations were performed under an Ar or N2 atmosphere using standard Schlenk or glove box techniques. All solvents were dried prior to use. Column chromatography was carried out using Merck silica gel 60 and KANTO CHEMICAL silica gel 60N. NMR spectra were recorded on a JEOL JNM-ECA 600 spectrometer [1H NMR (600 MHz), 13C{1H} NMR (151 MHz), 11B{1H} NMR (193 MHz)]. Chemical shifts (δ) are given in ppm with reference to external SiMe4 (1H, 13C{1H}), BF3·Et2O (11B), and 85% H3PO4 (31P). Mass spectra were recorded with a Thermo Fisher Scientific mass spectrometer. The DFT calculations were performed using the Gaussian09 program package. 1
Synthesis of bis(2-bromophenyl)(phenyl)phosphane
Under a nitrogen atmosphere, an nBuLi hexane solution (21.4 mL, 1.55 M) was added to a solution of 1,2-dibromobenzene (4.00 mL, 33.2 mmol) in THF and ether (1:1) (120 mL) at -112 ˚C (liquid nitrogen/ethanol bath) through a dropping funnel. After stirring for 20 minutes at -112 ˚C, dichlorophenylphosphine (2.23 mL, 16.4 mmol) was added to the reaction mixture at the same temperature. After additional stirring for 2 hours (30 minutes at -112 ˚C before gradually warming over 1.5 hours), the reaction mixture was quenched with aqueous NH4Cl and evaporated. The residue was dissolved in DCM and washed with brine. After drying over Na2SO4 and removal of DCM, the crude product was recrystallized from ethanol at -20 ˚C. The product was purified by column chromatography (DCM : hexane = 1 : 1) (4.22 g, 10.0 mmol, 61%).
Synthesis of BPB-pin (1a)
In a nitrogen-filled Schlenk flask, bis(2-bromophenyl)(phenyl)phosphane (3.00 g, 7.14 mmol), bis(pinacolato)diboron (3.63 g, 14.3 mmol), [1,1′-bis(diphenylphosphino)ferrocene]dichloropalladium(II) (525 mg, 0.642 mmol), potassium acetate (4.24 g, 43.2 mmol) and degassed super-dehydrated DMSO (180 mL) were mixed and stirred for 4 days at 80 ˚C. After the reaction, water (540 mL) was added to the mixture, which was extracted with benzene until the extracts became colorless. The combined benzene solution was dried over Na2SO4 and evaporated. The resulting solid residue was dissolved in a small amount of DCM, to which a large amount of hexane was added. Storage at -20 ˚C overnight led to precipitation of an olive-green solid, which was mostly undesired byproducts. The filtrate was evaporated again, and the resulting oily residue was redissolved in hexane. Cooling at -20 ˚C allowed precipitation of a second crop of byproduct, which was then removed by filtration. The resulting filtrate was then evaporated to afford a greenish oily residue, to which hexane was added. With scratching, the crude product formed as an off-white solid from the solution. This crude product could be further purified by redissolving in hot hexane, evaporation to form an oily residue, followed by addition of a small amount of hexane and scratching. This procedure was repeated 5 times to afford 1a as a white solid in 45% yield (1.66 g, 3.22 mmol).
Synthesis of BPB-9BBN (1b)
Under an argon atmosphere, a 1 M LiAlH4 ether solution (3.9 mL, 3.9 mmol) was added to a solution of BPB-pin (1a, 1.00 g, 1.94 mmol) in ether (20 mL) at 0 ˚C. The reaction mixture was stirred for 1 h at 0 ˚C and then for 2 h at room temperature. After stirring, the salts were removed by filtration in a glove box. Toluene (30 mL), cyclooctadiene (30 mL) and trimethylsilyl chloride (3 mL) were added after complete evaporation of the ether solvent at room temperature. After stirring for 2 days at 40 ˚C, all volatiles were evaporated and the salts were removed by Celite filtration using toluene in the glove box. The toluene filtrate was concentrated and layered with pentane to precipitate product 1b as a white solid (328 mg, 0.654 mmol, 34%).
Crystals suitable for X-ray crystal structure analysis were grown from a C6D6 reaction solution.
Synthesis of compound 3
To a toluene (2 mL) solution of BPB-9BBN (1b, 30 mg, 0.060 mmol), degassed water (0.1 mL) was added. The reaction mixture was stirred at room temperature for 10 min. Product 3 was obtained by evaporation (30 mg, 0.058 mmol, 97%). Crystals suitable for X-ray crystal structure analysis were grown from a THF/hexane diffusion system at -30 ˚C. H2 gas was detected from the reaction mixture by GC measurement.
Control reaction
To a C6D6 (0.3 mL) solution of PPh3 (5 mg, 0.02 mmol) and BCy3 (10 mg, 0.04 mmol), degassed H2O (0.2 mL) was added by vacuum transfer (some amount of H2O remained). The reaction mixture was kept at room temperature for one hour followed by 16 hours at 50 ˚C. No H2 was observed in the 1H NMR spectrum of the reaction mixture. The 11B{1H} NMR spectra indicated the presence of a large amount of the starting material BCy3 and some formation of the BCy3·H2O adduct. The 31P NMR spectra showed neither formation of phosphine oxide nor any other reactivity.
Crystal structure determination
Crystals suitable for X-ray structural determination were mounted on a Bruker SMART APEXII CCD diffractometer. Samples were irradiated with graphite-monochromated Mo-Kα radiation (λ = 0.71073 Å) at 173 K for data collection. The data were processed using the APEX program suite. All structures were solved with the SHELXT program (ver. 2014/5). Refinement on F² was carried out by full-matrix least-squares using SHELXL in the SHELX software package (ver. 2014/7) 1 and expanded using Fourier techniques. All non-hydrogen atoms were refined using anisotropic thermal parameters. The hydrogen atoms were assigned to idealized geometric positions and included in the refinement with isotropic thermal parameters. SHELXL was interfaced with the ShelXle GUI (ver. 742) for most of the refinement steps. 2 The pictures of molecules were prepared using Pov-Ray 3.7.0. 3 The crystallographic data are summarized in Table S1. These data (CCDC 2096462-465) can be obtained free of charge from the Cambridge Crystallographic Data Centre via www.ccdc.cam.ac.uk/data_request/cif
Figure S3
Solid-state structures of 3·H2O. Thermal ellipsoids are drawn at 30% probability. Periphery ellipsoids are omitted for clarity.
Computational section
Computational details
All electronic structure calculations were performed using the Gaussian09 rev. C.01 package. 4 Geometry optimizations were carried out with the long-range-corrected hybrid functional ωB97X-D 5 in conjunction with Pople's 6-311G(d) triple-ζ quality basis set with one polarization function. Subsequent harmonic frequency calculations were performed to corroborate the character of each optimized species: depending on the number of negative eigenvalues of the Hessian matrix, each optimized structure can be classified as a minimum (zero) or a transition state (exactly one). Thermal and entropy corrections to the total energy were taken from the thermochemistry analyses in the output file at 298 K and 1 atm. Finally, solvation effects added to the electronic Hamiltonian were taken into consideration by performing single-point calculations over the optimized geometries at the ωB97X-D/6-311G(d) level of theory through the PCM 6 model using the SMD 7 parameters according to Truhlar's model with benzene as solvent (ε = 2.2706). These energies were added to the gas-phase results, and the final reported energy values are therefore solvent-phase values calculated at the SMD(benzene):ωB97X-D/6-311G(d) level.
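To make the two-step protocol above concrete, the following Python sketch writes a pair of Gaussian09 input files matching the stated recipe (an ωB97X-D/6-311G(d) optimization with frequencies, then an SMD(benzene) single point). The water geometry is only a placeholder; in practice the optimized coordinates of the actual species would be substituted.

# Hypothetical generator for the two-step Gaussian09 protocol described above.
geometry = """0 1
O    0.000000    0.000000    0.117300
H    0.000000    0.757200   -0.469200
H    0.000000   -0.757200   -0.469200
"""

routes = {
    "step1_opt": "#p wB97XD/6-311G(d) opt freq",
    "step2_smd_sp": "#p wB97XD/6-311G(d) scrf=(smd,solvent=benzene)",
}

for name, route in routes.items():
    with open(f"{name}.gjf", "w") as f:
        # %chk line, route section, blank line, title, blank line,
        # charge/multiplicity + coordinates, terminating blank line
        f.write(f"%chk={name}.chk\n{route}\n\n{name}\n\n{geometry}\n")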
Figure S5
Intrinsic bonding orbitals (IBO) between H2 and H3 during the reaction of 1b and H2O
Figure S6
Computed relative free energy profile (kcal/mol) for the proposed isomerization mechanism of 3 into 3-iso. Optimized geometries shown for clarity.
Kinetic Isotope Effect of the reaction of 1b with H2O/D2O
To compute the kinetic isotope effect (KIE) 8 of the reaction of 1b with H2O/D2O, we calculated the energetic profile for 1b with D2O as shown in Figure S8. For this, we used the optimized geometries of the reaction mechanism of 1b with H2O (Figure S8, X = H), followed by calculation of the Hessian matrix, from which harmonic vibrational frequencies for all isotopomers are derived by introducing the isotope masses. The computed energetic span is then used to calculate reaction rates in the framework of transition-state theory using the Boltzmann (Eyring) expression k = (kB·T/h)·exp(−ΔG‡/RT), so that KIE = kH/kD = exp(−(ΔG‡(H) − ΔG‡(D))/RT). | 2021-11-17T16:12:10.044Z | 2021-11-15T00:00:00.000 | {
"year": 2021,
"sha1": "e312b111b6a6d2802cfc6b8cc91ff70c3e21eda7",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2021/sc/d1sc05135k",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e249e6d623538e1fe15d7a65ade63e53648250e2",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": []
} |
248656963 | pes2o/s2orc | v3-fos-license | Primary Angiosarcoma of the Breast: A Single-Center Retrospective Study in Korea
Due to the rarity of primary angiosarcoma of the breast, optimal management is based on expert opinion. The aim of this study was to review all primary angiosarcomas of the breast obtained from a single center in terms of clinicopathologic characteristics, treatment, and survival outcomes. From 1997 to 2020, 15 patients with primary angiosarcoma of the breast underwent either mastectomy or wide excision. We analyzed the clinicopathologic data to assess disease-free survival and overall survival. Fifteen women with primary angiosarcoma of the breast were identified. The mean age at diagnosis was 33 years (range: 14–63 years). The overall mean tumor size was 7.7 cm (range 3.5–20 cm). Upon histological grading, there were three cases of low grade, five intermediate grade, six high grade, and one unidentified grade. The five-year disease-free survival rate was 24.4%, and the five-year survival rate was 37.2%. The survival rate of the low-grade patient group was statistically higher than that of the intermediate- or high-grade patient groups (p = 0.024). Primary angiosarcoma of the breast is a rare aggressive tumor characterized by high grade and poor outcome. Histologic grade appears to be a reliable predictor of survival. There are no standard treatment guidelines; thus, optimal R0 surgical resection remains the best approach. The roles of neoadjuvant, adjuvant chemotherapy, and radiotherapy remain unclear.
Introduction
Angiosarcoma of the breast is a rare entity with poor prognosis, comprising less than 1% of all soft-tissue tumors [1][2][3]. Breast angiosarcoma commonly is divided into two types, primary and secondary angiosarcoma. Primary angiosarcoma of the breast develops de novo with no prior breast radiation. It occurs within the breast parenchyma, usually affecting women in their 30s to 50s [2,4]. Secondary angiosarcoma of the breast occurs in the setting of radiation therapy as part of breast-conservative treatment of breast cancer and is typically seen in older patients. [1,4].
Primary angiosarcoma of the breast is rarer than secondary angiosarcoma and has no known risk factors [5]. It usually is derived from the endothelial cell lining of the vascular channels and does not involve the regional lymph nodes [6]. However, angiosarcoma is aggressive and tends to have a high risk of local and distant metastases [1,7].
Due to the rarity of these tumors, optimal management is based on expert opinion. Complete surgical resection with optimal margins (R0 resection) is the most common treatment [2]. The best surgical method of resection is uncertain due to the lack of long-term outcome data comparing wide excision and mastectomy.
The role of radiotherapy and chemotherapy remains unclear. Some studies have argued that radiotherapy before surgery is not recommended and that adjuvant radiotherapy conveys better local control [8,9]. However, one study showed no effect of radiotherapy on overall survival [10]. A meta-analysis revealed that adjuvant radiation therapy after surgery for primary angiosarcoma of the breast had a statistically significant effect on recurrence-free survival [2]. A prior study showed that adding chemotherapy to the treatment of angiosarcoma significantly reduces the risk of local recurrence [11]. However, other studies showed that adjuvant chemotherapy has no statistically significant benefit for breast angiosarcoma [2,12]. The effectiveness of adjuvant treatment therefore remains uncertain.
The aim of this study was to review all cases of primary angiosarcoma of the breast diagnosed from 1997 to 2020, in a single center, and to describe a single-institution experience with primary angiosarcoma of the breast, including clinicopathologic characteristics, treatment, and survival outcomes.
Materials and Methods
This retrospective study included 15 patients with primary angiosarcoma of the breast who were treated at Samsung Medical Center from 1997 to 2020, identified through the electronic medical record system of the institute. This study was approved by the institutional review board (approval number: 2021-09-037) of the Samsung Medical Center.
We reviewed the demographic data, tumor size, histologic grade, treatment modality, and survival data. Tumor size was defined as the largest dimension recorded on the pathology report. If an excisional biopsy was performed and followed by an operation at Samsung Medical Center, the largest dimension was recorded by adding the size at operation to that of the previous excision. Tumor grade was categorized as low, intermediate, or high.
Overall survival (OS) was measured from the date of surgery to the date of last follow-up or the date of death, as recorded in Statistics Korea records. Disease-free survival (DFS) was measured from the date of surgery to the date of any recurrence or death. OS and DFS were evaluated using the Kaplan-Meier method with the log-rank test. All statistical analyses were carried out using IBM SPSS v 27.0 (SPSS, Inc., Chicago, IL, USA).
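As an illustration of the survival analysis described above (performed in SPSS in the study itself), a minimal Python sketch using the lifelines library is given below; the follow-up times and event indicators are hypothetical.

import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up (months) and death indicators by histologic grade
low = pd.DataFrame({"t": [55, 60, 72, 89], "d": [0, 0, 0, 0]})
high = pd.DataFrame({"t": [8, 14, 22, 30, 40, 71], "d": [1, 1, 1, 1, 1, 1]})

km = KaplanMeierFitter()
km.fit(high["t"], event_observed=high["d"], label="high grade")
print(km.survival_function_)  # step-function survival estimates

res = logrank_test(low["t"], high["t"],
                   event_observed_A=low["d"], event_observed_B=high["d"])
print(f"log-rank p = {res.p_value:.4f}")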
Results
From 1997 to 2020, 15 patients who were diagnosed with primary angiosarcoma of the breast were treated at Samsung Medical Center. All patients presented with a palpable mass and were diagnosed by core needle biopsy. Radiologic imaging, such as mammography, ultrasound, and MRI, was performed for the patients. All cases were defined as primary angiosarcoma, without a prior diagnosis of breast cancer or radiation treatment. All patients were female, and the mean age at diagnosis was 33 years (range: 14-63 years).
The overall mean tumor size was 7.7 cm (range 3.0-25 cm). By histological grade, there were three patients with low-grade, five with intermediate-grade, six with high-grade, and one with unidentified-grade tumors (Table 1). Thirteen patients underwent mastectomy, eight of whom also received axillary surgery (Table 2); however, no node metastasis was present in the axillary surgery group. Wide excision was performed in only two patients (13.3%). The surgical margin was negative in all patients. Recurrence was detected in 10 patients (66.7%). We describe the site of first recurrence; some patients were found to have distant metastasis after a first local recurrence. The median follow-up period was 29 months (5.6-89 months), with the last follow-up in July 2021. Local recurrence occurred in four patients, and contralateral breast recurrence was observed in two patients. Distant recurrence was noted in three patients, and one patient had both local and distant recurrence. Of the distant metastases, one was pulmonary and three were osseous (Tables 2 and 3). Wide excision was performed in patients with local recurrence, and palliative chemotherapy was given to patients with distant metastases. The two patients with contralateral breast recurrence underwent wide excision, and the one patient with synchronous local and distant metastases received palliative chemotherapy (Table 2). As shown in Table 2, one patient was diagnosed with angiosarcoma on both sides and underwent bilateral breast surgery.
In terms of adjuvant therapy after surgery, three patients received both chemotherapy and radiation therapy, five patients received radiation therapy only, and one patient received chemotherapy alone. Only one patient underwent mastectomy after neoadjuvant chemotherapy (Table 2). The adjuvant chemotherapy regimen at Samsung Medical Center was Adriamycin combined with an alkylating agent (ifosfamide), followed by a taxane (paclitaxel), or Adriamycin combined with an alkylating agent (cyclophosphamide). The pediatric chemotherapy regimen at the center was etoposide combined with ifosfamide.
Overall survival and disease-free survival are shown in Figure 1. The five-year survival rate was 37.2%, and the five-year disease-free survival rate 24.4%. Overall survival according to tumor size is shown in Figure 2. The five-year survival rate was 28.3% in the group with tumor 5 cm or more in size and 66.7% in the group with tumors smaller than 5 cm. There was no significant difference (p = 0.096).
Overall survival by tumor grade is shown in Figure 3. The five-year survival rate was 100% in the low-grade group, 30% in the intermediate-grade group, and 0% in the high-grade group. The survival rate of the low-grade patient group was statistically higher than that of the intermediate- or high-grade patient groups (p = 0.024). At the time of last follow-up, six patients were alive without distant metastatic disease. Only one of the six patients experienced local recurrence and was alive until the last follow-up (Table 3).
Figure 4. Five-year overall survival (p > 0.05, Figure 4A) and five-year disease-free survival (p > 0.05, Figure 4B) between groups according to adjuvant treatment. Abbreviations: adjuvant Tx, adjuvant chemotherapy; RT, radiotherapy; chemo, chemotherapy; neoadjuvant chemo, neoadjuvant chemotherapy.
Discussion
As in previous studies [2,4,5,13], primary angiosarcoma of the breast occurs in younger females, between 30 and 50 years of age, and can arise de novo with no risk factors. Primary angiosarcoma of the breast usually develops in the endothelial lining of the vascular channels and often involves the breast parenchyma without triggering factors [6]. Therefore, angiosarcoma appears mostly as a palpable mass, and the age at diagnosis is lower than the average age for invasive breast cancer [5]. This is consistent with our study: the average age at diagnosis of primary angiosarcoma of the breast was 33 years (range: 14-63 years), younger than the 40-49-year age group in which invasive breast cancer most commonly occurs according to the Korea Breast Cancer Society registry data (KBCS) [14]. The minimum age of onset of primary angiosarcoma of the breast was 14 years in our study.
Several studies reported breast angiosarcoma as a more aggressive malignancy of the vascular endothelium, and the overall prognosis is poor compared to that of other invasive breast cancers [15,16]. In our study, the five-year overall survival rate of primary angiosarcoma of the breast was 37.2%, while the five-year overall survival rate of invasive breast cancer was 93.2% according to the KBCS [14].
Several studies suggested that grade seemed to be the most consistent prognostic factor for primary angiosarcoma of the breast in regard to both OS and DFS [2,17,18]. In total, 6 of the 15 patients had high-grade disease on histopathology, and the median overall survival was 40 months (range: 8.2-71.6 months). We found a significantly higher survival rate for low-grade tumors than for intermediate- or high-grade tumors (p = 0.024). Other studies reported that histological grade was strongly associated with clinical presentation and overall prognosis, noting improved DFS for low- and intermediate-grade tumors compared to high-grade ones [18][19][20].
Several studies have suggested that tumor size is a prognostic factor [2,15,18,21,22]. Other studies have also revealed an increased risk of local recurrence and decreased overall survival with larger tumor size [2,18,22]. Consistent with the direction of those findings, our study showed lower survival rates in the group with larger tumors, though the difference was not statistically significant (28.3% for size ≥ 5 cm vs. 66.7% for < 5 cm, p = 0.096). Although we did not find a statistically significant difference in survival rate related to tumor size, likely because of our small sample size, we did find a trend toward a difference in survival rate according to size.
In terms of adjuvant treatment, survival was not favorably associated with administration of adjuvant chemotherapy or radiation therapy in our study. Other studies have reported that the roles of neoadjuvant and combined adjuvant chemotherapy and radiotherapy are unclear [13,19]. However, one author suggested that adjuvant radiotherapy can reduce local recurrence [1]. Another author reported that tumor size > 5 cm can identify patients at higher risk of local recurrence, who are more likely to benefit from adjuvant radiation therapy [23]. In the analysis from one study, adjuvant radiation therapy seemed to have a significantly positive impact on recurrence-free survival when primary and secondary angiosarcoma of the breast were analyzed together. Despite concerns about radiation-induced etiology and complications in the re-irradiation setting, that study found that the local recurrence rate in primary and secondary angiosarcoma was lower when patients received surgery and adjuvant radiotherapy, in contrast to the lack of a significant difference in our study [2]. The role of adjuvant radiotherapy therefore remains controversial. Additionally, it has been reported that chemotherapy is beneficial in high-grade lesions and in the metastatic setting [15]. Based on the results of previous studies, the lack of an association of survival with adjuvant therapy in the present study might be due to the retrospective study design and the relatively small number of patients.
The best treatment for primary angiosarcoma of the breast is surgery with R0 resection [2,18,22]. In our study, 13 patients underwent mastectomy, and only two underwent wide excision; both of the latter had negative resection margins. One study revealed that patients who underwent breast-conserving surgery did not have a worse prognosis than those who underwent mastectomy [24].
The role of axillary lymph node dissection in primary angiosarcoma of the breast is unclear, as breast angiosarcoma spreads primarily hematogenously [25]. According to one study, all 13 patients who underwent axillary staging showed no involved nodes. There were also no nodal metastases among the patients who underwent axillary staging in our study. Based on these results, routine axillary surgery does not appear warranted in patients with breast angiosarcoma.
This study is limited by the very small number of breast angiosarcoma cases and by the retrospective nature of the analysis, which prevent any definite conclusions. Some findings may have failed to reach statistical significance because of the lack of statistical power.
Conclusions
In conclusion, breast angiosarcoma is a rare, aggressive tumor characterized by high grade and poor outcome. Histologic grade appears to be a reliable predictor of survival. There are no standard treatment guidelines, and optimal R0 surgical resection remains the best approach. The roles of neoadjuvant therapy, adjuvant chemotherapy, and radiotherapy remain unclear.

Informed Consent Statement: Given the retrospective character of the study, informed consent was not obtained.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. | 2022-05-10T15:46:16.904Z | 2022-05-01T00:00:00.000 | {
"year": 2022,
"sha1": "26cafedae3bf1e97fceaf8fb5da4f2fc4972bc70",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1718-7729/29/5/267/pdf?version=1651749663",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "2e9619a4e5975c55ac4708ca33fcc9e6c23fc335",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
240039600 | pes2o/s2orc | v3-fos-license | Association between Milk Intake and Incident Stroke among Japanese Community Dwellers: The Iwate-KENCO Study
We aimed to evaluate the association between milk consumption and incident stroke in a Japanese population, where milk consumption is lower than in Western countries. In total, 14,121 participants (4253 men and 9868 women) aged 40–69 years and free from cardiovascular disease (CVD) were prospectively followed for 10.7 years. Participants were categorized into four groups according to the milk intake frequency obtained from a brief-type self-administered diet questionnaire. The adjusted HRs of total stroke, ischemic stroke and haemorrhagic stroke associated with milk intake frequency were calculated using the Cox proportional hazards model. During the follow-up, 478 stroke cases were detected (208 men and 270 women). Compared to women with a milk intake of <2 cups/week, those with an intake of 7 to <12 cups/week had a significantly lower risk of ischemic stroke in a model adjusting for CVD risk factors; the HR (95% CI) was 0.53 (0.32–0.88). No significant associations were found in men. This study suggested that a milk intake of 7 to <12 cups/week decreased the risk of ischemic stroke in Japanese women. A milk intake of about 1 to <2 cups/day may be effective in the primary prevention of ischemic stroke in a population with low milk intake.
Introduction
Milk and dairy products are major components of traditional Western diets, and the effects of their consumption on health have been frequently reported [1]. In Japan, milk intake is a relatively new dietary habit that was introduced after World War II, when milk was served in school lunches. According to the 2017 Food and Agriculture Organization balance sheet, the food supply quantity of milk was 245.43 kg/capita/year and 215.96 kg/capita/year in North America and Europe, respectively, compared to 58.63 kg/capita/year in Japan, corresponding to 160 g/capita/day [2]. The consumption of milk in Japan is thus lower than that in Western countries, and it is much lower for adults. According to the National Health and Nutrition Survey conducted in Japan in 2016, the average milk and dairy consumption was 111.2 g/day for those aged 20 years and above and 306 g/day for those aged 7-14 years [3]. Conversely, according to the National Health and Nutrition Survey in the US in 2007-2010, the average total dairy consumption was 2.1 cup equivalents/day for those aged 9-18 years, 1.7 cup equivalents/day for those aged 19-50 years, and 1.5 cup equivalents/day for those aged 51-70 years [4]. The Dutch National Food Consumption Survey 2012-2016 showed that the mean dairy product consumption was 374 g/day for men aged 19-70 years and 321 g/day for women aged 19-70 years [5].
The association between milk consumption and the risk of cardiovascular diseases (CVDs), such as myocardial infarction and stroke, has also been reported often [6][7][8]. The epidemiology of these diseases differs between Japanese and Western populations; in Western countries, mortality and morbidity from coronary artery disease are higher than those from stroke, while in East Asian countries, including Japan, mortality and morbidity from stroke are higher than those from myocardial infarction [9]. Reports on the health effects of milk consumption that focus mainly on Western populations may therefore not provide sufficient evidence for the Japanese population, which differs greatly from Western populations in terms of both milk intake and disease patterns.
Milk is a good source of potassium, calcium, and magnesium, which have been reported to reduce blood pressure [10][11][12]. In observational and interventional studies, an inverse relationship of milk consumption with blood pressure levels and with the development of hypertension has been observed [13,14]. Therefore, milk consumption is expected to have a potential risk-reducing effect on stroke.
Recent meta-analyses showed a significant inverse association between milk consumption and the risk of stroke [15,16], while others have shown a null association [17]. Some meta-analyses have suggested a nonlinear relationship [18,19]. Several studies have examined the association of milk consumption with mortality from stroke in Japanese people, but the results have been inconsistent [20][21][22]. In addition, to the best of our knowledge, no study involving Japanese subjects has used the incidence of stroke as an endpoint. Therefore, the relationship between milk consumption and the incidence of stroke in the Japanese population is unclear.
Therefore, this study aimed to elucidate the relationship between the frequency of milk intake and the incidence of stroke in a Japanese population, whose milk consumption is lower than that in Western populations.
Study Population
The Iwate-Kenpoku cohort (Iwate-KENCO) study is a prospective cohort study of community-dwelling residents in the Ninohe, Kuji, and Miyako districts of the northern part of Iwate Prefecture, Northeast of the main island of Japan. The methodology of the Iwate-KENCO study has been described elsewhere [23]. A baseline survey was conducted between 2002 and 2005, wherein participants were recruited from the individuals who participated in the government-regulated multiphasic health check-up in each municipality. In total, 17,706 participants (5614 men and 12,092 women) aged 40 to 69 years provided written informed consent for participation in this study. In the present analysis, we excluded 3585 persons for the following reasons: 477 persons with a history of stroke or myocardial infarction, 3092 persons without the data for the food frequency questionnaire, and 16 persons with missing data. Consequently, data from 14,121 participants (4253 men and 9868 women) were analyzed in this study (Figure 1).
Frequency of Milk Intake and Other Foods
Food intake was assessed once at baseline using the brief-type self-administered diet history questionnaire (BDHQ) [24,25], which includes questions about the frequency of consumption and/or portion sizes of about 58 food items. The BDHQ was validated through comparison with results from dietary records, and the Spearman's correlation coefficient for dairy products was 0.70, which indicated reasonable ranking ability [25]. Two questions regarding dairy food intake were posed to assess the consumption (number of cups) of normal/high-fat milk/yogurt and of low-fat milk/yogurt. Frequencies in the BDHQ were recorded as "≥2 cups/day", "1 cup/day", "4 to 6 cups/week", "2 to 3 cups/week", "1 cup/week", "<1 cup/week", and "never drink". These were re-categorized as 14, 7, 5, 2.5, 1, 0.5, and 0 cups/week, respectively, in the present analysis. We calculated the total number of cups of normal/high-fat and low-fat milk/yogurt and described it as milk consumption (cups/week). Based on milk intake, participants were categorized into the following four groups: <2 cups/week, 2 to <7 cups/week, 7 to <12 cups/week, and ≥12 cups/week.
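As an illustration of the re-categorization just described, the mapping from BDHQ responses to weekly cup counts and analysis groups amounts to a simple lookup; the category strings and function below are illustrative stand-ins for the actual BDHQ coding.

```python
# Hypothetical recoding of BDHQ milk/yogurt frequencies into cups/week.
CUPS_PER_WEEK = {
    ">=2 cups/day": 14, "1 cup/day": 7, "4 to 6 cups/week": 5,
    "2 to 3 cups/week": 2.5, "1 cup/week": 1, "<1 cup/week": 0.5,
    "never drink": 0,
}

def milk_group(normal_fat_cat, low_fat_cat):
    """Total milk/yogurt intake (cups/week) mapped to the analysis category."""
    total = CUPS_PER_WEEK[normal_fat_cat] + CUPS_PER_WEEK[low_fat_cat]
    if total < 2:
        return "<2 cups/week"
    if total < 7:
        return "2 to <7 cups/week"
    if total < 12:
        return "7 to <12 cups/week"
    return ">=12 cups/week"
```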
To examine the food intake pattern, we calculated the intake frequency (times/week) of starchy foods (such as rice, bread, and noodles), fish, soy products, meat, vegetables, fruits, and sugar-sweetened beverages, and the number of cups of miso soup per day. The frequency ratio of the total fish and soy products intake to that of meat (times/times) was also calculated.
Stroke Event Ascertainment
Stroke events were identified by accessing the Iwate Stroke Registry, which included the entire area where the participants lived; indeed, details of this registry have been described previously [26,27]. Since 1991, the stroke registration program has been coordinated by the Iwate Prefecture government and the Iwate Medical Association; the medical records of all medical facilities within the survey area are verified to ensure complete capture of all data. The stroke diagnostic criteria in this registry are based principally on the criteria established for the Monitoring System for Cardiovascular Disease commissioned by the Ministry of Health and Welfare [28], and these criteria correspond with those published by the World Health Organization [29]. To verify the accuracy of the data, a physician or trained research nurse visited and checked the medical records at the referral hospitals. We defined the follow-up period as the period from the baseline survey to either the first outcome or the end of the observation period. In the present study, we used follow-up data until 31 December 2014; participants who did not experience any outcomes during the follow-up period and those who moved out of the study area were censored administratively. Death and date of death were confirmed by the investigators reviewing the population-register sheets of the cohort members.
Other Measurements
The baseline survey consisted of a self-reported questionnaire, measurements of anthropometric data and blood pressure, and blood tests. The methodology for data collection has been described previously [23]. Body mass index (BMI) was calculated from measured height and body weight. BMI was classified into four categories: <18.5 kg/m², 18.5-24.9 kg/m², 25-29.9 kg/m², and ≥30 kg/m². Systolic blood pressure (SBP) and diastolic blood pressure were recorded twice after five minutes of seated rest, and the mean of the two measurements was used. Casual blood samples were drawn from the antecubital vein. Glycated haemoglobin (HbA1c) levels were measured using high-performance liquid chromatography. Serum total cholesterol (TC) and high-density lipoprotein cholesterol (HDLC) levels were measured by direct enzymatic assays.
Participants completed a self-reported questionnaire and reported their use of antihypertensives, smoking status, alcohol consumption status, and exercise habits. We asked participants to complete the questionnaire and bring it to their municipal health check-up site. In cases where the answers in the questionnaire were insufficient, a trained interviewer asked the respondent to answer as fully as possible. Smoking status was categorized into three groups: current smoking, ex-smoking, and non-smoking. Alcohol consumption was assessed by the frequency per week and amount of drinking per occasion and categorized as follows: intake of ≥3 drinks/day, 2 to <3 drinks/day, <2 drinks/day, ex-drinking, and non-drinking for men. Women reported less frequent alcohol intake compared with men, and the ≥3 drinks/day and 2 to <3 drinks/day categories were aggregated. Regular exercise was defined as engaging in exercise for at least 60 minutes, 8 times per month. In women, menopausal state was divided into two categories based on the answer to a questionnaire: postmenopausal state or not.
Statistical Analysis
All analyses were stratified by sex. Linear trends across the four milk intake frequency categories were estimated by one-way analysis of variance (ANOVA) for continuous variables and chi-square test for categorical variables. Using the Cox proportional hazards model, multivariate adjusted hazard ratios (HRs) and 95% confidence intervals (CIs) for total stroke, ischemic stroke, and haemorrhagic stroke in each group, considering the category "<2 cups/week" as a reference, were calculated for 4 models: model 1, adjusted for age; model 2, adjusted for age and lifestyle factors (smoking status, alcohol consumption status, and exercise habits); model 3, adjusted for age, lifestyle factors, and dietary factors (fruits and vegetables, fish and soy products intake to meat intake ratio); and model 4, adjusted for age, lifestyle factors, dietary factors, BMI categories, menopausal state, and CVD risk factors (SBP, HbA1c, TC, HDLC, and use of antihypertensives). The assumption of proportional hazard was verified using an interaction term between time and milk intake frequency in the models. All p values were two-tailed, and differences with p values < 0.05 were considered statistically significant. Statistical analyses were performed using the SPSS software package, version 25 (IBM Corporation, Armonk, NY, USA).
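The models above were fit in SPSS; purely as an illustration, a similar Cox regression could be sketched in Python with the lifelines package as follows, where the input file and column names are hypothetical.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis frame: one row per participant, with follow-up time
# in years, an event indicator (1 = ischemic stroke), the milk intake
# category, and the covariates of "model 4".
df = pd.read_csv("iwate_kenco.csv")
df = pd.get_dummies(df, columns=["milk_cat", "smoking", "alcohol", "bmi_cat"],
                    drop_first=True)  # drop one level per factor as reference

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="ischemic_stroke")
# Hazard ratios with 95% confidence intervals
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```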
Results
The numbers (percentages) of men in the milk intake frequency categories were 1072 (25.2%), 1129 (26.5%), 1508 (35.5%), and 544 (12.8%) for <2 cups/week, 2 to <7 cups/week, 7 to <12 cups/week, and ≥12 cups/week, respectively. The corresponding numbers (percentages) for women were 1370 (13.9%), 2624 (26.6%), 4257 (43.1%), and 1617 (16.4%), respectively. Table 1 shows the baseline characteristics of the participants according to milk intake frequency categories. Men with a higher milk intake frequency were older, had lower SBP, and had significantly higher TC and HDLC levels; additionally, the proportion of current smokers and of those who consumed ≥3 alcoholic drinks/day was significantly lower, and the proportion of those with a regular exercise habit was significantly higher. Similar trends were observed among women; additionally, the proportion of obese women was significantly lower, and the proportion of postmenopausal women was significantly higher, among those with a higher milk intake frequency. Table 2 shows the food intake frequency of the participants according to milk intake frequency categories. The intake of starchy foods was similar among the groups. Participants with a higher milk intake frequency consumed more vegetables, fruits, meat, fish, and soy products, and the ratio of total fish and soy products intake to meat intake was significantly higher for those with a higher milk intake frequency.
The total number of observed person-years was 152,518, and the mean (standard deviation) observation period was 10.7 (2.1) years. Total stroke, ischemic stroke, and haemorrhagic stroke occurred in 478 (208 men and 270 women), 263 (134 men and 129 women), and 210 (72 men and 138 women) participants, respectively. Table 3 shows the multivariate adjusted HRs (95% CIs) of the groups based on milk intake frequency, considering the <2 cups/week category as a reference. For total stroke, women consuming 2 to <7 cups/week and 7 to <12 cups/week had significantly lower risks in models 1, 2, and 3. The HR was attenuated in model 4 with a marginal significance; the HRs (95% CIs) were 0.71 (0.47-1.05) (p = 0.084) and 0.73 (0.51-1.05) (p = 0.084) for the 2 to <7 cups/week and 7 to <12 cups/week categories, respectively. For ischemic stroke, women who consumed 2 to <7 cups/week had a significantly lower risk in model 1, but the significance disappeared in further adjusted models. For women who consumed 7 to <12 cups/week, a significantly lower ischemic stroke risk was shown in all models; the HR was 0.53 (0.32-0.88) (p = 0.014) in model 4. For haemorrhagic stroke, no significant association with milk intake frequency was observed. In men, no significant associations were found between milk intake frequency and the risks of total stroke, ischemic stroke, and haemorrhagic stroke. Additionally, no linear relationships were observed in any model in either sex.
Discussion
We demonstrated that women who consumed milk at a frequency of 2 to <7 cups/week or 7 to <12 cups/week tended to have a decreased risk of total stroke compared with those who consumed <2 cups/week; however, the statistical significance disappeared after adjusting not only for CVD risk factors such as BMI, SBP, TC and HDLC, but also for lifestyle factors, dietary factors, and menopausal state. For ischemic stroke, the reduced risk for women who consumed 7 to <12 cups/week remained significant even after full adjustment. In contrast, no significant association between the frequency of milk intake and the incidence of stroke was observed among men.
The findings of previous studies on the association between milk intake frequency and the risk of stroke in Japanese cohorts have been inconsistent [20][21][22]. In a 15-year follow-up cohort study initiated in 1965, people who consumed milk ≥4 times/week had significantly lower mortality risks from total, haemorrhagic, and ischemic stroke than those who consumed milk <once/week [20]. In contrast, in a 16-year follow-up study initiated in 1979, no significant association between milk consumption and mortality due to stroke was observed [21]. NIPPON DATA80, a 24-year follow-up study initiated in 1980, showed an inverse relationship between milk and dairy consumption and mortality risk due to stroke with marginal significance only in women [22]. These studies examined only the risk of mortality and not the incidence of stroke. To the best of our knowledge, this is the first prospective cohort study in Japan to investigate the association between milk intake frequency and the incidence of stroke.
Several studies regarding the association between milk consumption and the risk of stroke have found an inverse [30,31], non-significant [32][33][34], or even a positive association in Western populations [35]. A recent meta-analysis showed that the relative risk of stroke (incidence and mortality) for a 200 g increase in daily milk intake was 0.98 (95% CI, 0.95-1.01) in Western countries and 0.82 (95% CI, 0.75-0.90) in East Asian countries, including Japan [18]. The analysis also suggested a possible nonlinear relationship between milk intake and the risk of stroke, and the greatest reduction in the risk of stroke was observed with an intake of approximately 125 g of milk/day for Western populations. Regional differences were also suggested, and the greatest reduction in the risk of stroke was observed with an intake of approximately 165 g of milk/day for East Asian countries [18]. The findings of our study suggested that, compared to those with a milk intake of <2 cups/week, the HR for ischemic stroke in women was lowest, and significantly so, for those who consumed 7 to <12 cups/week, while the HR was not significant for those who consumed ≥12 cups/week. Our results thus also suggest a nonlinear relationship between milk intake frequency and stroke incidence. In the BDHQ, participants were asked to state the number of cups of milk without specifying the volume of the cup. However, people often use a 150-200 mL cup or glass in Japan [36], and the current analysis may suggest the optimal amount of milk intake for the prevention of ischemic stroke in the Japanese population.
The present study could not elucidate the underlying mechanism for the association between milk intake frequency and stroke risk reduction; however, there are several possible explanations. First, people with higher milk consumption seemed to prefer traditional Japanese foods; the consumption of high amounts of fish and soy products was accompanied by higher vegetable and fruit consumption. These foods have been reported to be associated with a lower risk of mortality due to CVDs [37][38][39]. The current findings suggesting a favorable association between milk intake frequency and the risk of stroke may be partially affected by this preference for Japanese foods. Second, nutrients included in milk might have played an important role in reducing the risk of stroke; indeed, milk contains an abundance of minerals such as potassium, calcium, and magnesium. These minerals have been reported to be associated with a reduced risk of stroke [40][41][42]. A prospective cohort study has shown a decrease in the risk of stroke and ischemic stroke with a higher consumption of dairy-derived calcium [43]. It has also been suggested that the proteins and peptides in milk may have antihypertensive and insulin-secretion-controlling effects [44]. An inverse association has been reported between milk and dairy consumption and CVD risk factors, such as hypertension, diabetes, and metabolic syndrome [13,45,46]. In the present analysis, the significant association of milk intake frequency with stroke risk reduction weakened or disappeared after adjusting for CVD risk factors. High blood pressure levels and impaired glucose tolerance may thus mediate the association between milk intake frequency and the incidence of stroke. In contrast, as milk intake increases, so does the intake of saturated fat, an important determinant of blood cholesterol levels, whose average intake among Japanese adults is 15.3 g/day [3]. However, a recent meta-analysis reported an inverse association between dietary saturated fat intake and stroke risk in Japanese, but not in non-Japanese, populations [47]. Yamagishi et al. suggested that an increase in saturated fat intake to a level of about 20 g/day may be optimal for the primary prevention of CVD in the Japanese population [48].
In the present study, no significant association was observed between milk intake frequency and the incidence of stroke in men. Among men, the proportion of those who consumed 7 to <12 cups of milk/week was lower than that among women. Because fewer men consumed milk at the frequency that was associated with a reduced stroke risk, the association might have failed to reach significance. Moreover, a higher proportion of men had CVD risk factors, such as hypertension, diabetes, heavy drinking, and smoking habits, compared to women. These factors might have had a stronger impact on the incidence of stroke than milk consumption, which might have masked the association between milk intake frequency and the incidence of stroke.
Our study has several limitations. First, in the BDHQ, consumption of milk and yogurt was combined in one question, and these could not be distinguished. Additionally, we could not distinguish between the effects of normal- and low-fat milk because approximately 30% of the participants reported drinking both types of milk. However, in the baseline year (2003) of the present cohort study, most milk and dairy products consumed were in the form of whole milk (76.1%), according to the National Health and Nutrition Survey in Japan [49]. In addition, according to the 2003 statistical survey on milk and dairy products, the amount of milk produced in Iwate Prefecture was 97,530 kL, while the amount of fermented milk produced was 4951 kL [50]. In 2004, the purchase volume per week per household was 2.56 L for whole milk and 0.43 L for low-fat milk [51]. Therefore, we believe that normal- and low-fat milk and yogurt consumption can be treated as whole milk consumption in our cohort. The consumption of low-fat milk is increasing in Japan, and the impact of different types of milk requires investigation in the future. Second, the BDHQ asked only for the number of cups of milk without specifying the volume, which might have caused misclassification of the milk consumption categories. However, given the nature of food frequency questionnaires, a quantitative assessment of nutrient intakes could not be expected. Only frequencies were assessed for most of the foods in the BDHQ, and the validation paper showed the BDHQ's limited ability to estimate the mean values of nutrients [52]. Although we also did not consider total energy intake in this study, considering the substantial correlation coefficient (0.70) of the BDHQ, we believe we ranked the participants reasonably from lower to higher milk/yogurt consumption. Third, the generalizability of our findings may be limited. The participants were those who underwent health check-ups in three districts of one prefecture and might have been highly health conscious. Therefore, they are more likely to have favorable health behaviors, and the incidence and hazard ratios of stroke may be underestimated. In addition, it may be difficult to extrapolate our findings to Western populations because of the differences in milk consumption levels. However, we believe that our findings may be generalizable to East Asian populations, which share some common characteristics, such as low milk intake, high salt intake [53], and relatively low levels of obesity [54], with the Japanese population.
In conclusion, our findings suggest that moderate milk consumption (7 to <12 cups/week) decreased the risk of ischemic stroke in women but not in men. Consuming approximately 1 to <2 cups of milk/day may be effective in preventing ischemic stroke in the Japanese population. | 2021-10-28T15:13:08.737Z | 2021-10-25T00:00:00.000 | {
"year": 2021,
"sha1": "8e042b5e3a80713e766e402a04a227f458edd844",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6643/13/11/3781/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "79d76590abe50a9efe91e922ba96cf4c0e601456",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
126053074 | pes2o/s2orc | v3-fos-license | Preface: Diffusion on fractals and non-linear dynamics
This special issue is dedicated to the memory of B. O. Stratmann (1957–2015), who played a major role in developing the theory of dynamical systems, fractal and hyperbolic geometry, and thermodynamic formalism; who continued the long-standing traditions of L. Arnold from the 1970s of dynamics in Bremen; and who, together with K. Falk and M. Kesseböhmer, built the current and active dynamics research group at the University of Bremen. The overarching theme of this special issue is diffusion on fractals and non-linear dynamics, and it is aimed at both newcomers to the area and specialists. The contributions reflect different aspects of these topics and form new connections. The articles featured here have been carefully selected and peer reviewed. This special issue, in part, complements and records the outcomes of the 2015 Winter School and Symposium on Diffusion on Fractals and Non-linear Dynamics hosted by the groups Dynamical Systems and Geometry and Applied Analysis within the Department of Mathematics and Computer Science at the University of Bremen. The event successfully brought together experts from 11 countries and was attended by over 60 academics. During the meeting new collaborative projects were formed – results of which are hereby featured. We hope to have captured the exciting atmosphere of the event in this special issue through the original research, survey and expository articles. We are indebted and grateful to a number of anonymous referees for their invaluable help and suggestions in preparing this special issue. We express our gratitude to the University of Bremen and to the German Research Council, who sponsored the winter school and symposium through the grants M8 Post-Doc-Initiative PLUS, Gastdozentenprogramm and Scientific Network – Skew Product Dynamics and Multifractal Analysis; without their support this special issue and the event could not have come to fruition. This special issue centres around four themes: fractal geometry and applications of thermodynamic formalism, quasicrystals, skew-product dynamical systems, and non-linear dynamics. Below we give a short synopsis of these topics and the articles hereby featured.
1. Fractal geometry and applications of thermodynamic formalism. Structures in nature, such as galaxies and landscapes, aggregates and colloids, and polymers and proteins, often possess complexity, irregularity and randomness on large and small scales. Fractal geometry is a mathematical theory used to describe and analyse properties of such structures, which are often called fractal. Geometric characteristics which allow for their detailed analysis include dimensions, such as Assouad, Fourier, Hausdorff or Minkowski; measures, like Hausdorff, Gibbs or Patterson-Sullivan; or volume forms such as the Minkowski content.
A prominent approach to geometrically describing a fractal object is to consider the decay of the volume of particular ε-covers as ε tends to zero. In this way the Hausdorff and Minkowski dimensions are defined as the corresponding exponents, and the Hausdorff measure and Minkowski content as the respective pre-factors of the leading asymptotic term. These quantities, as well as the exponents and pre-factors of the lower-order asymptotic terms (related to curvature), are important geometric characteristics. A useful method to determine such characteristics of particular fractals relies on considering the Mellin transform of the ε-dependent volume function of the ε-cover. This leads to a description of the problem in terms of ζ-functions and is related to a statement that is equivalent to the Riemann hypothesis. It is in the analysis of these ζ-functions that renewal theory and thermodynamic formalism have been applied successfully, yielding new and meaningful results. In the article by M. Kesseböhmer and S. Kombrink a complex Ruelle-Perron-Frobenius theorem for Markov shifts over an infinite alphabet is proven, whence extending results by M. Pollicott from the finite to the infinite alphabet setting. As an application they obtain an extension of renewal theory in symbolic dynamics, as developed by S. P. Lalley. This work is complemented by the paper of M. Rauch, where new notions of topological pressure for measurable potentials are introduced and studied. By proving a corresponding variational principle, the author also establishes a Bowen formula for the Hausdorff dimension of cookie-cutters with discontinuous geometric potentials.
The thermodynamic formalism also makes apparent the link between geometry and the theory of Diophantine approximation. It has played a crucial role in the understanding of the regularity of singular maps. A beautiful survey of this is presented in the article by J. J. Miao and S. Munday. The exposition raises a number of open questions, some of which are explicitly stated, while others are only suggested.
The theory of complex dimensions, as introduced by M. L. Lapidus and M. van Frankenhuijsen, is a C-valued extension of non-integer notions of dimension such as the Hausdorff dimension and the Minkowski dimension for sets U ⊆ R^d. These non-integer notions of dimension assign a single number to U. By contrast, the set of complex dimensions provides a richly structured geometric invariant of U, typically an infinite but discrete subset of C of which the Minkowski dimension is a distinguished member. The complex dimensions describe not just the order of magnitude of the scaling properties of U, but also the oscillatory aspects. K. Dettmers et al. provide an articulate account of this topic and present an extension of the theory to subsets of n-dimensional Euclidean space, with an emphasis on self-similar sets.
We conclude this topic with the article by K. Hattori et al. concerning non-Markovian random walks on the Sierpiński gasket. As a main result the authors show that the exponent ν governing the short-time behaviour of the scaling limit varies continuously in the parameter. The limit process is almost surely self-avoiding, while it has path Hausdorff dimension equal to 1/ν, which is strictly greater than one.
2. Quasicrystals. Quasicrystals, being arrangements with long range aperiodic order, were discovered by D. Shechtman in 1982. This discovery (honoured with the Nobel prize in Chemistry in 2011) came as a complete surprise to physicists, chemists and material scientists. It also stimulated a tremendous amount of research in mathematics, in particular, questions concerning the characterisation of order versus disorder, diffraction theory, and the quantum mechanics of transport in solids. The involved methods range from number theory and geometry, over ergodic theory, to harmonic analysis and operator theory. Dynamical systems and self-similarity feature very prominently. Indeed, all manifestations of 'the same' quasicrystal can be conveniently gathered in the form of a dynamical system (called the hull of the quasicrystal) and many concrete models display self-similarity. Here we present an elegant survey article by M. Baake and D. Lenz. The authors give a careful exposition of definitions and results, with a strong emphasis on the underlying physical motivation. Much of the focus of their paper is on comparing two well-studied methods for understanding diffraction, the classical definitions involving the Fourier transform of the autocorrelation measure, and the dynamical spectrum coming from the natural translation action on the hull of the point set. Throughout the article the authors highlight open problems in the field. Notably, inspired by the winter school and symposium, the authors include a section on the structural differences of quasicrystals generated by abstract point sets and those emerging in non-linear evolution equations for soft matter.
3. Skew-product dynamical systems. Skew products form a class of dynamical systems that have proved relevant both from a theoretical and a practical point of view. In the description of real-world processes, they are used to model dynamical systems which are subject to the influence of external time-varying factors, a situation which naturally appears in many applications. On the theoretical side, the study of skew products is an important step towards understanding higher dimensional discrete dynamical systems and also gives an impetus to many other areas of mathematics, including, but not restricted to, hyperbolic dynamics, rotation theory of toral homeomorphisms, Schrödinger operators with quasiperiodic potential, and quasicrystals. M. G. Gharaei and A. J. Homburg use a relatively simple but quintessential class of skew product maps to illustrate and prove several important phenomena in skew product dynamical systems: intermingled basins of attraction, synchronization, on-off intermittency and random walks with drift. These concepts are further studied in the article by G. Keller, who provides an insightful and detailed analysis of the stability index and uncertainty exponent of two-dimensional skew-product systems, highlighting the intriguing behaviour of intermingled basins. These skew-product systems consist of one-dimensional fibre maps with negative Schwarzian derivative which are driven by a one-dimensional mixing Markov map. Through an application of the thermodynamic formalism the author obtains the stability index and a natural upper estimate of the uncertainty exponent. In addition, the author's calculation of the corresponding exponent, which is the upper bound for the uncertainty exponent, yields a nice application of the theory and provides a possible explanation for the discrepancy between theoretical and numerical calculations of the uncertainty exponent observed in the previous literature on this topic.

4. Non-linear dynamics. The alluring expository article by M. Beck provides an accessible introduction to the use of pointwise estimates in establishing nonlinear stability for certain non-linear waves. Non-linear waves are a ubiquitous phenomenon in infinite dimensional dynamical systems generated by (non-linear) evolutionary equations on spatially extended domains. The viewpoint taken here is that of spatial dynamics, where the dynamics acts in the unbounded space direction and the actual time direction is constrained to a compact domain. For the stability problem one may then regard the linearisation in the non-linear wave together with the non-linear equations as a skew-product system with continuous (spatial) time. Parabolic systems, as considered here, appear frequently in models of soft matter or more generally as macroscopic models for effectively diffusing particles on smooth domains. Such systems naturally generate spatial patterns, and non-linear waves are a particularly ordered or coherent type of these. The article also summarises the application of these techniques in two more complex cases: time-periodic shocks and defects. In two space dimensions, the linear problem readily allows for more disordered 'quasi'-patterns with almost or quasi-periodic quasicrystalline structures. However, the rigorous non-linear existence is an essentially open problem, and much more so the stability of these. Even in low dimensions bifurcations and the emergence of complex attractors occur and provide difficult problems.
The expository article by M. Beck is complemented by the paper of S. Gonchenkov and I. Ovsyannikov on discrete Lorenz attractors. These arise in unfoldings of certain homoclinic tangencies, and the authors provide an introduction to the various types and notions of chaotic attractors in this context. Indeed, the bifurcation theoretical perspective on complex dynamics taken here complements the viewpoint of topological and measure theoretic dynamics of this special issue's first three themes. The authors follow the tradition of the Shilnikov school for classifying dynamical systems based on the recurrence induced by a homoclinic orbit with given properties. By carefully studying the organisation and intersection of invariant manifolds the authors unravel the structures that generate the attractor upon unfolding the organising centre as well as geometric details of the loci in parameter space. | 2019-04-22T13:04:40.494Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "7c78329d7dd0c93c2ffdb46d6e042d1467c74f9b",
"oa_license": "CCBY",
"oa_url": "https://www.aimsciences.org/article/exportPdf?id=522eb49b-febd-4bf7-ab02-feb9d309f565",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "b8ea53ab248c613b369934f6c946410a4d4ef771",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
119387854 | pes2o/s2orc | v3-fos-license | Transport Phenomena at a Critical Point -- Thermal Conduction in the Creutz Cellular Automaton --
The nature of energy transport around a critical point is studied in the Creutz cellular automaton. The Fourier heat law is confirmed to hold in this model by a direct measurement of heat flow under a temperature gradient. The thermal conductivity is carefully investigated near the phase transition by the use of the Kubo formula. As a result, the thermal conductivity is found to take a finite value at the critical point, contrary to some previous works. The equal-time correlation of the heat flow is also analyzed by a mean-field-type approximation to investigate the temperature dependence of the thermal conductivity. A variant of the Creutz cellular automaton called the Q2R is also investigated and similar results are obtained.
I. INTRODUCTION
Creutz devised a deterministic dynamics for the two-dimensional Ising model with a momentum term [1]. This dynamics is a kind of cellular automaton (CA), where states are updated in a deterministic way with energy conservation, and we call it the Creutz cellular automaton (CCA). In the CCA random numbers are not necessary for the time evolution, which provides an advantage in numerical simulations. Thus, the CCA and its variants have been used, instead of the conventional Monte Carlo method, to investigate equilibrium properties of magnetic systems [2], especially the critical phenomena at the critical point.
Besides this advantage, the CCA provides a dynamical model for time evolution with energy conservation. Thus the CCA can be used to study transport phenomena, where flows of physical quantities play important roles. In fact, numerical results for heat conduction in the CCA were reported in [1] and the thermal conductivity was found to be proportional to T^{-2} at high temperature, where T denotes the temperature. Harris and Grant showed that this temperature dependence is explained by the Kubo formula [3]. They presented an asymptotic expression for the thermal conductivity in the high and low temperature limits by evaluating the first term in the Kubo formula.
Then, it is natural to ask about a possible connection between the thermal conductivity and the phase transition. Because the specific heat diverges at the critical point of the Ising model, the thermal conductivity may also show some peculiarity at that point. Actually, in some materials, anomalous behavior of the thermal conductivity has been observed [4]. Clearly, the CCA is suitable for looking into the thermal conductivity near the critical point. In the above-mentioned paper, Harris and Grant commented that the thermal conductivity must vanish at the critical temperature T_C, without giving any evidence. It is a purpose of our paper to clarify what really happens to the energy transport at the critical point in the CCA.
Thus, in this paper we investigate the temperature dependence of the thermal conductivity in the thermodynamic limit. We obtain the thermal conductivity by two methods. One is a direct measurement of the heat flow under a temperature gradient. The validity of the Fourier heat law is established in a wide range of temperature values and the coefficient of thermal conductivity is estimated. The other is the use of the Kubo formula. An explicit derivation of the formula is given and the coefficient of thermal conductivity is calculated from equilibrium autocorrelations of the energy flow. We check that both methods yield the same result and find a finite conductivity at T_C, which does not agree with the previous belief.
We also develop a mean-field approximation for the equal-time correlation of the energy flow, which improves the estimate by Harris and Grant [3]. Since it is the first term of the Kubo formula, the result of this treatment not only explains the temperature dependence at the high and low temperature limits, but also gives a qualitatively good estimate for the overall temperature dependence.
The conditions under which the Fourier heat law is satisfied have been studied in the literature [5][6][7][8][9], mainly by using Hamiltonian systems. The dynamical rules of CA are so simple and local that fast simulations are possible. Thus, one of the present authors (ST) applied CA to this problem and found some rules that clearly satisfy the Fourier heat law [9]. However, most of the studies have so far been confined to one-dimensional models, which might cause pathological effects due to a single path of the flow. Here we study a two-dimensional system, the CCA, where we are free from this concern. We find that the Fourier heat law holds at all temperatures in the CCA. We also find that the Q2R, which is a variant of the CCA without momentum terms, satisfies the Fourier heat law in two dimensions, although energy transport in the Q2R is ballistic in one dimension. This paper is organized as follows. In Sec. 2, our model and method are explained and an expression for the local energy flux is derived via the equation of continuity. In Sec. 3, we demonstrate that the Fourier heat law holds in a wide range of temperatures by a direct simulation. In Sec. 4, the thermal conductivity is calculated by the use of the Kubo formula and its temperature dependence is carefully investigated, especially around T_C. A mean-field analysis is done for the correlation of the energy flux in Sec. 5. Numerical results for the Q2R are exhibited in Sec. 6. We give a summary and discussion in Sec. 7.
II. MODEL
The CCA is defined as follows. Let us consider the square lattice. A couple of variables (σ_{i,j}, σ̃_{i,j}) is assigned to each site (i, j). Here σ_{i,j} ∈ {+1, −1} denotes a spin and σ̃_{i,j} ∈ {0, 1, 2, 3} is called a momentum. The total Hamiltonian is given by

H = −Σ_{⟨(i,j),(k,l)⟩} σ_{i,j} σ_{k,l} + 4 Σ_{(i,j)} σ̃_{i,j}, (2.1)

where the first sum runs over nearest-neighbor pairs. The first term represents the ferromagnetic Ising interaction between the nearest-neighbor spins and the second term represents a kind of kinetic energy. We divide the lattice into two sublattices like the checkerboard. Site (i, j) is called even or odd according as the sum i + j is even or odd. One unit of time evolution consists of two processes, each of which simultaneously updates the variables on one sublattice. Namely, when the variables are updated from the states at time t, first the even sites are updated at time t + 1/2 and next the odd sites are updated at time t + 1. The updating rule is the following. A spin flip is accepted when the momentum at the site can compensate the energy change of the flip. That is, if the relation

0 ≤ σ̃_{i,j} − (1/2) σ_{i,j} Σ_{nn} σ_{nn} ≤ 3

is satisfied, where nn denotes the nearest-neighbor sites of (i, j), the spin σ_{i,j} changes its sign and the momentum is changed to conserve the total energy.

Now we derive expressions for a local energy and an energy flux. From the total Hamiltonian (2.1), we can define the local energy on the site (i, j) at time t as

E^t_{i,j} = −(1/2) σ^t_{i,j} Σ_{nn} σ^t_{nn} + 4 σ̃^t_{i,j},

each bond being shared equally between its two end sites. Note that the total energy is equal to the sum of the local energies over the lattice. First we consider the case where the site (i, j) is even. If the spin at site (i, j) is flipped at time t + 1/2, we have

σ^{t+1/2}_{i,j} = −σ^t_{i,j}, σ̃^{t+1/2}_{i,j} = σ̃^t_{i,j} − (1/2) σ^t_{i,j} Σ_{nn} σ^t_{nn},

and the difference between the local energies at times t + 1/2 and t is given by

E^{t+1/2}_{i,j} − E^t_{i,j} = −σ^t_{i,j} Σ_{nn} σ^t_{nn}.

If the spin σ_{i,j} is not flipped, the local energy does not change. Thus the energy change is generally expressed as

E^{t+1/2}_{i,j} − E^t_{i,j} = −σ^t_{i,j} (Σ_{nn} σ^t_{nn}) δ(σ^{t+1/2}_{i,j} + σ^t_{i,j}) = (1/2) (σ^{t+1/2}_{i,j} − σ^t_{i,j}) Σ_{nn} σ^t_{nn}, (2.3)

where δ(x) is Kronecker's delta and we have used the equality δ(x + y) = (1 − xy)/2 that holds for x, y ∈ {+1, −1}. The energy difference between t + 1/2 and t + 1, during which only the odd neighbors of the even site (i, j) can flip, is calculated in the same manner, and we obtain

E^{t+1}_{i,j} − E^{t+1/2}_{i,j} = −(1/2) σ^{t+1}_{i,j} Σ_{nn} (σ^{t+1}_{nn} − σ^t_{nn}). (2.5)

Combining Eqs. (2.3) and (2.5), we obtain the following expression for the energy difference between t and t + 1,

E^{t+1}_{i,j} − E^t_{i,j} = (1/2) (σ^{t+1}_{i,j} − σ^t_{i,j}) Σ_{nn} σ^t_{nn} − (1/2) σ^{t+1}_{i,j} Σ_{nn} (σ^{t+1}_{nn} − σ^t_{nn}), (2.6)

where we have used the fact that σ^{t+1/2}_{i,j} = σ^t_{i,j} for odd (i, j) and σ^{t+1/2}_{i,j} = σ^{t+1}_{i,j} for even (i, j). Because the total energy is conserved, Eq. (2.6) must represent the equation of continuity,

E^{t+1}_{i,j} − E^t_{i,j} = −(J^t_{i,j,x} − J^t_{i−1,j,x}) − (J^t_{i,j,y} − J^t_{i,j−1,y}), (2.7)

where J^t_{i,j,α} (α = x or y) denotes the α component of the energy flux at site (i, j) at time t. Comparing Eqs. (2.6) and (2.7), we can read off the components of the energy flux; the same argument can also be applied to the case where site (i, j) is odd. As the result we arrive at the expressions for the energy flux, Eqs. (2.8)–(2.9) when site (i, j) is even and Eq. (2.10) when it is odd.
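To make the update rule concrete, the following is a minimal sketch of one CCA time step in Python, assuming the flip rule stated above (momentum in {0, 1, 2, 3}, flip accepted when 0 ≤ σ̃ − (1/2) σ Σ_nn σ_nn ≤ 3) and periodic boundaries; the function and array layout are illustrative rather than the authors' implementation, and in the conduction runs the boundary columns would instead be coupled to heat baths.

```python
import numpy as np

def cca_step(spin, mom):
    """One CCA time step: update the even sublattice, then the odd one.
    spin[i, j] is +1 or -1; mom[i, j] is an integer in {0, 1, 2, 3}."""
    ii, jj = np.indices(spin.shape)
    for parity in (0, 1):                      # even sites first, then odd
        # sum of the four nearest-neighbour spins (periodic boundaries)
        s = (np.roll(spin, 1, 0) + np.roll(spin, -1, 0) +
             np.roll(spin, 1, 1) + np.roll(spin, -1, 1))
        dmom = spin * s // 2                   # momentum a flip must supply
        newmom = mom - dmom
        flip = ((ii + jj) % 2 == parity) & (newmom >= 0) & (newmom <= 3)
        spin = np.where(flip, -spin, spin)     # accepted flips
        mom = np.where(flip, newmom, mom)      # momentum compensates exactly
    return spin, mom
```

Because every accepted flip transfers energy between the bond term and the momentum term, the total energy is conserved step by step, as the construction above requires.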
III. THERMAL CONDUCTION UNDER A GRADIENT OF THE TEMPERATURE
In this section, we report numerical results on energy transport in the CCA obtained by a direct simulation. We took systems of size L × L, where L varies from 10 to 300. The periodic boundary condition was imposed in the y direction. At the ends in the x direction, two heat reservoirs, one at temperature T_L and the other at T_R, were attached as shown in Fig. 1. Each heat reservoir consisted of spins on two vertical lines, where the spins on a sublattice were simultaneously updated by the Monte-Carlo method with the heat-bath algorithm. We have numerically confirmed that if the two heat reservoirs have an identical temperature the system relaxes to the equilibrium state at that temperature. This relaxation to equilibrium was also observed in the case where only one reservoir was attached to the system. Energy transport occurs when the left and right reservoirs have different temperatures. It is found that the relaxation time to a stationary state is very long at temperatures below T_C = 2/log(1 + √2) ≃ 2.270, while it is rather short at high temperatures.
The following two cases are examined with particular care. One is the case where both temperatures T_L and T_R are higher than T_C, T_L = 5.0 and T_R = 5.5. This is called case A. The other is the case of T_L = 2.1 and T_R = 2.2, where both temperatures are lower than T_C. We call this case B. In each case, within 10^7 time steps the system of any size (L ≤ 300) reached a stationary state where a uniform flux in the x direction is realized. After the system reached the stationary state, we continued the simulation for 10^7 more steps over which we took time averages of physical quantities.
First we consider the distribution of the local kinetic energy, P_{i,j}(σ̃). Because the system is translation invariant in the vertical direction, we computed the average of P_{i,j} over each vertical line and found that it is given by a canonical distribution,

P_i(σ̃) ∝ exp(−4 β_i σ̃), (3.1)

where β_i is a fitting parameter which is regarded as the local inverse temperature at line i. Figures 2(a) and (b) show the distributions for case A and case B, respectively. They clearly demonstrate the property (3.1). Thus local equilibrium is realized and the local temperatures are well defined. Let T_i denote the temperature at horizontal position i, namely T_i = β_i^{−1}. We plotted T_i as a function of x = i/L for various L in Figs. 3(a) and (b), which correspond to cases A and B, respectively. Clearly the scaling limit

T(x) = lim_{L→∞} T_{[Lx]}, (3.2)

where [Lx] means the integer part of Lx, exists and is smooth in both cases A and B.
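The fit in Eq. (3.1) amounts to a log-linear regression of the momentum histogram of each vertical line. A minimal sketch follows, assuming the exponential form of Eq. (3.1) with the kinetic-energy normalization 4σ̃ of Sec. 2 (a different normalization would only rescale the slope); the function is illustrative, not the authors' fitting procedure.

```python
import numpy as np

def local_beta(mom_line):
    """Inverse temperature of one vertical line from its momentum values,
    assuming P(m) proportional to exp(-4*beta*m) for m in {0, 1, 2, 3}.
    All four momentum values are assumed to occur at least once."""
    counts = np.bincount(np.asarray(mom_line).ravel(), minlength=4)
    freq = counts / counts.sum()
    # log-linear fit: log P(m) = const - 4*beta*m
    slope, _ = np.polyfit(np.arange(4), np.log(freq), 1)
    return -slope / 4.0
```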
Next we observed the time-averaged total energy flux per row in the stationary state, J̄_{tot,x}/L, where J_{tot,x} is the total energy flux in the x direction and the bar means the time average in the stationary state. If the Fourier heat law is realized, this quantity must converge to a nonzero constant in the limit L → ∞ with T_L and T_R fixed, because then it is written as

J̄_{tot,x}/L = ∫_{T_R}^{T_L} κ(T) dT

with use of the thermal conductivity κ(T). We utilized this property to judge whether the Fourier heat law is satisfied or not. In Fig. 4, the L dependence of J̄_{tot,x}/L is shown for various temperature values. The figure shows that the size dependence disappears in the large systems. Thus we conclude that the Fourier heat law is realized in a wide range of temperatures including the critical point.
Moreover, J̄_{tot,x}/L takes a finite value and changes smoothly around the critical temperature. This observation suggests that the thermal conductivity has no strong singularity at T_C. However, the present method gives access not to the thermal conductivity itself but to its integral over temperature, so a possible singularity, if any, is hardly visible. Thus in the next section we investigate the thermal conductivity in the bulk at a given temperature with use of the Kubo formula.
IV. THERMAL CONDUCTIVITY COMPUTED VIA THE KUBO FORMULA
According to the Kubo formula, the thermal conductivity is given by the summation of the equilibrium autocorrelation functions of the energy flux,

κ(T) = (1/(N T^2)) Σ_{t=0}^{∞} ⟨J^0_{tot,x} J^t_{tot,x}⟩, (4.1)

where J^t_{tot,x} is the total energy flux in the x direction at time t, ⟨· · ·⟩ means the equilibrium ensemble average at temperature T, and N is the total number of sites. This formula is proved for the CCA in Appendices A and B.
We numerically computed the autocorrelation functions ⟨J^0_{tot,x} J^t_{tot,x}⟩ for t ≤ 150 in the CCA under periodic boundary conditions in the x and y directions. Initial conditions were randomly generated by a Monte-Carlo method at temperature T. We denote the partial Kubo sum up to time t by κ_t, namely

κ_t = (1/(N T^2)) Σ_{s=0}^{t} ⟨J^0_{tot,x} J^s_{tot,x}⟩.

Figure 5 shows the numerically computed κ_t in a system of size 200 × 200 at various temperatures. It is observed that the summation converges by t = 30 for every temperature. At temperatures above T_C, the sum monotonically increases and tends to a constant exponentially fast. At low temperatures the monotonicity is lost and significant fluctuations appear. However, κ_t still reaches convergence by t = 10. Figure 6 shows the thermal conductivity thus obtained and that computed via the direct measurement of the energy flux as explained in the previous section. Both results agree with each other very well. From the figure we know that the thermal conductivity has a peak at T ∼ 2.70, slightly above the critical temperature T_C. Above the peak, the thermal conductivity gradually decreases and tends to zero in the high temperature limit. Below the peak, the conductivity shows a remarkable change around T_C and reaches nearly zero at T = 2.0. Detailed measurements were made near the critical temperature T_C and the results are shown in Fig. 7. This figure shows that κ(T) appears continuous and smooth at the critical point, though the magnitude of the change is large. Because little size dependence is seen for L ≥ 100, we can conclude that at least neither divergence nor vanishing of κ(T) occurs at T_C. Of course we cannot deny the possibility of a singularity or discontinuity in a higher derivative.
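Numerically, κ_t is a cumulative sum of flux autocorrelations estimated from a single equilibrium trajectory. A minimal sketch, assuming the normalization of Eq. (4.1); the function is illustrative rather than the authors' code:

```python
import numpy as np

def kubo_partial_sums(J_tot_x, T, N, tmax=150):
    """Partial Kubo sums kappa_t from a stationary flux time series,
    estimating <J^0 J^t> as a time average along the trajectory."""
    J = np.asarray(J_tot_x, dtype=float)
    corr = np.array([np.mean(J[:len(J) - t] * J[t:])
                     for t in range(tmax + 1)])
    return np.cumsum(corr) / (N * T ** 2)
```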
V. MEAN-FIELD ANALYSIS OF THERMAL CONDUCTIVITY
In this section, we estimate the equal-time correlation function of the heat flow using a mean-field approximation and discuss its temperature dependence. This quantity is the first term of the Kubo formula Eq. (4.1), namely κ_0(T), and thus we can obtain some information on the temperature dependence of the thermal conductivity.
As derived in Appendix B, κ_0(T) is expressed in terms of an average of the total flow J_tot,x with respect to the local equilibrium state, Eq. (5.1), where ⟨· · ·⟩_le denotes the average with respect to the local equilibrium product measure (A6) with the right reservoir temperature T_R and the left reservoir temperature T_L. First we consider the quantity ⟨J_{i,j,x}⟩_le at an even site (i, j). Denoting the local equilibrium measure by ρ_le, we write this average as Eq. (5.2). Substituting Eq. (2.8) into J_{i,j,x}, we can express it as Eq. (5.3), where σ'_{i,j} denotes the updated spin value at the even site (i, j) and * means the summation over the configurations in which the spin σ_{i,j} can flip. Whether the spin flip occurs or not depends on the spin and the momentum variable at (i, j) and on the sum of the spin values on the nearest-neighbor sites, s = Σ_{nn} σ_{nn}. Specifically, the spin flip is possible in the following configurations (s, σ_{i,j}, σ̃_{i,j}):

(±4, ±1, (3, 2)), (±2, ±1, (3, 2, 1)), (0, ±1, (3, 2, 1, 0)), (±4, ∓1, (1, 0)), (±2, ∓1, (2, 1, 0)). (5.5)
Because the summation (5.3) must be taken over the configurations of the whole system, it is difficult to carry out exactly. Thus we consider the following mean-field approximation, in which the spin variables at the next-nearest-neighbor sites are replaced by their average values. Those average values should depend only on the horizontal position and not on the vertical position, since the local equilibrium measure is translation invariant in the y direction. Thus the average with respect to the local equilibrium measure is replaced by the average with respect to the measure (5.6), whose denominator is a summation over the possible values of σ_i,j, σ̃_i,j, and the nearest-neighbor spins.
Here β_i is the inverse temperature at horizontal position i, and β′_{i−1} takes the same value as β_{i−1}; we introduced β′_{i−1} for later convenience. ⟨σ⟩_i denotes the local equilibrium value of the spin variable at horizontal position i. Under the above approximation, ⟨J_i,j,x⟩_le is represented in terms of the normalization factors Z and Z*. With a straightforward calculation, Z* is obtained to first order in ∆T, and Z is calculated in the same way. Thus we arrive at the approximate formula (5.12) for κ_0(T), where in the limit ∆T → 0 the system becomes uniform and we identify ⟨J_tot,x/N⟩_le with ⟨J_i,j,x⟩_le and put ⟨σ⟩ := ⟨σ⟩_i. In the high-temperature limit, using ⟨σ⟩ = 0, and in the low-temperature limit, using ⟨σ⟩ = 1, we recover the same asymptotic forms as obtained by Harris and Grant [3]. However, the formula (5.12) gives more information about the overall temperature dependence. Although the present approximation is not good near the critical point, within this approximation we find that κ_0(T) is continuous but shows a cusp at the mean-field critical temperature T_M ≃ 3.5, because ⟨σ⟩ ∝ (T_M − T)^{1/2}. In Fig. 8 we compare the mean-field results with the numerical ones obtained in the previous section. In the high-temperature region the two results agree with each other, while discrepancies appear at low temperatures. This is partly due to the difference between the mean-field critical temperature and the true critical temperature T_C ≃ 2.27. In addition, the simulation results have no cusp and actually change smoothly.
In Fig. 9 we show κ(T) and 3.5 × κ_0(T), both of which are numerically obtained from the equilibrium autocorrelation functions of the energy flux. This figure shows that κ_0(T) is nearly proportional to κ(T) at high temperatures. This implies that the autocorrelation functions of the energy flux are similar in this temperature region, which is also seen by comparing the two curves for T = 3.0 and T = 3.5 in Fig. 5.
VI. HEAT CONDUCTION IN THE Q2R
As a simplified variant of the CCA, the Q2R was devised and some of its equilibrium and dynamical features have been investigated [11,12,10]. There are no momentum variables in the Q2R; a spin flips only when the sum of the nearest-neighbor spins is zero. Despite the simplicity of the model, it is known that the critical behavior of the magnetization can be simulated by this model.
We have performed direct simulations of the Q2R in contact with two heat reservoirs at different temperatures in almost the same manner as in Sec. III. Heat reservoirs were realized by the same algorithm as shown in Fig. 1. The temperatures of the reservoirs were set as T_L = 6.0 and T_R = 10.0. Here we took quasi-one-dimensional systems of size L × 10 with various L. The simulation time for each size is 5 × 10⁷ time steps. The expressions for the energy flux, (2.8) and (2.10), can be used without changes because they do not contain momentum variables. For the same reason, in the Q2R we cannot determine the local temperature from the distribution of local kinetic energy as was done in the CCA. Thus we plotted the local energies in the stationary state for various system sizes in Fig. 10. As in the Creutz model, the Q2R also shows a smooth energy profile in the scaling limit (3.2). The system-size dependence of the total energy flux is shown in Fig. 11. The total energy flux converges to a nonzero finite value in the limit L → ∞, which demonstrates that the Q2R has a normal thermal conductivity at least when the temperatures are sufficiently high. This means that the normal thermal conductivity in the CCA is not caused by the presence of the momentum terms.
The thermal conductivity was carefully calculated with use of the Kubo formula in a system of size 100 × 100 at temperatures around T_C. The result is shown in Fig. 12, which exhibits behavior similar to the CCA: the thermal conductivity shows a remarkable change near T_C but seems continuous and smooth. This result disagrees with Costa and Herrmann [10], who reported that the energy flux vanished at the critical point and that no transport occurred below it. This discrepancy may be attributed to the differences in system sizes and heat reservoirs between their systems and ours. In [10], the distance between the reservoirs is 20, which may be too small to obtain the bulk thermal conductivity. Their heat reservoir is deterministic and keeps the energy constant in the boundary layer representing the reservoir; thus the motion of the total system must eventually become periodic. Because energy flow rarely occurs at low temperature, such simple dynamics may be unable to generate it, whereas our reservoirs are stochastic and rare events can happen. Another possible interpretation is that they misread the large but smooth change of the thermal conductivity around the critical point as vanishing.
In addition, Costa and Herrmann reported two different types of transport processes. One is normal diffusion and the other is a systematic transport called a "highway," which causes ballistic transport. However, we did not find such ballistic transport in our simulations. This, too, may be attributed to the differences in heat reservoirs and system sizes: the highway is characteristic of their deterministic reservoirs, and moreover the fraction of highways decreases to zero as the system size increases.
At the end of this section, we mention the one-dimensional Q2R dynamics. If i is even, the spin value of site i at time t + 1 is expressed in terms of the spin variables at time t by Eq. (6.1), and in the same manner Eq. (6.2) gives the rule for odd i; since a spin flips only when its two neighbors take opposite values, the updated spin can be written as σ_i(t+1) = σ_{i−1}(t) σ_i(t) σ_{i+1}(t) on the sublattice being updated. Defining the local energy of site i at time t through the bond between sites i and i + 1, we obtain from Eqs. (6.1) and (6.2) relations showing that each local energy is simply shifted to a neighboring bond at every step, in a fixed direction that depends on the sublattice. Therefore, the energy transport in the one-dimensional Q2R is ballistic and the Fourier heat law is not satisfied. Thus we have found that the dimensionality plays an important role for the Fourier heat law in the Q2R.
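The ballistic character can be checked directly in a few lines of code. The following Python sketch simulates the one-dimensional Q2R (a spin flips when its two neighbors are opposite, which on the updated sublattice is equivalent to σ_i ← σ_{i−1}σ_iσ_{i+1}) and verifies that the bond energies are merely permuted, translating at a constant speed of two sites per step. The even-then-odd sublattice update within one time step and the sign convention for the bond energy are assumptions made for this illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64                                  # number of sites, periodic chain
s = rng.choice([-1, 1], size=N)         # random initial spins

def bonds(s):
    # Bond (local) energy between sites i and i+1, periodic boundary.
    return -s * np.roll(s, -1)

def step(s):
    # One time step: update even sites from odd neighbors, then odd sites.
    # sigma_i <- sigma_{i-1} * sigma_i * sigma_{i+1} flips the spin exactly
    # when the two neighbors are opposite (their product is -1).
    s = s.copy()
    for parity in (0, 1):
        idx = np.arange(parity, N, 2)
        s[idx] = np.roll(s, 1)[idx] * s[idx] * np.roll(s, -1)[idx]
    return s

b0 = bonds(s)
T = 10
for _ in range(T):
    s = step(s)
bT = bonds(s)

# Ballistic transport: bonds with even index drift one way, odd bonds the
# other, two sites per step, so the energy pattern is only translated.
even, odd = np.arange(0, N, 2), np.arange(1, N, 2)
assert np.array_equal(bT[(even - 2 * T) % N], b0[even])
assert np.array_equal(bT[(odd + 2 * T) % N], b0[odd])
print("bond energies translated ballistically; total E =", bT.sum())
```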
VII. SUMMARY AND DISCUSSION
In this paper we have studied thermal conduction in the CCA with two methods. One is the direct measurement of the heat flux under a temperature gradient; the other is the use of the Kubo formula. The former revealed that the assumption of local equilibrium is satisfied and that the Fourier heat law is realized in a wide range of temperatures. The thermal conductivity was carefully calculated near the critical point by the latter method, and the results show no singularity of κ(T) at T_C.
How the thermal conductivity behaves at T_C is a highly nontrivial problem. Harris and Grant [3] and Costa and Herrmann [10] both argued that the thermal conductivity vanishes at the critical point. On the other hand, the autocorrelation of the total energy flux might show a slow decay due to critical slowing down, in which case the thermal conductivity might diverge at T_C. Our present result shows that neither is the case.
The present result does not mean that there is no singularity in energy transport at T_C. The Fourier heat law means that the macroscopic motion of the energy density obeys the diffusion equation with diffusion constant D(T) = κ(T)/C(T), where C(T) is the specific heat. The present result shows that κ(T_C) is finite while C(T) diverges at T_C. Thus the diffusion constant D(T) vanishes at T_C.
We evaluated the equal-time correlation of the heat flow by the use of a mean-field approximation. This quantity is the first term in the Kubo formula, from which we can obtain a rough estimate for κ(T). In the high- and low-temperature limits, our approximation reproduces the result of Harris and Grant [3].
Similar calculations were also done for the Q2R, a simplified variant of the CCA. The results obtained are almost the same as for the CCA: a normal thermal conductivity was found, and it is continuous and smooth at the critical point. This shows that the existence of the momentum terms is not essential for the normal thermal conductivity. On the other hand, the dimensionality is important: energy transport is ballistic in the one-dimensional Q2R. Such importance of the dimensionality was reported also in [13].
The similarity of the thermal conductivities in the CCA and the Q2R also implies that the smooth change of κ at the critical point is rather generic. To investigate to what extent this behavior is generic, however, we must examine other dynamical systems with a critical point; this is left as a future problem.
ACKNOWLEDGMENT
We gratefully acknowledge partial financial support from a Grant-in-Aid for Scientific Research from the Ministry of Education, Science and Culture. Numerical computation in this work was carried out at the Yukawa Institute Computer Facility.
APPENDIX A:
In this and the next Appendices, we derive the Kubo formula (4.1) for the CCA. We denote a state of the total system by ω = (ω_i,j), where ω_i,j = (σ_i,j, σ̃_i,j), and the transformation from the state at time t, ω_t, to that at time t + 1, ω_{t+1}, by Ω. The time evolution of any function F(ω) is then represented by F_t(ω) with F_0(ω) = F(ω), and the time evolution of a measure ρ(ω) by ρ_t(ω) with ρ_0(ω) = ρ(ω). Now we define the total flux J_tot,x(ω) as the sum of the local fluxes J_i,j,x(ω) over all sites, where J_i,j,α(ω) (α = x or y) is the α component of the energy flux at site (i, j) when the system is in state ω. We assume that the initial measure ρ_0 equals the local equilibrium measure ρ_le, defined through the local energies E_i,j(ω) around the sites (i, j), with Z_le the normalization constant. The parameter β_i is the local inverse temperature at the ith column; we consider the temperature variation in the x direction only. If the temperature is uniform and all the β_i equal a value β, ρ_le becomes the equilibrium measure at temperature T = β^{−1}. The average of a function F(ω) with respect to the local equilibrium measure is written ⟨F⟩_le, and similarly we write the equilibrium average as ⟨F⟩_eq. In the following, we calculate the local equilibrium average of the total flux at time t, ⟨J^t_tot,x⟩_le, using an identity for ρ_le and the equation of continuity; inserting the latter into Eq. (A11) and formally expanding the right-hand side with respect to ∇T, we obtain in O(∇T) the relation ⟨J^t_tot,x⟩_le ≃ ⟨J_tot,x⟩_le + correlation terms, where we have used the time invariance of the equilibrium measure, ρ_eq(Ω(ω)) = ρ_eq(ω). As we will show in Appendix B, the equality (A17) holds for the first term on the right-hand side in O(∇T). In addition, we assume that the average energy flux tends to a stationary value in the limit t → ∞ irrespective of the initial measure. Then the stationary energy flux per site obeys the Fourier heat law, and the thermal conductivity κ is given by Eq. (4.1); this is the Kubo formula for the CCA. We remark that the expansion is formal and not justified, and the coefficient κ might be divergent; currently we have no means to judge the convergence of the coefficient except numerical methods.
APPENDIX B:
In this Appendix we prove the formula (A17). First we note an identity obtained by a straightforward calculation. From now on, we deal only with equilibrium averages, and the suffix "eq" will be omitted. Since the total Hamiltonian H is invariant, namely H(Ω(ω)) = H(ω), the average can be rewritten in terms of Ω^{−1}, the inverse operator of Ω. Denoting the operation of updating the even sites by Ω_e and that for the odd sites by Ω_o, we can decompose the time evolution operator as Ω = Ω_o ∘ Ω_e. Since Ω_e ∘ Ω_e = Ω_o ∘ Ω_o = 1 (the identity), the inverse operator is given as Ω^{−1} = Ω_e ∘ Ω_o. Let us define the shift operator S, which shifts the state by one site in the y direction. Because the shift exchanges the roles of the even and odd sites, the inverse operator has another representation in terms of S. Inserting this formula into Eq. (B2) and utilizing the shift invariance of the Hamiltonian (i.e., H(S(ω)) = H(ω)), we can rewrite the average accordingly. From the definition of the flux J_i,j,x(ω), the corresponding relation holds in both cases, i.e., whether the site (i, j) is even or odd. Combining Eqs. (B11) and (B13) gives an antisymmetry relation for the averaged flux, and similarly ⟨J_i,j,x⟩ = −⟨J_i,j+1,x⟩. Inserting these equalities into Eq. (B1), we arrive at the desired result, which is equivalent to the formula (A17). | 2019-04-14T02:18:42.263Z | 1998-11-12T00:00:00.000 | {
"year": 1998,
"sha1": "c6d748bb6b24f4c4d0f1e2e472a190cd0b095eaa",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/9811168",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "c6d748bb6b24f4c4d0f1e2e472a190cd0b095eaa",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
231678242 | pes2o/s2orc | v3-fos-license | Quantification of Nitric Oxide Concentration Using Single-Walled Carbon Nanotube Sensors
Nitric oxide (NO), a free radical present in biological systems, can have many detrimental effects on the body, from inflammation to cancer. Due to NO's short half-life, detection and quantification are difficult. The inability to quantify NO has hindered researchers' understanding of its impact in healthy and diseased conditions. A single-walled carbon nanotube (SWNT), when wrapped in a specific single-stranded DNA chain, becomes selective to NO, creating a fluorescence sensor. Unfortunately, the correlation between NO concentration and the SWNT's fluorescence intensity has been difficult to determine due to an inability to immobilize the sensor without altering its properties. Through the use of a recently developed sensor platform, systematic studies can now be conducted to determine the correlation between SWNT fluorescence and NO concentration. This paper explains the methods used to determine the equations that can convert SWNT fluorescence into NO concentration. Through the use of the equations developed in this paper, an easy method for NO quantification is provided. The methods outlined here will also enable researchers to develop equations to determine the concentration of other reactive species through the use of SWNT sensors.
Popular methods of NO detection, including the Griess assay, horseradish peroxidase (HRP), and various electrochemical probes, suffer from limitations, including detection of upstream or downstream products of NO, rather than NO itself, and a lack of spatial detection (Table 1) [40][41][42]. Downstream measurements of NO are frequently inaccurate because NO decays into different molecules, such as sodium formate [43], methemoglobin [27], or nitrogen dioxide [44], depending on the chemical makeup of the environment. Upstream assays for NO encounter similar quantification issues, since the formation of NO is dependent on multiple cell-specific characteristics [41]. The conflicting reports about NO's concentration demonstrate the need for a quantification method capable of directly detecting biologically relevant NO concentrations.
Single-walled carbon nanotubes (SWNTs) emit light in the near-infrared (nIR) range when excited, and when wrapped with specific polymers, they become optical sensors for a wide array of analytes, including NO, reactive oxygen species, insulin, and dopamine [45][46][47][48][49][50][51][52]. SWNT sensors react to their analyte of interest with an increase or decrease in fluorescence intensity and/or a blue or red shift in their emission wavelength [45,49,53,54]. Researchers are interested in developing SWNTs as sensors for biological applications since their emission wavelength falls within the near-infrared range, a region in which water and blood have limited interference, and they do not photobleach, thereby providing a long-term fluorescence sensor [55][56][57].
When a (6,5) SWNT is wrapped with single-stranded (AT)15, a 30-mer strand of DNA, a fluorescence-quenching NO sensor is created [45]. The (AT)15-wrapped SWNT maintains a constant fluorescence intensity until it is exposed to NO; once NO is introduced, the fluorescence intensity decreases [45]. This decrease in fluorescence intensity does not occur when similar reactive oxygen and nitrogen species are exposed to the SWNT; it occurs only upon exposure to NO [45]. Due to their lack of photobleaching, real-time response rate, analyte specificity, and ability to detect NO itself, rather than a precursor or downstream product, these SWNT sensors have many unique properties that cannot be found in other NO sensors [45,[57][58][59][60][61]].
Table 1. Comparison of three of the frequently used nitric oxide (NO) sensors to the (AT)15 single-walled carbon nanotube (SWNT) sensor. ** The range of detection (µM) for the SWNT has not previously been determined and will be shown in this paper.
[Table 1 columns: Range of Detection (µM); Analyte Measured; Real-Time Sensing; Spatial Resolution. Rows: Griess assay, HRP, electrochemical probe, and the (AT)15 SWNT sensor.]
Unfortunately, the (AT)15 SWNT sensor does not have a linear fluorescence-quenching rate relative to NO concentration, so the actual NO concentration, as opposed to quantification of changes in the concentration, has never before been determined. In this paper, we demonstrate the success of our research aim, which was to develop a mathematical model that converts the change in SWNT fluorescence into NO concentration.
SWNT Sensors
SWNT sensors were made as previously described [53]. Briefly, single-stranded (AT)15 DNA was added to (6,5) SWNT in nanopure water in a 2:1 ratio (Integrated DNA Technologies, Coralville, IA, USA and Sigma-Aldrich, St. Louis, MO, USA). The SWNT and DNA solution was placed in a bath sonicator for 10 min and a tip sonicator for two 20-min periods, and then centrifuged twice (ThermoFisher, Waltham, MA, USA and Qsonica, Newtown, CT, USA). The remaining supernatant was then analyzed on an ultraviolet-visible (UV-Vis) spectrometer (Beckman Coulter, Brea, CA, USA) to determine its concentration [45].
Attachment of SWNT to Glass Surface
The SWNT sensors were adhered to a glass slide using a previously described method [65]. Briefly, the glass slides were treated over the course of five days with piranha solution, 3-glycidyloxypropyltrimethoxysilane (GPTMS), and avidin before incubation with biotinylated SWNT (Sigma-Aldrich, St. Louis, MO, USA and Integrated DNA Technologies, Coralville, IA, USA) [65].
Nitric Oxide Solution
Both an NO solution and an NO-free control solution were made as previously described [66]. Briefly, 12 mL of saline was placed in each of two sealed round-bottomed flasks. Argon was bubbled into both flasks for 20 min to de-oxygenate the saline; NO was then bubbled through a single flask for 5 min to create the NO solution (Matheson Tri-Gas, Irving, TX, USA).
NO Concentration Quantification via Horseradish Peroxidase
NO concentration was determined as previously described by Qiang et al. [64]. Briefly, NO was mixed with a horseradish peroxidase solution (final concentration 1.36 µM) (ThermoFisher, Waltham, MA, USA). The absorbance values at 405 and 420 nm were collected and used to calculate NO concentration via Qiang et al.'s formula [64].
Preparation of Slides for Imaging
Before imaging, the slides were tightly fitted to a holder by means of thermal expansion. They were then allowed to cool and reach thermal equilibrium before 4 mL of saline was added. The slides were placed on the microscope and imaged for 2 min to establish a baseline. After that, 400 µL of saline was withdrawn from the slide holder to ensure the slide stayed in focus when the 400 µL of NO solution was injected.
Detecting SWNT Response to NO (Fluorescence Measurements)
The sensitivity and reactivity of the SWNT sensors to NO were determined using a custom-built hyperspectral near-infrared microscope (Photon etc., Montreal, QC, Canada). The microscope excites samples with a 2 W laser (561 nm), collects the emission signal with a volume Bragg grating to select the specific wavelength of interest, and records the data with an InGaAs camera (Xenics, Beverly, MA, USA). SWNT fluorescence was monitored while solutions of NO at different concentrations were added (400 µL of NO solution to a 3600 µL saline bath). Images were collected every 200 milliseconds for 6.5 min, with the NO injection at the 2.5 min timepoint. A custom-developed MATLAB program (Supplementary File 1) (MathWorks, Natick, MA, USA) was used to quantify the h5 files (a file type specific to our imaging system).
Mathematical Analysis
Mathematical analysis was performed under the supervision of a trained statistician. First, the average brightness for each frame of the video was extracted via a custom-developed program (Supplementary File 1) and smoothed using a standard three-point median filter.
The signal intensity difference was found by averaging the last quarter of the data collected before injection (the initial value) and the final quarter of the data collected after the injection of NO (the final value), and then subtracting the final value from the initial value.
The slope was determined using the local maximum before the injection of NO and the local minimum within the first quarter of the data after the injection of NO.
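The original extraction and smoothing were done in MATLAB (Supplementary File 1); the following Python sketch reproduces the described pipeline for illustration only. The array name `brightness`, the injection index, the synthetic example trace, and the use of scipy's `medfilt` are assumptions standing in for the authors' implementation.

```python
import numpy as np
from scipy.signal import medfilt

def analyze_trace(brightness, inj):
    """Signal-intensity difference and slope from a fluorescence trace.

    brightness : 1D array of per-frame average brightness
    inj        : frame index at which NO was injected
    """
    sm = medfilt(brightness, kernel_size=3)      # three-point median filter
    pre, post = sm[:inj], sm[inj:]

    # Difference: mean of last quarter pre-injection minus mean of final
    # quarter post-injection.
    initial = pre[-len(pre) // 4:].mean()
    final = post[-len(post) // 4:].mean()
    diff = initial - final

    # Slope: from the local maximum before injection to the local minimum
    # within the first quarter of the post-injection data.
    i_max = np.argmax(pre)
    j_min = inj + np.argmin(post[: len(post) // 4])
    slope = (sm[j_min] - sm[i_max]) / (j_min - i_max)
    return diff, slope

# Example with a synthetic quenching trace (frames every 200 ms, 6.5 min
# total, injection at the 2.5 min mark = frame 750):
t = np.arange(1950)
trace = 100 - 30 * (t > 750) * (1 - np.exp(-(t - 750) / 100.0))
print(analyze_trace(trace, inj=750))
```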
Each collection of NO concentrations was averaged, and the linear section of the graph was fit with an equation correlating NO concentration with either the fluorescence-signal intensity difference or the slope of the fluorescence signal.
Results and Discussion
An important aspect of analyzing the change in SWNT fluorescence due to NO exposure is the stabilization of the SWNT. Therefore, it was important that the sensor-coated slides be analyzed within a device that kept them from moving while also allowing for the saline bath used in the experiments. A slide holder was 3D printed for this purpose. Before use, the slide holders were expanded via heat, the slide was placed in the holder, and the holder was allowed to cool, creating a tight seal between the slide and the holder.
Once the sensor-coated slide was stabilized, it was imaged with a custom-made upright microscope. The SWNT were excited via a 561 nm laser, and the subsequent emissions at 990 nm were read through a 20× objective with an exposure time of 200 ms for a total duration of 6.5 min. The sensors were exposed to various concentrations of NO as well as a non-NO control while fluorescence intensity readings were collected (Figure 1). Unfortunately, the ratio of SWNT to NO cannot be determined, since SWNT is measured by fluorescence intensity, not by number; however, the number of SWNTs on the surface of the slides remained constant, so changing the NO concentration changed the SWNT-to-NO ratio. The SWNT sensors respond to the different concentrations of NO by quenching to different extents (Figure 2). The addition of 0.1 µM or higher concentrations of NO resulted in a measurable decrease in fluorescence when compared to the non-NO control (n = 3-7). The addition of higher concentrations of NO resulted in a lower final fluorescence intensity when compared to the final intensity of samples exposed to lower NO concentrations (Figure 3A). The lower limit of detection for the SWNT was found to be 0.1 µM, with concentrations of NO below 0.1 µM resulting in changes of fluorescence within the noise range of the 0 µM control samples. With the current system, the SWNT does not have a discernible upper limit of detection, but it does have an upper limit for differentiation between concentrations. When 30 µM NO is added to the system, the SWNT becomes fully saturated; increasing the NO concentration beyond that point will not change the observable fluorescence intensity. Therefore, we have set the functional upper limit of quantification at 30 µM.
The goal of this project was to develop a mathematical model that correlates the response of the SWNT sensors with NO concentration. While the SWNT respond to NO over a wide variety of concentrations, the response is not always linear. Therefore, the statistical analysis of the data, as described in the Materials section, was limited to a linear section of the graph (Figure 3B) to ensure a more accurate curve fit. The quenching was found to have a linear fit within the range of 0.1 to 10 µM when determined through analysis of the difference between the initial and final signal intensity. The equation relating the drop in fluorescence intensity to NO concentration is x = (y − 28.59)/3.73, where y is the change in fluorescence and x is the concentration of NO in µM.
Since the lower concentrations of NO are of interest in biological settings, we attempted to create an analysis method that is accurate at low NO concentrations. We found that by comparing the slope of the fluorescence signal with NO concentration we were able to get a much better fit for our data at low concentrations, creating a sensor with a range of 0 to 10 µM (Figure 4). The equation relating the slope of the fluorescence intensity to NO concentration is x = −(y + 0.21)/0.42, where y is the slope of the fluorescence signal and x is the concentration of NO in µM. We choose to report both methods of NO concentration quantification since there are situations for which each method is preferable. When a researcher is interested in the total concentration of NO added to a system, the change from the SWNT's initial to final fluorescence will provide the necessary information without the complication of noise in the system as more or less NO is released over short time spans, whereas quantification of NO concentration via the slope of the fluorescence signal will be beneficial in situations where temporal data or information about low concentrations of NO is required.
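As a worked illustration of the two calibration equations, the snippet below converts hypothetical fluorescence readings into NO concentrations; the example input values are invented for demonstration and do not come from the reported data.

```python
def no_from_intensity_drop(y):
    # NO concentration (uM) from the drop in fluorescence intensity y,
    # valid over roughly 0.1-10 uM: x = (y - 28.59) / 3.73
    return (y - 28.59) / 3.73

def no_from_slope(y):
    # NO concentration (uM) from the slope y of the fluorescence signal,
    # valid over roughly 0-10 uM: x = -(y + 0.21) / 0.42
    return -(y + 0.21) / 0.42

print(no_from_intensity_drop(47.0))   # a drop of 47 units -> about 4.9 uM
print(no_from_slope(-2.31))           # a slope of -2.31 -> 5.0 uM
```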
These models do have some limitations, including the fact that the SWNT must be adhered to a glass slide, meaning that extracellular NO can be quantified but intracellular NO concentrations are not currently quantifiable. The results also take more time to obtain and process than those from a traditional electrochemical probe. However, with the development of this model, NO concentrations can be analyzed spatially down to the µm scale, which is not feasible with current electrochemical probe technology. Our SWNT sensing system allows for repeated quantification of NO in both a spatial and a temporal fashion, a feat that none of the commercially available sensors can currently claim.
An understanding of these methods for NO concentration quantification will also assist in the development of equations to quantify intracellular NO concentrations in vitro and extracellular NO concentrations in vivo.
Conclusions
Two methods of NO concentration quantification have been developed, for both real-time and longer-time-period data-collection modalities. The two equations we developed, derived respectively from the difference between the initial and final signal intensity and from the change (slope) in fluorescence intensity over time, are x = (y − 28.59)/3.73 (with x = NO concentration in µM and y = difference in fluorescence intensity) and x = −(y + 0.21)/0.42 (with x = NO concentration in µM and y = slope of the signal intensity). These two methods have detection ranges of 0.1 to 10 µM (difference in signal intensity) and 0 to 10 µM (slope of signal intensity). These equations allow for the determination of NO concentration with spatial resolution when imaging, opening up possibilities that could not previously be explored via the standard detection method of an electrochemical probe.
With this work, we have improved a tool for the study of NO in living systems by finding a mathematical equation that correlates changes in SWNT fluorescence with NO concentration. We have also created a template for the development of mathematical relationships between other SWNT sensors and their analytes. As this is the first publication demonstrating the quantification of NO concentration with a SWNT sensor, we hope that our technique can be used to improve the function of other SWNT sensors as well.
Supplementary Materials: The following are available online at https://www.mdpi.com/2079-4991/11/1/243/s1, File S1: computer program to extract the average brightness for each frame of the video.
Funding: This research was funded by the Nebraska Center for Nanomedicine COBRE Grant (P30 GM127200) and is based upon work supported in part by the National Science Foundation EPSCoR Cooperative Agreement OIA-1557417.
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations
SWNT: single-walled carbon nanotubes
NO: nitric oxide
(AT)15: a 30-mer of DNA consisting of adenine and thymine repeated 15 times
HRP: horseradish peroxidase | 2021-01-23T06:16:24.859Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "f9757297622c874a3a19609fb93d605c946a53f2",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-4991/11/1/243/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "fb102b999def336f445ada121e1b0734a191a1a7",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257443255 | pes2o/s2orc | v3-fos-license | Novel Information Measures for Fermatean Fuzzy Sets and Their Applications to Pattern Recognition and Medical Diagnosis
Fermatean fuzzy sets (FFSs) have piqued the interest of researchers in a wide range of domains. The striking framework of the FFS is keen to provide the larger preference domain for the modeling of ambiguous information deploying the degrees of membership and nonmembership. Furthermore, FFSs prevail over the theories of intuitionistic fuzzy sets and Pythagorean fuzzy sets owing to their broader space, adjustable parameter, flexible structure, and influential design. The information measures, being a significant part of the literature, are crucial and beneficial tools that are widely applied in decision-making, data mining, medical diagnosis, and pattern recognition. This paper aims to expand the literature on FFSs by proposing many innovative Fermatean fuzzy sets-based information measures, namely, distance measure, similarity measure, entropy measure, and inclusion measure. We investigate the relationship between distance, similarity, entropy, and inclusion measures for FFSs. Another achievement of this research is to establish a systematic transformation of information measures (distance measure, similarity measure, entropy measure, and inclusion measure) for the FFSs. To accomplish this aim, new formulae for information measures of FFSs have been presented. To demonstrate the validity of the measures, we employ them in pattern recognition, building materials, and medical diagnosis. Additionally, a comparison between traditional and novel similarity measures is described in terms of counter-intuitive cases. The findings demonstrate that the innovative information measures do not include any absurd cases.
Introduction
The idea of the fuzzy set (FS) was developed by Zadeh [1] in 1965, addressing vagueness and ambiguity in real-world situations. In 1970, Bellman and Zadeh introduced the concept of decision-making (DM) problems under uncertainty. DM is a systematic procedure of selecting the most ideal choice from a collection of available alternatives. Therefore, the decision maker plays a crucial role in real-world environments [2]. A smart decision may have a significant impact on the direction of someone's lifestyle.
Before making a final selection, a decision maker assesses the restrictions, advantages, and characteristics of each alternative. Since an FS is defined by a single parameter, the membership degree, several higher-order FSs have been described in recent decades by various scholars.
Atanassov [3] established the notion of intuitionistic fuzzy sets (IFSs), capable of dealing with complexity and uncertainty, which has been extensively examined and utilized by several researchers in DM problems. An IFS is defined by three parameters: membership grade (MG), nonmembership grade (NMG), and hesitancy margin, with the property that the sum of the MG and NMG must be less than or equal to 1. In many situations, it is conceivable that the sum of the MG and NMG will be greater than 1. To overcome these challenges, Yager [4] introduced the Pythagorean fuzzy set (PyFS) as an extension of the IFS theory. A PyFS is defined by an MG and NMG satisfying the criterion that the square sum of its MG and NMG is less than or equal to 1. Therefore, PyFSs can express the fuzzy nature of information more accurately than IFSs.
In the field of PyFSs, there are various approaches for solving real-life multiattribute decision-making (MADM) situations, and a number of researchers have also suggested real-world applications in a Pythagorean fuzzy environment. However, for an orthopair such as 〈0.9, 0.5〉, where 0.9 is the MG of a specific criterion of a parameter and 0.5 is the NMG, the IFS and PyFS requirements are not fulfilled, whereas the cube sum of the MG and NMG is less than or equal to one. In this context, Senapati and Yager [5] recently introduced the Fermatean fuzzy set (FFS). They also demonstrated that FFSs cover larger degrees of uncertainty than IFSs and PyFSs, are capable of sustaining higher levels of uncertainty, and can solve MCDM challenges. Information measures are an essential notion for dealing with MADM challenges in a variety of domains, including pattern recognition, clinical diagnosis, and personnel appointment. Several types of information measures have been established, such as distance, similarity, entropy, and inclusion measures.
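To make the containment of these models concrete, the following small Python check, written for this illustration, classifies the paper's example pair 〈0.9, 0.5〉 under the three constraints: it violates the IFS condition (0.9 + 0.5 = 1.4 > 1) and the PyFS condition (0.81 + 0.25 = 1.06 > 1) but satisfies the FFS condition (0.729 + 0.125 = 0.854 ≤ 1).

```python
def valid_ifs(mu, nu):   # intuitionistic: mu + nu <= 1
    return mu + nu <= 1

def valid_pyfs(mu, nu):  # Pythagorean: mu^2 + nu^2 <= 1
    return mu**2 + nu**2 <= 1

def valid_ffs(mu, nu):   # Fermatean: mu^3 + nu^3 <= 1
    return mu**3 + nu**3 <= 1

mu, nu = 0.9, 0.5
print(valid_ifs(mu, nu), valid_pyfs(mu, nu), valid_ffs(mu, nu))
# -> False False True: the pair is representable only as a Fermatean grade.
```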
The MADM process is normally assisted by similarity measures, distance measures, inclusion measures, entropy measures, and, in certain situations, aggregation operators. The degree of similarity has garnered considerable interest in recent decades due to its importance in DM, data mining, pattern recognition, and medical diagnosis applications. Szmidt and Kacprzyk [6] performed the first investigation, extending well-known distance measures such as the Hamming distance and the Euclidean distance to the IFS environment and comparing them to approaches used for conventional fuzzy sets. However, Wang and Xin [7] argued that Szmidt and Kacprzyk's [6] distance measure was ineffective in certain situations; therefore, several innovative pattern-recognition distance measures were developed and implemented. Grzegorzewski [8] also extended the Hamming and Euclidean distances and their normalized versions to the IFS framework. Later on, Chen [9] demonstrated by counterexamples that several flaws occurred in Grzegorzewski [8]. Hung and Yang [10] described three similarity measures and extended the Hausdorff distance to IFSs. On the other side, rather than expanding well-established measures, various studies established novel similarity measures for IFSs.
Yong et al. [11] developed a novel similarity measure for IFSs based on the MG and NMG. Mitchell [12] demonstrated that Yong et al.'s [11] similarity measure had certain counter-intuitive cases and improved it statistically. Additionally, Liang and Shi [13] provided examples demonstrating that the similarity measure proposed by Yong et al. [11] was unsuitable for certain scenarios and hence developed several additional similarity measures for IFSs.
Xu [14] formulated a series of IFS-based similarity measures and applied them to the MADM problem employing IF information. Xu and Chen [15] presented a set of distance and similarity measures that are different combinations and extensions of the weighted Hamming, Euclidean, and Hausdorff distances. Xu and Yager [16] constructed a similarity measure between IFSs and applied it to MAGDM utilizing IF preference relations.
In addition to this research, several researchers investigated the relationships between the distance, similarity, and entropy measures of IFSs. Zeng and Guo [17] analyzed the relationship between the normalized distance, similarity, inclusion, and entropy of interval-valued fuzzy sets. It was also demonstrated that the similarity, inclusion, and entropy of interval-valued fuzzy sets may be induced from the normalized distance through their axiomatic definitions. Wei et al. [18] proposed a generalized entropy measure for IFSs and PyFSs; additionally, a technique was developed for constructing similarity measures for IFSs and PyFSs using entropy measures. Numerous researchers investigated information measures (distance, similarity, entropy, and inclusion measures) for IFSs and PyFSs and their transformation relationships. Dengfeng and Chuntian [19] investigated the similarity between IFSs and applied their findings to pattern recognition. Hung and Yang [10] presented the Hausdorff distance as a similarity measure between IFSs and utilized it to assess the degree of similarity between IFSs. Ashraf et al. [20] gave the idea of a spherical fuzzy set and then applied this concept to decision-making [21].
Nguyen et al. [22] developed a novel knowledge-based similarity measure for IFSs and demonstrated its application to pattern recognition. Zhang [23] pioneered a unique strategy for PyFS MADM based on similarity measures. Zhang et al. [24] explored the application of a two-parameter scoring function on IFSs to pattern recognition and medical diagnosis. Ejegwa established distance [25] and similarity [26] measures for PyFSs.
Ye [27] designed and implemented a cosine similarity measure for IFSs (CIFS). In addition, Ye [28] introduced the cosine similarity measure for interval-valued IFSs (CIVIFSs) and described its use in solving MADM problems. Liu et al. [29] investigated the cosine similarity measure between hybrid IFSs and its application for diagnostic purposes. In recent years, several scholars have conducted research on PyFS information measures (distance, similarity, entropy, and inclusion measures). Wei and Wei [30] introduced a set of ten cosine-based PyFS similarity measures relying on the MG, NMG, and hesitation of PyFSs in order to improve the capacity to cope with two optimization challenges related to pattern recognition and medical diagnosis procedures. Peng [31] established a PyFS similarity measure based on the parameters of the Lp norm and levels of ambiguity, which were examined in detail in relation to the PyFS similarity measure. Peng et al. [32] developed the fundamental definitions of PyFS information measures, along with the similarity measure, and discussed the transformation principles for the established information measures. An examination of the existing literature on FS, IFS, and PyFS exposes a number of weaknesses that spur us to create a more potent class of novel information measures (distance measure, similarity measure, entropy measure, and inclusion measure).
The disadvantages of existing information measures are as follows: (i) some of them cannot avoid meaningless cases (i.e., division by zero); (ii) many of them struggle to avoid counter-intuitive examples.
(iii) Many of them are unable to categorize the results, and some of them provide irrational results. We describe a class of useful FFS information measures (distance measure, similarity measure, entropy measure, and inclusion measure), offer associated information measure formulations, and examine their transformation connections to address the flaws in the prior research.
The important contributions of the current manuscript are listed below. The manuscript is organized as follows: Section 2 discusses the definitions and fundamental ideas of FS, HFS, IFS, and FFS, as well as the corresponding operational rules of FFSs. Section 3 introduces new types of information measures, provides related formulations, and investigates their transformation relationships for FFSs. In Sections 4 to 6, we demonstrate the application of the novel information measures between FFSs to pattern recognition, and a comparative study is presented between the proposed similarity measures and conventional similarity measures. Section 7 concludes the paper by outlining future areas of research.
Basic Terminologies
In this section, we provide some relevant fundamental information, such as FS, HFS, IFS, and FFS, as well as some related operational laws, which are listed below. These core concepts will assist readers in comprehending the proposed framework.
Some New Types of Information Measures between FFSs
This section explains the axiomatic framework of the FFS information measures (distance, similarity, entropy, and inclusion), as well as their related formulations. Simultaneously, their transformation relationships are thoroughly examined.
Distance Measures for FFSs.
This section introduces the idea of a distance measure for FFSs: a number assigned to a pair of points in a space which indicates how far those points are from one another. A distance measure is called a metric if it is always positive and symmetric, and the first two axioms require (1) 0 ≤ D(Y, P) ≤ 1 and (2) D(Y, P) = D(P, Y).
Similarity Measure for FFSs. This section introduces the idea of similarity measures for FFSs. Similarity functions take a pair of points and return a large similarity value for nearby points and a small similarity value for distant points. One way to transform between a distance function and a similarity measure is to take the complement.
Theorem 2. Let Y and P be two FFSs, then
Theorem 5. For i = 1, 4, 5, 6 and for all ϰ_i ∈ Z, the property (I_3) holds.
3.3. Entropy for FFSs. Let Y and P be two FFSs on Z. A mapping carrying the following features is an inclusion measure.
The Relations between These Measures.
In this section, we study the relations between the inclusion, entropy, similarity, and distance measures of Fermatean fuzzy sets. First, according to the definitions of the similarity measure and distance measure of Fermatean fuzzy sets, one should note that they are all used for estimating the degree of similarity between two Fermatean fuzzy sets. The main difference is as follows: for the similarity measure, a greater value means that the two Fermatean fuzzy sets are more similar than a pair with a lower value. The situation for the distance measure is just the opposite, that is, the smaller the value is, the more similar the two Fermatean fuzzy sets are. So, we can obtain the following theorem.
Transformation Relationships among Information Measures for FFSs
Theorem 8. Let D be a Fermatean fuzzy distance measure for Y, P ∈ FFSs; then S(Y, P) = 1 − D(Y, P) is a similarity measure of the FFSs Y and P. The proof is straightforward.
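The D-to-S transformation in Theorem 8 can be illustrated with a concrete distance. The sketch below uses a normalized Hamming-type distance on the cubed membership grades; this particular formula is an assumption chosen for illustration, since the paper's displayed definitions of D_1 to D_13 were lost in extraction, but it satisfies the boundedness and symmetry axioms stated above.

```python
def ff_distance(Y, P):
    """Normalized Hamming-type distance between two FFSs.

    Y, P: lists of (mu, nu) Fermatean membership / nonmembership pairs,
    one pair per element of the universe Z.
    """
    n = len(Y)
    return sum(
        abs(my**3 - mp**3) + abs(ny**3 - np_**3)
        for (my, ny), (mp, np_) in zip(Y, P)
    ) / (2 * n)

def ff_similarity(Y, P):
    # Theorem 8: S = 1 - D turns any FF distance into a similarity measure.
    return 1 - ff_distance(Y, P)

Y = [(0.9, 0.5), (0.6, 0.7)]
P = [(0.8, 0.6), (0.6, 0.7)]
print(ff_distance(Y, P), ff_similarity(Y, P))
```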
Theorem 10. Let D and S be the distance and similarity measures of FFSs. This completes the proof. □
Theorem 11. For Y, P ∈ FFSs, we order the measures as stated.
Theorem 12. Let D be the distance measure and S be the similarity measure of FFSs; then, for Y in FFSs, the stated relation holds. This completes the proof.
Proof
(i) (I_1) It is straightforward. Definition 11. Let Y and P be two FFSs; then we define g(Y, P) ∈ FFSs, ∀ϰ_i ∈ Z,
Theorem 15.
Suppose E is the entropy measure of FFSs; then, for Y, P ∈ FFSs, E(g(Y, P)) is a similarity measure of the FFSs Y and P. Proof. By the definition, E(g(Y, O)) ≤ E(g(Y, P)).
Similarly, we can prove that E(g(Y, O)) ≤ E(g(P, O)). This completes the proof. □
Theorem 16. Suppose I is the inclusion measure of FFSs. According to the definition of the inclusion measure, we have E(Y) ≤ E(P). This completes the proof. □
Example 1. For Y, P ∈ FFSs, we order I(Y, P) = I_1(Y, P), which follows from the definition of the similarity measure of FFSs. Here L is an unknown pattern, and the aim is to determine the class to which L belongs. To do that, the distances between L and the classes M_1, M_2, M_3, and M_4 are measured, and L is then allocated to the class M_g at the smallest distance. For all the newly developed distance measures (D_1 to D_13), the results are given in Table 1. It is observed in Table 1 that the unknown pattern L belongs to class M_3 when D_1 to D_13 are used. It is clear that the cause of this difference is the first characteristic, i.e., ϰ_1. By routine calculations, we can find the aforementioned relation for D_1 to D_13, as shown in Table 1.
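The allocation rule used throughout these examples (assign the unknown pattern to the class at minimal distance, or equivalently maximal similarity) can be sketched as follows, reusing the ff_distance function from the sketch above; the numeric patterns are placeholders, since the actual FFNs live in tables that did not survive extraction.

```python
def classify(L, classes, distance):
    """Assign pattern L to the known class with the smallest distance."""
    return min(classes, key=lambda name: distance(L, classes[name]))

# Placeholder Fermatean fuzzy patterns over a universe of three features:
M = {
    "M1": [(0.5, 0.6), (0.7, 0.4), (0.4, 0.5)],
    "M2": [(0.6, 0.5), (0.6, 0.5), (0.5, 0.4)],
    "M3": [(0.8, 0.3), (0.5, 0.6), (0.6, 0.5)],
}
L = [(0.7, 0.4), (0.5, 0.6), (0.6, 0.5)]
print(classify(L, M, ff_distance))   # -> "M3" for these placeholder data
```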
Example 3.
Assume that a doctor would like to diagnose the condition C: (viral fever, malaria, typhoid, or chest problem) for the patients P: (Ragu, Mathi, Velu, and Karthi) with the disease symptoms V: (headache, acidity, burning eyes, and depression). The symptoms associated with the considered diagnoses are listed in Tables 2-4, and the symptoms of the disease associated with each patient are listed in Table 2. Each table element is represented by a specific FFS. For each patient, a precise diagnosis is necessary. The distance measure methods mentioned here are used to assess the distance between each patient and each diagnosis, and each patient was then diagnosed using the concept of the shortest distance. To determine the condition of a patient, we may assess the distance measure between the symptoms associated with each illness and those associated with the patient. The diagnostic findings are provided in Tables 2-4 using the distance measure formula D_13. We may conclude that all the patients suffer from viral fever.
Comparison of the Distance Measure between FFSs in Medical Diagnosis.
To illustrate the effectiveness of the novel distance measures for specific FFSs in pattern recognition, we present a numerical example and compare the novel findings to those reported in the literature (Table 5). The FF relation S ⟶ L is denoted by the FFS shown in Table 6; each element in Table 6 is represented by an FFS. The established distance measure methods are used to determine the distance between each patient and each diagnosis. Then, using the idea of the minimal distance degree, each patient was diagnosed. We computed the distance measure results of the patients M_j (j = 1, 2, 3, 4) with regard to the diagnoses L_i (i = 1, 2, 3, 4, 5); the final diagnosis findings are that Al has malaria (Table 7), Bob has a stomach problem (Table 8), Joe has typhoid (Table 9), and Ted has viral fever (Table 10). We performed a comparison study with other methodologies to demonstrate the capability and validity of the presented distance measures, and the findings are provided in Table 11. Table 11 shows that the suggested distance measure approaches achieve the same results as D [18], D [35], D [36], and D [37], demonstrating that using the proposed distance measure methods to solve the medical diagnosis problem is possible and beneficial. From the preceding practical implementation of the measure techniques, we may deduce that the proposed distance measure approaches are more effective and superior in handling real-world challenges.
Apply the Similarity Measure between FFSs to Pattern Recognition
In this part, we describe some examples to show the use of the suggested FFS-based similarity measures in pattern recognition.
Example 5. Suppose the four classes M_1, M_2, M_3, and M_4 of known construction materials and L, an unknown construction material, are defined in the space X = {ϰ_1, ϰ_2, ϰ_3} and are represented by FFSs. The goal is to ascertain the class to which L belongs (see Table 12).
Here, L is the unknown building material, and the objective is to determine the class to which it belongs. To do this, the degrees of similarity between L and the classes M_1, M_2, M_3, and M_4 are measured, and L is then allocated to the class M_g with the greatest similarity; the results are given in Table 13. It is clearly observed in Table 13 that the unknown building material L belongs to class M_1 when S_1, S_3, and S_7 to S_10 are used, and L belongs to class M_3 when S_2, S_4 to S_6, and S_11 to S_13 are used. It is clear that the cause of this difference is the first feature, i.e., ϰ_1.
A Comparison of the Proposed Similarity Measures between FFSs
To illustrate the effectiveness of the novel similarity measures for specific FFSs in pattern recognition, we present some examples and compare the novel findings to those reported in the literature. Our objective is to ascertain the class to which L belongs; the classification results of the suggested similarity measures (S_1 to S_13) are displayed in Table 14.
Example 7. Assume that a doctor would like to diagnose the condition C: (viral fever, malaria, or typhoid) for a set of patients P: (Al, Bob, Joe, and Ted) having symptoms V: (temperature, headache, and cough). The symptoms associated with the considered diagnoses are listed in Table 16, and the symptoms associated with each patient are listed in Table 17. Each table element is represented by a specific FFS. Each patient requires a proper diagnosis, which needs to be assessed. We identify a diagnosis for each patient based on the similarity between the symptoms associated with each diagnosis and those associated with the patient. The diagnostic observations are described in Table 18 (Al), Table 19 (Bob), Table 20 (Joe), and Table 21 (Ted), respectively, using the novel similarity measure formulas (S_1 to S_13). The patient Al is diagnosed with malaria (Mal.) in 12 of the 13 approaches; the remaining approach indicates that Al is diagnosed with viral fever (VF), as presented in Table 18. It is obvious that Bob has a stomach problem (SP), since all of the measures yield the same findings, as shown in Table 19. Joe is diagnosed with typhoid in 12 of the 13 methods; the other approach indicates that Joe is diagnosed with VF, as shown in Table 20. Similarly, 9 of the 13 measures indicated that Ted has VF, whereas the remaining methods imply that Ted has Mal., as presented in Table 21. For patient Al, the results in Tables 18 and 22 can be compared with those in the literature [50]. For patient Bob, the novel similarity measures provided the same results as in the literature, as presented in Tables 19 and 22. Similarly, for patient Joe the proposed similarity measures provided the same result as in the literature, as shown in Table 20 and Tables 21 and 22. Table 23 shows the present summary of the medical diagnoses.
Table 1: Distance measures for Example 2 with α = β = 0.5, l_1 = l_2 = 2, and t = 1.
Application of the Inclusion Measure between FFSs and Pattern Recognition
This section illustrates the applicability of the suggested FFS inclusion measures to pattern recognition; the results are given in Table 24. It is clearly observed in Table 24 that the unknown pattern L belongs to class M_4 when I_1 to I_3 and I_5 to I_6 are used, and L belongs to classes M_3 and M_2 when I_4 and I_7, respectively, are used. It is clear that the cause of this difference is the first characteristic, i.e., ϰ_1. In a similar way, we can find the previously mentioned relations for I_2 to I_7. (1) We developed FFS information measures (distance measure, similarity measure, entropy, and inclusion measure) axiomatically. (2) We constructed various formulae for FFS information measures and analyzed the associated transformation relationships in detail. (3) We applied the established distance measures (D_1-D_13) to pattern recognition and medical diagnosis to demonstrate their efficacy; the applications substantiate the results and also illustrate the feasibility and effectiveness of the distance measures between FFS information. (4) We demonstrated the efficacy of the novel similarity measures (S_1-S_13); several counter-intuitive examples of existing similarity measures were shown. We employed them in pattern recognition, construction materials, and medical diagnosis. For pattern recognition problems, we conclude that the proposed similarity measures dominate existing similarity measures. In some special situations, it has been shown that many conventional similarity measures are incapable of providing reasonable findings; however, in these specific cases, the proposed similarity measures are proficient at discriminating. The experimental findings demonstrated that the proposed measures are more reliable and can avoid counter-intuitive situations in dealing with practical applications based on the Fermatean fuzzy environment [24,51,52].
Limitations and Future Works
(1) The FFSs are inappropriate for dealing with situations where the cube sum of the membership and nonmembership grades exceeds 1. (2) A near-future target is to unfold the application of the proposed information measures in scientific investigations of decision-making, pattern recognition, linguistic summarization, and data mining. (3) We also plan to apply the presented approach to procurement planning, water desalination station selection, wind power plant site selection, and many more domains of real-world problems. (4) Additionally, we will be further interested in immersing them in a variety of fuzzy environments. (5) Furthermore, since this work presents an applicative analysis of the FFS information measures, appropriate software should be developed to effectively apply the presented information measures in realistic situations.
Data Availability
The data used in this manuscript are hypothetical and can be used by anyone by just citing this article. | 2023-03-12T15:50:48.495Z | 2023-03-08T00:00:00.000 | {
"year": 2023,
"sha1": "920bce4857d522d53874ba7d6bbdf355f1794c41",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "67aef0cad0fe165ce424edfe017950abf0d3ba6d",
"s2fieldsofstudy": [
"Computer Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
243783627 | pes2o/s2orc | v3-fos-license | The Role of Controlled Motivation in the Self-Esteem of Adolescent Students in Physical Education Classes
The aim of this cross-sectional study was to analyse the relationships between the satisfaction of basic psychological needs, controlling motivation in physical education and academics, and self-esteem, and to propose a prediction model in line with the postulates of the hierarchical model found in self-determination theory. The participants were 618 physical education students from primary and secondary school (317 girls and 301 boys) aged between 10 and 14 years old (M = 11.62; SD = 0.94). The basic psychological needs in exercise scale (BPNES), the perceived locus of causality scale (PLOC), the academic motivation scale (EME), and the physical self-perception profile (PSPP) were used to measure the studied variables. The results showed that autonomy and relatedness significantly and negatively predicted physical education controlling motivation, which positively and significantly predicted academic controlling motivation. This, in turn, negatively and significantly predicted self-esteem. It is concluded that avoiding controlling motivation is essential to promote the development of a positive self-perception in students.
Introduction
Numerous authors have highlighted physical education classes as an ideal context for the development of adaptive behaviours in adolescents [1,2], such as the promotion of positive attitudes and prosocial values (respect, participation, autonomy, helping others, etc.) [3,4]. Thus, in this context, learning new skills, successfully completing the proposed tasks, and feeling valued and accepted by peers could be appropriate factors for the promotion of these values. Among these aims, working on and increasing self-esteem through the practice of physical activity has become a priority in some intervention programs, owing to its positive repercussions on health, especially mental health [5].
Self-esteem is the evaluative feeling that people have of themselves [6]. Gergen [7] defines self-esteem as the degree to which people feel positive about themselves. It affects motivation, life satisfaction, and well-being throughout life [8], and can be used as an indicator of emotional health and of the psychological benefits gained from regular participation in physical activity [5,9]. In this sense, the theoretical construct of physical self-perceptions [5] indicates that positive experiences lead to positive feelings and improved overall self-esteem. In contrast, low self-esteem is related to various conditions, such as psychiatric disorders, obesity, and eating disorders [10], and can also be an important predictor of depressive symptoms among young people [11].
To help us understand the emergence and development of certain attitudes, emotions, and behaviours in the context of physical education, Self-Determination Theory (SDT) [12] has provided an adequate structure for their study over the last two decades [12,13]. Deci and Ryan [14] postulated that behavioural regulation can be based on autonomous motivation, controlled motivation, or the absence of motivation. These types of motivation are the reasons why people persist with the activities they perform and lie along a continuum of self-determination that differs qualitatively in the cause of action.
According to Ryan and Deci [15], autonomous motivation represents the highest level of self-determination. Autonomous motivation is defined as engaging in activities for the interest in, and the satisfaction derived from, the activity itself, in the absence of any external incentives (e.g., rewards, praise) [16]. This type of motivation is represented by intrinsic motivation (the person performs the activity because he/she finds it attractive and fun), as well as identified (the person identifies with the value of the activity and has a high degree of willingness to act) and integrated (the person finds the activity congruent with other values and interests in his/her life) regulations [17]. For its part, controlled motivation is represented by extrinsic motivation (the person acts to obtain rewards or avoid externally imposed punishments) and introjected regulation (the person acts to obtain internal rewards such as feeling good in case of success, or to avoid anxiety and guilt in case of failure) [18]. Lastly, the absence of motivation is represented by demotivation (the person feels incompetent and uninterested) [19]. In the educational context, there is abundant evidence that autonomous motivation is related to adaptive outcomes such as persistence in the classroom and academic performance [20].
Within this macro-theory, the mini-theory of basic psychological needs considers that human beings need to satisfy three basic psychological needs that are essential for optimal functioning, integration, personal development, and well-being [12]: autonomy, competence, and relatedness to others. In physical-sport activities, when people interact with their environment, they need to feel competent (a feeling of mastery of the task), autonomous (a feeling of being the initiator of one's own actions), and related to others (feeling respected by others and the desire to feel connected with them) [21], and there is a positive relationship between these needs and physical self-concept [22]. The frustration of one or more of these needs is a trigger for the loss of intrinsic motivation and the shift towards demotivation or extrinsic motivation [23]. The feeling of belonging and of being linked to the teacher are key pieces in the improvement of self-esteem [24] and are closely related to improvements in physical condition along with personal and social well-being. Therefore, if the student perceives that an activity provides opportunities for feeling socially integrated, experiencing task mastery, and satisfying the feeling of autonomy, it will increase the student's autonomous motivation and well-being [25,26].
Much research has been directed towards the study of autonomous motivation and its positive cognitive, affective, and behavioural consequences in the education environment [15,27]. Thus, studies such as that by Hein and Hagger [28] indicated that motives of a more autonomous nature (i.e., when an activity is regulated on the basis of reasons that stem from the student's own needs or feelings) led to greater self-esteem in young people in physical activity contexts. Coaches can develop athletes' psychological well-being through autonomy-supportive coaching behaviours [29]. Baumeister et al. [30] pointed out that high self-esteem led to better school performance. In contrast, a controlling teaching context has been related to the frustration of the three basic psychological needs (autonomy, competence, and social relationships) [31], and these experiences of frustration predispose individuals to a greater perceived fear of failure, challenge avoidance, and low self-esteem [32][33][34]. In this case, the type of motivation underlying this process is controlling, as the learner feels pressured to act by an external force that he or she did not choose and which does not coincide with his or her interests or needs, or even by internal imperatives. This process usually involves the needs for competence and relatedness to others, leaving the need for autonomy far behind. For example, the student may perform the tasks despite not having chosen them of his own free will, because he knows that with them he will meet expectations and succeed (competence) and remain integrated in the group (relatedness) [35].
In this sense, although there are numerous works that focus on the positive consequences related to autonomous motivation, there are currently very few studies in education that have examined those generated by controlling motivation [36][37][38][39], its link with the satisfaction of basic needs, and its relationship with an emotional consequence such as the student's self-esteem. Considering the importance of self-esteem in students' physical health and psychological well-being and, based on self-determination theory, the present study aimed to test the predictive power of basic psychological needs satisfaction on controlling motivation in physical education class, and this in turn on academic controlling motivation, to ultimately predict the perceived self-esteem of secondary school students. The following hypotheses were tested:

Hypothesis 1 (H1). The satisfaction of basic psychological needs negatively predicts controlling motivation in physical education.

Hypothesis 2 (H2). Controlling motivation in physical education positively predicts controlling motivation at the academic level.

Hypothesis 3 (H3). Controlling motivation at the academic level negatively predicts self-esteem.
Ethics Statement
This study was approved by the Research Ethics Committee of Universidad Miguel Hernández de Elche (Elche, Spain) (DPS.JMM.01.14) and meets all ethical and legal standards that are applicable to the research of this survey modality.
Participants
The sample, obtained non-randomly and through convenience sampling, consisted of a total of 618 students (301 boys and 317 girls) aged 10-14 years old (M = 11.62, SD = 0.94) from 24 public primary and secondary schools in various Spanish municipalities with a middle-class socioeconomic status. In Spain, all primary and secondary schools have a very similar curriculum, and the same number of hours is dedicated to physical education throughout the country.
Procedure
For the collection of information, once the study was approved by the Ethics Committee of the institution of the responsible investigator (DPS.JMM.01.14), we contacted the management teams of the educational centres to ask for their collaboration in this study. Written authorisation was requested from the students' parents, as the students were minors. The administration of the final scales was carried out before the beginning of the physical education lesson and took approximately 20 min to complete. The tool used for data collection was Google Forms. The physical education teacher supervised the answers and, during the completion process, resolved any problems that arose.
Measures
Basic Psychological Needs (BPNES). The version translated into Spanish and adapted to physical education [40] of the Basic Psychological Needs in Exercise Measurement Scale (BPNES; [41]) was used. This instrument contained the heading "In the classes . . . " followed by 12 items grouped into three factors (four items per factor) measuring the perception of autonomy (e.g., "the exercises we perform match my interests"), the perception of competence (e.g., "I perform the exercises effectively"), and the perception of relatedness to others (e.g., "I feel I can communicate openly with my peers"). The reliability analysis yielded Cronbach's alpha values of 0.73 for the perception of autonomy, 0.77 for the perception of competence, and 0.81 for the perception of social relationships.
Controlling motivation in physical education. Part of the Spanish validated version [42] of the Perceived Locus of Causality Scale by Goudas, Biddle, and Fox [43] was used. The dimensions used, composed of four items each, were introjected regulation (e.g., "Because I want the teacher to think I am a good student"), external regulation (e.g., "Because I will get in trouble if I don't"), and demotivation (e.g., "But I don't really know why"). The scale was headed by the statement "I participate in this physical education class . . . " and was answered on a Likert-type scale from 1 (Strongly disagree) to 7 (Strongly agree). Cronbach's alpha values of 0.73 for introjected regulation, 0.72 for external regulation, and 0.71 for demotivation were obtained.
Academic controlling motivation. From the Spanish version of the EME [44], named Escala de Motivación Educativa (EME-E, Academic Motivation Scale) by Nuñez, Martín-Albo and Navarro [45], we used the dimensions introjected regulation (e.g., "to prove to myself that I am an intelligent person"), external regulation (e.g., "to be able to get a more prestigious job in the future"), and demotivation (e.g., "I honestly do not know; truly, I have the impression of wasting my time"). Each dimension was composed of four items. The scale was preceded by the statement "Why do you go to school?" These reasons were scored on a seven-point Likert-type scale ranging from 1 (Does not correspond at all) to 7 (Corresponds completely), with an intermediate score of 4 (Corresponds moderately). Internal consistency was 0.82, 0.78, and 0.81, respectively.

Self-esteem. The self-esteem dimension adapted to the academic context of the Physical Self-Perception Profile questionnaire [46,47] was used. The dimension was composed of 5 items (e.g., "I do not feel confident when it comes to participating in activities"). The responses to the instrument were expressed on a Likert-type scale ranging from 1 (Strongly disagree) to 4 (Strongly agree). The internal consistency was 0.77.
Data Analysis
First, a descriptive statistical analysis (means, standard deviations, asymmetry, and kurtosis) was performed, assuming the univariate normality of the data when values for asymmetry and kurtosis were within −2/+2 and −7/+7, respectively [48]. The internal consistency of each factor was calculated using Cronbach's alpha coefficient, which is acceptable when values are greater than 0.70 [49], and the bivariate correlations of all the variables under study were computed. χ² and the ratio χ²/df [50] were used as absolute measures. To verify the relationships between the variables proposed in the study, the two-step maximum likelihood (ML) approach was used, as it allows testing complex relationships between variables (observed and latent) in multiple ways [51]. In the first step (measurement model), a confirmatory factor analysis (CFA) was performed.
In the second step, the structural equation model (SEM) allowed us to test the hypothesised model, including all the variables within the same regression model, taking more than one dependent variable, and considering the same variable as both dependent and independent (the three dimensions of basic psychological needs, the three dimensions of controlling motivation in physical education, the three dimensions of academic controlling motivation, and the dimension of self-esteem) [51]. For the CFA and SEM, the following absolute and incremental indices were used: the Comparative Fit Index (CFI), the Tucker-Lewis Index (TLI), and the Root Mean Square Error of Approximation (RMSEA) with its respective 90% Confidence Interval (CI90%). For the cut-off values, CFI and TLI ≥ 0.90 and RMSEA ≤ 0.08 were considered acceptable [52]. The 95% Confidence Interval (CI95%) was used to assess direct and indirect effects among constructs, accepting significance if the CI did not encompass zero. To test the multi-group analysis, the structural SEM model was initially assessed in each group separately (basic psychological needs, controlling motivation in physical education, and academic controlling motivation). Mardia's multivariate index was used to check the factors' multivariate normality, accepting it with values lower than 70 [53]. The present research adopted differences in CFI, TLI, and RMSEA to evaluate structural invariance. The data were analysed using the statistical packages SPSS v. 25 and AMOS v. 23 (SPSS Inc., Chicago, IL, USA).
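For reference, the internal-consistency criterion used above is the standard Cronbach's alpha; the formula is supplied here for clarity and is not quoted from the paper:

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right),$$

where $k$ is the number of items in a factor, $\sigma^{2}_{Y_i}$ the variance of item $i$, and $\sigma^{2}_{X}$ the variance of the total score; values above 0.70 were taken as acceptable.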
Descriptive and Correlation Analysis
Descriptive and correlation values are shown in Table 1. The results revealed that perception of relatedness was the highest-rated of the psychological need satisfaction variables, with a mean value of 4.55. The participants' perception of self-esteem obtained a mean value of 2.77. Academic controlling motivation showed a higher value than physical education controlling motivation (4.70 and 4.21, respectively). The asymmetry and kurtosis values were within −2/+2 and −7/+7, respectively, supporting the univariate normality of the data. Cronbach's alpha yielded acceptable values for all the variables analysed. The correlation analysis revealed significant and positive correlations between the perceptions of autonomy and competence and the rest of the variables. Physical education and academic controlling motivation showed a positive correlation with each other, although both correlated negatively with self-esteem.
Measurement Model
A structural equation modelling procedure to test the hypothesised model was conducted, with various absolute and relative measures of fit calculated. First, a confirmatory factor analysis (measurement model) was used to confirm and trim the constructs for the groups of items. The factors' multivariate normality was accepted, with a Mardia's coefficient lower than 70 (35.29). In addition, the multicollinearity assumption was met, since all the bivariate correlations between variables were below 0.85. The following values were obtained: χ²(206, N = 276) = 583.7, p < 0.001, χ²/df = 2.83, CFI = 0.92, IFI = 0.92, TLI = 0.90, RMSEA = 0.06, RMSEA 90% CI = 0.05-0.06, SRMR = 0.07. The standardised regression weights ranged between 0.26 and 0.85, were statistically significant, and yielded satisfactory error variances.
Indirect Effects
Mediated or indirect effects must be analysed when explaining a model [44]. In the present study, the standardised indirect effects (see Table 2) revealed that perception of relatedness had a significant negative indirect effect on academic controlling motivation (β = −0.17) and a positive one on self-esteem (β = 0.04). Perception of autonomy had a negative effect on academic controlling motivation (β = −0.23), whereas perception of competence had a positive one (β = 0.18). Additionally, perception of autonomy had a positive indirect effect on self-esteem (β = 0.06), whereas perception of competence had a negative one (β = −0.05).
Discussion
Based on the SDT framework [13,54], the aim of the present study was to test, in a sample of adolescent students in physical education classes, the predictive power of the satisfaction of basic psychological needs on controlling motivation in physical education and, in turn, on academic controlling motivation, in order to finally predict adolescent self-esteem.
First, the results of the hypothesised model showed that the three basic psychological needs were positively and significantly related to each other, as postulated by SDT. In line with H1, the results showed that perceived controlling motivation in physical education classes was negatively predicted by the perceptions of autonomy and relatedness to others, as corroborated by previous works [18], thus supporting the starting theoretical framework. However, in our work, perception of competence showed a positive relationship with controlling motivation. Therefore, as already pointed out by other works [55,56], the pursuit of competence in physical education classes may be driven by external causal agents (e.g., teacher pressure) rather than internal regulation (e.g., the student performing the tasks because he/she likes learning), giving rise to less autonomous and more controlling types of motivation. Previous studies in sports, such as Sheenan et al. [57], reported that when tasks were performed on the basis of external reasons, such as comparison with others or outperforming peers, with the result as the only incentive for their development, there was a positive relationship with the perceived need for competence, but not with the rest of the variables. As indicated by Smith et al. [58], this could be because, of the three basic psychological needs, the one least harmed by this type of practice was the perception of competence. In this sense, some works have warned of the maladaptive consequences of using external reinforcement, which feeds controlling motivation, to promote competitiveness in physical education class [18].
The results confirm H2, since controlling motivation in physical education classes predicts controlling motivation in the general academic context. The relationship found is in line with previous research based on Vallerand's [59] Hierarchical Model regarding the tendency of moderately stable motivational orientations towards each context within the same level of generality [60]. In this case, it is possible that exposure to repeated experiences that create a controlling motivation in physical education classes may have contributed to an extended development of this motivation in the student towards the general academic environment, generating a self-reported controlling motivation with respect to the educational context in which he/she developed. Teachers who use this type of motivation with their students tend to foster in them an external locus of perceived causality, offering rewards, threats, and punishments, and unilaterally imposing objectives in advance. For their part, their students' behaviour is based on obtaining rewards (e.g., passing) or avoiding punishment [18,61]. However, they do not contemplate the importance of learning in the process, enjoying the tasks, or the personal improvement that self-regulation of learning entails [62]. Finally, another risk of using this type of controlling motivation is that the student's behaviour is constantly subordinated to the action of the teacher who maintains it, so that when the latter disappears, so does the behaviour. Therefore, the reason could be that the student has not been given the opportunity to build a process of internalisation of learning that he/she can manage autonomously, without the control of the environmental agents that regulate it [63].
Finally, the results confirm Hypothesis H3, i.e., the negative predictive power of academic controlling motivation on self-esteem. This result is in line with studies such as Franco et al. [56] and Méndez-Giménez et al. [64], which, although they did not analyse controlling motivation, reported the existence of a positive relationship between more self-determined motivation and self-esteem. Intervention studies on the development of social and emotional competencies (in line with autonomous motivation, using project-based learning) have reported positive changes in self-esteem, apart from more responsible decisions and higher self-awareness in primary education students [65]. A piece of research on service-learning (a pedagogical model focused on achieving curricular goals while providing a community service) showed improvements in the social self-realisation and decisive self-efficacy of Physical Education Teacher Education students, but not in self-esteem [66]. It seems that controlling interventions from teachers could develop a higher controlling motivation and a lower self-esteem in students, and further studies would be very welcome to contrast these statements.
Moreover, thus far there are not many studies that link academic motivation with self-esteem, relating student motives of a more autonomous nature with higher self-esteem [28]. For example, in the study by Gothe et al. [67], participation in physical activity predicted general self-esteem, and in the study by MinHyuk [68], with more than 2000 high school students, it was found that a better experience in physical education classes was a mediating factor for higher self-esteem. These results underline the importance of encouraging more autonomous types of motivation to favour a more positive self-esteem [69]. Only one of the basic psychological needs (perception of autonomy) had an indirect effect on the other variables, negative for academic controlling motivation and positive for self-esteem. Previous studies in neuroscience and anger indicated that although all three unsatisfied basic psychological needs were correlated with trait anger, unsatisfied relatedness was the only factor that was uniquely related to trait anger [70]. However, in a study comparing two different models of psychological need satisfaction and well-being in adapted sport athletes, perceived relatedness was the weakest predictor of overall self-esteem, followed by perceived autonomy and competence [71].
The present study has a series of limitations that should be taken into consideration in future research. First, it would be interesting to take into account the social triggers, such as the teacher's interpersonal style, that may be influencing each of the basic psychological needs and may therefore be generating a more or less controlling motivation in the context of physical education. In addition, the type of sampling used was purposive, based on accessibility. Future work addressing this issue should be carried out with a more methodologically valid sampling method, such as random sampling. Finally, the cross-sectional and correlational design prevented any type of causal explanation. It would be interesting to carry out longitudinal studies and experimental and/or quasi-experimental designs to test the sequence proposed in this study.
As future lines of research, and within the dynamic process of motivation, it would be interesting to discover whether autonomous motivation behaves similarly, as well as to incorporate both the frustration of psychological needs and the role of the controlling style of the physical education teacher into the motivational sequence. In addition, it would be important to examine differences according to age across educational stages, gender, or socio-economic context, as this may be of great importance for future research. As for the practical implications, the teacher should minimise the use of external reinforcements and instead act as a guide in a process of self-regulated learning where the process takes precedence over the result. The teacher should let students improve by themselves, encourage their progress by stressing the value of personal improvement, establish moments of positive communication, and treat mistakes as an opportunity for learning, ultimately helping to promote positive thoughts in students about themselves and developing a higher-quality self-esteem. All this, accompanied by the precise tools that the teacher can use to foster the most positive forms of motivation and decrease motivation of a controlling nature [72], will contribute to the pursuit of greater student well-being in physical education classes.
Conclusions
In conclusion, students' self-esteem was explained by higher levels of basic psychological need satisfaction (especially the perceptions of relatedness and autonomy) and by lower academic and physical education controlling motivation. Moreover, the satisfaction of basic psychological needs (especially the perception of relatedness) could predict an improvement in students' self-esteem. The results of this research suggest the need to promote the satisfaction of these basic psychological needs in physical education and in the broader educational context in order to improve students' self-esteem.

Informed Consent Statement:
Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data sets analyzed during the current study are available from the corresponding author on reasonable request.
Conflicts of Interest:
The authors declare no conflict of interest.
"year": 2021,
"sha1": "fbcbf473e0ba780c99e6a1c98ba54f3789ff9a57",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/18/21/11602/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "22f73c88f223365fa9285c0772eab50edfb7b0ba",
"s2fieldsofstudy": [
"Psychology",
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
} |
209408308 | pes2o/s2orc | v3-fos-license | Case report of an arteriovenous graft for renal dialysis, with multiple complications treated successfully over 5 years
Highlights
• Case report of a 35-year-old patient with arteriovenous graft (AVG) formation.
• Multiple interventions performed, with successful salvage over a course of 5 years.
• Outline of the role of arteriovenous grafts and of current literature and recommendations.
• The learning point from our case is that close monitoring and surveillance can prolong the life of an active AVG.
Introduction
Much effort has been made in recent times to maximize placement of native arteriovenous fistulas (AVF) over grafts, with the advent of the Fistula First Initiative and KDOQI guidelines recommending a native AVF as the vascular access of choice due to superior outcomes [1]. However, an arteriovenous graft (AVG) is a good alternative when a native AVF is not a viable option, in patients with previous failed attempts, or in elderly patients with multiple co-morbidities where there is a higher risk of non-maturation [2].
Complication rates are high, with one-year and two-year primary patency rates ranging between 40-50% and 20-30%, respectively, and one-year and two-year secondary patency rates ranging between 70-90% and 50-70%, often with multiple interventions to maintain patency [3]. Considerable causes of AVG-related morbidity are graft stenosis, thrombosis, and infection, and timely intervention is required to prevent graft failure.
We report the case of an AVG performed in 2012 that sustained multiple complications; with successful monitoring and timely intervention, we were able to salvage the AVG for up to 5 years, by which time successful renal transplantation was possible. Our case report is in accordance with the SCARE guidelines for reporting cases [4].
Case presentation
A 35-year-old man with end-stage renal disease secondary to IgA nephropathy presented for elective AVG formation in August 2012. He had previously had multiple attempts at native AVF formation, with a failed left brachio-cephalic AVF in 2009 and a failed right brachio-basilic AVF in March 2012. It was decided, based on his previous surgeries and small-calibre superficial veins, that a brachio-axillary polytetrafluoroethylene (PTFE) AVG would be the most suitable option for him.
He underwent successful surgery and was discharged. In 2014, difficulty was noted in using the AVG for dialysis. Subsequent imaging confirmed graft stenosis at two different sites. He underwent successful fistuloplasty by interventional radiology, with placement of two stents: an 8 mm stent at the proximal graft and a 7 mm stent at the distal graft (Figs. 1-2). He returned in June 2015 with recurrence of stenosis, which was treated successfully with fistuloplasty. However, in October 2015 he had occlusion of his AVG and imaging confirmed a thrombosis. He was treated with fistuloplasty and thrombolysis and discharged after successful treatment. In April 2016 he required a further fistuloplasty to treat stenosis (Fig. 3). In August 2016 he developed a further thrombus, which was again treated successfully with thrombolysis. In September 2016 he had a recurrence of his stenosis, which was treated with placement of 2 further stents.
In 2019 he presented with a discharging sinus over the medial aspect of his right upper arm, with pain and swelling. By this point he had had a successful renal transplant and the AVG was no longer in use for access. On examination, a small sinus was noted over the graft site with active purulent discharge and surrounding erythema. He was, however, systemically well with normal vital signs and only mildly elevated inflammatory markers. An ultrasound (US) scan was performed that suggested a possible collection in association with the graft, and a DVT was ruled out. At this point it was decided to proceed with surgical excision of the AVG.
Our patient was taken to theatre. Under general anesthetic, a skin incision was made to gain proximal and distal control before a dissection was made into the graft. A note was made of an eroded graft wall with a 1 × 2 cm defect revealing an underlying stent device and intra-luminal thrombosis (Figs. 4-6).
The patient recovered well and was subsequently discharged on post-operative day 1 with a course of oral antibiotics and outpatient (OPD) follow-up.
Discussion
This case demonstrates that, although complications are common with an AVG, close surveillance and timely intervention can prolong access lifespan. In this case, we were well aware of the difficulty that a failed AVG would pose for this gentleman, given his extensive history of failed AVF formation and co-morbidities. Therefore, stringent monitoring was employed, with rapid intervention to treat the various complications that presented over the course of 7 years.
Graft infection is an important consideration, with incidence ranging between 4% and 20% with the use of a graft [5]. Multiple different factors predispose patients to a graft infection. Uremia-related immunodeficiency due to ESRD, multiple co-morbidities, and vascular access technique at the time of hemodialysis are all thought to be associated factors. Diagnosis is clinical, with local signs such as redness, swelling, warmth, pain, and discharge. Systemic symptoms may indicate sepsis and require urgent attention. US imaging may be used to aid diagnosis by looking for fluid collections. The ESVS guideline recommendation for graft infection is total graft excision if sepsis is present. Partial excision may be considered if segments of the graft appear intact [3], as was the case with our patient.
Stenosis can occur anywhere along the graft but often occurs in the juxta-anastomotic areas. Assessment for stenosis can be made on physical exam: a change in the thrill or pulsatile flow can indicate stenosis, and reduced flow during hemodialysis may also be seen. The recommended treatment is percutaneous transluminal angioplasty (PTA) if inflow or outflow stenosis is suspected [3]. Stenosis can lead to abnormal or reduced flow, which can then increase the risk of thrombosis; therefore, pre-emptive treatment may be considered in select cases [6].
Graft thrombosis often presents secondary to progressive stenosis. Early treatment of thrombosis is recommended to prevent organization of the thrombus. Treatment options include thrombectomy and thrombolysis; however, these alone are not sufficient, as any flow-limiting stenosis will also need to be treated to ensure improved patency. Endovascular therapy has been shown to have similar patency rates to surgical thrombectomy [7]. The ESVS guidelines recommend either a surgical or an endovascular approach, depending on centre expertise, provided there is concomitant treatment of any associated stenosis [3].
An important aspect to discuss is the role of surveillance and patient/staff education in monitoring for graft complications. Patients should be educated in examining their AVG and in identifying any abnormal patterns or signs. Any concerns should be flagged with the hemodialysis unit and the primary team. Examination should also take place during visits for hemodialysis, and any concern for complications should be investigated with further imaging. Monitoring has been shown to be a cost-effective method of improving patency [8].
In our case, we were fully aware of the challenges associated with a failed graft and our patient was young and fully compliant with education and surveillance measures. We were able to ensure timely interventions that allowed for successful salvage of his AVG until renal transplant was possible.
Conclusion
Complication rates are higher in an AVG, with lower patency rates when compared to a native AVF. However, close surveillance and prompt intervention can lead to multiple successful salvage procedures, thus effectively prolonging the life of the AVG. In our case, we were able to prolong the life of the AVG with 6 successful interventions.
There may be a role for an individualized approach in managing patients with an AVG. Patient and staff education in recognizing early signs of complications, together with tailored surveillance programs, can help in optimizing outcomes.
Sources of funding
Case report not sponsored.
Ethical approval
Not applicable as no research on human participants was involved.
Consent
Written informed consent was obtained from the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal on request.
Registration of research studies
Not applicable as this manuscript does not involve research on human participants.
Provenance and peer review
Not commissioned, externally peer-reviewed.
Declaration of Competing Interest
No conflict of interest.
"year": 2019,
"sha1": "b7e5fcf4dbdc50fbda5b68875305d47e14648988",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.ijscr.2019.11.059",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "37c58c04723f98eb07de8a6b00d9d9c7f3585468",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Modelling dynamic pesticide amounts in multiple environmental compartments at landscape scales in ALMaSS
A dynamic model of the pesticide amount at a landscape scale (10 km x 10 km, with the finest spatial resolution of 1 m²) is implemented in the ALMaSS (Animal, Landscape and Man Simulation System) framework. The spatial resolution can be configured, allowing the user to control how detailed the simulation should be according to specific needs. Three application types, spray, seed coating treatment and granular, can be applied through the pesticide engine according to the management plan of crops in ALMaSS. A drift model is implemented for the spray application to include the effect on adjacent unsprayed areas. After applying a pesticide, the pesticide module controls transfer amongst different environmental compartments and follows the fate of up to ten different pesticides simultaneously. It enables ALMaSS to be used for complex risk assessment through impact studies of pesticides on many species, including pollinators.
Introduction
The fate of pesticides, when used in agricultural situations, will determine the pattern of environmental contamination. Prediction of contamination is important for evaluating environmental scenarios as part of risk or impact assessment, for example, predicting pesticide residues on crops. The fate also determines the environmental concentrations to which organisms are exposed in environmental and even human risk assessment.
Some models for the determination of pesticide fate are very detailed. For example, the PEARL model (https://www.pesticidemodels.eu/pearl/home) is used to evaluate the leaching of pesticides into water bodies and their persistence in soil. PEARL describes the fate of pesticides in the plant-soil system, which is coupled to the hydrological model SWAP (Soil Water Atmosphere Plant). It calculates changes in pesticide concentrations in different compartments as affected by various physical and chemical processes. Models such as PEARL simulate dynamics at a point location with high precision. They are often used to model physicochemical processes when environmental fate is the focus of the study. In other cases, the prediction of environmental pesticide concentrations forms part of a larger evaluation, such as predicting the pesticide impact on organisms moving through a landscape. In these cases, the precision of the fate model is of less importance than the accuracy, and often calculation time must be reduced to make the model tractable. This is particularly the case when the simulation aims to assess a higher organisational level (e.g. population), when the precise exposure of individuals is not critical.
The pesticide fate model built into ALMaSS (Animal, Landscape and Man Simulation System) (Topping et al. 2003, Topping 2022) falls into the latter category of fate models.
The purpose here is to predict changing amounts of pesticides over a large area (e.g. 10 km x 10 km), but at a detailed scale, typically 1 m². Similarly, the model generally runs over many years (e.g. 30 years) with a fine temporal resolution of one day. The model may be used to calculate pesticide amounts in map form as an output, but is more typically used to drive effect models, for example, evaluating pesticide policy's impact on beetles (Ziółkowska et al. 2022). These models can cover many organisms and types of behaviour and have been used to simulate pesticide effects on non-target arthropods, birds and mammals (e.g. Topping et al. (2005), Dalkvist et al. (2009), Topping et al. (2014), Mayer et al. (2020)).
The pesticide fate model, used in ALMaSS up to 2022, considered the pesticide amount in one compartment only, i.e. only a total environmental amount. However, to align ALMaSS better with current approaches in pesticide risk assessment, a more detailed model is needed. The original ALMaSS pesticide model was dubbed ToxImpact and was introduced for modelling pesticide effects in skylarks (Topping et al. 2005). This model considered the spraying pattern of a pesticide, which determined the environmental amount, which then decayed following a fixed rate (DT50). In this model, drift could be calculated as part of the application procedure using constants for an arbitrary compound selected from the FOCUS software commonly used for risk assessment (FOCUS 2001). Further refinements were introduced in the form of temperature-variable decay rates by Ziółkowska (Ziółkowska et al. 2022), but the model still only considered a single amount.
The 'blueprint' for the current model was laid down as a feature wish by the EFSA Panel on Plant Protection Products and their Residues (PPR) (EFSA PPR Panel 2015), with the wish to create separate but linked vegetation and soil compartments. This expansion of the model was further developed to include amounts of pesticides inside plants and differentiation between plant parts to support the evaluation of pesticide impacts on pollinators (Duan et al. 2022). This paper describes the implementation of the new model to fulfil these feature wishes.
Methods
The pesticide engine of ALMaSS includes consideration of both the application (spray, seed coating treatment and granular treatment) and the fate of the pesticide. These are explained in Pesticide Application and Pesticide Fate, respectively.
The pesticide module has different levels of complexity depending on which model of the module is used. The simplest model considers only a single compartment for the pesticide.
We call this model the 1-compartment model. For a more complex version, we consider whether the pesticide is on the plant canopy or the soil, so we split the pesticide between these two compartments. We refer to this version as the 2-compartment model. In this case, we also consider the rain wash-off from the canopy to the soil. The 3-compartment model is an even more complex model in which we consider an additional compartment inside the plant. It is the amount of pesticide in this 'in plant' compartment that is used to calculate the pesticide concentration in the pollen and nectar. It is simply given by the amount of pesticide divided by the green biomass, times a pollen (or nectar) specific partition coefficient.
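Written out as a formula (the symbols here are ours, not taken from the paper), the pollen concentration is:

$$C_{pollen} = \frac{A_{in\,plant}}{B_{green}} \cdot K_{pollen},$$

where $A_{in\,plant}$ is the pesticide amount in the 'in plant' compartment, $B_{green}$ the green biomass and $K_{pollen}$ the pollen-specific partition coefficient; the nectar concentration is obtained analogously with a nectar-specific coefficient.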
In the 2- and 3-compartment models, the pesticide can be transferred between the different compartments as described in Pesticide Transfer. Building on the 3-compartment model, another level of complexity is added if seed coating is turned on. The seed coat adds another compartment and enables additional transfers, effectively resulting in a 4-compartment model.
In practice, a pesticide map is added for each of the compartments per pesticide (compartment maps). The compartment maps can have the same resolution as the landscape or coarser. It is currently possible to consider up to 10 different pesticides simultaneously and they will each have a unique set of compartment maps. ALMaSS takes many parameters as inputs for the pesticide module, with parameter settings applied through a configuration file. These parameters are listed in Model Parameters.
The section Examples of usage demonstrates the models by showing the pesticide amount as a function of time under certain conditions.
Pesticide Application
In ALMaSS, pesticides can be applied in three ways: sprays, granules and seed coating.
The model used for sprayed pesticides is the most complicated of the three, since it can consider the drift caused by the wind and the division of the pesticide between the plant canopy and the soil. In contrast, the granular application is assumed not to experience drift and is only applied to the soil compartment map, based on the application rate. For seed coating, the pesticide is added to the seed coating compartment map at the time of sowing and will, from that time on, be able to decay and be transferred to the other compartments, as explained in Pesticide Transfer.
The spatial resolution of the landscape in ALMaSS is 1 m². By default, all the pesticide maps use the same resolution as the landscape, with the possibility of using coarser resolutions. All three pesticide application events, spraying, granules or seed coating, are managed by adding them to an event queue, which is then executed once per day when the weather allows. Each event includes information on the pesticide type, the application rate and the landscape element that the pesticide should be applied to, typically a field. The pesticide is applied to the pesticide map(s) by looping over the cells in a bounding-box rectangle around the treated polygon, where the pesticide should be applied. Each cell is then checked to see whether it is inside the sprayed polygon. If the cell is inside, the pesticide amount is first added to a temporary map (the twin map), which has the same dimensions as the compartment maps. Before continuing, the need for a border correction is checked, in case the size of the pesticide map extends beyond the boundary of the actual landscape. If it does, the pesticide amount in the temporary map is reduced according to the size of the area beyond the boundary of the actual landscape. After this is done, the three types of application are handled differently. Based on the temporary map, the pesticide is applied to the soil compartment map for granular application or to the seed coating compartment map for seed coating treatment.
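As a minimal sketch of this application loop, assuming a flat cell indexing and function names that are not part of the actual ALMaSS code:

```cpp
#include <vector>

// Illustrative sketch of the bounding-box application loop described above.
// Rect, ApplyToPolygon and the flat polygon index are assumptions.
struct Rect { int x0, y0, x1, y1; };

void ApplyToPolygon(std::vector<double>& twinMap, int mapWidth,
                    const Rect& box, int polyRef,
                    const std::vector<int>& polygonIndex, // polygon id per cell
                    double ratePerCell)
{
    for (int y = box.y0; y <= box.y1; ++y) {
        for (int x = box.x0; x <= box.x1; ++x) {
            // Only cells belonging to the treated polygon receive pesticide
            if (polygonIndex[y * mapWidth + x] == polyRef) {
                twinMap[y * mapWidth + x] += ratePerCell;
            }
        }
    }
    // Border correction, drift diffusion (spray only) and the transfer to the
    // compartment maps would follow here, as described in the text.
}
```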
For pesticide spray, the drift caused by wind is considered before the pesticide is transferred to the compartment maps. Only the wind direction is considered in the model, without the impact of the wind speed. Four wind directions (South, North, East and West) are used. Drift inclusion is done by choosing a drift vector at the beginning of the simulation. The drift vector is used to diffuse the pesticide in the temporary map to its surrounding cells, especially along the wind direction. In ALMaSS, we assume that drift happens up to 10 m along the wind direction and 1 m in the upwind direction and the two directions perpendicular to the wind direction. This assumption is supported by the studies in Destain et al. (2011), which compare drift measurements with a detailed simulation of the spray cone. To form the drift vector, the results from Stallinga et al. (2014) and Stallinga et al. (2016) are used, which provided the ground deposit from a single nozzle from 1 m upwind to 10 m downwind for two different forward speeds (7.2 and 14.4 km/h) and 17 different nozzles. However, these data could not be used directly and needed to be processed in the following way. First, the downwind ground deposit is fitted to a power function $p(x) = a \cdot x^{b}$, where $x$ is the distance from the nozzle and $p$ is the pesticide proportion of the application rate. The fit is not performed over the whole range of the data, but only from $x_{min}$ to $x_{max}$. Whereas $x_{max}$ is always set to the last measurement point (10 m), $x_{min}$ is chosen such that a good fit is obtained and therefore varies between nozzles. For ALMaSS, we want to know the drift in increments of 1 m, starting from -1 m to 10 m in the downwind direction, as well as ±1 m perpendicular to the wind direction around the spraying point. For the cells 1 m upwind and perpendicular to the wind direction, we use the measurement point at $x = -1$ m, whereas for downwind (positive) x-values, the power-function fit is used when available; otherwise, the average of the logarithms of the two surrounding measurement points is used. The amount at $x = 0$ m is then the amount that has not drifted, i.e. 100% of the applied amount minus the sum of the downwind drift and three times the upwind drift, which also accounts for the amount that goes to the two cells perpendicular to the wind direction. This assures that the intended amount of pesticide is spread in the landscape. An example of the original data, the fit and the derived drift vector for a BCPC-F/M nozzle with a forward speed of 14.4 km/h is shown in Fig. 1. Fig. 2 and Fig. 3 show the derived drift vectors for all the different nozzles at a forward speed of 7.2 and 14.4 km/h, respectively. For both speeds, at a distance of 1 m from the nozzle, the fraction of applied pesticide varies by roughly one order of magnitude and, at 10 m, it varies by almost two orders of magnitude.

Pesticide drift measurements are often not done per nozzle, but instead as the accumulated drifted amount outside the spraying area, as in, for example, the study done by Rautmann et al. (2001). Fig. 4 shows the accumulated drifted amount of pesticide for the different nozzles at a forward speed of 14.4 km/h, as well as the Rautmann result, which is given by a power law of the form $p(z) = A \cdot z^{B}$, where $p$ is the fraction of the applied pesticide and $z$ is the distance from the sprayed area in metres.

Fig. 1. Ground deposit data for a BCPC-F/M nozzle at a forward speed of 14.4 km/h (blue crosses) (Stallinga et al. 2014, Stallinga et al. 2016), fitted with a power function (orange line), together with the derived drift vector to be used in ALMaSS (green points). Note that the amount at -1 m is also used for the two cells perpendicular to the wind direction.

Fig. 2. Drift vectors for different nozzles at a forward speed of 7.2 km/h. The distance is measured along the wind direction. Note that the amount at -1 m is also used for the two cells perpendicular to the wind direction.

Fig. 3. Drift vectors for different nozzles at a forward speed of 14.4 km/h. The distance is measured along the wind direction. Note that the amount at -1 m is also used for the two cells perpendicular to the wind direction.
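The drift-vector construction just described can be sketched as follows; the function name and inputs are illustrative, and the coefficients come from per-nozzle fits rather than from values given in the paper:

```cpp
#include <array>
#include <cmath>

// Sketch: derive a 12-element drift vector (x = -1 m .. 10 m) from a power-law
// fit p(x) = a * x^b to single-nozzle ground-deposit data. a, b and the
// upwind deposit are nozzle-specific inputs.
std::array<double, 12> MakeDriftVector(double a, double b, double upwindDeposit)
{
    std::array<double, 12> drift{};  // index 0 -> x = -1 m, index 1 -> x = 0 m, ...
    drift[0] = upwindDeposit;        // deposit 1 m upwind (also used for the
                                     // two cells perpendicular to the wind)
    double downwindSum = 0.0;
    for (int x = 1; x <= 10; ++x) {
        drift[x + 1] = a * std::pow(static_cast<double>(x), b);
        downwindSum += drift[x + 1];
    }
    // Undrifted fraction at the spraying point: everything minus the downwind
    // drift and three copies of the upwind value (upwind + two perpendicular).
    drift[1] = 1.0 - downwindSum - 3.0 * upwindDeposit;
    return drift;
}
```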
Fig. 5 shows the pesticide distribution on and outside of a rectangular field with a width of 18 m immediately after spraying, on a day with a westerly wind. In this case, the drift for the BCPC-F/M nozzle with a forward speed of 14.4 km/h is used. The figure shows that, upwind on the left side of the field, the drift only reaches 1 m (pink column) and the rest of the area is unaffected (white columns). Inside the field, the pesticide amount gradually increases from left to right until it reaches 100% (yellow columns) before decreasing to around 92% for the last 1 m (turquoise column). On the right side of the field, the amount of pesticide follows the BCPC-F/M distribution in Fig. 4, gradually decreasing from around 13% to 0%. Please note that the pink colour scales are not linear.
The last step of the spraying is distributing the pesticide between the different compartments. In the 1-compartment model, the whole amount is simply applied to the same map. For the 2- and 3-compartment models, it is shared between the plant canopy and soil compartments by using Beer's Law (EFSA 2017) to calculate the canopy cover.
According to this law, the fraction of the surface covered by the crop is given by:

$SC = 1 - e^{-k \cdot LAI}$,

where $LAI$ is the leaf area index and $k$ is the extinction coefficient for diffuse solar radiation, which has a default value of 0.6 but can be specified in the configuration file. The fraction of the pesticide added to the plant canopy is then given by $SC$.
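A minimal sketch of this canopy/soil split, with illustrative names:

```cpp
#include <cmath>

// Split a sprayed amount between canopy and soil using Beer's law;
// k is the extinction coefficient (default 0.6 in the text).
void SplitSpray(double amount, double LAI, double k,
                double& toCanopy, double& toSoil)
{
    double SC = 1.0 - std::exp(-k * LAI);  // fraction of surface covered by crop
    toCanopy = amount * SC;
    toSoil   = amount * (1.0 - SC);
}
```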
Pesticide Fate
The pesticides are assumed to undergo a first-order decay every day. Therefore, the remaining fraction of pesticide after one day is given by:

$f = e^{-\ln(2)/DT_{50}(T)}$,

where $DT_{50}(T)$ is the temperature-dependent half-life, calculated from $DT_{50,20}$, the half-life at 20℃, and $T$, the average temperature on the given day. The half-life can vary for different pesticides and compartments. It can, therefore, be specified in the configuration file, but has a default value of 10 days. The daily fraction remaining is calculated for each pesticide and compartment once daily to account for the temperature dependence.
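A sketch of the daily decay factor follows; note that the Q10-type temperature correction used here is an assumption standing in for the paper's temperature function, which is not reproduced in the text:

```cpp
#include <cmath>

// Daily first-order decay factor. DT50at20 is the half-life at 20 degrees C
// in days; meanTempC is the day's average temperature.
double DailyRemainingFraction(double DT50at20, double meanTempC)
{
    const double Q10 = 2.2;  // assumed value, not from the paper
    double DT50 = DT50at20 * std::pow(Q10, -(meanTempC - 20.0) / 10.0);
    return std::exp(-std::log(2.0) / DT50);  // fraction left after one day
}
```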
The decay of the pesticides is then calculated by looping over all the cells and multiplying the current amount in each cell by the daily fraction remaining. A flag is set to true as soon as a pesticide has been applied. During the decay process, the remaining amount of pesticide is checked against a user-defined threshold for infinitesimally small values, below which the cell value is set to zero. To prevent running the computationally heavy loop over all the cells when there is nothing to decay, i.e. if all cells are zero, the application flag is unset and the decay process is no longer run.
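A minimal sketch of this daily decay pass, with the small-value threshold and the "anything left?" flag (illustrative names):

```cpp
#include <vector>

// Apply one day's decay to a compartment map and report whether any
// pesticide remains; the caller unsets the application flag when not.
bool DecayMap(std::vector<double>& map, double dailyFraction, double threshold)
{
    bool anyLeft = false;
    for (double& cell : map) {
        cell *= dailyFraction;
        if (cell < threshold) cell = 0.0;  // clamp infinitesimally small values
        else anyLeft = true;
    }
    return anyLeft;
}
```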
The amount of pesticide in the plant compartments (plant canopy and in plant) can also decrease or be completely removed due to a number of farm management events, like harvesting or ploughing. Note that this does not affect the soil compartment or the only compartment in the 1-compartment model. The amount of pesticide in the 'in plant' compartment also decreases when green biomass transforms to dead biomass. In this way, we only consider the amount of pesticide in the living part of the plant, since it is only this part that can transfer into the pollen and nectar.

Fig. 5. Pesticide distribution on the landscape after drift using a BCPC-F/M nozzle with a forward speed of 14.4 km/h. Each square is 1 m x 1 m and the sprayed field is the 18 columns in the middle with the purple-green-yellow gradient.
Pesticide Transfer
The pesticides are transferred between the different compartments when the multiple-compartment models are used. A sketch of the transfers can be seen in Fig. 6. In the case of the 2-compartment model, the only transfer mechanism is rain wash-off, which transfers part of the pesticide from the plant canopy to the soil, as indicated by the blue arrow. The rain wash-off depends on the daily gross precipitation in mm. To implement this, the leaf area index (LAI) and surface cover (SC) are used to calculate the intercepted precipitation, given by Prochnow et al. (2012) (a Braden-type interception equation):

$P_{int} = a \cdot LAI \cdot \left(1 - \frac{1}{1 + \frac{SC \cdot P_{gross}}{a \cdot LAI}}\right)$,

where $P_{gross}$ is the gross precipitation and $a$ is an empirical coefficient set to 0.25 mm/day for agricultural crops. The proportion of the pesticide that is washed off because of gross precipitation is then given by:

$f_{wash} = w \cdot (P_{gross} - P_{int})$,

where $w$ is the wash-off factor (EFSA 2012), which depends on the water solubility of the pesticide, $S_w$; this is set to a default value of 10000 mg/l unless another value is given in the configuration file.
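A sketch of the daily wash-off fraction, using the interception and wash-off expressions reconstructed above; both are assumptions consistent with the description, not verified ALMaSS code:

```cpp
#include <algorithm>

// Fraction of the canopy amount washed off to the soil for a given day.
double WashoffFraction(double Pgross, double LAI, double SC, double w)
{
    const double a = 0.25;  // empirical coefficient, mm/day, agricultural crops
    double Pint = 0.0;
    if (LAI > 0.0 && Pgross > 0.0) {
        Pint = a * LAI * (1.0 - 1.0 / (1.0 + SC * Pgross / (a * LAI)));
    }
    // Linear wash-off with factor w per mm of non-intercepted rain, capped at 1.
    return std::min(1.0, w * (Pgross - Pint));
}
```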
In the case of the 3-compartment model, several transfer mechanisms are considered in addition to the rain wash-off. This is indicated by the green and brown arrows in Fig. 6. Part of the pesticide amount is absorbed from the plant canopy and the soil into the plant. In the case that seed coating is used, two additional mechanisms of transfer are considered, from the seed coating to the soil and into the plant. This is indicated by the orange arrows.
The transfer between the seed coating and soil compartments is simply calculated by multiplying the pesticide amount by a rate $r$, such that the amount in each cell goes from $A_{seed}$ to $A_{seed} \cdot (1 - r)$ in the seed coating compartment and from $A_{soil}$ to $A_{soil} + A_{seed} \cdot r$ in the soil compartment. For the three types of transfer into the plant, the transfer depends on the green biomass of the plant, with the assumption that a large plant absorbs more than a small plant, so:

$\Delta A_{c \rightarrow plant} = r_c \cdot B_{green} \cdot A_c$,

where $c$ stands for one of the three compartments (plant surface, seed coating or soil) from which the pesticide is transferred and $B_{green}$ is the green biomass in kg/m². The transfer rates $r_c$ are given in the configuration file and the default value for all rates is set to 10%. The order of the transfers is: plant canopy to inside the plant, soil to inside the plant, and seed coating to inside the plant and to the soil.
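A sketch of the per-cell transfer step in the 3-/4-compartment model, in the order given above; all rates and names are illustrative, and the clamping to the available amount is a defensive assumption:

```cpp
#include <algorithm>

// One cell's compartment amounts and one daily transfer step.
struct Cell { double canopy, soil, seedCoat, inPlant; };

void TransferStep(Cell& c, double rCanopy, double rSoil,
                  double rSeedToPlant, double rSeedToSoil,
                  double greenBiomass /* kg per m2 */)
{
    double d = std::min(c.canopy, c.canopy * rCanopy * greenBiomass);
    c.canopy -= d; c.inPlant += d;      // canopy -> in plant

    d = std::min(c.soil, c.soil * rSoil * greenBiomass);
    c.soil -= d; c.inPlant += d;        // soil -> in plant

    d = std::min(c.seedCoat, c.seedCoat * rSeedToPlant * greenBiomass);
    c.seedCoat -= d; c.inPlant += d;    // seed coat -> in plant

    d = c.seedCoat * rSeedToSoil;       // seed coat -> soil (not biomass-scaled)
    c.seedCoat -= d; c.soil += d;
}
```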
Model Parameters
In Table 1, the model parameters used to control the simulation are listed. Table 2 shows a list of the different nozzle types that can be chosen.

Table 1. Model parameters controlled in the configuration files for ALMaSS.

Table 2. List of nozzle types.
Optimisation
The pesticide code is very computationally intensive, both with regard to CPU time and memory consumption. To decrease both time and memory use, it is possible to decrease the resolution of the pesticide maps to, for example, a 4 m² or 16 m² grid instead of the 1 m² resolution of the landscape.
Another way to run the code quicker is to use several CPU cores in parallel. The loop over the cells in the pesticide map can be done in parallel for both the decay and transfer methods, which are some of the most time-consuming parts of the code. This is possible because the cells are independent of each other.
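A minimal sketch of such a parallel cell loop, here using OpenMP (compile with -fopenmp); the paper does not state which threading mechanism ALMaSS uses, so this is illustrative:

```cpp
#include <vector>

// Parallel decay pass: cells are independent, so iterations can be
// shared across threads without synchronisation.
void DecayMapParallel(std::vector<double>& map, double dailyFraction)
{
    #pragma omp parallel for
    for (long long i = 0; i < static_cast<long long>(map.size()); ++i) {
        map[i] *= dailyFraction;
    }
}
```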
Examples of usage
These examples are designed to show that the pesticide behaviour works as described, but do not purport to show a real case. An example of the decay and transfer of the pesticide between the different compartments is shown in Fig. 7. The default values for the half-life, water solubility and transfer rates are used. The figure shows the amount of pesticide in mg/m² in a field with winter rape, which is fully sprayed. Note that the default transfer rates have been set at relatively high values to produce a clear pattern; hence, all pesticide transfers quickly from the canopy and soil to 'in plant'. Slower transfer and degradation rates would result in higher pesticide amounts in the different compartments for longer.
At the time of spraying (4 April), most of the pesticide is sprayed onto the plant canopy (blue). However, the pesticide is quickly transferred to the soil (orange) due to the rain wash-off, and the plant starts absorbing the pesticide both from the soil and the canopy, which increases the amount inside the plant (green).
The stack of the three compartments is seen to match the curve for the 1-compartment model (black dashed line) until day 154 (3 June), after which the stack decreases more quickly. This is caused by the transformation of green biomass to dead biomass starting on this day, which decreases the amount of pesticide in the 'in plant' compartment, as explained in Pesticide Fate. Note the curves only match in this example because the half-lives are kept the same for all compartments; this is unrealistic, but is shown to confirm that the transfers work correctly.
Fig. 8 shows the pesticide amount in an area which is first treated with seed coating (brown) and then alternately treated with sprayed pesticide and seed coating, in a total of six applications over three years. The figure shows a case where the pesticide from the seed coating has barely decayed before the sprayed pesticide is applied.
Fig. 9 shows an example of the pesticide amount in the different compartments when applying a granular pesticide instead of spraying it. The main difference is that there is no pesticide on the plant canopy, but otherwise, the transfer occurs in the same way as for the sprayed pesticide.
Fig. 10 demonstrates that the framework is able to keep track of several pesticides at the same time. In this case, the first pesticide (PPP1) is sprayed on 4 April and the second (PPP2) on 18 April. Fig. 11 demonstrates the effect of the half-life of the pesticide. The decay in the 1-compartment model is shown for a half-life of 5, 7.5, 10 and 12.5 days, respectively. The expected behaviour of faster decay with a lower half-life is observed. The small wiggles in the curves are caused by the temperature-dependence of the half-life, as explained in Pesticide Fate.
Fig. 12 demonstrates the effect of the water solubility of the pesticide. The decay in the 2-compartment model is shown for water solubilities of 1, 5, 10 and 15 g/l, respectively. The bumpy distributions are caused by the rain wash-off, which depends on the daily precipitation, as explained in Pesticide Transfer.
Discussion
The way pesticides are handled in ALMaSS is not a one-to-one replica of reality. That is, however, not the goal and would not be computationally feasible. The implementation merely aims to cover the main exposure routes that would impact the numerous organisms simulated in ALMaSS.
The drift which occurs when the pesticide is sprayed was the most complicated part to implement. The main challenge is that most of the available data on drift are not directly applicable to the simulation. The studies from, for example, Rautmann and Ganzelmeier (Rautmann et al. 1995, Rautmann et al. 2001) investigated the total drift outside the field after having sprayed a whole field. They assume that the field is broad enough that pesticide sprayed at one side will lead to negligible drift on the opposite side. However, in ALMaSS, we have complex field geometries (e.g. very narrow stretches of field), so we are interested in knowing the drift caused by spraying on a single square metre, so as not to overestimate the drift outside those parts. Here the studies on drift from a single nozzle by Stallinga et al. (2016) are more applicable. Using their results, we can calculate the drift up to 10 m away from the spraying point in the wind direction. In reality, the drift can go further, but the effect is deemed very small, and including it would require additional computing power as well as an uncertain estimate of the amount deposited, due to the missing data above 10 m. The effect of only applying the drift up to 10 m is seen in Figure 4, where the distribution drops off at the end. This figure also shows the results from Rautmann (Rautmann et al. 2001), which have a less steep slope. However, a paper by Butler Ellis et al. (2017) shows that the slope of the drift distribution varies significantly between different studies; for example, the Fera PS2022 result (Anon. 2010) has a similar slope to that which we obtained here.

There are also more general considerations related to the assumptions in the current model. The drift is caused by the wind, which, at low speeds, is variable in direction and strength, but in the ALMaSS simulation the drift depends on the wind only through the average wind direction of the day (discretised to four directions). Future extensions could consider variations in wind direction during the day and include wind speed in the calculation. This might have an effect on which habitats surrounding a field receive drift. However, the range of relevant wind speeds is limited, since farmers are typically not supposed to spray pesticides unless the wind speed is below 5 m/s; hence the change in drift distance would be minimal.
Another assumption used for the drift is that the drift at a point in time mainly occurs in the wind direction. This assumption is based on the results from Destain et al. (2011), showing that a simulation which assumes a drift of only around 0.5 m perpendicular and opposite to the wind direction for each nozzle gives a good description of the measured drift, so the assumption of only 1 m of drift perpendicular and opposite to the wind direction should hold. Since we do not have any measurements of the drift perpendicular to the wind direction, we have assumed that it is the same as the amount going upwind, even though it is probably somewhat larger. This could be altered in future versions if it proves to be an important simplification.
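To make the downwind mechanism concrete, the sketch below distributes the drifted fraction from a single sprayed square metre over the cells 1 to 10 m downwind. The exponential deposition profile and all numbers are hypothetical placeholders for illustration; ALMaSS derives the real profile from the single-nozzle measurements cited above.

```python
import numpy as np

def drift_fractions(max_dist_m: int = 10, scale_m: float = 2.0) -> np.ndarray:
    """Hypothetical per-metre deposition fractions downwind of a sprayed cell.

    An exponential fall-off is assumed purely for illustration; the actual
    profile in ALMaSS comes from single-nozzle drift data (Stallinga et al. 2016).
    """
    d = np.arange(1, max_dist_m + 1)
    w = np.exp(-d / scale_m)
    return w / w.sum()  # normalise so the drifted amount is fully deposited

drifted_total = 2.0                          # mg leaving one sprayed cell (hypothetical)
deposit = drifted_total * drift_fractions()  # mg deposited 1..10 m downwind
```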
This model considers what happens to the pesticide when it is partitioned between soil and vegetation, but other factors can be important in the field. One important, but missing, mechanism for pesticide mass transport is runoff. Runoff describes the removal of pesticides from the soil caused by water flow; since ALMaSS does not currently simulate surface water, it would be complicated to include this effect. It might be possible to include it in the future, but this would require implementing an ALMaSS surface water model, which has not been explored so far.
There are also simplifications regarding environmental decay. The temperature-dependent half-life given in Pesticide Fate stems from decay in soil and is, therefore, strictly speaking, only valid for the soil compartment. It is, however, also used for the other compartments, with different parameters, since it is the best estimate we have at the moment. In future versions, it might be possible to implement a solar radiation-dependent half-life for the plant canopy or another suitable model, if available.
Conclusions
We have demonstrated that the pesticide module in ALMaSS can apply pesticides to the landscape in the form of sprays, granules and seed coating. For sprayed pesticides, the model takes into account the drift caused by the wind, as well as the division of the pesticide between the plant canopy and the soil by using Beer's Law. Furthermore, the pesticides can be transferred between different compartments, for example, from the leaf surface by rain wash-off or absorption.
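For reference, the Beer's Law split mentioned here takes the standard form in which the canopy intercepts a fraction 1 − exp(−k·LAI) of the spray. A minimal sketch, with an assumed extinction coefficient and leaf area index that are not ALMaSS defaults:

```python
import math

def canopy_interception(k: float, lai: float) -> float:
    """Beer's-law fraction of a spray intercepted by the plant canopy.

    k (extinction coefficient) and lai (leaf area index) are illustrative
    values only.
    """
    return 1.0 - math.exp(-k * lai)

applied = 100.0  # mg applied to one cell (hypothetical)
f = canopy_interception(k=0.5, lai=3.0)
on_canopy, on_soil = applied * f, applied * (1.0 - f)
```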
The ALMaSS pesticide module is highly configurable and has several levels of complexity. It can be used as a relatively simple model with only one compartment if the precise division of the pesticides is not deemed important for a particular simulation. However, in some cases, for example for honeybees, it might be important to model the fractions of the pesticides that enter the pollen and nectar. The more complex 3- and 4-compartment models can be used in such a case. However, this requires detailed calibration data such that the pesticide half-lives, transfer rates and partition coefficients can be determined for a given pesticide. These data should preferably include residue measurements in soil, plant, pollen and nectar several times after the pesticide application. If such data are not available, worst-case estimates would have to be used.
Figure 4.
Figure 4. The accumulated drifted amount of pesticide outside the field (in the wind direction) for the different nozzles at a forward speed of 14.4 km/h, compared to the result of Rautmann et al. (2001).
Figure 6.
Figure 6. Diagram of transfer between different pesticide compartments. The 2-compartment model includes the dark green and brown compartments, whereas the 3-compartment model also includes the light green one. The 4-compartment model furthermore includes the orange box. Small symbols indicate how the compartments can be supplied with pesticide through the application: spray, granular and seed coating.
Default parameter values (table fragment): half-life (for each pesticide and each compartment: soil, plant canopy, plant and seed coating): 10 days; transfer rate (for each of the transfers: soil to plant, plant canopy to plant, seed coating to plant, seed coating to soil).
Figure 7.
Figure 7. Pesticide amount as a function of time in the different compartments in an area which is fully sprayed on 4 April (day 104). The 3-compartment model is used. Note that the assumed pesticide amount in the simple 1-compartment model is not affected by the degradation of the plant, as explained in the main text.
Figure 8.
Figure 8. Pesticide amount as a function of time in the different compartments in an area where seed coating and sprayed pesticides are applied alternately. The 4-compartment model is used.
Figure 9.
Figure 9. Pesticide amount as a function of time in the different compartments in an area where a granular pesticide is applied on 4 April (day 104). The 3-compartment model is used, but there is no pesticide on the plant canopy.
Figure 10.
Figure 10. Pesticide amount for two different pesticides (PPP1 and PPP2) as a function of time in the different compartments in an area which is fully sprayed with PPP1 on 4 April (day 104) and PPP2 on 18 April (day 118). The 3-compartment model is used.
Figure 11.
Figure 11. Pesticide amount as a function of time for different half-life assumptions in an area which is fully sprayed on 4 April (day 104). The 1-compartment model is used.
Figure 12.
Figure 12. Pesticide amount as a function of time for different water solubility assumptions in an area which is fully sprayed on 4 April (day 104). The 2-compartment model is used.
Influence of pharmaceutical marketing mix strategies on physicians' prescribing behaviors in public and private hospitals, Dessie, Ethiopia: a mixed study design
Background Prescription drugs constitute the primary source of revenue for the pharmaceutical industry. Most pharmaceutical companies commit a great deal of time and money to marketing in hopes of convincing physicians about their products. The objective of this study is to assess the perceived influence of pharmaceutical marketing mix strategies on physicians' prescribing behaviors in hospitals, Dessie, Ethiopia. Methods A mixed methods sequential explanatory design was employed in two public and three private hospitals. A cross-sectional study design was employed including 136 physicians working in public and private hospitals. Percentage, mean, standard deviation, and multiple linear regressions were computed using the Statistical Package for Social Sciences. In the second phase, a phenomenological design was employed to fully explore in-depth information. Purposive sampling was used to select key informants, and 14 in-depth interviews were conducted by the principal investigator. Content analysis was performed using NVivo 11 Plus, with interpretation by narrative strategies. Results The overall perceived influence of pharmaceutical marketing mix strategies on physicians' prescribing behavior was 55.9%. The influence of the promotion, product, place and price strategies perceived by physicians in their prescribing behavior was 83 (61%), 71 (52.2%), 71 (52.2%) and 80 (58.8%), respectively. There was a statistically significant difference among marketing mix strategies (β = 0.08, p < 0.001). Determinants of the influence on physicians' prescribing behavior were specialty (p = 0.01) and working area (p = 0.04). The qualitative design also generated additional insights into the influence of pharmaceutical marketing mix strategies on physician prescribing behavior. Conclusions More than half of physicians perceived that pharmaceutical marketing mix strategies influence their prescribing behavior. The qualitative design also revealed that pharmaceutical marketing mix strategies influenced physicians' prescribing behavior. Strengthening regulation and maintaining ethical practice would help to rationalize physicians' prescribing practice. Supplementary Information The online version contains supplementary material available at 10.1186/s12889-020-10063-2.
Background
According to Philip Kotler and Armstrong, marketing is defined as "satisfying needs and wants through an exchange process" [1]. The marketing mix was first introduced by Borden in 1964, with the basic elements being product, price, place, and promotion (collectively coined the 4Ps of marketing) [2]. The 4Ps are linked to each other to generate prescription orders by physicians and make the product reach consumers [3]; this enables a firm to attain the desired level of sales in the target market [4].
Marketing prescription medicines constitutes the primary source of revenue/profits for pharmaceutical companies [5]. Pharmaceutical companies commonly employ a wide range of marketing strategies to increase their drug sales [6,7]. Eighty-four percent of pharmaceutical marketing efforts are directed toward physicians because, from the manufacturer's point of view, physicians are the gatekeepers or decision-makers for drug sales [8]. Most pharmaceutical manufacturer and distributor companies commit a great deal of time and money to marketing in hopes of convincing physicians about their products [9].
Despite codes of pharmaceutical medical practice [10] and the World Health Organization code of ethics regulating the marketing of prescription drugs, there are still unethical commercial practices that influence prescribers' decisions [11] by supplying biased information, practices that are more engaged in creating higher profits for the company [12].
The pharmaceutical industry is a large and high-value industry globally, and its marketing practices have a direct influence on the welfare of patients at the individual level and on society in general [13]. Companies use many approaches in marketing their products, such as giving away gifts, free lunches, incentives, and sponsoring education and holidays, as inducements that can compel a doctor to prescribe without a scientific basis [14].
Ethiopia is one of the most populous countries in Africa, and the demand for pharmaceutical products in the country is high [15]. Domestic pharmaceutical manufacturing is quite small, covering between 10 and 20% of the market, with the rest satisfied through imports [16]. In 2015, the annual pharmaceutical market in Ethiopia was estimated at United States Dollar (US$) 400 to 500 million and was expected to reach around US$ 1 billion by 2018 [17]. According to the Ethiopian Food and Drug Administration (EFDA) regulatory directive, marketing of pharmaceutical products is restricted, and a retailer or wholesaler cannot market prescription products or services directly to the end consumer [18].
The zones adjacent to Dessie city are served by district hospitals, health centers, and health posts rather than referral and general hospitals. This infrastructure, together with the national treatment guideline, limits the ability of health officers there to diagnose and treat patients with different diseases. Consequently, a significant number of patients visit Dessie city to seek health services, and several public and private health institutions are concentrated there. Taking advantage of this, the number of pharmaceutical distributors/wholesalers in Dessie town has increased steadily over the years. In some streets of the town, the concentration of pharmaceutical service providers is so high that five licensed premises can be found within the same building or within walking distance of each other. This scenario necessarily elicits a high degree of market competition among companies. In the absence of direct-to-consumer marketing, the players have to find innovative ways to stand out from the crowd and beat the competition, because various brands of generic medications exist [19]. The competitive nature of the market triggers pharmaceutical companies to develop marketing strategies to convince physicians, beat competitors, and obtain reasonable profits.
As pharmaceutical spending continues to escalate and drug safety issues have become more common, such physician-directed outreach efforts have come under mounting public scrutiny [20]. Pharmaceutical firms, therefore, need to design their marketing mix strategies without violating the ethical code of practice. They need to understand how their marketing mixes influence doctors' choice of prescription drugs. So far, limited research has been carried out on the influence of pharmaceutical marketing mix strategies on physicians' prescribing behaviors [21,22]. Therefore, the objective of this study was to assess the perceived influence of pharmaceutical marketing mix strategies on physicians' prescribing behaviors in hospitals, Dessie, Ethiopia.
Study area and period
The study was conducted from September 1 to October 1, 2019, at private and public hospitals in Dessie, Ethiopia. Dessie is located in the South Wollo zone of Amhara Regional State, 401 km from Addis Ababa, the capital city of Ethiopia.
Study design
A mixed methods sequential explanatory design was employed. The first phase used a cross-sectional study design to assess the perceived influence of pharmaceutical marketing mix strategies on physicians' prescribing behavior. This was followed by a phenomenological design used to fully explore in-depth information about the underlying reasons, opinions and motivational behaviors of physicians.
Source and study population
All physicians working in the public and private hospitals of Dessie city were considered the source population. Physicians who were available and volunteered to participate during the study period were included in the study.
Sample size determination and sampling procedures
For the quantitative part of the study, all 140 physicians working in the public and private hospitals of Dessie town were included, identified by consulting the human resource department of the Dessie city health office, so representativeness of the population was guaranteed. Key informants were selected by the principal investigator by screening the doctors that medical representatives (MRs) often target; i.e., MRs working in Dessie city, acting as gatekeepers, pointed out highly experienced and targeted physicians, supplemented by patient load (from the patient data clerk registration form of each hospital). Purposive sampling was used to select physicians working in public and private hospitals as key informants for the qualitative part of the study, since they were expected to be experienced and rich in information related to the pharmaceutical marketing activities performed by pharmaceutical companies. The number of key informants was 14, determined by saturation of information concerning emerging themes.
Data collection instruments and procedures
A structured questionnaire (see Additional file 1) was developed from the literature [21,[23][24][25] to measure the influence of pharmaceutical marketing mix strategies, such as pricing, promotion, place and product strategies, and the socio-demographic characteristics of physicians, on physicians' prescribing behavior. The questionnaires, which were delivered to the participants in person, included demographic questions and five-point Likert-style questions (44 questions). The Likert-question answers ranged from "strongly disagree" to "strongly agree". The quantitative data were collected by five nurses using self-administered questionnaires, after recruitment and a half-day training. The principal investigator coordinated data collection.
An interview guide (see Additional file 2) was utilized for the qualitative part to assess in-depth information on how pharmaceutical marketing mix elements such as promotion, product, place and price strategies affect physicians' prescribing behavior. This was conducted through face-to-face interviews with physicians by the principal investigator (ADH) using open-ended questions, on hospital compounds and in public places such as cafés and hotels. The interview guide was organized and developed from reputable literature [21,[23][24][25]. The principal investigator conducted in-depth interviews which lasted an average of 38 min (25 to 59 min) and continued until no new theme emerged. The interviews were conducted in Amharic, the official language of Ethiopia, with the aid of an audio recorder. All physicians were native to Ethiopia and spoke Amharic fluently enough to explain their experiences. The principal investigator took notes during the interviews. In cases of ambiguity, the principal investigator clarified the issues raised instantaneously. All recorded interviews were transcribed verbatim.
Reflexivity: the principal investigator's status as an insider
The principal investigator's status as a "professional" and "native" offered certain strengths and insights into the professional issues he was exploring. The principal investigator was non-judgmental during the in-depth interviews and maintained professional neutrality. He considered insider bias in his work and reflected on how key informants might respond to him. He also faced challenges based on his position as a member of the elite and as a senior pharmacy professional.
All of these issues concern competing roles and perceptions related to the concept of insider bias, which has both advantages and disadvantages when conducting such a study. In his case, the advantages included being able to use existing networks and contacts among the physicians that might not otherwise have been available to him. On the other hand, the disadvantages related to his position include the way he was perceived by the participants in this study; it is impossible to know the extent to which the participants were truthful in the perceptions and opinions they shared with him or whether they were telling him the things they thought he wanted to hear. The use of open-ended questions, as well as efforts made to engage informants in informal conversations on other topics they themselves raised, were among the measures taken to mitigate these limitations.
Data quality management
For the quantitative survey, the self-administered questionnaire was prepared in English. The interview guide was prepared in English, translated into Amharic and finally back-translated into English to maintain consistency and standardization of the instruments. To assure the quality of the data, the self-administered questionnaire was properly designed and its reliability was checked with Cronbach's alpha (0.94). Validity of the questionnaire was supported by using standardized items adapted from the literature [21,23,25]. Also, the principal investigator and supervisors closely monitored the data collection process. The assumptions of linear regression (independence of observations, normal distribution and homogeneity of variance) were tested and fulfilled. Any unfilled data in the questionnaires were first checked, and all collected data were critically examined for completeness and consistency during data collection, analysis and interpretation. A pre-test was carried out to test the study instruments with thirty-five physicians in a health facility that was not part of the study area. Questions on issues sensitive to physicians, such as ethnicity and salary, were removed from the questionnaire after the pre-test. Also, different strategies were used to assure the quality of the data: theoretical grounding, using conceptual frameworks to guide the study; a mixed approach (qualitative and quantitative); and involvement of more than one investigator. In addition, the qualitative findings were shared with key informants to confirm that the presentations accurately reflected their perceptions and experiences. The content of the interview guide was checked by an expert from the social and administrative pharmacy department group.
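For readers unfamiliar with the reliability check, Cronbach's alpha can be computed directly from the item-score matrix using the standard formula α = k/(k−1) · (1 − Σσ²ᵢ / σ²ₜ). The sketch below is illustrative only (the authors used SPSS), and the toy Likert data are invented:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) Likert score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy data: 6 respondents x 4 five-point Likert items (hypothetical).
likert = np.array([[4, 5, 4, 4], [2, 2, 3, 2], [5, 5, 5, 4],
                   [3, 3, 2, 3], [4, 4, 4, 5], [1, 2, 1, 2]])
print(round(cronbach_alpha(likert), 2))
```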
Data analysis and presentation
EpiData software (version 4.6.0) was utilized for coding and data entry after the data were edited. The quantitative data were analyzed using the Statistical Package for Social Sciences (SPSS) version 20. Simple descriptive statistics such as percentage, mean and standard deviation (SD) were computed to meet the stated objective. Microsoft Excel was used to present summary results as figures and tables. Also, inferential statistics were computed using multiple linear regression to measure the association between independent and dependent variables, with a 95% confidence interval; variables with p-value < 0.05 were taken as statistically significant.
Early coding was conducted concurrently with data collection on the audio-recorded and transcribed interviews. Data were analyzed using the principles of inductive content analysis. Texts were read independently by the principal investigator (ADH) and another professional who speaks the local language (MHK), and codes were developed in reference to the research questions. The codes were organized into higher-order conceptual themes. These individual codes and themes were discussed at group meetings until consensus was reached on basic themes and subthemes across interviews. Finally, the themes were incorporated into a conceptual model of the participants and the influence of pharmaceutical marketing mix strategies on physicians' prescribing behaviors. Sections of original transcripts and key quotes considered illustrative of the emerging themes were translated into English to facilitate discussion with the full research team. Data analysis was supported by NVivo 11 Plus computer software. Narrative strategies were employed for interpretation, and the identifier codes for the presentation of quotations in the qualitative findings were: specialty, working area, and age of the key informant.
Operational definitions
Influenced: physicians who scored greater than or equal to the mean score for a pharmaceutical marketing mix strategy were considered influenced. Not influenced: physicians who scored below the mean score were considered not influenced.
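A minimal sketch of this classification rule, with hypothetical per-physician scores rather than study data:

```python
import numpy as np

def classify_influence(scores: np.ndarray) -> np.ndarray:
    """Label a physician 'influenced' if their strategy score is >= the sample mean."""
    return np.where(scores >= scores.mean(), "influenced", "not influenced")

# Hypothetical per-physician mean Likert scores for one marketing strategy.
promotion_scores = np.array([3.9, 2.8, 3.4, 4.1, 3.0, 3.6])
print(classify_influence(promotion_scores))
```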
Socio-demographic characteristics of respondents
A total of 136 questionnaires were collected during the data collection period, giving a response rate of 97.14%. As shown in Table 1, 120 (88.2%) of the respondents were male and 16 (11.8%) were female. The majority of respondents, 49 (36%), were between 30 and 44 years of age. Regarding the specialty of participants, 20 (26.3%) were residents, 16 (21.1%) were internists and 12 (15.8%) were surgeons.
Participants for qualitative section
A total of 14 professionals participated in the in-depth interviews. All key informants were male and aged 32-45 years, with mean and SD 39.86 ± 3.86. Regarding place of work, 5 interviewees were from public health facilities and 9 from private health facilities. Key informants included 3 internists, 3 surgeons, 2 pediatricians, 2 gynecologists, 2 orthopedicians, and 2 GPs. The work experience of key informants ranged from 4 to 15 years, with mean and SD 10.07 ± 2.94.
Influence of promotion strategy on physicians' prescribing behavior
Physicians perceived that information from MRs (3.85 ± 1.11), participating in company-sponsored CME (3.61 ± 1.2), participating in product launch meetings (3.5 ± 1.01), frequent visits of MRs (3.64 ± 1.08), information from promotional drug brochures (3.47 ± 1.11), and invitations to visit a pharmaceutical manufacturing plant (3.52 ± 1.22) influenced their prescribing behavior. However, among promotional strategy tools, receiving different gifts from pharmaceutical companies (3.09 ± 1.26) had a neutral influence on physician prescribing behavior, and a personal relationship with the company (2.74 ± 1.11) did not influence their prescribing behavior (Table 2).
The qualitative part of the study found that pharmaceutical companies follow a variety of strategies as they seek to further increase their market share in Dessie town. They try to reach doctors by adopting a pharmaceutical marketing mix strategy, one element of which is promotion. Pharmaceutical companies hire MRs in the town to get closer to the physicians, promote their products and convey the company's image. Companies that do not have MRs there send their MRs from the capital city for some time and maintain close ties with the physicians. The majority of participants (n = 10) in the qualitative study reported that receiving medication information through MRs and seeing the manufacturing site had a positive influence on their prescription patterns. One informant described this scenario: "If you take MRs with me now … … if you find them somewhere else, what is the whole thing? That product will bring to your mind the product that promoter will introduce to you and have influence here and these things will affect my prescription order (Orthopedician, Public, 40)." In addition, this was emphasized by another informant: "… ..seeing where the pharmaceutical products are manufactured is also very interesting to me, I have personally gone twice to saw pharmaceutical manufacturing site, has led to me an increase in the prescribing rate of drugs.....(Internist, Private, 41)."
Perceived influence of place strategy on physicians' prescribing behavior
Respondents perceived that pharmaceutical product availability (4.26 ± 0.77), the inclusion of the medicine in the hospital medicine list (3.79 ± 0.98), fast deliveries with special storage and distribution of medicines (3.66 ± 1.08), availability of real-time product information from distribution intermediaries (3.61 ± 1.11) and availability of local agents (importer/distributor) representing the principal company (3.4 ± 1.14) influenced physicians when prescribing medication to patients (Table 3).
Pharmaceutical companies use place strategy to expand their market share by persuading physicians to prescribe their products. Most (n = 10) of those who participated in the qualitative study emphasized that the companies' focus on improving the supply chain performance of pharmaceuticals has made their medical work easier. This was further substantiated by one informant: "If you are prescribing some drugs, they will bring that product and the primary benefit for the patient. A patient does not send the prescription to Addis Ababa they easily get here within a variety of pharmacies in the city is such a great job for us (Pediatrician, Private, 44)." One of the place strategies, having an agent in the city of Dessie, made it easy for physicians to prescribe medications for patients. This was supported by one informant from the public sector, who revealed that: "… .to be honest, I don't want to roam a patient in the city. If we have these medications, we can prescribe medications at the pharmacy and I don't prescribe the medicine if it's not available outside the pharmacy. There is no need to wander the patient (Gynecologist, Public, 41)."
Perceived influence of price strategy on physicians' prescribing behavior
Regarding the price strategy dimensions, respondents perceived that the price of the drug relative to the effectiveness of therapy (4.13 ± 0.89), disclosure of the actual price of the product (3.87 ± 0.97), price discount techniques for the product (3.71 ± 1) and the price of medication relative to quality (3.72 ± 1.07) influenced physicians' prescribing behavior (Table 4).
Pharmaceutical companies utilize price strategy to attract the attention of physicians, maintain their market status and increase their revenues. Most key informants (n = 8) recognized that knowing the cost of the drug was beneficial and had a positive influence on their work when prescribing medication. This was supported by one informant: "....knowing the product price makes us comfortable with prescribing a product in the first place. This is important to identify individuals who have potential or users can use a product (GPs, Public, 34)."
Perceived influence of product strategy on physicians' prescribing behavior
As shown in Table 5, physicians perceived that supportive evidence of the efficacy of the medicine given by the pharmaceutical company (4.01 ± 0.86), the release of new innovations or combinations of drugs (4.07 ± 0.97), quality of the medicine (4.12 ± 0.98), form of delivery of the medicine (3.88 ± 0.95), introducing the country of manufacture of pharmaceutical products (3.6 ± 1.06), and the image of the pharmaceutical company (3.51 ± 1.08) influenced physicians' prescribing behavior.
Pharmaceutical manufacturers and importer companies try to attract doctors' attention to increase their market share in Dessie town. Most (n = 9) of the participants in the qualitative study agreed that changes in the preparation of drug formulations have a positive effect when they prescribe medicines to their patients. Regarding this, an informant had this to say: "… … a combination that comes with a new look. For example, if we take Aferin now, this combination of three drugs is very good for me, to prescribe for the patient. You can make it easier to deliver, especially at pediatrics age and you will reduce overload (Pediatrician, Private, 38)." The study also found that the country of origin had an influence on prescribing behavior. In this regard, the majority (n = 9) of informants think that the drug has good quality, and they said it has a positive effect on their work when prescribing medicine to patients. To this, one of the key informants stated that: "… ..knowing about the companies is a good idea. It has influence sometimes; I think it is better to know what country the products are from if you know the
Perceived influence of marketing strategies
The data fulfilled the assumption of normal distribution, so the mean score was appropriate to classify the influence of pharmaceutical marketing mix strategies on physicians' prescribing behavior. Accordingly, the promotion, product, place and price strategies had overall mean scores of 3.39, 3.72, 3.59 and 3.73, respectively; respondents scoring above these mean scores were considered influenced. As shown in Fig. 1, 83 (61%), 71 (52.2%), 71 (52.2%) and 80 (58.8%) of physicians perceived that the promotion, product, place and price strategies, respectively, influenced their prescribing behavior. Seventy-six (55.9%) of respondents scored above the overall mean score and perceived that their prescribing behavior was influenced, whereas 60 (44.1%) of participants scored below the overall mean and perceived that their prescribing behavior was not influenced by pharmaceutical marketing mix strategies.
In addition, in the qualitative part of the study, the majority of participants (n = 10) reported that the promotion of drug products by multinational companies had a positive influence on their prescription patterns. This was emphasized by one informant: "Promotion of drugs I think it has to use update our knowledge, it has great importance for getting new things done (Internist, Private, 39)." The post hoc test revealed that the promotional strategies differed statistically significantly from the product (p = 0.001), place (p = 0.0028) and price (p = 0.00) strategies. There was a statistically significant difference among the marketing mix strategies employed by pharmaceutical companies to influence physician prescribing behavior (β = 0.08, p < 0.001) (Table 6).
The current study also identified that specialty (β = − 0.06, p = 0.01) and working area (β = 0.15, p = 0.04) had a statistically significant influence on physicians' prescribing behavior, whereas sex, age, education, country of first-degree education, country of specialty and working experience were not statistically significant (p > 0.05) (Table 7).
Discussion
Medicines are important components of the health care system and play a crucial role in saving lives. When used rationally, they produce the desired effect of improving patients' ailments [26]. The main aim of physicians is to render service to patients in a rational manner [27]. The interaction between physicians and pharmaceutical companies must be ethical if proper medication prescribing practice is required. In this study, the perceived influence of pharmaceutical marketing mix strategies on physicians' prescribing behavior in Dessie town was 55.9%. This might be attributed to the dense concentration of pharmaceutical businesses taking advantage of the market opportunity, and to the fact that the marketing strategies currently adopted by various drug companies are highly attractive in developing countries [28]. Inadequate enforcement of pharmaceutical law was established to be the leading contributing factor to irrational prescribing practice [29]; it lets companies operate freely, and this gap allows physicians to become involved in unethical activity. A study reported loss of credibility of physicians in the eyes of patients and the public as a consequence of the nature and effect of their relationship with pharmaceutical companies [30]. Key informants also revealed that they were influenced by pharmaceutical marketing mix strategies.
Regarding promotion as one of the strategies used by pharmaceutical companies, 67.8, 73.5, 63.9 and 56.6% of physicians perceived that participating in company-sponsored CME, information from MRs, frequent visits of MRs, and information from promotional drug brochures, respectively, influenced their prescribing behavior. This represents a higher perceived influence than in a study conducted in Lebanon, which reported that participating in CME conferences, visits of MRs, and promotional drug brochures influenced the prescribing patterns of 39, 51.1 and 18.7% of physicians, respectively [31]. Also, a study conducted in Saudi Arabia reported that the frequency of MR visits influenced the prescribing decisions of 56.6% of physicians [32].
The present study revealed a lower influence on physicians' prescribing behavior compared with a study conducted in Addis Ababa, which stated that company-sponsored CME, frequency of MR visits, and information from MRs influenced the prescribing behavior of 44.2, 69.7 and 75.4% of physicians, respectively [25]. Also, a study done in a south-eastern city in the United States of America reported that 68% of physicians were influenced by drug brochures and 73% by MR visits [33]. The discrepancy among studies might be because medical professionals have difficulty updating themselves after graduating from medical school owing to limited sources of information. A study conducted at Hawassa University teaching and referral hospital in southern Ethiopia reported that lack of drug information was one of the factors leading physicians to prescribe medicines irrationally [34]. Because sources of information are limited, prescribers rely on the information they find in their environment. A review identified that information provided by multinational companies is often biased and sometimes dangerously misleading [35]. This inappropriate use has serious health and economic consequences for the success of the health care system at the national level, and adopting such information in clinical areas is very difficult [36].
The qualitative part of the present study indicated that the promotion strategies of MR visits, attending CME and drug brochures are major sources of drug information for physicians' work. This finding is also supported by a study done in Pakistan, which reported that physicians recognize MRs as information providers beneficial to their work [8].
In addition, the present study revealed that 60.3% of physicians perceived that invitations to visit a pharmaceutical manufacturing plant influenced their prescribing behavior. This finding is comparable to the study conducted in Addis Ababa, which reported that visiting a pharmaceutical manufacturing plant influenced the prescribing behavior of 63.9% of physicians [25].
This might be attributed to drug companies carefully selecting physicians and taking them on tours abroad to see manufacturing sites. Studies conducted in Pakistan also reported that attending pharmaceutical company-sponsored travel to touristic locations and visiting manufacturing plants increased prescribing rates after physicians attended a company-sponsored event with all their expenses covered [37]. This leads to incorrect generalization, as each visit makes physicians think about that company. The qualitative findings also reported that tours and invitations to visit pharmaceutical plants helped physicians strengthen their relationship with the company and contributed to changing their prescribing behavior.
Regarding product strategy, this study found that 58.8, 54.4 and 80.9% of physicians perceived that the country of manufacture of the pharmaceutical product, the form of delivery of the medicine, and the quality of the medicine, respectively, influenced their prescribing behavior. This is a lower influence than in a study conducted in Nairobi, which revealed that the form of delivery of the medicine influenced the prescribing behavior of 85.8% of physicians [24], but a higher influence than in a study done in Saudi Arabia, in which 46.2% of physicians' prescribing decisions were influenced by the source company producing the drug [32]. The study conducted in Addis Ababa also reported that the quality of the medicine influenced the prescribing behavior of 34.5% of physicians [25]. The difference among studies might be attributed to companies spending a lot of money every year on innovation, producing products improved in dosage, indication, side effects and cost compared with older ones. Physicians select new-generation and improved medication products after MRs inform them. Diagnosis is also difficult because there is no fully equipped laboratory at the health facilities in the town, so physicians prescribe drugs with broader coverage. As the government community health insurance scheme continues to grow in the town as a strategy for reducing catastrophic financial shock [38], physicians prescribe what is considered to be of good quality because the insurance scheme covers patients' costs. Most communities nowadays have increased awareness of the importance of quality, which also contributes.
The city is close to the port, and the absence of strict regulation by the Ethiopian food and medicine regulatory authority at border areas allows a supply of medicines of questionable quality. In Dessie town, there are many pharmacies and wholesalers where medicine transactions are very active, and it is difficult for doctors to distinguish counterfeit drugs from original products. For this reason, physicians prefer known sources and good-quality medicines. China and India are the leaders in counterfeit drug production [39], so drugs coming from those countries are often perceived to be of unproven quality. The qualitative findings reported that changes in the preparation of drug formulations and knowing the country where a drug is manufactured make physicians think the drug has good quality, influencing their work when prescribing medicine to patients.
In the present study, 88.2, 64 and 50% of physicians perceived that pharmaceutical product availability, the inclusion of the medicine in the hospital medicine list, and the availability of local agents representing the principal company, respectively, influenced their prescribing behavior. This is a higher influence than in a study conducted in Nairobi, which stated that medicine availability influenced 65.1% of physicians' prescribing behavior, availability of the medicine in hospital formularies 45.4%, and a local agent representing the principal company 31.8% [24]. This might be due to various reasons: people move from place to place for different reasons, and the spread of disease in the town increases, leading to the occurrence of diseases not seen before. Moreover, the climate difference between the town and its surroundings increases the spread of disease and the number of patients. To use this opportunity, companies maintain local agents, as in Addis Ababa, to fulfill this need. The World Health Organization and Ethiopian pharmaceutical policy emphasize that each health facility should develop its own facility-specific medicine list, which obliges physicians to use the drugs available there. Key informants also revealed that improving the pharmaceutical supply chain had made their medical work easier and had an effect on them.
Regarding pricing strategy, the present study found that physicians perceived their prescribing behavior to be influenced by disclosure of the actual price of the product (68.4%), the price of the drug relative to the effectiveness of therapy (82.3%), and price competition among pharmaceutical companies (44.1%). This finding is higher than a study conducted in Addis Ababa, in which the influence of medicine pricing on physicians' prescribing behavior was 23% [25]. The result also represents a higher influence than a study in Nairobi, which reported that the price of the drug in relation to the severity of the indication influenced the prescribing behavior of 56.4% of physicians, and a lower influence than the 81.6% of physicians in that study influenced by price in relation to competing products [24].
The difference among studies might be attributed to the fact that, according to the World Bank, Ethiopia is a lower-income country [40]. Although the purchasing power of the community varies, physicians prescribe medications with consideration of patients' wealth. The current medical system considers physicians and patients the pillars of decision-making on what treatment to begin for a patient's condition. Physicians compare the relative costs and effects of different types of drugs and weigh the strengths and weaknesses of alternatives to determine which pharmaceutical products to select and prescribe for patients [41]. The pharmaceutical policy in the country allows companies to change the prices of their products freely as they need, and different levels of markup are utilized by different pharmaceutical companies. Physicians choose the least costly alternatives when the outcomes of two or more drugs are virtually the same. Government regulation and insurance companies' guidelines might also contribute to this variation. The qualitative findings reported that knowing the price of the drug and making discounts for patients is beneficial and influenced their prescribing behavior.
In this study, 61, 52.2, 58.8 and 52.2% of physicians perceived that the promotion, product, price and place strategies, respectively, influenced their prescribing behavior, and there was a statistically significant difference among the marketing mix strategies (p < 0.001). This might be because different companies develop and combine marketing strategies with differing efficiency, which makes them distinct from one another. The ultimate objective of the pharmaceutical marketer is to devise a product that will be seen as different in the eyes of physicians. Pharmaceutical companies utilize four basic ingredients (promotion, place, product and price) to achieve a large market share [42]. Companies competing with each other for better profits by using different strategies will generate more drug sales and increase market share in the town.
The qualitative part of the current study indicated that physicians working in public and private hospitals perceived the influence of pharmaceutical marketing mix strategies on their prescribing behavior. This finding is also supported by a study done in Yemen, which reported that pharmaceutical companies and their marketing activities had effects on prescribing behavior [43].
Regarding the determinants of influence on physicians' prescribing behavior, a significant result was found only for two variables: working area (p = 0.04) and specialty (p = 0.01). A change in a physician's specialty to ear, nose and throat will shift perceived influence by − 0.06; changing the working area between the public and private sectors will shift perceived influence by 0.15. This might be because the health care system in the private sector is open, so pharmaceutical company MRs easily meet individual doctors. In addition, although physicians are expected to prescribe generic drugs, they mostly prescribe brand drugs, so promoters find them easily. Patients visiting the private sector want better medicines prescribed for their condition. Currently, pharmaceutical companies bring specific medications to the market for specific uses by modifying drug preparations. During this time, they directly contact specialized physicians to prescribe those medications; they choose a few selected specialists and have them prescribe more of their drugs.
The availability of many pharmaceutical businesses operating in close proximity, even within one building, is an important catalyst for growth among pharmaceutical companies, as they benefit from the value chain that exists within a city. This makes them focus on physicians in the city and keep them updated only on currently available products. Irrational prescribing is one of the factors weakening the health care system in the country. A study conducted in Ethiopian referral hospitals reported a mean number of 5.1 drugs per prescription [44]. One of the reasons for this is the influence of pharmaceutical companies on physicians, contributing to the deterioration of the medical system. This leads to a reduction in the quality of pharmacotherapy, wastage of resources, high treatment costs, resistance to antibiotics, and more serious illness [45].
Pharmaceutical companies have to work ethically if effective health care services are to be given to patients. Both the patient and the doctor need to make treatment decisions cooperatively. The regulatory agency in the country should make appropriate laws and implement them. Functional drug and therapeutics committees and drug information centers must be in place in private and public health facilities. Physicians also need to refrain from unethical inducements provided by pharmaceutical companies, which are unnecessary and add no scientific knowledge to their work.
Practical implications of the study
There is a need for good rational prescribing practice and delivery of the health care system in Dessie city. Consequently, the influence of pharmaceutical companies on physicians should be examined. The findings of this research are relevant for creating awareness of the influence of pharmaceutical marketing mix strategies on physicians' prescribing behaviors. The results obtained in the quantitative and qualitative studies were complementary. This can help responsible stakeholders to formulate interventions to maximize rational prescribing in the delivery of health care services. In addition, the findings of this study give a clue for further investigation in the area and for evaluating the ethical practices of pharmaceutical marketing mix strategies.
Strengths and limitations of the study
This study assessed the influence of pharmaceutical marketing mix strategies on physicians' prescribing behavior using both qualitative and quantitative study designs. As a limitation, the present study was unable to determine the temporal effect of a company's marketing strategy on physician prescribing behavior due to the cross-sectional nature of the study. To measure the influence of pharmaceutical marketing mix strategies on physicians' prescribing behavior, respondents were asked Likert-type questions answered by selecting an appropriate choice on a scale for a given list of activities performed as pharmaceutical marketing mix strategies. This prevented measuring the association of individual pharmaceutical marketing mix strategies with the influence on physicians' prescribing behavior. Likert-scale questions fail to measure the true attitudes (opinions) of physicians, as the spaces between choices (on a 5-point scale) are not equally distant. Moreover, respondents usually avoid choosing extreme values on the scale; hence, there is a central tendency error committed by our respondents related to the nature of the data collection tool. The findings of this study may not represent other health care facilities of the country, since it was conducted in the stated town and the context may vary. The voluntary nature of this study may also affect the findings, as it is difficult to know the extent to which participants were truthful in the perceptions and opinions they shared with the PI or whether they were telling the PI the things they thought the PI wanted to hear.
Conclusion
Pharmaceutical manufacturers try to influence physicians through a variety of strategies to increase their market share by inducing more prescriptions. More than half of physicians prescribe medication under the influence of pharmaceutical manufacturers. Nearly two-thirds of physicians perceived their prescribing behavior as influenced by the promotion, product, place and price strategies.
Participating in company-sponsored CME, frequency of visits and information from MRs, information from promotional drug brochures, and invitations to visit a pharmaceutical manufacturing plant; the country of manufacture of the pharmaceutical product, the image of the pharmaceutical company, the quality of the medicine, supportive evidence of the efficacy of the medicine given by the pharmaceutical company, and the release of drug innovations from the product strategy; pharmaceutical product availability, inclusion of the medicine in the hospital medicine list, and availability of a local agent representing the principal company; and the price of the drug relative to the effectiveness of therapy, disclosure of the actual price of the product, and the price of medication relative to quality were the strategies that influenced most physicians in their prescribing behavior. Pharmaceutical companies target physicians based on their specialty and their working area.
The qualitative study generated some additional insights into the influence of pharmaceutical marketing mix strategies. Key informants revealed that their prescribing behavior was influenced by pharmaceutical marketing mix strategies, and these insights should be viewed as preliminary complementary propositions that are not necessarily fully generalizable. All concerned stakeholders should work together to ensure a good health care system and proper drug use. Strengthening regulation and maintaining ethical practice would help to rationalize physicians' prescribing practice.
Effect of Licorice (Glycyrrhiza glabra) Extract as an Immunostimulant on Serum and Skin Mucus Immune Parameters, Transcriptomic Responses of Immune-Related Gene, and Disease Resistance Against Yersinia ruckeri in Rainbow Trout (Oncorhynchus mykiss)
This study was designed to appraise the effect of licorice herbal supplement on the immune status of rainbow trout fingerlings. Accordingly, five diets were formulated with different levels of licorice extract (LE) including 0 (control), 0.5 g kg−1 (LE0.5), 1 g kg−1 (LE1), 2 g kg−1 (LE2), and 3 g kg−1 (LE3). The fingerlings (10.0 ± 0.1 g initial mean weight) received the diets in triplicates (30 fish in each replicate) for 56 days. The results showed that the white blood cells and their differential number (lymphocytes and monocytes) were remarkably increased by LE2 supplementation (P < 0.05). The oral administration of LE2 significantly increased the levels of serum immunoglobulin (Ig), lysozyme activity, and complement components (C3 and C4) compared with others. Meanwhile, the serum bactericidal activity against Yersinia ruckeri in LE2 and LE3 treatments was significantly higher than others except for LE1 (P < 0.05). In addition, serum alternative complement activity significantly improved in all treated groups except LE0.5 compared with the control group (P < 0.05). In terms of skin mucosal immunity, the fish fed with LE2 and LE3 diets exhibited notably higher lysozyme activity, alkaline phosphatase activity, and Ig value than other groups (P < 0.05). The highest skin mucus bactericidal activity against Y. ruckeri was obtained in LE2 treatment (P < 0.05). In addition, dietary LE2 significantly increased the relative expression of immune-associated genes including tumor necrosis factor-α, interleukin-1β, interleukin-8, and IgM, and these treated groups showed higher values than the control group. The cumulative mortality of fish against Y. ruckeri infection was notably reduced from 53.6% in the control group to 29.0% in LE3 treatment. Overall, the dietary administration of LE at 2 g kg−1 had the best effects on immunocompetence in rainbow trout.
INTRODUCTION
Over the past decade, the global production of rainbow trout (Oncorhynchus mykiss) has grown significantly, nearly doubling. However, the immune system of fish can be suppressed by various stressors under captive conditions, and the emergence of infectious diseases has hampered the growth performance and survival rate of rainbow trout in aquaculture (1). It has been widely shown that the improvement of aquafeed plays a chief role in fish health status (2)(3)(4). Therefore, a healthy balanced diet should not only be formulated based on the essential requirements of aquatic animals but should also contain ingredients that ameliorate the overall performance of fish in captivity (5)(6)(7). In this regard, several functional feedstuffs, such as acidifiers, probiotics, and medicinal plants with diverse biological activities including antioxidant and immune-boosting properties, have entered aquaculture nutrition (8)(9)(10)(11). In recent years, there has been increasing attention to the use of herbal supplements in aquaculture due to their high potency and practicality (i.e., availability, relatively low price, easy processing, and eco-friendliness) (8)(9)(10)(11). In fact, herbal supplements display immunostimulatory effects on the host due to several bioactive compounds that are unique to each medicinal plant species (12).
Licorice, Glycyrrhiza glabra, is a medicinal plant belonging to the Fabaceae family and native to the Mediterranean, southern Russia, and Asia. In addition, it is cultivated throughout Europe, the Middle East, and Asia (13). The roots are the most important part of the plant and are widely used in the pharmaceutical, health, and food industries (13,14). Flavonoids, glycyrrhizinic acid (glycyrrhizin), glabridin, liquiritin, and liquiritigenin are the main biomolecules of the roots, with potential antimicrobial, anti-inflammatory, antihepatotoxic, antimutagenic, and antioxidant effects (14)(15)(16)(17). Recent studies have found that dietary licorice root can boost the non-specific immune system in several fish species, such as common carp, Cyprinus carpio (18), and Nile tilapia, Oreochromis niloticus (19). Moreover, other findings showed that licorice root powder exerts hepatoprotective and anti-stress effects in aquatic animals (20,21). Herbal extract supplements have become more common in aquaculture nutrition than botanical powders due to their higher absorption rates, higher content of bioactive compounds, and the lower dietary dosage needed to obtain medicinal benefits. As with most plant extracts, licorice root extract has greater potency and consistency than the powder form due to its higher content of bioactive phytochemicals, especially glycyrrhizin (22). However, information about the dietary effects of licorice root extract on immunological defense mechanisms in fish or shellfish is scarce. Therefore, in this research, the influence of dietary licorice extract (LE) on the immune system (skin mucus and serum immune responses), the expression of related genes, and Yersinia ruckeri resistance was evaluated for the first time in rainbow trout, one of the most widely introduced species in the finfish aquaculture industry.
Plant Hydro-Alcoholic Extraction Process
The dried and cleaned rhizomes and roots of licorice (G. glabra) were obtained from Dineh Company, Tehran, Iran. They were pulverized into a fine powder (0.2 mm) using a grinder. The powder (300 g) was then soaked in 70% ethanol (1.5 L) at 4 °C, and the mixture was stirred periodically. After 72 h, the mixture was filtered through Whatman No. 1 paper. The solvent (ethanol) was removed with a rotary vacuum evaporator at 90 rpm and 50 °C. A final 50 ml of concentrated extract was obtained from the 300 g of dried rhizomes. Finally, the extract was dried at room temperature and then stored in a refrigerator until use (23).
Experimental Fish
In this study, 500 rainbow trout fingerlings were purchased from a private farm in Firuzkuh (Tehran, Iran) and transferred alive to the Khojir Research Station (Tehran, Iran). Fish health was checked by examining physical appearance and behavior, such as skin color, swimming style, eye and gill status, spinal abnormalities, fin rot, and external parasitic contamination. Before the start of the feeding trial, the fish were held for 2 weeks in circular tanks (1,000 L) to acclimate to the new conditions (water temperature of 15.0-15.8 °C, dissolved oxygen of 7.5-8.5 mg L−1, and pH 7.2-7.8), during which they were fed the basal diet. At the end of the 14 days, 450 rainbow trout (mean weight ± SD; 10.0 ± 0.1 g) were distributed among 15 square concrete tanks (700 L; 30 fish per tank) with a 0.3 L s−1 water flow rate. During the rearing period, the fish were fed three times a day (8:30, 12:30, and 16:30) to apparent satiety for 56 days.
Blood and Epidermal Mucus Sampling
At the end of the rearing period, the fish were deprived of feed for 24 h, and three fish were randomly caught from each tank and anesthetized with clove powder (150 mg L−1). Then, 2 ml of blood was drawn from the caudal vein of each rainbow trout with a syringe. To measure hematological parameters, part of each blood sample was transferred to a heparinized tube. The other part was transferred to a non-heparinized tube, kept in the refrigerator for 4 h (for clotting), and then centrifuged at 3,000 ×g for 15 min at 4 °C. The collected supernatants (serum) were stored at −80 °C for further analysis.
To measure mucosal immune parameters, three fish were randomly caught from each tank and individually placed in polyethylene bags containing 2 ml of sodium chloride solution (50 mM). Each fish was gently rubbed for ∼2 min; the skin mucus was collected and centrifuged at 1,500 ×g for 10 min, and the supernatants were then transferred to sterile tubes and kept at −80 °C for further experiments (24).
Blood and Skin Mucus Immune Parameters
White blood cells (WBCs) were counted using a hemocytometer slide on the basis of the method of Martins et al. (25). In addition, the differential count of WBCs, including neutrophils (Neu), lymphocytes (Lym), monocytes (Mono), and eosinophils (Eos), was performed by preparing and staining blood smears according to the method recommended by Borges et al. (26) and examining them under an optical microscope (Alphaphot-2 YS2, Nikon, Japan). The total protein (TP) values of the skin mucus and serum samples were estimated using the method described by Lowry et al. (27).
Total immunoglobulin (Ig) of the serum or skin mucus sample was measured according to the method previously recommended by Siwicki and Anderson (28), in which the amount of Ig was calculated by subtracting the protein concentration of the sample before and after the addition of polyethylene glycol.
The serum or skin mucus lysozyme (LYZ) activity was measured according to the protocol described by Ellis (29) using the turbidimetric method. For this purpose, 50 µl of the serum or skin mucus sample was mixed with 2 ml of bacterial suspension in citrate-phosphate buffer (0.2 mg ml−1 in 0.05 M sodium phosphate buffer, pH 6.2). Then, the reduction in light absorption of each sample over 15 min at 450 nm was measured with an ELISA reader (800 TS, BioTek Instruments Inc., USA). The LYZ activity in each serum or mucus sample was calculated using a standard egg-white LYZ curve (Sigma-Aldrich).
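As a rough illustration of the standard-curve step, the following sketch fits a linear curve to hypothetical hen egg-white lysozyme (HEWL) standards and converts a sample's absorbance-decline rate into an equivalent activity; all numbers are invented placeholders, not values from the study.

```python
import numpy as np

# Hypothetical standard-curve data: HEWL concentrations (ug/ml) vs.
# measured rate of A450 decline (per min).
std_conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
std_rate = np.array([0.004, 0.009, 0.018, 0.037, 0.074])

# Fit the linear standard curve: rate = slope * concentration + intercept.
slope, intercept = np.polyfit(std_conc, std_rate, 1)

def lysozyme_activity(sample_rate):
    """Convert a sample's A450 decline rate into an equivalent
    HEWL concentration via the standard curve."""
    return (sample_rate - intercept) / slope

print(f"Sample activity: {lysozyme_activity(0.021):.2f} ug/ml HEWL equivalent")
```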
Serum alternative complement activity (ACH50) was measured on the basis of the method described by Yano (30) using sheep red blood cell hemolysis. The volume yielding 50% hemolysis was estimated and, in turn, used for quantifying the complement activity of the serum. In addition, the serum complement components (C3 and C4) and skin mucus alkaline phosphatase (ALP) were measured using the relevant commercial diagnostic kits for fish (Hangzhou Eastbiopharm Co., Hangzhou, China) with a clinical automated blood analyzer (Prestige 24i, Tokyo Boeki, Japan), as previously used for rainbow trout (31). Serum and mucus bactericidal activity (BA) against Y. ruckeri (KC291153; isolated from infected rainbow trout and maintained at the Faculty of Veterinary Science, University of Tehran, Iran) was evaluated according to the method described by Fazelan et al. (32). In brief, the bacterium was cultivated in a nutrient broth, washed, and suspended in phosphate-buffered saline (PBS). The optical density of the bacterial suspension was adjusted to 0.5 at 600 nm (33). This suspension was serially diluted five times (1:10) with PBS. Afterward, 20 µl of the serum and 100 µl of the mucus were added to 2 µl of the bacterial suspension and incubated at 22 °C for 1 h. In addition, PBS (20 µl) was used as the control. Finally, the samples were cultured on trypticase soy agar medium at 22 °C for 24 h, and the colonies grown on the plates (in triplicates) were counted to determine BA against Y. ruckeri as log10 colony-forming units (CFU) ml−1.
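The colony counts are converted to log10 CFU ml−1 in the usual way; below is a minimal sketch of that conversion with hypothetical plate numbers (the dilution factor and plated volume are illustrative assumptions, not the study's values).

```python
import numpy as np

def log10_cfu_per_ml(colony_count, dilution_factor, plated_volume_ml):
    """Convert a plate colony count to log10 CFU/ml of the
    original incubation mixture."""
    cfu_per_ml = colony_count * dilution_factor / plated_volume_ml
    return np.log10(cfu_per_ml)

# Hypothetical example: 85 colonies on a plate from a 1:1000 dilution,
# with 0.1 ml spread on the agar.
print(f"{log10_cfu_per_ml(85, 1_000, 0.1):.2f} log10 CFU/ml")
```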
Gene Expression Study
To evaluate the expression of the immune-related genes, three fish were randomly taken from each tank, and after euthanasia with clove powder, the anterior kidney was removed on ice and immediately stored at −196 °C in sterile tubes. In the Genetic Laboratory of the Zakariya-Razi Complex Center (Science and Research University, Tehran, Iran), 100 mg of the head kidney was ground, transferred to sterilized tubes, and subjected to total RNA extraction with an RNX-Plus kit (SinaClon Co., Tehran, Iran) according to the manufacturer's instructions. DNase (Invitrogen, USA) was used to remove possible genomic DNA from the extracted RNA. The extracted RNA was stored at −80 °C until cDNA synthesis, for which a cDNA synthesis kit (GeNet Bio Co., Daejeon, South Korea) was used. Accordingly, 2 µl of the template RNA was transferred to a 0.2-ml tube, and oligonucleotide reagents and primers were added. The mixture was then incubated at 44 °C for 60 min. In the next step, the reaction was subjected to 75 °C for 5 min to inactivate the cDNA-degrading enzymes. The volume of the solution containing the synthesized cDNA was adjusted to 20 µl (34).
Quantitative real-time PCR was carried out using a SYBR Green Master Mix kit (GeNet BIO Inc., Daejeon, South Korea) in a real-time thermal cycler (Rotor-Gene Q, QIAGEN, Germany). The thermal profile consisted of 2 min at 94 °C for initial denaturation; 40 cycles of 30 s at 94 °C, 30 s at 62 °C (annealing), and 45 s at 72 °C; and one cycle of 5 min at 70 °C for the final extension.
The intensity of fluorescence at the end of each cycle was recorded by the qPCR device (Rotor-Gene Q). The β-actin gene was used as a reference gene to normalize the expression of the target genes. The primers used in this experiment were designed using Primer3 online software version 0.4 (Table 1). The relative expression of the candidate genes in the different treatments was calculated on the basis of the 2^(−ΔΔCt) method.
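For readers unfamiliar with the Livak method, here is a minimal sketch of the 2^(−ΔΔCt) calculation with β-actin as the reference gene; the Ct values are hypothetical placeholders, not data from this study.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Livak 2^(-ddCt): fold change of a target gene normalized to the
    beta-actin reference gene and calibrated against the control group."""
    d_ct_sample = ct_target - ct_ref              # normalize to reference gene
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # calibrator (control diet)
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: TNF-alpha in an LE2 fish vs. a control fish.
fold = relative_expression(ct_target=24.1, ct_ref=18.0,
                           ct_target_ctrl=26.3, ct_ref_ctrl=18.2)
print(f"Fold change: {fold:.2f}")  # > 1 indicates upregulation
```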
In vivo Fish Infection Model
After the feeding trial, the resistance of the remaining fish against Y. ruckeri (KC291153), the cause of enteric redmouth disease in rainbow trout, was evaluated for 10 days. In summary, the fish from the different treatments (N = 10) were transferred into 15 circular fiberglass tanks (150 L) located in the Research Station laboratory under isolated and quarantined conditions. A dose of 1 × 10⁷ bacterial cells/ml [based on the median lethal dose (LD50) test] in PBS was prepared from Y. ruckeri using standard McFarland tubes, and the fish were injected intraperitoneally (0.1 ml per fish). The fish were fed the experimental diets during the challenge test. The mortality and clinical signs of the infected fish were recorded daily. In addition, standard bacteriological culture was performed on the kidney tissue samples of all dead fish to confirm that death was caused by the pathogenic bacterium.
The cumulative mortality was calculated on the basis of the following equation: Cumulative mortality (%) = (total number of dead fish/ total number of stocked fish) × 100.
Statistical Analysis
First, the homogeneity and normality of the data were verified using Levene's test and the Kolmogorov-Smirnov test, respectively. Then, the data were analyzed by one-way ANOVA using SPSS software (version 20). Mean comparisons between the different treatments were made with Tukey's multiple-range test at the 5% probability level (P < 0.05).
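A minimal sketch of this analysis pipeline in Python (SciPy/statsmodels standing in for SPSS) is given below; the group means and tank values are invented placeholders, not the study's data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
# Hypothetical serum values (three tank replicates per diet).
groups = {d: rng.normal(loc=m, scale=2.0, size=3)
          for d, m in [("control", 20), ("LE0.5", 21), ("LE1", 24),
                       ("LE2", 29), ("LE3", 27)]}

# Normality per group (K-S test on standardized data) and homogeneity
# of variances across groups (Levene's test).
for name, x in groups.items():
    z = (x - x.mean()) / x.std(ddof=1)
    print(name, "K-S p =", stats.kstest(z, "norm").pvalue)
print("Levene p =", stats.levene(*groups.values()).pvalue)

# One-way ANOVA followed by Tukey's multiple-range test at the 5% level.
print("ANOVA p =", stats.f_oneway(*groups.values()).pvalue)
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), 3)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```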
Total and Differential Leukocyte Counts
Cellular immune responses of rainbow trout at the end of the feeding trial are presented in Table 2. The WBC count in the groups fed the LE1 and LE2 diets was significantly higher than in the LE0.5 and control groups (P < 0.05). In addition, the Lym and Mono percentages in the fish fed dietary LE2 were remarkably higher than in those receiving the LE-free (control) diet. Conversely, the lowest percentage of neutrophils belonged to the trout fed the diet containing LE2, a significant decrease compared with the control fish (P < 0.05). In the case of eosinophils, no significant differences were recorded between the experimental groups (P > 0.05).
Serum Immune Parameters
As shown in Figure 1, the serum immune responses were affected by the different levels of LE in rainbow trout. The fish fed the LE2 diet showed significantly higher Ig, LYZ, C3, and C4 values than the other treatments (P < 0.05). In addition, TP was remarkably higher in fish fed the diets supplemented with LE2 and LE3 than in the other groups (P < 0.05). ACH50 activity was significantly enhanced in response to the diets containing LE1, LE2, and LE3, whereas no significant difference was observed between the LE0.5 treatment and the control group (P > 0.05). Furthermore, the BA was significantly increased in the serum of the fish treated with dietary LE, and the strongest reduction of Y. ruckeri counts was recorded in the fish fed the LE2 and LE3 diets.
Immune-Related Gene Expression
The expression of immune-related genes was significantly upregulated by LE dietary supplementation (Figure 3). The expression of the IL-1β, IgM, and TNF-α genes was significantly increased in the LE2 group (P < 0.05). In addition, the fish treated with the LE2 and LE3 diets showed a significantly higher level of IL-8 gene expression compared with the control group and the LE0.5 treatment (P < 0.05).
Challenge Test
The trend of rainbow trout mortality during the 10-day experimental challenge with Y. ruckeri infection is shown in Figure 4. A significant reduction (P < 0.05) in mortality was observed in the fish fed LE at concentrations higher than 0.5 g kg−1. The highest percentages of mortality were recorded in the control (53.66%) and 0.5 g kg−1 LE (50.00%) groups at the end of the bacterial challenge test. In the groups fed 2 and 3 g kg−1 LE, however, mortality started one day later (on the fourth day). The lowest mortality was observed in the fish fed the LE2 diet (29%), which was not significantly different from the LE3 treatment (33.66%; P > 0.05).
DISCUSSION
The application of feed supplements to improve the immune system of aquatic animals can reduce the economic losses caused by infectious disease outbreaks in aquaculture. In this regard, traditional plants are a common strategy to boost the immune system owing to their remarkable antimicrobial, antioxidant, and therapeutic properties (3,5,7). Licorice is one of the most important medicinal plants and can act as a strong immunostimulant, which is supported by our results in rainbow trout. However, the indicators of immunocompetence at the highest dietary LE level were slightly lower than in the fish of the LE2 treatment. This outcome may be associated with various factors, such as the presence of anti-nutritional factors (saponin), allergic reactions, hyperglycemia, excessive excretion of potassium ions, and elevated plasma pH resulting from the consumption of a high level of licorice (5,20). Clarifying the exact mechanisms in fish, however, requires extensive future studies. Leukocyte quantity and differential count are vital parameters of the specific and non-specific immune systems of fish that can be beneficially affected by adding herbal supplements to the diet (2,4,35,36). On the basis of the results, the highest number of total leukocytes was observed in the LE-supplemented groups and was significantly higher than in the control group. The bioactive compounds in licorice (more than 20 triterpenoids and nearly 300 flavonoids), especially glycyrrhizin, 18β-glycyrrhetinic acid, licochalcone A and E, glabridin, and liquiritigenin, exert strong antimicrobial properties, which can modulate immune system functions (37). Similar results were reported in great sturgeon, Huso huso (38), and striped catfish, Pangasianodon hypophthalmus (35), orally administered Origanum vulgare and Phyllanthus amarus, respectively. The main components of the WBCs engaged in the innate immune system are neutrophils, monocytes, macrophages, and eosinophils, which are potentially involved in bactericidal activities and phagocytosis (35). Unlike the innate mechanisms, B lymphocytes play a prominent role in specific immunity by producing antibodies against pathogens (36). The findings of the current study indicated that supplementation of rainbow trout with LE, especially at the level of 2 g kg−1, increased the lymphocyte and monocyte percentages, revealing the positive impact of dietary LE on the defense system of rainbow trout. Moreover, our findings indicated that the neutrophil count follows a pattern inverse to that of the lymphocytes. In agreement with our results, Rashmeei et al. (24) showed a decrease in the neutrophil count of goldfish (Carassius auratus) with increasing dietary chasteberry (Vitex agnus-castus) extract concentration. Neutrophils and eosinophils are motile cells that accumulate at infection sites and kill pathogens by producing O2− and OH− ions through the respiratory burst (39). However, we could not find a strong explanation for the effect of LE on the neutrophils; further studies are therefore needed to elucidate the dietary effect of LE on fish neutrophils. The humoral immune system has many molecules with diverse activities. For instance, the complement system contains 35 types of soluble proteins that play a vital role in chemotactic, inflammatory, and phagocytic responses (40).
Besides, LYZ and Ig are two other known components of humoral immunity, involved in bacterial wall lysis and virus neutralization, respectively (36). In our study, rainbow trout fed the LE2 diet showed higher values of serum LYZ, C3, C4, ACH50, and Ig than the control fish. Similarly, Abdel-Tawwab and El-Araby (19) reported that Nile tilapia fed a diet containing 20 g kg−1 licorice powder showed remarkably increased serum LYZ activity. In addition, Adineh et al. (18) reported higher serum LYZ and ACH50 activities as well as Ig values in common carp fed diets enriched with different levels of licorice powder. Improved liver function due to licorice root is one of the main reasons for the increased serum TP, C3, C4, and ACH50 levels, as previously suggested by Elabd et al. (20) and Yin et al. (21). These findings may be related to the role of LE in improving the function of the liver and other organs that produce serum proteins (18). Earlier reports indicated that dietary administration of licorice (main active compound: glycyrrhizin) can decrease liver damage owing to the hepatoprotective and antioxidant effects of G. glabra in fish (19,20). In fact, the protective effect of licorice against liver fibrosis and cirrhosis may be related to its anti-inflammatory activity and enhancement of antioxidant defense in the host (41). In addition, an increase in the serum TP content can be accompanied by increases in the Ig and total globulin levels (17). Moreover, it seems that the increased number of lymphocytes could beneficially affect the LYZ activity and Ig value of rainbow trout (42,43). In this study, the highest level of serum BA was recorded in the fish fed the LE2 diet, as indicated by the lowest Y. ruckeri count. This may be explained by improved integrity of the mucosal tissues due to the phenolic compounds (mainly glycosides of liquiritigenin and isoliquiritigenin) found in LE (44). Similarly, previous studies confirmed that enhancing serum immune responses with diets containing plant extracts such as Taraxacum officinale (45), Viscum album (46), Mentha piperita (47), and Ziziphora clinopodioides (48) led to increased serum BA in rainbow trout.
The skin and gills of fish are constantly exposed to destructive agents such as pathogens, toxins, and stressors (49,50). Therefore, improving their protective layers, such as the epidermal mucus, is crucial for the first line of defense against invading pathogens. The skin mucosal barrier contains a variety of defense biomolecules such as lectins, peptides, LYZ, immunoglobulins, and proteolytic enzymes (44,45). In recent years, several nutritional studies have aimed to improve mucosal immunity (51)(52)(53). A possible mechanism by which immunostimulants regulate mucosal immune parameters is stimulation of, or contact with, the lymphoid tissue associated with the skin or gills (51). Several studies have shown that the oral administration of medicinal plants and their derivatives stimulates lymphoid tissues to secrete more epidermal mucus defense elements (53)(54)(55). Our findings showed that dietary supplementation with different levels of LE, especially at 2 and 3 g kg−1, resulted in a significant improvement in LYZ and ALP activities as well as Ig value compared with the control group. These findings are supported by previous reports on rainbow trout fed other medicinal plants. For instance, Gholamhosseini et al. (56) showed that dietary tarragon (Artemisia dracunculus) improved LYZ and ALP activities and TP value in rainbow trout. In another study, Ghafarifarsani et al. (57) indicated that rainbow trout fed a diet enriched with oak acorn (Quercus liaotungensis) extract had remarkably higher skin mucus LYZ and ALP activities than the control group. The beneficial effects of licorice on the immune responses may be due to immunomodulatory compounds such as glycyrrhizin, as previously reported (18)(19)(20). The present findings on the mucus BA confirmed that LE, as a potential source of antibacterial substances, can improve skin mucus immune parameters in rainbow trout. Similarly, Oroji et al. (48) reported that oral administration of Ziziphora clinopodioides extract significantly increased the BA of skin mucus in rainbow trout. In addition, Ghafarifarsani et al. (57) reported a strong antibacterial effect of skin mucus against Y. ruckeri in rainbow trout treated with dietary oak acorn extract.
Fish immune systems are complex and significantly influenced by nutritional status (10,58). In the context of immune and inflammatory responses, cytokines are soluble glycoproteins that play a major role in regulating immune responses and can be divided into various groups such as interferons, interleukins, tumor necrosis factors, and chemokines (59,60). Several studies have suggested that the expression of pro-inflammatory cytokine genes (IL-1β, IL-6, IL-8, and TNF-α) is upregulated by plant-fortified diets such as Ginkgo biloba (61), Phoenix dactylifera (62), Origanum vulgare (4), and Taraxacum officinale (45). In this study, the expression of the TNF-α and IL-8 genes significantly increased in response to different levels of LE, especially at 2 g kg−1. In fact, TNF-α is involved in initiating and enhancing inflammatory processes against gram-negative bacteria and other pathogens (63). In addition, pro-inflammatory cytokines such as IL-8 play a fundamental role in the acute inflammatory process, the neutrophil oxidative burst, and wound healing (64). In the present study, the expression of the IL-1β and IgM genes significantly increased with increasing dietary LE, peaked at LE2, and slightly decreased thereafter. IL-1β plays a key role in systemic responses to infections and tissue injuries; it has a function similar to that of TNF-α and enhances the activity of LYZ and of cytokines with antibacterial properties such as IL-17 (65). To our knowledge, no data are available on the effect of licorice on the expression of immune-related genes in rainbow trout or other species. However, a pair of studies showed that supplementation of the rainbow trout diet with Polygonum minus (66) and Aloysia citrodora (67) could upregulate the expression of the TNF-α, IL-1β, and IL-8 genes. In this study, the expression of pro-inflammatory cytokine genes was elevated in the LE-supplemented groups, especially at 2 g kg−1, which may indicate the importance of the phytogenic bioactive compounds in LE in modulating immune responses in rainbow trout.
The efficacy of dietary supplements can be evaluated in aquaculture by exposing the fish to pathogens. Y. ruckeri is the cause of enteric redmouth disease, or yersiniosis, a septicemic infection that affects salmonids; rainbow trout is more susceptible to this infection at early life stages (68). This infectious disease is widespread and has caused significant economic losses in the trout aquaculture industry (69). In the present study, rainbow trout infected with Y. ruckeri showed a significantly lower mortality rate in the LE-supplemented groups, especially the LE2 treatment, compared with the control group. The positive effect of various medicinal plants in increasing the survival rate of rainbow trout against yersiniosis has been reported previously (23,57,66,70,71). The increased survival and resistance of rainbow trout against Y. ruckeri infection in the LE2 and LE3 treatments confirm the enhancement of the fish's innate immunity and health, in line with the improvements in the measured serum and skin mucus immune parameters.
Our findings revealed that using LE as a feed additive can enhance innate immune responses in rainbow trout. In this study, a diet containing LE at 2 g kg−1 effectively improved the leukocyte count, the serum and mucus immune responses, and the expression of some immune-related genes in the head kidney. The oral administration of LE, especially at 2 g kg−1, increased the resistance of fish to Y. ruckeri infection. Therefore, the use of LE in the rainbow trout diet as an immunomodulatory agent can be effective in boosting the immune system.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
All the protocols were approved by the Science and Research Branch University, Committee of Faculty of Natural Resources and Environment, Tehran, Iran (Local Approval No. 162409789, 03/15/2020).
AUTHOR CONTRIBUTIONS
MD carried out fish maintenance and sample collection. MS contributed to conception and design of the study. AK organized the database and performed the statistical analysis. All authors read, wrote, and approved the submitted manuscript version.
FUNDING
This study is a part of a Ph.D. thesis in aquaculture and supported by the University of Science and Research Branch (IAU, Tehran, Iran). | 2022-02-23T14:24:55.176Z | 2022-02-23T00:00:00.000 | {
"year": 2022,
"sha1": "287de47aa80ca5f5b8b0b4ba6dbccbcf33c8971b",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fvets.2022.811684/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "287de47aa80ca5f5b8b0b4ba6dbccbcf33c8971b",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259228999 | pes2o/s2orc | v3-fos-license | LIM Kinases: From Molecular to Pathological Features
LIM kinases (LIMKs), LIMK1 and LIMK2, are atypical kinases, as they are the only two members of the LIM kinase family harbouring two LIM domains at their N-terminus and a kinase domain at their C-terminus [...].
LIM kinases (LIMKs), LIMK1 and LIMK2, are atypical kinases, as they are the only two members of the LIM kinase family harbouring two LIM domains at their N-terminus and a kinase domain at their C-terminus. They are dual kinases able to phosphorylate serine, threonine and tyrosine due to their non-canonical catalytic site. They play a crucial role in cytoskeleton remodelling through the independent regulation of actin filament and microtubule turnover. Initially, LIMKs were described as downstream effectors of the signalling pathway controlled by small Rho GTPases. Along this pathway, LIMKs phosphorylate cofilin, an actin-depolymerizing factor, leading to its inhibition and actin filament stabilization. The molecular actors underlying the involvement of LIMKs in microtubule dynamics remain unknown. Therefore, many studies on LIMKs have focused on their roles in cell division, differentiation and migration as aspects of cytoskeleton remodelling.
However, many partners of LIMKs have been identified over the last years, positioning them at the heart of an impressive network of signalling pathways. Along this line, the implication of LIMKs in numerous pathologies has inspired interest in their potential as relevant therapeutic targets. Many small molecule inhibitors targeting their kinase active site have been developed, but none has succeeded at the clinical stage. As such, it appears crucial to gain a better understanding of these kinases in order to develop new efficient therapies. This Special Issue focuses on these different aspects of LIMKs, presenting an interesting overview of the field and opening new exciting perspectives.
Villalonga et al. [1] propose a broad overview of LIMK gene and protein organization, as well as their implication in many physiological and pathological processes. They also depict LIMK partners, substrates and regulators, establishing a detailed scheme of their molecular interactome.
Berabez et al. [2] focus on small molecule inhibitors targeting LIMK kinase activity, and discuss their chemical structure-biological activity relationship. They concentrate on inhibitors that have successfully reached the stage of preclinical assays on animal models of different diseases. They emphasize the failure of these compounds to reach the clinical stage.
Chatterjee et al. [3] highlight the structural features of LIMKs, describing their conformational changes due to ligand binding. They underscore the atypical catalytic mechanism of these kinases, with a focus on substrate recognition and LIMK regulation. They also propose new therapeutic strategies to target these kinases without restricting the field of investigation to their kinase activity.
Ribba et al. [4] shed light on the less known role of LIMKs in embryonic development. They depict LIMK tissue expression during development, and draw on studies of animal models with loss- or gain-of-function mutations in LIMKs, as well as inhibitors targeting LIMKs, to better understand their functions.
Park et al. [5] focus on a recently described function of LIMKs in the male urogenital system. Indeed, these kinases are involved in several processes crucial for proper urogenital function, such as smooth muscle contraction and spermatogenesis, but they also play roles in pathological phenomena such as cavernosal fibrosis and erectile dysfunction. Therefore, LIMKs may be new therapeutic targets against different urogenital disorders.
Brion et al. [6] draw attention to the implication of LIMKs in osteosarcoma, a bone cancer mainly affecting children and adolescents. Many patients still die from this disease, and survival rates have increased only minimally over the last decades; finding new therapies against this cancer is therefore urgent. LIMKs may be among these new therapeutic targets, as many recent papers have described their implication in osteosarcoma. These data are well depicted in this review.
It would also have been interesting to focus on other characteristics of these LIMKs; however, these topics are only briefly mentioned here, as limited relevant information was collated in this Special Issue.
LIMK regulation by miRNA is a well-documented topic in the literature. More than 30 miRNAs targeting LIMKs have been described to regulate their expression, leading to their downregulation and playing a role in various diseases such as cancers, neural pathologies, etc. Long non-coding RNA (LncRNA) and circRNA have also been shown to regulate LIMKs.
Recently, LIMKs have been shown to play a role in viral infections by HIV, Herpes Simplex, Ebola, Rift Valley fever and Venezuelan Equine Encephalitis viruses, and more studies on this topic would be fruitful [7][8][9].
Furthermore, many papers discuss computational calculations, virtual screening, molecular docking and dynamics simulations, and comparative analyses of different kinds of inhibitors (type I, II or III) in order to improve the development of small molecule inhibitors targeting LIMKs [10][11][12][13]. Many efforts are dedicated to improving the pharmacological properties of these molecules, reflecting the urgency of finding new compounds that selectively target these kinases, with the ultimate goal of reaching clinical stages.
The field of research concerning the LIM kinases is still wide open, and many discoveries remain to be made to gain a better understanding of these multitasking kinases. We hope you will read these well-documented reviews with interest and enthusiasm, and that you will greatly increase your knowledge of these amazing LIMKs!
Conflicts of Interest:
The authors declare no conflict of interest. | 2023-06-23T05:23:59.962Z | 2023-06-01T00:00:00.000 | {
"year": 2023,
"sha1": "9e20f06c4816c0d1d3320f24ff70b451b3254f65",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "9e20f06c4816c0d1d3320f24ff70b451b3254f65",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
237637926 | pes2o/s2orc | v3-fos-license | Multi-locus transcranial magnetic stimulation system for electronically targeted brain stimulation
Background Transcranial magnetic stimulation (TMS) allows non-invasive stimulation of the cortex. In multi-locus TMS (mTMS), the stimulating electric field (E-field) is controlled electronically without coil movement by adjusting currents in the coils of a transducer. Objective To develop an mTMS system that allows adjusting the location and orientation of the E-field maximum within a cortical region. Methods We designed and manufactured a planar 5-coil mTMS transducer to allow controlling the maximum of the induced E-field within a cortical region approximately 30 mm in diameter. We developed electronics with a design consisting of independently controlled H-bridge circuits to drive up to six TMS coils. To control the hardware, we programmed software that runs on a field-programmable gate array and a computer. To induce the desired E-field in the cortex, we developed an optimization method to calculate the currents needed in the coils. We characterized the mTMS system and conducted a proof-of-concept motor-mapping experiment on a healthy volunteer. In the motor mapping, we kept the transducer placement fixed while electronically shifting the E-field maximum on the precentral gyrus and measuring electromyography from the contralateral hand. Results The transducer consists of an oval coil, two figure-of-eight coils, and two four-leaf-clover coils stacked on top of each other. The technical characterization indicated that the mTMS system performs as designed. The measured motor evoked potential amplitudes varied consistently as a function of the location of the E-field maximum. Conclusion The developed mTMS system enables electronically targeted brain stimulation within a cortical region.
Introduction
Transcranial magnetic stimulation (TMS) offers means to stimulate a specific cortical region noninvasively [1]. Since its first demonstration in the 1980s with a round coil [2], figure-of-eight coils [3] have become common, as they allow targeting TMS in a more specific manner. To adjust the stimulated cortical location, a TMS coil is typically moved manually. Robotic TMS systems offer an alternative approach [4][5][6]; however, the mechanical coil movement is relatively slow due to inertia and safety limitations. Thus, with a single-coil TMS system, it is practically impossible to adjust the stimulated spot fast, in the neuronally meaningful, millisecond timescale. With single coils, it is also difficult to stimulate distinct nearby targets due to the relatively large coil size [7]. Fast stimulation of multiple cortical sites would enable the study of causal interactions in functional networks and more accurate and personalized treatments for neurological disorders [8]. There is a need for a flexible approach that allows TMS targeting based on real-time physiological feedback and convenient stimulation of nearby targets.
To overcome the slow mechanical coil adjustment and to enable the fast stimulation of different nodes of functional networks, we introduced multi-locus TMS (mTMS), in which a single coil is substituted with a transducer consisting of several coils [9]. By adjusting the relative currents in the coils, the induced electric field (E-field) pattern in the cortex can be modified electronically without coil movement. With mTMS, distinct cortical targets can be stimulated with sub-millisecond interstimulus intervals (ISIs) [10] and physiological feedback can be utilized in a closed loop to automate stimulation protocols [11,12]. Others have also taken steps towards implementing multi-coil TMS [13], with Navarro de Lara et al. presenting a prototype concept based on a 3-axis coil [14].
In this work, we aimed to develop an mTMS system that allows the adjustment of the location and orientation of the E-field maximum within a 2-dimensional (2D) cortical region 30 mm in diameter. Such a system would provide a substantial improvement on the 1-dimensional linear control that we previously achieved with a 2-coil mTMS system [9,15]. Here, we present our new mTMS system and demonstrate its unique capabilities in the context of automatic mapping of the primary motor cortex.
Methods
In this section, we introduce key components of the mTMS system (electronics, transducer). We also present the measurement protocols and analysis methods used to validate the system.
Electronics
The mTMS system is based on independently controlled H-bridge circuits (Fig. 1B). The electronics can be roughly categorized into the following modules ( Fig. 1A): control unit, charging unit, channels, coils, and auxiliary electronics. The control unit is responsible for the low-level control and operation of the system, whereas the charging unit, the channels, and the coils constitute the stimulation-related electronics. A single charging unit is used for charging the channel-specific pulse capacitors. Each coil is connected to its own channel; the electronics can drive up to six coils simultaneously, although here we use only five of them because they suffice for adjusting the stimulated location along two dimensions and rotating the electric field maximum. The auxiliaries contain miscellaneous electronics required for the operation and safety of the device. All the electronics are located inside a grounded metal enclosure.
The control unit is a field-programmable gate array (FPGA; PXIe-7820R; National Instruments, USA), interfaced through a custom-made LabVIEW (National Instruments) program in conjunction with specific logic-level trigger signals from external devices. The LabVIEW program, running on a dedicated computer, has an application programming interface, allowing one to develop components for the mTMS software in other programming environments.
The channels form the core of the power electronics of the system; a channel comprises a pulse module and a discharge controller (Fig. 1B). A pulse module is composed of a high-voltage capacitor (E50.R34-105NT0, 1020 µF; Electronicon Kondensatoren GmbH, Germany) in parallel with a full-bridge circuit. Insulated-gate bipolar transistors (IGBTs; 5SNA 1500E330305; ABB Power Grids Switzerland Ltd., Switzerland) function as the switching elements in the bridge; due to the stray and load inductances present, they are protected by resistor-capacitor snubbing circuits (effectively 1 Ω and 1 µF in series). Custom-made driver boards control the switching of the IGBTs. Together with a coil, a pulse module forms the pulse-forming network of the stimulator. The discharge controller is a printed circuit board mounted directly on the pulse-capacitor terminals. The controller has two functions: first, it controls a discharge resistor (TE1000B1K0J, 1 kΩ, 1 kW; TE Connectivity, USA) parallel to the capacitor; second, a subcircuit on the board monitors the capacitor voltage and reports it to the control unit. Special attention was given to the physical layout of the power electronics to minimize the stray inductance in the pulse module.
The individual coils of a multi-coil transducer are driven by separate channels. Thanks to the true parallelism of the FPGA, the current waveforms through all coils can be precisely controlled simultaneously. To avoid excessive circulating currents in the bridge after a stimulation pulse, the coils are characterized before the transducer is applied for brain stimulation, and the coil-specific waveforms are tuned so that no current is left circulating in the system after a pulse.
The charging unit consists of a high-voltage charger (CCPF-1500; Lumina Power, Inc., USA) connected to a solid-state switching array, providing separate connections to all capacitors, one at a time. The maximum voltage is 1500 V, and the maximum charging time is about 700 ms per capacitor. The monophasic pulse waveforms used in this study (with a 60-µs rise time, a 30-µs hold period, and a 36.6-43.3-µs fall time [16]) reduce the capacitor voltage approximately 5-7% depending on the coil parameters; thus, a capacitor can be recharged to the voltage it had before a pulse in less than 100 ms.
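A back-of-the-envelope check of these recharge figures, using the stated 1020 µF capacitance, the 1500 V maximum voltage, and a 6% per-pulse voltage drop (the midpoint of the quoted 5-7% range):

```python
C = 1020e-6                       # pulse capacitance (F)
V0 = 1500.0                       # pre-pulse voltage (V)
V1 = V0 * (1.0 - 0.06)            # voltage after one monophasic pulse

energy = lambda v: 0.5 * C * v**2
dE = energy(V0) - energy(V1)
print(f"Stored energy at {V0:.0f} V: {energy(V0):.0f} J")
print(f"Energy drawn per pulse:   {dE:.0f} J ({dE/energy(V0):.1%})")
# A charger that restores a full charge in ~700 ms therefore needs well
# under 100 ms to top up this ~12% energy deficit before the next pulse.
```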
The auxiliaries contain various electronics modules that are vital for the proper operation of the mTMS device. Digital temperature sensors (DS18B20; Maxim Integrated, Inc., USA) serve the dual purpose of providing unique sensor identifiers to detect to which connector a particular coil is connected to, while also reporting the transducer temperature. Additionally, a sudden absence of a sensor reply can be utilized to detect a missing coil or a loose connection and to initiate an emergency shutdown. Most of the communication to the electronics on the high-voltage side is done via a communications interface that converts the electrical signals to and from the control unit into optical ones. Optical signaling provides a layer of isolation while also having excellent noise characteristics. Other circuit boards in this category include a power distribution module that delivers the required direct current power to the stimulator electronics, and an optically isolated trigger board that provides an interface for external triggering.
Device operation
The operation of the mTMS device is based on forced current feed through the transducer coils, which is achieved by manipulating the electrical topologies of the coil-specific bridge circuits; see Fig. 2 [17][18][19][20][21]. Depending on the states of the IGBTs, a bridge circuit either connects its respective pulse capacitor in series with the coil connected to the channel ( Fig. 2A,C), resulting in a damped oscillator circuit, or cuts the capacitor completely out of the circuit while also short-circuiting the coil's ends (Fig. 2B). Even though the capacitor-coil circuit is oscillatory in nature, the duration we keep the capacitor connected to the coil is very short (tens of microseconds) compared to the oscillation period of the circuit (in the millisecond scale). The resulting current ramps are thus nearly linear. The capacitor-coil series configuration ( Fig. 2A,C) leads to a changing current in the coil, a correspondingly changing magnetic field and an induced E-field in the brain. The coil-short configuration (Fig. 2B), on the other hand, leads to the current already flowing through the coil continuing its circulation, experiencing a slight decay mostly due to the resistance of the coil. The induced E-field due to this relatively slow change of current and magnetic field is negligible.
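The near-linear ramp can be illustrated with a first-order model of one channel; the voltage, inductance, and resistance below are hypothetical operating values of roughly the right order of magnitude, not the measured coil parameters.

```python
# While the capacitor is connected, di/dt ~ V/L (the ~60 us ramp is short
# next to the LC period, so the ramp is nearly linear); while the coil is
# short-circuited, the current decays slowly, di/dt = -R*i/L.
V = 1200.0      # capacitor voltage (V); hypothetical operating point
L = 12e-6       # coil inductance (H); hypothetical
R = 30e-3       # coil resistance (ohm); hypothetical

dt = 1e-6
i = 0.0
for t_us in range(90):
    if t_us < 60:                 # rise phase: capacitor in series with coil
        i += (V / L) * dt
    else:                         # hold phase: bridge short-circuits the coil
        i += (-R * i / L) * dt
    if t_us in (0, 59, 89):
        print(f"t = {t_us + 1:2d} us, i = {i:6.0f} A")
```

With these numbers the current ramps to several kiloamperes in 60 µs and loses only a few percent during the 30 µs hold, consistent with the "slight decay" described above.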
By incorporating multiple coils in a single transducer, the superposition of the E-fields is exploited to manipulate the spatial pattern, intensity, and direction of the E-field induced in the cortex [9]. The polarity and intensity of the E-field pattern from each coil can be manipulated by adjusting the rate of change of the current; this rate is proportional to the capacitor voltage in the corresponding channel. The total E-field induced in the cortex is the vector sum of the E-fields produced by the individual coils [9,13].
mTMS transducer
We designed and built a 5-coil transducer to control the location and orientation of the peak of the induced E-field in a 30-mm-diameter cortical region. The coil winding paths were generated with a minimum-energy optimization method [9] implemented in MATLAB 2020a (The MathWorks, Inc., USA). First, we modeled a commercial figure-of-eight coil (17 cm × 10 cm; Nexstim Plc, Finland) as 2,568 magnetic dipoles on a planar surface placed 15 mm away from the cortical surface (here modeled as a 7-cm-radius sphere using a 2,562-vertex triangular mesh) [22,23]. With the model of the commercial coil, we computed the induced E-field distribution for 8,964 coil placements (747 coil positions with 1-mm steps, 12 orientations with 30° steps in each position) with the maxima of the E-fields covering a 30-mm-diameter cortical region. For each of these E-fields, we computed the corresponding minimum-energy surface current density in an octagonal plane section (30-cm diameter; 961-vertex triangular mesh) that would induce an E-field distribution with similar focality and intensity [21]. The optimization was performed for five distances between the sections and the cortical surface (15 to 27 mm in steps of 3 mm) to account for the winding thickness and mechanical factors that affect the construction of the physical coil. For each distance, we decomposed the optimized surface current densities with singular value decomposition and extracted the first five components, explaining 87.6-97.7% of the total variance depending on the distance. For each distance, we picked one of the five components so that the coils with the fastest attenuation of the E-field (or highest spatial frequencies [24]) were closest to the head [9]. Finally, we obtained the coil winding paths by discretizing the surface current density of each component in isolines [25]. The process resulted in two four-leaf-clover coils (10 turns in each wing) at the bottom, two figure-of-eight coils (12 turns in each wing) in the middle, and an oval coil (26 turns divided into two layers connected in series) at the top.
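A minimal sketch of the decomposition step, with random placeholder data standing in for the real optimized surface current densities:

```python
import numpy as np

# Each column of J is one minimum-energy surface current density
# (stream-function values at the 961 mesh vertices) reproducing the
# E-field of one figure-of-eight coil placement; random placeholder data.
rng = np.random.default_rng(0)
n_vertices, n_placements = 961, 8964
J = rng.standard_normal((n_vertices, n_placements))

# SVD: columns of U are orthogonal surface-current modes ordered by
# explained variance; the first five become the five transducer coils.
U, s, _ = np.linalg.svd(J, full_matrices=False)
modes = U[:, :5]

explained = (s[:5] ** 2).sum() / (s ** 2).sum()
print(f"Variance explained by the first 5 components: {100*explained:.1f}%")
# Discretizing the stream function of each mode into isolines then gives
# the winding path of the corresponding coil.
```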
To manufacture the transducer, we designed five coil formers to accommodate the windings, a 5-mm-thick top cover to protect and insulate the wire connections, and a socket with a wooden rod attached to the top plate to ease transducer handling. The parts were designed in Fusion 360 (Autodesk, Inc., USA). The coil-former thicknesses were 4.0 mm (including a 1.0-mm-thick bottom) for the bottom-most coil, 3.9 mm (0.5 mm) for the top-most coil, and 3.5 mm (0.5 mm) for the coils in-between. The bottom thickness corresponds to the material thickness below the wire grooves. All parts were printed by selective laser sintering of 40% glass-filled polyamide (Proto Labs, Ltd., United Kingdom). Each coil was wound with copper litz wire (1.7-mm diameter; 3-layer Mylar coating; Rudolf Pack GmbH & Co. KG, Germany) in the grooves of the coil former and crimped to the end of a low-inductance TMS coil cable (Nexstim Plc). The assembly was potted and glued with epoxy for further mechanical strength and safety.
To characterize the manufactured 5-coil mTMS transducer, we measured the spatial distribution of the induced E-field, the self-inductance, and the resistance of each coil. The E-field distribution was sampled at 1,000 locations with our TMS characterizer [26], which gives E-field values on a 70-mm-radius cortex in a spherical head model. The center of the transducer bottom was at 85 mm from the center of the spherical head model. The self-inductance was measured with an LCR meter (1-kHz reference frequency; ELC-130; Escort Instruments Corp., Taiwan) and the resistance with a 4-wire measurement set-up using a bench multimeter (HP 34401A; Hewlett-Packard Co., USA). The duration of the pulse waveform was customized for each coil based on measurements with a Rogowski probe (CWT 60B; Power Electronic Measurements Ltd, UK) connected to an oscilloscope (InfiniiVision MSOX3034T; Keysight, USA) to ensure that no current was left circulating in the system after a pulse.
Algorithm for electronic targeting
We applied the following algorithm to target the E-field in the cortex with the 5-coil transducer. In the computations, the cortex was represented by a triangular mesh extracted from individual magnetic resonance images (see Data analysis). First, we specified a target location $\vec{r}_\mathrm{target}$ on the cortical surface and the desired E-field $\vec{E}_\mathrm{target}$ at that location. We required that on the cortex (i.e., $\forall\, \vec{r} \in R_\mathrm{cortex}$, where $\vec{r}$ is the position and $R_\mathrm{cortex}$ the set of mesh nodes constituting the cortex), the E-field magnitude does not exceed $|\vec{E}_\mathrm{target}|$. We searched for the coil currents $\mathbf{I} = [I_1, I_2, \ldots, I_N]^\mathsf{T}$, where $I_i$ is the current in the $i$th coil and $N = 5$ is the number of coils in the transducer, that minimize the magnetic energy needed to induce the desired E-field pattern on the cortex. Given the inductance matrix $\mathbf{L}$, where $L_i$ is the inductance of the $i$th coil and $M_{i,j}$ the mutual inductance between coils $i$ and $j$ (in our case $M_{i,j} \approx 0$ µH due to the designed approximate orthogonality of the coils), we can write the following formulation of the problem:

$$\min_{\mathbf{I}} \; \tfrac{1}{2}\,\mathbf{I}^\mathsf{T} \mathbf{L}\, \mathbf{I} \quad \text{subject to} \quad \vec{E}(\vec{r}_\mathrm{target}) = \vec{E}_\mathrm{target}\,, \qquad |\vec{E}(\vec{r})| \le |\vec{E}_\mathrm{target}| \;\; \forall\, \vec{r} \in R_\mathrm{cortex}\,.$$

This optimization problem is similar to the ones we have encountered when designing optimal TMS coils [21,24]; thus, we solved it with the interior-point method [27]. We approximated each of the nodewise constraints $|\vec{E}(\vec{r})| \le |\vec{E}_\mathrm{target}|$ with a convex constraint set. To keep the number of constraints small compared to the requirements of a 3-dimensional (3D) approximation [21], we applied an iterative approach in 2D. At each node, a 2D projection of the E-field was constrained to lie within a regular 16-gon, which provided a set of 16 linear constraints to restrict the norm of the projected E-field [24]. At the first iteration, we projected out the E-field along the direction of the node normal. In the subsequent iterations, we always started from the full 3D E-field and selected the projected-out direction to be perpendicular to the 3D E-field from the previous iteration. To reduce the number of constraints further, we constrained the E-field only at a downsampled set of those nodes in which its amplitude after the previous iteration exceeded $0.975\,|\vec{E}_\mathrm{target}|$ and a fixed set of four nodes around the target area. At each iteration, we appended the node positions at which $|\vec{E}(\vec{r})| > |\vec{E}_\mathrm{target}|$ to the set $R_\mathrm{max}$, which was initialized before the first iteration as an empty set. We considered the optimization converged when the size of $R_\mathrm{max}$ did not increase.
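For illustration, a minimal numerical sketch of this energy-minimization problem is given below, using SciPy's SLSQP solver in place of the interior-point method and a single pass of the 16-gon projection constraints; the inductance values and the matrices mapping currents to E-fields are hypothetical placeholders, not the actual BEM model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_coils, n_nodes = 5, 50

# Hypothetical stand-ins: Lmat is the (near-diagonal) inductance matrix;
# A[k] maps coil currents to the 2D projected E-field at sample node k,
# and a_t maps them to the E-field at the target (in practice these
# would come from the BEM model).
Lmat = np.diag([10e-6, 11e-6, 12e-6, 14e-6, 25e-6])
A = 1e-3 * rng.standard_normal((n_nodes, 2, n_coils))
a_t = 1e-3 * rng.standard_normal((2, n_coils))
E_target = np.array([100.0, 0.0])          # desired E-field at target (V/m)
E_max = np.linalg.norm(E_target)

energy = lambda I: 0.5 * I @ Lmat @ I      # magnetic energy 1/2 I^T L I

# Equality constraint: induce exactly E_target at the target location.
cons = [{"type": "eq", "fun": lambda I: a_t @ I - E_target}]

# Regular 16-gon approximation of |E(r)| <= |E_target| at each node:
# the projection of E onto 16 evenly spaced directions stays below E_max.
angles = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # (16, 2)
for k in range(n_nodes):
    Ak = A[k]
    cons.append({"type": "ineq",
                 "fun": lambda I, Ak=Ak: E_max - dirs @ (Ak @ I)})

res = minimize(energy, x0=np.zeros(n_coils), method="SLSQP",
               constraints=cons)
print("Optimized coil currents (A):", np.round(res.x, 1))
```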
To account for differences in the pulse waveforms due to coil-specific inductances and resistances, we scaled the obtained solutions (i.e., the applied capacitor voltages) so that the average E-field over the rising phase of the monophasic pulses corresponded to that produced by the optimized currents $\mathbf{I}$.
mTMS motor mapping
To demonstrate mTMS in practice, we conducted a study on a 36-year-old healthy right-handed volunteer who provided written informed consent prior to his participation. The study was approved by an ethical committee of the Hospital District of Helsinki and Uusimaa and carried out in accordance with the Declaration of Helsinki.
Prior to the TMS experiments, we acquired structural magnetic resonance images (MRI) of the subject's head with a 3-T Magnetom Skyra scanner with a 32-channel receiver coil (Siemens Healthcare GmbH, Germany). For online neuronavigation, we acquired a T1-weighted image (cubic 1-mm 3 voxels) with a magnetization-prepared rapid gradient-echo sequence. For E-field modeling with the boundary element method (BEM), we acquired a T1-weighted image with fat suppression and a T2-weighted image (both with cubic 1-mm 3 voxels) [28].
In the TMS session, the participant sat in a chair and was instructed to keep his right hand relaxed. Surface electromyography (EMG) was recorded from the abductor pollicis brevis (APB), first dorsal interosseous (FDI), and abductor digiti minimi (ADM) muscles of the right hand with an EMG device (500-Hz low-pass filtering, 3-kHz sampling frequency; Nexstim eXimia; Nexstim Plc) with the electrodes in a belly-tendon montage. TMS was administered with the 5-coil mTMS transducer driven by our mTMS electronics. The pulse waveforms were monophasic with a 60-µs rise time, a 30-µs hold period, and an appropriate fall time (36.6-43.3 µs depending on the coil) [16]. The transducer placement with respect to the subject's head was monitored with a Nexstim eXimia neuronavigation system (Nexstim Plc).
First, with the bottom figure-of-eight coil and a fixed stimulator intensity, we searched manually for the direction and placement of the transducer leading to the largest motor evoked potentials (MEPs) in the APB (so-called APB hotspot). Then, we determined the resting motor threshold (RMT; 50% of the responses with a peak-to-peak amplitude exceeding 50 µV) of APB with a threshold tracking technique utilizing 20 stimuli at that target [29]. The ISI was randomized between 4 and 6 s.
To acquire a motor map, we kept the mTMS transducer fixed above the APB hotspot and adjusted the stimulation target electronically by varying the relative coil currents to mimic the movement of a figure-of-eight coil in a conventional mapping. We had predefined 100 target points on the left precentral gyrus and the desired E-field direction at each target; in the mapping, we aimed at 54 of these targets (i.e., those that were within the reach of the transducer) and the APB hotspot. The BEM-estimated induced E-field at the aimed location was kept at 110% RMT (i.e., 110% relative to the amplitude of the E-field maximum induced by the figure-of-eight coil at the RMT intensity) and its direction perpendicular to the precentral gyrus. The targets were stimulated in a pseudorandom order with an ISI of 4-6 s. We repeated the mapping 10 times.
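A minimal sketch of such a mapping loop (the target labels and the stimulate() call are hypothetical placeholders for the actual mTMS targeting and trigger interface):

```python
import random
import time

# The 54 reachable precentral targets plus the APB hotspot are stimulated
# in pseudorandom order with a 4-6 s ISI; the whole map is repeated 10 times.
targets = [f"precentral_{i:02d}" for i in range(54)] + ["apb_hotspot"]

def stimulate(target):
    pass  # placeholder: set coil currents for `target` at 110% RMT and fire

for repetition in range(10):
    for target in random.sample(targets, k=len(targets)):  # new order each pass
        stimulate(target)
        time.sleep(random.uniform(4.0, 6.0))               # randomized 4-6 s ISI
```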
Data analysis
We computed the E-fields needed in the motor mapping experiment using a four-compartment volume conductor model and our surface integral solver [30]. We constructed the anatomical model from T1-and T2-weighted MRIs using the SimNIBS headreco pipeline [28]. We downsampled and smoothed the pial, skull, and scalp surface meshes, resulting in a boundary element mesh with a total of 33,592 vertices, of which 21,949 were on the pial boundary (3.5-mm mean vertex spacing on the pial surface). Using these surfaces and LGISA BEM solver [31], we built a four-compartment volume conductor model that contains the brain (conductivity 0.33 S/m), cerebrospinal fluid (1.79 S/m), skull (0.0066 S/m), and scalp (0.33 S/m).
As field computation space, we used a region of mid-cortical surface, which was represented with a dense mesh (11,811 vertices, 0.97-mm mean spacing) around the hand-knob area and with a 2-mm mesh elsewhere around the target region. The coil windings were exported from the design program as ordered point sets that formed polylines with 13,400-20,400 segments per coil. These segments were further discretized using current dipoles, resulting in 13,600-20,800 dipoles per coil model. The E-field was computed in field space reciprocally using the volume conductor model and coil models otherwise as described in [30], but the coil integrals, i.e., the magnetic fluxes through the coils, were computed using the circulation of the vector potential:

$$\Phi_i = \oint_{C_i} \vec{A} \cdot \mathrm{d}\vec{l}\,,$$

where $\vec{A}$ is the magnetic vector potential and $C_i$ the winding path of the $i$th coil.

We split the EMG data into trials around the TMS pulses and subtracted the mean of the baseline signal at −100…0 ms from the whole trial. We rejected trials for which the absolute value of the EMG signal exceeded 20 µV within the 100 ms preceding the TMS pulse. For each accepted trial, we calculated the MEP amplitude as the peak-to-peak signal amplitude between 20 and 50 ms. Finally, for each stimulation target and muscle, we calculated the median MEP amplitude (sketched in code after the Results paragraph below).

Results

Figure 5 shows how the median MEP amplitude varied across the targeted primary motor cortex. We notice that for APB and FDI, we obtained large MEPs from an area more lateral than the region leading to the largest MEPs in the ADM. The maximum distance between the targeted points is 28 mm, which is on par with the 30 mm used as a design parameter for the available target region of the transducer.
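A minimal sketch of the MEP extraction described in the Data analysis section above, operating on placeholder EMG trials:

```python
import numpy as np

FS = 3000                          # EMG sampling frequency (Hz)
rng = np.random.default_rng(0)
trials = 5.0 * rng.standard_normal((10, 1500))   # placeholder EMG trials (uV)

def mep_amplitude(trial, t0):
    """Peak-to-peak MEP amplitude (uV) of one trial, or None if the
    trial is rejected for pre-stimulus activity; t0 is the sample
    index of the TMS pulse."""
    base = trial[t0 - int(0.100 * FS):t0]        # -100...0 ms baseline
    x = trial - base.mean()                      # baseline correction
    if np.abs(x[t0 - int(0.100 * FS):t0]).max() > 20.0:
        return None                              # reject noisy trial
    win = x[t0 + int(0.020 * FS):t0 + int(0.050 * FS)]  # 20-50 ms window
    return win.max() - win.min()

amps = [a for a in (mep_amplitude(tr, t0=600) for tr in trials) if a is not None]
median_mep = np.median(amps) if amps else np.nan
print(f"Median MEP amplitude: {median_mep:.1f} uV over {len(amps)} trials")
```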
Discussion
Our mTMS system with its 5-coil transducer enables electronically targeted brain stimulation within a cortical region approximately 30 mm in diameter. The 5-coil transducer allows automated mapping of the motor areas (Fig. 5), with the induced E-field oriented at will, as in our demonstration according to the gyral anatomy (Fig. 4) [32]. We were able to discern cortical motor representations, with results in agreement with earlier single-coil findings, showing that ADM is best activated with the E-field targeted more medially compared to FDI or APB [32,33].
The developed E-field targeting approach that employs convex optimization and BEM computations allows accurate adjustment of the induced E-field to target the desired cortical location. It also makes the entire mapping process easy and suitable for automation, as no manual transducer or coil movement is needed. Due to the convoluted cortical geometry, it may, however, be difficult to obtain the maximum E-field at some targets, e.g., those that are deeper than their surroundings [34]. This limitation applies, however, also to conventional TMS [34]. To minimize such problems, we manually selected points on the gyrus. For larger studies, it might be beneficial to develop a robust automated method to select the targets. The developed optimization formalism is quite general and can be expanded, e.g., to include constraints also for the coil currents to limit their amplitude (or the maximum rate of change) based on hardware limitations. Similarly, although in this study the only constraints on the E-field were the location and orientation of its maximum, other constraints may be added. For example, one may want to limit the E-field amplitude in specified non-targeted regions below a threshold or to have constraints on the E-field component perpendicular to the sulcal walls.
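As an illustration of the least-energy flavour of this formalism (not the authors' exact formulation), the sketch below uses CVXPY to find coil currents that produce a desired E-field at the target point while capping the field magnitude elsewhere, so that the E-field maximum stays at the intended location. The lead-field matrix is random toy data, scaled so that the toy problem is feasible; in practice it would come from the BEM computations described above.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
n_coils, n_pts = 5, 200
# Hypothetical lead field: L[k] maps coil currents to the 3-component
# E-field at field-space point k.
L = rng.standard_normal((n_pts, 3, n_coils))
L[1:] *= 0.05  # shrink non-target points so the toy problem is feasible

e_des = np.array([1.0, 0.0, 0.0])  # desired E-field at the target (toy units)
I = cp.Variable(n_coils)           # coil currents

constraints = [L[0] @ I == e_des]  # exact field at the target point
# Cap the field magnitude below the target amplitude everywhere else.
constraints += [cp.norm(L[k] @ I) <= 0.95 for k in range(1, n_pts)]

prob = cp.Problem(cp.Minimize(cp.sum_squares(I)), constraints)
prob.solve()
print(prob.status, np.round(I.value, 3))
```

Further constraints mentioned in the text, such as bounds on the coil current amplitudes, would simply be appended to the constraint list (e.g., `cp.abs(I) <= i_max`).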
In the present mTMS system, we implemented electronics capable of controlling up to six coils simultaneously. Since the mTMS electronics and the transducer are separate parts of the system, one can design special-purpose transducers without changing the electronics. For example, the 30-mm targeting range in the cortex can be made larger or the E-field focality can be made adjustable. To expand the cortical region within which the E-field can be targeted beyond the 30-mm-diameter region demonstrated in this study, one may (1) develop a transducer with more than five coils [9,35], (2) reduce the desired E-field focality to design a five-coil transducer with a wider control region [35], or (3) implement a transducer that follows the head curvature [35]. To study interhemispheric communication in motor networks for the study of motor control [36], one may use the 5-coil transducer on one hemisphere while stimulating the contralateral hemisphere with a separate figure-of-eight coil. The 6th channel in our electronics will allow experimenting with such new designs in a flexible way.
In addition to automated cortical mapping, which may simplify, e.g., presurgical planning [37,38], mTMS will allow electronic stabilization to compensate for head movement during a TMS session faster than the existing robotic control [6]. mTMS also allows stimulating nearby targets with millisecond-scale interstimulus intervals [10], which may prove beneficial for developing new treatment and rehabilitation protocols. With physiological feedback from electroencephalography or electromyography recordings, mTMS enables closed-loop stimulation paradigms where stimulation targets are derived from the data gathered during the stimulation sequence [11,12].
Conclusion
The developed mTMS system and the algorithm for E-field targeting enable electronically targeted TMS within a cortical region. | 2021-09-27T13:17:10.072Z | 2021-09-23T00:00:00.000 | {
"year": 2021,
"sha1": "b8e6fc3e07380edd987fc05450dec8357306c44c",
"oa_license": "CCBYNCND",
"oa_url": "https://helda.helsinki.fi/bitstream/10138/338113/1/PIIS1935861X21008299.pdf",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "b8e6fc3e07380edd987fc05450dec8357306c44c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Biology"
]
} |
253479318 | pes2o/s2orc | v3-fos-license | Change in stunting and its associated factors among children aged less than 5 years in Ethiopia using Ethiopia Demographic and Health Survey data from 2005 to 2019: a multivariate decomposition analysis
Objective The aim of this study is to assess change in stunting and its associated factors among children aged less than 5 years in Ethiopia using Ethiopia Demographic and Health Survey data from 2005 to 2019. Design A community-based cross-sectional study was conducted. Setting The study was conducted in Ethiopia. Participants In 2005, 4586 individuals were examined, followed by 10 282 in 2011, 9462 in 2016 and 4937 in 2019. Primary and secondary outcomes The primary outcome of the study was stunting, and the secondary outcome was factors associated with stunting and its change. A multilevel logistic regression model was fitted to identify individual and community-level factors associated with stunting among children aged less than 5 years. Multivariate decomposition analysis was also carried out to assess the role of compositional characteristics and behavioural change for the decline in stunting among children aged less than 5 years in Ethiopia. Results Over the study period, the prevalence rate of stunting in children aged less than 5 years decreased from 47% in 2005 to 37% in 2019. Differences in behavioural change among children under the age of 5 years account for 76.69% of the overall decline in stunting prevalence rate in the years 2005–2011, 86.53% in the years 2005–2016, 98.9% in the years 2005–2019, 70.34% in the years 2011–2016 and 73.77% in the years 2011–2019. Behavioural adjustments among breastfed children, dietary diversity, place of delivery, ANC follow-up and region all had a major effect on stunting prevalence rate. The wealth index, parental education, child's age in months, duration of breast feeding and region were among the compositional change factors. Conclusion A large percentage of children aged less than 5 years remains stunted in Ethiopia. Stunting was associated with alterations in the compositional and behavioural characteristics of children. Strengthening existing nutritional measures and improving the wealth index will make a significant difference in reducing stunting among Ethiopian children aged less than 5 years.
INTRODUCTION
The foundation of a child's existence, health and development is a nutritious and well-balanced diet. Children who are well fed are more likely to grow up to be healthy, hardworking and eager to learn. Undernutrition is similarly disastrous, potentially reducing brainpower and productivity, 1 2 and slowing economic growth, which can perpetuate a cycle of poverty and illness. 3 Stunting is characterised by a deficit in height relative to a child's age, which is frequently caused by malnutrition, recurring illnesses and/or a lack of social stimulation. 4 Children who are stunted typically have impaired growth and development as a result of poor nutrition, frequent infections and a lack of psychosocial stimulation. Stunted children are those whose height-for-age is >2 SD below the WHO Child Growth Standards median. 5 6
STRENGTHS AND LIMITATIONS OF THE STUDY
⇒ This is the first study in Ethiopia to address factors associated with changes in stunting, according to our search strategy. ⇒ The use of data that span the entire country is another strength of the study. ⇒ The identification of behavioural and compositional elements connected to changing stunting gives good insight into each factor's role. ⇒ The stunting data in this study are only available once every 5 years, so up-and-down trends occurring within a 5-year period are unlikely to be identified. ⇒ For 2019, we used the mini DHS data, which may introduce a small amount of inconsistency when compared with the other standard survey years' data.
Linear growth failure is the most common type of malnutrition worldwide, 7 affecting an estimated 165 million children aged less than 5 years. Stunting has been designated as a major public health issue, with aggressive targets set to reduce stunting prevalence by 40% between 2010 and 2025. 7 8 The global target translates to a 4% annual reduction, with 171 million stunted children in 2010 falling to approximately 100 million by 2025. However, if current improvement rates continue, there will be 127 million stunted children by 2025, which is 27 million more than the target and only a 26% decrease. 9 Stunted children may never reach their full height potential, and their brains may never fully develop cognitively. These children have a significant disadvantage in life: they struggle in school, earn less as adults and face barriers to community participation. 10 11 Stunting, along with other concomitant nutrition issues such as fetal growth restriction, wasting, vitamin A and zinc deficiencies, and suboptimal breast feeding, was estimated to be the cause of 3.1 million child fatalities (45% of all child deaths) in 2011. 12 According to more recent estimates, stunting and severe wasting account for a third of all deaths among children aged less than 5 years. 13 Stunting is caused by inadequate nutrition and repeated bouts of infection during the first 1000 days of a child's life, and it is often permanent. Long-term consequences for individuals and societies include impaired cognitive and physical development, reduced productive capacity, poor health, and an increased risk of degenerative diseases such as diabetes. 14 One million children die each year as a result of stunting. Stunting in infancy and early childhood has long-term consequences for survivors, including impaired cognition and school performance, reduced physical development, poor health, lost productivity and low adult wages. 15 16 Despite the fact that stunting is on the decline in Ethiopia, the prevalence remains high. 17 Ethiopia has one of the world's highest rates of stunted children aged less than 5 years. 18 19 The government has signed various global initiatives and set national commitments to eradicate child malnutrition, including stunting. 20 21 It was discovered that the patterns of stunting and severe stunting are not random. Amhara, Benishangul-Gumuz, Afar, Tigray and Oromia are among the country's most stunted regions. 22 Stunting trends over the last two decades can be attributed to a variety of factors, including increased total consumable agricultural output, an increase in the number of health professionals, parental education, maternal nutrition, economic progress and lower diarrhoea incidence. A mother of high stature, living in a city, having a large child size, a mother without anaemia, having a large child weight, higher agricultural productivity, and improved sanitation and childcare practices all contribute to stunting reduction. [23][24][25] Due to a lack of attention paid to these factors, existing investment levels are insufficient to sustain the progress required to accomplish these goals. 26 Effective nutrition strategies in Ethiopia, as elsewhere, necessitate targeting based on factors for change in order to maximise stunting reduction. However, because Ethiopia is a developing country, it will be difficult to implement such a programme due to a lack of data on the pattern of stunting over time and the determinant factors that influence stunting prevalence change among children.
Apart from detecting stunting reduction, there is a lack of epidemiological data to determine whether the reduction was caused by the intervention or by a change in community composition. To close this visible gap in the literature, it is necessary to estimate the impact of nutritional interventions and other factors on stunting, carefully monitor the intervention, and provide reliable data to national policymakers. Therefore, the current study aimed to assess change in stunting and its associated factors among children aged less than 5 years in Ethiopia. The current study's contribution will be in line with Ethiopia's 2030 sustainable development objective, according to the Federal Ministry of Health, and will expand our understanding of factors that cause stunting change in Ethiopia.
Study settings and period
The Ethiopia Demographic and Health Survey (EDHS) data from the years 2005, 2011, 2016 and 2019 were used in this study. The DHS is a global effort supported by the US Agency for International Development to collect nationally representative demographic and health data. Ethiopia is structured into 11 administrative regions, each of which is subdivided into zones, with each zone further subdivided into districts. The districts are subdivided further into kebeles. Every 5 years, the EDHS conducts a nationally and subnationally representative household survey. Ethiopia is Africa's second most populous country, situated between 3° and 15° north latitude and 33° and 48° east longitude in the Horn of Africa. 27

Study design and population
Cross-sectional data from the four consecutive (2005-2019) EDHS, which are collected every 5 years, were used to assess change in stunting and associated factors among children aged 0-5 years. Children whose data were incomplete, flagged cases and non-de facto residents were not included in this study.
Sample size and sampling procedure
The current study's sample size was derived from the demographic and health surveys. The EDHS data sets were accessed after requesting permission from the Measure DHS programme through the project title 'factors associated with change in stunting among under five children in Ethiopia'. Children 0-59 months in the selected households were measured for height-for-age. To achieve stratification, each region was divided into urban and rural areas. The Somali region was separated into two parts, the first of which consisted of the first three zones, and the second consisted of the three additional zones that were added later. Because the Addis Ababa region is fully urban, 23 sampling strata were constructed. In addition, the 2019 EMDHS sample was stratified and selected in two stages. There were 21 sampling strata, which were divided into urban and rural areas in each region. In two steps, Enumeration Area (EA) samples were selected independently in each stratum. At each of the lower administrative levels, implicit stratification and proportionate allocation were used by sorting the sampling frame within each sampling stratum before sample selection. Participants were chosen based on administrative units at various levels and a probability-proportional-to-size selection method at the first stage of the sampling study (a minimal sketch of this selection step is given below). In all DHS reports, the sampling technique is described in detail. [28][29][30][31]

Data source and extraction
EDHS data sets from 2005 to 2019 were requested and downloaded from the Measure DHS programme website, which is freely available at (https://dhsprogram.com/data/dataset_admin/login_main.cfm) to all registered users. The recommended data set type for children aged less than 5 years was chosen after analysing and examining the details of the EDHS data structure and data set types. Stunting data and potential independent variables were extracted in this manner.
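To make the first-stage selection concrete, here is a minimal sketch of systematic probability-proportional-to-size (PPS) selection of enumeration areas within one stratum, as used in DHS-style two-stage designs; the household counts are made up for illustration.

```python
import numpy as np

def pps_systematic(sizes, n_select, rng=np.random.default_rng(7)):
    """Systematic PPS selection of enumeration areas within one stratum.

    sizes : household counts per enumeration area (illustrative values).
    Returns the indices of the selected EAs.
    """
    cum = np.cumsum(np.asarray(sizes, dtype=float))
    interval = cum[-1] / n_select           # sampling interval
    start = rng.uniform(0, interval)        # random start within interval
    points = start + interval * np.arange(n_select)
    # Each selection point falls into the EA whose cumulative size covers it.
    return np.searchsorted(cum, points)

print(pps_systematic([120, 80, 300, 50, 150, 200], n_select=3))
```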
Patient and public involvement statement
Patients or the public were not involved in the study design, conduct, reporting or distribution strategies of the research.
Study variables and measurement
The primary outcome of the study was stunting, and the secondary outcome was factors associated with stunting and its change. Stunting was defined as a height-for-age z-score (HAZ) more than 2.0 SD below the median of the WHO reference population for the child's age, as defined by the WHO Child Growth Standards. 32 Sociodemographic and economic factors such as residence, region, wealth index, source of drinking water, toilet facilities, parent education and occupation, head of household and maternal characteristics, and child morbidity such as diarrhoeal diseases, respiratory tract infection and anaemia are the independent variables.
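The HAZ underlying this definition comes from the WHO LMS transformation; the sketch below applies it with illustrative (not actual WHO table) L, M, S values and flags stunting at z < −2.

```python
from math import log

def haz(height_cm, L, M, S):
    """Height-for-age z-score via the LMS method used by the WHO standards."""
    if L == 0:
        return log(height_cm / M) / S
    return ((height_cm / M) ** L - 1.0) / (L * S)

# Illustrative parameters (not actual WHO table entries) for a 24-month child.
z = haz(82.0, L=1.0, M=87.1, S=0.035)
print(round(z, 2), "stunted" if z < -2 else "not stunted")
```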
Data management and statistical analysis
The outcome variable was coded as a binary variable ('stunting' = 1 and 'not stunting' = 0), similar to a prior study, 33 and the data analysis was executed using STATA V.15. To account for the unequal probability of selection between the geographically defined strata, sample weights were used. The weighting procedure is explained in full in the methodology of the EDHS final reports. [28][29][30][31] First, descriptive statistics and trends in stunting were examined across all surveys using recoded background variables. Second, multilevel logistic regression models were fitted to find predictors of stunting at the individual and community levels, taking into account the hierarchical nature of the 4-year EDHS data, which included 26 048 children aged 0-5 years nested inside each year's enumeration areas. Four models were fitted to compare and choose the best fit: the first model (model I), also known as the null model, was fitted as a baseline without any predictor variables; the second model (model II) was fitted with individual-level variables; the third model (model III) was fitted with community-level (region and residence) variables; and the final model (model IV) was fitted with both individual-level and community-level variables. The models were then compared using the deviance information criterion (DIC), and the final best fit model (model IV) was selected as the model with the smallest DIC value. 34 For measures of association (fixed effects), adjusted ORs (AORs) with 95% CIs were used to declare statistical significance. For measures of variation (random effects), the intraclass correlation coefficient was used. Akaike's information criterion and the Bayesian information criterion were also used to assess how well the models fit the data (online supplemental appendix 1); lower scores in both criteria were considered to choose the best model. The SE was used to identify multicollinearity, and equal proportions of the total change in each covariate were assumed to occur at the same time.
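The study fits its multilevel models in STATA; as a rough single-level approximation of the model-comparison step, the sketch below fits the four nested logistic models with statsmodels and compares information criteria. All column names are hypothetical, and a true random-intercept-per-EA fit would need a mixed-effects estimator (e.g., statsmodels' BinomialBayesMixedGLM or STATA's melogit), for which the logit-scale ICC is σ²_u/(σ²_u + π²/3).

```python
import statsmodels.formula.api as smf

def compare_models(df):
    """Fit the four nested models (I-IV) and print AIC/BIC for each.

    df is a hypothetical child-level frame with a binary 'stunted' column;
    all covariate names below are illustrative stand-ins.
    """
    formulas = {
        "I (null)": "stunted ~ 1",
        "II (individual)": "stunted ~ child_age_months + C(wealth) + C(mother_edu)",
        "III (community)": "stunted ~ C(region) + C(residence)",
        "IV (both)": ("stunted ~ child_age_months + C(wealth) + C(mother_edu)"
                      " + C(region) + C(residence)"),
    }
    for name, f in formulas.items():
        res = smf.logit(f, data=df).fit(disp=0)
        print(f"{name:18s} AIC={res.aic:8.1f} BIC={res.bic:8.1f}")
```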
Third, multivariate decomposition analysis was used to determine the contribution of each covariate to the observed change in stunting prevalence. The influence of changes in population structure, in terms of children's characteristics, on the percentage of stunting over time was investigated using random-effects generalised least squares regression. Any statistical test with p<0.05 was judged significant. The decomposition approach divides the total drop in stunting into two parts: the endowments component, which can be attributed to changes in the composition or prevalence of a set of indicators, and the effects portion, which can be assigned to changes in the effect of these indicators (referred to as the coefficients portion). 35 The formula is given by

ΔY = (X_i − X_j)β_i + X_j(β_i − β_j),

where i, j = 2005, 2011, 2016 and 2019; ΔY is the difference in the mean prediction of stunting between year i and year j for the different characteristics X; β is the estimated regression coefficient; (X_i − X_j)β_i represents the difference due to endowments between the ith and jth years; and X_j(β_i − β_j) represents the difference due to coefficients between the ith and jth years.
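The two components map directly onto the formula above. The sketch below implements the two-fold decomposition with a linear probability model for transparency (the study itself decomposes a logit model), so that the endowments and coefficients parts sum exactly to the observed change in prevalence; inputs are hypothetical design matrices with a leading intercept column.

```python
import numpy as np

def twofold_decomposition(Xi, yi, Xj, yj):
    """Decompose the change in stunting prevalence between surveys i and j.

    Xi, Xj : (n, k) covariate matrices with a leading column of ones.
    yi, yj : binary stunting indicators.
    Returns (endowments, coefficients, total); the first two sum to the third.
    """
    bi = np.linalg.lstsq(Xi, yi, rcond=None)[0]   # linear probability fits
    bj = np.linalg.lstsq(Xj, yj, rcond=None)[0]
    xbar_i, xbar_j = Xi.mean(axis=0), Xj.mean(axis=0)
    endowments = (xbar_i - xbar_j) @ bi    # difference due to characteristics
    coefficients = xbar_j @ (bi - bj)      # difference due to effects
    return endowments, coefficients, yi.mean() - yj.mean()
```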
Sociodemographic and socioeconomic characteristics
Data from 3476 children aged less than 5 years in 2005, 9013 in 2011, 8567 in 2016 and 4992 in the 2019 EDHS were used in this analysis. Girls made up about 51% of the children aged less than 5 years. The majority of the children were under the age of 2 years in all of the surveys included in the analysis. The vast majority of those polled (68%) resided in rural areas. The Oromia region had the highest number of children (12%-20%), followed by Amhara (9%-13%) and the Southern Nations, Nationalities, and Peoples' Region (SNNPR; 11.80%-18.79%). Open defecation was practised by 41.55%-65.54% of the respondents. Antenatal care follow-up increased from 33.79% in 2005 to 74.64% in the 2019 EDHS. The percentage of children in the poorest wealth index category increased over the survey period, and the proportion became larger than in the other categories (see online supplemental appendix 2).
Trend of stunting
Over the study period (2005-2019), the prevalence of stunting fell from 47% in 2005 to 37% in 2019. The difference was statistically significant, with a beta coefficient of −0.767184 (95% CI −1.23 to −0.30; p=0.019). With a 6-percentage-point decline, the survey period 2011-2016 saw the largest drop. From 2005 to 2019, the rate of reduction in stunting varied depending on the child's attributes. Girls had the largest decrease (12.4%) over the specified period, whereas boys had the smallest (7%). Although stunting was relatively low among urban residents, the decline was larger (7%) among children from rural settlements across the study period. Stunting among breastfed children declined by 11.85% during the study period, which was more than in the other groups. Over the study period, the Amhara region showed the largest reduction in stunting (15.88%), followed by SNNPR (15.41%). In the Benishangul-Gumuz region, however, the percentage of stunted children increased over time (table 1).
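As a check on the reported trend, the beta coefficient can be reproduced by an ordinary least-squares fit of prevalence on survey year. The 2011 and 2016 prevalence values (44% and 38%) are inferred from the 47%-37% endpoints and the 6-point 2011-2016 decline mentioned above, so treat them as assumptions rather than table values.

```python
import numpy as np

years = np.array([2005, 2011, 2016, 2019])
prevalence = np.array([47.0, 44.0, 38.0, 37.0])  # 2011/2016 values inferred

slope, intercept = np.polyfit(years, prevalence, 1)
print(round(slope, 6))  # -0.767184, matching the reported beta coefficient
```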
Multilevel logistic regression analysis to identify factors associated with stunting
Wealth index, source of drinking water, toilet facilities, parent education, occupation, mother's age, number of children aged less than 5 years in the house, living status of the child, age of mother at first childbirth, mother's height, child's age in months, antenatal care visit, place of delivery, early initiation of breast feeding, exclusive breast feeding, duration of breast feeding, bottle feeding, diversity feeding, sex of child, birth order, birth type, diarrhoeal diseases, cough, short rapid breathing, anaemia and body mass index of respondents were fitted in model II as individual-level factors. Model III was fitted with residence and region as second-level variables. The final model showed that in the 2016 EDHS, women with higher education had a reduced risk of stunting compared with mothers with no education (AOR=0.42, 95% CI 0.22 to 0.85). In both the 2011 and 2019 EDHS data sets, the number of children aged less than 5 years demonstrated a statistically significant association with stunting. The risk of stunting increased by 1.2 and 1.40 times, respectively, in the 2011 and 2016 EDHS when the number of children aged less than 5 years in the home increased by 1, with AOR=1.2, 95% CI 1.02 to 1.4 and AOR=1.40, 95% CI 1.11 to 1.69. The respondent's height was a statistically significant factor in stunting in two of the surveys examined. AOR=0.994, 95% CI 0.992 to 0.996 and AOR=0.994, 95% CI 0.992 to 0.995 for the 2005 and 2016 EDHS, respectively, show that a centimetre increase in respondent height reduces the odds of stunting by 0.6% (table 2).
With the exception of the 2019 EDHS, breastfeeding length was a statistically significant factor. In the 2005 EDHS, children who were still breast fed had a 2.57 (AOR=2.57, 95% CI 1.86 to 3.56) times higher chance of being stunted than children who were ever breast fed but not during the data collection period. In the 2011 and 2016 EDHS, children who were still breast fed were 2.29 and 1.7 times more likely to be stunted than children who were ever breast fed, with AOR=2.29, 95% CI 1.72 to 3.07 and AOR=1.7, 95% CI 1.34 to 2.16, respectively. In all surveys, the likelihood of being stunted increased with the child's age in months.

Considering community-level determinants, only one variable, region, had a significant association with stunting. In the 2005 EDHS, children from the Amhara region had a significantly higher risk of stunting than children from Tigray (AOR=1.68, 95% CI 1.16 to 2.43). Children from the Amhara region, on the other hand, had a lower risk of stunting in 2011 than children from Tigray (AOR=0.71, 95% CI 0.52 to 0.96). The two survey analyses (2011 and 2019) revealed that children from the Oromia region had a lower risk of stunting than children from Tigray (AOR=0.50, 95% CI 0.36 to 0.70 and 0.43, 95% CI 0.23 to 0.79, respectively) (table 2).
Children from the Somali region had a lower risk of stunting than children from the Tigray region, according to the analysis of the 2011 and 2019 EDHS (AOR=0.46, 95% CI 0.30 to 0.72 and AOR=0.29, 95% CI 0.14 to 0.54, respectively). Only the analysis of the 2011 EDHS showed a strong link between residing in the Benishangul-Gumuz region and stunting: compared with children in the Tigray region, living in Benishangul-Gumuz reduced the risk of stunting by 32% (AOR=0.68, 95% CI 0.48 to 0.97). The findings for children aged less than 5 years in SNNPR differed between surveys. In 2005, children aged less than 5 years in SNNPR had a 50% higher likelihood of being stunted than children aged less than 5 years in the Tigray region. In the later surveys, on the other hand, children from SNNPR were about half as likely (0.51 times) to be stunted.

In the multivariate decomposition analysis of the EDHS data sets, among the compositional change factors, the wealth index, parental education, child's age in months, duration of breast feeding and region all had a statistically significant contribution to change in the prevalence rate of stunting (table 3).
Children from families with no or limited education were more likely to be stunted than those from families with higher education. Parents' primary school coverage grew from 16.60% in 2005 to 25.3% in 2011 and 25.73% in 2016 (online supplemental appendix 2), resulting in a negative significant compositional contribution to the reduction in stunting prevalence rate of 14.80% and 8.78%, respectively. From the 2005-2011 and 2005-2016 EDHS, the proportion of women with no education decreased from 76.41% to 69.87% and 64.01%, respectively (online supplemental appendix 2), resulting in a 14.95% and 9.25% increased change in the prevalence rate of stunting. The likelihood of becoming stunted decreased as respondents' height increased. Between 2005 and the two subsequent EDHS surveys (2011 and 2016), community compositional changes in respondent height contributed positively to change in stunting prevalence rate by 43.41% and 21.86%, respectively (table 3).
During the data collection period, children aged less than 5 years who were breast feeding were more likely to be stunted than children who were ever breast fed but not during the study period. Between the surveys, the increase in the proportion of children in the poorest wealth index group reduced the change in prevalence rate of stunting by 0.31%. The present study attributes a 5.31% decline in the prevalence rate of stunting between 2005 and 2011 and a 5.30% decline between 2005 and 2016 to compositional differences in the mean age of children aged less than 5 years. Between 2005 and 2011, changes in the composition of children aged less than 5 years in the Amhara region and Dire Dawa city contributed a 0.05% increase and a 0.07% decrease, respectively, to the change in the prevalence rate of stunting. In addition, between 2005 and 2016, changes in the composition of children in the Amhara region and Dire Dawa city contributed a 0.46% increase and a 0.06% reduction, respectively, to the change in the prevalence of stunting. Note that other regions in the combined survey data did not contribute significantly to the change (table 3).
Controlling for the effects of change in compositional features, behavioural changes among children aged less than 5 years who were breast feeding during the survey period increased the change in the prevalence rate of stunting by 178.2%, according to a multivariate decomposed logistic regression analysis conducted between 2005 and 2011. The change in the prevalence rate of stunting progressed by 211.32% as a result of age-related behavioural changes in children from young to old. According to a multivariate decomposed logistic regression analysis conducted between 2005 and 2016, the change in stunting prevalence rate increased by 62.98% with behavioural changes among children who were breast feeding during the survey period (table 3).
Multivariate decomposition analysis of the 2005-2019 and 2011-2016 EDHS
From 2005 to 2019, a multivariate decomposition logistic regression analysis revealed that 98.9% of the overall reduction in stunting prevalence rate was due to children's behavioural changes; none of the compositional variables had a significant relationship with the change. However, a multivariate decomposition logistic regression analysis of 2011-2016 revealed that alterations in the compositional characteristics of children aged less than 5 years accounted for 29.66% of the entire change. Among the compositional change factors, parents' education, mothers' height, child's age in months, child's sex, anaemia and region had a statistically significant contribution to change in stunting. Children from a family with primary education were more likely to be stunted than children from a family with higher education. The coverage of parents' primary education increased from 25.53% in 2011 to 25.73% in 2016 (online supplemental appendix 2), which had a negative significant compositional contribution to the decline in the prevalence rate of stunting of 10.05% (table 4).
The likelihood of becoming stunted decreased as respondents' height increased. Between the 2011 and 2016 EDHS, community compositional changes in respondents' height (increased mean height) contributed a 13.23% increase to the reduction in the prevalence rate of stunting. In the multilevel logistic analysis, female children were less likely than male children to be stunted. Between 2005 and 2016, the proportion of female children included in the study increased by 0.1%; stunting decreased by 4.28% as a result of this compositional alteration. Changes in the composition of children in the Amhara region between surveys contributed to a 4.71% reduction in the prevalence rate of stunting between 2011 and 2016. However, the rise in the proportion of children with moderate anaemia from 16.81% to 23.51% (online supplemental appendix 2) had a negative impact of 7.26% on stunting prevalence rate reduction (table 4).
Controlling for the roles of change in compositional characteristics, behavioural changes among women towards antenatal care (ANC) follow-up enhanced the decline in the prevalence rate of stunting by 49.94%, according to a multivariate decomposed logistic regression analysis from 2005 to 2019. Also, changing the behaviour of children who were breast fed during the study period resulted in a 72.9% increased change in the prevalence rate of stunting. Stunting decreased by 37.7% as a result of age-related behavioural changes in children aged less than 5 years from young to old. Stunting increased by 8.34% as a result of behavioural changes among children in the Tigray region as compared with Addis Ababa. Stunting could have been decreased by 119.3% with behavioural changes among children from middle-income families, according to a multivariate decomposition logistic regression analysis conducted between 2011 and 2016. Change in the prevalence rate of stunting was reduced by 233.14% and 109.63%, respectively, due to behavioural modifications among children in the Oromia and SNNP regions. A good practice of eating solid, semisolid or soft foods one or more times on the day before data collection resulted in a 119.97% increase in stunting prevalence rate change (table 4).
Multivariate decomposition analysis of the 2011-2019 EDHS
Difference due to characteristics (endowment); 2011-2019 EDHS
The multivariate decomposition logistic regression analysis of 2011-2019 found that compositional changes in children's characteristics account for 26.23% of the overall change in the prevalence rate of stunting; the remaining 73.77% was attributed to children's behavioural changes. Wealth index, parental education, home water source, child's age in months, child's sex, place of delivery and region all exhibited a statistically significant compositional influence on change in the prevalence rate of stunting when the variables were decomposed. Keeping all other behavioural variables equal, children from families with no education were more likely to be stunted than children from families with a higher level of education. Between the surveys, a reduction in the share of mothers with no education (online supplemental appendix 2) contributed to a 31% increased change in the prevalence rate of stunting. In contrast, an increase in the proportion of mothers with a primary education between the comparison periods reduced the decline in stunting by 13.97% (table 5).
Stunting was more common in children from the poorest, poorer, middle and richer families than in children from the richest families. Between the surveys, changes in family composition among poorer, medium and richer households increased change in the prevalence rate of stunting by 0.13%, 3.11% and 2.28%, respectively, but changes in family composition among the poorest families decreased change in stunting prevalence rate by 1.12%. The shift in age structure among children included in the 2011 and 2019 EDHS accelerated the pace of change in stunting prevalence rate by 4.23%. More female children were included in the 2019 EDHS, which resulted in a 0.32% increase in the stunting prevalence rate change. Between the surveys, change in the proportion of children in Tigray, Affar and Harari declined change in stunting prevalence rate by 0.76%, 0.79% and 0.12%, respectively. However, between the surveys, changes in the composition of children in the Amhara region and changes in the composition of the place of delivery increased the change in the prevalence rate of stunting by 2.5% and 2.35%, respectively. Finally, decrease in coverage of an unimproved water source from 40.06% to 37.54% accounts for 8.57% decline in change in the prevalence rate of stunting as compared with increase in coverage of improved water source from 36.61% to 37.23% (table 5).
Difference due to effects of coefficients; 2011-2019 EDHS
The multivariate decomposition logistic regression analysis of 2011-2019 found that behavioural changes towards duration of breast feeding, dietary diversity, age in months, place of delivery and region had a significant effect on change in the prevalence rate of stunting between surveys when compositional characteristics were kept constant. During the study period, changing the behaviour of breastfed children resulted in a 57.2% rise in the change in the prevalence rate of stunting. Stunting dropped by 134.53% as a result of age-related behavioural changes in children aged less than 5 years. Behavioural modifications in children from the Tigray and Harari regions had a detrimental effect on change in stunting of 15.01% and 0.61%, respectively. Changing children's behaviour towards dietary diversification boosted the change in stunting prevalence rate by 36.92% between the survey years. Finally, behavioural adjustments towards other places of delivery resulted in a 2.29% increase in stunting prevalence rate change (table 5).
DISCUSSION
The aim of this study was to estimate changes in the prevalence rate of stunting, factors associated with stunting and changes in the prevalence rate among Ethiopian children aged less than 5 years from 2005 to 2019. In the previous 15 years, Ethiopia has increased the number of national policies and large-scale health, nutrition and food security programmes, 36 resulting in a steady drop in stunting from 47% in 2005 to 37% in 2019, which is comparable to other countries. 37 Stunting, on the other hand, continues to be a serious problem in Ethiopia due to a variety of circumstances. The risk of stunting was higher in children aged less than 5 years whose mothers had no education than in those whose mothers had a higher education level, according to a multilevel logistic regression analysis of the 2016 EDHS. A multivariate decomposition analysis of the EDHS from 2005 to 2011 and 2005 to 2016 revealed that lowering the number of mothers without education contributed to a reduction in stunting. There is a considerable link between maternal education and children's nutrition, according to earlier studies. Children born to educated mothers are less likely to be stunted than children born to uneducated mothers. 38 39 Women's higher education is a critical component in improving a family's socioeconomic level, 40 and good socioeconomic status influences predictors of stunting such as reproductive factors, feeding patterns and healthcare utilisation. 41 Our findings, and those of earlier studies, have major policy implications because they imply that by boosting mothers' formal education, Ethiopia could ameliorate the impact of stunting on children and lessen the country's high stunting-related ill health among children.
Compared with children aged less than 5 years whose mothers had higher education, the coverage of parents' primary education rose from the earlier to the more recent surveys; yet it had a negative significant compositional impact on the decline in stunting. This finding is consistent with a large-scale study conducted across three African countries, which found that women's primary education had no significant impact on child stunting. 42 These findings suggest that educating women at the primary school level may not be adequate to reduce stunting to the levels desired, and that policies to keep mothers in school beyond primary school should be prioritised in order to reduce the number of stunted children in the country.
Behavioural changes towards mother's education did not indicate a significant relation to stunting reduction in any of the analyses. Although a variety of initiatives, such as the Sustainable Development Goals, are emerging in Ethiopia to support and encourage women's empowerment, reaching this goal has not been straightforward and has been hampered by persisting regional inequities. In most regions, community attitudes on women's engagement in development, women's access to and management of productive resources, and gender-based equalities in training and education are unsatisfactory. 43 The current finding implies that focusing on women's perceptions and attitudes, as well as boosting women's education, may have a good impact on bringing about the desired behavioural change in the community regarding child nutrition.
In a multilevel logistic regression analysis, maternal height was found to be inversely related to the prevalence of childhood stunting. Between 2005 and 2011, 2005 and 2016, and 2011 and 2016, community compositional changes in respondent height had a favourable impact on change in stunting. Aside from genetics, environmental factors including maternal nutrition, feeding patterns, and nutritional quality and quantity can all influence the growth of children before the age of 2 years. 16 In addition, a variety of socioeconomic issues, ranging from general conditions to poor feeding practices, which may result in low maternal stature, may have an impact on early childhood growth and development. 44 In addition to heredity and shared environmental factors, the biological significance of the maternal milieu during pregnancy and lactation could explain the link between maternal height and early life stunting. 16 It is plausible to claim that stunting is a cyclical process in which women who were stunted as children have stunted offspring, producing an intergenerational cycle of poverty and diminished human capital that is difficult to break. 45 As a result, policies and tactics that consider mothers and are implemented over a short, medium or long period of time may have the desired effect on childhood stunting. Furthermore, genetic studies are highly recommended to determine the long-term interaction of maternal stunting and child growth retardation.

Breastfed children were more likely to be stunted than children who had been breast fed previously but not during the study period. In a multivariate decomposition logistic regression analysis, the decline in the proportion of children who were breast feeding between the surveys had a positive contribution to change in prevalence rate of stunting between the 2005-2011 and 2005-2016 EDHS, compared with children who were breast fed before but not during the data collection period. This could never be explained by the breast milk itself, but rather by a combination of factors that directly and indirectly influence a child's feeding habits at this age. According to the descriptive statistics, the majority of children who are still breast fed are aged 6-24 months, one of the most significant periods for linear growth failure. Due to the strong demand for nutrients combined with the low quality and quantity of complementary foods, especially in underdeveloped countries, this is the time when the hazard of stunting reaches its peak. 7 46 47 Poor nutrition is caused not just by a lack of food, but also by improper feeding methods, such as poor timing, quality and quantity of foods given to infants and young children. If optimal breast feeding is not combined with complementary feeding practices, which are necessary to meet the nutritional demands of children in their first 2 years of life, it is not enough to prevent stunting. 48 In Ethiopia, however, only 14% of children aged 6-23 months have a diet that satisfies minimum dietary diversity guidelines, and only 7% have a diet that is minimally acceptable, according to the DHS report. 49 In a pocket study, 12.0% of children aged 6-24 months met the dietary diversity criteria by eating from four or more of the seven food categories. 50 Additionally, there is a wide spectrum of harmful child feeding practices in Ethiopia.
The median age for exclusive breast feeding in northern Gamo Gofa was 3.7 months, whereas it was 10 months to over 12 months in North and South Gondar, North and South Wello, and Tigray. Likewise, in northern Gamo Gofa, the median age for complementary food introduction was 3.7 months, whereas in North Gondar and Tigray, it was 12.1 months. 48 In conclusion, lack of age-appropriate breast feeding, delays in the introduction of complementary feeding, caregiver knowledge gaps, which are strongly correlated with delays in complementary feeding, and failure to provide minimum dietary diversity, regardless of wealth status, education or remoteness, all contribute to the high burden of stunting in the breastfeeding category of children. 51 52 This finding means that the child's first 2 years after birth would be identified as the most significant window of opportunity for measures to combat stunting.
Behavioural changes such as good feeding habits among children who were breast feeding during the survey period had a positive contribution to change in stunting among children, compared with children who were breast fed previously but not currently, according to the multivariate decomposition logistic regression analyses of 2005-2011, 2005-2016 and 2011-2019. The National Nutrition Strategy of the Federal Democratic Republic of Ethiopia, which has been implemented over the last few decades, focuses on mainstreaming and strengthening nutrition activities through community-based nutrition programmes that help to reduce food insecurity and unbalanced nutrient consumption. Community-based health and agriculture extension programmes, health service delivery, education and gender programmes all received more attention. The community-based nutrition programme also includes growth monitoring and promotion for all children under the age of 2 years, as well as caregiver counselling. 53 54 Thus, the encouraging drop in stunting observed due to behavioural changes among children who were breast fed during data collection could be attributed to the implementation of the community-based nutrition programme. This indicates that further enhancing the programme will provide very promising results in terms of eradicating stunting among Ethiopian children.
Between 2005 and 2011, the number of children in the poorest wealth index category increased, and stunting among children decreased, according to the analysis of the EDHS data set. At the same time, between 2011 and 2019, the proportion of children from lower, middle and upper-middle-class families decreased, resulting in a faster fall in child stunting. Previous findings that attempted to investigate the effects of economic growth on undernutrition in Ethiopia have similarly confirmed the direct effect of economic growth on stunting. 55 According to a study, a 10% rise in GDP per capita reduces the frequency of child stunting by 2.7%. In this regard, the average cost of stunting in poor nations has been estimated to be around 13.5% of GDP per capita. 56 According to published literature, the link between the prevalence of stunting and economic growth is stronger among children from low-income nations, 57 58 implying that the household's financial level is the foundation for all nutritional interventions implemented in disadvantaged areas. These findings may serve as a reminder to Ethiopian policymakers to place a greater emphasis on policies that promote economic growth as well as nutrition-related programmes.
Similarly, a multivariate decomposition logistic regression assessment of 2011-2016 revealed that behavioural changes such as poor feeding habits among children from middle-income families were associated with an increase in stunting. Ethiopia's Growth and Transformation Plan is a 5-year development plan that runs from 2010/2011 to 2014/2015. 59 In both the health and agriculture sectors, community-based service delivery systems have been made available throughout this time period to assure decentralised and democratised public services. Health extension workers, in particular, play a critical role in strengthening and accelerating social and behavioural changes in children's eating habits, both in rural and urban regions. 53 As a result, the decrease in stunting owing to behavioural changes among children from middle-income homes could be the outcome of programmes established during this time period, which could be a useful lesson in achieving the country's aim.
The risk of stunting grew as the child's age climbed month by month. Changes in age structure (lower mean age) among children in the EDHS from 2005 to 2011, 2005 to 2016, and 2011 to 2019 showed an increase in stunting. To achieve optimal growth in children, the amount and frequency with which they are fed should be increased: two to three meals per day for infants aged 6-8 months, three to four meals per day for infants aged 9-23 months, plus one to two additional snacks as needed. 60 However, findings from nationally representative data showed that the frequency of infant and child feeding practices dropped as the child's age increased by one unit. 61 In most locations, young child feeding practice is also inadequate, and providing children the minimum appropriate diet variety does not grow with age. 62 On the other hand, despite the fact that optimal birth spacing is regarded as an important element in children's health, Ethiopia's birth interval is short. 63 After the arrival of the second child, the amount and quality of care given to the first child may gradually decrease. All of these things could be contributing factors to the child's inability to achieve optimal growth as they get older.
The decomposition analysis of the EDHS for the years 2005-2011, 2005-2019 and 2011-2019 revealed that age-related behavioural changes such as improvement in good feeding habit among children from young to old age played a favourable role in stunting reduction. Since 2004, Ethiopia's Federal Ministry of Health's Family Health Department has adopted a national policy to improve baby and child feeding practices, with the goal of gradually increasing food consistency and diversity as newborns grow older, while responding to their needs and skills. 64 Ethiopia made significant progress in extending community-based primary healthcare delivered by health extension workers as a result of these programmes. Because of their influence on eating decisions and access to mass media, Alive & Thrive launched a radio and television campaign aimed largely at men to reinforce and expand the impact of community interventions and to reach individuals outside of programme areas. Each television and radio broadcast emphasised the importance of male involvement in infant feeding. 65 The decrease in stunting may be due to changes in parental behaviour towards newborn and young child feeding practices, which may have been affected by radio and television programmes used as communication tools under Ethiopia's Growth and Nutrition programme. 66 This means that, while the economic and political hurdles to improving Ethiopia's nutritional status are enormous and appear insurmountable, strengthening the existing nutritional interventions can make more of a difference in reducing early life growth failure.
From 2005 to 2016, and from 2011 to 2019, the proportion of female children included in the study increased. Stunting among children aged less than 5 years has decreased significantly as a result of this compositional change. Male children aged less than 5 years in sub-Saharan Africa are more likely than girls to be stunted. 67 Gender differences in mortality and morbidity could explain this. Even though there is no clear understanding of early childhood health inequalities, epidemiological research consistently shows that boys have higher mortality and morbidity than girls. 68 Other potential determinants, such as social role valorisation of daughters and nutritional discrimination, have not been widely investigated in Ethiopia, implying the need for more exploratory research in the area.
In all combinations of survey data, a decrease in the proportion of children from the Amhara region increased the change in stunting. Amhara has the country's third highest rate of monetary poverty, as well as the greatest disparity between rural and urban communities. 69 70 Many households can only generate enough food to meet their nutritional needs for about 6 months of the year. 71 Amhara region's children are similarly worse off than the national average in terms of basic necessities and services. 72 There are significant gaps in healthcare professionals' knowledge and abilities, facility readiness, administration and leadership, and the availability of crucial supplies in various parts of the region. Maternal and newborn health services are still underused, and maternal and newborn care is of poor quality. 72 73 This could all be contributing to the high rate of early life growth failure in the Amhara region, implying the need for a variety of interventions to guarantee children have access to both meals and health services that would effectively meet their multifaceted needs for growth and development.
Between 2011 and 2019, the proportion of children in the Tigray region who were stunted dropped. Despite considerable improvements in access to healthcare services in the region, producing an acceptable amount of food is extremely challenging due to a scarcity of suitable farmland. Of the population, 89% earn less than £2 a day, while the bulk of the population produces less than half of their annual minimum food requirements. 74 However, between 2005 and 2019, behavioural changes among children in the Tigray region contributed significantly to a large decrease in the frequency of childhood stunting. This could be attributed to the successful execution of a health extension programme and the expansion of healthcare facilities. 75 In the Tigray area, total universal health service coverage is nearly comparable to Addis Ababa, and is complemented by a high level of facility delivery and child vaccination. 76 77 All of these could aid communities in developing appropriate child feeding practices in the region.
Similarly, between 2011 and 2019, the proportion of children in the Harari and Affar regions who were stunted increased. Harari has had a significant decrease in monetary poverty in recent decades, beginning in 2004/2005. The region's overall monetary poverty rate has dropped to 7%, the lowest in the country. Similarly to monetary poverty, the number of people living in food poverty has considerably dropped. 78 According to reports, children in the Harari region are less likely than the national average to be deprived of a greater number of fundamental requirements and rights. 79 Despite the fact that many people in the Afar region face chronic food poverty, over 90% of the Afar community relies on a pastoralist subsistence strategy. 78 As a result, there is a relatively high culture of feeding children animal products with great nutritional content to counteract stunting. 80 As a result, when additional children from the two locations are sampled between the surveys, a relatively low degree of stunting is anticipated. In contrast, between 2005 and 2016, a fall in the proportion of children from less risky areas such as Dire Dawa city had a deleterious effect on stunting reduction.
From 2011 to 2016, behavioural modifications among children in the Oromia and SNNP regions were responsible for a significant drop in stunting change. Between 2011 and 2019, this was also true in the Harari region. This could be linked to the political unrest that existed in the southern part of Ethiopia, particularly in the Oromia region. Oromia underperformed on maternal and child healthcare over this time period. During the study period, good practices such as facility delivery, ANC and postnatal care that might change mothers' behaviour towards child-feeding practice were the lowest of all regions. 81 In the pastoralist areas of Oromia, the execution of the Health Extension Programme has also been hampered. 82 Similarly, despite its economic success, SNNPR has Ethiopia's highest multidimensional child deprivation rate. 83 Despite the fact that Ethiopia has experienced significant poverty reduction in these areas, coordination for the development of good child feeding habits is lacking due to a lack of awareness, frequent turnover of focal persons and management, a lack of accountability and responsibility, and a lack of nutrition structures in each specific area. 83 84 Anaemia and a lack of improved water sources, both of which are well-known causes of chronic malnutrition, are also negative drivers of stunting change in our study. 85 On the other hand, behavioural changes towards nutritional diversity, ANC follow-up, place of delivery, and eating solid, semisolid or soft foods have all been linked to a reduction in childhood stunting. Ethiopia could reduce the burden of early life growth failure by increasing access to improved water sources, maternal and child care, and well-structured patient education programmes to increase self-awareness and a positive attitude towards maternal care and child feeding practice, according to this finding.
CONCLUSION
Despite the fact that several projects to eliminate stunting have been implemented in Ethiopia, a significant number of children remain stunted. Compositional features of children, such as wealth index, parental education, child's age in months, sex of child, duration of breast feeding, anaemia, unimproved water supply and region, all had a statistically significant impact on stunting change. Changes in coefficients such as dietary diversity, ANC follow-up, place of delivery, eating solid, semisolid or soft meals, and age all exhibited a significant association with change in stunting. The Ethiopian Ministry of Health should maintain its present efforts to improve dietary diversity, ANC follow-up, institutional delivery and the feeding of solid, semisolid or soft foods to children above the age of 6 months. The Ethiopian government and the Ministry of Health should place a specific emphasis on impoverished areas, such as the Amhara region, and vulnerable groups, such as boys, who require special attention. Finally, more steps should be taken by the Ministry of Education to strengthen female empowerment via education.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Patient consent for publication Not applicable.
Ethics approval Not applicable.
Provenance and peer review Not commissioned; externally peer reviewed.
Data availability statement All data relevant to the study are included in the article or uploaded as supplementary information.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability | 2022-11-13T06:17:11.731Z | 2022-11-01T00:00:00.000 | {
"year": 2022,
"sha1": "44aba6464f64c7f6e57cdc254fa53e53b39b614b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "BMJ",
"pdf_hash": "756d6bd8fb3dbde022280c0d36d0d7bafaaf22c8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
16987562 | pes2o/s2orc | v3-fos-license | Congenital Malaria due to Plasmodium Vivax Infection in a Neonate
Although malaria is endemic in India, congenital malaria is not very common. Congenital malaria is a very rare condition in both endemic and nonendemic areas. We report a case of congenital malaria in a six-day-old neonate with fever and splenomegaly. The diagnosis was picked up accidentally on a peripheral smear examination. Congenital malaria should be kept as differential diagnosis of neonatal sepsis. Timely detection of this condition could lead to early diagnosis and treatment, thereby preventing neonatal mortality.
Introduction
Congenital malaria is defined as the demonstration of malarial parasites in the peripheral smear of a newborn between 24 hours and seven days of life. Clinically apparent congenital malaria is rare in countries where malaria is endemic and levels of maternal antibodies are high. With the symptoms of congenital malaria being nonspecific, it is often confused with neonatal sepsis.
Case Presentation
A six-day-old male neonate was brought to our outpatient department with complaints of not accepting feeds, fever, and loose stools for two days. On examination the baby was pale, icteric, and lethargic; the axillary temperature was 100°F, and the liver and spleen were palpable 3 and 6 cm below the right and left subcostal margins, respectively. Other systems examinations were normal. A provisional diagnosis of neonatal sepsis was made and the patient was started on intravenous ampicillin and gentamycin. Complete blood count (CBC) at the time of admission revealed a Hb of 9.0 gm/dL, total leucocyte count of 10,000 cells/cu mm, differential count of 50% polymorphs, 47% lymphocytes, 3% monocytes, and a platelet count of 50,000 cells/cu mm. Other relevant results included a total bilirubin of 10 mg/dL, indirect fraction of 9 mg/dL, Serum Glutamate Pyruvate Transaminase (SGPT) of 485 IU/L, and C-reactive protein (CRP) of 23 mg/dL (normal = 0-6 mg/dL). Blood, urine, and cerebrospinal fluid cultures were sterile. Chest X-ray was unremarkable. The peripheral blood film revealed trophozoites and schizonts of P. vivax, with a parasite index of 2%. The baby was given chloroquine base at a dose of 10 mg/kg stat followed by 5 mg/kg at 6, 24, and 48 hrs. There was prompt relief of fever and the spleen gradually reduced in size over a week. Five days after treatment, the patient's parasitemia had completely cleared, the TLC increased to 12,000 cells/cu mm, the platelet count was 150,000 cells/cu mm, and the CRP fell to 5.5 mg/dL. The infant was discharged on day 5 and is doing well on follow-up.
In view of the revised diagnosis, the history of the mother was reevaluated; a history of fever with chills could be elicited in the ninth month of pregnancy; however, presently she did not have any fever. Her peripheral blood film was negative for any malarial parasite. The OptiMAL test was also negative for both P. falciparum and P. vivax.
Discussion
Congenital malaria is defined as the presence of asexual stages of the parasite in cord blood at the time of delivery or in the peripheral smear of the neonate in the first seven days of life [1]. While P. falciparum has been reported more often as a cause of congenital malaria, P. vivax as a cause of congenital malaria has been described from the southeast Asian region [2]. P. vivax is the leading cause of congenital malaria in Europe, while P. falciparum remains the leading cause in the Indian and African subcontinents. Neonatal malaria is rare, with an occurrence rate of 0.3% in immune mothers and 7.4% in nonimmune mothers [3][4][5]. Placental infection occurs in as many as one-third of women who acquire the infection during pregnancy. The spontaneous clearance of infection in neonates in endemic areas may be as high as 93%. This is attributed to the protective effect of maternal antibodies and the role of fetal hemoglobin in slowing the rate of parasite development. Since malaria is thought to be rare in neonates, most cases are accidentally picked up on peripheral blood examination as part of a routine sepsis work-up.
Though the clinical features of congenital malaria are often nonspecific, the presence of fever, anemia, and splenomegaly is a pointer towards congenital malaria. This case shows the importance of considering congenital malaria as a differential diagnosis of neonatal sepsis in neonates born to mothers in malaria-endemic countries or with a history of malaria during pregnancy.
Postulated mechanisms for congenital transmission of malaria include maternal transfusion into the fetal circulation at the time of delivery or during pregnancy, direct penetration through the chorionic villi, or penetration via premature rupture of the placenta. The remarkable capacity of the fetus to resist malarial infection has been demonstrated. The presence of the placental barrier, transfer of protective maternal antibodies, and high levels of fetal hemoglobin are all thought to be protective factors. Congenital malaria can occur despite the absence of any evidence of active malarial infection in the mother during pregnancy. It is speculated that the mother had an episode of malaria during the ninth month of pregnancy which was mild, resolved spontaneously, and remained undiagnosed. The lack of maternal parasitemia and HRP2 (histidine-rich protein 2) antigenemia suggests that the infection was localized to the placenta and had cleared. Discordance between maternal peripheral blood examination/antigen testing and placental parasitization is well described. The time of onset of symptoms in congenital malaria can vary from immediately after birth to a few weeks, though the median age of manifestation has been described as 21 days.
The drug of choice for congenital malaria remains chloroquine. Therapy directed at the infecting Plasmodium species is curative. As the infection is produced by transmission of infected erythrocytes rather than by forms that invade the liver, the neonate does not require treatment for the exoerythrocytic stages of the parasite.
Our case was accidentally picked up on peripheral blood film examination. This stresses the importance of a good peripheral blood film as part of the work-up of all suspected cases of neonatal sepsis. To conclude, congenital malaria is not as rare as it was thought to be; in endemic zones malaria should be suspected in all neonates who present with fever and splenomegaly. Early diagnosis could prevent unnecessary antibiotic usage and could prevent neonatal mortality.
"year": 2016,
"sha1": "627c6bb7df3bf7e30a1b7ed6486ca0195761dea7",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/cripe/2016/1929046.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ae4f53e7169a3b8b23091fcde95e27c09b601f6a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
196196048 | pes2o/s2orc | v3-fos-license | A Study on Ad-Hoc on-Demand Distance Vector (AODV) Protocol
AODV is a very simple, efficient, and effective routing protocol for Mobile Ad-hoc Networks which do not have fixed topology. This algorithm was motivated by the limited bandwidth that is available in the media that are used for wireless communications. It borrows most of the advantageous concepts from DSR and DSDV algorithms. The on demand route discovery and route maintenance from DSR and hop-by-hop routing, usage of node sequence numbers from DSDV make the algorithm cope up with topology and routing information. Obtaining the routes purely on-demand makes AODV a very useful and desired algorithm for MANETs.
INTRODUCTION
The Ad hoc On-Demand Distance Vector (AODV) routing protocol is intended for use by mobile nodes in ad hoc network. It offers quick adaptation to dynamic link conditions, low processing and memory overhead, low network utilization and determines unicast routes to destinations within the ad hoc network.
Working of AODV
Each mobile host in the network acts as a specialized router and routes are obtained as needed, thus making the network self-starting. Each node in the network maintains a routing table with the routing information entries to it's neighbouring nodes, and two separate counters: a node sequence number and a broadcast-id.
When a node (say, source node 'S') has to communicate with another (say, destination node 'D'), it increments its broadcast-id and initiates path discovery by broadcasting a route request packet RREQ to its neighbors. The RREQ contains the following fields (see the sketch after the next paragraph):
- source-addr
- source-sequence# - to maintain freshness info about the route to the source
- broadcast-id
- dest-addr
- dest-sequence# - specifies how fresh a route to the destination must be before it is accepted by the source
- hop-cnt
The (source-addr, broadcast-id) pair is used to identify the RREQ uniquely. The dynamic establishment of route table entries then begins at all the nodes in the network from S to D. As the RREQ travels from node to node, it automatically sets up the reverse path from all these nodes back to the source: each node that receives this packet records the address of the node from which it was received. This is called Reverse Path Setup. The nodes maintain this info long enough for the RREQ to traverse the network and produce a reply to the sender; the required time depends on the network size.
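The fields above and the reverse-path bookkeeping lend themselves to a compact sketch. The following Python is purely illustrative: the class layout, method names, and stubs are assumptions for exposition, not part of any real AODV implementation.

```python
from dataclasses import dataclass

@dataclass
class RREQ:
    source_addr: str
    source_seq: int    # freshness info about the route back to the source
    broadcast_id: int  # (source_addr, broadcast_id) identifies this RREQ uniquely
    dest_addr: str
    dest_seq: int      # how fresh a route to the destination must be
    hop_cnt: int = 0

class Node:
    def __init__(self, addr: str):
        self.addr = addr
        self.seq = 0             # node sequence number
        self.broadcast_id = 0    # incremented for every RREQ this node originates
        self.routing_table = {}  # dest_addr -> route entry
        self.reverse_path = {}   # source_addr -> neighbour the RREQ arrived from
        self.seen = set()        # (source_addr, broadcast_id) pairs already processed

    def record_reverse_path(self, rreq: RREQ, received_from: str) -> None:
        # Reverse Path Setup: remember which neighbour delivered the RREQ so
        # that a later RREP can be relayed hop-by-hop back to the source.
        self.reverse_path[rreq.source_addr] = received_from
```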
If an intermediate node has a route entry for the desired destination in its routing table, it compares the destination sequence number in its routing table with that in the RREQ. If the destination sequence number in its routing table is less than that in the RREQ, it rebroadcasts the RREQ to its neighbors. Otherwise, it unicasts a route reply packet to the neighbor from which it received the RREQ, provided the same request was not processed previously (this is identified using the broadcast-id and source-addr).
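Continuing the hypothetical Node sketch above, the intermediate-node decision just described can be expressed as follows; unicast_rrep and broadcast are assumed stubs for the actual packet transmission.

```python
def handle_rreq(self, rreq: RREQ, received_from: str) -> None:
    key = (rreq.source_addr, rreq.broadcast_id)
    if key in self.seen:
        return  # duplicate request: discard, which also keeps routes loop-free
    self.seen.add(key)
    self.record_reverse_path(rreq, received_from)
    entry = self.routing_table.get(rreq.dest_addr)
    if entry is not None and entry.dest_seq >= rreq.dest_seq:
        # A sufficiently fresh route is known: reply on the destination's behalf.
        self.unicast_rrep(received_from, rreq)  # assumed stub
    else:
        # Route unknown or too stale: keep flooding the request.
        rreq.hop_cnt += 1
        self.broadcast(rreq)  # assumed stub
```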
Once the RREP is generated, it travels back to the source along the reverse path that was set up while the RREQ traveled to this node. As the RREP travels back to the source, each node along this path sets a forward pointer to the node from which it receives the RREP and records the latest destination sequence number for the requested destination. This is called Forward Path Setup. The other useful information contained in the entries, along with the source and destination sequence numbers, is called soft-state information associated with the route entry. Info about the active neighbors for a route is maintained so that all active source nodes can be notified when a link along a path to the destination breaks. The purpose of the route request expiration timer is to purge the reverse-path routing entries from all the nodes that do not lie on the active route.
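Forward Path Setup admits an equally small sketch in the same illustrative style; RouteEntry and the send stub are assumptions, not a real API.

```python
from dataclasses import dataclass

@dataclass
class RouteEntry:
    next_hop: str
    dest_seq: int
    hop_cnt: int

def handle_rrep(self, rrep, received_from: str) -> None:
    # Forward Path Setup: the neighbour that delivered the RREP becomes the
    # next hop towards the destination; keep only the freshest sequence number.
    entry = self.routing_table.get(rrep.dest_addr)
    if entry is None or rrep.dest_seq > entry.dest_seq:
        self.routing_table[rrep.dest_addr] = RouteEntry(
            next_hop=received_from,
            dest_seq=rrep.dest_seq,
            hop_cnt=rrep.hop_cnt + 1,
        )
    # Relay the RREP one hop closer to the source along the recorded reverse path.
    prev_hop = self.reverse_path.get(rrep.source_addr)
    if prev_hop is not None and rrep.source_addr != self.addr:
        self.send(prev_hop, rrep)  # assumed stub
```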
Interesting concepts of AODV
The concepts of AODV that make it desirable for MANETs with limited bandwidth include the following. Simple: it is simple, with each node behaving as a router and maintaining a simple routing table, and with the source node initiating path discovery requests, making the network self-starting.
Most effective routing info:
After propagating an RREP, if a node receives an RREP with a smaller hop-count, it updates its routing info with this better path and propagates it.
Most current routing info:
The route info is obtained on demand. Also, after propagating an RREP, if a node receives an RREP with a greater destination sequence number, it updates its routing info with this latest path and propagates it.
Loop-free routes: The algorithm maintains loop-free routes by using the simple logic of nodes discarding non-better packets for the same broadcast-id.
Coping with dynamic topology and broken links:
When the nodes in the network move from their places and the topology changes, or links in the active path are broken, the intermediate node that discovers the link breakage propagates an RERR packet, and the source node re-initiates path discovery if it still desires the route. This ensures a quick response to broken links.
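A hedged sketch of this link-breakage handling, again built on the hypothetical Node class above (broadcast_rerr is an assumed stub):

```python
def on_link_break(self, broken_next_hop: str) -> None:
    # Invalidate every route that used the broken link and advertise the
    # failure in an RERR so active sources can re-initiate path discovery.
    affected = [dest for dest, entry in self.routing_table.items()
                if entry.next_hop == broken_next_hop]
    for dest in affected:
        del self.routing_table[dest]
    if affected:
        self.broadcast_rerr(affected)  # assumed stub
```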
Highly Scalable: The algorithm is highly scalable because of its minimal space complexity and the broadcasts avoided when compared with DSDV [11].
Advanced uses of AODV
Because of its reactive nature, AODV can handle the highly dynamic behavior of Vehicular Ad-hoc Networks [26].
Used for both unicasts and multicasts using the 'J' (Join multicast group) flag in the packets [11].
Limitations/Disadvantages of AODV
Overhead on the bandwidth: Bandwidth overhead is incurred compared to DSR: when an RREQ travels from node to node in the process of discovering route info on demand, it sets up the reverse path with the addresses of all the nodes through which it passes and carries all this info along the way.
No reuse of routing info: AODV lacks an efficient route maintenance technique. The routing info is always obtained on demand, including for common-case traffic [12].
It is vulnerable to misuse: the messages can be misused for insider attacks, including route disruption, route invasion, node isolation, and resource consumption [15].
AODV lacks support for high throughput routing metrics:
AODV is designed to support the shortest hop count metric. This metric favors long, low-bandwidth links over short, high-bandwidth links [2].
High route discovery latency: AODV is a reactive routing protocol. This means that AODV does not discover a route until a flow is initiated. The resulting route discovery latency can be high in large-scale mesh networks.
Discussion and Conclusion
Discussion
After reviewing the concept of wireless ad-hoc networks and two routing protocols, namely AODV and DSDV, we would like to make a comparative discussion of both protocols with their pros and cons. Most of the discussion is based on previous studies and implementations done by many authors [2,10,12].
DSDV is a proactive routing protocol, which maintains routes to each and every node in the network, while AODV is a reactive routing protocol which finds the path on demand or whenever the route is required.
Broadcasting in DSDV is done periodically to maintain routing updates, while in AODV only hello messages are propagated to a node's neighbors to maintain local connectivity. The DSDV routing algorithm maintains a sequence number concept for keeping route information up to date; the same concept is adopted by the AODV routing protocol.
Due to the periodic updates being broadcast in DSDV, bandwidth is wasted even when the nodes are stationary. This is not the case with AODV, as it propagates only hello messages to its neighbours.
For sending data to a particular destination, there is no need to find a route, as the DSDV routing protocol maintains all routes in the routing tables of each node, while AODV has to find a route before sending data.
Overhead in DSDV is more when the network is large and it becomes hard to maintain the routing tables at every node. But, in AODV overhead is less as it maintains small tables to maintain local connectivity.
DSDV cannot handle mobility at high speeds due to the lack of alternative routes; hence the routes in the routing table become stale. In AODV it is the other way around, as it finds routes on demand. Throughput decreases comparatively in DSDV as it needs to advertise both periodic updates and event-driven updates; if node mobility is high, event-driven updates occur more often. AODV does not advertise any routing updates and hence its throughput is stable.
Conclusion
The study reveals that the DSDV routing protocol consumes more bandwidth because of the frequent broadcasting of routing updates, while AODV performs better than DSDV as it does not maintain full routing tables at every node, which results in less overhead and more usable bandwidth. From the above chapters, it can be concluded that the DSDV routing protocol works better for smaller networks but not for larger networks. So, my conclusion is that the AODV routing protocol is best suited for general mobile ad-hoc networks, as it consumes less bandwidth and has lower overhead when compared with the DSDV routing protocol.
"year": 2019,
"sha1": "03e23840917ecef6da4786821eb00d78040c5ba5",
"oa_license": "CCBY",
"oa_url": "https://www.ijtsrd.com/papers/ijtsrd24006.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "26debc9d43724f1355b42048a53d0d3652cb2ebb",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
19750309 | pes2o/s2orc | v3-fos-license | Human interleukin 1 induces interleukin 1 gene expression in human vascular smooth muscle cells.
The recognition that cells of the vascular wall can secrete cytokines such as IL-1 suggests new mechanisms for initiating or sustaining inflammatory responses in blood vessels. We report that purified human monocyte-derived IL-1 or recombinant human IL-1 (rIL-1 beta and rIL-1 alpha) induce cultured human smooth muscle cells derived from veins or arteries to synthesize IL-1 beta mRNA and produce and release biologically active IL-1. rIL-1 beta also stimulated the production of PGE2 by smooth muscle cells. Exposure to rIL-1 beta (1-100 ng/ml), or rIL-1 alpha (0.01-10 ng/ml) increased IL-1 beta mRNA levels within 30 min. Actinomycin D (1 microgram/ml) prevented the induction of IL-1 beta mRNA by rIL-1. IL-1 alpha mRNA was detected in SMC treated with cycloheximide (1 microgram/ml) and rIL-1 beta, or cycloheximide alone. rIL-1 alpha and rIL-1 beta produced maximal levels of IL-1 beta mRNA after 4 h, and intracellular IL-1 biological activity after 6 h of exposure. Release of IL-1 activity in the extracellular medium began after 1 h of incubation with rIL-1 beta or rIL-1 alpha, and continued for up to 24 h. Anti-TNF antiserum that neutralized the biological activity of rTNF did not affect rIL-1-induced production of IL-1 beta mRNA or IL-1 release, suggesting that the release of TNF does not mediate these processes. Several experimental approaches indicated that the release of IL-1 by smooth muscle cells was not due to endotoxin contamination of the IL-1 preparations. Anti-IL-1 antiserum blocked the induction of smooth muscle cell IL-1 gene expression by rIL-1 beta. Polymyxin B did not prevent IL-1-induced IL-1 expression by these cells, but blocked the effect of endotoxin. Heat treatment destroyed the stimulatory capacity of rIL-1 beta, but did not affect the ability of bacterial endotoxin to induce IL-1 expression. The production of IL-1 by human vascular smooth muscle cells was not due to contamination of the cell cultures with blood monocytes, inasmuch as treatment with an antimonocyte antibody (anti-Mo2) and complement did not alter IL-1 beta mRNA content or the amount of IL-1 released from the cells in response to endotoxin, rIL-1 alpha, or rIL-1 beta.(ABSTRACT TRUNCATED AT 400 WORDS)
IL-1 is a multipotent inflammatory mediator that may play a central role in vascular pathophysiology. For example, IL-1 promotes the adhesion of all classes of leukocytes to cultured endothelial cells (1)(2)(3) and increases their production of procoagulant activity (4), plasminogen activator inhibitor (5,6), and prostaglandins (7,8). These actions of IL-1 indicate its importance in mediating disturbances of endothelial function associated with vascular injury and inflammation. However, there is scant information regarding possible effects of IL-1 on vascular smooth muscle cells, the most abundant cell type in most vessels.
Vascular wall cells are not only targets for the action of IL-1, but can also produce IL-1-like molecules. Two distinct genes encode forms of IL-1 with similar biological activities but disparate amino acid sequences: IL-1α has an isoelectric point of 5, and IL-1β (the predominant form produced by human monocytes) has an isoelectric point of 7 (9,10). Recently, we have described the inducible expression of both of these IL-1 genes in endothelial cells and smooth muscle cells cultured from adult human blood vessels (11-13). In these cell types, bacterial endotoxins and recombinant human tumor necrosis factor/cachectin (rTNF) induced the appearance of IL-1β messenger RNA (mRNA) and release of IL-1. We now report that IL-1 itself is a potent stimulus for IL-1 gene expression in vascular smooth muscle cells from adult humans. The demonstration that smooth muscle is both a source of IL-1 and a target for this cytokine raises the possibility of novel feedback loops in vascular pathophysiology. These findings expand the known responses of vascular smooth muscle cells, and also broaden the scope of the actions of the IL-1 polypeptides on the blood vessel wall.
Materials and Methods
Cell Culture.
Human saphenous vein smooth muscle cells (HSVSMC) were isolated from outgrowths of explants of unused portions of veins harvested for coronary artery bypass surgery. The endothelium was removed enzymatically, and the adventitia was removed by blunt and sharp dissection before culture of the explants (12). This use of normally discarded tissues was approved by the Human Investigation Review Committee of New England Medical Center, Boston, MA . Human aortic and iliac arterial smooth muscle cells were prepared by enzymatic dissociation from tissues obtained from organ donors, with the cooperation of the New England Organ Bank . The adventitia and abluminal portions of the tunica media were removed before dissociation of the tissue with collagenase (CLS III, 0.2% ; Cooper Biomedical, Inc., Malvern, PA). The cells were maintained in DME that contained glucose (5 .5 mM), Hepes (25 mM), and FCS (HyClone Laboratories, Logan, UT) (10%). The morphology and growth pattern of the cells determined by phase-contrast microscopy were typical of cultured smooth muscle cells (14,15). Cultures were used in the third to eighth passage.
Assay of IL-1 Biological Activity. Medium collected from cultures of HSVSMC (conditioned medium) was centrifuged (10 min at 1,000 g) and stored at -70°C until assay. For determination of intracellular IL-1, cell layers were washed three times with DME, covered with 1 ml of FCS (5%) in RPMI 1640 medium, and frozen at -20°C. After thawing, the plates were scraped with a rubber policeman, the lysate was centrifuged (15 min at 1,000 g), and the supernatants were stored at -70°C until assay. IL-1 activity was determined by the murine thymocyte costimulation assay (31). At least two dilutions (1:1 and 1:10) of test samples were incubated in 96-well plates (Costar, Inc., Cambridge, MA) with 5 × 10⁵ thymocytes from C3H/HeJ mice (6-8 wk old; The Jackson Laboratory, Bar Harbor, ME) in a final volume of 200 µl of RPMI 1640 containing FCS (5%), PHA (1 µg/ml; Burroughs Wellcome, Research Triangle Park, NC), antibiotics, glutamine (2 mM), and 2-ME (5 × 10⁻⁵ M). Affinity-purified human monocyte IL-1 was included in each assay as a positive control. After 48 h, [³H]thymidine (1 µCi/well, 20 or 6.7 Ci/mmol; New England Nuclear) was added to the wells, and cells were harvested 18-24 h later onto glass fiber filters using a commercial cell harvester (Cambridge Technology, Cambridge, MA). The filters were counted in Econosolve HP/b scintillation fluid (Beckman Instruments, Inc., Palo Alto, CA), and quench corrections were performed by use of an external standard. The results are reported as the mean ± standard deviation of disintegrations per minute determined on triplicate cultures. Because conditioned medium contained the rIL-1 used to stimulate smooth muscle cells, as well as IL-1 produced by these cells, both unconditioned and conditioned media were assayed. Smooth muscle cell IL-1 production is the difference between IL-1 activity in conditioned medium (HSVSMC-derived IL-1 + rIL-1) and IL-1 in unconditioned medium (rIL-1 alone).
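The background correction described in the last two sentences is plain subtraction; a hypothetical helper (the function name and example numbers are illustrative only) makes the bookkeeping explicit:

```python
def net_il1_activity(conditioned_dpm: float, unconditioned_dpm: float) -> float:
    """Smooth-muscle-derived IL-1 activity as [3H]thymidine uptake (dpm).

    conditioned_dpm:   response to medium containing HSVSMC-derived IL-1 + rIL-1
    unconditioned_dpm: response to matched medium with the rIL-1 stimulus alone
    """
    return conditioned_dpm - unconditioned_dpm

# Example with made-up values:
print(net_il1_activity(18_000.0, 12_500.0))  # -> 5500.0 dpm attributable to HSVSMC
```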
PGE2 Assay. HSVSMC were cultured in 24-well dishes (Costar) in 1 ml of DME/10% FCS. The cells were incubated for 1-24 h in the presence or absence of indomethacin (1 µg/ml) and rIL-1β (100 ng/ml). The medium was centrifuged (1,000 g for 15 min), acidified with HCl (final pH 3), and extracted with 3 vol of ethyl acetate. The organic phase was dried, and the residue was resuspended in phosphate buffer and assayed for PGE2 using an RIA kit, according to the manufacturer's instructions (Seragen Inc., Boston, MA).
Endotoxin Determination. Levels of bacterial endotoxin in tissue culture media were determined with the use of the chromogenic limulus lysate assay (QCL 1000, M. A. Bioproducts, Walkersville, MD). This assay is sensitive to 10 pg/ml of bacterial endotoxin. Very low concentrations of bacterial endotoxin (<1 ng/ml) can induce IL-1β gene product in human vascular smooth muscle cells (12, and data not shown). In some cases we have found IL-1β mRNA in HSVSMC not deliberately exposed to endotoxin, and have subsequently measured concentrations of endotoxin >100 pg/ml in the tissue culture media. Basal IL-1 production by HSVSMC could be avoided by culturing cells in medium selected for low levels of endotoxin contamination (<40 pg/ml). In addition, the experiments reported here were routinely carried out in the presence of the endotoxin antagonist polymyxin B (10 µg/ml).
Treatment of Cells with Antimonocyte Antibody and Complement. To test whether IL-1 production by HSVSMC might be due to contamination with blood monocytes, cultures were treated with mAb to the monocyte antigen Mo2 (anti-Mo2) and complement before stimulation with LPS or rIL-1. The cells were incubated with anti-Mo2 (1:150 dilution in DME/10% heat-inactivated FCS), or medium without the antibody, for 30 min at 4°C. The medium was aspirated and replaced with a 1:4 dilution of baby rabbit complement (Pel-Freez Biologicals, Rogers, AR). After incubation for 1 h at 37°C the cells were washed three times with HBSS and cultured for 24 h in DME/10% FCS. In independent experiments, such treatment completely lysed adherent human peripheral blood mononuclear cells, as judged by phase-contrast microscopy. The HSVSMC were then incubated for 6 h in fresh medium containing LPS (1 µg/ml), or for 24 h with rIL-1α (10 ng/ml) or rIL-1β (100 ng/ml). IL-1 was measured in unconditioned and conditioned media, and IL-1β mRNA was measured by Northern analysis.
biologically active IL-1, determined as thymocyte costimulation activity (data not shown).
This monocyte-derived IL-1 contained a mixture of both the neutral and acidic IL-1 species (20). rIL-1α and rIL-1β are homogeneous preparations that express the biological activities of the corresponding monocyte-derived IL-1 species (18,22). rIL-1β (0.01-100 ng/ml) caused a concentration-dependent increase in IL-1β mRNA levels (Fig. 2A) and secretion of IL-1 (Fig. 2B), with a threshold between 100 pg/ml and 1 ng/ml. The increase in IL-1 mRNA was selective, in that levels of β-tubulin mRNA were not altered substantially by rIL-1β (Fig. 2B). rIL-1α (0.01-50 ng/ml) also induced IL-1β mRNA in these cells as well as the release of thymocyte costimulation activity (data not shown). rIL-1α was more potent than rIL-1β as an inducer of IL-1 production by HSVSMC, in parallel with an increased potency in the murine thymocyte costimulation assay. This differential potency between the two species of recombinant IL-1 may be due, at least in part, to the sensitivity of rIL-1β to oxidation of sulfhydryl groups associated with a loss of biological activity. The response of smooth muscle cultures to IL-1 was not limited to venous or explant-derived cells. Arterial smooth muscle cells isolated by enzymatic dissociation from human aorta or iliac artery responded to rIL-1α and rIL-1β in the same manner as cells isolated from saphenous vein (data not shown).
HSVSMC treated with rIL-1α or rIL-1β did not contain mRNA for IL-1α, determined by using both an end-labeled ³²P-synthetic oligonucleotide (42-mer) probe and a ³²P-labeled cDNA probe (data not shown). Under the same hybridization conditions, these probes did detect IL-1α transcripts in RNA from LPS-stimulated human monocytes. We have only observed IL-1α mRNA in smooth muscle cells treated with cycloheximide alone (1 µg/ml) or with rIL-1 and cycloheximide (data not shown). This result is in accord with our previous finding (12) that IL-1α mRNA appears in HSVSMC exposed to cycloheximide alone, and that the LPS-induced increase in IL-1 mRNA levels is augmented by this inhibitor. Using less stringent washing conditions (1× SSC; 55°C) we have detected IL-1α mRNA in rIL-1-stimulated HSVSMC (data not shown).
FIGURE 3. Time course of human rIL-1-induced IL-1β mRNA synthesis in HSVSMC. HSVSMC were incubated in DME/10% FCS containing polymyxin B (10 µg/ml) with rIL-1β (100 ng/ml, top) or rIL-1α (10 ng/ml, bottom) for the time periods indicated. The incubation with rIL-1α included indomethacin (1 µg/ml). RNA (20 µg) was electrophoresed, transferred to a nylon membrane, and hybridized with ³²P-labeled IL-1β probe.
Induction of IL-1 by rIL-1 Is Rapid, Transient, and Dependent on RNA Synthesis. Exposure to rIL-1β (100 ng/ml) increased IL-1β mRNA in HSVSMC within 1 h (Fig. 3, top). The amount of IL-1β mRNA was maximal after 4 h of incubation, and declined after 24 h of continued exposure to rIL-1β. The time course of the appearance of IL-1β mRNA induced by rIL-1α (10 ng/ml) paralleled that found in response to rIL-1β (Fig. 3, bottom). In further experiments using shorter incubation times, IL-1β mRNA appeared as early as 30 min after exposure to rIL-1β (data not shown). Incubation of HSVSMC for 4 h simultaneously with the RNA synthesis inhibitor actinomycin D (1 µg/ml) and rIL-1β (100 ng/ml) inhibited the effect of rIL-1β on IL-1β mRNA levels by 99%, measured by scanning densitometry of an autoradiogram of a Northern blot analysis. This concentration of actinomycin D inhibited [³H]uridine incorporation by HSVSMC into material insoluble in perchloric acid (0.2 M) by 93% (data not shown). These results suggest that rIL-1 affects IL-1β mRNA levels by increasing RNA synthesis. The synthesis of IL-1β mRNA resulted in the release of biologically active IL-1 (Fig. 4). Extracellular IL-1 activity, in excess of that due to the stimulus alone, rose after 1 h of exposure to rIL-1β, continued to increase for up to 8 h, and declined by 24 h (Fig. 4A). This pattern suggested that the smooth muscle cells produce an inhibitor of the thymocyte costimulating ability of IL-1, as do endothelial cells (32). rIL-1α (10 ng/ml) induced a similar time course of IL-1 release, except that IL-1 activity was not decreased at 24 h (Fig. 4B). The cyclooxygenase inhibitor indomethacin was included in the incubation medium in this latter experiment. Preliminary studies had indicated that indomethacin increased the amount of IL-1 activity measured in conditioned medium from LPS-stimulated HSVSMC. Prostaglandins, in particular PGE2, can suppress the response of thymocytes to IL-1 (33), and we postulated that the decreased IL-1 activity in conditioned medium after 24 h was due to accumulated PGE2.
rIL-1 Stimulates PGE2 Production by HSVSMC. We therefore determined PGE2 levels in the supernatants of HSVSMC exposed to medium alone or to rIL-1β. rIL-1β (100 ng/ml) stimulated PGE2 production in a time-dependent manner. Indomethacin (1 µg/ml) prevented this effect (Fig. 5). In addition, conditioned medium from indomethacin-treated cells displayed more IL-1 activity than medium from cells incubated without the inhibitor (Fig. 6). Indomethacin alone did not affect the activity of rIL-1β in the murine thymocyte costimulation assay (Fig. 6), nor did it affect IL-1β mRNA levels in HSVSMC. Laser scanning densitometry of the autoradiogram of a Northern blot hybridized with the IL-1β probe yielded 1.28 ± 0.01 arbitrary units (U) for HSVSMC exposed to rIL-1β alone, and 1.20 ± 0.14 U for cells incubated with rIL-1β and indomethacin (mean ± SD of three scans at separate positions of each band). These data suggest that rIL-1-stimulated HSVSMC produce PGE2, which inhibits the responsiveness of thymocytes to IL-1. Alternatively, indomethacin may decrease the synthesis of a specific IL-1 inhibitor.
FIGURE 6. Effect of indomethacin on IL-1 production by HSVSMC. Cells were incubated for 24 h in DME/10% FCS containing polymyxin B (10 µg/ml) and rIL-1β (100 ng/ml), with or without indomethacin (1 µg/ml). Unconditioned and conditioned media (1:1 dilution) were assayed for IL-1. [³H]Thymidine incorporation in response to PHA alone has been subtracted and was 16,257 ± 2,004 dpm for medium without indomethacin, and 16,326 ± 862 dpm for medium containing indomethacin.
FIGURE 7. Intra- and extracellular IL-1 in HSVSMC stimulated with rIL-1β. Cells were grown in 60-mm culture dishes and stimulated for the indicated time periods with rIL-1β (100 ng/ml) in 3 ml of DME/10% FCS containing polymyxin B (10 µg/ml), with (B) or without (A) indomethacin (1 µg/ml). Conditioned medium was aspirated and the monolayer was washed with medium without rIL-1 and freeze-thawed in 1 ml of RPMI/5% FCS. Intracellular (dark bars) and extracellular (light bars) IL-1 were determined in the murine thymocyte costimulation assay (1:10 dilution). [³H]Thymidine incorporation in the presence of PHA alone has been subtracted, and was 10,550 ± 524 dpm for medium without indomethacin, and 11,615 ± 931 dpm for medium with indomethacin.
rIL-1 Induces Intracellular IL-1 Synthesis and Extracellular IL-1 Release. The intracellular synthesis and extracellular release of IL-1 by phagocytic leukocytes are temporally distinct and stimulus specific (34). We therefore studied the kinetics of the appearance of intra- and extracellular IL-1 in HSVSMC exposed to rIL-1β. In the absence of indomethacin, rIL-1β (100 ng/ml) produced maximal levels of intracellular IL-1 after 6 h that decreased by 24 h (Fig. 7A). Extracellular IL-1 levels followed a similar time course. In parallel experiments with the same isolate of cells, indomethacin (1 µg/ml) did not alter the pattern of appearance of intracellular IL-1 activity, but caused extracellular IL-1 activity to continue to increase over a 24-h period (Fig. 7B). In addition, indomethacin increased the amount of intra- and extracellular IL-1 activity, as described above (note the difference in the scale of the ordinates in Fig. 7, A and B). These kinetic data suggest that HSVSMC respond rapidly to human IL-1 by synthesizing IL-1β mRNA (peak at 4 h), followed by intracellular IL-1 synthesis (peak at 6 h) and the release of IL-1. rIL-1 also induces release of PGE2 more slowly (peak ≥ 24 h), which masks measurable IL-1 activity in conditioned medium from later time points.
FIGURE 8. Inhibition of rIL-1β-induced IL-1 production by HSVSMC by anti-IL-1 antiserum and heat treatment, but not by polymyxin B. HSVSMC were incubated for 4 h in DME/10% FCS containing indomethacin (1 µg/ml) and rIL-1β (10 ng/ml) or purified bacterial LPS (endotoxin, 10 ng/ml). Polymyxin B (PB, 10 µg/ml) was included where indicated. Heated endotoxin and rIL-1 were incubated at 95°C for 1 h. Rabbit antiserum to human monocyte-derived IL-1 (anti-IL-1) or nonimmune rabbit serum (NRS) was included at a final dilution of 1:100. Conditioned and unconditioned media (1:10 final dilution) were assayed in the murine thymocyte costimulation assay. Data are [³H]thymidine incorporation (mean ± SD, triplicate wells) in response to conditioned media, corrected by subtraction of incorporation due to the corresponding unconditioned medium (13,596 ± 1,016 dpm for medium alone, and 20,182 ± 793 dpm for medium containing rIL-1).
Induction of IL-1 Gene Product by rIL-1β Is Not Due to Endotoxin Contamination. HSVSMC respond to endotoxin concentrations <1 ng/ml by producing IL-1β mRNA and releasing IL-1 (12). The terms endotoxin and LPS are not synonymous, although most of the endotoxic properties of Gram-negative bacterial extracts are due to the LPS component (35). Here, we use endotoxin to refer to uncharacterized bacterial products, and LPS to designate a well-characterized, purified preparation. Although the endotoxin content of the rIL-1 used in this study was in the picogram per milligram range (18), we used several approaches to exclude the possibility that rIL-1-induced IL-1 synthesis in HSVSMC might be due to bacterial endotoxin in tissue culture media or IL-1 preparations.
Smooth muscle cells were incubated for 4 h with rIL-1β or purified LPS (10 ng/ml each), either alone or with the addition of the LPS antagonist polymyxin B (10 µg/ml) or rabbit anti-IL-1 antiserum (1:100 final dilution). IL-1 release (Fig. 8) and IL-1β mRNA (data not shown) were measured. Polymyxin B blocked the ability of purified LPS to increase IL-1β mRNA levels and IL-1 release but did not affect this activity of rIL-1β. Polymyxin B does not inhibit all endotoxins, however (36), and we used a rabbit antiserum to human monocyte IL-1 to confirm the specificity of the response of HSVSMC to rIL-1β. Antiserum to monocyte IL-1 blocked the production of IL-1β mRNA and IL-1 release induced by rIL-1β but not by LPS. Furthermore, incubating rIL-1β for 1 h at 95°C destroyed its ability to increase IL-1β mRNA levels and IL-1 release, whereas LPS was unaffected by such heat treatment. These data indicate that rIL-1-induced IL-1 production by HSVSMC is not an artifact of endotoxin contamination.
IL-1 Production by HSVSMC Is Not Due to Contamination by Blood Monocytes. Blood monocytes, the prototypical source of IL-1, are potential contaminants of blood vessel-derived cell cultures (37). Our primary smooth muscle cell cultures were used in the third or subsequent passage, by which time any surviving monocytes should have acquired macrophage-like characteristics. This differentiation is associated with loss of the ability to secrete IL-1 (35). Nonetheless, we addressed this possibility directly by treating HSVSMC sequentially with mAb to the human monocyte antigen Mo2 (23,24) and complement. In parallel experiments, this treatment lysed plastic-adherent human peripheral blood mononuclear cells (data not shown). HSVSMC were treated with complement, with or without preincubation with anti-Mo2, and exposed to LPS (1 µg/ml), a potent stimulus for IL-1 production by monocytes. Exposure to anti-Mo2 and complement did not impair the ability of HSVSMC to produce IL-1 in response to LPS when compared with the ability of cells exposed to complement alone (Table I). Similar results were found with rIL-1α or rIL-1β as IL-1 secretagogues (Table I). IL-1β mRNA levels were similar in both groups of cells (data not shown). The data in Table I also show that concentrations of rIL-1α or rIL-1β that are equipotent in the thymocyte costimulation assay induce the release of approximately equal amounts of IL-1 activity from HSVSMC.
rIL-1-induced IL-1 Production Is Not Mediated via TNF Release. TNF induces the production of IL-1 in vascular smooth muscle cells (13), and IL-1 may also induce the release of TNF in some cell types. We therefore addressed the possibility that rIL-1-induced IL-1 production by HSVSMC was mediated by the induction of TNF release. Cells were incubated with rIL-1α (10 ng/ml) or rIL-1β (100 ng/ml) for 24 h in the presence of rabbit anti-TNF antiserum or nonimmune serum (both at 1:100 dilution). This anti-TNF antiserum neutralized the cytotoxicity of rTNF for the murine fibroblast line L929 at a 1:400 dilution (Ikejima, T., and C. A. Dinarello, unpublished data), and a 1:200 dilution detected 1 ng of rTNF in Western blots (data not shown). rIL-1-induced IL-1 release was not inhibited by the anti-TNF antiserum (Fig. 9). In addition, the induction of IL-1β mRNA in HSVSMC incubated for 4 h with rIL-1β (100 ng/ml) was unaffected by anti-TNF antiserum at a concentration (1:100 dilution) that blocked TNF-induced IL-1β mRNA production (data not shown). These data indicate that rIL-1-induced IL-1 production is not mediated via the release of TNF.
FIGURE 9. Anti-TNF antiserum does not inhibit rIL-1-induced IL-1 release from HSVSMC. Cells were incubated for 24 h in DME/10% FCS containing indomethacin (1 µg/ml) and rIL-1α (10 ng/ml) or rIL-1β (100 ng/ml) alone, or with the addition of either anti-TNF antiserum or nonimmune serum (1:100 dilution). After 24 h, IL-1 activity in unconditioned (light bars) and conditioned media (dark bars) (1:1 dilution) was measured. [³H]Thymidine incorporation in the presence of PHA alone was 12,869 ± 1,523 dpm, and has not been subtracted from the data.
Discussion
The manifold effects of IL-1 on vascular wall cells suggest an important role for this mediator in blood vessel pathology. IL-1 induces morphologic changes in cultured endothelial cells (38) and the production of procoagulant activity (4), plasminogen activator inhibitor (5, 6), prostaglandins (7,8), and platelet-activating factor (39). In addition, IL-1 enhances adhesion to endothelial cells of all classes of leukocytes studied (1)(2)(3). Although the repertoire of endothelial cell responses to IL-1 has been widely studied, the effect of this mediator on human vascular smooth muscle cells has received scant attention. We report here the surprising observation that IL-1 induces IL-1 gene expression in human vascular smooth muscle . To our knowledge, this is the first definitive demonstration that IL-1 may regulate its own synthesis and release in any cell type.
The threshold concentration for the production of IL-1 in HSVSMC by rIL-1β was <1 ng/ml, which is at the low end of the concentration range that elicits PGE2 production by human dermal fibroblasts and IL-2 generation from human T cells (18). Concentrations of rIL-1α and rIL-1β that were equipotent in the thymocyte costimulation assay induced similar amounts of IL-1 release (Table I and Fig. 9). rIL-1-induced IL-1 mRNA production and the extracellular release of IL-1 began within 1 h of stimulation. Experiments with an antiserum that neutralized the biological activity of rTNF excluded a role for extracellular TNF in the mediation of rIL-1-induced IL-1 production.
Our finding that smooth muscle cells exposed to monocyte-derived IL-1 for short periods in vivo could respond by releasing further IL-1 may have important implications for the pathogenesis of atherosclerosis and vasculitis. Human atherosclerotic plaques contain smooth muscle cells, macrophages, and T cells in a specific spatial distribution (40,41). Smooth muscle cells in these plaques, but not in normal arteries, express class II transplantation antigens, an indicator of immune activation (42). Many of the T lymphocytes in plaques express HLA/DR antigens (40). These considerations indicate that the complicated human atheroma is not a static accumulation of lipid, calcium, and extracellular matrix, but a site of active inflammatory and immunologic reaction. Our present findings suggest novel humoral interactions between these cells that may contribute to the formation of this localized lesion.
In cholesterol-fed animals, monocytes adhere to vascular endothelium and migrate into the developing atherosclerotic plaque (43)(44)(45). Vascular smooth muscle cells and endothelial cells produce factors that are chemotactic for monocytes (46,47). Indeed, IL-1 itself is a potent monocyte chemoattractant (9), and IL-1 secreted by vessel wall cells could recruit monocytes to areas of local derangement. IL-1 produced locally by recruited monocytes, endothelial cells, or smooth muscle cells could stimulate further IL-1 and PGE2 production by vascular smooth muscle cells, with concomitant alterations of lymphocyte, smooth muscle, and endothelial cell functions. IL-1 causes T cells to produce IL-2 and IFN-γ, cytokines that activate T cells themselves and induce the expression of IL-2 receptors and class II antigens on their surface (48). Activated T cells and IFN-γ also induce class II antigen expression on cultured endothelial cells and smooth muscle cells (37,49). Thus, local production of IL-1 in the vessel wall may account for the expression of class II antigens on plaque cells observed in vivo (40,42). This study also shows that rIL-1β stimulates PGE2 release by HSVSMC, in accordance with a previous report (8) that monocyte-derived IL-1 increased prostaglandin synthesis in human arterial smooth muscle cells. In addition to modulating platelet function and vascular tone, prostaglandins regulate smooth muscle cell cholesterol metabolism (50), and may be chemoattractant for leukocytes (46).
Our observation that IL-1 induces further IL-1 production from smooth muscle cells also suggests a mechanism for the amplification and perpetuation of vasculitis. Vascular smooth muscle cells from MRL/lpr mice, which spontaneously develop a genetically determined autoimmune vasculitis, express class II antigens and produce an IL-1-like factor (51). The vasculitic lesion is characterized by perivascular cuffing and infiltration of mononuclear cells. Moyer and Reinisch (51) suggested that smooth muscle cell-derived IL-1 stimulated the influx of monocytes into these lesions. The potential role of IL-1 in human vascular inflammation in vivo is illustrated by recent studies (52) using an mAb, H4/18, raised against IL-1-treated endothelial cells. IL-1 or TNF/cachectin induce H4/18 binding sites on endothelial cells, but not on other cells tested. H4/18 does not bind normal endothelium in skin or other tissues. However, in human delayed hypersensitivity reactions and other lesions associated with the presence of activated lymphocytes and macrophages, microvascular endothelium does bind H4/18 (53).
These examples show how IL-1 could be involved in the pathogenesis of important pathologic processes involving the blood vessel wall. The common ability of both vascular endothelial cells and smooth muscle cells to secrete IL-1, and the stimulation of IL-1 synthesis and release by IL-1 itself reported here, may thus be crucial factors in the initiation, maintenance, and propagation of these pathologic processes in vivo.
Summary
The recognition that cells of the vascular wall can secrete cytokines such as IL-1 suggests new mechanisms for initiating or sustaining inflammatory responses in blood vessels. We report that purified human monocyte-derived IL-1 or recombinant human IL-1 (rIL-1β and rIL-1α) induce cultured human smooth muscle cells derived from veins or arteries to synthesize IL-1β mRNA and produce and release biologically active IL-1. rIL-1β also stimulated the production of PGE2 by smooth muscle cells. Exposure to rIL-1β (1-100 ng/ml), or rIL-1α (0.01-10 ng/ml), increased IL-1β mRNA levels within 30 min. Actinomycin D (1 µg/ml) prevented the induction of IL-1β mRNA by rIL-1. IL-1α mRNA was detected in SMC treated with cycloheximide (1 µg/ml) and rIL-1β, or cycloheximide alone. rIL-1α and rIL-1β produced maximal levels of IL-1β mRNA after 4 h, and intracellular IL-1 biological activity after 6 h of exposure. Release of IL-1 activity into the extracellular medium began after 1 h of incubation with rIL-1β or rIL-1α, and continued for up to 24 h. Anti-TNF antiserum that neutralized the biological activity of rTNF did not affect rIL-1-induced production of IL-1β mRNA or IL-1 release, suggesting that the release of TNF does not mediate these processes. Several experimental approaches indicated that the release of IL-1 by smooth muscle cells was not due to endotoxin contamination of the IL-1 preparations. Anti-IL-1 antiserum blocked the induction of smooth muscle cell IL-1 gene expression by rIL-1β. Polymyxin B did not prevent IL-1-induced IL-1 expression by these cells, but blocked the effect of endotoxin. Heat treatment destroyed the stimulatory capacity of rIL-1β, but did not affect the ability of bacterial endotoxin to induce IL-1 expression. The production of IL-1 by human vascular smooth muscle cells was not due to contamination of the cell cultures with blood monocytes, inasmuch as treatment with an antimonocyte antibody (anti-Mo2) and complement did not alter IL-1β mRNA content or the amount of IL-1 released from the cells in response to endotoxin, rIL-1α, or rIL-1β. IL-1 production by smooth muscle cells, the most abundant cell type in the blood vessel wall, may amplify and sustain local inflammatory responses in vasculitis, allograft rejection, atherosclerosis, and vascular responses to injury or pathogens in general.
"year": 1987,
"sha1": "ea0a9cf64c5da726258155da6ac17ad9a1ab4004",
"oa_license": "CCBYNCSA",
"oa_url": "http://jem.rupress.org/content/165/5/1316.full.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "ea0a9cf64c5da726258155da6ac17ad9a1ab4004",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
55620407 | pes2o/s2orc | v3-fos-license | Effects of a slow harmonic displacement on an Atomic Force Microscope system under Lennard-Jones forces
We focus in this paper on the modeling and dynamical analysis of a tapping mode atomic force microscope (AFM). The microbeam is subjected to a low frequency harmonic displacement of its base and to the Lennard-Jones (LJ) forces at its free end. Static and modal analyses are performed for various gaps between the tip of the microbeam and a sample. The Galerkin method is employed to reduce the equations of motion to a fast-slow dynamical system. We show that the dynamics of the AFM system is governed by the contact and the noncontact invariant slow manifolds. The tapping mode is triggered via two saddle-node bifurcations of these manifolds. Moreover, the contact time is computed and the effects of the base motion amplitude and the initial gap are discussed.
Introduction
The Atomic Force Microscope (AFM) is a scanning probe microscope that is used as a nano-scale tool for manipulation and characterization in nanosciences [1]. It can be used in a broad spectrum of applications such as imaging, nanolithography, electronics, and chemical and biological analysis [2,3]. It is based mainly on a vibrating microcantilever with a nano-scale tip that interacts with a sample surface via intermolecular forces [4]. Indeed, understanding AFM vibrations is central to the correct interpretation of the AFM outputs. In the present paper, a Lennard-Jones (LJ) force [4] is used to model the highly nonlinear tip-surface interactions. Several studies investigated the effects of resonant harmonic external and/or parametric forcings on an AFM subjected to the LJ forces [5][6][7][8][9] in the contact, noncontact and tapping modes. The latter was the dominant imaging mode for most scanning probe microscopes during the last decade [2]. It is based mainly on resonant excitations of the cantilever with a feedback loop keeping the cantilever vibration amplitude constant. This mode minimizes the shear forces, present in the contact mode, that can be destructive to the tip and samples. It overcomes some deficiencies of the noncontact mode by improving the resolution and enabling the measurement of mechanical properties [10]. However, operating the cantilever near its resonances can cause complex dynamics due to the nonlinearities. As a remedy, nonresonant tapping mode techniques are used. Thus, AFM with a low excitation frequency compared to the fundamental natural frequency of the cantilever is used, for instance, in the pulsed-force mode AFM [11] and the peak-force AFM [12]. Indeed, the low frequency excitation lowers the tapping force, limiting the tip-sample contact areas and minimizing the loss of resolution. The present work is focused on the dynamics of an AFM system under LJ forces and a very slow harmonic base displacement. Consequently, the system can be viewed as a fast-slow system with dynamics on two time scales: one ruled by the natural frequencies of the system and the other by the low frequency of the base displacement. Solutions of the system follow the stable invariant slow manifolds in large regions of the phase space. For more information on fast-slow systems see for instance [13,14]. In fact, two stable invariant slow manifolds coexist: one corresponds to the contact mode and the other to the noncontact mode. These two stable slow manifolds undergo dynamic saddle-node bifurcations (through collision with an unstable slow manifold) when the amplitude of the base displacement is varied. These dynamic bifurcations rule the contact time between the tip and the sample, and they determine the operational mode of the AFM: contact, noncontact and tapping modes, respectively. This work uses a continuous model of the AFM system and can be viewed as a continuation of a previous paper by Lakrad [15] where a lumped-mass model was used. The present paper is organized as follows: in section 2 we derive a continuous nonlinear model of the microcantilever under the LJ force and the base displacement using the Hamilton principle. Then, the static configurations and the corresponding natural frequencies and mode shapes are investigated in section 3. In section 4, the Galerkin method is used to reduce the equations of motion to a fast-slow system. Then, the slow invariant manifolds are computed and the conditions of existence of the tapping mode and the contact time are determined.
Mathematical modelling
The classical beam theory based on the Euler-Bernoulli assumptions is used to develop a continuous model of an AFM probe, of length L, operating in air. As shown in Fig. 1, Z denotes the tip/sample separation distance in the reference configuration and w(x, t) + y(t) indicates the total deflection of the microcantilever, where w(x, t) is the deflection of the microcantilever relative to a non-inertial reference frame attached to the base. The base excitation from a dither piezo is assumed to be a vertical harmonic displacement; that is, y(t) = Y* cos(Ωt). The cantilever-tip-sample interaction is modelled by an LJ force F_LJ between a sphere of radius R and a flat surface [4,7], of the standard sphere-flat form F_LJ(Z) = A_1 R/(180 Z^8) - A_2 R/(6 Z^2), where A_1 and A_2 are the Hamaker constants for the repulsive and attractive potentials, respectively. The attractive part of the LJ force corresponds to the van der Waals force [4]. In spite of its simplicity, this interaction model captures generic properties present in the near-field interactions [9]. In the present work, the LJ force F_LJ is assumed to be the unique source of nonlinearity of the system. Using the Hamilton principle, the nondimensional form of the equation of motion and the associated boundary conditions are given by Eqs. (2)-(4), where the dots and the primes denote derivatives with respect to the nondimensional time τ = ωt and the nondimensional space X = x/L, respectively. The displacements are normalized with respect to the initial gap Z. The other nondimensional quantities involve E, I, A and ρ, which are, respectively, the cantilever modulus of elasticity, second moment of area, cross-section area, and material density. In addition, c denotes the damping coefficient. The nondimensional instantaneous tip/sample separation is η(τ) = 1 - W(1, τ) - Y_b(τ). Numerical applications are carried out for the case of the interaction of a soft monocrystalline silicon microcantilever with the (111) reactive face of a flat silicon sample [9]. Indeed, such a soft cantilever can be used to minimize the tapping force and sample penetration, especially for soft samples. Moreover, in order to operate in the tapping mode it is recommended that the sample should have a low adhesion to the surface. All the key system parameters of the AFM system are listed in table 1.
Static and modal analyses
In this section we solve the linearized undamped eigenvalue problem associated with Eqs. (2)-(4). For that, we first calculate the static deflections W*(X) obtained from Eqs. (2)-(4) by dropping the time derivatives and the base excitation. The static tip/sample separation distance η* is defined by η* = 1 - W*(1), and is given by solving a ninth-order algebraic equation (see the sketch below). In Fig. 2, we show the variation of η* versus the initial tip/sample separation Z. We can distinguish two regions: the first region corresponds to Z < Z* = 1.7 nm, where the system is repulsive since η* > 1. The second region corresponds to Z > Z*, which is attractive (since η* < 1). In this second region, for 6.6 nm < Z < 9.09 nm, there is coexistence of three static solutions, two stable and one unstable. This region of bistability is located between the points A and B where saddle-node bifurcations occur. The lower and upper stable equilibria correspond, respectively, to the contact and noncontact configurations. Moreover, for high values of Z the effect of the attractive van der Waals force becomes weaker till it asymptotically vanishes, and consequently η* → 1.
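The ninth-order equation can be motivated schematically. Balancing the cantilever's elastic restoring force against a sphere-flat LJ tip force at equilibrium, with k_eff, Γ₁ and Γ₂ as illustrative lumped coefficients (not the paper's notation), gives a rational equation that multiplication by (η*)⁸ turns into a degree-nine polynomial:

```latex
% Hedged sketch; k_eff, \Gamma_1, \Gamma_2 are illustrative lumped groups.
k_{\mathrm{eff}}\,\bigl(1-\eta^{*}\bigr)
  = \frac{\Gamma_{1}}{(\eta^{*})^{8}} - \frac{\Gamma_{2}}{(\eta^{*})^{2}}
\;\Longrightarrow\;
k_{\mathrm{eff}}\,(\eta^{*})^{9} - k_{\mathrm{eff}}\,(\eta^{*})^{8}
  - \Gamma_{2}\,(\eta^{*})^{6} + \Gamma_{1} = 0 .
```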
We compute the natural frequencies and mode shapes around a chosen static deflection W*(X) by setting W(X, τ) = W*(X) + V(X, τ) (Eq. (5)). Substituting Eq. (5) into Eqs. (2)-(4), dropping the damping and excitation terms and linearizing the outcome for small V(X, τ) gives the free-vibration problem of the cantilever, together with its boundary conditions. Using separation of variables, the nth mode shape is obtained up to a constant a_n, which is computed by using the orthonormality condition of the modes, and the natural frequency coefficients λ_n are the zeros of the characteristic algebraic equation (10). The dimensional natural frequencies of the microcantilever are given by ω_n = λ_n^2 (EI/(ρA L^4))^(1/2). Fig. 3 displays the variations of the first two natural frequency coefficients λ1 and λ2 with the initial tip/sample separation Z for three different cases: the free-end microcantilever and the AFM microcantilever operating in the contact and noncontact states. For Z < 6.6 nm, the AFM is in the contact mode, and the frequencies are larger than in the case of the free-end microcantilever because of the repulsive interactions. They decrease with increasing initial gap Z. In the bistable region Z ∈ ]6.6 nm, 9.09 nm[, a noncontact natural mode and an unstable mode (dashed red lines) are born through a saddle-node bifurcation. The noncontact mode lies below the free-end mode and tends to it for increasing Z. The contact mode and the unstable mode disappear through a saddle-node bifurcation. For Z ∈ ]9.09 nm, +∞[, i.e., in the noncontact mode, increasing Z leads the natural frequencies to tend toward the free-end natural frequencies. Moreover, the differences from the free-end frequencies decrease with increasing mode order. The first two mode shapes of the microcantilever under the LJ force are depicted in Fig. 4 for Z = 5 nm, which belongs to the monostable contact region. These mode shapes are compared to the free-end mode shapes. It is observed that the first two mode shapes about the contact equilibrium are significantly different from those of the free-end case. On the other hand, the mode shapes about the noncontact static equilibrium are almost the same as those of the free-end microcantilever.
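The sketch below shows one way such frequency coefficients can be located numerically. It uses an assumed formulation, not the paper's Eq. (10): the linearized tip-sample interaction is replaced by an effective transverse spring of nondimensional stiffness β = k_tip L^3/(EI) at the free end, giving the characteristic function λ^3(1 + cos λ cosh λ) + β(sin λ cosh λ − cos λ sinh λ), which reduces to the classical clamped-free equation 1 + cos λ cosh λ = 0 for β = 0. Positive β (repulsive, contact-like) raises the coefficients and negative β (attractive, noncontact-like) lowers them, reproducing the trends of Fig. 3.

```python
# Assumed linearized model: clamped-free Euler-Bernoulli beam with a transverse
# spring at the tip, i.e. phi'''(1) = beta*phi(1). Dimensional frequencies then
# follow as omega_n = lam_n**2 * sqrt(E*I/(rho*A*L**4)).
import numpy as np
from scipy.optimize import brentq

def char_fun(lam, beta):
    s, c = np.sin(lam), np.cos(lam)
    sh, ch = np.sinh(lam), np.cosh(lam)
    return lam**3 * (1.0 + c * ch) + beta * (s * ch - c * sh)

def frequency_coefficients(beta, n_modes=2, lam_max=12.0, n_grid=6000):
    """First n_modes positive roots lam_n of the characteristic equation."""
    grid = np.linspace(0.05, lam_max, n_grid)
    vals = np.array([char_fun(x, beta) for x in grid])
    roots = []
    for a, b, fa, fb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
        if fa * fb < 0.0:
            roots.append(round(brentq(char_fun, a, b, args=(beta,)), 4))
            if len(roots) == n_modes:
                break
    return roots

print("free end (beta = 0)      :", frequency_coefficients(0.0))    # 1.8751, 4.6941
print("contact-like (beta = 50) :", frequency_coefficients(50.0))   # stiffened modes
print("noncontact (beta = -0.5) :", frequency_coefficients(-0.5))   # softened modes
```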
Effects of the slow harmonic base motion
In this section we will show that, due to the slow harmonic excitation of the base, the dynamics of the microcantilever can be reduced to the dynamics on the contact and the noncontact slow invariant manifolds. Moreover, based on the time spent on the attracting contact slow invariant manifold, the contact time during the tapping mode can be computed and controlled through the amplitude of the base displacement and the initial gap. We use the Galerkin procedure to reduce the partial differential equation (2) and the associated boundary conditions to a set of nonlinear ordinary differential equations. In what follows only the first mode will be considered; thus W(X, τ) = q1(τ)φ1(X), where φ1 is the first mode shape. Then, the extended Hamilton principle and the orthonormality property of the mode shapes (9) are applied to obtain the discretized equation (12), where the primes denote derivatives with respect to τ and the base motion is Y_b(τ) = Y cos(ϵτ) with ϵ = Ω/ω ≪ 1.
Fig. 4. The first two mode shapes in the contact mode for Z = 5 nm. The blue line corresponds to the microcantilever with the LJ force and the line with circles corresponds to the free-end microcantilever.
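The coefficients of the discretized equation (12) are not reproduced above, so the sketch below integrates a generic lumped single-mode analogue of it, q'' + μq' + q = Λ2/η^2 − Λ1/η^8 with η = 1 − q − Y cos(ϵτ); all parameter values are assumptions chosen only to exhibit the fast-slow behaviour. For ϵ ≪ 1 the trajectory tracks the attracting slow manifolds and, for sufficiently large Y, bursts periodically between the noncontact and contact branches, which is the tapping regime discussed below.

```python
# Lumped single-mode sketch (assumed stand-in for Eq. (12), illustrative values):
# q'' + mu*q' + q = L2/eta^2 - L1/eta^8,  eta = 1 - q - Y*cos(eps*tau).
import numpy as np
from scipy.integrate import solve_ivp

mu, eps = 0.5, 0.01     # damping and frequency ratio Omega/omega (assumed)
L1, L2 = 1e-6, 0.05     # repulsive / attractive force coefficients (assumed)
Y = 0.6                 # nondimensional base displacement amplitude (assumed)

def rhs(tau, x):
    q, p = x
    eta = 1.0 - q - Y * np.cos(eps * tau)   # instantaneous tip/sample gap
    return [p, -mu * p - q + L2 / eta**2 - L1 / eta**8]

T_slow = 2.0 * np.pi / eps                  # one period of the slow base motion
sol = solve_ivp(rhs, (0.0, 2.0 * T_slow), [0.0, 0.0], method="LSODA",
                rtol=1e-8, atol=1e-10, dense_output=True)

tau = np.linspace(T_slow, 2.0 * T_slow, 4000)   # sample the second slow period
q = sol.sol(tau)[0]
eta = 1.0 - q - Y * np.cos(eps * tau)
print(f"gap range over one slow period: {eta.min():.3f} .. {eta.max():.3f}")
print(f"fraction of the period near contact (eta < 0.3): {np.mean(eta < 0.3):.2f}")
```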
Invariant slow manifolds
Equation (12) can be transformed into a fast-slow system by introducing the slow time scale T = ϵτ. Following Fenichel's results [13], there exist stable slow invariant manifolds M such that all orbits starting in some region of the phase space reach them, and the dynamics of the system is reduced to the dynamics on M. A first-order approximation of the invariant slow manifolds, corresponding to ϵ = 0 in the fast-slow system, is given by the algebraic equation (14). Hence, the approximation of M is given by the graph of a function expressing the generalized coordinate q1(T) in terms of T and the parameters of the system. Fig. 5 shows the graphs of the slow manifolds versus the slow time T for various values of the amplitude Y and for the initial gap Z = 10 nm. The stable manifolds are plotted with continuous lines and the unstable ones with dashed lines. When only one stable manifold exists, it is plotted with a thick continuous line. The lower and the upper stable solutions correspond to the noncontact and contact slow manifolds, respectively. For Y = 0, i.e., no base motion, the slow manifold corresponds to the noncontact static equilibrium, which is independent of T. For Y = 0.3, a reverse saddle-node bifurcation gives birth to two slow manifolds, and the noncontact manifold is the visited solution. Then, for Y = 0.3452 a second saddle-node bifurcation takes place, as the noncontact slow manifold collides with the unstable manifold. Subsequently, for Y = 0.5 the contact and the noncontact slow manifolds are both visited during a period of the base displacement. Physically, this corresponds to the tapping mode and, geometrically, to a periodic burster. Figure 6 confirms that numerical solutions of Eq. (12) follow the slow invariant manifolds according to the scenario described in Fig. 5.
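In the lumped sketch introduced above, the ϵ = 0 manifolds are simply the real roots of the frozen-time force balance, a ninth-order polynomial in the gap for each slow time T. Counting which branches exist as T sweeps one period classifies the regime in the same spirit as Fig. 5; the threshold values below are again tied to the assumed parameters, not to those of the paper.

```python
# Quasi-static (eps -> 0) slow manifolds of the lumped sketch: roots eta of
# (G - eta)*eta^8 = L2*eta^6 - L1 with G = 1 - Y*cos(T). Branch bookkeeping
# detects the saddle-node scenario (noncontact / tapping / contact regimes).
import numpy as np

L1, L2 = 1e-6, 0.05   # same assumed force coefficients as the previous sketch

def gap_roots(G):
    c = np.zeros(10)                       # -eta^9 + G*eta^8 - L2*eta^6 + L1 = 0
    c[0], c[1], c[3], c[9] = -1.0, G, -L2, L1
    r = np.roots(c)
    return sorted(x.real for x in r if abs(x.imag) < 1e-7 and x.real > 0)

def regime(Y, n=721):
    pull_in = escape = False
    for T in np.linspace(0.0, 2.0 * np.pi, n):
        eta = gap_roots(1.0 - Y * np.cos(T))
        if len(eta) == 1:                  # only one branch survives
            if eta[0] < 0.3:               # small gap: contact branch only
                pull_in = True
            else:                          # large gap: noncontact branch only
                escape = True
    if pull_in and escape:
        return "tapping (both saddle-nodes are crossed)"
    return "permanent contact" if pull_in else "noncontact"

for Y in (0.0, 0.2, 0.6):
    print(f"Y = {Y:3.1f} -> {regime(Y)}")
```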
Contact time
The contact time t_c is the time during which the tip interacts repulsively with the sample [16]. It depends on the mechanical properties of the sample and on the tapping amplitude and frequency. In our case, the tapping frequency is equal to the base excitation frequency and is much lower than the fundamental frequency of the microcantilever. Indeed, the tapping mode in our case is governed by saddle-node bifurcations of the slow manifolds; consequently, the contact duration can be computed as the interval of time that starts when the contact slow manifold becomes attracting and ends when it becomes repelling. In Fig. 7, the onset of contact occurs through a saddle-node bifurcation that causes a jump of the contact time from zero. This jump decreases with increasing initial gap Z. Furthermore, the normalized contact time tends asymptotically to 0.5 for higher amplitudes Y, which means that the tip spends 50% of the time in contact with the sample.
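This hysteresis rule translates directly into code for the lumped sketch: the tip stays on its current branch until that branch disappears in a saddle-node, and the normalized contact time is the fraction of one slow period spent on the contact branch. With the assumed parameters, the output reproduces the qualitative features just described, namely a jump of the contact time from zero at the tapping threshold and a slow approach to 0.5 at large Y.

```python
# Normalized contact time from slow-manifold hysteresis (same assumed lumped
# model and parameters as the previous sketches).
import numpy as np

L1, L2 = 1e-6, 0.05

def branch_presence(G):
    """(contact branch exists, noncontact branch exists) for gap offset G."""
    c = np.zeros(10)
    c[0], c[1], c[3], c[9] = -1.0, G, -L2, L1
    eta = sorted(x.real for x in np.roots(c) if abs(x.imag) < 1e-7 and x.real > 0)
    if len(eta) >= 2:                   # bistable (or at a saddle-node): both present
        return True, True
    return (True, False) if eta[0] < 0.3 else (False, True)

def contact_fraction(Y, n=2000):
    state, hits = "noncontact", 0       # start at maximum gap, on the noncontact branch
    for T in np.linspace(np.pi, 3.0 * np.pi, n, endpoint=False):
        has_c, has_nc = branch_presence(1.0 - Y * np.cos(T))
        if state == "noncontact" and not has_nc:
            state = "contact"           # pull-in: noncontact branch disappeared
        elif state == "contact" and not has_c:
            state = "noncontact"        # escape: contact branch disappeared
        hits += (state == "contact")
    return hits / n

for Y in (0.2, 0.31, 0.4, 0.6, 1.0, 2.0):
    print(f"Y = {Y:4.2f} -> normalized contact time = {contact_fraction(Y):.3f}")
```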
Conclusion
In this paper, we developed a mathematical model of an AFM microbeam subjected to a slow harmonic base motion and Lennard-Jones forces. We investigated the static contact and noncontact configurations and the associated natural frequencies and mode shapes. It was shown that the fundamental natural frequency near the contact mode is the most affected by the intermolecular interactions. Then, a one-mode Galerkin method was employed and the equations of motion were transformed into a fast-slow dynamical system. It was found that the contact and the noncontact slow manifolds govern the dynamics of the AFM. Thus, we showed that the tapping mode is triggered via two saddle-node bifurcations of these slow manifolds. Moreover, the contact time between the tip and the sample was computed and the influences of the base motion amplitude and the initial gap were discussed. As a continuation of the present work, the effects of higher modes are under investigation. The use of other types of interaction forces between the tip and the sample is also planned.
Fig. 1. A schematic of the AFM system.
Fig. 2. Variation of the normalized tip/sample distance η* versus the initial separation Z.
Fig. 3. Variation of the first two frequency coefficients λ1 and λ2, solutions of Eq. (10), with the initial separation Z. The free-end microcantilever (dashed grey lines), the contact mode (blue), the noncontact mode (black) and the unstable mode (dashed red lines).
Fig. 5. Graphs of the invariant slow manifolds, given by Eq. (14), for Z = 10 nm and for various amplitudes of the base motion Y. | 2018-12-11T06:20:00.737Z | 2016-01-01T00:00:00.000 | {
"year": 2016,
"sha1": "6a142d286072285f0d9e2b4ad01c1775036303b2",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2016/46/matecconf_csndd2016_04001.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "6a142d286072285f0d9e2b4ad01c1775036303b2",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
233647171 | pes2o/s2orc | v3-fos-license | Checkpoint inhibitor therapy-associated acute kidney injury: time to move on to evidence-based recommendations
Abstract Immune checkpoint inhibitors (ICIs) have revolutionized cancer treatment since their introduction ∼15 years ago. However, these monoclonal antibodies are associated with immune-related adverse events that can also affect the kidney, resulting in acute kidney injury (AKI), which is most commonly due to acute tubulointerstitial nephritis (ATIN). Limited data are available on the true occurrence of ICI-associated AKI. Furthermore, evidence to guide the optimal management of ICI-associated AKI in clinical practice is lacking. In this issue, Oleas et al. report a single-center study of patients with nonhematologic malignancies who received ICI treatment during a 14-month period, experienced AKI and underwent a kidney biopsy at the Vall d’Hebron University Hospital. Importantly, they demonstrate that only a minority of ICI-associated AKI patients was referred to the nephrology service and kidney biopsy was only performed in 6.4% of patients. Although the authors add to our knowledge about ICI-associated AKI, their article also highlights the need for the development of noninvasive diagnostic markers for ICI-associated ATIN, the establishment of treatment protocols for ICI-associated ATIN and recommendations for optimal ICI rechallenge in patients with previous ICI-associated AKI.
The immune system is designed to optimally control activation and suppression of T cell function. Effective CD4 T cell activation begins with antigen recognition by the CD4 T cell in combination with major histocompatibility complex class II molecules on the cell surface of antigen-presenting cells (APCs). Additional costimulatory signaling is delivered through CD28 present on T cells, which engages CD80 or CD86 receptors on APCs. Overactivation of this process is prevented by a negative feedback loop involving cytotoxic T-lymphocyte-associated protein 4 (CTLA-4), which binds to CD80 and CD86 with a much higher affinity than CD28, whereupon an inhibitory signal is delivered to the T cell. Administering antibodies directed against CTLA-4, such as the immune checkpoint inhibitors (ICIs), blocks this inhibitory signal, thus resulting in prolonged T cell activation. The programmed cell death protein 1 (PD-1)/programmed death ligand 1 (PD-L1) system, which is central in the maintenance of T cell responses, is activated by immune responses to inflammatory cytokines. Upon engagement with PD-L1, PD-1 transmits a negative costimulatory signal to attenuate T cell activation. Antibodies directed toward PD-L1 or PD-1 eliminate this brake, resulting in T cell reactivation. Whereas ICIs were originally believed to act solely in an antagonistic manner, more recent data suggest that ICIs also give rise to cytotoxic reactions [1] and depletion of intratumoral regulatory T cells [2]. CTLA-4 inhibitors induce T cell overactivation and proliferation, impair regulatory T cell survival, cause overproduction of T helper 17 cells, cause cross-reactivity between anti-tumor T cells and antigens on healthy cells and increase autoantibody production. PD-1 and PD-L1 inhibitors result in T cell reactivation, reduced survival and inhibitory capacity of regulatory T cells and increased cytokine production. These effects on the immune system make the ICIs perfect candidates for cancer therapy but also raise the possibility of increased autoimmune reactions.
The ICIs have revolutionized cancer therapy and are currently approved in an expanding group of hematologic and solid malignancies. The first authorized antibody blocking an immune checkpoint was the CTLA-4 antagonist ipilimumab, followed by the PD-1 inhibitors nivolumab, pembrolizumab and cemiplimab and PD-L1 inhibitors atezolizumab, avelumab and durvalumab. Under physiologic conditions, an immune response is controlled by inhibitory signals (checkpoints) to prevent a prolonged and excessive immune response. In the cancer setting, removing these inhibitory signals allows for T cell activation and the generation of an effective antitumor immune response.
Due to their main mechanism of action, ICIs are associated with very specific side effects, termed immune-related adverse events (irAEs), a unique spectrum of autoimmune phenomena. The frequency and type of associated irAEs differ between the various ICIs. Ipilimumab is associated with both an increased and a broader range of irAEs compared with PD-1 antagonists [3,4]. Colitis and hypophysitis occur more frequently with CTLA-4 antagonists, whereas pneumonitis and thyroiditis appear more often with PD-1 blockers. Although data are limited, it appears that the PD-L1 blockers are associated with relatively fewer irAEs, possibly due to sparing of the PD-1/PD-L2 axis [5]. The skin, gastrointestinal tract, lungs, liver and endocrine system are most commonly involved. Kidney involvement is less common but can be significant. The estimated incidence of all-grade kidney toxicity is approximately 2% for monotherapy and up to 4.9% for ICI combination therapy [6,7]. Based on a review of Phase II and III clinical trials of ICIs enrolling 3695 patients, the incidence of high-grade kidney toxicity was 0.6% [7]. However, some authors have claimed that the incidence of kidney toxicity could be considerably higher [8,9]. In fact, overall, AKI (not necessarily caused by the ICI) occurring in the setting of ICI therapy ranges from 7 to 24% [10][11][12][13][14][15]. When clinical adjudication or kidney biopsy (much less common) was undertaken, the incidence of ICI-associated AKI decreased to 0.7-3.8% [10][11][12][13][14][15]. Acute tubulointerstitial nephritis (ATIN) is the most common kidney lesion, while acute tubular injury and an assortment of glomerular lesions are observed less frequently with the ICIs.
In a study published in this issue, Oleas et al. [16] report a single-center study of patients with nonhematologic malignancies who received ICI treatment during a 14-month period, experienced AKI (based on the Acute Kidney Injury Network criteria) and underwent a kidney biopsy at the Vall d'Hebron University Hospital, Barcelona, Spain. During this period, 826 patients with nonhematologic organ malignancies received ICI treatment and AKI occurred in 125 patients (15.1%). Of the patients with AKI, only 23 (18.4%) were evaluated in the nephrology department and 8 (6.4% of all AKI patients) underwent a kidney biopsy. The Mayo Group recently reported on the occurrence of ICI-associated AKI (defined as a ≥1.5-fold increase in serum creatinine from baseline) in 2143 patients between January 2014 and June 2020 and reported similar numbers: 365 (17%) developed AKI, of whom 52 were considered to have AKI possibly directly due to the ICIs [17]. Of these patients, 37 (71%) had clinically suspected or biopsy-proven ICI-associated AKI [biopsy was performed in 14 patients (3.8% of all AKI patients)] [17]. Both studies demonstrate that the majority of AKI episodes in ICI-treated patients are not ICI related and that only a minority of AKI patients is evaluated by a nephrologist and undergoes a kidney biopsy.
Cortazar et al. [18] published a large multicenter study that included 138 patients with ICI-associated AKI (defined as a ≥2-fold increase in serum creatinine or a new dialysis requirement directly attributed to an ICI). In that study, the median time from ICI initiation to AKI was 14 weeks (range 6-37) [18]. In the present study by Oleas et al. [16], the time between the start of ICI therapy and the onset of AKI was a mean of 5.8 months (range 2-11). In the study by Isik et al. [17], AKI was found to develop earlier in the ICI-AKI patients compared with the non-ICI-AKI patients {median 4 months [95% confidence interval (CI) 1.2-11.4] versus 8.5 months [95% CI 5.3-10.4], respectively; P = 0.026}. The most frequent urine findings were subnephrotic-range proteinuria, with a mean protein:creatinine ratio of 544 mg/g, and eosinophiluria [5/8 patients (62%)]. In the study of Cortazar et al. [18], most patients also had subnephrotic proteinuria, approximately half had pyuria and extrarenal irAEs occurred in 43% of patients. Isik et al. [17] noted a higher serum creatinine, CRP, protein:creatinine ratio (although subnephrotic in both groups) and urinary leukocyte and erythrocyte counts in the ICI-AKI patients compared with the non-ICI-AKI patients. Also, eosinophilia was not a differentiating factor. Lower baseline eGFR, proton pump inhibitor use and combination ICI therapy have been identified as independent risk factors for ICI-associated AKI by Seethapathy et al. [12].
The limitations of this study are worth discussing. It is a single-center study with a limited number of patients, which contrasts with recently published studies that included more patients and provided important novel data regarding the clinical/biochemical presentation, predictors of occurrence and outcome, and management of ICI-associated AKI. In addition, as is problematic in other studies, the lack of kidney biopsy in patients determined clinically to have ICI-associated AKI is a limitation. This limits examination of clinical and laboratory findings as potential predictors of ATIN or another kidney lesion. However, single-center studies can be helpful to provide detailed information about the occurrence of ICI-associated AKI and current practices in the management of these patients. In addition, single-center studies can provide more detailed mechanistic insights regarding the pathophysiology of ICI-associated ATIN and identify biomarkers for a safe rechallenge with ICIs. Besides these mechanistic studies, international, multicenter studies are needed to establish the optimal management of ICI-associated AKI patients to optimize their cancer and kidney outcomes.
Many oncologists manage AKI that develops in ICI-treated patients according to the American Society of Clinical Oncology (ASCO) clinical practice guidelines, which address management of irAEs in patients treated with ICI therapy [19], and the National Comprehensive Cancer Network (NCCN) practice guidelines, which address management of immunotherapy-related toxicities [20]. The ASCO guidelines recommend a diagnostic work-up as follows: (i) exclusion of alternative etiologies of AKI (recent intravenous contrast, medications and fluid status) and (ii) monitoring of patients for elevated serum creatinine prior to every ICI dose [19]. Remarkably, routine urinalysis is not recommended other than to rule out urinary tract infections. For Grade ≥2 kidney toxicities, the guidelines recommend a nephrology consultation. In the ASCO guidelines it is explicitly stated that 'if no potential alternative cause of AKI is identified, then one should forego biopsy and proceed directly with immunosuppressive therapy' [19]. In the NCCN guidelines, in addition to an evaluation for alternative causes of AKI, discontinuation of nephrotoxic drugs and a spot urine protein:creatinine ratio are recommended [20]. Nephrology consultation is only recommended for Grade ≥2 kidney toxicities and kidney biopsy should be considered for Grade ≥3 kidney toxicities [20]. We believe that urinalysis (and urine sediment examination) should be performed in every ICI-treated patient with AKI. Although sterile pyuria and/or leukocyte casts lack both sensitivity and specificity for ICI-associated AKI (low-grade tubular proteinuria and urine abnormalities such as pyuria, leukocyte casts and hematuria occur in only approximately half and two-thirds of ATIN cases, respectively), urinary findings can help identify non-ICI-related causes of AKI. Although both the ASCO and the NCCN guidelines recommend nephrology consultation in Grade ≥2 kidney toxicities, in actual practice this approach is rarely taken. We feel that nephrology consultation is probably not necessary in Grade 2 renal toxicities when an alternative cause of AKI is clearly identified (urinary obstruction, hypotension with ischemic acute tubular injury, etc.). In our opinion, kidney biopsy should be performed in ICI-treated patients with Grade ≥2 kidney toxicity when no potential alternative causes of AKI are identified and before treatment with corticosteroids is initiated. The histological information will help guide therapy, as finding non-ATIN lesions reduces unnecessary and potentially harmful corticosteroid exposure in cancer patients and may permit continued ICI use.
In ICI-treated patients with AKI where immunosuppressive treatment needs to be initiated to treat extrarenal irAEs, we recommend postponing kidney biopsy and observing the evolution of kidney function. Kidney biopsy would be recommended when there is no kidney function recovery with immunosuppressive treatment. Recently, urinary interleukin-9 and tumor necrosis factor-α have been suggested as markers to effectively differentiate between ATIN, acute tubular injury and other kidney lesions [21]. Further research is needed to validate these markers as diagnostic markers of ICI-associated ATIN and to provide clinicians with a useful noninvasive diagnostic tool.
With regard to therapy, the ASCO and NCCN guidelines recommend temporary cessation of ICI treatment and, when no other etiologies can be identified, the administration of 0.5-1 mg/kg/day prednisone equivalents for Grade 2 kidney toxicities (Table 1). With no improvement in kidney function, it is recommended that the dose of corticosteroid be increased to 1-2 mg/kg prednisone or equivalent in combination with ICI discontinuation. When kidney function recovers to Grade 1 or less, corticosteroids should be tapered over 4-6 weeks. For Grades 3-4 kidney toxicities, permanent discontinuation of ICI treatment is recommended in combination with a nephrology consultation, evaluation for other etiologies and initiation of 1-2 mg/kg/day prednisone or equivalent when no other identifiable etiologies exist. All of these interventions presume that all Grade ≥2 kidney toxicities without an alternative cause are ATIN. Given the nonspecific signs and symptoms of kidney injury, as well as multiple competing causes of AKI in cancer patients, we believe kidney biopsy is of far greater importance than suggested by these guidelines, not only to make a correct diagnosis, but, more importantly, to guide treatment regarding ICI discontinuation, treatment with corticosteroids and ICI rechallenge. Although corticosteroid treatment may not affect oncologic outcomes, corticosteroids are still associated with an increased incidence of sepsis, venous thromboembolism and fractures in population-based cohort studies, even in patients with short and moderate corticosteroid exposure [23]. In the study by Oleas et al. [16], three patients (37%) received treatment with pulses of methylprednisolone 250-500 mg/day and five patients (62%) received prednisone 1 mg/kg/day. Seven of eight patients (87%) experienced recovery of kidney function and one patient (12%) progressed to chronic kidney disease. In the study by Cortazar et al. [18], most patients (86%) were treated with steroids and complete or partial recovery was obtained in 40 and 45%, respectively. Predictors of improved kidney prognosis included concomitant TIN-causing medications prior to AKI and treatment with corticosteroids. Failure to achieve kidney recovery after ICI-associated AKI was independently associated with higher mortality [18]. Another important issue is whether ICI treatment can be safely reinitiated after ICI-associated AKI. The ASCO and NCCN guidelines recommend permanent discontinuation of ICI treatment in patients with Grades 3-4 kidney toxicities (Table 2). For patients with Grade 2 kidney toxicities, ICI rechallenge can be considered after discussion with the patient when there is neither recurrence nor CKD [19]. Recently, Allouchery et al. [24] reported an analysis based on the French pharmacovigilance database evaluating ICI-treated patients with at least one Grade ≥2 irAE resulting in ICI discontinuation, with subsequent ICI rechallenge. The authors demonstrated that 61.1% of the patients who discontinued ICI treatment for Grade ≥2 irAEs experienced no recurrent Grade ≥2 irAEs after ICI rechallenge [24]. In the study of Cortazar et al. [18], ICI rechallenge was performed in 22% of patients, of whom only 23% developed recurrent AKI. In the study by Isik et al. [17], rechallenge with an ICI was attempted in 16 (43%) of the ICI-AKI patients and recurrence was reported in 3 (19%) of the rechallenged patients.
Interestingly, in this study, survival tended to be higher in the group not rechallenged compared with the group that was rechallenged; however, the results were not statistically significant [17]. Thus, the risk of recurrence appeared to be acceptable and, as such, we do not agree with the ASCO and NCCN guidelines. In contrast, we recommend ICI reinitiation in all patients where an alternative cause of AKI has been identified [22]. Also, in patients with histology-proven ICI-associated ATIN, we recommend rechallenge with ICI with close monitoring after kidney function recovery. Although not supported by data, clinicians may consider using low-dose corticosteroids in patients with ATIN who had Grade ≥3 kidney toxicities.
In conclusion, the study of Oleas et al. [16] further adds to the existing evidence regarding the frequency, diagnosis and management of ICI-associated AKI in clinical practice. In this area, single-center studies can be helpful to provide more detailed mechanistic insights regarding the pathophysiology of ICI-associated ATIN and to identify biomarkers for safe rechallenge with ICI. Additionally, international, multicenter studies are needed to establish the optimal management of ICI-associated AKI patients to optimize their cancer and renal outcomes. Importantly, an evidence-based approach is required to facilitate the creation of rigorous guidelines on the appropriate clinical approach to ICI-associated kidney toxicity.
"year": 2021,
"sha1": "785abb186321beb56c0e5bcc99f048ad01403606",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/ckj/article-pdf/14/5/1301/38851763/sfab052.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "63e4301c43ddb5fbc06b1593b189ec6d466a69d8",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257477517 | pes2o/s2orc | v3-fos-license | Associations of Eliminating Free-Stall Head Lock-Up during Transition Period with Milk Yield, Health, and Reproductive Performance in Multiparous Dairy Cows: A Case Report
: The objective of this retrospective case study was to understand the effects of eliminating free-stall lock-up time during 21 days postpartum on milk yield, reproductive performance, and health events at a large dairy herd. A group of 200 cows were selected as the treatment (TRT) group, which did not receive a lock-up time during early lactation, and a separate group of 200 cows served as the control (CON) group, which received on average 2 h/day of lockup time. The TRT group had greater milk yield (mean ± SE) on the third monthly milk test day (33.1 ± 0.75 vs. 29.9 ± 1.22; p = 0.04) and tended to have greater milk yield on the second test day (38.3 ± 1.55 vs. 39.1 ± 0.79; p = 0.06) compared to the CON cows. Milk fat% (mean ± SE) was greater in the TRT group than in the CON group on the first monthly milk test (3.65 ± 0.06 vs. 3.31 ± 0.12, p = 0.01). The TRT group had lower linear somatic cell scores on the first monthly milk test day compared to the CON group (2.6 ± 0.24 vs. 3.2 ± 0.11; p = 0.01). Cows in the TRT group had lower days in milk at first breeding (DIMFB) (66.2 ± 3.7 vs. 76.7 ± 2.9; p = 0.02) and were confirmed pregnant earlier as indicated by smaller days in milk to pregnancy (DIMPREG) (96.9 ± 12.32 vs. 112.1 ± 5.5; p < 0.01). Cows in the TRT group also had fewer incidences of all health events combined (13% vs. 30.5%; p < 0.001), lameness (3% vs. 9.5%; p = 0.01), and mastitis (3% vs. 16%; p < 0.001). We conclude that eliminating the stall lockup may have contributed to the increased milk yield, health, and reproductive performance of dairy cows in this dairy herd. Future prospective cohort studies are needed to further assess the potential effect of eliminating lock up time on cow performance.
Introduction
The use of self-locking feed stanchions provides ease of work and reduces the handling time of cattle in dairy barns with free stalls. Head locks at the feed bunk of free-stall barns allow dairy farm personnel to restrain cows and facilitate routine herd procedures. These stanchions are utilized during on-farm tasks such as health checks, pregnancy diagnosis, vaccinations, and artificial insemination. Although they carry a high initial investment cost, headlocks at the free-stall feed bunk also provide labor efficiency, ease of use, and worker safety during routine on-farm activities [1]. Lock-up stanchions help to reduce competition and aggression among cows at the bunk by ensuring that each animal in the pen receives the minimum feed bunk space in front of them [2]. Therefore, using head locks to restrain the cows at a feed bunk is a widespread practice on dairy farms and typically lasts for approximately 2-4 h per day. The daily head lock-up time varies across dairy farms and between cows within an individual farm depending on the size of the pen, the routine tasks performed, the number of employees and their skills to perform the tasks, and the cows' arrival order at the feed bunk. There are reported negative impacts of longer lockup times on milk production, reproductive performance, disease events, heat stress conditions, lameness events, and the overall behavior of cattle [3]. Overall, the use of headlocks at the feed bunk is a widespread practice on dairy farms, but their prolonged use can have negative impacts on cow welfare and productivity. This study seeks to determine if eliminating this practice during the transition period can lead to improved outcomes for the cows. Our hypothesis was that eliminating free-stall head lock-up during the early stages of lactation would increase milk yield, health, and reproductive performance. Consequently, the objective was to evaluate the association of eliminating stall lock-up during the transition period of dairy cows (21 days after parturition) with milk yield, reproductive performance, and health events.
Materials and Methods
This retrospective case study was conducted at a commercial dairy farm in Texas, USA from August 2020 to November 2020. All data used in this study were collected retrospectively from on-farm software, thus no Institutional Animal Care and Use Committee approval was required. All cows were housed in free-stall barns with sand-bedded stalls and had free access to feed and water. The farm milked around 2500 Holstein cows with a rolling herd average of 8600 kg. Cows in both of the study groups were fed a total mixed ration twice a day to meet or exceed the nutritional requirements for a lactating Holstein cow producing 30 kg/d of milk with 3.5% fat and 3.1% true protein based on NRC (2001) [4]. The study cows consumed on average 24.5 kg of DM daily from a diet consisting of corn silage (14 to 17.5%); wheat silage (13 to 20%); a premix containing soybean, soy hulls, corn, wheat, and minerals and vitamins (47.5 to 50.5%); sorghum silage (3.0 to 4.5%); alfalfa hay (12 to 16%); and grass hay (0 to 3%). Cows were bred based on heat detection using tail chalk after a voluntary waiting period of 45 days. Cow tailheads were painted daily with colored chalk and checked for signs of estrus by the removal of tail chalk. If identified to be in estrus, cows were artificially inseminated in the morning.
The study cows were randomly assigned to the treatment (TRT; n = 200) or control (CON; n = 200) group, blocked by parity and by enrollment period. The CON group was restrained at the head locks for approximately 2 h per day for 21 days postpartum, while the TRT cows were not restrained. After the intervention period was over, both groups were managed using routine headlocks as they exited the fresh-cow pen after the transition period. The cows were tested monthly by DHIA and followed the a.m./p.m. scheme, where cows were tested in either the morning or the afternoon milking session by trained DHIA technicians. Milk was collected in DHIA sample vials and submitted to the Texas DHIA lab in Canyon, TX. The laboratory used the Bentley 2000 mid-infrared method (Bentley Instruments Inc., Chaska, MN, USA). The study cows were followed up until the end of the current lactation, and farm records including monthly test-day milk yield, test-day milk fat%, test-day milk protein%, monthly test-day linear somatic cell score (LSCC), days in milk at first breeding (DIMFB), days in milk to pregnancy diagnosis (DIMPREG), and incidence of diseases were obtained from the farm management software (PCDART ® ). Herd data were exported to Microsoft Excel ® spreadsheets and analyzed using SAS ver. 9.4 ® . The continuous variables were analyzed using a mixed model (PROC MIXED) with cow as a random-effect variable in the model. The models included milk production, fat%, protein%, LSCC, DIMPREG, and DIMFB as outcome variables, and DIM and month of test as predictor variables. The frequencies of diseases and pregnancies per AI were evaluated using chi-squared tests (PROC FREQ). Statistical significance was tested at the p < 0.05 level.
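For readers who prefer an open-source environment, the sketch below mirrors the stated analysis in Python rather than SAS; it is an illustration, not the authors' code. The mixed model uses a random intercept per cow (cf. PROC MIXED) on synthetic test-day records, and the chi-squared test (cf. PROC FREQ) is applied to disease counts reconstructed from the percentages reported in the Results; all variable names and effect sizes are assumptions.

```python
# Minimal sketch of the statistical analysis in Python (assumed names and data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
n_cows, n_tests = 40, 5
cow_id = np.repeat(np.arange(n_cows), n_tests)
group = np.repeat(rng.integers(0, 2, n_cows), n_tests)     # 0 = CON, 1 = TRT
dim = np.tile(30 * np.arange(1, n_tests + 1), n_cows)      # days in milk at test
milk = (42.0 - 0.05 * dim + 1.5 * group                    # assumed fixed effects
        + np.repeat(rng.normal(0, 2.0, n_cows), n_tests)   # cow random intercept
        + rng.normal(0, 1.5, n_cows * n_tests))            # residual noise
df = pd.DataFrame({"cow_id": cow_id, "group": group, "dim": dim, "milk": milk})

# Mixed model: group and DIM as fixed effects, random intercept per cow.
fit = smf.mixedlm("milk ~ group + dim", df, groups=df["cow_id"]).fit()
print(fit.summary())

# Chi-squared test for mastitis incidence, reconstructed from the reported
# percentages (CON 16% vs. TRT 3% of 200 cows each).
table = [[32, 168], [6, 194]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"mastitis: chi2 = {chi2:.2f}, p = {p:.5f}")
```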
Results and Discussion
The study cows comprised all multiparous lactating cows, with a mean (±SD) parity of 3.16 (±0.12) and 4.1 (±0.14) for the TRT and CON groups, respectively. A total of 17 cows in the CON group and 10 cows in the TRT group received assistance during calving.
Overall, the CON cows produced 33.1 kg per day whereas the TRT cows produced 31.2 kg per day during the study period. Cows in the TRT group had significantly greater milk yield on the third test day (33.1 ± 0.75 vs. 29.9 ± 1.22; p = 0.04) and numerically greater milk yield on the second test day (38.3 ± 1.55 vs. 39.1 ± 0.79; p = 0.06), but numerically lower milk production on the first, fourth and fifth monthly test days compared to the CON cows (41.4 ± 1.37 vs. 39.9 ± 0.64, p = 0.31; 26.1 ± 1.06 vs. 24.6 ± 1.51, p = 0.45; and 27.36 ± 2.12 vs. 22.76 ± 4.56, p = 0.38; Figure 1). Figure 1. Distribution of milk production and linear somatic cell score (LSCC) across the monthly milk test days among cows in the treatment (TRT; no headlocks) and control (CON; regular headlock) groups. ** denotes a statistically significant difference.
Milk fat% (mean ± SE) was significantly greater in the TRT than in the CON group on the first monthly milk test (3.31 ± 0.12 vs. 3.65 ± 0.06, p = 0.01) (Figure 2). Milk protein% was significantly greater only on the third test (3.32 ± 0.05 vs. 2.98 ± 0.05, p < 0.001); otherwise, there was no significant association between milk protein% and lock-up time. Similarly, there was no significant difference between the TRT and CON groups when evaluating the LSCC.
Of the total cows enrolled, 117 cows were pregnant by the fifth monthly test day in the CON group, and 101 cows were pregnant in the TRT group. Although with fewer pregnancies, more cows were reported to be in estrus in the TRT group, and they consequently demonstrated a greater pregnancy% per AI. It is possible that cows in the TRT group had a less exacerbated negative energy balance for a shorter period after parturition and might have returned to cyclicity and expressed estrus earlier than the CON cows, which would explain the reduced DIMFB (66.2 ± 3.7 vs. 76.8 ± 2.9; p = 0.02). In turn, this may explain why the TRT cows were confirmed to be pregnant significantly earlier, as indicated by the reduced DIMPREG (86.9 ± 12.3 vs. 112.1 ± 5.5; p < 0.01). Overall, the percentage of cows pregnant per AI was greater in the TRT than in the CON group for all inseminations (Table 1). During the lactation, the TRT group reported two abortions, and the CON group had one reported abortion.
Table 1. Distribution of pregnancy per AI and cumulative frequency of diseases among the cows in treatment (TRT = no headlocks) and control groups (CON = regular headlock).
Figure 2. Distribution of milk fat% and milk protein% across the monthly milk test days among cows in the treatment (TRT; no headlocks) and control (CON; regular headlock) groups. ** denotes a statistically significant difference.
Cows in the TRT group had a lower incidence of lameness compared to CON (3% vs. 8.5%; p = 0.01, χ² = 8.21). Similarly, the TRT group reported reduced incidences of milk fever (0.5 vs. 2%; p = 0.37, χ² = 1.82), mastitis (3% vs. 16%; p < 0.001, χ² = 20.71), and respiratory diseases (2.5% vs. 7%; p = 0.29, χ² = 0.34).
The study evaluated monthly test-day milk production and milk components in cows restrained daily through head locks and in cows without exposure to head locks.
Overall, the TRT group had a greater milk yield on the third monthly test day and reduced incidence of the common dairy cattle health disorders.
The repeated daily exposure to stressors such as head lock-up makes it difficult for free-stall-housed dairy cows to adapt to the stress, which ultimately alters the physiological response to stress [5]. A long duration of repeated free-stall head lock restraint is associated with forced standing, leading to decreased feed intake, which induces altered energy metabolism. Forced standing has been associated with reactivity of the brain (the hypothalamic-pituitary-adrenal (HPA) axis), thus indicating an overall impact on the animal [6,7]. Although there are limited studies investigating the dose-dependent relationship between free-stall lock-up time and markers of chronic stress, existing studies suggest that longer lock-up times expose animals to significantly stressful conditions and represent one of the neglected issues in the dairy industry that needs to be addressed.
Studies have demonstrated that the use of headlocks with dairy cattle influences glucocorticoid secretion, which ultimately leads to a high level of cortisol in the blood [8]. This effect is primarily due to restricted access to water and resting areas when locked up [9], reduced lying time, and increased human presence. An altered time budget leading to reduced lying time has been associated with reduced sleep in animals [7], leading to an overall disruption of the daily rhythm of dairy cows. As cows are predominantly routine animals, this alteration in the daily time budget leads to overall discomfort and a negative impact upon milk production. Cows can compensate for 1-2 h of lying time lost to lockup at the feed bunk, but prolonged lockup in association with other stressors, such as overstocking, will impede the cow's ability to compensate [10].
We detected a 0.83 kg/day and a 3.2 kg/day increase in daily milk production on the second and third monthly milk test days, respectively, in cows that did not receive head lock-up. Treatment cows also produced 0.35% higher milk fat and had linear somatic cell scores 1.06 units lower. Exposure to cortisol for long periods of time has the potential to decrease overall milk yield, and the chronic activation of the stress response due to restraint in head locks was previously found to have an impact on milk yield, as well as on milk quality, including milk fat percentage, the number of somatic cells, and dry matter intake [11][12][13][14]. Cows that were deprived of feeding and lying for more than 4 h demonstrated a milk yield reduced by 2 L/day, an effect that lasted for 3 days [14]. Milk protein percentage was found to decrease in cows that were restrained; i.e., it has been shown to drop from 3.27 to 3.19% [11]. However, as we observed the effect only on a few monthly tests, we cannot conclude that the effect is entirely due to the treatment. There are several factors, including diet, cow health, and reproductive performance, that contribute to milk yield and components. Therefore, future studies should account for these and other variables, including seasonality, weather, and daily cow activity.
We observed that cows without lock-up had a threefold lower prevalence of lameness, a fivefold lower prevalence of mastitis, and a threefold lower prevalence of all diseases combined. However, previous studies failed to detect significant associations with mastitis or other health issues [11]. A longer lock-up time forces the cows to deviate from their normal daily time budget, contributing to variability in lying time and lying bouts that predisposes cows to lameness [15]. Cows that were deprived of lying down became restless and engaged in stomping, repositioning, butting, and weight-shifting behaviors. Cows deprived for 4 h also exhibited more sniffing and head-rubbing behaviors compared to those deprived for 0 or 2 h [14].
Dry matter intake is extremely critical during the transition period because of the negative energy balance resulting from the unique physiological energy metabolism at this stage, and this effect could be further exacerbated by the increased stress caused by prolonged lock-up time. As transition cows are more likely to be restrained at head lockups for fresh-cow checks and other health monitoring, the response of these animals would be further altered. Therefore, the stressors presented to the transition cow should be minimal, and lock-up management routines could be adjusted as a strategy to closely monitor the impacts of altering the transition cow's time budget and comfort. Restricting access to the feed available at the bunk through prolonged lockup time at one stall can exacerbate the negative energy balance and stress levels in cows, leading to reduced dry matter intake and further health issues. It is important to manage the lockup routine in a way that minimizes stress and maximizes cow comfort to maintain a healthy and productive transition period for dairy cows.
Free-stall head lock-up systems are designed to provide a comfortable and stress-free animal environment for dairy cattle while still ensuring efficient and effective management. Properly designed and maintained headlock units should be comfortable and safe for cows, allowing them to rest and move around freely when not locked. Farmers and dairy workers should monitor cows for signs of discomfort or distress during head lock-up and adjust head lock-up times and systems as necessary to ensure the cows' health and well-being. It is important to manage farm operations adequately to minimize the restraint time to less than 4 h per day, especially during late morning and afternoon hours of the summer months, to prevent these negative effects on dairy cows [3].
Although we were able to report some effects of eliminating lockup time in early lactation, these changes may also be due to many other factors, including nutrition, season, and other farm management practices. The main limitation of this case study is the lack of external validation, owing to the limited data available to evaluate the many variables involved. The small sample size and the low frequency of the diseases may have led us to miss some significant differences that may exist. Although disease presentations during the transition period impacted overall lactation performance, the effect of eliminating lockup in early lactation could have a less direct impact on the fourth or fifth test day. However, the goal was to report the observations from this single-farm case evaluation to derive hypotheses for future exploration. To confidently state that the observed effect is entirely due to the elimination of cow lock-up, further prospective studies utilizing a robust design with greater control of confounding variables should be conducted.
Conclusions
In conclusion, eliminating lock-up time in early lactation may have contributed to improvements in the milk production, health, and reproductive performance of lactating dairy cows under these farm management conditions. Future controlled research is needed to confidently determine the impact on cow performance and to further explore the most effective and practical strategies for improving animal welfare in the dairy industry. | 2023-03-12T15:35:01.156Z | 2023-03-09T00:00:00.000 | {
"year": 2023,
"sha1": "9e681f452f993655e73477f53a09d757ac34a81c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2624-862X/4/1/15/pdf?version=1678349749",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "5758342c7d5fe9e73a36abb5ee80d451bd9811bd",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": []
} |
1826722 | pes2o/s2orc | v3-fos-license | Controlled variation of monomer sequence distribution in the synthesis of aromatic poly(ether ketone)s
The effects of varying the alkali metal cation in the high-temperature nucleophilic synthesis of a semi-crystalline, aromatic poly(ether ketone) have been systematically investigated, and striking variations in the sequence distributions and thermal characteristics of the resulting polymers were found. Polycondensation of 4,4′-dihydroxybenzophenone with 1,3-bis(4-fluorobenzoyl)benzene in diphenylsulphone as solvent, in the presence of an alkali metal carbonate M 2CO3 (M = Li, Na, K, or Rb) as base, affords a range of different polymers that vary in the distribution pattern of two-ring and three-ring monomer units along the chain. Lithium carbonate gives an essentially alternating and highly crystalline polymer, but the degree of sequence randomization increases progressively as the alkali metal series is descended, with rubidium carbonate giving a fully random and non-thermally crystallizable polymer. Randomization during polycondensation is shown to result from reversible cleavage of the ether linkages in the polymer by fluoride ions, and an isolated sample of alternating sequence polymer is thus converted to a fully randomized material on heating with rubidium fluoride.
Introduction
Composite materials for aerospace applications have traditionally been based on thermosetting matrix polymers such as the epoxies and bismaleimides, 1,2 but in more recent years the potential advantages of thermoplastic matrices (increased speed of fabrication and greater toughness) have begun to be realized, 3 notably with the introduction of long-fibre composites based on semi-crystalline engineering polymers such as poly(1,4-phenylene sulphide) 4 and the aromatic poly(ether ketone)s (PEKs), poly(ether ether ketone) (PEEK) 5 and poly(ether ketone ketone) (PEKK; Figure 1). 6 The high crystalline melting points (T m) of PEKs (typically 340-380 °C) result in the retention of significant mechanical strength and stiffness even at temperatures well above their glass transition temperatures (T g s). 7 However, such T m values also require correspondingly high composite fabrication temperatures, up to 420 °C. 3,8 In the present article, we report a study of a lower-melting but still crystallizable PEK matrix polymer (N1) derived from the nucleophilic polycondensation of 4,4′-dihydroxybenzophenone with 1,3-bis(4-fluorobenzoyl)benzene (Figure 2). The synthesis of this polymer (T g = 152 °C; T m = 285 °C) has been briefly noted in a conference paper, 9 and its combination of a T g somewhat higher than that of PEEK (T g = 143 °C; T m = 343 °C) and a very much lower T m (potentially enabling more facile processing) suggested to us that it would be worth investigating further as a possible composite matrix.
The polymer that might naively be expected from the above polycondensation would comprise an alternating sequence of two-ring and three-ring monomer residues. A rigorously alternating structure of this type has been obtained from the electrophilic polycondensation of 4,4′-diphenoxybenzophenone with isophthaloyl chloride (Figure 3). 10,11 The resulting semi-crystalline polymer (E1) shows thermal characteristics (T g = 147 °C; T m = 310 °C) similar to those of the 'nucleophilic' polymer N1 (T g = 152 °C; T m = 285 °C), though the T m of E1 is noticeably higher. This difference in T m may well be significant, as we now report that the nucleophilic synthesis, involving the use of an alkali metal carbonate as base, affords polymers with a range of different T m values and degrees of crystallinity depending on the nature of the alkali metal cation. This variability is shown to relate to the degree of sequence randomization during polycondensation, an effect resulting from reversible cleavage of the ether linkages during the growth of the polymer chain. 12,13
Experimental
Materials, instrumentation and analysis
Monomers, solvents, alkali metal carbonates and other reagents were obtained from Sigma Aldrich (UK) and were used without further purification. Inherent viscosities (η inh) were measured at 25 °C with 0.1% w/v polymer solutions in 96% sulphuric acid (H2SO4) using a Schott Instruments CT 52 viscometer (Mainz, Germany). No insoluble gel fractions were present in any of the polymers described. Phase transitions (glass transitions, cold crystallizations and melting points) were identified from the second heating cycles of differential scanning calorimetry (DSC) traces using a TA DSC Q2000 instrument (New Castle, Delaware, USA; 4-12 mg samples, 10 °C min−1 under a nitrogen atmosphere). A slight excess of alkali metal carbonate was used in each polycondensation to ensure quantitative conversion of the bisphenol to the bisphenoxide. Yields of polymers were essentially quantitative and were diminished only by mechanical losses during the milling stage. Proton (1H) and carbon (13C) nuclear magnetic resonance (NMR) spectra were obtained on Bruker Nanobay 400 MHz or 700 MHz NMR spectrometers (Billerica, Massachusetts, USA) using polymer solutions in deuterated chloroform (CDCl3)/hexafluoro-2-propanol ((CF3)2CHOH) (6:1 v/v) or CDCl3/trifluoroacetic acid (CF3COOH) (6:1 v/v). Mass spectra (electrospray ionization (ESI)) were obtained from 0.1% (w/v) sample solutions in methanol using a ThermoScientific LTQ OrbiTrap XL instrument (Waltham, Massachusetts, USA) equipped with an ACCELA LC autosampler.
Synthesis and characterization
Polymer N1a. A mixture of 1,3-bis(4-fluorobenzoyl)benzene (4.60 g, 14.28 mmol), 4,4′-dihydroxybenzophenone (3.00 g, 14.00 mmol), sodium carbonate (Na2CO3; 1.63 g, 15.4 mmol) and diphenylsulphone (35 g) was heated with stirring to 300 °C under an argon atmosphere. After 3 h, the polymer solution was poured onto a sheet of aluminium and allowed to cool. The resulting solid was ground to a powder in an ultracentrifugal mill and then stirred in acetone (200 mL) at room temperature for 30 min. The powder was filtered off, washed with acetone and dried. The powder was next extracted with 4 × 200 mL of refluxing acetone, and then overnight in a Soxhlet extractor with refluxing acetone. The powder was extracted with 5 × 200 mL of boiling water and then finally with 4 × 200 mL of refluxing acetone. The resulting, purified material was dried at 110 °C under vacuum overnight, affording polymer N1a (5.65 g, 81.3% yield). T g = 149 °C; T m = 300 °C; η inh (H2SO4) = 0.62 dL g−1; IR ν max cm−1: 2997 (C-H), 1655 (C=O), 1588 (...); 1H NMR: (...) Hz, 8H e,e′) ppm; 13C NMR: (...)
Polymer N1b. This polymer was obtained using the procedure described for polymer N1a, but with potassium carbonate (2.13 g, 15.4 mmol) replacing Na2CO3, to give polymer N1b (6.00 g, 86.3%).
Polymer N1c. The same procedure as described for polymer N1a was used, but replacing Na2CO3 with rubidium carbonate (3.56 g, 15.4 mmol) and using a 5 mol% excess of 1,3-bis(4-fluorobenzoyl)benzene (4.74 g, 14.70 mmol) to control molecular weight (MW), affording polymer N1c.
Polymer N1d. Polymer N1a (2.20 g), rubidium fluoride (1.49 g, 14.28 mmol) and diphenylsulphone (35 g) were heated with stirring at 300 °C under argon for 2 h. Using the same workup procedure as described for polymer N1a afforded polymer N1d as a tan powder (1.05 g, 47.0%).
Polymer N1e. The same procedure as described for polymer N1a was used, but replacing Na2CO3 with lithium carbonate (1.13 g, 15.4 mmol) and using an additional 2 × 200 mL of boiling water at the extraction stage to ensure removal of lithium fluoride (LiF). This gave polymer N1e (5.53 g, 78.2%).
Results and discussion
Polycondensation of 4,4′-dihydroxybenzophenone with 1,3-bis(4-fluorobenzoyl)benzene, in diphenylsulphone as solvent, at 300 °C in the presence of an alkali metal carbonate M2CO3 as base (Figure 2; M = Na, K or Rb), afforded the high-molecular-weight PEKs N1a, N1b and N1c, respectively, with inherent viscosities in the range 0.6-0.8 dL g−1. A slight molar excess of the difluoroketone was used to control the final MW. Following exhaustive extraction of diphenylsulphone and inorganic salts, the polymers were dried and analysed by DSC. After heating to 350 °C, the samples were cooled at 10 °C min−1, but none showed evidence of crystallization on cooling from the melt. However, on reheating at the same rate (Figure 4), polymer N1a underwent a glass transition (onset at 149 °C), followed by a cold-crystallization exotherm peaking at 245 °C, and finally a crystal melting endotherm at 300 °C. The other two polymers (N1b and N1c) showed only glass transitions, at 151 and 153 °C, respectively.
It seemed possible that the observed variation in crystallizability between the three polymers could result from differences in their sequence distributions, since transetherification with sequence randomization is known to occur during the nucleophilic synthesis of aromatic polyethers in which both monomer residues in the chain are activated towards nucleophilic attack adjacent to the ether linkage. 12,13 This possibility was confirmed by 13C NMR analysis (Figure 5), which showed useful diagnostic resonances in the range δ = 160-162 ppm, corresponding to the aromatic carbons attached directly to ether oxygens. Polymer N1a shows only two peaks in this region, corresponding to the two different carbons of this type that would be expected in the simple alternating structure (EKEKmK)n (cf. polymer E1), whereas polymers N1b and N1c show two additional 'inner' peaks in the 13C-O-C region, with the relative intensity of these increasing substantially from N1b to N1c (Figure 5).
In the 13C NMR spectrum of polymer 3, two 13C-O-C resonances are still evident, but the lower-field 13C-O-C peak is shifted only very slightly (ca. 0.25 ppm) relative to its position in the spectrum of 1a, whereas the other peak moves substantially upfield, by some 2.9 ppm (Figure 7). This result strongly suggests that the lower-field resonance represents the 13C-O-C carbon associated with the three-ring residue, which is chemically unchanged in the dimethyl polymer 3, and that the strongly shifted, higher-field resonance can be assigned to the two-ring residue, in which the methyl substituents are ortho to its 13C-O-C carbons. The 13C NMR spectra of polymers N1b and N1c, however, show two additional inner peaks in the 13C-O-C region (Figure 5). The increased multiplicity of peaks suggests that sequence randomization could be occurring, and indeed the new peaks proved assignable to sequences comprising two adjacent two-ring residues and two adjacent three-ring residues. These assignments were achieved by doping samples of polymer N1b with the homopolymers PEK 7 and PEKmK 9, which resulted in the enhancement of the intensities of the higher-field and lower-field inner peaks, respectively (Figure 8).
The ¹³C NMR spectrum of polymer N1c, produced using rubidium carbonate, showed even more extensive sequence randomization than in N1b, now with four peaks of equal intensity in the C–O–C region (Figure 5). Analysis of the probability distribution for the three possible dimer sequences around the ether linkages in polymer N1 shows that a completely random polymer would contain KEK, KEKmK and KmKEKmK sequences in the relative proportions 1:2:1. This distribution would indeed give rise to four ¹³C NMR resonances of equal intensity (1:1:1:1) in this region of the spectrum, since the unsymmetrical KEKmK sequence contains two inequivalent C–O–C carbons. A similar calculation indicates that the intensity ratios observed in the spectrum of polymer N1b (ca. 3:1:1:3) correspond to approximately 50% randomization relative to a fully alternating sequence. It is worth noting that the original report of polymer N1 indicated that it was synthesized using a mixture of sodium and potassium carbonates and that its melting point was 285 °C,9 significantly lower than that observed for polymer N1a. It thus seems very probable that a significant degree of sequence randomization of N1 had also occurred in that work. In fact, even N1a is unlikely to be 100% alternating, as its melting point (300 °C) is still slightly lower than that of the 'electrophilic' polymer E1 (305 °C). Moreover, the ¹³C NMR spectrum of N1a, shown in Figure 5, reveals a very weak but still detectable inner resonance corresponding to the symmetric, non-alternating sequence KmKEKmK. The relationships between polymers N1a–N1d, identified in the present work, are summarized in Figure 9.
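The dyad statistics behind these intensity ratios can be made explicit with a simple two-state model. The sketch below is our own illustration (not the authors' calculation): it assumes that a fraction x of the ether linkages carries fully random dyad statistics (homo- and hetero-dyads in 1:2:1 proportion) while the remainder stays strictly alternating:

```python
# Sketch (illustrative): expected relative intensities of the four C-O-C
# 13C resonances as a function of the randomization fraction x.
# Model assumption: a fraction x of ether linkages is fully random
# (A-A : A-B : B-B = 1:2:1, where A = two-ring residue and B = three-ring
# residue); the remaining (1 - x) linkages are all A-B. Each linkage
# contributes one C-O-C carbon on each side of the ether oxygen.

def peak_pattern(x: float):
    outer = (1 - x) + 0.5 * x  # carbons facing the *other* residue (hetero-dyads)
    inner = 0.5 * x            # carbons facing the *same* residue (homo-dyads)
    return outer, inner, inner, outer

for x in (0.0, 0.5, 1.0):
    o, i, _, _ = peak_pattern(x)
    unit = i if i > 0 else o   # normalize to the smallest non-zero intensity
    print(f"x = {x:.1f}: {o/unit:.0f}:{i/unit:.0f}:{i/unit:.0f}:{o/unit:.0f}")
# x = 0.0: 1:0:0:1  (alternating: outer peaks only)
# x = 0.5: 3:1:1:3  (matches polymer N1b)
# x = 1.0: 1:1:1:1  (fully random: matches polymer N1c)
```

At x = 0.5 the model reproduces the observed 3:1:1:3 pattern, consistent with the ~50% randomization estimated for N1b.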
A number of possible mechanisms have been proposed for transetherification during the synthesis of aromatic PEKs, but all depend on reversible, nucleophilic cleavage of the ether linkages (Figure 10). Candidate nucleophiles in the system include the carbonate and fluoride anions, and indeed potassium carbonate has previously been shown to induce a small degree of sequence randomization in an aromatic PEK, albeit requiring very high reaction temperatures (340 °C) and long reaction times (6 h).15 The fluoride ion can be a very strong nucleophile in dipolar aprotic solvents,16,17 but its effectiveness in the present context would depend both on the solubility of the fluoride salt involved and on the extent of pairing with its counterion in solution. The larger the counterion, the weaker the ion pairing and the more soluble the salt, so RbF should be much more effective than sodium fluoride, with potassium fluoride somewhere in between (rionic = 1.16, 1.52 and 1.66 Å for six-coordinate Na⁺, K⁺ and Rb⁺, respectively).18 This is fully consistent with our experimental results for sequence randomization in the synthesis of N1.
In the present work, sequence randomization catalysed by fluoride ion was demonstrated conclusively by treatment of the alternating polymer N1a with RbF in diphenylsulphone, at the same concentrations, temperature and time as in the polymer synthesis. The result was completely clear-cut, with the diagnostic C–O–C resonances of the product N1d changing from the two equal-intensity resonances of N1a (alternating structure) to four equal-intensity resonances (random sequence structure), exactly as found for polymer N1c.
As a final test of the proposed mechanism for sequence randomization, the polycondensation shown in Figure 2 was carried out using lithium carbonate as base. The extremely low solubility of lithium fluoride in organic solvents19 should strongly inhibit fluoride-catalysed transetherification and indeed, as shown in Figure 11, resonances arising from sequence randomization were scarcely discernible in the ¹³C NMR spectrum of the resulting polymer (N1e). As shown in Figure 12, polymer N1e also crystallized from the melt (Tc = 215 °C), unlike the other polymers described in this work, and showed a slightly higher Tm value than N1b (304 vs. 300 °C) and a much higher degree of crystallinity (ΔHm = 49 vs. 16 J g⁻¹), presumably the consequences of a more perfectly alternating chain sequence.
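For a rough sense of scale, these ΔHm values can be converted into approximate crystalline mass fractions. The reference enthalpy used in the sketch below is an assumption on our part (130 J g⁻¹, a value widely quoted for 100% crystalline PEEK, a closely related polymer) and does not come from this paper:

```python
# Sketch (illustrative only): approximate crystalline mass fraction from the
# measured melting enthalpy, Xc = dHm / dHm0.
# ASSUMPTION: dHm0 = 130 J/g, the commonly quoted value for fully crystalline
# PEEK, used here only as a stand-in reference for this PEK-type polymer.

DHM0 = 130.0  # J/g (assumed reference value, not from this work)

for name, dhm in (("N1e (Li2CO3, alternating)", 49.0),
                  ("N1b (K2CO3, partly randomized)", 16.0)):
    print(f"{name}: Xc ~ {dhm / DHM0:.0%}")
# N1e: Xc ~ 38%; N1b: Xc ~ 12%
```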
An intriguing observation is that, although DSC analysis customarily discounts the first heating scan, a consistent feature of the first (but not subsequent) DSC heating scans for polymers N1a, N1b and N1c is the presence of a melting endotherm at ca. 174 °C, in addition to the conventional polymer melting peak in the range 230–300 °C, and that this lower endotherm increases in intensity with the degree of sequence randomization. The lower melting peak is, however, essentially absent from the DSC trace of the fully alternating polymer N1e. This correlation seems to imply the existence of a low-melting crystalline phase in the 'as-isolated' polymers that is associated specifically with the packing of random sequences. This could be possible, in principle, because the three 'parent' polymers (KEKmK, KEK and KmKEKmK) have almost identical unit cells in cross-section perpendicular to the chain direction (orthorhombic, a = 7.67 ± 0.05 Å, b = 6.04 ± 0.07 Å, two chains per cell).10,20,21 Moreover, the X-ray powder patterns from 'as-isolated' samples of polymers N1c and N1d indicate substantial degrees of crystallinity, despite the high levels of sequence randomization. Crystallization of random-sequence copolymers is of course not unknown when the comonomer residues are isomorphic, but it is not yet clear how isomorphism arises in the present system. Computational modelling studies are under way in our laboratory to investigate this problem further.

Figure 9. Representations of the chain sequences in polymers N1a (almost entirely alternating), N1b (semi-randomized) and N1c (fully randomized), arising from the use of different alkali metal carbonates M₂CO₃ in the nucleophilic polycondensation shown in Figure 1. Treatment of N1a with RbF results in its conversion to polymer N1d, having the same, fully random, sequence as N1c. RbF: rubidium fluoride.

Figure 10. Partial mechanism for fluoride-catalysed sequence randomization in the synthesis of polymer N1. The initial chain cleavage by fluoride can occur at either the two-ring or three-ring monomer residue, and the fluoro end-group resulting from this reaction can subsequently regenerate a fluoride anion by reaction with a phenoxide monomer or end-group.

Figure 11. ¹³C NMR resonances in the C–O–C region for polymers N1c (fully randomized) and N1e, illustrating the virtual absence of sequence randomization in the latter polymer. ¹³C NMR: carbon-13 nuclear magnetic resonance.
Conclusions
Sequence randomization, via transetherification, during the nucleophilic synthesis of an aromatic PEK involving fluoride displacement from a bis(4-fluoroaryl)ketone can be controlled by varying the alkali metal cation (Li⁺, Na⁺, K⁺ or Rb⁺) present during polycondensation. The degree of transetherification increases with the ionic radius of the alkali metal involved, and a proposed mechanism in which fluoride ions reversibly cleave the growing polymer chain is substantiated by a direct demonstration of sequence randomization in the presence of RbF. The crystallizability of the polymer from the melt declines markedly as the degree of sequence randomization increases, although crystallization of the more highly randomized polymers from solution in diphenylsulphone affords an unusually low-melting crystalline phase whose nature remains to be established.
Authors' Note
Underlying data for this article may be requested from the corresponding author.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.