An energy landscape approach reveals the potential key bacteria contributing to the development of inflammatory bowel disease
Dysbiosis of the microbiota has been reported to be associated with numerous human pathophysiological processes, including inflammatory bowel disease (IBD). With advancements in high-throughput sequencing, various methods have been developed to study the alteration of microbiota in the development and progression of diseases. However, a suitable approach to assess the global stability of the microbiota in disease states from time-series microbiome data is yet to be established. In this study, we introduce a novel Energy Landscape construction method, which combines the Latent Dirichlet Allocation (LDA) model and the pairwise Maximum Entropy (MaxEnt) model for their complementary advantages, and demonstrate its utility by applying it to an IBD time-series dataset. Through this approach, we obtained the energy profile of microbial assemblages of the whole microbiota under the IBD condition and uncovered the hidden stable stages of microbiota structure during disease development from time-series microbiome data. The Bacteroides-dominated assemblages present in multiple stable states suggest a potential contribution of Bacteroides, and of its interactions with other microbial genera such as Alistipes and Faecalibacterium, to the development of IBD. Our proposed method provides a novel and insightful tool for understanding the alteration and stability of the microbiota under disease states and offers a more holistic view of the complex dynamics at play in microbiota-mediated diseases.
Introduction
The microbiota in humans plays a crucial role in maintaining health and well-being, with varying composition in different body sites, including the mouth, vagina, skin, and notably, the intestinal tract [1]. It has also been dubbed a "forgotten organ" due to its collective and complex metabolic activity [2]. Bowel dysbiosis, an imbalance in the composition of the microbiota, has been linked to numerous diseases, including gastrointestinal disorders such as inflammatory bowel disease (IBD) [3]. A comprehensive understanding of the impact and mechanisms of microorganism-host interactions is essential for diagnosing and treating associated diseases.
Recognizing the complexity of the pathogenic mechanisms of the gut microbiota, current microbiome research performs community-level and multi-omics analyses to uncover the association between the gut microbiota and diseases, including IBD [4]. Recent studies have elucidated the heterogeneity of the gut microbiota across IBD development stages and categories [5,6], suggesting the value of analyzing longitudinal time-series data to capture the dynamic features of microbiome alteration during disease pathogenesis. However, it remains a challenge for conventional methods to uncover the hidden microbiota structure from such time-series data.
We applied an energy landscape analysis combining the Latent Dirichlet Allocation (LDA) model and the pairwise Maximum Entropy (MaxEnt) model to an IBD gut microbiome dataset. Our results show multiple stable structure patterns in Crohn's disease patients, characterized by alterations of the genus Bacteroides, implying a key role for Bacteroides in shaping the dysbiosis stages and their transitions during the development of IBD.
The Latent Dirichlet Allocation model is a widely applied unsupervised machine learning method in natural language processing (NLP). It models text through a three-level hierarchical Bayesian model, with "topic-word" and "document-topic" multinomial distributions and a Dirichlet prior [7]. In the context of microbial abundance profiles, the LDA model can identify "microbial assemblages" by grouping taxa according to their co-occurrence features [8,9], similar to the "topics" in NLP studies. Additionally, the pairwise MaxEnt model is a second-order maximum entropy model that captures a single node's firing rate and the pairwise interactions in the biological system, assuming that higher-order interactions are not crucial and can be set aside [10,11]. This model has been demonstrated to accurately describe neural systems using time-series MRI data [10,12]. The pairwise MaxEnt model has been introduced to study the stability of microbial communities by Kenta et al. [13], assuming that the components have pairwise interactions akin to neuronal activity.
In this study, we use the LDA model to cluster the microbial abundance profile into a few microbial assemblages according to co-occurrence features and then apply the pairwise MaxEnt model to calculate an "energy" profile for all potential activity patterns of the microbial assemblages. Finally, the derived Energy Landscape depicts the overall stability of assemblage patterns and the relationships among them under specific health conditions. We investigated the stable assemblage patterns under each condition and discussed the key microbial elements that may contribute to shaping the intermediate stages of dysbiosis (Fig 1).
Ethics statement
The data used in this study are all available in the public domain (The Integrative Human Microbiome Project (iHMP), NIDDK U54DE023798) [14]; ethical approval is therefore not applicable to this study.
Metagenomic time-series dataset
We used the dataset from the Onset of Inflammatory Bowel Disease (IBD) study of The Integrative Human Microbiome Project (iHMP, NIDDK U54DE023798) [14]. The dataset contains taxonomic profiles of participants' fecal samples derived from 16S rDNA sequencing. These taxonomic profiles were collected repeatedly from each participant during the study period. Here, we selected each participant's first ten successive time-series samples and excluded participants with fewer than ten samples. The final sample size comprised 1300 samples collected from 130 participants, each contributing 10 samples. Some participants were diagnosed with one of the two major types of IBD, Crohn's Disease (CD) or Ulcerative Colitis (UC), while the remaining participants without IBD (non-IBD) served as controls.
The sample size
Table 1 presents the information on the classes in this study. In the LDA modeling step, only the first sample from the time series of each participant was used as the input (N = 130) to avoid the bias resulting from the homogeneity of the microbial community composition within the same participant. After learning the parameters φ_i in the LDA model, the model was applied to 780 samples (as detailed below) as input for the next step. In the pairwise MaxEnt modeling step, the modeling was conducted separately for the three disease types. To facilitate comparisons, balanced inputs across the three classes (N = 26 × 10 = 260 per class), consisting of 780 samples in total, were chosen for the modeling of each class.
Latent Dirichlet Allocation modeling
The Latent Dirichlet Allocation model is a generative statistical model applied to observations with unobserved latent attributes. According to the principle of the LDA model, the microbiota, or microbial community, is composed of a series of single "occurrence-events" (hereafter referred to as occurrences). An occurrence is defined as the solitary presence of a taxonomic unit. Each occurrence belongs to a latent attribute: a microbial assemblage. Hence the generative process of a microbial community, with I potential microbial assemblages and F genera in N samples, can be assumed as follows: (1) The k-th occurrence in sample n, among N samples, O_nk, has a latent assemblage attribute i which follows a multinomial distribution with parameters θ_n, n ∈ (1, ..., N):

$$I_n \sim \mathrm{Multinomial}(\theta_n), \quad n \in (1, \ldots, N).$$

Sampling from this distribution assigns the assemblage attribute i to the occurrence. (2) The taxonomic unit of the occurrence, genus f, given the assemblage i, follows a multinomial distribution with parameters φ_i, i ∈ (1, ..., I):

$$F_i \sim \mathrm{Multinomial}(\varphi_i), \quad i \in (1, \ldots, I).$$

After sampling from this distribution, one occurrence with genus f in sample n is set.
(3) The process (1)-(2) is repeated for O_n(k+1), and ultimately the occurrences combine to form the microbial community in sample n.
Notably, the parameter of the multinomial distribution of I_n, the vector θ_n = (θ_n1, ..., θ_nI), follows a Dirichlet distribution with prior parameter β_n, and the parameter of the multinomial distribution of F_i, the vector φ_i = (φ_i1, ..., φ_iF), follows a Dirichlet distribution with prior parameter α_i. Thus, θ and φ can be regarded as the "abundance" of the I assemblages in a specific sample and the weight of the F genera in one specific assemblage, respectively.
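As an illustration only, a minimal NumPy sketch of the generative process described above is given below; the assemblage count I, genus count F, occurrence count per sample, and the Dirichlet prior values are assumptions for the example and are not the parameters used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions and Dirichlet priors (not the study's fitted values)
I, F, N_OCCURRENCES = 9, 50, 1000    # assemblages, genera, occurrences per sample
beta = np.full(I, 0.1)               # prior for the sample-assemblage distribution theta_n
alpha = np.full(F, 0.1)              # prior for the assemblage-genus distribution phi_i

# Assemblage compositions phi_i ~ Dirichlet(alpha), one row per assemblage
phi = rng.dirichlet(alpha, size=I)   # shape (I, F)

def generate_sample():
    """Generate one microbial community (a genus count vector) via the LDA generative process."""
    theta_n = rng.dirichlet(beta)                # assemblage abundances for this sample
    counts = np.zeros(F, dtype=int)
    for _ in range(N_OCCURRENCES):
        i = rng.choice(I, p=theta_n)             # (1) draw the latent assemblage of the occurrence
        f = rng.choice(F, p=phi[i])              # (2) draw the genus given that assemblage
        counts[f] += 1
    return counts

community = generate_sample()
print(community[:10])
```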
After estimation of the parameters θ_n and φ_i, the F-dimensional original genus abundance profile was reduced to an I-dimensional assemblage abundance profile, which was then passed to the subsequent Maximum Entropy modeling.
Here, we selected I = 9 as the number of assemblages, which reduces the computational cost of pairwise Maximum Entropy modeling while maintaining interpretability (see also Discussion). For the reason mentioned in the previous section, we fit the LDA model to a subset of samples (N = 130) to fix the composition of the assemblages φ and then applied the model to all samples (N = 780) to obtain the abundance of assemblages θ in all samples. The LDA modeling was performed using the Python sklearn.decomposition.LatentDirichletAllocation package [15-17], and the statistical analysis was performed with the Python SciPy package [18].
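A minimal sketch of this fitting workflow with scikit-learn is shown below: φ is learned on the first-visit samples only and the fixed model is then used to transform all samples into θ. The count matrices, their shapes, and variable names are illustrative assumptions, not the study's data.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical genus-count matrices: rows are samples, columns are genera.
# first_visit_counts: one sample per participant (N = 130), used to learn the assemblage
# compositions phi; all_counts: all time-series samples (N = 780), transformed afterwards.
first_visit_counts = np.random.default_rng(1).integers(0, 200, size=(130, 50))
all_counts = np.random.default_rng(2).integers(0, 200, size=(780, 50))

lda = LatentDirichletAllocation(n_components=9, random_state=0)
lda.fit(first_visit_counts)          # fit phi on the first-visit samples only

# Row-normalized components_ give the weight of each genus in each assemblage (phi)
phi = lda.components_ / lda.components_.sum(axis=1, keepdims=True)   # shape (9, 50)

# transform() gives the assemblage abundances (theta) for every sample
theta = lda.transform(all_counts)    # shape (780, 9), rows sum to ~1
print(phi.shape, theta.shape)
```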
Pairwise Maximum Entropy modeling
We fit the pairwise Maximum Entropy model following its previous applications in neuroscience [11,12,19]. In the pairwise MaxEnt model, the objective is to maximize the information entropy of the probability distribution under the Maximum Entropy Principle while fitting the model's individual-assemblage and pairwise-interaction strengths to the empirical data, represented by the constraints on ⟨σ_i⟩ and ⟨σ_i σ_j⟩. These are defined as

$$\langle \sigma_i \rangle = \frac{1}{N}\sum_{n=1}^{N} \sigma_i^{\,n}, \qquad \langle \sigma_i \sigma_j \rangle = \frac{1}{N}\sum_{n=1}^{N} \sigma_i^{\,n} \sigma_j^{\,n},$$

where σ_i^n = ±1 is the occurrence state of the i-th assemblage in the n-th sample, i and j denote two different assemblages, "empirical" denotes the value computed from the data, and "model" denotes the expected value given by the model. The whole model is derived by solving

$$\max_{P} \; -\sum_{\sigma} P(\sigma)\log P(\sigma) \quad \text{s.t.} \quad \langle \sigma_i \rangle_{\mathrm{model}} = \langle \sigma_i \rangle_{\mathrm{empirical}}, \;\; \langle \sigma_i \sigma_j \rangle_{\mathrm{model}} = \langle \sigma_i \sigma_j \rangle_{\mathrm{empirical}}.$$

The pairwise MaxEnt model gives the probability of an assemblage pattern σ occurring as

$$P(\sigma \mid h, g) = \frac{\exp[-E(\sigma \mid h, g)]}{\sum_{\sigma'} \exp[-E(\sigma' \mid h, g)]}, \qquad E(\sigma \mid h, g) = -\sum_{i} h_i \sigma_i - \frac{1}{2}\sum_{i \neq j} g_{ij}\, \sigma_i \sigma_j.$$

The parameters h and g are estimated from the data and represent the tendency of a single assemblage to occur and the interaction between two assemblages, respectively. Positive and negative values of g are interpreted as promotional and inhibitory interactions, respectively. We estimated the parameters through the maximum-likelihood method [19], solving

$$(\hat{h}, \hat{g}) = \operatorname*{argmax}_{h,\, g} L(h, g),$$

where L(h, g) is the likelihood function given by ∏_n P(σ^n | h, g). The likelihood was maximized by updating h and g in a gradient-ascent scheme until convergence:

$$h_i^{\mathrm{new}} = h_i^{\mathrm{old}} + \epsilon\left(\langle \sigma_i \rangle_{\mathrm{empirical}} - \langle \sigma_i \rangle_{\mathrm{model}}\right), \qquad g_{ij}^{\mathrm{new}} = g_{ij}^{\mathrm{old}} + \epsilon\left(\langle \sigma_i \sigma_j \rangle_{\mathrm{empirical}} - \langle \sigma_i \sigma_j \rangle_{\mathrm{model}}\right),$$

where "new" and "old" denote the values after and before a single update step, respectively, and ε > 0 is a constant controlling the step size. The pairwise Maximum Entropy modeling was performed using the Python NumPy package [20].
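A minimal sketch of this gradient-ascent fit is given below. It assumes the model expectations are computed by exact enumeration of all 2^I patterns (feasible for nine assemblages); the step size, iteration count, and function names are illustrative and not taken from the paper.

```python
import numpy as np
from itertools import product

def fit_pairwise_maxent(sigma, eps=0.1, n_iter=5000):
    """sigma: (n_samples, I) array of +1/-1 occurrence states. Returns (h, g)."""
    n, I = sigma.shape
    emp_mean = sigma.mean(axis=0)                # <sigma_i>_empirical
    emp_corr = sigma.T @ sigma / n               # <sigma_i sigma_j>_empirical

    patterns = np.array(list(product([-1, 1], repeat=I)), dtype=float)  # all 2^I patterns

    h = np.zeros(I)
    g = np.zeros((I, I))
    for _ in range(n_iter):
        # Energy and Boltzmann-form probability of every candidate pattern
        energy = -patterns @ h - 0.5 * np.einsum('ki,ij,kj->k', patterns, g, patterns)
        p = np.exp(-(energy - energy.min()))     # shift for numerical stability
        p /= p.sum()
        model_mean = p @ patterns                              # <sigma_i>_model
        model_corr = patterns.T @ (patterns * p[:, None])      # <sigma_i sigma_j>_model
        h += eps * (emp_mean - model_mean)                     # gradient-ascent updates
        g += eps * (emp_corr - model_corr)
        np.fill_diagonal(g, 0.0)                               # no self-interaction terms
    return h, g
```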
Definition of the occurrence state in assemblage pattern
The pairwise MaxEnt model requires binary input. Here, we defined the assemblage pattern as σ, where the value of each microbial assemblage σ_i, i ∈ (1, ..., I), was assigned either 1 or -1 according to its "occurrence state." This state represents whether the specific microbial assemblage has a relatively high abundance in a sample. Recall that the assemblage abundance in each sample is given by the parameter θ_n from the LDA model; if the i-th assemblage of the n-th sample has a higher probability parameter than that of the m-th sample (θ_ni > θ_mi), we consider that the i-th assemblage shows a higher abundance in the microbial community of the n-th sample than in that of the m-th sample.
Here, a threshold was set to define a relatively high abundance, or "activated" assemblage, for binarization. We assigned the occurrence state σ_i under the following rule:

$$\sigma_i^{\,n} = \begin{cases} -1, & \text{if } \theta_{ni} \text{ is below the upper 25th percentile of } \{\theta_i\}, \\ +1, & \text{otherwise,} \end{cases}$$

where θ_ni is the abundance of the i-th assemblage in the n-th sample given by the LDA model and {θ_i} is the set of abundances of the i-th assemblage over all samples of a class. The occurrence state of an assemblage in a microbial community corresponds to the binary spike state of a single neuron in Schneidman's study, which assigns the response of the neuron a binary state of 1 (spike) or 0 (no spike) [11]. We then integrated the two models through this definition by transforming the output of the LDA modeling, θ, into the binary input σ for the pairwise MaxEnt modeling.
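A short sketch of this binarization rule is given below: within one class, an assemblage is set to +1 in the samples where its LDA-derived abundance falls in the upper 25th percentile of that assemblage's abundances (i.e., at or above the 75th percentile), and to -1 otherwise. The example θ values are made up for illustration.

```python
import numpy as np

def binarize_assemblages(theta, upper_quantile=0.75):
    """theta: (n_samples, I) assemblage abundances from the LDA model for one class.
    Returns an (n_samples, I) matrix of +1/-1 occurrence states."""
    thresholds = np.quantile(theta, upper_quantile, axis=0)  # per-assemblage 75th percentile
    return np.where(theta >= thresholds, 1, -1)

# Illustrative theta values for one class (260 samples x 9 assemblages)
theta_cd = np.random.default_rng(3).dirichlet(np.ones(9), size=260)
sigma_cd = binarize_assemblages(theta_cd)
print(sigma_cd[:3])
```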
Energy Landscape
The distribution we obtained from the pairwise MaxEnt model has the form of the Boltzmann distribution in statistical mechanics,

$$P_i \propto \exp\!\left(-\frac{\varepsilon_i}{kT}\right),$$

where ε_i is the energy of the system in state i, k is Boltzmann's constant, and T is the temperature [11]. Recalling the distribution of assemblage patterns P(σ|h, g) obtained above, we refer to E(σ|h, g) as the energy of the system in the Boltzmann distribution.
According to the obtained parameters h and g and the function E(σ|h, g), we then assigned an energy value to all potential assemblage patterns.We considered that the assemblage patterns with high energy were unstable and had a low probability of occurring and vice versa.
The Energy Landscape can be constructed once the energy table for all assemblage patterns is obtained. The Energy Landscape was constructed as described in Ezaki's study [19]. First, a neighbor pattern of an assemblage pattern σ, denoted by σ′, was defined as a pattern differing in the state of only a single assemblage. For example, two nine-assemblage patterns that differ only in the state of the first assemblage are neighbor patterns of each other. We assumed that neighbor patterns are closely related to the original pattern, and that the transition to a neighbor pattern is the initial step of any further transition. Second, the energy of a specific pattern, E(σ), was compared to the energies of all its neighbor patterns, E(σ′). If the d-th neighbor pattern E(σ′_d) is the minimum in this comparison, we assumed that the pattern σ has the closest relation to σ′_d and linked them to depict the potential transition direction following the steepest energy descent. Third, if the pattern σ has no neighbor pattern with lower energy, we defined it as a local minimal pattern (LMP). Intuitively, an LMP is located at the bottom of an energy basin, reflecting the aforementioned transition paths from high-energy patterns towards low energy and high stability in the Energy Landscape. Finally, every assemblage pattern belongs to one basin through the path linking the pattern to its neighbor patterns and finally reaching the LMP (see the Results section). The construction of the energy landscape figures was conducted with the Python NetworkX package [21].
The progression trend of a microbial system can be assumed to start from an initial assemblage pattern, transition to a neighbor pattern with higher stability, and repeat the same process towards the LMP with the locally highest stability. The Energy Landscape illustrates the energy relationships of the dynamic microbial system, especially the stable patterns that might contribute to specific health states of the host.
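A sketch of the landscape construction described above is given below: every pattern is linked to its single-flip neighbor with the steepest energy descent (or to itself when it is an LMP), and basin membership is obtained by following the descent paths. The exact enumeration of all 2^I patterns and the variable names are illustrative assumptions.

```python
import numpy as np
from itertools import product

def energy(sigma, h, g):
    """Energy of one assemblage pattern under the fitted pairwise MaxEnt model."""
    return -sigma @ h - 0.5 * sigma @ g @ sigma

def build_landscape(h, g):
    I = len(h)
    patterns = np.array(list(product([-1, 1], repeat=I)), dtype=float)  # 2^I patterns
    energies = np.array([energy(s, h, g) for s in patterns])

    # Link every pattern to its steepest-descent single-flip neighbor (or itself if it is an LMP)
    descend_to = np.arange(len(patterns))
    for k, s in enumerate(patterns):
        best, best_e = k, energies[k]
        for i in range(I):
            neighbor = s.copy()
            neighbor[i] *= -1                                              # flip one assemblage state
            idx = int(((neighbor + 1) / 2) @ (2 ** np.arange(I)[::-1]))    # index of that pattern
            if energies[idx] < best_e:
                best, best_e = idx, energies[idx]
        descend_to[k] = best

    # Follow the descent paths until every pattern reaches its local minimal pattern (basin label)
    basin = descend_to.copy()
    while not np.array_equal(basin, descend_to[basin]):
        basin = descend_to[basin]
    lmps = np.flatnonzero(descend_to == np.arange(len(patterns)))
    return patterns, energies, basin, lmps
```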
LDA modelling result
The parameters θ and φ represent the abundance of assemblages and the weight of the components in assemblages, respectively. According to the parameter fitting results, the composition of some assemblages was clearly dominated by a single genus, such as assemblages #6 and #4, dominated by the genera Bacteroides (0.89) and Prevotella (0.87), respectively. In other assemblages, two or more genera mildly dominated the others, such as Bacteroides (0.52) and Faecalibacterium (Table 2). The components of one assemblage can be regarded as sharing similar characteristics and contributing jointly to the assemblage's effects on the host.
The abundance of assemblages showed a strong imbalance between the assemblages. In most samples, the abundance of assemblage #6 was notably higher than that of the other assemblages.
Pairwise MaxEnt modeling result
In addition, the modeling results for other CD classes with different participants showed similar features for both parameters (S2A and S2B Fig), which supports the reproducibility of our method.
Energy Landscape
The Energy Landscape was constructed from the energies E(σ) of the 512 assemblage patterns given by the energy function with the parameters h and g described in the Methods section. From Fig 6A-6C, it can be observed that all patterns were grouped into small clusters according to the LMP to which they were directed. These clusters can also be regarded as the "energy basins" of the Energy Landscape, which indicate the pattern-shifting trend. Because of the correspondence between LMPs and energy basins, there were four energy basins in the CD class and two in each of the UC and non-IBD classes.
The potential key bacteria in CD development suggested by the analysis of LMPs
Here, we obtained insights into microbiota alteration when health conditions switch from one to another through comparison of the LMPs in the energy landscape. As introduced in the Methods, each LMP represents a locally stable assemblage pattern from the viewpoint of the energy landscape. Collectively, these individual LMPs reflect the globally stable stages of the microbiota under specific conditions.
The LMPs were P-#2, P-#17, P-#33 and P-#137 in CD, and P-#33 and P-#455 in the healthy non-IBD class (Fig 6). Interestingly, the three assemblages #1, #5 and #6 associated with the genus Bacteroides were observed as the only "activated" assemblage in three LMPs, P-#2, P-#17 and P-#33, of the CD class, while only the pattern P-#33 with activated assemblage #6 was observed in the non-IBD class. According to the probability of Bacteroides' occurrence in the assemblages, the Bacteroides genus was strongly dominant in assemblage #6 in P-#33 (φ_Bacteroides = 0.89), mildly dominant in assemblage #1 in P-#2 (φ_Bacteroides = 0.57), and weakly dominant in assemblage #5 in P-#17 (φ_Bacteroides = 0.29). These three levels of Bacteroides dominance suggest varied involvement of Bacteroides in these stable stages. Alterations in the gut microbiota are strongly associated with the development of IBD, which is characterized by a reduced abundance of commensal anaerobic bacteria, including members of the Bacteroides genus [22-26]. Alteration of Bacteroides has also been reported between disease phases [23]. Intestinal Bacteroides species have evolved a commensal colonization system contributing to the homeostasis of the gut microbiota [27], which might be attributed to synthesized conjugated linoleic acid, known for its immunomodulatory properties [6]. However, longitudinal data with a large sample size and a long timescale have yet to show the role of Bacteroides in IBD development. Our results support the alteration of Bacteroides during the disease development of CD. Moreover, the multiple LMPs characterized by different degrees of Bacteroides dominance may highlight Bacteroides's role in shaping the stable patterns of microbiota structure in CD, and the alteration of Bacteroides might be the key to the transitions between these patterns. If we consider the potential concurrence between the stage of disease development and the microbiota, this result also implicates Bacteroides as a potential marker of disease pathogenesis.
Also, among these three LMPs P-#2, P-#17 and P-#33, the genus Alistipes was the first dominant component (φ_Alistipes = 0.29) in assemblage #5 of LMP P-#17, apart from the Bacteroides dominating the other two patterns. Alistipes has been reported to relate to gut inflammation, but contrasting results about its contribution to the disease have also been reported [28]. Our result may support a harmful contribution of Alistipes to CD development, and this contribution might be affected by the decreasing Bacteroides.
Interestingly, two genera showed behavior differing from their previously reported anti-inflammatory properties. The genus Faecalibacterium was the second dominant component in assemblage #1 of P-#2 (φ_Faecalibacterium = 0.22) and in assemblage #5 of P-#17 (φ_Faecalibacterium = 0.20). Note that the only species of this genus, Faecalibacterium prausnitzii, has been reported to decrease in IBD pathogenesis [29] and to produce an anti-inflammatory protein [30]. Besides, the genus Parabacteroides was the third dominant component in assemblage #1 of P-#2 (φ_Parabacteroides = 0.14). Parabacteroides spp. have been identified as probiotics and related to the alleviation of tumorigenesis and inflammation [31,32]. Therefore, comparing assemblage #6 of the shared LMP P-#33 with assemblage #1 of the CD-specific LMP P-#2, the transition from the healthy pattern P-#33 to the disease-stable pattern P-#2 with assemblage #1 can be interpreted as the joint effect of three factors: the increase of Faecalibacterium and of Parabacteroides, which are reported to be beneficial against the disease, and the decrease of Bacteroides. We could speculate that such a "trade-off trend" between these factors from both directions, and their contributions to CD development, leads to the potential intermediate LMP P-#2 with activated assemblage #1.
Apart from the three Bacteroides-associated LMPs, in the CD-specific LMP P-#137 with activated assemblages #4 and #8, we found Prevotella and Escherichia as the dominant genera, respectively. Both have been reported to be related to chronic inflammatory diseases [33,34]. We suppose that the concurrence of Prevotella and Escherichia could be a potential marker of a particular stage in CD development.
We conclude that the aforementioned genera and their interactions might be key to the alteration of the microbiota in CD development. In particular, the alteration of Bacteroides and its "trade-off trend" with other genera are suggested to contribute crucially to shaping the microbiota stages and facilitating the transitions between stages during disease development, which remains to be further investigated.
The methodological advantages of Energy Landscape approach
We combined the LDA model and the pairwise MaxEnt model for their complementary advantages and achieved the goal of uncovering hidden microbiota patterns from time-series microbiome data. The LDA model can extract co-occurrence assemblages from the microbiota [8,9]; however, it does not indicate the stable compositions and their transitions in the dynamic system. On the other hand, the pairwise MaxEnt model studies the compositional stability of the changing microbiome system [13], but only a few high-abundance taxa were selected as input. Our approach combines the two models and incorporates their advantages to assess the global compositional stability of the overall microbiota (Table 3).
Amos et al. elucidated that the alteration of gut microbiota structure is specific to the disease stratification and location of IBD, showing the heterogeneity of the gut microbiota during IBD development [5]. They used a well-labeled cohort with collected metadata to compare the alteration of the microbiota. Our proposed method showed a consistent result of multiple stable structure patterns under disease, which might reflect the gut microbiota structure at intermediate stages of disease development. Those stable structure patterns characterized by Bacteroides-associated assemblages indicate a potential key role of Bacteroides in shaping the stages and their transitions. Notably, our method gave this result without using detailed patient information, which implies its potential to uncover hidden microbial signatures and their relationships during disease development using time-series microbiome datasets. This capability might enable the exploration of hidden stages in time-series microbiome data lacking sufficient descriptive information.
The technical features surpassing conventional approaches
Firstly, our method analyzes microbiome composition data at the community level and comprehensively considers the complex interactions between microbial communities. In our proposed approach, the microbial taxonomic group, the assemblage, is defined by the LDA model based on abundance co-occurrence, and the composition of assemblages, considering all pairwise interactions in both positive and negative directions, is analyzed and evaluated by the Maximum Entropy model. In previous studies of the association between the microbiome and IBD, the interactions between species or taxonomic groups were treated in relative isolation. Some studies [22,35,36] discussed the potential contribution of the microbiota to IBD development by identifying taxonomic units with significant abundance alterations in composition data between healthy and disease cohorts, but the joint role of those altered taxa was not thoroughly analyzed. On the other hand, a recent study [37] examined co-occurrence networks that defined microbial modules as 'quantitative traits' in IBD development and associated these quantitative traits with genome-wide quantitative trait loci by linkage analysis. Although the co-occurrence network categorized the taxa into community-level modules, the modules were investigated separately without taking their interactions into account.
Secondly, the LDA and pairwise MaxEnt models do not require independent input, which makes the approach appropriate for time-series data. In our study, we used fecal microbiome composition data with 10 successive time points for each participant, separated by gaps of more than one week. Our result is derived from longitudinal data, which should reflect the dynamic characteristics of microbiome alteration. The model enables researchers to address the association between microbiota and disease from a dynamic perspective. Although IBD, as a chronic disease, is dynamic, microbiome studies have primarily focused on single time points or a few individuals, which makes it hard to capture the dynamic features of microbiome alteration during disease pathogenesis. Moreover, the time points of a longitudinal study are dependent and therefore difficult to analyze with conventional statistical methods for cross-individual comparison, which require independent samples. For example, Walker et al. [38] observed the alteration of Firmicutes and Bacteroidetes in IBD patients using Mann-Whitney U test analysis of single-time-point microbiome composition data from only a few patients. In Lewis et al.'s study [39], the authors explored the effects of inflammation and anti-inflammatory treatment on the composition of the gut microbiota in Crohn's disease. They analyzed the samples by comparing single time points of the microbial composition between health/disease and no-treatment/treatment groups using a quantile regression model.
Thirdly, our method quantitatively describes the probability of occurrence of all potential combination patterns among bacterial assemblages and constructs a global stability view, the "Energy Landscape", for the homeostasis and dysbiosis of the gut environment. In other words, even patterns that do not occur in the input data are assigned an energy value given by the parameters estimated from the observed data. This feature enables researchers to analyze and discuss all possible situations and to observe transition routes between patterns that represent intermediate microbiota structures. The prediction of the transition between a current pattern from a sample and its future development might also become available after further improvement of the method. Currently, even though some studies have directed attention toward the dynamics of the microbiome, there is a lack of quantitative methods to describe and analyze absent or rare abundance patterns in IBD microbiome data. In Halfvarson et al.'s study [40], although they found that healthy participants' microbiomes vary within a defined "Health Plane" while IBD samples deviate from it, only the structures present in the collected data were analyzed. The whole set of potential structural patterns in IBD development, especially the short-term intermediate stages between disease and health, was not quantitatively analyzed, which leaves a barrier to studying the shift of microbiome structure from the healthy to the disease stage.
In summary, our approach addresses the challenges of conventional microbiota-disease association analysis through the comprehensive evaluation of interactions between microbial communities, compatibility with dependent time-series data, and the capability to quantitatively analyze all potential composition patterns. The insight into stable microbiota patterns gained from this approach, which captures the complex structural and dynamic aspects of the gut microbiota in disease development, contributes to the growing body of knowledge on the microbiota-IBD association.
Limitations and future work
Considering the function and role of an assemblage as a unit is challenging. All nine assemblages had their own dominant components (Fig 3B), which may be considered to determine, to a great extent, the assemblage's contribution to the host's microbiome. Besides, genera with a much higher probability of occurring in one specific assemblage, i.e., genera that are "unique" to a specific assemblage, bring special features to that assemblage. However, as observed from the composition of the assemblages, most genera satisfy the condition of being "unique", which increases the complexity of studying the function of assemblages. Thus, in this study, we mainly discuss the function of assemblages according to their dominant components. A more comprehensive and persuasive method to analyze the assemblages is required for future studies.
Although potential microbiota structure stages are indicated by the LMPs uncovered from each class, it is still challenging to establish the associations between them. We speculate that the LMP P-#2 in CD is an intermediate stage between the healthy pattern P-#33 and the more severe pattern P-#17, according to the stepwise change in the Bacteroides-dominated level of the represented assemblages. However, more experimental evidence is necessary to prove this association, and the transition routes between the stable patterns merit further discussion in future work.
Conclusions
In this study, we introduced a novel Energy Landscape approach combining the LDA and pairwise MaxEnt models, with their complementary benefits, to study the heterogeneity of the microbiota during disease pathogenesis from time-series microbiome data. The method uncovers hidden intermediate microbiota structures and their transitions during the development of microbiome-associated diseases and explores the microbial taxa that play key roles in shaping the relevant structures. The analysis of a time-series IBD dataset reveals the potential contribution of Bacteroides and several other genera to CD development. The results demonstrate the method's promising capability for studying the role of dysbiosis in microbiota-associated diseases.
Fig 2
Fig 2 depicts the aforementioned generative process. Based on the multinomial distributions, the parameters θ_n and φ_i represent the probability of the occurrence of assemblages given the sample n (p(i | θ_n) = θ_ni) and the probability of the occurrence of genera given the assemblage i (p(f | φ_i) = φ_if), respectively.
The parameters h and g were obtained from the pairwise MaxEnt model in the three classes with the same sample size (N = 26 × 10 = 260). Fig 5A shows the tendency for the occurrence of single assemblages, h. By definition, a low value of h implies a low energy and a high probability of occurring. Notably, assemblages #1 and #6, dominated by Bacteroides, had obviously lower values than the other assemblages, and they had lower values in the CD class than in the UC and non-IBD classes. Fig 5B shows the pairwise interactions between assemblages, g. A high value of g for two specific assemblages means that their co-occurrence contributes to a low energy of the assemblage pattern. Several differences in interaction features among the three classes can be observed. Interestingly, the interaction between assemblages #1 and #6 had a clearly lower value in the non-IBD class than in the CD and UC classes.
Fig 4 .
Fig 4. Common genera in the assemblages. A: the network shows the relations between assemblages, where the edges are weighted by the number of common genera within the top five dominant components of each assemblage. B: the table shows the top five dominant genera of all assemblages and the number of times they recur in different assemblages. https://doi.org/10.1371/journal.pone.0302151.g004

Fig 6A-6C depict the energy of assemblage patterns in the three classes by drawing line plots linking each pattern to its steepest-energy-descent neighbor pattern (see the Methods section), with the energy value as the Z-axis; S5 Fig depicts the same Energy Landscape in a 2D view. Fig 6D shows the LMPs of each class, representing assemblage patterns with locally low energy and high stability. Four LMPs were observed in the CD class: patterns P-#2, P-#17, P-#33 and P-#137, while two LMPs were observed in each of the UC and non-IBD classes: patterns P-#33 and P-#455 in UC, and patterns P-#9 and P-#33 in non-IBD. Within these LMPs, patterns P-#2, P-#17, P-#33 and P-#9 had only a single positive assemblage, while patterns P-#137 and P-#455 had multiple positive assemblages. Notably, pattern P-#33 was shared by all three classes, while the other patterns were unique to specific classes.
Fig 5 .
Fig 5. Pairwise MaxEnt results for the three classes. A: the heatmap shows the tendency for occurrence of single assemblages, h, obtained from the pairwise MaxEnt model. B: three heatmaps describe the pairwise interactions between assemblages, g, obtained from the pairwise MaxEnt model. https://doi.org/10.1371/journal.pone.0302151.g005
Fig 6 .
Fig 6. Energy landscape constructed according to the pairwise MaxEnt modeling results. A, B, C: 3D line plots showing the energy of all patterns of the 9 assemblages in the CD, UC, and non-IBD classes, respectively. Each assemblage pattern is connected to its neighbor pattern with the steepest energy descent, or to itself when it is a local minimal pattern. D: the composition of each LMP in the three classes (CD, UC, and non-IBD, respectively). A green block indicates an activated state (+1) in the assemblage pattern. https://doi.org/10.1371/journal.pone.0302151.g006
Table 1 . The classes in the study.
The numbers without brackets represent the number of participants, and the numbers in brackets represent the number of samples. https://doi.org/10.1371/journal.pone.0302151.t001
Table 2 . The dominant genera in the assemblages.
The values in brackets represent the weight of genera φ.
Table 3 . The advantages and disadvantages of the two models.
The LDA model and pairwise MaxEnt model are complementary and cooperate to study the stability of the dynamic microbiome system.
"year": 2024,
"sha1": "c67c979f7aa5add644e72fad836dda14eff7ecc0",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "c67c979f7aa5add644e72fad836dda14eff7ecc0",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Chronic treatment with paeonol improves endothelial function in mice through inhibition of endoplasmic reticulum stress-mediated oxidative stress
Endoplasmic reticulum (ER) stress leads to endothelial dysfunction, which is commonly implicated in the pathogenesis of several cardiovascular diseases. We explored the vascular protective effects of chronic treatment with paeonol (2'-hydroxy-4'-methoxyacetophenone), the major compound from the root bark of Paeonia suffruticosa, on ER stress-induced endothelial dysfunction in mice. Male C57BL/6J mice were injected intraperitoneally with the ER stress inducer tunicamycin (1 mg/kg/week) for 2 weeks to induce ER stress. The animals were co-administered with or without paeonol (20 mg/kg, oral gavage), the reactive oxygen species (ROS) scavenger tempol (20 mg/kg/day) or the ER stress inhibitor tauroursodeoxycholic acid (TUDCA, 150 mg/kg/day), respectively. Blood pressure and body weight were monitored weekly, and at the end of treatment the aorta was isolated for isometric force measurement. Proteins associated with ER stress (GRP78, ATF6 and p-eIF2α) and oxidative stress (NOX2 and nitrotyrosine) were evaluated using Western blotting. Nitric oxide (NO) bioavailability was determined using a total nitrate/nitrite assay and Western blotting (phosphorylation of eNOS protein). ROS production was assessed by en face dihydroethidium staining and a lucigenin-enhanced chemiluminescence assay, respectively. Our results revealed that mice treated with tunicamycin showed increased blood pressure, reduced body weight and impaired endothelium-dependent relaxations (EDRs) of the aorta, which were ameliorated by co-treatment with paeonol, TUDCA or tempol. Furthermore, paeonol reduced the ROS level in the mouse aorta and improved NO bioavailability in tunicamycin-treated mice. These beneficial effects of paeonol were comparable to those produced by TUDCA and tempol, suggesting that the actions of paeonol may involve inhibition of the ER stress-mediated oxidative stress pathway. Taken together, the present results suggest that chronic treatment with paeonol preserved endothelial function and normalized blood pressure in mice treated with tunicamycin in vivo through the inhibition of ER stress-associated ROS.
Introduction
The endoplasmic reticulum (ER) is the cellular organelle responsible for protein translation, biosynthesis, translocation, folding and post-translational modifications, including glycosylation, disulfide bond formation, and chaperone-mediated protein folding processes [1]. When ER homeostasis or function is impaired by biological stress such as ATP deprivation, hypoxia or calcium overload, unfolded proteins accumulate [2]. Following this, glucose-regulated protein 78 (GRP78) is released from the ER stress sensors, permitting their oligomerization to deal with the accumulated unfolded proteins, which activates transcriptional and translational pathways known as the unfolded protein response (UPR) [3]. When the UPR is activated, three distinct UPR branches are initiated, namely protein kinase-like ER kinase (PERK), which phosphorylates eukaryotic translation initiation factor 2 alpha (eIF2α), the inositol-requiring kinase 1 (IRE1), and the activating transcription factor 6 (ATF6) [4]. Excessive and prolonged UPR activates pro-apoptotic pathways which contribute to the development of cardiovascular diseases [5]. ER-initiated apoptosis is mediated through IRE1 and CHOP (C/EBP-homologous protein), either by downregulation of BCL-2 (an anti-apoptotic protein) or by interrupting calcium homeostasis signalling [6]. ER stress-induced ROS and apoptosis have been demonstrated in animal models of arteriosclerosis [6], ER stress [7], hypercholesterolemia [8] and diabetes [9]. Therefore, targeting UPR component molecules and reducing ER stress would be promising strategies to treat cardiovascular diseases.
Recent studies demonstrate a synergistic relationship between ER stress and oxidative stress in the pathogenesis of cardiovascular diseases [4,10]. The ER stress pathway involving calcium and Ca2+/calmodulin-dependent protein kinase II (CaMKII) has been shown to activate nicotinamide adenine dinucleotide phosphate (NADPH) oxidase, leading to oxidative stress [7,11]. NADPH oxidase, a multi-subunit enzymatic complex, is one of the key generating sources of cellular reactive oxygen species (ROS), such as superoxide anion (O2−), in the vasculature [12,13]. Nitric oxide is released by the endothelium and causes vascular relaxation [14]. However, O2− acts as a vasoconstrictor and reacts rapidly with nitric oxide (NO), forming peroxynitrite, which in turn leads to eNOS uncoupling and production of more O2− [15]. ROS-producing enzymes such as NADPH oxidase, xanthine oxidase and cyclooxygenase, inactivation of the antioxidant system, and uncoupling of endothelial NO synthase lead to oxidative stress [16]. Excessive production of oxidants causes increased peripheral resistance, which has been implicated in the development of hypertension [17]. Oxidative stress-mediated hypertension is associated with inactivation of NO [18]. These processes induce intracellular calcium build-up, initiation of inflammatory signalling pathways and increased extracellular matrix deposition, leading to endothelial dysfunction in hypertension [19-21]. Therefore, natural products with antioxidant properties should have beneficial effects in reducing blood pressure. Paeonol, or 2'-hydroxy-4'-methoxyacetophenone (Fig 1A), is the main phenolic compound of a Chinese herbal medicine prepared from the root bark of the plant Paeonia suffruticosa Andrew. Paeonol is used in traditional oriental medicine to improve blood circulation and to treat amenorrhea, dysmenorrhea and fever [22,23]. Paeonol has previously been reported to protect against acetaminophen-induced hepatotoxicity in mice [24], to improve Parkinson's disease in a mouse model [25] and to ameliorate diabetic encephalopathy in streptozotocin-induced diabetic rats by attenuating oxidative stress [26]. Previously, we reported that paeonol protects against ER stress-induced endothelial dysfunction via inhibition of the upstream pathway involving 5′ adenosine monophosphate-activated protein kinase (AMPK)/peroxisome proliferator-activated receptor δ (PPARδ) signalling in an in vitro model [27]. However, the chronic effects of paeonol on ER stress-induced oxidative stress, resulting in endothelial dysfunction and increased blood pressure in vivo, remain obscure. Therefore, the present study sought to investigate the endothelial protective effects of paeonol against ER stress-mediated ROS overproduction and elevation of blood pressure in mice. We hypothesized that chronic treatment with paeonol for 2 weeks protects against ER stress-induced oxidative stress and normalises blood pressure. The results of this investigation may provide new insights into the role of paeonol in mitigating ER stress-related cardiovascular diseases such as hypertension, heart failure, ischemic heart disease, and atherosclerosis.
Animals and experimental protocol
All experiments were performed with approval from the Institutional Animal Care and Use Committee (IACUC) of the University of Malaya (Ethics reference no: 2016-170531/PHAR/R/MRM). Male C57BL/6J mice (8 weeks old), weighing (mean±SD) 22±14 grams, were purchased from Monash University (Sunway Campus, Malaysia), housed in groups of five and given 2 weeks to acclimate to the housing facility. The mice were housed in a well-ventilated room maintained at a temperature of 23˚C with 12 h light/dark cycles and 30%-40% humidity, and had free access to standard mouse chow (Specialty Feeds Pty Ltd., Glen Forrest, Australia) and tap water ad libitum. During housing, animals were monitored daily for health status. No adverse events were detected.
A total of 48 mice were randomly assigned to the following groups: 1) a control group; 2) a group that received intraperitoneal injection of the ER stress inducer tunicamycin (Tu, 1 mg/kg, 2 injections/week for 2 weeks) and vehicle (saline, oral gavage, daily for 2 weeks); 3) a group that received tunicamycin and oral administration of paeonol (20 mg/kg/day for 2 weeks) (Tu + Paeonol); 4) a group that received only oral administration of paeonol (20 mg/kg/day for 2 weeks); 5) a group that received tunicamycin and daily oral administration of the ROS scavenger tempol (20 mg/kg/day) for 2 weeks (Tu + Tempol); 6) a group that received tunicamycin and daily intraperitoneal injection of the ER stress inhibitor tauroursodeoxycholic acid (TUDCA, 150 mg/kg/day) for 2 weeks (Tu + TUDCA). For groups that did not receive tunicamycin, the same dose of saline as tunicamycin was given as 2 injections per week for two weeks via intraperitoneal injection. No animal was excluded from any experiment. The experimenters were blinded to the pharmacological treatment while processing data and making exclusion decisions. The dose of paeonol was determined from the literature [25,28,29] and our preliminary data (S1 Fig), which showed that paeonol treatment at 20 mg/kg improved endothelium-dependent relaxation in mice treated with tunicamycin.
The body weights were recorded daily during the experimental period. Systolic blood pressure (SBP) of the mice was measured at day 0, day 7 and before sacrifice (day 14) using the tail-cuff blood pressure system (NIBP Monitoring System, IITC Inc., Woodland Hills, CA, USA). The animals were restrained in a pre-warmed chamber (28-30˚C) for at least 30 min before the blood pressure measurement was carried out. The arterial blood pressure measurements were performed at the same time of day (between 9 a.m. and 11 a.m.) in order to avoid the influence of the circadian cycle. The value of SBP was recorded and reported as the average of 6 successive measurements.
At the end of the treatment period, mice were anaesthetized with CO2 inhalation and blood samples were collected. Blood samples were centrifuged at 2500 rpm for 10 min at 4˚C to obtain serum, which was immediately stored at -80˚C until further use. Then, the mouse aorta was isolated immediately and processed accordingly for subsequent experiments. All sections of this report adhere to the ARRIVE Guidelines for reporting animal research [30]. A completed ARRIVE guidelines checklist is included in Checklist S3.
Functional study
The descending thoracic aorta was carefully isolated and cleaned from adjacent connective tissues and fat. The aorta was cut into rings segments, 3-5 mm long and placed in oxygenated Krebs physiological salt solution (KPSS in mM: NaCl 119, NaHCO3 25, KCl 4.7, KH2PO4 1.2, MgSO4.7H2O 1.2, glucose 11.7, and CaCl2.2H2O 2.5). Some of the rings were snap frozen in liquid nitrogen and stored in -80˚C for protein analysis. Two mounting wires were threaded through the isolated mouse aorta rings and secured to two supports in a Multi Wire Myograph System (Danish Myo Technology, Aarhus, Denmark). One support was attached to a micrometer for the adjustment of vessel circumference and application of tension. The other support was attached to an isometric transducer. The fresh aortic rings were maintained at 37˚C and stretched to optimal basal tension of 5 mN with continuous oxygenation of 95% O2 and 5% CO2. The rings were equilibrated for 45 min before being stimulated with 80 mM KCl to prime the tissues and then were rinsed with Krebs solution for 3 times. Once tissues were stabilised, phenylephrine (PE, 3 μM) was added to induce a sustained contraction. Endothelium-dependent relaxation (EDR) was generated by cumulative addition of acetylcholine (ACh, 3 nM to 10 μM; Sigma-Aldrich) and α2-adrenoceptor agonist, UK14304 (3 nM to 10 μM; Sigma-Aldrich). Endothelium-independent relaxation to sodium nitroprusside (SNP, 1 nM to 10 μM; Sigma-Aldrich) was also carried out. The changes of isometric tension were recorded using the PowerLab LabChart 6.0 recording system (AD Instruments, Bella Vista, NSW, Australia). Each experiment was performed on rings obtained from different mice from each group. Concentration-response curves for both endothelium-dependent and -independent relaxation were expressed as the percentage of reduction in contraction induced by PE before the application of ACh, UK14304 or SNP independently. The maximum effect (R max ) and the concentration inducing 50% of R max (pEC 50 ) were determined from the cumulative concentration-response curves.
Detection of ROS formation in en face endothelium of mouse aortas and HUVECs
The level of oxidative stress in the en face endothelium of the mouse aorta and in HUVECs was assessed using dihydroethidium (DHE) dye and confocal microscopy [27]. The treated HUVECs and aortic rings were incubated with DHE (5 μM, Invitrogen, Carlsbad, CA, USA) for 15 min at 37˚C in normal physiological saline solution (NPSS, composition in mM: NaCl 140, KCl 5, CaCl2 1, MgCl2 1, glucose 10 and HEPES 5) at pH 7.4. After incubation, the cells and aortic rings were rinsed 3 times with NPSS. The aortic rings were cut open, and the endothelium was placed upside down between two coverslips on the microscope. Fluorescence intensity was captured with a Leica TCS SP5 II confocal microscope (Leica Microsystems, Mannheim, Germany) with 515-nm excitation and a 585-nm long-pass filter. Background autofluorescence of elastin in the aortic rings was recorded separately at 488 nm excitation and 520 nm emission to avoid overlapping of the emission spectra. DHE fluorescence intensity was analyzed with Leica LAS-AF software version 2.6.0.7266 and is expressed as the fold change in fluorescence intensity relative to the control group.
Detection of vascular superoxide formation
The lucigenin-enhanced chemiluminescence method was used to quantify vascular superoxide anion production as previously described [27]. Briefly, aortic rings from each group were preincubated for 45 min at 37˚C in Krebs-HEPES buffer (in mM: NaCl 99.0, NaHCO3 25, KCl 4.7, KH2PO4 1.0, MgSO4 1.2, glucose 11.0, CaCl2 2.5 and Na-HEPES 20.0) in the presence of diethylthiocarbamic acid (DETCA, 1 mM) and β-nicotinamide adenine dinucleotide phosphate (β-NADPH, 0.1 mM). DETCA was used to inactivate superoxide dismutase (SOD), while β-NADPH was used as a substrate for NADPH oxidase. The NADPH oxidase inhibitor diphenylene iodonium (DPI; 5 mM) was added for the positive control. Before measurement, a 96-well Optiplate containing lucigenin (5 mM) and β-NADPH (0.1 mM) in 300 μl of Krebs-HEPES buffer per well was loaded into the Hidex plate CHAMELEON V (Finland). Background photoemission was measured at 30-second intervals over 20 min. The rings were then transferred into the wells and the measurement was taken again. Upon completion of the measurement, the rings were dried for 48 h at 65˚C and weighed. The data are expressed as average counts per mg of vessel dry weight.
Measurement of vascular nitrate/nitrite level
The total nitrate/nitrite level was detected in the aorta using a Nitrate/Nitrite Colorimetric Assay Kit (Cayman Chemical Company, Ann Arbor, MI, USA) according to the manufacturer's protocol. The absorbance was measured using Hidex plate CHAMELEONTM V (Turku, Finland) and compared with a standard nitrite curve at 540 nm. The results are expressed in μM.
Data analysis
Results are presented as means ± SEM from n experiments. Concentration-response curves were fitted to a sigmoidal curve by non-linear regression using the statistical software GraphPad Prism version 4 (GraphPad Software Inc., San Diego, CA, USA). Statistical significance was determined using a two-tailed Student's t-test for comparison of two groups and one-way ANOVA followed by Bonferroni multiple comparison tests when more than two treatments were compared. Results with P values <0.05 were considered statistically significant.
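The curve fitting was done in GraphPad Prism; as an illustration only, a minimal Python sketch of an equivalent sigmoidal (four-parameter logistic) fit with SciPy is shown below, from which Rmax and pEC50 can be read off. The concentrations and relaxation values are made-up example data, not results from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(log_conc, bottom, top, log_ec50, hill):
    """Four-parameter logistic curve on log10 concentration."""
    return bottom + (top - bottom) / (1 + 10 ** ((log_ec50 - log_conc) * hill))

# Made-up example data: log10 molar ACh concentrations and % relaxation of PE-induced tone
log_conc = np.log10([3e-9, 1e-8, 3e-8, 1e-7, 3e-7, 1e-6, 3e-6, 1e-5])
relaxation = np.array([2.0, 8.0, 20.0, 45.0, 70.0, 85.0, 92.0, 95.0])

params, _ = curve_fit(sigmoid, log_conc, relaxation, p0=[0.0, 100.0, -7.0, 1.0])
bottom, top, log_ec50, hill = params
print(f"Rmax = {top:.1f}%  pEC50 = {-log_ec50:.2f}")
```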
General parameters; body weight and systolic blood pressure
Mice treated with tunicamycin showed a significant increase in systolic blood pressure compared with the control group (125.20±3.01 versus 94.03±3.36 mmHg; P<0.05) at the end of two weeks. This increase was significantly reduced by co-treatment with paeonol (103.70±6.83 mmHg), the ER stress inhibitor TUDCA (103.70±6.19 mmHg) or tempol (98.84±1.53 mmHg), as shown in Fig 1B. Mice treated with tunicamycin for two weeks demonstrated a reduction in body weight, which was improved following co-treatment with paeonol, TUDCA or tempol (Fig 1C). There were no significant changes in either body weight or systolic blood pressure between the paeonol-only and control groups (Fig 1B and 1C).
Paeonol improved tunicamycin-induced endothelial dysfunction in mouse aorta
To determine the role of paeonol treatment in ER stress-induced endothelial dysfunction in mice, we examined EDR and endothelium-independent relaxation produced by ACh, UK14304 and SNP in the aorta in a concentration-dependent manner. Aortas from mice treated with tunicamycin for 2 weeks displayed attenuated EDRs (to ACh and UK14304) compared to aortas from the control group. Chronic treatment with either paeonol or TUDCA significantly improved the EDRs impaired by tunicamycin (Fig 2A-2E, Table 1). The role of vascular oxidative stress in mice treated with tunicamycin was evaluated following chronic treatment with tempol, a superoxide scavenger. Co-treatment with tempol prevented the tunicamycin-induced impairment of relaxations to ACh (Fig 2C-2E, Table 1). However, the EDRs of the paeonol-only group were similar to those of the control group (Fig 2A and 2B). Sodium nitroprusside-induced endothelium-independent relaxation was similar in all treatment groups, suggesting that the sensitivity of the vascular smooth muscle to NO remained intact (Fig 2F and 2G, Table 1).
Paeonol inhibited ER stress-induced oxidative stress in mouse aorta
We next explored the effect of chronic treatment with paeonol on ER stress-associated proteins. Glucose-regulated protein 78 (GRP78) (Fig 3A), activating transcription factor-6 (ATF6) (Fig 3B) and phosphorylated eukaryotic translation initiation factor 2 alpha (eIF2α) (Fig 3C) were all elevated in mice treated with tunicamycin, and these changes were reversed following co-treatment with paeonol or TUDCA. Additionally, co-treatment with either paeonol or tempol inhibited the tunicamycin-stimulated up-regulation of the NADPH oxidase subunit NOX2 and of nitrotyrosine (a marker for peroxynitrite and an index of increased oxidative stress) compared with the control group (Fig 3D and 3E). No significant changes were observed between the control and the paeonol-only groups (Fig 3A-3E).

Table 1. Agonist sensitivity (pEC50) and % maximum response (Rmax) of the endothelium-dependent vasodilators acetylcholine (ACh) and UK14304 and the endothelium-independent vasodilator sodium nitroprusside (SNP) in isolated aorta from C57BL/6J mice treated with tunicamycin (Tu), paeonol, tempol and TUDCA for 2 weeks. Results are means ± SEM (n = 6-7).

Paeonol reduced the superoxide production in mouse aorta

Next, we determined the ROS level in mouse aortas. ROS formation in the en face endothelium and the O2− level were markedly increased in mice treated for 2 weeks with tunicamycin compared to the control group, as reflected by the intensity of DHE fluorescence staining (Fig 4A and 4B) and lucigenin-enhanced chemiluminescence (LEC) (Fig 4C), respectively. Co-treatment with paeonol or TUDCA reduced the tunicamycin-stimulated ROS. Similarly, chronic treatment with the ROS scavenger tempol normalized the elevated ROS production in mice treated with tunicamycin.
In accordance with the DHE staining of the en face endothelium of the mouse aorta, HUVECs treated with tunicamycin showed an increase in ROS production (S2A Fig) and NOX2 protein up-regulation (S2B Fig), which were reduced by co-incubation with paeonol. Similarly, co-incubation with tempol, TUDCA, or tempol + TUDCA reversed the adverse effects of tunicamycin. The ROS levels in the paeonol-only group were similar to those of the control group in both HUVECs (S2A-S2C Fig) and mouse aorta (Fig 4A-4C).

Paeonol enhanced nitric oxide bioavailability in mouse aorta

Tunicamycin-treated mice displayed a significant decrease in tissue total nitrate/nitrite level compared to the control mice (Fig 5A). The effect of tunicamycin was reversed by chronic co-treatment with paeonol, TUDCA or tempol. In addition, chronic paeonol treatment promoted phosphorylation of eNOS at Ser1176 in aortas, which was reduced in tunicamycin-treated mice. Chronic tempol and TUDCA treatment similarly increased phosphorylation of eNOS at Ser1176 compared to mice treated with tunicamycin (Fig 5B). There were no significant differences between mice treated with paeonol only and the control group (Fig 1B & 1C).
Discussion
The present study demonstrates that chronic treatment with paeonol in vivo confers vascular protection by alleviating ER stress and oxidative stress. We observed increased systolic blood pressure, reduced body weight, impaired endothelium-dependent relaxation, up-regulation of ER stress markers, increased ROS generation and reduced nitric oxide (NO) bioavailability in aortae following treatment with tunicamycin in C57BL/6J mice. These changes were reversed by chronic co-administration of paeonol, TUDCA or tempol, respectively.
Prolonged ER stress leads to advanced lesional macrophage death, plaque necrosis and increased vascular smooth muscle contractility, resulting in increased blood pressure [6,10]. Our results revealed that mice treated with tunicamycin showed elevated blood pressure and reduction in body weight, which is in agreement with previously reported literature [10,32]. The elevated blood pressure and reduction in body weight were normalised by chronic treatment with paeonol for two weeks. A previous study reported that ER stress may increase blood pressure by increasing cardiac output and peripheral vascular resistance [10]. In addition, ER stress has been reported in hypertensive patients [33], animals with metabolic syndrome [34], high-salt-intake-induced hypertensive rats [35] and angiotensin II-induced hypertensive mice [36]. In fact, recent in vivo findings have shown that the ER stress inhibitor TUDCA reduced blood pressure and improved vascular activity in spontaneously hypertensive rats (SHRs) through inhibition of ER stress [37]. Furthermore, hypertension in humans is associated with decreased NO bioavailability and an increase in oxidative stress [20]. It has been shown that oxidative stress is a key contributor to the pathogenesis of hypertension [21,38]. Oxidative stress can impact vascular tone, leading to endothelial dysfunction [39,40]. ROS promote vascular cell proliferation and migration, inflammation and apoptosis, as well as extracellular matrix alterations [41,42]. Inhibition of ER stress in hypertension improved macrovascular endothelial function through a transforming growth factor-β1 (TGF-β1)-dependent mechanism and microvascular endothelial function through an oxidative stress-dependent mechanism [36]. The anti-hypertensive effects of paeonol are comparable to those produced by the ER stress inhibitor (TUDCA), the antioxidant (tempol) or both, suggesting that it may work by inhibiting the ER stress-mediated oxidative stress pathway [43,44]. These results are in agreement with the findings of Al-Magableh and co-workers (2015), which showed that hydrogen sulfide reduced blood pressure in angiotensin II-induced hypertensive mice through inhibition of vascular oxidative stress [45].
ER stress has a negative impact on vascular function, as treatment with tunicamycin for several weeks reduced ACh-induced endothelium-dependent relaxation in large and small arteries [36,46]. ER stress also triggers inflammatory signalling mechanisms [47] and reduces phosphorylation of eNOS, causing endothelial dysfunction [48]. ER stress, through the activation of NF-κB and transforming growth factor beta 1 (TGF-β1), contributes to an increase in ROS generation, which also culminates in vascular dysfunction and the development of hypertension [49]. Tunicamycin caused less impairment of the EDR to ACh in the aorta of p47phox−/− mice than in wild-type control mice, suggesting an association of ER stress with enhanced NADPH oxidase-ROS activity [48]. In a stressed ER, dysregulated disulfide bond formation and breakage may result in ROS accumulation and induce oxidative stress [50]. In addition, some UPR components such as CHOP may promote apoptosis, release ROS and impair the endothelium [51]. In agreement with our earlier findings [27], in vivo treatment with paeonol, TUDCA and tempol reversed the impaired endothelium-dependent relaxations to ACh and UK14304, an α2-adrenoceptor agonist, in aorta isolated from tunicamycin-treated mice. Paeonol is known for its anti-inflammatory effects [52,53]. The attenuation of ER stress-induced inflammation may also have contributed to the protective effects of paeonol against ER stress-related injuries [54,55]. Previous studies have shown that paeonol induces vasodilatation of the rat mesenteric arteries by inhibiting extracellular Ca2+ influx and intracellular Ca2+ release [56]. Additionally, our in vivo results revealed that mice treated with tunicamycin showed up-regulation of ER stress proteins (phosphorylated eIF2α, ATF6 and GRP78), which was reversed by treatment with paeonol, TUDCA and tempol. Taken together, the results indicate that paeonol improves EDRs, probably through inhibition of ER stress.
Oxidative stress and increased ROS production are integral components of acute and chronic states of UPR signalling [13]. Accumulation of unfolded proteins in the ER stimulates Ca2+ leakage into the cytosol, which further augments oxidative phosphorylation in the electron transport chain, increases cytochrome c release impairing electron transfer, alters mitochondrial membrane potential and increases the generation of ROS [57,58]. Recent evidence reveals that under ER stress, ROS production is increased via enzymes of the NADPH oxidase (NOX) family, especially NOX2, which is involved in blood pressure regulation [59] and the augmentation of pro-apoptotic signalling [60]. Cholesterol- or 7-ketocholesterol-induced ER stress promotes the pathophysiology of various cardiovascular diseases, including heart failure and an oxidative shift in macrophages, which are suppressed by NOX2 siRNA [7]. This eventually decreases the bioavailability of NO and ultimately leads to endothelial dysfunction [61]. ROS production is also increased by electron leakage from ER stress-associated activation of p450 2E1, a pro-oxidant protein [62]. In agreement with these previous studies, treatment with paeonol inhibited the tunicamycin-induced vascular expression of NOX2 and nitrotyrosine, a marker for peroxynitrite, in a similar manner to tempol, a free radical scavenger. The alleviation of ROS following paeonol treatment was accompanied by increased bioavailability of NO. Paeonol treatment prevented tunicamycin-induced ER stress in the mice by reducing ROS production via inhibition of NOX2 and nitrotyrosine formation. Similarly, paeonol improved EDRs measured in the isolated mouse aorta to a similar extent as treatment with tempol or TUDCA. Paeonol likely acts on ROS signalling, extracellular signal-regulated kinase (ERK) and Ca2+ mechanisms, such as ischaemia-reperfusion (I/R) preconditioning pathways, to protect the ER from oxidative stress [63,64]. Additionally, paeonol has been reported to enhance antioxidant defence via nuclear factor erythroid 2-related factor 2 (Nrf2) activation in vivo [65].
In summary, the present results demonstrate that chronic administration of paeonol in tunicamycin-induced ER stress in mice confers protection against endothelial dysfunction and normalises blood pressure by alleviating ER stress-induced oxidative stress (Fig 6). The present data provide further evidence supporting the potential use of paeonol as a novel therapeutic agent or health supplement for patients with ER stress-related cardiovascular diseases, particularly in the treatment of hypertension. | 2018-04-03T05:01:32.960Z | 2017-05-31T00:00:00.000 | {
"year": 2017,
"sha1": "eb957a159aaa989366b95bb2418ae362e03fcd03",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0178365&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "42a3bf448a9c03471d70d9ece5778690f068f352",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
238848593 | pes2o/s2orc | v3-fos-license | The Consequences of COVID-19-Related Anxiety on Children’s Health: A Cross-Sectional Study
Background: The emergence of the COVID-19 pandemic has triggered a worldwide health catastrophe. Anxiety caused by COVID-19 has had a negative impact on people's physical and mental health. According to the findings of the research, significant emphasis has been devoted to measures linked to the identification of persons with coronavirus infection, but the identification of the affected individuals' mental health issues has been overlooked. Despite study data indicating an increase in fear and anxiety in patients with coronavirus and others, little research on coronavirus disease-related anxiety has been conducted so far. Methods: This cross-sectional study used a survey method with a self-reported questionnaire for data collection from Mashhad residents. The research sample included 435 households with children aged 5 to 18. The data were analyzed using SPSS software version 25 and comprised two measures, (1) the Corona Disease Anxiety Scale (CDAS) and (2) the Child Health Questionnaire (CHQ) developed by Landgraf and Abets. The findings indicated that COVID-19-related anxiety has a detrimental influence on children's health; anxiety explained 19% of the variance in children's health (β = -0.625, Sig. = 0.001, Adj. R2 = 0.193). Results: The findings revealed a significant difference in children's mean health scores related to form of insurance coverage, parents' education level, housing status, and COVID-19-associated anxiety. Children's anxiety levels have increased, causing harm to their health and a reduction in their health status. Conclusion: The outcomes of the study will help health professionals and governments establish appropriate protective measures to address this worldwide health problem.
As previously stated, fear and anxiety about COVID-19 illness are now widespread [24]. Anxiety is a frequent unpleasant feeling felt by people during a disease outbreak [25]. Previous research has found that in the early phases of the coronavirus outbreak in China, more than half of respondents experienced significant psychological symptoms, with around one-third reporting moderate to severe anxiety [26][27][28].
Concerning the prevalence of anxiety in individuals, one of the primary issues is the health condition of children; accordingly, the United Nations has encouraged all countries to prepare for the care and health of children, particularly during this time. The importance of this issue is such that worldwide strategies for the health of children and their mothers address four variables in decreasing child mortality and five aspects in promoting newborn health [29,30]. It is worth noting that, because children's cognitive capacity is restricted in childhood, they acquire the majority of their knowledge of the world around them with the assistance of their parents, and as they get older, their intellectual powers increase and they progressively become independent. In the meantime, parents are a valuable source of guidance and instruction [31]. One of the preventive factors against children's psychological and behavioral disorders that has a direct influence on children's health is the interaction between parents and children [32,33].
According to the findings of the research, significant emphasis has been devoted to measures linked to the identification of persons with coronavirus infection, but the identification of the afflicted individuals' mental health issues has been ignored [34][35][36]. Despite study data indicating an increase in fear and anxiety in patients with coronavirus and others, little research on coronavirus disease anxiety has been conducted to date [21]. Consequently, in 2021, this study investigated the link between COVID-19-related anxiety and children's health in parents with children aged 5-18 years in Mashhad, a city in northeastern Iran.
Design
This cross-sectional research was conducted in the northeastern Iranian metropolis of Mashhad in 2021.
Population
Families with children aged 5 to 18 years were included in the statistical population. The SPSS Sample Power program was used to calculate the sample size.
Measures (anxiety)
The Corona Disease Anxiety Scale (CDAS) [37] and the Child Health Questionnaire (CHQ) of Landgraf and Abets (1996) [38] were utilized in the current investigation. The CDAS, developed and validated in Iran, is used to assess anxiety induced by COVID-19 disease. This questionnaire consists of 18 items and two components. Items 1-9 evaluate psychological symptoms, whereas items 10-18 evaluate physical problems. The tool is evaluated using a four-point Likert scale (never, sometimes, most of the time, and always). Scores on this questionnaire range from 0 to 54, and a high score indicates a high level of anxiety. The Cronbach's alpha technique determined the instrument's reliability for the first factor (α = 0.879), the second factor (α = 0.867), and the entire questionnaire (α = 0.919) [37]. The Landgraf and Abets Child Health Questionnaire was used to examine children's health. The basic form of this questionnaire, with 28 items, had 13 subscales that investigated two aspects of health: physical health (including the functional subscales of physical problems and limitations, general health, and physical pain) and psychological health (including the subscales of social and emotional-behavioral limitations, self-esteem, mental health, and behavior and family problems). This questionnaire is one of the most widely used scales related to health and quality of life for children and adolescents, assessing noticeable areas of child function and health based on parent reports, and it can be used for girls and boys of various ages, as well as parents with varying levels of education and working and marital situations [39]. Validity studies in Iran have shown that the CHQ can discriminate between children with certain chronic illnesses and that it is associated with other health and quality of life scales. The version used here has 22 questions, scored on a 5-point Likert scale. The tool is intended to assess eight aspects: child mental health, child self-satisfaction, child mobility, child performance, parental worry, parental limits, child general health, and overall child health score. The findings of this tool's factor analysis in the research of Golzar et al. [40] were reported twice, at 0.05 and 0.06, respectively, indicating that this instrument is well-suited for use in Iran. It should be noted that the questions in this instrument are evaluated using a Likert scale, and the tool's validity has been investigated in internal and external studies [41,42].
Data Collection
The following assumptions were used for determining the sample size in the current study: (1) the probability of a type I error is a maximum of 5% (alpha value); (2) the probability of a type II error is a maximum of 20% (beta value); (3) the test power is 80%; (4) the confidence level is 95%; and (5) the sample size is such that a correlation of at least 0.15 can be detected. The sample size of 540 individuals was then established using the SPSS Sample Power program (an illustrative calculation of this kind is sketched below). The surveys were sent online to parents with children aged 5 to 18 years, and 435 completed questionnaires were used as the foundation for analysis.
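As a rough illustration of how a minimum sample size for detecting a correlation of at least 0.15 with 5% alpha and 80% power can be approximated, the sketch below uses the standard Fisher z approximation in Python. The exact options used in the SPSS Sample Power program are not reported here, so the printed value is illustrative only and is not expected to reproduce the study's figure of 540.

```python
# Illustrative sketch only (not the authors' SPSS Sample Power settings):
# approximate sample size needed to detect a minimum Pearson correlation r_min
# in a two-sided test, using the Fisher z approximation.
import math
from scipy.stats import norm

alpha, power, r_min = 0.05, 0.80, 0.15   # assumptions stated in the text
z_alpha = norm.ppf(1 - alpha / 2)        # two-sided critical value
z_beta = norm.ppf(power)                 # value corresponding to desired power
c = math.atanh(r_min)                    # Fisher z transform of r_min
n = ((z_alpha + z_beta) / c) ** 2 + 3    # classic approximation for correlations
print(f"approximate minimum sample size: {math.ceil(n)}")   # ~347 under these inputs
```

The published n of 540 presumably reflects additional considerations (for example, anticipated attrition) that this simplified formula does not capture.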
Data analysis
For data analysis, the SPSS-22 program was used. The data were further analyzed using the independent-samples t-test, one-way analysis of variance, and simple linear regression.
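For readers who do not use SPSS, the following is a minimal sketch of the same three analyses with pandas, SciPy and statsmodels. The file name and the column names (chq, cdas, sex, insurance) are hypothetical placeholders, not the actual variables of the study dataset.

```python
# Minimal sketch of the three analyses named above: independent-samples t-test,
# one-way ANOVA, and simple linear regression.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("children_survey.csv")   # hypothetical file of 435 records

# 1) t-test: child health (CHQ) score by child sex
boys = df.loc[df["sex"] == "boy", "chq"]
girls = df.loc[df["sex"] == "girl", "chq"]
t_stat, p_t = stats.ttest_ind(boys, girls)

# 2) one-way ANOVA: CHQ score across insurance-coverage groups
groups = [g["chq"].to_numpy() for _, g in df.groupby("insurance")]
f_stat, p_f = stats.f_oneway(*groups)

# 3) simple linear regression: CHQ score on COVID-19 anxiety (CDAS) score
model = smf.ols("chq ~ cdas", data=df).fit()

print(f"t-test: t={t_stat:.2f}, p={p_t:.3f}")
print(f"ANOVA:  F={f_stat:.2f}, p={p_f:.3f}")
print(f"regression slope={model.params['cdas']:.3f}, adj. R2={model.rsquared_adj:.3f}")
```

Note that the slope reported by this sketch is an unstandardized coefficient, whereas the β reported in the paper appears to be standardized.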
Ethics approval and consent to participate
All procedures in studies involving human subjects were carried out in line with the institutional research committee's ethical standards, as well as the 1964 Helsinki Declaration and its subsequent modifications or similar ethical standards. The study procedure was authorized by the Medical Ethics Committee at the University of Social Welfare and Rehabilitation Sciences in Tehran (IR.USWR.REC.1400.043). This study included individuals who provided informed consent. Before commencing the study, the authors acquired verbal informed consent from all participants, and all participants completed a written informed consent form after being told about the purposes of the project.
Sociodemographic
According to the findings, the mean age of the children was 12.21 years, while the mean age of the parents was 39.16 years. The parent respondents' minimum age was 23 years, and their maximum age was 65 years. Of the children, 210 (48.2%) were boys, with mean CHQ and CDAS scores of 87.1 ± 12.1 and 12.2 ± 8.4, respectively, and 225 (51.6%) were girls, with mean scores of 84.1 ± 12.8 and 13.1 ± 9.2; the t-test findings indicated a significant difference in the children's mean health score between genders (P < 0.05). Other findings revealed that the highest mean health score was found in children with supplementary insurance (90.9 ± 4.1), and the lowest in uninsured children (81.6 ± 14.7). Furthermore, children with supplementary insurance had the lowest mean CDAS score (9 ± 5.5). The ANOVA test findings also revealed a significant difference in the mean children's health score based on the children's insurance status (P < 0.05). The highest mean children's health score was associated with parents with a bachelor's degree (87.7 ± 10.6), while the lowest was associated with illiterate parents (80.5 ± 14.3). In addition, children whose parents held a Ph.D. had the lowest mean CDAS score (8.2 ± 5). The ANOVA test findings also indicated a significant difference in the mean children's health score based on the parents' educational level (P < 0.05). The findings on the mean pandemic-related anxiety score revealed that, depending on the type of housing, children living in leased housing had the highest mean COVID-19-related anxiety score (15.3 ± 10.1) (P < 0.05). The ANOVA test results also revealed a significant difference in the mean children's health score based on dwelling type (P < 0.05) (Table 1).
The findings of the regression test showed that anxiety has an effect on children's health (β = -0.625, Sig. = 0.001, Adj. R2 = 0.193). This variable accounted for 19% of the variance in children's health.
According to the data, the higher the degree of COVID-19-related anxiety among youngsters in Mashhad, the worse their health (Table 2).
Discussion
The current study investigated the relationship between COVID-19-related anxiety and the health of children aged 5 to 18 in Mashhad, a city in northeastern Iran. According to the findings of this study, COVID-19-related anxiety impacted children's health. Other studies have revealed that COVID-19 disease causes emotions of uncertainty, fear, and isolation, as well as sleep difficulties, anorexia, depression, loneliness, anxiety, post-traumatic stress disorder, and obsessive-compulsive disorder in children [43,44]. On the other hand, quarantine regulations and social isolation have resulted in a lack of physical exercise in them [45]. This separation has restricted children's opportunities to acquire social behaviors and has, in some cases, led to behavioral and emotional problems [46]. Other research has suggested that providing appropriate information about COVID-19 illness might relieve children's emotions of fear, worry, and doubt, as well as teach them good coping strategies [47]. According to the findings of the research, children's degree of awareness had a significant link with their anxiety, such that children who were more aware of this sickness experienced greater anxiety. Concerning the principles of crisis intervention, there is a need to appropriately increase awareness by addressing the idea of epidemic cessation. Increasing awareness through interventions such as social distancing during the epidemic and obeying sanitary principles such as frequent handwashing with soap and water might help in this respect. Consequently, it is critical to pay attention to how children get the majority of their information, what methods should be used to improve awareness in children, and what variables, in addition to the degree of awareness, have influenced their anxiety [48]. According to the findings of a study conducted in the Netherlands, interaction with children by health care workers can decrease anxiety linked with the COVID-19 pandemic in children and its possible harmful effects [43]. Educating parents on how to control their negative emotions is also a crucial step in fostering and sustaining children's mental health in times of crisis. As a consequence, providing information about COVID-19 disease based on children's cognitive development, health attitudes, and age is essential [48,49]. Yoga, meditation, exercise, and mental activity can help reduce the anxiety produced by COVID-19, and in order to fight the pandemic, parents and children must work together to diminish the harmful consequences of COVID-19-related anxiety on children's health [50].
In our study, there was a substantial variation in the mean score of children's health based on the kind of insurance coverage. Other research has found that having access to social insurance has a substantial impact on the number of times youngsters visit the doctor [51]. Furthermore, expanding public insurance coverage reduced deprivation at the communal level [52] as well as the financial burden on low-income households [53]. As previous studies have highlighted, this state can help to reduce poverty and inequality in society, as well as impact children's well-being. Children's well-being and health are critical for any society's future [53,54]. Today's children are our future generation, and their well-being today lays the groundwork for their health during adolescence. Access to health system services, such as health insurance coverage, is a social and economic right, and health planners must plan carefully to enhance justice and equity in this area.
According to the findings of the current study, there is a strong relationship between parents' educational level and their children's health. Other studies have found that parental literacy has an impact on children's health and that it is important to engage in their development in order to enhance children's health [54][55][56]. Some studies have indicated that women's education levels have an influence on children's health [57], and others have found that maternal and paternal education are equally important in lowering child mortality in Indonesia [56,58]. In developing countries, fathers generally have a greater level of education than mothers. Therefore, education for fathers might be beneficial. Another way to explain the role of fathers' education is the low social position and empowerment of mothers, which can diminish mothers' influence over child health decisions. Fathers may take a more active part in certain sorts of child health decisions, such as specific measures like immunizations. Mothers, on the other hand, may be more active in day-to-day decisions concerning public health and nutrition. The father's education has a stronger association with individual health habits, but the mother's education has a greater impact on long-term health indicators like height and weight [59,60]. Parents with a greater level of education and income, as well as children from higher-income households, are healthier because they have access to higher-quality health care, better nutrition, and better living conditions. This research has underlined the need to increase parental education via investment.
The current investigation found a strong relationship between housing status and the mean COVID-19-related anxiety score: those who lived in leased homes were more anxious. This issue has caused several problems in society. Despite this, research has shown that the coronavirus does not discriminate based on personal residence and affects renters just as often as homeowners [61]. Perhaps the economic crisis caused by the COVID-19 pandemic, as well as issues like unemployment and a lack of financial support for households, has increased anxiety among renters. According to the findings of previous research, the cost of renting a property in Iran is high [24,62], which may indirectly increase anxiety in renting families.
The current research was conducted during the pandemic. Due to special circumstances in Iran, such as lockdown, it was not feasible to complete the surveys in person; for that reason, the surveys were sent to parents over the internet. Another drawback of this study was the use of questionnaires, as there is a risk of bias in self-report instruments. Given these constraints and the study's final conclusions, it is proposed that future research clarify the relationship between the investigated variables in other provinces and cities around the country. Various aspects related to children's health, such as parenting and self-care principles, as well as other degrees of anxiety in children, should be examined. | 2021-09-27T21:04:34.567Z | 2021-08-02T00:00:00.000 | {
"year": 2021,
"sha1": "d9656ae4212af9cf33bd8397043f6be345036fba",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-730170/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "6d5f6bae99cba84664f12b7931274e8ea23ffd58",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Psychology"
]
} |
81776478 | pes2o/s2orc | v3-fos-license | COMPARISON OF ROCURONIUM BROMIDE AND SUCCINYLCHOLINE CHLORIDE FOR USE DURING RAPID SEQUENCE INTUBATION IN ADULTS
BACKGROUND AND OBJECTIVE: The goal of rapid sequence intubation is to secure the patient's airway smoothly and quickly, minimizing the chances of regurgitation and aspiration of gastric contents. Traditionally succinylcholine chloride has been the neuromuscular blocking drug of choice for use in rapid sequence intubation because of its rapid onset of action and profound relaxation. Succinylcholine chloride remains unsurpassed in providing ideal intubating conditions. However the use of succinylcholine chloride is associated with many side effects like muscle pain, bradycardia, hyperkalaemia and rise in intragastric and intraocular pressure. Rocuronium bromide is the only drug currently available which has the rapidity of onset of action like succinylcholine chloride. Hence the present study was undertaken to compare rocuronium bromide with succinylcholine chloride for use during rapid sequence intubation in adult patients. METHODOLOGY: The study population consisted of 90 patients aged between 18-60 years posted for various elective surgeries requiring general anaesthesia. The study population was randomly divided into 3 groups with 30 patients in each sub group. 1. Group I: Intubated with 1 mg kg-1 of succinylcholine chloride (n=30). 2. Group II: Intubated with rocuronium bromide 0.6 mg kg-1 (n=30). 3. Group III: Intubated with rocuronium bromide 0.9 mg kg-1 (n=30). Intubating conditions were assessed at 60 seconds based on the scale adopted by Toni Magorian et al. (1993). The haemodynamic parameters in the present study were compared using p-values obtained from Student's t-test. RESULTS: It was noted that succinylcholine chloride 1 mg kg-1 body weight produced excellent intubating conditions in all patients. Rocuronium bromide 0.6 mg kg-1 body weight produced excellent intubating conditions in 53.33% of patients but produced good to excellent intubating conditions in 96.67% of patients. Rocuronium bromide 0.9 mg kg-1 body weight produced excellent intubating conditions in 96.67% of patients, which was comparable to that of succinylcholine chloride. Thus increasing the dose of rocuronium bromide increased the number of excellent intubating conditions but at the cost of increased duration of action. INTERPRETATION AND CONCLUSION: Thus, from the present study, it is clear that succinylcholine chloride is the drug of choice for rapid sequence intubation. Rocuronium bromide is a safe alternative to succinylcholine chloride in conditions where succinylcholine chloride is contraindicated and in whom there is no anticipated difficult airway.
Before the advent of muscle relaxants, inhalational agents were used to facilitate endotracheal intubation. Most of the intubations were done with the inhalational technique, which was associated with problems like laryngospasm and bronchospasm.
Further, there was a need to take the patient sufficiently deep before intubation, which led to haemodynamic disturbances. 1 The first skeletal muscle relaxant, d-tubocurarine, which was nondepolarizing in nature, was introduced in 1942 to fulfill the need for jaw relaxation. Though this drug provided excellent muscle relaxation, it had additional ganglion-blocking properties causing tachycardia and hypotension even at clinical doses. Further, it had a delayed onset at the jaw, making it unsuitable for use during rapid sequence intubation in emergency cases.
Hence a search began for a relaxant which had a rapid onset and short duration of action. 2 Succinylcholine chloride, introduced in 1951, was a synthetic depolarizing muscle relaxant. It fulfilled both of the above requirements, and soon became the drug of choice for endotracheal intubation especially in rapid sequence intubation in emergency cases.
But all did not go well for succinylcholine chloride once its adverse effects started surfacing, especially hyperkalemia, rises in intragastric, intraocular and intracranial pressures, and cardiovascular effects. Thus the quest began for a safer substitute for succinylcholine chloride.
The aim of research on neuromuscular drugs was to find a non-depolarising muscle relaxant that acts like succinylcholine chloride but without its side effects. Though many non-depolarising muscle relaxant drugs like atracurium besylate, vecuronium bromide and mivacurium chloride were introduced, none of them could challenge succinylcholine chloride in terms of its onset. The new non-depolarising muscle relaxant rocuronium bromide, introduced in 1994, became the first competitor for succinylcholine chloride. Rocuronium bromide, when given at two to three times the ED95 dose, is said to produce good to excellent intubating conditions in 60 seconds. Further, rocuronium bromide is said to be devoid of the adverse effects that are seen with succinylcholine chloride. Hence, the present study was undertaken to evaluate the intubating conditions with rocuronium bromide 0.6 mg kg-1 and 0.9 mg kg-1 body weight and to compare them with those of succinylcholine chloride 1 mg kg-1 body weight, for use during rapid sequence intubation in adult patients.
OBJECTIVES OF THE STUDY:
A. To compare the intubating conditions of rocuronium bromide 0.6 mg kg-1 and 0.9 mg kg-1 body weight with those of succinylcholine chloride 1 mg kg-1 body weight at 60 seconds.
B. To study the clinical duration of action of rocuronium bromide 0.6 mg kg-1 and 0.9 mg kg-1 body weight and succinylcholine chloride 1 mg kg-1 body weight.
C. To study the cardiovascular responses associated with the administration of rocuronium bromide and succinylcholine chloride.
D. To study the side effects associated with the use of rocuronium bromide.
METHODOLOGY:
A clinical study comparing rocuronium bromide 0.6mg kg-1 and 0.9mg kg-1 with succinylcholine chloride 1 mg kg-1 for use during rapid sequence intubation of anaesthesia.
The study population consisted of 90 adult patients of ASA grade I and II belonging to both sexes in the age group of 18 to 60 years who were posted for various elective surgeries. Informed consent was obtained from the patients before taking up for surgery. Exclusion criteria consisted of patients with hypertension, diabetes, bronchial asthma, ischaemic heart disease or anticipated difficult airway.
The study population was randomly divided into three groups with 30 patients in each group.
Group I, consisting of 30 patients, was to receive succinylcholine chloride 1 mg kg-1 body weight, with intubation attempted at 60 seconds.
Group II, consisting of 30 patients, was to receive rocuronium bromide 0.6 mg kg-1 body weight, with intubation attempted at 60 seconds.
Group III, consisting of 30 patients, was to receive rocuronium bromide 0.9 mg kg-1 body weight, with intubation attempted at 60 seconds.
A thorough pre-anaesthetic evaluation was done a day before surgery, and all the necessary investigations were done to rule out any systemic disease. Tab alprazolam 0.5 mg and tab pantoprazole 40 mg were administered to all patients on the night before surgery. Patients were kept nil per oral for a duration of 10 hours prior to surgery.
To test the efficacy of drugs for use during emergency surgeries, a technique mimicking rapid sequence induction was employed in patients posted for elective surgeries.
The baseline heart rate, oxygen saturation, electrocardiogram, and systolic, diastolic and mean arterial blood pressures were recorded.
Injection glycopyrrolate 0.2 mg and injection midazolam 1 mg were given to all patients 3 minutes prior to administering the induction agent.
All patients were preoxygenated with 100% oxygen via a face mask for 3 minutes after administering glycopyrolate and midazolam. They were induced with injection thiopentone sodium 5mg kg-1 body weight intravenously.
In all patients, cricoid pressure was applied after the administration of the induction agent, once the patient became unconscious.
In group I, succinylcholine chloride 1mg kg-1 body weight was given intravenously after the loss of eyelash reflex.
Similarly in group II and group III, rocuronium bromide 0.6mg kg-1 and 0.9mg kg-1 respectively was given intravenously after the loss of eyelash reflex. No mask ventilation was done in any patient after administration of relaxant.
In all three groups of patients, oral endotracheal intubation was attempted at 60 seconds following the administration of the muscle relaxant, and intubating conditions were graded using the score adopted by Toni Magorian et al. (1993). 3
Excellent = Jaw relaxed, vocal cords apart and immobile, no diaphragmatic movements.
Good = Jaw relaxed, vocal cords apart and immobile, some diaphragmatic movements.
Poor = Jaw relaxed, vocal cords moving, "bucking".
Inadequate = Jaw not relaxed, vocal cords closed.
All the patients were intubated with well-lubricated, appropriately sized, cuffed polyvinyl chloride endotracheal tubes; bilateral air entry was checked and the tube was firmly secured. Maintenance of anaesthesia was done with 30% oxygen and 70% nitrous oxide and controlled mandatory ventilation.
Vital parameters, including heart rate, oxygen saturation, systolic, diastolic and mean arterial blood pressures, electrocardiogram and capnography, were recorded 1, 3 and 5 minutes following intubation.
The clinical duration of action, that is, the time from administration of the relaxant to the first attempt at respiration, was noted. Subsequently, muscle relaxation was maintained with vecuronium bromide 0.04 mg kg-1 body weight till the end of surgery.
At the end of surgery, all patients were reversed with injection neostigmine 0.05 mg kg-1 body weight and injection glycopyrrolate 0.01 mg kg-1 body weight.
Other side effects, such as signs of histamine release associated with the administration of rocuronium bromide and succinylcholine chloride, were also noted.
The haemodynamic parameters in the present study were compared statistically using p-values obtained from Student's t-test.
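As an illustration of this kind of comparison, the sketch below computes the percent rise of mean heart rate above the pre-induction baseline and a paired Student's t-test in Python. The values are invented and the study's exact pairing and grouping scheme is not specified here, so only the mechanics are shown.

```python
# Hedged sketch: percent change from the pre-induction baseline and a paired
# t-test for one haemodynamic variable (heart rate) in one hypothetical group.
import numpy as np
from scipy import stats

baseline_hr = np.array([78, 82, 75, 80, 79, 84])    # pre-induction values (made up)
hr_1min = np.array([106, 110, 102, 109, 108, 115])  # 1 min after intubation (made up)

percent_rise = 100 * (hr_1min.mean() - baseline_hr.mean()) / baseline_hr.mean()
t_stat, p_value = stats.ttest_rel(baseline_hr, hr_1min)   # paired comparison
print(f"mean rise above baseline: {percent_rise:.2f}%, p = {p_value:.4f}")
```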
OBSERVATION AND RESULTS:
The age distribution of all patients of all the three groups is as shown below.
The following table shows the sex distribution in the three groups.
The following table shows the weight distribution of the three groups. As shown in Table 8, there was a significant (p < 0.05) rise in mean heart rate of 36.07%, 37.69% and 32.28% from the pre-induction value in Groups I, II and III, respectively. This increase in mean heart rate declined to 4.21%, 4.38% and 3.20% above baseline at 5 minutes following intubation.
As shown in table 9, there was a significant (p<0.05) rise in mean arterial pressure by 31.23%, 33.72%, 31.98% from pre induction value at 1 minute following intubation in Group I, Group II, Group III respectively.
DISCUSSION:
Prior to the introduction of muscle relaxants, inhalational agents were used for endotracheal intubation. Inhalational technique was associated with its own complications when intubation was attempted with inadequate depth. The complications noted were laryngospasm and bronchospasm. Further to achieve adequate intubating conditions, higher concentrations of these inhalational agents needed to be used which were associated with haemodynamic disturbances.
Succinylcholine chloride, introduced in 1951, was unparalleled in terms of its onset and duration of action. The relaxation obtained with this drug was excellent, but as the adverse effects of succinylcholine chloride, like bradycardia, nodal and junctional rhythms, and rises in intraocular and intracranial pressure, started surfacing, the quest began for better relaxants devoid of these adverse effects. Rocuronium bromide, introduced in 1994, became the first drug to challenge the onset time of succinylcholine chloride, in that it produces good to excellent intubating conditions in 60 seconds. In addition to this, rocuronium bromide is devoid of the adverse effects of succinylcholine chloride.
In view of this, the present study was undertaken to compare the intubating conditions of rocuronium bromide with those of succinylcholine chloride at 60 seconds.
Dosage Selected: The dosage of the neuromuscular blocking drug selected is usually based on the ED95 value. The dose of relaxant needed for endotracheal intubation is usually higher and is employed in multiples of the ED95 dose.
The ED95 dose of succinylcholine chloride is 0.392 mg kg-1 body weight. Three times the ED95 dose which approximates 1 mg kg-1 body weight has been employed for intubation in the present study.
Rocuronium bromide has been employed in two to three times the ED95 dose to obtain excellent intubating conditions. The ED95 of rocuronium bromide is 0.3 mg kg-1 body weight.
Hence in our study rocuronium bromide has been employed in two doses, i.e. 0.6 mg kg-1 body weight and 0.9 mg kg-1 body weight.
Intubation Time: Selecting the time for intubation can be either by neuromuscular monitoring or by clinical method.
Various authors have employed neuromuscular monitoring for assessing the time for intubation. They have defined the onset time as the time from injection of the drug to 95% twitch height depression. However, with a non-depolarizing muscle relaxant like rocuronium bromide, it has been found that the onset of paralysis at the laryngeal muscles precedes that at the adductor pollicis, and hence monitoring of the train-of-four at the adductor pollicis may not give a correct picture of intubating conditions. 4,5 Intubating conditions are usually assessed using clinical criteria such as jaw relaxation, vocal cord movements and diaphragmatic relaxation.
In the present study, clinical criteria were adopted for grading intubating conditions at 60 seconds.
Intubating Conditions:
The present study involved a comparison of succinylcholine chloride 1 mg kg-1 body weight with rocuronium bromide 0.6 mg kg-1 and 0.9 mg kg-1 body weight for rapid sequence intubation in adult patients. It was noted that succinylcholine chloride 1 mg kg-1 body weight produced excellent intubating conditions in 100% of patients. Rocuronium bromide 0.6 mg kg-1 body weight produced excellent intubating conditions in 53.33% of cases, good intubating conditions in 43.33% and poor intubating conditions in 3.34% of cases. Rocuronium bromide 0.9 mg kg-1 body weight produced excellent intubating conditions in 96.67% of cases and good intubating conditions in 3.33% of cases. The present study is comparable with the study of Naguib M. et al. 6 Thus, increasing the dose of rocuronium bromide from 0.6 mg kg-1 to 0.9 mg kg-1 body weight increased the incidence of excellent intubating conditions, but at the cost of an increased duration of action; these results are also comparable to those of Toni Magorian et al. 3 The minimum clinical duration for rocuronium bromide 0.6 mg kg-1 body weight in the present study was 22 minutes and the maximum was 32 minutes, with a mean duration of 27±2.14 minutes, which concurs with the studies of Naguib M. et al., 6 P. Schultz et al. 11 and Aparna Shukla et al. 12 Similarly, the minimum duration of action for rocuronium bromide 0.9 mg kg-1 in the present study was 40 minutes and the maximum was 52 minutes, with a mean of 45.33±3.73 minutes, which concurs with the studies of Toni Magorian et al. 3 and P. Schultz et al. 11
Cardiovascular Changes: There was a rise in mean heart rate of 37.69% and 32.28% one minute after intubation following administration of rocuronium bromide 0.6 mg kg-1 and 0.9 mg kg-1 body weight, respectively. There was a similar increase in mean arterial pressure of 33.72% and 31.98% from the pre-induction value one minute after intubation following rocuronium bromide 0.6 mg kg-1 and 0.9 mg kg-1 body weight. This was a haemodynamic response to laryngoscopy and endotracheal intubation, which subsided to near pre-induction values 5 minutes after intubation.
Similar trends were seen following the administration of succinylcholine chloride 1 mg kg-1 body weight. There was a rise in mean heart rate by 36.07% from pre induction value one minute after intubation. There was also a rise in mean arterial pressure by 31.23% from pre induction value one minute after intubation. These values returned towards pre induction values 5 minutes following intubation.
Thus, there were no haemodynamic disturbances attributable to the administration of succinylcholine chloride or rocuronium bromide; the rise in mean heart rate and blood pressure was a response to laryngoscopy and intubation.
Untoward Side Effects:
No patient in the succinylcholine chloride group had any signs of histamine release; there was no bronchospasm or rash associated with a fall in blood pressure. Similarly, no patient in the rocuronium bromide groups had any clinical evidence of histamine release (e.g., flushing, rash, bronchospasm).
CONCLUSION:
1. Succinylcholine chloride 1 mg kg-1 body weight produces excellent intubating conditions in all patients at 60 seconds, with an average clinical duration of action of 4.77±0.99 minutes.
2. Rocuronium bromide 0.6 mg kg-1 body weight produces good to excellent intubating conditions in 96.67% of patients at 60 seconds, with an average clinical duration of action of 27.4±2.14 minutes.
3. Rocuronium bromide 0.9 mg kg-1 body weight produces excellent intubating conditions in 96.67% of patients and good to excellent intubating conditions in 100% of patients at 60 seconds, with an average clinical duration of action of 45.33±3.73 minutes.
4. Increasing the dose of rocuronium bromide from 0.6 mg kg-1 to 0.9 mg kg-1 body weight increases the incidence of excellent intubating conditions, but at the cost of an increased duration of action. | 2019-03-18T14:04:37.973Z | 2015-08-01T00:00:00.000 | {
"year": 2015,
"sha1": "d8b87b5b20595a25dab182c222f240604e120923",
"oa_license": null,
"oa_url": "https://doi.org/10.18410/jebmh/2015/672",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "ab3a698f12edd97d5c99e2035580517f122c83a2",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Medicine"
]
} |
257661052 | pes2o/s2orc | v3-fos-license | Sol-Gel Films Doped with Enzymes and Banana Crude Extract as Sensing Materials for Spectrophotometric Determination
Chromogenic enzymatic reactions are very convenient for the determination of various biochemically active compounds. Sol-gel films are a promising platform for biosensor development. The creation of sol-gel films with immobilized enzymes deserves attention as an effective way to create optical biosensors. In the present work, the conditions are selected to obtain sol-gel films doped with horseradish peroxidase (HRP), mushroom tyrosinase (MT) and crude banana extract (BE), inside polystyrene spectrophotometric cuvettes. Two procedures are proposed: the use of a tetraethoxysilane-phenyltriethoxysilane (TEOS-PhTEOS) mixture as precursor, as well as the use of silicon polyethylene glycol (SPG). In both types of films, the enzymatic activity of HRP, MT, and BE is preserved. Based on the kinetics study of enzymatic reactions catalyzed by sol-gel films doped with HRP, MT, and BE, we found that encapsulation in the TEOS-PhTEOS films affects the enzymatic activity to a lesser extent compared to encapsulation in SPG films. Immobilization affects BE significantly less than MT and HRP. The Michaelis constant for BE encapsulated in TEOS-PhTEOS films differs little from the Michaelis constant for non-immobilized BE. The proposed sol-gel films allow determining hydrogen peroxide in the range of 0.2–3.5 mM (HRP-containing film in the presence of TMB), and caffeic acid in the ranges of 0.5–10.0 mM and 2.0–10.0 mM (MT- and BE-containing films, respectively). BE-containing films have been used to determine the total polyphenol content of coffee in caffeic acid equivalents; the results of the analysis are in good agreement with the results obtained using an independent method of determination. These films are highly stable and can be stored without loss of activity for 2 months at +4 °C and 2 weeks at +25 °C.
Introduction
Sol-gel materials are widely used in analytical practice. The immobilization of proteins in sol-gel matrices has been attempted in a number of works, given the good properties of the resulting materials: preservation of the biocatalytic properties of the encapsulated proteins and excellent optical properties. Since the first reported case of a successful immobilization of active alkaline phosphatase via the sol-gel method [1], sol-gel silicate has become a desired immobilization matrix for the design of active biocomposite materials. Sol-gel materials as a matrix for immobilized proteins or biomolecules can have many applications, such as stationary phase [2], drug delivery materials [3], and coatings [4,5]. Often sol-gel materials with immobilized enzymes are created for analytical enzymatic applications [1,[6][7][8][9][10][11][12][13][14][15].
Silica sol-gel films present a major advantage compared to other sol-gel materials for enzyme immobilization: the closeness of the immobilized enzymes to the solid-solution interface. This increases the accessibility of the enzymes to substrates from the aqueous phase, making silica sol-gel film a promising platform for biosensor development. Sol-gel films are particularly often employed for both optical [6][7][8][9][10][11][12], and electrochemical biosensors [6,[13][14][15]. Chromogenic enzymatic reactions are very convenient for the determination of various biochemically active compounds. In our opinion, the use of sol-gel films to create optical biosensors deserves attention as an effective way to simplify analysis for its further implementation in field conditions.
To obtain sol-gel materials doped with biomolecules, the latter are often encapsulated in matrices of Ormosils-materials obtained by the hydrolysis of tetraethoxysilane in the presence of organic silicon alkoxides, primarily 3-aminopropyltriethoxysilane [16,17]. However, the alcohol formed during the hydrolysis and condensation of the alkoxide precursors could negatively affect the entrapped enzymes' activity [17]. This is an important problem that must be solved before employing the sol-gel process as a universal method of protein or other biomolecules encapsulation. In the literature, special schemes are described for the synthesis of sol-gel materials doped with enzymes to preserve the enzymatic activity of immobilized enzymes [18][19][20][21][22][23]. The approaches described can be divided into two groups: the use of tetraethoxysilane and its derivatives as precursors while using various ways to minimize the contact of the enzyme with ethanol released upon hydrolysis of the precursors and the use of silicate and glycerates as precursors, the hydrolysis of which does not release ethanol.
The approaches in the first group include the modification of the procedure while using standard alkoxide precursors [9,10,18]. To minimize the contact of the enzyme with ethanol, a two-stage synthesis scheme is proposed: the first stage is the hydrolysis of tetraethoxysilane or its mixtures with organic silicon alkoxides and the preparation of the sol; the second stage is the addition of the enzyme to the sol-gel solution and the formation of gels. Sometimes even the removal of the alcohol by rotavaporization method is performed before the second stage [18]. The technology of "kinetic doping" is also proposed where the nascent sol-gel film is submerged into enzyme-containing buffer solution which provides alcohol dilution [9,10].
The second group implies the use of different precursors: sodium silicate [19,20] or glycerol-derived silicates [21][22][23], which provide an alcohol-free sol. Both these routes possess some limitations in their application: the glycerol-derived silicate precursors need to be synthesized and sodium silicate precursors produce high sodium concentration levels in the sol.
Using crude extracts as enzyme sources in biocomposite sol-gel materials seems a promising approach. It was shown for crude extracts with polyphenol oxidase activity that, other conditions being equal, the enzymatic activity of sol-gel materials doped with the extracts is significantly higher than the activity of sol-gel materials doped with commercial tyrosinase [11,12]. Based on our experience of studying crude extracts, we also noted that the use of plant and mushroom extracts as enzyme sources presents a number of advantages compared to purified enzymes: higher interference thresholds, better stability, and lower cost [24][25][26]. In our opinion, the study of the crude extracts' properties and the study of the possibilities of their inclusion in sol-gel films will contribute to the development of methods for the field determination of biochemically important analytes.
Historically, sol-gel films for optical applications were prepared on glass slides, which then could be put inside the cuvettes. However, a more efficient and practical way to create an optical biocomposite sensor is also described [27,28]: the sol-enzyme mixture is put directly into the dispensable polystyrene cuvettes and the film is formed on the cuvette inner side. This approach provides an easy way to fixate the film position in relation to the optical path and also to precisely measure the amount of the enzyme and sol.
The goal of this work was to develop methods for the synthesis of transparent sol-gel films doped with the most well-studied enzymes (horseradish peroxidase and mushroom tyrosinase), as well as with crude banana extract as a source of polyphenol oxidase, on the inner surface of plastic cuvettes with the use of tetraethoxysilane and organic alkoxides or silicon glycerate as precursors, to study the effect of immobilization on the activity of these enzymes, and to evaluate the analytical performance of the synthesized sol-gel films.
Synthesis of Sol-Gel Films with Immobilized Enzymes and Banana Extract
The validity of the sol-gel route presented in this work to preserve the enzyme activity during the immobilization process has been studied on two of the enzymes that are most widely used in analytical methods: horseradish peroxidase (HRP) and mushroom tyrosinase (MT), and also the widely used crude plant extract-banana extract (BE) as a source of polyphenol oxidase [23]. Most sol-gel immobilization studies use HRP [9,10,15,18,19,21]; polyphenol oxidase (tyrosinase) is studied less frequently [11][12][13][14]29]. We have optimized the sol-gel matrix using HRP, and then studied the performance of HRP, MT, and BE in the selected conditions. We have developed two approaches: using sol-gel films based on mixtures of TEOS with derivatives (i.e., Ormosils) and using glycerol precursors; then we studied the effect of immobilization on the activity of enzymes and the possibility of analytical use of the synthesized films.
Sol-Gel Films Based on Alkoxide Precursors (TEOS Films)
Sol-gel synthesis consists of successively carrying out the following stages: hydrolysis of precursors, polymerization (transformation of a sol into a gel) and, if necessary, drying of the gels under different conditions. Typically, hydrolysis is carried out in the presence of a related alcohol; when using TEOS, in the presence of ethanol. The properties of sol-gel materials depend on the nature of the precursors, the ratio of components in the hydrolyzing mixture, the nature of the gelation catalyst, and special additives to control the porosity of the materials. To obtain sol-gel materials doped with analytical reagents, we have developed a synthesis scheme [30,31], based on the hydrolysis of tetraethoxysilane in an aqueous-ethanol medium in the presence of hydrochloric acid as a catalyst and cetylpyridinium chloride as a pore former [32]. This scheme was used as the basis for the development of a method for the synthesis of sol-gel films doped with enzymes.
Studies show that when enzymes are included in Ormosils, i.e., sol-gel materials obtained from modified alkoxide precursors, their activity decreases to a lesser extent than in the case of standard TEOS materials. Most often, methyl-and amino-derivatives of tetraethoxysilane are used as precursors for the immobilization of enzymes in sol-gel materials [16], while phenyl derivatives are used much less frequently.
According to the literature, adding the enzyme solution to the already formed sol and not to the hydrolyzing mixture seems like a promising approach. This method was proposed using sodium silicate as a precursor [19]. We decided to take this approach when we used tetraethoxysilane (TEOS) and its mixtures with phenyltriethoxysilane (PhTEOS) and 3-aminopropyltriethoxysilane (AmTEOS) as sol-gel precursors. These TEOS-based sol-gel matrices were studied using horseradish peroxidase (HRP) as a model enzyme. HRP is widely used in enzyme immobilization studies, and comparing the results of the present study using HRP was likely to be easier.
In order to select the conditions for the synthesis of transparent HRP-containing solgel films on the inner surface of plastic cuvettes, the influence of the nature and ratio of precursors, concentrations of hydrochloric acid and cetylpyridinium chloride was studied. TEOS, AmTEOS, and PhTEOS were used as precursors. The mixtures of precursors with water were prepared with 3:1 ratio of precursors:water and were mixed under the influence of ultrasound with a sound energy density of 0.24-0.38 W/mL at the hydrolyzing stage. The transparent homogeneous sols were formed in 45-90 min. The resulting sols are stable and can be stored at +4 • C for a week. To obtain gels, the sols were mixed with HRP solution in a buffer (pH 6.0) in a ratio of 1:0.8. The loss of fluidity of this mixture (i.e., gel formation) occurs after 2-10 min, depending on the composition of the sol. To prepare the sol-gel films, 0.5-0.9 mL of the mixture was placed in cuvettes, distributed on one of the inner sides, and after 2-10 min, films with an approximate thickness of 1.5-2.5 mm were formed. The stability of the sol-gel films, the enzymatic activity of the immobilized enzymes, and their storage stability were evaluated to choose the synthesis conditions. When 1.0-4.5% w/w AmTEOS was added to TEOS, the films lost transparency compared to films prepared with only TEOS. For TEOS-AmTEOS films, the 10-100 fold increase in the concentration of hydrochloric acid led to the increase in transparency, but it was accompanied by significant reduction in the gelation time (less than 1 min), which made it difficult to obtain films by our method.
The use of TEOS-PhTEOS mixtures with a PhTEOS content of 1-10% w/w made it possible to obtain films that are transparent in the visible light range. Films produced at PhTEOS contents greater than 2% cracked on the next day after the preparation, but at 1-2% w/w they remained stable and did not crack. When using mixtures of TEOS-AmTEOS-PhTEOS with 1% w/w of PhTEOS and 0.4% w/w of AmTEOS, a lack of film transparency was observed, which led us to stop using AmTEOS and to concentrate on studying TEOS-PhTEOS mixtures.
In the literature, enzyme-doped sol-gel materials obtained by various methods are usually characterized by the retention of the immobilized enzyme in the matrix and its activity [9,19,21]. We investigated the effect of PhTEOS content on enzyme retention using HRP as a model enzyme. Under the described above conditions, HRP-doped films were synthesized with different PhTEOS content in the mixture of precursors (0, 1, and 2%)-HRP-PhTEOS0, HRP-PhTEOS1, and HRP-PhTEOS2 films. Enzyme retention was studied by determining the enzyme activity in buffer solutions obtained by washing the sol-gel films. The results are shown in Figure 1. The introduction of PhTEOS into the precursor mixture increased the retention of peroxidase by the sol-gel matrix. Thus, it can be concluded that peroxidase retention is significantly improved when PhTEOS is introduced into the matrix, and the enzyme is washed out slightly less from films containing 1% PhTEOS than from the films containing 2% PhTEOS. The obtained HRP retention values in the films after three washes are shown in Table 1. Comparison with literature data shows that the enzyme washing out values in our experiments are somewhat greater than for the previously proposed methods [19,21]. However, our film preparation method is simple, involves the use of commercially available precursors, and preserves significant activity of the immobilized enzyme. thickness of 1.5-2.5 mm were formed. The stability of the sol-gel films, the enzymat tivity of the immobilized enzymes, and their storage stability were evaluated to ch the synthesis conditions. When 1.0-4.5% w/w AmTEOS was added to TEOS, the films lost transpar compared to films prepared with only TEOS. For TEOS-AmTEOS films, the 10-100 increase in the concentration of hydrochloric acid led to the increase in transparency it was accompanied by significant reduction in the gelation time (less than 1 min), w made it difficult to obtain films by our method.
It is widely known that the activity of an entrapped enzyme is usually only a fraction of its activity in free solution [9]. The relative activity of the immobilized enzyme was calculated as a percentage of the activity of a similar amount of the enzyme in solution. Comparison of the initial rates allowed us to calculate the relative activity of the film-loaded enzymes (Table 2). The relative activities were in the range of 6.6-7.4%, which is similar to the results observed for other sol-gel matrices described in the literature [9,19]. The highest relative activity was observed for the 1% PhTEOS sol-gel film. For further experiments, we chose the film obtained by adding 1% PhTEOS to the mixture of precursors (HRP-PhTEOS1). Table 2. Relative activity of the immobilized enzymes (% of the activity of the same amount of native enzyme) for the different sol-gel films.
Enzyme relative activity, % (n = 3, P = 0.95): HRP, sodium silicate precursor (ion-exchange elimination of sodium at the sol formation stage, enzyme added to the sol): 7.2 [19]; HRP, TEOS precursor (enzyme sorption on the nascent sol-gel film on the glass slide): 11.7 ± 0.5 [9]; HRP, MT, and BE, TEOS and PhTEOS precursors (enzyme added to the sol): this work.
Three types of films were prepared for the following studies using this sol-gel matrix: HRP-PhTEOS1 with immobilized HRP, MT-PhTEOS1 with immobilized MT, and BE-PhTEOS1 with immobilized BE. The relative activities were also calculated for MT and BE (Table 2), and they were similar to the HRP activities obtained in our study and other works [9,19]. No data on the relative activity of immobilized tyrosinase or crude extracts were found in the literature.
HRP-PhTEOS1, MT-PhTEOS1, and BE-PhTEOS1 films were used to study the kinetics of enzymatic reactions.
Sol-Gel Films Based on Silicon Polyethylene Glycol (SPG Films)
Another approach for creating biocomposite sol-gel films is the employment of silicon polyethylene glycol (SPG). SPG rapidly hydrolyzes and forms gels in aqueous media without the need for any catalyst, such as hydrochloric acid, to form silica hydrogels, which are transparent and physically stable [21][22][23]. This approach has been tested with many biological molecules, such as peroxidase, catalase, various oxidases, etc. [21]. We decided to synthesize sol-gel films based on this glycerol-containing precursor and compare them to our TEOS-PhTEOS films.
Unlike in the preparation of alkoxide-based films, no ethanol is generated when using SPG, so SPG films are easier to prepare. In order to obtain SPG films, the precursor was mixed with a solution of the enzyme/extract in a pH 6.0 buffer solution in a ratio of 1:2. To form sol-gel films, 0.6 mL of the mixture was placed in cuvettes, distributed on one of the inner sides, and after 90 min, films with an approximate thickness of 1.5-2.5 mm were formed. Gelation occurred in the absence of a catalyst, and film formation took longer than in the case of TEOS-PhTEOS films.
Three types of films were prepared for the following studies using this sol-gel matrix: HRP-SPG with immobilized HRP, MT-SPG with immobilized MT, BE-SPG with immobilized BE. The relative activity of enzymes in these films is given in Table 2, and it is comparable to TEOS-PhTEOS based films. HRP-SPG, MT-SPG, and BE-SPG films were also used to study the kinetics of enzymatic reactions.
Study of the Kinetics of Enzymatic Reactions in the Presence of Sol-Gel Films Doped with HRP, MT, and BE
The effect of the sol-gel process on the enzyme activity was investigated by comparing the kinetic parameters (Michaelis constants) of the reactions catalyzed by the native and immobilized enzymes. The initial rates of the HRP-, MT-, and BE-catalyzed reactions were measured as the absorbance increase over time. The Michaelis constants were obtained by fitting the data to the Michaelis-Menten equation using Lineweaver-Burk plots. The Michaelis constants were calculated for hydrogen peroxide in the presence of a constant TMB concentration (0.009%) in the case of HRP and for caffeic acid in the cases of MT and BE.
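As an illustrative sketch of this analysis (not the authors' own script), the Lineweaver-Burk linearization 1/v = (K_M/V_max)·(1/[S]) + 1/V_max can be fitted with a simple linear regression; the substrate concentrations and rates below are hypothetical placeholders.

```python
import numpy as np

def michaelis_from_lineweaver_burk(substrate_mM, rate_per_min):
    """Estimate K_M and V_max from a Lineweaver-Burk (double-reciprocal) fit.

    1/v = (K_M / V_max) * (1/[S]) + 1/V_max, so a linear fit of 1/v vs 1/[S]
    gives slope = K_M / V_max and intercept = 1 / V_max.
    """
    inv_s = 1.0 / np.asarray(substrate_mM, dtype=float)
    inv_v = 1.0 / np.asarray(rate_per_min, dtype=float)
    slope, intercept = np.polyfit(inv_s, inv_v, 1)
    v_max = 1.0 / intercept
    k_m = slope * v_max
    return k_m, v_max

# Hypothetical example data (mM and min^-1), not the values measured in this work
s = [0.5, 1.0, 2.0, 4.0, 8.0]
v = [0.010, 0.017, 0.026, 0.036, 0.044]
k_m, v_max = michaelis_from_lineweaver_burk(s, v)
print(f"K_M = {k_m:.2f} mM, V_max = {v_max:.3f} min^-1")
```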
The enzymes included in the sol-gel films obtained in this work retain their activity, and the kinetics of the reactions catalyzed can be fitted to the Michaelis-Menten equation. Figure 2 shows, for example, kinetic curves for different concentrations of hydrogen peroxide in the presence of HRP-PhTEOS1 and HRP-SPG films.
For the evaluation of the immobilized enzymes' properties, we studied their interaction with substrates (hydrogen peroxide in the case of HRP and caffeic acid in the case of MT and BE) and calculated the kinetic parameters of the enzymatic reaction (Michaelis constants). Figure 3 shows the dependence of the reaction rate on the concentration of hydrogen peroxide in the presence of HRP-PhTEOS1 and HRP-SPG films. When Lineweaver-Burk coordinates are used, these dependencies become linear and allow the calculation of the Michaelis constants (Table 3). Both in our experiments and in the literature data [11,18,19], the Michaelis constant values (K_M) of the immobilized enzymes were higher than those of the native enzymes, indicating the presence of partitioning and diffusional effects in the pores of the sol-gel matrix. Table 3 shows that, when using SPG-based films, an even greater increase in K_M values is observed, meaning that such films are better suited to the determination of high concentrations of substrates. This can be explained by the greater steric hindrance caused by the bulkier sol-gel precursor molecules. The obtained data indicate that, for all the studied sol-gel films, the inclusion of HRP, MT, and BE does not hinder their enzymatic activity and allows their use for enzymatic reactions. TEOS-PhTEOS-based films seem to be more promising for the development of methods for determining low concentrations of analytes (substrates).
We studied crude banana extract in the present immobilization study because we have previously established that crude plant extracts have higher interference thresholds than purified enzymes [24]. There are data indicating that crude extracts are more robust and endure sol-gel immobilization better: in some cases, the extract can withstand an immobilization procedure that inhibits the activity of the corresponding purified enzyme [11]. In the present study, we observed that for crude banana extract the Michaelis constant remained almost the same after immobilization in the PhTEOS1 film (2.4 mM in solution vs. 2.8 mM in film). A similar effect was described earlier for desert truffle tyrosinase extract [11]: the Michaelis constant even slightly decreased upon immobilization (0.5 mM in solution vs. 0.2 mM in film). This can possibly be explained by the presence of other plant cell fragments in the crude extracts, which create a better environment for the enzymes inside the sol-gel matrices.
Based on the enzyme kinetics study of the immobilized enzymes we have chosen HRP-PhTEOS1, MT-PhTEOS1, and BE-PhTEOS1 films for the analytical application.
Analytical Application of HRP-PhTEOS1, MT-PhTEOS1, and BE-PhTEOS1 Films
We studied the possibility of the analytical use of the proposed sol-gel films doped with enzymes and banana extract. Hydrogen peroxide was used as the analyte for the HRP-PhTEOS1 film in the presence of TMB, and caffeic acid was used for the MT-PhTEOS1 and BE-PhTEOS1 films. The immobilized enzymes catalyze the corresponding chromogenic reactions: hydrogen peroxide reduction with TMB oxidation, and caffeic acid oxidation by air oxygen. The reaction rates were used as the analytical signal; the dependence of the absorbance on time was studied for different concentrations of the analytes. Analytical ranges (analyte concentration ranges with a linear dependence of the reaction rate on analyte concentration) are given in Table 4. Comparison of the detection limits (LOD) for the sol-gel film encapsulated enzymes and banana extract with the LODs for the non-immobilized enzymes and extract demonstrates only a 2-3-fold loss of sensitivity (Table 4). This effect is likely attributable to steric hindrances arising during immobilization. The simplicity of determinations using sol-gel films doped with enzymes and banana extract should be noted: one simply needs to place 3.0 mL of a sample in a cuvette containing a sol-gel film (in the case of determining hydrogen peroxide, 0.4 mL of 0.08% TMB solution should also be added) and monitor the change in absorbance at 650 nm for the determination of hydrogen peroxide or at 400 nm for the determination of caffeic acid. Such measurements can be carried out using various portable photometers, which opens the prospect of mass analyses for the determination of biochemically active analytes in the field. Table 4. Analytical parameters of the procedures using HRP-PhTEOS1, MT-PhTEOS1, and BE-PhTEOS1 sol-gel films. In this paper, to demonstrate the analytical capabilities of sol-gel films, we present the results of the total polyphenol content (TPC) determination in coffee (in caffeic acid equivalents) using an immobilized banana extract (BE-PhTEOS1 film). Total polyphenol content determination is often used in food quality control [24].
The recovery study of the caffeic acid determination using BE-PhTEOS1 film shows that the RSD values are comparable to those for banana extract in solution and equal 7-10% (n = 3).
The results of TPC determination in coffee compared with the results of independent methods are given in Table 5. Table 5. Results of TPC determination in coffee using BE-PhTEOS1 film and by independent methods (n = 3, P = 0.95).
Found, mg/g: BE-PhTEOS1 film, method of standard addition: 114 ± 18; BE-PhTEOS1 film, using the calibration curve: 118 ± 36; BE: 110 ± 30; Folin's reagent: 120 ± 10.
The good agreement between the different procedures indicates the good accuracy of the TPC determination with the BE-PhTEOS1 film. No significant difference was found between the four values using the Student test (p > 0.28 for all the pairs).
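For readers who want to reproduce such a pairwise comparison from the summary values alone, the following sketch treats the reported ± values as 95% confidence half-widths (per the table notation n = 3, P = 0.95), recovers the standard deviations, and applies Welch's t-test; it is an illustration, not necessarily the authors' exact calculation.

```python
import math
from scipy import stats

def welch_t_from_summary(mean1, ci1, mean2, ci2, n=3, conf=0.95):
    """Two-sample Welch t-test from means and CI half-widths.

    The half-width of a confidence interval is t_crit * SD / sqrt(n), so the
    SD can be recovered before computing the usual Welch statistic.
    """
    t_crit = stats.t.ppf(0.5 + conf / 2.0, df=n - 1)
    sd1 = ci1 * math.sqrt(n) / t_crit
    sd2 = ci2 * math.sqrt(n) / t_crit
    se = math.sqrt(sd1**2 / n + sd2**2 / n)
    t_stat = (mean1 - mean2) / se
    df = (sd1**2 / n + sd2**2 / n) ** 2 / (
        (sd1**2 / n) ** 2 / (n - 1) + (sd2**2 / n) ** 2 / (n - 1)
    )
    p = 2.0 * stats.t.sf(abs(t_stat), df)
    return t_stat, p

# Example: BE-PhTEOS1 (standard addition) vs. the Folin's reagent result
t_stat, p = welch_t_from_summary(114, 18, 120, 10)
print(f"t = {t_stat:.2f}, p = {p:.2f}")
```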
The stability and lifetime of the immobilized BE were investigated by measuring the sol-gel film activity using 2.0 mM caffeic acid solution. 95% of the immobilized BE activity was retained after 2 months of storage at +4 °C (BE solution lost its activity after 4 days), and 90% of the immobilized BE activity was retained after 2 weeks of storage at +25 °C.
These novel enzyme-doped silica matrices provide promising platforms for the development of various on-site analytical procedures. In the present work, we proposed a procedure using crude plant extract immobilized in a TEOS-PhTEOS sol-gel film (BE-PhTEOS1 film) for the spectrophotometric determination of total polyphenol content using a standard curve of caffeic acid. The detection limit for caffeic acid equals 0.7 mM, while the LOD values of other enzymatic methods of TPC determination lie in the 0.01-0.5 mM range [24]. However, the sol-gel films are ready-to-use and offer the possibility of storage at room temperature. Using crude extract as the enzyme source in these sol-gel materials allows low-cost analysis, which makes the process suitable for wide screening tests.
Conclusions
We have chosen the conditions for the synthesis of sol-gel films doped with HRP, MT, and BE, using a TEOS-PhTEOS mixture or SPG as precursors, on the inner surface of polystyrene cuvettes. When using TEOS and PhTEOS precursors, the film preparation consists of two stages: the preparation of the sol under the influence of ultrasound for 90 min, which leads to the evaporation of a significant part of the formed alcohol, and the subsequent mixing of the sol with an enzyme solution. In the case of SPG films, the enzyme solution is mixed directly with the precursor. Based on the study of the activity of the immobilized enzymes and the immobilized extract, we have concluded that for both types of films the enzymatic activity is preserved, and the kinetics of the catalyzed reactions can be described by the Michaelis-Menten equation. The relative activity of the immobilized enzymes is comparable for both types of films and is about 10% of the activity of the non-immobilized enzyme. Thus, the preservation of enzyme activity in the proposed procedures is comparable to that in the procedures described in the literature, which are also sometimes significantly more complicated to perform.
When HRP and MT are included in alkoxide-based films, the Michaelis constants increase 3-4-fold, and in SPG-based films, 10-20-fold. Compared to these purified enzymes, the crude banana extract withstands the effect of immobilization better: for BE, the Michaelis constant is almost unchanged in the alkoxide-based films and increases only 3-fold in SPG-based films.
The analytical capabilities of sol-gel films doped with enzymes and banana extract are demonstrated: the analytical range for the hydrogen peroxide determination is 0.2-3.5 mM using the HRP-PhTEOS1 film in the presence of TMB, and the analytical ranges for caffeic acid determination are 0.5-10.0 mM and 2.0-10.0 mM using the MT-PhTEOS1 and BE-PhTEOS1 films, respectively. The sensitivity of the determination is decreased only 2-3-fold compared to the non-immobilized enzymes, while the use of disposable cuvettes with a sol-gel film on the inner surface greatly simplifies the determination procedure and makes it possible to carry out the determination in field conditions. BE-PhTEOS1 films have been used to determine the total polyphenol content of coffee in caffeic acid equivalents. The lifetime of BE-PhTEOS1 is 2 months at +4 °C storage and 2 weeks at +25 °C storage, which is a significant improvement in shelf life compared to the non-immobilized enzymes and extracts.
Banana extract (BE) was prepared similarly to [24]: 100.0 g of homogenized banana pulp tissue was stirred in 200.0 mL of phosphate buffer (pH 6.0) at 0 °C for 30 min and then filtered twice through a paper filter. The protein content of the banana extract was determined by the Biuret method; the total protein content equaled 3.8 mg/mL for the banana pulp crude extract. The activity of the crude banana extract used in this work was determined by comparing the rate of catechol oxidation in the presence of the crude extract and of the commercial mushroom tyrosinase. The crude banana extract activity was found to be 292 ± 6 U/mL (n = 3, P = 0.95).
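A minimal sketch of this rate-comparison assignment of activity is given below; it assumes simple proportionality between the initial catechol-oxidation rate and the amount of tyrosinase-like activity in the assay, and all numerical values are hypothetical.

```python
def extract_activity_U_per_mL(rate_extract, rate_standard,
                              standard_activity_U_per_mL,
                              v_extract_mL, v_standard_mL):
    """Assign an activity to a crude extract by rate comparison.

    Assumes the initial catechol-oxidation rate is proportional to the
    amount of tyrosinase-like activity added to the assay.
    """
    # Activity units added per assay for the commercial tyrosinase standard
    units_standard = standard_activity_U_per_mL * v_standard_mL
    # Units of activity in the extract aliquot, scaled by the rate ratio
    units_extract = units_standard * (rate_extract / rate_standard)
    return units_extract / v_extract_mL

# Hypothetical rates (abs/min) and aliquot volumes; the 292 U/mL value in the
# text comes from the authors' own measurements, not from these numbers.
print(extract_activity_U_per_mL(0.12, 0.10, 250, 0.1, 0.1))
```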
Polystyrene cuvettes (10 × 10 × 45 mm) with caps were purchased from Sarstedt (Numbrecht, Germany). In vials, a certain volume of AmTEOS or PhTEOS was added to a certain volume of TEOS. An aqueous solution of cetylpyridinium chloride and hydrochloric acid as a catalyst was added to the resulting mixture. The silane mixture:water ratio was 3:1. The mixture was stirred under the influence of ultrasound with a sound energy density of 0.28-0.34 W/mL for 90 min. In a plastic cuvette, 0.3 mL of the sol was mixed with 0.3 mL of the enzyme/crude extract solution in buffer. The cuvettes were capped, shaken, and then placed on their side. After 2-10 min, a transparent sol-gel film formed on the inner wall surface of the cuvette. The cuvettes with films were stored at +4 °C.
Synthesis of HRP-PhTEOS1, MT-PhTEOS1, BE-PhTEOS1 Films
1.5 mL of an aqueous solution containing 0.5 mM cetylpyridinium chloride and 1.6 mM hydrochloric acid was added to 4.5 mL of TEOS and 0.05 mL of PhTEOS. The mixture was stirred under the influence of ultrasound with a sound energy density of 0.3 W/mL for 90 min. In a plastic cuvette, 0.3 mL of the sol was mixed with 0.3 mL of HRP solution in buffer (pH 6.0) to obtain the HRP-PhTEOS1 film, with 0.3 mL of MT solution in buffer (pH 6.0) to obtain the MT-PhTEOS1 film, or with 0.3 mL of BE to obtain the BE-PhTEOS1 film. The cuvettes were capped, shaken, and then placed on their side. After 2-10 min, a transparent sol-gel film formed on the inner wall surface of the cuvette. The cuvettes with films were stored at +4 °C.
Sol-Gel Films Based on SPG
In a cuvette, 0.2 mL of SPG was mixed with 0.4 mL of HRP solution in buffer (pH 6.0) to obtain an HRP-SPG film, with 0.4 mL of MT solution in buffer (pH 6.0) to obtain a MT-SPG film, or with 0.4 mL of BE to obtain a BE-SPG film. The cuvettes were capped, shaken, and then placed on their side. After 60-90 min, a transparent sol-gel film formed on the inner wall surface of the cuvette. The cuvettes with films were stored at +4 °C.
To study the activity of HRP immobilized in sol-gel films, 3.0 mL of a hydrogen peroxide solution of various concentrations and 0.4 mL of a 0.08% TMB solution were added to the cuvette with the film. Absorbance was measured at 650 nm for 10 min every 10 s. Enzyme activity was determined as the initial reaction rate. The relative activity was determined from the dependence of the HRP activity in solution on the HRP amount. This dependence was obtained by the following procedure: 3.0 mL of 8.0 mM hydrogen peroxide solution was mixed with 0.4 mL of 0.08% TMB solution and 0.4 mL of HRP solution with different amounts of enzyme, and the absorbance was measured at 650 nm.
To study the activity of MT and BE immobilized in sol-gel films, 3.0 mL of caffeic acid solution of various concentrations was added to the cuvette with the film, and the absorbance was measured at 400 nm for 10-15 min every 10 s. Enzyme activity was determined as the initial reaction rate. The relative activity was determined from the dependence of the MT/BE activity in solution on the MT/BE amount. This dependence was obtained by the following procedure: 1.0 mL of MT/BE solution with different amounts of enzyme was added to 2.0 mL of 5.0 mM caffeic acid solution, and the absorbance was measured at 400 nm.
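The calculation implied by this procedure, converting the film's initial rate into an equivalent amount of free enzyme via the solution calibration and dividing by the amount actually loaded, can be sketched as follows; all numbers are hypothetical, and a linear calibration through the origin is assumed.

```python
import numpy as np

def relative_activity_percent(film_rate, loaded_units, cal_units, cal_rates):
    """Relative activity of an immobilized enzyme (% of free enzyme).

    cal_units/cal_rates: calibration of initial rate vs amount of free enzyme
    in solution (assumed linear through the origin over the range used). The
    film's measured rate is converted into an "apparent" amount of free
    enzyme, which is then divided by the amount actually loaded in the film.
    """
    cal_units = np.asarray(cal_units, dtype=float)
    cal_rates = np.asarray(cal_rates, dtype=float)
    # Least-squares slope of a line through the origin: rate = k * units
    k = np.sum(cal_units * cal_rates) / np.sum(cal_units**2)
    apparent_units = film_rate / k
    return 100.0 * apparent_units / loaded_units

# Hypothetical calibration (enzyme units vs abs/min) and film measurement
units = [0.5, 1.0, 2.0, 4.0]
rates = [0.020, 0.041, 0.079, 0.162]
print(f"{relative_activity_percent(0.012, 4.0, units, rates):.1f} %")
```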
HRP Retention on PhTEOS0, PhTEOS1, PhTEOS2 Films
To study the retention of peroxidase in films of various compositions (TEOS (PhTEOS0), TEOS + 1% PhTEOS (PhTEOS1), and TEOS + 2% PhTEOS (PhTEOS2)), 2.0 mL of a buffer solution (pH 6.0) was added to the cuvettes with films and left for 30 min. After that, the buffer solution was decanted, and its enzyme activity was determined according to the procedure described above.
Sol-Gel Films Stability Studies
The films doped with enzymes and banana extract were stored in closed cuvettes at +25 °C and at +4 °C; their stability was checked by measuring the activity according to the procedure described above.
Study of the Kinetics of Enzymatic Reactions in the Presence of Immobilized HRP, MT, and BE
To study the activity of HRP immobilized in sol-gel films, 3.0 mL of a hydrogen peroxide solution of various concentrations and 0.4 mL of a 0.08% TMB solution were added to the cuvette. Absorbance was measured at 650 nm for 10 min every 10 s. To calculate the Michaelis constant, the dependence of the reaction rate (min⁻¹) on the concentration of hydrogen peroxide was plotted in Lineweaver-Burk coordinates.
To study the activity of MT and BE immobilized in sol-gel films, 3.0 mL of caffeic acid solution of various concentrations was added to the cuvette, and the absorbance was measured at 400 nm for 10-15 min every 10 s. To calculate the Michaelis constant, the dependence of the reaction rate (min⁻¹) on the concentration of caffeic acid was plotted in Lineweaver-Burk coordinates.
Calibration Curves Using HRP-PhTEOS1, MT-PhTEOS1, BE-PhTEOS1 Films
To obtain a calibration curve for hydrogen peroxide, 3.0 mL of a hydrogen peroxide solution of various concentrations and 0.4 mL of a 0.08% TMB solution were added to a cuvette with HRP-PhTEOS1 film. The difference in absorbance at 650 nm, measured after 1 and 2 min from the reaction start, was used as analytical signal.
To obtain a calibration curve for caffeic acid, 3.0 mL of caffeic acid solution of various concentrations was added to a cuvette with MT-PhTEOS1 or BE-PhTEOS1 films. The reaction rate, i.e., the rate of increase in absorbance at 400 nm, was used as analytical signal.
The limit of detection (LOD) was calculated as 3 standard deviations of the blank absorbance (n = 3) divided by the slope of the calibration curve. The limit of quantitation (LOQ) was calculated as 3·LOD.
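A minimal sketch of these two formulas, with hypothetical blank absorbances and calibration slope, is:

```python
import statistics

def lod_loq(blank_signals, calibration_slope):
    """LOD = 3 * SD(blank) / slope; LOQ = 3 * LOD (as defined in the text)."""
    sd_blank = statistics.stdev(blank_signals)
    lod = 3.0 * sd_blank / calibration_slope
    return lod, 3.0 * lod

# Hypothetical blank absorbances (n = 3) and calibration slope (abs per mM)
blank = [0.012, 0.015, 0.011]
slope = 0.009
lod, loq = lod_loq(blank, slope)
print(f"LOD = {lod:.2f} mM, LOQ = {loq:.2f} mM")
```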
Total Polyphenol Content Determination
1.0 g of coffee sample was mixed with 100.0 mL of boiling water and filtered after 15 min. After cooling to room temperature, 3.0 mL of the sample solution was added to the cuvette with the BE-PhTEOS1 film, and the reaction rate was used as the analytical signal. The total polyphenol content (TPC) in caffeic acid equivalents was determined in the treated sample using the standard addition method and using the calibration curve for caffeic acid in the range of 2.0-10.0 mM.
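One way the standard addition evaluation could be implemented is sketched below; the added concentrations and signals are hypothetical, dilution by the additions is ignored for simplicity, and the conversion to mg/g assumes the 1.0 g per 100.0 mL extraction described above.

```python
import numpy as np

CAFFEIC_ACID_MW = 180.16  # g/mol

def tpc_by_standard_addition(added_mM, signals, sample_mass_g=1.0,
                             extract_volume_mL=100.0):
    """TPC (mg caffeic acid equivalents per g sample) by standard addition.

    A line is fitted to signal vs added caffeic acid concentration; the
    magnitude of the x-intercept is the analyte concentration in the cuvette
    (volume changes from the additions are neglected in this sketch).
    """
    slope, intercept = np.polyfit(np.asarray(added_mM, float),
                                  np.asarray(signals, float), 1)
    conc_mM = intercept / slope          # |x-intercept| of the fitted line
    conc_mg_per_mL = conc_mM * CAFFEIC_ACID_MW / 1000.0
    return conc_mg_per_mL * extract_volume_mL / sample_mass_g

# Hypothetical additions (mM) and reaction-rate signals
added = [0.0, 2.0, 4.0, 6.0]
signal = [0.031, 0.041, 0.051, 0.061]
print(f"TPC = {tpc_by_standard_addition(added, signal):.0f} mg/g")
```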
The procedure for TPC determination with Folin reagent was carried out similarly to [24].
Instrumentation
Sols were prepared under ultrasound irradiation using the UZH-02 ultrasound equipment (SonoTech, Russia). The sound energy density (W/mL) was defined as the ratio of the power absorbed in the reactor to the volume of liquid in the reactor. To determine the power, the time required to heat a known mass of water to a given temperature was measured.
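A minimal sketch of this calorimetric estimate, with hypothetical water mass, temperature rise, heating time, and reactor liquid volume, is:

```python
WATER_HEAT_CAPACITY = 4.184  # J/(g*K)

def sound_energy_density(water_mass_g, delta_T_K, heating_time_s,
                         liquid_volume_mL):
    """Acoustic power density (W/mL) from a calorimetric power estimate.

    The power absorbed in the reactor is estimated as the heat taken up by a
    known mass of water over the measured heating time; dividing by the
    volume of liquid in the reactor gives the energy density used here.
    """
    power_W = water_mass_g * WATER_HEAT_CAPACITY * delta_T_K / heating_time_s
    return power_W / liquid_volume_mL

# Hypothetical calorimetric run: 100 g of water heated by 5 K in 1200 s,
# with 6 mL of liquid in the reactor during sol preparation
print(f"{sound_energy_density(100.0, 5.0, 1200.0, 6.0):.2f} W/mL")
```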
Spectra of colored products of enzymatic oxidation of phenolic compounds were recorded with SPECTROstar Nano spectrophotometer (BMG Labtech, Ortenberg, Germany). Spectra were analyzed with MARS software (BMG Labtech, Ortenberg, Germany) and statistical analysis was carried out using MS Excel. | 2023-03-22T15:08:18.819Z | 2023-03-01T00:00:00.000 | {
"year": 2023,
"sha1": "0d07b6debb50f11115dba119e5f6385a9c8ef712",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2310-2861/9/3/240/pdf?version=1679227165",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "88ba5aa58cad3006cffd844f33d350872e67b77a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
251946352 | pes2o/s2orc | v3-fos-license | Cryogenic 3D Printing of w/o Pickering Emulsions Containing Bifunctional Drugs for Producing Hierarchically Porous Bone Tissue Engineering Scaffolds with Antibacterial Capability
How to fabricate bone tissue engineering scaffolds with excellent antibacterial and bone regeneration ability has attracted increasing attention. Herein, we produced a hierarchically porous β-tricalcium phosphate (β-TCP)/poly(lactic-co-glycolic acid)-polycaprolactone composite bone tissue engineering scaffold containing tetracycline hydrochloride (TCH) through micro-extrusion-based cryogenic 3D printing of Pickering emulsion inks, in which hydrophobic silica (h-SiO2) nanoparticles were used as emulsifiers to stabilize the composite Pickering emulsion inks. Hierarchically porous scaffolds with desirable antibacterial properties and bone-forming ability were obtained. Grid scaffolds with a macroscopic pore size of 250.03 ± 75.88 μm and a large number of secondary micropores with a diameter of 24.70 ± 15.56 μm can be fabricated through cryogenic 3D printing followed by freeze-drying treatment, whereas the grid structure of scaffolds printed or dried at room temperature was discontinuous, and fewer micropores could be observed on the strut surface. Moreover, the incorporation of β-TCP into the scaffolds changed the shape and density of the micropores but endowed the scaffold with better osteoconductivity. Scaffolds loaded with TCH had excellent antibacterial properties and could effectively promote the adhesion, spreading, proliferation, and osteogenic differentiation of rat bone marrow-derived mesenchymal stem cells. The scaffolds loaded with TCH could realize the strategy of "kill bacteria first, then induce osteogenesis". Such hierarchically porous scaffolds with abundant micropores, excellent antibacterial property, and improved bone-forming ability display great prospects in treating bone defects with infection.
Introduction
Bone tissue engineering, which combines cells and growth factors with a highly porous biomimetic bone tissue engineering scaffold, has been increasingly used to induce bone regeneration [1]. Since bacterial infections can easily occur on artificial bone implants, providing bone tissue engineering scaffolds with excellent antibacterial properties to prevent post-implantation infection is of great importance [2]. A desirable bone tissue engineering scaffold not only provides temporary mechanical support during the tissue regeneration process but also provides cells with a suitable microenvironment for anchoring, adhesion, proliferation, and differentiation [3]. Therefore, it is particularly crucial to endow the scaffold with appropriate shape and structure properties to elicit favorable biological responses. For bone tissue engineering scaffolds, a hierarchical porous structure is beneficial to cell migration, adhesion, and nutrient transfer/metabolic waste discharge [4]. Among the various strategies that have been used to fabricate porous scaffolds, the casting of Pickering emulsions has shown great potential in making scaffolds with very high porosity. Pickering emulsions refer to the use of solid particles as emulsion stabilizers that aggregate at the interface of two immiscible liquids (i.e., oil phase and water phase) to stabilize droplets and prevent their coalescence [5,6]. In a typical w/o Pickering emulsion, both the discontinuous water droplets and the continuous organic solution occupy a very high portion, while the polymer matrix and solid particulate emulsifier only hold a small portion. Once the organic solvents (i.e., oil phase) and the water (i.e., aqueous phase), which normally exceed 75% of the total emulsion volume, are removed to obtain scaffolds, an interconnected porous structure with very high porosity can be acquired [6]. However, the simple casting of w/o Pickering emulsions, followed by solvent evaporation, can only produce microporous scaffolds with a pore size below 100 µm, making it difficult to enable cell infiltration, which requires a much larger pore size [7][8][9][10]. Three-dimensional printing is particularly suitable for building tissue engineering scaffolds with personalized shapes and tailored porous structures to create a biomimetic structural environment that can facilitate cell infiltration, enhance vascularization, and promote tissue formation [11,12]. By using traditional 3D printing techniques such as fused deposition modeling (FDM), selective laser sintering (SLS), and digital light projection (DLP), polymeric scaffolds with a macroscopic pore size of 200-600 µm can be easily produced. Such a pore size could facilitate cell crawling, nutrient transfer, metabolite clearance, and neovascularization [13][14][15]. Nevertheless, these scaffolds lack secondary micropores (pore size: 1-100 µm) on the strut surface and thus provide insufficient microtopographic cues, which are important for fast cell adhesion and spreading. Three-dimensional printing of w/o polymeric Pickering emulsions can be employed to fabricate tissue engineering scaffolds with both macroscopic grid structures (pore size: hundreds of microns) and secondary micropores (pore size: <100 µm) on the strut surface. Yang et al. [7] formulated a Pickering emulsion consisting of water and a PCL-PLLA/DCM solution by using hydrophobic silica nanoparticles (h-SiO2) as emulsifiers. The Pickering emulsion was used as a printing ink to fabricate porous scaffolds via micro-extrusion-based 3D printing.
Compared with other 3D printed bone tissue engineering scaffolds [16,17], scaffolds 3D printed from Pickering emulsions exhibited a hierarchical porous structure with very high porosity. However, such scaffolds printed from polyester-based Pickering emulsion inks, without the delivery of any biologically active agent such as bioactive ceramics or osteogenic drugs, lack a bone-forming ability. Given that calcium and phosphate ions generated during β-TCP degradation can help to promote the mineralization of bone marrow mesenchymal stem cells and osteoblasts [13], 3D printing of Pickering emulsion inks containing a certain amount of β-TCP nanoparticles would endow the scaffolds with better osteoconductivity.
Towards the antibacterial effect, tetracycline hydrochloride (TCH), a broad-spectrum antibiotic with a good bactericidal effect achieved by preventing the growth of bacterial peptide chains and protein synthesis, has gained increasing attention. TCH inhibits bacterial growth at low concentrations and kills bacteria at high concentrations [18]. TCH has also been reported to affect bone metabolism by affecting the function of osteoclasts. In addition, TCH was found to promote the activity and proliferation of osteoblasts and rat bone marrow-derived mesenchymal stem cells (rBMSCs) and to enhance the expression of osteogenic markers such as osteocalcin and type I collagen at low concentrations [19]. Therefore, a certain amount of TCH can be loaded into the porous scaffold to treat bone defects with infection through a "kill bacteria first, then promote osteogenesis" strategy, in which the burst TCH release can prevent the adhesion of bacteria on the scaffold surface and kill the peripheral bacteria, while a slow but steady TCH release at a low concentration can promote the proliferation and osteogenic differentiation of rBMSCs in a long-term manner.
In this study, micro-extrusion-based cryogenic 3D printing was employed to fabricate TCH-loaded β-tricalcium phosphate/poly(lactic-co-glycolic acid)-poly(caprolactone) (β-TCP/PLGA-PCL) antibacterial bone tissue engineering scaffolds with interconnected porous structures by using a w/o composite Pickering emulsion as the printing ink. The effects of the contents of β-TCP and h-SiO2 (emulsifier) and of the printing and drying temperatures on the structure of the scaffolds were systematically investigated. The in vitro TCH release and scaffold degradation were also studied. The antibacterial study and in vitro cell culture study suggested that the scaffolds had excellent antibacterial properties and an improved bone-forming ability. Our study provides a feasible scheme for constructing a hierarchically porous bifunctional bone tissue engineering scaffold to treat bone defects with infection.
Scaffold Design
To produce a bone tissue engineering scaffold with a biomimetic porous structure, excellent antibacterial capability, and improved bone-forming ability, in this study, dual-delivery Pickering emulsion inks with a high-volume internal water phase were formulated (Figure 1A). The printing inks were then subjected to micro-extrusion-based cryogenic 3D printing to obtain predesigned 3D products, followed by a freeze drying treatment (Figure 1B). PLGA-PCL polymers were used as the basic material to construct the 3D grid patterns and acted as the delivery carrier of the osteoconductive TCP particles and TCH drug, while the presence of DCM and DI water in the printing inks, and their removal after cryogenic 3D printing, were responsible for the formation of micropores on the struts.
Characterization of Pickering Emulsions
Prior to 3D printing, the viscosity of the Pickering emulsion inks was measured using a viscometer. As shown in Figure 2A, the emulsion inks in all groups showed a decrease in viscosity with an increasing shear rate, which indicated that all Pickering emulsions had shear thinning properties. Viscous w/o Pickering emulsion inks with a milky white state could be successfully formulated. The as-formulated w/o emulsion inks were stable enough for micro-extrusion-based 3D printing. The structure of Pickering emulsion inks with varied compositions was observed using optical microscopy. It can be seen from Figure 2B that the spherical water droplets in different groups had an average diameter of 17.11 ± 9.43 µm (Groups A1, A2, A3, and A4); 15.25 ± 9.74 µm (Group B); 15.41 ± 9.82 µm (Group C); 14.92 ± 9.96 µm (Group D); and 27.30 ± 19.55 µm (Group E), respectively. β-TCP agglomerates (red arrows in Figure 2B) were dispersed in the emulsions and could affect the stability of the water/oil interface.
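One common way to quantify the shear-thinning behaviour seen in Figure 2A is to fit the viscosity data to a power-law (Ostwald-de Waele) model; the sketch below uses hypothetical viscometer readings, not the measured curves.

```python
import numpy as np

def power_law_fit(shear_rate, viscosity):
    """Fit eta = K * gamma_dot**(n - 1) (Ostwald-de Waele model).

    A linear fit of log(eta) vs log(gamma_dot) gives slope = n - 1 and
    intercept = log(K); n < 1 indicates shear-thinning behaviour.
    """
    slope, intercept = np.polyfit(np.log(shear_rate), np.log(viscosity), 1)
    n = slope + 1.0
    K = float(np.exp(intercept))
    return K, n

# Hypothetical viscometer readings: shear rate (1/s) and viscosity (Pa*s)
gamma = np.array([1.0, 5.0, 10.0, 50.0, 100.0])
eta = np.array([45.0, 14.0, 8.5, 2.9, 1.8])
K, n = power_law_fit(gamma, eta)
print(f"K = {K:.1f} Pa*s^n, n = {n:.2f}  (n < 1 => shear thinning)")
```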
Characterization of Scaffolds Printed from Pickering Emulsion Inks
A comparative study was first conducted to investigate the effect of the printing temperature and drying temperature on the structure of printed scaffolds without the addition of β-TCP particles. For each emulsion ink formula, at least 20 scaffolds were printed. All the scaffolds were printed under the same working parameters. A1 scaffolds, which were printed and dried at low temperatures, had continuous grid patterns, and abundant secondary micropores were observed on the struts. The dimensions of the A1 scaffolds were identical to those of the CAD model (i.e., strut diameter: 600 µm; the distance between the center lines of two paralleled struts was 1000 µm), showing a high reproducibility. In comparison, the struts in Groups A2, A3, and A4, which involved room temperature printing and/or room temperature drying, appeared to shrink to a certain degree and even collapse, showing a lower reproducibility. Meanwhile, much fewer micropores were observed on the struts of these scaffolds (Figure 3B). These results suggest that cryogenic 3D printing and vacuum freeze drying could facilitate the formation of more micropores on struts and lead to a more complete scaffold structure. It was also found that adding more β-TCP particles into the Pickering emulsion inks could endow the printed scaffolds with clearer outlines, and this trend could be attributed to the increased viscosity brought by higher β-TCP contents. However, a reduction in the number of micropores on struts could be observed in Groups B-E (Figure 3B). As a result, Group E, which involved the highest content of β-TCP, showed the lowest specific surface area (Figure 4A). Additionally, we evaluated the macroscopic pore and secondary micropore size of Group E, which were 250.03 ± 57.88 and 24.70 ± 15.56 µm, respectively (Figure 4B). Through EDX spectroscopy, the presence of β-TCP on the surfaces of the scaffolds in Group E was confirmed (Figure 4C).
In Vitro TCH Release Behaviour
In our study, a strategy of "kill bacteria first, then induce osteogenesis" can be achieved by loading the appropriate concentrations of TCH in the oil phase and water phase, respectively. The cumulative release curve is shown in Figure 5A. The scaffolds exhibited a burst TCH release up to 101.65 ± 2.51 μg/mL in 4 h, which could realize the purpose of killing bacteria quickly. Afterwards, a slow but steady TCH release from 0.3 ± 0.05 μg/mL to 2.22 ± 0.13 μg/mL was achieved in 7 days, in which the TCH concentrations (0.25-8 μg/mL) could contribute to the proliferation and osteogenic differentiation of rBMSCs (Figure 5B) [20].
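A hedged sketch of how a cumulative release curve can be computed from sampled concentrations is given below; the medium and aliquot volumes are assumptions for illustration, since the sampling protocol is not detailed here.

```python
def cumulative_release(sampled_conc_ug_per_mL, medium_volume_mL=10.0,
                       aliquot_mL=1.0):
    """Cumulative drug mass released (ug), correcting for sampled aliquots.

    At each time point an aliquot is withdrawn for measurement and replaced
    with fresh medium, so the drug removed in earlier aliquots is added back
    when computing the cumulative amount released.
    """
    cumulative = []
    removed = 0.0  # drug mass already withdrawn in earlier aliquots (ug)
    for conc in sampled_conc_ug_per_mL:
        total = conc * medium_volume_mL + removed
        cumulative.append(total)
        removed += conc * aliquot_mL
    return cumulative

# Hypothetical sampled concentrations (ug/mL) at successive time points
print(cumulative_release([101.65, 2.22, 1.5, 0.8]))
```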
Antibacterial Property of Porous Scaffolds
To combat bacterial infections, biomaterials should have effective antibacterial capabilities [21]. The antibacterial activity of the scaffolds was investigated in vitro against the Gram-positive bacterium Staphylococcus aureus, a common bacterium that causes bone infections [21]. The antibacterial properties of the scaffolds were verified by agar diffusion testing. As expected, there was no inhibition zone around the control scaffolds, but obvious inhibition zones with radii of 12.3 ± 1.2-22.9 ± 0.3 mm were found around the scaffolds loaded with varied amounts of TCH (Figure 6A,B). Live and dead staining was also employed to show the antibacterial property of the scaffolds (Figure 7). After 4 h of incubation of bacteria on the porous scaffolds, the red fluorescence (dead bacteria) and green fluorescence (live bacteria) showed the same intensity in the control group. In comparison, the intensity of the red fluorescence signal was enhanced in the E-TCH1 group, and the strongest red fluorescence intensity was obtained in the E-TCH2 group (Figure 7), indicating that the TCH-loaded scaffolds possessed excellent antibacterial properties.
In Vitro Viability and Osteogenic Differentiation of rBMSCs on Drug Loaded Scaffolds
Biocompatibility is always considered a key factor for the application of tissue engineering scaffolds in the biomedical field. To test the cytocompatibility, the scaffolds cultured with rBMSCs were subjected to a cell viability test and live and dead staining (Figure 8A,B). A large number of viable cells (green) and only a few dead cells (red) were observed on all scaffolds, indicating that drug delivery scaffolds made through the cryogenic 3D printing of Pickering emulsion inks were a favorable platform for cell seeding. Then, cell proliferation was detected by the CCK8 assay. The OD450 value showed that the proliferative activity of rBMSCs increased gradually with increasing culture time (Figure 8C). More importantly, the OD450 value of the TCH-loaded scaffolds was the highest among all the groups. Afterwards, the bone-forming ability of the drug delivery scaffolds was evaluated in vitro. The expression of ALP, a marker of early osteogenic differentiation, can be used to indicate the osteogenic potential of the scaffolds. After 7-14 days of culture (Figure 8D), the ALP expression (purple) in both Group E and Group E-TCH1 was higher than that in Group A1, and a higher ALP expression (larger area and darker purple color) could be observed in Group E-TCH1 after 14 days of culture, suggesting that scaffolds with sustained TCH release had the ability to induce the osteogenic differentiation of rBMSCs and can be used as a bifunctional material to treat a bone defect with infection.
Discussion
In clinical orthopedics, bacterial infection after trauma, bone tumor-related tissue resection, etc. often leads to the failure of bone repair and/or bone regeneration. The formation of biofilms will promote the formation of chronic wounds, making it very difficult to effectively treat the bone defect. So far, regenerating bone tissue in infection regions is still challenging. Bioactive bone tissue engineering scaffolds with high porosity and hierarchically interconnected pores are designed to promote proper cellular responses such as cell migration, proliferation, and osteogenic differentiation and improved tissue regeneration [7]. Towards the treatment of bone defects in the infection region, a bone tissue engineering scaffold with suitable antibacterial capability is necessary [22][23][24][25]. In this study, a hierarchically porous scaffold with excellent antibacterial capability and osteogenic activity was made through a cryogenic 3D printing of a Pickering emulsion containing β-TCP nanoparticles and TCH.
Since the structure of Pickering emulsion is crucial to the printability of Pickering emulsion inks and the spatial structure of the printed scaffold, the contents of h-SiO2 and β-TCP should be carefully tuned. It is known that the size of the solid particles used to stabilize the water/oil interface of Pickering emulsions is usually small (basically smaller than 3 μm), and the addition of oversize particles will reduce the overall stability of the Pickering emulsions. Meanwhile, Pickering emulsions with higher stability normally have a smaller water droplet size [26,27]. As the size of β-TCP used in the current investigation was much larger than 3 μm (β-TCP is obtained after passing through a screen with a pore size of 70 μm), most β-TCP can only be dispersed either in water droplets or a continuous oil phase.
The printing temperature and drying temperature significantly affected the scaffold structure. When the scaffolds were printed and dried at room temperature, the molecular chains of the PCL-PLGA matrices were still freely movable in the DCM solvent. Hence, the volatilization of DCM would cause the inward movement of the molecular chains of the PLGA and PCL matrices toward the central region of the strut, forming dried struts with a smaller diameter. If the movement rates of the molecular chains of the PCL-PLGA matrices toward the central region are not consistent everywhere, the diameters of the struts become uneven, and scaffold collapse could also occur. Additionally, as water droplets have a much higher boiling point than DCM (100 °C vs. 39 °C), the strut thinning induced by the volatilization of DCM would squeeze out many water droplets that were originally located in the struts/on the strut surface, hence producing struts with fewer micropores. In comparison, when the scaffolds were printed at a low temperature (i.e., −15 °C) and freeze dried at −50 °C, the molecular chains of the PCL-PLGA matrices of the as-printed "wet" struts were constantly frozen, and water droplets embedded in the struts were also frozen into ice microparticles. In such cases, the removal of the organic solvent (i.e., DCM) and water phase (i.e., ice particles) through freeze drying would not affect the distribution of the molecular chains of PCL-PLGA in the scaffold matrix, hence forming struts with a uniform diameter and leaving numerous micro-holes/pores in the struts/on the strut surface. With the above considerations, whether the molecular chains of the PCL-PLGA matrix and the water droplets were frozen or not during the printing and drying procedures is the predominant factor influencing the microstructure of the scaffolds. Since the freeze drying machine used in this study had a working temperature of −50 °C, we only selected −50 °C as the freeze drying temperature to remove DCM and water from the cryogenic 3D printed scaffolds.
It is known that if a bacterial infection is not effectively controlled in the early implantation stages, the formation of biofilms will exacerbate the infection. The adhesion between bacteria and implants is the first and most important stage of this process [19]. Loading antibiotics in the implant can address this problem. Tetracycline antibiotics have been used clinically for decades and are active against a variety of Gram-positive and Gram-negative bacteria [16]. These antibiotics can cause various metabolic disorders in bacteria, including inhibition of protein synthesis, nucleic acid synthesis, oxidative phosphorylation enzymes, and various oxidation and fermentation reactions [28]. The minimum inhibitory concentration (MIC) is considered to represent the inherent activity of each antibacterial agent. The MIC50 and MIC90 of tetracycline in vitro are 0.25-2 µg/mL and 32 µg/mL for Gram-positive bacteria (Staphylococcus aureus) and 2 µg/mL and 64 µg/mL for Gram-negative bacteria (Escherichia coli), respectively [29,30]. In our design, the Pickering emulsion has a high concentration of TCH in the water phase and a low concentration in the oil phase. The bacteria are killed effectively within 4 h by a burst release of TCH. Moreover, the concentration of the subsequently released TCH is still sufficient to be bacteriostatic (Figure 5). Our scaffolds therefore exhibit excellent antibacterial properties, combining short-term sterilization with long-term bacteriostasis.
In terms of bone tissue regeneration, scaffolds with hierarchical porous structures are beneficial for the anchoring, spreading, and osteogenic differentiation of rBMSCs. Group E has a macroscopic pore size of 250.03 ± 75.88 µm and contains numerous secondary micropores of 24.70 ± 15.56 µm in size, which meets the needs of bone tissue engineering (i.e., macroscopic pore size: 200-600 µm [13]). Furthermore, the loaded bioactive ceramic, β-TCP, can rapidly degrade to generate calcium and phosphate ions that contribute to the differentiation of rBMSCs and subsequent mineralization [31]. Our results showed that the presence of β-TCP in hierarchically porous scaffolds significantly improved rBMSC differentiation and cell mineralization compared to the control group. As a common antibacterial drug, TCH not only has broad-spectrum antibacterial properties but also promotes the proliferation of rBMSCs at an appropriate concentration (0.25-8 µg/mL) [19,20,32]. Moreover, TCH can affect bone metabolism by modulating the function of osteoclasts; for example, it induces osteoclast apoptosis, reduces the ruffled border area and acid production, and selectively inhibits osteoclast ontogenesis [33]. Therefore, TCH, with both excellent antibacterial capability and the ability to promote osteoblast proliferation, can be used as a bifunctional drug to enhance bone tissue engineering.
These results indicate that we have realized the strategy of "kill bacteria first, then induce osteogenesis". Our hierarchically porous scaffolds with dual delivery of β-TCP and TCH meet the antibacterial requirements for implants while promoting the proliferation and differentiation of rBMSCs.
Formulation of Pickering Emulsion Inks
Given that h-SiO2 is a nondegradable nanoparticle, to reduce the toxicity brought by h-SiO2 [34], the content of β-TCP in Pickering emulsions was increased, while the content of h-SiO2 was reduced as much as possible. Pickering emulsion was prepared following a protocol in a previous study [7]. Briefly, 0.3 g PCL and 0.3 g PLGA were first dissolved in 10 mL of DCM. Next, a certain amount of h-SiO2 nanoparticles (0.075, 0.125, and 0.25 g) was added to the polymer solution, followed by ultrasonication for 10 min at 5 °C. Then, 23.3 mL of DI water and a certain amount of β-TCP nanoparticles (0, 0.125, 0.175, 0.25, 0.375, 0.5, and 0.625 g) were dispersed in the PCL-PLGA/DCM solution loaded with h-SiO2 and magnetically stirred at room temperature for 30 min at 1000 rpm, thereby obtaining Pickering emulsion with a water phase/oil phase ratio of 3:7. Table 1 details the ingredients of different groups of Pickering emulsion inks. The emulsion preparation process is shown in Figure 1A.
Fabrication of Porous Scaffolds
The CAD model with a wood crib structure was designed using SolidWorks (USA) and converted to the STL file format. A self-developed low-temperature 3D printer comprising an X-Y-Z motion platform, an extrusion system, and a refrigerated box was used to fabricate the scaffolds. A 20-mL syringe was used to load the w/o Pickering emulsion inks and was then mounted in the low-temperature 3D printer. The piston of the syringe was driven by a screw at a feeding rate of 0.002 mm/s to extrude the printing inks out of a V-shaped nozzle (inner diameter: 0.6 mm) to draw a continuous pattern layer-by-layer. A refrigerated box was used to stabilize the printing temperature, which was set at −15 °C. A typical CAD scaffold model had a 5-layer structure, and each layer had 10 parallel cylindrical struts. The distance between the center lines of two parallel struts was 1000 µm, and the intersection angle of the struts at adjacent layers was 90°. The layer thickness of the scaffolds was set as 0.25 mm, and the printing speed was set as 5 mm/s. After low-temperature 3D printing, the as-fabricated scaffolds were subjected to freeze drying to obtain dried scaffolds. Scaffolds printed (i.e., Groups A3 and A4) and dried (i.e., Groups A2 and A4) at room temperature were used as control groups. All dried scaffolds used in the antibacterial study and cell culture were sterilized beforehand by immersion in 75% (v/v) ethanol for 5 min, followed by rinsing in PBS three times for 5 min each.
Physical Characterization of Pickering Emulsions and Porous Scaffolds
The viscosity of the Pickering emulsions was measured by a rheometer (MCR 702 MultiDrive, Anton Paar, Graz, Austria) at 20 °C, equipped with stainless-steel plates (diameter: 40 mm) with a 1-mm gap between the plates. Viscosity tests were performed at shear rates ranging from 0.01 to 10 s−1. The structure of the Pickering emulsion inks was observed under an inverted fluorescence microscope (Eclipse TE2000-U, Nikon, Tokyo, Japan). The diameter of the droplets and the size of the pores were analyzed with ImageJ (Version 1.53K, National Institutes of Health, Bethesda, MD, USA). The macroscopic morphological images of the porous scaffolds were captured by a digital camera (iPhone 12), and the microscopic morphology of the scaffolds was observed using an optical microscope and a SEM (JSM-IT500A, JEOL Ltd., Tokyo, Japan). The specific surface area of the different scaffolds was measured using an ASAP 2460 analyzer (software version 2.02). The pretreatment temperature of the scaffolds was 40 °C, and the pretreatment time was 16 h.
In Vitro Release Behavior of Tetracycline Hydrochloride (TCH)
It has been reported that a sustained TCH release can promote the proliferation and differentiation of rBMSCs [19,32], while a high TCH concentration can effectively kill bacteria [35]. In the current study, to produce drug-loaded scaffolds with a sustained TCH release profile (designated as E-TCH 1), 1 mg of TCH was dispersed in 10 mL DCM. In contrast, to produce drug-loaded scaffolds with both burst TCH release and sustained TCH release (designated as E-TCH 2), 1 mg of TCH was dispersed in 10 mL DCM, and 23.3 mg of TCH was dissolved in 23.3 mL DI water. The rest of the contents and the fabrication process of the TCH-loaded scaffolds were the same as those of Group E. The absorbance-concentration standard curve of TCH and the release kinetics of TCH from the scaffolds were determined using a microplate reader at 372 nm (TECAN Spark, Shanghai, China). The release experiments were carried out in a constant-temperature shaker (Zhengrong Instrument, Jintan, China) at 37 °C with a shaking speed of 50 r/min. Three replicate groups were set up at each time point, and 200 µL of the release medium was withdrawn at fixed times.
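For illustration, the following Python sketch shows how such release measurements can be converted into a cumulative release profile: a linear absorbance-concentration standard curve is fitted and then applied to the aliquot readings, with a correction for the drug removed in previously withdrawn aliquots. All numerical values (standard-curve points, absorbance readings, loaded TCH mass, and medium volume) are hypothetical, and the assumption that each withdrawn aliquot is replaced by fresh medium is ours rather than stated in the text.

```python
import numpy as np

# --- Hypothetical standard-curve data: absorbance at 372 nm vs. TCH concentration (ug/mL)
std_conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0])        # known TCH concentrations
std_abs  = np.array([0.02, 0.11, 0.21, 0.43, 0.85])      # measured absorbances (illustrative)

# Linear standard curve: absorbance = slope * concentration + intercept
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def abs_to_conc(a):
    """Convert an absorbance reading to TCH concentration (ug/mL) via the standard curve."""
    return (a - intercept) / slope

# --- Hypothetical release measurements: absorbance of the 200-uL aliquot at each time point
time_h      = np.array([1, 4, 8, 24, 72, 168])            # sampling times (hours)
aliquot_abs = np.array([0.30, 0.62, 0.70, 0.74, 0.78, 0.81])

V_total   = 10.0    # total release-medium volume (mL), assumed
V_sample  = 0.2     # withdrawn aliquot volume (mL)
loaded_ug = 1000.0  # total TCH loaded in the scaffold (ug), assumed

conc = abs_to_conc(aliquot_abs)                           # concentration in the medium (ug/mL)
# Cumulative release corrected for the drug already removed in earlier aliquots
removed = np.concatenate(([0.0], np.cumsum(conc[:-1] * V_sample)))
cum_released_ug = conc * V_total + removed
cum_release_pct = 100.0 * cum_released_ug / loaded_ug

for t, p in zip(time_h, cum_release_pct):
    print(f"t = {t:>4} h : cumulative TCH release = {p:5.1f} %")
```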
Evaluation of Antibacterial Properties of Porous Scaffolds
Staphylococcus aureus (a Gram-positive bacterium) was used as a model bacterium to verify the antibacterial activity of the scaffolds. After adjusting the concentration of S. aureus to 1 × 10^6 CFU/mL, the bacterial suspension was spread on the surface of agar for inoculation. Next, the scaffold was placed in the center of the agar plate to coculture with S. aureus for 24 h at 37 °C and photographed to record the inhibition zone. For the live/dead staining, S. aureus was cultured until the turbidity of the bacterial suspension was about 0.8. Subsequently, the collected bacteria were diluted 100 times. The resuspended bacteria were cocultured with the scaffolds in 24-well plates for 4 h before live/dead staining.
Cell Culture
Rat bone marrow mesenchymal stem cells (rBMSCs) were cultured in Dulbecco's modified Eagle's medium (DMEM, Gibco, New York, NY, USA) containing 10% fetal bovine serum (Gibco, New York, NY, USA), 100 U/mL penicillin-streptomycin, and 2 mM L-glutamic acid (Invitrogen, Carlsbad, CA, USA). The culture plates containing rBMSCs and culture medium were placed in an incubator at 37 °C with 5% CO2 and saturated humidity. The medium was changed every 36 h. After placing the sterilized scaffolds in the wells of 24-well plates, 0.2 mL of rBMSC cell suspension with a concentration of 1 × 10^6 cells/mL was seeded on each scaffold. After culturing for 3 h, 1.8 mL of DMEM was added.
rBMSCs Proliferation and Osteogenic Differentiation on Scaffolds
The cytocompatibility of the scaffolds was investigated by staining with a live/dead staining kit (Molecular Probes, Eugene, OR, USA) at 1 and 3 days, in which live and dead cells were stained green and red, respectively. Scaffolds were placed in DMEM containing 4 µM EthD-1 and 2 µM calcein-AM for 15 min in a humidified incubator (37 °C, 5% CO2), and then photos were taken at the two time points using a fluorescence microscope (Nikon Eclipse TE2000-U inverted microscope, Japan). The proliferation of rBMSCs on the porous scaffolds was measured using the CCK-8 proliferation assay (Dojindo, Kumamoto, Japan) after 1 and 3 days of culture. After culturing on the scaffolds for 7 and 14 days, an ALP staining kit (Puhe Biomedical Technology, Wuxi, China) was used to study the osteogenic differentiation of rBMSCs.
Statistical Analysis
All statistical analyses were performed using SPSS software (version 18). Numerical data are presented as the mean value ± standard deviation (S.D.). For statistical comparisons, one-way analysis of variance (ANOVA) followed by Student's t-test was applied. p < 0.05 was considered statistically significant, with (*) used to indicate significant differences in the histological images.
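The analysis itself was run in SPSS; as a minimal sketch of the equivalent workflow (one-way ANOVA followed by pairwise Student's t-tests at the p < 0.05 threshold), the following Python example uses hypothetical replicate values purely for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical replicate measurements (e.g., CCK-8 optical density) for three scaffold groups
group_a = np.array([0.41, 0.44, 0.39, 0.43])
group_b = np.array([0.52, 0.55, 0.50, 0.57])
group_e = np.array([0.66, 0.70, 0.68, 0.64])

# One-way ANOVA across the groups
f_stat, p_anova = stats.f_oneway(group_a, group_b, group_e)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Pairwise Student's t-tests, flagged with (*) when p < 0.05 as in the figures
pairs = [("A vs B", group_a, group_b),
         ("A vs E", group_a, group_e),
         ("B vs E", group_b, group_e)]
for name, x, y in pairs:
    t_stat, p_val = stats.ttest_ind(x, y)
    flag = "*" if p_val < 0.05 else "n.s."
    print(f"{name}: t = {t_stat:.2f}, p = {p_val:.4f} {flag}")
```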
Conclusions
In this study, highly porous bone tissue engineering scaffolds with effective antibacterial properties and excellent osteogenesis capability were produced through the cryogenic 3D printing of β-TCP and TCH loaded w/o composite Pickering emulsion inks and a subsequent freeze drying treatment. The printed struts had a microporous surface with a very high surface area-to-volume ratio and, hence, could be used as an excellent delivery vehicle for antibacterial drugs. Since the loading of a high dosage of TCH in the water phase of the w/o Pickering emulsion inks led to a burst release of TCH at a high concentration from the scaffolds, effective elimination of S. aureus bacteria in a short time period could be achieved, hence meeting the early antibacterial needs after scaffold implantation. The slow but sustained release of TCH not only inhibited the growth of S. aureus in the long term but also promoted the proliferation of rBMSCs. Moreover, the osteogenic differentiation of rBMSCs was promoted in the presence of β-TCP and a sustained release of TCH at a low concentration. | 2022-08-31T15:02:47.517Z | 2022-08-27T00:00:00.000 | {
"year": 2022,
"sha1": "a79fefb443e89c58220f772295a77c20f4eba3d4",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/23/17/9722/pdf?version=1661584585",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b9de9293be5645cfd69fa3dc69c702390c4bd3ba",
"s2fieldsofstudy": [
"Materials Science",
"Biology",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
232425715 | pes2o/s2orc | v3-fos-license | Natural language processing for automated annotation of medication mentions in primary care visit conversations
Abstract Objectives The objective of this study is to build and evaluate a natural language processing approach to identify medication mentions in primary care visit conversations between patients and physicians. Materials and Methods Eight clinicians contributed to a data set of 85 clinic visit transcripts, and 10 transcripts were randomly selected from this data set as a development set. Our approach utilizes Apache cTAKES and Unified Medical Language System controlled vocabulary to generate a list of medication candidates in the transcribed text and then performs multiple customized filters to exclude common false positives from this list while including some additional common mentions of the supplements and immunizations. Results Sixty-five transcripts with 1121 medication mentions were randomly selected as an evaluation set. Our proposed method achieved an F-score of 85.0% for identifying the medication mentions in the test set, significantly outperforming existing medication information extraction systems for medical records with F-scores ranging from 42.9% to 68.9% on the same test set. Discussion Our medication information extraction approach for primary care visit conversations showed promising results, extracting about 27% more medication mentions from our evaluation set while eliminating many false positives in comparison to existing baseline systems. We made our approach publicly available on the web as an open-source software. Conclusion Integration of our annotation system with clinical recording applications has the potential to improve patients’ understanding and recall of key information from their clinic visits, and, in turn, to positively impact health outcomes.
BACKGROUND AND SIGNIFICANCE
Forty to 80% of healthcare information is forgotten immediately by patients postvisit. [1][2][3][4] Poor recall and understanding of medical concepts have been identified as significant barriers to self-management, a central component of the Chronic Care Model, resulting in poorer health outcomes. [5][6][7] These barriers are amplified in older adults with multimorbidity, [8][9][10][11] where reduced cognitive capacity, [12][13][14] low health literacy, 15,16 and complex treatment plans are common. [17][18][19] Older adults with multimorbidity account for 96% of Medicare expenditures, and in the absence of optimal self-management, they experience a lower quality of life and greater functional decline. 10,11,[20][21][22][23][24][25][26] An after-visit summary, shared via a patient portal, is a common strategy to improve recall of visit information. [27][28][29] Open notes is a current trend in healthcare that encourages clinicians to share the visit notes with patients. Sharing visit notes with patients not only increases patients' confidence in their ability to manage their health and understanding of their care but also enhances the communication efficiency. Through accessing visit notes, patients can take medications as prescribed and remember their healthcare plan better. 30,31 However, summaries impose a significant burden on clinicians who must document the entire visit in terms that are understandable to patients, with low health literacy being common. 32,33 Alternatively, audio recordings can provide a full account of the clinic visit and are an effective modality-71% of patients listen to recordings and 68% share their recording with a caregiver. 34 Clinic recordings improve patient understanding and recall of visit information, reduce anxiety, increase satisfaction, and improve treatment adherence. [34][35][36][37][38][39][40] As patient demand for recordings increases, 41,42 a growing number of clinics across the United States are offering audio recordings of clinic visits, and a recent survey reveals that almost a third of clinicians in the United States have shared a recording of a clinic visit with patients. 43 Yet, unstructured clinic recordings may overwhelm patients. 41,44 Advances in data science methods, such as natural language processing (NLP), can be used to identify patterns in unstructured data and extract clinically meaningful information. These methods have been used to predict hospital readmissions 45 and future radiology utilization, 46 and to characterize the significance, change, and urgency of clinical findings in medical records. [47][48][49][50][51] As such, we have developed a recording system for patients that applies NLP methods to unstructured clinic visit recordings. 52 In this article, we describe an approach to extract mentions of medication names in transcripts of clinic visit audio recordings. Annotating mentions of medications discussed during a clinic visit recording can provide added value to the audio-recorded health information. We use NLP to highlight medication mentions in transcripts of clinic recordings. These annotations can be utilized to index the audio and aid visit recall by enabling key visit information to be easily accessed. In addition, the indexed medical concepts can be linked to credible and trustworthy online resources. These resources would provide additional information about medications to aid in patient understanding. 
Such an approach could potentially increase patient self-management, and, when shared with caregivers, could increase their confidence in delivering care.
At the time of this work, no prior work focused on extracting medication information from clinic visit conversations and their transcriptions. There has been some work on the extraction of medication names and also prescription-related attributes such as dosage and frequency from medical text, primarily written clinical notes. In 2009, the Third i2b2 Shared-Task on Challenges in Natural Language Processing for Clinical Data Workshop focused on medication information extraction. The challenge was to extract and label medication-related terms (medication name, dosage, frequency, etc.) from discharge summaries. 53 Teams were given 696 summaries for development, and then 547 summaries were used for evaluation. Twenty teams submitted entries to the challenge, with the top result for annotating medication names being an F-score of 90.3% on the evaluation data set, utilizing a combination of a rule-based approach with two machine learning models (conditional random field and support vector machine). This top approach also achieved an F-score of 90.81% on an internal test set of 30 clinical records when evaluated by the system's authors. 54 Since the 2009 i2b2 challenge, additional work has been done to improve medication information extraction methods. Sohn et al. 55 developed Medication Extraction and Normalization (MedXN) to extract medication information and map it to the most specific RxNorm concept possible. This group reported an F-score of 97.5% for medication name on a test set of 26 clinical notes containing 397 medications. In 2014, MedEx, the system with the second-best results in the i2b2 challenge, was reimplemented using Unstructured Information Management Architecture (UIMA) to extract drug names and map them to both generalized and specific RxNorm concepts. 56 This system, named MedEx-UIMA, achieved an F-score of 97.5% for extracting and mapping to the most generalized concept and an F-score of 88.1% for mapping to the most specific concept, evaluating on a set of 125 discharge summaries from the original i2b2 challenge. The authors concluded that the new MedEx-UIMA implementation was consistent with and sometimes outperformed the original MedEx method. Most recently, PredMed was developed to extract medication names and related terms from office visit notes. 57 The comparison of PredMed for extracting medication names to earlier versions of MedEx and MedXN on a test set of 50 visit encounter notes showed F-scores of 80.0% for PredMed, 74.8% for MedEx, and 83.9% for MedXN. Since MedEx-UIMA and MedXN are available as open-source systems, we used these systems as baselines for comparison in our study.
LAY SUMMARY
In this work, we built a natural language processing approach to identify medication mentions in primary care visit conversations between patients and physicians to allow patients to easily find important elements of their recorded conversations with their physicians. This method annotates medication mentions in the text transcribed from office visits. Our approach utilizes a repository of common medication names to generate a list of medication candidates in the transcribed text, and then excludes common false positives from this list while including some additional common mentions of supplements and immunizations in the medication list for a transcript. We evaluated our method on a test set of 65 clinic visit transcripts with 1121 medication mentions. In this evaluation, our proposed method achieved a high performance for identifying the medication mentions, significantly outperforming existing medication information extraction systems for medical records. Integration of this annotation system with clinical recording applications has the potential to improve patients' understanding and recall of key information from the clinic visits, and their health outcomes.
In another related work, Kim et al. 58 developed a method for retrieval of biomedical terms in tele-health call notes. Their team identified two types of noise in these records: explicit noise, including "spelling errors, unfinished sentences, omission of sentence marks, etc.", and implicit noise, "non-patient information and a patient's untrustworthy information", and sought to remove that noise as part of their method. Utilizing a bootstrapping-based pattern learning process to detect variations related to the explicit noise, and dependency path-based filters to remove the implicit noise, their system achieved an F-score of 77.33% for detecting biomedical terms on evaluation data from 300 patients. This tool and its corresponding codebase are not publicly available for comparison in this study. Furthermore, there has recently been additional work on the analysis of medical conversations based on deep learning models. [59][60][61][62][63] However, unlike our open-source tool, these proprietary tools and their corresponding test sets are not publicly available for comparison to our approach. Of note, some of these previous works focus on relation extraction and were evaluated for identifying relations between medications and their properties, 59 rather than finding medication mentions themselves. Also, the proposed deep learning models require a large amount of data for training and fine-tuning, including tens of thousands of annotated doctor-patient conversations. 60,61 On the other hand, our approach was developed using only a fraction of those deep learning models' training sets. Considering the finite list of possible medications, our approach could achieve high performance (F-score: 85%) by efficiently using the proposed rules and filters without requiring large data sets and computational resources.
MATERIALS AND METHODS
Our NLP pipeline was developed and validated to extract medication mentions in clinic visit transcripts. We define medication mentions as any place in the text that a term refers to a medication by a specific or general name or common lay term. Our pipeline takes advantage of Apache clinical Text Analysis and Knowledge Extraction System (cTAKES) 64 to generate a primary candidate list of medication mentions. Subsequently, our approach filters out false-positive medical mentions in this list and adds the medication mentions that cTAKES misses in visit transcripts. Our workflow took the original visit text transcripts and processed them through the cTAKES default clinical pipeline resulting in a set of corresponding UIMA CAS XMI output files with the sentences, parts of speech, and all clinical concepts annotated by cTAKES. The software we developed for our approach utilizes the CAS XMI output from cTAKES and outputs our final annotated medication mentions in a Knowtator file format. Our approach and cTAKES baseline pipeline for identifying medications in this study do not utilize the outputted part of speech tags from cTAKES. eHOST was used in this study to compute metrics for our evaluation. Outputs from MedEx-UIMA and MedXN were also converted to Knowtator format to compute evaluation metrics using eHOST.
Visit transcripts data set
Transcripts of 85 patient visits with a primary care physician were used as our data set in this study. These visits were audio-recorded and transcribed by a HIPAA compliant commercial medical transcription service. These recordings, which came from eight clinicians, were 31 min long on average, ranging from 5.5 to 70.5 min. This study and the use of human subject data in this project were approved by the committee for the Protection of Human Subjects at Dartmouth College (CPHS STUDY#30126) with informed consent. Table 1 shows the demographics of the participants who had their clinical visit recordings used in our study.
Ten transcripts were randomly selected from this data set as a development set. Another ten of the visit transcripts were randomly selected as a validation set for our model. The remaining 65 transcripts were reserved as a held-out test set for evaluation.
Annotation for medication mentions
All the transcripts were independently annotated for medication mentions by two second-year medical students using the Extensible Human Oracle Suite of Tools (eHOST) software. 65 The two annotators initially worked through blocks of 5 or 10 transcripts, meeting after annotating each block to track inter-annotator agreement (IAA) on the identified medication mentions, discuss disagreements, and improve their accuracy in this annotation task, which led to steadily higher IAA over time. Our IAA calculation considers overlapping annotations as a match, allowing a flexible annotation arrangement for compound medication names. Once the annotators reached over 80% IAA, we considered them trained in this annotation task. Subsequently, they annotated the entire set of transcripts. Inter-annotator agreement for medication mentions between our annotators for the 65 transcripts in the evaluation data set was 84.6%. In that data set, Annotator 1 annotated 1076 instances of medication mentions, and Annotator 2 annotated 1048 instances of medication mentions.
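As a minimal sketch of an overlap-based agreement calculation, the following Python example treats any overlapping pair of character spans as a match; the spans are invented, and the F1-style normalization is one common choice rather than the exact formula used in this study.

```python
def spans_overlap(a, b):
    """True if two (start, end) character spans overlap at all."""
    return a[0] < b[1] and b[0] < a[1]

def overlap_agreement(ann1, ann2):
    """
    Inter-annotator agreement treating any overlapping pair of spans as a match.
    The returned score is an F1-style ratio: 2 * matches / (len(ann1) + len(ann2)).
    """
    matched = set()
    matches = 0
    for a in ann1:
        for j, b in enumerate(ann2):
            if j not in matched and spans_overlap(a, b):
                matched.add(j)
                matches += 1
                break
    return 2.0 * matches / (len(ann1) + len(ann2)) if (ann1 or ann2) else 1.0

# Toy example: two annotators marking character offsets of medication mentions
annotator1 = [(120, 129), (342, 351), (900, 912)]
annotator2 = [(121, 129), (700, 708), (901, 912)]
print(f"IAA = {overlap_agreement(annotator1, annotator2):.2%}")
```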
For evaluation, we created a set of gold standard medication mentions in our evaluation data set based on the work of our expert annotators. Our labels are based on overlapping annotations of two annotator experts. All medication mentions in our evaluation set that were agreed upon by the two expert annotators were kept in this gold standard set. A physician, trained in the method used by the annotators, served as an adjudicator to resolve disagreements between our annotators. A disagreement in the annotations would occur when one annotator had annotated a medication mention while the other had not. Disagreements were resolved by the adjudicating physician either choosing to keep the annotation from a single annotator in the gold standard set, or choosing to reject it. The adjudicating physician also reviewed disagreements between the output from our model and the set of annotations from the human adjudicator to identify true positives and false positives for evaluating our model by either choosing to keep the annotation from either source or rejecting it. As a result, a small number of medication mentions that were missed by both annotators were thus added to our gold standard set. The resulting gold standard evaluation data set contained 1121 medication mentions.
cTAKES baseline for annotating medications in transcripts
Our baseline approach was to utilize Apache cTAKES 64 to identify the medication mentions in the transcripts. cTAKES is an open-source, widely used NLP system for biomedical text processing. As one of its NLP capabilities, cTAKES is able to annotate and extract medical information from the free text of clinical reports. We utilized the Default Clinical Pipeline of cTAKES (version 4.0.0) and its Unified Medical Language System (UMLS) Metathesaurus 66 fast dictionary lookup functionality. cTAKES' UMLS fast dictionary lookup, by default, uses sentences as a lookup window for matching, covering the text of the entire document. For our dictionary, we used the provided prebuilt cTAKES dictionary, which includes RxNorm and SNOMED-CT. SNOMED-CT provides extensive coverage of laboratory tests and clinical measurements, while RxNorm focuses on drug names and codes. Our only modification to the default cTAKES configuration was to utilize its PrecisionTermConsumer function, which refines annotations to the most specific variation (eg, if it finds the text "colon cancer" in a report, it only annotates "colon cancer" but not "colon" nor "cancer"). Since cTAKES is designed to work with medical record free text, there is an assumption that the input text is a clinical note, written by an individual with a medical background. In contrast, the visit transcripts are typically a dyadic conversation between a patient and their physician.
Our model for annotating medications in transcripts
After initial experiments with cTAKES and UMLS as a means to find medications mentioned in transcribed clinic visit conversations, we explored additional methods to filter out common false positives from the output generated by cTAKES. For this purpose, we took an iterative approach, looking at the most common errors in cTAKES outcomes for identification of medication mentions in our development set and developed new rule-based filters to detect and remove those from the cTAKES output. As our accuracy on the development set improved by filtering out many types of false positives (described in detail below), we ran our model against our validation set, finding that immunizations along with herbs and supplements persisted as typical errors. cTAKES had difficulty differentiating immunizations from diagnoses (eg, chickenpox vaccine vs chickenpox). Also, cTAKES did not annotate some commonly used herbs and supplements. In the next sections, we describe how our approach adds annotations for immunizations, herbs, and supplements, while filtering out false positives for medication mentions. An overview of this approach is shown in Figure 1. We have made our code for this approach publicly available on GitHub (https://github.com/BMIRDS/HealthTranscriptAnnotator).
Common word filtering
Since many of the words appearing as false positives in the cTAKES output for medication annotations are common conversational words that have second meanings as medication names or acronyms (eg, "today" is also ToDAY, a name for an antibiotic primarily in veterinary use that appears in UMLS), we decided to utilize a large dictionary of common words to filter out these occurrences. We chose to use a dictionary of the 10 000 most common English words from Google's Trillion Word Corpus (https://github.com/first20hours/google-10000-english). 67 If any of those 10 000 words were annotated by cTAKES as a medication, our model removes that annotation, with a small subset of exceptions. From the 10 000 common words list, there were 24 words that are considered as exceptions and are allowed to remain annotated as medications. These words fit into three categories: (1) names of common medications (eg, "Ambien", "Insulin", etc., which accounted for 17 of the 24); (2) generic terms (eg, "herb", "supplement", and "vitamin", along with their plurals); and (3) the word "flu", which can refer to either a diagnosis or an immunization.
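A minimal sketch of this filter is shown below; the common-word list and exception set are small illustrative subsets rather than the actual 10 000-word list and the full set of 24 exceptions.

```python
# Minimal sketch of the common-word filter. The word list and exceptions are illustrative
# subsets; the study used Google's 10,000 most common English words and 24 exceptions.
COMMON_WORDS = {"today", "may", "all", "direct", "soon"}
EXCEPTIONS   = {"insulin", "ambien", "herb", "herbs", "supplement",
                "supplements", "vitamin", "vitamins", "flu"}

def keep_medication_candidate(term: str) -> bool:
    """Drop cTAKES medication candidates that are common English words,
    unless they are whitelisted exceptions (real drug names, generic terms, or 'flu')."""
    token = term.lower()
    return token not in COMMON_WORDS or token in EXCEPTIONS

candidates = ["Today", "insulin", "lisinopril", "direct", "vitamin"]
print([c for c in candidates if keep_medication_candidate(c)])
# -> ['insulin', 'lisinopril', 'vitamin']
```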
UMLS semantic type filtering
In our error analysis for cTAKES outputs, we also examined UMLS semantic types for the terms that cTAKES annotated as medication mentions. The six types shown in Table 2 generally produced false positives and few to no true positives. Our approach removes these semantic types as medication annotations from the cTAKES output where they occur.
Allergen filtering
cTAKES annotates a number of food and food ingredient-related terms (eg, "coconut") as medication mentions, denoting them as allergenic. We identify those annotations that have the word "allergenic" included in their preferred cTAKES text metadata, and we remove those annotations from the cTAKES output when producing our model's output.
Immunization additions
A small number of medication-related UMLS terms are considered as both diagnoses and immunizations/vaccinations (eg, "flu" and "pertussis"). As a result, cTAKES annotation outputs were inconsistent about annotating these terms as immunizations/vaccinations or diagnoses. To improve the annotation of immunizations as medications, we also investigated the cTAKES diagnosis annotations. Since cTAKES segments the input text into sentences, we searched for the words "vaccine," "shot," "booster," and "pill" in the same sentence as a diagnosis annotation, and if both co-occurred, we annotated the diagnosis text as a medication.
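The following sketch illustrates this co-occurrence rule; the sentence and diagnosis data structures are simplified assumptions rather than the actual cTAKES CAS XMI representation.

```python
import re

VACCINE_CUES = {"vaccine", "shot", "booster", "pill"}

def immunization_mentions(sentences, diagnosis_terms):
    """
    Re-label diagnosis annotations (e.g., 'flu', 'pertussis') as medication mentions
    when a vaccine cue word appears in the same sentence.
    `sentences` is a list of sentence strings; `diagnosis_terms` maps a sentence index
    to the diagnosis strings found there (a simplified, assumed representation).
    """
    added = []
    for idx, sent in enumerate(sentences):
        words = set(re.findall(r"[a-z]+", sent.lower()))
        if words & VACCINE_CUES:
            added.extend(diagnosis_terms.get(idx, []))
    return added

sents = ["Did you get your flu shot this fall?",
         "The chickenpox rash cleared up on its own."]
diags = {0: ["flu"], 1: ["chickenpox"]}
print(immunization_mentions(sents, diags))   # -> ['flu']
```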
Vitamin, herb, and supplement additions
cTAKES also produces inconsistent results for annotating herbs and supplements. Our approach adds an additional dictionary of common herbs and supplements from MedlinePlus (https://medlineplus.gov/druginfo/herb_All.html) to capture these. 68
Evaluation
We applied our model on the evaluation data set containing 65 transcripts to annotate medication mentions, in addition to capturing the original medication mention annotation output from cTAKES 4.0.0's default clinical pipeline. We also applied publicly available MedEx-UIMA 1.3.7 and MedXN 1.0.1 software on the evaluation data set to compare our results with their medication name annotations as the baselines.
RESULTS
We calculated the standard evaluation metrics of precision, recall, and F-score for our proposed approach and the baseline methods using the medication mention gold standards in our validation and evaluation sets. These evaluation metrics are shown in Table 3. We compared the results from cTAKES, MedEx-UIMA, MedXN, and our proposed model for identification of the gold standard medication mentions for the 65 transcripts in the evaluation set. Table 4 shows this comparison.
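For illustration, the sketch below computes span-level precision, recall, and F-score, counting any overlap with a gold-standard mention as a true positive; the spans are hypothetical, and this automatic overlap criterion is a simplification of the partly manual adjudication used in our evaluation.

```python
def overlaps(a, b):
    """True if two (start, end) character spans overlap."""
    return a[0] < b[1] and b[0] < a[1]

def precision_recall_f1(predicted, gold):
    """Span-level metrics where any overlap with a gold mention counts as a true positive."""
    tp = sum(any(overlaps(p, g) for g in gold) for p in predicted)
    fp = len(predicted) - tp
    fn = sum(not any(overlaps(g, p) for p in predicted) for g in gold)
    precision = tp / (tp + fp) if predicted else 0.0
    recall = tp / (tp + fn) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

pred = [(10, 19), (50, 58), (200, 207)]    # hypothetical system output spans
gold = [(10, 19), (120, 128), (201, 207)]  # hypothetical gold-standard spans
p, r, f = precision_recall_f1(pred, gold)
print(f"precision={p:.1%} recall={r:.1%} F-score={f:.1%}")
```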
DISCUSSION
Our results indicate that the proposed approach significantly reduced the number of false positives, with a relatively small drop in the number of true positives and false negatives, in comparison to the best of three baseline models. As highlighted in Table 4, our proposed model has the best overall performance in comparison to the other baseline methods, with all of its evaluation metrics falling in the range of 83-87%. Overarching the finer aspects of our work is the observation that extracting medical terms from conversational dialogue between patients and their primary care physician has distinct challenges, such as more informal medical terms and unstructured content, in comparison to extracting terms from typical clinical, note-like reports. To the best of our knowledge, the proposed work in this article is the first attempt to extract medical terminology from conversations between a patient and their physician. Prior work for finding medication mentions has focused on written clinical reports. [47][48][49][50][51] Our error analysis suggests that baseline approaches, which rely on dictionaries, struggle with patient-clinician conversational text because of language like filler words (eg, "aha" and "hmm") matching with abbreviations for medications, and the fact that common conversational words are often used as medication names. We also observed, among the filters that we applied to the original cTAKES outputs, that filtering out "hormone" semantic type had the most impact on the improvement of the results. The most common (n > 10) false negatives by cTAKES were "flu shot" (36), "tetanus" (14), and "inhaler" (11). Among annotations that were missed by one of our two annotators, the most common (>10) cases were "Vitamin D" (21), "flu shot" (13), and "Mirena" (11). The most common (>10) false positives in the evaluation set annotated by our approach were "clot" (15 occurrences) and "over-the-counter" (11 occurrences), and the most common (>10) false negatives missed by our approach were "inhaler," "calcium," and "tetanus" (11 occurrences each). A slight but consistent majority of false positives in our data set were from the discussion of lab test results, which will be a focus of our future work to improve the current results.
One advantage of our approach is that each portion of our pipeline was designed to generalize beyond the specific issues seen during development, so our approach was able to recognize terms outside the development/validation data sets. Other rule-based and dictionary-based systems have often relied on whitelisting/blacklisting terms from their development data sets, which limits how they generalize outside their development data. For example, our use of the 10 000 most common English words from Google's Trillion Word Corpus allows us to recognize and filter many common words. Of note, our evaluation has limitations. Foremost, our evaluation data set is relatively small and is from a single medical institution. We plan to extend our evaluation data set in future work to test the generalizability of the proposed approach. In addition, because our gold standard was created by reaching consensus between two medical annotators and carrying out our approach, it is possible that other baseline methods, such as cTAKES, found a small number of true positives that were not accounted for by any of the annotators or our proposed method. That said, the sheer number of false positives generated by cTAKES makes adjudication of its medication mention output impractical. Also, our approach has been developed to detect only medication mentions in primary care visit notes. Identifying other types of medical words and their properties in these notes could significantly increase and broaden the utility of our approach. In particular, detecting additional information about medications, such as frequency, dose, refills, modifications, and side effects, could benefit patients. We plan to extend our approach to identify additional information about medications and other semantic types, such as disorders, in future work. Another limitation is that clinical visit transcripts are more complex if English is not the patient's first language or if an interpreter is involved. Transcripts do not reflect non-verbal communication, such as visible emotions and body language. The transcripts do not include the assessment or plan section of the visit note, which reflects the clinician's summary and reflection that may occur after the visit itself. Finally, our approach, which is based on controlled vocabulary and rule-based filtering, does not consider word context and the corresponding contextual semantics in different circumstances. Since one of our goals is using these annotations to index segments of clinic visit conversations for end-users to review postvisit, we plan to conduct future work with end-users to determine how these limitations may impact the usability of the system. Future plans to integrate the proposed information extraction methods in this study with a digital library of clinic visit recordings are expected to make patients and caregivers more knowledgeable and confident about their health care needs, resulting in greater self-management capabilities.
Notably, as we fine-tuned our model on the validation set, we observed that context words in a sentence can be critical in our task, for example, for determining mentions of immunizations/vaccinations. Our result suggests that although dictionary- and rule-based methods can achieve a promising result (F-score = 85%) for identification of medication mentions in clinic visit conversations, additional improvements in this domain will be gained through considering contextual semantics and machine learning models, which our team will pursue in future work.
CONCLUSION
In this work, we developed an NLP pipeline for finding medication mentions in primary care visit conversations. The proposed model achieved promising results (Precision = 86.3%, Recall = 83.8%, F-score = 85.0%) for identification of medication mentions in the 65 clinic visit transcripts in our evaluation set. Since this is a first-of-a-kind study with clinic visit transcripts, we compared our approach to three existing systems used for extracting medication mentions from clinical notes. This comparison shows our approach can extract about 27% more medication mentions while eliminating many false positives in comparison to existing baseline systems. Integration of this annotation system with clinical recording applications has the potential to improve patients' understanding and recall of key information from their clinic visits, and, in turn, behavioral and health-related outcomes. We plan to explore this potential in future trials of our system.
CONTRIBUTORS
All authors reviewed and edited the manuscript and contributed to the study concept and design of the experiments. CHG, PJB, WH, and MDD collected the data. KLB, JCF, JAS, WMO, and JR contributed to data annotation. CHG, WW, and SH analyzed the data and wrote the manuscript. SH and PJB acquired the funding, and SH supervised the study.
Conflict of interest statement
GE: Glyn Elwyn has edited and published books that provide royalties on sales by the publishers: the books include Shared Decision Making (Oxford University Press) and Groups (Radcliffe Press). Glyn Elwyn's academic interests are focused on shared decision making and coproduction. He owns copyright in measures of shared decision making and care integration, namely collaboRATE, integRATE (measure of care integration), consideRATE (patient experience of care in serious illness), coopeRATE (measure of goal setting), incorpoRATE (clinician attitude to shared decision making), Observer OPTION-5 and Observer OPTION-12 (observer measures of shared decision making). He has in the past provided consultancy for organizations, including: (1) Emmi Solutions LLC who developed patient decision support tools; (2) National Quality Forum on the certification of decision support tools; (3) Washington State Health Department on the certification of decision support tools; (4) SciMentum LLC, Amsterdam (workshops for shared decision making). He is the Founder and Director of &think LLC which owns the registered trademark for Option Grids™ patient decision aids; Founder and Director of SHARPNETWORK LLC, a provider of training for shared decision making. He provides advice in the domain of shared decision making and patient decision aids to: (1) Access Community Health Network, Chicago
DATA AVAILABILITY
The data set utilized in this study contains patient health information and is not publicly available. This data set can be shared with potential collaborators upon reasonable request to the corresponding author in compliance with in-place institutional policies and protocols to protect the data privacy and intellectual property. | 2021-03-31T19:06:57.631Z | 2021-03-31T00:00:00.000 | {
"year": 2021,
"sha1": "e73dc43c974cd342700d91c216d6271b7cd6c962",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/jamiaopen/article-pdf/4/3/ooab071/39805213/ooab071.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fef9cc93e5bcbd83b9cf7b41ed70f73042e722e9",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
236573214 | pes2o/s2orc | v3-fos-license | Sensitivity of Enhanced SRTM for Sea-Level Rise Variation on Egyptian Coasts
The accuracy of digital elevation models (DEMs) is critical to planning adaptation strategies for coastal areas under changing scenarios of global climate and sea-level rise. This research assesses the accuracy of the interferometric DEM from the SRTM mission, one of the freely available DEMs that have been widely used in many applications. The SRTM model is enhanced by using the best-fit global geoid model for Egypt instead of EGM96 and by applying a mathematical scale factor formula. This research aims to obtain a higher vertical accuracy of the SRTM model that meets the requirements of coastal inundation studies with minimum field measurement data. The methodology consists of three phases. In the first phase, the enhancement scale factor formulas were derived using uniformly distributed ground control points (GCPs) at two areas of different terrain in Egypt. The second phase contains the evaluation and validation process. It was observed that the accuracy improvement achieved ranged from 45% to 60%, depending on the type of terrain. In the last phase, the sensitivity of these DEMs to sea-level projections was analysed using the most recent available local tide gauge data. It is recommended to investigate this approach further to determine the optimal number and distribution of reference ground control points needed to adjust various freely available DEMs of low vertical accuracy for further studies and other applications.
Introduction
Nowadays, planning, development, and risk management along the Egyptian coasts require several studies covering topography, water resources, climate change, and sea-level rise [1,2]. Sea-level rise (SLR) currently poses a significant threat to coastal areas in Egypt and around the whole world. During the period 1901-2010, the global mean sea level rose by 19 centimetres, and it is very likely that the SLR will exceed the observed rate of 2.0 mm/year during 1971-2010, with a rate of 8 to 16 mm/year during 2081-2100 [3]. A reliable and precise digital elevation model (DEM) is therefore greatly needed to support decision-makers in planning adaptation strategies to sea-level rise. A DEM is a digital representation of the topographic surface in three dimensions, with height (elevation) given above mean sea level. Traditionally, a DEM can be obtained from different sources such as terrestrial survey, aerial photography, LiDAR, and satellite survey. Terrestrial surveys use both conventional and modern surveying instruments, such as Global Navigation Satellite System (GNSS) receivers, total stations, and leveling, to produce a DEM. Although this method gives reasonably accurate results, it is time-consuming and less efficient when applied to areas that are not easily accessible. The elevation accuracy of DEMs obtained from sources like LiDAR and aerial surveys is good, and these sources cover large areas compared to terrestrial surveys. However, aerial surveys are restricted in some countries for security reasons and because of limited data collection resources [4,5]. There is now a revolution in the use of satellite survey data to overcome many of the obstacles facing traditional surveying methods, with the promise that higher-accuracy DEMs will become freely available for the entire globe.
Currently, numerous freely available open-access global DEMs with large global coverage exist, for example, the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), the Shuttle Radar Topography Mission (SRTM), the Global Multi-resolution Terrain Elevation Data 2010 (GMTED2010), the Global 30 Arc-Second Elevation (GTOPO30), and the EarthEnv-DEM90 digital elevation models [6]. ASTER and SRTM are the two most commonly used global DEMs and have been addressed in several studies. ASTER is a cooperative effort between Japan's Ministry of Economy, Trade, and Industry (METI) and the U.S. National Aeronautics and Space Administration (NASA) that was released in October 2011. The characteristics of ASTER are a spatial resolution of 1 arc sec (~30 m) and a height accuracy of about 17 m [7,8,9]. On the other hand, SRTM is a joint effort of NASA, the National Geospatial-Intelligence Agency (NGA), the German Aerospace Center (DLR), and the Italian Space Agency (ASI). The recent version (4.1) of SRTM, with a spatial resolution of 1 arc sec (~30 m) and around 16 m of vertical height accuracy, was released in September 2014 [10,11,12]. EarthEnv-DEM90, with a spatial resolution of 3 arc sec (~90 m), is a compilation dataset merged from ASTER GDEM v2 and SRTM v4.1 [13]. GMTED2010 was developed by a collaborative effort of the U.S. Geological Survey (USGS) and the NGA. GMTED2010 has been produced at three separate spatial resolutions: 7.5 arc sec (~250 m), 15 arc sec (~500 m), and 30 arc sec (~1 km) [14,15]. Also, GTOPO30 was produced by the staff at the USGS Center for Earth Resources Observation and Science with a horizontal cell size of 30 arc sec (~1 km) and was obtained from several raster and vector sources of topographic information [16]. These free DEMs differ in data resolution and accuracy, according to the technology of data acquisition and the methodology of processing with respect to a particular terrain and land cover type. The positional and attributive accuracy of these DEMs is often unknown and non-uniform within each dataset [17,18].
On the other hand, the height supplied by open-source data is related to the reference ellipsoid and is called the ellipsoidal height (h). However, the height related to an equipotential surface of the terrestrial gravity field, called the orthometric height (H), is the most important for DEM generation. The two heights are related through the geoid undulation (N); hence, to obtain the orthometric height from the ellipsoidal height, the geoid undulation must be known. The geoid undulation varies from one site to another. The Earth Gravitational Models (EGM96 and EGM2008) are global geopotential models that are used to obtain the geoid undulation [19,20]. The conversion of the ellipsoidal height (h) of these DEMs into the orthometric height (H) is fundamental in most geoscience and engineering applications. This conversion requires the geoid undulation (N) relative to the World Geodetic System (WGS84) ellipsoid and uses the simple mathematical relation H = h − N [20,21].
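As a small illustration of this conversion, the following Python sketch interpolates the geoid undulation N at a point of interest from a gridded geoid model and applies H = h − N; the grid values are invented and far coarser than EGM2008.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical coarse grid of geoid undulations N (metres) over a small area
lats = np.array([29.0, 29.5, 30.0, 30.5, 31.0])
lons = np.array([30.0, 30.5, 31.0, 31.5, 32.0])
N_grid = np.array([[14.2, 14.5, 14.9, 15.3, 15.6],
                   [14.4, 14.7, 15.1, 15.5, 15.8],
                   [14.6, 14.9, 15.3, 15.7, 16.0],
                   [14.8, 15.1, 15.5, 15.9, 16.2],
                   [15.0, 15.3, 15.7, 16.1, 16.4]])   # illustrative values only

geoid = RegularGridInterpolator((lats, lons), N_grid)  # bilinear interpolation by default

def orthometric_height(h_ellipsoidal, lat, lon):
    """H = h - N: convert a GNSS ellipsoidal height to an orthometric height."""
    N = float(geoid([[lat, lon]])[0])
    return h_ellipsoidal - N

print(f"H = {orthometric_height(32.80, 30.2, 31.3):.2f} m")
```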
In fact, no official precise local DEM for Egypt has been published yet. Therefore, using free data sources requires examining their quality and assessing their accuracy before attempting any data extraction from these free DEMs and Global Geopotential Model (GGM) products. Rabah et al. [22] checked the accuracy of three types of global DEMs (namely SRTM 1, SRTM 3, and ASTER) using precise GPS/leveling control points. The results showed that the most accurate one was SRTM 1, which produced a mean height difference of 2.89 m and a standard deviation of ±8.65 m. El-Quilish et al. [23] compared eight types of global DEMs in the Nile Delta region, including EarthEnv-DEM90, SRTM 1, SRTM 3, ASTER, GMTED2010, and GTOPO30, using 416 GPS/leveling control points. They introduced a statistical measure for DEM accuracy evaluation called the reliability index, which is based on the weighted average mean concept. The results showed that in the Nile Delta region, the EarthEnv-DEM90 and SRTM models have a high reliability index. Furthermore, Al-Krargy et al. [24] investigated three types of global DEMs (namely ASTER, SRTM 3 arc-second, and GTOPO30) and compared seven GGMs, including EGM2008 and EGM96, over precise local gravity and GPS/leveling data. The results showed that the SRTM 3 DEM produces a mean standard deviation of ±4.3 m, using 1227 observed orthometric height control points in Egypt. It was also indicated that EGM2008 is the most precise global model, as it produces a mean standard deviation of geoid undulation differences of ±0.23 m using 1074 observed GPS/leveling stations.
Most of the Egyptian coastal lands are becoming vulnerable to inundation, as their elevation will likely be below the projected sea-level rise under the impact of global warming and climate change [25]. Most of the recent studies mentioned above investigated the assessment and comparison of different DEMs but did not put forward methods of accuracy enhancement for specific applications, especially in Egypt. The accuracy of DEM data is critical for the results of coastal flood risk assessments, because topographic data are the most important factor in determining the extent of flooding and therefore the accuracy of flood maps. Especially in low-lying coastal zones, the simulation of flood extents was found to be very sensitive to the terrain representation [26,27]. Therefore, this research aims to obtain the highest possible vertical accuracy of SRTM that achieves the requirements of coastal inundation risk studies with a minimum volume of field measurement data. In this paper, an enhanced SRTM model is introduced by using the best-fit global geoid model for Egypt instead of EGM96. The enhancement was completed by applying scale factor formulas that were derived mathematically using actual field measurements of uniformly distributed ground control points along the study areas. Then, the improved accuracy was evaluated at the two study areas with different types of terrain surface, i.e., a flat terrain and a hilly one. This approach was validated by using high-precision local DEMs, and the sensitivity of these DEMs to sea-level projections was statistically analysed.
Study Areas and Available Data
Egypt has a coastline of about 1550 km on the Mediterranean Sea on the north side, with the River Nile reaching the sea in the middle. On the east side, Egypt's Red Sea coastline is about 1705 km long [28]. The Nile Delta is the delta formed in Lower Egypt where the Nile River spreads out and drains into the Mediterranean Sea. From west to east, it covers around 240 km of coastline. Two case studies were selected to achieve the purpose of this research. Study area (1), around 200 km long and 500 m wide, was selected on the Nile Delta coast. This area has the advantage of being flat terrain, and the main land use in this area is agriculture. Fig. 1 represents the location of the Delta study area. The green-colored points represent the reference GCPs that were measured and processed using GNSS instruments and leveling. The red-colored points along the whole study area are a grid of points measured using the GNSS post-processing kinematic (PPK) technique with an observation interval of 5 seconds. These points will be used for local DEM creation and validation purposes.
The second study area is along the Red Sea coast. It is also about 194 km long and 500 m wide. This area was selected as the case study of hilly terrain, and it is a well-known tourist zone. The location of study area (2) and the land surveying data used are shown in Fig. 2.
Generally, topographic and tidal data are the two main types of data used in this research, as follows. Leveling measurements of 50 GCPs on the Nile Delta coast and 46 GCPs on the Red Sea coast were used. They were distributed uniformly, with about 5 km spacing parallel to the coastlines, and observed using a precise level instrument. These 96 high-precision GCPs were obtained according to international standards and specifications by the Survey Research Institute in Egypt. Two local DEM surfaces were generated with a 30 m cell size using a GIS application.
Methodology
The methodology described to achieve the objective of this paper is based on three main phases:
The first phase: SRTM data enhancement
The horizontal datum of the SRTM data is the World Geodetic System (WGS 84), while the vertical datum is the Earth Gravitational Model (EGM96). To assess the quality of the SRTM data, several procedures were conducted. The 96 reference ground control points used are approximately uniformly distributed parallel to the coastline in both study areas. The corresponding values of these points from the SRTM DEM were extracted in a GIS environment using bilinear interpolation. Then, the introduced enhancement approach was applied using these GCPs in two steps. In the first step, the Earth Gravitational Model EGM96, which is included in SRTM, was replaced with EGM2008. EGM2008 was selected as it has been proven to be the most precise global model for Egypt, according to the previously mentioned studies.
The geoid heights from EGM96 and EGM2008 were computed at each location using the NIMA EGM96 calculator (ver. 1.0) and the Altrans EGM2008 calculator, respectively. These software applications compute the accurate geoid undulation values that are used in the conversion of SRTM orthometric heights to ellipsoidal heights and vice versa. In the second step, a mathematical scale factor formula for each study area was derived using the GCPs and the corresponding elevation points extracted from SRTM. At both study areas, with their different terrains, simple scale factor formulas based on a linear transformation method were used to achieve better agreement with the reference ground control points.
The applied linear transformation has the form Z_0 = a (Z_m + N_EGM96 - N_EGM2008) + b, where Z_0 is the observed elevation at the reference GCPs, Z_m is the model elevation from SRTM, N_EGM96 and N_EGM2008 are the geoid undulation values from EGM96 and EGM2008, respectively, and a and b are the estimated transformation parameters.
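As a rough illustration of this step, the sketch below fits the two transformation parameters by ordinary least squares; the array names and sample values are hypothetical placeholders, not the study's measurements.

import numpy as np

# Hypothetical inputs: observed GCP elevations (Z_0), SRTM model elevations (Z_m),
# and geoid undulations of EGM96 and EGM2008 at the same locations (all in metres).
z_obs = np.array([1.8, 2.1, 2.6, 3.0, 3.4])
z_srtm = np.array([4.2, 4.6, 5.3, 5.9, 6.1])
n_egm96 = np.array([15.3, 15.3, 15.4, 15.4, 15.5])
n_egm2008 = np.array([15.9, 15.9, 16.0, 16.0, 16.1])

# Step 1: re-reference the SRTM orthometric heights from EGM96 to EGM2008.
z_srtm_egm2008 = z_srtm + n_egm96 - n_egm2008

# Step 2: estimate the linear transformation Z_0 ~ a * Z_m(EGM2008) + b by least squares.
A = np.column_stack([z_srtm_egm2008, np.ones_like(z_srtm_egm2008)])
(a, b), *_ = np.linalg.lstsq(A, z_obs, rcond=None)
print(f"scale factor a = {a:.3f}, offset b = {b:.3f} m")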
The second phase: Enhancement evaluation and validation by statistical analysis
The values of the actual levels of the 96 reference points used were compared with the data extracted from the SRTM DEMs before and after applying the scaling formulas at the same control point locations. The vertical accuracy of these points from the DEMs was evaluated by computing the vertical root-mean-square error (RMSE) and the mean error (ME). RMSE measures the difference between the DEM elevations and the reference GCP elevations; these individual point differences are also called residuals. ME indicates whether the set of measurements is, on average, higher or lower than the true values. RMSE and ME may be estimated by the following equations [29]: RMSE = sqrt[ Σ(E_o - E_m)² / n ] and ME = Σ(E_o - E_m) / n, where E_o is the observed elevation, E_m is the model's elevation, and n is the number of tested elevation points.
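The two error statistics can be computed directly from the point-wise residuals; a minimal sketch (array names are illustrative):

import numpy as np

def rmse_and_me(e_obs, e_model):
    """Return (RMSE, ME) between observed GCP elevations and DEM elevations."""
    resid = np.asarray(e_obs) - np.asarray(e_model)
    rmse = np.sqrt(np.mean(resid ** 2))
    me = np.mean(resid)  # sign shows whether the DEM is, on average, low or high
    return rmse, me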
The validation of the accuracy enhancement approach for the freely available SRTM DEM was done by applying the same scaling mathematical formulas derived from the reference GCPs in phase 1. These simple scaling formulas were applied to the whole raster surface of the SRTM DEM at both study areas after the conversion to the global model EGM2008. The rasters of the gravity models EGM2008 and EGM96 were downloaded from the International Centre for Global Earth Models (ICGEM) to be used in the SRTM conversion process [30]. The resulting scaled SRTM model surfaces were compared with the LDEMs at both study areas. The RMSE was computed by subtracting the SRTM DEMs from the accurate LDEMs and producing new raster DEMs showing all the elevation differences (DEMs of difference, DOD). All these raster calculations were done using the calculate statistics and raster math tools in the GIS application.
The third phase: Sea Level rise Analysis
Sea level projections were based on analysing hourly, monthly, and annual tide records at Alexandria (a city on the Mediterranean Sea) and Safaga (a city on the Red Sea). Linear regression was carried out on these data to compute the rates of sea-level rise at the two tide gauge sites. Then, the sensitivity of the DEMs to sea-level projections was analysed using the GIS application to determine the areas that are topographically lower than the elevated water levels. The inundation extent and water depth on each DEM dataset for different SLR scenarios were studied, taking into consideration the need to check hydrologic connectivity.
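A minimal sketch of the trend estimation, assuming annual mean sea level values are available per year (the values shown are placeholders, not the station records):

import numpy as np

years = np.array([2008, 2010, 2012, 2014, 2016, 2018])
msl_cm = np.array([42.0, 43.1, 43.9, 44.8, 45.3, 45.9])  # hypothetical annual MSL, cm

# Linear regression: rising rate (slope) and projection of the trend to 2100.
slope, intercept = np.polyfit(years, msl_cm, deg=1)
rate_mm_per_year = slope * 10.0
msl_2100 = slope * 2100 + intercept
print(f"trend: {rate_mm_per_year:.1f} mm/yr, projected MSL in 2100: {msl_2100:.1f} cm")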
Results and Discussion
The statistics of the variation of the elevation data from the SRTM DEMs with respect to the corresponding actual values from the GCPs are presented in Table 2 for the two study regions.
As shown in Table 2, the height variation values range from 0.90 m to 6.53 m, with an average of 2.58 m and a standard deviation of ±1.19 m, for the 50 GCPs used at the Nile Delta. Moreover, the height variations at the Red Sea area, for the 46 GCPs used, are on the order of 31 m, with an average of 6.69 m and a standard deviation of ±6.23 m (see Table 2). Compared with the original SRTM, the DEM after enhancement and scaling (scaled SRTM) produces the smallest differences, with standard deviations of ±0.18 m and ±5.56 m at the Nile Delta coast and the Red Sea coast study areas, respectively. The error statistics at the different terrains show RMS errors of 1.16 m and 2.76 m for the introduced enhanced SRTM. The figures show that the RMS error decreased significantly, by about 60% and 45%, after enhancement and scaling of SRTM at the Nile Delta and Red Sea coast study areas, respectively (shown in Fig. 3). Generally, the RMS errors of the SRTM DEMs were clearly lower in plain regions such as the Nile Delta coast than in hilly regions such as the Red Sea coast study area. In this section, the introduced methodology was also validated by comparing DEM rasters in the GIS application. The local DEMs (LDEMs) at the two different terrains are considered as the reference to assess the vertical accuracy of SRTM after enhancement with the same scaling formulas derived from the GCPs in phase 1. The properties of the two reference local DEMs were described in Section 2. Figures 4-7 highlight the selected area of interest (AOI) and contain layout views at different scales for each DEM raster to show the elevation surface of the AOI at each study area.
The local DEM in Fig. 4(a) for the Nile Delta study area showed variations in elevation from +0.08 m to +5.45 m, while in Fig. 4(b) the variations in elevation were from -20 m to +20 m for the SRTM. Fig. 4(c) shows the enhanced SRTM elevations, from +1.12 m to +3.80 m, obtained by applying EGM2008 and the scale factor formula derived using the 50 GCPs on the Nile Delta coast. Analyses and comparisons were made to assess the enhancement of the SRTM vertical accuracy by subtracting the DEM rasters in Fig. 4(b) and Fig. 4(c) from the reference local DEM in Fig. 4(a) using the raster math tool in the GIS application, producing new raster maps showing all the elevation differences (DOD). The results in Fig. 5(a) show that the differences in elevation between the LDEM and the SRTM vary from +20.61 m to -17.87 m above MSL, while the differences between the LDEM and the scaled SRTM vary from +2.99 m to -3.09 m, as shown in Fig. 5(b). The RMSEs were computed using the elevation differences (DOD) in Figs. 5(a,b); the RMSE for SRTM was 2.63 m, whereas after the enhancement it decreased to 1.35 m. The achieved enhancement of the SRTM vertical accuracy was about 48% for study region 1 at the Nile Delta coast. The local DEM in Fig. 6(a) for the Red Sea coast study area showed variations in elevation from -0.04 m to +29.37 m, while in Fig. 6(b) the variations in elevation were from -15 m to +32 m for the SRTM. Fig. 6(c) shows the enhanced SRTM elevations, from -3.1 m to +15 m, obtained by applying the corresponding formula derived using the 46 GCPs on the Red Sea coast.
The same analysis as for the first study region was applied. The results in Fig. 7(a) show that the differences in elevation between the LDEM and the SRTM vary from +24.49 m to -25.54 m above MSL, while the differences between the LDEM and the scaled SRTM vary from +25.50 m to -9.70 m, as shown in Fig. 7(b). Likewise, the RMSEs computed using the elevation differences (DOD) in Fig. 7(a,b) were 5.01 m for SRTM, decreasing after the enhancement to 2.29 m. The vertical accuracy of the scaled SRTM was thus improved by about 54%.
The sensitivity of these DEMs to sea-level projections was analysed using the most recent available local tide gauge data. Tide records spanning 2008 to 2018 were used to measure the rates of present-day relative sea-level rise at both tide stations. The annual averages of sea level were analysed taking the zero value of mean sea level (MSL) as the datum. It was found that the MSL at Alexandria during the period 2008-2018 varies from 41.96 cm to 45.89 cm above that datum, with a mean value of 44.76 cm. At Safaga, the MSL changes between 47.99 cm and 53.15 cm, with an average of 51.33 cm. Fig. 8 presents the annual MSL values for both tide gauges. Furthermore, linear regression was carried out on these data to compute the rates of sea-level rise at the two tide gauge sites. Table 3 presents the obtained trend formulas at both tide gauge stations with their corresponding coefficients of determination. The analyses show that the two locations have positive sea-rise trends with a significant quality of fit. These trends indicate that the sea-level rise rate of the Mediterranean Sea at Alexandria is 3.4 mm/year and that of the Red Sea at Safaga is 5.1 mm/year. Moreover, the data records collected from both stations show that, over the period 2008-2018, the Red Sea level at Safaga was higher than the Mediterranean Sea level at Alexandria. These estimates are close to the results of similar previous studies [31,32]. The difference in annual mean sea level between Alexandria and Safaga was 7.25 cm in 2018.
It is worth mentioning that the data utilized in this research are the most recent available local tide records, which are consistent with ongoing climate change; hence, they can be considered reliable for long-term MSL rise determination. SLR planning scenarios for climate adaptation are mostly limited to a time horizon of 100 years. Consequently, using the formulas in Table 3, the inundation height by 2100 will be 28 cm and 43 cm above the MSL at Alexandria and Safaga, respectively. Most low-lying delta coastlines, including the Nile Delta, are subject to natural subsidence. However, the Red Sea coast does not subside, since its subsurface sediments are made of incompressible rocky layers. Land subsidence can be estimated using different techniques, including differential interferometric synthetic aperture radar (InSAR) [33]. Based on several recent studies, the average subsidence rate was taken as 4 mm/y for study region 1 [34,35]. Consequently, the inundation height values are approximated as 50 cm above MSL at the two tide gauge stations by 2100. There are other aspects, based on local and global factors, that affect coastal flooding and SLR, so the sensitivity of the SRTM DEMs was also examined under another SLR scenario using a 1 m inundation height. For each specified scenario, the total inundated area (in km2) was calculated with each DEM dataset, and the results are given in Table 4. Regarding the LDEM at the flat terrain of the Nile Delta, the percentage of the area inundated increased from 1.5% at 0.5 m SLR to 34.6% at 1 m SLR. The inundated areas from the SRTM DEM were significantly larger than those from the local DEM for both scenarios. Also, the scaled SRTM did not show elevation values below 1 m, as detailed in Figs. 4 and 5. The results were quite different at the hilly terrain of the Red Sea coast, where the percentage of the area inundated for the LDEM increased from 0.25% to 1.24% between the two scenarios. Also, the inundated area of the scaled SRTM was improved to be only 25% higher than that of the LDEM, compared with the SRTM inundated area, which was 61% higher, for the 1 m inundation height.
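A simplified sketch of the inundation calculation under a given SLR scenario, assuming the DEM is available as a NumPy array with a known cell size and that the seaward edge lies along the first row of the grid; only cells below the water level that are connected to the sea are counted, which is one simple way of enforcing hydrologic connectivity (a "bathtub" model with connectivity). The array and parameter names are illustrative.

import numpy as np
from scipy import ndimage

def inundated_area_km2(dem, water_level_m, cell_size_m=30.0):
    """Area below water_level_m that is connected to the sea (assumed along row 0)."""
    below = dem <= water_level_m
    labels, _ = ndimage.label(below)            # connected low-lying patches
    sea_labels = np.unique(labels[0, :])        # patches touching the seaward edge
    sea_labels = sea_labels[sea_labels != 0]
    flooded = np.isin(labels, sea_labels)
    return flooded.sum() * (cell_size_m ** 2) / 1e6  # km^2

# Example for a hypothetical 0.5 m scenario:
# area = inundated_area_km2(dem_array, water_level_m=0.5)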
Conclusions
The present study was an attempt to develop a practical framework for enhancing the vertical accuracy of freely available DEMs such as the SRTM model at two different terrains along the Egyptian coasts. The proposed methodology used uniformly distributed GCPs to derive mathematical formulas that can be applied for the enhancement of the SRTM raster DEMs. The RMSEs of the SRTM data extracted at the reference GCPs for the plain and hilly regions were 2.88 m and 5.07 m, respectively. These results match the specified accuracy of SRTM, which is about ±16 m at a 95% confidence level [36]. The evaluation of the enhancement showed improvements of 60% and 45% at the two terrains of the Nile Delta and Red Sea coasts. The validation process, carried out by applying the derived scaling formulas to a 60 km strip of the SRTM raster DEMs, confirmed that the RMSEs were reduced by 48% to 54% in comparison with the accurate local DEMs at the flat and hilly regions. These results fit very well with those obtained from the reference ground control points. With respect to the sensitivity of these DEMs to sea-level projections using the recent local tide gauge data, the scaled SRTM DEM at the Red Sea region can serve as reliable, low-cost data of medium accuracy that fulfil the requirements of monitoring the inundation height response to sea-level rise. However, the accuracy achieved for the enhanced SRTM DEM did not meet the requirements for coastal inundation studies in the plain area of the Nile Delta. Therefore, it is concluded that this approach should be investigated under different mathematical models and parameters to determine the optimal size and distribution of reference ground control points needed to adjust freely available DEMs depending on the vertical accuracy demanded. | 2021-08-02T00:06:34.640Z | 2021-05-01T00:00:00.000 | {
"year": 2021,
"sha1": "5bd19bab3b590355a32d91dfabc9f9638a926917",
"oa_license": "CCBY",
"oa_url": "http://www.hrpub.org/download/20210530/CEA37-14823416.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "485bbf17c7b055219002c78018effa4ba815ff65",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
216257946 | pes2o/s2orc | v3-fos-license | Fuels of the Diesel-Gasoline Engines and Their Properties
Hydrocarbon-based fuels such as gasoline, diesel, natural gas, and liquefied petroleum gas (LPG) are generally used as fuels in diesel and gasoline engines. In this study, hydrocarbon-based fuels such as alkanes (paraffins), naphthenes (cycloparaffins), alkenes (olefins), alkynes (acetylenes), and aromatics (benzene derivatives) have been classified, and their molecular structures and properties have been comprehensively explained. In addition, some of the important fuel properties of the fossil-based fuels commonly used in internal combustion engines, such as gasoline and diesel, have been evaluated. Thus, hydrocarbon-derived fuels, namely diesel, gasoline, natural gas, and liquefied petroleum gas (LPG), have been investigated as internal combustion engine fuels, and their physical and chemical properties have been explained and compared with each other. The octane number and cetane number substantially affect the fuel's ignition delay period and self-ignition temperature. Therefore, the operation of gasoline and diesel engines is dominantly affected by the octane and cetane numbers, respectively. As a result, the physical and chemical properties, advantages, and disadvantages of fossil-based fuels have been comprehensively explained and compared with each other. The fuels commonly used in diesel and gasoline engines have been investigated, and their important properties have been revealed.
Introduction
Fuels can be classified into three groups as solid, liquid, and gas. Although liquid hydrocarbons are generally used in internal combustion engines, in urban transportation where air pollution is a problem, biofuels such as alcohols and biodiesel or gaseous fuels, which are liquefied petroleum gas (LPG) or natural gas, have been rarely used as a fuel. The importance of using alternative fuels in internal combustion engines emerges because of limited oil resources and decreasing reserves, increasing oil prices, and increasing environmental problems. In order to reduce dependence on oil, alternative engine fuels such as vegetable oils, biofuels (alcohols, biodiesel, biogas), and liquefied hydrogen gas have been of particular interest to researchers [1,2].
Hydrocarbon-based fuels
Fuel compounds containing carbon and hydrogen atoms in their basic molecular structure are called hydrocarbon-based fuels. Hydrocarbons can be divided into two main groups, aliphatic and aromatic. Aliphatic hydrocarbons are divided into two subclasses, saturated and unsaturated hydrocarbons. A hydrocarbon is called saturated if its carbon atoms are fully bonded with hydrogen through single bonds only, and unsaturated if the carbon atoms form double or triple carbon-carbon bonds. Saturated hydrocarbons are classified as alkanes; unsaturated hydrocarbons are classified as alkenes or alkynes [3,4]. Hydrocarbons can be in the solid, liquid, or gas phase according to the number of carbon atoms in the chemical structure. Generally, hydrocarbons with 1-4 carbon atoms are gases, those with 5-19 carbon atoms are liquids, and molecules with 20 or more carbon atoms are in the solid phase [5]. CnHm is the general closed chemical formula of the liquid hydrocarbons used as fuels in internal combustion engines. However, in addition to hydrogen and carbon, crude oil derivatives also contain small amounts of O2, H2, S, H2O, and some metals [2]. Figure 1 gives the classification of the hydrocarbon compounds.
Alkanes (paraffins)
Alkanes are saturated hydrocarbons with the general closed formula CnH2n+2, also known in the literature as paraffins, named by adding the suffix "-ane" to the Latin carbon-number roots. Alkanes contain more hydrogen in their chemical structure than other hydrocarbons; this high number of hydrogen atoms leads to higher thermal values and lower densities than other hydrocarbons (620-770 kg/m3). As the number of carbon atoms in the hydrocarbon chain increases, properties of the alkanes such as the autoignition tendency, molecular weight, and melting and boiling points increase. Each increase in carbon number in the hydrocarbon chain causes the boiling point to rise by about 20-30°C. Alkanes are insoluble in water because they are apolar. The forces acting among apolar molecules such as hydrocarbons and inert gases are Van der Waals forces, in other words, London dispersion forces. The dispersion force is a weak intermolecular force between all molecules arising from temporary dipoles induced in the atoms or molecules; such dispersion forces are commonly expressed as London forces. The number of electrons and the surface area of the molecules are the most important factors affecting the magnitude of the dispersion forces, and these attractive forces directly affect the boiling point of these materials. Alkanes may exist in straight-chain, branched-chain, and cyclic forms depending on the arrangement of the carbon atoms. Van der Waals forces are more effective in straight-chain alkanes than in branched ones because the molecular surfaces of straight-chain alkanes are more in contact with each other. Thus, the boiling point of straight-chain alkanes with the same molecular weight is higher than that of branched ones. In other words, as branching increases, the boiling point decreases, because the branched structure makes the molecule more compact: increased branching narrows the surface area of the molecule, and the boiling point decreases with the reduction of Van der Waals forces between the molecule and its neighbours. The ignition tendencies of straight-chain alkanes are generally higher than those of branched-chain ones because they are more easily broken down. Unlike straight-chain molecular structures, branched-chain and ring structures have higher ignition resistance. Therefore, straight-chain alkanes are more suitable for use as diesel fuel rather than as gasoline fuel. However, alkane isomers, which have the same closed formula but branched chains or rings, are more suitable for use as gasoline engine fuels since they have higher resistance to engine knocking. The property that defines whether a fuel ignites spontaneously is called the octane number; in other words, it is a measure of ignition resistance. Straight long-chain fuels generally have a lower octane number, whereas branched structures have a higher octane number. To summarize briefly, the octane number is usually inversely proportional to the chain length of the fuel molecules: the shorter the chain structure of the fuel molecules, the higher the octane number. The octane number is directly proportional to the number of branched side chains; in addition, a ring molecular structure leads to high octane numbers. Alkanes are present in solid, liquid, and gaseous form according to their carbon number: those with 1-4 carbon atoms are gases, those with 5-25 are liquids, and those with more than 25 carbon atoms are solids.
Alkanes containing fewer than 4 carbon atoms are found in natural gas and petroleum gases, those with 5-12 atoms in gasoline, those with 12-20 atoms in diesel fuels, and those with 20-38 atoms in lubricating oils [1-8]. Figure 2 shows the molecular structure of the first four alkanes.
Naphthenes (cycloparaffins)
Another type of alkane has a cyclic structure, represented by the general formula CnH2n. Two hydrogen atoms are missing compared with normal alkanes because their structures are cyclic and closed. As the number of hydrogen atoms is low compared with normal alkanes, they have lower thermal values but higher densities (740-790 kg/m3). Cycloalkanes are difficult to break up because of their closed ring structure and have higher ignition resistance than straight-chain alkanes. However, they are suitable for use as both gasoline and diesel fuel because they have lower ignition resistance than branched alkanes. The thermal values of naphthenes are lower than those of alkanes and higher than those of aromatics [2]. Figure 3 shows the cyclic molecular structure of cyclohexane.
Alkenes (olefins)
Alkenes are unsaturated hydrocarbons that have a double bond between carbon atoms, represented by the general formula CnH2n. Olefins with one double bond in the molecular structure are called mono-olefins (CnH2n), and those with two double bonds are called di-olefins (CnH2n-2). Mono-olefins are named with the "-ene" or "-ylene" suffix at the end of the carbon-number root, while di-olefins are named by attaching the "-diene" suffix to the roots showing the carbon number. Many isomers are formed by displacement of the double bonds of alkenes. The thermal values of alkenes are lower than those of alkanes, and their density is between 620 and 820 kg/m3, because the ratio of carbon atoms to hydrogen atoms is higher in the molecular structure of alkenes. Alkenes have high ignition resistance. Alkenes are less resistant to oxidation than alkanes, so they can easily react with oxygen. Thus, oxidation causes gum formation in alkenes, which can consequently block the fuel lines. Alkenes contain a double bond between carbon atoms, one part of which is a sigma (σ) bond and the other a pi (π) bond. For this reason, they break down with more difficulty than alkanes, which have only single sigma bonds. Alkenes can be used as fuel for gasoline engines due to their high ignition resistance; they can also be used as diesel fuel if their autoignition tendency is increased. The most important property of alkenes is that they give addition reactions with H2, X2, HX, and H2O compounds. The carbon atoms of alkenes are not fully saturated with hydrogen; therefore, alkenes combine more easily with elements such as hydrogen, chlorine, and bromine, being more chemically reactive than alkanes and naphthenes. With this reactive structure, they are used as raw materials to obtain better-quality fuels by methods such as hydrogenation, polymerization, and alkylation. While alkenes are present in very small amounts in crude oil, they are generally obtained by thermal and catalytic cracking methods, in which large molecular products are decomposed by heat or a catalyst. Alkenes are present in large quantities in the gasoline obtained by these methods. The high ignition resistance of alkenes makes them a good gasoline engine fuel, but they can also serve as diesel engine fuel if their ignition tendency is increased [1-3, 5, 9]. Figure 4 shows the molecular structure of some alkenes.
Figure 3. The cyclic molecular structure of cyclohexane [5].
Alkynes (acetylenes)
Alkynes are compounds with the general closed formula CnH2n-2, having at least one triple bond (C≡C) between carbon atoms. Alkynes are unsaturated hydrocarbons because not all carbon atoms carry their full complement of bonds to hydrogen. Alkynes are named by adding the "-yne" suffix to the root corresponding to the number of carbon atoms in the longest chain. The simplest and best-known compound is acetylene (C2H2), and alkynes may also be referred to as acetylene derivatives. Alkynes are more reactive than alkanes and naphthenes because they are unsaturated; thus, they can more easily react with elements such as hydrogen, chlorine, and bromine to form compounds [3,5,9]. Figure 5 gives the molecular structure of some alkynes.
Aromatics (benzene derivatives)
At the end of the nineteenth century, organic compounds were divided into two classes, aliphatic and aromatic. Aliphatic compounds were those that exhibited fat-like ("liparoid") chemical behaviour, while aromatic compounds were those with a low hydrogen/carbon content and a fragrant character. Aromatics are unsaturated hydrocarbons having double bonds between carbon atoms, with the closed general formula CnH2n-6. In aromatic compounds the ring carbon atoms are bonded to each other by aromatic bonds, not single bonds; aromatics are also called arenes. Although aromatics are unsaturated compounds, they have different chemical properties from other aliphatic unsaturated compounds. Unlike alkenes and alkynes, aromatics do not give the addition reaction that is characteristic of unsaturated compounds. Furthermore, aromatics undergo substitution reactions that are especially characteristic of saturated hydrocarbons. For these reasons, and because aromatics are more stable than other unsaturated compounds, they have been categorized as a separate class of hydrocarbons. Due to the presence of more than one double-bonded carbon atom and their cyclic structure, they have strong bond structures and are highly resistant to ignition. The densities of aromatics range between 800 and 850 kg/m3; their higher densities in the liquid state give them a high energy content per unit volume but a low thermal value per unit mass. Because the bonds between the carbon atoms are strong, aromatics have a high resistance to knocking. Therefore, owing to their high octane number, aromatics can be added to gasoline to increase its knocking resistance, but they are not suitable for use as diesel engine fuel because of their low cetane numbers. The simplest aromatic compound is benzene, with the chemical formula C6H6, and the main structures of other aromatics are also built on benzene. Generally, they can be obtained artificially from coal and can be used as gasoline additives to improve the knocking resistance of gasoline. Aromatics must be used carefully because they are carcinogenic, cause exhaust pollution, have high solubility, and have corrosive effects on fuel supply systems [1-3, 5, 6, 9]. Figure 6 shows the molecular structure of some important aromatics.
Fuels of internal combustion engine
Gasoline and diesel fuels, which are derivatives of crude oil, are generally used in internal combustion engines. The approximate elemental composition of an average crude oil is 84% carbon, 14% hydrogen, 1-3% sulfur, and less than 1% nitrogen, oxygen, metals, and salts. Crude oil consists of a wide range of hydrocarbon compounds comprising alkanes, alkenes, naphthenes, and aromatics. These range from very small molecular structures such as propane (C3H8) and butane (C4H10) to mixtures of structures with very large molecules such as heavy oils and asphalt. Therefore, crude oil needs to be distilled to be used in internal combustion engines. As a result of the thermal distillation of crude oil, petroleum derivatives such as petroleum gases, jet fuel, kerosene, gasoline, diesel, heavy fuels, machine oils, and asphalt are obtained. In general, the distillation of crude oil yields on average about 30% gasoline, 20-40% diesel, 20% heavy fuel oil, and 10-20% heavy oils [2,5].
During the distillation of crude oil, gasoline is obtained between 40 and 200°C, and diesel fuel is obtained between 200 and 425°C. In order to use these fuels in engines, some important physical and chemical properties are required, such as the specific gravity of the fuel, its structural components, thermal value, flash point and combustion temperature, self-ignition temperature, vapor pressure, viscosity, surface tension, freezing temperature, and cold flow properties. The specific mass (density) of the fuel decreases with increasing hydrogen content in the molecule. The density of gasoline and diesel fuels is generally given in kg/m3 at 20°C. The American Petroleum Institute (API) number is an international measurement system that classifies crude oil according to its viscosity under the American standards. The specific gravity can be defined as the ratio of the weight of a given volume of a substance at 15.56°C (60°F) to the weight of water at the same volume and temperature. The relationship between the API number and the specific gravity S_g is expressed as API = (141.5/S_g) - 131.5 [1,5]. According to the API number, crude oil is divided into three groups, heavy, medium, and light, and as the API number increases, the crude oil becomes thinner. The API degree of diesel fuels varies between about 25 and 45. The viscosity, color, main component, and definition of crude oil according to the API grade are given in Table 1 [1,5].
Figure 6. Molecular structure of some aromatics [5].
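The API relation can be evaluated in both directions; a small sketch (the classification thresholds of Table 1 are not reproduced here):

def api_from_sg(sg):
    """API gravity from specific gravity at 60 °F (15.56 °C): API = 141.5/SG - 131.5."""
    return 141.5 / sg - 131.5

def sg_from_api(api):
    """Specific gravity from API gravity (inverse of the relation above)."""
    return 141.5 / (api + 131.5)

print(api_from_sg(0.85))   # a typical diesel-range specific gravity -> API ≈ 35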
While the density of gasoline is ρ = 700-800 kg/m3, that of diesel fuel varies between ρ = 830-950 kg/m3. While the carbon content of alkane and naphthene fuels is about 86%, it is around 89% for aromatics. In addition to carbon and hydrogen atoms, sulfur, asphalt, and water can be found in gasoline and diesel fuels. In particular, sulfur can cause corrosion of engine parts, and the combustion products of sulfur have a negative impact on the environment. Asphalt adheres to the valves and piston surfaces and causes wear, while water causes corrosion and reduces the thermal value of the fuel; these are therefore undesirable components in the fuel. The thermal values of liquid fuels are given as energy per unit mass (kJ/kg or kcal/kg), while the thermal values of gaseous fuels are given as energy per unit volume (kJ/l, kJ/m3, or kcal/m3). The thermal values of fuels are expressed in two ways, as the lower and the higher heating value. If the water in the combustion products is in the vapor state at the end of the measurement, the measurement gives the lower heating value of the fuel; when this water condenses at the end of the measurement, it gives its heat of evaporation to the system, and the measured value is the higher heating value of the fuel. As a result, if single-phase steam is obtained in the calorimeter capsule at the end of the thermal value measurement, the lower heating value is measured; if the dual (liquid-vapor) phase is obtained, the higher heating value is measured. When an air-fuel mixture is heated sufficiently, the fuel starts to ignite by itself without external ignition. This temperature is referred to as the self-ignition temperature (SIT) of the fuel, and the delay time before the fuel combusts is the ignition delay (ID). The terms SIT and ID describe important features of engine fuels. SIT and ID values vary depending on variables such as temperature, pressure, density, turbulence, swirl, air-fuel ratio, and the presence of inert gases. Self-ignition is the basic principle of the combustion process in diesel engines. A high SIT value is desired in gasoline engines and a low one in diesel engines. The autoignition temperature of gasoline is about 550°C or higher [1,2,4].
Depending on the type of engine, gasoline or diesel, the desired fuel properties vary. The most important properties of gasoline fuels are volatility and knock resistance, whereas diesel fuels are required to have suitable viscosity, surface tension, and ignition tendency. In gasoline fuels, volatility and knock resistance are among the most important parameters affecting engine performance. The volatility of gasoline affects the rate and amount of evaporation of the fuel in the intake port and in the cylinder. Low volatility hinders the formation of a sufficient air-fuel mixture, but a very volatile fuel can obstruct the fuel flow by creating vapor bubbles in the suction line as the local temperature increases. As the flame front advances during combustion, the increasing temperature and pressure inside the cylinder compress the air-fuel charge that the flame front has not yet reached. Thus, the fuel can form another combustion front when it spontaneously reaches the ignition temperature due to heat and radiation. The combustion speeds of the flame fronts at these different points can be 300-350 m/s, and cylinder pressures may rise to as high as 9-12 MPa. At these high speed and pressure values, the flame fronts are damped by hitting each other or the walls of the combustion chamber. This damping not only causes a loss of energy but also increases the local heat conduction; as a result, engine performance decreases. This phenomenon is called knock in gasoline engines and is an undesirable situation. The chemical structure of the fuel has a considerable effect on the autoignition temperature. The octane number (ON) is defined as the fuel's resistance to knocking, that is, its resistance to self-ignition. The octane number is inversely proportional to the chain length of the fuel molecules: the shorter the molecular chain length of the fuel, the higher the octane number. However, the octane number is directly proportional to the number of branched side chains; the more branching in the molecular chain, the higher the octane number of the fuel, in other words, the higher its knock resistance. Generally, increasing the number of carbon atoms in the composition of the fuel lowers its knock resistance, whereas the octane numbers of cyclic molecules, naphthenes, alcohols, and aromatics are high. In order to scale the octane number of gasoline, two reference points are taken, representing the points 0 and 100: the octane number of normal heptane (C7H16) is taken as 0, while the octane number of isooctane (C8H18) is taken as 100. The reason these two fuels are taken as reference points is that both compounds have almost the same volatility and boiling point values. Fuels such as alcohols and benzenes, with octane numbers higher than the top of this scale, are also available. In gasoline engines, additives are used to increase the knock resistance of the fuel and thus prevent knocking. The two most commonly used methods for determining the octane number of fuels are the motor method and the research method; the octane numbers determined by these methods are the motor octane number (MON) and the research octane number (RON), respectively.
Table 2 gives the test conditions for determining the octane number of fuel [1,2,4,5].
Since the inlet air temperature of the MON method is higher than that of the RON method, the post-combustion temperature reaches higher values, and the fuel spontaneously ignites and knocks more readily. Therefore, the octane number obtained by the MON method is lower than that obtained by the RON method, because knock occurs at lower compression ratios under the MON test conditions. The difference between the values obtained by these two octane number determination methods is called the fuel sensitivity (FS). When the fuel sensitivity is between 0 and 10, the knock characteristic of the fuel does not depend on the engine geometry; if it is higher than these values, the knock characteristic of the fuel is highly dependent on the combustion chamber geometry of the engine.
Table 2. Test conditions for octane number measurement [4].
FS is calculated as in Eq. (3): FS = RON - MON. Combustion chamber geometry, turbulence, temperature, and inert gases are the parameters that affect the octane number. The octane number is highly dependent on the flame velocity in the air-fuel charge. As the flame velocity increases, the air-fuel mixture that is above the spontaneous ignition temperature burns immediately during the ignition delay. Thus, there is a direct correlation between the flame speed and the octane number, as a high flame speed allows the fuel to be consumed without knocking. Alcohols have high flame speeds, so their octane numbers are high. The ID period does not depend on the physical properties of the fuel, such as density and viscosity, in a hot engine at steady state; it is strongly dependent on the fuel chemistry. Therefore, additives such as alcohols or organic manganese compounds are added to increase the octane number of the fuel [4,5]. It is possible to work at higher compression ratios by increasing the octane number of fuels; a high compression ratio increases engine power and provides fuel economy [10].
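A small sketch of Eq. (3) and the geometry-dependence check described above; the 0-10 band is taken from the text, and the sample RON/MON values are illustrative only.

def fuel_sensitivity(ron, mon):
    """Fuel sensitivity FS = RON - MON (Eq. 3)."""
    return ron - mon

def knock_depends_on_geometry(ron, mon, threshold=10.0):
    """True if the knock behaviour is expected to depend strongly on chamber geometry."""
    return fuel_sensitivity(ron, mon) > threshold

print(fuel_sensitivity(91, 83))            # hypothetical gasoline -> FS = 8
print(knock_depends_on_geometry(91, 83))   # False: within the 0-10 band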
Diesel fuels are divided into two main categories, light diesel and heavy diesel fuels. The chemical formula of light diesel is approximately C12.3H22.2, while heavy diesel is represented as approximately C14.6H24.8. The molar masses of light and heavy diesel are approximately 170 and 200 g/mol, respectively. Viscosity, surface tension, and ignition tendency are important fuel property parameters of diesel fuels. Light diesel fuel has a lower viscosity and requires less pumping work; since low viscosity also reduces the surface tension of the fuel, the fuel forms smaller droplet diameters during injection. In contrast to gasoline engines, a high ignition tendency is desirable in diesel engines, since combustion in diesel engines is based on the spontaneous ignition of the air-fuel mixture. At this point the cetane number, which is a measure of the fuel's ignition ability, emerges as a fuel property; in other words, it is a quantity that characterizes the ignition delay period. Hexadecane (C16H34), a straight-chain fuel of the alkane group, is taken as the highest reference point of the cetane number scale, which is the measure of the ignition tendency. The other reference point is heptamethylnonane (HMN, C16H34), with an assigned cetane number of 15; alternatively, alpha-methylnaphthalene (C11H10) is accepted as the lowest reference point, with a cetane number of zero. First, the fuel with unknown cetane number is run in an engine with an adjustable compression ratio, and the test is carried out until the compression ratio at which knock first starts is determined for that fuel. Then, mixtures of the two reference fuels in various proportions are tested at this compression ratio until knocking begins. The percentage of hexadecane, at the moment of knock, in the mixture with heptamethylnonane or alpha-methylnaphthalene gives the cetane number of the measured fuel. Several empirical equations have been developed using the physical properties of the fuel, since engine tests are laborious and costly for determining the cetane number. These methods, which estimate the fuel's propensity to ignite, are called the cetane index, the aniline point, or the diesel index. Aniline is an aromatic compound that mixes very easily with compounds of its own group even at low temperatures, while it forms mixtures with alkanes (paraffins) with more difficulty. Therefore, hexadecane (C16H34), which belongs to the alkane group and has a high ignition tendency, has a high mixing temperature with aniline. To find the diesel index, the sample fuel is heated with an equal amount of aniline until all of the aniline dissolves in the fuel; the mixture is then cooled to allow the aniline to separate from the fuel. The temperature at which the aniline separates from the fuel is called the aniline point. The diesel index is then calculated from the aniline point and the API degree as specified in Eq. (4). The higher the diesel index value, the more alkane-like (paraffinic) the fuel is, and the higher its ignition tendency. Increasing the volatility of diesel fuels accelerates fuel evaporation and decreases viscosity; this is generally undesirable since it causes a reduction in the cetane number [1,2,4].
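Since Eq. (4) itself is not reproduced in the text, the sketch below assumes the conventional form of the diesel index, namely the aniline point in °F multiplied by the API gravity and divided by 100; treat this exact form, and the sample values, as assumptions.

def diesel_index(aniline_point_f, api_gravity):
    """Diesel index, assumed form of Eq. (4): aniline point (°F) x API gravity / 100."""
    return aniline_point_f * api_gravity / 100.0

print(diesel_index(160.0, 38.0))   # hypothetical diesel sample -> DI ≈ 60.8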
Some fuels commonly used in engines are presented in Table 3. Some of the important properties of fuels such as the closed formulas, molar weight, lower heating value and higher heating value, stoichiometric air/fuel and fuel/air ratios, evaporation temperature, motor octane number (MON), research octane number (RON), and cetane number are given.
The cetane index can be calculated from Eq. (5), which uses the temperatures at which 10, 50, and 90% of the fuel evaporates by volume during distillation, together with the density of the fuel. The values T10, T50, and T90 are the temperatures at which the fuel evaporates at volume ratios of 10, 50, and 90%, respectively, and B = exp[-0.0035(ρ - 850)] - 1, where ρ is the density in kg/m3 at 15°C. This formula is related to the cetane number, provided that cetane-improving additives are not added to the fuel; otherwise, the cetane number of doped fuels must be measured by engine tests. Another method used to calculate the cetane index is the empirical equation given in Eq. (6), which is calculated from some physical properties of the fuel [5]: SI = -420.34 + 0.016 G² + 0.192 G log10(T_gn) + 65.01 (log10 T_gn)² - 0.0001809 T_gn², (6) where G = (141.5/S_g) - 131.5 is the API degree of the fuel, and S_g and T_gn are the relative density and the boiling point temperature in °F, respectively.
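A direct transcription of the empirical cetane index of Eq. (6); here G is the API gravity computed from the relative density S_g, and T_gn is interpreted as the mid-boiling-point (50% recovered) temperature in °F, which is an assumption about the intended input.

import math

def cetane_index_eq6(s_g, t_gn_f):
    """Empirical cetane index from Eq. (6); s_g is relative density, t_gn_f the boiling temperature in °F."""
    g = 141.5 / s_g - 131.5
    log_t = math.log10(t_gn_f)
    return (-420.34 + 0.016 * g**2 + 0.192 * g * log_t
            + 65.01 * log_t**2 - 0.0001809 * t_gn_f**2)

print(round(cetane_index_eq6(0.84, 509.0), 1))   # hypothetical mid-boiling point ~265 °C -> index ≈ 50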
The semiempirical expression that predicts the ID duration based on the cetane number and other operating parameters is given in Eq. (7), where ID (°CA) is the ignition delay expressed in crankshaft angle degrees, E_A = 618,840/(cetane number + 25) is the apparent activation energy, R_u = 8.314 kJ/kmol·K is the universal gas constant, T_em (K) and P_em (bar) are the temperature and pressure at the beginning of the compression stroke, ε is the compression ratio, and k = c_p/c_v = 1.4 is the value used in air-standard cycle analysis. The ID can also be expressed in milliseconds for an engine running at n rpm, as in Eq. (8); since the crankshaft sweeps 0.006·n degrees of crank angle per millisecond, ID (ms) = ID (°CA)/(0.006 n) [4]. A low cetane number leads to an increase in the ID time, which in turn reduces the crank angle duration available for combustion. An increased ID time leads to the accumulation of more fuel in the combustion chamber than required, and this excess fuel causes sudden, large pressure rises at the onset of combustion. These sudden pressure rises cause mechanical stresses and harsh engine operation, which is known as diesel knock [2,4]. In brief, the cetane number and the octane number both describe the spontaneous ignition behaviour of fuels. A higher cetane number indicates that a diesel fuel ignites suddenly and easily, whereas a high octane number indicates the resistance of gasoline to sudden self-ignition. Generally, if the cetane number is high, the octane number is low; there is an inverse relationship between these two properties, so the cetane number is low if the octane number is high [5].
Table 3. Common fuels and their properties [4].
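A short numerical sketch of the activation-energy term and the crank-angle-to-millisecond conversion described above; the correlation of Eq. (7) itself is not reproduced here, so only the auxiliary quantities are shown, with illustrative input values.

def activation_energy(cetane_number):
    """Apparent activation energy E_A = 618,840 / (CN + 25), consistent with R_u = 8.314 kJ/kmol·K [4]."""
    return 618840.0 / (cetane_number + 25.0)

def id_ms_from_ca(id_ca_deg, rpm):
    """Convert ignition delay from crank-angle degrees to milliseconds (Eq. 8): 0.006*n degrees per ms."""
    return id_ca_deg / (0.006 * rpm)

print(activation_energy(45))       # ≈ 8840.6 for a cetane number of 45
print(id_ms_from_ca(9.0, 2000))    # 9 °CA at 2000 rpm -> 0.75 ms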
Natural gas and liquefied petroleum gas (LPG)
Natural gas is a colorless, odorless, and tasteless gas mixture that is lighter than air and contains methane, ethane, propane, pentane, and hexane. It also contains small amounts (0-0.5% by volume) of carbon dioxide, nitrogen, helium, and hydrogen sulfide. Generally, this gas composition contains about 70-90% methane, 0-20% ethane, and slightly less propane than ethane. The natural gas used in the market is refined, separated from the other gases, and used as almost pure methane (CH4) [5]. Natural gas can be stored as compressed natural gas (CNG) at high pressures such as 16-25 MPa, or as liquefied natural gas at low pressures such as 70-210 kPa and very low temperatures around -160°C. Stored by these methods, natural gas is generally used as compressed natural gas (CNG) in internal combustion engines with a single-point injection system. The single-point injection system allows the most efficient use of natural gas, as it provides the longer mixing time that natural gas requires [4]. Table 4 shows the compounds that form natural gas and their boiling points.
There are dual-fuel diesel engines in which natural gas and diesel fuel are used together. Natural gas is supplied to the combustion chamber at approximately sonic velocity, which leads to high turbulence and high flame speeds. Natural gas has lower combustion temperatures than diesel fuel, and with late injection the combustion chamber temperature can be reduced further. The decrease in combustion chamber temperature significantly reduces NOx formation. In addition, the low carbon content of natural gas leads to lower CO2 emissions and much less solid particulate matter [4].
Landfill gas engines, which convert methane gas into energy, are among the most common natural gas applications. Gases produced in landfills generally contain between 45 and 65% methane. In addition to methane, these landfill gases contain highly polluting constituents of variable quality, such as fluorine, chlorine, and silicon compounds, as well as solid particles. Because of the corrosive and abrasive effects of these constituents in particular, special piston and valve materials must be used in the engines.
Table 4. Compounds and boiling points in natural gas [5].
The thermal value of natural gas is between 33.4 and 40.9 MJ/m3. CO2, H2O, and 891 kJ of energy are obtained when 1 mol of methane gas is fully combusted; the combustion equation of 1 mol of methane is given in Eq. (9): CH4 + 2 O2 → CO2 + 2 H2O + 891 kJ. The high flame velocity and the octane number of 120 of natural gas enable it to be used at high compression ratios, which makes natural gas a good gasoline engine fuel. Furthermore, natural gas has low exhaust emissions. In addition, an important advantage of natural gas is that it can be produced from sources such as coal, which has large reserves all over the world. However, since natural gas is in the gaseous state and has a low energy density, its low volumetric efficiency leads to reductions in engine performance. The disadvantages of this fuel are that it requires high-pressure fuel storage tanks, refuelling takes time, and the composition of natural gas is variable [4]. Table 5 presents the properties of natural gas and a comparison of its thermal values with those of other fuels.
LPG, liquefied petroleum gas, is produced as a by-product of natural gas processing or during the distillation of oil in refineries. In general, it contains 90% propane, 2.5% butane, and small amounts of ethane and propylene together with heavier hydrocarbons. The propane and butane ratios in LPG may vary according to the region and the area of use [5]. In recent years, propane-butane mixtures in different ratios (80% propane/20% butane, 70% propane/30% butane, 50% propane/50% butane) have been tested as fuel in vehicles. The LPG used in Turkey consists of 30% propane and 70% butane. LPG is the most preferred fuel type after gasoline and diesel, since it is much easier to store and transport than natural gas [1,4].
LPG is a colorless, odorless, nontoxic, and easily flammable gas. It is a mixture of propane and butane, which are gases at normal pressures and temperatures; however, LPG is a liquid at moderate pressure. It is about two times heavier than air and about half the weight of water; therefore, LPG sinks to the floor in case of a leak. LPG in the liquid state expands to approximately 273 times its liquid volume when it passes into the gaseous state. This sudden expansion is accompanied by a sudden temperature drop caused by the very rapid evaporation of the liquid fuel; since this can cause cold burns, the gas should not be touched with bare hands. Although LPG is a noncorrosive gas, it can dissolve paint and oil and can also swell natural rubber materials, causing them to lose their properties. Therefore, the use of LPG-compatible materials in autogas systems is very important for safety [1,5]. LPG systems are widely used in gasoline vehicles. In this respect, the comparison of the physical and chemical properties of propane and butane, which are the components of LPG, with gasoline is given in Table 6.
Table 5. The properties of natural gas and its comparison with other fuels [11].
Conclusions
Fossil-based fuels such as diesel, gasoline, natural gas, and LPG have been commonly used as engine fuels. However, internal combustion engines require different fuel types depending on their thermodynamic cycles; therefore, the required fuel properties differ from one engine to another. For example, gasoline fuels should have a high ignition resistance, while diesel fuels should self-ignite readily. For these reasons, hydrocarbon fuels can be converted by chemical processes, depending on the engine type, to improve their fuel properties. Thus, new fuel formulations can be obtained, and various fuel properties improved, by converting hydrocarbons into one another via chemical processes.
Diesel and gasoline engine fuel properties such as cetane number, octane number, viscosity, and density can be improved by fuel additives. Alternative fuels are among the most promising fuel additives for the future. The high octane number and low density of alcohols improve fuel properties, increasing the octane number of gasoline and decreasing the viscosity and density of diesel fuel. In addition, the cetane number of diesel fuel can be improved by biodiesel, which has a high cetane number.
Table 6. Properties of LPG and gasoline [1].
Nomenclature
| 2020-03-05T10:38:47.087Z | 2020-02-26T00:00:00.000 | {
"year": 2020,
"sha1": "58e4e4b2205130726ef8f093f35c8f69c5e520a1",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/69204",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "6e397c30b6383ea24c07886ff639507f88f276de",
"s2fieldsofstudy": [
"Engineering",
"Chemistry"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
6279803 | pes2o/s2orc | v3-fos-license | Defining hospitalist physicians using clinical practice data: a systems-level pilot study of Ontario physicians.
BACKGROUND
Hospitalists have become dominant providers of inpatient care in many North American hospitals. Despite the global growth of hospital medicine, no objective method has been proposed for defining the hospitalist discipline and delineating among inpatient practices on the basis of physicians' clinical volumes. We propose a functional method of identifying hospital-based physicians using aggregated measures of inpatient volume and apply this method to a retrospective, population-based cohort to describe the growth of the hospitalist movement, as well as the prevalence and practice characteristics of hospital-based generalists in one Canadian province.
METHODS
We used human resource databases and financial insurance claims to identify all active fee-for-service physicians working in Ontario, Canada, between fiscal year 1996/1997 and fiscal year 2010/2011. We constructed 3 measures of inpatient volume from the insurance claims to reflect the time that physicians spent delivering inpatient care in each fiscal year. We then examined how inpatient volumes have changed for Ontario physicians over time and described the prevalence of full-time and part-time hospital-based generalists working in acute care hospitals in fiscal year 2010/2011.
RESULTS
Our analyses showed a significant increase since fiscal year 2000/2001 in the number of high-volume hospital-based family physicians practising in Ontario (p < 0.001) and associated decreases in the numbers of high-volume internists and specialists (p = 0.03), where high volume was defined as ≥ 2000 inpatient services/year. We estimated that 620 full-time and 520 part-time hospital-based physicians were working in Ontario hospitals in 2010/2011, accounting for 4.5% of the active physician workforce (n = 25 434). Hospital-based generalists, consisting of 207 family physicians and 130 general internists, were prevalent in all geographic regions and hospital types and collectively delivered 10% of all inpatient evaluation and care coordination for Ontario residents who had been admitted to hospital.
INTERPRETATION
These analyses confirmed a substantial increase in the prevalence of general hospitalists in Ontario from 1996 to 2011. Systems-level analyses of clinical practice data represent a practical and valid method for defining and identifying hospital-based physicians.
Since the first hospitalist programs were established in the late 1990s, the hospitalist movement has grown rapidly in terms of the number of physicians specializing in hospital medicine, the proportion of inpatients cared for by hospital-based physicians, and the number of hospitals employing formal hospitalist groups. 1-5 Although several studies have reported on the demographic characteristics, prevalence, and outcomes of care of US hospitalists, 1,3,4,6,7 fundamental debate continues within the medical community as to what hospitalists are, how they should be defined, and what (if anything) distinguishes them from other hospital-based specialists.
The Society of Hospital Medicine has defined a hospitalist as "a physician who specializes in the practice of hospital medicine," which is in turn defined as "a medical specialty dedicated to the delivery of comprehensive medical care to hospitalized patients." 8 While these definitions identify the hospitalist's professional focus, they offer little guidance on what characteristics differentiate the clinical hospitalist from other practitioners. As a consequence, the term "hospitalist" has become colloquialized and is now commonly used to refer to a general internist or family doctor who works in a hospital. However, there are exceptions to this general rule, and some hospitalists are now specializing, with new terms like "neurohospitalist," "surgical hospitalist," and "OB-GYN hospitalist" becoming increasingly commonplace. 9 Two approaches have traditionally been applied when identifying hospitalists in comparative evaluations. The first uses voluntary surveys of institutional staff or professional society membership to estimate hospitalist prevalence. With this approach, the responding physician self-identifies as a hospitalist, but this method is impractical and imprecise for researchers and policymakers. Lacking a formal definition of the clinical hospitalist practice, any physician can choose to call himself or herself a hospitalist. Low response rates for such surveys have made it difficult to assess the population prevalence of hospital-based physicians, and the clinical workloads of practitioners are seldom explored. Furthermore, few countries offer certification or training in hospital medicine. Consequently, administrative databases rarely include physician-specialty codes that categorize physicians as hospitalists.
The second approach uses a functional definition, categorizing hospitalists by the amount of inpatient care provided. Most often a threshold is established whereby hospitalists are identified and classified on the basis of a certain proportion of each physician's practice being generated from the care of hospital inpatients (e.g., ≥ 90%). These definitions are more restrictive, limiting the category of hospitalists to direct providers of care. The associated methods are also problematic. Few authors have discussed the validity of proportional metrics, assessing whether the denominators used in their analyses have captured minimum volumes indicative of active practice (e.g., a physician with 90% inpatient practice may be classified as a hospitalist, even if he or she saw only 5 patients in the timeframe under investigation). Similarly, few, if any, authors have acknowledged the variability that exists between practice styles, adopting thresholds that can accommodate both full-time and part-time practitioners. As a result, high-volume part-time hospitalists who fall below the proportional thresholds are categorized in the comparison group alongside low-volume community providers, which mutes the effects of a hospitalist model of concentrated care.
Hospital medicine sits at a pivotal intersection for the way inpatient care is funded and delivered across the globe. With several North American, European, Asian, and Australasian governing bodies introducing activity-based funding models that reward hospitals for improved productivity and/or penalize those with lower than expected outcomes, hospital physicians and their institutions must become accountable for the quality of care and services they deliver. If the eventual goal in hospital medicine is to monitor and improve performance, a standardized, systems-level method is needed for defining the clinical hospitalist, independent of self-identification.
Canadian hospitalists emerged alongside their US counterparts after cutbacks to physician reimbursement in the mid-1990s sparked an exodus of primary care practitioners from the hospital setting. 2,10-12 Canada is unique within the hospitalist movement in that the majority of this country's hospitalists are trained as general practitioners or family physicians (GP/FPs) as opposed to specialists. 2,3,13 The hospitalist career path is attractive to GP/FPs, as it provides an opportunity to practice higher-acuity medicine while earning a competitive compensation exceeding that of an office-based practice. However, hospital medicine is not recognized as a distinct area of focused practice. There are no certification or training guidelines for Canadian hospitalists, and no method (other than self-identification) exists of distinguishing hospital-based from office-based practitioners. 12 As a result, the population prevalence of hospitalists is largely unknown and almost certainly under-reported, which makes hospital medicine an ideal setting to pilot the application of a functional volume framework.
In this article, we propose a novel method of defining hospital-based physicians that uses the volume of inpatient care combined with additional practice data to measure a physician's involvement in the provision of hospital care. We then apply this method at the systems level to describe the growth of the hospitalist movement, as well as the prevalence and characteristics of hospital-based physicians, in Ontario, Canada, over a 15-year timeframe.
Methods
Study population. We constructed a retrospective population-based sample consisting of all clinically active physicians who practised in the province of Ontario, Canada, between 1 April 1996 and 31 March 2011 (fiscal 1996/1997 to fiscal 2010/2011) and who submitted claims for professional fees to the Ontario Health Insurance Plan (OHIP), a publicly funded plan that covers the cost of basic health care, including hospital care, to all permanent residents of the province. The cohort was identified using the Institute for Clinical Evaluative Sciences (ICES) Physician Database, a human resources database containing validated demographic, certification, and practice characteristics for all physicians licensed in the province since 1992. Active physicians were defined yearly according to guidelines developed by the Ontario Physician Human Resources Data Centre, which include maintaining an active licence with the College of Physicians and Surgeons of Ontario; being 25 to 85 years of age with a practice located within the province; having an OHIP billing number with active insurance claims; not being engaged in postgraduate studies; and not being identified as retired or inactive because of disability, leave, sabbatical, or other reason. 14 Physicians were allowed to enter and leave the cohort throughout the 15-year observation window; however, once a physician was deemed active in a given fiscal year, it was assumed that he or she remained active throughout the fiscal period.
Outcome measures.
For each year, we extracted physicians' demographic, training, and practice characteristics from the ICES Physician Database. Each physician's medical specialty was determined by combining data on both certified and functional specialties, where certified specialty captured the most recent certification information on file and functional specialty reflected the services that the physician actually billed for in his or her practice, derived from aggregated OHIP billings and validated through periodic telephone follow-up with random physician samples. In cases of discrepancy, the physician was assigned to the medical specialty recorded most often in his or her OHIP claims for the particular year, on the assumption that a physician would not be allowed to bill under a specialty code unless licensed to do so. Pediatric surgeons and psychiatrists were combined with the corresponding adult practitioners, and diagnostic radiology, nuclear medicine, and all laboratory specialties were considered together (as "diagnostics").
Physicians' demographic characteristics were linked to OHIP billings through an encrypted identifier to determine the annual number of patient evaluation-and-management (E&M) claims billed in relation to the location of care delivery (inpatient setting, emergency department, office, long-term care facility, or the patient's home). An E&M claim was defined as any clinical visit, consultation, assessment, reassessment, death pronouncement, case conference, counselling session (patient, family, or group), or psychotherapy session billed to OHIP for an Ontario resident. Claims were used as a proxy indicator of the time that physicians spent in direct clinical care and case management. From the data, 3 measures of physicians' annual inpatient workloads were tabulated: (1) the total number of E&M claims billed for inpatient care, (2) the proportion of total claims generated from the care of hospital inpatients (inpatient claims/total claims), and (3) the total number of calendar days with OHIP billings for inpatient care. Because the primary role of the hospitalist is to provide direct clinical care and care coordination, procedure volumes were not explored.
The number of unique inpatients seen by each physician and the proportion of inpatients with whom physicians had a previous medical relationship (defined as patients for whom the physician had billed at least one E&M claim within 24 months before the date of admission) were determined for the most recent fiscal year (2010/2011). Characteristics of the hospitals where physicians billed the majority of inpatient care were extracted from the Ontario Hospital Reporting System, a database maintained by the Canadian Institute for Health Information that contains annual statistical information on all acute care hospitals operating in the province.
Definition of hospital-based physicians. In Table 1 we propose a conceptual framework that uses annual inpatient volumes and additional practice data to define and delineate hospital-based physicians. We began with a functional definition validated by Kuo et al., 1 identifying all active physicians in each fiscal year who had a minimum total volume of 100 E&M claims and for whom at least 80% of total claims were generated from the care of hospital inpatients. We then plotted the frequency distribution of active physicians by year and medical specialty according to the following 4 variables: (1) total number of inpatient claims billed, (2) proportion of total claims generated from the care of hospital inpatients, (3) the relationship between total claims volume and the proportion of claims billed for inpatient care, and (4) the relationship between inpatient claims volume and the proportion of claims billed for inpatient care. In examining variables 3 and 4, two concerns became apparent with the functional definition proposed by Kuo et al. 1 : first, total claims volume was not a specific metric, which meant that too many low-volume physicians were categorized as hospitalists (false positives); and second, the definition did not discriminate between full-time and part-time practitioners. Part-time practitioners with moderately high inpatient volumes practising exclusively in the hospital would be correctly classified as hospitalists, whereas physicians with equivalent inpatient volumes but whose practices were split between hospital and community (e.g., 70% inpatient, 30% long-term care) would incorrectly fall in the comparison group. We therefore updated the definition of Kuo et al., 1 replacing total claims volume with inpatient claims volume and distinguishing full-time from part-time but strictly hospital-based physicians on the basis of their volume of inpatient care provision. We then proposed 2 novel classifications: mixed-practice physicians (physicians with average-to-high inpatient volumes whose clinical practice is split between inpatient and outpatient care) and comprehensive community practitioners (community-based physicians who provide a full range of medical services including hospital care) (see online Appendix A for an evaluation of concordance between the 2 frameworks). The proposed thresholds were established by examining the distributions of the 4 variables listed above, looking for points at which consistent changes in physician density formed over time, indicated by an increasing frequency of high-volume practitioners and a consistent density of mid-volume practitioners (see online Appendix B and online Appendix C for selected distributions).
Table 1. Conceptual framework for defining community and hospital-based physicians using information from administrative databases
Aspect of framework: Description of practice
- Comprehensive community practitioner: Physicians practise primarily within the community but provide occasional inpatient care. Physicians also provide long-term care, emergency, and/or home care services as appropriate.
- Mixed-practice physician: Full-time practice is split between outpatient and inpatient care.
- Part-time hospital-based physician: Majority of practice is inpatient evaluation and management, but physician works at a part-time equivalency. Inpatient practice may be general or specialty-based.
- Full-time hospital-based physician: Majority of practice is inpatient evaluation and management on a full-time basis. Inpatient practice may be general or specialty-based.
Aspect of framework: Scope of inpatient practice
- Comprehensive community practitioner: Hospital inpatients are enrolled in the physician's primary practice either individually or within a team; inpatients are generally low-risk medical and ALC patients.
- Mixed-practice physician: Hospital inpatients often come from outside the physician's primary practice through rotating call; inpatients may be general, complex medical, and ALC patients.
- Hospital-based physician (part-time or full-time): Physicians typically have no previous relationship with hospital inpatients; inpatients are general, complex medical, and ALC patients; physicians are often involved in comanagement of specialty patients.
Aspect of framework: Compensation mechanism
- Comprehensive community practitioner: Fee-for-service billing to insurance plans; physicians have no direct financial relationship with hospitals.
- Mixed-practice physician: Fee-for-service billing to insurance plans. Hospitals may "top up" physicians' fee-for-service billings.
- Hospital-based physician (part-time or full-time): Fee-for-service billings plus negotiated salary stipend or alternative funding plans; hospitals may pay a portion or all of the physicians' income from their operating budgets. Physicians often work as independent contractors to individual hospitals.
Aspect of framework: Annual inpatient volume*
- Comprehensive community practitioner: < 30% of clinical volume is hospital-based, and total annual volume indicates an active community practice (> 50% of total volume is generated from office, nursing home, or home care; total volume ≥ 100 services; inpatient volume ≥ 10 services).
- Mixed-practice physician: 30%-79% of total volume is hospital-based, and inpatient volumes reflect an active and substantial inpatient practice (≥ 500 inpatient services annually).
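To make the framework concrete, the volume rules above can be expressed as a short classification routine. The sketch below is illustrative only: the activity thresholds (≥ 100 total claims, ≥ 10 inpatient claims, ≥ 500 inpatient services for mixed practice, and the < 30% and 30%-79% hospital-based shares) are taken from the text, the ≥ 80% share for hospital-based physicians follows from the 30%-79% mixed-practice band, and the cut-point separating part-time from full-time hospital-based physicians (FULL_TIME_MIN) is a placeholder assumption, since its exact value is not reproduced in this excerpt.
```python
# Illustrative sketch: classify one physician-year record using the volume
# rules described in Table 1. FULL_TIME_MIN is a placeholder assumption; the
# published framework's exact part-time/full-time cut-point is not quoted here.

MIN_TOTAL = 100            # minimum total E&M claims for an active practice
MIN_INPATIENT = 10         # minimum inpatient claims to be considered at all
MIXED_MIN_INPATIENT = 500  # mixed practice requires a substantial inpatient volume
HOSPITAL_BASED_SHARE = 0.80
FULL_TIME_MIN = 2000       # assumed cut-point between part- and full-time

def classify(total_claims: int, inpatient_claims: int, community_claims: int) -> str:
    """Return a practice category for one physician-year (community_claims =
    office, nursing home, and home care claims; an assumed input field)."""
    if total_claims < MIN_TOTAL or inpatient_claims < MIN_INPATIENT:
        return "excluded (minimally active)"
    share = inpatient_claims / total_claims
    if share < 0.30 and community_claims / total_claims > 0.50:
        return "comprehensive community practitioner"
    if 0.30 <= share < HOSPITAL_BASED_SHARE and inpatient_claims >= MIXED_MIN_INPATIENT:
        return "mixed-practice physician"
    if share >= HOSPITAL_BASED_SHARE:
        return ("full-time hospital-based physician"
                if inpatient_claims >= FULL_TIME_MIN
                else "part-time hospital-based physician")
    return "other / unclassified"

print(classify(total_claims=2500, inpatient_claims=2300, community_claims=100))
```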
Statistical analysis.
After describing the characteristics of physicians who provided inpatient care in Ontario hospitals by year, we plotted the distribution of active physicians according to the annual number of inpatient claims billed by year and medical specialty. To confirm whether upward or downward trends in inpatient volumes were significant over time, the proportions of physicians achieving each billing level (i.e., ≥ 2000 inpatient claims) in fiscal year t were entered into separate autoregressive models by specialty, with a lag set to 1. This model can be presented as logit(ρ_t) = α + β_1 ρ_{t-1} + e_t, where ρ_t is the proportion of physicians in a given specialty achieving each billing threshold in fiscal year t, β_1 confirms the significance of volume changes over time, ρ_{t-1} is the proportion of physicians achieving the billing threshold in the previous year, and e_t is the error term. Autoregressive models were needed to adjust for the autocorrelation of residuals because the physicians' inpatient volume in a given year was found to be dependent on inpatient volume in the previous year. We then used the inpatient volumes billed in 2010/2011 to describe the current population of hospital-based physicians according to the functional categories proposed in Table 1, excluding practitioners with low total billings (< 100 total claims) and low inpatient billings (< 10 inpatient claims). SAS software, version 9.2 (SAS Institute Inc., Cary, N.C.), was used for analyses. Ethics approval was obtained from Sunnybrook Health Sciences Centre and from the Health Sciences Research Ethics Board at the University of Toronto.
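The autoregressive specification can be re-expressed outside SAS; the snippet below is a hypothetical Python illustration of the same model, fitted to a made-up series of annual proportions (it is not the authors' code, and the data are placeholders).
```python
# Hypothetical re-expression of the lag-1 autoregressive model
# logit(p_t) = alpha + beta_1 * p_{t-1} + e_t (the original analysis used SAS).
import numpy as np
import statsmodels.api as sm

# Made-up annual proportions of physicians reaching a billing threshold.
p = np.array([0.009, 0.010, 0.012, 0.013, 0.015, 0.017, 0.019,
              0.020, 0.021, 0.022, 0.023, 0.024, 0.024, 0.025, 0.025])

y = np.log(p[1:] / (1 - p[1:]))   # logit(p_t) for years 2..T
X = sm.add_constant(p[:-1])       # alpha + beta_1 * p_{t-1}

model = sm.OLS(y, X).fit()
print(model.params)               # [alpha, beta_1]
print(model.pvalues)              # significance of the lagged term
```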
Results
Descriptive characteristics of physicians providing inpatient care in Ontario hospitals are shown in Table 2 for selected fiscal years. In 1996/1997, three-quarters of active physicians working in the province provided inpatient evaluation-and-management services (n = 15 275 of 19 922; 76.7%), and almost half of all inpatient physicians were trained in family medicine (n = 7418; 48.6%). Beginning in 1998, the proportion of active physicians providing inpatient services began to decline, and this trend has continued each fiscal year since. Although many specialties experienced an exodus of practitioners from provision of hospital care, the largest declines have occurred among GP/FPs (Table 2, Figure 1). In 1996/1997, nearly three-quarters of active GP/FPs provided some level of inpatient care to hospital inpatients, but by 2010, fewer than half continued to do so (71.0% v. 47.2%). Figure 1 shows the distribution of GP/FPs, general internists, and internal medicine specialists according to the annual volume of inpatient claims billed over time. Since 1997/1998, the proportion of GP/FPs providing low-to-no hospital care (< 250 inpatient claims/ year) increased from 70.7% to 83.5% (p < 0.001; Figure 1A). In turn, high-volume GP/FPs (≥ 2000 inpatient claims/year) filled the resulting gap in inpatient care provision, increasing in prevalence from 0.9% of active GP/FPs in 1996/1997 to 2.5% in 2010/2011, with growth beginning in 2000 (p < 0.001). Conversely, the percentages of high-volume general internists and specialists have decreased over time (p = 0.03; Figures 1B, 1C), which may be indicative of lighter inpatient workloads or more balanced distributions between inpatient and outpatient practices.
Despite large declines in the number of GP/FPs providing hospital care over time, the total volume of inpatient services delivered by these practitioners across the province has dropped only minimally, accounting for 32.1% of total provincial inpatient E&M claims in 1996/1997, just under 30% in the period from . Although the average volume of services has increased for those GP/FPs who have maintained hospital privileges, median volumes have decreased, which suggests that rising inpatient caseloads pertain only to practitioners to the right of the median (i.e., the high-volume GP/FP hospitalists). Figure 2 shows the current distribution of inpatient care physicians by medical specialty and annual volume. Overlaid is the cumulative distribution of total inpatient E&M claims billed in Ontario to depict the relationship between workforce density and service volume. In 2010/2011, a total of 1143 high-volume physicians (≥ 2000 claims; 6.8% of inpatient physician workforce) delivered 42% of all inpatient E&M services in the province of Ontario. Conversely, 8600 low-volume physicians (< 250 claims; 51.1% of inpatient physician workforce) billed just 6% of provincial claims.
Applying the clinical volume algorithms from Table 1, we estimated that 620 full-time and 520 part-time hospital-based physicians were working in Ontario in fiscal year 2010/2011, of whom 548 (48.1%) were psychiatrists, 207 (18.2%) were GP/FPs, and 130 (11.4%) were general internists. The remaining physicians were internal medicine specialists (n = 105; 9.2%), anesthesiologists (n = 83; 7.3%), pediatricians (n = 43; 3.8%), and surgeons (n = 24; 2.1%). The majority of the 2164 mixed-practice physicians were internal medicine specialists (n = 645; 29.8%), psychiatrists (n = 426; 19.7%), and surgeons (n = 303; 14.0%), while comprehensive community practitioners were primarily GP/FPs (n = 2320 of 4479; 51.8%). Table 3 presents the demographic and practice characteristics of the hospital-based GP/FPs and general internists, herein referred to as "general hospitalists," with data for mixed-practice and comprehensive community practitioners provided for comparison (data for additional specialties are available by request to the authors). The hospitals where full-time general hospitalists (Table 3) billed the majority of their inpatient services in 2010/2011 were compared against 62 hospitals with and 101 hospitals without publicly disclosed hospitalist programs. The algorithm correctly identified 90% of hospitals known to employ hospitalists (specificity 98%, positive predictive value 97%). All of the false negatives (n = 6) were small community hospitals that had introduced hospitalist programs partway through the 2010/2011 fiscal year; the 2 false positives were large academic hospitals with general medicine teaching wards.
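These validation figures can be reproduced from the counts reported above; the short calculation below derives the true-positive and true-negative totals from the stated false negatives and false positives and recovers the quoted sensitivity, specificity, and positive predictive value.
```python
# Worked check of the institution-level validation metrics, using only the
# counts quoted in the text; true-positive and true-negative totals are derived.
with_programs, without_programs = 62, 101
false_negatives, false_positives = 6, 2

true_positives = with_programs - false_negatives      # 56
true_negatives = without_programs - false_positives   # 99

sensitivity = true_positives / with_programs                  # about 0.90
specificity = true_negatives / without_programs               # about 0.98
ppv = true_positives / (true_positives + false_positives)     # about 0.97

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, PPV={ppv:.2f}")
```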
Interpretation
To our knowledge, this is the first study to propose a functional framework for defining and delineating physicians' inpatient practices on the basis of clinical volume of inpatient care. Our definition of hospitalist practice aligns with the functional approach of Kuo et al. 1,[15][16][17][18][19][20][21][22] but improves the administrative methodology by adding a continuous measure of inpatient volume, which allowed us to differentiate providers by their daily clinical workloads. In presenting this framework, our intent is not to suggest that these thresholds are exact or concrete, but rather to provide a descriptive structure that can accommodate the variety of practice styles and medical specialties that exist in hospital medicine. In doing so, we aim to move the methodology toward more objective and dynamic definitions of hospitalist practice, whereby clinical inpatient volumes can be analyzed as the primary predictor of physician practice and performance, accounting for additional provider characteristics, such as medical specialty, as desired. By examining the quality of general inpatient care as a function of a physician's annual case volume, we can extend the application of the hospitalist literature to additional acute care delivery models around the world that have instituted parallel focused-inpatient practices without necessarily establishing formalized hospitalist programs. The volume metrics and descriptive variables used in this study are simple to derive and are often captured at the population level through insurance billings and/or service utilization databases. This is also the first study to describe the prevalence and characteristics of Ontario general hospitalists using systems-level data and to describe the emergence of hospital medicine and its impact on the provision of hospital care by other inpatient physicians. By examining changes in physician billing volumes over time, clinical practice data confirmed the introduction of GP/FP hospitalists to Ontario in the early 21st century and significant growth in the number of full-time general hospitalists practising each fiscal year since. Our estimates for the current number of hospitalists in practice vastly exceed those reported by the Canadian Society of Hospital Medicine based on its voluntary membership survey (n = 110), 13 which confirms our premise that self-reporting as a hospitalist underestimates the functional prevalence of hospital-based practitioners. Our demographic data for general hospitalists are consistent with those reported elsewhere. 2,3,13 For ethical reasons we were unable to link de-identified administrative billings to a known cohort of hospitalist physicians to validate the inpatient volume thresholds proposed in our functional framework. This remains an important step in creating and refining a clinical definition of hospitalist practice. Despite this limitation, we were able to define and characterize a distinct cohort of general physicians who functionally devoted the majority of their practice to the care and management of hospital inpatients. We were able to validate our definitions at the institutional level with high precision and good sensitivity. Our definitions also had face validity triangulated across the 3 clinical volume metrics. In addition, we were able to describe trends in inpatient volume only among fee-for-service physicians, who account for about 90% of physicians working in Ontario. 
It is unlikely that this limitation affected our calculation of inpatient volumes or hospitalist estimates, as the majority of hospital services for general practitioners are still remunerated through fee-for-service billings. Alternative payment plans are used primarily to reimburse community-based physicians and were reported to be uncommon among hospitalists responding to the Canadian Society of Hospital Medicine survey. 13 Finally, this analysis focused exclusively on direct clinical care and case management; procedure volumes were not explored. This distinction resulted in some hospital-based specialties (surgery, anesthesiology, obstetrics and gynecology) having lower inpatient volumes than might have been expected. In many instances, these subspecialty inpatients are managed or comanaged by general hospitalists, which would reduce specialists' inpatient E&M claims to those immediately preceding or following a procedure.
When we replicated the functional definition of Kuo et al. 1 with 2010/2011 OHIP claims data using a minimum volume of 100 E&M claims and an 80% inpatient practice ratio, prevalence estimates of general hospitalist practitioners were overinflated by 17%, capturing 67 physicians with low inpatient volumes reflecting minimally active practices. More importantly, the Kuo definition ignored a large segment of mixed-practice generalists (n = 512) whose clinical volumes and workload appeared to parallel if not exceed those of part-time hospitalists (Table 3). In a comparative evaluation, these physicians would be classified into the reference category, muting any associations that might ultimately be driven by clinical volume or experience, a well-established determinant of outcomes in health care delivery. 23 To our knowledge, the relationship between clinical inpatient volume and outcomes of care has not been assessed. Inpatient physicians are unified by the common goal of caring for hospital inpatients, and it is that professional focus which defines all practitioners, irrespective of medical specialty. As general and specialty hospitalists continue to grow in number across the globe, continuous metrics of clinical volume reflecting the dynamic continuum of inpatient practice may be advantageous for defining, identifying, and monitoring hospital-based physicians and their performance. By using the definitional framework proposed in this study, researchers can begin to test structural differences between inpatient delivery models, exploring which aspects of physician care (clinical experience, medical training, or a combination of both) correlate with changes in the processes of care delivery that in turn help to drive improvements in operating efficiency and clinical outcomes.
Contributors: All authors participated in the conception and design of this study. Heather White conducted the analysis and drafted the manuscript. All authors participated in the interpretation of the data and successive revisions and improvements of the manuscript, and all approved the version submitted for publication. The corresponding author (Heather White) had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. | 2018-04-03T01:23:20.891Z | 2013-09-16T00:00:00.000 | {
"year": 2013,
"sha1": "ae3333ed338e89a11ee964e71c1cf1b10fcffbbf",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "ae3333ed338e89a11ee964e71c1cf1b10fcffbbf",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
246030549 | pes2o/s2orc | v3-fos-license | Modernization of the automated control system for compound feed transfer
Rational use of the fodder supply and enrichment of fodder with probiotics and other nutrients can increase the efficiency of livestock and poultry farming. The increasing demand for compound feed stimulates not only the creation of new production enterprises but also the intensification of production at existing ones. A typical mistake when introducing automation systems at animal feed enterprises, in Russia and elsewhere, is to automate the main production process while leaving the auxiliary processes of raw material loading and finished product shipment at the level of mechanization. Computer programs that implement the required control algorithms can be written in universal or special-purpose programming languages. As a result of this work, the general algorithms for managing compound feed transfer at the finished product area were structured. The main subtasks were highlighted, and solution algorithms were developed for them. Using the algorithms for these subtasks made it possible to form a general algorithm for managing compound feed transfer at the finished product area, and an algorithm for the automated control of feed distribution to consumers was obtained.
Introduction
Compound feeds for farm animals usually include refined and milled feed mixtures of plant and animal origin.
For enrichment, vitamins, micro- and macroelements, enzymes, and other components necessary for the normal growth and development of farm animals are added to them.
Their use not only increases animal productivity (typically by 20-30%) but also helps prevent morbidity.
Rational use of fodder resources and enrichment of feed with probiotics and other nutrients can increase the efficiency of livestock and poultry farming [1][2][3][4].
Global production of compound feeds is showing steady growth. The only exception was a 1% decline in 2019, caused by the African swine fever epidemic and the resulting sharp decrease in pig numbers, especially in the Asia-Pacific region.
The constant increase in the production volume of compound feeds stimulates not only the creation of new enterprises but also the intensification of production at existing ones through the automation of their main and auxiliary technological processes. Many of these enterprises were built during the Soviet era and reflect the level of mechanization and automation of that time, whereas the theory and practice of process automation have advanced considerably over the past 30-40 years.
The introduction of technical and technological innovations makes it possible to increase production efficiency [5][6][7][8][9][10]. When automation systems are introduced at animal feed enterprises in Russia, a typical mistake is to automate the main production process while keeping the auxiliary processes of raw material loading and finished product shipment at the level of mechanization. As a result, it is these auxiliary areas that become bottlenecks and limit the intensification of production.
Materials and methods
As noted above, the task is to develop a structure for the information subsystem of the finished product area that contains all the necessary information on the current state of its actuating devices. Let us first introduce the necessary notation.
Let us denote the finished product hoppers by BN, their numbers in the automated process control system of the finished product section (ACS TP FPS) by BNN, and the total number of finished product hoppers by BNUM.
The actuating devices are the following (Figure 1). Consider each type of actuating device and the proposed representation of its current state in the information subsystem. I. Bucket chains NR (NRN = 1, …, NRNUM). 1. The logical attribute NRTS is introduced to indicate the current technical serviceability of a bucket chain: it is equal to 0 if the bucket chain has failed and 1 if it is technically serviceable.
2. The logical attribute NREM is introduced to indicate the current employment of a bucket chain in a loading operation: it is equal to 0 if the bucket chain is currently busy and 1 if it is free. Table 1, which we call NR, contains this information.
II. Horizontal conveyors GT (GTN = 1, …, NRNUM). 1. The logical attribute GTTS is introduced to indicate the current technical serviceability of a horizontal conveyor: it is equal to 0 if the conveyor has failed and 1 if it is technically serviceable.
2. The logical attribute GTEM is introduced to indicate the current occupancy of a conveyor during a loading operation: it is equal to 0 if the conveyor is currently busy and 1 if it is not. Table 2, which we call GT, contains this information.
Thus, if an actuating device is busy or technically faulty, the corresponding attribute is 0; if the device is free or technically sound, the attribute is 1. IV. Set of actuating devices that ensure the loading of finished products into a given hopper BN. According to the feed transfer technology applied at the finished product area (Figure 2.1), the NR bucket chain is switched on first, then the TR conveyor, then the FS flow switch, and finally the HC top cover of the finished product hopper BN is opened.
The process chains NR → TR → FS → HC → BN are suggested to be specified in Table 4, BNL. Thus, the introduced tables NR, GT, BN, BNL, and BNC contain all the necessary information about the current technical serviceability and employment of all actuating devices used in the ACS TP FPS, the technological chains for loading the hoppers, and their current load. They constitute the information subsystem of the finished product area, FPS_INF = {NR, GT, BN, BNL, BNC}.
The proposed tabular representation makes it possible to use standard database management systems (DBMS) to support the information subsystem FPS_INF of the finished product area.
VI. Connection of bucket chains with hoppers. For each bucket chain with number NRN, the hoppers BNN1, …, BNN5 that it serves are specified. It is proposed to use Table 2.6, NRB, to set the connection between bucket chains and hoppers.
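As a rough illustration of how FPS_INF could be held in memory before being mapped onto a DBMS, the sketch below represents the tables as Python structures. The table names and the 0/1 serviceability and employment attributes follow the text; the concrete field layout is an assumption, since the original tables are not reproduced in this excerpt.
```python
# Minimal sketch of the FPS_INF information subsystem as in-memory structures.
# Field layout is assumed for illustration; table names follow the text.
from dataclasses import dataclass, field

@dataclass
class Device:
    serviceable: bool = True   # NRTS / GTTS: 1 = serviceable, 0 = failed
    free: bool = True          # NREM / GTEM: 1 = free, 0 = busy

@dataclass
class Hopper:
    feed_type: str | None = None   # type of compound feed currently stored
    load_kg: float = 0.0           # current load (table BNC)

@dataclass
class FPSInf:
    NR: dict[int, Device] = field(default_factory=dict)      # bucket chains
    GT: dict[int, Device] = field(default_factory=dict)      # horizontal conveyors
    BN: dict[int, Hopper] = field(default_factory=dict)      # finished product hoppers
    BNL: dict[int, list[str]] = field(default_factory=dict)  # loading chain NR->TR->FS->HC per hopper
    NRB: dict[int, list[int]] = field(default_factory=dict)  # hoppers served by each bucket chain

fps = FPSInf(
    NR={1: Device()},
    GT={1: Device()},
    BN={1: Hopper(), 2: Hopper()},
    BNL={1: ["NR1", "TR1", "FS1", "HC1"]},
    NRB={1: [1, 2]},
)
```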
Results and Discussion
Let us consider a general algorithm for controlling compound feed transfer at the finished product area.
The general task of transferring finished products from the vertical mixer 9 is as follows. A specified mass CFM of a specified compound feed type CFT must be loaded through bucket chain NRN into the finished product hoppers (Figure 2). It is assumed that the loaded mass CFM does not exceed BNM_MAX, the capacity of one hopper.
Task input: a) bucket chain number NRN; b) mass of compound feed CFM; c) type of compound feed CFT.
Problem output: a) logical value L, equal to 0 if loading is impossible due to occupancy or malfunction of actuating devices, and 1 if loading is possible and successfully completed.
LOAD algorithm (NRN, CFM, CFT, L) for solving the general problem of compound feed transfer:
Step 1. Initialization of the output value.
Step 2. Preliminary check of the contents of the hoppers associated with the bucket chain; exit from the algorithm with a message if the bucket chain hoppers are occupied and/or technically faulty.
Step 3. Selection of the optimal hopper for loading.
Step 4. Start of loading the mass CFM of compound feed type CFT through bucket chain NRN into the hopper BNOPT.
Activation of the NRN bucket chains' drives, TRN conveyor, FSN flow switch and BNOPT hopper HCN top cover.
The following actions are performed in the triggering cycle of the control device.
Step 7. End of loading. Completion of the algorithm.
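A minimal sketch of the LOAD algorithm, reusing the FPS_INF structures from the earlier sketch, is given below. Steps 5 and 6 (the control device's triggering cycle) are not detailed in this excerpt, so the mass-monitoring part is a placeholder, and the criterion for choosing the "optimal" hopper is an assumption.
```python
# Hedged sketch of LOAD; run together with the FPSInf sketch above.
def load(fps: FPSInf, nrn: int, cfm: float, cft: str, bnm_max: float = 10_000.0) -> bool:
    # Step 1: initialize the output value (bnm_max default is an assumed capacity).
    ok = False
    chain = fps.NR.get(nrn)
    if chain is None or not chain.serviceable or not chain.free:
        return ok  # bucket chain unavailable
    # Step 2: preliminary check of the hoppers served by this bucket chain.
    candidates = [bnn for bnn in fps.NRB.get(nrn, [])
                  if fps.BN[bnn].load_kg + cfm <= bnm_max
                  and fps.BN[bnn].feed_type in (None, cft)]
    if not candidates:
        return ok  # all hoppers full or holding another feed type
    # Step 3: choose the "optimal" hopper (here: the fullest one that still fits).
    bnopt = max(candidates, key=lambda bnn: fps.BN[bnn].load_kg)
    # Step 4: start loading (drives of NR, TR, FS and the HC cover would be switched on here).
    chain.free = False
    # Steps 5-6 (assumed): monitor the transferred mass until CFM has been loaded.
    fps.BN[bnopt].feed_type = cft
    fps.BN[bnopt].load_kg += cfm
    # Step 7: end of loading; release the bucket chain and report success.
    chain.free = True
    ok = True
    return ok

print(load(fps, nrn=1, cfm=500.0, cft="grower"))
```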
Algorithm for the distribution of compound feed to the consumer at the finished products' area
The consumer needs to receive a given mass CFM of compound feed of a given type CFT. Problem input: a) mass of compound feed CFM; b) type of compound feed CFT. Problem output: a) logical value L, equal to 0 if loading is impossible due to an insufficient mass of CFT-type compound feed, and 1 if loading is possible and successfully completed.
Algorithm PR_LEAVE (CFM, CFT, L) for feed distribution to the consumer:
Step 1. Initialization of the output value.
Step 2. Checking for the presence of compound feed of type CFT in the hoppers and forming an array of the hoppers filled with this feed type.
Step 3. Determination of the optimal BNOPT hopper option for loading.
Step 4. Start of compound feed loading.
Step 5. Current control of compound feed loading.
Step 6. Completion of compound feed loading. Entering the changed data on the hopper's loading. Completion of the algorithm.
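A corresponding sketch of the PR_LEAVE algorithm is shown below, again reusing the same structures; the rule for choosing hoppers to unload (fullest first, possibly drawing on more than one hopper) is an illustrative assumption, as the text does not specify the selection criterion.
```python
# Hedged sketch of PR_LEAVE; run together with the FPSInf sketch above.
def pr_leave(fps: FPSInf, cfm: float, cft: str) -> bool:
    # Step 1: initialize the output value.
    ok = False
    # Step 2: hoppers that currently hold feed of type CFT.
    stock = [bnn for bnn, h in fps.BN.items() if h.feed_type == cft and h.load_kg > 0]
    if sum(fps.BN[bnn].load_kg for bnn in stock) < cfm:
        return ok  # not enough CFT-type feed available
    # Step 3: choose hoppers for unloading (assumed: fullest first).
    remaining = cfm
    for bnn in sorted(stock, key=lambda b: fps.BN[b].load_kg, reverse=True):
        # Steps 4-5: start unloading and monitor the transferred mass.
        taken = min(remaining, fps.BN[bnn].load_kg)
        fps.BN[bnn].load_kg -= taken
        remaining -= taken
        if fps.BN[bnn].load_kg == 0:
            fps.BN[bnn].feed_type = None
        if remaining <= 0:
            break
    # Step 6: record the changed hopper loads and finish.
    ok = True
    return ok

print(pr_leave(fps, cfm=300.0, cft="grower"))
```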
Conclusion
The use of SCADA systems makes it possible to monitor the performance of particular technological processes in real time. This software is deployed on appropriate computing facilities, and its connection with the controlled object and the operator is maintained in real time using special drivers. Programs for the practical implementation of the necessary control algorithms can be written either in universal programming languages or in special-purpose languages, and they are usually created in dedicated software development environments.
As a result of this work, the general algorithms for managing the transferring processes of compound feed at the finished product area were structured. The main subtasks were highlighted, for which the solution algorithms were developed. The use of algorithms for solving the main sub-problems made it possible to form a general algorithm for managing compound feed transfer at the finished product area. As a result, an algorithm for the automated control of feed distribution to consumers was obtained. | 2022-01-19T20:09:24.552Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "e7d6cdcaad94be27b3c206e2399748a582c925f4",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1755-1315/949/1/012061",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "e7d6cdcaad94be27b3c206e2399748a582c925f4",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Physics"
]
} |
231705559 | pes2o/s2orc | v3-fos-license | Considerations When Using Telemedicine As the Advanced Practice Registered Nurse
Accessibility to health care is crucial to the management of chronic and acute conditions. Although the severe acute respiratory syndrome coronavirus 2 pandemic has significantly impeded access to health care, with the introduction of Waiver 1135 telehealth has become a positive strategy for increasing safe access to care. This report addresses considerations that advanced practice registered nurses should take into account when using telehealth to facilitate access to care.
Brief Introduction of Telehealth
Advances in telehealth strongly connect to the developments in technology and communication. In the 1960s through the 1990s, governmental agencies, such as the National Aeronautics and Space Administration, made improvements in telecommunications. 2,3 One of the most significant enhancements of technology was in the 1990s with the introduction of the internet, making information more accessible. 3 Developments in satellites, computers, and network technologies have increased the speed of transmission of medical information and communication.
Telehealth was used to deliver health care in rural and urban areas before the pandemic. However, during the COVID-19 pandemic, telehealth has been expanded in many health systems across the nation, both inpatient and outpatient, to deliver health care. Telehealth allows for same-day and chronic care appointments to increase safe access to high-quality care while avoiding exposure to infectious agents such as SARS-CoV-2. 1 Telehealth is the use of technology to communicate with a patient and deliver health services from a distance. There are 4 methods of telehealth: synchronous or live video, asynchronous or store-and-forward, remote patient monitoring, or mHealth, which incorporates all of the previous 3 (Table). [2][3][4] Telemedicine is one type of telehealth, and this term is used when providing medical care from a distance by using technology. 2 Medical care can occur from provider to patient, provider to provider, through medical consultations, and in advanced home health. 2 One example of medical care from provider to patient with a technological device is an APRN performing an interactive video visit. Using a smartphone or computer with a camera to interact with a patient positive for COVID-19, symptomatic care can be provided without the risk of exposure to the APRN or the health care team. Although differences exist between telehealth and telemedicine, for the purpose of this report, telehealth will be used from this point forward.
Telehealth and the APRN
APRNs provide primary and acute care. As of August 2020 there were 290,000 APRNs, 89.7% of whom were certified in primary care. 5 Telehealth, as an addition to APRNs' practices, is improving access to care for remote, urban, and vulnerable populations by eliminating barriers created by in-person visits. 6,7 For example, a grandmother caring for her napping grandchildren can attend her primary care appointment via video, telephone, or message.
A survey conducted by the American Association of Nurse Practitioners between July 28 and August 9, 2020, showed that 63% of approximately 4,000 APRNs who responded to the survey indicated that they continue to transition patients from in-person visits to telehealth care. 8 Waiver 1135 was reported by 76% of the APRNs as the most beneficial action in facilitating patient access to health care provided by APRNs. 8 Although telehealth is considered impersonal and incomplete care by some providers, it is emerging as a cost-effective, convenient, high-quality alternative to in-person visits. 2 APRNs' robust increase in the use of telehealth visits is a safe, convenient, cost-effective way to meet a population's needs.
Integration of Telehealth
There are several considerations when integrating telehealth into practice, and APRNs need to be familiar with them to provide the best care possible. These considerations are discussed below.
Telehealth Etiquette
There are techniques for completing a telehealth visit. Before a telehealth visit via telephone or video, establish a Health Insurance Portability and Accountability Act (HIPAA)-compliant environment by providing privacy in an examination room or private office. Privacy also eliminates distractions such as office noises and personnel walking in the background. 9,10 The provider needs to make others in the office aware of the upcoming telehealth visit by placing a "telehealth visit in progress" sign on the door. If done by video, it is also essential for the provider to be aware of the camera's position, maintain eye contact, and display engagement in the encounter by looking and leaning into the camera. 10 Removing clutter from the camera field and wearing conservative clothing will minimize distractions during the encounter and are considered necessary modifications during a video visit. 10 The provider needs to ensure that any necessary documents are present.
Once the visit is ready to commence, secure the patient's consent for a telehealth visit. 10 Verify patient identifiers, such as name, date of birth and address, to ensure the correct patient is present. If by video, introduce all parties present for the visit, and the provider should ascertain that they are visible on camera throughout the telehealth encounter. 10 When conducting a telephone visit, inquire who is in the room and ask for introductions of all parties present.
The provider reviews the chief concern for the visit with the patient and caregiver and encourages questions throughout the visit, especially from the patient. It is important to remember to speak clearly and directly into the device, muting when not speaking, and remind everyone to speak one at a time. 10 Avoid negative behaviors such as looking down or at notes, because the provider then appears distracted and not engaged in a video visit. 10 In both a telephone and video visit, the provider will wrap up and summarize the visit by answering final questions, ensuring that the patient is agreeable to the plan of care, and scheduling a follow-up encounter. After the telehealth visit, the provider turns off or secures the equipment and reports any malfunctions. 10,11
Equipment
When preparing for the telehealth visit, it is essential to check the equipment, practice a visit, know how the equipment works, and troubleshoot common problems. Preparatory measures will help the encounter proceed smoothly and relieve both the provider's and the patient's anxiety. 11 Logging in as a patient is helpful. The provider can check camera placement, the background, practice a visit as a patient, and learn how to troubleshoot the system from a patient's perspective.
The patient's home needs access to the internet and a technological device to conduct a video telehealth visit. 12 Currently, 53.6% of the global population uses the internet. 13 Of the American population, 96% own variations of cell phones, 81% own smartphones, 14 75% own laptops or desktops, and 50% own tablets. 14 Although many Americans have internet, the elderly, lower socioeconomic status, and less educated are least likely to have internet in their homes. 15 For patients without internet or electronic devices, telehealth visits by telephone are an excellent option during the COVID-19 pandemic.
Credentialing and Legislation
Implementation into practice can be difficult due to the rules and regulations over patient-provider encounters and state laws over APRNs practice. While APRNs hold national certification, they can only treat patients in the state where they obtain licensures as registered nurses. 16 For example, APRNs practicing in Ohio cannot use telehealth to treat patients who reside in Iowa, Arizona, Montana, North Dakota, or any other states without obtaining individual licensure in each state. [16][17][18][19] The APRN Compact, originally developed in 2015 and adopted in August 2020 by the National Council of State Boards of Nursing (NCSBN), can remove the obstacle of needing individual state licensures for APRNs. 19 APRNs could provide care regardless of the provider's and the patient's location with a multistate license. For the APRN Compact to become a multistate license, it requires at least 7 states to vote it into law to practice in those states. 19 The combination of telehealth and multistate licensure would allow APRNs to practice across state lines and provide safe, quality care to rural and underserved populations. For example, an APRN living in Ohio could treat a patient living in a rural area of Michigan, but currently cannot due to the licensure restrictions.
One exception is health care providers, including APRNs, who work for Veterans Affairs (VA) hospitals. In 2018 the Department of Veterans Affairs published its final rule to ensure that VA health care providers can provide care using telehealth regardless of the VA provider's and the VA patient's state locations. 20 In other words, APRNs and other health care providers who work in the VA system can provide safe, quality health care to veterans in any United States territory via telehealth. 20 The concept of a multistate practice can significantly increase health care access for underserved, vulnerable, rural populations. However, the APRN Compact, as of August 2020, has zero states with multistate licensure. 20 APRNs are encouraged to work with their professional organizations and write to their legislatures to support and vote for the APRN Compact multistate license.
Table. Methods of telehealth.
- Synchronous (live video): The APRN and the patient set up a telehealth visit; during the meeting, the APRN and the patient can see and talk to each other.
- Asynchronous: Also called store-and-forward. The patient's data are sent to a health care provider who can assess the data at a later time, usually a specialist. Example: an APRN in primary care consults a dermatologist about a patient's skin condition.
- Remote patient monitoring: The use of technological devices to record health information in one location for review at a different time by another provider in a different location. Example: heart rate or blood pressure monitors used by patients to record and monitor their heart rates and blood pressure and transmit them to an APRN or other health care provider.
- mHealth (mobile health): Providing management of health care and public health information via mobile devices; may also include general information about disease outbreaks and educational information. Example: mHealth is often used in the management of chronic conditions such as diabetes; another example is the use of a smartphone ultrasound in diagnostic situations.
APRN = advanced practice registered nurse.
Financial Considerations
COVID-19 plays a significant role in influencing telehealth use in health care. As of March 6, 2020, and throughout the SARS-CoV-2 pandemic, telehealth video visits are reimbursed equally to in-person visits. 1 The telehealth visits can occur in the patient's home and any health care facility. 1 Before COVID-19, the patient had to be in an office, hospital, or skilled nursing facility for a telehealth visit. Also, with free apps like Doximity, Zoom, and Skype replacing the original expensive technology, telehealth is becoming cost-effective. 21 Originally, telephone visits were reimbursed at a lower rate than video visits ($14 to $43); however, the Centers for Medicare and Medicaid Services reversed that decision in response to the overwhelming number of telephone visits occurring during the pandemic, and reimbursement increased to $46 to $110. 1 Telephone visit Current Procedural Terminology (American Medical Association) codes (99441-99443) 22 now reimburse at the same rate as established patient office visits or video visits (99212-99214) 22 in primary care, at approximately $46 to $211. 1 These reimbursement changes are part of the Centers for Medicare and Medicaid Services COVID-19 telehealth waiver and are not guaranteed to last beyond the public health emergency. 1 Video and telephone visits are examples of live and interactive telehealth. Video visits require a camera on a technological device and the internet, whereas a telephone visit requires only access to a telephone. Telephone visits are an excellent option for patients without smartphones, computers, or the internet to access health care safely.
HIPAA Guidelines
HIPAA of 1996 has strict guidelines for health care providers: to safeguard telehealth data from the patient's encounter with an encrypted system, to disseminate HIPAA guidelines and agreements to all personnel, and to be responsible for protecting the provider and patient locations and the communication between the 2 sites. 21 Due to the current increased need for telehealth with the COVID-19 pandemic, the Office for Civil Rights (OCR) at the US Department of Health and Human Services published a notice to delineate the adjustment of specific rules under the HIPAA of 1996 for treating patients using telehealth. In this notice, OCR states that during the pandemic, when health care providers use non-HIPAA-compliant, non-public-facing remote communication products to assess, diagnose, and treat patients, OCR will not penalize them for noncompliance with the HIPAA rules. 21 Penalties will not be imposed as long as the health care providers have documented proof of reasonable encryption attempts, follow the HIPAA guidelines, and notify patients of the privacy risks of using telehealth. 21 Non-public-facing remote communication products are the technologies that "allow only the intended parties to participate in the communication." 21 These products include Doximity, Skype, and Zoom. 21
Benefits
The VA's 2020 award-winning Connected Care program can reach veterans where and when they require care. 23 With the use of VA Video Connect, the VA saw a 1000% increase in video visits to veterans (from 10,000 to 127,000 per week) during the COVID-19 pandemic. 23 The veterans' high use of video visits supports the idea that telehealth improves safe access to providers and care.
APRNs can provide care by telehealth to patients who have difficulty traveling or who might be geographically isolated. Research supports that telehealth increases access to quality care and has high patient satisfaction ratings. 9 Telehealth reduces the barriers to accessing care by decreasing travel, time away from work, and costs. 7,9 Overall, patients view telehealth as safe and timely and would have telehealth visits again. 7,9,24
Barriers
Health care providers recognize the lack of a physical examination as a significant barrier to telehealth. 2 However, some physical assessments can be completed with visual and auditory observations via telehealth. Also, the use of otoscopes, stethoscopes, and other devices to gather physical information on the patient via digital technology is an option, 2 although the devices might be at the patient's cost.
Barriers of telehealth also include not using a HIPAA-compliant examination room and collection of payments. Other obstacles documented are the complexity of scheduling and triage of appropriate visits. 25 By establishing protocols and guidelines before the implementation, the team can avoid these barriers. 25 Didactics and practicing with new equipment help avoid the barriers of lack of emergency support, upgrades in software, and disrespect of personal time. 25 The shortage of funding likewise creates a hindrance to implementing or improving telehealth. 2 These barriers often need ongoing evaluation and adjustments to improve processes for the use of telehealth. They can have an array of effects on the delivery of health care.
Summary and Conclusion
Access to health care is an ongoing issue that is further complicated by the COVID-19 pandemic. Telehealth with Waiver 1135 enhances safe access to quality health care by allowing health care providers, including APRNs, to care for their patients without the risk of exposing patients or the health care team to COVID-19. Integrating the considerations mentioned in this report will assist APRNs in providing smooth transition from the traditional method of patient encounter to telehealth visits and in the meantime maintaining safe and quality health care. | 2021-01-26T14:23:51.297Z | 2021-01-26T00:00:00.000 | {
"year": 2021,
"sha1": "e17be72b6783bc384f4ce218ceb85e39af936c92",
"oa_license": null,
"oa_url": "http://www.npjournal.org/article/S1555415520306292/pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "81f5c5ab7751ecf9fd3b44e3b0cbf0f6e5be64ef",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14572222 | pes2o/s2orc | v3-fos-license | Comparison between Trans-Cranial Electromagnetic Stimulation and Low-Level Laser on Modulation of Trigeminal Neuralgia
[Purpose] To determine whether transcranial electromagnetic stimulation or low-level laser therapy is more effective in the treatment of trigeminal neuralgia in multiple sclerosis patients. [Methods] Thirty multiple sclerosis patients of both sexes participated in this study. The age of the subjects ranged from 40 to 60 years, and their mean age was 56.4 ± 6.6 years. Participants were randomly selected from Dental and Neurology Outpatient Clinics at King Khalid Hospital, Najran University, Saudi Arabia. Patients were randomly divided into two equal groups of 15. The Laser group received low-level laser therapy, 830 nm wavelength, 10 Hz and 15 min duration, while the Electromagnetic group received repetitive transcranial electromagnetic stimulation at a frequency of 10 Hz, intensity of 50 mA, and duration of 20 minutes. Patients were assessed pre and post treatment for degree of pain using a numerical rating scale, maximal mouth opening using a digital calibrated caliper, masseter muscle tension using a tensiometer, and the compound action potentials of the masseter and temporalis muscles. [Results] There were significant improvements after treatment in both groups, with a significant difference between the Electromagnetic and Laser groups, in favor of the Electromagnetic group. [Conclusion] Repetitive transcranial electromagnetic stimulation at 10 Hz, 50 mA, and 20 minutes duration is more effective than low-level laser therapy at reducing trigeminal pain and increasing maximal mouth opening and masseter and temporalis muscle tension in multiple sclerosis patients.
INTRODUCTION
Trigeminal neuralgia (TN) is an uncommon disorder characterized by recurrent attacks of facial pain in the trigeminal nerve distribution. Typically, brief attacks are triggered by talking, chewing, teeth brushing, shaving, a light touch, or even a cool breeze 1). The pain is nearly always unilateral, and it may occur repeatedly throughout the day. Trigeminal neuralgia is characterized by sudden, severe, brief, stabbing, and recurrent episodes of facial pain 2). The prevalence is 4 per 100,000 in the population; the condition commonly affects patients over 50 years of age and occurs more frequently in women than men, with a ratio of 1.5-2:1 1). It is also more common in patients with multiple sclerosis 3). TN is associated with decreased quality of life and impairment of daily function. It impacts upon employment in 34% of patients, and depressive symptoms are not uncommon 4). The condition may be severely disabling, with high morbidity particularly among the elderly 5). It is evident that trigeminal pain occurs in multiple sclerosis because of pressure on the trigeminal nerve root at the entry zone into the pontine region of the brain stem 6). Compression or insufficiency of blood supply may cause local pressure, leading to demyelination of the trigeminal nerve axon, which causes ectopic action potential generation 7). TN is almost always unilateral, with the maxillary branch being most commonly affected and the ophthalmic branch the least 8). Pain attacks usually last from a few seconds to 2 min and may recur spontaneously between pain-free intervals 9).
Trans-cranial magnetic stimulation (TMS) is a technique for stimulating the human brain. A noninvasive stimulation technique, repetitive trans-cranial magnetic stimulation (rTMS), may be suitable for the treatment of chronic neuropathic pain, as it modulates neural activity not only in the stimulated area but also in remote regions that are interconnected with the site of stimulation 9,10). Prolonged pain relief can be obtained by repeating rTMS sessions every day for several weeks at a frequency of 10 Hz 11).
A low-level laser (LLLR) produces photo-biochemical reactions that result in pain relief. Considering the effect of neurotransmitters on nerves, LLLR is expected to be effective in eliminating all kinds of pain that result from nerve irritation and nociceptor excitation (neuropathic pain) 12). LLLR can reduce pain of inflammatory origin through its anti-inflammatory properties. Also, low-level lasers have been shown to be effective in alleviating oral and maxillofacial pain 13). The hypothesis of the current study was that there are no differences between rTMS and LLLR treatments. The purpose of the current study was to determine which of rTMS or LLLR better reduces trigeminal pain, improves limited mouth opening, and improves the power of the masseter and temporalis muscles in the TN of multiple sclerosis patients.
This study was conducted at Dental and Neurology Outpatients Clinics at King Khalid Hospital, Najran University, Saudi Arabia. Thirty multiple sclerosis patients with TN (of all branches) of both sexes were randomly selected and participated in this study. Diagnosis was carried out by a neurologist through the use of physical examination and magnetic resonance imaging (MRI). Patients' ages ranged from 40 to 60 years and their mean age was 56.4 ± 6.6 years. The weights of the subjects ranged from 60 to 80 kg, and their mean weight was 75.00-7.7 kg. Classical TN was diagnosed according to the International Classification of Headache Disorders2 Criteria 14) , and the duration of illness ranged from 6 to 12 months (Table 1). Pain during attacks should not be less than six according to a numerical rating scale (NRS), with no satisfactory medical pain relief in the last three months. Patients were conscious, co-operative and free from psychological disorders (as documented by a psychologist), and disabilities secondary to orthopedic problems or special senses impairments. Patients were excluded if they had TN secondary to tumor, herpes zoster or any another causes, i.e. serious cardiopulmonary dysfunction, past invasive treatment (radiofrequency, ethanol, glycerinum injection, Gama-knife microvascular decompression) or coagulation dysfunction. Patients were randomly divided into two equal groups of 15 by a random allocation method (thirty folded papers were allocated in a bag, with two series of 15 papers on which were written either LG or MG and every patient had the chance to choose one folded paper).
The Laser group (LG) consisted of 15 patients whose ages ranged from 40 to 58 years, with a mean age of 48.80 ± 6.3 years, and whose weights ranged from 65 to 88 kg, with a mean weight of 75.26 ± 6.80 kg. They were treated with an 830 nm wavelength LLLR 15).
The Electromagnetic group (MG) consisted of another 15 patients, whose ages ranged from 45 to 60 years, with a mean age of 46.66 ± 9.608 years, and whose weights ranged from 60 to 87 kg, with a mean weight of 74.80 ± 7.27 kg. They received rTMS at a frequency of 10 Hz 10). There were no significant pretreatment differences between the groups in demographic characteristics (p>0.05) (Table 1).
Methods
An electromyography device (Neuropac apparatus) was used to measure the motor action potentials of the temporalis and masseter muscles, and a tensiometer (Lafayette, USA) was used to measure muscle tension. After informed consent had been obtained, all patients participated in several trials with the equipment so that they felt psychologically assured and were familiarized with the treatment steps. The treatment was performed three times per week on consecutive days for eight weeks, for a total of twenty-four sessions.
The pain intensity of all patients was assessed using NRS 16) (0=no pain, 5=moderate pain, 10=worst pain), when patients were not under medication. The masseter and temporalis muscle compound motor action potentials of all patients were measured before and after treatment.
The subjects were seated comfortably upright and were asked not to move their heads during recordings. A stimulating needle electrode was placed intra-orally on the nerve branch at the medial angle of the mandible. The recording electrodes were positioned on the masseter muscle belly parallel to the muscle fibres, about 3 cm above and anterior to the mandibular angle, with a distance of two centimeters between the two recording electrodes. This electrode placement has been demonstrated to be optimal for avoiding cross-talk responses from facial muscles 17). The electrode over the anterior temporalis was placed just in front of the hairline; the reference electrode was placed just above the eyebrow. The signals were amplified, filtered, and digitized at 1,000 Hz by the Spike 2 system (Cambridge Electronic Design, Cambridge, UK) 18). For the assessment of maximal active mouth opening, patients were asked to open their mouths as far as possible with their heads fixed, and the vertical distance between the upper and lower teeth was measured using a calibrated caliper with 1 mm accuracy 19,20). For the assessment of masseter muscle power, patients were instructed to clench their jaws as tightly as possible, and the amount of tension was recorded by a tensiometer.
Subjects in the Laser group were treated with a low-power 15 mW laser with a wavelength of 830 nm and a beam power density of 150-170 mW/cm² for irradiation. The treatment was first given intra-orally following the path of the nerve branch for 1-2 min, then extra-orally on the most tender points for 10 min. With the patient in the sitting position, the contact laser technique was used on the skin overlying the four tender points of the face 21,22). Subjects in the Electromagnetic group received repetitive TMS at a frequency of 10 Hz, 50 mA intensity, and 20 minutes duration. In the sitting position, with all metal objects removed, the coil was applied tangentially over the patient's head and held on one side (contra-lateral to the trigeminal pain). A rest period of 10 minutes after application was allowed for all patients 23,24). The results of both groups were statistically analyzed to compare the differences within each group and the differences between the two groups. The Statistical Package for the Social Sciences (SPSS, version 10) was used for data processing, with p = 0.05 as the level of significance.
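The paper does not specify which statistical tests were run in SPSS. Purely as an illustration, the sketch below performs the within-group (pre vs. post) and between-group (post-treatment) comparisons described above using paired and independent t-tests on made-up NRS pain scores; the group labels and all numbers are hypothetical, not the study data.

# Illustrative sketch (not the authors' code): within-group (pre vs. post) and
# between-group (LG vs. MG post-treatment) comparisons, using made-up NRS pain
# scores for 15 patients per group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
lg_pre, lg_post = rng.normal(8, 1, 15), rng.normal(6, 1, 15)   # hypothetical Laser group scores
mg_pre, mg_post = rng.normal(8, 1, 15), rng.normal(4, 1, 15)   # hypothetical rTMS group scores

# Within-group change: paired t-test on pre- vs. post-treatment scores
t_lg, p_lg = stats.ttest_rel(lg_pre, lg_post)
t_mg, p_mg = stats.ttest_rel(mg_pre, mg_post)

# Between-group difference: independent t-test on post-treatment scores
t_between, p_between = stats.ttest_ind(lg_post, mg_post)

print(f"LG pre vs post: p = {p_lg:.3f}")
print(f"MG pre vs post: p = {p_mg:.3f}")
print(f"LG vs MG post-treatment: p = {p_between:.3f}")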
RESULTS
The results showed no significant pretreatment differences between the two groups in pain intensity, masseter muscle tension, maximal mouth opening, or masseter and temporalis compound action potentials (p>0.05) (Table 2).
There was a significant post-treatment reduction in pain intensity in LG compared to the pretreatment mean value (p=0.01), and a highly significant post-treatment reduction in pain intensity in MG compared to the pretreatment mean value (p=0.001) (Table 3). A significant difference was found between the post-treatment values of the two groups, with the better result in MG (p=0.01) (Table 4).
There was a significant post-treatment improvement in masseter tension in LG compared to the pretreatment mean value (p=0.01), and a highly significant post-treatment improvement in MG compared to the pretreatment mean value (p=0.001) (Table 3). A significant difference was found between the post-treatment values of the two groups, with the better result in MG (p=0.01) (Table 4).
There was a significant post-treatment improvement in mouth opening in LG compared to the pretreatment mean value (p=0.014), and a highly significant post-treatment improvement in mouth opening in MG compared to the pretreatment mean value (p=0.001) (Table 3). A significant difference was found between the post-treatment values of the two groups, with the better result in MG (p=0.001) (Table 4).
There were significant post-treatment improvements in masseter and temporalis CAP in LG compared to the pretreatment mean values (both p=0.01), and highly significant post-treatment improvements in masseter and temporalis CAP in MG compared to the pretreatment mean values (p=0.001 and p=0.003, respectively) (Table 3). Significant differences were found between the post-treatment values of the two groups, with the better results in MG (masseter CAP, p=0.001; temporalis CAP, p=0.003) (Table 4).
DISCUSSION
The purpose of the study was to determine which of transcranial electromagnetic stimulation or low-level laser therapy is more effective for trigeminal neuralgia in multiple sclerosis patients. Low-level laser therapy (LLLT) has been used clinically, and some researchers have reported the efficacy of LLLT in the treatment of various pain conditions 21). In the present study, there were significant improvements in TN compared to pretreatment measurements, and the results of the NRS indicated a slight but significant reduction in facial pain. Patients also noted a reduction in their anxiety symptoms. Moreover, the present results showed a significant improvement in maximal mouth opening after application of LLLT. These findings are in agreement with reports of significant reductions in pain and improvements in range of motion after 3 months of LLLT 22,23).
The present study showed a strong relationship between the application of repetitive transcranial electromagnetic stimulation and the improvement of TN symptoms. There was a reduction of pain according to the NRS. These results are in agreement with those of another study that applied TMS at 5 Hz to treat orofacial pain patients 24). In the present study, 10 Hz rTMS was applied to treat TN patients and there was a highly significant improvement in maximal mouth opening. This result is consistent with another report which demonstrated that application of rTMS at 5 Hz or more was able to relieve neuropathic pain 25), and with a study that applied four different frequencies (0.5 Hz, 1 Hz, 5 Hz and 10 Hz) of rTMS to treat patients with orofacial pain; the best results were obtained at 10 Hz 26). The efficacy of rTMS in producing significant analgesia seems to depend on precise targeting and on the stimulation frequency. It has been reported that application of rTMS sessions over the motor cortex can produce excitatory changes in the brain and induce excitation of muscle action potentials 27). The application of low-frequency TMS may alter cerebral excitability, brain rhythms, and a variety of human behaviors [28][29][30]. The present study found that there were improvements in masseter muscle tension in both treatment groups, with the best results in the rTMS group. These findings are supported by other studies that reported significant improvement in the cervical muscles together with significant improvements in range of motion and relief of pain, due to an inhibitory effect on neural discharges around the stimulated cortical areas 31).
The study concluded that repetitive transcranial electromagnetic stimulation at 10 Hz and 50 mA for 20 min is more effective than low-level laser therapy at reducing trigeminal pain and improving maximal mouth opening and masseter and temporalis muscle tension in multiple sclerosis patients. It is also considered a more useful and safer modality than drugs for other orofacial dysfunctions.
RECOMMENDATION
We recommend the investigation of the long-term effects of both rTMS and LLLT in various orofacial dysfunctions at different frequencies, durations and intensities, as well as rTMS for other painful neurological disorders. | 2016-05-12T22:15:10.714Z | 2013-08-01T00:00:00.000 | {
"year": 2013,
"sha1": "710a9adb8151a26af3efeb16b1e1a96d1c67bb21",
"oa_license": "CCBYNCND",
"oa_url": "https://www.jstage.jst.go.jp/article/jpts/25/8/25_jpts-2013-033/_pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "710a9adb8151a26af3efeb16b1e1a96d1c67bb21",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
11842625 | pes2o/s2orc | v3-fos-license | A review of the perceptual effects of hearing loss for frequencies above 3 kHz.
Abstract Background: Hearing loss caused by exposure to intense sounds usually has its greatest effects on audiometric thresholds at 4 and 6 kHz. However, in several countries compensation for occupational noise-induced hearing loss is calculated using the average of audiometric thresholds for selected frequencies up to 3 kHz, based on the implicit assumption that hearing loss for frequencies above 3 kHz has no material adverse consequences. This paper assesses whether this assumption is correct. Design: Studies are reviewed that evaluate the role of hearing for frequencies above 3 kHz. Results: Several studies show that frequencies above 3 kHz are important for the perception of speech, especially when background sounds are present. Hearing at high frequencies is also important for sound localization, especially for resolving front-back confusions. Conclusions: Hearing for frequencies above 3 kHz is important for the ability to understand speech in background sounds and for the ability to localize sounds. The audiometric threshold at 4 kHz and perhaps 6 kHz should be taken into account when assessing hearing in a medico-legal context.
Evidence for effects of audibility at high frequencies on speech intelligibility
There are many studies showing that frequency components above 3 kHz contribute to speech intelligibility for people with normal hearing. In these studies, speech has been highpass or lowpass filtered with various cutoff frequencies, and speech intelligibility has been measured for each cutoff frequency. Such studies formed the basis for the Articulation Index (ANSI, 1969; Fletcher, 1953; French & Steinberg, 1947; Kryter, 1962) and its successor, the SII (ANSI, 1997), that is described in the next section of this paper. For example, French and Steinberg showed that decreasing the cutoff frequency of a lowpass filter from 7 to 2.85 kHz decreased the percentage of correctly identified syllables presented in quiet from 98 to 82%. Aniansson (1974) showed that lowpass filtering wideband speech with a cutoff frequency of 3.1 kHz reduced the percentage of words correctly identified from 94 to 85% for speech at a signal-to-noise ratio (SNR) of 0 dB, from 74 to 67% when a single background talker was added, and from 64 to 57% when three competing talkers were added. Studebaker et al (1987) used sharply filtered continuous speech materials presented in noise, and asked participants to estimate the percentage of words that they understood for each filtering condition. Several SNRs were used, specified as the level of the peaks in the speech relative to the root-mean-square noise level. They showed that compared to an 'all pass' condition (0.15-8 kHz), lowpass filtering at 3.5 kHz reduced the percentage of words understood from 57 to 41% at an SNR of 7.5 dB, from 89 to 58% at an SNR of 8.5 dB and from 94 to 79% at an SNR of 9.5 dB. From these results, it is clear that for normal-hearing participants frequency components above 3 kHz make a sizable contribution to intelligibility, especially for speech in the presence of background sounds.
There are also several research studies showing that, for people with mild-to-moderate high-frequency hearing loss, speech intelligibility is improved when amplification is provided for frequencies above 3 kHz (Hornsby & Ricketts, 2006; Skinner et al, 1982; Skinner & Miller, 1983; Vickers et al, 2001), although hearing loss does seem to reduce the ability to make use of audible speech information (Turner & Henry, 2002). For example, Skinner and Miller (1983) measured the intelligibility of speech in quiet and mixed with noise at +6 dB SNR as a function of its audible bandwidth for seven participants with moderate sensorineural hearing loss. Words were presented at three levels (50, 60 and 70 dB SPL) and amplified with a Limiting Master Hearing Aid (LMHA). The LMHA was set for four frequency ranges: (1) 0.266-6 kHz, (2) 0.375-4.2 kHz, (3) 0.53-3 kHz, and (4) 0.75-2.12 kHz. All participants obtained the highest mean score with the LMHA set for the widest frequency range. Averaged across levels and across quiet and noise conditions, the mean correct word identification scores were 50, 45, 31 and 17% for conditions 1, 2, 3, and 4, respectively. These results suggest that increasing the audible upper frequency limit from 3 to 4 and 6 kHz leads to progressive improvements in intelligibility, although some of the benefit may have come from decreasing the low-frequency cutoff. Baer et al (2002) measured the intelligibility of nonsense syllables presented in noise (with SNRs ranging from 0 to +6 dB) for participants with severe to profound high-frequency loss, without and with high-frequency dead regions in the cochlea (these are regions with very few or no functioning inner hair cells, synapses or neurons; see Moore et al, 2000). The stimuli were subjected to linear amplification using the Cambridge formula (Moore & Glasberg, 1998) and then lowpass filtered with various cutoff frequencies. For the participants without dead regions the mean score increased from about 70% to 80% when the cutoff frequency was increased from 3 to 7.5 kHz. For the participants with dead regions, there was no benefit of increasing the cutoff frequency from 3 to 7.5 kHz, presumably because the presence of the dead regions limited the ability to process the information conveyed by the high-frequency components. Hornsby and Ricketts (2006) assessed the effect of highpass and lowpass filtering on the intelligibility of sentences in noise at +6 dB SNR for 10 participants with normal hearing and 10 participants with sloping high-frequency loss. When the lowpass cutoff frequency was increased from 3.2 to 7 kHz, the percent correct scores increased from about 92 to 98% for the normal-hearing participants and from about 85 to 92% for the hearing-impaired participants.
Overall, the results clearly indicate that speech intelligibility is influenced by the audibility of frequency components above 3 kHz. It follows that reduced audibility of frequencies above 3 kHz, produced by NIHL, has adverse effects on the ability to understand speech in background noise.
The studies referred to above all used speech and noise that were spatially coincident. Under conditions where the target speech and interfering sounds are spatially separated, frequencies above 3 kHz may be relatively more important, and there may be a greater advantage of extending the audible frequency range provided by bilaterally fitted hearing aids (Hamacher et al, 2006). This may happen for at least two reasons: (1) For medium- and high-frequency sounds, the head casts a kind of acoustic shadow. For example, a sound to the right of the head produces a greater intensity at the right ear than at the left (Kuhn, 1979). As a result, whenever the target speech is on the opposite side of the head to the most prominent interfering sound, there is a better signal-to-interference ratio at one ear than the other. The listener can attend selectively to the ear with the better signal-to-interference ratio, and can even switch rapidly from attending to one ear to attending to the other under conditions where the ear with the better signal-to-interference ratio fluctuates from moment to moment (Brungart & Iyer, 2012). The magnitude of head-shadow effects increases progressively with increasing frequency (Shaw, 1974; Bronkhorst & Plomp, 1988; 1989), so the advantage of listening with the 'better' ear would be expected to increase as the audible high-frequency bandwidth increases. (2) Sometimes, when several people are talking at once, the listener may hear many speech sounds but may have difficulty in deciding which sounds come from which talker (Brungart & Simpson, 2007). This is called 'informational masking' (Brungart et al, 2001). A perceived spatial separation of the target speech and interfering speech can reduce informational masking and hence lead to improved intelligibility of the target talker (Freyman et al, 1999). High-frequency speech sounds are used for sound localization, and especially for resolving front-back confusions (Best et al, 2005); this is described in more detail later on. Hence, when high frequencies are audible, this can improve sound localization and this in turn reduces informational masking.
Consistent with these ideas, recent research has shown that, for people with mild-to-moderate high-frequency hearing loss, the intelligibility of target speech in the presence of a background talker in a different location from the target is improved when amplification is provided even for frequencies above 5 kHz (Moore et al, 2010a;Levy et al, 2015). This indicates that frequencies well above 3 kHz contribute to speech intelligibility when the target speech and interfering sounds are spatially separated.
A recent paper (Besser et al, 2015) has shown that the ability to take advantage of spatial separation between a target speech sound and interfering speech sounds (the 'spatial advantage') depends on audiometric thresholds at high frequencies, in the range 6-10 kHz. Elevated audiometric thresholds in the frequency range 6-10 kHz are associated with a decrease in the spatial advantage. Another recent paper (Silberer et al, 2015) has shown that for speech in noise and in the absence of visual cues (i.e. without lipreading) an audible frequency range extending up to about 7 kHz is required for optimal intelligibility.
Abbreviations: AAHL: Age-associated hearing loss; HTL: Hearing threshold level; LMHA: Limiting Master Hearing Aid; NIHL: Noise-induced hearing loss; PTA 2,4: Pure-tone average threshold at 2 and 4 kHz; SII: Speech intelligibility index; SNR: Speech-to-noise ratio; SRT: Speech reception threshold.
Overall, the evidence is strong that the audibility of frequencies above 3 kHz is important for speech intelligibility and that NIHL for frequencies above 3 kHz has adverse effects on the ability to understand soft speech and on the ability to understand speech in background sounds, especially when the background sounds come from a different spatial location to the target sounds. This is recognized by manufacturers of hearing aids, since almost all hearing aids on the market today are designed with the goal of amplifying frequencies up to at least 5 kHz, and some manufacturers are developing hearing aids that amplify over an even wider frequency range (Fay et al, 2013; Levy et al, 2015). For people whose high-frequency hearing loss is too severe for them to benefit from amplification of frequencies above 3 kHz, frequency-lowering is often used to provide information about those components (Alexander, 2013). Also, prescriptive methods for fitting hearing aids based on the audiogram all prescribe gain for frequencies up to at least 6 kHz (Keidser et al, 2011; Moore et al, 2010b; Scollie et al, 2005).
Effects on speech intelligibility expected from the Speech Intelligibility Index
A standard method for predicting speech intelligibility is the Speech Intelligibility Index (SII; ANSI, 1997). The method is based mainly on the audibility of the speech and does not take into account the adverse effects of hearing loss on the ability to discriminate sounds that are well above the detection threshold (Moore, 2007;Plomp, 1978); such effects are discussed later in this paper. The SII does not explicitly take into account the fact that the information in speech (for example the envelope fluctuations) is correlated across frequency bands: the closer the centre frequencies of the bands, the higher is the correlation (Crouzet & Ainsworth, 2001). Hence the SII does not give accurate predictions of intelligibility for speech that is filtered into very narrow frequency bands whose separation is varied (Warren et al, 2005). Also, the SII does not give accurate predictions of intelligibility for speech in fluctuating background sounds (Rhebergen & Versfeld, 2005). However, for lowpass or highpass filtered speech presented in quiet or in a steady background sound, the SII generally gives accurate predictions.
The SII method incorporates a weighting function whereby the information at different frequencies is assigned a weight according to its relative importance. The overall weight assigned to frequencies above 3 kHz depends on the speech material. For 'average speech' the total weight assigned to frequencies above 3 kHz is approximately 23%. For some specific speech tests, using nonsense syllables where each English phoneme occurs equally often, CID words, NU6 nonsense syllables, the diagnostic rhyme test, short passages of easy materials, and SPIN test monosyllables, the corresponding percentages are 26, 16, 17, 17, 18, and 20%, respectively. When the face of the talker is visible, so lip-reading is possible, the high-frequency components in the acoustic signal become relatively less important (Kryter, 1962;Sumby & Pollack, 1954). However, there are many situations when lip-reading is not possible, for example, when listening to a companion at dinner while cutting up food or when listening to the radio.
The value of the SII varies from 0 to 1. A value of 0 indicates that no usable information is conveyed (this is an approximation). A value of 1 indicates that all of the important information in the speech is audible. A value of 0.75 is high enough for good communication with a clear talker and in the absence of reverberation. The SII for a telephone signal with a frequency range from 0.5 to 3.2 kHz, which was designed to give just-adequate intelligibility for people with normal hearing, is 0.71. A value of 0.5 indicates that there would be some difficulty in understanding speech, with significant errors being made, and a value of 0.3 indicates considerable difficulty in understanding speech, with many errors of understanding.
To calculate the expected effect of NIHL for a given individual, the first stage is to estimate the expected hearing loss for a non-noise-exposed individual of that age and gender. In the UK this is usually done by using the audiometric thresholds at 1 and 8 kHz as anchor points, and selecting appropriate values from tables of hearing loss as a function of age and gender for non-noise-exposed individuals (Coles et al, 2000), although this approach has been criticized (Ali et al, 2014). An alternative 'two-pass' method has recently been proposed by Lutman et al (2016). This method takes into account the fact that while noise exposure typically has its greatest effects on audiometric thresholds for frequencies close to 4 kHz, the effects can spread to lower and higher frequencies as the loss becomes more severe (Passchier-Vermeer, 1974). Other approaches are used in other countries. Once the age-expected hearing loss is estimated, it is subtracted from the actual hearing loss. This gives an estimate of the noise-induced component of the hearing loss.
An example of a typical case for a man aged 55 years is shown in Table 1, using the method of Lutman et al (2016). Note that the exact method used to estimate the noise-induced component of the hearing loss is not critical for the present purpose. Row 3 of the table shows the hearing threshold levels (HTL) for frequencies from 1 to 8 kHz. The thresholds are within the normal range for frequencies up to 3 kHz, but are elevated at higher frequencies. Row 4 shows the HTLs at the anchor points of 1 and 8 kHz, and row 5 shows the age-associated hearing loss (AAHL) for a man aged 55 at the 50th percentile, taken from Table 2 of Coles et al (2000). The actual audiometric threshold is 3 dB worse than for the AAHL at the 1-kHz anchor point and 3 dB better than for the AAHL at the 8-kHz anchor point. These 'misfit' values are shown in row 6. Row 7 shows interpolated misfit values, and row 8 shows the first-pass estimate of the AAHL. Row 9 shows the 'bulge', which is the first-pass estimate of the noise-induced component of the hearing loss. Row 11 shows the modified HTL at the anchor points, which is what the HTL would be expected to be if there had been no noise exposure; the modifications are based on the first-pass estimate of the noise-induced loss at 4 kHz. Row 12 shows the AAHL values used for the second pass. Here, they are the same as the values in row 5, although they can be selected to be different. Row 13 shows the misfit values at the anchor points and row 14 shows the interpolated misfit values. The final estimate of the AAHL is shown in row 15, and the estimated noise-induced loss is shown in row 16. The mean estimate of the NIHL at 1, 2 and 3 kHz is only 2.4 dB, which would usually be considered as of no importance. The mean estimate of the NIHL at 1, 2 and 4 kHz is more substantial, at 11.7 dB.
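As a rough illustration of the anchor-point arithmetic walked through above (not the published Lutman et al (2016) implementation), the sketch below computes a first-pass 'bulge' from hypothetical thresholds. The +3/−3 dB anchor misfits mirror the worked example, but all other numbers are invented, and linear interpolation of the misfit on a log-frequency axis is an assumption.

# Illustrative first-pass estimate of the noise-induced component ("bulge")
# using 1 and 8 kHz as anchor points. HTL and AAHL values are hypothetical,
# not those of Table 1.
import numpy as np

freqs = np.array([1, 2, 3, 4, 6, 8])          # kHz
htl   = np.array([10, 15, 20, 45, 40, 30])    # measured thresholds, dB HL (hypothetical)
aahl  = np.array([7, 10, 14, 18, 23, 33])     # age-expected loss, dB (hypothetical)

# Misfit at the 1 and 8 kHz anchor points (here +3 and -3 dB, as in the example)
misfit_anchor = htl[[0, -1]] - aahl[[0, -1]]

# Interpolate the misfit across the intermediate frequencies (log-frequency axis, assumed)
misfit = np.interp(np.log(freqs), np.log(freqs[[0, -1]]), misfit_anchor)

aahl_first_pass = aahl + misfit          # first-pass estimate of age-associated loss
bulge = htl - aahl_first_pass            # first-pass estimate of the noise-induced loss

print("Estimated NIHL (dB):", np.round(bulge, 1))
print("Mean NIHL at 1, 2, 4 kHz:", round(bulge[[0, 1, 3]].mean(), 1), "dB")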
SII values were calculated for the example illustrated in Table 1 using the band-importance function for everyday speech and for three listening situations. For each listening situation the SII was calculated for two cases: (1) For a hearing loss based on the estimated AAHL (row 15 in Table 1), and (2) For a hearing loss based on the actual audiogram. The difference between the two cases represents the extra effect of the NIHL. The outcome is shown in Table 2.
For speech presented at a typical conversational level of 65 dB SPL without any background noise, the difference in SII was 0.14 (a decrease of 15%). Since both SII values were high, the noise-induced component of the loss would not prevent good communication with a talker who spoke clearly in a non-reverberant room but might lead to slight difficulty with a talker who did not speak clearly, had a foreign accent, or was heard in a reverberant room.
Consider next the situation for soft speech at 50 dB SPL, such as might occur when a person talks in an adjacent room or when sitting close to the back of a lecture room. The difference for this situation was 0.10 (a decrease of 12%). The decrease in SII value produced by the noise-induced component of the hearing loss would lead to some difficulty in understanding clearly spoken speech and marked difficulty for a talker who did not speak clearly or was heard in a reverberant room.
The primary problem experienced by people with hearing loss, at least when the hearing loss is mild or moderate, is difficulty in understanding speech in noisy situations (Kochkin, 2010;Moore, 2007;Plomp, 1978;1986). To quantify the likely magnitude of this difficulty, the SII was calculated for speech presented at a level of 65 dB SPL in a background noise of the same overall level. This is representative of a moderately noisy situation. The background noise was assumed to have a similar average spectrum to the target speech, but with slightly less energy for high frequencies, to allow for the fact that reflection of noise from the walls, floor and ceiling of a typical room is reduced at high frequencies. The difference for this situation was 0.08 (a decrease of 21%). The decrease in SII would lead to a clearly noticeable increase in difficulty in understanding speech in noisy situations.
This example illustrates how the noise-induced component of the hearing loss at frequencies above 3 kHz can lead to some increase in difficulty in understanding soft speech in quiet and a marked increase in difficulty in understanding speech in background noise.
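The following is a deliberately simplified, SII-style calculation in the spirit of the example above; it is not the ANSI S3.5-1997 procedure. The band importance weights, speech and noise band levels and the two audiograms are hypothetical stand-ins, and hearing thresholds are treated directly as an internal noise floor without the proper dB HL to dB SPL conversion.

# Simplified SII-style comparison of an age-related-loss-only audiogram with an
# actual audiogram (all values hypothetical).
import numpy as np

freqs      = np.array([0.25, 0.5, 1, 2, 4, 8])                 # octave-band centres, kHz
importance = np.array([0.06, 0.14, 0.22, 0.33, 0.20, 0.05])    # band importance (sums to 1, hypothetical)
speech     = np.array([50, 53, 50, 45, 40, 33])                # speech band levels, dB SPL (hypothetical)
noise      = np.array([48, 50, 47, 41, 34, 27])                # noise band levels, dB SPL (hypothetical)
aahl_audiogram   = np.array([5, 5, 7, 10, 18, 33])             # age-expected thresholds, dB HL (hypothetical)
actual_audiogram = np.array([5, 5, 7, 15, 45, 40])             # measured thresholds, dB HL (hypothetical)

def sii(thresholds):
    # Effective masker in each band: the louder of the external noise and the
    # listener's threshold (hearing loss treated, crudely, as an internal noise floor).
    floor = np.maximum(noise, thresholds)
    # Band audibility: speech peaks assumed 15 dB above the mean level, 30-dB usable
    # dynamic range, clipped to the range [0, 1].
    audibility = np.clip((speech + 15 - floor) / 30, 0, 1)
    return float(np.sum(importance * audibility))

print("SII, age-related loss only:", round(sii(aahl_audiogram), 2))
print("SII, actual audiogram     :", round(sii(actual_audiogram), 2))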
As noted above, the SII is based mainly on the proportion of the speech that is audible. The SII does not take into account effects of NIHL other than elevation of the audiometric threshold. Such effects include reduced frequency selectivity (the ability to 'hear out' or separate the different frequencies that are present in complex sounds like speech) (Glasberg & Moore, 1986;Moore, 2007), and degeneration of neurons in the auditory nerve (Kujawa & Liberman, 2009;Wan & Corfas, 2015). Thus calculations based on the SII probably underestimate the effects of NIHL.
Other deleterious effects of high-frequency hearing loss
The voices of small children have a higher frequency spectrum than for adults and their speech may be less clear than that of adults. Certain speech sounds (such as 's') produced by women and children may contain energy largely above 4 kHz. Hence, hearing loss at 4 kHz and above may compromise the ability to hear such sounds (Stelmachowicz et al, 2001). Certain bird songs are composed mainly of frequencies above 3 kHz (see, for example, the spectra for songs of two species of sparrow in Figures 3 and 4 of Hoese et al, 2000). It follows that hearing loss at frequencies around 4-6 kHz is likely to have an impact on the ability to hear such sounds. Of course, the importance of this is likely to vary markedly across individuals.
The ability to determine whether a sound is coming from in front or behind, and above or below, depends on information provided by reflections of sound from the pinna (the outer ear); these reflections change the spectrum of the sound reaching the eardrum (Blauert, 1997). The changes in spectrum are most marked and are most useful for frequencies above 3 kHz (Gardner & Gardner, 1973;Best et al, 2005). Hearing loss for frequencies of 4 kHz and above is likely to reduce the ability to use pinna cues and hence decrease the ability to determine whether sounds are coming from in front or behind, and above or below. This happens partly because of reduced audibility of high-frequency sounds, but mainly because hearing loss is usually associated with reduced frequency selectivity, and this decreases the ability to discriminate the spectral changes (Jin et al, 2002). The extra component of hearing loss at 4 and 6 kHz produced by noise exposure reduces the ability to judge whether sounds are coming from in front or behind, and above or below, and increases the smallest detectable change in location of a sound (Rønne et al, 2016).
Effects of noise exposure not revealed by the audiogram
There is evidence from both animal studies (Kujawa & Liberman, 2009) and human studies (Epstein et al, 2016;Stamper & Johnson, 2015) that noise exposure can lead to loss of synapses between the inner hair cells in the cochlea and the neurons in the auditory nerve, even when the audiogram remains normal or near-normal (Gourevitch et al, 2014;Wan & Corfas, 2015). Following the loss of synapses, the neurons in the auditory nerve degenerate, but this can take a considerable time, up to several years (Kujawa & Liberman, 2015). The degeneration tends to be greatest in neurons tuned to high frequencies (Kujawa & Liberman, 2009). When effects of NIHL are apparent in the audiogram, the loss of synapses and neurons is probably even greater than when the audiogram remains within normal limits. The loss of synapses and neurons results in a reduced flow of information from the cochlea to the brain, and to a less precise neural representation of the properties of sounds. Consistent with this, noise exposure is associated with greater self-reported hearing difficulty (Tremblay et al, 2015) and with a poorer-than-normal ability to detect envelope fluctuations in sounds (Stone & Moore, 2014), even when the audiogram remains within normal limits. Loss of neurons in the auditory nerve probably contributes to the difficulties experienced by people with NIHL when trying to understand speech in the presence of background sounds (Epstein et al, 2016;Plack et al, 2014). The effects of loss of neurons are not taken into account in the SII calculations described above. Hence, these calculations probably under-estimate the degree of difficulty experienced by people with NIHL.
Predicting self-reported hearing difficulty based on audiometric thresholds
It has been argued that self-assessment should be the 'gold standard' for determining the effects of hearing impairment in everyday life since 'No one can assess the effects of hearing loss on daily life better than the affected person (assuming that this is a competent, cooperative adult who is not claiming compensation)' (Dobie & Sakai, 2001). However, since a person claiming compensation for hearing loss might give an exaggerated report of the adverse effect of their hearing loss, self-report is not considered appropriate when assessing individual claims for compensation. Hence, surrogate measures must be used. Two possible surrogate measures are performance on objective measures of the intelligibility of speech in quiet or in noise and some sort of average of the audiometric thresholds at selected frequencies.
A widely used approach to assessing the relative importance of hearing loss at different audiometric frequencies is to obtain self-report assessments of hearing difficulty from a large number of hearing-impaired people and to determine the extent to which these assessments are predicted by the audiometric thresholds at specific frequencies or combinations of frequencies (Dobie, 2011; Dobie & Sakai, 2001; King et al, 1992). This approach has been reviewed by Dobie and Sakai (2001) and Dobie (2011) and it is the one that is most widely used in the medico-legal context. Generally, the audiometric thresholds showing the highest correlation with self-reported hearing difficulty are 0.5, 1 and 2 kHz. When combinations of the audiometric thresholds at different audiometric frequencies are used, the combinations including the frequencies 0.5, 1 and 2 kHz generally lead to the highest correlations. When a combination of four frequencies is used, the correlations are almost the same for the combination 0.5, 1, 2, and 3 kHz and the combination 0.5, 1, 2, and 4 kHz (Dobie, 2011). These results have been taken as indicating that hearing loss for frequencies up to 3 kHz is the major determinant of self-reported hearing difficulty, with the usual caveat that correlation does not imply causation.
One justification for the use of self-report assessments rather than objective measures of the ability to understand speech in quiet or in background sounds is that it avoids any arbitrary decision about which of the many available objective speech tests should be used. However, the selection of the test(s) used to obtain self-report measures from the many tests available is also somewhat arbitrary. An argument in favour of the use of audiometric thresholds rather than objective measures of speech intelligibility is that the former are generally more highly correlated than the latter with self-report measures of hearing difficulty (Dobie & Sakai, 2001). However, this partly reflects the fact that the objective measures of speech intelligibility used in most large-scale clinical studies are based on a relatively small number of test items, and hence have high variability. Measures of speech intelligibility based on more data, and with lower variability, such as the study of Smoorenburg (1992) described in the next section, might show a higher correlation with self-reported hearing difficulties; this remains to be determined.
The argument that self-report measures should be regarded as the gold standard can be questioned. For a hearing loss that develops slowly and progressively, as is usually the case, the affected person may not notice the change in their hearing until it becomes rather severe. Consistent with this, many people who judge their own hearing to be 'normal' nevertheless have hearing loss that presumably leads to some hearing difficulty (see the supplementary material in Füllgrabe et al, 2015). Furthermore, self-report measures are affected by factors other than hearing ability, such as age and intelligence (Gatehouse, 1990). Perhaps for these reasons, self-report measures often show only a modest correlation with audiometric thresholds. For example, the 'Communication Profile' scores from the CPHI self-assessment inventory (Demorest & Erdman, 1987) had a correlation of −0.4 with the mean audiometric threshold at 0.25, 0.5, 1, 2 and 4 kHz in the analysis reported by Dobie (2011). This was the highest (negative) correlation obtained among the various combinations of audiometric frequencies that were evaluated.
Most of the studies that have reported correlations between self-reported hearing difficulty and audiometric thresholds have been based on participants with a wide range of ages and types of hearing loss. The best combination of frequencies for predicting hearing difficulties among people with NIHL (or a combination of NIHL and age) might differ from that for the general population of hearing-impaired people. For example, low-frequency hearing loss is often associated with hearing disorders such as Ménière's syndrome that lead to severe speech perception and other difficulties (Soderman et al, 2002). The inclusion of such people in the sample population will increase the correlation between self-reported hearing difficulty and audiometric thresholds at low frequencies. Indeed, Dobie (2011) pointed out that '. . . the best set of audiometric frequencies for predicting self-reported disability will include relatively higher frequencies for a sample of people with only mild to moderate loss and relatively lower frequencies for a sample of people with profound impairments'. Many people with NIHL fall into the former category. Gomez et al (2001) examined the relationship between audiometric thresholds and self-reported hearing difficulty for 376 farmers who were known to be exposed to potentially damaging levels of noise. The agreement between self-report scores of hearing difficulty and audiometric thresholds was higher for the average across 1, 2, 3, and 4 kHz than for the average across 0.5, 1, and 2 kHz or across 3, 4, 6 and 8 kHz. This finding suggests that for people with NIHL, audiometric thresholds at higher frequencies (3 and 4 kHz) are related to self-reported hearing difficulty.
In summary, while for the hearing-impaired population in general self-reported hearing difficulties are predictable to some extent from audiometric thresholds for frequencies up to 3 kHz, this does not necessarily mean that hearing loss for frequencies above 3 kHz is unimportant. Furthermore, for a population restricted to those with significant noise exposure, the average of 1, 2, 3, and 4 kHz as a predictor led to better agreement with self-reported difficulties than the average of 0.5, 1, and 2 kHz.
Predicting measured speech intelligibility from the audiogram: the study of Smoorenburg (1992)
Smoorenburg (1992) published a study of the effects of NIHL on the ability to understand speech in quiet and in noise and of the relationship of that ability to the audiogram, using 200 participants. This study had three strengths for the purposes of the present review. First, all participants were selected because they were exposed to relatively intense noise at work, so the population was representative of those seeking compensation for NIHL. Second, the participants in the study were not actually seeking compensation for their hearing loss and had no motivation for exaggerating the extent of their hearing difficulties. Third, the ability to understand speech in noise was measured for three background noise levels, so that an accurate composite estimate of that ability was obtained. All participants were younger than 55 years to minimize the effects of age.
Smoorenburg found that the speech reception threshold (SRT) for speech in quiet (the speech level required for 50% of sentences to be identified correctly) showed the highest correlation with audiometric thresholds at low frequencies. The best three-frequency predictor of the SRT for speech in quiet was the average audiometric threshold at 0.5, 1, and 2 kHz. However, most of the participants had low SRTs for speech in quiet (90% had SRTs lower than 30 dBA), indicating that they had little difficulty in understanding soft speech.
For speech in noise, the SRT (the speech-to-background ratio required for 50% of sentences to be identified correctly) showed the highest correlation with audiometric thresholds at high frequencies.
Smoorenburg determined the correlation between the audiometric threshold at each frequency and the SRT for speech in noise. The correlation showed a maximum at 4 kHz, although the correlation was above 0.6 for frequencies from 3 to 6 kHz. The best three-frequency predictor of the ability to understand speech in noise was the weighted mean threshold at 4, 5, and 2 kHz (in order of importance); the correlation in this case was 0.75. The best unweighted two-frequency predictor was the average of the audiometric thresholds at 2 and 4 kHz (denoted PTA 2,4); the correlation in this case was 0.72. For PTA 2,4 = 0 dB, the SRT was typically close to −5 dB, whereas for PTA 2,4 = 60 dB the SRT was close to +2 dB. It is noteworthy that the value of PTA 2,4 accounted for 52% of the variance in the SRTs in noise. In contrast, the best predictor of self-reported hearing difficulty in the study of Dobie (2011) accounted for only 16% of the variance in CP scores.
Again, while correlation does not prove causality, these findings suggest that hearing loss at 4 kHz (and probably 5 kHz) is important in determining the intelligibility of speech in noise for people with NIHL: the higher the audiometric threshold at 4 and 5 kHz, the worse is the intelligibility. Based on the data in Figure 10 of Smoorenburg (1992), a 10-dB increase in PTA 2,4 is associated, on average, with a 1.2-dB increase in the SNR required to identify 50% of sentences completely correctly. Such a 1.2-dB change corresponds to a 17% decrease in the number of sentences that can be correctly understood under difficult listening conditions (Plomp & Mimpen, 1979). Thus, if the noise-induced component of the hearing loss leads to an increase in PTA 2,4 of X dB, this would be expected, on average, to decrease the number of correctly identified sentences in noise by X times 1.7%. For example, if the noise-induced component of the hearing loss averaged across 2 and 4 kHz is 12 dB, this would be expected to decrease the number of correctly identified sentences in noise by about 20%. In summary, the results of Smoorenburg (1992) indicate that even relatively small noise-induced elevations in audiometric threshold at 4 kHz are associated with a markedly reduced ability to understand speech in noise.
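The arithmetic in the previous paragraph can be packaged as a one-line conversion. The 1.2 dB-per-10 dB and 17%-per-1.2 dB factors are the approximations quoted above; the function name and the 12-dB input are only illustrative.

# Worked arithmetic for the relationship quoted above: ~1.2 dB poorer SRT in noise
# per 10 dB increase in PTA_2,4, and ~17% fewer sentences understood per 1.2-dB
# SRT shift (i.e. ~1.7% per dB of PTA_2,4).
def sentence_loss_percent(nihl_pta24_db: float) -> float:
    """Expected drop (percentage points) in correctly identified sentences in noise."""
    srt_shift_db = 0.12 * nihl_pta24_db          # 1.2 dB per 10 dB of PTA_2,4
    return srt_shift_db * (17.0 / 1.2)           # 17% per 1.2-dB SRT shift

print(sentence_loss_percent(12.0))  # about 20.4, matching the "about 20%" in the text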
The results of a study of Wilson (2011) are broadly consistent with those of Smoorenburg (1992). Wilson tested 3266 veterans, many of whom had been exposed to intense noise, and had dips in their audiograms around 4 kHz, indicating NIHL. The intelligibility of speech in noise was assessed using the Words-in-Noise (WIN) test, which evaluates word recognition in multitalker babble at seven SNRs and uses the 50% correct point (in dB SNR) as the primary outcome metric. Wilson found that scores on the WIN were predicted significantly better by the average audiometric threshold at 1, 2 and 4 kHz than by the average audiometric threshold at 0.5, 1, and 2 kHz, confirming the importance of high-frequency hearing for the ability to understand speech in noise.
Conclusions and recommendations
There is very strong evidence that NIHL for frequencies above 3 kHz has adverse effects on the ability to understand speech, especially when background noise is present. Hearing loss for frequencies above 3 kHz also adversely affects the ability to localize sounds and to hear certain kinds of environmental sounds. Therefore, the audiometric threshold at 4 kHz, and possibly also at 6 kHz, should be taken into account when considering compensation for occupational NIHL in a medico-legal context. A major complaint of people with NIHL is difficulty in understanding speech in noise. A good predictor of the ability to understand speech in noise for people with NIHL is the average audiometric threshold at 2 and 4 kHz.
Acknowledgements
I thank Bob Dobie and two anonymous reviewers for helpful comments on earlier versions of this paper.
Declaration of interest: The author acts as an expert witness in medico-legal work concerned with NIHL. The author alone is responsible for the content and writing of the paper.
Funding
The work of the author is supported by the Engineering and Physical Sciences Research Council (UK, grant number RG78536). | 2018-04-03T00:26:00.760Z | 2016-07-14T00:00:00.000 | {
"year": 2016,
"sha1": "dc7782bd8813e5d008fc6f09c88d5ab3c373ff90",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/14992027.2016.1204565?needAccess=true",
"oa_status": "HYBRID",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "0ad774a89fc9edff8f12f2ebfcfa586c3ad1ceb3",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
195386777 | pes2o/s2orc | v3-fos-license | Deciphering and quantifying linear light upconversion in molecular erbium complexes† †Electronic supplementary information (ESI) available. See DOI: 10.1039/c9sc02068c
Single-center linear excited state absorption (ESA) can be implemented in isolated mononuclear erbium(iii) coordination complexes, thus fixing the zero-level of quantum yields for lanthanide-based molecular light upconversion.
Introduction
In optics, the common degradation of energy considers the conversion of high-energy photons into photons of lower energy (downshifting) together with heat dissipation. The reverse situation, in which low-energy photons are transformed into higher energy ones (up-conversion), was envisioned early on as a consequence of the non-linear dependence of the refractive index on the applied electric field, 1 and theoretically predicted by Goeppert-Mayer in 1931. 2 However, the so-called non-linear optical (NLO) response of matter is so inefficient that its experimental illustration for second-harmonic generation (in quartz) 3 and for two-photon excitation fluorescence (in Eu²⁺-doped materials) 4 was delayed until the first ruby laser providing a strong and coherent incident beam became available in 1960. 5 Beyond symmetry rules, there is no specific limitation for implementing NLO responses in matter and both macroscopic solids and (bio)molecules are prone to work as non-linear optical activators as long as huge incident power intensities in the 10⁵-10¹⁰ W cm⁻² range are used. 6 In parallel with NLO investigations, Bloembergen, 7 rapidly followed by Auzel, 8 realized that open-shell centers possessing ladder-like series of intermediate excited states with small radiative rate constants (kr), as found for trivalent lanthanides, Ln³⁺, could be used as relays for successive linear excitations. When such ions are dispersed into low-phonon solids, the non-radiative relaxation pathways (knr) are also minimized to such an extent that linear excitation (kexc) becomes competitive with relaxation (krelax = kr + knr) and intermediate excited states can efficiently absorb additional photons to reach higher-energy excited levels. The latter sequential piling up of several photons on a single activator (Excited-State Absorption = ESA) exploits linear optics and results in the conversion of low-energy infrared photons into visible photons, a phenomenon referred to as upconversion (Scheme 1). 9 The use of more efficient linear optics combined with the sequential, rather than simultaneous (in NLO), nature of the excitation negates the need for excessively high incident intensities, and upconversion can be achieved using excitation powers that are 5-10 orders of magnitude lower than those required for NLO. A further gain in efficiency of up to two orders of magnitude 9a can be generated by the use of optimized peripheral sensitizers for absorbing photons prior to the stepwise transfer of the accumulated energy onto the activator (energy transfer upconversion = ETU). Under these conditions, upconversion quantum yields as large as 4-12% have been implemented in multi-centered mixed lanthanide-doped oxides or fluorides. 10 These encouraging achievements make possible a multitude of challenging applications which aim at (i) reducing the spectral mismatch for solar cell technology, 9 (ii) designing near-infrared addressable luminescent bioprobes where the biological tissues are transparent 11 and (iii) optimizing wave guides, 12 security inks, 13 lasers and display devices, 14 and this despite the weak absorption cross sections of f-f transitions in lanthanides (on the order of σ ≈ 10⁻²⁰ cm²) 15 or of d-d transitions in transition metals (on the order of σ ≈ 10⁻¹⁹ cm²). 16
Attempts to reduce the size of upconverting solids toward the nanometric scale, for compatibility with high-technology hybrid materials and with their incorporation into biological organisms, drastically suffer from surface quenching and difficult reproducibilities. 11,17 Maximum upconversion quantum yields within the 0.1-0.5% range have been obtained for optimized nanoparticles after surface passivation 18 and/or coupling to a surface plasmon for increasing both absorption cross sections and radiative decays. 19 Because the intensity of the upconverted light Iupconversion = kr(2→0)·N|2⟩ reflects the population density of the second excited state N|2⟩, its magnitude drastically depends on the lifetime of the intermediate excited state (τ|1⟩ = [kr(1→0) + knr(1→0)]⁻¹). Solving the matrix equation depicted in Scheme 1b for trivalent erbium incorporated into long-lived doped solids (for instance τ|1⟩ ≈ ms in Gd2O2S) under steady-state excitation using reasonable incident pump power (1-10 W cm⁻²) predicts mole fractions of 2 × 10⁻³ ≤ N|2⟩ ≤ 5 × 10⁻³ for the double excited state A**. 20 Similar calculations performed for typical short-lived molecular erbium-based complexes possessing high-energy C-H, C-C and C-N oscillators (for instance τ|1⟩ = 2.8 μs in a [GaErL3] helix) do not exceed N|2⟩ ≤ 10⁻¹¹. 20 It is thus not so surprising that single-centered linear upconversion was originally thought to be undetectable in molecular lanthanide complexes, 21 and huge incident power intensities around 10⁹ W cm⁻² produced by modern pulsed femtosecond lasers were required to induce faint upconverted signals for [Ln(2,6-dipicolinate)3]3−, [Ln(EDTA)]− (Ln = Nd, Tm, Er), 22 and [Tm(DMSO)x]3+ in solution. 23 These discouraging results, combined with the approximate 0.1 W cm⁻² power density of terrestrial solar irradiance, 9c,d paved the way for the exclusive consideration of non-coherent upconversion based on triplet-triplet annihilation (TTA) as the only viable route for performing reliable and workable linear upconversion in molecules. 24 However, if the latter annihilation process occurs between two discrete triplet-state entities, their formation …
Scheme 1 (a) Kinetic scheme for the modeling of the linear single-ion ESA process occurring upon off-resonance irradiation into the activator-centered absorption band and (b) associated first-order kinetic equations. kexc(i→j), kr(i→j) and knr(i→j) are the first-order rate constants for excitation, radiative decay and non-radiative decay, respectively.
The decorrelation between light absorption, performed by specific sensitizers, and light-upconversion occurring on an optimized lanthanide activator in multicenter molecular aggregates using the ETU mechanism proved to be less challenging, and some protected Er(III) activators combined with optimized peripheral Yb(III) sensitizers in multi-doped metal-organic frameworks or coordination polymers displayed weak upconverted green Er(4S3/2 → 4I15/2) and red Er(4F9/2 → 4I15/2) signals upon intense Yb(2F5/2 ← 2F7/2) excitation. 27 Encouraged by these preliminary data collected on infinite macroscopic solids, an Er(III) activator was flanked by a couple of Cr(III) sensitizers in a molecular triple helix [CrErCrL3]9+ to give the first molecular-based green upconversion process induced by reasonable power pump intensities (Fig. 1a). 28 This success was rapidly confirmed for two other molecular sensitizer/activator pairs obtained by host-guest associations in organic solvents ([IR-806][Er(L)4] in Fig.
1b) 29 or in water ([(LEr)F(LEr)]+ in Fig. 1c). 30 None of these ETU processes were characterized by quantum yield measurements because of the very faint upconverted signals. In two recent publications, 31 Charbonnière and co-workers reported on two novel aqueous-phase assemblies made of a central Tb(III) activator surrounded by two or more Yb(III) sensitizers ([Tb(YbL)2] in Fig. 1d and e). Surprisingly, these (supra)molecular entities exhibit detectable near-infrared to green upconversion, for which only cooperative energy transfers may explain the feeding of the high-energy Tb(5D4) level (Fig. 1d and e). Though some aspects of the theoretical modeling of the latter cooperative upconversion (CU) mechanism are rather analogous to ETU, its efficiency is usually much weaker because it involves quasi-virtual pair levels between which transitions have to be described by higher-order perturbations. 9a Despite this limitation, Charbonnière and co-workers were able to estimate a quantum yield of Φup = 1.4 × 10⁻⁸ for the complex depicted in Fig. 2e (deuterated water, room temperature). 31b Boosted by these remarkable results, we reasoned that ultimate miniaturization using a single-site excited-state mechanism (ESA) implemented in a trivalent erbium complex should become an obvious target for setting a zero-level for the quantification of molecular upconversion. Taking advantage of the rare dual visible (Er(4S3/2 → 4I15/2) at 542 nm, green) and near-infrared (Er(4I13/2 → 4I15/2) at 1520 nm) downshifted emissions observed upon UV excitation of the triple-helical [Er(L1)3]3+ complex (Fig. 2a), a chromophore which closely parallels [CrErCrL3]9+ (Fig. 1a), 32 we recently discovered that some weak upconverted green signals could be generated upon direct near-infrared excitation of the erbium center in this system. 33 Building on these preliminary data, we report here on the quantification and detailed mechanism rationalizing the rare single-site upconversion occurring in [Er(L1)3]3+. Comparison with related optical processes implemented in analogous, but stepwise deprotected, [Er(Lk)3]3+ (Lk = L2-L4) mononuclear triple helices offers an opportunity for establishing some preliminary rules for implementing single-center erbium upconversion in molecular complexes (Fig. 2).
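As an illustration of why the steady-state argument discussed above is so punishing for molecules, the sketch below solves a generic three-level ESA rate-equation model of the Scheme 1 type. The cross-sections, branching ratio and lifetimes are order-of-magnitude guesses rather than the fitted values of the paper or of ref. 20, but they reproduce the qualitative conclusion that N|2⟩ collapses once the intermediate-state lifetime drops to microseconds.

# Minimal numerical sketch of a Scheme-1b-style steady-state ESA model
# (levels |0>, |1>, |2>). Excitation rates scale with the pump intensity;
# decay rates lump radiative and non-radiative contributions. All parameter
# values are illustrative guesses.
import numpy as np

def steady_state_populations(pump_w_cm2, sigma01=1e-20, sigma12=1e-20, tau1=2.8e-6, tau2=1e-6):
    """Return (N0, N1, N2) mole fractions for a 3-level ESA scheme."""
    photon_flux = pump_w_cm2 / 2.05e-19          # photons cm-2 s-1 near 966 nm (E = hc/lambda)
    k01 = sigma01 * photon_flux                  # ground-state absorption rate (s-1)
    k12 = sigma12 * photon_flux                  # excited-state absorption rate (s-1)
    k10 = 1.0 / tau1                             # |1> -> |0> total decay (s-1)
    k2 = 1.0 / tau2                              # |2> total decay, split 50/50 to |1> and |0> (assumed)
    k21, k20 = 0.5 * k2, 0.5 * k2
    A = np.array([[1.0, 1.0, 1.0],               # N0 + N1 + N2 = 1
                  [k01, -(k10 + k12), k21],      # dN1/dt = 0
                  [0.0, k12, -(k21 + k20)]])     # dN2/dt = 0
    b = np.array([1.0, 0.0, 0.0])
    return np.linalg.solve(A, b)

N0, N1, N2 = steady_state_populations(pump_w_cm2=10)
print(f"N|1> = {N1:.2e}, N|2> = {N2:.2e}")   # N|2> is vanishingly small for microsecond lifetimes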
Light-downshifting operating in the mononuclear triple-helical complexes
With these structural characteristics in mind, it is not so surprising that ligand-centered excitation at 401 nm of these trivalent erbium complexes [Er(Lk)3]3+ in the solid state and in solution systematically showed dual downshifted visible Er(4S3/2 → 4I15/2) and near-infrared Er(4I13/2 → 4I15/2) luminescence at 542 nm and 1515 nm, respectively (Fig. 3). 32 The log-log plots of the intensity of the emitted light with respect to the incident power return slopes around 1.0 (Fig. 3), 26 which are the signatures of single-photon ligand-centered excitation processes followed by energy migration according to the antenna effect (Fig. 4a). 32 Please note in Fig. 3a the superimposition of the visible Er(2H11/2 → 4I15/2) and Er(4S3/2 → 4I15/2) emission bands with the tails of the residual broad ligand-centered 1,3π* → 1π bands, which is typical for incomplete metal sensitization via the antenna mechanism.
Related linear upconverted visible signals at 542 nm (Er(4S3/2 → 4I15/2)) and 522 nm (Er(2H11/2 → 4I15/2)) can be induced in solution (Fig. 8) or in the solid state (Fig. S13 and S14†) via Er-centered excitation of the Er(4I11/2 ← 4I15/2) transition at 966 nm and using power intensities in the 1-78 W cm⁻² range. Again, the upconversion process is more efficient in [Er(L1)3]3+, when extended 2,6-bis(benzimidazol-2-yl)pyridine ligands are wrapped around Er(III) instead of terpyridines in [Er(Lk)3]3+ (Lk = L2-L4; Fig. 8). In the absence of easily accessible organic dyes with well-established quantum yields following excitation at 966 nm, we did not monitor absolute quantum yields at this excitation wavelength. As previously discussed when analyzing downshifting processes (see the mechanism in Fig. 4b), excitation of the Er(4I11/2 ← 4I15/2) transition at 966 nm results in multiple successive linear excitations prior to reaching the intermediate Er(4I13/2) relay, thus leading to slopes within the 3.0-4.0 range for the linear log(I)-log(P) plots characterizing the ultimate upconversion processes (Fig. S13 and S14†). The minimum slopes of 2.6-2.7 are still compatible with two- and three-photon processes which avoid the use of the Er(4I13/2) intermediate excited state as a relay (Fig. 7b, left). However, the most frequent slopes reach 3.0-4.0 and imply at least one additional successive linear excitation and a 4-photon mechanism, which is a logical consequence of the involvement of the intermediate Er(4I13/2) level as a relay (Fig. 7b, right). The lack of efficient non-radiative Er(4I11/2) → Er(4I13/2) relaxation (ΔE ≈ 3700 cm⁻¹), previously responsible for the unusual two-photon downshifting mechanism observed in these complexes following 966 nm excitation (Fig. 4b), appears to be a severe handicap for exploiting the 'long-lived' (2-6 μs at 298 K) 32 intermediate Er(4I13/2) excited level as a relay for promoting visible upconversion (Fig. 7b). Finally, excitations at 966 nm of the Er(4I11/2 ← 4I15/2) transition exhibit some standard decreases of the upconverted intensities with increasing temperatures (Fig. S15†).
[Figure caption, beginning truncated] … (solid state, 298 K) recorded upon laser excitation of the Er(4I9/2 ← 4I15/2) transition at λexc = 801 nm (ν̃exc = 12 284 cm⁻¹) and using increasing incident pump intensities focused on a spot size of ≈0.07 cm² (the blank (red curve) was recorded upon irradiation of the copper plate support covered with silver glue at a maximum intensity P = 29 W cm⁻²) and (b) corresponding log-log plots of upconverted intensities I as a function of incident pump intensities P (in W cm⁻²); the straight lines correspond to extrapolated linear fits. (c) Dependence of upconverted intensities I as a function of temperature (solid state, P = 29 W cm⁻²; the dashed lines are only guides for the eye) and (d) upconverted emissions for [Er(Lk)3]3+ (Lk = L1-L4) complexes recorded using an incident pump intensity P = 21 W cm⁻² in acetonitrile solution (c ≈ 10 mM). The blank (red curve) was recorded from pure acetonitrile solvent using an incident pump intensity P = 21 W cm⁻².
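For readers unfamiliar with the slope analysis invoked above, the short sketch below shows how the apparent number of photons involved in an upconversion process is extracted from a log(I)-log(P) plot. The pump powers span the 1-78 W cm⁻² range quoted in the text, but the intensity values are synthetic, not measurements from the paper.

# Small sketch of the slope analysis: fit log(I) versus log(P) and read off the
# number of photons from the slope. Intensities below are synthetic (slope ~2
# plus 5% noise).
import numpy as np

pump = np.array([1, 2, 5, 10, 20, 50, 78], dtype=float)   # incident pump intensities, W cm-2
intensity = 3.0 * pump**2 * (1 + 0.05 * np.random.default_rng(1).standard_normal(len(pump)))

slope, intercept = np.polyfit(np.log10(pump), np.log10(intensity), 1)
print(f"log-log slope = {slope:.2f}  ->  consistent with a {round(slope)}-photon process")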
Conclusions
Upon ligand-centered or erbium-centered optical excitation, the series of nine-coordinate mononuclear triple-helical erbium(III) complexes [Er(Lk)3]3+ (Lk = L1-L4) all exhibit the expected downshifted near-infrared emission at 1515 nm, which originates from the lowest-energy Er(4I13/2) excited level (solid state and solution, 10-298 K). While single-photon mechanisms characterize sensitization via ligand-centered π* ← π light absorption at 401 nm or erbium-centered Er(4I9/2 ← 4I15/2) absorption at 801 nm, the lack of efficient non-radiative Er(4I11/2) → Er(4I13/2) relaxation in these complexes results in unusual two-photon downshifting mechanisms upon Er(4I11/2 ← 4I15/2) excitation at 966 nm. Because of vibrational quenching of the near-infrared Er(4I13/2 → 4I15/2) transition with high-energy oscillators, the Er(4I13/2) lifetime is reduced by an approximate factor of three when terminal benzimidazoles in [Er(L1)3]3+ (closest intramolecular Er⋯H distance = 3.86 Å) are replaced with pyridines in [Er(Lk)3]3+ (Lk = L2-L4; closest intramolecular Er⋯H distance = 3.42 Å). With these photophysical characteristics in mind, the induction of blue-green visible upconverted signals upon erbium-centered excitation of molecular [Er(Lk)3]3+ (Lk = L1-L4) complexes using reasonable power intensities (1-50 W cm−2) is logically more efficient in [Er(L1)3]3+ and corresponds to a two-photon mechanism for erbium-centered Er(4I9/2 ← 4I15/2) excitation at 801 nm and to multiple-photon processes (3-4 photons) for Er(4I11/2 ← 4I15/2) excitation at 966 nm. Although weak, the associated quantum yields recorded in acetonitrile (0.4 × 10−8 ≤ Φup ≤ 1.6 × 10−8 for λexc = 801 nm) favorably compare with quantitative data reported for molecular upconversion using multi-center cooperative upconversion in deuterated water.31b Taking the ESA mechanism operating in these [Er(Lk)3]3+ complexes as the 'zero-level' of efficiency of molecular upconversion, we should remember that Auzel taught us that optimised sensitisation followed by energy transfer according to the ETU mechanism with the help of adapted sensitizers in SA diads (S = sensitizer, A = lanthanide activator) may improve the upconversion output by two orders of magnitude.8,9 Additionally, moving from molecular SA diads to SAS triads, where S is a long-lived sensitizer (i.e. millisecond lifetimes as observed in Cr(III) complexes) may theoretically further improve upconversion by more than three orders of magnitude.20 Altogether, the connection of two adapted long-lived sensitizers on each side of a central Er(III) activator to give a structure similar to that shown in Fig. 1a is expected to increase the quantum yield by roughly five orders of magnitude compared to the ESA mechanism, thus reaching 0.1% efficiency as an upper limit for molecular upconversion using the ETU mechanism. Further optimization exploiting standard perdeuteration41 or perfluorination42 could be used as wildcards for final tuning.

Fig. 7 Jablonski diagram summarizing the mechanisms of the Er-centered upconversion processes operating in the complexes [Er(Lk)3]3+ (Lk = L1-L4) upon excitation of (a) the Er(4I9/2 ← 4I15/2) transition at 801 nm and (b) the Er(4I11/2 ← 4I15/2) transition at 966 nm. Excitation (dashed upward arrows), non-radiative multiphonon relaxation (downward undulating arrows), thermal equilibria (upward undulating arrows) and radiative emission processes (straight downward arrows).
Interestingly, Er(III) protection from high-energy oscillators is helpful, but not sufficient, to design coordination complexes programmed for molecular upconversion. For instance, closely related 1 : 2 complexes [Er(L5)2 32 In this context, it is worth remembering here that solid films of Na3[Er(2,6-dipicolinate)3]·xH2O (x = 13-15), i.e. the most simple triple-helical Er(III) complex with rather long intramolecular Er⋯H distances of 5.37 Å, also failed in providing either downshifting or upconversion processes in the solid state.21 A careful look at the crystal structures of the latter complexes43 shows that interstitial water molecules accumulate along the threefold axis of the [Ln(2,6-dipicolinate)3]3− activators, thus leading to shorter intermolecular Er⋯H distances around 3.56 Å, an organization which appears to be incompatible with the detection of any radiative signals following excitation.
Conflicts of interest
There are no conflicts to declare. | 2019-06-26T14:12:56.604Z | 2019-06-06T00:00:00.000 | {
"year": 2019,
"sha1": "194b5e959e8080ffcc7dd712b75b90267654f543",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2019/sc/c9sc02068c",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "60a00fa07c437e3addb41c0d197b4dfa135b0de4",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
133895748 | pes2o/s2orc | v3-fos-license | Influence of Blasted Uranium Ore Heap on Radon Concentration in Confined Workspaces of Shrinkage Mining Stope
A calculation model for radon concentration in shrinkage mining stopes under various ventilation conditions was established in this study. The model accounts for the influence of permeability and area of the blasted ore heap, ventilation air quantity, and airflow direction on radon concentration in a confined workspace; these factors work together to allow the engineer to optimize the ventilation design. The feasibility and effectiveness of the model were verified by applying it to mines with elevated radon radiation exposure. The model was found to accurately predict changes in radon concentration according to the array of influence factors in underground uranium mines.
Introduction
Radon concentration is higher in underground uranium mines and is known to be a severe health hazard for miners. Mining activity results in the release of radon gas and its daughter products into confined workspaces, where miners may be exposed to high levels of radon in ore heaps or from other sources. Poor ventilation conditions can severely threaten the health and safety of miners due to exposure to radioactive materials. There is urgent demand for workable techniques to quantify radon concentration in confined workspaces in order to design ventilation systems for blasted uranium ore heaps in shrinkage mining stopes. Radon measurement techniques were first investigated in coal mines in Pakistan [1] and Western Turkey [2]. El-Fawal [3] established a calculation model for airflow, air pressure and radon/daughter concentration in mine ventilation networks. Sahu [4] evaluated the effect of various ventilation parameters on radon exposure to miners in underground uranium mines; airflow rate was considered the key parameter in controlling the radon/daughter concentrations in the mine. These studies have focused mostly on radon concentration measurement in assessing radiological hazards. Many have also explored the influence of ventilation parameters on radon concentration. Data gathered in previous studies were utilized to build a calculation model for radon concentration in shrinkage mining stopes with various ventilation parameters. The calculation results may provide a workable reference for ventilation design in underground shrinkage uranium mines.

Mathematical model

The ore particles in blasted uranium ore heaps vary in size and can be roughly divided into n grades accordingly: 0~r1, r1~r2, r2~r3, ..., rn-1~rn. The equivalent radius of the ith grade ore particles can be calculated as follows: where rmin and rmax are the minimum and maximum sizes of the ith grade ore particles (m). The mass fraction of the ith grade ore particles is a function of the fractal dimension and particle size; it can be calculated as follows [5]: where Df is the fractal dimension of the particle size distribution. The bottom surface of the uranium ore heap forms the origin of the coordinates and the top is the positive direction. Then the steady release-diffusion-seepage migration equation in the blasted uranium ore heap is as follows: where η2 is the ore heap porosity; K1 is the equivalent radon decay constant of the medium; K2 is the equivalent radon release rate of the medium (Bq m−3 s−1).

Determination of radon exhalation rate of blasted uranium ore heap

Figures 1 and 2 show the shrinkage stope with ascentional and descentional ventilation, respectively. The pressure difference between the top and bottom surfaces of the ore heap is formed during the ventilation period. The value of this pressure difference is approximately equal to the ventilation resistance of the stull-supporting raise, which can be calculated as follows: where αf is the frictional resistance coefficient of the stull-supporting raise (Pa s m−2); p is the stull-supporting raise perimeter (m); s is the stull-supporting raise area (m2); Q is the ventilation air quantity of the stope (m3 s−1); and L0 is the stull-supporting raise length (m). The length of the stope's stull-supporting raise can be calculated as follows: where H is the vertical height of the blasted uranium ore heap (m); θ is the ore body obliquity; and L1 is the thickness of the blasted uranium ore heap (m). The air seepage velocity in the blasted uranium ore heap can be calculated through Darcy's law. In ascentional ventilation, the radon exhalation rate of the top surface of the blasted uranium ore heap is: where D2 is the radon diffusion coefficient of the blasted uranium ore heap (m2 s−1). The radon diffusion coefficient can be calculated as follows [6]: where D0 is the radon diffusion coefficient of air (m2 s−1); T is absolute temperature (K); and m is water saturation in porous media. The radon exhalation rate of the blasted uranium ore heap bottom surface is: During descentional ventilation, the radon exhalation rate of the top surface of the blasted uranium ore heap is: The radon exhalation rate of the blasted uranium ore heap bottom surface is: Under the effect of ventilation, radon diffusion is negligible if air seepage plays a dominant role in radon migration through the blasted uranium ore heap. The radon exhalation rate of the heap under the effect of ventilation can then be described as follows:
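The Darcy-law seepage estimate referred to above can be sketched as follows. This is a minimal illustration assuming the elementary form v = k ΔP/(μ L1) with an assumed air viscosity; it is not a reproduction of the paper's exact expressions.

MU_AIR = 1.8e-5  # dynamic viscosity of air in Pa s (assumed standard value near 20 °C)

def seepage_velocity(k, dP, L1, mu=MU_AIR):
    """Superficial air seepage velocity (m s-1) through the blasted ore heap.
    k  : heap permeability (m2)
    dP : pressure difference across the heap (Pa)
    L1 : heap thickness along the flow direction (m)
    """
    return k * dP / (mu * L1)

# Example with assumed (not measured) values:
print(seepage_velocity(k=1e-8, dP=5.0, L1=20.0))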
Radon concentration calculation model in confined shrinkage mining stope workspace
When calculating radon concentration in confined workspaces, as shown in Figures 2 and 3, the air inlet of the confined workspace sits at starting point 0 and the radon concentration at the point L meters from the air inlet is calculated as follows: where C0 is the air inlet radon concentration (Bq m−3); Jw and Jd are the radon exhalation rates of the surrounding rock and the top ore body (Bq m−2 s−1); W is the width of the workspace (m); H is the height of the workspace (m); λ is the radon decay constant (s−1); and Q is the air quantity (m3 s−1). The radon decay constant is 2.1×10−6 (s−1), the length of the workspace does not exceed 100 m, and the air velocity in the workspace is above 0.25 (m s−1), so Equation (12) can be simplified as follows: Again, the length of the confined workspace is L2. In ascentional ventilation, −Jdui = Jss. The exhaust radon concentration increment in the confined workspace caused by the blasted uranium ore heap can be calculated as follows: Under descentional ventilation, Jdui = Jxs, so the exhaust radon concentration increment of the workspace caused by the blasted uranium ore heap is calculated as follows:
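The increment caused by the heap can be illustrated with a simple steady-state, well-mixed mass balance: increment ≈ exhalation rate × exhaling area / air quantity. This is a sketch of the idea only and is not claimed to be identical to the paper's exact equations.

def radon_increment(J, area, Q):
    """Approximate exhaust radon concentration increment (Bq m-3) from the heap,
    assuming radon decay is negligible over the workspace transit time.
    J    : radon exhalation rate of the heap surface (Bq m-2 s-1)
    area : exhaling surface area (m2)
    Q    : ventilation air quantity (m3 s-1)
    """
    return J * area / Q

# Example with the heap area from the parameter section and an assumed J:
print(radon_increment(J=1.2, area=200.0, Q=6.0))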
Determining parameters
Here, we assume that the uranium ore grade is 0.5%, the value of α0 is 9 (Bq m−3 s−1), the ore particle size is 0-0.3 m, the particle size distribution fractal dimension is 2.0, the ore porosity is 0.035, the ore heap porosity is 0.333, the ore humidity is 0, the air temperature is 20 °C, αf is 0.05 (Pa s m−2), and the workspace height is 2 m. The top and bottom surfaces of the blasted uranium ore heap have the same area, Ss = 200 m2.
Influence of ventilation airflow directions on radon concentration increment in confined workspaces
For simplicity, we assume that the height of the blasted ore heap is 20 m regardless of changes in other parameters. Figure 3 shows that in ascentional ventilation, as ventilation air quantity increases from 1 to 2 (m3 s−1), ventilation increases the velocity of radon exhalation to the point where the amount of radon emitted is less than the amount of radon exhaled. When the air quantity is over 2 (m3 s−1), however, the amount of radon emitted is greater than the amount of radon exhaled. With descentional ventilation, when ventilation air quantity is from 1 to 2 (m3 s−1), the radon concentration increment in the workspace gradually decreases. When ventilation air quantity is from 2 to 12 (m3 s−1), the radon concentration increment is close to 0. When permeability is invariable, the variations in radon concentration with ascentional ventilation are greater than with descentional ventilation. Figure 4 shows that when air quantity is 6 (m3 s−1) and permeability is 1×10−8 (m2), the radon concentration increment with ascentional ventilation increases as ore heap height increases. With descentional ventilation, the radon concentration increment stays close to 0 regardless of heap height.

Figure 5. Variation curves of ore heap area with radon concentration increment caused by ore heap exhalation

Figure 6. Variation curves of radon concentration increment in confined workspace of shrinkage stope with different permeability

Figure 5 shows that when air quantity is 6 (m3 s−1) and permeability is 1×10−8 (m2), the radon concentration changes as the heap area increases. As area increases, however, the variation tendency of radon concentration increment in the workspace gradually decreases. Although the area of the blasted uranium ore heap has nothing to do with the exhalation rate, it does alter the radon exhalation area. Figure 6 shows that radon concentration in the workspace under ascentional ventilation gradually increases until leveling off as heap permeability increases. With descentional ventilation, radon concentration decreases as permeability increases until it nears 0. Under ascentional ventilation, the airflow forces radon exhalation from the ore heap into the workspace. The higher the heap permeability, the more radon is exhaled. The opposite effects occur with descentional ventilation.
Frictional resistance coefficient affects radon concentration increment in confined workspace
Figure 7. Variation curves of different frictional resistance coefficient with radon concentration increment caused by ore heap exhalation

Figure 7 shows that when air quantity is 6 (m3 s−1) and permeability is 1×10−8 (m2), the radon concentration increment continuously increases under ascentional ventilation as the frictional resistance coefficient increases; the growth rate gradually decreases over the curve. Under descentional ventilation, the radon concentration is less influenced by the frictional resistance coefficient.
Conclusions
A calculation model for radon concentration as affected by source and ventilation conditions was established in this study for application in shrinkage mining stopes. The radon exhalation rate with varying airflow parameters was determined accordingly. The model accounts for the source of radon in the underground uranium mine, and allows the engineer to predict changes in radon concentration based on an array of influence factors. The height, area, and permeability of the heap are positively correlated with the radon concentration increment under ascentional ventilation in the workspace; ventilation air quantity has the opposite effect. Radon concentration increases to a greater extent under ascentional ventilation than descentional ventilation. These observations may be used as a reference for ventilation design to control radon in shrinkage mining stopes. | 2019-04-27T13:05:32.410Z | 2017-07-01T00:00:00.000 | {
"year": 2017,
"sha1": "08b7cabd6a56daf9db6fccb0863fb820dabd3b9c",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/73/1/012030",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "bbe0ae7247ddbe554b55c59e545eafe5ce62c871",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Geology"
]
} |
3643883 | pes2o/s2orc | v3-fos-license | The Clustering of the SDSS Main Galaxy Sample II: Mock galaxy catalogues and a measurement of the growth of structure from Redshift Space Distortions at $z=0.15$
We measure Redshift-Space Distortions (RSD) in the two-point correlation function of a sample of $63,163$ spectroscopically identified galaxies with $z<0.2$, an epoch where there are currently only limited measurements, from the Sloan Digital Sky Survey (SDSS) Data Release 7 Main Galaxy Sample. Our sample, which we denote MGS, covers 6,813 deg$^2$ with an effective redshift $z_{eff}=0.15$ and is described in our companion paper (Paper I), which concentrates on BAO measurements. In order to validate the fitting methods used in both papers, and derive errors, we create and analyse 1000 mock catalogues using a new algorithm called PICOLA to generate accurate dark matter fields. Haloes are then selected using a friends-of-friends algorithm, and populated with galaxies using a Halo-Occupation Distribution fitted to the data. Using errors derived from these mocks, we fit a model to the monopole and quadrupole moments of the MGS correlation function. If we assume no Alcock-Paczynski (AP) effect (valid at $z=0.15$ for any smooth model of the expansion history), we measure the amplitude of the velocity field, $f\sigma_{8}$, at $z=0.15$ to be $0.49_{-0.14}^{+0.15}$. We also measure $f\sigma_{8}$ including the AP effect. This latter measurement can be freely combined with recent Cosmic Microwave Background results to constrain the growth index of fluctuations, $\gamma$. Assuming a background $\Lambda$CDM cosmology and combining with current Baryon Acoustic Oscillation data we find $\gamma = 0.64 \pm 0.09$, which is consistent with the prediction of General Relativity ($\gamma \approx 0.55$), though with a slight preference for higher $\gamma$ and hence models with weaker gravitational interactions.
INTRODUCTION
The observed 3D clustering of galaxies provides a wealth of cosmological information: the comoving clustering pattern was encoded in the early Universe and thus depends on the physical energy densities (e.g. Peebles & Yu 1970; Sunyaev & Zel'dovich 1970; Doroshkevich et al. 1978), while the bias on large scales encodes primordial non-Gaussianity (Dalal et al. 2008). Secondary measurements can be made from the observed projection of this clustering, including using Baryon Acoustic Oscillations (BAO) as a standard ruler (Seo & Eisenstein 2003; Blake & Glazebrook 2003) or by comparing clustering along and across the line-of-sight (Alcock & Paczynski 1979). In this paper we focus on a third type of measurement that can be made, called Redshift-Space Distortions (RSD; Kaiser 1987). RSD arise because redshifts include both the Hubble expansion, and the peculiar velocity of any galaxy. The component of the peculiar velocity due to structure growth is coherent with the structure itself, leading to an enhanced clustering signal along the line-of-sight. The enhancement to the overdensity is additive, with the extra component dependent on the amplitude of the velocity field, which is commonly parameterised on large scales by f σ8, where f ≡ d ln D/d ln a is the logarithmic derivative of the growth factor with respect to the scale factor and σ8 is the linear matter variance in a spherical shell of radius 8 h−1 Mpc. Together these parameterise the amplitude of the velocity power spectrum.
The largest spectroscopic galaxy survey undertaken to-date is the Sloan Digital Sky Survey (SDSS), which has observed multiple samples over its lifetime. The SDSS-I and SDSS-II (York et al. 2000) observed two samples of galaxies: the r-band selected main galaxy sample (Strauss et al. 2002), and a sample of Luminous Red Galaxies (LRGs; Eisenstein et al. 2001) to higher redshifts. The Baryon Oscillation Spectroscopic Survey (BOSS; Dawson et al. 2012), part of SDSS-III (Eisenstein et al. 2011) extended the LRG sample to higher redshifts with a sample at z ∼ 0.57 called CMASS, and a sample at z ∼ 0.32 called LOWZ that subsumed the SDSS-II LRG sample. SDSS-IV will extend the LRG sample to even higher redshifts, while simultaneously observing a sample of quasars and Emission Line Galaxies (ELGs).
In this paper we revisit the SDSS-II main-galaxy sample, herein denoted MGS, applying the latest analysis techniques. We have sub-sampled this catalogue to select high-bias galaxies at z < 0.2 (details can be found in our companion paper Ross et al. 2014, Paper I, which also presents BAO-scale measurements made from these data). This sampling positions the galaxies, in redshift, between BOSS LOWZ and the 6-degree Field Galaxy Survey (6dFGS; Beutler et al. 2011), filling in a gap in the chain of measurements at different redshifts. Selecting high-bias galaxies means that we can easily simulate the sample. In this paper we present RSD measurements made using the MGS data.
Recent analyses of BOSS have emphasised the importance of accurate mock catalogues (Manera et al. 2013, 2014); these provide both a mechanism to test analysis pipelines and to determine covariances for the measurements made. For the MGS data, we create 1000 new mock catalogues using a fast N-body code based on a new parallelisation of the COLA algorithm (Tassev et al. 2013), designed to quickly create approximate evolved dark matter fields. Haloes are then selected using a friends-of-friends algorithm, and a Halo-Occupation Distribution based method is used to populate the haloes with galaxies. The algorithms and methods behind PICOLA can be found in Howlett et al. (2014).
Our paper is outlined as follows: In Section 2 we describe the properties of the MGS data. In Section 3 we summarise how we create dark matter halo simulations using PICOLA. In Section 4, we describe how we calculate clustering statistics, determine the halo occupation distribution we apply to mock galaxies to match the observed clustering, and test for systematic effects. In Section 5, we describe how we model the redshift space correlation function using the Gaussian Streaming/Convolved Lagrangian Perturbation Theory (CLPT) model of Wang et al. (2014). In Section 6, we describe how we fit the MGS clustering in the range 25 h−1 Mpc < s < 160 h−1 Mpc, test our method and validate our choice of fitting parameters and priors using the mock catalogues. In Section 7 we present the results from fitting to the MGS data and present our constraints on f σ8. In Section 8, we compare our measurements to RSD measurements at other redshifts, including results from Beutler et al. (2012); Chuang et al. (2013); Samushia et al. (2012) and Samushia et al. (2014), and test for consistency with General Relativity. We conclude in Section 9. Where appropriate, we assume a fiducial cosmology given by Ωm = 0.31, Ωb = 0.048, h = 0.67, σ8 = 0.83, and ns = 0.96.

Figure 1. The blue area shows a flat, all-sky projection of the footprint of our MGS sample, which occupies 6,813 deg2. The red area shows the same geometry, after a 180° rotation. This illustrates how we produce two mock galaxy samples from every full-sky dark matter halo catalog.
The Completed SDSS Main Galaxy Sample
We use the same SDSS DR7 MGS data as analysed in Paper I, which is drawn from the completed data set of SDSS-I and SDSS-II. These surveys obtained wide-field CCD photometry (Gunn et al. 1998, 2006) in five passbands (u, g, r, i, z; Fukugita et al. 1996), amassing a total footprint of 11,663 deg2, internally calibrated using the 'uber-calibration' process described in Padmanabhan et al. (2008), and with a 50% completeness limit of point sources at r = 22.5 (Abazajian et al. 2009). From these imaging data, the main galaxy sample (MGS; Strauss et al. 2002) was selected for spectroscopic follow-up, which, to good approximation, consists of all galaxies with rpet < 17.77, where rpet is the extinction-corrected r-band Petrosian magnitude, within a footprint of 9,380 deg2 (Abazajian et al. 2009).
For our analysis, we start with the SDSS MGS value-added galaxy catalog 'safe0' hosted by NYU 1 (NYU-VAGC), which was created following the methods described in Blanton et al. (2005). The catalog includes K-corrected absolute magnitudes, determined using the methods of Blanton et al. (2003), and detailed information on the mask. We only use the contiguous area in the North Galactic cap and only areas where the completeness is greater than 0.9, yielding a footprint of 6,813 deg2, compared to the original 7,356 deg2. We create the mask describing this footprint from the window given by the NYU-VAGC, which provides the completeness in every mask region, and the MANGLE software (Swanson et al. 2008). We also use the MANGLE software to obtain angular positions of unclustered random points, distributed matching the completeness in every mask region. The angular footprint of our sample is displayed in blue in Fig. 1. The red patch in Fig. 1 shows the angular footprint of our galaxy sample after rotating the coordinates via RA ⇒ RA + π, DEC ⇒ −DEC and once again applying the mask. As described in Section 3, we choose to create full-sky simulations, and in doing so, we can use the mask to create two mock galaxy catalogues that match our footprint, reducing the noise in our estimates of the covariance matrix at almost no extra cost 2 . We make further cuts on the NYU VAGC safe0 sample based on colour, magnitude, and redshift. These are 0.07 < z < 0.2, Mr < −21.2 and g−r > 0.8, where Mr is the r-band absolute magnitude provided by the NYU-VAGC. These cuts produce a sample of moderately high bias (b ∼ 1.5), with a nearly constant number density that is independent of BOSS and 6dFGS samples. The resulting sample contains 63,163 galaxies. The redshift distribution is shown in Fig. 2. The effective redshift of our sample is zeff = 0.15, calculated as described in Paper I, where further details on the sample selection criteria can be found. Fig. 2 also shows (solid line) the average number density of the mock galaxy catalogues described in Section 3. We determine the n(z) that we apply to the mocks by fitting to a model with two linear relationships and a transition redshift. The best-fit is given by n(z) = 0.0014z + 0.00041 for z < 0.17 and n(z) = 0.00286 − 0.0131z for z ≥ 0.17.
We see that the mock galaxy catalogues agree with the data very well, with χ 2 = 25 for 22 degrees of freedom (26 redshift bins and 4 independent fitting parameters). The errors come from the standard deviation in number density across the set of mock catalogues.
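For reference, the fitted two-piece n(z) above can be evaluated as in the following minimal sketch (the density units are assumed to be h3 Mpc−3, as is conventional for such fits):

import numpy as np

def nbar(z):
    """Best-fit MGS number density n(z) from the two-piece linear model above."""
    z = np.asarray(z, dtype=float)
    return np.where(z < 0.17, 0.0014 * z + 0.00041, 0.00286 - 0.0131 * z)

print(nbar([0.10, 0.15, 0.19]))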
SIMULATIONS
Simulations of our MGS data are vital in order to accurately estimate the covariance matrix of our clustering measurements and to perform systematic tests on our BAO and RSD fitting procedures. Of order 1000 mock galaxy catalogues (mocks) are necessary to ensure noise in the covariance matrix does not add significant noise to our measurements (Percival et al. 2014). For BOSS galaxies, such mocks were created using the methods described in Manera et al. (2013, 2014). The galaxies in our sample have lower bias than those of BOSS, and we therefore require a method of producing dark matter halos at higher resolution than used in BOSS, yet in such a way that we can still create a large number of realisations in a timely fashion. For this we have created the code PICOLA, a highly developed, planar-parallel implementation of the COLA method of Tassev et al. (2013); this implementation is described in Howlett et al. (2014), and a user guide that will be included with the public release of the code. It should be noted that a similar method has also recently been used to create mock catalogues for the WiggleZ survey, though the codes were developed independently.

Figure 3. The power spectrum of the dark matter field in a cubic box from the PICOLA and GADGET runs described in the text. We can see good agreement between the two even into the non-linear regime.
In this section, we describe how we use PICOLA to produce dark matter fields and then halo catalogues, and how we apply a Halo Occupation Distribution (HOD, Berlind & Weinberg 2002) prescription to these halo catalogues to produce mock galaxy catalogues. We expect that the methods we use to generate these halo catalogues will be generally applicable to any future galaxy survey analyses. In Section 4, we describe how we specifically fit an HOD model to the measured clustering of the MGS to produce mocks that simulate our MGS data. These mocks are used in the RSD analyses we present and the BAO analysis of Paper I.
Producing Dark Matter fields
We generate 500 dark matter snapshot realisations using our fiducial cosmology, which we convert into 1000 mock galaxy catalogues. Although our code is capable of generating lightcones 'on the fly' without sacrificing speed, we stick with snapshots for simplicity in later stages and because we expect the inaccuracies arising from using snapshots to be small due to the low redshift of our sample. For each simulation we evolve 1536³ particles, with a mesh size equal to the mean particle separation, in a box of edge length 1280 h−1 Mpc. We choose this volume as it is large enough to cover the full sky out to the maximum comoving distance of our sample at z = 0.2 (for our fiducial cosmology this is ∼570 h−1 Mpc). We evolve our simulation from z = 9.0 to z = 0.15, using 10 timesteps equally spaced in a, the scale factor. This results in a mass resolution of ∼5 × 10^10 h−1 M⊙, a factor of 10 smaller than that used for the BOSS LOWZ mock catalogues. Each simulation takes around 20 minutes (including halo-finding) on 256 cores. Fig. 3 shows the power spectrum of the dark matter fields for one of our PICOLA simulations and for a Tree-PM N-Body simulation performed using GADGET-2 (Springel 2005). Both simulations use the same initial conditions and the same mesh resolution. We can see that the power spectra agree to within 2 percent across all scales of interest to BAO measurements and the agreement continues to within 10 percent to k ∼ 0.8 h Mpc−1.

Figure 4. A comparison of the halo mass function from our GADGET-2 and PICOLA simulations run from the same initial conditions. We see a lack of halos on small scales due to the finite mesh resolution, but this is easily compensated for with the HOD fitting described later.
From Dark matter to Halos
We generate halos for our PICOLA dark matter simulations using the friends-of-friends algorithm (FoF; Davis et al. 1985) with linking length equal to the commonly used value of b = 0.2, in units of the mean particle separation. We average over all of the constituent particles of each halo to calculate the position and velocity of the centre-of-mass. The halo mass, M, is given by the individual particle mass multiplied by the number of constituent particles that make up the halo. The virial radius is then estimated as $R_{\rm vir} = \left(3M/4\pi\Delta_{\rm vir}\rho_c\right)^{1/3}$, where ρc ≈ 2.77 × 10^11 h² M⊙ Mpc−3 is the critical density, and we use a value ∆vir = 200 (e.g. Tinker et al. 2008). The clustering of the dark matter particles is recovered well by PICOLA. It is slightly under-represented on small scales, but we do not need to modify the linking length in order to recover our halos (unlike, for example, in Manera et al. 2013). Fig. 4 shows the level of agreement between halo mass functions recovered from our matched parameter PICOLA and GADGET-2 runs. The difference in halo number density for low-mass halos is a direct consequence of the mesh resolution of our simulations. As PICOLA does not calculate additional contributions to the inter-particle forces (i.e., via a Tree-level Particle-Particle summation) on scales smaller than the mesh, using instead the approximate, interpolated forces from the nearest mesh points, we do not produce the correct structure on the order of a few mesh cells or smaller. This results in slightly 'puffy' halos. For the largest halos this is not a problem, as the overall properties of the halo are still captured. However, it does mean that we miss some of the smaller halos as the dark matter particles have not collapsed sufficiently to be grouped together by the FoF algorithm.
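A minimal sketch of the virial-radius assignment described above (mass defined with respect to ∆vir = 200 times the critical density; illustrative only):

import numpy as np

RHO_CRIT = 2.77e11   # critical density in h^2 M_sun Mpc^-3, as quoted above
DELTA_VIR = 200.0

def virial_radius(mass):
    """R_vir in h^-1 Mpc for a halo of mass `mass` in h^-1 M_sun,
    assuming M = (4/3) * pi * Delta_vir * rho_crit * R_vir^3."""
    return (3.0 * mass / (4.0 * np.pi * DELTA_VIR * RHO_CRIT)) ** (1.0 / 3.0)

print(virial_radius(1e13))   # roughly 0.35 h^-1 Mpc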
Regardless of this, the effect is small enough over the halo mass range of interest for the MGS that we find no correction is necessary before we apply our HOD model. In addition, as described in Section 4.2, we determine the HOD parameters directly by populating mock dark matter halos. The deficit of lower mass halos is thus compensated for by assigning more galaxies to lower mass halos. It should also be noted that although other halo-finding techniques may produce better results, we retain the FoF algorithm in the interest of speed.
Assigning Galaxies to Halos
We populate our halos in a very similar way to that of Manera et al. (2013) using the HOD model (Berlind & Weinberg 2002). Within this framework we assign galaxies to halos based solely on the mass of the halo, splitting the galaxies into central and satellite types. We define two mass-dependent functions, Ncen(M) and Nsat(M), where Ncen(M) denotes the probability that a halo of mass M contains a central galaxy and Nsat(M) is the mean of the Poisson distribution from which we randomly generate the number of satellite galaxies. These functions are themselves modelled with parameters estimated from a fit to the MGS data, as described in Section 4.2.
Central galaxies are placed at the centre of mass of the halo, and satellites at radii r ≤ Rvir with probability derived from the NFW profile (Navarro et al. 1996), $\rho(r) = \rho_s/[(r/r_s)(1+r/r_s)^2]$, where rs = Rvir/cvir is the characteristic radius, at which the slope of the density profile is −2, and ρs is the density at this radius. c is the concentration parameter, which we calculate for a halo of mass M using the fitting formulae of Prada et al. (2012). On top of this we add a dispersion to the mass-concentration relation using a lognormal distribution with mean equal to that evaluated from the fitting functions and variance σ = 0.078. This is the same value as that used in Manera et al. (2013) and is a typical value, as measured from fitting NFW profiles to halos recovered from simulations (Giocoli et al. 2010). Both central and satellite galaxies are given the velocity of the centre of mass of the halo. Satellite galaxies are then assigned an extra peculiar velocity contribution drawn from a Gaussian, with the velocity dispersion calculated from the virial theorem. For an NFW profile, the mass inside a radius r is $M(r) = 4\pi\rho_s r_s^3\left[\ln(1+r/r_s)-\frac{r/r_s}{1+r/r_s}\right]$, and hence the velocity dispersion for a halo of mass M follows. In order to assign the additional satellite velocities in each direction we use a Gaussian distribution with zero mean and variance v²/3. To simulate the effects of Redshift-Space Distortions we displace each galaxy along the line-of-sight by ∆s_los. Given ∆s_los and a galaxy's true position, we determine angles and redshifts using our fiducial cosmology, placing the observer at the centre of each simulation box.
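One simple way to draw satellite radii consistent with an NFW profile is to invert the enclosed-mass fraction numerically. The sketch below illustrates this idea and is not necessarily the exact implementation used for the mocks.

import numpy as np
from scipy.optimize import brentq

def nfw_menc(x):
    # Unnormalised NFW enclosed mass out to x = r / r_s.
    return np.log(1.0 + x) - x / (1.0 + x)

def sample_nfw_radius(r_vir, c, rng):
    """Draw one satellite radius r <= r_vir from an NFW profile of concentration c
    by inverting the enclosed-mass CDF."""
    target = rng.uniform() * nfw_menc(c)
    x = brentq(lambda t: nfw_menc(t) - target, 1e-8, c)
    return (r_vir / c) * x

rng = np.random.default_rng(1)
print(sample_nfw_radius(r_vir=0.35, c=8.0, rng=rng))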
Power Spectrum
Although we measure cosmological parameters from the correlation function and not the data power spectrum, we do use the monopole moment of the power spectrum to determine the HOD model used for the mocks, as it is faster to compute than its configuration-space analogue. We estimate the monopole of the power spectrum, which we denote P(k), using the Fourier-based method of Feldman et al. (1994). We convert each galaxy's redshift space coordinates to a Cartesian basis using our fiducial cosmology. We then compute the overdensity on a grid containing 1024³ cells in a box of edge length 2000 h−1 Mpc. This provides ample room to zero pad the galaxies to improve the frequency sampling and results in a Nyquist frequency of 1.6 h Mpc−1, much larger than the largest frequency of interest. We use the random catalogue to estimate the expected density at each grid point. Galaxies and randoms are weighted based on the number density as a function of redshift, using the FKP weight $w_{\rm FKP}(z) = \left[1+\bar{n}(z)P_{\rm FKP}\right]^{-1}$, where we set PFKP = 16000 h−3 Mpc3, which is close to the measured amplitude at k = 0.1 h Mpc−1. After Fourier transforming the overdensity grid we calculate the spherically-averaged power spectrum in bins of ∆k = 0.008 h Mpc−1, correcting for gridding effects and shot-noise. The power spectrum of the MGS data is displayed as points in Fig. 5. The smooth curve and error-bars display the mean of the mock P(k) and their standard deviation.
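In code, the standard FKP weight of Feldman et al. (1994) takes a single line; the sketch below assumes n(z) in h3 Mpc−3:

def fkp_weight(nz, P_FKP=16000.0):
    """Standard FKP weight w = 1 / (1 + n(z) * P_FKP), with n(z) in h^3 Mpc^-3
    and P_FKP in h^-3 Mpc^3 (Feldman et al. 1994)."""
    return 1.0 / (1.0 + nz * P_FKP)

print(fkp_weight(0.0006))   # ~0.09 for a typical MGS number density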
HOD fitting
We match the measured P(k) of the MGS and the average from 10 halo catalogues in order to determine the HOD model that we then apply to all of the mock catalogues. In this way, we do not need to correct our halo mass function at the low-mass end, as the lack of low-mass halos will be compensated via the population of lower-mass halos. We use the five parameter functional form of Zheng et al. (2007) for the number of central and satellite galaxies. For a halo of M < Mcut we set Nsat = 0 and in the case where we assign satellite galaxies but no central galaxy to a halo, we remove one of the potential satellite galaxies and replace it with a central. We set the values of the five free parameters by iterating over the following steps: (i) Populate a subset of the mocks using a given set of HOD parameters, (ii) Mask the mock galaxies so that they match the data, (iii) Subsample the mock galaxies to match our idealised n(z), (iv) Calculate the average power spectrum of our populated mocks and compare to the data.

Figure 6. The percentage difference between the average mock power spectrum and that of our data, with errors derived from the covariance matrix of our 1000 mock catalogues. There is good agreement (∼5%) between these up to k = 0.3 except on large scales (small k) where the effect of the window function is large.
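The five-parameter occupation functions referred to above follow the Zheng et al. (2007) parameterisation; a common variant is sketched below, with parameters logMmin, sigma_logM, Mcut, M1 and alpha. The exact form adopted for the mocks is not reproduced here, so treat this as illustrative.

import numpy as np
from scipy.special import erf

def n_cen(M, logMmin, sigma_logM):
    """Mean central occupation: 0.5 * [1 + erf((log10 M - logMmin) / sigma_logM)]."""
    return 0.5 * (1.0 + erf((np.log10(M) - logMmin) / sigma_logM))

def n_sat(M, logMmin, sigma_logM, Mcut, M1, alpha):
    """Mean satellite occupation; forced to zero for M < Mcut."""
    M = np.asarray(M, dtype=float)
    base = np.clip(M - Mcut, 0.0, None) / M1
    return n_cen(M, logMmin, sigma_logM) * base ** alpha

# The satellite count for each halo is then a Poisson draw with mean n_sat(M).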
We use 10 mocks to fit our HOD, populating and masking them individually, but reproducing the radial selection function by subsampling based on the ratio between the analytic fit to the data n(z) and the average number density of the 10 mocks. The fit is performed using a downhill simplex minimisation of the χ2 difference between the average of the 10 mock power spectra and the data power spectrum in the range 0.02 < k < 0.3. The fit is performed twice, first using analytic errors on the power spectrum from Tegmark (1997) (equations 4 and 5 therein), and then using the covariance matrix from the first fit to generate our final best fit model.
Our best fit HOD model has the parameters log10(Mmin) = 13.18, where n̄ is dependent on the five other parameters. The best fit HOD parameters are in good agreement with the HOD parameters reported by Zehavi et al. (2011) for another SDSS galaxy sample with similar number density and magnitude limit. Fig. 6 shows the percentage difference between the average mock power spectrum and the power spectrum of the data. The errors come from the covariance estimated from the full mock sample. We can see that the amplitude of the power spectra matches well on all scales, with ∼5% agreement up to k = 0.3, except on the largest scales where the window function has a large effect. The fit is good, as we find χ2 = 33 for 32 degrees of freedom (37 k-bins and 5 free parameters). Fig. 7 shows the expected number of galaxies in our mock halos for our best fit HOD model. This highlights how we are able to recover the clustering properties of the data even though we lack the correct number of low mass halos. All of the satellite galaxies exist in halos with M > 10^13 h−1 M⊙, which are recovered quite well by our simulations. Below this mass, where our simulations lack sufficient number density, the probability of finding any galaxies within a halo also drops rapidly, such that even though these halos are more abundant in general, the contribution to the total clustering from these halos is small in comparison to the larger mass halos.
There exists significant degeneracy between the five free HOD parameters, which cannot be broken completely by just the onedimensional, two-point clustering statistics. Three-point statistics could be used to break this degeneracy (Kulkarni et al. 2007), however this would be prohibitively time-consuming and potentially very noise dominated. Another possibility is to use the quadrupole or hexadecapole moments of the power spectrum, as these contain additional information about the position and velocity distribution of the satellite galaxies within their host halos (Hikage 2014). Again, however, in our case these statistics will almost certainly be noise dominated, and are consequently not important for our current implementation of the method. As such we leave these as future improvements for our mock catalogue production process.
Correlation Function
We base our cosmological fits on configuration-space clustering measurements, calculating the correlation function for both mocks and data as a function of both the redshift space separation s, and the cosine of the angle to the line of sight µ, using the same coordinate transformation as for the power spectrum. We use the minimum variance estimator of Landy & Szalay (1993), with galaxy and random weights as given in Eq. (8), to calculate the correlation function from the normalised galaxy-galaxy, galaxy-random and random-random pair counts for 0 < s ≤ 200 h−1 Mpc and 0 ≤ µ ≤ 1 in bins of ∆s = 1.0 h−1 Mpc and ∆µ = 0.01.
From there we perform a multipole expansion of the two-dimensional correlation function via the Riemann sum $\xi_\ell(s) = (2\ell+1)\sum_{i=1}^{100}\xi(s,\mu_i)\,P_\ell(\mu_i)\,\Delta\mu$, where ∆µ = 0.01, µi = 0.01i − 0.005 and Pℓ(µ) are the Legendre polynomials of order ℓ. We generate the monopole and quadrupole for different bin widths by re-summing the pair counts before applying Eq. (10). Figs. 8 and 9 show the monopole and quadrupole of the correlation function for the average of the mocks and for the data for the 24 measurements in the range 8 < s < 200 h−1 Mpc. The mean of the mock ξ0 and ξ2 do not match the data within the error-bars at many scales. However, we only plot the diagonal elements of the covariance matrix and the off-diagonal elements represent a significant component (see Fig. 10). A more proper comparison is the χ2 between the mean of the mocks and the data, using the full covariance matrix. For both ξ0 and ξ2 the χ2/d.o.f is slightly less than one, implying the anisotropic clustering in the mock samples is a good representation of the data, even at 10 h−1 Mpc scales (and hence 'χ by eye' is a bad idea).
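The Riemann-sum multipole estimate above is straightforward to implement; a minimal sketch for a ξ(s, µ) array binned with ∆µ = 0.01 is:

import numpy as np
from scipy.special import eval_legendre

def multipole(xi_smu, ell, dmu=0.01):
    """xi_ell(s) = (2*ell + 1) * sum_i xi(s, mu_i) * P_ell(mu_i) * dmu,
    for xi_smu of shape (n_s, n_mu) with mu_i = dmu * (i + 0.5)."""
    n_mu = xi_smu.shape[1]
    mu = dmu * (np.arange(n_mu) + 0.5)
    return (2 * ell + 1) * np.sum(xi_smu * eval_legendre(ell, mu), axis=1) * dmu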
Covariance Matrix
We use our sample of mock galaxy catalogues to estimate the covariance matrix for both the power spectrum and correlation function in the standard way, and invert to give an estimate of the inverse matrix. We remove the bias in the inverse covariance matrix by rescaling by a factor that depends on the number of mocks and measurement bins (e.g. Hartlap et al. 2007).

Figure 9. The quadrupole moment of the correlation function of the MGS and the mean of our mock galaxy catalogues. Though the agreement by eye looks poor on large scales, there exists significant covariance between the points at different scales, such that the chi-squared between the data and mocks is small.

Fig. 10 shows the reduced covariance matrix, $C^{\rm red}_{i,j} = C_{i,j}/\sqrt{C_{i,i}C_{j,j}}$, for the power spectrum and the monopole and quadrupole moments of the correlation function using our fiducial binning scheme. We can see that there is significant off-diagonal covariance in the correlation function and non-negligible cross-covariance between the monopole and quadrupole, however the power spectrum covariance matrix is much more diagonal.

To fit to the correlation function moments, we assume that the binned monopole and quadrupole are drawn from a multi-variate Gaussian distribution, and assume the standard Gaussian Likelihood, L. The validity of this assumption, for both our fits and the BAO fits to the power spectrum found in Paper I, is tested in the following section. There are additional factors that one must apply to uncertainties determined using a covariance matrix that is constructed from a finite number of realisations and to standard deviations determined from those realisations (Dodelson & Schneider 2013; Percival et al. 2014). In this work we multiply the inverse covariance matrix estimate by a further factor given by m1 in equation 18 of Percival et al. (2014), such that the errors derived from the shape of the likelihood are automatically corrected for this bias. We have the number of mocks Nmocks = 1000, the number of bins Nbins = 34 and the number of parameters fitted Np = 8, giving only a small correction to the inverse covariance matrix of 1.02.
Independence of mocks
The coordinate transformation that allows us to create two distinct mocks from each dark matter realisation puts the two patches as far apart as possible to minimise the covariance between mocks based on the same dark matter cube. The minimum possible distance between two objects in different patches is 170 h−1 Mpc. Whilst this is within the range of scales we are interested in, the total cross-correlation between patches is very small. We number our mocks such that pairs of mocks (e.g. 1 & 2, or 3 & 4) were drawn from the same dark matter cube. Thus we expect the set of 500 even numbered mocks and the set of 500 odd numbered mocks to be independent of any correlations caused by the sampling, and any cross correlation to be due to noise. The cross correlation coefficient, for both the monopole and quadrupole of the correlation function, and for the power spectrum, calculated from the 500 pairs of mocks drawn from the same dark matter cube is shown in Fig. 11. The dashed lines in Fig. 11 indicate the maximum and minimum correlation coefficient (at any scale considered) between 500 pairs of independent mocks (i.e. taking pairs where both mocks have even or odd numbers). The fact that the cross correlation between pairs drawn from the same dark matter cube is almost entirely within these bounds indicates that there is no cross correlation above the level of noise in our combined covariance matrix, even on scales where the pairs of mocks could, theoretically, be covariant.

Figure 12. The difference in the monopole (red) and quadrupole (blue) of the correlation function measured from the data when we use the fitted and shuffled methods of generating redshifts for our random data points. The shaded areas denote the one-sigma error regions. We see that the difference between the two methods is well within the one-sigma region on all scales.
Random catalogue redshift assignment
We also test the effect of assigning redshifts to our random data points from randomly chosen galaxies as opposed to simply generating them by sampling a smooth fit to the number density. In Fig. 12 we present the differences in the measured correlation function monopole and quadrupole moments of the MGS data, when they are calculated either using random data points that are assigned redshifts from the corresponding galaxy catalogue ('shuffled'), or when they are given redshifts sampled from the fitted number density described in Section 2. We may expect 'shuffling' to reduce the clustering, especially on scales below 100 h−1 Mpc, because spherically averaged features in the galaxy field are removed in the shuffled approach. The power removed is predominantly along the line of sight, and hence the quadrupole is affected more than the monopole. From Fig. 12 we see that for both monopole and quadrupole, the difference in clustering between the two methods is well below the level of the noise. We adopt the shuffling approach as we do not know the true radial distribution for the data, and this approach allows for all features caused by the galaxy selection, at the expense of a small reduction in the monopole and quadrupole moments. Further, Ross et al. (2012) found that the shuffling approach is less biased than fitting to a smooth n(z) when both methods were tested on BOSS mocks (with a known n(z)), and the differences we find are consistent with those of Ross et al. (2012). Such differences are so small that we do not need to account for this in our model fitting.

Figure 13. The Kolmogorov-Smirnov p-value for both the log of the power spectrum and the monopole and quadrupole of the correlation function. For both statistics the probability that they are drawn from a multivariate Gaussian is high, allowing us to compute the likelihoods for theoretical models from the chi-squared difference between the model and data.
Gaussianity of data
Our final test is on the assumption that the measured correlation function and power spectrum are drawn from an underlying multivariate Gaussian distribution. This assumption is the basis of the likelihood calculations made in both the BAO fits of Paper I and the RSD fits presented in this paper. We perform a Kolmogorov-Smirnov test on the log of the power spectrum (which is used in the BAO fits of Paper I) and monopole and quadrupole of our mock catalogues, using the cumulative distribution function (CDF) of the normalised differences between the two-point statistics measured from each mock realisation and the average over all the mock catalogues. Fig. 13 shows the Kolmogorov-Smirnov test p-value for the two-point statistics as a function of scale. We can see that there is no trend with scale and across all scales of interest the p-value indicates a high probability that both the power spectrum and correlation function are drawn from a Gaussian distribution. The log of the power spectrum has a particularly high probability of being drawn from a Gaussian distribution, which is why we use this rather than the power spectrum itself when fitting the BAO feature in Paper I. More specifically, the p-value is the probability that our observed difference between the measured CDF and a Gaussian CDF would be as large as it is if our underlying distribution were Gaussian. Hence, based on the p-values we obtain, we find that even for those bins where the difference between our measured CDF and a Gaussian CDF is largest we could expect a greater difference at least 20% of the time if our measured clustering statistics were drawn from an underlying Gaussian distribution.
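The per-bin Kolmogorov-Smirnov check described above can be written compactly; a minimal sketch using scipy is:

import numpy as np
from scipy.stats import kstest

def ks_pvalues(mock_vectors):
    """KS p-value per measurement bin, testing the normalised scatter of the mocks
    against a unit Gaussian; mock_vectors has shape (n_mocks, n_bins)."""
    normed = (mock_vectors - mock_vectors.mean(axis=0)) / mock_vectors.std(axis=0, ddof=1)
    return np.array([kstest(normed[:, i], "norm").pvalue
                     for i in range(normed.shape[1])])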
Modelling the Effect of Galaxy Velocities
To model our redshift space monopole and quadrupole we use the combined Gaussian Streaming/Convolved Lagrangian Perturbation Theory (CLPT) model of Wang et al. (2014). The clustering of galaxies in redshift space can be written as a function of their real space correlation and their full pairwise velocity dispersion (Fisher 1995; Scoccimarro 2004). In the Gaussian Streaming model, introduced by Reid & White (2011), the pairwise velocity dispersion is approximated as a Gaussian, which allows one to write the two-dimensional redshift space correlation function, ξ(s⊥, s||), as a function of the real-space correlation function, ξ(r), and the mean infall velocity and velocity dispersions between pairs of galaxies, v12(r) and σ²12(r, µ) respectively: $1+\xi(s_\perp,s_{||}) = \int \frac{{\rm d}r_{||}}{\sqrt{2\pi\sigma^2_{12}(r,\mu)}}\,[1+\xi(r)]\,\exp\left\{-\frac{[s_{||}-r_{||}-\mu v_{12}(r)]^2}{2\sigma^2_{12}(r,\mu)}\right\}$. Here s⊥ = r⊥ and s|| denote redshift space separations transverse and parallel to the line of sight, r|| denotes the real space separation parallel to the line of sight, such that r² = r²⊥ + r²||, and µ = r||/r is as defined previously.
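Numerically, the Gaussian streaming mapping is a one-dimensional integral along the line of sight. The sketch below assumes smooth callables for ξ(r), v12(r) and σ²12(r, µ) (e.g. interpolators from a CLPT calculation) and is illustrative rather than the fitting code used in the paper.

import numpy as np
from scipy.integrate import quad

def xi_redshift_space(s_perp, s_par, xi_r, v12, sigma2_12, bound=200.0):
    """Evaluate xi(s_perp, s_par) via the Gaussian streaming integral, given
    callables xi_r(r), v12(r) and sigma2_12(r, mu)."""
    def integrand(r_par):
        r = np.hypot(s_perp, r_par)
        mu = r_par / r if r > 0.0 else 0.0
        sig2 = sigma2_12(r, mu)
        arg = s_par - r_par - mu * v12(r)
        return (1.0 + xi_r(r)) * np.exp(-0.5 * arg * arg / sig2) / np.sqrt(2.0 * np.pi * sig2)
    value, _ = quad(integrand, -bound, bound, limit=200)
    return value - 1.0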
Reid & White (2011) evaluate v12(r) and σ²12(r, µ) using a standard perturbation theory expansion of a linearly biased tracer density field, however this does not accurately replicate the velocity statistics of the tracer field on small scales, nor the smoothing of the BAO feature. This was improved upon by Reid et al. (2012) in their analysis of the BOSS CMASS galaxy sample by using Lagrangian Perturbation Theory to generate the real-space correlation function above scales of 70 h−1 Mpc. This proved effective for the BOSS CMASS sample, although Reid et al. (2012) note that the BOSS CMASS galaxy sample has a second order bias close to zero, the point at which the accuracy of the standard perturbation theory evaluation of v12(r) and its derivative is greatest. Carlson et al. (2013) and Wang et al. (2014) further improve the modelling of the correlation function by computing the real-space correlation function using Convolved Lagrangian Perturbation Theory and evaluating v12(r) and σ²12(r, µ) in the same framework. This formulation relies on a perturbative expansion of the Lagrangian overdensity and displacement which in turn allows us to write the correlation function and velocity statistics as a series of integrals over powers of the linear power spectrum. For biased tracers the model assumes a local real-space Lagrangian bias function, F, and solutions up to O(P²L) reveal a dependence on both the first and second derivatives of the bias function, F′ and F′′, and combinations thereof. Furthermore, as would be expected, the velocity statistics have a dependency on the growth rate of structure, f, via the multiplicative factor, f². From Matsubara (2008) we can easily relate the linear galaxy bias, b, to the first derivative of the Lagrangian bias function by F′ = b − 1.
The model is calculated as follows. For a vector r in real space and vector q in Lagrangian space, we can define three functions that depend on the Lagrangian bias, growth rate and linear power spectrum: M0(r, q, F′, F′′, f, PL), M1,n(r, q, F′, F′′, f, PL) and M2,nm(r, q, F′, F′′, f, PL). M0 is a scalar function, whilst M1,n and M2,nm are vector and tensor functions along Cartesian directions n and m. The exact form of the functions M0, M1,n, and M2,nm are given in Wang et al. (2014). We can then calculate ξ(r) and v12(r) by projecting the scalar and vector functions along the pair separation vector and integrating with respect to the Lagrangian separation: $1+\xi(r) = \int {\rm d}^3q\, M_0(r,q)$ and $v_{12}(r) = [1+\xi(r)]^{-1}\int {\rm d}^3q\, M_{1,n}(r,q)\,\hat{r}_n$.
We split the velocity dispersion σ²12(r, µ) into components perpendicular and parallel to the pair separation vector, $\sigma^2_{12}(r,\mu) = \mu^2\sigma^2_{||}(r) + (1-\mu^2)\sigma^2_{\perp}(r)$, and evaluate these separately by projecting and integrating the tensor function, with $\sigma^2_{||}(r) = [1+\xi(r)]^{-1}\int {\rm d}^3q\, M_{2,nm}(r,q)\,\hat{r}_n\hat{r}_m$ and $\sigma^2_{\perp}(r) = \tfrac{1}{2}[1+\xi(r)]^{-1}\int {\rm d}^3q\, M_{2,nm}(r,q)\,(\delta^K_{nm}-\hat{r}_n\hat{r}_m)$, where $\delta^K_{nm}$ is the Kronecker delta. Hence, for a given cosmological model parameterised by PL, b, F′′ and f, we can calculate, for any scale of interest, a unique set of ξ(r), v12(r) and σ²12(r, µ). Entering these into Eq. (12) allows us to generate our two-dimensional redshift space correlation function and from there we can generate a model monopole and quadrupole. These models are fitted to the measurements from data and mocks as described later to constrain a given set of cosmological parameters.
Alcock-Paczynski Effect
In calculating the correlation function of our data we have to assume a (fiducial) cosmological model to calculate the physical separations between galaxies parallel and transverse to the line of sight. Specifically, to calculate the separation along the line of sight we require the Hubble parameter, H(z), and the galaxy redshifts, whilst the transverse separation requires knowledge of the angular diameter distance, DA(z), and the angular separation of the galaxy pair. Any difference between the relative values of these parameters in the fiducial cosmology and the true cosmology will manifest as anisotropic clustering, that is, a difference in the clustering of galaxies parallel and perpendicular to the line of sight. If an observable such as the BAO feature is expected to be statistically isotropic, then any measured anisotropy can also be used to constrain the true cosmology of our universe. This is the Alcock-Paczynski (AP) test (Alcock & Paczynski 1979).
Anisotropy is also being added by Redshift Space Distortions. As such, the AP effect and RSD are degenerate and we need a way to disentangle these effects.
Following Xu et al. (2013), we introduce two scale parameters, α and ϵ. α denotes the stretching of all scales and hence encapsulates the isotropic shift whilst ϵ parameterises the AP effect. Measuring these two parameters allows us to constrain the angular diameter distance and Hubble expansion independently, via $\alpha = \left[\frac{D_A^2(z)H_{\rm fid}(z)}{D_{A,{\rm fid}}^2(z)H(z)}\right]^{1/3}\frac{r_{s,{\rm fid}}}{r_s}$ and $1+\epsilon = \left[\frac{H_{\rm fid}(z)D_{A,{\rm fid}}(z)}{H(z)D_A(z)}\right]^{1/3}$, where a subscript 'fid' denotes our fiducial model and rs is the measured BAO peak position. Values α = 1.0 and ϵ = 0.0 would indicate that our fiducial cosmology is the true cosmology of the measured correlation function.
In terms of our model correlation function, the α and ϵ parameters modify the scales at which we measure a given value for the correlation function. During our fits we apply the values of α and ϵ directly to alter the scales at which we calculate the two-dimensional redshift space correlation function (given by Eq. (12)), calculating the necessary correction to the parallel and perpendicular separations, s|| and s⊥, before using these to calculate the corresponding values of r, r|| and µ required by the integrand. We subsequently integrate the 2D model for the correlation function to estimate monopole and quadrupole moments.
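One common convention for applying α and ϵ before evaluating the model is sketched below; the direction of the mapping (fiducial to true separations) is an assumption here, not a statement of the exact convention adopted in the fits.

import numpy as np

def rescale_separations(s_perp, s_par, alpha, epsilon):
    """Rescale transverse and line-of-sight separations by the dilation alpha and
    warping epsilon (s_par -> alpha*(1+eps)^2 * s_par, s_perp -> alpha/(1+eps) * s_perp),
    returning the separation r and mu = r_par / r needed by the 2D model."""
    s_par_new = alpha * (1.0 + epsilon) ** 2 * np.asarray(s_par, dtype=float)
    s_perp_new = alpha / (1.0 + epsilon) * np.asarray(s_perp, dtype=float)
    r = np.hypot(s_perp_new, s_par_new)
    return r, s_par_new / r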
Correction for binning effects
Finally, we must account for the way we bin our data when calculating our model. Rather than evaluating our model at the centre of each bin, we take into account variations in the model correlation function across the bin, and instead take the weighted average of our model within that bin. For a bin extending from s1 to s2 and centred at s, our model is ξ(s) = V⁻¹ ∫ from s1 to s2 of ξ(s′) s′² ds′, where V = ∫ from s1 to s2 of s′² ds′ is the normalisation for the weighted mean. For all the fits detailed in this paper we calculate our model in bins of width ∆s = 1 h⁻¹ Mpc between 0 h⁻¹ Mpc < s ≤ 200 h⁻¹ Mpc, before calculating Eq. (21), using a cubic spline interpolation method to interpolate the value of the monopole and quadrupole at any point required for the integration.
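A possible implementation of this bin averaging, assuming the s² weighting appropriate for counting pairs in spherical shells and a cubic spline for the model as described, is sketched below; the grid, toy model and bin edges are purely illustrative and not taken from this paper.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.integrate import quad

def bin_averaged_multipole(s_grid, xi_ell_grid, s1, s2):
    """Weighted average of a model multipole over the bin [s1, s2]:
    xi_bar = int_{s1}^{s2} xi(s) s^2 ds / int_{s1}^{s2} s^2 ds."""
    xi_spline = CubicSpline(s_grid, xi_ell_grid)          # model evaluated on a fine grid
    num, _ = quad(lambda s: xi_spline(s) * s**2, s1, s2)
    norm = (s2**3 - s1**3) / 3.0                           # = int s^2 ds over the bin
    return num / norm

# Example: average a toy monopole over an 8 Mpc/h wide bin centred at 29 Mpc/h.
s_grid = np.arange(0.5, 200.0, 1.0)
xi0_grid = 0.01 * (s_grid / 50.0)**-1.7                    # toy power law, illustration only
xi0_bin = bin_averaged_multipole(s_grid, xi0_grid, 25.0, 33.0)
```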
Cosmological Parameters
Here we consider the shape of the linear power spectrum parameterised by the cold dark matter and baryonic matter densities, Ωch² and Ωbh², and the scalar index, ns, whilst the amplitude of the power spectrum is quantified using σ8. With the RSD parameter fσ8, and the Alcock-Paczynski and BAO dilation parameters α and ε, which we measure independently of the power spectrum shape, we wish to explore a parameter space p = {Ωch², Ωbh², ns, σ8, ⟨F′⟩, ⟨F″⟩, fσ8, α, ε}. In theory the dependence of the CLPT model on PL, ⟨F′⟩, ⟨F″⟩ and σ8 is such that, combined with the other dependencies, all of the parameters in p can be independently measured if the data has no noise. In practice, however, the CLPT model is only very weakly dependent on σ8 and we are unable to use any information from this dependency to break the degeneracies between f, b and σ8. In addition, we can provide no constraints on the shape of the linear power spectrum beyond those, already tight, constraints given by the Planck Collaboration's analysis of the Cosmic Microwave Background radiation. In lieu of this we fix Ωch², Ωbh² and ns to the fiducial values used to create our mock catalogues, which correspond closely to the Planck best-fit values, and assume that any variation in these parameters can be captured by departures from α = 1.00 and ε = 0.00. We then explore the following combination of cosmological parameters: p = {⟨F″⟩, σ8, bσ8, fσ8, α, ε}. In all fits we do not allow fσ8 to vary in such a way that we choose unphysical values of f < 0 or σ8 < 0, and we apply uniform priors of 0.8 < α < 1.2 and −0.2 < ε < 0.2, as for the BAO fits of Paper I. We include priors on α and σ8 as described and tested in Section 6.3.1.
Nuisance Parameters
We marginalise over two nuisance parameters while fitting the correlation function, which we denote σoffset and IC. The first of these corresponds to an additive correction to σ12 in the Gaussian Streaming model. This compensates for two different effects that both manifest at the same point in the model. The first is the CLPT model's inability to fully recover the large-scale halo velocity dispersion. Whilst the scale-dependence of both the σ|| and σ⊥ parts of σ12 is well recovered by the CLPT, there is a mass-dependent, constant amplitude shift across all scales. This systematic offset in the halo velocity dispersion is identified in Reid & White (2011) and further explored in Wang et al. (2014), who go on to suggest that it stems from gravitational evolution on the smallest scales, which cannot be accurately predicted by perturbation theory and hence cannot be separated from the overall scale-dependence of σ12. Rather than calibrate the corrective factor required to shift the amplitude of the velocity dispersion using, for example, N-body simulations, we simply treat this as a free parameter, and part of the σoffset nuisance parameter. The second component of σoffset is the additional velocity dispersion along the line of sight due to the so-called 'Fingers-of-God', resulting from peculiar motions of the galaxies within their host halos. This effect is expected to be small on our scales of interest and in the monopole and quadrupole of the correlation function.
We apply a very broad, flat prior of −40 Mpc² < σoffset < 40 Mpc². This range is similar to that used in Reid et al. (2012), where they allow the Fingers-of-God intra-halo velocity dispersion to vary from 0 Mpc² to 40 Mpc², providing a detailed set of tests to validate this prior. We additionally allow this term to go negative over the same range to account for the fact that, as mentioned in Reid & White (2011), the perturbation theory calculation of σ12 overestimates the amplitude of the positive offset required to bring linear theory in line with the measurements from N-body simulations, hence resulting in a σ12 which is larger than would be measured.
Our second nuisance term is the integral constraint, which takes the form of an additional constant added to the correlation function monopole. This accounts for incorrect clustering on the largest scales due to the finite volume of our survey. Whilst, given a model, this can be calculated analytically from the properties of our survey, we include it as a free parameter to also account for additional uncertainties in the modelling of the monopole and potential observational systematic effects, which tend to add nearly scale-independent clustering (Ross et al. 2012). Under the assumption that the integral constraint is independent of the angle to the LOS, this vanishes for the quadrupole and so we only apply a nuisance parameter of this form to the monopole.
Testing RSD measurements on mocks
We test the model and our fitting methodology by fitting the average monopole and quadrupole of the correlation function recovered from the 1000 mocks. We use the joint covariance matrix appropriate for a single realisation, including the cross-covariance between the monopole and quadrupole: thus the errors recovered should match those from a single realisation. To perform the fit, we run MCMC sampling over models using the publicly available EMCEE python routine (Foreman-Mackey et al. 2013). For each parameter we quote the best-fit value of the marginalised likelihood, with 1σ errors defined by the ∆χ² = 1 regions around this point. Our fiducial choices are as follows: we use ∆s = 8 h⁻¹ Mpc and 25 h⁻¹ Mpc ≤ s ≤ 160 h⁻¹ Mpc as our fiducial bin width and fitting range, we apply a prior on α based on the results of Paper I, and we apply a prior on σ8 based on results using data from the Planck satellite (Planck Collaboration et al. 2013). Our fiducial range of scales is chosen based on the facts that including larger scales adds little extra information and that the accuracy of the CLPT model starts to decrease below s = 25 h⁻¹ Mpc for the range of halo masses where galaxies in our sample are found (Wang et al. 2014). We will motivate our other choices and demonstrate that our fσ8 measurements are largely independent of these choices in the following sections.
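For orientation, the following sketch shows the overall shape of such a fit: a Gaussian likelihood built from the stacked monopole and quadrupole with their joint covariance, the uniform priors on α and ε, and Gaussian priors on α and σ8 of the kind discussed below, sampled with EMCEE. The parameter ordering, prior widths, data vector and model_multipoles function are placeholders rather than the actual values and routines used in this paper.

```python
import numpy as np
import emcee

def log_prob(p, data_vec, inv_cov, model_multipoles,
             alpha_prior=(1.0, 0.02), sigma8_prior=(0.766, 0.012)):
    """Illustrative log-posterior: chi^2 from the stacked multipoles plus priors."""
    F2, sigma8, bsigma8, fsigma8, alpha, epsilon, sigma_off, ic = p
    if not (0.8 < alpha < 1.2 and -0.2 < epsilon < 0.2):
        return -np.inf                       # uniform priors as quoted in the text
    if fsigma8 < 0.0 or bsigma8 < 0.0 or sigma8 < 0.0:
        return -np.inf                       # restrict to physical solutions
    diff = data_vec - model_multipoles(p)    # binned model monopole + quadrupole
    lnp = -0.5 * diff @ inv_cov @ diff
    lnp += -0.5 * ((alpha - alpha_prior[0]) / alpha_prior[1])**2     # Gaussian alpha prior
    lnp += -0.5 * ((sigma8 - sigma8_prior[0]) / sigma8_prior[1])**2  # Planck-like sigma8 prior
    return lnp

# ndim = 8 free parameters; walkers would start in a small ball around an initial guess:
# sampler = emcee.EnsembleSampler(nwalkers, 8, log_prob,
#                                 args=(data_vec, inv_cov, model_multipoles))
# sampler.run_mcmc(p0_walkers, nsteps)
```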
The best-fit values for all of our fitting cases are collated in Table 1. Fig. 14 shows the best-fit values for the cases listed in the table along with the ΛCDM prediction of fσ8, which closely matches that used in the production of the mock catalogues, and the expected galaxy bias assuming linear theory (Hamilton 1992). For our fiducial ΛCDM cosmology, and assuming GR, we have f(zeff) = Ωm(zeff)^0.55 = 0.609 and σ8(zeff) = 0.766, and from our HOD fits to the MGS we have 1.5 ≲ b ≲ 1.6 depending on the exact scales used to estimate the linear galaxy bias.
In Fig. 15 we plot the best-fitting model monopole and quadrupole for our fiducial fit alongside that measured from the average of mocks. We can see that the CLPT model does remarkably well in modelling the monopole and quadrupole across all the scales we fit against, with only small inaccuracies at the smallest scales and around s = 100 h⁻¹ Mpc. The inaccuracies are clearly well below the expected level of noise in our measurements.

Figure 14. The marginalised fσ8 and bσ8 values and one-sigma errors from fitting to the mean of the mocks for the 10 cases listed in Table 1. The dashed line indicates the expected growth rate assuming our fiducial ΛCDM cosmology. The shaded band indicates the expected linear galaxy bias as measured from our HOD fits to the MGS sample; we use a band rather than a line to account for the fact that the calculated value depends slightly on the range of scales used.
Effects of Priors
We include priors on α and σ8 in our fiducial fσ8 measurements, and we test the effect of including these priors on the mock results in this section. We begin by testing the effects of including a prior that limits α. As may be inferred from Fig. 8, before reconstruction the BAO feature in the monopole is very noisy. Much of the Alcock-Paczynski measurement comes from the BAO (Reid et al. 2012), and we have more information on α from the post-reconstruction fits of Paper I. As such we use a Gaussian prior on α, centred on the recovered post-reconstruction best-fit values from Paper I, and with a variance calculated from the difference between pre- and post-reconstruction fits to the BAO feature (the pre-reconstruction uncertainty is a factor 2.5 times greater than the post-reconstruction result), i.e., we expect the inclusion of the α prior to recover the same uncertainty on α as found in Paper I. We find that including a prior on α has only a small effect on the recovered values and errors for fσ8 and bσ8, slightly decreasing the error range for both. The recovered best-fit values only change by a small amount compared to the statistical error on the measurements. This indicates that such a process introduces no bias into our results, which is not surprising, as the α prior comes from the comparison of the data itself before and after reconstruction, and we expect systematic effects entering during the reconstruction process to be very small. The reduction in the error range comes from the improvement in the Alcock-Paczynski measurement when the BAO position is known, and not from double counting, as we have carefully only included the extra information recovered post-reconstruction.
The dependency on σ8 used in the CLPT model is so weak that our data provides no constraints on this except through the first-order measurements of bσ8 and fσ8. We therefore consider adopting a Planck+WP+highL prior on σ8 (Planck Collaboration et al. 2013), which takes the form of a Gaussian with mean σ8(zeff) = 0.766 and variance 0.012, so that the second-order corrections to the model do not stray into unphysical regions of parameter space, where the model itself is not expected to be accurate. When we include this prior there is a small change in the recovered mean values of fσ8 and bσ8. For the average of the mocks we can see that the value of fσ8 decreases slightly from 0.49 to 0.45. This shift actually brings the value of fσ8 closer to that expected based on the cosmology used to generate the mocks and is well within the expected statistical deviation of the measurement. Additionally, adding in the σ8 prior increases the value of bσ8 and tightens our constraints, bringing them closer to the expected value. This is because the prior allows us to place constraints on the second-order contribution to the galaxy bias, which, in the CLPT model, enters as additional small-scale clustering proportional to ⟨F″⟩. When this contribution is completely unconstrained, large values force the linear galaxy bias to be lower than it should be to fit the smallest scales. Due to the strong degeneracy between bσ8 and fσ8 it is actually this stronger constraint on bσ8 that has a knock-on effect of reducing the value of fσ8 we obtain. For our baseline fits, we adopt this prior, which we consider not to be introducing any additional information to our measurements, but simply forcing us to consider only physical solutions for the CLPT model.
Testing bin width and fitting range
We perform several robustness tests using the fiducial α and Planck prior measurement, looking at the effects of changing both the bin width of our measurements and the fitting range. When we change the fitting range to 35 h⁻¹ Mpc ≤ s ≤ 140 h⁻¹ Mpc we see a slight increase in fσ8, and a corresponding decrease in bσ8, though these shifts are well within the statistical uncertainty. The reason for this shift stems from the higher-order Lagrangian bias contributions: when we remove the small-scale data, our constraints on ⟨F″⟩ become much weaker and it is harder to decouple it from ⟨F′⟩. We can also see that the errors on fσ8 and bσ8 increase when we reduce our fitting range, consistent with the loss of information, particularly at small scales.
The results in Table 1 and Figure 14 also show that our choice of bin width has negligible effect on the results we obtain. In Cases 5 and 6 we perform fits using our fiducial fitting range and priors but using a correlation function and covariance matrix that has been binned using ∆s = 5 h −1 Mpc and ∆s = 10 h −1 Mpc respectively. We find that the results are fully consistent with each other and our fiducial bin width case, with only small, statistically driven deviations in the mean and 1σ marginalised values of f σ8 and bσ8.
Effects of Neglecting the Alcock-Paczynski Effect and Using a Linear Model
Finally, we look at models where we neglect the Alcock-Paczynski effect altogether, as in several previous studies (Blake et al. 2011a; Beutler et al. 2012; Samushia et al. 2012), and the case where we perform a simple linear model fit as per Hamilton (1992). We perform the former by fixing the values of α and ε in the fits. We perform three separate fits to the average of the mocks: fixing α = 1.00, ε = 0.0 (the values we expect to recover from the mocks); fixing α = 1.04, ε = 0.0, which corresponds to the mean of the recovered likelihood from the BAO-only fits; and fixing α = 1.00, ε = 0.05 to look at the effect of assuming an incorrect value for ε. For our first fit we can see that the recovered values of fσ8 and bσ8 are in good agreement with our fiducial fitting scheme. However, we do find a significant reduction in the statistical errors. The degeneracy between the AP effect and RSD constitutes a large portion of the error budget even at the low effective redshift of our sample, and hence by neglecting this contribution we underestimate the size of our error bars and overestimate the significance of our constraints. We expect this effect to be exacerbated as we go to higher redshifts. Furthermore, if we were to fit only to the RSD signal, in which case we are implicitly assuming that our fiducial cosmology is the correct cosmology, we could be systematically biasing our results and overestimating our constraining power if the true best-fit values are different from α = 1.0 and ε = 0.0. This is shown by our other two fits with fixed values of α and ε different from those we expect for the average of the mocks. In both cases we can see changes in our mean recovered values of fσ8 and bσ8, and the case where we change the value of ε away from the expected value is particularly troubling. We find a ∼ 1σ difference in the recovered value of fσ8, which indicates that we could expect such a significant bias in the results were we to assume, incorrectly, that our fiducial cosmology were the same as the true cosmology. Overall, the fact that these differences for different α and ε, whilst generally below the level of statistical uncertainty in our results, are still noticeable at the low effective redshift of our sample, points to potential systematics which would bias results from surveys with lower levels of noise or at higher redshifts. Lastly, in Table 1 and Figure 14 we show the results when fitting using a linear model. Here we still keep our BAO-fitting prior on α, and vary fσ8, bσ8, α, and IC. In this case we again find that the error budget for both fσ8 and bσ8 is being significantly underrepresented in comparison to our fiducial fit, and to a greater extent than when we use our perturbation theory model but neglect the AP effect. A simple linear model is unable to properly reproduce the observed RSD signal even on relatively large scales, and especially around the BAO feature. On top of this, it neglects the contributions from higher-order bias corrections, which for our sample are non-negligible and have been shown to affect our estimation of bσ8 and, by way of the strong degeneracy therein, fσ8.
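For reference, the simple linear model referred to here corresponds to the standard Kaiser-limit multipoles as written by Hamilton (1992); quoting the textbook expressions (not equations taken from this paper), with β = f/b and ξm the real-space matter correlation function,

$$\xi_0(s) = \left(1 + \tfrac{2\beta}{3} + \tfrac{\beta^2}{5}\right) b^2\,\xi_m(r), \qquad \xi_2(s) = \left(\tfrac{4\beta}{3} + \tfrac{4\beta^2}{7}\right) b^2\left[\xi_m(r) - \bar{\xi}_m(r)\right],$$

where $\bar{\xi}_m(r) = \frac{3}{r^3}\int_0^r \xi_m(r')\,r'^2\,\mathrm{d}r'$. Such a model contains no scale-dependent corrections from non-linear evolution or higher-order bias, which is why it struggles around the BAO feature and on quasi-linear scales.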
RESULTS
In this section we present our constraints on fσ8 and bσ8 from fitting to the MGS data using the method detailed and tested in the previous section. We have shown that our fitting method is independent of our choice of priors, fitting range and bin size, but in the interest of completeness we perform a range of fits equal to those performed on the average of the mocks. For equivalent fits to both data and mocks we use the same covariance matrix, so any differences stem from noise in the data or, of course, differences between our fiducial cosmology and the true cosmology. The marginalised mean values and 1σ constraints on fσ8 and bσ8 for all of our fits are given in Table 2 with the minimum χ² values, and shown in the corresponding Fig. 16.
As for the results fitting the average of the mocks, we can see that adding a prior on α introduces no noticeable bias to our best-fit fσ8 and bσ8 values and only a slight reduction in the errors. When fitting to the data, the best-fit χ² increases slightly from 26.0 to 26.2 for 26 degrees of freedom (34 bins and 8 free parameters) when we introduce our prior on α. Such an increase is to be expected, as the prior forces our best-fit model away from the overall maximum likelihood model; however, the difference is very small, indicating no strong preference for models outside our prior range. When we add in the Planck prior on σ8 we find a larger difference in the fσ8 and bσ8 constraints than for the mocks, though the value of fσ8 does not shift by more than we would expect based on the statistical errors, and as we do not believe this prior to be adding any bias to our results from our tests on the mocks, this change is purely statistically driven. Before adding in the σ8 prior the measured values of bσ8 are lower than we would expect, but this value increases by ∼ 1σ when this prior is included. It is this change in the mean recovered value of bσ8 which causes the slight change in fσ8. The reason for the underestimation of bσ8 is as mentioned previously; without the σ8 prior we overestimate ⟨F″⟩ and hence underestimate bσ8. When we introduce the σ8 prior we find χ² = 28.6, which is again a slight increase compared to the fits with only the α prior; however, for all three cases with different priors the recovered χ² values for our model are very reasonable.

Figure 16. The marginalised fσ8 and bσ8 values and one-sigma errors from fitting to the data for the 10 cases listed in Table 2. As for Fig. 14, the dashed line indicates the expected growth rate assuming our fiducial ΛCDM cosmology. The shaded band indicates the expected linear galaxy bias as measured from our HOD fits to the MGS sample; we use a band rather than a line to account for the fact that the calculated value depends slightly on the range of scales used.
Our fiducial fitting case including both α and σ8 priors is shown in Fig. 17, where we plot the 2-D redshift space correlation function of our data along with the maximum likelihood model. In Fig. 18, we also plot the recovered bσ8-f σ8 contour for our fiducial fitting case, alongside the marginalised 1D histograms for these parameters. Here we can see the strong degeneracy between f σ8 and bσ8 that drives the small variations we see in our mean values when fitting to both the data and the average of the mocks.
When we change the fitting range or the bin size, we see similar results as for our fiducial case, and as with the average of the mocks there is no indication that our fitting choices are creating biased results. As for the average of the mocks, removing the smallest scales from our fits reduces our recovered bσ8 value and increases the error, but the mean fσ8 remains almost unchanged. For all of our tests of bin width and fitting range, we find χ² values that are in agreement with our fiducial case and which indicate that all of our fits are good. The largest χ²/dof belongs to the case where we modify our fitting range, where we find χ² = 25.8 for 20 degrees of freedom. However, this value is still very good, and we would expect a worse χ² ≈ 17% of the time. For all our fits to the data it is worth noting that we do seem to fit a slightly lower value for bσ8 than we would expect based on our HOD fits to the MGS data. Looking back to Fig. 8 we can see why. The amplitude of the monopole on the scales 25 h⁻¹ Mpc ≤ s ≤ 60 h⁻¹ Mpc, where most of our information on the linear bias comes from, seems to be slightly lower for the data than for our HOD fit applied to mocks, though when we include scales above and below this range the mock amplitude is well matched. In our fitting we are not including scales below s = 25 h⁻¹ Mpc, where the mocks and data are in better agreement, and so it is not surprising that the data prefer slightly smaller values of bσ8.
The final set of fits we perform, fixing α and ε in order to mimic neglecting the AP effect, and using a simpler linear model, corroborate our results when fitting to the average of the mocks. We see in these cases a significant underestimation of the statistical errors on fσ8, with the potential for biased results if we assume that our fiducial cosmology does not differ from the true cosmology by a measurable amount. Looking at the χ² values, we find that fixing α and ε or using a linear model is slightly disfavoured in comparison to using the CLPT model and including the AP effect. However, this is not unexpected, as we know from our fits to the average of the mocks that a linear model cannot accurately reproduce the RSD signal, and it is highly unlikely that our maximum likelihood fit lies exactly on the plane in parameter space that we have confined our model to when fixing the values of α and ε.

Figure 19. Comparison of measurements of the growth rate using the two-point clustering statistics from a variety of galaxy surveys below z = 0.8. We split the results into two groups: those that perform a full shape fit and hence include the Alcock-Paczynski degeneracy; and those that just fit the growth rate for a fixed cosmology, neglecting this degeneracy. Our measurement is shown as a filled red star, with other data points representing the 6dFGS (filled diamond; Beutler et al. 2012), among others.
Overall, by fitting the full shape of the correlation function monopole and quadrupole, and including the Alcock-Paczynski effect, we find best-fit values of fσ8 = 0.53 +0.19 −0.19 and bσ8 = 1.17 +0.14 −0.18. When we assume that our fiducial cosmology is the correct cosmology for analysing our data we find tighter constraints of fσ8 = 0.44 +0.16 −0.12 and bσ8 = 1.12 +0.09 −0.14. In the following section we will use our fiducial, AP-included, results to constrain the growth index, γ, and compare this to the prediction from General Relativity. As the 1-D fσ8 and 3-D fσ8, α and ε likelihoods cannot be well approximated by a Gaussian, we use the likelihoods themselves to achieve this. For future analyses making use of our results, the prepared MCMC samples for our fiducial fit to the MGS data will be made publicly available upon acceptance.
COSMOLOGICAL INTERPRETATION AND COMPARISON TO PREVIOUS STUDIES
In this section we compare our measurements of fσ8 to those from a range of different galaxy surveys and perform a simple consistency test against the prediction of the growth rate from General Relativity (GR) using the commonly used γ parameterisation of the growth rate, where f(z) is approximated as f(z) = Ωm(z)^γ. For GR we have γ ≈ 0.55 (Linder & Cahn 2007). Measurements of fσ8 have been made up to z = 0.8 using data from the 2-degree Field Galaxy Redshift (2dFGRS; Percival et al. 2004), 6-degree Field Galaxy (6dFGS; Beutler et al. 2012), SDSS-II Luminous Red Galaxy (Oka et al. 2014), BOSS (Chuang et al. 2013; Samushia et al. 2014; Beutler et al. 2013), VVDS (Guzzo et al. 2008) and WiggleZ (Blake et al. 2011a,b) surveys, among others. Although these measurements were all made using different models of varying complexity and different fitting methods applied to either the correlation function or power spectrum, they can be roughly grouped into two distinct categories: those that were made assuming a fixed fiducial cosmological model and those that fit the full shape of the galaxy clustering statistics. The latter simultaneously measures both the RSD and BAO signals and as such includes the degeneracy with the AP effect, which, as seen in our measurements in the previous section, can contribute a large fraction of the error budget even at low redshifts. We plot these two sets of measurements separately in Fig. 19. The z = 0.57 BOSS and four WiggleZ measurements were calculated with and without the inclusion of the AP effect, and we can see that they too find a large difference in the constraints when incorporating this degeneracy into their measurements. Alongside these measurements we also plot the Planck-ΛCDM predictions for fσ8 assuming different values for the γ parameter. We can see that the majority of the measurements, including our MGS measurements, are in good agreement with the GR prediction.
As a more quantitative consistency test of GR, we use the likelihood recovered from our full-fit MCMC analysis to put constraints on γ itself. We use our data in combination with the publicly available Planck likelihood chains, subsampling these to enforce a prior on Ωm. We importance-sample the Planck chain by randomly choosing a value 0 ≤ γ ≤ 1.5 for each point in the chain and evaluating the likelihood for that parameter combination. One caveat, however, is that we have to correct the value of σ8 to account for the fact that this also depends on γ. For each point in the Planck chains we have Ωm,0 and σ8,0, where the latter is derived from the CMB power spectrum amplitude assuming GR. The correct value of fσ8 is then evaluated by scaling back σ8 to a suitably high redshift (for simplicity we use the redshift of recombination, z*) and then scaling both σ8 and Ωm to our effective redshift using the correct value of γ, i.e., for scale factor a = 1/(1 + z), Ωm(a) = Ωm,0 a⁻³ / [Ωm,0 a⁻³ + (1 − Ωm,0 − ΩΛ,0) a⁻² + ΩΛ,0]. Our subsequent constraints on γ and Ωm are shown in Fig. 20. Here we also show the joint constraints when including the measurements of fσ8 from the BOSS-DR11 CMASS sample (Samushia et al. 2014). For our simple consistency check we only include the CMASS measurement, as the method used to make this measurement is very similar to that used in this work. On top of this, the BOSS-DR11 LOWZ and WiggleZ measurements do overlap partially in terms of area and redshift distribution with both our measurement and the CMASS measurement, so to properly include these would require an accurate computation of the cross-correlation between these measurements, which is beyond the scope of this work. When combining the MGS result with our Planck prior we recover γ = 0.58 +0.50 −0.30, consistent with GR. With the addition of the CMASS measurement we recover γ = 0.67 +0.18 −0.15, which is also consistent with GR to within 1σ. However, it should be noted that in both cases we do find a slight preference for higher values of γ than would be expected from GR.

Figure 20. Constraints on γ and Ωm from the combination of our marginalised fσ8 and Planck likelihoods. Contours correspond to the 1σ and 2σ confidence intervals of the recovered posterior distribution. We additionally look at the case where we include the BOSS-DR11 CMASS measurement of the growth rate (Samushia et al. 2014). In both cases we find good agreement with the prediction from GR (dotted line).
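A rough sketch of the importance-sampling step described above is given below: σ8 is scaled back to a high redshift with the GR growth history and then brought forward to z_eff by integrating f = Ωm(a)^γ, so that fσ8(z_eff; γ) can be compared with our likelihood. This is an illustrative flat-ΛCDM simplification (curvature neglected), not the exact procedure or code used for Fig. 20.

```python
import numpy as np
from scipy.integrate import quad

def omega_m(a, om0):
    """Matter density parameter at scale factor a in flat LCDM."""
    return om0 * a**-3 / (om0 * a**-3 + 1.0 - om0)

def growth_ratio(a1, a2, om0, gamma):
    """D(a2)/D(a1) obtained from dlnD/dlna = Omega_m(a)^gamma."""
    integral, _ = quad(lambda lna: omega_m(np.exp(lna), om0)**gamma,
                       np.log(a1), np.log(a2))
    return np.exp(integral)

def fsigma8_gamma(z_eff, om0, sigma8_0, gamma, z_star=1090.0):
    """Rescale a GR-derived sigma8_0 (z = 0) to z_eff assuming growth index gamma."""
    a_eff, a_star = 1.0 / (1.0 + z_eff), 1.0 / (1.0 + z_star)
    sigma8_star = sigma8_0 / growth_ratio(a_star, 1.0, om0, 0.55)   # undo GR growth back to z*
    sigma8_eff = sigma8_star * growth_ratio(a_star, a_eff, om0, gamma)
    return omega_m(a_eff, om0)**gamma * sigma8_eff
```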
We take this one step further and include BAO information from our measurement and from the BOSS-DR11 CMASS results, as the inclusion of anisotropic distance information helps to better constrain Ωm and hence can reduce the uncertainty on our γ constraints. We use the 3D fσ8, α and ε likelihood from our fiducial fits as well as the equivalent constraints from the CMASS sample. The results of this are shown in Fig. 21, where we find γ = 0.64 ± 0.09 with, and γ = 0.54 +0.25 −0.24 without, the inclusion of the CMASS measurement. Both of these measurements are consistent with GR to within 1σ. The addition of our MGS fσ8, α and ε measurements improves the constraints on γ by ∼ 10% compared to the constraints we get on γ using the CMASS measurement alone.
The growth index has also been measured by Beutler et al. (2013), Sánchez et al. (2014) and Samushia et al. (2014) from the combination of BOSS CMASS and Planck data. Additionally Sánchez et al. (2014) use BOSS LOWZ data to produce their constraints. In Fig. 22 we plot our MGS+Planck constraint on γ alongside these other measurements. We see good consistency between all measurements, even though the methods used to measure the growth rate and anisotropic BAO information are very different. In all cases we also see a slight preference for higher values of γ, which corresponds to models where gravitational interactions are weaker.
There exists significant tension (∼ 2.3σ) between the Beutler et al. (2013) BOSS CMASS measurement of the growth index and the prediction from GR. An interesting question to ask is whether the addition of our measurements at low redshift helps to alleviate this tension, and how this combination of measurements compares to the result presented previously when we combine the MGS and Samushia et al. (2014) CMASS measurements. The constraints from these two combinations are also presented in Fig. 22, where we find that our measurement brings both combinations towards better agreement with the GR prediction; however, there is still a 2σ tension between this prediction and the value of γ recovered when combining our measurements with the Beutler et al. (2013) CMASS results.
CONCLUSIONS
In this paper we have presented measurements of the growth rate of structure at an effective redshift of z = 0.15 from fits to the monopole and quadrupole of the correlation function of the SDSS Data Release 7 Main Galaxy Sample (MGS). We have also described the creation of a large ensemble of 1000 simulated galaxy catalogues which enabled both this measurement and the isotropic BAO measurements made in Paper I, where the sample itself is detailed. Our main results can be summarised as follows:

• We have used a newly developed code PICOLA to generate 500 unique dark matter realisations. We use the Friends-of-Friends algorithm to create halos and populate these halos using a HOD model fitted to the power spectrum of the MGS. We find that the resultant 1000 galaxy catalogues are highly accurate, reproducing the observed clustering down to scales less than 10 h⁻¹ Mpc. Full details of our code PICOLA can be found in Howlett et al. (2014).

• Using these mock catalogues we construct covariance matrices for our two-point clustering measurements and test some of the assumptions made in the BAO fits presented in Paper I. We find: negligible cross-correlation between mock galaxy catalogues generated from the same dark matter field; that the method used to generate our random data points introduces no significant systematic effects; and that we can assume our errors on the power spectrum and correlation function are drawn from an underlying multivariate Gaussian distribution.
• We use the CLPT model (Wang et al. 2014) to fit the monopole and quadrupole of the correlation function. We use our mock catalogues to test the model for systematic effects and find excellent agreement between the model and the average monopole and quadrupole of the correlation function. We also perform a series of robustness tests of our method, looking at our choice of priors, fitting range and bin size. In all cases we see no evidence that our results are biased in any way, with all methods recovering the expected value of fσ8 for our mock catalogues.
• Fitting to the MGS data we measure f σ8 = 0.53 +0.19 −0.19 when fitting to the full shape of the correlation function and f σ8 = 0.44 +0.16 −0.12 when assuming a fixed fiducial cosmology. We have also shown that even at this low redshift the Alcock-Paczynski effect still contributes to a large portion of the uncertainty on measurements of the growth rate and so should not be neglected.
• Using our fiducial results to fit the growth index, γ, we find γ = 0.58 +0.50 −0.30 when including Planck data and γ = 0.67 +0.18 −0.15 when also including BOSS-DR11 CMASS measurements of the growth rate. When we include the additional anisotropic BAO information from the full fits to the shape of the correlation function, our constraints tighten to γ = 0.54 +0.25 −0.24 and γ = 0.64 ± 0.09 respectively, the latter of which is a ≈ 10% improvement on the constraints from the CMASS and Planck measurements alone. All of our results are fully consistent with the predictions of General Relativity, γ ≈ 0.55, and the constraints from other measurements at different redshifts. Our fiducial MCMC chains used for this analysis will be made publicly available upon acceptance. | 2015-01-30T12:03:38.000Z | 2014-09-10T00:00:00.000 | {
"year": 2015,
"sha1": "497eb8ae058bb6de0c33b53dc6398cf1fe68e1de",
"oa_license": null,
"oa_url": "https://academic.oup.com/mnras/article-pdf/449/1/848/17335801/stu2693.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "497eb8ae058bb6de0c33b53dc6398cf1fe68e1de",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
267748076 | pes2o/s2orc | v3-fos-license | How changes in depression severity and borderline personality disorder intensity are linked – a cohort study of depressed patients with and without borderline personality disorder
Background Borderline personality disorder (BPD) is often complicated by comorbid major depressive episodes (MDEs), which can occur as part of major depressive disorder (MDD) or bipolar disorder (BD). Such comorbidity is related to worse outcomes in both disorders. Subsyndromal features of BPD are also common in depression. However, studies of simultaneous changes in BPD and depression severities are scarce, and their interactions are poorly understood. Aims Studying the associations between changes in BPD and depression symptoms over the course of an MDE. Methods In a 6-month naturalistic cohort study of MDE/BPD, MDE/MDD, and MDE/BD patients (N = 95), we measured change in BPD features between baseline and six months with the Borderline Personality Disorder Severity Index (BPDSI), an interviewer-rated instrument quantifying recent temporal frequency of BPD symptoms. We examined changes in BPD severity and their correlation with depression severity and other clinical measures and compared these across patient groups. Results There were significant reductions in BPD severity, both in the number of positive BPD criteria (-0.35, SD 1.38, p = 0.01672) and in BPDSI scores (-4.23, SD 6.74, p < 0.001), reflecting mainly a reduction in the temporal frequency of symptoms. These were similar in all diagnostic groups. In multivariate regression models, changes in depression severity were independently associated with changes in BPDSI symptom scores. This relationship was strongest in MDE/BPD patients but was not found in MDD patients without BPD. Conclusions In the six-month follow-up, BPD features in MDE patients alleviated mainly through decreasing temporal symptom frequency and intensity. In BPD patients with comorbid MDE, changes in both conditions are strongly correlated.
Introduction
Major depressive episodes (MDEs) can occur as part of major depressive disorder (MDD) or bipolar disorder (BD) [1]. Borderline personality disorder (BPD) is associated with a significantly increased risk of these mood disorders, with lifetime prevalence rates of MDD around 70% and BD around 20% [2][3][4]. Conversely, in MDE patients, comorbid BPD is common, with rates around 10% in MDD and 20% in BD [5]. Comorbid BPD in depression is correlated with a less favourable prognosis, increased risk of relapse, and increased risk of suicide attempts, affecting treatment [6][7][8][9][10]. Hence, the course of comorbid BPD is relevant for the prognosis and treatment of depression patients, and vice versa.
The reciprocal relationship between BPD and mood disorders
According to several long-term cohort studies, BPD is not a static condition; instead, the symptoms of BPD tend to ameliorate over time, with the great majority of patients reaching symptomatic remission in long-term follow-up, although functional impairments seem more persistent [11][12][13][14][15]. The prevalence of depression in BPD also tends to decline over time but remains relatively high in follow-up, and relapses are common [16]. Over long-term follow-up of patients diagnosed with both mood disorders and BPD, there is evidence of bidirectional negative effects on outcome in MDD/BPD but less robustly in BD/BPD [7]. A previous prospective cohort of MDD patients found a significant correlation between decline in depression severity and number of positive personality disorder (including but not limited to BPD) criteria and self-reported neuroticism [17,18]. The factors underlying these relationships are likely to be complex. For instance, since a diagnosis of BPD is usually based on information obtained in a diagnostic interview, it can, in a depressed patient, be influenced by such factors as autobiographical, attentional, and emotional cognitive biases related to depression [19], with BPD symptoms seeming more pronounced during an MDE and less severe during remission. The DSM-5 recognizes this issue and explicitly warns against misdiagnosis of BPD in these circumstances: "Because the cross-sectional presentation of borderline personality disorder can be mimicked by an episode of depressive or bipolar disorder, the clinician should avoid giving an additional diagnosis of borderline personality disorder based only on cross-sectional presentation without having documented that the pattern of behaviour had an early onset and a long-standing course" [1]. Still, PD diagnoses made during an MDE seem to have important prognostic implications, and a BPD diagnosis can be made also during an acute MDE, ascertaining that BPD symptoms have been present also when the patient is not acutely depressed [20]. How the symptomatology of BPD changes over the course of an MDE is not well known, however, and more detailed study of this issue would deepen our understanding of how these commonly comorbid disorders influence each other.
In longitudinal follow-up, BPD exhibits both trait-like (i.e. temporally stable) and state-like (more dynamic) features, with the stable component, or BPD proneness, closely correlated with Five Factor Model traits (i.e. descriptive normative personality traits), such as neuroticism, previously linked to BPD [21]. Examining how BPD feature severity changes over time and whether this change correlates with changes in depression severity in different patient groups (such as depression patients with and without BPD) would illuminate these issues further.
Categorical and dimensional aspects of BPD
There is long-standing discussion on whether personality disorders are best described using categorical or dimensional diagnoses [22,23]. BPD is still conceptualized as a categorical diagnosis in the main DSM-5 model, but the DSM also includes an alternative, hybrid approach that takes both traits and level of functioning into account [24], and ICD-11 utilizes a primarily dimensional approach based on functioning, with trait-based descriptors (including borderline pattern) being optional [1,25]. Thus, attempts have been made to reconcile categorical diagnosis with more theoretically, and perhaps prognostically, valid dimensional evaluation.
One approach to quantifying BPD severity is according to the number of positive DSM-5 diagnostic criteria or otherwise measured symptoms, with more symptoms signifying higher severity [12,15]. However, since the rating concerns long-standing patterns apparent from (at least) young adulthood, these are by design not very sensitive to change over the short or even medium-term (weeks to months), and quick changes in these might reflect a change in recall and other cognitive biases rather than personality change. More accurate methods are also available; the Borderline Personality Disorder Severity Index (BPDSI) is an interviewer-rated, valid, and reliable instrument for quantifying recent BPD symptom frequency (mostly, in 8 of 9 symptom domains by rating how often symptoms occur) in greater detail [26], and has been used as a measure of treatment efficacy in trials of psychotherapeutic, pharmacological, and neuromodulatory treatment of both BPD and persistent depressive disorders [27][28][29][30]. Consistent with the view of BPD as a dimensionally occurring phenomenon that may increase the risk of mood disorders, subsyndromal symptoms of BPD are more common in depression than in the general population. For example, the non-BPD participants in this study had a significantly higher BPDSI score at baseline than previously found in healthy controls [31,32]. Nonetheless, to the best of our knowledge, there are no studies comparing the changes in dimensionally measured BPD feature severity in depression patients with MDD or BD, and with and without BPD, over time. A diagnosis of BPD, according to the DSM-5, is made based on established (retrospective) symptom patterns of high pathology, pervasiveness, and persistence over adult lifetime. Therefore, one might reasonably assume that prospectively assessed BPD symptom frequency and severity (measured, for instance, with the BPDSI) may be more temporally stable in BPD than non-BPD patients; however, this has not been previously investigated using methods precisely quantifying symptom frequency and severity.
Aims of the study
We evaluated the changes in BPD feature severities over the course of an MDE in MDD and BD patients, including patients with and without comorbid BPD. We hypothesized, firstly, that frequency and intensity of BPD symptoms, measured by the BPDSI and BPD criteria, would ameliorate over the course of the MDE, correlating with attenuation of depression severity. Secondly, BPD symptoms were hypothesized to be more stable in BPD patients than in others. If a correlation between the changes in BPD symptom and depression severities emerged, we intended to explore whether such a relationship was also present for anxiety and BPD symptoms.
Method
This naturalistic cohort study with a follow-up of at least 6 months is based on the Bipolar-Borderline Depression (BiBoDep) cohort.
Recruitment and sampling
Our recruitment process has been described in more detail elsewhere [31,33]. We recruited patients with depression starting outpatient treatment at one of two psychiatric care clinics of the City of Helsinki, Finland, with a total catchment area of 234 000 adults.
We aimed to include adequate numbers of MDE patients with MDD, BD, and/or comorbid BPD, applying stratified randomized sampling to achieve this. Based on information in the referrals, we divided all incoming depression referrals (n = 1655) into six preliminary strata by (i) sex and (ii) probable diagnosis: (a) MDD, (b) MDE in BD, (c) MDE with comorbid BPD. We prioritized patients in strata that were underrepresented in our sample at that time. If there were multiple possible recruits within the preferred stratum, recruitment order was determined randomly with a random number generator available online at random.org. Patients were contacted by phone, and those providing preliminary consent were met and given additional oral and written information about the study.
Consenting patients were then interviewed with the Structured Clinical Interview for DSM-IV, i.e. SCID-I and SCID-II [34,35]. The diagnostic interviews were thorough, lasting around three hours per patient at baseline, and the diagnostic evaluation was also based on information in patients' clinical charts. Diagnostic reliability, assessed with independent rating of videos of these interviews, was found to be excellent, with a Cohen's kappa of 1.00 for MDD, 0.90 for BD, and 0.89 for BPD. We examined current depression severity with the Montgomery Åsberg Depression Rating Scale (MADRS) [36].
Inclusion and exclusion criteria
Inclusion criteria were a current MDE, a MADRS score of 15 or more, and age of 18-50 years. Exclusion criteria have been described in more detail previously [31,33], but included psychotic illness or ongoing psychotic symptoms, active substance use disorders, antisocial personality disorder, lacking proficiency in the Finnish language, and significant neurocognitive or sensory impairments.
Sample and subcohort assignment
Altogether 124 patients were included in the study at baseline. Our patients were divided into three subcohorts, such that all patients with MDD without BPD belonged to one subcohort (MDD, n = 50), patients with BD belonged to the second subcohort (MDE/BD, n = 43), and patients with comorbid BPD belonged to the third subcohort (MDE/BPD, n = 31). BD patients with comorbid BPD were assigned to the BD subcohort if they had type I BD; otherwise we assigned them to either the MDE/BD or the MDE/BPD subcohort depending on the main clinical picture at and preceding baseline. Unclear cases were discussed in the study group, and a consensus decision of subcohort assignment was reached.
Baseline evaluation
In addition to the diagnostic interviews and MADRS, we also asked study participants to complete the Beck Depression Inventory II (BDI II) [37] and the Overall Anxiety Severity and Impairment Scale (OASIS) [38].
Borderline personality disorder severity index
We evaluated the severity of recent BPD symptoms with the BPDSI [26]. The BPDSI rates 70 items, comprising occurrence frequency (for 8/9 symptoms) and severity (for the identity disturbance symptom) of instances of the 9 DSM-IV (and DSM-5) symptoms during the preceding 3-month period, yielding a total sum score measuring overall BPD severity, as well as symptom-level subscores. In rating the BPDSI, we also had access to patients' clinical charts, with information regarding possible suicide attempts and other relevant information. The BPDSI interviews lasted around 1 h per patient.
Follow-up
We had a follow-up period of at least 6 months, after which we met with patients again, repeating the SCID, MADRS, and BPDSI. Altogether 95 patients were available for follow-up. Remission from the MDE was achieved by 56.8% of patients, with no significant differences between cohorts: MDE/MDD 56.4%, MDE/BD 60.6%, and MDE/BPD 52.2%, p = 0.8196 [10]. In assessing clinical course (relevant for e.g. the BPDSI and SCID), we had access to clinical charts as well as a biweekly online follow-up questionnaire consisting of an expanded version of the Personal Health Questionnaire-9 [10]. Of note, the focus of both SCID-II interviews was whether PD criteria were currently met, based on information regarding participants' lifetimes. As previously reported, we found no significant differences between drop-outs and non-drop-outs in subcohort or clinical data [10].
Analyses
Data were assembled into a database using SQLite, version 3.35.5 (SQLite Team, www.sqlite.org) and analysed with R version 4.2.2 (R Foundation, www.r-project.com) on PC computers running Microsoft Windows 11. We used parametric and non-parametric tests as appropriate, analysis of variance testing, and linear regression models. Significance testing of changes over time was done with paired samples t-tests comparing baseline and later scores, and change magnitude between groups was examined with ANOVA comparisons of the change in scores.
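The analyses were run in R; purely as an illustration of the same steps, a Python sketch using SciPy and statsmodels is given below. The data frame, variable names and model specification are hypothetical and do not reproduce the authors' exact code or covariate coding.

```python
import pandas as pd
from scipy.stats import ttest_rel, f_oneway
import statsmodels.formula.api as smf

# df: one row per patient, with baseline/follow-up scores and a subcohort label (hypothetical names).
def run_analyses(df: pd.DataFrame):
    # Paired t-test of within-patient change in BPDSI total score over follow-up.
    t_bpdsi = ttest_rel(df["bpdsi_baseline"], df["bpdsi_followup"])

    # ANOVA: does the magnitude of BPDSI change differ between subcohorts?
    groups = [g["bpdsi_change"].values for _, g in df.groupby("subcohort")]
    anova_change = f_oneway(*groups)

    # Linear regression: BPDSI change on MADRS change, controlling for other covariates.
    model = smf.ols(
        "bpdsi_change ~ madrs_change + oasis_change + age + sex + bpd_dx + bd_dx",
        data=df,
    ).fit()
    return t_bpdsi, anova_change, model.summary()
```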
Results
Demographic and clinical characteristics of the cohort and the subcohorts at baseline and follow-up are reported in Table 1.
Changes in categorical BPD diagnoses
For the vast majority of patients, there was no change in their BPD diagnostic status at follow-up (i.e. most BPD diagnoses were still valid, and most patients not meeting BPD criteria at baseline did not meet them at follow-up either). Only two patients diagnosed with BPD at baseline did not meet diagnostic criteria at follow-up, whereas two other patients who had not met BPD criteria at baseline now did so; thus, there was no net change in the point prevalence of BPD. The mean changes in the number of fulfilled BPD criteria were of similar magnitude in the MDE/MDD, MDE/BD, and MDE/BPD subcohorts, with no significant differences between the cohorts in the amount of change (p = 0.59). We did not find evidence of significant differences in magnitudes of change between diagnostic groups (BPD vs. non-BPD, BD vs. non-BD).
BPDSI total and subscores
Changes in BPDSI total and subscores are reported in Table 2. The effect size for total BPDSI change was moderate (Hedges' g = 0.5).
Grouping patients into diagnostic groups, BPD patients (regardless of subcohort) had a significant mean total BPDSI score change of −4.91 (sd 7.32, p = 0.017) and bipolar patients −5.18 (sd 7.54, p < 0.001). There were no significant differences in the amounts of change between BPD and non-BPD patients (p = 0.674) or between BD and non-BD patients (p = 0.328).
The mean change in BPDSI score was −3.21 (sd 6.82) in patients who still fulfilled MDE criteria at follow-up, whereas those who were in a state of remission from MDE had a mean change of −5.02 (sd 6.66); the difference between remitted and non-remitted patients was nonsignificant (p = 0.2385).
We examined the robustness of the correlation between change in BPDSI and MADRS scores through linear regression models. When controlled for age, sex, change in OASIS score, and BPD and BD diagnostic status, the MADRS change remained a significant predictor of BPDSI change (p = 0.007), whereas the other variables were not; however, this model was not significant in itself (F = 1.858 on 6 and 71 df, p = 0.1001). Stepwise dropping of non-significant variables yielded a significant (F = 4.251 on 2 and 75 df, p = 0.01784) model in which MADRS change was significantly (estimate 0.240, SE 0.083, 95% CI 0.074 to 0.406, p = 0.005) correlated with BPDSI change when controlling for OASIS change (the correlation of the latter being nonsignificant: estimate −0.269, SE 0.17711, 95% CI −0.622 to 0.084, p = 0.132).
Main findings
In this 6-month cohort study of major depressive patients with and without borderline personality disorder (BPD), we found that BPD feature severity decreased significantly over time both in BPD patients and in patients with subsyndromal BPD features. This was noted both in a reduced number of positive BPD criteria in repeated diagnostic interviews and in a lower BPD severity index (BPDSI) score, reflecting lower frequency and intensity of borderline symptoms. Whereas the effect size for the change in number of positive BPD diagnostic criteria was small, the effect size for change in BPDSI scores was moderate, indicating that this instrument was more sensitive to changes in the occurrence frequencies of symptoms. There were no significant differences in the amelioration of BPD symptoms over time between unipolar and bipolar depression patients, nor between BPD and non-BPD patients. Changes in BPD feature severity were significantly correlated with changes in depression severity. Interestingly, this correlation was significantly stronger in BPD patients than in others, and was nonsignificant in MDD patients without comorbid BPD. Put differently, even MDE patients without a BPD diagnosis had significant BPD features at baseline, which became less marked during follow-up; however, in contrast to BPD patients, there was no evidence of this amelioration being correlated with the alleviation of depression.
BPD outcome after follow-up
No net change occurred in BPD point prevalence over time, supporting earlier reports that diagnostic change seems faster in mood disorders than in BPD [15]. Thus, our time frame might have been too short to detect such changes on a categorical diagnostic level. The changes in number of BPD criteria were also less marked than those in BPDSI scores. These findings thus might reflect the aim of the SCID-PD interview, which is primarily to evaluate the significance of symptoms over patients' entire lifespans, rather than only recently. Our findings are not in conflict with the prevailing view of BPD as a partly dynamic disorder with a clear tendency toward symptomatic amelioration over time [14,21], as there was a significant, although modest, reduction in BPD feature severity measured with BPDSI scores (corresponding to symptoms occurring less frequently or strongly) as well as with BPD criteria. As a concrete example of the magnitude of changes in this time period, at baseline the score for the affective hyperreactivity category in the MDE/BPD subcohort was approximately 7 (rounded from 7.3), signifying that the average patient had experienced these symptoms weekly, and after follow-up the mean score was around 6 (6.1), which corresponds to symptoms in this domain occurring twice every three weeks. This dynamic seems to be valid both for depressive patients meeting the full BPD criteria and for those with subsyndromal symptoms, in line with viewing BPD as a dimensionally occurring phenomenon, rather than a categorical entity.
Correlations between changes in BPD and depression severity
Changes in depression and BPD severity were linked also when controlled for other relevant factors (such as anxiety and main diagnoses). However, when examining how changes in BPD symptom severity are correlated with changes in depression severity, we found marked differences between depressive patients with and without BPD; the correlation in BPD patients was significant and moderately high, but we found no evidence of a correlation in MDD patients without BPD. Considering the large difference between correlations (r = 0.67 vs. 0.07), this seems unlikely to be simply an inferential (type II) error. This finding was contrary to our a priori hypothesis and warrants further study, but we wish to offer some possible explanations. Depression confers negative cognitive biases [19], and BPD patients might potentially be more affected by these biases than others, perhaps as a function of what has been described as BPD proneness or personality features, such as neuroticism [21], which would increase the correlation between the two. Interestingly, changes in anxiety (as measured by the OASIS) and BPDSI change did not correlate in any subgroup; attentional and cognitive biases in anxiety are more related to perceived external threats than to the self [39], and thus, changes in these may not influence the experience and occurrence of BPD symptoms as strongly (or, indeed, at all). This difference in correlations may also reflect a difference in the unmeasured precipitants of depression. For example, the role of external triggers of symptomatic decline (such as adverse life events) might differ for BPD and non-BPD patients. In addition to potential differences in these triggers per se, BPD patients might, due to their affective hyperreactivity, have a tendency to react to these triggers more strongly, which would also explain differences in symptom change correlations. Another possibility is that (full-blown, syndromal) BPD is a cause of MDE, and that depression symptoms alleviate when BPD features alleviate in these patients, but not in others. Emotional dysregulation is closely linked to the BPD phenotype, and has been shown to mediate the effect of childhood maltreatment on risk of later depression [40], and decline in emotional dysregulation might thus explain both alleviation of depression and BPD symptoms. Alternatively, as the relationship of depression and borderline features may be reciprocal and bidirectional [7], this observed pattern might be conceptualized as an alleviation of a more global illness process rather than of two discrete disorders [41].
BPD symptom subdomains
In addition to an overall alleviation of BPD symptoms, we found significant reduction in five (out of nine) of the DSM symptom subdomains: identity disturbance, suicidality/self-harm, identity disturbance, feelings of emptiness, and difficulties in anger control; all but the last were highly significant. A reduction in suicidality is to be expected, as depression (generally) lessened over time and has indeed been reported in this cohort (using other methods and measures) previously [9]. In one earlier cohort study of BPD symptomatic change, the results were somewhat different, as impulsivity was the first to change and affective symptoms the last, with interpersonal and cognitive symptoms lying between the two [15]. Another study found amelioration in impulsive, affective, and interpersonal symptoms, but not in cognitive symptoms, and a third reported approximately similar rates of decline in all of the DSM-5 symptom domains of BPD over 10 years [13]. Differences in time frames and instruments used and, perhaps most importantly, our focus being on MDE patients may contribute to the variability of results.
Strengths and limitations
Strengths of this study include the clinically and theoretically relevant comparative design of three central depressive groups of treatment-seeking psychiatric care patients, the prospective study design, and the use of valid and reliable dimensional measures of BPD symptomatology and other symptom severity. The study also has some limitations. The follow-up time of 6 months was chosen in order to examine change over the course of an MDE but precludes drawing longer-term conclusions. Since the research interviews were done by the same researcher for each patient, they were not blinded to diagnoses when assessing, e.g., the BPDSI. Although inter-rater reliability was excellent for main diagnoses, it was not assessed for all measures, including the MADRS and the BPDSI. Our sample size was moderate, but even so, we made significant new findings. Since we investigated outpatient psychiatric care patients, confirmation of our results in other settings is required. We focused on MDD patients, and the relationships between BPD and depression severities might conceivably be different in persons with minor depression or subsyndromal depression symptoms. Although we found interesting and suggestive relationships, the study design precludes drawing firm conclusions about causal relationships; for instance, we did not assess the possible role of psychosocial stressors as triggers for MDE, and thus, any changes in these, or in other common causes of both BPD and MDD, such as emotional dysregulation, over the follow-up period could explain changes in both depression and BPD severity. Alternatively, some features of BPD and depression may overlap at least indirectly or otherwise influence each other (for instance, depressive dysphoria may increase the risk of anger and/or self-harm, and BPD-linked interpersonal problems might worsen depressive symptoms); the precise mechanisms of such reciprocal effects were largely beyond the scope of this study. The BPDSI instrument mostly focuses on symptom frequency, which was detectable; however, other mechanisms by which BPD feature severity may decrease, not identified using these methods, are also possible. What we see is thus dependent on what is being sought. Still, we would argue that the BPDSI is a methodological improvement over less detailed methods used in earlier research, such as the number of positive BPD criteria in the SCID-PD, and quite specific for the DSM symptoms of BPD. Use of other dimensional assessment models of personality pathology, such as the DSM-5 alternative model and the ICD-11, is likely to illuminate these issues further, and could be combined with the BPDSI or other measures for detecting changes in symptoms in future research.
Conclusions
In conclusion, we found interesting similarities, but also some differences, between changes in BPD severities over the course of an MDE in patients with MDD, BD, and/or BPD. The view of BPD as a partially dynamic phenomenon with both trait- and state-like components is refined by a deepened understanding of the relationship of frequently co-occurring BPD and depression. Specifically, the frequency and severity of BPD symptoms tend to ameliorate when recovering from depression, and one way in which this change takes place is through a lessening in frequency of both observable and subjective symptoms of BPD. Changes in BPD and depression symptom severities seem to correlate in BPD patients, but not in non-BPD patients; this phenomenon warrants replication and further investigation. Seeing change in BPD is partly dependent on using instruments (such as the BPDSI) calibrated to detect change over the relevant period.
Notes: MDD = Major depressive disorder, MDE = Major depressive episode, BD = Bipolar disorder, BPD = Borderline personality disorder, sd = standard deviation. p refers to significance testing of amount of change (i.e.
Fig. 1 Correlation between changes in BPDSI and MADRS during the study in the whole cohort
Fig. 2 Correlation between changes in BPDSI and MADRS during the study by subcohort
Table 1. Demographic and clinical characteristics at baseline and follow-up of BPD patients. The patients no longer meeting full BPD diagnostic criteria at follow-up were MDD patients sorted into the BPD subcohort, who had met 5 (the diagnostic minimum) of the BPD diagnostic criteria at baseline and 3 and 4, respectively, at follow-up; one of them had achieved remission from MDE as well. Patients with new BPD diagnoses were BD subcohort patients with type II BD, meeting 3 and 4 BPD criteria at baseline and 3 more (i.e., 6 and 7) at follow-up, respectively; one of them had achieved remission from MDE.
Notes: Correlations are between baseline and follow-up scores. MDD = Major depressive disorder, MDE = Major depressive episode, BD = Bipolar disorder, BPD = Borderline personality disorder, MADRS = Montgomery Åsberg Depression Rating Scale, BDI-II = Beck Depression Inventory II, OASIS = Overall Anxiety Severity and Impairment Scale, BPDSI = BPD Severity Index. p refers to ANOVA or Fisher's exact χ² testing of intra-subcohort differences
Table 2. Changes in BPDSI Total and Subscores from Baseline to Follow-up
"year": 2024,
"sha1": "2e4e77fcbee9f541e50276ec95b1551a68248c03",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "71f8e01652c0409a8433a23df06742f97b2c195c",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The Bradford Model and the Contribution of Conflict Resolution to the Field of International Peacekeeping and Peacebuilding
This article outlines the important contribution made by the Department of Peace Studies, particularly the Centre for Conflict Resolution (CCR), at the University of Bradford, to the field of international peacekeeping and peacebuilding. It adds to Woodhouse's examination of the crucial role of Adam Curle, the first chair of the department, in the field of peace studies (Woodhouse, 2010). In the spirit of Woodhouse, this article provides further investigation into how the department has developed research into one of Curle's main strands of activity relevant to peacemaking, "to nurture social and economic systems which engender cooperation rather than conflict" (Woodhouse, 2010, p. 2). Woodhouse's article speaks of how Peace Studies at Bradford University explored this strand with a focus on "critical research on institutions for international co-operation and interdependence" (Woodhouse, 2010, p. 2). This article complements the approach by examining the impact the Centre for Conflict Resolution has had on the practice of peacekeeping over three generations, and how, in turn, the practice of peacekeeping has informed critical enquiry of peacekeeping and peacebuilding.
between equals with no element of dominance (between two states). However, in conflicts where the parties are not equal (i.e., a conflict between the centre and the periphery within a state), peacekeeping runs the risk of preserving the status quo as a result of intervening. Through containing the conflict and maintaining the status quo, the peacekeeping force is actually taking sides in the conflict (Galtung, 1976b, p. 284).
Galtung noted that doctrines of non-intervention in the affairs of a state must be rejected.Only by doing this, Galtung argued, would peacekeeping operations "unequivocally… break through these artificial walls called regions and states mankind has built around itself " (Galtung, 1976b, p. 286).To normalise intervention, Galtung examined three ways in which peacekeeping could react to vertical conflict (conflict between a strong centre and weaker periphery): 1) the formalistic stand (third party intervention which handles any war in the same way); 2) the let-it-work-itself-out stand (no third party intervention); 3) the use-peacekeeping-on-the-side-of-peace stand (third party intervention seeks to remove both direct and structural violence).
Galtung rejected the first two approaches outright, and chose to explore the third.Although he outlined problems in it, Galtung argued in favour of this approach, and stated that: "A peacekeeping operation in a vertical conflict should be more like a one-way wall, permitting the freedom fighters out to expand the liberated territory, but preventing the oppressors from getting in." (Galtung, 1976b, p. 288) Importantly, Galtung's work showed that peacekeeping could have a role in radical conflict transformation, and move beyond containment of overt violence.This was very much a case of incorporating peacekeeping into conflict resolution theory, and placed military forms of peacekeeping within the wider context of conflict transformation.
Following in this tradition, the CCR has examined how peacekeeping practice can move beyond negative peace and towards transformation and emancipation. This set of important theoretical insights has linked micro- and macro-level processes, and helped to develop a reciprocal understanding between those who carry out the practicalities of peacekeeping, and those who engage in wider theoretical debates in the field of conflict resolution.
CCR Engagement with Peacekeeping in the 1990s
The early 1990s heralded the first contributions from the CCR to the field of peacekeeping research. The end of the Cold War and the early 1990s were a period characterised by a sense of optimism at the UN, encapsulated in Secretary-General Boutros Boutros-Ghali's Agenda for Peace (UN, 1992). Moreover, there was a rapid expansion in peacekeeping deployments, with operations covering a much wider set of peacebuilding tasks. However, not all multidimensional operations worked as well as was hoped, with operations deployed in environments which had not reached the point of consent and agreement with the goals of the mission. This led to problematic engagements, most notably in Somalia, the former Yugoslavia, and Rwanda.
Although such problems were ongoing, the period heralded the engagement of the CCR with peacekeeping operations, with a burgeoning number of publications examining the role that peacekeeping can play in conflict resolution processes, and the specific conflict resolution skills which may be required for peacekeepers to carry out their roles effectively.
A major contribution made by the CCR in this period was Fetherston, Ramsbotham and Woodhouse's analysis of the UNPROFOR operation in 1994 (Fetherston et al., 1994). This analysis identified a number of areas where the field of conflict resolution could contribute to improving the operation, and outlined important contributions that conflict resolution could make in wider peacekeeping interventions.
The authors advocated the use of conflict resolution principles to help understand how peacekeeping personnel relate to the parties in a conflict.By doing so, they could greatly benefit from understanding the social dynamics of belligerent groups and those they were sent to protect.Preparing peacekeepers to understand this was seen as critical for them to provide security amongst the groups, and open up avenues for peacebuilding.It also raised the chances that operations would engage with groups who might not have had access to power structures during the conflict.Fetherston et al. also highlighted how conflict resolution theory and practice could facilitate relations between the military and non-military components of the peacekeeping operation.
In modern-day operations this is formalised through civil-military cooperation and coordination strategies; at the time it was written, in 1994, the linkages between the civil and military actors were not formalised at all (Slim, 1996). Finally, through developing an understanding of the multinational and multicultural aspects within peacekeeping deployments, the authors firmly established conflict resolution as a tool which could develop understanding between the various nationalities within a peacekeeping deployment. Peacekeeping is still a global undertaking (more so than in 1994), and for peacekeeping operations to function effectively, there is a requirement for military peacekeepers to understand cross-cultural communication within the operation, as well as towards external actors.
Fetherston's early work on training for peacekeeping advocated the strengthening of links between peacekeeping and conflict resolution, both at theoretical and tactical levels.Her 1994 study of training suggested that existing definitions of peacekeeping were "inadequate" because they "have not been placed within a larger framework".She offered a theoretical framework to "analyze the utility of peacekeeping as a third party intervention and as a tool of conflict management" (Fetherston, 1994, pp. 139-140).She further argued that: "It is not enough to send a force into the field with a vague notion that they should be impartial and help to facilitate settlement.To act as a third party in a protracted violent, polarized conflict is an extremely difficult and delicate task.Diplomats, academics and others who have acted in the capacity of a third party are generally well trained, highly experienced individuals with a good base of knowledge about the particular conflict.On the whole, peacekeepers have limited preparation and experience." (Fetherston, 1994, p. 140) Noting that peacekeeping operations represent a form of third-party intervention and that there exists no framework for understanding when to intervene, she linked peacekeeping to the contingency model outlined in Fisher and Keashly's 1990 research, arguing that it "seems to offer the best possibility for a more effective management of conflict" (Fetherston, 1994, p. 123).The model was devised to match third party intervention to certain characteristics of the conflict (Fisher and Keashly, 1991).
In order for peacekeeping to fit the model, Fetherston advocated that effective coordination must be made between the traditional security aspects and the civilian peacebuilding aspects of the operation.Without this, in Fetherston's view, operations faced "insurmountable odds" of moving beyond controlling violence and maintaining a status quo (Fetherston, 1994, p. 150).Within this framework, she also considered that peacekeeping could be visualised in a two-tiered approach.Firstly with peacekeeping personnel "working in the area of operation at the micro-level, facilitating a more positive atmosphere", and secondly with peacekeeping operations "cooperating and coordinated with peacemaking and peacebuilding efforts at the macro-level" (Fetherston, 1994, p. 150).Fetherston suggested that peacekeeping could play a valuable role in the successful resolution of conflicts by creating an environment conducive to further resolution of conflict (much like the important role of pre-negotiation).She found that: "Co-ordinating peacekeeping at the micro-level at least begins the groundwork of what might be called a pre-resolution or a pre-peacebuilding phase.This has taken the form of coordination of local level resolution processes, either at the initiative of local people or at the initiative of the peacekeepers." (Fetherston, 1994, pp. 151-152) So peacekeepers were seen as a critical interface between micro and macro approaches to conflict resolution.To facilitate this link, Fetherston emphasised the importance of peacekeepers possessing the two 'contact skills': skills in conflict resolution, such as mediation, negotiation, conciliation, and the skills required for effective cross-cultural interaction.She emphasised the importance of these skills for deployed peacekeepers, arguing that the "essence of peacekeeping as a third party intervention must be contact skills".She adds: "It is through the use of communication skills, methods of negotiation, facilitation, mediation, and conciliation that peacekeepers de-escalate potentially violent or manifestly violent situations and facilitate movement toward conflict resolution." (Fetherston, 1994, p. 219) Her findings also supported the view that it is important to provide "specific training to effect a shift from a military to a peacekeeping attitude and to learn and practice contact skills" (Fetherston, 1994, p. 217).
This work is supplemented by the 1994 article, Putting the peace back into peacekeeping (Fetherston, 1994a), which outlined the importance of training for peacekeepers.Here, she argued that a lack of training for peacekeepers means that the task peacekeepers undertake, representing the international community's message of non-violent consensual conflict management, becomes increasingly difficult.In a 1998 article, she warned that, without basic research on what peacekeepers do and why they do it, "training will continue to be inconsistent and inappropriate".She added "[...] if we only prepare people for war it is far more likely that is what we will get." (Fetherston, 1998, p.178) Alongside the development of Fetherston's work on training, Woodhouse and Ramsbotham both furthered research into peacekeeping and conflict resolution.Their 1996 paper, "Terra Incognita: Here be Dragons", applied Azar's Protracted Social Conflict theory to contemporary conflict.From this, Woodhouse and Ramsbotham suggested that peacekeeping operations be deployed in International Social Conflict (ISC): a conflict neither purely inter-state, nor intrastate, but somewhere between the two.Using this framework, their response to the failures of peacekeeping deployments was to advocate the use of the 'middle ground' between peacekeeping and peace enforcement (Woodhouse and Ramsbotham, 1996).Also in 1996, Humanitarian Intervention in Contemporary Conflict was published.This book examined approaches to, and attempted to widen understanding of, humanitarian intervention by drawing together existing analyses from the field of international relief organisations, and studies from the security field.It was also an early indication of the work that Woodhouse and Ramsbotham would later carry out on cosmopolitan approaches to peacekeeping (Ramsbotham and Woodhouse, 1996).Further contributions by Woodhouse were his analysis of the psychological aspects of peacekeeping, and the requirements for military personnel to understand conflict resolution concepts and techniques (Woodhouse, 1998), as well as an analysis of national policies, such as the development of UK doctrine and practice, (Woodhouse, 1999).
Woodhouse and Ramsbotham also formalised the links between peacekeeping and conflict resolution in two particularly important contributions to the field.The first, Encyclopaedia of International Peacekeeping Operations (published in 1999), offered a comprehensive approach to all facets of international peacekeeping, but also included entries from the conflict resolution field, incorporating the scholarly work that had been ongoing within the CCR and other institutions (Ramsbotham and Woodhouse, 1999).The second major publication was Ramsbotham, Woodhouse and Miall's Contemporary Conflict Resolution.Also published in 1999, it incorporated peacekeeping practice as part of international efforts to alleviate conflict and facilitate positive peacebuilding (Ramsbotham et al., 1999).The publications were aimed at different audiences: one was more specifically concerned with the intricacies of peacekeeping, and the other predominantly for conflict resolution scholars.However, through incorporating both fields into a common endeavour, the publications further solidified links.
Research at the CCR widened towards the end of the 1990s. Tamara Duffey's research advocated the incorporation of Betts Fetherston's contact skills into military training for peacekeeping operations (Duffey, 1998, p. 106) and argued that military peacekeepers preparing for Cold War operations received virtually no specialised peacekeeping training in mediation, negotiation and other conflict resolution skills. Because of this, they would often find themselves in "dangerous and stressful situations unprepared to effectively handle them" (Duffey, 1998, p. 129). To address this deficit in training, Duffey's analysis outlined the importance of cultural training, which should have two components. Firstly, culture-general training, which focuses on basic understandings of culture (including how culture influences one's own assumptions, values, actions and reactions, along with intercultural communication skills, and developing an awareness of other organisational cultures). Secondly, culture-specific training, which concentrates on developing an understanding of the specific culture in which the intervention takes place (Duffey, 1998, p. 270). Overarching this is the need for all involved in peacekeeping (including the military, civilian agencies and conflict resolution scholars) to carefully consider the "culturally appropriate ways of re-evaluating and reforming peacekeeping" (Duffey, 1998, p. 271).
Research in the late 1990s also reflected the dynamic changes that were occurring in the field of international peacekeeping.Langille's thesis on the development of training, role specialisation and rapid deployment of peacekeepers took the case study of his attempts to develop the idea of a specialised peacekeeping training establishment in Canada.Langille's research mapped the debates leading up to the development of a training centre, reporting on the considerable amount of opposition to the notion of turning a redundant military facility into a peacekeeping training centre.(Langille, 1999, p. 101).As the manifestation of these efforts resulted in the creation of the Pearson Peacekeeping Centre, it can be seen that the thesis and the CCR itself were at the forefront of developments in the field of peacekeeping.
The Second Generation: Reflections on 1990s Peacekeeping and the Advent of Peace Support
The end of the 1990s highlighted a shift in how peacekeeping practice was conceptualised.At the UN, Kofi Annan launched a period of reflection, through official reports into the failures in Rwanda and Srebrenica.This reflection culminated in the 2000 Report of the Panel on United Nations Peace Operations -a wide-reaching report into all aspects of peacekeeping and peacebuilding deployments (UN, 2000c).In the UK, a doctrine was developed to meet the wider operational demands of deployment in peacekeeping operations where robust peace enforcement needed to be linked with civilian expertise.Whilst uncertainty existed about exactly how operations could achieve the ambitious targets set out in the Security Council Resolutions, the turn of the century also heralded the third generation of peace operations, with multifunctional and robust deployments in Sierra Leone and the DRC.
Ramsbotham and Woodhouse's 2001 book Peacekeeping and Conflict Resolution, published during a time of transition and uncertainty, reflected on the current debates, underscoring peacekeeping operations by stating that: "[...] the future of UN peacekeeping will depend on the capability and willingness to reform and strengthen peacekeeping mechanisms, and to clarify its role in conflict resolution." (Woodhouse and Ramsbotham, 2001, p. 3) Thus, the authors argued, the purpose of the collection was to consider the contribution that conflict resolution can make in the development of future peacekeeping practices.The book offered the viewpoints of academics, who applied conflict resolution theory to peacekeeping practice, and "experienced military peacekeepers seeking to enrich peacekeeping by uses of conflict theory" (Woodhouse and Ramsbotham, 2000, p. 6).It included articles spanning the spectrum of international conflict resolution efforts, from prevention to peacekeeping and peacebuilding, providing a crucial contribution as it solidified links made between the two fields.Articles provided by practitioners included Philip Wilkinson (who wrote about the role of conflict resolution in the UK's Peace Support Operation doctrine), John Mackinlay (examining the role of warlords) and Peter Langille (who examined the development of standing capacities for UN peacekeeping).Articles by scholars in conflict resolution examined the role of culture in peacekeeping practice (Tamara Duffey's article highlighting the difficulties encountered in the UN operation in Somalia), how operations can best address the peacebuilding capability gap, and complementary approaches dealing with ethno-political conflict.Further articles linked peacekeeping and peacebuilding interventions to wider theoretical approaches from the conflict resolution field.Woodhouse provided a response to criticism of international conflict resolution processes, Ryan looked at integrating peacekeeping strategies into wider conflict resolution approaches, Ramsbotham examined the UN approaches to peacebuilding, and finally, Fetherston (in a radical change from her early work on peacekeeping) provided a critical assessment of peacekeeping and peacebuilding, and advocated that peace operations provide a wider transformative process to promote a post-hegemonic society.
PhD scholarships at the CCR continued throughout this period and provided significant analyses of peacekeeping and conflict resolution practices. Solà i Martín analysed MINURSO to understand why the mission failed to provide space for transformative conflict resolution after the successful reduction of violent conflict. He found constraints on the operation as a result of power politics (Solà i Martín, 2004, p. 22). The second part of his research examined the potential of new ideas in peacekeeping research, in particular through the use of a Foucauldian analysis of power versus knowledge to assess peacekeeping operations in the context of power relations at a local and international level. He found that, through the examination of the parties' production of power and knowledge, conflict resolution could have a larger impact on peacekeeping research (Solà i Martín, 2004, pp. 241-244).
Yuka Hasegawa focused on the UN operation in Afghanistan (UNAMA) and provided an analysis of the role of peace operations in the protection and empowerment of human security.His research also asserted the importance of UN peacekeeping forces as a third party intervener, with their impartiality derived from the UN's pursuit of basic human security (linked to Burton's Human Needs theory).This impartiality is the most important facet of peacekeeping operations.In the case of UNAMA, the pursuit of impartiality was key in its effectiveness.Hasegawa concluded that the significance of UN peacekeeping missions is that they represent a collective means to address issues of human security, as opposed to being "yet another tool with which to coordinate various interests both at the global and micro levels" (Hasegawa, 2005, pp. 332-337).
As well as advances in the field of peacekeeping, wider social and cultural developments were beginning to impact on peacekeeping and peacebuilding.One critical development was the spread of the Internet and other tools to increase global communication in the first decade of the 21st century.Laina Reynolds-Levy sought to document the real-world use of the Internet by organisations operating in the post-conflict context of Kosovo in the period 2000-2003, focusing on understanding how the Internet could contribute to post-conflict peacebuilding.She considered the potential impact of information and communication technologies (ICT) on peace and conflict issues, and offered practical examples of how the Internet was used as a vehicle of change in the working practices of peacebuilding organisations (Levy, 2004, pp. 61-97).Informing this is the importance Levy attached to the emergent uses of ICT in this post-conflict Kosovo, particularly, "in order to formulate ideas on how ICTs could be best used to build stable, peaceful and just societies in the aftermath of war" (Levy, 2004, pp. 1-2).In terms of peacekeeping, Levy links the role of ICT to recommendations in the UN Brahimi Report, which was explicit in making the case for ICT to be used to link peacekeeping operations (Levy, 2004, p. 108).
The Third Generation: Critical Appraisal and Cosmopolitan Peacekeeping
The third generation of conflict resolution interaction with peacekeeping has come as a result of wider theoretical critiques over the type of peace that peacekeeping operations attempt to achieve. The backdrop to this is continued reflection and uncertainty over peacekeeping practice, with operations in Sierra Leone and Burundi successfully making the transition from peacekeeping to peacebuilding, operations in the DRC and Lebanon suffering setbacks on wider peace processes, and new operations in Darfur and Chad/CAR failing to deploy rapidly. This period is also informed by four main thematic debates. Firstly, within overarching peacekeeping and peacebuilding practice, there has been an evolution of normative values for protecting civilian populations, a responsibility to protect, and a wide and varied approach to the phenomenon of human security. Secondly, through the practice of robust peacekeeping, military peacekeepers have been more able to use force in deployments in a pre-emptive manner, and at times under the rubric of protecting civilians. Thirdly, there has been a rise in studies and assessments that ask questions about the liberal economic underpinnings of peacebuilding. Finally, this era is characterised by unilateral and, at times, non-UN-sanctioned intervention under the rubric of the Global War on Terror. At the CCR, the arrival of Professor Mike Pugh meant that the journal International Peacekeeping was housed at the centre. It is a cornerstone for contemporary debates in the field and has succeeded in becoming an "important source of analysis and debate for academics, officials, NGO workers and military personnel" (Pugh, 1994). On a wider scale, challenges to the role of conflict resolution in peacekeeping practices came from the background of critical theory. There was increasing criticism of problem-solving approaches to peacekeeping in the literature on peace operations, arguing that it devoted "too much attention to policy relevance" (Paris, 2000, p. 27), and overlooked "larger critical questions that could be posed" (Whitworth, 2004, p. 24). Bellamy and Williams took the critiques a stage further in International Peacekeeping (Bellamy and Williams, 2005), examining peacekeeping from a critical theory standpoint and challenging many of the overarching conceptions of peacekeeping. They offered a substantial critique of problem-solving approaches to peacekeeping operations: "By failing to question the ideological preferences of interveners… problem-solving theories are unable to evaluate the extent to which dominant peacekeeping or peacemaking practices may actually help reproduce the social structures that cause violent conflict in the first place." (Bellamy, 2004, p. 19) The authors suggested that critical approaches to peace operations would open up a new stage in how they were theorised. This critical appraisal was reflected in the CCR, most pertinently spearheaded by Professor Mike Pugh, who elaborated on this by arguing that peacekeeping operations were not neutral, but served an existing global order within which problem-solving adjustments could occur. In this framework, peacekeeping can be considered as "forms of riot control directed against the unruly parts of the world to uphold the liberal peace" (Pugh, 2004, p. 41).
Pugh furthered this work with Mandy Turner, and Neil Cooper, in Whose Peace?Critical Perspectives on the Political Economy of Peacebuilding.The collection provided an analysis of present peacebuilding strategies, separated into seven inter-related areas (liberal war and peace, trade, employment, diasporas, borderlands, civil society and governance), and argued that largely disregarded local bodies struggle against universal presumptions of a "particular liberal-capitalist order" (Pugh et al., 2008, p. 2).From the analysis, the authors found that concepts of human security had either not been followed through or had been "captured to work in the interests of global capitalism".Thus, the authors propose a less securitized life welfare approach to peacebuilding: "[…] there is a need, then, to develop a new, unsecuritised language and to contemplate a paradigm that takes local voices seriously, rejects universalism in favour of heterodoxy, reconceptualises the abstract individual as a social being and limits damage to planetary life -in short, a 'life welfare' perspective." (Pugh et al., 2008, p. 394) The authors make a strong case for the development of a life welfare perspective.The process would not so much be a prescription of resigned relativism, but more a prescription for a politics of emancipation in which the need for dialogue between heterodoxies is a core component.Whose Peace demonstrates the crucial role that the approaches of critical theory provide in deepening understanding about the role of peacekeeping and peacebuilding as a vehicle for conflict resolution.
Although these approaches have been criticised themselves for not elaborating on how suggestions for transformation can be operationalized, there are signs of policy considerations in wider transformation projects.Pugh finds a role for deployments akin to peace support operations (PSO) in a transformative framework.
He argued that PSOs would be likely to become increasingly subtle and flexible in responding to crises, providing expert teams similar to disaster relief specialists, taking preventative action, and offering economic aid and civilian protection. Pugh's article contended that this may only happen if such expert teams are released from the state-centric control system, making them "answerable to a more transparent, democratic and accountable institutional arrangement" (Pugh, 2004, p. 53). Moreover, Pugh found that such a scheme would be based on a permanent military volunteer force "recruited directly among individuals predisposed to cosmopolitan rather than patriotic values" (Pugh, 2004, p. 53).
Towards a Cosmopolitan Framework
This links to Woodhouse and Ramsbotham's 2005 article, Cosmopolitan Peacekeeping and the Globalisation of Security, where the authors examined how future peacekeeping and peacebuilding operations could work within an emancipatory framework.It posits that the framework of cosmopolitan peacekeeping is situated in conflict resolution theory and practice, engaging with peacekeeping practice in a way in which the authors believe critical theory does not (Woodhouse and Ramsbotham, 2005, p. 141).
The article noted the revival of UN peacekeeping operations as a commitment by the international community to peacekeeping as a "vital instrument in pursuing conflict resolution goals internationally" (Woodhouse and Ramsbotham, 2005, p. 142). Looking at theoretical approaches to future interventions, the authors argued for a cosmopolitan approach, "for deeper reforms, an accountable permanent rapid reaction or a standing UN force and an enhanced resolution capacity, including gender and culture-aware policy and training" (Woodhouse and Ramsbotham, 2005, p. 152). Developing such an architecture could release the potential of peacekeeping operations "as a component of a broader and emancipatory theoretical framework centred on the idea of human security" (Ramsbotham et al., 2005, p. 147).
Woodhouse followed up on this in an article with Curran (Curran and Woodhouse, 2007), which investigated the emergence of a cosmopolitan ethic in African peacekeeping through the African Union standby brigades and conflict prevention network, as well as the response to the peace operation in Sierra Leone. The authors concluded that peacekeeping in general, and African peacekeeping in particular, is seen as a: "[…] force in the making for cosmopolitan governance, characterized by an impartial, universal, democratic, cosmopolitan community which promotes human security (positive peace) over national security and state-centric interest." (Curran and Woodhouse, 2007, p. 1070) The understanding of cosmopolitan peacekeeping developed at the CCR links to the cited works of Galtung, who argued strongly against peacekeeping operations being placed in positions where they are unassumingly supporting the status quo in vertical conflicts. For peacekeeping to be effective, he argued, it must protect those who are trying to alter the status quo and remove the violent structures that are creating conflict. This is an area where critical theorists have made an important contribution: without a strong body of research into the role of peacekeeping in global politics and the global economy, peacekeeping will most likely fail to alter the status quo. Woodhouse and Ramsbotham's work on cosmopolitan peacekeeping elaborates on Galtung's 'one-way wall' concept of peacekeeping operations, but instead of protecting what Galtung termed the freedom fighter, it protects the vulnerable groups within conflict zones, who may possess the capacity for emancipatory political transformation.
Conclusions
In outlining the contribution of the Bradford School to peacekeeping research, this article has traced the critical role played by the Centre for Conflict Resolution in approaching micro-level debates over peacekeeping practice, and in linking them to wider understandings of the process of conflict resolution in post-conflict environments.
Cosmopolitan approaches propose an avenue to engage in critical appraisals of peacekeeping, but they certainly do not propose an 'end of history' with regard to peacekeeping and peacebuilding.What this article demonstrates is that conflict resolution research (if the CCR is used as a case study for other centres of its type) is sufficiently robust to effect change in the field of peacekeeping practice, and foster developments in understanding and appraisal of the practice of peacekeeping and peacebuilding.
The question as to how conflict resolution will adapt in the future is, according to Ramsbotham, Miall and Woodhouse, dependent on the ability of the field to become truly global (Ramsbotham et al., 2011). This will be facilitated by the multiple effects of the expanding role of ICT in peacekeeping, peacebuilding and conflict resolution. Firstly, ICT will allow the dissemination of information and the sharing of examples of good practice. The wide use and availability of peacekeeping training over the Internet is an example, where peacekeepers can learn about the skills necessary for peacekeeping (including many of Fetherston's contact skills) without the need to travel to a recognised training institute. Secondly, the spread of ICT and the 'shrinking' of the globe will allow information and critique to influence overarching theories of conflict resolution, by allowing greater theoretical input from practitioners, academics, and groups from areas previously untouched. Finally, the expansion of the Internet and social media has already made a wealth of information available for those engaged in the field of conflict research and encouraged transparency on the part of institutions, leading them to advocate transparency in other institutions and actors. Hopefully, the outlined processes will give the conflict resolution field greater depth and allow it to continue to engage with peacekeeping and peacebuilding.
"year": 2012,
"sha1": "dad675e9f873c480f5b119f59916eb6468bc667b",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7238/joc.v3i1.1417",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "dad675e9f873c480f5b119f59916eb6468bc667b",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Sociology"
]
} |
Thermochromic microcapsules with highly transparent shells obtained through in-situ polymerization of urea formaldehyde around thermochromic cores for smart wood coatings
In this paper, thermochromic microcapsules were synthesized by in situ polymerization with urea formaldehyde as the shell material and thermochromic compounds as the core material. The effects of the emulsifying agent and the emulsifying conditions on the surface morphology and particle size of the microcapsules were studied. It was found that the size and surface morphology of the microcapsules were strongly dependent on the stirring rate and the ratio of core to shell. Stable, small, spherical microcapsules with excellent transparency can be obtained at an emulsifying agent to core to shell ratio of 1:5:7.5 under mechanical stirring at 12 krpm for 15 min. Finally, the thermochromic property was examined by loading the microcapsules into wood and wood coatings. The results indicate that the microcapsules retain their thermochromic property when incorporated into wood and coatings, and could have high potential in smart material fabrication.
The use of wood materials indoors helps to improve the energy efficiency of buildings; additionally, it can help to improve human thermal comfort and physical and mental wellbeing 1 . With the rapid development of novel materials, much attention is being paid to functional and smart materials that improve the energy efficiency of buildings [2][3][4][5] . A thermochromic material is a kind of intelligent material that undergoes a series of color transitions within a specified temperature range. Introducing thermochromic pigments into wood materials helps to improve the seasonal visual effect of wood. Moreover, it provides a new solution to energy consumption in interior buildings 6 .
Liu et al. first introduced thermochromic materials into wood products 7,8 . They colored poplar veneers by ultrasonic impregnation using crystal violet lactone, bisphenol A, 1-tetradecanol and sodium thiosulfate. Jiang et al. selected three kinds of aliphatic alcohols as solvents to prepare thermochromic wood, and found that the color-change temperature depended on the melting point of the solvent 9,10 . Fu et al. improved the light fastness of thermochromic wood by adding an ultraviolet absorber 11 . These studies adopted impregnation treatments for thermochromic wood fabrication, which consume a large amount of raw material; moreover, most of the thermochromic agents contain phenols and alcohols, and these unstable components can be affected by the environment, with an adverse effect on environmental adaptability. In addition, the loss of thermochromic compounds during wood fabrication limits the scope of application.
Microencapsulation technology is an efficient method to solve these problems. Microcapsules have been used in many fields, such as medicine, textiles, food, adhesives, and building concrete [12][13][14][15][16][17][18] , but have only recently been introduced into wood materials. Xu et al. prepared reversible thermochromic wood plastic composites with thermochromic microcapsules 19 . Hu et al. investigated the color-changing behavior of medium density fiberboard (MDF) surface-coated with thermochromic microcapsules 20,21 . In these studies, thermochromic wood materials were fabricated with microcapsule coatings to achieve the thermochromic function. As is well known, reactive and functional thermochromic core materials have been applied in some frontier fields, in which functional microcapsules play an important role in the thermochromic properties. However, systematic investigation of the microencapsulation of thermochromic pigments and their application in wood materials is still limited.
In this study, thermochromic compound microcapsules were prepared via an in situ polymerization method, and the optimal emulsifying conditions for preparing thermochromic microcapsules for application in wood materials were explored. In addition, the thermochromic performance of the microcapsules loaded in wood and wood coatings is also discussed.
Results and Discussion
Determination of emulsifying agent. Table 1 describes the features of the five emulsification systems. The hydrophile-lipophile balance (HLB) values of sodium dodecyl benzene sulfonate (SDBS) and alkyl phenyl polyoxyethylene ether (OP-10) are lower than that of 1-tetradecanol, the solvent used for thermochromic compound synthesis, and they presented a poor emulsification effect. The linkage between OP-10 ether bonds and water molecules was unstable; as the temperature rose, the linkage broke down, the hydrophilicity of OP-10 decreased, and an unstable emulsion system formed. As an anionic emulsifying agent, sodium dodecyl sulfate (SDS) showed strong emulsifying ability, which was beneficial to the condensation polymerization of the prepolymer on the core material. However, a large number of bubbles were generated during the emulsification procedure, which had an adverse effect on the encapsulation and thermochromic property of the core material. Polysorbate 80 (Tween-80) presented a good emulsification effect, but its poor affinity with the UF prepolymer caused the shell material to self-assemble, which prevented the encapsulation of the thermochromic core materials and the formation of microcapsules. Acacia presented an intermediate emulsification effect. Acacia is composed of the simple sugars galactose, arabinose and rhamnose, glucuronic acids, and a protein component 22 . The protein-rich, high-molecular-mass component adsorbs preferentially onto the surface of the oil phase, and the carbohydrate blocks inhibit flocculation and coalescence through electrostatic and steric repulsions, which is beneficial to the formation of a smooth and firm shell. In addition, as acacia is negatively charged when the solution pH is greater than 2.2, it can adsorb the positively charged prepolymers to form the microcapsules. Figure 1 shows the morphology of microcapsules prepared with the acacia emulsifying agent.
The stability of the core droplets in the emulsifying system affects the microcapsule morphology and particle size during the encapsulation procedure. As Fig. 1 shows, the microcapsules and core material presented a spherical shape, which indicates the good dispersion and emulsifying effect of acacia. The magnified SEM image of the thermochromic microcapsules displays the surface of the UF shell material and the broken shell-core structure. The UF prepolymer was dissolved in water and then formed UF polymers through a polycondensation reaction as the pH value of the solution was adjusted to acidic 23,24 . The reaction of the UF prepolymer at the thermochromic compound interface formed the capsule shell, and the surface of the microcapsules gradually became coarse and covered with granular deposits as the reaction went on. The rough surface resulted from the deposition and agglomeration of UF polymer. The shell wall of the microcapsules was transparent, which is beneficial to the thermochromic property. It was also observed that the color of the microcapsules differed from that of the thermochromic compounds: the encapsulation of the thermochromic compounds caused light refraction at the UF shell and produced a bright color in visible light (Fig. 2).
Effect of fabrication conditions on microcapsules.
Manufacturing thermochromic microcapsules with a uniform size, a smooth, non-porous wall surface morphology and good thermochromic properties depends on the emulsion conditions 25,26 . The effects of stirring rate, emulsifying time, ratio of emulsifying agent to core, and ratio of core to shell on microcapsule particle size and morphology were investigated in this study. Table 2 shows the orthogonal experiment results for the thermochromic microcapsules.
Range analysis and ANOVA were used to determine the optimal conditions and to evaluate the significance of the emulsifying factors on mean particle size at the α = 0.05 level (Table 3 and Fig. 3). They indicated that the stirring rate and the ratio of core to shell had a significant effect on the average particle size of the thermochromic microcapsules.
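As a brief illustration of how range analysis on an orthogonal design works, the sketch below averages the response (mean particle size) at each level of each factor and takes the range R of those level means; a larger R suggests a stronger influence of that factor. The factor levels and particle sizes used here are hypothetical placeholders and do not reproduce the data in Table 2.

```python
# Minimal sketch of range analysis for an L9(3^4) orthogonal experiment.
# The runs below are hypothetical and are not the values reported in Table 2.
from collections import defaultdict

factors = ["emulsifier:core", "stirring rate", "emulsifying time", "core:shell"]
# Each run: (level index of the four factors, measured mean particle size in um)
runs = [
    ((1, 1, 1, 1), 62.1), ((1, 2, 2, 2), 55.4), ((1, 3, 3, 3), 58.9),
    ((2, 1, 2, 3), 70.2), ((2, 2, 3, 1), 54.5), ((2, 3, 1, 2), 57.3),
    ((3, 1, 3, 2), 68.7), ((3, 2, 1, 3), 66.0), ((3, 3, 2, 1), 52.8),
]

for i, name in enumerate(factors):
    by_level = defaultdict(list)
    for levels, size in runs:
        by_level[levels[i]].append(size)          # group responses by factor level
    level_means = {lvl: sum(v) / len(v) for lvl, v in by_level.items()}
    R = max(level_means.values()) - min(level_means.values())
    print(f"{name}: level means {level_means}, range R = {R:.2f} um")
```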
Table 1. Features of the emulsification systems and of the thermochromic microcapsules prepared with different emulsifying agents:
Acacia: intermediate emulsification effect; a small number of microcapsules with large particle size formed; core materials maintain their thermochromic property.
SDBS: poor emulsification effect; unstable emulsion system.
OP-10: poor emulsification effect; unstable emulsion system.
Tween-80: good emulsification effect; few microcapsules formed.
SDS: excellent emulsification effect; many microcapsules formed, but with serious adhesion, an ash-green appearance, and loss of the core materials' thermochromic property.
Previous work showed that the size of microcapsules can be controlled in three stages: emulsification, pH adjustment and curing 27 . The range analysis of the effects of fabrication conditions on microcapsule size, with standard errors, is shown in Fig. 3. As illustrated in Fig. 3A and Table 3, the emulsifying agent dosage showed no significant effect on the mean microcapsule size. The microcapsules prepared with an emulsifying agent to core ratio of 1:5 showed the smallest mean particle size, 54.48 μm. A small dosage of emulsifying agent (a ratio of emulsifying agent to core of 1:6) led to insufficient dispersion of the mixture and a large micelle particle size, while a large amount of emulsifying agent (a ratio of 1:4) increased the viscosity of the reaction mixture, which suppressed the redispersion of the emulsion droplets and resulted in a small microcapsule size distribution. Figure 3B presents the trend of microcapsule size with stirring rate. When the stirring rate increased, smaller microcapsules were obtained: a higher emulsification speed increases the shear stress on the droplets, so the droplets are dispersed to smaller diameters, and a higher stirring rate also reduces the aggregation of curing UF prepolymer deposited on the core surface, which helps to form thermochromic microcapsules with small particle sizes. As shown in Fig. 3C, the microcapsule size changed with emulsifying time, and the microcapsules could take any shape and showed a wide particle size distribution depending on the emulsifying time. Multi-core microcapsules were observed as the emulsifying time increased to 25 min (Fig. 3). This phenomenon can be attributed to the good dispersion of the core material: the small and uniform core droplets were encapsulated by UF polymer, and the excess UF prepolymer cured on the microcapsule surfaces, so large, rough, multi-core microcapsules were formed. At a low ratio of shell wall material, only a small amount of UF prepolymer was cured and deposited on the core surface, and a stable core-shell structure could not form. When the ratio of core to shell increased, numerous colloidal UF polymer particles were enriched and deposited on the micelle surface, and stable microcapsules with large sizes were obtained. An excessively cured UF polymer shell resulted in a rough surface morphology, as Fig. 4 shows. The curing and deposition of UF polymer strengthened the mechanical properties of the shell wall, but the optical performance became worse.
Table 3. ANOVA results for average thermochromic particle size.
To obtain stable microcapsules with optimal surface morphology and minimum size, the microencapsulation of the thermochromic compounds should be carried out at an emulsifying agent to core to shell ratio of 1:5:7.5 under mechanical stirring at 12 krpm for 15 min.
Properties of thermochromic wood materials. The incorporation of microcapsules into a wood varnish system is a simple mechanical stirring operation. In order to assess the effect of the varnish on the thermochromic properties of the microcapsules, colorimetric parameters of wood veneers treated with an aqueous microcapsule solution were also recorded. Figure 5 shows the color difference values of wood veneers treated with the aqueous microcapsule solution (MIV) and with the microcapsule varnish coating (MCV). Comparing the color change values between 0 °C and 70 °C, MCV showed higher ΔE values. Both MIV and MCV displayed a brown hue, the lightness of all thermochromic wood veneers decreased, and the color of MCV was darker than that of MIV. The decrease in lightness was attributed to the increase in coating thickness and the decrease in surface roughness, which reduced the diffuse reflection from the surface 28 . Comparing the color characteristics of the control veneer, MIV and MCV, the redness and greenness chromaticity of MIV declined, while the blueness and yellowness chromaticity of MIV and MCV went up. This indicates that MIV exhibited a greener and yellower color, whereas MCV exhibited a redder and yellower color, than the control wood veneers. This was associated with the waterborne varnish, which contains red and yellow impurities. The dependence of the color characteristics of the thermochromic wood materials on temperature and heating procedure is illustrated in Fig. 6. T1 and T2 denote the initial and final achromatic temperatures during the decolourization procedure. At temperatures below 31 °C, the color parameters (L, a, b) rarely changed, and the color change value ΔE tended to be stable. Decolourization occurred between 31 and 37 °C; combined with Fig. 6, the color parameters (L, a, b) increased significantly in this range. At temperatures above 37 °C, the decolourization slowed down. It is known from the literature that reversible thermochromic change occurs via two competing reactions 29 . At low temperature, the solvent exists in its solid form in the leuco dye-developer-solvent system; as the temperature increases, the solvent melts and the leuco dye-developer system converts to the colorless state.
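For reference, the color change values reported here are CIELAB color differences. A minimal sketch of the calculation is given below, assuming the common CIE76 formula ΔE = sqrt(ΔL² + Δa² + Δb²); the Lab readings in the example are placeholders, not measurements from this study.

```python
# Sketch of how a reported color change could be computed from colorimeter
# readings, assuming the CIE76 colour-difference formula
# dE = sqrt(dL^2 + da^2 + db^2). The Lab values below are placeholders.
import math

def delta_e(lab_ref, lab_sample):
    dL = lab_sample[0] - lab_ref[0]
    da = lab_sample[1] - lab_ref[1]
    db = lab_sample[2] - lab_ref[2]
    return math.sqrt(dL**2 + da**2 + db**2)

lab_cool = (62.0, 8.5, 14.0)   # hypothetical (L, a, b) of coated veneer in the coloured state
lab_warm = (74.0, 3.0, 18.5)   # hypothetical (L, a, b) after heating past the transition
print(f"dE (cool -> warm) = {delta_e(lab_cool, lab_warm):.1f}")
```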
During the chromatic (recoloring) procedure, at temperatures above 34 °C the color parameters a and b rarely changed, while the ΔL and ΔE values increased slightly as the temperature dropped from 36 to 34 °C. T3 and T4 denote the initial and final chromatic temperatures during the reverse action. As the temperature fell from 34 to 26 °C, the color parameters decreased significantly and the ΔE values grew; the system regained its color. Based on this color change characteristic, it can be deduced that the initial and final chromic temperatures were 34 °C and 26 °C during the reverse action. A perfect reversible process should return to the same color after cooling; as can be seen from the graph, however, a color hysteresis occurs between the heated and cooled states. This phenomenon was also found in previous studies 30 .
The reversible stability of the thermochromic property was evaluated according to the change in colorimetric parameters over repeated heat-cool loops. Figure 7 shows the colorimetric parameter values of MIV and MCV over 30 heat-cool loops.
It can be seen from Fig. 7 that, for the MCV samples, the Δa values rarely changed after 30 heat-cool loops, while the Δb, ΔL and ΔE values fluctuated only slightly during the cycles. However, the color difference values of MIV fluctuated dramatically during the 30 heat-cool loops; this might be associated with the unstable connection between the microcapsules and the wood surface in the absence of a varnish binder.
Conclusion
Microencapsulation technology has good potential for use in textiles, films, fluorescence indication, and electronics [31][32][33] . The purpose of this study was to investigate the thermochromic properties of wood materials fabricated with thermochromic microcapsules. The thermochromic core material was a mixture of ODB-2, bisphenol A and 1-tetradecanol, and the shell wall was urea formaldehyde resin. The thermochromic compounds could be encapsulated by UF polymer using in-situ polymerization. Microcapsules prepared in the presence of the acacia emulsifying agent presented a stable and excellent spherical shape, and their core-shell structure was verified by SEM. The optimal thermochromic microcapsules can be prepared at an emulsifying agent to core to shell ratio of 1:5:7.5 under mechanical stirring at 12 krpm for 15 min. Finally, wood veneers were treated with the thermochromic microcapsules. All of the wood veneer samples treated with the aqueous microcapsule solution and the microcapsule coatings exhibited good thermochromic properties. The color difference results confirmed that the thermochromic microcapsules decreased the lightness of the treated wood, that the blueness and yellowness chromaticity went up, and that the ΔE values changed remarkably. The color changed within a temperature range of 31-37 °C during heating, and the reverse action occurred within the temperature range of 34-26 °C. Color hysteresis was found during the heat-cool cycles. It was also found that the thermochromic wood coatings had good thermal stability. Therefore, there is great potential for application in wood materials.
Materials and Methods
Materials. Poplar veneers with an area of 150 × 110 mm and a thickness of 2.82 mm were supplied by Taoshan Corporation (Harbin, China); their average moisture content and density were 7% and 0.395 g/cm3, respectively.
Methods. Preparation of thermochromic compounds. Thermochromic compounds were synthesized by mixing ODB-2, bisphenol A and 1-tetradecanol in a ratio of 1:2:60 and heating the mixture in a four-neck boiling flask in a 70 °C water bath. The components were stirred with a Teflon paddle at 600 rpm for 1 hour. The reversible thermochromic compounds were obtained after natural cooling. The experimental process is shown in Fig. 8a.
Synthesis of urea formaldehyde (UF) prepolymer.
Aqueous formaldehyde solution and urea were added into a three-necked boiling flask in a molar ratio of 1.7:1. The pH of the solution was adjusted to 7-9 with 2.5 wt.% NaOH solution. The mixture was heated to 75 °C over 35 min. After reacting for 1 hour with stirring at 600 rpm, the UF prepolymer was obtained (Fig. 8b).
Fabrication of thermochromic microcapsules. The thermochromic microcapsules were prepared by an in situ polymerization procedure. A certain amount of thermochromic compounds was melted and mixed with emulsifying agent solution in a 55 °C water bath under homogenization shearing. Gum arabic, sodium dodecylbenzenesulphonate, alkyl phenyl polyoxyethylene ether, polyoxyethylenesorbitan monooleate and sodium lauryl sulfate were investigated as emulsifying agents to achieve the optimal emulsifying effect. Through vigorous agitation, stable micelles were obtained. The UF prepolymer was added into the oil micelle mixture, which consisted of emulsifying agents and thermochromic compounds, at 35 °C and stirred at 600 rpm. The pH value of the O/W emulsion was kept between 2.5 and 3 by adding citric acid for 1 h. After pH adjustment, a few drops of sodium chloride solution were added into the mixture and stirred at 400 rpm. The temperature of the water bath was slowly raised to 65 °C and maintained for 40 min to complete the reaction. The synthesized microcapsule suspension was cooled down, filtered and air dried (Fig. 8c).
Fabrication of thermochromic wood materials. Thermochromic wood materials were fabricated by surface finishing with microcapsule varnish. The thermochromic microcapsules were added into the waterborne varnish at a ratio of 20 wt.% and stirred at 500 rpm for 20 min. The wood veneers were finished with one layer of primer and two layers of top coat. Modified varnish was applied on the wood surface with a short-haired brush at a spread rate of 80 g per m². The films were left for 7 days at room temperature for complete curing.
Experiment design. Emulsifying agent. Emulsification is the dispersion of a liquid in an insoluble liquid as tiny droplets, an interfacial phenomenon occurring between two insoluble liquids. An emulsifier is a surfactant with hydrophilic and lipophilic groups, which forms an adsorption layer on the interface to reduce the oil-water interfacial tension 34 . The appropriate emulsifier plays an important role in obtaining well-dispersed core materials and a stable emulsion. The hydrophilic-lipophilic balance (HLB) value is an important parameter for choosing an emulsifying agent. For a system using surfactants with HLB values between 8 and 16, O/W emulsification is obtained. Table 4 presents the HLB values of the emulsifying agents used in this study 35 .
The dosage of emulsifying agent was 15.0 wt.% of the core materials; the agents were dissolved in a certain amount of deionized water and mixed with the thermochromic core materials in a 55 °C water bath under homogenization shearing at 3000 rpm for 5 min. After standing for a few minutes, UF prepolymer was added for microcapsule preparation. The appropriate emulsifying agent was determined according to optical microscope and SEM analysis of the microcapsule morphology.
Orthogonal experiments design of fabrication conditions. Microcapsules fabrication conditions affect the sizes of thermochromic particles and the stability in continuous phase. In this study, orthogonal experiment design was adopted to select the optimal condition for microcapsules fabrication. Table 5 lists the factors and levels. The average particle size was calculated to evaluate the emulsification degree.
Property measurements and analysis. Characterization of microcapsules. The morphology of microcapsules was analyzed using KSZ-4GA optical microscopy (Xiwan, China) at 40× and 100× magnifications. Three hundred particles were selected randomly from optical microscopic photographs and their diameters were measured to calculate the average diameters. Scanning electron microscopy (SEM) (JSM-7500F, Japan) was used to examine the external morphology of microcapsules at 12.5 kV.
Analysis of thermochromic properties. The thermochromic veneer samples were conditioned in a climate cabinet at 50% RH to equilibrium, with the temperature set between 0 and 70 °C. The colorimetric parameters of the wood surface were measured according to the CIELab system. The average color parameters, including L (lightness index), a (red-green index) and b (yellow-blue index), of the wood veneer surface were measured with an NP10QC chroma meter (3NH, Inc., China).
The color difference value ΔE was calculated with the standard CIELab equation ΔE = [(ΔL)² + (Δa)² + (Δb)²]^(1/2). | 2018-04-03T01:01:12.867Z | 2018-03-05T00:00:00.000 | {
"year": 2018,
"sha1": "494bfe431108dfb4dc12763c155673886138fb18",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-018-22445-z.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "494bfe431108dfb4dc12763c155673886138fb18",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
6985975 | pes2o/s2orc | v3-fos-license | Metabolic syndrome in infertile women with polycystic ovarian syndrome
ABSTRACT Objective : The aim of the present study was to determine the prevalence of metabolic syndrome (MS) in infertile Iranian women with polycystic ovary syndrome (PCOS) using the ATPIII criteria. Subjects and methods : In this cross-sectional study, 624 women with PCOS were enrolled at a tertiary referral center in Tehran, Iran, between April, 2012 and March, 2013. Diagnosis of MS was according to ATPIII criteria. Also, we divided PCOS patients into the following two main groups: (i) with MS (n = 123) and (ii) without MS (n = 501), and then compared variables between the two groups. Results : The mean age, body mass index (BMI) and waist circumference were 28.6 ± 4.3 years, 26.7 ± 3.7 kg/m² and 85.2 ± 8.7 cm, respectively. The prevalence of MS was 19.7%. Our findings showed that age, BMI, waist circumference and all metabolic parameters were higher in PCOS women with MS than the related values in those without MS. The most and least prevalent components of MS were a low level of high density lipoprotein-cholesterol (HDL-C) and hypertension, respectively. Conclusion : It seems the prevalence of metabolic syndrome in our country is not as high as in Western countries. The prevalence rate of MS increased with age and BMI. One of the major cardiovascular risk factors, a low level of HDL-C, is the most prevalent metabolic abnormality in our participants.
INTRODUCTION
Polycystic ovary syndrome (PCOS) is one of the most common reproductive endocrinological disorders in women, affecting about 15% of women in the general population according to the Rotterdam criteria (1). Insulin resistance (IR) plays an important role in the pathophysiology of PCOS (2,3). Evidence has shown that IR and compensatory hyperinsulinemia also play central roles in the evolution of metabolic syndrome (MS) (4)(5)(6)(7)(8). MS is a group of risk factors that identify individuals at increased risk for type 2 diabetes mellitus and atherosclerosis (9,10). These risk factors include central obesity, hypertriglyceridemia, low levels of high-density lipoprotein (HDL) cholesterol, elevated blood pressure and elevated fasting plasma glucose levels (10). Many of the metabolic abnormalities of PCOS patients overlap with components of MS. The prevalence rates of MS in PCOS women vary among different countries and ethnicities as follows: 43-46% in America (11,12), 37.9% in India (13), 35.3% in Thailand (14), 28.4% in Brazil (15), 16.8% in China (16), 14.5% in Korea (17), 11.6% in Turkey (18) and 8.2% in Southern Italy (19). These differences in the prevalence rates of MS in PCOS patients in different countries may depend on several factors, such as the age, BMI, and race of the patients, as well as different approaches to defining MS and PCOS.
Considering the various reports about the prevalence of MS among PCOS patients in different countries and the lack of evidence describing the prevalence of MS in PCOS patients in Iran, we sought to address this gap. The aim of the present study was to determine the prevalence of MS and its components in Iranian infertile women with PCOS using the ATPIII criteria.
SUBJECTS AND METHODS
This cross-sectional study was conducted between April, 2012 and March, 2013, and was approved by the Institutional Review Board and the Ethical Committee of Royan Institute Research Center according to the Helsinki Declaration. Informed consent was also signed by all participants.
Patients
Women with PCOS attending the tertiary referral Infertility Clinic of Royan Institute, Tehran, Iran, were enrolled in this study. Diagnosis of PCOS was based on the Rotterdam criteria (20). Women who were aged > 40 and who had used contraceptive drugs within 3 months prior to the study were excluded from the study. Based on appropriate clinical and/or laboratory tests, other causes of hyperandrogenism, such as 21-hydroxylase deficiency, Cushing's syndrome, androgen-secreting tumors, hypothyroidism, and hyperprolactinemia, were also excluded. We then divided the PCOS patients into the following two main groups: (i) with MS (n = 123) and (ii) without MS (n = 501).
Procedure
National Cholesterol Education Program Adult Treatment Panel (NCEP ATP III) criteria were used for the diagnosis of MS; therefore, we considered the presence of three or more of the following abnormalities to confirm a diagnosis: waist circumference ≥ 88 cm, fasting glucose ≥ 100 mg/dL, fasting serum triglycerides ≥ 150 mg/dL, serum HDL-C < 50 mg/dL, and blood pressure ≥ 130/85 mmHg (21).
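For illustration, the "three or more of five" ATP III rule described above can be expressed as a small check; the function name and example values below are ours, not from the study.

```python
# Minimal sketch of the NCEP ATP III rule used above: MS is diagnosed
# when at least three of the five criteria are met.
def has_metabolic_syndrome(waist_cm, fasting_glucose_mg_dl, triglycerides_mg_dl,
                           hdl_c_mg_dl, systolic_mmhg, diastolic_mmhg):
    criteria = [
        waist_cm >= 88,                                 # central obesity
        fasting_glucose_mg_dl >= 100,                   # elevated fasting glucose
        triglycerides_mg_dl >= 150,                     # hypertriglyceridemia
        hdl_c_mg_dl < 50,                               # low HDL cholesterol
        systolic_mmhg >= 130 or diastolic_mmhg >= 85,   # elevated blood pressure
    ]
    return sum(criteria) >= 3

# Example: waist, glucose and HDL criteria met (3 of 5) -> True
print(has_metabolic_syndrome(90, 105, 120, 45, 118, 76))
```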
Blood pressure was measured on the right arm of the women in a sitting position after 15 minutes of rest using a manual mercury sphygmomanometer. Hirsutism was defined by the presence of excessive body hair in an androgen-dependent pattern, using a Ferriman and Gallwey score > 8 (22). Oligomenorrhea was defined as the presence of three or more cycles of > 35 days in the previous 6 months, and amenorrhea referred to the absence of vaginal bleeding for 3 months. Hypermenorrhea was defined as vaginal bleeding occurring at an interval of less than 21 days. Vaginal ultrasonography was performed by expert gynecologists on the third day of the menstrual cycle for each patient.
Blood samples were drawn after a 12-hour overnight fast on the second or third day of a spontaneous or progesterone-induced menstrual cycle. Subsequently, luteinizing hormone (LH), follicle-stimulating hormone (FSH), free testosterone, dehydroepiandrosterone sulfate (DHEA-S), 17OH progesterone, triglycerides, total cholesterol, low density lipoprotein (LDL) cholesterol, high density lipoprotein (HDL) cholesterol, fasting blood glucose and insulin, and 2-hour blood glucose (after taking 75 grams of oral glucose) were measured at the laboratory department of the Royan Institute. Non-HDL cholesterol was calculated by subtracting HDL cholesterol from total cholesterol. The states of glucose tolerance were classified according to the World Health Organization (WHO) and American Diabetes Association (ADA) (23) as follows: (i) impaired fasting glucose (IFG) (n = 54), when fasting plasma glucose was ≥ 100 mg/dL and < 126 mg/dL; (ii) impaired glucose tolerance (IGT) (n = 39), when plasma glucose 120 minutes after taking 75 g of anhydrous glucose was ≥ 140 mg/dL and < 200 mg/dL; and (iii) diabetes (n = 11), when either fasting plasma glucose was ≥ 126 mg/dL and/or plasma glucose 120 minutes after the glucose load was ≥ 200 mg/dL.
Sample size was calculated for estimation of the prevalence of MS in PCOS women attending our infertility clinic. Data were analyzed with a statistical software package (SPSS) version 20 (SPSS, Inc., Chicago, IL, USA). Descriptive statistics, mean ± standard deviation (SD) and frequency (%), were used to describe the characteristics of the participants. Student's t-test was used to compare continuous variables between groups. For comparing categorical variables, the χ² test was used. A p-value of < 0.05 was considered statistically significant.
RESULTS
A total of 624 women were enrolled in the study. The mean age and BMI were 28.6 ± 4.3 years and 26.7 ± 3.7 kg/m², respectively. Obesity was seen in 114 (18.3%) women with PCOS, while 295 (47.3%) were overweight. According to the results of glucose metabolism, 80.1% had normal glucose metabolism, 8.7% had IFG, 6.3% had IGT, 3.2% had combined IFG and IGT, and 1.8% had diabetes mellitus. The prevalence rates of menstrual irregularities in our patients were: oligomenorrhea (64.7%), amenorrhea (25.3%), eumenorrhea (8.4%), polymenorrhea (0.8%) and mixed pattern (0.8%). In addition, a hirsutism score > 8, acne and male pattern balding were seen in 30.1%, 22.1% and 8.3% of patients, respectively. The overall prevalence of MS was 19.7%. Also, 37.7% and 28.2% of patients showed one and two criteria for MS, respectively. Clinical and biochemical characteristics of the participants are summarized in Table 1, while Table 2 demonstrates the prevalence of the metabolic syndrome according to different age and body mass index groups. The prevalence rates of the different components of MS are shown in Table 3. Our results indicated that among PCOS patients with MS, the most prevalent MS components were a low level of HDL cholesterol (92.8%) followed by increased WC (82.9%), whereas the least prevalent was high blood pressure (8.1%) (Table 3).
DISCUSSION
This study showed that the prevalence of MS in Iranian infertile PCOS patients was 19.7% according to the ATPIII and Rotterdam criteria. Also, our results show that this prevalence increased with age and BMI. This prevalence is lower than the related values in many American and Asian reports. For example, in several studies in the US, the prevalence of MS was 33.4-46% according to NCEP-ATP III criteria (12,25). In India and Brazil, these prevalence rates were 37.9% and 28.4% (13,17), respectively. In contrast, in several European countries, the prevalence rates of MS in PCOS women are lower than our results, such as 11.6% in Turkey (18), 8.2% in southern Italy according to ATPIII criteria (19) and 1.6% in the Czech Republic (26). It seems that one of the important causes of the discrepancy in the prevalence of MS in women with PCOS is the use of different criteria for the diagnosis of PCOS and MS. For example, Bhattacharya showed that in Indian PCOS women, MS was found in 47.5% and 37.9% of cases according to IDF criteria and ATP III criteria, respectively (13). Also, based on WHO criteria and ATP-III criteria, Carmina and cols. found that the prevalence rates of MS were 16% and 8.2%, respectively, in Italian PCOS women (19). In addition to different criteria for the diagnosis of PCOS and MS, the characteristics of the population studied, such as race, age, BMI, and different dietary habits and lifestyles in different countries, play important roles in the different prevalence rates of MS in PCOS women. Another important risk factor for MS in PCOS patients and also in general populations is advanced age (27)(28)(29). In our study, the mean age of the subjects was 28.6 ± 4.3 years. Vural and cols. found that in Turkish PCOS women - Turkey being a country with a geographical environment and eating habits similar to our country - the prevalence rate of MS was lower than in our study. This difference may be due to the lower average age of the participants in their study (21.4 ± 1.8 years) (18).
The prevalence rate of MS in our study was higher in the upper age groups [9.5% (< 25 years) vs. 34.5% (35-40 years)]. According to the results of the Third National Health and Nutrition Examination Survey (NHANES) for the US population, the prevalence of MS increases with advancing age, reaching peak levels in the seventh decade for women (29). It seems that the increase in the prevalence of MS with advancing age is related to the increased prevalence of overweight and obesity. Soares and cols. (15) reported that in Brazilian women with PCOS, the prevalence of MS increased with advancing age. The cause of age-related insulin resistance, however, remains unknown; Boden and cols. (30) found that at least part of the insulin resistance in aging may be due to age-related changes in body composition rather than age itself.
Obesity has a key role in the evolution of MS. Our study showed that in the upper BMI groups, the prevalence of MS was higher [3.2% (BMI < 25) vs. 46% (BMI ≥ 30)]. The association between the prevalence of MS and BMI has been shown in the normal population (29) and in PCOS patients (15,25). Ehrmann and cols. showed that PCOS women in the highest quartile of BMI had a nearly 14-fold increased chance of having MS compared with women in the lowest quartile of BMI (25).
Insulin resistance and compensatory hyperinsulinemia are key pathogenetic factors in MS, but insulin levels per se are not used for the diagnosis of MS. We found that fasting insulin levels in PCOS patients with MS were significantly higher than in PCOS patients without MS. In agreement with this finding, Ehrmann and cols. showed a significant increasing trend in the proportion of women with MS in relation to the fasting insulin concentration; the prevalence of MS from the lowest to the highest quartile of fasting insulin was 12.1, 25.3, 38.5, and 58.2%, respectively. This trend remained significant even after adjusting for BMI (25). According to Ehrmann and cols.' study, the chance of having MS in the highest quartile of fasting insulin was 5-fold greater (95% CI = 2.1-11.8) than in the lowest quartile after adjustment for the effect of body weight (25). Also, our findings showed that one of the important IR indices, HOMA-IR, was significantly higher in the PCOS with MS group. In agreement with these findings, several studies show a statistically significant increase in fasting insulin level and in HOMA-IR in PCOS patients with MS compared with those without MS (25,31,32).
Our results show that the most prevalent MS components in PCOS patients were a low level of HDL cholesterol (71.5%) followed by increased WC (34.6%). However, the most prevalent MS components in PCOS patients differed among previous studies. In agreement with our results, Soares and cols. (15) showed that the most prevalent MS components were an HDL-C level < 50 mg/dL in 69.6% followed by a WC ≥ 88 cm in 57.9%.
Marcondes and cols.' study showed that the best predictors of MS were a WC > 88 cm, HDL-C < 50 mg/dL and triglycerides ≥ 150 mg/dL (31). In Espinós-Gómez and cols.' study, WC, low HDL-C and high triglyceride concentrations had a valid association for selecting PCOS patients as good candidates for routine metabolic screening (32).
In conclusion, it seems that the prevalence of MS in our country is not as high as in Western countries. The prevalence increases significantly with age and BMI. The most prevalent metabolic abnormality is a low HDL-C level.
Table 1. Anthropometric, hormonal, metabolic and sonographic characteristics of PCOS women with and without MS. PCOS: polycystic ovary syndrome; MS: metabolic syndrome; BMI: body mass index; WHR: waist-to-hip ratio; LH: luteinizing hormone; FSH: follicle stimulating hormone; DHEA-S: dehydroepiandrosterone sulfate; FG: fasting glucose; TG: triglycerides; LDL-C: low density lipoprotein cholesterol; HDL-C: high density lipoprotein cholesterol; HOMA-IR: Homeostasis Model Assessment-Insulin Resistance. Data are presented as mean ± standard deviation or number (%). *Comparison is performed between PCOS women with and without MS by Student's t-test and Chi-square test.
Table 2. Prevalence of the metabolic syndrome according to different age and body mass index groups. MS: metabolic syndrome; BMI: body mass index. | 2017-10-22T02:17:28.887Z | 2016-02-11T00:00:00.000 | {
"year": 2016,
"sha1": "ff4e2be17f588d5a561797ab4351279fef62cd32",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/aem/v60n3/2359-3997-aem-2359-3997000000135.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b17ecd3bdbbc99b8b87296f06495199e631f6e91",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
251973462 | pes2o/s2orc | v3-fos-license | EVALUATION OF THE ANTIBACTERIAL ACTIVITY OF ESSENTIAL OILS AGAINST E. COLI ISOLATED FROM RABBITS
The antibacterial activity of essential oils extracted from Origanum compactum, Thymus capitatus, Foeniculum vulgare, and Rosmarinus officinalis was assessed with the well diffusion method and a microbroth dilution assay against E. coli isolated from the carcasses of rabbits. The chemical composition of these essential oils was also determined by gas chromatography coupled with mass spectrometry (GC/MS). The results of this study indicate that essential oils with a high phenol content exert a strong antibacterial activity against E. coli. The essential oils of Origanum compactum and Thymus capitatus, containing high amounts of the monoterpenoid phenols thymol and carvacrol (68.99% and 95.25% carvacrol composition, respectively), were particularly effective against E. coli, with low values of MIC = 0.3125% v/v and MBC = 0.625% v/v. The essential oil of Foeniculum vulgare also possessed moderate antibacterial activity (MIC = 50% v/v) with a non-bactericidal effect, while the essential oil of Rosmarinus officinalis was ineffective at the concentrations tested.
INTRODUCTION
Rabbit colibacillosis is an infectious disease of rabbits caused by infection with pathogenic Escherichia coli. This disease affects animals that have just been weaned, resulting in severe diarrhea after infection, including some cases that exhibit phases of mucoid diarrhea (24). Death can sometimes precede these symptoms. Antibiotics are often employed by rabbit breeders to treat and prevent the disease; however, this approach can be problematic for human health. While antibiotics are capable of eliminating susceptible bacteria, this can have the unintended effect of promoting the proliferation of resistant bacterial strains and lead to a greater occurrence of infections that are impossible to treat with conventional antibiotics (36). Each year, antibiotic-resistant strains of bacteria cause the death of 25,000 people in Europe and 19,000 in the United States (7). The essential oils of plants have been traditionally used for medicinal purposes and have proven their efficacy in several areas, which suggests a promising alternative to the routine use of antibiotics in animal agriculture. In order to better understand the ability of plants to control infections in rabbit breeding, the essential oils of four medicinal and aromatic plants recognized for their digestive and antispasmodic properties (2, 9, 12, 23) were tested for their bacteriostatic and bactericidal qualities against E. coli isolated from the carcasses of dead rabbits. The plants featured in this study include: Foeniculum vulgare, Origanum compactum, Rosmarinus officinalis, and Thymus capitatus. Essential oils were extracted from the seeds and leaves of Foeniculum vulgare independently.
MATERIALS AND METHODS
Plant and essential oil
A large quantity of seeds of the plant Foeniculum vulgare was purchased, while leaves of the plant were collected, cleaned and then dried away from light at room temperature.
Specimens of Origanum compactum, Rosmarinus officinalis, and Thymus capitatus were collected in northern Morocco during the month of June. Species identification was done by Professor Bakkali, a specialist in botany, in the laboratory of ERGB. The essential oils (EO) of each plant were extracted via steam distillation for 3 hours using a Clevenger-type apparatus. Essential oil yield was determined as a percentage of the weight of the dry plant matter. Chemical analysis of the essential oils was done by gas chromatography coupled with mass spectrometry (GC/MS).
Isolation of E. coli bacteria. Specimens were collected from the carcasses of rabbits that showed previous signs of diarrhea and bloating. Specimens from the intestinal mucosa and cecum, along with their liquid contents, were put into a Ringer solution. Culture and isolation of E. coli were done on MacConkey agar. Petri dishes containing the culture medium were inoculated in depth and incubated at 37 °C for 24 hours. Miniature biochemical tests were conveniently and simultaneously performed on a colony for E. coli identification using an API 20 E gallery.
Antibacterial activity. A preliminary assay was performed with the agar diffusion method to compare the antibacterial effects of the essential oils against the performance of the antibiotic oxytetracycline. The diameters of the resulting inhibition zones were measured in centimeters, including the diameter of the well. The results are expressed as the average of three determinations ± standard deviation. The minimum inhibitory concentration (MIC) is defined as the smallest essential oil concentration capable of producing a total inhibition of growth after an incubation period of 24 to 48 hours (30). The minimum bactericidal concentration (MBC) is defined as the minimum concentration of the oil capable of killing the inoculum. The MIC and MBC values were determined by a microbroth dilution assay using resazurin as an indicator of bacterial growth (21).
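As an illustration of how MIC and MBC are read off such a dilution assay, the sketch below uses hypothetical well results consistent with the values reported later (the actual dilution series and readings are not given in the text).

```python
# Hypothetical two-fold dilution series (% v/v). "growth" marks wells where the
# resazurin indicator showed bacterial growth after incubation; "colonies" marks
# subcultures from each well that remained viable on agar.
concentrations = [10.0, 5.0, 2.5, 1.25, 0.625, 0.3125, 0.15625]
growth   = [False, False, False, False, False, False, True]
colonies = [False, False, False, False, False, True,  True]

mic = min(c for c, g in zip(concentrations, growth) if not g)     # total inhibition
mbc = min(c for c, k in zip(concentrations, colonies) if not k)   # inoculum killed
print(f"MIC = {mic}% v/v, MBC = {mbc}% v/v")  # MIC = 0.3125% v/v, MBC = 0.625% v/v
```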
RESULTS AND DISCUSSION
The yield of each essential oil extracted is summarized in Table 1. The species of plant in order of percentage yield are: Origanum compactum, seeds of Foeniculum vulgare, Thymus capitatus, leaves of Foeniculum vulgare, and Rosmarinus officinalis. The chemical composition of essential oils is very complex and diversified. An understanding of their constituents is very important in evaluating their properties and predicting their potential toxicity. The chemical compositions of the essential oils of Foeniculum vulgare, Origanum compactum, Thymus capitatus and Rosmarinus officinalis are shown in Table 2. The essential oil of Rosmarinus officinalis consists mainly of 1,8-cineole (51.62%), α-pinene (18.94%), and α-campholene aldehyde (10.65%). The results of the antibacterial activity of the essential oils against E. coli are shown in Table 3.
Table 3. Diameter of inhibition zones of essential oils against E. coli (cm)
E. coli showed resistance to oxytetracycline with an inhibition diameter of 0.6 cm, while the essential oil of Thymus capitatus showed high antibacterial activity (5.1 cm), followed by the essential oil of Origanum compactum (4.2 cm). Rosmarinus officinalis essential oil appeared to be ineffective towards this strain (0.6 cm). Low activity was observed with the essential oils of the leaves and seeds of Foeniculum vulgare (1.1 and 1.4 cm, respectively). The combination of the two essential oils of Thymus capitatus and Foeniculum vulgare seeds was found to be more effective against E. coli (2.1 cm) than the antibiotic oxytetracycline (0.6 cm).
The minimum inhibitory concentrations (MIC) and minimum bactericidal concentrations (MBC) of the essential oils against E. coli are grouped in Table 4. A previous study (13) found a low percentage of carvacrol (13.4%) and a high percentage of p-cymene (18.9%) for the essential oil of Thymus capitatus (harvested in February); the harvest period influences not only the yield but also the composition. The essential oil of Algerian Thymus capitatus consists of carvacrol (55%) and γ-terpinene (11%) (34), and that of Tunisia is also composed of carvacrol (70%), accompanied by other constituents, β-caryophyllene (8.5%), γ-terpinene (4.3%) and p-cymene (3.8%) (26), whereas Turkish Coridothymus capitatus is characterized by an average content of carvacrol (35.6%) and p-cymene (21%) and a moderate proportion of thymol (18.6%) and γ-terpinene (12.3%) (15). The composition reported in (26) for the Rosmarinus officinalis essential oil is 1,8-cineole (44.2%), camphor (12%) and α-pinene (11.6%). These results are similar to our results. The antibacterial activity of essential oils has been demonstrated by several studies (17, 28), including those of Origanum compactum and Thymus capitatus (5, 8), Foeniculum vulgare, and Rosmarinus officinalis (18,19,25). However, in the literature few studies address the effect of essential oils on pathogenic bacteria isolated from farmed animals. The low antibacterial power of the essential oil of Rosmarinus officinalis against E. coli O11 compared with that of Origanum compactum has already been reported by Mathlouthi et al. (22). The presence of monoterpene phenols in the two essential oils of Thymus capitatus and Origanum compactum seems to be responsible for the important antibacterial activity demonstrated by these two oils in our work. Carvacrol and thymol have proven antibacterial power against a wide range of bacteria (20, 36). The essential oil of Rosmarinus officinalis was ineffective against our isolated bacterium despite its content of oxygenated monoterpenes (51%). Indeed, Ait-Ouazzou et al. (1) studied the antimicrobial effect of 11 major constituents of essential oils and found that 1,8-cineole had moderate activity compared with other oxygenated monoterpene compounds such as carvacrol, thymol, and linalool. The low antibacterial activity recorded by the essential oils of the seeds and leaves of Foeniculum vulgare has already been demonstrated by the study of Grigore et al. (16). According to this study, the essential oil of Foeniculum vulgare (80% anethole and 13% limonene) has a low antibacterial activity compared to that of Thymus vulgaris. De et al. (9) found that the anethole isolated from anise is responsible for the antibacterial power of this plant. Phenylpropanoids have a lower antibacterial coefficient than terpene phenols. The low presence of oxygenated monoterpenes, such as fenchone and limonene, in the essential oils of the seeds and leaves, respectively, may also be responsible for their low antibacterial activity (35). The complex composition of essential oils, with all their major and minor constituents, confers this antibacterial power. Some studies have shown that a mixture of the major constituents of an essential oil has a low antibacterial activity compared to the whole essential oil (27), which shows that the effect of the quantitatively minor compounds is sometimes not negligible and supports the presence of additive or synergistic effects among all the compounds of the essential oil. Mathlouthi et al. (22) showed that the antibacterial activity of the essential oil of Rosmarinus officinalis against E. coli O11 is moderate (MIC = 4.4 mg/mL) compared with that of Origanum compactum (MIC = 0.9 mg/mL). According to Sienkiewicz et al., 2013, the essential oil of Rosmarinus officinalis has antibacterial activity against clinical strains of E. coli isolated from the human abdominal cavity (MIC = 18 µL/mL). In the present study, the potency of the antibacterial action of the essential oils varied according to the chemical profile of their major constituents. The modes of action of essential oils and their main constituents described so far all seem to affect the cell wall or cytoplasmic membrane. Indeed, the attack of the bacterial wall by the essential oil and the damage to the plasma membrane cause an increase in permeability, a loss of cellular constituents and coagulation of the cytoplasmic content (4). The inhibition of the resulting proton motive force and the alteration of membrane proteins block the production of cell energy, resulting in the death of the bacterium. In fact, the chemical variability of essential oils suggests the existence of molecules that can act by new cellular mechanisms.
Conclusion
In this work, a relationship between the biochemical families of the active constituents of essential oils and their antibacterial powers has been revealed. Indeed, the essential oils of Origanum compactum and Thymus capitatus, rich in terpene phenols (thymol, carvacrol), demonstrated great antibacterial power, followed by Foeniculum vulgare oil, rich in phenylpropanoid phenols (anethole), which revealed moderate antibacterial activity. The lowest activity was recorded by Rosmarinus officinalis oil, rich in terpene oxides (1,8-cineole). The essential oils of oregano, thyme, and fennel can be suggested as phytobiotics to prevent and treat colibacillosis and to reduce rabbit mortality. | 2022-09-01T15:14:41.008Z | 2022-08-30T00:00:00.000 | {
"year": 2022,
"sha1": "327831082f6ea3946ccbac814f51410478d21cce",
"oa_license": "CCBY",
"oa_url": "https://jcoagri.uobaghdad.edu.iq/index.php/intro/article/download/1592/1095",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "aa0fed089dcffc4a533fbf28b802d31a2eb1ab01",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
52933052 | pes2o/s2orc | v3-fos-license | In vivo fertilizing ability of stallion spermatozoa processed by single layer centrifugation with Androcoll-E™
A colloid with a species specific silane-coated, silica-based formulation, optimized for stallion (Androcoll-E™), enables a better sub-population of spermatozoa to be selected from stallion ejaculates. However, such a practice has not been critically evaluated in stallions with fertility problems. In this study we evaluate whether single-layer centrifugation (SLC) through Androcoll-E™ could be used to enhance fertility rates in a subfertile stallion. Ejaculates were obtained from two different stallions, one Lusitano (fertile) and one Sorraia (subfertile), with distinct sperm characteristics and fertility. Motility, morphology, plasma membrane structural (eosin-nigrosin) and functional integrity (HOS test), mitochondrial functionality (Δψm; JC-1) and longevity (motility after 72 h cooling) after centrifugation in Androcoll-E™, as well as pregnancy rates obtained after artificial insemination (AI), with and without (control group) SLC-treated sperm were assessed. The effect of SLC on sperm characteristics, and fertility results were evaluated by ANOVA and Fisher procedures, respectively. Our results showed that SLC-selected sperm did not differ from the raw semen in terms of viability, morphology, response to hypo-osmotic conditions (HOS test) and mitochondrial membrane potential (↑ΔΨmit; JC-1). Sperm motility in cooled samples was not improved by SLC treatment. Our data show that SLC through Androcoll-E™ has no effect on pregnancy rates in the stallions used in this trial.
Introduction
Artificial insemination (AI) in farm animals has been used through decades, but the expansion in horses was lower for several reasons (Loomis, 2001), including specie-specific constraints related to sperm conservation techniques. Several sperm quality studies in mammalian species have been made demonstrating how it can be defined and measured despite some lack of correlation with fertility (Colenbrander et al., 2003). Mammalian ejaculates are characterized by several sperm populations, not only related with motility, morphology and viability but also with acrosome and DNA integrity, mitochondrial membrane potential, among others. Nowadays, in equine breeding industry, the goal is to select the best sperm population to develop AI techniques which have additional benefits in terms of efficiency in the use of semen and improvement of fertility's potential of stallions. The main challenge is to improve the technique by minimizing seminal plasma and bacterial carryover. Morrell and Wallgren (2011) showed that this can be achieved by sperm selection using single layer centrifugation (SLC) procedures. In fact, the use of SLC through a colloid with a species specific silane-coated, silica-based formulation optimized for stallion spermatozoa (Androcoll-E TM ), enables a sub-population of highly motile spermatozoa with normal morphology and good chromatin integrity to be selected from stallion ejaculates (Johannisson et al., 2009;Morrell et al., 2010). The work developed by our team and others (Costa et al., 2012) showed that SLC with Androcoll-E TM improved progressive motility and percentage of live cells with intact acrosome, in fertile stallions, but did not have an effect on DNA integrity.
The use of SLC to select the best spermatozoa from ejaculates of low quality and/or low fertility for subsequent use in AI was reported by Mari et al. (2011). However, strong evidence on the benefits of AI with SLC-treated sperm in stallions with fertility problems is lacking. Therefore, in this assay, the effect of treating sperm with Androcoll-E™ on per-cycle fertility was studied using ejaculates from two stallions of two Portuguese autochthonous breeds - a Lusitano, a male already proven to be fertile, and a Sorraia, known to have very different sperm quality (Gamboa et al., 2009). Sorraia is a critically endangered breed, with an extremely reduced effective population, a high level of inbreeding (Luís et al., 2007) and low fertility rates (Oom et al., 1991; Gamboa et al., 2009). A conservation plan for this endangered breed is ongoing and strategies for breeding management are under study (Pinheiro et al., 2013). For this reason, fertile mares were artificially inseminated with SLC and non-SLC samples to determine if pregnancy rates could be improved by using SLC-selected sperm in AI programs. Viability, morphology, resistance to hypo-osmotic conditions and mitochondrial membrane potential were evaluated. The sperm survival rate of Androcoll-E™-treated semen cooled and stored at 4 °C for 24 h, 48 h and 72 h was also analyzed.
Materials and methods
Unless otherwise stated, all reagents were purchased from Sigma-Aldrich (St. Louis, MO, USA).
Semen collection and evaluation
This study was undertaken in two consecutive reproductive seasons (2012 and 2013) using two healthy adult male horses (Equus caballus) from different Portuguese breeds: a Lusitano (13 years old; 2012 season) with proven fertility and a Sorraia (4 years old; 2013 season), of unknown fertility. The animals were located at the Coimbra College of Agriculture, Polytechnic Institute of Coimbra (Coimbra, Portugal; 40°12′54.3″ N and 00°41′ E) and were kept in boxes with straw bedding, water freely available, and fed with hay and concentrates three times a day.
Semen was routinely collected (at least three collections/ week) from May to July, using a phantom (Hannover model) and an artificial vagina (INRA model). Immediately after dismount the raw ejaculates were brought to the laboratory and filtered through a sterile gauze to remove the gel and any large particles of debris. The gel-free semen was immediately assessed for color, smell and general appearance and maintained at 35°C in a water-bath during seminal evaluation and handling. Semen volume and pH were also registered. Sperm motility was assessed using a phase contrast microscope (Laborlux, Leica; equipped with a heating stage), at the time of collection, and the percentage of total progressive motile spermatozoa (PMSAC) was estimated visually by the observation of 5-10 microscopy fields in each of two drops (5.5 ll each drop) of raw semen. Filtered semen was diluted 40Â in formal saline solution to assess sperm concentration using a photocolorimeter (k = 546 nm; Colorimeter 254, Ciba-Corning). Sperm viability and morphology were accessed as previously described (Gamboa and Ramalho-Santos, 2005). Briefly, an aliquot of sperm was mixed with eosin-nigrosin on a slide for viability evaluations (Bloom, 1950), as well as with India ink for sperm morphology analysis (Foote, 2003). Two hundred cells were assessed for each slide using bright-field microscopy (Laborlux, Leica; Â1000) in 20-50 microscopy fields.
Semen treatment
Before semen collection, 15 mL of Androcoll-E™ (available from J.M. Morrell, SLU, Uppsala, Sweden) was transferred to a 50 mL Falcon tube and maintained at r.t. (20-23 °C) for 15 min. After collection, semen was diluted to a final concentration of 100 × 10⁶ sperm/mL, in a final volume of 18 mL, in INRA96 extender (IMV technologies, L'Aigle, France) maintained at 35 °C prior to semen collection. This volume of extended semen was carefully transferred to the top of the colloid and centrifuged at 500g for 20 min. Above the pellet, more than one "phase supernatant" (or layer) was distinguished (Fig. 1), and each of them was separately removed and stored in a different tube. The pellet was re-suspended to a final volume of 5 mL and the sperm concentration was determined using a Neubauer chamber. Sperm doses, both for AI and for motility evaluation after cooling, were prepared in INRA96 extender to a final concentration of 20 × 10⁶ spz/mL. For AI, 15 mL of diluted semen was packaged in an Air-Tite type syringe with no air and used within the first 30 min after collection. For sperm motility analysis, 10 mL of diluted semen was fractionated into 10 mL centrifuge tubes that were then packed into 50 mL Falcon tubes and stored under anaerobic conditions in a refrigerator (4 °C) for 24 h, 48 h and 72 h. Anaerobic conditions were reached by loading the tubes with 10 mL of diluted sperm and no air. Raw semen was also diluted in INRA96 and the sperm doses, both for AI and for motility evaluation after cooling, were also prepared to a final concentration of 20 × 10⁶ spz/mL. Raw semen, the sperm treated and non-treated with Androcoll-E™, as well as each layer, were studied in relation to volume, sperm concentration and viability, the HOS test, and mitochondrial membrane potential.
Hypo-osmotic swelling test (HOS test)
The hypo-osmotic swelling test (HOS test) was used to evaluate plasma membrane functionality. The HOS test determines the ability of the sperm membrane to maintain equilibrium between the sperm cell and its environment. Influx of fluid due to hypo-osmotic stress causes the sperm tail to coil and balloon or "swell". A higher percentage of swollen sperm indicates the presence of sperm having a functional and intact plasma membrane (Ramu and Jeyendran, 2013). Semen samples were subjected to a hypo-osmotic medium (Hank's balanced salt solution with 26 mM Hepes buffer (HHBS): 1.3 mM CaCl2·2H2O; 0.3 mM Na2HPO4·12H2O; 0.4 mM KH2PO4; 5.4 mM KCl; 0.8 mM MgSO4·7H2O; 26 mM C8H18N2O4S; 5.5 mM C6H12O6; 417 mM NaHCO3; 50 mOsm; 1:40 dilution) and, in order to eliminate potential misinterpretation of the results, the incidence of spermatozoa with a bent flagellum prior to HOS was also evaluated in isotonic medium (HHBS, 300 mOsm; 1:40 dilution). Samples were counted using a hemocytometer (Neubauer chamber).
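As a worked example of how the iso-osmotic control can be used, the sketch below subtracts the baseline incidence of bent flagella from the percentage of swollen cells; this subtraction-based correction is our assumption, since the text does not state how the two counts were combined.

```python
# Hypothetical counts: HOS-positive percentage corrected for spermatozoa that
# already showed a bent flagellum in iso-osmotic (300 mOsm) medium.
def hos_positive_percent(swollen_hypo, total_hypo, bent_iso, total_iso):
    return 100.0 * swollen_hypo / total_hypo - 100.0 * bent_iso / total_iso

print(hos_positive_percent(62, 200, 8, 200))  # 31.0% swollen - 4.0% baseline = 27.0
```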
Artificial insemination
Thirteen (13) fertile mares (4-20 years old), belonging to the Coimbra College of Agriculture, were used in this study.
Mares' reproductive management was as follows: after detecting the mare's estrus (mare reproductive behavior assessed in the presence of an intact stallion), the mare's reproductive status was monitored regularly by ultrasonographic scanning (Falco-Esaote, 5 MHz probe transducer - Pie Medical Equipment BV); when a follicle ≥ 35 mm in diameter was observed, females were artificially inseminated with 300 × 10⁶ spz (20 × 10⁶ sperm/mL; 15 mL) per AI, every other day until ovulation. Insemination was performed, using a sterile uterine catheter, within the first 60 min after semen collection. The time interval "last AI-ovulation" was determined. For diagnosis of pregnancy and twinning inspection, ultrasonographic images of the conceptus were taken at 13 days after the last AI. To calculate the fertility per cycle (FC) we utilized the estimations and rules commonly used in French National Stud Farms (France, 1996) and previously described (Gamboa and Ramalho-Santos, 2005).
In the first estrous cycle explored, mares were assigned to semen treatment at random (SLC or non-SLC) and, if they did not conceive, another cycle was explored in order to alternate AI with SLC-treated and AI with non-SLC-treated sperm, and vice versa. If they conceived, embryonic loss was induced by the use of Dinolytic (Pfizer Animal Health, Louvain-la-Neuve, France). In the 2012 breeding season, 10 mares were inseminated with the Lusitano sperm over 18 cycles: in nine cycles Androcoll-E™-treated semen was used and, in the other nine, non-treated doses. In the 2013 breeding season, 11 mares were used to study the Sorraia sperm: of 15 cycles, in six Androcoll-E™-treated semen was used and, in the other nine, non-treated doses. Four mares were inseminated over two consecutive cycles.
Statistical analysis
Data were analyzed with the SPSS Statistic software (version 20, IBM). Differences in spermatozoa quality in the several fractions after SLC-treatment were tested by ANOVA. Bonferroni post hoc tests were performed only if the initial test result was significant at P < .05. Sperm motility differences between treated and non-treated Androcoll-E TM samples were analyzed by the independent-samples T test procedure. A significant difference was reported at P < .05 (Maroco, 2007).
Results
The stallions' sperm characteristics (mean value ± SEM) are presented in Fig. 2. For the Lusitano stallion, from a total of 19 ejaculates, seven were non-SLC-treated, eight were both treated and non-treated, and four were SLC-treated. For the Sorraia horse, from a total of 13 ejaculates, five were non-SLC-treated and eight were treated and non-treated. As previously reported (Gamboa et al., 2009), seminal traits differed significantly between stallions except for ΔΨmit. Significant differences (P < .05) between stallions were also observed for semen volume (34.2 ± 17.3 and 9.3 ± 9.5, PSL and Sorraia stallions, respectively) and sperm concentration (156.0 ± 54.6 and 454.0 ± 280.0, PSL and Sorraia stallions, respectively).
For the Lusitano stallion, 12 ejaculates treated with Androcoll-E TM showed 4 layers after centrifugation: upper layer (seminal plasma plus extender with some spermatozoa), middle layers 1 and 2 (Androcoll-E TM plus spermatozoa, debris and extender) and pellet (rich in spermatozoa). In the Sorraia stallion, 8 ejaculates treated with Androcoll-E TM showed only 3 layers after centrifugation (Fig. 1). The sperm characteristics from each layer are shown in Figs. 2 and 3. In relation to initial sperm diluted in INRA96, there was 46.7% ± 8.4 and 54.0 ± 9.0 of sperm loss with SLC-treatment in Lusitano and Sorraia stallions, respectively. The percentage of spermatozoa retained in the colloid layer for Lusitano and Sorraia stallions were, respectively, 43.8% ± 8.3 and 48.9% ± 10.3.
Both in Lusitano and Sorraia stallions, raw semen was characterized by a low percentage of spermatozoa with a positive result for HOS test (HOS+), and the mean values did not differ (P > 0.05) from the percentage of sperm with membranes osmotically active, recovered after SLC-treatment (Fig. 2). For the HOS test, no differences were evident between middle layers and pellet, in both stallions. Nevertheless, in the Lusitano samples SLC-recovered sperm presented the highest percentage of live cells with membranes osmotically active, contrasting with Sorraia samples where the highest percentage of live cells with membranes osmotically active was retained in the upper layer (Fig. 2).
When sperm morphological quality was analyzed, the SLC-selected pellet presented a better population of spermatozoa regarding normal heads in the Lusitano samples (P < 0.05), while in the Sorraia stallion no difference was found (Fig. 2).
Figure 3. Proportion (mean ± SEM) of sperm mitochondrial membrane potential (ΔΨmit) in raw semen and in the different layers after single-layer centrifugation (SLC) of the sperm from Lusitano (A) and Sorraia (B) stallions. ↑ΔΨmit, high mitochondrial membrane potential; ↓ΔΨmit, low mitochondrial membrane potential; w/o ΔΨmit, without mitochondrial membrane potential. Values bearing + possibly differ statistically (P < 0.1).
Sperm processing by SLC does not significantly increase the percentage of spermatozoa with high mitochondrial membrane potential. In fact, no differences were observed between raw semen and sperm SLC-selected from both Lusitano and Sorraia stallions (Fig. 3).
The inferential statistical analysis (Fisher test) (Maroco, 2007) showed that, in the Lusitano stallion, the per-cycle fertility obtained with Androcoll-E™-treated semen did not differ from the per-cycle fertility obtained with non-SLC-treated sperm (the Fisher exact test value was 1; the result is not significant at P < 0.05). For the Sorraia stallion, the per-cycle pregnancy rate was equal to zero for both treatments (Table 1).
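For illustration, the per-cycle comparison above amounts to Fisher's exact test on a 2 × 2 table of cycles; the counts below are hypothetical (the study's pregnancy counts per group are not given in this excerpt), chosen only to reproduce a non-significant outcome.

```python
# Hypothetical 2x2 table: pregnant vs. not pregnant cycles for SLC-treated
# and non-treated semen (9 cycles per group, as in the Lusitano trial).
from scipy.stats import fisher_exact

table = [[4, 5],   # SLC-treated cycles
         [4, 5]]   # non-treated cycles
odds_ratio, p_value = fisher_exact(table)
print(p_value)  # 1.0 -> no detectable difference at P < 0.05
```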
Discussion
Single-layer colloid centrifugation (SLC) of semen is used to select the best spermatozoa from sperm samples in a variety of mammals (Henkel and Schill, 2003; Samardzija et al., 2006; Dorado et al., 2011a,b; Nicolas et al., 2012), including horses (Morrell et al., 2010). In order to evaluate whether SLC through Androcoll-E™ can be helpful in horse reproduction, we used sperm samples obtained from a fertile stallion (Lusitano) and a Sorraia horse of unknown fertility. The Sorraia breed is characterized by high levels of inbreeding (Luís et al., 2007; Pinheiro et al., 2013), poor seminal traits and low fertility (Gamboa et al., 2009; Oom et al., 1991).
Considering sperm membrane structural and functional integrity, morphology and mitochondrial membrane potential, no differences were found between raw semen and the sperm recovered after SLC. Moreover, fertility rates were not improved with SLC treatment using Androcoll-E™. The sperm quality results obtained contrast with other results obtained in horses (Johannisson et al., 2009; Morrell et al., 2010; Costa et al., 2012). Nevertheless, at the head level, morphologically significant differences were observed for the sperm population recovered by SLC in the Lusitano stallion but not for the Sorraia one. The Sorraia breed is characterized by a polymorphic spermiogram, with macrocephalic and microcephalic spermatozoa present, and the SLC procedure does not seem sufficiently efficient at selecting the best sperm.
Concerning the effect of SLC on sperm membrane functionality, some authors (Shekarriz and DeWire, 1995) consider that the time of centrifugation, more than the g-force, induces ROS formation in semen. Reactive oxygen species (ROS) are associated with sperm membrane injury through spontaneous lipid peroxidation, which may change sperm function (Agarwal and Allamaneni, 2004). Previous work carried out in horses showed that Androcoll-E™ selects a subset of live sperm capable of producing superoxide anion in isosmolar conditions (Macías-García et al., 2012). However, in our study, if ROS were produced during SLC centrifugation, it is reasonable to assume that their level was not deleterious enough to be associated with sperm membrane injury, as suggested by the viability and HOS data. Besides, the functional integrity of the sperm membrane is a prerequisite for fertilization, and the fertility rates obtained with SLC-treated sperm did not differ from the non-SLC-treated rates.
Figure 4. Proportion (mean) of motile spermatozoa after collection (PMSAC) and after dilution in INRA96, with (SLC-selected) and without (non-SLC-selected) centrifugation on Androcoll-E™, cooled (4 °C) and stored for 24 h (PMS24 h), 48 h (PMS48 h) and 72 h (PMS72 h). Values bearing * differ significantly between stallions (P < 0.05).
The HOS test evaluates the functional integrity of the plasmalemma (Ramu and Jeyendran, 2013). It is a different approach from eosin-nigrosin, which evaluates the structural integrity of the plasma membrane (Zhu and Liu, 2000). In this study, it seems that, in both stallions, the sperm regulatory volume mechanism is not affected by SLC treatment, since the sperm plasmalemma remains functional, as shown by the HOS test results. Apparently, the sperm extender compensates for both the loss of osmolytes of low molecular weight and the antioxidant defenses offered by the seminal plasma. This assumption seems to be supported by the results obtained for sperm motility in cooled samples over time. Indeed, sperm motility in SLC-selected samples did not differ significantly from that observed in non-SLC-selected samples.
It has been believed that the function of the mitochondria in the midpiece is to provide ATP for sperm movement through oxidative phosphorylation. In stallions, sperm mitochondrial activity was correlated with sperm viability and progressive motility in raw semen (Foote, 2003). In frozen-thawed sperm (Macías-García et al., 2009), SLC through Androcoll-E™ improved the percentage of spermatozoa depicting high ΔΨmit. To our knowledge, this is the first report in which mitochondrial membrane potential was evaluated in fresh SLC-selected stallion sperm. The ejaculate of the Lusitano horse was characterized by good viability but only satisfactory motility and low mitochondrial membrane potential, even in conditions where oxygen was present, contrasting with the Sorraia's semen, where poor viability and motility were observed but mitochondrial membrane potential was higher. However, AI of fertile mares with sperm doses prepared following SLC did not result in any pregnancy.
In our study, there was no significant difference in pregnancy rates between SLC and non-SLC sperms for either the fertile or subfertile stallion tested. Overall, the SLCcentrifugation seems not to affect sperm plasmalema functionality, but did not select for other evaluated characteristics of sperm quality.
Conclusion
This technique did not improve pregnancy rates in either the fertile or the infertile stallions used in this trial. The results of the present study demonstrate that any benefits of the SLC technique may be stallion-dependent. | 2018-10-22T06:13:30.787Z | 2016-01-22T00:00:00.000 | {
"year": 2016,
"sha1": "f60e44791867a1e3ec4692320afaab19d118d82e",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.sjbs.2016.01.030",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "67e21b793e9fa3c45ad818a72757f777f6ab08d1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
246294936 | pes2o/s2orc | v3-fos-license | Fast Moving Natural Evolution Strategy for High-Dimensional Problems
In this work, we propose a new variant of natural evolution strategies (NES) for high-dimensional black-box optimization problems. The proposed method, CR-FM-NES, extends a recently proposed state-of-the-art NES, the Fast Moving Natural Evolution Strategy (FM-NES), in order to be applicable to high-dimensional problems. CR-FM-NES builds on the idea of using a restricted representation of the covariance matrix instead of a full covariance matrix, while inheriting the efficiency of FM-NES. The restricted representation of the covariance matrix enables CR-FM-NES to update the parameters of a multivariate normal distribution with linear time and space complexity, so that it can be applied to high-dimensional problems. Our experimental results reveal that CR-FM-NES does not lose the efficiency of FM-NES; on the contrary, CR-FM-NES achieves significant speedup compared to FM-NES on some benchmark problems. Furthermore, our numerical experiments using 200-, 600-, and 1000-dimensional benchmark problems demonstrate that CR-FM-NES is effective compared with the scalable baseline methods VD-CMA and Sep-CMA.
I. INTRODUCTION
THIS work focuses on black-box optimization (BBO) problems. In BBO, we cannot obtain representations of objective functions explicitly; optimization should be performed using only function evaluation values for solutions, not gradients or other information. Because evaluating a solution often demands high computational resources, reducing the number of evaluations is critical in BBO. For example, hyperparameter optimization [1], which is an important task for achieving high performance of machine learning algorithms and can be formulated as BBO, needs a lot of evaluation time to construct one training model. Towards solving such real-world BBO problems, many optimization methods, including evolutionary algorithms, have been actively studied [2]- [7].
Our particular interest lies in Natural Evolution Strategies (NES) [8]- [20], which show promising performance in BBO. In contrast to typical evolutionary algorithms, which minimize an objective function f(x) by seeking the optimal solution x* directly, NES attempts to find the parameter θ* of a probability distribution p(x|θ) that minimizes the expected evaluation value J(θ) = ∫ f(x)p(x|θ)dx of the solution x sampled from the probability distribution p(x|θ). This technique is called stochastic relaxation, and it has appealing properties from a theoretical viewpoint [21]. NES generates solutions according to the current probability distribution p(x|θ) and evaluates them in each iteration. A critical component of NES is the natural gradient [22]- [25], which is defined as the steepest descent direction under a sufficiently small Kullback-Leibler divergence [26]- [28]. The natural gradient is estimated by using the evaluation values of the solutions, and the parameter θ of the distribution is updated by using the estimated natural gradient.
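For reference, the gradient of J(θ) can be written with the log-likelihood trick and then preconditioned by the Fisher information matrix F(θ); these are the standard NES identities rather than anything specific to FM-NES:

\[
\nabla_\theta J(\theta) = \nabla_\theta \int f(x)\, p(x \mid \theta)\, dx
= \mathbb{E}_{x \sim p(\cdot \mid \theta)}\!\big[ f(x)\, \nabla_\theta \ln p(x \mid \theta) \big]
\approx \frac{1}{\lambda} \sum_{i=1}^{\lambda} f(x_i)\, \nabla_\theta \ln p(x_i \mid \theta),
\qquad
\tilde{\nabla}_\theta J(\theta) = F(\theta)^{-1}\, \nabla_\theta J(\theta).
\]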
Among the many variants of NES, FM-NES [29] shows good performance in BBO problems. In addition to the update based on the natural gradient, FM-NES introduces several mechanisms to speed up the search even with a small number of samples, which results in significant speed-up over other NES algorithms [8], [30] and CMA-ES [31]- [34].
However, it is difficult to employ FM-NES for high-dimensional problems due to its high complexity: the time complexity of FM-NES is O(d^3), where d is the dimension number, and its space complexity is O(d^2). This is mainly because the number of parameters of the covariance matrix to be updated is d(d + 1)/2 + 1 ∈ Θ(d^2).
To address this issue, we propose in this paper Cost-Reduction Fast Moving Natural Evolution Strategy (CR-FM-NES), a new version of FM-NES that can be utilized for optimizing high-dimensional problems. CR-FM-NES builds on the sophisticated representation of the covariance matrix proposed in [35]. While CR-FM-NES retains the high speed of FM-NES, it reduces the time and space complexity to linear in the dimension number.
The rest of the paper is organized as follows. In Section II, we describe the existing method, FM-NES, and its problem when applied to high-dimensional problems. In Section III, we propose CR-FM-NES, which deals with the issue of FM-NES. In Section IV, we perform numerical experiments to demonstrate the effectiveness of the proposed method. Section V concludes this study.
II. FM-NES AND ITS PROBLEM
A. FM-NES
Fast Moving Natural Evolution Strategy (FM-NES) [29] is one of the state-of-the-art BBO algorithms. FM-NES performs optimization by updating the parameters of the multivariate normal distribution based on the estimated natural gradient. We give an overall procedure of FM-NES as follows.
Step 1. Initialize the parameters of the multivariate normal distribution N(m^(0), σ^(0)B^(0)(σ^(0)B^(0))^T). Here, m^(0) is the mean vector, σ^(0) is the step size, and B^(0) is the normalized transformation matrix, which satisfies det(B) = 1. Let the iteration counter be t = 0 and initialize the evolution path p_σ^(0) = 0.
Step 2. Generate λ solutions by the antithetic variates method [36]; that is, draw λ/2 independent samples z_i ~ N(0, I), take their negatives −z_i as the remaining half, and set x_i = m^(t) + σ^(t)B^(t)z_i for each of the λ resulting vectors z_i, where λ is a positive even number.
Step 3. Sort the generated solutions by their evaluation values.
Step 4. Update the evolution path p_σ^(t) [33], [37] using the weighted sum of the sorted samples, where z_{i:λ} denotes the sample corresponding to the i-th best solution among the λ solutions. The learning rate c_σ, the rank-based weight w_i^rank, and µ_eff are defined as in FM-NES [29], [30].
Step 5. Set the search phase to "movement" if the norm of the evolution path p_σ is sufficiently large; otherwise, set the search phase to "convergence".
Step 6. Set the weight to w_i = w_i^dist if the search phase is "movement"; otherwise, set the weight to w_i = w_i^rank. Here α is the distance weight parameter entering w_i^dist; see [30] for how to calculate α.
Step 7. Set the learning rates for the distribution parameters according to the search phase, and set η_m = 1.0. For these learning rates, the recommended values are presented in [30].
Step 8. Estimate the natural gradients of the mean vector, the step size, and the normalized transformation matrix from the weighted, sorted samples, where I ∈ R^{d×d} is the (d × d) identity matrix appearing in the gradient with respect to the transformation matrix.
Step 9. Update the parameters of the distribution using the estimated natural gradients and the learning rates set above. In addition, update the evolution path p_c^(t).
Step 10. Emphasize the expansion of the distribution when the search phase is "movement", using a matrix Q ∈ R^{d×d} that expands the normalized transformation matrix; γ is an expansion rate. Here, τ_i (i = 1, . . . , d) is the change of the second moment in each direction from B^(t)(B^(t))^T to B^(t+1)(B^(t+1))^T, computed with respect to the eigenvectors {e_i}_{i=1}^{d} of the normalized covariance matrix B^(t)(B^(t))^T. If τ_i > 0, the distribution is expanded in the direction of e_i. The update equation of the expansion rate γ is given in [29].
Step 11. Perform the rank-one update of the normalized transformation matrix using the evolution path p_c, where c_1 is a learning rate. For c_1, the recommended value is presented in [34].
Step 12. Update the iteration t ← t + 1 and move to Step 2 when the stopping criterion is not satisfied.
B. Problem of FM-NES
A drawback of FM-NES is that it is difficult to apply to high-dimensional problems: its time complexity is O(d^3) and its space complexity is O(d^2).
III. CR-FM-NES
To address the problem of FM-NES, in this section we propose Cost-Reduction Fast Moving Natural Evolution Strategy (CR-FM-NES), which reduces the time and space complexity of FM-NES. The time and space complexity of CR-FM-NES is O(d), which enables it to be applied to high-dimensional problems. We first describe the basic ideas of CR-FM-NES in Section III-A and then describe the details in Section III-B. In Section III-C, we design the learning rates for CR-FM-NES. We then present the overall procedure of CR-FM-NES in Section III-D.
A. Basic Ideas
To reduce the time and space complexity of FM-NES, CR-FM-NES utilizes the representation of the covariance matrix used in VD-CMA [35]. We define the covariance matrix of the multivariate normal distribution in CR-FM-NES as C = σ^2 D(I + vv^T)D (Eq. (22)), where D ∈ R^{d×d} is a diagonal matrix, v ∈ R^d is a d-dimensional vector, and σ ∈ R_+ is a step size. The important point is that, by using this representation instead of the full covariance matrix, the number of parameters is reduced from d(d + 1)/2 + 1 ∈ Θ(d^2) to 2d + 1 ∈ Θ(d). There are several methods restricting the representation of the covariance matrix, for example, sep-CMA-ES [38] and R1-NES [39]. However, the performance of sep-CMA-ES deteriorates on problems with variable dependencies, and the performance of R1-NES deteriorates on ill-conditioned problems. We can alleviate these issues by employing the representation in Eq. (22). Note, however, that there exist objective functions whose inverse Hessians the covariance matrix in Eq. (22) cannot approximate.
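As an illustration of why this representation yields linear complexity, the following Python sketch draws a sample from N(m, σ^2 D(I + vv^T)D) using only vector operations; it relies on the identity (I + vv^T)^{1/2} = I + c vv^T with c = (sqrt(1 + ||v||^2) − 1)/||v||^2. The helper name sample_restricted is ours and is not taken from the paper's code.

import numpy as np

def sample_restricted(m, sigma, d_diag, v, rng):
    # Draw x ~ N(m, sigma^2 * D (I + v v^T) D) in O(d) time and space;
    # d_diag holds the diagonal entries of D.
    z = rng.standard_normal(m.shape[0])
    vv = float(v @ v)
    c = (np.sqrt(1.0 + vv) - 1.0) / vv if vv > 0 else 0.0
    y = d_diag * (z + c * (v @ z) * v)   # apply D (I + v v^T)^{1/2} to z
    return m + sigma * y

rng = np.random.default_rng(0)
dim = 1000
x = sample_restricted(np.zeros(dim), 0.5, np.ones(dim),
                      rng.standard_normal(dim) / np.sqrt(dim), rng)
print(x.shape)

Storing D as the vector of its diagonal entries is what keeps the memory footprint at Θ(d) rather than Θ(d^2).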
The following operations cannot be naively transported to CR-FM-NES while keeping the time and space complexity O(d):
• Emphasizing the expansion of the probability distribution
• Updating the parameters v and D
To achieve linear complexity, we do not include the emphasis of the expansion of the probability distribution in CR-FM-NES. We describe how to update the parameters v and D with linear complexity in Section III-B.
Due to the restrictive representation of the covariance matrix, it is expected that the learning rate of the covariance matrix in CR-FM-NES can be set to higher values than that of FM-NES because the number of the parameters in the covariance matrix is reduced from d(d + 1)/2 + 1 ∈ Θ(d 2 ) to 2d + 1 ∈ Θ(d).
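A quick check of the parameter counts mentioned in this paragraph; the snippet below is ours and simply evaluates the two expressions for the dimensions used later in the experiments.

def param_counts(d):
    full = d * (d + 1) // 2 + 1   # full covariance matrix plus step size: Theta(d^2)
    restricted = 2 * d + 1        # diagonal D, vector v, and step size: Theta(d)
    return full, restricted

for d in (80, 200, 600, 1000):
    print(d, param_counts(d))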
We describe the specific design of the learning rate in Section III-C.
In the following, we use the notations without redefinition if they have been already introduced in Section II (for example, the weight function w(·) and the learning rates).
B. Parameter Update
1) v and D Update: The parameters of the covariance matrix in CR-FM-NES, v and D, are updated by the estimated natural gradient with the learning rate η_B, following [35]. It should be noted, however, that we do not have to calculate the inverse of the Fisher information matrix explicitly, as will be shown in the next paragraph. Following [35], the natural gradients ∇_v ln p_θ(x) and ∇_D ln p_θ(x) are computed through two auxiliary vectors s and t.
Here, V is the diagonal matrix whose diagonal elements are composed of v, and ⊙ denotes the element-wise product. We emphasize that all of the above operations can be performed with linear complexity. Note that we employ a corrected version of H which is different from that in [35]; we leave the detailed derivation to the Appendix. After updating v in Eq. (23) and D in Eq. (24), we normalize D so as to keep the determinant of D(I + vv^T)D fixed. This implies that the determinant of the covariance matrix is affected only by the σ update, which is described in the next section.
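The determinant normalization mentioned here can also be done in O(d), since det(D(I + vv^T)D) = (1 + v^T v) · prod_i d_i^2. The following sketch is our own helper (not the paper's code); it normalizes the determinant to a chosen target in log space to avoid under- or overflow for large d.

import numpy as np

def normalize_d(d_diag, v, target_logdet=0.0):
    # Rescale the diagonal of D so that det(D (I + v v^T) D) hits the target value.
    logdet = 2.0 * np.sum(np.log(np.abs(d_diag))) + np.log1p(float(v @ v))
    return d_diag * np.exp((target_logdet - logdet) / (2.0 * len(d_diag)))

d_diag = np.full(4, 2.0)
v = np.array([0.5, 0.0, 0.0, 0.0])
d_new = normalize_d(d_diag, v)
print(2.0 * np.sum(np.log(d_new)) + np.log1p(float(v @ v)))  # ~0.0, i.e. determinant 1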
2) σ Update: Similarly to FM-NES, we update the step size σ by using the estimated natural gradient. Note that this operation can also be performed in linear time.
C. Learning Rates Design
In this section, we design learning rates suited to the restricted representation of the covariance matrix. The learning rate c_1 used for the rank-one update is the same as that used in [35]. Additionally, we set the learning rate η_B used for v and D to η_B = tanh((min(0.02λ, 3 ln(d)) + 5) / (0.23d + 25)).
We determined this equation by fitting a parametric model given the fixed c_1 defined above. Figure 1 shows a comparison of the learning rate for the covariance matrix used in FM-NES and CR-FM-NES when we vary the dimension number d ∈ {10, 20, · · · , 100} and set λ = 20. We can see that the learning rate used in CR-FM-NES is much larger than the one used in FM-NES. For the other parameters in CR-FM-NES, the recommended values of FM-NES [29] are used.
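For reference, a small script reproducing the kind of comparison plotted in Figure 1 for the CR-FM-NES side only, assuming the η_B expression exactly as printed above; the function name eta_b is ours.

import numpy as np

def eta_b(dim, lam):
    # learning rate for v and D, following the expression printed above
    return float(np.tanh((min(0.02 * lam, 3.0 * np.log(dim)) + 5.0) / (0.23 * dim + 25.0)))

for d in range(10, 101, 10):
    print(d, round(eta_b(d, 20), 4))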
D. Algorithm of CR-FM-NES
Algorithm 1 shows the overall procedure of CR-FM-NES. Note that E is the set of positive even integers. In lines 3-7, candidate solutions are generated from the multivariate normal distribution. In line 8, the candidate solutions are sorted with respect to their evaluation values. In line 9, the evolution path p_σ is updated. In lines 10-11, the weight function and the learning rates are switched by using the norm of the evolution path. In line 12, the evolution path p_c is updated. In lines 13-19, the parameters of the multivariate normal distribution are updated based on the estimated natural gradient and the evolution paths. Note that D is normalized in lines 16-17 to keep the determinant of D(I + vv^T)D fixed.
IV. EXPERIMENTS
In this work, we experiment with benchmark problems in order to investigate the following research questions (RQs). RQ1: How does CR-FM-NES perform compared to FM-NES? RQ2: Is CR-FM-NES more efficient than baseline methods for high-dimensional problems?
We first describe the experimental settings in Section IV-A. In Section IV-B, we compare FM-NES with CR-FM-NES on 80-dimensional problems (RQ1). We then compare CR-FM-NES with other baseline methods (RQ2) in Section IV-C. The code for running the proposed method is available at https://github.com/nomuramasahir0/crfmnes.
As the performance metric, we employ the average number of evaluations until the best evaluation value f(x_best) reaches the target objective value 10^−10, computed over successful trials and divided by the success rate [40]. A trial is judged to be successful if the target objective value is reached within the maximum number of evaluations, 5d × 10^4. Based on preliminary experiments, we determined the settings of the population size λ as follows. For the Sphere, k-Tablet, Ellipsoid, and Rosenbrock functions, we employ λ = λ_def, 2λ_def, 3λ_def, 4λ_def, 5λ_def, where λ_def = 4 + ⌊3 ln d⌋, rounded up to an even number. Note that λ is set to be an even number to use the antithetic sampling method. In this setting, we obtain λ_def = 18, 20, 24, and 24 for the dimension numbers d = 80, 200, 600, and 1000, respectively. In addition, Table II shows the population sizes used in the experiments on the Rastrigin function. The trial number (that is, the number of times that the experiment is repeated to calculate the average number of evaluations and the success rate) is set to 30 for the Rastrigin function and 10 for the other functions.
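The performance measure and the default population size described above are easy to compute; the sketch below uses our own helper functions, with the rounding-to-even convention inferred from the quoted values 18, 20, 24, 24.

import math

def default_population_size(d):
    # 4 + floor(3 ln d), rounded up to an even number for antithetic sampling
    lam = 4 + math.floor(3 * math.log(d))
    return lam + (lam % 2)

def evaluations_per_success(evals_to_target):
    # evals_to_target: one entry per trial, None if the target was never reached
    successes = [e for e in evals_to_target if e is not None]
    if not successes:
        return float("inf")
    success_rate = len(successes) / len(evals_to_target)
    return (sum(successes) / len(successes)) / success_rate

print([default_population_size(d) for d in (80, 200, 600, 1000)])   # [18, 20, 24, 24]
print(evaluations_per_success([1200, 1500, None, 1350]))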
B. Comparison to FM-NES
To compare the performance of FM-NES and that of CR-FM-NES, we conduct experiments on 80-dimensional benchmark problems. Figure 2 shows the results in each benchmark problem. For the Ellipsoid, the k-Tablet, and the Rosenbrock functions, CR-FM-NES has achieved a significant speedup compared to FM-NES. We argue that this is because the number of parameters of the covariance matrix in CR-FM-NES is 2d + 1 ∈ Θ(d), which enables us to set the learning rates much higher than those in FM-NES, as described in Section III-C. In addition, the result in the Rastrigin function implies that CR-FM-NES does not lose its efficiency even in multimodal problems. Note that, however, CR-FM-NES will fail to perform optimization in problems whose inverse Hessian of the objective function cannot be approximated by the covariance matrix represented in Eq. (22), as illustrated in [35].
We additionally investigate the computational time of both methods. The CPU is Intel Xeon E5-2680 V4 Processor (2.4GHz) and the memory is 7.5GB. The OS is SUSE Linux Enterprise Server 12 SP4. All the code used in the experiment is implemented by Python and its version when executed is 3.6.3. Figure 3 shows the computational time required to execute 1000 iterations for each method. We vary the dimension number d ∈ {10, 20, · · · , 100} and set λ = 20. We can confirm that the computational time of FM-NES increases rapidly as the dimension number increases, while that of CR-FM-NES is fairly small.
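The wall-clock comparison in this paragraph can be reproduced in spirit with a small timing harness like the one below; the two step functions are stand-ins with roughly cubic and linear cost profiles, not the actual FM-NES and CR-FM-NES updates.

import time
import numpy as np

def time_iterations(step, n_iters=1000):
    start = time.perf_counter()
    for _ in range(n_iters):
        step()
    return time.perf_counter() - start

d = 300
A = np.random.rand(d, d)
x = np.random.rand(d)
full_step = lambda: A @ A           # dense matrix product, mimicking a full-covariance update
linear_step = lambda: x * x + x     # a few vector operations, mimicking a restricted update
print(time_iterations(full_step), time_iterations(linear_step))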
C. Comparison in high-dimensional problems
In this section, to verify the effectiveness of CR-FM-NES in high-dimensional problems, we compare CR-FM-NES with VD-CMA [35] and Sep-CMA [38]. Although VD-CMA uses the same representation of the covariance matrix as CR-FM-NES, its procedure is fairly different, as VD-CMA is based on CMA-ES. Sep-CMA is also based on CMA-ES, but restricts the covariance matrix to a diagonal matrix. We do not include R1-NES among the baseline methods because Akimoto et al. [35] have reported that VD-CMA shows clearly better performance than R1-NES. We experiment on problems with dimension d = 200, 600, and 1000. Figure 4 shows the results of the experiment. For the Ellipsoid and the k-Tablet functions, where the inverse Hessian can be approximated even by a diagonal matrix, the performance of CR-FM-NES is competitive with that of Sep-CMA. In contrast, for the Rosenbrock function, where the inverse Hessian cannot be approximated by a diagonal matrix, the efficiency of CR-FM-NES increases substantially compared to Sep-CMA, and it also shows better performance than VD-CMA. We believe that the performance difference between CR-FM-NES and VD-CMA originates from the difference between FM-NES and CMA-ES; CR-FM-NES inherits the efficiency of FM-NES, which leads to good performance in high-dimensional problems. The results on the Rastrigin function suggest that CR-FM-NES is efficient even in high-dimensional multimodal problems. Further investigations in this direction are left for future work.
V. CONCLUSION
In this work, we introduced a new NES variant for high-dimensional problems, Cost-Reduction Fast Moving Natural Evolution Strategy (CR-FM-NES). CR-FM-NES extends FM-NES, a recently proposed state-of-the-art NES algorithm, to be applicable in high-dimensional problems. To do so, CR-FM-NES uses the representation of the covariance matrix proposed in [35] instead of a full covariance matrix. Our experimental results suggest that, compared to FM-NES, CR-FM-NES achieves significant speedup on some problems and at least competitive performance on the others. Furthermore, CR-FM-NES showed favorable performance over VD-CMA [35] and Sep-CMA [38].
The main limitation of the proposed method lies in the restricted representation of the covariance matrix. As noted in [35], CR-FM-NES will fail to optimize when the covariance matrix cannot approximate the inverse Hessian of the objective function. Therefore, estimating the problem complexity online and switching the level of the representation of the covariance matrix, as in online model selection [41], is an important future direction.
Additionally, the application of CR-FM-NES to real-world problems such as machine learning is also an exciting direction. For example, optimizing the parameters of scalable machine learning algorithms often requires solving high-dimensional and multimodal optimization problems [42]- [44]. Given our experimental results showing that CR-FM-NES is efficient even in high-dimensional and multimodal problems, we believe that CR-FM-NES can play an active role in real-world applications including machine learning. Furthermore, not only comparison with evolutionary algorithms, but also comparison with gradient-based methods such as stochastic gradient descent [45], [46], which has already become a de facto standard in machine learning, is important to expand the scope of application of the method.
APPENDIX
As introduced in Section III, V is the diagonal matrix whose diagonal elements are composed of v. Note that the resulting H is different from the value derived in [35]. For a fair comparison, in our experiments we use this definition of H not only for CR-FM-NES (our proposal) but also for VD-CMA. For the exact values of I_{v,v}, I_{v,D}, and I_{D,D}, see Lemma 3.2 in [35].
By letting
Therefore, by letting H : | 2022-01-28T02:15:46.003Z | 2022-01-27T00:00:00.000 | {
"year": 2022,
"sha1": "6aee662f8e018ca8e66d427f65f2493ad21da588",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "6aee662f8e018ca8e66d427f65f2493ad21da588",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
18945320 | pes2o/s2orc | v3-fos-license | Electron microscope histochemical evidence for a partial or total block of the tricarboxylic acid cycle in the mitochondria of presynaptic axon terminals.
Respiration-linked, massive accumulation of Sr(2+) is used to reveal the coupled oxidation of pyruvate, alpha-oxoglutarate, succinate, and malate by in situ mitochondria. All of these substrates were actively oxidized in the dendritic and perikaryal mitochondria, but no alpha-oxoglutarate or succinate utilization could be demonstrated in the mitochondria of the presynaptic axon terminals. A block at an early step of alpha-oxoglutarate and succinate oxidation is proposed to account for the negative histochemical results, since the positive reaction with pyruvate and malate proves that these mitochondria possess an intact respiratory chain and energy-coupling mechanism essential for Sr(2+) accumulation. This indicates that the mitochondria in the axon terminals would be able to generate energy for synaptic function with at least some of the respiratory substrates. With regard to the block in the tricarboxylic acid cycle, the oxaloacetate necessary for citrate formation is suggested to be provided by fixation of CO(2) into some of the pyruvate.
INTRODUCTION
Current attempts at elucidating enzymatic functions and other metabolic properties of the different structural elements of neural tissue (neuron perikarya, dendrites, axons and their synaptic terminals, specific postsynaptic sites, glia, etc .) are seriously limited by thorough entanglement of these elements in a three-dimensional network . A relatively unexplored approach to this problem is opened through electron microscope histochemistry. Although they have the undeniable drawback of giving only qualitative information, histochemical experiments have in their favor the significant advantage of direct visualization of metabolic activities in the various tissue elements .
A histochemical procedure for the demonstration under the electron microscope of the coupled oxidation of various respiratory substrates by in situ mitochondria has been developed recently (15), with the use of respiratory-linked accumulation of Sr2+. The present paper reports results obtained by this method, which show substantial metabolic differences between mitochondria located in the presynaptic axon terminals and those located in other neural elements.
MATERIALS AND METHODS
Respiration-linked, massive accumulation of Sr 2+, giving an electron-opaque reaction product (8), was applied as described previously (15), in order to demonstrate the coupled oxidation of the different respiratory substrates by in situ mitochondria .
RESULTS
Sr2+ uptake is demonstrated routinely in the so-called glomeruli (or cerebellar islands) of the cerebellar cortex granular layer, a complex synaptic apparatus of well known ultrastructure (3, 6, 7, 11). Perikarya, axons, and other tissue elements are always found nearby. It has to be emphasized, however, that the choice of the cerebellar cortex to represent the central nervous system in general is based upon the observation of an entirely similar Sr2+ accumulation pattern in all other central and peripheral neural regions studied. With all respiratory substrates tested, the needle-like or granular deposits are localized exclusively in the inner compartment of the more or less swollen mitochondria. Pyruvate (Fig. 1) supports Sr2+ accumulation in all neuronal mitochondria. However, in the mitochondria of the large mossy fiber terminal, large, scattered granules about 400-600 Å in diameter are visible, whereas in those localized in the surrounding terminal "digits" of the granule cell dendrites, diffuse, needle-like deposits are observed. Malate (Fig. 2) brings about the same accumulation pattern as does pyruvate, but only a reaction product of granules about 200 Å in diameter can be observed.
With the use of a-oxoglutarate ( Fig. 3) and succinate ( Fig . 4) as respiratory substrates, the mitochondria in the axon terminals are always devoid of precipitate . Abundant needle-shaped or granular deposits are characteristic, on the other hand, for mitochondria localized in other brain structures like the dendrites (Figs. 3, 4), perikarya (Fig . 3), and, most probably, glial cells . (Unequivocal identification of the glial elements meets, unfortunately, with considerable difficulty owing to their intense swelling.) It is noteworthy that in the case of precipitation in scattered granules some mitochondria in all brain structures examined appear to be devoid of reaction product . This is probably caused by the fact that the deposit may easily be outside the plane of sectioning in part of the mitochondrion . This possibility decreases obviously with the increasing number of granules and does not occur in the case of needle-like precipitation . This does not, of course, interfere with the recognition of the consistent, complete negativity of the numerous mitochondria of the large mossy terminals .
DISCUSSION
With respect to the presence and the form of the reaction product, three kinds of mitochondria are observed : (a) some do not contain any precipitate, while in others deposits are encountered that may be either (b) needle-like and diffusely distributed all over the mitochondrial profile, or (c) in the form of scattered granules . These differences can be used as a semiquantitative estimate of reaction intensity, since, according to the observations of Greenawalt and Carafoli (8), needle-shaped deposition occurs only if large amounts of Sr 2+ are accumulated, while in the case of more moderate uptake the deposits appear as large, scattered granules . It is striking that the mitochondria of presynaptic axon terminals never exhibit any reaction if a-oxoglutarate and succinate are used as substrates, and also that in pyruvate and malate oxidation only granular deposit appears . Conversely, with all substrates diffuse needle-like or abundant granular precipitate is seen in the mitochondria located in the perikarya and in the dendrites. These results indicate that the tricarboxylic acid cycle (TCAC) intermediates examined are less efficiently oxidized in the presynaptic mitochondria than in those located in perikarya, in dendritic, and, most probably, in glial structures . The demonstration of energy-dependent accumulation of Sr2 + with some TCAC intermediates proves, nevertheless, that the mitochondria in the axon terminals would be able to generate energy for synaptic function with at least some of the respiratory substrates .
The positive reaction with pyruvate and malate indicates, furthermore, that neither the damage of the respiratory chain and energy-coupling mechanism necessary for Sr2+ accumulation nor the lack of essential cofactors could account for the negative result in the presynaptic mitochondria . The observations are, therefore, suggestive of a block in these mitochondria at an early step of a-oxoglutarate and succinate oxidation, probably at the level of the primary dehydrogenase itself (14) . Lack in the presynaptic mitochondria of succinate oxidation demonstrable both with tetrazolium salt (22) and with ferricyanide (9) fits well into this picture . On the basis of these experiments, it cannot be decided whether the histochemical findings are due to the lack of the enzyme molecule or, conversely, only to blocked activity of the corresponding enzymes .
The histochemical data are thus at variance with the data obtained by fractionation methods (see, for references, 25), reporting considerable succinic dehydrogenase (SDH) activity in the synaptosomal fraction attributed to the mitochondria entrapped in the nerve ending particles . However, in view of the results reported, this activity may be mainly or entirely due to the presence of mitochondria from sources (perikarya, dendrites, glia) other than axon terminals . We think, therefore, that the characterization of mitochondria located in the synaptic boutons on the basis of synaptosomal fraction studies would not seem justified with respect to the possibility of contaminating mitochondria having higher specific activities . At any rate, SDH cannot be considered to be a reliable mitochondrial marker in the brain, whereas cytochrome oxidase, which shows uniform activity in all brain mitochondria (13), would be a more suitable guide for indicating the distribution of mitochondria through the fractions .
The Validity of the Histochemical Results
In order to postulate absent or significantly diminished oxidation of a respiratory substrate, it is an essential requirement to show that the respiratory substrate had free access to the enzyme molecule . Extramitochondrial penetration barriers were ruled out by performing the histochemical assay on isolated mitochondria (10) . But one has to consider, additionally, whether or not the mitochondria themselves are impermeable to the TCAC intermediates, which can cross the membrane only by the aid of specific permeases (1) . It might well be assumed that the observations are due to the lack of substrate penetration . In other words, the presynaptic mitochondria might be able to utilize only the TCAC intermediates generated within them .
A possible way to rule out this objection would be to incubate frozen sections, in which the mitochondrial membranes become disrupted to such an extent that even the soluble matrix enzymes are released into the incubation medium (4, 12, 24). Unfortunately, such treatment not only uncouples energy-linked processes but also destroys the fine structure of unfixed tissues so severely that the recognition under the electron microscope of the relevant tissue elements becomes virtually impossible. However, the pioneering work of Szentágothai (22), carried out on frozen sections of the ciliary ganglion of the chicken, in which the pre- and postsynaptic mitochondria are easily identifiable even at the light microscope level, has already shown a similar lack of SDH activity in the axon terminal mitochondria.
Another line of evidence can be derived from the fact that succinate and malate, according to Chappell (1), require the same permease to enter the mitochondria. Since malate is actively oxidized in the presynaptic mitochondria, it is reasonable to assume that succinate, too, should be able to penetrate. Unfortunately, it cannot be excluded that the accumulation of Sr2+ is due to the oxidation of pyruvate generated from malate outside the mitochondria. But even if the difference in the histochemical reactions of presynaptic and other mitochondria were not to be merely a permeability phenomenon, one might also think of some specific inactivation process occurring in the presynaptic axon terminals. Moreover, the apparent block of the TCAC might happen only under in vitro conditions and might not reflect in vivo circumstances.
FIGURE 1 Cerebellar glomerulus of rat. Sr2+ accumulation is supported by pyruvate. Granular precipitation in the mitochondria of the mossy fiber terminal (Mo) is marked. The mitochondria of the granule cell dendritic "digits" (Gd) are filled with diffuse, very fine, needle-like precipitate. X 29,000.
Cerebellar glomerulus of the rat. Sr2+ accumulation is supported by a-oxoglutarate. No reaction product is visible in the mitochondria of the large mossy "rosette" (Mo), whereas the mitochondria in the granule cell dendrites (Gd) and granule cell perikarya (Ge) contain numerous precipitate granules. X 29,000.
Although no satisfactory answer may be given to these questions, for the time being, the problem might be further elucidated by looking more closely at the oxidation of pyruvate (as is done in the following paragraph) and by doing further, similar studies involving other respiratory substrates (presented elsewhere, 16) .
The Oxidation of Pyruvate
A block in the TCAC would bring the whole cycle rapidly to a standstill, since no oxaloacetate, essential for the entrance of pyruvate, would be generated. However, the oxaloacetate may be provided by fixation of CO2 into pyruvate, generating oxaloacetate either directly through pyruvate carboxylase or via malate by the malic enzyme. Since in the nervous system the equilibrium of the latter favors the production of pyruvate and CO2, this path is less likely. On the other hand, the activity of pyruvate carboxylase could easily account for the CO2 fixation observed in the nervous system (5, 21). Indeed, in the isolated nonmyelinated lobster nerve the bulk of the oxaloacetate used for citrate synthesis is generated through CO2 fixation into pyruvate (2). The high level of CO2 fixation is, according to those authors, a compensation mechanism to replace the net loss of TCAC intermediates observed in the isolated axons.
In slices of the corpus striatum, however, only about 10-15% of oxaloacetate is produced by CO 2 fixation, according to the same authors, showing the predominance of the TCAC pathway generating this compound in the gray matter. This is also supported by the observation by McMillan and Mortensen (19), showing that around 10-15% of the total activity of the carbon skeleton of glutamate, after the intracisternal injection of (2-14C) pyruvate, is contained in the second and third carbon atoms. Although the amount of pyruvate consumed in this way may not be too large, its significance would be difficult to deny . The importance of CO2 coupled to the function of the TCAC has been stressed already by Waelsch et al. (23), particularly in view of its importance in the maintenance of nerve function (17,18) . One is tempted, on the basis of our data, to propose that the CO2 fixed by the nervous tissue is used predominantly for the maintenance of the synaptic metabolism .
Received for publication 25 January 1971, and in revised form 30 April 1971 . | 2014-10-01T00:00:00.000Z | 1971-10-01T00:00:00.000 | {
"year": 1971,
"sha1": "13e3c39839d4c0125cdc3009a521da0e4e6dafaa",
"oa_license": "CCBYNCSA",
"oa_url": "http://jcb.rupress.org/content/51/1/216.full.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "f7551d77fcce60166833fe2bd794ee124c0de6af",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
251402315 | pes2o/s2orc | v3-fos-license | Fundamental Exact Sequence for the Pro-\'Etale Fundamental Group
The pro-\'etale fundamental group of a scheme, introduced by Bhatt and Scholze, generalizes formerly known fundamental groups -- the usual \'etale fundamental group $\pi_1^{\mathrm{et}}$ defined in SGA1 and the more general group defined in SGA3. It controls local systems in the pro-\'etale topology and leads to an interesting class of"geometric covers"of schemes, generalizing finite \'etale covers. We prove the homotopy exact sequence over a field for the pro-\'etale fundamental group of a geometrically connected scheme $X$ of finite type over a field $k$, i.e. that the sequence $$1 \rightarrow \pi_1^{\mathrm{proet}}(X_{\bar{k}}) \rightarrow \pi_1^{\mathrm{proet}}(X) \rightarrow \mathrm{Gal}_k \rightarrow 1$$ is exact as abstract groups and the map $\pi_1^{\mathrm{proet}}(X_{\bar{k}}) \rightarrow \pi_1^{\mathrm{proet}}(X)$ is a topological embedding. On the way, we prove a general van Kampen theorem and the K\"unneth formula for the pro-\'etale fundamental group.
INTRODUCTION
In [BS15], the authors introduced the pro-étale topology for schemes. The main motivation was that the definitions of ℓ-adic sheaves and cohomologies in the usual étale topology are rather indirect. In contrast, the naive definition of e.g. a constant Q ℓ -sheaf in the pro-étale topology as X proét ∋ U ↦ Maps cts (U, Q ℓ ) is a sheaf and if X is a variety over an algebraically closed field, then H i (Xé t , Q ℓ ) = H i (X proét , Q ℓ ), where the right hand side is defined "naively" by applying the derived functor RΓ(X proét , −) to the described constant sheaf.
Along with the new topology, the authors of [BS15] introduced a new fundamental group - the pro-étale fundamental group. It is defined for a connected locally topologically noetherian scheme X with a geometric point x̄ and denoted π proét 1 (X,x̄). The name "pro-étale" is justified by the fact that there is an equivalence π proét 1 (X,x̄) − Sets ≃ Loc X proét between the categories of (possibly infinite) discrete sets with continuous action by π proét 1 (X,x̄) and locally constant sheaves of (discrete) sets in X proét . This is analogous to the classical fact that πé t 1 (X,x̄) − FSets is equivalent to the category of lcc sheaves on Xé t , where G − FSets denotes finite sets with a continuous G-action. This is the first striking difference between these fundamental groups: π proét 1 allows working with sheaves of infinite sets.
In fact, the authors of [BS15] study abstract "infinite Galois categories", which are pairs (C, F ) satisfying certain axioms that (together with an additional tameness condition) turn out to be equivalent to a pair (G − Sets, F G ∶ G − Sets → Sets) for a Hausdorff topological group G and the forgetful functor F G . In fact, one takes G = Aut(F ) with a suitable topology. This generalizes the usual Galois categories, introduced by Grothendieck to define πé t 1 (X,x̄). In Grothendieck's approach, one takes the category FÉt X of finite étale coverings together with the fibre functor F x̄ and obtains that πé t 1 (X,x̄) − FSets ≃ FÉt X .
Discrete sets with a continuous π proét 1 (X,x̄)-action correspond to a larger class of coverings, namely "geometric coverings", which are defined to be schemes Y over X such that Y → X: (1) is étale (not necessarily quasi-compact!), (2) satisfies the valuative criterion of properness. We denote the category of geometric coverings by Cov X (seen as a full subcategory of Sch X ). It is clear that FÉt X ⊂ Cov X . As Y is not assumed to be of finite type over X, the valuative criterion does not imply that Y → X is proper (otherwise we would get finite étale morphisms again), and so in general we get more. A basic example of a non-finite covering in Cov X can be obtained by viewing an infinite chain of (suitably glued) P 1 k 's as a covering of the nodal curve X = P 1 k /{0 ∼ 1} obtained by gluing 0 and 1 on P 1 k (to formalize the gluing one can use [Sch05]). Then, if k = k̄, π proét 1 (X,x̄) = Z and πé t 1 (X,x̄) = Ẑ. In this example, the prodiscrete group π SGA3 1 defined in Chapter X.6 of [SGA70] would give the same answer. This is essentially because our infinite covering is a torsor under a discrete group in Xé t . However, for more general schemes (e.g. an elliptic curve with two points glued), the category Cov X contains more.
So far, all the new examples were coming from non-normal schemes. This is not a coincidence, as for a normal scheme X, any Y ∈ Cov X is a (possibly infinite) disjoint union of finite étale coverings. In this case, π proét 1 (X,x̄) = π SGA3 1 (X,x̄) = πé t 1 (X,x̄). In general, πé t 1 can be recovered as the profinite completion of π proét 1 , and π SGA3 1 is the prodiscrete completion of π proét 1 . The groups π proét 1 belong in general to the class of Noohi groups. These can be characterized as Hausdorff topological groups G that are Raȋkov complete and such that the open subgroups form a basis of neighbourhoods at 1 G . However, open normal subgroups do not necessarily form a basis of open neighbourhoods of 1 G in a Noohi group.
In the case of π proét 1 , this means that there might exist a connected Y ∈ Cov X that do not have a Galois closure. Examples of Noohi groups include: profinite groups, (pro)discrete groups, but also Q ℓ and GL n (Q ℓ ). A slightly different example would be Aut(S), where S is a discrete set and Aut has the compact-open topology.
The fact that groups like GL n (Q ℓ ) are Noohi (but not profinite or prodiscrete) makes π proét 1 better suited to work with Q ℓ (or Q ℓ ) local systems. Indeed, denoting by Loc X proét (Q ℓ ) the category of Q ℓlocal systems on X proét , i.e. locally constant sheaves of finite-dimensional Q ℓ -vector spaces (again, the "naive" definition works in X proét ), one has an equivalence Rep cts,Q ℓ (π proét 1 (X,x)) ≃ Loc X proét (Q ℓ ).
This fails for πé t 1 , as any Q ℓ -representation of a profinite group must stabilize a Z ℓ -lattice, while Q ℓ -local systems (in the above sense) stabilize lattices only étale locally. The group π SGA3 1 is not enough either; as shown by [BS15,Example 7.4.9] (due to Deligne), if X is the scheme obtained by gluing two points on a smooth projective curve of genus g ≥ 1, there are Q ℓ -local systems on X that do not come from a representation of π SGA3 1 (X). We will often dropx from the notation for brevity. This usually does not matter much, as a different choice of the base point leads to an isomorphic group.
Classical results. In [SGA71], Grothendieck proved some foundational results regarding the étale fundamental group. Among them: (1) The fundamental exact sequence, i.e. the comparison between the "arithmetic" and "geometric" fundamental groups ([SGA71, Exp. IX, Théorème 6.1]): Let k be a field with algebraic closure k̄. Let X be a quasi-compact and quasi-separated scheme over k. If the base change X k̄ is connected, then there is a short exact sequence 1 → πé t 1 (X k̄ ) → πé t 1 (X) → Gal k → 1 of profinite topological groups.
(2) The homotopy exact sequence ([SGA71, Exp. X, Corollaire 1.4]): Let f ∶ X → S be a flat proper morphism of finite presentation whose geometric fibres are connected and reduced. Assume S is connected and let s̄ be a geometric point of S. Then there is an exact sequence πé t 1 (X s̄ ) → πé t 1 (X) → πé t 1 (S) → 1 of fundamental groups.
(3) "Künneth formula": Exp. X, Cor. 1.7]) Let X, Y be two connected schemes locally of finite type over an algebraically closed field k and assume that Y is proper. Letx,ȳ be geometric points of X and Y respectively with values in the same algebraically closed field extension K of k. Then the map induced by the projections is an isomorphism πé t 1 (X × k Y, (x,ȳ)) ∼ → πé t 1 (X,x) × πé t 1 (Y,ȳ) (4) Invariance of πé t 1 under extensions of algebraically closed fields for proper schemes ([SGA71, Exp. X, Corollaire 1.8]); (5) General van Kampen theorem (proved in a special case in [SGA71, IX §5] and generalized in [Sti06]); The aim of this and the subsequent article [Lar21] is to generalize statements (1) and (2), correspondingly, to the case of π proét 1 . In the present article, we also establish the generalizations of all the other points besides (2). The main difficulties in trying to directly generalize the proofs of Grothendieck are as follows: • geometric coverings of schemes (i.e. elements of Cov X defined above) are often not quasicompact, unlike elements of FÉt X . For example, for X a variety over a field k and connected Y ∈ Cov Xk , there may be no finite extension l k such that Y would be defined over l. Similarly, some useful constructions (like Stein factorization) no longer work (at least without significant modifications). • for a connected geometric covering Y ∈ Cov X , there is in general no Galois geometric covering dominating it. Equivalently, there might exist an open subgroup U < π proét 1 (X) that does not contain an open normal subgroup. This prevents some proofs that would work for π SGA3 1 to carry over to π proét 1 .
• The topology of π proét 1 is more complicated than the one of πé t 1 , e.g. it is not necessarily compact, which complicates the discussion of exactness of sequences.
Our results. Our main theorem is the generalization of the fundamental exact sequence. More precisely, we prove the following.
For a geometrically connected scheme X of finite type over a field k with algebraic closure k̄, the sequence 1 → π proét 1 (X k̄ ) → π proét 1 (X) → Gal k → 1 is exact as a sequence of abstract groups. Moreover, the map π proét 1 (X k̄ ) → π proét 1 (X) is a topological embedding and the map π proét 1 (X) → Gal k is a quotient map of topological groups.
The most difficult part is showing that π proét 1 (Xk) → π proét 1 (X) is injective or, more precisely, a topological embedding. This is Theorem 4.13.
As in the case of usual Galois categories, statements about exactness of sequences of Noohi groups translate to statements on the corresponding categories of G − Sets. If the groups involved are the proétale fundamental groups, this translates to statements about geometric coverings. We give a detailed dictionary in Prop. 2.37. As Noohi groups are not necessarily compact, the statements on coverings are equivalent to some weaker notions of exactness (e.g. preserving connectedness of coverings is equivalent to the map of groups having dense image). In fact, we first prove a "near-exact" version of Theorem 4.14 and obtain the above one as a corollary using an extra argument.
For π proét 1 (Xk) → π proét 1 (X) to be a topological embedding boils down to the following statement: every geometric covering Y of Xk can be dominated by a covering Y ′ that embeds into a base-change tok of a geometric covering Y ′′ of X (i.e. defined over k).
For finite coverings, the analogous statement is easy to prove; by finiteness, the given covering is defined over a finite field extension l k and one concludes quickly. This is also the case for infinite coverings detected by π SGA3 1 , see Prop. 4.8. But for general geometric coverings, the situation is much less obvious; as we show by counterexamples (Ex. 4.5 and Ex. 4.6), it is not true in general that a connected geometric covering of Xk is isomorphic to a base-change of a covering of X l for some finite extension l k. This property is crucially used in the proof of [SGA71, Exp. IX, Theorem 6.1], and thus trying to carry the classical proof of SGA over to π proét 1 fails. This last statement is, however, stronger than what we need to prove, and so does not contradict our theorem.
A useful technical tool across the article is the van Kampen theorem for π proét 1 . Its abstract form is proven by adapting the proof in [Sti06] to the case of Noohi groups and infinite Galois categories. For a morphism of schemes X ′ ↠ X of effective descent for Cov (satisfying some extra conditions), it allows one to write the pro-étale fundamental group of X in terms of the pro-étale fundamental groups of the connected components of X ′ and certain relations. By the results of [Ryd10], one can take X ′ = X ν → X to be the normalization morphism of a Nagata scheme X. As π proét 1 and πé t 1 coincide for normal schemes, this allows us to present π proét 1 (X) in terms of πé t 1 (X ν w ), where X ν = ⊔ w X ν w , and the (discrete) topological fundamental group of a suitable graph. In this case, the van Kampen theorem takes on concrete form and generalizes [Lav18,Thm. 1.17].
Theorem (van Kampen theorem, Cor. 3.19 + Rmk. 3.21 + Prop. 3.12, cf. [Sti06]). Let X be a Nagata scheme and X ν = ⊔ w X ν w its normalization written as a union of connected components. Then, after a choice of geometric points, étale paths between them and a maximal tree T within a suitable "intersection" graph Γ, there is an isomorphism π proét 1 (X,x̄) ≃ ( ( * top w πé t 1 (X ν w ,x̄ w ) * top π top 1 (Γ, T ) ) / ⟨R 1 , R 2 ⟩ ) Noohi , where R 1 , R 2 are two sets of relations described in Cor. 3.19 and (−) Noohi is the Noohi completion defined in Section 2.
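As a sanity check of how such a presentation collapses in the simplest case, consider the nodal curve from the introduction, i.e. P¹ over an algebraically closed field k̄ with 0 and 1 glued: its normalization is a single copy of P¹ and the intersection graph Γ has one vertex with one loop. Assuming the relation sets R₁, R₂ impose nothing extra in this case, the presentation reduces as in the following LaTeX sketch, which is consistent with the computation π proét 1 (X) = Z quoted earlier but is written out by us rather than taken from the paper:

\[
\pi_1^{\mathrm{pro\acute{e}t}}(X,\bar{x})
  \;\simeq\; \Bigl(\pi_1^{\mathrm{\acute{e}t}}(\mathbb{P}^1_{\bar{k}},\bar{x}_1)
      \ast^{\mathrm{top}} \pi_1^{\mathrm{top}}(\Gamma,T)\Bigr)^{\mathrm{Noohi}}
  \;\simeq\; \bigl(1 \ast^{\mathrm{top}} \mathbb{Z}\bigr)^{\mathrm{Noohi}}
  \;\simeq\; \mathbb{Z},
\]

since the étale fundamental group of the projective line over an algebraically closed field is trivial, a graph with one vertex and one edge has topological fundamental group Z, and a discrete group is already Noohi.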
In the proof of the main theorem, the van Kampen theorem allows us to construct π proét 1 (Xk)and π proét 1 (X)-sets in more concrete terms of graphs of groups involving πé t 1 's. We "explicitly" construct a Galois invariant open subgroup of a given open subgroup U < π proét 1 (Xk,x) in terms of "regular loops" (with respect to U ), see Defn. 4.20.
In fact, the existence of elements that are too far from being a product of regular loops is tacitly behind the counterexamples Ex. 4.5 and 4.6, while the fact that, despite this, there is still an abundance of (products of) regular loops (i.e. their closure is open) is behind our main proof. We also sketch a quicker but less constructive approach in Rmk. 4.27.
Another interesting result proven with the help of the van Kampen theorem is the Künneth formula.
Proposition (Künneth formula for π proét 1 , Prop. 3.29). Let X, Y be two connected schemes locally of finite type over an algebraically closed field k and assume that Y is proper. Let x̄, ȳ be geometric points of X and Y respectively with values in the same algebraically closed field extension K of k. Then the map induced by the projections is an isomorphism π proét 1 (X × k Y, (x̄,ȳ)) ∼ → π proét 1 (X,x̄) × π proét 1 (Y,ȳ).
Along the way, we prove the invariance of π proét 1 under extensions of algebraically closed fields for proper schemes (see Prop. 3.31) and give a short direct proof of the fact that π SGA3 1 (X k̄ ,x̄) ↪ π SGA3 1 (X,x̄), see Cor. 4.10.
In a separate article [Lar21], we discuss the homotopy exact sequence for π proét 1 . It is proven by constructing an infinite (i.e. non-quasi-compact) analogue of the Stein factorization. Although the construction does not use the main results of this article, the auxiliary results on Noohi groups and π proét 1 have proven to be very handy.
We hope that our techniques, with some extra tweaks and work, will allow to draw similar conclusions about other Noohi fundamental groups arising from the infinite Galois formalism. One such example could be the de Jong fundamental group π dJ 1 , defined in the rigid-analytic setting in [dJ95]. In a later joint work [ALY21], we have proven the existence of a specialization morphism between π proét 1 and π dJ 1 , relating π proét 1 to this more established fundamental group.
Acknowledgements. The main ideas and results contained in this article are part of my PhD thesis. I express my gratitude to my advisor Hélène Esnault for introducing me to the topic and for her constant encouragement. I would like to thank my co-advisor Vasudevan Srinivas for his support and suggestions. I am thankful to Peter Scholze for explaining some parts of his work to me via e-mail. I thank João Pedro dos Santos for his comments and feedback. I owe special thanks to Fabio Tonini, Lei Zhang and Marco D'Addezio from our group in Berlin for many inspiring mathematical discussions. I thank Piotr Achinger and Jakob Stix for their support. I would also like to thank the referee for careful reading, valuable remarks and urging me to write a more streamlined version of the main proof.
My PhD was funded by the Einstein Foundation. This work is a part of the project "Kapibara" supported by the funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 802787).
The major revision was prepared at JU Kraków and GU Frankfurt. I was supported by the Priority Research Area SciMat under the program Excellence Initiative -Research University at the Jagiellonian University in Kraków. This research was also funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) TRR 326 Geometry and Arithmetic of Uniformized Structures, project number 444845124.
• H < ○ G will mean that H is an open subgroup of G.
• For subgroups H < G, H nc will denote the normal closure of H in G, i.e. the smallest normal subgroup of G containing H. We will use ⟨⟨−⟩⟩ to denote the normal closure of the subgroup generated by some subset of G, i.e. ⟨⟨−⟩⟩ = ⟨−⟩ nc .
• For a field k, we will use k̄ to denote its (fixed) algebraic closure and k sep or k s to denote its separable closure (in k̄).
• The topological groups are assumed to be Hausdorff unless specified otherwise or appearing in a context where it is not automatically satisfied (e.g. as a quotient by a subgroup that is not necessarily closed). We will usually comment whenever a non-Hausdorff group appears.
• We assume (almost) every base scheme to be locally topologically noetherian. This does not cause problems when considering geometric coverings, as a geometric covering of a locally topologically noetherian scheme is locally topologically noetherian again - this is [BS15, Lm. 6.6.10].
• A "G-set" for a topological group G will mean a discrete set with a continuous action of G unless specified otherwise. We will denote the category of G-sets by G − Sets. We will denote the category of sets by Sets.
• We will often omit the base points from the statements and the discussion; by Cor. 3.18, this usually does not change much. In some proofs (e.g. involving the van Kampen theorem), we keep track of the base points.
2. INFINITE GALOIS CATEGORIES, NOOHI GROUPS AND π proét 1
2.1. Overview of the results in [BS15]. Throughout the entire article we use the language and results of [BS15], especially of Chapter 7, as this is where the pro-étale fundamental group was defined. Some familiarity with the results of [BS15, §7] is a prerequisite to read this article. We are going to give a quick overview of some of these results below, but we recommend keeping a copy of [BS15] at hand.
Definition 2.1. ([BS15, Defn. 7.1.1]) Fix a topological group G. Let G− Sets be the category of discrete sets with a continuous G-action, and let F G ∶ G − Sets → Sets be the forgetful functor. We say that G is a Noohi group if the natural map induces an isomorphism G → Aut(F G ) of topological groups.
Here, S ∈ Sets is considered with the discrete topology, Aut(S) with the compact-open topology, and Aut(F G ) is topologized using Aut(F G (S)) for S ∈ G−Sets. More precisely, the stabilizers Stab(s) of points s ∈ F G (S), for S ∈ G − Sets, form a basis of open neighbourhoods of the identity in Aut(F G ).
Example 2.2. The following groups are Noohi: Q ℓ , Q̄ ℓ for the colimit topology induced by expressing Q̄ ℓ as a union of finite extensions (in contrast with the situation for the ℓ-adic topology), and GL n (Q̄ ℓ ) for the colimit topology (see [BS15, Example 7.1.7]).
The notion of a Noohi group is tightly connected to the notion of an infinite Galois category, which we are about to introduce. Here, an object X ∈ C is called connected if it is not empty (i.e., not initial) and, for every subobject Y ⊆ X, either Y is initial or Y = X.
Definition 2.3. An infinite Galois category is a pair (C, F ) of a category C and a functor F ∶ C → Sets such that:
(1) C is a category admitting colimits and finite limits.
(2) Each X ∈ C is a disjoint union of connected (in the sense explained above) objects.
(3) C is generated under colimits by a set of connected objects. (4) F is faithful, conservative, and commutes with colimits and finite limits. The fundamental group of (C, F ) is the topological group π 1 (C, F ) ∶= Aut(F ), topologized by the compact-open topology on Aut(S) for any S ∈ Sets.
An infinite Galois category (C, F ) is tame if for any connected X ∈ C, π 1 (C, F ) acts transitively on F (X).
Example 2.4. If G is a topological group, then (G − Sets, F G ) is a tame infinite Galois category.
Theorem 2.5 ([BS15, Thm. 7.2.5]). Let (C, F ) be an infinite Galois category. Then:
(1) π 1 (C, F ) is a Noohi group.
(2) There is a natural identification of Hom cont (G, π 1 (C, F )) with the groupoid of functors C → G − Sets that commute with the fibre functors. (3) If (C, F ) is tame, then F induces an equivalence C ≃ π 1 (C, F ) − Sets.
The "tameness" assumption cannot be dropped as there exist infinite Galois categories that are not of the form (G − Sets, F G ), see [BS15,Ex. 7.2.3]. This was overlooked in [Noo08], where a similar formalism was considered.
Remark 2.6. The above formalism was also studied in [Lep10,Chapter 4] under the names of "quasiprodiscrete" groups and "pointed classifying categories".
In Section 2.2 below we will study "Noohi completion" and the dictionary between Noohi groups and G − Sets (see Section 2.3). For now, let us return to gathering the results from [BS15].
Definition 2.7. Let X be a locally topologically noetherian scheme. Let Y → X be a morphism of schemes such that: (1) it is étale (not necessarily quasi-compact!) (2) it satisfies the valuative criterion of properness. We will call Y a geometric covering of X. We will denote the category of geometric coverings by Cov X .
As Y is not assumed to be of finite type over X, the valuative criterion does not imply that Y → X is proper (otherwise we would simply get a finite étale morphism).
Example 2.8. For an algebraically closed fieldk, the category Cov Spec(k) consists of (possibly infinite) disjoint unions of Spec(k) and we have Cov Spec(k) ≃ Sets.
More generally, one has: Lemma 2.9. ([BS15, Lm. 7.3.8]) If X is a henselian local scheme, then any Y ∈ Cov X is a disjoint union of finite étale X-schemes.
Let us choose a geometric point x̄ ∶ Spec(k̄) → X on X. By Example 2.8, this gives a fibre functor F x̄ ∶ Cov X → Sets. By [BS15, Lemma 7.4.1], the pair (Cov X , F x̄ ) is a tame infinite Galois category. Then one defines:
Definition 2.10. The pro-étale fundamental group is defined as π proét 1 (X,x̄) = π 1 (Cov X , F x̄ ).
In other words, π proét 1 (X,x) = Aut(Fx) and this group is topologized using the compact-open topology on Aut(S) for any S ∈ Sets.
One can compare the groups π proét 1 (X,x), πé t 1 (X,x) and π SGA3 1 (X,x), where the last group is the group introduced in Chapter X.6 of [SGA70].
Lemma 2.11. For a scheme X, the following relations between the fundamental groups hold: (1) The group πé t 1 (X,x̄) is the profinite completion of π proét 1 (X,x̄). (2) The group π SGA3 1 (X,x̄) is the prodiscrete completion of π proét 1 (X,x̄).
As shown in [BS15, Example 7.4.9], π proét 1 (X,x) is indeed more general than π SGA3 1 (X,x). This can be also seen by combining Example 4.5 with Prop. 4.8 below.
The following lemma is extremely important to keep in mind and will be used many times throughout the paper. Recall that, for example, a normal scheme is geometrically unibranch. For a geometrically unibranch scheme X, every geometric covering is a disjoint union of finite étale coverings, and consequently π proét 1 (X,x̄) ≅ πé t 1 (X,x̄).
(1) A map f ∶ Y → X of schemes is called weakly étale if f is flat and the diagonal ∆ f ∶ Y → Y × X Y is flat as well. (2) The pro-étale site X proét is the site of weakly étale X-schemes, with covers given by fpqc covers.
This definition of the pro-étale site is justified by a foundational theorem -part c) of the following fact.
We write Loc X for the corresponding full subcategory of Shv(X proét ).
We are ready to state the following important result.
Topological invariance of the pro-étale fundamental group. We note that universal homeomorphisms of schemes induce equivalences on the corresponding categories of geometric coverings.
Proof. As Cov X ≃ Loc X , the theorem follows by the same proof as in [BS15,Lm. 5 Alternatively, one can argue more directly (i.e. avoiding the equivalence with Loc X ) as follows. By [Sta20, Theorem 04DZ], V ↦ V ′ = V × X X ′ induces an equivalence of categories of schemes étale over X and schemes étale over X ′ . By [Ryd10,Proposition 5.4.], this induces an equivalence between schemes étale and separated over respectively X and X ′ . The only thing left to be shown is that if for an étale separated scheme Y → X, the map Y × X X ′ → X ′ satisfies the existence part of the valuative criterion of properness, then so does Y → X. But this property can be characterized in purely topological terms (see [Sta20,Lemma 01KE]) and so the result follows from the fact that h is a universal homeomorphism.
2.2. Noohi completion.
Let HausdGps denote the category of Hausdorff topological groups (recall that we assume all topological groups to be Hausdorff, unless stated otherwise) and NoohiGps the full subcategory of Noohi groups. Let G be a topological group. Denote C G = G − Sets and let F G ∶ C G → Sets be the forgetful functor. Observe that (C G , F G ) is a tame infinite Galois category. Thus, the group Aut(F G ) is a Noohi group. It is easy to see that a morphism G → H defines an induced morphism of groups Aut(F G ) → Aut(F H ), and one checks that it is continuous. Let ψ N ∶ HausdGps → NoohiGps be the functor defined by G ↦ Aut(F G ). Denote also the inclusion i N ∶ NoohiGps → HausdGps.
Definition 2.18. We call ψ N (G) the Noohi completion of G and will denote it G Noohi .
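Unwinding the construction above: the Noohi completion and the canonical comparison map (whose image is shown to be dense in Lemma 2.21 below) are, in symbols,
\[ G^{\mathrm{Noohi}} \;=\; \psi_N(G) \;=\; \mathrm{Aut}(F_G), \qquad \alpha_G \colon G \longrightarrow G^{\mathrm{Noohi}}, \]
where α G is presumably the evident map sending g ∈ G to the automorphism of F G given by the action of g on every G-set.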
Example 2.19. In [BS15, Example 7.2.6], it was explained that the category of Noohi groups admits coproducts. Let G 1 , G 2 be two Noohi groups and let G 1 * N G 2 denote their coproduct as Noohi groups. Let G 1 * top G 2 be their topological coproduct. It exists and it is a Hausdorff group ([Gra48]). Then G 1 * N G 2 ≃ (G 1 * top G 2 ) Noohi .
Proposition 2.20. For a topological group G, the functor F G induces an equivalence of categories G − Sets → G Noohi − Sets. Moreover, α * G ○ F G ≃ id, and thus α * G is an equivalence of categories, too.
Proof. The first part follows directly from [BS15, Theorem 7.2.5]. The natural isomorphism α * G ○ F G ≃ id is clear from the definitions. It follows that α * G is an equivalence.
Lemma 2.21. For any topological group G, the image of α G ∶ G → G Noohi is dense.
Proof. Let U ⊂ G Noohi be open. As G Noohi is Noohi, there exists q ∈ G Noohi and an open subgroup V < ○ G Noohi such that qV ⊂ U . The quotient G Noohi /V gives a G Noohi -set. It is connected in the category G Noohi − Sets and, by Prop. 2.20, α * G (G Noohi /V ) is connected. Thus, the action of G on G Noohi /V is transitive and so there exists g ∈ G such that α G (g) ⋅ [V ] = [qV ], i.e. α G (g) ∈ qV . Thus, the image of α G is dense.
Observation. Let f ∶ H → G be a map of topological groups. Directly from the definitions, one sees that the square formed by f , the induced map H Noohi → G Noohi and the canonical maps α H and α G commutes.
Corollary 2.23. The functor ψ N is a left adjoint of i N .
Remark 2.24. There are a few places where we write G Noohi for a non-Hausdorff group G. This is mostly to avoid a large overline sign over a subgroup described by generators. In these cases, we mean (G Hausd ) Noohi , where G Hausd is the maximal Hausdorff quotient. As (−) Hausd is a left adjoint as well, this usually does not cause problems. This also provides a left adjoint to the inclusion of NoohiGps into the category TopGps of all topological groups.
We now move towards a more explicit description of the Noohi completion.
Lemma 2.25. Let (G, τ ) be a topological group. Denote by B the collection of sets of the form gV , where g ∈ G and V is an open subgroup of (G, τ ). Then B is a basis of a group topology τ ′ on G that is weaker than τ , and open subgroups of (G, τ ) form a basis of open neighbourhoods of 1 G in (G, τ ′ ).
Moreover, the natural map
Proof.
Corollary 2.27. For a topological group (G, τ ), the Noohi completion (G, τ ) Noohi is canonically isomorphic to the Raȋkov completion of (G, τ ′ ), where τ ′ denotes the topology described in the previous lemma.
Proof. We combine Fact 2.26 with the last lemma and get (G, τ ) Noohi ≃ (G, τ ′ ) Noohi ≃ the Raȋkov completion of (G, τ ′ ).
Observation 2.28. Let G be a topological group and H a normal subgroup. Then the full subcategory of G − Sets of objects on which H acts trivially is equal to the full subcategory of G − Sets of objects on which its closure H acts trivially, and it is equivalent to the category of G/H − Sets. So, it is an infinite Galois category with the fundamental group equal to (G/H) Noohi .
Lemma 2.29. Let X be a connected, locally path-connected, semilocally simply-connected topological space and x ∈ X a point. Let F x be the functor taking a covering space Y → X to the fibre Y x over the point x ∈ X. Then (TopCov(X), F x ) is a tame infinite Galois category and π 1 (TopCov(X), F x ) = π top 1 (X, x), where we consider π top 1 (X, x) with the discrete topology. Here, TopCov(X) denotes the category of covering spaces of X.
Proof. We first claim that there is an isomorphism: (TopCov(X), F x ) ≃ (π top 1 (X, x)−Sets, F π top 1 (X,x) ). This is in fact a classical result in algebraic topology, which can be recovered from [Ful95,Ch. 13] In a topological group open subgroups are also closed, so a thickly closed subgroup is also an intersection of closed subgroups, so it is closed in G. Observe also that an arbitrary intersection of thickly closed subgroups is thickly closed. This justifies, for example, the existence of the smallest normal thickly closed subgroup containing a given group. In fact, we can formulate a more precise observation.
Observation 2.31. Let H < G be a subgroup of a topological group G. Then the smallest normal thickly closed subgroup of G containing H is equal to the thick closure of H nc (i.e. the intersection of all open subgroups of G containing H nc ), where H nc is the normal closure of H in G.
Observation 2.32. Let G be a topological group such that the open subgroups form a local base at 1 G . Let W ⊂ G be a subset. Then the topological closure of W can be written as W̄ = ⋂ V < ○ G W V .
The following lemma can be found on p.79 of [Lep10]. Let us make an easy observation, that will be useful to keep in mind while reading the proof of the technical proposition below. ]. This justifies using words "injective" or "surjective" when speaking about maps in (C, F ).
Recall the following fact.
Observation. Let f ∶ G ′ → G be a surjective map of topological groups. Then the induced morphism of infinite Galois categories C G → C G ′ is fully faithful.
Proposition 2.37. Let h ′ ∶ G ′′ → G ′ and h ∶ G ′ → G be continuous homomorphisms of topological groups and let H ∶ C G → C G ′ and H ′ ∶ C G ′ → C G ′′ be the corresponding maps of the infinite Galois categories. Then the following hold: (1) The map h ′ ∶ G ′′ → G ′ is a topological embedding if and only if for every connected object X in C G ′′ , there exist connected objects X ′ ∈ C G ′′ and Y ∈ C G ′ and maps X ′ ↠ X and X ′ ↪ H ′ (Y ).
(2) The following are equivalent: (a) the map h ∶ G ′ → G has dense image; (b) the functor H maps connected objects to connected objects.
if and only if the composition H ′ ○ H maps any object to a completely decomposed object. (5) Assume that h ′ (G ′′ ) ⊂ ker(h) and that h ∶ G ′ → G has dense image. Then the following conditions are equivalent: (a) the induced map (G ′ /ker(h)) Noohi → G is an isomorphism and the smallest normal thickly closed subgroup containing Im(h ′ ) is equal to ker(h); (b) every connected object Y of C G ′ whose pullback H ′ (Y ) is completely decomposed is isomorphic to H(Z) for some Z ∈ C G .
Proof.
(1) The proof is virtually the same as for usual Galois categories, but there every injective map is automatically a topological embedding (as profinite groups are compact). Assume that For the other implication: we want to prove that G ′′ → G ′ is a topological embedding under the assumption from the statement. It is enough to check that the set of preimages h ′ −1 (B) of some basis B of opens of e G ′ forms a basis of opens of e G ′′ . Indeed, assume that this is the case. Firstly, observe that it implies that h ′ is injective, as both G ′′ and G ′ are Hausdorff (and in particular T 0 ). If U is an open subset of G ′′ , then we can write The surjectivity of the first map means that we can assume (up to replacingŨ by a conjugate)Ũ ⊂ U . The injectivity of the second means that we can assume (up to replacing V by a conjugate) that h ′−1 (V ) ⊂Ũ . Indeed, the injectivity implies that if h ′ (g ′′ )V = V , then g ′′Ũ =Ũ which translates immediately to h ′−1 (V ) ⊂Ũ . So we have also h ′−1 (V ) ⊂ U , which is what we wanted to prove.
(2) The equivalence between (a) and (b) follows from the observation that a map between Noohi groups G ′ → G has a dense image if and only if for any open subgroup U of G, the induced map on sets G ′ → G U is surjective. Here, we only use that open subgroups form a basis of open neighbourhoods of 1 G ∈ G. Now, the functor H is automatically faithful and conservative (because F G ′ ○H = F G is faithful and conservative). Assume that (b) holds. Let S, T ∈ G−Sets and let g ∈ Hom G ′ −Sets (H(S), H(T )). We have to show that g comes from g 0 ∈ Hom G−Sets (S, T ). We can and do assume S, T connected for that. Let Γ g ⊂ H(S) × H(T ) be the graph of g. It is a connected subobject. As . This shows that G ′ U pulls back to a completely decomposed object.
The other way round: assume that for every connected object Y of C G ′ such that H ′ (Y ) contains a final object, H ′ (Y ) is completely decomposed. Let U be an open subgroup of G ′ containing h ′ (G ′′ ). Then G ′′ fixes [U ] ∈ G U and so, by assumption, fixes every [g ′ U ] ∈ G U . This implies that for any As this is true for any U containing h ′ (G ′′ ) we get that h ′ (G ′′ ) = (h ′ (G ′′ ) nc ) and the last group is the smallest normal thickly closed subgroup of G ′ containing h ′ (G ′′ ) (Observation 2.31).
(4) The same as for usual Galois categories, we use that Sets G ′′ acts trivially on S}, the assumption of (b) implies that the functor G − Sets → G ′ ker(h) − Sets is essentially surjective. By the global assumption that G ′ → G has dense image, it is fully faithful (see (2)).
As ker(h) acts trivially on H(Z), we conclude that it also acts trivially on Y . Thus, by abuse of notation, . We give two proofs of this fact.
First proof: We have proven above that (b) ⇒ (G ′ ker(h)) Noohi ≃ G. Let N be the smallest normal thickly closed subgroup of G ′ containing h ′ (G ′′ ). Observe that N ⊂ ker h (as ker(h) is thickly closed). Let U be an open subgroup containing N . We want to show that U contains ker h. This will finish the proof as both N and ker h are thickly closed. Write Y = G ′ U . Observe that G ′ U pulls back to a completely decomposed G ′′ -set if and only if . So N ⊂ U implies that Y pulls back to a completely decomposed G ′′ -set and, by assumption, Y is isomorphic to a pull-back of some G-set and so ker(h) acts trivially on Y . This implies that ker h ⊂ U , which finishes the proof. Alternative proof: We already know that (b) ⇒ (G ′ ker(h)) Noohi ≃ G. Let N ⊂ ker(h) be as in the first proof above. Consider the map G N ↠ G ker(h). The assumption (b) and full faithfulness of H (by the global assumption and using (2)) imply that completely decomposed object. As we have seen while proving "(b) ⇒ (a)", this implies Observation 2.31, there is N = (H nc ). By assumption, we have N = ker h and so we conclude that ker h ⊂ U . But then, by assumption To distinguish between exactness in the usual sense (i.e. on the level of abstract groups) and notions of exactness appearing in Prop. 2.37, we introduce a new notion. It will be mainly used in the context of Noohi groups.
Definition 2.38. Consider a sequence of continuous homomorphisms of topological groups G ′′ → G ′ → G, with the maps denoted h ′ and h. Then we will say that the sequence is (1) nearly exact on the right if h has dense image, (2) nearly exact in the middle if the smallest normal thickly closed subgroup of G ′ containing the image of h ′ (its thick closure) is equal to the kernel of h, (3) nearly exact if it is both nearly exact on the right and nearly exact in the middle.
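For orientation: the example that motivates this terminology, established in Theorem 4.4 below for a geometrically connected scheme X of finite type over a field k, is the sequence
\[ \pi_1^{\mathrm{proét}}(X_{\bar k},\bar x)\ \xrightarrow{\ \iota\ }\ \pi_1^{\mathrm{proét}}(X,\bar x)\ \xrightarrow{\ p\ }\ \mathrm{Gal}_k\ \longrightarrow\ 1. \]
Near exactness on the right says that p has dense image (it is in fact surjective there), and near exactness in the middle says that the thick closure of im(ι) equals ker(p).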
We end this subsection with a lemma on topological groups and their Noohi completions that will be used later in the proof of the main theorem.
Lemma 2.39. Let G be a topological group andG be a subgroup of G Noohi such that the canonical map be the discrete set that comes naturally with an abstract action byG. If the induced abstract G-action on S is continuous, Proof. By the universal property, the G-action on S extends to G Noohi and this action is transitive. is closed in V . But asG contains the image of G, it is dense in G Noohi , and from the definition of V it follows that V 0 has to be dense in V . Putting this together, we
2.4. A remark on valuative criteria. We will sometimes shorten "the valuative criterion of properness" to "VCoP". It is useful to keep in mind the precise statements of different parts of the valuative criterion, see [Sta20, Lemma 01KE], [Sta20, Section 01KY] and [Sta20, Lemma 01KC]. Let us prove a lemma (which is implicit in [BS15]) stating that VCoP can be checked fpqc-locally.
Lemma 2.40. Let g ∶ X → S be a map of schemes. The properties: (a) g is étale (b) g is separated (c) g satisfies the existence part of VCoP can be checked fpqc-locally on S.
Moreover, the property (c) can be also checked after a surjective proper base-change.
Proof. The cases of étale and separated morphisms are proven in [Sta20, Section 02YJ]. For the last part: satisfying the existence part of VCoP is equivalent to specializations lifting along any base-change of g ([Sta20, Lemma 01KE]). It is easy to see that this property can be checked Zariski locally. Thus, if S ′ → S is an fpqc cover such that the base-change g ′ ∶ X ′ → S ′ satisfies specialization lifting for any base-change, we can assume that S, S ′ are affine with S ′ → S faithfully flat. Let T → S be any morphism. Consider the diagram: It is enough to show, that W is stable under specialization or, equivalently, that T ∖ W is stable under generalization. But, from flatness ([Sta20, Lemma 03HV]), generalizations lift along S ′ × S T → T . Thus, it is enough to show that the preimage of T ∖ W in S ′ × S T is stable under generalizations or, equivalently (using the surjectivity of S ′ × S T → T ), that the preimage of W in S ′ × S T is closed under specializations. But an easy diagram chasing (using the fact that the right square of the diagram above is cartesian) shows that the preimage of W in S ′ × S T is the image of a closed subset of S ′ × S X × T . We conclude, because specializations lift along S ′ × S X × S T → S ′ × S T by assumption.
The last part of the statement is proven in an analogous way.
Lemma 2.41. Let f ∶ Y → X be a geometric covering of a locally topologically noetherian scheme. Then f is separated.
3. SEIFERT-VAN KAMPEN THEOREM FOR π proét 1 AND ITS APPLICATIONS

3.1. Abstract Seifert-van Kampen theorem for infinite Galois categories. We aim at recovering a general version of the van Kampen theorem, proven in [Sti06], in the case of the pro-étale fundamental group. Most of the definitions and proofs are virtually the same as in [Sti06], after replacing "Galois category" with "(tame) infinite Galois category" and "profinite" with "Noohi", but still some additional technical difficulties appear here and there. We make the necessary changes in the definitions and deal with those difficulties below. Denote by ∆ ≤2 the category whose objects are [0], [1], [2] (where [n] = {0, . . . , n}) and whose morphisms are the strictly increasing maps; a 2-complex in a category C is a contravariant functor ∆ ≤2 → C . By a 2-complex E we mean a 2-complex in the category of sets. We often think of E as a category: its objects are the elements of E n for n = 0, 1, 2 and its morphisms are obtained by defining ∂ ∶ s → t where s ∈ E n and t = E(∂)(s).
and its corresponding linear map d ∶ ∆ m → ∆ n sending e i to e ∂(i) , and s ∈ E n and x ∈ ∆ m . We call E connected if E is a connected topological space.
Definition 3.1. Noohi group data (G , α) on a 2-complex E consists of the following: (1) A mapping (not necessarily a functor!) G from the category E to the category of Noohi groups: to a complex s ∈ E n is attributed a Noohi group G (s) and to a map an element α vef ∈ G (v) (its existence is a part of the definition) such that the following diagram commutes: Observe that there is an obvious notion of a morphism of (G , α)-systems: a collection of G (s)-equivariant maps that commute with the m's. Let us denote by lcs(E, (G , α)) the category of locally constant (G , α)-systems.
Let M ∈ lcs(G , α) for Noohi group data (G , α) on some 2-complex E. We define oriented graphs E ≤1 and M ≤1 (which will be an oriented graph over E ≤1 ) as in [Sti06], but our graphs M ≤1 are possibly infinite. For E ≤1 the vertices are E 0 and the edges are E 1 , such that ∂ 0 (resp. ∂ 1 ) map an edge to its target (resp. origin). For M ≤1 the vertices are ⊔ v∈E 0 M v and ⊔ e∈E 1 M e serves as the set of edges. The target/origin maps are induced by the m ∂ and the map M ≤1 → E ≤1 is the obvious one.
There is an obvious topological realization functor for graphs ⋅ . By applying this functor to the above construction we get a topological covering (because M is locally constant) M ≤1 → E ≤1 . This gives a functor Choosing a maximal subtree T of E ≤1 gives a fibre functor is an infinite Galois category and the resulting fundamental group π 1 (Cov( E ≤1 ), F T )) is isomorphic to π top 1 ( E ≤1 ) (see Lemma 2.29) which is in turn isomorphic to Fr(E 1 ) ⟨⟨{⃗ e e ∈ T } Fr(E 1 ) ⟩⟩ = Fr(⃗ e e ∈ E 1 ∖T ), where Fr(. . .) denotes a free group on the given set of generators and ⟨⟨{⃗ e e ∈ T } F r(E 1 ) ⟩⟩ denotes the normal closure in Fr(E 1 ) of the subgroup generated by {⃗ e ∈ T }. Here, ⃗ e acts on F T (M ) via π 0 (p −1 ( T )) ≅ π 0 (p −1 (∂ 0 (e))) ≅ π 0 (p −1 ( e )) ≅ π 0 (p −1 (∂ 1 (e)) ≅ π 0 (p −1 ( T )) As in [Sti06], for every s ∈ E 0 and M ∈ lcs(E, (G , α)) we have that F T (M ) can be seen canonically as a G (s)-module by M s = π 0 (p −1 (s)) ≅ π 0 (p −1 (T )). Denote π 1 (E ≤1 , T ) = Fr(E 1 ) ⟨⟨{⃗ e e ∈ T } Fr(E 1 ) ⟩⟩. Putting the above together we get a functor In the setting of usual ("finite") Galois categories, it is usually enough to say that a particular morphism between two Galois categories is exact, because of the following fact ([Sta20, Tag 0BMV]): Let G be a topological group. Let F ∶ Finite−G−Sets → Sets be an exact functor with F (X) finite for all X. Then F is isomorphic to the forgetful functor.
As we do not know if an analogous fact is true for infinite Galois categories, given two infinite Galois categories (C, F ), (C ′ , F ′ ) and a morphism φ ∶ C → C ′ , we are usually more interested in checking whether F ≃ F ′ ○φ. If φ satisfies this condition, it also commutes with finite limits and arbitrary colimits. Indeed, we have a map colimφ(X i ) → φ(colimX i ) that becomes an isomorphism after applying F ′ (as F ′ and F = F ′ ○ φ commute with colimits) and we conclude by conservativity of F ′ . Similarly for finite limits.
Proposition 3.5. Let (E, (G , α)) be a connected 2-complex with Noohi group data. Define a functor F ∶ lcs(E, (G , α)) → Sets in the following way: pick any simplex s and define F by M ↦ M s . Then (lcs(E, (G , α)), F ) is a tame infinite Galois category.
Moreover, the obtained functor Q ∶ lcs(E, (G , α)) → ( * N v∈E 0 G (v) * N π 1 (E ≤1 , T )) − Sets satisfies F ≃ F forget ○ Q and maps connected objects to connected objects.
Proof. Colimits and finite limits: they exist simplexwise and taking limits and colimits is functorial so we get a system as candidate for a colimit/finite limit. This will be a locally constant system, as the colimit/finite limit of bijections between some G-sets is a bijection.
Each M is a disjoint union of connected objects: let us call N ∈ lcs(G , α) a subsystem of M if there exists a morphism N → M such that for any simplex s the map N s → M s is injective (we then identify, for any simplex s, N s with a subset of M s ). We can intersect such subsystems in an obvious way and observe that it gives another subsystem. So for any element a ∈ M v there exists the smallest subsystem N of M such that a ∈ N v . We see readily that for any vertices v, v ′ and a ∈ M v , a ′ ∈ M v ′ the smallest subsystems N and N ′ containing one of them are either equal or disjoint (in the sense that, for each simplex s, N s and N ′ s are disjoint as subsets of M s ). It is easy to see that in this way we have obtained a decomposition of M into a disjoint union of connected objects.
F is faithful, conservative and commutes with colimits and finite limits: observe that φ s ∶ lcs(E, (G , α)) ∋ M ↦ M s ∈ G (s) − Sets is faithful, conservative and commutes with colimits and finite limits and It is obvious that F ≃ F forget ○ Q. We are now going to show that Q preserves connected objects. Take a connected object M ∈ lcs(E, (G , α)) and suppose that N is a non-empty subset of F T (M ) stable under the action of π 1 (E ≤1 , T ) and G (v) for v ∈ E 0 . Stability under the action of π 1 (E ≤1 , T ) shows that N can be extended to a subgraph N ≤1 ⊂ M ≤1 : for an edge e of M ≤1 we declare it to be an edge of N ≤1 if one of its ends touches a connected component of p −1 ( T ) corresponding to an element of N . This is well defined, as in this case both ends touch such a component -this is because the action of m ∂ 1 m −1 ∂ 0 equals the action of → e∈ π 1 (E ≤1 , T ). Now we want to show that it extends to 2-simplexes. This is a local question and we can restrict to simplices in the boundary of a given face f ∈ E 2 . Define N f as a preimage of N s via any ∂ such that ∂(f ) = s. We see that if the choice is independent of s, then we have extended N to a locally constant system. To see the independence it is enough to prove that if (vef ) is a barycentric subdivision (i.e. we have ∂ and ∂ ′ such that ∂ ′ (f ) = e and ∂(e) = v), then m −1 and thus N can be seen as an element of lcs(E, (G , α)) which is a subobject of M , which contradicts connectedness of M .
To see that lcs(E, (G , α)) is generated under colimits by a set of connected objects, observe that in the above proof of the fact that Q preserves connected objects, we have in fact shown the following statement. We want to show that there exists a set of connected objects in lcs(G , α) such that any connected object of lcs(G , α) is isomorphic to an element in that set. As an analogous fact is true in Looking at the graph of this isomorphism, we find a connected subobject Z ⊂ QX × QY that maps isomorphically on QX and QY via the respective projections. By the above fact, we know that there exists W ⊂ X × Y such that QW = Z. Because F ≃ F forget ○ Q and F is conservative, we see that the projections W → X and W → Y must be isomorphisms. This shows X ≃ Y as desired.
The only claim left is that lcs(E, (G , α)) is tame, but this follows from tameness of ( * N v∈E 0 G (v) * N π 1 (E ≤1 , T )) − Sets, the equality F ≃ F forget ○ Q and the fact that Q maps connected objects to connected objects.
Let us denote by π 1 (E, G , s) the fundamental group of the infinite Galois category (lcs(E, G ), F s ). The proposition above tells us that there is a continuous map of Noohi groups with dense image ( * N v∈E 0 G (v) * N π 1 (E ≤1 , T )) → π 1 (E, G , s). We now proceed to describe the kernel.
Theorem 3.7. (abstract Seifert-Van Kampen theorem for infinite Galois categories) Let E be a connected 2-complex with group data (G , α). With notations as above, the functor Q induces an isomorphism of Noohi groups where ⟨⟨−⟩⟩ denotes the normal closure of the subgroup generated by the indicated elements and α's come from the definition of a (G , α)-system for each given f .
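Following [Sti06, Thm. 3.2] and matching the geometric version spelled out in Cor. 3.19 below, the isomorphism should take the following shape (the exact form of the face relations, built from the α's, is indicative only):
\[ \pi_1(E,\mathcal G,s)\;\cong\;\Bigl(\bigl(\textstyle\ast^{N}_{v\in E_0}\,\mathcal G(v)\bigr)\ast^{N}\pi_1(E^{\leq 1},T)\,\big/\,H\Bigr)^{\mathrm{Noohi}}, \]
with H generated, as a normal subgroup, by the edge relations \( \mathcal G(\partial_1)(g)\,\vec e\,\mathcal G(\partial_0)(g)^{-1}\vec e^{\,-1} \) for e ∈ E 1 and g ∈ G (e), together with the face relations built from the elements α vef for f ∈ E 2 ; the Noohi completion of the quotient is taken in accordance with Remark 3.8 below.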
Proof. The same proof as the proof of [Sti06, Thm. 3.2 (2)] shows that Q induces an equivalence of categories between the infinite Galois categories (lcs(E, G ), F s ) and the full subcategory of objects of * N v∈E 0 G (v) * N π 1 (E ≤1 , T ) − Sets on which H acts trivially. We conclude by Observation 2.28. Remark 3.8. It is important to note that we can replace free Noohi products by free topological products in the statement above, as we take the Noohi completion of the quotient anyway. More precisely, the canonical map of a group having the same generators as H. This is because the categories of G − Sets are the same for those two Noohi groups.
Fact 3.9. The topological free product * top i G i of topological groups has as its underlying abstract group the free product of abstract groups * i G i . This follows from the original construction of Graev [Gra48].
3.2. Application to the pro-étale fundamental group.
Descent data. Let T • be a 2-complex in a category C and let F → C be a category fibred over C , with F (S) as a category of sections above the object S.
Definition 3.10. The category DD(T • , F ) of descent data for F over C relative to T • has as objects pairs (X ′ , φ), where X ′ ∈ F (T 0 ) and φ ∶ T (∂ 0 ) * X ′ → T (∂ 1 ) * X ′ is an isomorphism in F (T 1 ), such that the cocycle condition holds, i.e., the following commutes in F (T 2 ):
Definition 3.11. In the above context h ∶ S ′ → S is called an effective descent morphism for F if h * is an equivalence of categories.
Proposition 3.12. A finite surjective morphism of schemes is an effective descent morphism for geometric coverings.
Proof. This was proven by Lavanda and relies on the results of [Ryd10]. More precisely, this follows from Prop. 5.4 and Thm. 5.19 of [Ryd10], then checking that the obtained algebraic space is a scheme (using étaleness and separatedness, see [Sta20, Tag 0417]) and that it still satisfies the valuative criterion (see Lemma 2.40).
Discretisation of descent data. We would like to apply the procedure described in [Sti06, §4.3] but to the pro-étale fundamental group. However, in the classical setting of Galois categories, given a category C and functors F, F ′ ∶ C → Sets such that (C, F ) and (C, F ′ ) are Galois categories (i.e. F, F ′ are fibre functors), there exists an isomorphism (not unique) between F and F ′ . Choosing such an isomorphism is called "choosing a path" between F and F ′ . However, it is not clear whether an analogous statement is true for tame infinite Galois categories, as the proof does not carry over to this case (see the proof of [Sta20, Lemma 0BN5]).
Question 3.13. Let C be a category and F, F ′ ∶ C → Sets be two functors such that (C, F ) and (C, F ′ ) are tame infinite Galois categories. Is it true that F and F ′ are isomorphic?
As we do not know the answer to this question, we have to make an additional assumption when trying to discretise the descent data. Fortunately, it will always be satisfied in the geometric setting, which is our main case of interest.
Definition 3.14. Let (C, F ), (C ′ , F ′ ) be two infinite Galois categories and let φ ∶ C → C ′ be a functor. We say that φ is compatible if there exists an isomorphism of functors F ≃ F ′ ○ φ.
Let F → C be fibred in tame infinite Galois categories. More precisely, we have a notion of connected objects in C and any T ∈ C is a coproduct of connected components. Over connected objects F takes values in tame infinite Galois categories (i.e. over a connected Y ∈ C there exists a functor F Y ∶ F (Y ) → Sets such that (F (Y ), F Y ) is a tame infinite Galois category but we do not fix the functor).
Definition 3.15. Let T • be a 2-complex in C . Let E = π 0 (T • ) be its 2-complex of connected components: the 2-complex in Sets built by degree-wise application of the connected component functor. We will say that T • is a compatible 2-complex if one can fix fibre functors F s of F (s) for each simplex s ∈ E such that (F (s), F s ) is tame and for any boundary map ∂ ∶ s → s ′ there exists an isomorphism of fibre functors F s ○ T (∂) * ∼ → F s ′ . The 2-complexes that will appear in the (geometric) applications below will always be compatible. From now on, we will assume all 2-complexes to be compatible, even if not stated explicitly. Let T • be a compatible 2-complex in C . Fix fibre functors F s and isomorphisms between them as in the definition of a compatible 2-complex. For any ∂, denote the fixed isomorphism by ⃗ ∂. For a 2-simplex (vef ) of the barycentric subdivision with ∂ ′ ∶ f → e and ∂ ∶ e → v we define or, more precisely, We define Noohi group data (G , α) on E in the following way: G (s) = π 1 (F (s), F s ) for any simplex . We define elements α as described above and we easily check that this gives Noohi group data.
Proposition 3.16. The choice of functors F s and the choice of ⃗ ∂ as above fix a functor discr ∶ DD(T • , F ) → lcs(E, (G , α)), which is an equivalence of categories.
Proof. Given a descent datum (X ′ , φ) relative T • we have to attach a locally constant (G , α)-system on E in a functorial way. For v ∈ E 0 , e ∈ E 1 and f ∈ E 2 , the definition of suitable G (v) (or G (e) or G (f )) sets and maps m ∂ between them can be given by the same formulas as in [Sti06,Prop. 4.4] and also the same computations as in [Sti06,Prop. 4.4] show that we obtain an element of lcs(E, (G , α)). Again, the reasoning of [Sti06,Prop. 4.4] gives a functor in the opposite direction: given M ∈ lcs(E, (G , α)) we define Maps from edges to vertices define a map φ ∶ T (∂ 0 ) * X ′ → T (∂ 1 ) * X ′ and to check the cocycle condition one reverses the argument of the proof that discr gives a locally constant system.
To apply the last proposition we need to know that the compatibility condition holds in the setting we are interested in.
Lemma 3.17. Let f ∶ X ′ → X be a morphism of two connected locally topologically noetherian schemes and let x ′ , x be geometric points on X ′ , X, correspondingly. Then the functor f * ∶ Cov X → Cov X ′ is a compatible functor between the infinite Galois categories (Cov X , Fx) and (Cov X ′ , Fx′ ), i.e. the functors Fx and Fx′ ○ f * are isomorphic.
Proof. Looking at the image ofx ′ (as a geometric point) on X, we reduce to the case when bothx ′ and x lie on the same scheme X. In that case we proceed as in the second part of the proof of [BS15, Lm. 7.4.1].
The above results combine to recover the analogue of [Sti06,Cor. 5.3] in the pro-étale setting.
Corollary 3.19. Let h ∶ S ′ → S be an effective descent morphism for geometric coverings. Assume that S is connected and S, S ′ , S ′ × S S ′ , S ′ × S S ′ × S S ′ are locally topologically noetherian. Let S ′ = ⊔ v S ′ v be the decomposition into connected components. Let s̄ be a geometric point of S, let s̄(t) be a geometric point of the simplex t ∈ π 0 (S • (h)), and let T be a maximal tree in the graph Γ = π 0 (S • (h)) ≤1 . For every boundary map ∂ ∶ t → t ′ let γ t ′ ,t ∶ s̄(t ′ ) → S • (h)(∂)s̄(t) be a fixed path (i.e. an isomorphism of fibre functors as in Lm. 3.17). Then, canonically with respect to all these choices, π proét 1 (S, s̄) ≃ (( * N v π proét 1 (S ′ v , s̄(v)) * N π 1 (Γ, T )) / H̄ ) Noohi , where H̄ is the closure of H and H is the normal subgroup generated by the cocycle and edge relations for all parameter values e ∈ S 1 (h), g ∈ π proét 1 (e, s̄(e)), and f ∈ S 2 (h). The map π proét 1 (∂ i ) uses the fixed path γ ∂ i (e),e , and the elements α (f ) ijk entering the cocycle relations are defined using these paths.
Remark 3.20. Similarly as in Rmk. 3.8, we could replace * N by * top in the above, as we take the Noohi completion of the whole quotient anyway.
Remark 3.21. We will often use Cor. 3.19 for h -the normalization map (or similar situations), where the connected components S ′ v are normal. In this case π proét . This implies that π proét 1 (∂ 1 ) factorizes through the profinite completion of π proét 1 (e,s(e)), which can be identified with πé t 1 (e,s(e)). Moreover, the map π proét 1 (e,s(e)) → πé t 1 (e,s(e)) has dense image and, in the end, we take the closure H of H. The upshot of this discussion is that in the definition of generators of H we might consider g ∈ πé t 1 (e,s(e)) instead of g ∈ π proét 1 (e,s(e)) and πé t 1 where H is the normal subgroup generated by (R 1 ) πé t 1 (∂ 1 )(g)⃗ eπé t 1 (∂ 0 )(g) −1 ⃗ e −1 for all e ∈ S 1 (h), g ∈ πé t 1 (e,s(e)) and Let us move on to some applications.
Ordered descent data. Let F be a category fibred over C with a fixed splitting cleavage (i.e. the associated pseudo-functor is a functor). Assume that C is some subcategory of the category of locally topologically noetherian schemes with the property that finite fibre products in C are the same as the finite fibre products as schemes. Let h = ⊔ i∈I h i ∶ S ′ = ⊔ i∈I S ′ i → S be a morphism of schemes and let < be a total order on the set of indices I.
be the open and closed sub-2-complex of schemes in C of ordered partial products S be a morphism of schemes such that, for every i, j ∈ I, the maps induced by the diagonal morphisms ∆
fully faithful. Then the natural open and closed immersion
. We first claim that there is exactly one isomorphism ∂ 0 * , so there is at most one map φ as above (we use here and below that we work with a splitting cleavage and so, by definition, the pullback functors preserve compositions of maps). Moreover, our assumptions imply that ∆ * 2,i is fully faithful as well, which shows that φ ∶ ∂ * 0 Y S i → ∂ * 1 Y S i corresponding to id Y S i will satisfy the condition. A similar reasoning shows that if we have φ ij specified for i < j, then φ ji is uniquely determined and the if φ ij 's satisfy the cocycle condition on S ijk for i < j < k, then φ ij 's together with φ ji 's obtained will satisfy the cocycle condition on any S αβγ , α, β, γ ∈ {i, j, k} is an isomorphism, then (still assuming splitting of the cleavage) the assumptions of the proposition are satisfied.
Two examples.
Example 3.24. Let k be a field and C be P 1 k with two k-rational closed points p 0 and p 1 glued (see [Sch05] for results on gluing schemes). Denote by p the node (i.e. the image of p i 's in C). We want to compute π proét 1 (C). By the definition of C, we have a map h ∶C = P 1 → C (which is also the normalization). It is finite, so it is an effective descent map for geometric coverings. Thus, we can use the van Kampen theorem. This goes as follows: • Check thatC × CC ≃C ⊔p 01 ⊔p 10 as schemes over C, where p αβ are equal to Spec(k) and map to the node of C via the structural map. This can be done by checking that Hom C (Y,C⊔p 01 ⊔p 10 ) ≃ Hom C (Y,C) × Hom C (Y,C); • Similarly, check thatC × CC × CC ≃C ⊔ p 001 ⊔ p 010 ⊔ p 011 ⊔ p 100 ⊔ p 101 ⊔ p 110 , where the projectionC × CC × CC →C × CC omitting the first factor maps p abc to p bc and so on; • We fix a geometric pointb = Spec(k) over the base scheme Spec(k) and fix geometric pointsp 0 andp 1 over p 0 and p 1 that map tob. Then we fix geometric points onC, p 01 , p 10 ⊂C ⊔p 01 ⊔p 10 ≃ C × CC in a compatible way and similarly for connected components ofC × CC × CC (i.e. let us say thatp αβγ ↦p α via v 0 andp αβ ↦p α ). We fix a path γ fromp 0 top 1 that becomes trivial on Spec(k) via the structural map (this can be done by viewingp 0 andp 1 as geometric points oñ Ck, choosing the path onCk first and defining γ to be its image). Letp be the fixed geometric point on C given by the image ofp 0 (or, equivalently,p 1 ).
With this setup, the α (f ) ijk 's (defined as in Cor. 3.19) are trivial for any f and so the relation (2) in this corollary reads gives that the image of π 1 (Γ, T ) ≃ Z * 3 in π proét 1 (C,p) is generated by a single edge (in our case only one maximal tree can be chosen -containing a single vertex). The choice of paths made guarantees π proét 1 (∂ 0 )(g) = π proét 1 (∂ 1 )(g) in π proét 1 (C,p 0 ) for any g ∈ π proét 1 (p ab ,p ab ) = Gal(k). So relation (1) in Cor. 3.19 implies that the image of π proét 1 (C,p 0 ) ≃ Gal(k) in π proét 1 (C,p 0 ) commutes with the elements of the image of π 1 (Γ, T ). Putting this together we get Example 3.25. Let X 1 , . . . , X m be geometrically connected normal curves over a field k and let Y m+1 , . . . , Y n be nodal curves over k as in Ex. 3.24. Let x i ∶ Spec(k) → X i be rational points and let y j denote the node of Y j . Let X ∶= ∪ • X i ∪ • Y j be a scheme over k obtained via gluing of X i 's and Y j 's along the rational points x i and y j (in the sense of [Sch05]). The notation ∪ • denotes gluing along the obvious points. The point of gluing gives a rational point x ∶ Spec(k) → X. We choose a geometric pointb = Spec(k) over the base Spec(k) and choose a geometric pointx over x such that it maps tob. The maps X i → X and Y j → X are closed immersions (this is basically [Sch05, Lm. 3.8]). We also get geometric pointsx i and y j over x i and y j that map tob as well. Denote It is a copy of Gal k in the sense that the induced map πé t 1 (x i ,x i ) → πé t 1 (Spec(k),b) is an isomorphism. Let us denote by ι i ∶ Gal k → Gal k,i the inverse of this isomorphism. The group πé t 1 (x i ,x i ) acts on πé t 1 (X i ,x i ) and allows to write πé t 1 (X i ,x i ) ≃ πé t 1 (X i ,x i ) ⋊ Gal k,i . After some computations (as in the previous example), using Cor. 3.19 and Ex. 3.24, one gets Let us describe the category of group-sets.
Lemma 3.26. Let K and Q be topological groups and assume we have a continuous action K × Q → K respecting multiplication in K. Then K ⋊ Q with the product topology (on K × Q) is a topological group and there is an isomorphism Proof. That K ⋊Q becomes a topological group is easy from the continuity assumption of the action. The isomorphism is obtained as follows: from the universal property we have a continuous homomorphism K * top Q → K ⋊ Q and the kernel of this map is the smallest normal subgroup containing the elements qkq −1 ( q k) −1 (this follows from the fact that the underlying abstract group of K * top Q is the abstract free product of the underlying abstract groups, similarly for K ⋊ Q and that we know the kernel in this case). So we have a continuous map that is an isomorphism of abstract groups. We have to check that the inverse map K ⋊ Q ∋ kq ↦ kq ∈ K * top Q ⟨⟨qkq −1 = q k⟩⟩ is continuous. It is enough to check that the map K × Q ∋ (k, q) ↦ kq ∈ K * top Q (of topological spaces) is continuous, but this follows from the fact that the maps K → K * top Q and Q → K * top Q are continuous and that the multiplication map (K * top Q) × (K * top Q) → K * top Q is continuous.
Let us also state a technical lemma concerning the "functoriality" of the van Kampen theorem. It is important that the diagram formed by the schemes X 1 , X 2 , X, X 1 in the statement is cartesian.
Lemma 3.27. Let f ∶ X 1 → X 2 be a morphism of connected schemes and h ∶ X → X 2 be a morphism of schemes. Denote by h 1 ∶ X 1 → X 1 the base-change of h via f . Assume that h and h 1 are effective descent morphisms for geometric coverings and that local topological noetherianity assumptions are satisfied for the schemes involved as in the statement of Cor. 3.19. Assume that for any connected component W ∈ π 0 (S • (h)), the base-change W 1 of W via f is connected. Choose the geometric points on W 1 ∈ π 0 (S • (h 1 )) and paths between the obtained fibre functors as in Cor. 3.19 and choose the geometric points and paths on W ∈ π 0 (S • (h)) as the images of those chosen for X 1 . Identify the graphs Γ = π 0 (S • (h)) ⩽1 and Γ 1 = π 0 (S • (h 1 )) ⩽1 (it is possible thanks to the assumption made) and choose a maximal tree T in Γ. Using the above choices, use Cor. 3.19. to write the fundamental groups π proét 1 (X 1 ) ≃ ( * top W ∈π 0 ( X) π proét 1 (W 1 )) * top π 1 (Γ 1 , T ) ⟨R ′ ⟩ Noohi and π proét 1 (X 2 ) ≃ ( * top W ∈π 0 ( X) π proét 1 (W )) * top π 1 (Γ, T ) ⟨R⟩ Noohi .
Proof. It is clear that on (the image of) π proét 1 (W 1 ) (in π proét 1 (X 1 )) the map is the one induced from The part about π 1 (Γ 1 , T ) follows from the fact that π 1 (Γ 1 , T ) < π proét 1 (X 1 ) acts in the same way as π 1 (Γ, T ) < π proét 1 (X 2 ) on any geometric covering of X 2 . This follows from the choice of points and paths on W ∈ π 0 (S • (h)) as the images of the points and paths on the corresponding connected components W 1 ∈ π 0 (S • (h 1 )). The maps as in the statement give a morphism π proét 1 (W )) * top π 1 (Γ, T ) and it is easy to check that φ(R ′ ) ⊂ R, which finishes the proof.
3.3. Künneth formula. In this subsection we use the van Kampen formula to prove the Künneth formula for π proét 1 . Let X, Y be two connected schemes locally of finite type over an algebraically closed field k and assume that Y is proper. Let x̄, ȳ be geometric points of X and Y respectively with values in the same algebraically closed field extension K of k. With these assumptions, the classical statement says that the "Künneth formula" for πé t 1 holds, i.e.
Fact 3.28. ([SGA71, Exp. X, Cor. 1.7]) With the above assumptions, the map induced by the projections is an isomorphism πé t 1 (X × k Y, (x̄,ȳ)) → πé t 1 (X,x̄) × πé t 1 (Y,ȳ).
We want to establish an analogous statement for π proét 1 .
Proposition 3.29. Let X, Y be two connected schemes locally of finite type over an algebraically closed field k and assume that Y is proper. Let x̄, ȳ be geometric points of X and Y respectively with values in the same algebraically closed field extension K of k. Then the map induced by the projections is an isomorphism π proét 1 (X × k Y, (x̄,ȳ)) → π proét 1 (X,x̄) × π proét 1 (Y,ȳ).
Choosing a path between (x̄,ȳ) and some fixed k-point of X × k Y (seen as a geometric point) and looking at the images of this path via projections onto X and Y reduces us (by Cor. 3.18 and compatibility of the chosen paths) to the situation where we can assume that x̄ and ȳ are k-points. We are going to assume this in the proof. Before we start, let us state and prove the surjectivity of the above map as a lemma. Properness is not needed for this.
Lemma 3.30. Let X, Y be two connected schemes over an algebraically closed field k with k-points on them: x̄ on X and ȳ on Y . Then the map induced by the projections π proét 1 (X × k Y, (x̄,ȳ)) → π proét 1 (X,x̄) × π proét 1 (Y,ȳ) is surjective.
It is easy to check that the map induced on fundamental groups π proét
Proof. (of Prop. 3.29) As X, Y are locally of finite type over a field, the normalization maps are finite and we can apply Prop. 3.12. Let X̃ → X be the normalization of X, let X̃ = ⊔ v X̃ v be its decomposition into connected components and let us fix a closed point x v ∈ X̃ v for each v. Similarly, let ⊔ u Ỹ u = Ỹ → Y be the decomposition into connected components of the normalization of Y with closed points y u ∈ Ỹ u .
We first deal with a particular case. Claim: the statement of Prop. 3.29 holds under the additional assumption that
• either, for any v, the projections induce isomorphisms π proét 1 ( X̃ v × k Y ) ≃ π proét 1 ( X̃ v ) × π proét 1 (Y ),
• or, for any u, the projections induce isomorphisms π proét 1 (X × k Ỹ u ) ≃ π proét 1 (X) × π proét 1 ( Ỹ u ).
Proof of the claim. Apply Cor. 3.19 to h ∶ X̃ → X. We choose x̄ and the x v 's as the geometric points s̄(t) of the corresponding simplexes t ∈ π 0 (S • (h)) 0 and choose s̄(t) to be arbitrary closed points (of suitable double and triple fibre products) for t ∈ π 0 (S • (h)) 2 . We fix a maximal tree T in Γ = π 0 (S • (h)) ≤1 and fix paths γ t ′ ,t ∶ s̄(t ′ ) → S • (h)(∂)s̄(t). Thus, we get the presentation of π proét 1 (X,x̄) provided by Cor. 3.19, where H is defined as in that corollary.
Observe now that X v × k Y are connected (as k is algebraically closed) and that h × id Y ∶ X × Y → X × Y is an effective descent morphism for geometric coverings. So we might use Cor. 3.19 in this setting.
and similarly for triple products, we can identify in a natural way ). In particular we can identify the graph Γ Y = π 0 (S • (h × id Y )) ≤1 with Γ and we choose the maximal tree T Y of Γ Y as the image of T via this identification. For t ∈ π 0 (S • (h)) choose (s(t),ȳ) as the closed base points for i(t) ∈ π 0 (S • (h × id Y )). Denote by α ijk elements of various π proét 1 ( X v ) defined as in Cor. 3.19 and by ⃗ e elements of π 1 (Γ, T ). By the choices and identifications above we can identify π 1 (Γ Y , T Y ) with π 1 (Γ, T ). Using van Kampen and the assumption, we write Here π proét 1 (Y,ȳ) v denotes a "copy" of π proét 1 (Y,ȳ) for each v. By Lm. 3.30, for T ∈ π 0 (S • (h)) the natural map π proét 1 (T × Y, (s(T ),ȳ)) → π proét 1 (T,s(T )) × π proét 1 (Y,ȳ) is surjective. It follows that the relations defining H Y (as in Cor. 3.19) can be written as where α's in the second relation are elements of suitable π proét 1 ( X v )'s and are the same as in the corresponding generators of H. The h y,i denotes a copy of element h y ∈ π proét 1 (Y,ȳ) in a suitable π proét 1 (Y,ȳ) v . Varying e and h y while choosing g = 1 ∈ π proét 1 (e,s(e)) for every e, gives that h y,1 ⃗ e = ⃗ eh y,0 . For e ∈ T we have ⃗ e = 1 and so the first relation reads h y,1 = h y,0 , i.e. it identifies π proét relations of H. Using notations from the above discussion, we can sum it up by writing Putting this together, we get equivalences of categories where equality ♠ follows from the fact that for topological groups G 1 , G 2 there is equivalence This finishes the proof of the Claim in the "either" case. After noting that eachỸ u is still proper, the "or" case follows in a completely symmetrical manner. We have proven a particular case of the proposition.
Let us now go ahead and prove the full statement. General case. The general case follows from the claim proven above in the following way: let ⊔ v X̃ v = X̃ → X and ⊔ u Ỹ u = Ỹ → Y be decompositions into connected components of the normalizations of X and Y . Fix v and note that π proét 1 ( X̃ v × k Y ) ≃ π proét 1 ( X̃ v ) × π proét 1 (Y ) by applying the claim to Y and X̃ v . This is possible, as the Ỹ u 's, X̃ v and the products Ỹ u × k X̃ v (for all u) are normal varieties and so their pro-étale fundamental groups are equal to the usual étale fundamental groups (by Lm. 2.12), for which the corresponding equality for πé t 1 is known (see Fact 3.28). Thus, for any v, the first assumption of the claim (applied to X and Y ) is satisfied. We can now apply the claim to X and Y and finish the proof in the general case.
3.4. Invariance of π proét 1 of a proper scheme under a base-change K ⊃ k of algebraically closed fields.
Proposition 3.31. Let X be a proper scheme over an algebraically closed field k. Let K ⊃ k be another algebraically closed field. Then the pullback induces an equivalence of categories Proof. Let X ν → X be the normalization. It is finite, and thus a morphism of effective descent for geometric coverings. Let us show that the functor F is essentially surjective. Let Y ′ ∈ Cov X K . As k is algebraically closed and X ν is normal, we conclude that X ν is geometrically normal, and thus the base change (X ν ) K is normal as well (see [Sta20, Tag 038O]). Pulling Y ′ back to (X ν ) K we get a disjoint union of schemes finite étale over (X ν ) K with a descent datum. It is a classical result ([SGA71, Exp. X, Cor. 1.8]) that the pullback induces an equivalence Fét X ν → Fét X ν K of finite étale coverings and similarly for the double and triple products X ν 2 = X ν × X X ν , X ν 3 = X ν × X X ν × X X ν . These equivalences obviously extend to categories whose objects are (possibly infinite) disjoint unions of finite étale schemes (over X ν , X ν 2 , X ν 3 respectively) with étale morphisms as arrows. These categories can be seen as subcategories of Cov X ν and so on. These subcategories are moreover stable under pullbacks between Cov X ν i . Putting this together we see, that Y ′′ = Y ′ × X K (X ν ) K with its descent datum is isomorphic to a pullback of a descent datum from X ν . Thus, we conclude that there exists Y ∈ Cov X such that Y ′ ≃ Y K . Full faithfulness of F is shown in the same way. If X is connected, it can be also proven more directly, as F being fully faithful is equivalent to preserving connectedness of geometric coverings, but any connected Y ∈ Cov X is geometrically connected, and thus Y K remains connected by Lm. 2.37 (2). Note that in the above argument we do not claim that the double and triple intersections X ν 2 , X ν 3 are normal, as this is in general false. Instead, we are only using that all the considered geometric coverings of those schemes came as pullbacks from X ν , and thus were already split-to-finite.
4.1. Statement of the results and examples. The main result of this chapter is the following theorem.
Theorem. (see Theorem 4.14 below) Let k be a field and fix an algebraic closure k̄. Let X be a geometrically connected scheme of finite type over k. Then the sequence of abstract groups 1 → π proét 1 (Xk̄) → π proét 1 (X) → Gal k → 1 is exact. Moreover, the map π proét 1 (Xk̄) → π proét 1 (X) is a topological embedding and the map π proét 1 (X) → Gal k is a quotient map of topological groups.
One shows the near exactness first and obtains the above version as a corollary with an extra argument. The most difficult part of the sequence is exactness on the left. We will prove it as a separate theorem and its proof occupies an entire subsection.
Theorem. (see Theorem 4.13 below) Let k be a field and fix an algebraic closure k̄ of k. Let X be a scheme of finite type over k such that the base change Xk̄ is connected. Then the induced map π proét 1 (Xk̄) → π proét 1 (X) is a topological embedding.
By Prop. 2.37, it translates to the following statement in terms of coverings: every geometric covering of Xk̄ can be dominated by a covering that embeds into a base-change to k̄ of a geometric covering of X (i.e. defined over k). In practice, we prove that every connected geometric covering of Xk̄ can be dominated by a (base-change of a) covering of X l for l/k finite.
For finite coverings, the analogous statement is very easy to prove simply by finiteness condition. But for general geometric coverings this is non-trivial and maybe even slightly surprising as we show by counterexamples (Ex. 4.5 and Ex. 4.6) that it is not always true that a connected geometric covering of Xk is isomorphic to a base-change of a covering of X l for some finite extension l k. This last statement is, however, stronger than what we need to prove, and thus does not contradict our theorem. Observe, that the stronger statement is true for finite coverings and, even more generally, whenever π proét 1 (Xk) is prodiscrete, as proven in Prop. 4.8.
Let us proceed to proving the easier part of the sequence first.
Observation 4.1. By Prop. 2.17, the category of geometric coverings is invariant under universal homeomorphisms. In particular, for a connected X over a field and k ′ /k purely inseparable, there is π proét 1 (X k ′ ) = π proét 1 (X). Similarly, we can replace X by X red and so assume X to be reduced when convenient. In this case, the base change to the separable closure X k s is reduced as well. We will often use this observation without an explicit reference.
We start with the following lemmas.
Lemma 4.2. Let k be a field. Let k ⊂ k ′ be a (possibly infinite) Galois extension. Let X be a connected scheme over k. Let T 0 ⊂ π 0 (X k ′ ) be a non-empty closed subset preserved by the Gal(k ′ /k)-action. Then T 0 = π 0 (X k ′ ).
Proof. Let T be the preimage of T 0 in X k ′ (with the reduced induced structure). By [Sta20, Lemma 038B], T is the preimage of a closed subset T ⊂ X via the projection morphism p ∶ X k ′ → X. On the other hand, by [Sta20, Lemma 04PZ], the image p(T ) equals the entire X. Thus, T = X and T = X k ′ , and so T 0 = π 0 (X k ′ ).
Lemma 4.3. Let X be a connected scheme over a field k with an l ′ -rational point, with l ′ /k a finite field extension. Then π 0 (X k sep ) is finite, the Gal k -action on π 0 (X k sep ) is continuous and there exists a finite separable extension l/k such that the induced map π 0 (X k sep ) → π 0 (X l ) is a bijection. Moreover, there exists the smallest field (contained in k sep ) with this property and it is Galois over k.
Proof. Let us first show the continuity of the Gal k -action. The morphism Spec(l ′ ) → X gives a Gal kequivariant morphism Spec(l ′ ⊗ k k sep ) → X k sep and a Gal k -equivariant map π 0 (Spec(l ′ ⊗ k k sep )) → π 0 (X k sep ). Denote by M ⊂ π 0 (X k sep ) the image of the last map. It is finite and Gal k -invariant, and by Lm. 4.2, M = π 0 (X k ′ ). We have tacitly used that M is closed, as π 0 (X k ′ ) is Hausdorff (as the connected components are closed). As Gal k acts continuously on π 0 (Spec(l ′ ⊗ k k sep )) (for example by [Sta20, Lemma 038E]), we conclude that it acts continuously on π 0 (X k sep ) as well. From Lm. 4.2 again and from [Sta20, Tag 038D], we easily see that the fields l ⊂ k sep such that π 0 (X k sep ) → π 0 (X l ) is a bijection are precisely those that Gal l acts trivially on π 0 (X k sep ). To get the minimal field with this property we choose l such that Gal l = ker(Gal k → Aut(π 0 (X k sep ))).
Theorem 4.4. Let k be a field and fix an algebraic closure k̄. Let X be a geometrically connected scheme of finite type over k. Let x̄ ∶ Spec(k̄) → Xk̄ be a geometric point on Xk̄. Then the induced sequence of topological groups π proét 1 (Xk̄,x̄) → π proét 1 (X,x̄) → Gal k → 1 (with the maps denoted ι and p) is nearly exact in the middle (i.e. the thick closure of im(ι) equals ker(p)) and π proét 1 (X,x̄) → Gal k is a topological quotient map.
Proof.
(1) On the other hand, this image is dense, as we have the following diagram, where (−) prof means the profinite completion. In the diagram, the left vertical map has dense image and the lower horizontal is surjective. This shows that π proét 1 (X) → Gal k is surjective.
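A possible reconstruction of the diagram in question (using Lemma 2.11, by which the profinite completion of π proét 1 (X) is πé t 1 (X), and the fact that Gal k is already profinite; the bottom map is the classical surjection for a geometrically connected scheme of finite type):
\[ \begin{array}{ccc} \pi_1^{\mathrm{proét}}(X,\bar x) & \longrightarrow & \mathrm{Gal}_k \\ \downarrow & & \parallel \\ \pi_1^{\mathrm{proét}}(X,\bar x)^{\mathrm{prof}} \cong \pi_1^{\mathrm{ét}}(X,\bar x) & \longrightarrow & \mathrm{Gal}_k \end{array} \]
The left vertical map is the completion map (so it has dense image) and the bottom horizontal map is surjective, which gives the density claimed above.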
(2) The image of ι is contained in ker(p): this follows from Prop. 2.37 and the fact that the map Xk̄ → Spec(k) factorizes through Spec(k̄).
(3) The thick closure of im(ι) is normal: as remarked above, π proét 1 (Xk) = π proét 1 (X k s ), where k s denotes the separable closure. Thus, we are allowed to replacek with k s in the proof of this point. Moreover, by the same remark, we can and do assume X to be reduced. Let Y → X be a connected geometric covering such that there exists a section s ∶ X k s → Y × X X k s = Y k s over X k s . Observe that any such section is a clopen immersion: this follows immediately from the equivalence of categories of π proét 1 (X k s ) − Sets and geometric coverings. DefineT ∶= ⋃ σ∈Gal(k) σ s(X k s ) ⊂ Y k s . Observe that two images of sections in the sum either coincide or are disjoint as X k s is connected and they are clopen. Now,T is obviously open, but we claim that it is also a closed subset. This follows from Lm. 4.3 (which implies that π 0 (Y k s ) is finite), but one can also argue directly by using that Y k s is locally noetherian and σ s(X k s ) are clopen. Now by [Sta20, Tag 038B],T descends to a closed subset T ⊂ Y . It is also open as T is the image ofT via projection Y k s → Y which is surjective and open map. Indeed, surjectivity is clear and openness is easy as well and is a particular case of a general fact, that any map from a scheme to a field is universally open ([Sta20, Tag 0383]). By connectedness of Y we see that T = Y . So Y k s =T . But this last one is a disjoint union of copies of X k s , which is what we wanted to show by Prop. 2.37. (4) The smallest normal thickly closed subgroup of π proét 1 (X) containing im(ι) is equal to ker(p): as we already know that this image is contained in the kernel and that the map π proét 1 (X) → Gal k is a quotient map of topological groups, we can apply Prop. 2.37. Let Y be a connected geometric covering of X such that Yk = Y × X Xk splits completely. Denote Yk = ⊔ α Xk ,α , where by Xk ,α we label different copies of Xk. By Lm. 4.3, π 0 (Yk) is finite, and thus the indexing set {α} and the covering Y → X are finite. But in this case, the statement follows from the classical exact sequence of étale fundamental groups due to Grothendieck.
As promised above, we give examples of geometric coverings of Xk̄ that cannot be defined over any finite field extension l/k.
Example 4.5. Let X i = G m,Q , i = 1, 2. Define X to be the gluing X = ∪ • X i of these schemes at the rational points 1 i ∶ Spec(Q) → X i corresponding to 1. Fix an algebraic closure Q of Q and so a geometric pointb over the base Spec(Q). This gives geometric pointsx i on X i = X i,Q and X i lying over 1 i , which we choose as base points for the fundamental groups involved. Similarly, we get a geometric pointx over the point of gluing x that maps tob. Then Example 3.25 gives us a description of the fundamental group Noohi and of its category of sets: Recall that the groups πé t 1 (X i ,x i ) are isomorphic toẐ(1) = lim ← µ n as Gal Q -modules. Fix these isomorphisms. Let S = N >0 . Let us define a π proét 1 (X,x)-action on S, which means giving actions by πé t 1 (X 1 ,x 1 ) and πé t 1 (X 2 ,x 2 ) (no compatibilities of the actions required). Let ℓ be a fixed odd prime number (e.g. ℓ = 3). We will give two different actions of Z ℓ (1) on S which will define actions ofẐ(1) by projections on Z ℓ (1). We start by dividing S into consecutive intervals labelled a 1 , a 3 , a 5 , . . . of cardinality ℓ 1 , ℓ 3 , ℓ 5 , . . . respectively. These will be the orbits under the action of πé t 1 (X 1 ,x 1 ). Similarly, we divide S into consecutive intervals b 2 , b 4 , b 6 , . . . of cardinality ℓ 2 , ℓ 4 , . . ..
We still have to define the action on each a m and b m . We choose arbitrary identifications b m ≃ µ ℓ m as Z ℓ (1)-modules. Now, fix a compatible system of ℓ n -th primitive roots of unity ζ = (ζ ℓ n ) ∈ Z(1). For a m 's, we choose the identifications with µ ℓ m arbitrarily with one caveat: we demand that for any even number m, the intersection b m ∩a m+1 contains the elements 1, ζ ℓ m+1 ∈ µ ℓ m+1 via the chosen identification a m+1 ≃ µ ℓ m+1 . As b m ∩ a m+1 > 0 and b m ∩ a m+1 ≡ 0 mod ℓ, the intersection b m ∩ a m+1 contains at least two elements and we see that choosing such a labelling is always possible.
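To fix ideas, assuming (as the phrase "consecutive intervals" suggests) that both families of intervals start at the beginning of S = N >0 , one has
\[ a_1=\{1,\dots,\ell\},\quad a_3=\{\ell+1,\dots,\ell+\ell^3\},\ \dots, \qquad b_2=\{1,\dots,\ell^2\},\quad b_4=\{\ell^2+1,\dots,\ell^2+\ell^4\},\ \dots, \]
so that, for even m, the intersection b m ∩ a m+1 is non-empty with
\[ \#\,(b_m\cap a_{m+1}) \;=\; \ell^m-\ell^{m-1}+\ell^{m-2}-\dots+\ell^2-\ell \;\equiv\; 0 \pmod{\ell}, \]
which are exactly the two properties used above.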
Assume that S corresponds to a covering that can be defined over a finite Galois extension K Q. Fix s 0 ∈ a 1 ∩ b 2 . By increasing K, we might and do assume that Gal K fixes s 0 . Let p be a prime number ≠ ℓ that splits completely in K and p be a prime of O K lying above p. Let φ p ∈ Gal K be a Frobenius element (which depends on the choice of the decomposition group and the coset of the inertia subgroup). It acts on Z ℓ (1) via t ↦ t p and this action is independent of the choice of φ p . Choose N > 0 such that p N ≡ 1 mod ℓ 2 and let m be the biggest number such that p N ≡ 1 mod ℓ m . If m is odd, we look at p ℓN instead. In this case m + 1 is the biggest number such that p ℓN ≡ 1 mod ℓ m+1 and so, by changing N if necessary, we can assume that m is even, > 1. The whole point of the construction is the following: if s ∈ a i ∩ b j with i, j < m is fixed by φ N p , then so are g ⋅ s and h ⋅ s (for h ∈ πé t 1 (X 1 ,x 1 ) and g ∈ πé t 1 (X 2 ,x 2 )). Then moving such s with g's and h's to b m ∩ a m+1 leads to a contradiction. Indeed, let s 1 ∈ b m ∩ a m+1 ⊂ S correspond to 1 ∈ µ ℓ m+1 ≃ a m+1 (it is possible by the choices made in the construction of S). Write s 1 = g m h m−1 . . . h 3 g 2 h 1 ⋅ s 0 with h i ∈ πé t 1 (X 1 ,x 1 ) and g j ∈ πé t 1 (X 2 ,x 2 ) (this form is not unique, of course). This is possible thanks to the fact that the sets a i , b j form consecutive intervals separately such that b j intersects non-trivially a j−1 and a j+1 . By the construction of S again, there is an element s 2 ∈ b m ∩ a m+1 corresponding to ζ ℓ m+1 ∈ µ ℓ m+1 via a m+1 ≃ µ ℓ m+1 . We can now write s 2 in two ways: s 2 = ζ ⋅ s 1 = g ⋅ s 1 , where g ∈ πé t 1 (X 2 ,x 2 ) and ζ is the chosen element in πé t 1 (X 1 ,x 1 ) ≃Ẑ(1). By the choices made, the action of φ N p fixes the elements s 1 and g ⋅ s 1 , while it moves ζ ⋅ s 1 .
Example 4.6. Let X i = G m,Q , i = 1, 2, 3 and let X 4 , X 5 be the nodal curves obtained from gluing 1 and −1 on P 1 Q (see Ex. 3.24). Define X to be the gluing X = ∪ • X i of all these schemes at the rational points corresponding to 1 (or the image of 1 in the case of the nodal curves). We fix an algebraic closure Q of Q and so fix a geometric pointb over the base Spec(Q). We get geometric pointsx i on X i = X i × Q Q lying over 1. We have and fix the following isomorphisms of Gal Q -modules. For 1 ≤ i ≤ 3, πé t 1 (X i ,x i ) ≃Ẑ(1) and for 4 ≤ j ≤ 5, we have π proét 1 (X j ,x j ) ≃ ⟨t Z ⟩ (i.e. Z written multiplicatively).
Let t i ∈ π proét 1 (X i ,x i ) be the elements corresponding via these isomorphisms to a fixed inverse system of primitive roots ζ ∈ Ẑ(1) (for i = 1, 2, 3) and to t ∈ ⟨t Z ⟩ (for i = 4, 5). Example 3.25 gives a description of the fundamental group and of its category of sets. Let G ⊂ GL 2 (Q ℓ ) be the subgroup of upper triangular matrices. Fix u 1 ∈ Z × ℓ such that u p 1 ≠ u 1 . Let H = * top i πé t 1 (X i ,x i ) and define a continuous homomorphism ψ ∶ H → G by specifying it on the factors πé t 1 (X i ,x i ). It is easy to see that ψ is surjective.
Let U ⊂ G be the subgroup of matrices with elements in Z ℓ , i.e. U = * * * ⊂ GL 2 (Z ℓ ). It is an open subgroup of G. Thus, using ψ and the fact that H Noohi = π proét 1 (X,x), we get that S ∶= G/U defines a π proét 1 (X,x)-set. It is connected (i.e. transitive) and so corresponds to a connected geometric covering of X. Assume that it can be defined over a finite extension L of Q. We can assume L Q is Galois. By the description above, it means that there is a compatible action of groups Z(1) i , Z * 2 and Gal L on S. By increasing L, we can assume moreover that Gal L fixes [U ].
Choose a prime p ≠ ℓ that splits completely in L, fix a prime p of L dividing p and let φ p ∈ Gal L denote a fixed Frobenius element. Let t u 1 3 denote the unique element of ψ −1 As n ≫ 0 and u p 1 ≠ u 1 , it follows that φ p ⋅ [U ] ≠ [U ], a contradiction.
It is important to note that the above (counter-)examples are possible only when considering geometric coverings that are not trivialized by an étale cover (i.e. one really needs to use a pro-étale cover to trivialize them). In [BS15], the category of geometric coverings trivialized by an étale cover on X is denoted by Loc Xé t , and the authors prove the following. We are now going to prove: Proposition 4.8. Let X be a geometrically connected separated scheme of finite type over a field k. Let Y ∈ Cov Xk be such that Y ∈ Loc (Xk)é t . Then there exist a finite extension l k and Y 0 ∈ Cov X l such that Y ≃ Y 0 × X l Xk.
Proof. By the topological invariance (Prop. 2.17), we can replacek by k sep if desired. By the assumption Y ∈ Loc (Xk)é t , there exists an étale cover of finite type that trivializes Y . Being of finite type, it is a base-change X ′ k = X ′ × Spec(l) Spec(k) → Xk of an étale cover X ′ → X l for some finite extension l k. Thus, Y X ′ k is constant (i.e. ≃ ⊔ s∈S X ′ = S) and the isomorphism between the pull-backs of we use the fact that X ′ k is étale over Xk, and thus π 0 (X ′ k × Xk X ′ k ) is discrete, in this case even finite). By enlarging l, we can assume that the connected components of the schemes involved: X ′ , X ′ × X l X ′ etc. are geometrically connected over l. Define Y ′ 0 = ⊔ s∈S X ′ . The discussion above shows that the descent datum on Y X ′ k with respect to X ′ k → Xk is in fact the pull-back of a descent datum on Y ′ 0 with respect to X ′ → X l . As étale covers are morphisms of effective descent for geometric coverings (this follows from the fpqc descent for fpqc sheaves and the equivalence Cov X l ≃ Loc X l of [BS15, Lemma 7.3.9]), the proof is finished.
Remark 4.9. Over a scheme with a non-discrete set of connected components, Aut(S) might not be equal to Aut(S).
Proposition 4.8 shows that our main theorem is significantly easier for π SGA3 1 . Corollary 4.10. Let X be a geometrically connected separated scheme of finite type over a field k. Fix an algebraic closurek of k. Then π SGA3 is a topological embedding.
4.2. Preparation for the proof of Theorem 4.13. We are going to use the following proposition.
Proposition 4.11. Let X be a scheme of finite type over a field k with a k-rational point x 0 and assume that Xk is connected. Let Y 1 , . . . , Y N be a set of connected finite étale coverings of Xk. Then there exists a finite Galois étale covering Y of X such that for all 1 ≤ i ≤ N , there exists a surjective map Yk ↠ Y i of coverings of Xk.
Proof. There is a finite connected Galois covering of Xk dominating Y 1 , . . . , Y N . Thus, we can assume N = 1 and Y 1 is Galois. Fix a geometric pointx 0 over x 0 . The k-rational point x 0 gives a splitting s ∶ Gal k → πé t 1 (X,x 0 ), allowing us to write πé t 1 (X,x 0 ) ≃ πé t 1 (Xk,x 0 ) ⋊ Gal k and so an action of Gal k on πé t 1 (Xk,x 0 ). Fix a geometric pointȳ on Y 1 overx 0 . The group U = πé t 1 (Y 1 ,ȳ) is a normal open subgroup of πé t 1 (Xk,x 0 ). As Y 1 is defined over a finite Galois field extension l k (contained ink), it is easy to check that Gal l ⊂ Gal k fixes U , i.e. σ U = U for σ ∈ Gal l . It follows that the set of conjugates σ U is finite, of cardinality bounded by [l ∶ k]. Define V = ∩ σ∈Gal k σ U . This is an open subgroup of πé t 1 (Xk,x 0 ) fixed by the action of Gal k . Moreover, it is normal, as g(∩ σ∈Gal k σ U )g −1 = ∩ σ∈Gal k g( σ U )g −1 = ∩ σ∈Gal k σ (( σ −1 g)U ( σ −1 g) −1 ) = ∩ σ∈Gal k σ U = V , due to normality of U . The open normal subgroup V ⋅ Gal k = V ⋊ Gal k < πé t 1 (Xk,x 0 ) ⋊ Gal k corresponds to a covering with the desired properties.
Before starting the proof, we need to collect some facts about the Galois action on the geometric πé t 1 . They are discussed, for example, in [Sti13,Ch. 2]. The existence, functoriality and compatibility with compositions of the action can be readily seen to generalize to π proét 1 as well, but note (see the last point below) that one has to be careful when discussing continuity . For a connected topologically noetherian scheme W and geometric pointsw 1 ,w 2 , let π proét 1 (W,w 1 ,w 2 ) = Isom Cov Wk (Fw 1 , Fw 2 ) denote the set of isomorphisms of the two fibre functors, topologized in a way completely analogous to the case whenw 1 = w 2 . By Cor. 3.18, it is a bi-torsor under π proét 1 (W,w 1 ) and π proét 1 (W,w 2 ). The bi-torsors under profinite groups πé t 1 (W,w 1 ,w 2 ) are defined similarly and are rather standard. For a geometrically unibranch W , the two notions match.
g) The action Gal k × πé t 1 (Wk,w 1 ,w 2 ) → πé t 1 (Wk,w 1 ,w 2 ) is continuous. Note, however, that at this stage of the proof we do not know whether this is true for π proét 1 . In fact, this is closely related to the main result we need to prove.
Theorem 4.13. Let k be a field and fix an algebraic closurek of k. Let X be a scheme of finite type over k such that the base-change Xk is connected. Letx be a Spec(k)-point on Xk. Then the induced map is a topological embedding.
Then, we will derive the final form of the fundamental exact sequence.
Theorem 4.14. With the assumptions as in Thm. 4.13, the sequence of abstract groups Moreover, the map π proét 1 (Xk,x) → π proét 1 (X,x) is a topological embedding and the map π proét 1 (X,x) → Gal k is a quotient map of topological groups.
In the proof, after some preparatory steps (e.g. extending the field k), we define the set of regular loops in π proét 1 (Xk) with respect to a fixed open subgroup U < ○ π proét 1 (Xk,x) and use it to construct a Galois-invariant open subgroup V inside U (see Steps II and III below). There is also an alternative approach to proving the existence of V that avoids the direct construction involving regular loops. We sketch it in Rmk. 4.27. While this latter approach is quicker, it is less instructive: as explained in Rmk. 4.26 below, the notion of a regular loop provides insight into what goes wrong in the counterexample Ex. 4.5. Still, it might be worth having a look at, as our main approach is rather lengthy.
Step have tacitly liftedx to X l . Thus, we can start by replacing k by a finite extension. Considering the normalization X ν → X, base-changing the whole problem to a finite extension l of k, considering the factorization l l ′ k into separable and purely inseparable extension of fields, and using first that the base-change along a separable field extension of a normal scheme is normal and then the topological invariance of π proét 1 , we can assume that we have a surjective finite morphism h ∶ X → X such that the connected components of X, X × X X, X × X X × X X are geometrically connected, have rational points and for each W ∈ π 0 ( X), there is π proét 1 (W ) = πé t 1 (W ) and π proét 1 (Wk) = πé t 1 (Wk). Let X = ⊔ v∈Vert X v be the decomposition into connected components. Note that the indexing set Vert is finite. For each t ∈ π 0 ( X)∪π 0 ( X × X X)∪π 0 ( X × X X × X X)), we fix a k-rational point x(t) on t and ak-pointx(t) ont = tk lying over x(t). We will often writex t to meanx(t). Let us fix vx ∈ Vert for the rest of the text and say that the image ofx( X vx,k ) in Xk will be the fixed geometric pointx of Xk and its image in X the fixed geometric point of X. For any Wk, W ′ k ∈ π 0 (S • (h)) and every boundary map ) between the chosen geometric points, as in Cor. 3.19. This is possible thanks to Lm. 3.17. We define γ W ′ ,W to be the image of this path.
Leth ∶ Xk → Xk be the base-change of h. We choose a maximal tree T (resp. T ′ ) in the graph Γ = π 0 (S • (h)) ⩽1 (resp. Γ ′ = π 0 (S • (h)) ⩽1 ). After making these choices, we can apply Cor. 3.19 with Rmk. 3.21 to write the fundamental groups of (X,x) and (Xk,x). This way we get a diagram * top v πé t where (. . .) denotes the topological closure, ⟨R⟩ nc denotes the normal subgroup generated by the set R, and R 1 , R ′ 1 , R 2 , R ′ 2 are as in Rmk. 3.21. Note that, while the (connected components of the) fibre products X × X X, X × X X × X X are not necessarily normal nor satisfy π proét 1 (W ) = πé t 1 (W ), we can effectively work as if this was the case, see Rmk. 3.21.
Observation 4.15. The maps and groups above enjoy the following properties. a) By Lm. 3.27, the left vertical map is the Noohi completion of the obvious map of the underlying quotients of free topological products. b) By geometrical connectedness of the schemes in sight, we can (and do) identify π 0 (S • (h)) = π 0 (S • (h)), Γ ′ = Γ and T ′ = T c) As γ W ′ ,W 's are chosen to be the images of γ W ′ k ,Wk 's, we see that α (f ) abc 's appearing in R 2 , and so a priori elements of πé t 1 ( X v ,x v )'s, are in fact in πé t 1 ( X v,k ,x v ). It follows that R ′ 2 = R 2 d) The k-rational points x(W ) give identification πé t 1 (W,x W ) ≃ πé t 1 (Wk,x W ) ⋊ Gal k When W = X v for v ∈ Vert, we will write Gal k,v in the identification above to distinguish between different copies of Gal k in the van Kampen presentation of π proét 1 (X,x). e) As γ W ′ ,W is the image of the path γ W ′ k ,Wk on W ′ k , it maps to the trivial element of πé t 1 (Spec(k),x(W ),x(W ′ )) = Gal k . It implies, that the following diagram commutes πé t 1 (Wk,x(W )) πé t 1 (W,x(W )) Let P be a walk in Γ, i.e. a sequence of consecutive edges (with possible repetitions) e 1 , . . . , e m in Γ with an orientation such that the terminal vertex of e i is the initial vertex of e i+1 . Using the orientation of Γ, it can be written as ǫ 1 e 1 . . . ǫ m e m with ǫ i ∈ {±} indicating whether the orientation agrees or not. This will come handy as follows: define In the following, we will use ○ ? to denote the "composition of étale paths" and • ? to denote the multiplication in some group(oid) ?. When ? = π proét 1 (Xk,x) or π proét 1 (X,x), we will skip the subscript.
While we could just use ○ ? everywhere, it is sometimes convenient to keep track of when some paths "have been closed" by using • ? .
Step II: Defining regular loops in π proét 1 (Xk,x) Definition 4.16. An element γ ∈ Isom Cov Xk (Fx w , Fx v ) is called an étale path of special form supported on P if it lies in the image of the composition map above for some walk P starting in w and ending in v.
Any element (γ 2m , . . . , γ 1 ) in the preimage of such γ will be called a presentation of γ with respect to P .
For a walk P , denote by l(P ) the length of P , i.e. the number of consecutive edges (not necessarily different) it is composed of.
Observation 4.17. A useful example of a path of special form is the following. In the van Kampen presentation, the maps πé t (Xk,x) are given by is defined as follows: if P vx,v ⊂ T denotes the unique shortest path in the tree T ⊂ Γ (forgetting the orientation) from vx to v, then the choices of paths γ W ′ k ,Wk made when applying the van Kampen theorem give a unique étale path of special form γ v supported on P vx,v .
Before introducing the main objects of the proof, we note a simple result.
Proof. This follows from the continuity of the composition maps of paths and the fact that the statement is true for πé t 1 . To prove Thm. 4.13, it is enough to prove the following statement: any connected geometric covering Y of Xk can be dominated by a covering defined over a finite separable extension l k.
Indeed, let Y ′ ∈ Cov X l be a connected covering that dominates Y after base-change tok. By looking at the separable closure of k in l and using the topological invariance of π proét 1 , we can assume l k is separable. The composition Y ′′ = Y ′ → X l → X is an element of Cov X and there is a diagonal embedding Y ′ × Spec(l) Spec(k) → Y ′′ × Spec(k) Spec(k). By Prop. 2.37(5), the proof will be finished.
Let us fix a connected Y ∈ Cov Xk till the end of the proof and denote by S = Yx the corresponding π proét 1 (Xk,x)-set. Fix some point s 0 ∈ S and let U = Stab π proét 1 (Xk,x) (s 0 ).
supp. on P s = γ ⋅ s 0 and call it the set of "elements at v reachable in at most N steps".
The following is a crucial observation regarding O N v . Lemma. For any v and N , the set O N v is finite. Proof. We proceed by induction on N . For N = 1, the walks of length not greater than N starting in v 0 (are either trivial or) consist of a single edge whose initial vertex is necessarily vx. As Γ is finite, there are only finitely many such edges. Let us fix one, named e, with vertices v 0 , w. We need to show that the set 0 (x e ),x w ) is finite. However, as in general the sets πé t 1 (W,x 1 ,x 2 ) are (bi-)torsors under profinite groups (namely πé t 1 (W,x 1 ) and πé t 1 (W,x 2 )) and the maps and actions in sight are continuous, we see that the finiteness of this last set follows directly from finiteness of orbits of points in discrete sets under an action by a profinite group. Now, to see the inductive step, assume that the claim is true for N . To prove it for N + 1, note that any element in O N +1 v can be connected by a single edge to an element of O N w (for some vertex w). As O N w is finite and as we have just explained that, starting from a fixed point, one can only reach finitely many points by applying étale paths of special forms supported on a single edge, the result follows.
x v )-sets. We can find sets satisfying the first two conditions by applying Prop. 4.11, and the last condition can be guaranteed by choosing the C N v 's inductively (for a given v). We now proceed to define a subgroup of π proét 1 (X,x) that will lead to the desired π proét 1 (X,x)-set.
For that we need to find a suitably large subgroup of elements of U that are well behaved under the Galois action.
Here, the larger bullets correspond tox v i 's and the smaller ones to ∂ ǫ 0 or 1 (x(e i )). Remark 4.21. We find the definition involving C N v 's quite convenient. One could, however, avoid introducing C N v 's and make a slightly different definition. Define O N,+ v to be the set of (isomorphism classes of) Gal k -conjugates of the πé t 1 ( Let V 0 < π proét 1 (Xk,x) denote the subgroup generated by the set of regular loops and let V be its topological closure.
denote the topological group appearing in the van Kampen presentation above. We have that G Noohi = π proét 1 (Xk,x). LetG ⊂ π proét 1 (Xk,x) denote the subgroup of all étale paths (or "loops", rather) of special form, supported on walks from vx to vx.
Observation 4.22. By Obs. 4.17, the map G → π proét 1 (Xk,x) = G Noohi factorizes throughG. Directly from the definitions, there is V 0 <G. We are thus in the situation of Lm. 2.39. We will use it below.
For brevity, let us denoteḠ v = πé t 1 ( X v,k ,x v ) and G v = πé t 1 ( X v ,x v ) ≃Ḡ v ⋊ Gal k in the proofs below. Proposition 4.23. The following statements about the subgroup V hold: (1) There is a containment V < U . Proof.
(1) As any open subgroup is automatically closed, it is enough to show that any regular loop lies in U . Let g be a regular loop and write g = γ ′ ○ β ○ γ with γ, γ ′ étale paths of special form supported on some walk (and its inverse) from vx to v of length m, with presentations (γ 1 , . . . , γ 2m ) and (γ ′ 2m , . . . , γ ′ 1 ) of γ and γ ′ , and β ∈ ker(πé t 1 ( X v,k ,x v ) → C m v ), as in the definition of a regular loop. Let us introduce the following notation (and analogously for γ ′ ) For i = m, it follows from the condition on β that (β ○ γ) ⋅ s 0 = γ ⋅ s 0 . Similarly, the condition on The process continues in a similar fashion to show that g stabilizes s 0 , and thus belongs to U . (2) By Lm. 2.39, it is enough to check that the map G → Aut(G V 0 ) is continuous whenG V 0 is considered with the discrete topology.
Using the universal property of free topological products, continuity can be checked separately forḠ v and D. For D, this is automatic, as D is discrete. To see the result forḠ v 's, we need to show that the stabilizers of the action ofḠ v onG V 0 induced byḠ v → G are open. Fix [gV 0 ] ∈G V 0 and g ∈G representing it. The element g is represented by some étale path (or a "loop", in fact) of special form ρ supported on a walk P ρ of length l(P ρ ). By Obs. 4.17, the morphismḠ v →G ⊂ π proét 1 (Xk,x) is also defined using an étale path of special form γ v supported on a walk P vx,v in the tree T ⊂ Γ.
Then H v is open inḠ v and its image inG can be written as It follows from the setup that for β ∈ H v , (3) For each σ, the map ψ σ is continuous. As V = V 0 G Noohi , it is thus enough to prove that V 0 is Gal k -invariant. By Lm. 4.12, it follows that under the action of Gal k , an étale path of special form supported on a walk P is mapped again to an étale path of special form supported on P . Consequently, checking that the action of σ ∈ Gal k maps a regular loop g to another regular loop boils down to checking the following fact. If g has a presentation g = γ ′ ○β ○γ as in the definition of a regular loop, then • ψ σ (β) still acts trivially on C , depending on parity, still acts trivially on C ⌈i⌉ v ⌊i⌋+1 for every i. However, as the automorphism ψ σ on πé t 1 ( X v j ,k ,x v j ) matches conjugation by σ in πé t 1 ( X v j ,x v j ) restricted to its normal subgroup πé t 1 ( X v j ,k ,x v j ) and the sets C j v j were Galois as πé t 1 ( X v j ,x v j )sets, the result follows.
Lemma 4.24. For each v ∈ Vert, define an (abstract) Gal k,v -action on π proét 1 (Xk,x) to be Then there exists a finite extension l k such that for all v ∈ Vert the following hold: a) Gal l,v fixes V ; b) the obtained induced Gal l,v -action on S ′ can be written as c) the induced Gal l,v -action on S ′ is continuous and compatible with theḠ v -action.
Proof. As there are finitely many vertices v, it is enough to prove the statements for a single fixed v. Let g ∈ V . By definition of ρ v , there is By Prop. 4.23, we have ψ σ (g) ∈ V and we only need to show that γ −1 v ○ ψ σ (γ v ) ∈ V . By Lm. 4.18 and Obs. 4.17, the map Gal k ∋ σ ↦ ψ σ (γ v ) ∈ π proét 1 (Xk,x,x v ) is continuous, and we conclude that for an open subgroup of σ ∈ Gal k we have the desired containment.
It follows from the previous point that we get an induced action of Gal l,v on S ′ . Using that γ −1 v ○ ψ σ (γ v ) ∈ V , the alternative formula in the statement follows from the computation Let us move to the last point. Compatibility with theḠ v -action follows from Lm. 4.12(d) and the fact that the mapḠ v → π proét 1 (Xk,x) is defined by postcomposing with ρ v . To check continuity, fix [gV ].
By Lm. 2.39, this class is represented by a path (loop) of special form, and so we can assume this about g. Checking that the stabilizer of [gV ] is open boils down to checking that for an open subgroup of σ's in Gal l,v , one has g −1 • (γ −1 v ○ ψ σ (γ v )) • ψ σ (g) ∈ V . However, this follows from the openness of V and Lm. 4.18.
Proof. By the van Kampen theorem for π proét 1 (X l ,x), it is enough to show that there are continuous actions ofḠ v ⋊ Gal l,v 's and D compatible with theḠ v and D actions that S ′ is already equipped with, and such that the van Kampen relations are satisfied. We already have a continuous action by D on S ′ , and by Lm. 4.24, we get an action ofḠ v ⋊ Gal l,v .
We have finished our main proof, and thus the most difficult part of the exact sequence is now proven. We now obtain the final form of the fundamental exact sequence.
Proof. (End of the proof of Thm. 4.14) We already know the statements of the "moreover" part and the near exactness in the middle of the sequence. All we have to prove is that π proét can be performed after replacing π proét 1 (X,x) by any open subgroup U such that π proét 1 (Xk,x) < U < ○ π proét 1 (X,x). Choosing a suitably large finite field extension l k and looking at U = π proét 1 (X l ,x), we are reduced to the situation as in the proof of Thm. 4.13, i.e. we have enough rational points on the connected components we are interested in when applying van Kampen. LetG < π proét 1 (Xk,x) be the dense subgroup defined above Prop. 4.23. Note that by the van Kampen theorem applied to π proét 1 (X,x) together with the observations in Obs. 4.15, it follows that the subgroup generated byG and Gal k,v 's is dense in π proét 1 (X,x). Putting this together, it follows that it is enough to check that, for each v, conjugation by elements of Gal k,v fixesG in π proét 1 (X,x). This, however, follows from Lm. 4.12 c) d) and the fact that Gal k,v → π proét 1 (X,x) is defined as the composition Gal k,v → π proét 1 (X,x v ) ρv → π proét 1 (X,x), Remark 4.26. Let us revisit the counterexample of Ex. 4.5 from the point of view of the proof above. We will freely use the notation set there. In this example, we have started from the fixed point s 0 , and used the group elements to reach point s 1 = g m h m−1 . . . h 3 g 2 h 1 ⋅ s 0 . We have then concluded that s 2 = ζ ℓ m+1 ⋅ s 1 = g ⋅ s 1 and justified that the setup forces that this equality contradicts the possibility of extending the Galois action to the set S. The problem here is caused by the fact that, denoting γ = g m h m−1 . . . h 3 g 2 h 1 , the element γ −1 ○ g −1 ○ ζ ℓ m+1 ○ γ stabilizes s 0 , but it is not a "regular loop" in the language introduced above.
Of course, this only means that this particular "obvious" presentation is not as in the definition of a regular loop. But, by now, we know that it provably cannot be a regular loop with any presentation.
Remark 4.27. We sketch a slightly different approach to the central part of the main proof. It is a bit quicker, but less constructive, i.e. does not "explicitly" construct the desired Galois invariant open subgroup in terms of regular loops. We will freely use the fact that a surjective map from a compact space onto a Hausdorff space is a quotient map.
Assume that we have already done the preparatory steps of the main proof, i.e. we have increased the base field to have many rational points and applied the van Kampen theorem. We want to prove that the action Gal k × π proét 1 (Xk,x) → π proét 1 (Xk,x) given by ψ σ is continuous. Let G,G be as introduced above Obs. 4.22.
Firstly, one checks that any element ofG, i.e. a path of special form, can in fact be rewritten with a presentation that makes it visibly an image of an element of G, at the expense of the presentation possibly getting longer. In other words, the map G →G is surjective. By default,G is considered with the subspace topology from π proét 1 (Xk,x). Let us denote by (G, quot) the same group but considered with the quotient topology from G. We thus have a continuous bijection (G, quot) →G.
The group G is a topological quotient of the free topological product of finitely many compact groups G v and a finitely generated free group D ≃ Z * r . One checks from the universal properties that this free product can be written as a quotient of the free topological group F (Z) (see [AT08,Ch. 7.]) on a compact space of generators Z = ⊔ v G v ⊔ {1,...,r} * , i.e. the disjoint union of G v 's and r singletons.
By [AT08, Thm. 7.4.1], F (Z) is, as a topological space, a colimit of an increasing union . . . ⊂ B n ⊂ B n+1 ⊂ . . . of compact subspaces. These spaces are explicitly described as words of bounded length in F (Z) (this makes sense, as the underlying group of F (Z) is the abstract free group on Z). From this, it follows that (as a topological space) (G, quot) = colimK n , with K n = im(B n ).
Working directly with K n 's is inconvenient for our purposes, as these sets are not necessarily preserved by the Galois action. The reason is that the van Kampen presentation as a quotient of a free product uses fixed paths, while applying Galois action will usually move the paths. One then has to conjugate by a suitable element to "return" to the paths fixed in van Kampen, possibly increasing the length of the word.
Instead, we can consider subsets K ′ n ⊂G of elements that are paths of special form of length ⩽ n, i.e. possessing a presentation as a path of special form of length ⩽ n (see Defn. 4.16). By reasonably simple combinatorics, one can cook up "brute force" bounds f (n, d), g(n, d) ∈ N in terms of n and the diameter d = diam(Γ) of Γ such that K n ⊂ K ′ f (n,d) and K ′ n ⊂ K g(n,d) . In conclusion, (G, quot) = colim K ′ n in Top. By Lm. 4.12, the Gal k -action preserves the sets K ′ n and Gal k × K ′ n → K ′ n is continuous. As Gal k is compact, Gal k × (−) has a right adjoint Maps cts (Gal k , −) in Top and so Gal k × (colim n∈N K ′ n ) = colim n∈N (Gal k × K ′ n ). From this, we immediately get that Gal k × (G, quot) → (G, quot) is continuous. As the Gal k -action respects the group structure ofG, it quickly follows that the action is still continuous when (G, quot) is equipped with the weakened topology τ making open subgroups a base at 1, as in Lm. 2.25.
By (the easier part of) Lm. 2.39, this weakened topology on (G, quot) matches that ofG. It follows that Gal k ×G →G is continuous.
By Lm. 2.25 again, one has to check that the continuity is not lost when passing to the Raǐkov completion of the maximal Hausdorff quotient of (G, τ ). This in turn can be justified by similar arguments as in the proof of Lm. 2.39. This finishes the sketch. See also [BS15,Prop. 4 | 2019-10-30T17:54:12.000Z | 2019-10-30T00:00:00.000 | {
"year": 2019,
"sha1": "96723c62f68e6ee3b2f46dc39dcc769760820544",
"oa_license": "CCBY",
"oa_url": "https://msp.org/ant/2024/18-4/ant-v18-n4-p01-s.pdf",
"oa_status": "HYBRID",
"pdf_src": "ArXiv",
"pdf_hash": "96723c62f68e6ee3b2f46dc39dcc769760820544",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
225039743 | pes2o/s2orc | v3-fos-license | Development of an optical photon-counting imager with a monolithic Geiger APD array
We have developed a sensor system based on an optical photon-counting imager with high timing resolution, aimed at highly time-variable astronomical phenomena. The detector is a monolithic Geiger-mode avalanche photodiode array customized from a Multi-Pixel Photon Counter, with a response time on the order of nanoseconds. This paper evaluates the basic performance of the sensor and confirms its gain linearity, uniformity, and low dark count rate. We demonstrate the system's ability to detect the period of a flashing LED, using a data acquisition system developed to obtain light curves with a time bin of 100 microseconds. The Crab pulsar was observed using a 35-cm telescope without cooling, and the equipment detected optical pulses with a period consistent with the radio ephemeris. Although improvements to the system will be necessary for greater reliability, it has proven to be a promising device for exploring time-domain optical astronomy.
Introduction
A variety of high-energy astronomical phenomena, for example gamma-ray bursts (e.g. Kumar et al. 2015), fast radio bursts (e.g. Petroff et al. 2019), a neutron star merger (Abbott et al. 2017), X-ray binaries with quasi-periodic oscillations (e.g. van der Klis 2004) and pulsars (e.g. Enoto et al. 2019), are known to show short-timescale variability. To study such objects, instruments with a high time resolution will undoubtedly play a crucial role. In optical astronomy, very short time-scale phenomena, typically shorter than one second, still remain relatively unexplored compared with studies at other wavelengths such as radio, X-rays and gamma rays. Generally, owing to a power-law spectrum, the number of photons from nonthermal radiation is even higher at optical wavelengths than in X-rays or gamma rays. Since photon statistics are important for detecting time variability, optical observations may have an advantage over higher energies, even though they are often contaminated with thermal emission. The variability time scale τ restricts the size of the emission region as cτ /Γ = 3 (τ /1 ms)(Γ/100) −1 km, where c is the speed of light and Γ is the bulk Lorentz factor of the emitters. Highly time-resolved observations with plentiful photons could thus be a powerful tool for probing very small-scale structures.
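As a quick numerical check of the light-crossing estimate above, the following short Python sketch evaluates cτ/Γ for a few fiducial values; the numbers chosen are illustrative only and are not from the original work.

```python
# Illustrative check of the light-crossing size estimate c*tau/Gamma (not from the paper).
C_KM_S = 2.998e5  # speed of light in km/s

def emission_size_km(tau_s: float, gamma: float) -> float:
    """Upper bound on the emission-region size implied by a variability timescale tau."""
    return C_KM_S * tau_s / gamma

# tau = 1 ms and a bulk Lorentz factor of 100 give ~3 km, as quoted in the text.
print(emission_size_km(1e-3, 100.0))   # ~3.0 km
print(emission_size_km(1e-4, 10.0))    # ~3.0 km for a 0.1 ms, Gamma = 10 emitter
```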
For example, the Tomo-e GOZEN camera mounted on the Kiso-Schmidt telescope has demonstrated a wide field-of-view (FoV) and fast readout, namely 2 frames/s in full frame mode and ∼500 frames/s in partial readout mode (Sako et al. 2018).
Using photon-counting devices is another way to achieve fine time resolution, since such detectors have an extremely fast response time. A photomultiplier tube (PMT) has been the device of choice, offering fast response and a large internal gain, typically 10 6 to 10 8 . High Speed Photometer (Bless et al. 1999) once onboard the Hubble Space Telescope utilized a PMT to obtain the light curve of the Crab pulsar at visible and ultraviolet wavelengths with ∼ 20 µs resolution (Percival et al. 1993).
ARCONS (Mazin et al. 2013) is based on microwave kinetic inductance detectors developed for astronomy that enable photon counting and spectroscopy from visible to infrared wavelengths. Although ARCONS requires cooling by a cryostat, its excellent performance was demonstrated by the significant detection of the enhancement of the Crab optical pulse accompanying giant radio pulses (Strader et al. 2013).
Silicon semiconductor devices have also been studied intensively, in particular the family of Geiger-mode single-photon avalanche photodiodes (SPADs). OPTIMA discovered an X-ray and optical correlation from a black hole candidate, as well as magnetar flares (Stefanescu et al. 2008). Optical pulsations from millisecond pulsars were detected by SiFAP (Ambrosino et al. 2017) and Aqueye+ (Zampieri et al. 2019).
The Multi-Pixel Photon Counter (MPPC), also commonly known as a silicon photomultiplier, is a semiconductor photo-sensor that consists of many avalanche photodiodes ("cells" or "microcells") in a two-dimensional array (Yamamoto et al. 2006). Each cell works in the Geiger mode in order to achieve an internal gain of ∼ 10 6 , which allows even a single optical photon to be detected. All the cells are connected in parallel so that the net output signal is proportional to the number of photons detected at the same time across the cells. Its fast response, with a time jitter typically of 100 ps, is also suited for precise measurements of photon arrival times. However, MPPCs also generate "dark counts," which are spurious pulses that cannot be distinguished from real photon-detection signals. The dark count rate depends on temperature, the size of the active area, and the operating voltage. A typical dark count rate is several hundred kHz, so using MPPCs for astronomical observation is challenging, especially for faint sources. However, it should be noted that ground-based imaging atmospheric Cherenkov telescopes are exceptions; they actively employ MPPCs or SiPMs for their cameras. FACT (Anderhub et al. 2013) has been in operation for several years, and ASTRI-Horn (Lombardi et al. 2020) was developed within the framework of the Cherenkov Telescope Array (Acharya et al. 2013). In these systems, coincidence between many pixels and adequate trigger settings suppress the effects of dark counts. In a similar way, applying such a coincidence technique, Li et al. (2019) developed an optical observation system with MPPCs.
This paper presents an optical photon-counting observation system with uniquely customized MPPCs and a significantly reduced dark count rate. Such a simple and compact system could be easy to handle, carry and replicate, and is applicable to a variety of observational targets and strategies. This paper is organized as follows. Section 2 reports the structure and performance of the sensor system. Section 3 describes the data acquisition system and the instrument accuracy. Section 4 explains our observation procedures for celestial objects. Section 5 presents observational results for the Crab pulsar. The conclusions at the end discuss the overall performance of this prototypical system and future applications.
Figure 1 shows a customized MPPC as a dual-in-line package (Hamamatsu, S13361−9088). The sensing area consists of a 4 × 4 array of 100 × 100 µm 2 cells providing a total of 16 channels. Unlike commonly available products, the anodes of these individual cells are not connected to each other, and every cell works as an independent Geiger avalanche photodiode (GAPD). This structure gives the customized MPPC (hereafter GAPD array) sensitivity to the arrival position of photons, so that it can be operated as an imager. The advantage of a monolithic GAPD over an assembly of individual APDs is that its channel characteristics are naturally uniform. Typical photon detection efficiency (PDE) curves provided by the manufacturer 1 give an operating wavelength range of 300 − 900 nm, with a peak at about 450 nm. These PDE values include the filling factor of the sensitive area, so the absolute quantum efficiency can be estimated to be as high as 70% around the peak. Since the sensing area of each channel is much smaller than that of an MPPC, a lower dark count rate can be expected. For example, the dark count rate should be more than two orders of magnitude less than that of a commercial 1.3 × 1.3 mm 2 MPPC.
Pulse shape
We fed the output signal from a channel to two ×10 fast amplifiers (Philips, 775) connected in series, achieving a gain of ∼ 100, and then to an oscilloscope with a bandwidth of 1 GHz (Tektronix, MSO4014B-L). Figure 2 shows examples of pulse shapes with and without an after-pulse, which appears ∼ 80 ns later than the main pulse. During this measurement the detector was placed in a thermal chamber at 25 • C, and an over-voltage of V OV = 3.0 V was applied (see also the following subsection). A sharp pulse with a duration shorter than 5 ns is clearly seen, followed by a tail of the slow component. The smaller area of each channel is responsible for the short initial pulse width, due to the smaller capacitance and hence shorter time constant. Simply by using a leading-edge discriminator, the timing precision of the pulse detection is expected to be on the order of nanoseconds. After-pulses could cause an overestimate of the number of pulses. Since the wave heights are significantly different, however, they can be sufficiently well discriminated if the threshold is chosen properly.
Basic characteristics
The GAPD array was illuminated by a light emitting diode (LED) and the data acquisition was triggered by a synchronized pulse. The GAPD signal was amplified and recorded by a charge sensitive ADC (Hoshin, V005) with an integration period of 200 ns to include the slow component. Figure 3 shows the charge spectra for various bias voltages. The single photon peak and the pedestal are clearly separated and no other peaks, corresponding to two or more photons, are seen (as expected). This is because each cell detects only one photon at a time. Since the integration time is long enough to contain the after pulses, the single photon peak has a tail to the right. Apparently the fraction of the tail component increases with gain. This can be naturally understood by the known fact that the greater gain increases the probability of after-pulsing. Commonly known features of MPPCs are clear: the gains are proportional to the over-voltage; lowering the temperature decreases the breakdown voltage V br , and this leads to higher gain at the same bias voltage. The variation across the array of the breakdown voltages and gains at the operating voltage Vop = V br + 3.0 V are shown in figure 5. The dispersion is less than 1% and 3% in the V br and the gain, respectively.
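The linear dependence of the gain on over-voltage described above follows from the charge released when a cell discharges its capacitance. Below is a minimal sketch of that relation; the cell capacitance value is an illustrative assumption, not a measured parameter of this device.

```python
# Minimal sketch of the linear gain-vs-over-voltage relation of a Geiger-mode cell.
# The cell capacitance below is an assumed illustrative value, not a measurement from this work.
E_CHARGE = 1.602e-19  # elementary charge in coulombs

def gapd_gain(v_bias: float, v_breakdown: float, c_cell_farad: float = 85e-15) -> float:
    """Gain = C_cell * (V_bias - V_br) / e for a bias above breakdown, else 0."""
    v_ov = v_bias - v_breakdown
    return c_cell_farad * v_ov / E_CHARGE if v_ov > 0 else 0.0

# At 3 V over-voltage an 85 fF cell would give a gain of ~1.6e6, i.e. the ~10^6 scale quoted
# for MPPC-type devices; lowering the temperature lowers V_br and so raises the gain at a
# fixed bias voltage, as described in the text.
print(f"{gapd_gain(v_bias=53.0, v_breakdown=50.0):.2e}")
```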
Another important characteristic is the dark count rate. The bias voltage was set at Vop for all the following measurements. Figure 6 shows the dark count rate as a function of the threshold voltage for several temperatures. At each threshold, the rates calculated using time bins of 10 s are plotted. Below 15 mV, circuit noise is dominant. There is a stable plateau above 20 mV; above 60 mV, the rate drops sharply as the threshold exceeds the 1-photoelectron pulse height. Each plateau level indicates a dark count rate equivalent to a 0.5-photoelectron threshold. Note that each channel consists of a single cell and hence multiphoton components are not observed. More noteworthy is the low dark count rate, which is one-hundredth that of a commercial MPPC, as expected from the element area ratio. Figure 7 shows the dark count rate at the plateau for all 16 channels. The dark count rates of all 16 channels, with the exception of channel 9, are concentrated around 400 counts/s at room temperature. At lower temperatures, the rates drop for every channel, though the rate of decrease differs slightly from channel to channel. The characteristics of each channel are likely due to multiple factors and are not easy to identify, so we do not discuss them further in this paper. Two possible causes are a difference of intermediate levels in the band gap and a local concentration of the electric field due to poor pattern formation. What matters for observations is whether the dark count rate is higher than the night-sky background. This point is discussed later.
Finally, the optical crosstalk (OC) probability was evaluated. A fired cell sometimes emits photons that might trigger a Geiger discharge in neighboring cells. This is called OC, and it generates a correlated signal without incident photons. In general, a thicker enclosure lowers the OC probability (e.g., Asano et al. 2018), and the GAPD array used in this work has a silicone resin coating 0.7 ± 0.2 mm thick.
The OC pulses triggered by dark counts were observed with a bias voltage of Vop and a temperature of 0 • C. The waveforms of each cell were recorded at 1 Gsamples/s with > 500 MHz bandwidth by a VME-based waveform digitizer module (V1742; CAEN). An offline pulse search analysis was performed and the timing of the pulses was identified for each cell. Pairs of pulses from two different cells within a time window of 20 ns were selected (Otte et al. 2017), and the earlier pulse was considered to have initiated the later pulse via OC; pulses not satisfying this selection were removed. As a result, the number of surviving events is ∼ 5 × 10 4 . As expected, closer cells apparently tend to have a higher OC probability, and the highest probability of 2.8% was observed at channel 2, which is to the left of channel 6. Despite the geometrical symmetry of the package structure, an asymmetric OC distribution was observed. The reason for this asymmetry is not clear, but it is possibly due to slight differences in Geiger discharge probabilities among channels.
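The 20 ns pulse-pair selection described above could be sketched as follows; the data layout and names are assumed, and this is only an illustration of the coincidence logic, not the analysis code actually used.

```python
# Sketch of the crosstalk pulse-pair selection described in the text: for each pulse in a
# given cell, look for a pulse in a different cell within a 20 ns window and attribute the
# later pulse to optical crosstalk from the earlier one. Names and data layout are assumed.
from collections import Counter

def crosstalk_pairs(pulse_times_ns, window_ns=20.0):
    """pulse_times_ns maps channel id -> list of pulse times (ns); returns (initiator, victim) pair counts."""
    events = sorted((t, ch) for ch, times in pulse_times_ns.items() for t in times)
    pairs = Counter()
    for i, (t0, ch0) in enumerate(events):
        for t1, ch1 in events[i + 1:]:
            if t1 - t0 > window_ns:
                break
            if ch1 != ch0:                  # different cells only
                pairs[(ch0, ch1)] += 1      # earlier pulse treated as the initiator
    return pairs

# OC probability toward a given neighbour ~ pairs[(src, dst)] / number of dark pulses in src.
```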
Data acquisition and instrument accuracy
Figure 9 is a schematic view of the data acquisition system for recording light curves without dead time for any of the pixels. The output signal from the sensor is fed into an inverting amplifier circuit which employs fast AD8099 amplifiers. When a signal equivalent to 1 p.e. is generated, a 16-channel discriminator (CAEN, V895) generates digital pulses. The threshold level can be set independently for each channel, so the system is tolerant of pulse-height variations caused mainly by the gain and offset variations of the fast amplifier circuits. Finally, a scaler (CAEN, V830) counts the number of pulses in every 100 µs interval for each channel. The scaler is triggered by its internal clock and data are stored in an onboard buffer, independent of data transfer processes. This function enables our measurements to be completely free of dead time. Pulse-per-second (PPS) signals from a GPS receiver are also connected to the scaler. The GPS pulses tag a time stamp on the recorded light curve bins once every second, and the absolute time can be calculated using these tags and the clock of the DAQ computer synchronized by ntpd.
Fig. 9. Setup of the data acquisition system.
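A minimal sketch of how absolute times could be assigned to the 100 µs scaler bins from the one-per-second PPS tags described above is shown below; the data layout is assumed and this is not the actual DAQ code.

```python
# Sketch: assign an absolute UTC time to every 100-us scaler bin using the GPS PPS tags.
import numpy as np

def bin_times(pps_bin_indices, pps_utc_seconds, n_bins):
    """Assign a UTC time to every scaler bin by fitting a line through the PPS tags.

    A linear fit of tag time versus tagged bin index both interpolates between the
    one-per-second tags and corrects the slow drift of the quartz oscillator that
    defines the nominal 100-us bin width (the fitted slope is the true bin width).
    """
    idx = np.asarray(pps_bin_indices, dtype=float)
    utc = np.asarray(pps_utc_seconds, dtype=float)
    slope, intercept = np.polyfit(idx, utc, 1)
    return slope * np.arange(n_bins) + intercept
```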
To demonstrate the capability of period detection with high time resolution, we illuminated the GAPD array with repeating LED flashes and recorded the light curves. The frequency of the flashes was 30 Hz, triggered by a function generator with a pulse width of 100 ns. This measurement was performed at 0 • C. The exposure time was 60 seconds and all the pixels were measured simultaneously. Figure 10 shows the Fourier power spectra produced from the light curves for each pixel. For all pixels, a clear peak at 30.00 ± 0.02 Hz and the second and third harmonic peaks are clearly seen. The first peak frequencies are consistent with that of the light source, and the spectra are consistent with what is expected from the δ-function-like illumination light curve. The power spectrum of channel 9 has the worst signal-to-noise ratio, since the dark count rate of this channel is the highest among the 16 channels. Figure 11 shows the light curves folded at a frequency of 29.9997 Hz, which is the best value derived from the epoch folding technique. The number of bins per cycle is 333, which corresponds to ∼ 100 µs. The statistical error is estimated as ±0.0003 Hz following Larsson (1996), so the derived frequency is consistent with the light source (Agilent, 33120A) frequency of 30 Hz. The detection time stamp of the LED flash is very well contained in a single 100 µs time bin for all the pixels, as expected. The flat offset component in each light curve corresponds to the dark counts, which occur randomly and independently of the flash. One can also see from this figure that channel 9 has the highest dark count rate. We also confirmed the capability of higher-frequency detection using the LED by raising the flashing frequency to 1 kHz and above. For example, 999.9980 ± 0.012 (stat) Hz was obtained from a 1-minute observation of the 1 kHz light source.
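The period search described above, an FFT power spectrum of the binned light curve followed by epoch folding at a trial frequency, could be sketched as follows; variable names are illustrative and this is not the analysis code actually used.

```python
# Sketch of the period search: FFT power spectrum plus epoch folding of a 100-us binned light curve.
import numpy as np

DT = 1e-4  # 100-us bins

def power_spectrum(counts):
    """One-sided Fourier power spectrum of a binned light curve."""
    freqs = np.fft.rfftfreq(len(counts), d=DT)
    power = np.abs(np.fft.rfft(counts - np.mean(counts))) ** 2
    return freqs, power

def fold(counts, freq_hz, n_phase_bins=333):
    """Epoch-fold a binned light curve at a trial frequency (e.g. ~30 Hz for the LED test)."""
    t = np.arange(len(counts)) * DT
    phase = (t * freq_hz) % 1.0
    which = (phase * n_phase_bins).astype(int)
    profile = np.bincount(which, weights=counts, minlength=n_phase_bins)
    norm = np.bincount(which, minlength=n_phase_bins)
    return profile / np.maximum(norm, 1)
```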
From these demonstrations, we conclude that our detector and system are able to observe light curves with 100µs time resolution.
Observations
We decided on the Crab pulsar as the first target to demonstrate the performance of our time-resolving observation system. This was not only because the Crab pulsar is the brightest pulsar at visible wavelengths and its optical light curve is well known (e.g., Zampieri et al. 2014), but also because its ephemeris is continuously provided by a radio facility (Lyne et al. 1993) and it is monitored by various observatories from radio frequencies to gamma rays.
Instruments and observational setup
All the observations reported in this paper were conducted at the Yamagata Astronomical Observatory, which is located on a roof of Yamagata University.
The Cartesian coordinates of the observatory are (X, Y, Z) = (−3861744, 3200488, 3927194) m. It is in the middle of the city of Yamagata and is mainly for amateur use. Because of this environment, the night-sky background is expected to be considerably brighter than at other observatories. However, the light pollution from the city is not expected to be a problem in this work.
Observations were made with a 35-cm diameter telescope with commercially available Advanced Coma-Free optics (Meade, F8ACF), mounted on an equatorial mount (Takahashi, EM-400 FG-Temma2Z). We also employ an automatic guiding system (Laccerta, M-Gen) to improve the tracking accuracy. The focal length of the telescope is 2845 mm, so that the FoV of the sensor is ∼ 28.8″ × 28.8″, or 7.2″ × 7.2″ per channel. The equatorial mount is controlled by software (Stella Navigator version 10, Astro Arts). Only manual focussing is available, by turning a built-in knob by hand. The point spread function (PSF) was roughly estimated in advance as ∼ 40 µm, or ∼ 3″ (FWHM), by taking images of stars with a digital camera.
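As a quick check of the plate scale quoted above, a short illustrative calculation (not from the paper):

```python
# Plate-scale check: angular size = physical size / focal length, converted to arcseconds.
import math

ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0
focal_length_mm = 2845.0

print(0.4 / focal_length_mm * ARCSEC_PER_RAD)   # 4x4-channel array (400 um) -> ~29 arcsec total
print(0.1 / focal_length_mm * ARCSEC_PER_RAD)   # single 100 um channel -> ~7.2 arcsec
```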
We fabricated an imaging box consisting of a sensor jig fixed on an XY stage with an accuracy of 1 µm (Sigma Kouki, TAMM40-10C(XY)). The XY stage is remotely controlled via serial communication. We mechanically connected the imager box to the telescope as shown in figure 12. By moving the stages at the focal plane of telescopes, we could effectively cover a wider FoV by mosaic imaging even with the very small sensitive area of the GAPD array.
Procedure
Before starting observations, we measured the dark count rate for each channel with the telescope lid closed, and determined the corresponding threshold levels for each channel. The observatory is exposed to the open air and the temperature was not controlled. The sensor box has no mechanical cooling system, since no devices in the box generate significant heat. The air temperature during the observations was around 5 ± 5 • C and stable, so the threshold levels were not changed until the end. The large signal-to-noise ratio of every pulse enabled us to determine appropriate threshold voltages with wide margins. Therefore the system is tolerant of any gain variation due to temperature fluctuations.
The details of the operating procedure up to focussing on a star image are described in Appendix 1. After focussing, data for the flat-field correction and for the normalization of the detection efficiency were obtained as follows: every channel of the GAPD array was exposed to the same sky region by repositioning the XY stage. The correction coefficients were calculated so that the counts become the same after subtracting the corresponding dark counts for each channel. The exposure time was chosen to give a statistical error of 1% or less. From this measurement the dark-sky background rate was as high as ∼ 2 kcounts/s, which is significantly higher than the dark count rate at a temperature of 5 • C or so, even when compared to channel 9, which has the exceptionally high dark count rate.
The Crab pulsar region was observed from 11:45 to 13:35, 2019 March 9 (UTC), with elevation angles of 63 • .4 to 44 • .0. First, a mosaic of 5 × 5 images was taken, each with a 1-second exposure, to confirm the telescope alignment to the approximate position of the Crab pulsar. We then observed the region around the pulsar position, performing 4 × 4 mosaic acquisitions. Each frame of the mosaic has a duration of 1 minute. Completing all the frames of the mosaic takes approximately 20 − 25 minutes. The mosaic acquisitions were repeated seven times. As the Crab pulsar falls inside a single frame, the net effective exposure was 7 minutes. The sky condition was rather fine and stable. Figure 13 shows the count map around the Crab pulsar. This map is composed of 5 × 5 1-second exposures, where the dark counts have been subtracted and the flat-field correction is applied. The Crab nebula and surrounding stars are successfully imaged (see Appendix 2 for imaging of a wider field of the sky), although the Crab pulsar is too faint to be identified in this image. The count rate of the Crab pulsar can be estimated in two ways: one is derived from the well-known B- and V-band fluxes studied by many authors (e.g., Percival et al. 1993), and the other is from a fit to the count-rate versus magnitude correlation observed by our system. In this fit, only the two stars HIP26328 and HIP26159 were used and the slope was fixed assuming a count rate proportional to 10^(−M/2.5), where M is the magnitude of these stars. Both estimates yield consistent results of ∼ 100 − 200 counts/s, considering the systematic uncertainty in the convolution of the spectrum and the photon detection efficiency. No filters were used in this work, which may have caused a systematic error for both estimates.
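The count-rate versus magnitude estimate described above amounts to fitting a single normalization with the slope fixed by the magnitude definition. A minimal sketch follows; the reference magnitudes and rates referred to in the comment are placeholders, not measurements from this work.

```python
# Sketch of the count-rate vs. magnitude estimate with the slope fixed: rate = scale * 10**(-M/2.5).
import numpy as np

def predicted_rate(mag, scale):
    """Count-rate model with the slope fixed by the magnitude definition."""
    return scale * 10.0 ** (-np.asarray(mag) / 2.5)

def fit_scale(ref_mags, ref_rates):
    """Least-squares estimate of the single free normalization from reference stars."""
    basis = 10.0 ** (-np.asarray(ref_mags) / 2.5)
    return float(np.dot(basis, ref_rates) / np.dot(basis, basis))

# e.g. scale = fit_scale([6.88, m_hip26159], [r1, r2]) using the measured rates of the two
# reference stars, then predicted_rate(m_crab, scale) gives the expected pulsar count rate.
```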
Timing
The recorded data include light curves with 100-µs bins for each channel, the light curve of the PPS signal, and the time stamp when the run was started. We first identified the PPS-tagged light curve bins, which are used not only for calculating the absolute UTC time for each time bin but also for correcting the time drift of the quartz oscillator that controls the 100 µs bins onboard the scaler. We used the TEMPO2 package (Hobbs et al. 2006) to transform the times of arrival to the solar system barycenter (TDB). The timing parameters used are listed in table 1 and were provided by the Jodrell Bank observatory (Lyne et al. 1993; http://www.jb.man.ac.uk/pulsar/crab.html). Figure 14 shows an example of the folded light curve for the 1-min exposure data (run id 190309211033-1-0), without the subtraction of the dark counts and the flat-field correction. The results of a fit to a constant function are overlaid and the corresponding reduced χ 2 are also shown. When our detection criterion for the periodic signal is set at a confidence level of 0.5%, channels 11, 12, and 15 turned out to contain the Crab pulsar signal in this case. The same fit was applied to the dark-count- and flat-corrected data summed over the pixels with a period detection. Table 2 summarizes the number of pixels detecting the period, npix, and the resulting reduced χ 2 for each run. It can be seen that in most runs, pulses are detected across multiple pixels. This is not because the PSF is larger than the pixel size, but because of a continuous shift of the image position due to a tracking inaccuracy. This fact has been directly confirmed by digital camera images of stars and is also supported by the fact that the pixel where the significance of the period detection grows shifts from one to another over time, even in a single run.
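The folding and constant-fit test described above could be sketched as follows, assuming barycentred bin times and placeholder ephemeris parameters (not the values in table 1); this is an illustration, not the analysis code actually used.

```python
# Sketch: fold barycentred bin times with a pulsar ephemeris (frequency nu0 and derivative
# nudot at a reference epoch) and test for pulsation with a reduced chi-square against a
# constant profile. All parameter values are placeholders.
import numpy as np

def fold_with_ephemeris(t_bary_s, counts, nu0, nudot, t_ref_s, n_phase=50):
    """Summed counts per phase bin, with phase = nu0*dt + 0.5*nudot*dt**2 (mod 1)."""
    dt = np.asarray(t_bary_s) - t_ref_s
    phase = (nu0 * dt + 0.5 * nudot * dt ** 2) % 1.0
    idx = (phase * n_phase).astype(int)
    return np.bincount(idx, weights=np.asarray(counts), minlength=n_phase)

def reduced_chi2_vs_constant(profile):
    """Reduced chi-square of the folded profile against a constant (no-pulsation hypothesis)."""
    mean = profile.mean()
    var = np.maximum(profile, 1.0)   # Poisson-like variance per phase bin
    return float(np.sum((profile - mean) ** 2 / var) / (len(profile) - 1))
```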
A phase interval of 0.6 − 0.8 was defined as off-pulse and then the pulsed component was calculated by subtracting the averaged counts during the off-pulse. The derived count rates of the pulsed component are also listed in Table 2. These values are consistent with each other and with the predicted count rate as mentioned in the previous subsection, though run id 190309204926-0-0 shows a marginally lower count rate. This is probably because the Crab pulsar was on the edge of the FoV and a considerable fraction of the Crab flux could not be detected. npix = 1 supports this interpretation, and the corresponding pixel was channel 3, which was indeed located on the outer edge of the GAPD array. Power spectra are also calculated by Fast Fourier Transform (FFT) for all the period-detecting channel data. Figure 15 shows the power spectrum obtained from the light curve of all observations where such periodicity was detected. The normal mode can be identified at 29.6213 ± 0.0001 Hz in TDB time coordinate, which is consistent with the prediction from the radio ephemeris of the Crab pulsar, from 29.621208 down to 29.621206 Hz during the whole acquisition.
Finally, the integrated pulse profile as shown in figure 16 was obtained by summing all the period-detecting pixels. With the effective exposure of 7 minutes, both the main pulse and the interpulse were successfully detected. That the optical pulse slightly leads the 1.4 GHz radio pulse can also be observed.
Discussion and conclusion
The system developed in this study successfully detected optical pulses of the Crab pulsar using an amateur telescope under the relatively bright night sky of the city. The capability of single-photon detection and the high time resolution played a crucial role in this sensitivity. Our system has proved to be promising, even though further improvements to the equipment are required. Since the current DAQ system records only light curves with a time bin of 100 µs, there is potential for better timing resolution, for example by tagging a time stamp to each photon. On the other hand, finer time bins require a sufficient number of photons in each bin in order to discuss the time variability. Since this work was intended as a demonstration, only 7 minutes of data were obtained for the Crab pulsar. More photon statistics are needed to discuss the variability of the pulsed component, such as the short-time-scale stability as discussed in Karpov et al. (2007) and the few-% enhancements accompanying giant radio pulses (Shearer et al. 2003; Strader et al. 2013). To realize more efficient observations, longer exposures or mounting the system on a telescope with a larger collection area is required.
From the viewpoint of photometry, a larger sensor or a wider FoV is preferred. Astronomical objects with fast variability are by nature expected to be compact, so an expanded FoV is not necessarily required if the target position is well known, as for the Crab pulsar. With a larger FoV, however, the system will be more tolerant of pointing errors, limited tracking accuracy, and scintillation of the object. In order to achieve an accurate flux measurement, the whole image of the object should be contained in the FoV. At the same time, a larger GAPD area directly implies an increase in the number of readout channels. Thus a more integrated DAQ system employing an FPGA is under development in our project.
A much larger FoV, for example by a factor of > 10 2 , would enable this system to search for transients with uncertain position information such as gamma-ray bursts and (non-repeating) fast radio bursts. Searching for micro meteors from supernova ejecta (Siraj et al. 2020) might be another interesting target. In addition, observing reference stars in the same FoV could improve the photometric accuracy.
Another possible application could be observations of sub-second time-scale occultations of stars by small asteroids (Tanga and Delbo 2007) and Kuiper belt objects. Several projects worldwide are already pursuing such observations, such as TAOS II (Lehner et al. 2012), CHIMERA (Harding et al. 2016), OASES (Arimatsu et al. 2019) and Tomo-e GOZEN (Sako et al. 2018), using fast CMOS cameras. More opportunities to observe fainter and faster transients could be expected, since our GAPD array is more sensitive than CMOS sensors. Portability is also advantageous for multi-point observations of asteroidal occultations, which can be observed only at limited locations. Since the system is sufficiently small, for example requiring no powerful cooling equipment, it will achieve portability once the integrated DAQ, which is expected to have low power consumption and to be suited to mass production, is developed.
Mosaics with a one-second exposure were made until a defocused ring-like image of the bright star was found. Then many trials were conducted to adjust the focus by turning the built-in knob, checking a mosaic taken each time. At the same time, the star was moved to the center of the FoV by fine adjustment of the equatorial mount. ζ Tau is bright enough to saturate the output signals when the sensor is close to the focal plane. Second, the telescope was automatically slewed to HIP26328, which is a 6.88 mag star closer to the Crab pulsar. By checking the mosaic again, HIP26328 turned out to be away from the center of the FoV by ∼ 1 arcminute. This is probably due to a slight misalignment of the polar axis of the equatorial mount. Since no other imaging devices are equipped, mosaic imaging with the custom MPPC was the only way to confirm the star position and to allow manual readjustment of the direction of the telescope. The fine focus was adjusted until the image of the star was successfully contained in a single FoV. Next the telescope was slewed to HIP 26159, closer to the Crab pulsar than HIP 26328. Centering and fine focussing were checked. Indeed, the shift from the center of the image was only several pixels, and further focus adjustment was not necessary. Finally the telescope pointing was set to the Crab pulsar. Therefore the Crab pulsar was naturally expected to appear off-center in the FoV. This is consistent with the position where the pulsation was found, as shown in figure 13. Figure 17 summarizes the whole procedure of the observations.
Figure 18 shows a mosaic count map around the HIP 26328 region, on a logarithmic scale. This map is composed of 15 × 15 1-second exposures, where the dark counts are subtracted and the flat-field correction is applied. Mechanical vignetting caused by the sensor box is apparent. A 7-mm diameter hole is located ∼ 35 mm above the sensor and leads to a gradual, radial decrease of the background field. This trend is quantitatively confirmed using the ROBAST ray-trace simulator (Okumura et al. 2016). The map is fit by a disk with a radial gradient in addition to a two-dimensional Gaussian at the star position, which yields a PSF of 17.1″ (FWHM). This value indicates that the tail of the PSF contaminates the neighboring pixels, and it is slightly worse than for the star image located at (13, 16) in figure 13. This value is also worse than that achieved with a digital camera, as mentioned in section 4.1, probably owing to the manual focus. To achieve a better PSF and better reproducibility, an additional stage along the Z-axis is naturally required.
| 2020-10-23T01:00:49.181Z | 2020-10-22T00:00:00.000 | {
"year": 2020,
"sha1": "7f977d614a920de1e0f984396eb23d1603046cc4",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2010.11907",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "95458c5d3ae37258bc7802d24ba4f6eff0155bec",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
119365855 | pes2o/s2orc | v3-fos-license | Phase locking of a semiconductor double quantum dot single atom maser
We experimentally study the phase stabilization of a semiconductor double quantum dot (DQD) single atom maser by injection locking. A voltage-biased DQD serves as an electrically tunable microwave frequency gain medium. The statistics of the maser output field demonstrate that the maser can be phase locked to an external cavity drive, with a resulting phase noise of -99 dBc/Hz at a frequency offset of 1.3 MHz. The injection locking range, and the phase of the maser output relative to the injection locking input tone are in good agreement with Adler's theory. Furthermore, the electrically tunable DQD energy level structure allows us to rapidly switch the gain medium on and off, resulting in an emission spectrum that resembles a frequency comb. The free running frequency comb linewidth is ~8 kHz and can be improved to less than 1 Hz by operating the comb in the injection locked regime.
I. INTRODUCTION
Narrow linewidth lasers have a wide range of applications in communication technology, industrial manufacturing, and metrology. [1][2][3]. Unlike in atomic systems, where linewidths can approach 1 mHz [4][5][6], charge noise in semiconductor lasers typically leads to linewidths that are 10-100 times larger than the Schawlow and Townes (ST) prediction [7][8][9][10][11][12]. It is therefore often desirable to stabilize the frequency of solid state masers/lasers using existing narrow linewidth sources via the injection locking effect [13,14]. To achieve an injection-locked state, an external cavity drive is applied to the laser, resulting in stimulated emission at the frequency of the injected signal and a corresponding reduction in linewidth [15]. In addition to frequency stabilization, the precisely locked phase can be used as a resource for other metrology applications. For example, the phase of an injection-locked, trapped-ion-phonon laser has been proposed for applications in mass spectrometry and as an atomic-scale force probe [16].
In this paper we examine phase locking of a DQD semiconductor single atom maser (SeSAM) [17]. Driven by single electron tunneling events between discrete zero-dimensional electronic states, this device results in microwave frequency photon emission with a free-running emission linewidth of 6 kHz. Due to low frequency charge noise, the linewidth is still 50 times larger than the ST limit [7,9,17,18]. Here we use injection locking to significantly improve the performance of the SeSAM. In contrast with our previous work, which demonstrated injection locking of a multi-emitter maser, we directly measure the degree of phase stabilization in the injection locked state by examining the photon statistics of the output field [19]. The locked maser output achieves a phase noise better than L = -99 dBc/Hz (1.3 MHz offset). The locking phase and locking range are shown to be in good agreement with Adler's prediction [15].
Looking beyond single-tone narrow linewidth sources, the electrically tunable energy level structure of the SeSAM allows the gain medium to be switched on and off. We explore the output of the SeSAM in both free running and injection locked modes while the DQD energy levels are periodically modulated at frequency f [20]. When the SeSAM is unlocked, it outputs a frequency comb with a mode spacing of f and an 8 kHz linewidth. Under injection locking conditions, the linewidth of the modulated SeSAM frequency comb emission peaks is reduced to less than 1 Hz. These measurements demonstrate that a single cavity-coupled DQD may serve as a compact, low temperature microwave source that is suitable for use in quantum computing experiments.
II. DOUBLE QUANTUM DOT MICROMASER
The SeSAM is implemented in the circuit quantum electrodynamics architecture (cQED), where strong coupling has been demonstrated between microwave photons and a variety of mesoscopic devices [21][22][23][24]. As illustrated in Fig. 1(a), the maser consists of a single semiconductor DQD that is coupled to a microwave cavity [17]. The DQD gain medium is formed from a single InAs nanowire that is bottom gated to create an electrically tunable double-well confinement potential [25,26]. The DQD energy level detuning is gate-voltage-controlled and a source-drain bias V SD can be applied across the device to result in sequential single electron tunneling. DQD fabrication and characterization details have been described previously [17,18,27].
The cavity consists of a half-wavelength (λ/2) Nb coplanar waveguide resonator with a resonance frequency f c = 7596 MHz and quality factor Q c = 4300 [17,21,28]. Cavity input and output ports (with coupling rates κ in /2π = 0.04 MHz and κ out /2π = 0.8 MHz) are used to drive the SeSAM with the injection locking tone and to measure the internal field of the maser. The cavity output field is amplified and then characterized using either a spectrum analyzer (R&S FSV) or heterodyne detection. With heterodyne detection, the output field is demodulated by a local reference tone of frequency f lo to yield the in-phase (I) and quadrature-phase (Q) components [19,28]. When the cavity is driven by an injection locking tone, the local reference is always set to the injection locking tone frequency f lo = f in in order to measure the phase φ of the maser output field relative to the injection locking input tone.
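For readers unfamiliar with heterodyne readout, the demodulation step can be summarized schematically: the amplified cavity output is mixed with a local reference at f lo and low-pass filtered to give I and Q. The sketch below is only an illustrative digital equivalent of that step, not the actual acquisition chain; the array names and the simple averaging filter are assumptions.

```python
import numpy as np

def demodulate_iq(v, fs, f_lo):
    """Digitally demodulate a sampled voltage record v (sample rate fs)
    against a local-oscillator tone at f_lo, returning I and Q.
    Schematic stand-in for the hardware heterodyne chain described in
    the text; the filtering here is a crude average over the record."""
    t = np.arange(len(v)) / fs
    lo_i = np.cos(2 * np.pi * f_lo * t)
    lo_q = -np.sin(2 * np.pi * f_lo * t)
    i = 2 * np.mean(v * lo_i)
    q = 2 * np.mean(v * lo_q)
    return i, q

# Phase of the maser output relative to the injection tone (f_lo = f_in):
# phi = np.arctan2(q, i)
```

In practice a real chain would filter and decimate to a finite sample rate (the 12.3 MHz quoted later in the text) rather than average over the full record.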
With V SD = 2 mV applied, single electron tunneling is allowed when the DQD level detuning ε > 0. In this configuration a single electron tunnels down in energy through the device [see Fig. 1(a)] and the source-drain bias repumps the DQD to generate the population inversion necessary for photon gain in the cavity [17,18]. A trapped charge in the DQD forms an electric dipole moment that interacts with the cavity field with a rate g c /2π ≈ 70 MHz [28][29][30][31][32][33]. Inelastic interdot tunneling results in a combination of phonon and photon emission [18,26,34]. The gain mechanism of the SeSAM is similar to the single emitter limit of a quantum cascade laser, where a macroscopic number of electrons flow through quantum well layers and lead to cascaded photon emission [35].
The maser is first characterized in free-running mode with P in = 0 (no injection locking tone applied). Figure 1(b) plots the power spectral density of the output radiation, S(f ). The emission peak is well fit by a Gaussian with a FWHM Γ = 5.6 kHz that is 300 times narrower than the cavity linewidth κ tot /2π = f c /Q c = 1.8 MHz. The emission signal, and its narrow linewidth, are suggestive of an above-threshold maser state. Maser action is confirmed by measuring the statistics of the output field [17,18]. Figure 1(c) shows the two-dimensional histogram resulting from 1.7 × 10 7 individual (I, Q) measurements that were sampled at a rate of 12.3 MHz. Here f lo = f e = 7595.8 MHz, where f e is the emission frequency. The IQ histogram has a donut shape that is consistent with an above-threshold maser. However, the histogram clearly shows that the phase of the maser output samples all angles in the (I, Q) plane, which indicates there are large phase fluctuations in free running mode. The randomization of phase is attributed to charge noise, which leads to random fluctuations in ε [19]. In this paper we use injection locking to further improve the output characteristics of the SeSAM.
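The FWHM quoted above follows from a simple Gaussian fit to the measured spectrum. The routine below is a generic illustration of such a fit, not the authors' analysis code; the initial-guess heuristics are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(f, a, f0, sigma, offset):
    return a * np.exp(-(f - f0) ** 2 / (2 * sigma ** 2)) + offset

def fit_fwhm(freq_hz, psd):
    """Fit a Gaussian to a measured emission spectrum S(f) and return
    the FWHM in Hz. freq_hz and psd are arrays from the spectrum analyzer."""
    p0 = [psd.max() - psd.min(), freq_hz[np.argmax(psd)],
          (freq_hz[-1] - freq_hz[0]) / 10, psd.min()]
    popt, _ = curve_fit(gaussian, freq_hz, psd, p0=p0)
    sigma = abs(popt[2])
    return 2 * np.sqrt(2 * np.log(2)) * sigma  # FWHM = 2*sqrt(2 ln 2)*sigma
```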
III. INJECTION LOCKING RESULTS
We now investigate the degree to which the output characteristics of the SeSAM can be improved using injection locking. In Section III.A we present results showing that the maser emission can be phase locked by driving the input port of the cavity with an injection locking tone. In the injection locked state, the maser output field has a phase noise L = -99 dBc/Hz at f e = 7595.8 MHz (1.3 MHz offset). In Section III.B, we measure the phase of the maser output field relative to the injection locking input tone as a function of input frequency, and show that it is in good agreement with Adler's prediction. We then measure the injection locking range as a function of injection locking input tone power in Section III.C. The phase and frequency locking range measurements are consistent with each other, giving further evidence that the frequency locking observed in previous work is due to phase stabilization via the injection locking effect [19].
A. Phase Locking the SeSAM
We first demonstrate frequency narrowing of the maser emission relative to the free-running state using injection locking [19]. Figure 2(a) shows S(f ) as a function of the injection locking input tone power P in with f in = 7595.805 MHz set near the free running emission frequency f e for this device tuning configuration. For negligible input powers (P in < −125 dBm) the emission spectrum exhibits a broad peak near 7595.805 MHz with a typical FWHM Γ ≈ 6 kHz. Due to low frequency charge noise, the center frequency of the free-running emission peak fluctuates within the range f e = 7595.805 ± 0.005 MHz. With P in > -125 dBm, the broad tails of the emission peak are suppressed and the spectrum begins to narrow. The SeSAM eventually locks to the injection locking input tone around P in = -115 dBm. In the injection locked state, the large fluctuations in f e are suppressed and the measured linewidth is Γ ≈ 100 Hz, more than a factor of 50 narrower than the free-running case [36].
The IQ histograms in Figs. 2(b-h) show the evolution of the maser output phase relative to the injection locking input tone as P in is increased (for these data sets f lo = f in ). A movie showing the evolution with P in is included in the supplemental material [37]. For small P in < −120 dBm, the histograms shown in Fig. 2(b-d) have a ring shape. In contrast to the free-running histogram shown in Fig. 1(c), these histograms have an unequal weighting in the IQ plane. For example, the Fig. 2(d) histogram has a higher count density for phase angles around φ = -30°. The ring shape indicates that the relative phase of the injection locking input tone and the maser emission are unlocked, while the increased number of counts near a specific phase angle φ is due to stimulated emission at f in . The radius of the rings in the IQ-plane does not change significantly as P in is increased, which indicates that the total output power of the SeSAM is nearly constant and limited by the DQD photon emission rate. As P in is further increased, the phase distribution continues to narrow, consistent with the narrowing of the emission peak shown in Fig. 2(a) [19].
Around P in = −115 dBm the ring shaped IQ histogram evolves into a distribution that is localized within a relative phase φ ± ∆φ = φ ± 3σ φ,h = −40 ± 60°, as demonstrated in Fig. 2(e). Here φ = arctan(Ī, Q̄) is the maximally populated angle and σ φ,h is the measured standard deviation. In this configuration the phase of the maser output is locked to the injection locking input tone. The distribution in phase space is further narrowed with increasing P in as demonstrated by Figs. 2(f-h), where the relative phase is φ = −20 ± 12° for P in > −100 dBm. The P in value at which phase stabilization occurs is in good agreement with the value of P in where frequency locking occurs, as demonstrated in Fig. 2(a).
The detected phase fluctuations in the histograms have a standard deviation σ φ,h = 4° for P in > −100 dBm. These fluctuations have a contribution from the intrinsic maser output fluctuations with a standard deviation σ φ,0 and a contribution from amplifier background noise h amp [see Fig. 1(a)], which has ⟨h † amp h amp ⟩ = 42 [18,28,38]. The detected field α = I + iQ consists of α = α 0 + h amp , where α 0 = I 0 + iQ 0 is the cavity output. Given that α 0 is independent of h amp and ⟨h amp ⟩ = 0, the distribution in the detected phase φ h = arg(α) = arctan(I, Q) (in units of rad) has a standard deviation set by both contributions. After subtracting h amp , the maser output phase fluctuations have a standard deviation σ φ,0 = 1.5° within our detection resolution bandwidth RBW = 2.6 MHz. The average phase noise of the locked maser output near f e is then estimated to be L = σ 2 φ,0 /(2 RBW) = 1.3 × 10 −10 rad 2 /Hz or, equivalently, L = -99 dBc/Hz at a frequency offset of 1.3 MHz, when P in > −100 dBm. For comparison, the phase noise is 40-50 dB higher than that of a typical precision microwave source such as the Keysight E8267D.
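The conversion from the measured phase spread to a phase-noise level can be checked directly with the estimate L = σ 2 φ,0 /(2 RBW) used above. The short calculation below simply reproduces the quoted value from the numbers in the text.

```python
import numpy as np

sigma_phi_deg = 1.5          # intrinsic phase std. dev. quoted in the text
rbw_hz = 2.6e6               # detection resolution bandwidth from the text

sigma_phi_rad = np.deg2rad(sigma_phi_deg)
L_rad2_per_hz = sigma_phi_rad ** 2 / (2 * rbw_hz)   # L = sigma^2 / (2*RBW)
L_dbc_per_hz = 10 * np.log10(L_rad2_per_hz)

print(f"L = {L_rad2_per_hz:.2e} rad^2/Hz = {L_dbc_per_hz:.0f} dBc/Hz")
# -> roughly 1.3e-10 rad^2/Hz, i.e. about -99 dBc/Hz, matching the text.
```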
B. Phase Evolution Across the Injection Locking Range
We now investigate the relative phase φ between the maser output and the injection locking input tone across the full injection locking range. The insets of Fig. 3 show IQ histograms acquired with P in = −98 dBm at f in = 7595.64 MHz (left inset) and f in = 7595.73 MHz (right inset). With f in = 7595.64 MHz, which is detuned by 0.17 MHz from the free running maser frequency f e = 7595.81 MHz, the IQ distribution has a ring-like shape and thus the phase is unlocked. Note that in this regime the output is essentially the sum of two different tones, and this results in a noticeable offset in the ring. When f in approaches f e , the phase becomes localized within a small range, as demonstrated in the right inset, which shows a distribution that is limited to φ = 48 ± 15°. Here f in is detuned from f e by only 0.08 MHz.
The main panel of Fig. 3 shows φ as a function of f in with P in = −98 dBm. Within the indicated frequency range of ∆f in = 0.19 MHz, the histograms are similar to the right inset and show output phases in the range φ ∈ (−90°, 90°). The maser output is thus "phase locked" to the input tone when |f in − f e | is small.
The measured phase can be compared with predictions from Adler's theory, which analyzes the maser dynamics when the injection locking tone input power is small compared to the free running emission power [15]. The cavity output field in the lab frame is expressed in terms of the output power P out and the relative phase φ between the maser output and the injection tone, and the relative phase follows the Adler equation. In the injection locking range |f in − f e | < ∆f in /2, the Adler equation has a static solution for φ. Fluctuations in φ can be introduced by fluctuations in f e , and the intrinsic standard deviation σ φ,0 diverges near the boundaries of the injection locking range. Outside of this range φ is unlocked. The phase dependence predicted by the Adler equation is plotted as the blue curve in Fig. 3 and is in good agreement with our data.
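For orientation, the static solution of the standard (textbook) Adler equation can be written as sin φ = 2(f e − f in)/∆f in inside the locking range. The snippet below evaluates this form; the overall sign convention is an assumption here and may differ from the paper's own equations.

```python
import numpy as np

def adler_static_phase_deg(f_in_mhz, f_e_mhz, locking_range_mhz):
    """Static phase from the standard Adler equation,
    phi = arcsin(2*(f_e - f_in)/Delta_f_in), valid inside the locking range.
    Sign convention assumed (textbook form); outside the range the phase
    is unlocked and None is returned."""
    x = 2.0 * (f_e_mhz - f_in_mhz) / locking_range_mhz
    if abs(x) > 1:
        return None
    return np.degrees(np.arcsin(x))

# Rough consistency check against the values quoted in the text:
# f_e ~ 7595.81 MHz, locking range ~ 0.19 MHz, f_in = 7595.73 MHz
print(adler_static_phase_deg(7595.73, 7595.81, 0.19))  # ~ +57 deg vs ~48 +/- 15 deg measured
```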
C. Injection Locking Range
We next determine the frequency locking range from measurements of S(f ) and compare these data with the phase locking measurements presented in the previous section. Figure 4(a) shows a color-scale plot of S(f ) as a function of f in measured with P in = -98 dBm. Similar to our previous work [19], the input tone has little effect on the maser emission when f in is far-detuned from f e . As f in approaches f e , frequency pulling is visible and emission sidebands appear as a mixing between the injection locking input tone and the free running maser emission [19,39,40]. The maser then abruptly locks to f in , and remains locked to f in over a frequency range ∆f in = 0.18 MHz. The frequency locking range is consistent with the phase locking data shown in Fig. 3, which is measured at the same P in . By repeating these measurements at different P in , we obtain the data shown in Fig. 4(b), where ∆f in measured by the two methods is plotted as a function of P in . The measurements are in good agreement, verifying that the frequency locking we observe in measurements of S(f ) is due to the injection locking effect [19]. The black line in Fig. 4(b) is a fit to the power law relation ∆f in = A M √ P in , with the measured prefactor A M = (0.48 ± 0.16) × 10 6 MHz/ √ W, where the error bar is due to the uncertainty in the input transmission line losses. From theory, the prefactor is predicted in terms of the cavity prefactor C κ = 2 √(κ in κ out )/κ tot , which accounts for internal cavity losses and is obtained using cavity input-output theory, together with the maser output power P out [19,39]. The error bar on the predicted value is due to the uncertainty in κ in/out and the calibration of P out . We therefore find reasonable agreement between the data and the predictions from Adler's theory, considering the uncertainties in the transmission line losses.
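The square-root power-law fit quoted above can be sketched as follows. The (P in, ∆f in) arrays below are hypothetical placeholders chosen only to be roughly consistent with the quoted prefactor, not the measured data, and the final comment checks the quoted A M against the locking range observed at −98 dBm.

```python
import numpy as np
from scipy.optimize import curve_fit

def locking_range(p_in_w, a_m):
    """Adler-type power law: Delta_f_in = A_M * sqrt(P_in)."""
    return a_m * np.sqrt(p_in_w)

# Placeholder arrays (hypothetical); real values come from the S(f) maps
# and IQ histograms at each input power, with dBm converted to watts.
p_in_dbm = np.array([-110.0, -105.0, -100.0, -98.0])
delta_f_mhz = np.array([0.045, 0.08, 0.15, 0.19])

p_in_w = 10 ** (p_in_dbm / 10) / 1e3
a_m, _ = curve_fit(locking_range, p_in_w, delta_f_mhz, p0=[0.5e6])
print(f"A_M ~ {a_m[0]:.2e} MHz/sqrt(W)")

# Sanity check with the quoted prefactor A_M = 0.48e6 MHz/sqrt(W):
# at P_in = -98 dBm (about 1.6e-13 W), Delta_f_in ~ 0.48e6 * sqrt(1.6e-13)
# ~ 0.19 MHz, consistent with the 0.18-0.19 MHz locking range in the text.
```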
IV. MICROWAVE FREQUENCY COMB
We have so far examined the output characteristics of the SeSAM in free-running mode and under the influence of an injection locking tone. In this section, we investigate the output characteristics of the SeSAM while a periodic modulation is applied to the DQD energy levels, which modulates the gain medium. With the periodic modulation applied, we observe a comb-like emission spectrum, where the spacing between the emission peaks is set by the modulation frequency. The SeSAM frequency comb can also be operated under injection locking conditions, which leads to a dramatic narrowing of the emission peaks. The data presented in this section were acquired on a different device that has an emission frequency f e = 7782.86 MHz and linewidth Γ = 3 kHz.
The modulation method is described in Fig. 5(a), which plots the electron current I e and P out as a function of the detuning ε. In free-running mode, the maximum output power P out = 0.2 pW is obtained at an offset detuning ε 0 = 0.2 meV due to a strong phonon sideband [41]. We next modulate the gain medium by applying a sine wave to the DQD gates, such that ε = ε 0 + ε ac sin(2πf t).
Here ε ac and f are the amplitude and frequency of the detuning modulation. As shown in Fig. 5(a), the SeSAM emission power is strongly detuning dependent. Therefore the effective gain rate will be modulated by the sinusoidal gate drive [18]. Figure 5(b) plots S(f ) as a function of f with ε ac = 0.2 meV. We observe a central emission peak around 7782.86 MHz that is independent of the modulation frequency f . In addition to the central emission peak we observe a series of narrow emission peaks that shift away from the central emission peak as f is increased. Up to 4 emission sidebands are clearly observed on both the low and high frequency sides of the central emission peak. With such a large modulation amplitude applied, photoemission from the DQD will turn on and off at a beat frequency f . The beating in the time domain results in sidebands in the frequency domain, with a sideband spacing set by f . The black curve in Fig. 5(c) shows a line cut through the data in Fig. 5(b) at f = 0.2 MHz. The sidebands can be fit to a Lorentzian with a linewidth of 8 kHz, similar to the free-running maser linewidth Γ = 3 kHz.
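The origin of the sidebands can be illustrated with a toy time-domain model in which a carrier is gated on and off at the modulation frequency; its spectrum then shows peaks spaced by f. All parameters in the sketch below are illustrative stand-ins (the carrier is placed at an arbitrarily low frequency for convenience), not the experimental values.

```python
import numpy as np

fs = 200e6          # sample rate (Hz), illustrative
f_carrier = 20e6    # toy carrier standing in for the 7782.86 MHz emission
f_mod = 0.2e6       # modulation frequency, as in the 0.2 MHz line cut
t = np.arange(2 ** 18) / fs

# Gain switched on/off at f_mod: a square gate multiplying the carrier.
gate = 0.5 * (1 + np.sign(np.sin(2 * np.pi * f_mod * t)))
signal = gate * np.cos(2 * np.pi * f_carrier * t)

spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
# Peaks appear at f_carrier and at f_carrier +/- n*f_mod, mirroring the
# comb-like structure of Fig. 5(b)-(c).
```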
The linewidth of the emission peaks in the frequency comb can be significantly improved using the injection locking effect [3]. For example, the red curve in Fig. 5(c) shows S(f ) when the frequency comb is injection locked to an input tone at f in = 7782.86 MHz and P in = −108 dBm. Compared to the free-running frequency comb data, the peak height and linewidth of the injection locked frequency comb have been dramatically improved. In addition, we observe 2 additional sidebands on both the low and high frequency sides of the central emission peak. The inset of Fig. 5(c) shows S(f ) measured near the fourth sideband on the high frequency side of the central emission peak [near f = 7783.65 MHz, see rectangle in main panel of Fig. 5(c)]. The sideband is best fit to a Gaussian of width 0.9 Hz, which is most likely limited by the 1 Hz resolution bandwidth of the microwave frequency spectrum analyzer [36].
V. CONCLUSION AND OUTLOOK
We have presented experimental evidence of phase locking of a semiconductor DQD single atom maser (SeSAM). The statistics of the maser emission in the complex plane demonstrates that the SeSAM can be phase locked to an injection locking input tone resulting in a emission signal with a phase noise L = -99 dBc/Hz at a frequency offset of 1.3 MHz. Both phase and frequency locking data are shown to be in good agreement with Adler's prediction. In addition, we utilize the electrical tunability of the DQD energy level structure to modulate the DQD gain medium. The resulting emission spectrum is a frequency comb, where individual emission peaks in the comb have a linewidth of around 8 kHz. By injection locking the SeSAM, we reach linewidths < 1 Hz, an 8000-fold improvement. The SeSAM allows for studies of fundamental light-matter interactions in condensed matter systems. These measurements demonstrate that a single DQD may serve as a compact low temperature microwave source that is suitable for use in quantum computing experiments. | 2017-07-24T18:18:46.000Z | 2017-07-24T00:00:00.000 | {
"year": 2017,
"sha1": "24061cd6799fa4b223315856987bc390fc47426f",
"oa_license": "publisher-specific, author manuscript",
"oa_url": "https://link.aps.org/accepted/10.1103/PhysRevA.96.053816",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "24061cd6799fa4b223315856987bc390fc47426f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
266985352 | pes2o/s2orc | v3-fos-license | Continuous-wave GaAs/AlGaAs quantum cascade laser at 5.7 THz
Abstract Design strategies for improving terahertz (THz) quantum cascade lasers (QCLs) in the 5–6 THz range are investigated numerically and experimentally, with the goal of overcoming the degradation in performance that occurs as the laser frequency approaches the Reststrahlen band. Two designs aimed at 5.4 THz were selected: one optimized for lower power dissipation and one optimized for better temperature performance. The active regions exhibited broadband gain, with the strongest modes lasing in the 5.3–5.6 THz range, but with other various modes observed ranging from 4.76 to 6.03 THz. Pulsed and continuous-wave (cw) operation is observed up to temperatures of 117 K and 68 K, respectively. In cw mode, the ridge laser has modes up to 5.71 THz – the highest reported frequency for a THz QCL in cw mode. The waveguide loss associated with the doped contact layers and metallization is identified as a critical limitation to performance above 5 THz.
Introduction
Terahertz (THz) quantum cascade lasers (QCLs) have an important application as sources for high-resolution spectroscopy of rotational transitions in polar molecules and fine structure lines of selected atomic species. Specifically, they can be used as heterodyne local oscillators to pump Schottky diode mixers or arrays of superconducting NbN and MgB 2 mixers in astrophysical observations of the interstellar medium and planetary atmospheres [1]. Below 3 THz, Schottky diode frequency multiplier chains are the standard source for this application; however, their available power drops rapidly for higher frequencies. For this reason, QC-lasers have found a role on two recent heterodyne instruments - the upGREAT instrument on the SOFIA airborne observatory and the GUSTO ultra-long duration balloon observatory - which have targeted the neutral oxygen line [OI] at 4.74 THz [2], [3]. Indeed, THz QCLs provide milliwatt to tens-of-milliwatt levels of output power, which makes them appealing for pumping next generation heterodyne instruments with (many) tens of pixels. However, there are other compelling spectral lines above 5 THz that can be exploited, such as the elemental sulfur [SI] (5.32 THz) and iron [FeI] (5.52 THz), and doubly ionized nitrogen [NIII] (5.23 THz), oxygen [OIII] (5.79 THz), and iron [FeIII] (5.8 THz) [4]. Yet, there has been no demonstration of continuous-wave (cw) operation in a QCL above 5.26 THz, which is a necessary requirement for a local oscillator. The challenge in making THz QCLs above 5 THz lies in increased THz optical loss as well as a reduction in the intersubband (ISB) gain. Both effects are related to the proximity of the operating frequency to the Reststrahlen band of GaAs (8-9 THz) associated with optical phonon resonances, which makes the material highly absorptive and reflective [5], [6]. The main contributing factors to the waveguide loss above 5 THz are the increased losses from the GaAs phonons, the heavily doped contact layers, and the ISB absorption within the active region. While the GaAs phonon losses are inherent to the material and cannot be avoided, the metal cladding can be optimized by using the right materials and thicknesses, heavily doped contact layers can often be removed entirely, and the ISB absorption losses can be mitigated through active region design. The gain degradation above 5 THz is a result of increased nonradiative, thermally activated scattering of upper-state carriers to both lower and parasitic levels. For example, the nonradiative scattering rate between the upper and lower radiative states can be approximated with the thermally activated expression W 54 ≈ W hot 54 exp[−(E LO − E 54 )/k B T e ] (1), where W hot 54 is the scattering rate when the carriers in the upper radiative subband (labeled 5) have sufficient in-plane energy to emit a longitudinal-optical (LO) phonon and relax to the lower state (labeled 4). As the THz QCL frequency increases beyond 5 THz (E 54 > 20.7 meV), it approaches the LO phonon energy of GaAs (E LO = 36 meV) and the activation energy (E LO - E 54 ) is reduced. So far, these effects have limited the pulsed operation of QCLs above 5 THz to a maximum operating frequency of 5.6 THz [7] and only at temperatures below 100 K [8]. Additionally, there has been only one demonstration of cw operation above 5 THz, which was at 5.26 THz with a T max of 15 K [8].
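The scale of this gain degradation can be illustrated with a simple activation-energy estimate using E 54 = hf and E LO = 36 meV; the 100 K electron temperature in the sketch below is an assumed value for illustration only, not a measured quantity.

```python
import math

H_MEV_PER_THZ = 4.1357          # Planck constant in meV/THz
K_B_MEV_PER_K = 0.08617         # Boltzmann constant in meV/K
E_LO = 36.0                     # GaAs LO phonon energy (meV), from the text
T_E = 100.0                     # assumed electron temperature (K)

for f_thz in (4.7, 5.0, 5.4):
    e_54 = H_MEV_PER_THZ * f_thz          # photon energy (meV)
    e_act = E_LO - e_54                   # activation energy (meV)
    boltz = math.exp(-e_act / (K_B_MEV_PER_K * T_E))
    print(f"{f_thz} THz: E_54 = {e_54:.1f} meV, "
          f"E_LO - E_54 = {e_act:.1f} meV, exp factor at 100 K = {boltz:.2f}")
# The Boltzmann factor grows noticeably between 4.7 and 5.4 THz, i.e. the
# thermally activated scattering channel opens up as the frequency rises.
```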
In this work, we present strategies for optimizing the THz QCL active region and waveguide above 5 THz.We then demonstrate this improvement by growing and testing two devices -labeled D1 and D2 -that are designed for ∼5.4 THz operation.The first design D1 is an incrementally modified version of a previous design aimed at 4.7 THz (details in the Supplementary Material); the barrier thicknesses are the same, and the well widths have been changed slightly to scale the gain up to 5.4 THz.For the second design D2, however, we performed a systematic numerical modeling process to optimize the design for improved temperature performance.Finally, we present a brief discussion on the effectiveness of these strategies for cw mode of operation.
Active region design
We base our active region design strategy around the hybrid bound-to-continuum/resonant-phonon (BTC-RP) scheme [9], with high Al 0.25 Ga 0.75 As barriers (∼250 meV band offset) to suppress over-the-barrier leakage [10]; two examples of this are shown in Figure 1. The upper and lower radiative states are labeled 5 and 4, respectively (although at some biases, there can also be a significant oscillator strength between levels 5 and 3). Depopulation of the lower state(s) takes place through a combination of electronic scattering, tunneling, and finally fast LO-phonon scattering into the injector state 1. Also of concern are states 6 and 7, which can act as a second thermally activated parasitic current channel. It is convenient in our following discussion to refer to the simple relation for the peak ISB gain coefficient g 54 [Eq. (2)], in which the gain scales with the injection current density J into the upper state (e is the electron charge) and the transition oscillator strength f 54 , and inversely with the ISB transition linewidth Δ. A similar expression for the ISB gain associated with transitions from 5 → 3 can be written if needed.
Our strategy for improving >5 THz performance has the following elements. First, we use the well-known principle that the upper state lifetime τ 5 can be increased by reducing the wavefunction overlap between levels 5 and 4 [11]-[14]; i.e., making the radiative transition more spatially diagonal reduces almost all scattering rates out of level 5, including W 54 hot of Eq. (1). Increased diagonality, however, comes at the price of reduced oscillator strength f 54 , which in turn reduces g 54 . To quantify this effect, we performed a systematic numerical study using the nextnano nonequilibrium Green's function (NEGF) simulation package [15] to plot the peak gain coefficient for a set of 5.4 THz active regions where the level of diagonality was varied by changing the radiative barrier (RB) thickness from 12 to 24 Å, as shown in Figure 2(a), while small changes to well thicknesses were made to maintain the same transition frequency. While the active regions with a thicker RB have lower gain at low temperatures compared to the designs with a thinner RB, their gain degrades more slowly with increasing temperature. This slower gain degradation is a clear indication of the decreased electron-optical-phonon scattering of the upper state electrons due to a smaller spatial overlap of upper and lower state carriers [see Eq. (1)]. At low temperatures, the phonon scattering of electrons in the upper state is reduced, and the gain is higher for designs with a thinner RB due to a higher f 54 , as is also inferred from Eq. (2). The increased diagonality also results in a broader linewidth Δ, likely due to the increased effect of interface roughness scattering (see Figure 2(a) inset). Second, we consider the use of higher doping levels - which results in a larger maximum injected current density J max - to counter the reduced oscillator strength that accompanies the more diagonal transition [16]. Again, a set of NEGF numerical experiments is used to plot a nominal maximum operating temperature (T max ) value (i.e., the temperature where the peak gain coefficient is reduced to equal a waveguide loss coefficient of 25 cm −1 ) versus RB thickness for varying sheet doping densities, as shown in Figure 2(b). The improvement in T max is clearly seen for thick RBs (>14 Å) as the sheet doping density is increased. For very thick RBs (>20 Å), the T max eventually drops as the oscillator strength becomes too small. For thin RBs (<14 Å), however, the increased sheet doping density seems to decrease T max . This finding contrasts with the previous investigations of this behavior using a rate-equation model [16], and the difference can be attributed to the increased electron temperature associated with larger current density as well as increased electron-impurity scattering, both of which increase nonradiative scattering from the upper state into both lower and parasitic states. Therefore, there is an optimal range of radiative barrier thicknesses (18-20 Å) where current density is not excessively high, and a diagonal, high-doped design leads to the maximum improvement in T max .
Third, in addition to using tall Al 0.25 Ga 0.75 As barriers, we have the option of generating designs with overall thinner GaAs wells. While this pushes all of the subband energies up in the band structure, the effect is more pronounced for the parasitic states 6 and 7, since they have the character of excited quantum well states, whose energies scale as the square of the quantum number. This will help bring the design closer to a clean 5-level system by reducing coupling to the higher-lying parasitic states [10], [17], [18]. Additionally, this has the effect of increasing the depopulation energy (E 21 ) above that of the bulk GaAs LO phonon energy (36 meV). This is not believed to cause a large change in the depopulation rates [19]. However, it will have a significant benefit for QCLs >5 THz as the photon energy (E 54 > 20.7 meV) approaches E 21 . Since the injector state 1 typically holds the majority of the electronic population even at design bias, there is a strong ISB absorption at the energy E 21 . By increasing E 21 , the loss associated with the wings of the ISB transition lineshape at E 54 is effectively reduced. Note (to Table 1): E 54 is the energy spacing between upper and lower state, f is the oscillator strength of the transition, Δ 0 is the anticrossing gap (E 1'5 at design bias), E 75 is the energy spacing between the upper state and parasitic state, J max and J th are the experimental maximum current density and threshold current density (at 45 K, pulsed), respectively, and T max is the experimental maximum operating temperature. Design 1 is from wafer no. VB1400, and design 2 is wafer no. VB1401.
Using these strategies, we chose to proceed with two designs for experiments, as shown in Figure 1.The first design D1 is similar to a design previously tested in our group at 4.7 THz with the same injection and radiative barrier thicknesses (see Supplementary Material, Figure S1).The wells, however, are incrementally adjusted to scale the lasing frequency to 5.4 THz.The second design D2 is optimized using the previously discussed strategies informed by the NEGF simulations.The key design parameters for both designs are summarized in Table 1, and the layer sequences are listed in the Methods section.A radiative barrier of 17 Å is chosen for D2, which reduces f 54 and f 53 .To counter the reduced oscillator strength, a sheet doping density of 7.4 × 10 10 cm −2 is used.The downside of the higher doping density is the inevitable increase in the J max and the difficulty in achieving cw operation.Thus, a thicker injection barrier of 40 Å is chosen to slightly reduce J max .Next, thinner well widths are chosen for D2 to increase the upper-to-parasitic energy separation (E 75 ) to 62 meV and reduce scattering to parasitic states.Doing so, E 21 is also increased to 48 meV, which reduces the ISB losses.The simulated gain spectra for both designs are shown in Figure 3(a).It is immediately apparent that the gain spectra of D2 have higher peak gain values and broader linewidth.Additionally, the peak gain of D2 drops at a slower rate with temperature than D1 (inset of Figure 3(a)).
Device fabrication and testing
Both D1 and D2 structures were grown using molecular beam epitaxy (MBE) on GaAs substrates and fabricated into metal-metal (MM) Fabry-Pérot ridge waveguides using a Cu-Cu thermocompression bonding process [20], [21], followed by substrate removal and photolithographic definition and dry etching using a self-aligned metal mask (see Section 5). An example SEM of a fabricated ridge with a dry-etched facet is shown in Figure 3(b). Two fabrication runs were performed on these wafers. In the first fabrication, the top 100 nm-thick n + GaAs layer was left un-etched to minimize the parasitic voltage drop at the top Ti/Au metallic contact. In general, the ridge-waveguide lasers tested from this fabrication run (1 mm × 75 μm) showed poor performance. Design 1 did not lase at all when tested down to 40 K, and design 2 lased with a T max of 67 K, which is much lower than we expected from the simulations. We attribute this poor performance to excessive waveguide loss. This is justified by the simulated waveguide losses for these waveguides shown in Figure 3(b) (details of the simulation in Section 5). The main contributors to the total waveguide loss are the metal cladding, high-doped layer, and GaAs phonon losses, all of which increase strongly with frequency. It is noted that while the loss from the heavily doped contact layers is not significant at frequencies below 4 THz, its contribution increases strongly at higher frequencies as it approaches the GaAs plasma frequency (21.6 THz for 5 × 10 18 cm −3 doping). While removing the lower heavily doped layer was not possible once the wafer was grown, we refabricated MM ridge waveguides in which the upper heavily doped contact layer was etched away; the improvement is shown in Figure 3(b) (dashed blue line). Additionally, the top metal contact was changed to Ta/Cu/Ti/Au since improvements in THz QCL temperature performance have been observed by using Cu waveguides instead of Au waveguides [22]. Because the appropriate material parameters for thin metal films at low temperature are uncertain, our simulations are ambiguous on this point - not much improvement is predicted by switching to Ta/Cu below 5 THz, and the new metal stack can be slightly more lossy above 5 THz (dashed green line). Therefore, the potential improvement from the new metal stack is debatable. This new waveguide geometry reduces the waveguide loss by around 2.1 cm −1 at 5.4 THz (dashed black line). An additional improvement is achieved by testing a longer ridge, as the facet reflectance for MM waveguides gets smaller for higher frequencies [23], [24]. Full-wave simulations show that the facet reflectance is 0.57 at 5.4 THz for a 75 μm wide, 7 μm thick MM waveguide with dry-etched facets. Therefore, an additional 2.8 cm −1 reduction in loss is achieved by testing a 2 mm long ridge instead of 1 mm. In total, we expect an overall 4.9 cm −1 reduction for the second round of devices.
The pulsed light-current-voltage (L-I-V) data and spectra of both designs are shown in Figure 4. The ridge from D1 lased in pulsed mode with a T max of 83 K with modes from 5.31 to 5.61 THz (at 45 K). The ridges from D2 gave a higher pulsed T max of 117 K with modes spanning 4.76-6.03 THz (at 45 K). The characteristic temperature T 0 is extracted for both devices using the empirical relation for threshold current density versus temperature (J th = J 0 exp (T/T 0 )), as shown in the insets of Figure 4(a) and (c), and the values are 37 K and 63 K for D1 and D2, respectively. The higher T 0 of D2 is presumably a result of reduced upper state carrier scattering due to a larger E 75 and a more diagonal transition. Additionally, D2 has a higher peak power and dynamic range than D1. These devices also lased in cw mode, as shown in Figure 5, with a T max of 68 K and 60 K for D1 and D2, respectively. The cw spectrum of D2 spans 4.95-5.71 THz (at 45 K), while the spectrum of D1 has modes from 5.4 to 5.6 THz. Although D2 has a higher pulsed T max , the cw T max of D1 is higher in this case. This is because of the higher current level and power dissipated by the D2 device; in fact, a narrower ridge from D2 (15 μm wide) increased the cw T max of D2 to 68 K due to a more favorable thermal geometry and reduced power dissipation (see Supplementary Material, Figure S2). So far, the cw T max of D1 and D2 are tied at 68 K, even though D2 has a higher pulsed T max . This can be explained by noting that the power density for D1 is 4.4 MW/cm 3 , which is ∼4 times smaller than that of D2. This almost matches the ratio of ΔT D2 /ΔT D1 ∼ 3.3, where ΔT is the difference between pulsed and cw T max . This clearly demonstrates the importance of keeping the device's power density low for cw operation, and that the strategy for achieving high operating temperature in pulsed mode does not necessarily hold true for cw mode, unless it is paired with an effective waveguide design. For example, one way to mitigate this may be to improve heat removal from the sidewalls by using buried heterostructures [25], [26], although this makes the fabrication quite challenging. Alternatively, thinner active regions can be considered to improve heat extraction [27]. While this approach will slightly increase the waveguide loss and reduce the pulsed T max , it may be an effective way to reduce ΔT and increase the cw T max above the liquid nitrogen temperature (77 K).
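The T 0 extraction quoted above amounts to fitting J th (T) to the empirical exponential relation. The sketch below shows such a fit; the (T, J th ) pairs are hypothetical placeholders chosen for illustration, not the measured data plotted in the insets of Fig. 4.

```python
import numpy as np
from scipy.optimize import curve_fit

def jth_model(temp_k, j0, t0):
    """Empirical threshold relation J_th = J_0 * exp(T / T_0) used in the text."""
    return j0 * np.exp(temp_k / t0)

# Hypothetical (temperature, threshold current density) pairs:
temps_k = np.array([45.0, 60.0, 75.0, 90.0, 105.0])
jth_a_cm2 = np.array([750.0, 950.0, 1200.0, 1520.0, 1930.0])

popt, _ = curve_fit(jth_model, temps_k, jth_a_cm2, p0=[500.0, 60.0])
print(f"J_0 ~ {popt[0]:.0f} A/cm^2, T_0 ~ {popt[1]:.0f} K")
```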
Conclusions
In summary, we have proposed and demonstrated strategies for improving the operating temperature of THz QCLs above 5 THz. Furthermore, we identified the main source of losses for MM waveguides above 5 THz. Employing these design strategies and by improving the waveguide design, we have demonstrated ridge THz QCLs that emit in pulsed mode up to 6.03 THz and in cw mode up to 5.71 THz and achieved T max of 117 K in pulsed mode and 68 K in cw mode. This is the highest reported frequency and T max above 5 THz to date for a THz QCL, in both pulsed and cw mode. The relatively high power and broad spectra of D2 make this active region a suitable choice for the development of local oscillators in the 5-6 THz range. The devices reported here have not been optimized for output power or beam pattern - both can be improved by implementation of this active region in an end-fire antenna cavity [3] or a metasurface vertical-external-cavity surface-emitting-laser (VECSEL) configuration [28]. The fact that we observed lasing at various modes ranging over 4.76-6.03 THz from a single active region suggests that broadband tuning is also feasible in a tunable VECSEL [29]. Given the robust performance of these devices, and informed by NEGF numerical design optimization, it is likely that THz QCL operation above 6 THz is possible.
Active region transport modeling
The modeling of the 4-well BTC-RP design was done using a Schrödinger solver, and then the electron dynamics and gain spectra were investigated using the nextnano simulation package, which is based on an NEGF solver [15]. The NEGF model accounts for both coherent transport effects such as resonant tunneling and incoherent evolution such as scattering mechanisms (namely electron-electron, impurity, interface roughness, acoustic, and LO phonon scattering). We simulate two modules of the active region to include the full effect of coherence and tunneling for both laser states and the higher-lying channels. The energy range for the calculations is selected to be as large as the conduction band offset to account for all the high-level states. For the GaAs/AlGaAs system, we consider electron transport from carriers in the Γ-valley with an effective mass of 0.067m 0 , where m 0 is the free electron mass. The nonparabolicity is accounted for using a 3-band model. Electron-LO-phonon scattering is mediated through the Fröhlich interaction, and an LO phonon energy of 36 meV is used for GaAs. The NEGF simulations naturally include ISB gain/loss from all possible level pairs (especially 1 → 2); therefore, no active-region free-carrier loss is explicitly included in the waveguide simulations. Finally, although the dispersive effects of the phonon band are included in the material index, GaAs bulk loss is excluded from the model, so that it can be separately included in the waveguide loss model.
Finite element method simulation
The waveguide loss simulations are performed using COMSOL Multiphysics 6.1, using the electromagnetic wave, frequency domain (emw) interface in the radio frequency module.The MM waveguide is simulated in a two-dimensional cross section eigenmode solver and includes all the layers of the device as-fabricated: bottom metal contact: Ta/Cu, top and bottom high-doped GaAs layers, active region, and top metal contact: Ti/Au, and Ta/Cu/Ti/Au.The permittivity of the metallic and doped layers is fitted using the Drude model, and the Drude parameters are taken from [30]- [33], and they are listed in the supplementary material (Table S1).The loss is extracted from the imaginary part of the mode index.
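As a consistency check on the Drude treatment of the heavily doped contact layers, the screened plasma frequency quoted earlier (21.6 THz for 5 × 10 18 cm −3 doping) can be reproduced from textbook constants. In the sketch below the choice of the static GaAs permittivity (12.9) is an assumption that happens to reproduce the quoted value; the exact Drude parameters used in the COMSOL model are those of the cited references.

```python
import numpy as np

# Screened plasma frequency of a doped GaAs layer:
# f_p = (1/(2*pi)) * sqrt(n e^2 / (eps0 * eps_r * m*))
e = 1.602e-19           # elementary charge (C)
eps0 = 8.854e-12        # vacuum permittivity (F/m)
m0 = 9.109e-31          # free electron mass (kg)
m_star = 0.067 * m0     # GaAs effective mass, as in the NEGF model
eps_r = 12.9            # assumed static GaAs permittivity
n = 5e18 * 1e6          # 5e18 cm^-3 converted to m^-3

f_p = np.sqrt(n * e ** 2 / (eps0 * eps_r * m_star)) / (2 * np.pi)
print(f"f_p ~ {f_p / 1e12:.1f} THz")   # ~21.6 THz
```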
QCL growth and MM waveguide fabrication/characterization
The QCL active layer is grown on a GaAs wafer using molecular beam epitaxy.Finally, the back contact metallization Ti/Au (15/300 nm) is evaporated on the backside of the wafer piece.After fabrication is complete, laser ridges are cleaved and mounted on copper submount using indium bonding, and then they are wire bonded for characterization.The bottom contact pad is directly wire bonded to the exposed ground plane to minimize any parasitic voltage drops.The ridges are then characterized by mounting on a cold finger of a Stirling cycle cryocooler (Longwave Photonics).In pulsed mode, the ridges are biased with 500-ns pulse width and 100 kHz repetition rate, and the power is measured using a pyroelectric detector (Gentec).The absolute power is measured with a calibrated thermopile detector (Scientech).The spectra are measured with a Nicolet FTIR in continuous-scan mode, with the optical path purged by nitrogen gas.
Figure 2 :
Figure 2: NEGF simulation of 5.4 THz QCLs with varying radiative barrier (RB) thicknesses.(a) Peak gain coefficient versus temperature for different RB thicknesses.The insets show the gain spectra for RBs 12 Å and 16 Å versus temperature.(b) Temperature at which gain coefficient = 25 cm −1 versus RB thicknesses for different sheet doping densities (n sh ).
Figure 3 :
Figure 3: Simulated active region gain spectra and waveguide loss coefficient.(a) NEGF simulation of the gain spectra for various temperatures for design 1 (green) and design 2 (blue).The inset is a plot of the peak gain for D1 and D2 versus temperature.(b) COMSOL simulation of different components of the MM waveguide loss coefficient versus frequency, for a 75 μm wide and 7 μm thick ridge.Solid lines correspond to the first fabrication (Ti/Au top contact), and dashed lines correspond to the second fabrication (Ta/Cu/Ti/Au top contact, and the top high-doped contact layer removed).The inset shows the SEM of the fabricated MM waveguide.
Figure 4 :
Figure 4: Pulsed L-I -V data and spectra (at 45 K) of MM waveguide for (a, b) design 1 (2.2 mm × 75 μm) and (c, d) design 2 (2 mm × 75 μm).The inset of the L-I -V figures shows the T 0 parameter fitting, and (e) shows the spectrum for a shorter ridge (0.5 mm) from design 2 with modes up to 6.03 THz at 45 K.
Figure 5 :
Figure 5: Continuous-wave L-I -V data and spectra for (a) design 1 and (b) design 2. The insets show the corresponding cw spectra at 45 K.
Table 1 :
Summary of key design parameters and experimental results.
The growth sequence for the two devices used in this work is listed below. The layer thicknesses are given in angstroms, the Al 0.25 Ga 0.75 As barriers (every second value in each sequence) are in boldface, and the Si-doped layers are underlined. Design 1: 93/14/112/28/85/31/161/37, where the middle 59 Å of the underlined layer is doped at 5 × 10 16 cm −3 (wafer No. VB1400). Design 2: 86/17/97/28/75/31/147/40, where the entire underlined layer is doped at 5 × 10 16 cm −3 (wafer No. VB1401). The growth starts with a GaAs buffer layer followed by a 200 nm Al 0.55 Ga 0.45 As etch-stop layer and a 100 nm high-doped GaAs layer (5 × 10 18 cm −3 ). A total of 118 and 127 repetitions of QCL stages (GaAs/Al 0.25 Ga 0.75 As) are grown for D1 and D2, respectively. The growth is then followed by a 50 nm high-doped GaAs layer (5 × 10 18 cm −3 ) as well as a 10 nm very highly doped GaAs layer (5 × 10 19 cm −3 ) and a low-temperature grown GaAs cap layer. Total epitaxial thickness is 7 μm. The fabrication starts by Ta/Cu (10/300 nm) evaporation on the epitaxial wafer piece and a receptor GaAs wafer. The pieces are then bonded in vacuum by thermocompression bonding at 350 °C for 1 h, followed by 1 h of anneal time at the same temperature. Next, the substrate of the epitaxial pieces is mechanically lapped until ∼50 μm remains, and the rest is removed using a citric acid selective wet etch. The Al 0.55 Ga 0.45 As etch stop layer is then etched by dipping in a hydrofluoric acid solution for a few seconds. Next, the 100-nm-thick heavily doped GaAs layer is removed by wet-etching. The top metal contact, either Ti/Au/Ni (15/250/200 nm) or Ta/Cu/Ti/Au/Ni (10/135/20/150/200 nm), is then defined using photolithography and is used as a self-aligned mask to etch the active region with a BCl 3 /Cl 2 ICP-RIE dry etch. The etch is stopped at the bottom copper layer to enable direct wire bonding to the ground plane. The Ni etch mask is then chemically removed, leaving behind the exposed Au surface. | 2024-01-16T14:05:12.626Z | 2024-01-16T00:00:00.000 | {
"year": 2024,
"sha1": "8f27c7b633b9b2b8eccaca2cb2786a801cf5753e",
"oa_license": "CCBY",
"oa_url": "https://www.degruyter.com/document/doi/10.1515/nanoph-2023-0726/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c6d376e18df61c071dceda75ccaa5678eabd15ec",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": []
} |
10851492 | pes2o/s2orc | v3-fos-license | A cross-sectional assessment of the burden of HIV and associated individual- and structural-level characteristics among men who have sex with men in Swaziland
Introduction Similar to other Southern African countries, Swaziland has been severely affected by HIV, with over a quarter of its reproductive-age adults estimated to be living with the virus, equating to an estimate of 170,000 people living with HIV. The last several years have witnessed an increase in the understanding of the potential vulnerabilities among men who have sex with men (MSM) in neighbouring countries with similarly widespread HIV epidemics. To date, there are no data characterizing the burden of HIV and the HIV prevention, treatment and care needs of MSM in Swaziland. Methods In 2011, 324 men who reported sex with another man in the last 12 months were accrued using respondent-driven sampling (RDS). Participants completed HIV testing using Swazi national guidelines as well as structured survey instruments administered by trained staff, including modules on demographics, individual-level behavioural and biological risk factors, social and structural characteristics and uptake of HIV services. Population and individual weights were computed separately for each variable with a data-smoothing algorithm. The weights were used to estimate RDS-adjusted univariate estimates with 95% bootstrapped confidence intervals (BCIs). Crude and RDS-adjusted bivariate and multivariate analyses were completed with HIV as the dependent variable. Results Overall, HIV prevalence was 17.6% (n=50/284), although it was strongly correlated with age in bivariate- [odds ratio (OR) 1.2, 95% BCI 1.15–1.21] and multivariate-adjusted analyses (adjusted OR 1.24, 95% BCI 1.14–1.35) for each additional year of age. Nearly three-quarters (70.8%, n=34/48) of those living with HIV were unaware of their status. Condom use with all sexual partners and condom-compatible-lubricant use with men were reported by 1.3% (95% CI 0.0–9.7). Conclusions Although the epidemic in Swaziland is driven by high-risk heterosexual transmission, the burden of HIV and the HIV prevention, treatment and care needs of MSM have been understudied. The data presented here suggest that these men have specific HIV acquisition and transmission risks that differ from those of other reproductive-age adults. The scale-up in HIV services over the past decade has likely had limited benefit for MSM, potentially resulting in a scenario where epidemics of HIV among MSM expand in the context of slowing epidemics in the general population, a reality observed in most of the world.
Introduction
Swaziland is a small, land-locked, lower-middle-income country that is surrounded by South Africa and Mozambique; it has a population of approximately 1.1 million people and a life expectancy at birth of approximately 48 years [1]. Similar to other Southern African countries, Swaziland has been severely affected by HIV, with over a quarter of its reproductive-age adults (15–49) estimated to be living with the virus, equating to an estimate of 170,000 people living with HIV [2]. Moreover, the incidence of HIV appears to have peaked in 1998–1999 at 4.6% [95% confidence interval (CI) 4.27–4.95], according to estimates by the Joint United Nations Programme on HIV/AIDS (UNAIDS), while in 2009 it was estimated to be 2.7% (95% CI 2.2–3.1%) [3–6]. There appear to have been further declines in incidence according to 6054 person-years of follow-up data from 18,154 people followed from December 2010 to June 2011 as part of the Swaziland HIV Incidence Measurement Survey (SHIMS) longitudinal cohort. Overall incidence was approximately 2.4% (95% CI 2.1–2.7%), with incidence estimated to be 3.1% (95% CI 2.6–3.7) among women as compared to 1.7% (95% CI 1.3–2.1) among men [7]. Indeed, women and girls have been more burdened with HIV than men throughout the history of the HIV epidemic in Swaziland, with the HIV prevalence among women 15–24 in 2006 being estimated to be 22.6% compared to 5.9% among age-matched men and boys [5].
The 2009 Swaziland Modes of Transmission study characterized major drivers of incident HIV infections to be multiple concurrent partnerships before and during marriage as well as low levels of male circumcision [8]. These risk factors were confirmed in the SHIMS study, with risk factors for incident HIV infections among both men and women including not being married or living alone, having higher numbers of sex partners and having serodiscordant or unknown HIV status partners [7]. There are no known HIV prevalence estimates for key populations in Swaziland, including female sex workers (FSW) or men who have sex with men (MSM) [9,10]. The 2009 Swazi Modes of Transmission Study indicates that both sex work and male–male sexual practices are reportedly infrequent and assumed to be minor drivers of HIV risks in the setting of a broadly generalized HIV epidemic. However, the prevalence of these risk factors has not been measured in the HIV surveillance systems that are used to inform the Modes of Transmission Surveys [11]. The last several years have witnessed an increase in the understanding of the potential vulnerabilities among these same key populations through targeted studies including MSM in neighbouring countries with similarly widespread HIV epidemics [12,13].
The largest body of data is available from South Africa, where the first study completed in 1983 of 250 MSM demonstrated a high prevalence of HIV, syphilis and hepatitis B virus [14]. More recently, a study of rural South African men found that approximately 3.6% of men studied (n = 46) reported a history of having sex with another man [15]. Among these men, HIV prevalence was 3.6 times higher than among men not reporting male partners (95% CI 1.0–13.0, p = 0.05) [16]. There have also been several targeted studies of MSM in urban centres across South Africa that consistently highlight a population of men who have specific risk factors for HIV acquisition and transmission and limited engagement in the continuum of HIV care [17–19]. Relatively recent studies from other countries, including Lesotho, Malawi, Namibia and Botswana, have shown similar diverse populations of MSM [16,20,21]. Diversity among populations of MSM across Southern Africa manifests through diverse sexual orientations and practices ranging from those who are gay identified, with primarily male sexual partners, to those who are straight identified, with both male and female sexual partners [22]. Diversity has also been measured in the range of HIV-related risk practices among MSM, including understanding of the HIV acquisition and transmission risks associated with unprotected anal intercourse and of the levels of use of condoms and condom-compatible lubricants (CCLs) [23].
To better characterize vulnerabilities and HIV prevention, treatment and care needs among MSM in Swaziland, a cross-sectional assessment was completed to provide an unbiased estimate of the prevalence of HIV and syphilis among adult MSM in Swaziland. This study was completed in equal collaboration with the Swaziland National AIDS Program (SNAP) in the Ministry of Health. This study further sought to describe the significant correlates of prevalent infections, including individual behavioural characteristics, and describe social and structural HIV-related factors and risks for HIV infection among MSM.
Methods
Sampling MSM in Swaziland were recruited via respondent-driven sampling (RDS), a peer referral sampling method designed for data collection among hard-to-reach populations [24]. Potential participants were required to be at least 18 years of age, report anal sex with another man in the previous 12 months, be able to provide informed consent in either English or siSwati, be willing to undergo HIV and syphilis testing and possess a valid recruitment coupon.
Survey administration and HIV testing
All participants completed face-to-face surveys and received HIV and syphilis tests on site. Surveys were administered by trained members of the research staff and lasted approximately one hour. The study was completely anonymous and did not collect any identifiable information; we used verbal rather than signed consent to further ensure anonymity. Questions on socio-demographics (e.g., age, marital status and education), behavioural HIV-related risk factors (e.g., HIVrelated knowledge, attitudes and risk behaviours) and structural factors (e.g., stigma, discrimination and social cohesion) were included [25]. HIV and syphilis tests were conducted by trained phlebotomists or nurses, according to official Swazi guidelines. Test results, counselling and any necessary treatment (for syphilis) and/or referrals (for HIV) were provided on site. Participant surveys and test results were linked using reproducible, yet anonymous, 10-digit codes.
Analytical methods
Population and individual weights were computed separately for each variable by the data-smoothing algorithm using RDS for Stata [26]. The weights were used to estimate RDS-adjusted univariate estimates with 95% bootstrapped confidence intervals (BCIs). Crude bivariate regression analyses were also conducted to assess the association of HIV status with demographic variables as well as a selection of variables either expected or shown to be associated with HIV status in the literature. All demographic variables were then included in the initial multivariate logistic regression model regardless of the estimated strength of their crude bivariate association with HIV status. Non-demographic variables were included in the initial multivariate model if the chi-square p value of association with HIV status was ≤0.25 in the bivariate analyses. Most of the demographic variables, however, dropped out of the final model after controlling for other independent variables.
Because regression analyses of RDS data using sample weights are complicated by the fact that weights are variable-specific [27], RDS-adjusted bivariate and multivariate analyses were conducted using individualized weights that were specific to the outcome variable (i.e., HIV status) [27]. The adjusted odds ratio (aOR) estimates were not statistically different from the unadjusted estimates in the bivariate analyses, although some slight differences were observed in the multivariate analyses. Thus, only the unadjusted odds ratios (ORs) are reported for bivariate analyses, while both are presented in Table 1 for multivariate analyses. All data processing and analyses were conducted using Stata 12.1 [28].
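The study's models were fitted in Stata; for readers who want to reproduce the general approach, the sketch below shows an outcome-weighted logistic regression of the kind described above in Python. It is illustrative only: the file name, column names, weight variable and covariate set are hypothetical stand-ins, not the study's actual variables or weights.

# Hypothetical sketch of an RDS-weighted multivariate logistic regression;
# not the authors' code. The input file and column names are placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("msm_survey.csv")            # hypothetical analysis file

# Individualized RDS weights specific to the outcome variable (HIV status)
weights = df["rds_weight_hiv"]

# Design matrix: demographics plus non-demographic variables that met the
# bivariate screening criterion (chi-square p <= 0.25)
X = sm.add_constant(df[["age", "ever_jailed", "n_casual_partners",
                        "sti_diagnosis_12m", "easy_condom_access"]])
y = df["hiv_status"]                          # 1 = living with HIV, 0 = not

# Weighted logistic regression: GLM with a binomial family and frequency weights
model = sm.GLM(y, X, family=sm.families.Binomial(), freq_weights=weights)
result = model.fit()

# Adjusted odds ratios (aOR) with 95% confidence intervals
or_table = np.exp(pd.concat([result.params, result.conf_int()], axis=1))
or_table.columns = ["aOR", "2.5%", "97.5%"]
print(or_table.round(2))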
Missing data
Eleven out of the 324 participants were excluded from this analysis due to missing data on key RDS-related variables. There were 29 out of 313 participants with missing data on at least one variable used in the multivariate analyses. Only two variables had data missing for more than three participants: age at first sex with another man (n missing = 4) and knowledge about the type of anal sex position that puts you most at risk of HIV infection (n missing = 6). Two of the 29 participants with missing data were living with HIV; thus, the effective crude HIV prevalence used in the multivariate model was 17. Although the total number of cases with missing data is not very small (9.3%: 29/313), the number missing by variable is very small. Due to the small change in HIV prevalence in the analysis sample compared to the complete sample as shown in this article, no effort was made to impute missing data. The 29 cases were excluded in the multivariate regression models.
Sample size calculation
The sample size was calculated based on the ability to detect significant differences in condom use among MSM living with HIV and those not living with HIV. There were no known estimates of condom use among MSM in Swaziland, but previous studies of MSM from nearby countries estimated that consistent condom use during anal sex with other men among MSM is approximately 50% [19]. This sample size facilitates the detection of significant differences in HIV-related protective practices, such as consistent condom use, and targeted HIV-prevention measures, and is sufficient for key social factors such as experiences with stigma and discrimination.
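The passage above is truncated before the assumed difference in condom use is stated, so the sketch below simply illustrates a standard two-proportion power calculation consistent with the 50% baseline; the 20-percentage-point difference, alpha and power shown are placeholders rather than the study's actual design parameters.

# Illustrative two-proportion sample size calculation (placeholder values only).
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_reference = 0.50   # consistent condom use reported among MSM in nearby countries
p_comparison = 0.30  # hypothetical difference to detect (not stated in the excerpt)

effect_size = proportion_effectsize(p_reference, p_comparison)
n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"Required sample size per group: {n_per_group:.0f}")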
Ethics
The study received approval for research on human participants from both the National Ethics Committee of Swaziland as well as the Institutional Review Board of the Johns Hopkins Bloomberg School of Public Health.
Results
Three hundred and twenty-four men were accrued from six seeds over a range of between 1 and 14 waves of accrual, with the largest recruitment chain including 123 participants. Participant characteristics are summarized in Table 2 and Table 3. About one-third of participants reported having had both male and female sexual partners in the previous 12 months (35.7%, 95% CI 27.7–43.6). Approximately one-half of the participants reported always using condoms during sex, although significant numbers of men reported both unprotected insertive and receptive anal intercourse in the past 12 months. Condom use was not significantly different between main and casual male or female partners. Overall, safe sex with other men, defined as always using condoms and water-based lubricants over the last 12 months, was not common, with 12.6% (95% CI 7.6–12.6) measured to report this behaviour. Safe sex, defined as condom use with all sexual partners over the last 12 months, was significantly higher with female partners (at 40.0% in the crude assessment) than with male partners (p < 0.05). Overall, safe sex with all sexual partners was uncommon and was reported by 4.3% (RDS-adjusted 1.3%, 95% CI 0.0–9.7). Knowledge of basic questions related to safe sex for MSM, including sexual positioning, type of sexual act and lubricant use, was low, with 11.2% (RDS-adjusted 9.1%, 95% CI 5.2–13.0) of participants providing correct answers.

Table 4 demonstrates levels of service uptake, with evidence of statistically significantly lower levels of access to targeted services focused on preventing HIV transmission via sex between men as compared to sex between men and women (p < 0.05 for both). Notably, only about half of the sample was somewhat or very worried about HIV. Just under half of the men who had symptoms of a sexually transmitted infection (STI) were tested in the previous 12 months, with 7.8% (95% CI 3.9–11.7) diagnosed in this same time frame. About half of the sample had been tested for HIV in the previous 12 months (50.7%, 95% CI 43.2–59.2), including some who were tested more than one time. Reports of any experienced rights violations related to sexual practices, including denial of care, police-mediated violence and physical or verbal harassment, were reported by about half of the sample, although perceived rights violations related to sexual orientation (fear of seeking healthcare and fear of walking in the community) were more common, with 79.6% (95% CI 73.7–85.5) calculated to report this. Disclosure of sexual practices to healthcare workers was reported by one-quarter of the sample (25.0%, 95% CI 19.0–31.0), whereas about half of the participants (44.0%, 95% CI 36.4–51.7) had reported disclosure of sexual practices to a family member.
HIV prevalence was strongly correlated with age in both bivariate analyses (OR 1.23, 95% BCI 1.15–1.21) for each year of age and multivariate-adjusted analyses (aOR 1.24, 95% BCI 1.14–1.35) (Table 1). Other statistically significant associations with HIV in adjusted analyses included identifying as the female gender, having ever been to jail or prison, having lower numbers of casual partners, being diagnosed with an STI in the last 12 months and having easier access to condoms.
Discussion
In the country with the highest HIV prevalence in the world, this study describes the burden of HIV and associated characteristics among MSM who were accrued using RDS. Interpreting the prevalence of HIV among MSM and its relationship with the widespread and generalized female-predominant epidemic in Swaziland is challenging. While the participants in our study were relatively young, the HIV prevalence was consistent with that of general reproductive-age men until age 24–26, when the prevalence of HIV among age-matched MSM appears to be higher than that of other men sampled as part of the Swazi DHS study (Figure 1) [2]. Given that relatively few men in our sample reported female sexual partners, their HIV acquisition and transmission risks are likely different from those of other men in Swaziland and potentially more related to anal intercourse. Conversely, Swaziland may be among a small number of countries where even the low acquisition risks associated with insertive penile-vaginal intercourse are counterbalanced by the significantly higher HIV prevalence among women, resulting in significant acquisition risks associated with sex with women. However, the idea that acquisition risk for MSM is primarily related to sex with other men is reinforced by the finding that condom use was lower with male sexual partners than with female sexual partners. More frequent condom use during sex with women as compared to sex with other men has been observed in other studies of MSM across Sub-Saharan Africa and provides an argument against MSM being a population that bridges the HIV epidemic from within their sexual networks to lower risk heterosexual networks [19,20,32,33].
However, to answer this question, phylogenetic studies and the characterization of sexual networks are needed to better describe patterns of HIV transmission. Participants were far more likely to have received information about preventing HIV infection during sex with women as compared to sex with other men. This lack of access to or uptake of information, education and communication services has resulted in participants in this study having a limited knowledge base of the sexual risks associated with same-sex practices. Primarily, participants incorrectly believed that unprotected penile-vaginal intercourse was associated with the highest risk of HIV transmission, consistent with earlier studies of MSM across Sub-Saharan Africa. Numerous studies have shown the opposite: HIV is far more efficiently transmitted during anal intercourse as compared to vaginal intercourse [13,34]. There was also limited knowledge related to the importance of water-based lubricants being CCLs, which is especially important during anal intercourse given the absence of physiological lubrication in the anal canal. The importance of CCL was underscored as ultimately being the determining factor in just six study participants reporting safe sex with all partners in this study. Thus, while there is significant provision of general HIV-prevention messaging across Swaziland, there has been limited information focused on educating MSM on how to prevent HIV acquisition and transmission during sex with other men. Data suggest that starting with simple and proven approaches, including peer education programmes, is necessary to educate these men about their risks and protective behavioural strategies [35]. However, these approaches will likely not be sufficient to change the trajectory of HIV epidemics given the high risk of infection associated with unprotected anal intercourse with non-virally suppressed HIV serodiscordant partners. Thus, moving forward necessitates assessing the feasibility of combination approaches that integrate advances such as antiretroviral-mediated pre-exposure prophylaxis and universal access to antiretroviral therapy for people living with HIV [13]. However, the success or failure in achieving coverage with these HIV prevention, treatment and care approaches among MSM will, in part, be determined by the level of stigma affecting MSM.
It is now broadly accepted that addressing the needs of people living with HIV is vital to protect their own health as well as prevent onward transmission of HIV [36]. In addition, mean and total viral loads in a population have been linked to population-level transmission rates of HIV [37]. Only a quarter of the men living with HIV in this study were aware of their diagnosis, demonstrating the need to increase HIV testing, linkage to CD4 testing, and antiretroviral treatment and adherence support for those who are eligible. A recent systematic review and meta-analysis of self-testing for HIV in both low- and high-risk populations demonstrated that self-testing was both appropriate and associated with increased uptake of HIV tests [38]. This may be especially relevant in the Swazi context, where fear of seeking healthcare was prevalent, suggesting the need to study new strategies to overcome barriers to HIV testing among MSM in Swaziland, including leveraging community networks and potentially self-testing [39]. In this study, being a person living with HIV was associated with lower numbers of casual male partners in the last 12 months. This relationship appeared to be stronger among those who were aware of their status, although it was not statistically significant because of limited numbers. In addition, these data are consistent with earlier research findings that simply being made aware of one's status of living with HIV can change one's sexual practices to decrease onward transmission [40]. This further argues for implementation science research focused on optimal strategies to scale up HIV testing for MSM in Swaziland [41]. Over one-quarter of participants in this study self-identified as women, and this was independently associated with living with HIV. There is nearly a complete dearth of information related to HIV among transgender people across Sub-Saharan Africa [42,43]. However, where transgender people have been studied, they have been found to be the most vulnerable to HIV acquisition because of increased structural barriers to HIV prevention, treatment and care services and because of increased sexual risks, including unprotected receptive anal intercourse [43]. Given the limited information available about transgender people, transgender was assessed in this study as both a sexual orientation and a gender identity. There was a significant disconnect between these two as no participants self-identified as being transgender. Ultimately, further ethnographic research is needed to better understand the HIV-prevention needs of transgender people in Swaziland.
Having been to jail was also independently associated with living with HIV among MSM in this study. Globally, incarceration has been shown to be an important risk factor for HIV, given the limited access to HIV-prevention services such as condoms and CCLs, the interruption of HIV treatment as well as exposure to higher risk sexual partners [44–47]. While further research is needed on same-sex practices within jails, there is likely a need to provide HIV-prevention services for men in Swazi prison settings [47].
The methods employed in this study have several limitations. While RDS is an effective approach to characterize asymptotically unbiased estimates intended to approximate population-based estimates of characteristics in the absence of a meaningful sampling frame, there are still several uncertainties in the most appropriate tools for interpretation of these data [48]. Moreover, the sample of men accrued here was relatively young, consistent with recruitment challenges observed in other studies of MSM across sub-Saharan Africa. While we conducted significant engagement with older MSM, fear associated with inadvertent disclosure limited their participation in the study. Only with improved social environments will more information about the needs of older MSM become available in difficult contexts [49]. In addition, while RDS was used to accrue a diverse sample, all of the seeds were connected with Rock of Hope, a newly registered organization serving the needs of lesbian, gay, bisexual and transgender populations in Swaziland. We thus may have overestimated actual service uptake among MSM in Swaziland.
Conclusions
The implementation of the research project was guided by recent guidelines to inform HIV-related research with MSM in rights-constrained environments [50]. While these men had not been previously engaged in research on HIV prevention, treatment and care, the success of this study highlights the fact that accrual of this population is both feasible and informative for the HIV response in Swaziland. Moreover, the interconnected social and sexual networks leveraged for accrual can likely serve to disseminate HIV-prevention approaches via MSM throughout the country. While the epidemic in Swaziland is one driven by heterosexual transmission, the burden of HIV and the HIV prevention, treatment and care needs of MSM have been understudied, and these men have been underserved in the context of large-scale programmes [51]. The data presented here suggest that these men have specific HIV acquisition and transmission risks that differ from those of other reproductive-age adults. Encouragingly, Swaziland has seen declines in the rate of new HIV infections over the last seven years, and these declines are related to HIV testing and treatment scale-up [5]. However, the increase in HIV services likely has had limited benefit for MSM, which may result in a scenario where epidemics of MSM expand in the context of slowing epidemics in the general population – a reality observed in most of the world [13]. | 2016-05-04T20:20:58.661Z | 2013-02-12T00:00:00.000 | {
"year": 2013,
"sha1": "a3e2ead10994aeb353c6b0e9c045e4e115a9ec60",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.7448/IAS.16.4.18768",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a3e2ead10994aeb353c6b0e9c045e4e115a9ec60",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
22994701 | pes2o/s2orc | v3-fos-license | SCHOOL AS A "PROTECTIVE FACTOR" AGAINST DRUGS: PERCEPTIONS OF ADOLESCENTS AND TEACHERS
This study aims to discover and describe protective factors regarding the use of drugs, according to teachers and students, aged 14 to 15 years, from a Public Secondary School in Santiago de Querétaro, Mexico. This is a descriptive and exploratory study. Data collection was carried out through semi-structured interviews and non-participative observation with ten students and five teachers. Three themes resulted from data analysis: school and school environment: the school does not provide a healthy environment; use of drugs: perceived by both the students and teachers in the institution itself; prevention programs: there are health promotion and prevention programs available at the school. According to the students' and teachers' perceptions, the school represents a risk factor.
INTRODUCTION
The drug phenomenon represents one of the biggest public health problems in Mexico today.
Teenagers are involved, as the results of the National Addictions Research show: more than 200,000 teenagers between 12 and 17 years old (215,634) use drugs. In terms of gender, for every female user, there are 3.5 male users. On average, consumption started at the age of 14. These statistics show that drug use is one of the most problematic behaviors in young people nowadays and is more frequent in the teenage population (1).
Adolescence is a transition phase between childhood and adult age, which is characterized by a set of physical, psychological, emotional and social changes, together with internal and external physical development. Modifications occur in the social structure, with an increasing importance of the friends group. Furthermore, adolescents tend to imitate the group's way of dressing, speaking and acting, adopting habits that negatively affect their health (2). These actions can condition teenagers to use alcohol and other drugs. These experiences can provoke their denial or acceptance of these kinds of substances, which generates significant problems for individual and family health (3).
In view of this situation, it is important for educational institutions to support the promotion and strengthening of "protection factors" to avoid drug use. These allow people to face the problems they are confronted with and open an array of possibilities, emphasizing the positive forces or aspects of human beings.
The school is a propitious environment for students to develop a healthy way of living, involving cognitive, emotive, affective, cultural, behavioral, and social patterns. This helps teenagers to resist drug use, lowering that risk (4). This way, the school has a vital role as a protection factor in the psychosocial development of children and teenagers (5). The promotion of School Education Programs is important, since school is a place where students become aware of the skills for life, and the school becomes a protection factor that favors ideal physical, social, and psychological well-being. Their purpose is to provide the future generations with knowledge, abilities, and skills to promote and care for their health, and to create and maintain a healthy study, work and cohabitation environment (6). In order to become a protection factor against drug use, the school should create support networks with parents, students and teachers so as to strengthen the students' healthy habits. School health promotion should be based on the following principles: articulation between the health and education sectors to establish work programs; construction of an interdisciplinary and multidisciplinary perspective; comprehension of the reality; and development of student, family, and teacher groups (7). On that account, it is fundamental to implement health policies in public schools, to perform health promotion inside the institutions. These, in turn, will be instructed about the addiction problems in the school population and can therefore support and promote the protection factors against drugs.
Data collection was performed in the classroom and in the institution's public spaces, with a view to obtaining additional data to reach the previously established objectives. The data analysis had three purposes: 1) to comprehend the collected data, 2) to confirm the research premises and/or answer the research question, and 3) to increase knowledge about the investigated project, linking it with the cultural context (9). Data analysis was performed on the semi-structured interviews and non-participant observation.
The content concepts were grouped in categories, seeking to join common features.
Individual Characteristics
The study participants were ten students, six males and four females, between 14 and 15 years old, who attended school in the evening.
During non-participant observation, it was noticed that the school environment is not appropriate for teenagers, because it is located in a conflict zone of the city. Therefore, the students are influenced by the youngsters on the streets. Besides, it was observed that, in the school, students show great apathy towards the institution, generating great disinterest in keeping the installations in good conditions or in a healthy environment. It was also observed that there is a loss of values, because the students do not respect the teachers and companions.
In turn, most teachers do not show interest in the students' academic and personal development. Often, they do not care for the students' problems, and this makes students consider themselves unimportant.
CONCLUSIONS
According to the objectives, it was observed that the teachers and students do not identify the school as a "protection factor". That confirmation was based on the results obtained through interviews and non-participant observation, which showed that the school is not appropriate for generating a healthy environment for teenagers.
Hence, the students state that the teachers do not put what they say into practice, because they are the first to smoke inside the institution. Besides, they mention that some teachers come to school with their breath smelling of alcohol, and said that the school is not a protection factor, according to the interviews performed in the "Secondary School". Both students and teachers report that the school is a space that propitiates the abuse of substances that cause dependence, because the students follow the example of their teachers and companions.
In the same way, the teachers mention the institution authorities' need for an adequate program to foment a healthy environment and the teenagers' integral development.
Therefore, the current study considers it important to highlight that the "School", from the teenagers' and teachers' point of view, is not a "protection factor", but rather a risk factor. To reverse this aspect, it is important for teachers and students to become aware of the importance of establishing health promotion and drug prevention programs. Health promotion is fundamental in students' development. An articulation between the education and health sectors must exist to support school health, especially in secondary schools, where teenagers adopt or imitate adult actions without concern for the damage these acts can cause, or in order to belong to a particular group (7).
Secondary schools should also articulate with the University, with active participation of Nursing. The objective was to learn and describe the protection factors related to drug use, considered by teachers and 14-15-year-old teenagers of a secondary public school in the city of Santiago de Querétaro, Mexico. The specific objectives were:
- To identify if the teenagers of 14-15 years old can perceive what protection factors the school offers against drug use;
- To identify if the teachers can perceive what protection factors the school offers to 14-15-year-olds against drug use;
- To identify if the school supports the protection factors against drug use present in teenagers.

METHODOLOGY

This is a descriptive and exploratory study, performed at a secondary public school in Santiago de Querétaro, Mexico, with the participation of ten students and five teachers. This study considered the ethical aspects established in the determinations and general principles of the Mexican Health Law related to health research. The following will be used: Second Title, Chapter I, articles 13, 14, 16, 17, 18, 20, 21, 22, as well as Chapter V, article 57. The research in question does not represent any kind of risk to society, since it does not manipulate the teenage population. To continue with the research, an authorization request to perform the study was sent to the secondary public school's principal. The individuals who participated in the investigation provided written consent that stated the purpose of the research, as well as the result treatment, which would be informed to the teenagers, parents and teachers. The sample was distributed at random, using a list. This study used the techniques of non-participant observation and semi-structured interview. Data was recorded through brief and detailed notes, recordings, and observation records, among others. Inside the data collection plan, the qualitative researcher uses a reflexive posture and tries to eliminate, as much as possible, the beliefs or experiences associated with the studied theme, to avoid any influence on the results. The five teachers who participated were women between 29 and 50 years old, with career time between 5 and 29 years. They taught diverse subjects, like mathematics, social work, physical education and guidance. Most of them had two jobs. The results were grouped in three themes: school and school environment, drug use, and prevention programs.

- School and School Environment

The teachers state that the secondary school does not support or favor a healthy school environment. The students point out that the teachers do not give examples through their acts and that there is a lack of adequate guidance. In view of this situation, it is important for teenagers to be able to count on people that provide them with confidence and guidance. This situation favors the consolidation of a healthy lifestyle, where parents and teachers have a fundamental role, and where the school must prioritize the promotion and education of teenagers' health, thus improving their quality of life. As previously stated, the school is a propitious environment for students to acquire abilities and skills that can favor their own, their family's and social health.

- Drugs Use

Concerning drug use, the teachers admit that they smoke at the facilities and some
have even come to work smelling of alcohol. They also mention that some students smoke too, and occasionally consume alcohol inside the school facilities. In turn, the students state that the teachers smoke inside classrooms and do not live by the example they verbally propose, since some of them offer guidance to students regarding drug use. Similarly, the youngsters state that some of their companions smoke at the back of the school and do not care if they are suspended or expelled. The reasons they give for using psychoactive substances are defiance, liking it, lack of understanding from their families, and belonging to a friends group, among others. Drug use among students is associated with disapproval, family disintegration or low self-esteem.
- Prevention Programs

The school offers guidance, support, motivation, confidence and talks that favor a healthy lifestyle for teenagers. However, these efforts are affected by the bad examples some teachers give to the students. Besides, the lectures at the institution occur in the morning, or occasionally according to a specific schedule. The students insinuate that the school offers training, teaching and advice to keep them away from drugs, but unfortunately this occurs only in the morning, and, whenever it does occur in the evening, it is directed to some specific groups. In view of this problem, educational institutions need to implement strategies that allow the teenagers to become aware of the serious problem drug use represents. It is important for school principals to link up with other institutions to provide a culture of health promotion and drug prevention, which should take place in class hours. Schools should support health promotion programs, besides counting on an articulation between the health and education sectors to establish work programs and build an interdisciplinary and multidisciplinary perspective, in order to prevent drug use. The Public Education Secretary and the school rector need to become aware of the importance of generating health policies that contribute to a healthy environment for students, achieved through health education programs. These policies must address health promotion and drug prevention, considering the extent of the teenage drug dependence phenomenon in Querétaro, according to the 2002 National Addictions Research.
"year": 2008,
"sha1": "68c4c47eb1234728f99b12354ea138c89dd76f01",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/rlae/a/xNQRPrsLXhdLsp663mWcXgm/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "68c4c47eb1234728f99b12354ea138c89dd76f01",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
255415055 | pes2o/s2orc | v3-fos-license | Diagnostic performance between in-house and commercial SARS-CoV-2 serological immunoassays including binding-specific antibody and surrogate virus neutralization test (sVNT)
This study aimed to evaluate the correlation between in-house and commercial binding-specific IgG antibody assays and between in-house and commercial SARS-CoV-2 surrogate virus neutralization tests (sVNT). Samples from healthcare workers who received vaccines against SARS-CoV-2 were tested with an RBD-specific antibody assay, an S-specific antibody assay, an in-house ELISA, a commercial sVNT, and an in-house sVNT, against wild-type SARS-CoV-2. Three hundred and five samples were included in the analysis. The correlation between S-specific binding antibodies and in-house ELISA was 0.96 (95% CI 0.96–0.97) and between RBD-specific antibodies and in-house ELISA was 0.96 (95% CI 0.95–0.97). The Cohen's kappa between in-house sVNT and the commercial test was 0.90 (95% CI 0.80, 1.00). If using 90% inhibition of sVNT as the reference standard, the optimal cut-off value of RBD-specific antibodies was 442.7 BAU/mL, the kappa, sensitivity, and specificity being 0.99, 99%, and 100%, respectively. The optimal cut-off value of S-specific antibodies was 1155.9 BAU/mL, the kappa, sensitivity, and specificity being 0.99, 100%, and 99%, respectively. This study demonstrated a very strong correlation between the in-house ELISA and 2 commercial assays. There was also a very strong correlation between the in-house and commercial SARS-CoV-2 sVNT, a finding of particular interest which will inform future research.
Results
Out of 153 participants who had previously received 2 doses of the CoronaVac® vaccine, 39 (25.5%) were male and the median age was 44 (IQR 30, 53) years. There were 305 specimens: 77 samples taken before the third dose of the ChAdOx1 nCoV-19 vaccine, 76 samples taken before the third dose of the BNT162b2 mRNA vaccine, 76 samples taken 4 weeks after the third dose of the ChAdOx1 nCoV-19 vaccine, and 76 samples taken 4 weeks after the third dose of the BNT162b2 mRNA vaccine. A total of 305 specimens were analysed.
The optimal cut-off value of binding-specific IgG antibody using the commercial sVNT as a reference standard.
• At a cut-off level of 35% inhibition of the commercial sVNT, the RBD-specific-IgG-test, which designated 7.1 BAU/mL as the cut-off level for a positive result, had 69% agreement with Cohen's kappa 0.04, 100% sensitivity, and 3% specificity. The S-specific-IgG-test, which designated 0.8 BAU/mL as the cut-off level for a positive result, also had 68% agreement, with kappa 0 and 100% sensitivity.
After the 4 sequential steps were applied, the areas under the ROC (AUROC) curve of the RBD-specific-IgG-test and S-specific-IgG-test were 0.99 and 0.97, respectively (Fig. 2A). The cut-off value for the RBD-specific-IgG-test was 83.9 BAU/mL, which increased the agreement to 96%, with kappa 0.91. The sensitivity and the specificity were 96% and 96%, respectively. The optimal cut-off value for the S-specific-IgG-test was 90.9 BAU/mL. Using this cut point, the agreement increased to 91% with kappa 0.82. The sensitivity and specificity were 94% and 89%, respectively (Tables 1 and 2).
• At a cut-off level of 90% inhibition of the commercial sVNT, after the 4 sequential steps were applied, the AUROC curves of the RBD-specific-IgG-test and S-specific-IgG-test were 1 and 1, respectively (Fig. 2B). The RBD-specific-IgG-test had 99% agreement with kappa 0.99 at an optimal cut-off value of 442.7 BAU/mL, with 99% sensitivity and 100% specificity. At an optimal cut-off value of 1155.9 BAU/mL, the S-specific-IgG-test had 100% agreement with kappa 0.99. The sensitivity of this assay was 100% and the specificity was 99%. The optimal cut-off values, % agreement, kappa, sensitivity and specificity for 80%, 85%, and 95% inhibition are shown in Tables 1 and 2.
Discussion
This study demonstrated a "very strong" correlation among SARS-CoV-2 IgG Quant II assay, Elecsys anti-SARS-CoV-2 S immunoassay, and S-specific antibodies by in-house ELISA, with a correlation above 0.95. Several studies have reported moderate to strong correlations between commercially available immunoassays [26][27][28] .
Although the correlation was very strong as regards direction, the S-specific antibody measured by the Elecsys anti-SARS-CoV-2 S immunoassay had a higher level than the RBD-specific antibody detected by the SARS-CoV-2 IgG Quant II assay at 4 weeks after, but not before, the third-dose vaccination. This finding has also been reported by Perkmann T. et al., who found that the correlation between the Elecsys anti-SARS-CoV-2 S immunoassay and Abbott Architect RBD-specific antibodies was different at different time points after vaccination 29 . This emphasizes the importance of detailed interpretation of the results, particularly from different tests and at different time points. In addition, the antibodies from the in-house ELISA had lower levels than both the S-specific antibodies by the Elecsys Anti-SARS-CoV-2 S immunoassay and the RBD-specific antibodies (Fig. 1). When comparing commercial and in-house techniques which detect antibodies to the spike protein but by different methods, i.e. ECLIA in the former and ELISA in the latter, the antibody levels from the in-house ELISA were lower. The correlation was weaker at the time point after the third dose than before. Similar findings were found when comparing the antibodies from the in-house ELISA with the RBD-specific antibodies. In the case of the neutralizing antibodies, the correlation between the in-house sVNT and the commercial sVNT was also "very strong", with a correlation of 0.91, although there were some scattered results away from the regression line at 30-70% inhibition. There were satisfactory results of the in-house ELISA and also the in-house sVNT against the commercially available tests. These tests may be cost-effective alternative ways of detecting antibodies with acceptable performance. Several studies showed that vaccine efficacy and vaccine effectiveness correlated with neutralizing antibody levels for protection against symptomatic COVID-19 infection 9,24,30 . Protection against severe disease required lower neutralizing antibody levels than those conferring protection against mild disease 9 . Predictive models identified that a mean neutralization titer (relative to convalescent) of 1 correlates with 80% protective efficacy against SARS-CoV-2 infection, mostly due to SARS-CoV-2 variant B.1.177 24 . That study reported the levels of anti-spike IgG, anti-RBD IgG, the normalized live-virus neutralization assay and the pseudovirus neutralization assay corresponding to predicted vaccine efficacies against symptomatic infection of 50%, 60%, 70%, 80%, and 90% 24 . For example, 80% vaccine efficacy against symptomatic COVID-19 infection, with mostly B.1.1.7 variants of SARS-CoV-2, was achieved with anti-spike IgG of 40,923 AU/mL (264 BAU/mL), anti-RBD IgG of 63,383 AU/mL (506 BAU/mL), an NF50 for the normalized live-virus neutralization assay of 247, and an ID50 for the pseudovirus neutralization assay of 57. No data comparing vaccine efficacy and % inhibition of sVNT were published in that study. However, a correlation between sVNT and a virus neutralization test (the plaque-reduction neutralization test; PRNT) has been reported 31 . The cut-off levels for a positive result from the SARS-CoV-2 IgG Quant assay and the Elecsys anti-SARS-CoV-2 S immunoassay demonstrated no agreement with the cut-off level of seroconversion by the commercial sVNT.
After the 4 sequential steps were applied, the optimal cut-off values for the SARS-CoV-2 IgG Quant assay and the Elecsys anti-SARS-CoV-2 S immunoassay against 35% inhibition of the commercial sVNT were 83.9 BAU/mL and 90.9 BAU/mL, respectively. Levels of 442.7 BAU/mL for the SARS-CoV-2 IgG Quant assay and 1155.9 BAU/mL for the Elecsys anti-SARS-CoV-2 S immunoassay showed almost perfect agreement with 90% inhibition. We also performed tests at the cut points of 80%, 85%, and 95% inhibition, and the results are shown in Tables 1 and 2. The sensitivity and specificity were not reduced above 85% inhibition. However, the results must be interpreted with caution as the assays were tested against wild-type SARS-CoV-2, but not its variants.
The kappa statistic between the in-house sVNT and the commercial sVNT was almost perfect. We have proposed an equation for switching between them. However, this equation may only be suitable for this study population and for research purposes when costs are limited.
With straightforward technical requirements and lower operating costs, the determination of binding-specific antibody levels is more convenient for evaluating the immune response after vaccination against SARS-CoV-2 infection or immunity after infection. Furthermore, when long-acting antibodies (LAAB) are available for pre-exposure prevention of COVID-19 in certain populations, the antibody level may help to guide selection of eligible patients. For example, the Department of Disease Control, Ministry of Public Health of Thailand suggested prescribing LAAB as a first priority, due to an imbalance between the eligible population and available LAAB, for patients with end-stage renal disease (ESRD), recipients of kidney or other organ transplants or bone marrow transplants who received immunosuppressive therapy, and ESRD patients on hemodialysis or peritoneal dialysis, who had received at least 3 doses of vaccine against SARS-CoV-2 but had anti-spike IgG of, or comparable to, < 264 BAU/mL 32 .
This study has several limitations. First, the binding-specific antibodies and % inhibition from sVNT were tested against the wild-type SARS-CoV-2 virus. The major variants of interest (VOIs) and variants of concern (VOCs) require higher levels of antibodies than wild-type SARS-CoV-2 30 . The interpretation and the cut-off levels from this study may not apply to the current situation where the circulating viruses are VOIs and VOCs. Second, the reference standard used in this study was the sVNT, which is not the gold standard for neutralizing antibody detection. Although there was a correlation between the sVNT and the PRNT, the results must be interpreted with caution. Third, although the in-house sVNT and in-house ELISA provide satisfactory results and have economic advantages compared to the commercial assays, they require an overnight incubation of ACE-2 or the S antigen in ELISA plates. Therefore, they take more time to complete than the ready-to-use commercial kits.
In conclusion, this study demonstrated a very strong correlation between the in-house ELISA and 2 commercial assays available in Thailand, i.e. the SARS-CoV-2 IgG II Quant assay and the Elecsys anti-SARS-CoV-2 S immunoassay. However, when testing for binding-specific antibody, the same test must be used for longitudinal study, as the antibody levels in each test were different (Fig. 1). There was also a very strong correlation between the in-house sVNT and the commercial sVNT, which is of great interest and will inform future research. These findings are very helpful for research purposes and to save costs with acceptable levels of outcome.
Methods
This cross-sectional study was conducted at Maharaj Nakorn Chiang Mai Hospital, a tertiary care hospital affiliated to Chiang Mai University, from September to December 2021. For comparison of the tests, we collected left-over serum samples from healthcare workers who had enrolled onto a study of immunogenicity against SARS-CoV-2 after vaccination and had provided written informed consent for future studies. In brief, participants were healthcare workers aged 18-60 years who had previously received 2 doses of the CoronaVac® vaccine and received a third dose of either the ChAdOx1 nCoV-19 (AZD1222) or BNT162b2 mRNA vaccine. Only serum collected from participants before and 4 weeks after the third dose of vaccine was selected.
Laboratory assays
Binding antibody. The SARS-CoV-2 IgG II Quant assay (Abbott Laboratories Inc, IL, USA) 33 . This automated, two-step immunoassay was designed for the qualitative and semi-quantitative detection of IgG antibodies to SARS-CoV-2 in human serum and plasma using the chemiluminescent microparticle immunoassay (CMIA) technique. Antibody levels were presented as arbitrary units (AU)/mL, and this assay measured the concentration of anti-RBD-WT (wild-type) IgG levels between 21 and 40,000 AU/mL. Those values were converted to binding antibody units (BAU)/mL by multiplying by 0.142 per WHO recommendations 34 . The cut-off level for a positive result was ≥ 50 AU/mL (7.1 BAU/mL). The Elecsys anti-SARS-CoV-2 S immunoassay 35 . Double-antigen sandwich is the major principle of the test. The reagent consists of antigens which predominantly capture anti-SARS-CoV-2 IgG, but also capture anti-SARS-CoV-2 IgA and IgM, levels being determined by electrochemiluminescence immunoassay (ECLIA). The values were presented as U/mL, and this assay measured the concentration of anti-S-WT (wild-type) IgG levels between 0.4 and 250 U/mL. Samples with anti-SARS-CoV-2-S concentrations over the measurable range were diluted with Diluent Universal up to a 1:100 dilution. Those values were converted to BAU/mL by multiplying by 1.029 as per WHO recommendations 34 . The cut-off level for a positive result was ≥ 0.8 U/mL (0.8232 ≈ 0.8 BAU/mL) 34 . Determination of specific S-binding IgG antibody by in-house ELISA. Antibodies specific to the spike protein were determined by indirect ELISA. Fifty microliters of 1 µg/mL spike proteins (Genscript, Piscataway, USA) in bicarbonate buffer (pH 9.6) were added to 96-well Maxisorp immunoplates (Thermo Scientific, Roskilde, Denmark). After incubation overnight at 4 °C, plates were washed with washing buffer (0.05% Tween-20 (Calbiochem, Gibbstown, USA) in phosphate buffered saline (PBS)) and blocked with 2% skimmed milk at 37 °C for 1 h. Fifty microliters of samples diluted 1:100 and serially diluted positive controls were added and incubated at 37 °C for 1 h. After washing, 50 µl of 1:2000 goat anti-human IgG conjugated with horseradish peroxidase (HRP) (Invitrogen, Carlsbad, USA) were added and incubated at 37 °C for 1 h. After washing, 50 µl of tetramethylbenzidine (TMB) substrate (Life Technologies, Frederick, USA) were added and plates were incubated at room temperature for 30 min. The enzyme reaction was terminated using 0.2 M sulfuric acid and absorbance was read at 450 nm on a microplate reader (CLARIOstar®, Ortenberg, Germany). Antibody levels were determined from a standard curve derived by serial dilution of the WHO international standard for anti-SARS-CoV-2 immunoglobulin (NIBSC, UK), which was assigned an arbitrary unitage of 1000 BAU/mL. The theoretical curve for a single antibody and antigen, with a least-squares fit to the logit-transformed data, was used to calculate the arbitrary units of samples as described previously 36 . The values were presented as BAU/mL and the cut-off level of seroconversion was set as ≥ 3 times the standard deviation of the negative test value, which was 50.7 BAU/mL.
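To make the unit conversions and the standard-curve step concrete, the sketch below shows the two published conversion factors together with a generic standard-curve inversion. The four-parameter logistic (4PL) form and all optical-density values are assumptions for illustration; the study itself fitted a theoretical single antibody-antigen curve to logit-transformed data.

# Illustrative sketch (not the authors' code): unit conversion and reading a
# sample concentration off an ELISA standard curve. The 4PL form and the
# optical densities below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def abbott_to_bau(au_per_ml):      # SARS-CoV-2 IgG II Quant (AU/mL -> BAU/mL)
    return au_per_ml * 0.142

def elecsys_to_bau(u_per_ml):      # Elecsys anti-SARS-CoV-2 S (U/mL -> BAU/mL)
    return u_per_ml * 1.029

def four_pl(x, bottom, top, ec50, hill):
    """4PL curve: OD as a function of standard concentration (BAU/mL)."""
    return bottom + (top - bottom) / (1.0 + (ec50 / x) ** hill)

# Hypothetical serially diluted WHO standard (assigned 1000 BAU/mL) and ODs
std_conc = np.array([1000, 500, 250, 125, 62.5, 31.25, 15.6])
std_od   = np.array([2.8, 2.3, 1.7, 1.1, 0.65, 0.38, 0.22])

params, _ = curve_fit(four_pl, std_conc, std_od,
                      p0=[0.05, 3.0, 100.0, 1.0], maxfev=10000)

def od_to_bau(od, p):
    """Invert the fitted 4PL curve to estimate a sample concentration."""
    bottom, top, ec50, hill = p
    return ec50 * ((top - bottom) / (od - bottom) - 1.0) ** (-1.0 / hill)

print(round(od_to_bau(1.4, params), 1), "BAU/mL (times the dilution factor)")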
Neutralization assay. The SARS-CoV-2 NeutraLISA (EUROIMMUN Medizinische Labordiagnostika AG, Lübeck, Germany) 37 . This test is intended to mimic the virus-host interaction, utilizing recombinant RBD of the SARS-CoV-2 spike protein to detect antibodies that block RBD binding to the hACE2 receptor using an established ELISA method. The assay was performed following the manufacturer's instructions. The values identified in this study were presented as percent inhibition, and the cut-off level of seroconversion was 35%.
The in-house SARS-CoV-2 surrogate virus neutralization test (in-house SARS-CoV-2 sVNT).
In-house SARS-CoV-2 sVNT was modified from the method previously described by Tan et al.
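The in-house protocol description is truncated here. As a purely illustrative aid, the snippet below shows the percent-inhibition calculation commonly used in ELISA-based surrogate neutralization assays; the optical densities and negative-control value are hypothetical, and the cut-off actually used for the in-house assay is not taken from this sketch.

# Minimal sketch of a generic sVNT percent-inhibition calculation; OD values
# are hypothetical optical densities at 450 nm, not data from this study.
def percent_inhibition(od_sample: float, od_negative_control: float) -> float:
    """Percent inhibition of RBD-ACE2 binding relative to the negative control."""
    return (1.0 - od_sample / od_negative_control) * 100.0

od_negative_control = 1.85     # mean OD of negative-control wells (hypothetical)
for od in (1.70, 0.95, 0.20):
    print(f"OD {od:.2f} -> {percent_inhibition(od, od_negative_control):.1f}% inhibition")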
Statistical analysis
Sample size calculation 38,39 . The sample size was calculated based on the comparisons between (1) the S-specific-IgG-test and (2) the commercial sVNT, and between (3) the RBD-specific-IgG-test and the commercial sVNT, which required the largest sample size in this study. Using an expected kappa of greater than 0.8 (e.g., 0.9), probabilities of positive results for tests (1) and (3) of 0.75 and 0.75, a probability of seroconversion for test (2) of 0.7, a two-sided alpha of 0.05, and a power of 80%, 312 samples were required.
Data analysis. Demographic data including age, gender, and underlying diseases were described as number (%), mean ± SD, and median (IQR) as appropriate. The correlation between 2 tests with continuous outcomes i.e.
To identify the value of binding-specific IgG antibody which correlated with 35%, 80%, 85%, 90%, and 95% inhibition of the commercial sVNT, 4 sequential steps were performed: (1) regression of the binary outcome on the continuous outcome with logistic regression; (2) use of the receiver operating characteristic (ROC) curve to assess efficiency of classification; (3) identification of the cut-off point by the Euclidean index method for the ROC curve; and (4) assessment of the accuracy of the cut-off point for sensitivity and specificity using the commercial sVNT as a reference standard 42 .
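The following sketch walks through the same four steps on synthetic data; it is not the authors' analysis code, and the simulated antibody levels and labels are placeholders used only to show how a probability threshold from the ROC curve can be mapped back to a BAU/mL cut-off.

# Illustrative four-step cut-off identification on hypothetical data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
bau = rng.lognormal(mean=6.0, sigma=1.0, size=305)                  # hypothetical BAU/mL
seroconverted = ((bau > 450) ^ (rng.random(305) < 0.02)).astype(int)  # noisy 0/1 labels

# (1) logistic regression of the binary outcome on the continuous antibody level
X = np.log10(bau).reshape(-1, 1)
clf = LogisticRegression().fit(X, seroconverted)
score = clf.predict_proba(X)[:, 1]

# (2) ROC curve and AUROC
fpr, tpr, thresholds = roc_curve(seroconverted, score)
print("AUROC:", round(roc_auc_score(seroconverted, score), 3))

# (3) optimal cut-off by the Euclidean index: ROC point closest to (0, 1)
dist = np.sqrt((1 - tpr) ** 2 + fpr ** 2)
best = np.argmin(dist)
prob_cut = thresholds[best]
# map the probability threshold back to an antibody level (BAU/mL)
bau_cut = 10 ** ((np.log(prob_cut / (1 - prob_cut)) - clf.intercept_[0]) / clf.coef_[0][0])
print("Optimal cut-off:", round(float(bau_cut), 1), "BAU/mL")

# (4) sensitivity and specificity of the chosen cut-off against the reference standard
pred = (bau >= bau_cut).astype(int)
tn, fp, fn, tp = confusion_matrix(seroconverted, pred).ravel()
print("Sensitivity:", round(tp / (tp + fn), 3), "Specificity:", round(tn / (tn + fp), 3))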
Data availability
The datasets used and analysed during this study are available from the corresponding author on reasonable request. | 2023-01-05T05:09:23.606Z | 2023-01-02T00:00:00.000 | {
"year": 2023,
"sha1": "f99d51298cae930747558540849c1bd576405aa9",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "f99d51298cae930747558540849c1bd576405aa9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
249748530 | pes2o/s2orc | v3-fos-license | The fuel–climate–fire conundrum: How will fire regimes change in temperate eucalypt forests under climate change?
Abstract Fire regimes are changing across the globe in response to complex interactions between climate, fuel, and fire across space and time. Despite these complex interactions, research into predicting fire regime change is often unidimensional, typically focusing on direct relationships between fire activity and climate, increasing the chances of erroneous fire predictions that have ignored feedbacks with, for example, fuel loads and availability. Here, we quantify the direct and indirect role of climate on fire regime change in eucalypt dominated landscapes using a novel simulation approach that uses a landscape fire modelling framework to simulate fire regimes over decades to centuries. We estimated the relative roles of climate‐mediated changes as both direct effects on fire weather and indirect effects on fuel load and structure in a full factorial simulation experiment (present and future weather, present and future fuel) that included six climate ensemble members. We applied this simulation framework to predict changes in fire regimes across six temperate forested landscapes in south‐eastern Australia that encompass a broad continuum from climate‐limited to fuel‐limited. Climate‐mediated change in weather and fuel was predicted to intensify fire regimes in all six landscapes by increasing wildfire extent and intensity and decreasing fire interval, potentially led by an earlier start to the fire season. Future weather was the dominant factor influencing changes in all the tested fire regime attributes: area burnt, area burnt at high intensity, fire interval, high‐intensity fire interval, and season midpoint. However, effects of future fuel acted synergistically or antagonistically with future weather depending on the landscape and the fire regime attribute. Our results suggest that fire regimes are likely to shift across temperate ecosystems in south‐eastern Australia in coming decades, particularly in climate‐limited systems where there is the potential for a greater availability of fuels to burn through increased aridity.
| INTRODUC TI ON
Fire regimes are changing across the globe. The number and extent of wildfires are increasing, as is the occurrence of extreme fire behaviors (Duane et al., 2021). The 2019/2020 fire season saw some of the largest fires on record in south-eastern Australia Filkov et al., 2020) and the western United States (Higuera & Abatzoglou, 2021). Importantly, these changes have not been restricted to a single season. Recent changes to fire regimes have been linked to climatic factors such as warmer and earlier springs (Westerling et al., 2006), warm dry summers (Morgan et al., 2008), and increases in temperature and vapor pressure deficit (Abatzoglou & Williams, 2016). Changes to the fire regime include increasing wildfire activity in the western United States (Westerling, 2016) with associated increases in the extent of highseverity fire (Parks & Abatzoglou, 2020). There is evidence in Canada that large fires (>200 ha) are getting larger, and fire seasons are getting longer (Hanes et al., 2019). Studies in both France and Australia show increases in the severity of fire weather (hot, dry conditions) over the last 50 years (Barbero et al., 2020;Clarke et al., 2013), which has increased the likelihood of summers with extreme fire danger in France (Barbero et al., 2020) and has been associated with increases in area burned in forest bioregions of Australia Fairman et al., 2016). While fire is a natural phenomenon in many of these parts of the world, changing fire regimes attributable to human-caused climate change (Barbero et al., 2020) or other anthropogenic factors (Cattau et al., 2020;Hagmann et al., 2021) will increase fire-related risks to life and property through increased exposure of assets (Moritz et al., 2014), and to biodiversity through inappropriate fire regimes for some species .
Four key conditions (otherwise known as fire 'switches') must be met for fires to occur; there must be biomass (fuel), the fuel must be available to burn (be sufficiently dry), the weather needs to meet conditions for fire spread, and an ignition must occur (Bradstock, 2010;Pausas & Keeley, 2021). Climatic change can influence fire regimes through at least three of the four switches both directly, by affecting fire weather and fuel moisture, and indirectly, by affecting fuel load and structure. Severe fire weather is associated with high temperatures, low humidity, and high wind speed. The current predictions of future fire weather tend to show an increase in the magnitude of fire weather in fire-prone regions throughout the world; however, the degree of change varies between biomes (Clarke et al., 2013;Pausas, 2004;Pitman et al., 2007;Suppiah et al., 2007). Changes to fire weather also influence fuel moisture and hence the availability of fuel to burn. Fuel can only ignite if it is dry enough to burn, therefore, a drying climate will alter the broad-scale patterns of fuel moisture and connectivity, changing the propensity for large fires to occur across landscapes (Caccamo et al., 2012;Ellis et al., 2021;Nolan et al., 2016). Megafires (>10,000 ha in extent [Stephens et al., 2014]) in recent years have been strongly linked to drought that increase fuel dryness and prime landscapes to burn (Abram et al., 2021;Higuera & Abatzoglou, 2021;. However, fire, climate, and fuel processes are continually interacting and other important controls of fire regimes, such as biotic feedbacks-that could influence fuel accumulation and structure-are often neglected in research that explores future fire regimes. Climate change has the potential to influence both the composition and structure of vegetation communities (Albrich et al., 2020;Harvey et al., 2016). Live and dead vegetation are the fuel in wildfires; hence, any changes to vegetation may alter fire occurrence and behavior (Bradstock, 2010). Long-term climate and short-term weather can interact to influence vegetation persistence, where, for example, mature trees survive in a warming, drying climate but fail to regenerate in the prevailing climate (Jackson et al., 2009;Parks et al., 2016). These interactions can lead to trailing-edge disequilibrium, where the directional effects of climate do not immediately result in changes to vegetation that is dominated by long-lived species, which can survive and persist despite not being capable of regenerating in a newly unsuitable climate (Sheth & Angert, 2018). On a shorter timescale, a fire that kills mature individuals, combined with inappropriate climate or weather for regeneration could increase the rate of vegetation changes and potentially result in the loss of some species. For example, desiccation in the post-fire environment markedly increased seedling mortality of multiple serotinous shrubs in many Cape fynbos communities of South Africa (Mustart et al., 2012). Multidecadal shifts in vapor pressure deficit, soil moisture, and maximum surface temperature have also resulted in fewer opportunities for postfire regeneration of low-elevation conifers in the western United States (Davis et al., 2019). Similar instances of multi-species regeneration failure could contribute to species losses and ecosystem conversions that could alter both fuel load (the amount of fuel in an ecosystem) and structure (how the fuel is arranged vertically and horizontally) and therefore subsequent fire behavior.
Climate mediated changes to fire regimes will vary depending on the vegetation community (Abatzoglou & Williams, 2016). The type and degree of change could differ depending on what constrains fire in that community. Ecosystems can be fuel-limited-having insufficient fuel biomass to burn most years, such as in xeric shrublands or grasslands-or climate-limited-where cool, moist climatic conditions mean that fuels are not available to burn in most years, such as in tropical or some temperate ecosystems (Bradstock, 2010;Krawchuk & Moritz, 2011). However, this is not a binary classification with many ecosystems falling along a climate-limited fuellimited continuum (McKenzie & Littell, 2017). For example, tropical savannas occur in regions with an annual wet season, allowing biomass growth and fuel accumulation. This is followed by the annual dry season, which often reduces fuel moistures to levels conducive to fire spread (Bradstock, 2010). Fire occurrence in tropical savannas is therefore not clearly limited by either fuel amount or fuel availability. Systems might also move along the continuum as climate, fire, and vegetation interact. Positive feedbacks between increased fire occurrence linked to warmer and drier conditions have sometimes led to, or are predicted to lead to, community-type conversions with fire intolerant species replaced by fire tolerant species that are typically more flammable (Landesmann et al., 2021).
The key attributes of the fire regime are often defined as fire intensity, frequency, season of occurrence, size, and heterogeneity (Bond & Keeley, 2005;Gill, 1975;Gill & Allan, 2008) with the combination of attributes producing the variation across different ecosystems. Forest ecosystems of temperate Australia vary along the fuel-limited and climate-limited continuum with higher-biomass forests often sitting closer to the climate-limited end, and open woodlands positioned towards the fuel-limited end. Higher-biomass forests include the relatively restricted distributions of temperate rainforests, which are rarely burned (>100 years) due to high fuel moisture (Murphy et al., 2013). Eucalypt-dominated closed and open forests also sit closer towards the climate-limited end and are typically burned every 20-100 years by high-intensity fires, often driven by high fuel loads and preceding drought (Cawson et al., 2018;Murphy et al., 2013). In comparison, eucalypt-dominated woodlands and mallee are typically characterized by lower comparative fuel loads and are burned by low-to mid-intensity litter or grass fires every 20-100 years (Montreal Process Implementation Group for Australia & National Forest Inventory Steering Committee, 2018; Murphy et al., 2013). Climate change could therefore influence fire regimes within forest ecosystems of temperate Australia in a variety of ways. In some areas, we are already seeing fire weather influencing the likelihood of extreme forest fires (Abram et al., 2021). However, the magnitude of change may not be spatially uniform, with smaller increases predicted for the summer rainfall and more climate-limited ecosystems, and greater increases for the winter rainfall and fuellimited ecosystems (Clarke et al., 2011). The contrasting fire regimes and fuel-climate conditions in native vegetation will interact under changing climate to influence the nature of future fire regimes, an interaction that remains largely underexamined.
Understanding of potential shifts in fire regimes and underlying mechanisms is needed to manage and conserve fire-prone ecosystems under changing climates. To better assess the potential for change, both short- and long-term processes, such as short-term weather versus long-term climate, need to be captured along with variation and uncertainty across landscapes. Deterministic models often cannot capture this uncertainty, so approaches are required that identify and account for sources of uncertainty. Forecasting fire regimes is a challenging task due to the interacting nature of climate, fuels, and fire, but one well suited to process-based simulation models due to their potential to explicitly capture complex interactions as they vary in both space and time. Simulation models (predictive models used for the purposes of exploration, scenario-building, projection, prediction, and forecasting; Loehman et al., 2020; Perera et al., 2015) are widely used to interpret fire behavior and predict changes in vegetation and other ecosystem attributes (Andrews et al., 2008; Finney, 1998; Tymstra et al., 2007). Fire models range in complexity and scale from small-scale and detailed fluid-dynamics models (McGrattan et al., 2013) to global models of fire occurrence (Bond & Keeley, 2005). Landscape-scale models capture fire, climate, and vegetation interactions at intermediate temporal (days to years to decades) and spatial (10⁰–10³ km²) scales relevant to environmental processes and most management decisions, and therefore have the potential to increase our understanding of fire regimes (Keane et al., 2015).
In this study, we use a landscape fire modelling framework to simulate fire regimes over decades to centuries in six forested landscapes across temperate Australia. Our approach uses fire behavior simulations combined with models of future fuel and climate projections to predict fire regimes and associated uncertainties under different scenarios to explore the independent and interacting effects of predicted future fuels and future fire weather. Based on previous research in forest landscapes (Canadell et al., 2021; Duane et al., 2021), we anticipate that future fires will become more extensive and of higher intensity, that fire intervals will shorten, and that fire seasons will become longer, but that these changes will vary according to whether the ecosystem is more climate-limited or more fuel-limited. We also anticipate that the direct influence of climate change on fire weather will be the most important factor influencing future fire regimes in temperate Australia, especially in landscapes dominated by more climate-limited ecosystems, but that the indirect effect of climatic change on fuel load and structure may offset some of the fire regime shifts in all landscapes.
| Study area selection
The six study areas span the temperate region of south-eastern Australia (Table S2). The six study areas also span a climate-limited fuel-limited continuum, using net primary productivity (Haverd et al., 2013) as an indication of potential fuel load and the fraction of time monthly PET exceeds precipitation as a representation of climate (Figure 2).
| Simulation modelling design
We modelled the effects of climate change, via fuel hazard and fire weather, on the fire regime in each of the study areas. We consider the impact of climate change in two separate pathways that are tested independently and interactively. Fuel hazard is influenced by changes to annualized values of climate such as mean annual temperature (see below), whereas weather is influenced through the predicted hourly values of variables such as temperature, humidity, and wind. We acknowledge that while the base data for these values are not truly independent, we wish to focus on the independent responses (i.e. fuel vs. weather). To examine fuel and weather effects independently, our weather and fuel scenarios consist of present weather and present fuel (Pw_Pf), present weather and future fuel (Pw_Ff), future weather and present fuel (Fw_Pf), and future weather and future fuel (Fw_Ff; Figure 3). The weather and fuel scenarios were run with each of six climate models (see Section 2.7), giving 24 simulation scenarios for each study area. The landscape fire modelling framework 'FROST' (see below) was run for 120 years in each simulation scenario to simulate the effects of the combined weather and fuel conditions on the fire regime.
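The factorial structure of this design can be sketched as follows; this snippet is purely illustrative, is not part of the FROST framework, and the scenario and climate-model labels are assumptions based only on the abbreviations used above.

```r
# Illustrative sketch of the factorial scenario design (not part of FROST):
# 2 weather epochs x 2 fuel epochs, crossed with 6 regional climate models,
# give 24 simulation scenarios per study area.
scenarios <- expand.grid(weather       = c("Pw", "Fw"),
                         fuel          = c("Pf", "Ff"),
                         climate_model = paste0("RCM", 1:6))
scenarios$label <- paste(scenarios$weather, scenarios$fuel, sep = "_")

nrow(scenarios)          # 24
unique(scenarios$label)  # "Pw_Pf" "Fw_Pf" "Pw_Ff" "Fw_Ff"
```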
| Fire regime simulator
We simulated fire regimes over 120 years using the landscape fire modelling framework 'FROST' (Fire Regime and Operations Simulation Tool). FROST uses a framework of "modules" to combine fire behavior simulation with Bayesian network (BN) models to capture and account for uncertainty in the modelled systems (Penman et al., 2015).
| Weather module
The weather module uses daily weather to determine the daily number of ignitions, and hourly weather to simulate fire behavior when ignitions occur. Weather data for this project were from the 'NARCliM' project (NSW and ACT Regional Climate modelling; see Section 2.4) (Evans et al., 2014).
| Ignition module
The ignition module calculates ignition probability using weather, proximity to roads, and house density as inputs to a BN (Clarke, Gibson, et al., 2019). The module then predicts the number and time of ignitions for each day across the simulation area using a second BN based on historical ignitions. Across the 24 scenarios, proximity to roads and housing density are static and do not account for changes over time.
| Fuel module
The fuel module predicts hazard ratings of fine fuels (<6 mm thick dead and <3 mm thick live plant material) in each of the four strata relevant to native ecosystems of temperate Australia (surface, near-surface, elevated, and bark) using separate models for native fuels and non-native fuels (predominantly agricultural land in our study areas). Surface fuels are defined as leaves, twigs, bark and other fine fuel lying on the ground (Hines et al., 2010). Near-surface fuels are connected to the ground but not lying on it and less than 1 m in height, that is, grasses (Hines et al., 2010). Elevated fuels are generally upright in orientation, are between 1 and 5 m tall and are physically separated from the surface fuels (Hines et al., 2010). Bark fuels are the bark attached to tree stems and branches at all heights from the ground to canopy (Hines et al., 2010). Fuel hazard ratings are measured in the field using visual assessments of the horizontal and vertical continuity of fine fuel in each fuel stratum that would burn in the flaming front of a fire (McColl-Gausden et al., 2020). The fuel predictions focus on fine fuel as they contribute the most to rate of spread and flame height (Hines et al., 2010).

FIGURE 2 Location of the six study areas along a gradient of potential fuel load (x axis), represented by net primary productivity (Haverd et al., 2013), and climate (y axis), represented by the fraction of time monthly PET exceeds precipitation. The red gradient represents those areas and ecosystems where fire regimes are more likely to be fuel limited (e.g. shrublands or grasslands), whereas the blue gradient represents those areas and ecosystems where fire regimes are more likely to be climate limited (e.g. tall forests). PET, potential evapotranspiration.
Predictions of native fuel hazard ratings by strata were made using the empirical models of McColl-Gausden et al. (2020), which are random forest models developed from tens of thousands of fuel hazard assessments across south-eastern Australia. These models predict fuel hazard as a function of seven predictor variables: three climate, three soil, and time since fire in years (Table S1). These seven predictors allowed us to model variations in future fuel hazard directly from biophysical data without the need to model potential changes in the distribution or composition of vegetation classes.
Climate variables used in predictions of present native fuel hazard were three bioclimatic variables from WorldClim (Busby, 1991): annual mean temperature (bio 1), maximum temperature of the warmest month (bio 5), and precipitation of the warmest quarter (bio 18). The same three bioclimatic variables were used for the predictions of future native fuels, except that the values were derived from each of the future climate models (see Section 2.4). Soil variables were from the Soil and Landscape Grid of Australia (Viscarra Rossel et al., 2015), which vary spatially but were held constant over time, that is, between present and future simulation scenarios, based on a lack of quantitative evidence of soil changes with fire and climate. Time since fire was dynamically calculated when fires occurred in a simulation to account for fire feedbacks on fuel within both present and future simulations. If no fires occurred within a simulation year in a simulation cell, time since fire was advanced by 1 year. Average native fuel hazard for each fuel stratum over a time series of time since fire is presented in the Supplementary Data (Figures S1-S4). The exponential fuel model was used for non-native fuels, where fuel hazard accumulation was modelled for each stratum within each vegetation type using Olson curves (Olson, 1963) based on time since fire, where time since fire was also dynamically calculated on a per-fire or yearly basis (Penman et al., 2013; Table S1). All predicted hazard ratings of native and non-native fuels per stratum were subsequently converted into fuel loads (as per equations in Table S1) to be used within the fire event simulator, PHOENIX RapidFire (Tolhurst et al., 2008).

FIGURE 3 The study's overall simulation framework. A simulation run involved selecting the study area and one of six regional climate models, and then one of four weather/fuel scenarios. These data were used to drive FROST, the fire modelling framework, where the fire regime simulations were replicated 50 times and run over 120 years for a total of 24 scenarios per study area.
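As a rough illustration of the Olson-curve approach used for non-native fuels, the sketch below shows an exponential accumulation of fuel load with time since fire; the steady-state load and rate constant are placeholder values, not the parameters from Table S1.

```r
# Illustrative Olson (1963)-type accumulation curve (placeholder parameters):
# fuel load approaches a steady-state value L_max as time since fire increases.
olson_fuel_load <- function(tsf_years, L_max = 10, k = 0.2) {
  L_max * (1 - exp(-k * tsf_years))  # tsf_years: time since fire in years
}

round(olson_fuel_load(c(1, 5, 10, 25, 50)), 2)  # rapid early accumulation, then a plateau
```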
| Climate model selection
All weather and climate data used in this study come from the NARCliM project (Evans et al., 2014). The NARCliM project provides dynamically downscaled climate projections for south-east Australia at a 10-km resolution. The data include hourly surface air temperature, surface specific humidity, near-surface wind speed and direction, surface wind speed, and surface pressure, which are required for fire simulations and are referred to as weather in this study. The data also included standard annual bioclimatic variables [BIOCLIM (Busby, 1991)], which are referred to as climate in this study. NARCliM uses the SRES A2 emissions scenario (IPCC, 2007), which projects a warming of the planet by approximately 3.4°C by 2100 and is comparable to the subsequent scenario RCP8.5 (Moss et al., 2010).
The NARCliM project includes four equally plausible global climate models (GCMs) selected for their skill, independence, and capacity to span a range of alternate climate scenarios (Evans et al., 2014).
Global climate models have cell grids that can be hundreds of kilometers wide and are not useful for projecting regional differences.
Thus, three regional climate models (RCMs) are used to downscale the four GCMs to a grid size of 10 km, which better represents features important for local and regional weather and fire behavior, such as topography and coastlines. The resulting 12-member NARCliM ensemble has been extensively evaluated and used by managers and policymakers (Clarke, Tran, et al., 2019; Di Luca et al., 2016; Evans et al., 2017; Fita et al., 2017; Olson et al., 2016). For this study, we selected two of the four GCMs (ECHAM5 and CSIRO Mk3) and all three associated RCMs for each GCM, resulting in a 6-member climate ensemble. Selection of these six climate projections was based on their skill in simulating observed mean and extreme fire weather. Across the six study areas and six climate projections, precipitation is projected to change between present and future conditions by between +18% in the Blue Mountains and −18% in the Adelaide Hills.
Mean temperature is projected to increase by between 1.2°C in the Grampians and 2.5°C in the Blue Mountains (Figure S5).
| Fire regime attributes
We used five fire regime attributes in our analysis: (i) annual area burnt, (ii) annual area burnt at high intensity, (iii) fire interval, (iv) fire interval of high-intensity fires, and (v) season midpoint. The attributes relate only to native vegetation, that is, all other cell types are masked out of the analysis, except for season midpoint, which incorporates all cells. These represent key components of the fire regime, namely fire frequency, intensity, seasonality, and extent (Gill, 1975; Gill & Allan, 2008; Pausas & Keeley, 2009), and are important determinants of ecosystem processes in fire-adapted systems (Steel et al., 2021). Annual area burnt per scenario was calculated as the area burnt per year (each an average of 50 replicates) averaged over the 100-year simulation analysis period. Annual area burnt at high intensity was calculated in the same way but only included cells that were burnt at intensities greater than 10,000 kW/m. Fire interval was defined as the mean fire interval across the 100-year simulation analysis period (only cells burnt at an intensity greater than 10,000 kW/m were used for the high-intensity fire interval). The wildfire season in FROST is a fixed period between 15 November and 15 March (i.e. the last month of spring to the first month of autumn, based on historical fire seasons in temperate Australia); therefore, to calculate the season midpoint, we used the number of days from the start of the wildfire season to when 50% of the total area burnt in that season was reached.
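A minimal sketch of the season-midpoint calculation is given below; the daily burnt-area series is synthetic, and the roughly 121-day season length simply reflects the 15 November to 15 March window described above.

```r
# Illustrative season-midpoint calculation (synthetic data, not FROST output):
# the number of days from the start of the fire season until cumulative area
# burnt first reaches 50% of the season's total.
season_midpoint <- function(daily_area_burnt) {
  cumulative <- cumsum(daily_area_burnt)
  which(cumulative >= 0.5 * sum(daily_area_burnt))[1]
}

set.seed(1)
daily_burnt <- rgamma(121, shape = 0.3, scale = 50)  # hypothetical ~121-day season
season_midpoint(daily_burnt)  # day of season at which half of the total area burnt is reached
```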
| Statistical analysis
To assess the independent and interactive effects of weather and fuel on each of the fire regime attributes, we used linear mixed models (LMMs) in the lme4 package (Bates et al., 2015) in R version 3.4.0.2 (R Core Team, 2020). Climate model was included as a random effect in all LMMs. The fuel epoch (present or future) and the weather epoch (present or future) were considered as fixed effects.
To assess LMM assumptions, we performed residual diagnostic tests using the DHARMa package (Hartig, 2020). Annual area burnt and annual area burnt at high intensity were log-transformed to meet LMM assumptions. We calculated marginal (R²m) and conditional (R²c) coefficients of determination to summarise the explanatory power of the models for each of the six study areas using the MuMIn package (Barton, 2009) in R. We explored the relationship between fuel and weather epoch and each fire regime attribute by considering the size and uncertainty (95% confidence intervals) of standardized model coefficients.
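The model structure described above can be sketched as follows; the data frame, variable names, and effect sizes are invented for illustration and are not the authors' data or code.

```r
# Illustrative sketch of the LMM structure (synthetic data; not the study's code).
library(lme4)
library(MuMIn)

# 2 weather epochs x 2 fuel epochs x 6 climate models x 50 replicates
fire_df <- expand.grid(weather_epoch = c("present", "future"),
                       fuel_epoch    = c("present", "future"),
                       climate_model = paste0("RCM", 1:6),
                       replicate     = 1:50)
set.seed(42)
fire_df$area_burnt <- exp(rnorm(nrow(fire_df),
                                mean = 2 + 0.5 * (fire_df$weather_epoch == "future")))

# Fixed effects: weather epoch, fuel epoch and their interaction; random intercept
# for climate model; log transformation as applied to the area-burnt responses.
# (Synthetic data may yield a near-zero random-effect variance and a singular-fit note.)
m <- lmer(log(area_burnt) ~ weather_epoch * fuel_epoch + (1 | climate_model),
          data = fire_df)
summary(m)
r.squaredGLMM(m)  # marginal (R2m) and conditional (R2c) coefficients of determination
```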
| Fire regime predictions
Fire regimes were predicted to shift under future weather and future fuel conditions. However, the different fuel and weather scenarios influenced these shifts. Compared with current predictions (Pw_Pf), future weather consistently increased annual area burnt and annual area burnt at high intensity both with and without future fuels (Fw_Ff and Fw_Pf, respectively; Figure 4a,b). In addition, these weather effects were stronger in more climate-limited study areas (Figure 5a,b). Effects of future fuels alone (Pw_Ff) on fire regime attributes were comparatively smaller than those of future weather (Figures 4 and 5), with the exception of the interval between fires (including fires of high intensity), where future fuels contributed to increased fire intervals, with the effect most pronounced in fuel-limited systems (Grampians and Adelaide Hills; Figure 5c,d).
The six RCMs produced a wide range of fire regime predictions, but there were some distinct patterns. The RCMs derived from the CSIRO GCM predicted a warmer and drier future across all study areas, in comparison to the ECHAM group of RCMs, which predicted an even hotter future with limited change to precipitation ( Figure S5). The associated impact on the predicted fire regime was that the warmer drier CSIRO climate models typically predicted lower areas burnt, and longer fire intervals compared to predictions derived from the ECHAM models ( Figures S6-S10).
| The role of weather versus fuel
Analysis of the independent and interactive effects of future weather and future fuel indicated consistent effects of weather in all study areas and more variable effects of fuel. Annual area burnt increased, fire intervals decreased, and season midpoints were earlier under predictions of future weather (Figure 6; see Table S3 for marginal and conditional R 2 values).
The role of future fuel was more variable. Area burnt (both annual area burnt and annual area burnt at high intensity) increased under predicted future fuels for the most fuel-limited system (Figure 6). However, the same areas are not always burnt each year, as expressed by the spatial variability in the number of fires across the landscapes (Adelaide Hills; Figure S11). For the remaining study areas, the result was more variable and the effect size smaller (Figure 6).
At the two ends of the climate-limited fuel-limited continuum (the Adelaide Hills and the Grampians at the fuel-limited end, and East Gippsland at the climate-limited end), future fuels increased the intervals between fires (Figure 6). There is considerable spatial variation depending on the location within a study area.
For example, the Adelaide Hills under present weather and future fuel had predicted average fire intervals of between zero years (that is, multiple fires in 1 year) and 99 years (that is, the maximum fire interval in the simulation; Figure S12). In the middle of the continuum, there was either no clear effect of fuel (Blue Mountains) or future fuels decreased intervals (Alpine and ACT; Figure 6). Future fuels had little effect on fire-season midpoint (Figure 6).
Interactive effects of weather and fuel on fire regime attributes were often only significant at the ends of the climate-limited fuel-limited continuum (Figure 6). The interaction was negative for the most fuel-limited system, the Adelaide Hills, tempering the increase in area burnt under a combination of both future weather and future fuel, and positive in the most climate-limited system, East Gippsland, increasing the area burnt under the same combination. For fire intervals, only study areas at the more fuel-limited end (Adelaide Hills, Grampians) were predicted to have a negative climate-fuel interaction, reducing the effect of future weather on fire intervals. Interactive effects of weather and fuel on the fire-season midpoint were more variable but consistently minor in all study areas (Figure 6).
| DISCUSSION
Shifts in fire regimes in forest landscapes are unlikely to be uniform across temperate Australia. Future weather and fuel will increase wildfire extent and intensity, decrease fire interval, and change aspects of the fire season across temperate south-eastern Australia.
Future weather had the largest effect, with future fuel acting synergistically or antagonistically with future weather depending on the study area and fire regime attribute of interest. Future weather effects were stronger in climate-limited study areas, and the effects of future fuel were stronger in more fuel-limited study areas.
| Future changes in fire regimes greater in climate-limited systems
Predicted area burnt was greater and fire intervals shorter in fuel-limited areas compared to climate-limited areas. This is consistent with current patterns of fire in temperate Australia, with drier eucalypt woodlands and forests (more fuel-limited) typically being burnt by low-moderate intensity fires every 5-20 years, compared with tall wet eucalypt forests (more climate-limited), which are typically burnt by high-intensity fire every 20-100 years (Murphy et al., 2013). However, the relative changes in fire regime attributes from current to future predictions were generally higher for climate-limited study areas, particularly for area burnt. These results suggest climate-limited systems have potentially more environmental space for their fire regimes to shift, with comparatively abundant fuels that could increase in flammability through increased aridity (Kennedy et al., 2021; Nolan, Blackman, et al., 2020).

FIGURE 4 Average fire regime attributes by study area (n = 50 replicates × 100 years × 6 climate models). Study areas from left to right represent the most fuel-limited to the most climate-limited. Error bars are SD around the mean. Red = present weather and present fuel, yellow = future weather and present fuel, light blue = present weather and future fuel, dark blue = future weather and future fuel. (a) The percent of native vegetation burnt annually, (b) the percent of native vegetation burnt annually at high intensities, (c) mean fire interval in native vegetation, (d) mean high-intensity fire interval in native vegetation, and (e) season midpoint.
FIGURE 5 Percent change from current conditions (present weather and present fuel, Pw_Pf) for each fire regime attribute by study area (n = 50 replicates × 100 years × 6 climate models). Study areas from left to right represent the most fuel-limited to the most climate-limited. Error bars are SD around the mean. Yellow = future weather and present fuel, light blue = present weather and future fuel, dark blue = future weather and future fuel.
| Weather had the greatest influence on changes to future fire regimes
Averaged across all climate models, our simulations indicate that future weather rather than future fuels will have greater overall effects on future fire regime attributes. Projecting future fire regimes via predicted direct effects of future climate on fire weather and fuel moisture is a relatively common approach (Balshi et al., 2009; Liu et al., 2010; Nitschke & Innes, 2008; Westerling et al., 2011), and matches the future weather, present fuel (Fw_Pf) scenarios tested in our study. However, while fuel limitations appear to only modestly reduce the projected area burnt at subcontinental scales, this may not be the case at all scales and was not seen universally in this study.
There is increasing evidence that contemporary fire-climate relationships may not hold into the future as we move towards the potential for new interactions without historical analogues. One of the biggest limitations in predicting future fire regimes is uncertainty about the degree to which fuel may interact with both fires and climate. The influence of fuel (the indirect influence of climate combined with time since fire) had contrasting or interacting effects. In climate-limited areas, increases in fire weather and ignition likelihood under the warmer and potentially drier conditions are likely to increase area burnt and decrease inter-fire interval due to the increased availability of fuel and occurrence of weather conducive to fire spread. In fuel-limited areas, there may be a more variable response depending on the ecosystem, with the structure and flammability of fuel often as important as the amount of fuel (Landesmann et al., 2021). Future climates are predicted to reduce productivity and therefore burnable biomass (Stegen et al., 2011; Zhao & Running, 2010), and these changes may reduce or counteract the direction of changes to the fire regime in all ecosystems.
| Implications for human and natural values
As increasing wildfire events are correlated with lives lost and house loss (Filkov et al., 2020), the predictions of more fire in all study areas suggest that human lives and assets in temperate Australia will be increasingly exposed to fire under future climate. While not all fire has negative outcomes for people or the environment (Kolden, 2020), a number of our study areas contain major population centers with complex wildland urban interfaces, making fire management challenging. Prescribed burning is generally used in these areas with an objective of reducing fire risks to people, property, and infrastructure (Penman, Collins, et al., 2020). However, planned burning treatments have variable efficacy in reducing fire risks and can lead to more fire in the landscape (King et al., 2006; Penman, Clarke, et al., 2020; Price et al., 2015). While our modelling framework focuses on decades-long fire regimes, rather than single fire events, evidence from multiple sources points toward more extreme wildfire events in temperate Australia (Duane et al., 2021). The 2019/2020 fire season in south-eastern Australia impacted nearly all of our study's landscapes. This one fire season saw a total of 18,983,588 ha burned, 3113 houses destroyed, and 33 lives lost in 15,344 bushfires (Filkov et al., 2020). Smoke from the bushfires is estimated to be responsible for 417 deaths and thousands of hospitalisations (Borchers Arriagada et al., 2020). Intensification of fire regimes as predicted here is likely to have significant implications for people, property, and economic assets.

FIGURE 6 Standardized model coefficients (with 95% confidence intervals) for each study area and fire regime attribute. The effect of future weather (dark green), the effect of future fuel (mid green), and the effect of their interaction (light green).
Predictions of shifts in key fire regime attributes raise several concerns for biodiversity. Fire itself is not problematic for many species in fire-prone ecosystems; however, shifts in the fire regime may leave species unable to sustain viable populations (Enright et al., 2015).
Fire interval is a key concern for many plant species, with both obligate seeders (species that rely on seed production for regeneration) and resprouters (species that can resprout from buds arising from the stem, branches, or roots) requiring adequate time to restore regenerative capacity before the next fire (Fairman et al., 2019;Turner et al., 2019). However, if we assume that resprouting eucalypt species are more resilient to repeat fires (Collins, 2020), there may be different impacts on eucalypt forest structure and composition depending on their dominance by obligate seeders or resprouters.
For example, obligate seeder forests are generally located on wetter, more productive sites, such as those ecosystems at the more climate-limited end of the continuum (Fairman et al., 2016; Vivian et al., 2008). In our study, mean fire interval consistently decreased across all of the study areas. Therefore, the climate-limited systems in our study could be more exposed to potential shifts in species composition due to the higher abundance of obligate seeding species. There are also indications of changes to fire seasonality in our study, with shifts towards an earlier start of up to 10 days in the most fuel-limited study area. Changes to season midpoint suggest seasonality shifts that could potentially influence multiple mechanisms involved in plant persistence, like propagule availability and seedling establishment (Miller et al., 2019). Much remains unknown about how changed fire timing and frequency will interact with changes in plant phenological events. Moving forward, conservation emphasis could be placed on species or communities that are already near the edge of their fire regime niche and at risk of extinction (Bowman et al., 2014; Coop et al., 2020; Ratajczak et al., 2014). Alternatively, we could focus on maintaining overall forest or ecosystem resilience to reduce overall impacts if predicted changes to disturbance regimes eventuate (Ingrisch & Bahn, 2018; Johnstone et al., 2016; Keane et al., 2018).
| Limitations
Our simulation approach is based on a number of assumptions, including that the relationships between fuel variables, climate, and time since fire will hold under a changing climate. This is a common assumption in many climate change models, including those that track changes in habitat (Thomas et al., 2004; Thuiller et al., 2006) and fire activity (Archibald et al., 2013; Batllori et al., 2013; Krawchuk et al., 2009; Young et al., 2017). Nonetheless, relationships among fuel, climate, and fire may shift under changing climates, leading to novel interactions (Keeley & Syphard, 2016). Management actions such as active fire suppression and prescribed burning can also influence fuel-climate-fire relationships (Parks et al., 2015), as can exotic invasive species that lead to novel ecosystems (Setterfield et al., 2010; Taylor et al., 2017). While management actions were not accounted for in the scenarios, our study suggests the more fuel-limited study areas have greater scope to mitigate fire impacts through fuel manipulations. This could be of particular importance around wildland urban interfaces, where the majority of fire impacts on human values occur. However, the stronger influence of future weather in the climate-limited study areas suggests reduced opportunity for mitigation actions through fuel manipulations alone.
The only anthropogenic factor we considered in this study was the effect of climate change on future fire regimes. Other anthropogenic factors such as population growth from urban centers may increase wildland urban interfaces, therefore exposing more people and property to risk from wildfires. Population growth may also increase fragmentation of vegetation and shift ignition distributions (Pausas & Keeley, 2021). However, increasing ignition rates are unlikely to change the likelihood of large fires as fires in our study areas are rarely limited by ignitions, that is, the ignition 'switch' is nearly always activated (Bradstock, 2010). Changes to fuel profiles resulting from shifts in vegetation through land-use change associated with population growth are possible. However, large portions of our study areas are protected areas of native vegetation that are likely to remain so over the 100-year time horizon of our modelling simulations.
The role of fire feedbacks leading to shifts in species and potentially whole ecosystems was also not examined in our study. Changes to the fire regime combined with direct effects of climate change on species demography such as growth rates and reproduction, can interact to change species population viability (Enright et al., 2015) and thus fuel profiles. Our future research will involve the combined threats of climate and fire regimes shifts on individual species and key functional types by combining our fire regime approach with spatially explicit population viability analysis.
| Conclusion
Fire activity is predicted to intensify across forested ecosystems from fuel-limited to climate-limited systems. The magnitude of change is highest in the climate-limited areas, which have historically been responsible for fires resulting in the greatest human and environmental impacts (Filkov et al., 2020). These patterns are likely to play out in other forested systems globally, and recent extreme fire seasons around the globe strongly support this (Duane et al., 2021). Land managers are unlikely to have the capacity to offset all the predicted changes in fire regimes through fuel manipulations and suppression. We may therefore be forced to accept that intensification of fire regimes in multiple landscapes is inevitable if climate projections eventuate, and plan to reduce the associated impacts on multiple assets when and where possible.
CONFLICT OF INTEREST
No conflict of interest declared.
DATA AVAILABILITY STATEMENT
Regional climate data were provided by the NARCliM project and are freely available: https://climatechange.environment.nsw.gov.au/Climate-projections-for-NSW. The modelling data that support the findings of this study are openly available in Dryad at http://doi. | 2022-06-18T06:17:51.329Z | 2022-06-16T00:00:00.000 | {
"year": 2022,
"sha1": "b3cc84eed36299dfebc8caae84c57b4f223a5d27",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1111/gcb.16283",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "a25c6ac04e3e8605edc37c2bb6546e16de5c6917",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252752487 | pes2o/s2orc | v3-fos-license | Robot-assisted total pelvic exenteration for rectal cancer after neoadjuvant chemoradiotherapy: a case report
Background There are numerous indications for minimally invasive surgery. However, the laparoscopic approach for extended pelvic surgery is currently provided by only a few institutions specializing in cancer treatment, primarily because of technical difficulties that arise in cases involving a narrow pelvis and rigid forceps. We report a case of robot-assisted total pelvic exenteration for rectal cancer involving the prostate. We assessed the feasibility of robot-assisted total pelvic exenteration and compared its short-term outcomes with those of other conventional and minimally invasive approaches. Case presentation A 67-year-old man was referred to our hospital after positive fecal blood test results. The initial diagnosis was clinical T4bN2aM0, Stage IIIC rectal cancer involving the prostate. The patient underwent neoadjuvant chemoradiotherapy. Consequently, robot-assisted total pelvic exenteration with an ileal conduit and end colostomy creation were performed. The total operative duration was 9 h and 20 min. The durations of robot console usage by the colorectal and urological teams were 2 h 9 min and 2 h 23 min, respectively. The patient was discharged on postoperative day 21. The pathological diagnosis was T4b (prostate) N0M0, Stage IIC. The resection margin was 2.5 mm. During reassessment at 2 years after resection, no evidence of recurrence was observed. Conclusions Robot-assisted total pelvic exenteration was performed for a patient with advanced rectal cancer without serious complications. Robot-assisted total pelvic exenteration may provide the advantages of minimally invasive surgery, particularly in the enclosed space of the pelvis.
Background
Preoperative staging has improved because of enhanced imaging technology, and the multidisciplinary approach to rectal cancer has facilitated patient selection [1]; however, some patients require total pelvic exenteration (TPE). TPE involves total surgical removal of the pelvic viscera, including the bladder, rectum, and reproductive organs [2]. Minimally invasive colorectal cancer surgery has become widely accepted. Some institutions specializing in cancer treatment have reported the safety and feasibility of laparoscopic TPE [3,4]. However, the manipulation of rigid forceps against rectal cancer adherent to adjacent organs within a narrow pelvis remains a complicated and challenging surgical procedure that is regarded as an exclusion criterion for laparoscopic resection at most hospitals [5,6]. A robotic approach to TPE may be advantageous over conventional laparoscopic surgery because of the enhanced three-dimensional views and stable magnified views, as well as the increased dexterity of EndoWrist® (Intuitive Surgical, Sunnyvale, CA, USA) instruments, which provide a greater range of motion while eliminating tremor. We describe robot-assisted (RA) TPE (RA-TPE) performed for a patient with advanced rectal cancer involving the prostate after neoadjuvant chemoradiotherapy (CRT), and assess the feasibility of RA-TPE compared with laparoscopic TPE, conventional TPE, and simultaneous RA surgery for synchronous primary rectal cancer and prostate cancer.
Case presentation
A 67-year-old man was referred to our hospital after positive fecal blood test results. The patient was 164 cm tall and weighed approximately 51 kg. Further examination revealed advanced rectal cancer located below the peritoneal reflection and at the level of the dentate line that involved the prostate (Fig. 1a, b). According to the eighth edition of the TNM classification set by the Union for International Cancer Control, the initial diagnosis was clinical T4b (prostate) N2aM0, stage IIIC without lateral lymph node metastasis.
Neoadjuvant CRT consisting of 45 Gy in 25 fractions combined with tegafur, uracil, and folinic acid was administered. After neoadjuvant CRT, the tumor size decreased from 55 to 45 mm, but the prostate was still involved (Fig. 1c, d). Possible surgical procedures were discussed at a multidisciplinary conference. Partial prostatectomy was thought to be a suboptimal procedure for attaining negative tumor resection margins. The advantages and disadvantages of bladder-sparing prostatectomy with vesicourethral anastomosis were considered. For patients with lower rectal cancer who undergo neoadjuvant CRT, a diverting ileostomy is usually planned to avoid anastomotic leakage and two-stage stoma closure because of the impact of neoadjuvant CRT on anastomoses. However, there is little evidence supporting the feasibility of vesicourethral anastomoses after CRT. Because of the concern regarding refractory vesicourethral leakage in the dead space of the pelvic cavity after TPE, we abandoned vesicourethral anastomosis and pursued RA-TPE. Because our center for minimally invasive surgery had been performing robotic surgeries for more than 10 years, the institutional ethical review board decided that our proposal for RA-TPE successfully met the guidelines for safe introduction of highly complex medical techniques set by the Japanese Ministry of Health, Labor, and Welfare. The expense for this novel surgery was not covered by the Japanese health insurance; therefore, it was covered by our institution.

Fig. 1 (a) Colonoscopy image and (b) magnetic resonance image at the time of the initial diagnosis. The ulcerative tumor is located in the lower rectum and proctodeum. Biopsy revealed tubular adenocarcinoma. (c) Colonoscopy image and (d) magnetic resonance image after neoadjuvant chemoradiotherapy. Partial response was observed. However, the prostate was still involved.
Surgical procedure
Five robotic ports were placed, including one 12-mm port (Fig. 2). Another 12-mm conventional laparoscopic port for an assistant operator was placed in the right upper quadrant because of the uncertainties regarding the adequacy of current measures to achieve effective hemostasis using robotic instruments. The patient was positioned in the modified Lloyd-Davies position and tilted in the Trendelenburg position by 17 to 20 degrees. The Da Vinci Xi ® (Intuitive Surgical, Sunnyvale, CA, USA) surgical system cart was installed on the left side of the patient. First, the colorectal surgeons targeted the left external iliac artery with the robotic axis for mobilization of the left mesocolon. The inferior mesenteric artery was ligated for total mesorectal excision. After the pelvic phase, the axis changed toward the peritoneal reflection. Mobilization of the posterior mesorectum proceeded down to the pelvic floor. Subsequently, the urological surgeons began operating without repositioning the robot. The bilateral urinary tracts were taped and dissected toward the bladder. The Retzius space was dissected to reach the prostatic apex. The deep dorsal vein of the penis was divided and sealed with a robotic vessel sealer. Santorini's venous plexus was tied using 3-0 V-Loc ® (Medtronic, Minneapolis, MN, USA). Renal damage was minimized by transecting the ureters at the end of the procedure. The RA procedure was completed by amputation of the sigmoid colon using a surgical stapler, and the specimen was retrieved through perineal resection. For patients who have undergone neoadjuvant chemoradiotherapy, prophylactic lymph node dissection of the pelvic wall is not routinely performed at our institution. An ileal conduit and ureteric anastomoses were created extracorporeally with a mini-laparotomy measuring 7 cm.
Results
The duration of the entire procedure was 9 h and 20 min. The durations of robot console usage by the colorectal and urological teams were 2 h 9 min and 2 h 23 min, respectively. The estimated volume of blood loss was 200 ml. The pathological diagnosis was ypT4b (prostate) N0M0, stage IIC, with a resection margin of 2.5 mm (Fig. 3). Oral intake was reintroduced on postoperative day 3, starting with a liquid diet. Although the patient was asymptomatic, laboratory data indicated mild inflammation, and antibiotics were administered until postoperative day 14. The patient had a postoperative hospital stay of 21 days. At 2 years after resection, there was no evidence of cancer recurrence.
Discussion
Although the global standard of management for advanced rectal cancer is neoadjuvant radiotherapy with or without chemotherapy and total mesorectal excision [7], the Japanese guidelines for the treatment of colorectal cancer recommend total mesorectal excision with lateral lymph node dissection based on evidence from Japanese Clinical Oncology Group 0212 [8,9]. Prophylactic lateral lymph node dissection contributes to a lower rate of locoregional recurrence, but it does not enhance the overall survival of advanced rectal cancer patients without radiotherapy. To date, there has been only one randomized controlled trial comparing the outcomes of lateral lymph node dissection and nerve-preserving resection for patients with rectal cancer after preoperative radiotherapy; that study consisted of 51 patients and showed no difference in survival or disease-free survival [10]. It remains controversial whether neoadjuvant radiotherapy can be a substitute for prophylactic lateral lymph node dissection. Our institution conforms to the global standard for neoadjuvant radiotherapy without prophylactic lateral lymph node dissection. If locoregional recurrence is detected only in the pelvic wall, then metachronous lateral lymph node dissection of the recurrent side is proposed. During the present study, our surgical decisions considered the expected effects of neoadjuvant radiotherapy, such as tumor size reduction and possible preservation of the prostate.

Fig. 2 Schema of port placement. A total of five robotic ports were placed (red line), and another conventional laparoscopic 12-mm port (black line) was inserted in the right upper quadrant by an assistant operator. The Da Vinci Xi® patient cart was rolled to the left side of the patient only once during surgery.
The first use of RA-TPE was published in 2011 by Vasilescu et al., who performed RA-TPE for recurrent endometrial cancer [11]. Subsequently, Shin et al. reported RA-TPE for rectal cancer in 2014 [12]. Although both RA prostatectomy and RA rectal resection are becoming common choices for malignancies originating from the prostate and rectum, respectively, RA-TPE has been reported for only a few cases. For advanced colorectal cancer, only four case reports of RA-TPE were found by cross-searching "colorectal neoplasms, " "robot surgery, " and "pelvic exenterations" in MEDLINE (Table 1) [12][13][14][15]. Including our case, operative times ranged between 200 and 560 min, and the amount of blood loss was 100 to 350 ml (missing in one case). The most severe complication was ureteric stricture requiring stent placement. The length of the postoperative hospital stay ranged from 7 to 21 days. All cases were T4, and there was only one case of stage IV [14]. Our case had the longest follow-up period (24 months). Oncological outcome data of RA-TPE, which are the most meaningful data, are still lacking because of the small number of reported cases to date; therefore, its prevalence remains unknown.
Heah et al. reported three cases of RA bladder-sparing pelvic exenteration for colorectal cancer [16]. All patients underwent neoadjuvant CRT before resection. In contrast, salvage radical prostatectomy has not been widely accepted as treatment for radiation-recurrent prostate cancer because of the surgical morbidity associated with the procedure. The incidence of anastomotic stricture in salvage radical prostatectomy varies from 9% to 33%, whereas that of urinary continence ranges from 33% to 80% [17]. These results are reflected in the very low prevalence of salvage radical prostatectomy [18]. Considering these risks, vesicourethral anastomosis was avoided in our case, despite the advantages of enhanced dexterity with a robotic anastomosis compared to that of the laparoscopic approach. We believe that further validation during patient selection after CRT is required to estimate the feasibility of vesicourethral anastomosis and RA bladder-sparing pelvic exenteration. A brief review of the feasibility of RA-TPE, performed by comparing its short-term outcomes with those of the other approaches (conventional TPE, laparoscopic TPE, and simultaneous RA surgery for synchronous primary rectal and prostate cancer), is shown in Table 2. The short-term outcomes of the reported RA-TPE cases appear comparable to those in the latest report of a large-scale investigation of TPE [19]. The median operative durations were 480 min (range 200-570 min) for RA-TPE cases and 462 min (range 333-582 min) for conventional TPE. The median amount of blood loss was 250 ml (range 100-350 ml) in the former group, and 50% of the patients who underwent conventional TPE required transfusion. Major complications, classified as Clavien-Dindo grade 3 or greater, were observed in one of five patients who underwent RA-TPE and in 120 of 749 patients who underwent conventional TPE. Fukuta et al. reported five patients who underwent simultaneous RA surgery for synchronous rectal and prostate cancer [20]. Preoperative radiotherapy was not introduced. Four patients underwent total mesorectal excision and radical prostatectomy separately, and only one patient underwent en bloc resection of both cancers. The short-term outcomes, including operative duration, amount of blood loss, and hospital stay, indicated the possible feasibility of simultaneous RA rectal resection and RA radical prostatectomy, even though one patient developed colorectal anastomotic leakage and two patients experienced vesicourethral anastomotic leakage. Their outcomes are quite similar to those of RA-TPE. The operative duration of RA-TPE seems even shorter, probably because of differences in the procedures, such as the necessity for dissection between the rectum and prostate in the narrow pelvis. Uehara et al. reported the feasibility of laparoscopic TPE compared to that of conventional TPE [6]. The authors mentioned that laparoscopic TPE should be applied for carefully selected patients. In the field of rectal cancer, particularly in men, because laparoscopic surgery is performed deeper in the pelvis, the range of motion of the rigid forceps becomes further limited, and the dissection becomes more difficult. However, RA-TPE seems advantageous over laparoscopic surgery because of the increased dexterity of EndoWrist® instruments, which provide a greater range of motion even in the narrow pelvis.
Our study reports acceptable short-term outcomes of RA-TPE for primary advanced rectal cancer. Because of the small number of reported cases, the optimal criteria for RA-TPE have not been elucidated; therefore, its application should be limited to carefully selected patients. RA-TPE warrants further studies with more cases to estimate its exact feasibility.
Conclusions
RA-TPE was performed for a patient with advanced rectal cancer without serious complications. RA-TPE may provide the advantages of minimally invasive surgery, particularly in the enclosed space of the pelvis. | 2022-10-07T15:30:15.231Z | 2022-10-07T00:00:00.000 | {
"year": 2022,
"sha1": "a2c1b4d7afbd44ce2a00881ef329ff7abea5a137",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "a2c1b4d7afbd44ce2a00881ef329ff7abea5a137",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247779887 | pes2o/s2orc | v3-fos-license | Toward bioinspired polymer adhesives: activation assisted via HOBt for grafting of dopamine onto poly(acrylic acid)
The design of bioinspired polymers has long been an area of intense study; however, applications to the design of concrete admixtures for improved materials performance have been relatively unexplored. In this work, we functionalized poly(acrylic acid) (PAA), a simple analogue to polycarboxylate ether admixtures in concrete, with dopamine to form a catechol-bearing polymer (PAA-g-DA). Synthetic routes using hydroxybenzotriazole (HOBt) as an activating agent were examined for their ability to graft dopamine onto the PAA backbone. Previous literature using the traditional coupling reagent 1-ethyl-3-(3-dimethylaminopropyl)-carbodiimide (EDC) to graft dopamine to PAA was found to be inconsistent, and the sensitivity of EDC coupling reactions necessitated a search for an alternative. Additionally, the HOBt route allowed for greater control over per cent functionalization of the backbone, is simple and robust, and showed potential for scalability. This finding also represents a novel synthetic pathway for amide bond formation between dopamine and PAA. Finally, we performed preliminary adhesion studies of our polymer on rose granite specimens and demonstrated a 56% improvement in the mean adhesion strength over unfunctionalized PAA. These results demonstrate an early study on the potential of PAA-g-DA to be used for improving the bonds within concrete.
Introduction
Interest in the design of catechol-bearing polymers has rapidly expanded over the last decade across a diverse array of industrial and biomedical applications such as adhesives, energy storage and drug delivery platforms [1][2][3]. This research has been in part driven by the characterization of the foot proteins of mussels, such as Mytilus edulis, which contain an unusually high concentration of post-translationally modified tyrosine residues in the form of dihydroxyphenylalanine (L-DOPA) [4,5]. The catechol moiety of L-DOPA provides mussels with the ability to adhere to surfaces in wet environments, an elusive property for many synthetic materials [4,[6][7][8]. Along with its wet-setting properties, mussel protein adhesion is not hindered by the surface energy of the substrate, as bonding has been demonstrated even to surfaces such as Teflon [4]. This versatility arises from its unique ability to bind to organic and inorganic substrates in multiple ways, including ionic coordination, π-π stacking, covalent and hydrogen bonding [9][10][11].
While mussel-inspired hydrogels, adhesives and coatings have gained extensive interest in biomedicine and nanotechnology [12][13][14], the use of such polymers in construction applications, such as concrete, has been relatively unexplored. When considering other marine adhesive-producing organisms, analogous comparisons to concrete have emerged. The eastern oyster, Crassostrea virginica, produces an organic-inorganic hybrid adhesive with which individuals adhere to one another, forming vast reef structures [15,16], and incorporates material from the surrounding environment that enhances the material properties [17], in a similar fashion to the way aggregate can reinforce cement paste. However, the understanding of oyster adhesion is incomplete, and only indirect evidence of DOPA chemistry has so far been found [18]. The sandcastle worm, Phragmatopoma californica, is known to secrete an L-DOPA containing silk-like adhesive to build habitats from sand grains [19][20][21].
These marine cementitious analogues suggest that such bioinspired polymers could be used in concrete for improved material properties. Polymers have been widely used as chemical admixtures in concrete, enabling enhanced properties such as setting, workability, durability and chemical resistance [22,23]. In particular, polycarboxylate ethers (PCEs) are employed as superplasticizers and set retardants in concrete and have been more recently investigated for use in low-carbon 'green' cements [24][25][26]. PCEs are primarily composed of carboxylic acid blocks that are frequently combined with polyethylene oxide side chains. In ordinary Portland cement, particles in solution tend to flocculate due to electrostatics; PCEs acting as surfactants in concrete mixes can reduce the amount of water required by adsorbing onto charged cement particle surfaces to better disperse them. The adsorbed polymer chains deflocculate the hydrating cement particles by changing the overall surface charge (zeta potential) of the particles and by steric hindrance, which limits the van der Waals forces between particles [27].
In addition to the applications mentioned above for polymeric admixtures, there is potential for adhesive polymers to be used in strengthening the interfacial transition zone (ITZ) between aggregate and paste [28,29]. In this region, differences in elastic modulus and shrinkage lead to crack formation, making the ITZ a significant weak point in the composite [30]. Catechol moieties could be useful in this region due to their ability to bind to inorganic surfaces, such as those found on common aggregate substrates, as well as to calcium-silicate-hydrate (CSH), the primary component of cement paste. In this work, we examine poly(acrylic acid) (PAA) as a simple PCE analogue in the synthesis of a catechol-bearing polymer with adhesive functionality via grafting with dopamine.
While PAA-dopamine (PAA-g-DA) conjugates have been described in prior literature, the methodology behind the synthesis has been inconsistent, and no reports on repeatability or scalability are mentioned [31][32][33]. Conjugation of primary amines with carboxyl groups to form amide linkages has traditionally been performed using carbodiimides, such as the zero-length cross-linker 1-ethyl-3-(3-dimethylaminopropyl)-carbodiimide (EDC) [34][35][36]. Water solubility and activation at physiological pH have earned EDC widespread use in conjugation reactions, peptide synthesis and peptide immobilization. In the coupling process, EDC forms an active ester intermediate, O-acylisourea, which directly interacts with an amine to form the amide bond and releases a water-soluble urea by-product, which can be difficult to separate from water-soluble products on a larger scale. Despite this efficiency, EDC is susceptible to hydrolysis and has a tendency to form the non-reactive N-acylurea during coupling reactions [37,38]. An additional complication for PAA is the formation of anhydride during the EDC coupling process, preventing the formation of the amide bond [39].
The addition of N-hydroxysuccinimide (NHS) or N-hydroxysulfosuccinimide (sulfo-NHS) stabilizes the intermediate by the formation of an NHS-ester and has been shown to enhance amide bond yields [40]. The reaction is widely accepted as a two-step process, with the formation of the O-acylisourea optimized at pH 4.5-7.5 and the formation of the NHS-ester at pH 7.5-8. Alternative activators for EDC that focus on decreasing racemization of the acylurea have also been explored, including hydroxybenzotriazole (HOBt), which, alongside reducing racemization, increases the rate of the coupling reaction [41]. During amide formation reactions, HOBt forms activated esters that react with amines at room temperature to give the desired amides. One drawback of anhydrous HOBt is its explosive potential in non-aqueous systems; however, most commercially available HOBt is provided wetted at no less than 20% water and does not exhibit explosive properties [42]. Additionally, gas-phase experiments showed triazole-ester reagents such as HOBt to be more reactive than NHS-esters and, by computational predictions, to have a lower transition state barrier [43].
In this work, the synthesis of PAA-g-DA in an aqueous solvent system was optimized by exploring an HOBt-based activation pathway without the use of EDC. Our goal was to synthesize a catechol-bearing polymer adhesive for use as a concrete admixture, focusing on a simple, robust reaction with the potential for scale-up.
Reagents
High-purity deionized water was obtained from a Millipore Milli-Q Advantage water system with a measured resistivity of ≥18 MΩ·cm.
Synthesis of dopamine-grafted poly(acrylic acid) via HOBt-mediated activation
PAA (1.1 ml, 4.16 mmol) was diluted in DI water (15 ml) and stirred while a stream of nitrogen was bubbled through the solution for 10 min using a long needle (18 G) to remove dissolved oxygen; nitrogen flow was maintained throughout the reaction. Grafting of dopamine to PAA was performed along the routes shown in scheme 1. First, either NHS (0.240 g, 4.16 mmol) or HOBt (0.675 g, 4.16 mmol) was added and stirred for 10 min. To aid the solubility of HOBt, it was first dissolved in a minimal amount of anhydrous N,N-dimethylformamide (DMF). For reactions featuring only HOBt or NHS, once the addition was complete, the solution was stirred for 15 min. After completion of the addition, a white precipitate formed due to the insolubility of HOBt, and additional DMF (5 ml) was added. The additional DMF re-dissolved the HOBt, and the solution turned a pale yellow. For two of the synthetic routes, NHS and HOBt were used together, alternating which was added first. Finally, DA (0.789 g, 4.16 mmol) was added in small increments, ensuring complete dissolution with each addition. The reaction was stirred for 10 min. The pH was adjusted with 100 µl of 1 M NaOH and a catalytic amount of trimethylamine (150 µl, 0.1 mol%), and the mixture was stirred for 48 h at room temperature. Dialysis was performed for 3 days in DI water at room temperature, followed by lyophilization to obtain a solid white product. The above reaction has a stoichiometric ratio of 1 : 1 : 1 : 1 (PAA : NHS : HOBt : DA) for all reactants except the catalytic amounts of NaOH and trimethylamine. Similar reactions were performed at ratios of 1 : 1 : 2 : 2 and 1 : 1 : 0.5 : 0.5. Additionally, reactions were conducted with and without the catalytic amounts of base.
Instrumentation for characterization
A ThermoFisher Scientific (Waltham, MA, USA) Nicolet iS 10 FTIR spectrometer was used for FTIR characterizations. 1D 1H and 13C NMR were performed using a Bruker (Billerica, MA, USA) Avance 300 MHz NMR spectrometer, while 2D 1H-1H COSY and 1H-13C HMBC NMR were performed on a Bruker (Billerica, MA, USA) Avance 600 MHz NMR. All NMR spectra were collected in D2O. Thermogravimetric analysis was performed using a TA Instruments (New Castle, DE, USA) simultaneous thermal analyser (SDT 650) under a nitrogen atmosphere. Data were collected from 30 to 800°C at a heating rate of 10°C min−1 after water was removed through equilibration at 110°C for 4 min. Scanning electron microscopy (SEM) images were obtained in backscatter mode using a Phenom ProX SEM (Phenom-World B.V., Eindhoven, Netherlands).
Adhesive testing
To evaluate the potential of PAA-g-DA as a bonding agent in concrete, tensile adhesion tests using a modified ASTM D2095 method were performed. Rose granite, selected as a model substrate for aggregate commonly found in concrete composites due to its relatively lower porosity than other aggregate materials (e.g. limestone), was cut into cylindrical specimens for butt-joint testing. Such testing allows examination of uniaxial tension, which is relevant for strength testing of concrete materials [44][45][46], especially along the ITZ, where such forces cause crack propagation. The adhesive was prepared in water at 0.3 g ml−1 and cured for 48 h at room temperature onto cylindrical rose granite adherends affixed to steel platens with epoxy (Sikadur-35 Hi-Mod LV) (electronic supplementary material, figure S1). Adherends were polished using a Buehler Ecomet 3 variable-speed grinder-polisher with P-120 grit silicon carbide paper. For each adherend, 50 µl of PAA-g-DA or PAA solution (0.3 g ml−1) was applied to the interface, and a 250 g weight was used to apply pressure to the interface during curing. The steel platens were attached to the load frame with bicycle chains to allow rotational freedom during loading; the chains permit some rotation but require an applied force to straighten their pins as tension is applied. Adhesion tests were therefore performed using an Instron (Norwood, MA, USA) universal testing system at 0.02 in min−1 with a preload force of 30 lbf, chosen to rectify any residual kinks within the bicycle chain without masking the adhesive's stress response.

Although prior literature covers both PAA and polydopamine broadly, the articles selected for comparison were specific to PAA-dopamine synthesis via grafting. While this point is not discussed in prior work with PAA-dopamine, it was found that a nitrogen-purged solution and atmosphere through all steps were essential for the success of the coupling reaction. Dissolved oxygen may exhaust the coupling agent, and reaction yield drops significantly, often resulting in no coupling altogether.
The four articles cited in table 1 used a range of molecular weights of PAA, from 5 to 100 k. For this work, a 50 k MW PAA was used. Except for Duan et al. [47], EDC was used as the coupling reagent for grafting dopamine in all of this work, and only Wu et al. [33] used NHS as an activator. The schemes of Min et al. [31], Lee et al. [32] and Wu et al. [33] performed the reaction as a one-pot synthesis, whereas Duan isolated the PAA-NHS ester first from a N,N'-dicyclohexylcarbodiimide (DCC)-PAA reaction and then reacted the isolated product with dopamine. Reaction conditions were maintained in a pH range of 5.5-6.0 for EDC; however, according to the information provided in table 1, Lee et al. [32] did not use a PBS buffer. Dialysis conditions also varied; Lee et al. [32] used a pH 5.0, 10 mM NaCl dialysis, Wu et al. [33] used unadjusted Milli-Q water, and Min et al. [31] did not specify the conditions. The present study found no significant differences in grafting per cent as a result of using PBS buffer versus DI water during reaction or in dialysis conditions. However, dialysis was required for at least 3 days to entirely remove unreacted reagents and by-products, followed by lyophilization for 3 days to remove all residual water from the final fibrous product. Finally, % grafting, as determined by NMR integration of polymer backbone protons and aromatic protons, was under 10% except for Min et al., who reported a high 27.5% functionalization. It is unclear why the grafting yield was so much higher in the study by Min et al. [31], and the amount of dopamine used in that reaction was not provided. Inconsistency in amide bond yields with EDC and sensitivity to pH are problematic for replication and scalability of these reactions. Additionally, unwanted formation of anhydride with PAA and EDC adds further complications to grafting yields.
Given these issues, an alternative to EDC was sought. In scheme 1, four alternative routes toward synthesis of PAA-g-DA are depicted. Additionally, we attempted the HOBt + NHS route both with and without a catalytic amount of base and triethylamine. For all coupling routes, reagents were added one at a time and allowed to stir in solution before additional reagents were added. HOBt required a 1 : 1 ratio of DMF : water for complete dissolution. After the addition of dopamine, the reaction was allowed to stir for at least 48 h, providing adequate time for coupling of the dopamine to the polymer. An additional 24 h was allowed for scaled-up reactions.
Validation and characterization of PAA-g-DA
After lyophilization, grafting of catechol was confirmed by the presence of aromatic protons in 1H NMR spectra (figure 1). The PAA backbone is characterized by a broad peak centred at 2.2 ppm, arising from the de-shielded proton of the carbon adjacent to the carboxyl group, and a broad split peak around 1.5 ppm for the methylene protons. The methylene bridge of the dopamine functional group is seen as a pair of triplets near 2.8 and 3.15 ppm, shifted downfield from those in the dopamine starting material. The aromatic protons of the dopamine appear between 6.6 and 6.9 ppm as a series of doublets arising from ortho and meta coupling of the catechol [48]. Percentages of functionalization were calculated from the integration of the proton peak near the carbonyl group (labelled 1 in figure 1) of the polymer backbone relative to the aromatic proton isolated from the other aromatic peaks (labelled 5-7 in figure 1). From this integration ratio, the percentage of acrylic acid grafted with dopamine was derived. Additionally, it was evident from 13C NMR that the amide linkage had formed, owing to a peak around 177 ppm, which is typical for an amide carbonyl carbon (electronic supplementary material, figure S2). Two-dimensional NMR further characterized the amide bond formation. Bond coupling was observed in HMBC spectra (figure 1c) between the carbonyl carbon of the amide bond and the polymer protons, indicating amide formation. Two-dimensional COSY NMR (figure 1b) showed 3J coupling between the polymer protons (zone-1), followed by methylene proton coupling (zone-2) and finally aromatic proton coupling (zone-3). As shown in figure 2, when the desired PAA-g-DA was formed, the methylene protons shifted further downfield (Δδ = 0.0145) compared with DA by itself. This again indicates that the amide bond was successfully formed, as the downfield shifts are due to the presence of the carbonyl group from the polymer backbone. Similarly, a downfield shift was seen in the aromatic protons due to the electron-withdrawing effect of the amide bond formation (Δδ = 0.0291). The aromatic protons after grafting were broadened due to faster transverse relaxation (shorter T2) when a small molecule such as dopamine is bound to a large macromolecule such as PAA.
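The functionalization calculation described above reduces to a ratio of normalized 1H integrals: the isolated aromatic proton reports one hydrogen per grafted dopamine, while the backbone proton near the carbonyl reports one hydrogen per acrylic acid repeat unit. The sketch below is a minimal illustration of that arithmetic; the integral values in the example are placeholders rather than measured data.

```python
def percent_grafting(aromatic_integral, backbone_integral,
                     n_h_aromatic=1, n_h_backbone=1):
    """Estimate % of acrylic acid units grafted with dopamine from 1H NMR integrals.

    aromatic_integral : integral of the isolated aromatic proton (one H per graft)
    backbone_integral : integral of the backbone proton near the carbonyl
                        (one H per acrylic acid repeat unit)
    """
    grafts = aromatic_integral / n_h_aromatic
    repeats = backbone_integral / n_h_backbone
    return 100.0 * grafts / repeats

# Placeholder integrals for illustration only (not measured values):
print(f"{percent_grafting(0.08, 1.00):.1f}% grafting")  # -> 8.0%
```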
Optical and SEM imaging of PAA and PAA-g-DA (figure 3a-d) highlights the morphological changes from grafting dopamine to the polymer. The PAA stock was diluted prior to freeze-drying to match the concentration found in the reaction solution and appears to comprise thin, sheet-like structures with thin strands along the edges, probably an artefact arising from drying as water is pulled from the material. By contrast, the grafted polymer retained sheet-like morphologies but also contained many thin, fibrous structures. Additionally, the successful coupling can be confirmed by the presence of the amide bond in the FTIR spectrum (figure 3e) [49]. A small peak at 2934 cm−1 (figure 3e and electronic supplementary material, figure S3) corresponds to the CH2 stretching vibration of the polymer backbone; as DA is grafted to the backbone, the molecular weight increases, which results in a change of vibration inversely proportional to the change in mass. In the final product, this peak has shifted to 2930 cm−1 as the addition of dopamine increases the molecular weight. The most prominent peak in the spectra is the carbonyl stretching frequency at 1694 cm−1, which shifts to 1697 cm−1 in the dopamine-grafted product. Subtraction of the PAA spectrum from the PAA-g-DA spectrum resolves a hidden peak near 1604 cm−1 that may correspond to a portion of the amide I band or to aromatic C-C bonds. For amide I, this band contains characteristics of both the C=O and C-N stretches, leading to the overlap with the C=O of the neat PAA. The peak at 1544 cm−1, present only in PAA-g-DA, correlates to the amide II band, confirming grafting of dopamine to PAA. Amide II contains characteristics of N-H bending and C-N stretching. Given there was only 8% grafting in the product analysed, this peak is expected to be small. The TGA and differential (DTG) curves of 50 k MW PAA and the dopamine-functionalized 50 k MW PAA are shown in figure 3f. The hydrophilic nature of PAA can cause large amounts of water (approx. 5-10 wt%) to be present that can obscure results, and so care was taken to evaporate water by equilibrating the material for 4 min at 100°C. The first region of 140-310°C had an observed weight loss of 25% for all samples; previous reports have linked degradation in this region to carboxylic acid side-chain interactions and decomposition, cyclization to form anhydrides, and decomposition and release of CH4 and CO2 [50,51]. The main-chain degradation and scission of PAA occurs in the second region, at temperatures above 310°C, with a mass loss of 57.41% for the 50 k MW PAA control. The dopamine-containing PAA (PAA-g-DA) had similar thermal decomposition behaviour. In the first region of interest at 140-310°C, PAA-g-DA exhibited a 21.12% weight loss. It appears that dopamine slightly enhances the stability in the second region of 310-600°C that has been previously attributed to main-chain degradation. In the range of 300-400°C, the PAA-g-DA showed a delay in degradation compared with non-functionalized PAA. Catechol groups improving the thermal stability of polymers has been observed in prior literature and has been attributed to hydrogen bonding and restriction of chain motion [52,53]. Additionally, dopamine can scavenge radicals generated by C-C bond pyrolysis, blocking depolymerization of the PAA backbone by chain scission [52,54].
At a temperature of 600°C, the 50 k MW PAA reached an equilibrium weight of 19.0% remaining. For the dopamine-functionalized samples, the weight remaining at 600°C was 17.9%. This may indicate the breakdown of the aromatic rings and other oxidized structures.
Effect of HOBt and NHS on coupling
For all experiments, HOBt-mediated grafting resulted in a soluble product that showed a coupling dependence on the ratio of the dopamine/activating reagent to polymer used. As this ratio increased, so did the percentage of grafting (figure 4). In table 2, the grafting percentages for the 1 : 1 : 1 (PAA : HOBt : dopamine) reactions for each of the four pathways tested are shown. In the presence of NHS, the reaction performed less efficiently (8% grafting) than when HOBt was the sole activating agent (11% grafting), potentially because the two reagents act as competing activation pathways, resulting in lower dopamine grafting yields. For the reaction with NHS without HOBt, the polymer product showed signs of dopamine oxidation, evidenced by dark material formation after dialysis. Oxidation of catechol leads to a higher degree of cross-linking within the polymer, resulting in lower adhesive strength, and so this product was discarded [55]. Addition of a catalytic amount of base to the reaction improved grafting yield for path b (HOBt) to 18% and path d (NHS + HOBt) to 12%, while path a (HOBt + NHS) was unchanged. It was also found that repeated experiments were consistent in grafting yield within a range of a few per cent, and the order of adding NHS and HOBt was largely irrelevant to the final grafting yield. To further test the robustness of the reaction, the synthesis of the HOBt + NHS route was conducted at 2×, 6× and 12× scale-up based on the reagent amounts described in the methods (§2.2) (table 3). The HOBt pathway continued to show improved grafting over the NHS + HOBt pathway; however, the grafting percentage did decline from 18% to 11-13% at the 6× and 12× scale, respectively. Overall, this indicates that the use of HOBt as the sole activating agent increases the reaction performance, as there is no other competitive activating agent which could hinder the reaction efficiency through a competitive process.
Table 2. Grafting percentage for the coupling pathways used in these experiments, both with and without a catalytic amount of base. Ratios of all reagents were 1 : 1 for all reactions.
Proposed mechanism for HOBt-mediated synthesis
HOBt has traditionally been used in combination with a carbodiimide coupling agent, like EDC, to enhance peptide coupling reactions (vide supra) [56]. However, our experimental results show that the reaction can take place without the carbodiimide. The utilization of HOBt as an activating agent appears to be a novel alternative method for functionalizing PAA with amide moieties. As shown in figure 5, we propose that the activation is initiated by condensation between the carboxylic acids on PAA and the HOBt, leading to the formation of the activated transesterified HOBt ester. The formation of the free amine occurs through the addition of catalytic amounts of 1 M NaOH and triethylamine (TEA) [57]. The free dopamine reacts with the activated carboxylic acid while eliminating HOBt, resulting in the final desired dopamine-grafted poly(acrylic acid). Reaction mechanism calculations were performed for the addition of dopamine to form the tetrahedral intermediate using M06-2X/6-31G(d) including implicit solvent (SMD = water). The PAA was modelled as a monomer unit truncated with a CH2CH3 group to save computational time. This computational modelling suggested that formation of the activated PAA-HOBt complex was unfavourable (electronic supplementary material, figure S5) despite experimental evidence; however, these results were simulated without accounting for the influence of DMF or the catalytic amount of base. Calculations on addition of dopamine to the already activated PAA showed the reaction barrier for PAA-HOBt (3.9 kcal mol−1) to be slightly lower than that for the alternative PAA-NHS complex (4.3 kcal mol−1) (electronic supplementary material, figure S6), which indicates that the HOBt-activated pathway is more favourable than the NHS-activated pathway. However, the final product for the HOBt pathway was only 0.1 kcal mol−1 lower than the predicted tetrahedral transition-state complex, which indicates this reaction is possibly reversible. Complementary work by Bu et al. [43] performed similar calculations of a sulfo-benzoyl-HOAt complex reacting with methylamine (NH2-CH3); the reactants involved are different from those in this work's calculations, and the calculations were done at a different level of theory [B2LYP/6-311G++(d,p)]. The results discussed in Bu et al. [43] should and do differ from the present work, but qualitatively similar trends were observed. The HOAt complex shown in Bu et al. [43] has a slightly lower activation barrier than the NHS complex, which aligns with the data given in this study; however, the product provided in the SI of Bu et al. [43] is not a tetrahedral intermediate but is optimized to where the HOAt has already eliminated from the intermediate. The conclusion is that rather than existing as an electrostatic, tetrahedral complex, the intermediate predicted in the SI figure breaks relatively weak van der Waals interactions to yield the final product. Therefore, it is possible that allowing the tetrahedral intermediate to optimize and the HOBt to leave will indeed make the overall reaction energetically favourable, as shown in figure 5.
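To put the computed 0.4 kcal mol−1 difference between the PAA-HOBt and PAA-NHS barriers in perspective, the corresponding ratio of rate constants at room temperature can be estimated from a simple Boltzmann factor. The sketch below is only an order-of-magnitude illustration assuming transition-state theory with identical pre-exponential factors; it is not part of the reported calculations.

```python
import math

R = 1.987e-3  # gas constant in kcal mol^-1 K^-1

def rate_ratio(delta_delta_g_kcal, temperature_k=298.15):
    """Ratio of rate constants for two pathways whose activation energies
    differ by delta_delta_g_kcal, assuming equal pre-exponential factors."""
    return math.exp(delta_delta_g_kcal / (R * temperature_k))

# 4.3 - 3.9 = 0.4 kcal/mol barrier difference between NHS- and HOBt-activated paths
print(f"k(HOBt)/k(NHS) ~ {rate_ratio(0.4):.1f}")  # roughly a factor of 2 at 298 K
```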
Adhesion testing of PAA-g-DA
All samples were polished to reduce potential mechanical interference from rough surfaces. Figure 6a shows the modified ASTM D2095 set-up with rose granite substrates adhered to one another in tension. PAA and PAA-g-DA were fashioned as water-based adhesives to mimic the wet conditions found within concrete. The average tensile adhesive strength of the 8% dopamine-grafted PAA (based on NMR analysis) was 1.94 MPa, significantly higher than that of PAA alone (1.15 MPa; p-value 0.0045).
Adhesion strength for the PAA-g-DA showed more variability than PAA, with a maximum and minimum strength of 2.6 and 1.3 MPa, respectively, for the 10 samples examined (two PAA-g-DA adherends failed prior to loading). Most samples demonstrated cohesive failure, with PAA-g-DA visible on both adherends after the test (electronic supplementary material, figure S7A-B).
Additionally, there were some instances of substrate failure, where the granite was chipped near the surface (electronic supplementary material, figure S7C). While the rose granite samples were polished, the porosity and inhomogeneity of the surface may have led to the variation seen in these results.
Figure 5. Proposed mechanistic route for HOBt-mediated synthesis.
Adhesive strength was also tested for PAA-g-DA at a pH of 13-14 (comparable to that of concrete) with the addition of 10 M NaOH immediately following application of the polymer to the granite substrate. It was found that adhesion strength was considerably weakened under this condition, and while the maximum adhesion was higher than that of PAA alone (2.3 versus 1.3 MPa), the mean value was not statistically different from that of PAA alone (1.16 MPa; p-value 0.9569). The effect of pH on catechol adhesion in mussel foot proteins has been noted before [58,59], with observations that oxidation of OH groups at high pH generally leads to weaker adhesive strength on mica and titanium. However, Yu et al. also noted that the binding strength of the DOPA-containing mussel foot protein-3 increased on TiO2 with increasing pH (up to 7.5), due to a shift in binding mode from hydrogen to coordination bonding [60]. This increase in pH led to the opposing effects of decreasing DOPA-mediated adhesion and increasing bidentate DOPA-Ti coordination as a result of catechol oxidation [60]. We believe our study is the first on catechol adhesion at highly alkaline conditions. Finally, to examine the effect of moisture, adhesion tests were run at room temperature in a high-humidity environment (RH = 70-80%) at both the low and high pH conditions. However, all specimens for this method failed at low loading due to incomplete curing (electronic supplementary material, figure S8). Hydration of the material would lead to adhesive bond disruption, weakening the material. While we theorize that, when used within concrete, the adhesive will cure as the concrete dries, it may still be necessary to further optimize this system.
Conclusion
We sought to find a simple, robust and scalable route to synthesize a dopamine-grafted PAA adhesive for use as a concrete admixture. The work here illustrates a viable route for HOBt-mediated amide coupling of dopamine to PAA. The developed methodology avoids the unwanted by-products and sensitivity of EDC and provides greater control over the per cent DA grafted to the polymer. While HOBt is typically used as an activator alongside EDC [56], we observed that HOBt can provide amide bond formation without EDC. Furthermore, HOBt alone enables an unhindered activation-mediated reaction that forms the product without the use of an additional coupling agent. The synthetic scheme was robust and consistent, even when scaling the reaction up to 12-fold. Characterization through NMR and FTIR confirmed the formation of PAA-g-DA, and adhesion testing on a rose granite substrate demonstrated successful bonding of the polymer to aggregate material, with a 56% improvement in adhesive strength over neat PAA. While further optimization of the material may be necessary for higher pH conditions, these results demonstrate a promising application for this material. Future work will focus on further adhesion testing between cement paste and aggregate substrates to simulate the bonding at the ITZ. | 2022-03-30T13:09:40.291Z | 2022-03-01T00:00:00.000 | {
"year": 2022,
"sha1": "eb1fa6a657d8d932eb0f3229b8107cd7e0ebde60",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "RoyalSociety",
"pdf_hash": "eb1fa6a657d8d932eb0f3229b8107cd7e0ebde60",
"s2fieldsofstudy": [
"Materials Science",
"Engineering",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
52036203 | pes2o/s2orc | v3-fos-license | Late Proterozoic – Paleozoic Geology of the Golan Heights and Its Relation to the Surrounding Arabian Platform
Weissbrod, T., 2005. The Paleozoic in Israel and Environs, in J. K. Hall, V. A. Krasheninnikov, F. Hirsch, H. Benjamini, and A. Flexer, eds., Geological Framework of the Levant, 2: Jerusalem, p. 283-316.
Weissbrod, T. and Sneh, A., 2002. Sedimentology and paleogeography of the late Precambrian - early Cambrian arkosic and conglomeratic facies in the northern margins of the Arabo-Nubian Shield. Isr. Geol. Surv. Bull., 87.
Wolfart, R., 1967. Geologie von Syrien und dem Lebanon, 326 p. Gebruder Borntrager, Berlin.
Fig. 1. a. Regional setting of the study area. Modified after Garfunkel et al., 1981. DSFS - Dead Sea Fault System; EAF - East Anatolian Fault; JRV - Jordan Rift Valley. b. Location map of the seismic lines, deep boreholes and shallow water wells in the GH and the adjacent areas (Israeli side only), overlaying a Digital Terrain Model (DTM). DTM after Hall, 1993. MH - Mount Hermon. The map is given in WGS-1984 and Israel-TM-grid coordinate systems. c. Location map of the deep boreholes drilled in the Northern Jordan and SW Syria areas. Thickness information on the principal stratigraphic units penetrated by these boreholes is presented in Table 1.
The sedimentary succession accumulated within the Golan part of the depression extends from the Late Proterozoic at the bottom to the Pleistocene basalts at the top of the section, attaining a thickness of at least 8.5 km in the northern part of the plateau. The stratigraphic column beneath the basalt cover consists of up to 3,500 m of Infracambrian - Paleozoic succession, up to 5,000 m of Mesozoic rocks and about 1,500 m of Cenozoic section (Meiler et al., 2011). The thickness of the basaltic layer covering the Golan depression attains 1,100 m in the central Golan (Reshef et al., 2003; Meiler et al., 2011).
Regional background
During the Late Proterozoic - Paleozoic, the areas surrounding the Golan Plateau, i.e. the Levant, Arabian Platform and North Africa, constituted a part of the Gondwana continent. Following the Pan-African orogenic event and the subsequent cratonization, the region behaved typically as a stable platform during this time span. An extensive sedimentary cover of marine and continental origin accumulated over the area in several deposition cycles. Sedimentation of mostly siliciclastic deposits continued on the stable, subsiding passive-margin shelf of Gondwanaland until the Permian, when a series of rifting events related to the Neo-Tethys opening set on a new episode in the regional geological history (Garfunkel and Derin, 1984; Garfunkel, 1988; Weissbrod, 2005). The Late Precambrian - Early Cambrian clastic cycle consists of immature, polymictic and poorly sorted conglomerates and arkose that were mostly derived from the Pan-African metamorphic and plutonic terrain of the Arabo-Nubian Shield, to the west and south of the study area. The detrital sediments of the conglomeratic facies accumulated due to rapid and repeated subsidence episodes along major fault scarps and tectonic depressions, whereas the arkosic facies was deposited in a broad pericratonic basin, which extended from the Arabo-Nubian Shield in the south to the passive margin and the Paleo-Tethys in the north. Today, these clastics are discontinuously exposed throughout Saudi Arabia, Egypt, Jordan and Israel, separated by erosion gaps on the elevated igneous rocks of the Arabo-Nubian Shield. The thickness of the conglomeratic facies preserved within the rift-related depressions in Northern Arabia and the Eastern Desert of Egypt locally attains 5,000 m, whereas the thickness of the arkosic facies in Israel and Jordan attains at least 2,500 m (Weissbrod, 2005). The Paleozoic sediments are very widespread in the north-eastern part of the Arabo-African continent, comprising one of the most voluminous bodies of sediments in the region (Garfunkel, 1988). This second sedimentary cycle continued from the Middle Cambrian to the Permian, incorporating mostly siliciclastic deposits with mixed carbonate-shale intercalations throughout the sequence. The sediments accumulated in fluviatile environments and on a shallow epicontinental shelf, attaining a thickness of almost 5,000 m. Overall, the Late Precambrian - Paleozoic sequence attains a thickness of more than 10,000 m. However, due to at least three major uplift-and-erosion events ((1) end of Silurian; (2) end Devonian to Early Carboniferous; (3) Late Carboniferous to Early Permian), a complete time sequence is hardly found at any locality in the northern part of the Arabo-African continent (Garfunkel and Derin, 1984; Weissbrod, 2005).
The scope of the study
The purpose of the current work is to present the deepest stratigraphic section identified beneath the volcanic cover of the Golan Plateau, based on an extensive depth-domain seismic analysis, and to discuss the geological evolution of the study area during the Late Precambrian - Paleozoic time span in the light of the available information from the surrounding north-western parts of the Arabian Platform.
Fig. 2. Generalized cross-section showing the regional geological structure and the Late Proterozoic - Phanerozoic stratigraphic column in the area lying between the Ajlun anticline in the south and Mt. Hermon in the north (modified after Meiler, 2011; Meiler et al., 2011). The cross-section is based on analysis of three deep boreholes located in Northern Jordan (AJ-1, ER-1A and NH-2) and depth-domain interpretation of seismic data that covers the GH area (Figure 1b). The cross-section outlines the synclinal nature of the study area, confined by the Ajlun and Hermon anticlines. Note the similarity with respect to the thickness of the Infracambrian - Paleozoic sections revealed in the subsurface of the Northern Jordan and Golan Heights areas, suggesting an analogous geological history during this time span. On the contrary, the thickened Jurassic succession interpreted in the central and northern parts of the GH implies that a significantly different geological environment prevailed in the GH with respect to that of the Jordanian Highlands during the Early - Middle Mesozoic.
Database (Figures 1b & 1c)
- A set of twenty-five 2-D seismic reflection lines covering the GH area
- Formation tops from eighteen deep oil-exploration boreholes located in the Golan Heights area, Eastern Galilee, Northern Jordan and SW Syria. Table 1 presents the thickness information from the Jordanian and Syrian wells which penetrated the Paleozoic succession. Figure 1c indicates the location of these drillings.
- Formation tops from twenty shallow water and research wells drilled in the Golan Heights area
- Geological and topographical maps at different levels of resolution and geological cross-sections at local and regional scales
Seismic data processing
In the course of the present study, the Pre-Stack Depth Migration (PSDM) technique was utilized as the main seismic processing tool. PSDM was carried out from the surface topography, enabling an enhanced imaging of the Base-of-Basalt interface. The seismic data processing and analysis were accompanied by examination of stratigraphic information derived from the deep boreholes of the Jordanian Highlands and SW Syria (Figure 1c), which penetrated the Mesozoic-Paleozoic successions and, in one case, the Precambrian basement (Ajlun-1 borehole). Interval velocity analysis consisted of two steps:
1. 2-D velocity function construction for each of the 25 seismic lines, based on the Constant Velocity Half Space technique (Reshef, 1997).
2. 3-D interval velocity model construction, utilizing a MULTI 2-D approach.
The procedure resulted in a comprehensive 3-D interval velocity model that covers the entire study area, including the subsurface parts which lie in between the seismic lines. The velocity model was then smoothed in the 3-D domain, resulting in a global interval velocity model of the study area. The final depth sections were obtained with the Pre-Stack Explicit Finite-Difference Shot Migration and Post-Stack Explicit Finite-Difference Depth Migration algorithms, employing 2-D velocity functions extracted from the global 3-D model.
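The underlying idea of the depth-domain workflow, converting travel times to depth with a layered interval-velocity function, can be illustrated with a simplified 1-D example. The Python sketch below uses invented velocities and times and assumes a constant velocity within each layer; it is a conceptual illustration only and does not reproduce the actual PSDM or MULTI 2-D procedures used in the study.

```python
def times_to_depths(two_way_times_s, interval_velocities_m_s):
    """Convert two-way travel times at layer bases to depths, assuming a constant
    interval velocity within each layer (simplified 1-D illustration)."""
    depths, z, t_prev = [], 0.0, 0.0
    for t, v in zip(two_way_times_s, interval_velocities_m_s):
        z += v * (t - t_prev) / 2.0   # one-way thickness contributed by the layer
        depths.append(z)
        t_prev = t
    return depths

# Hypothetical example: three layers (e.g. basalt cover, Mesozoic, Paleozoic)
print(times_to_depths([0.6, 2.4, 3.6], [3500.0, 4000.0, 5000.0]))
# -> [1050.0, 4650.0, 7650.0] metres below datum
```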
Seismic data quality
Despite the thick basaltic layer entirely covering the Golan Plateau, the final depth sections show surprisingly good seismic data quality, with reflections from 7-8 km below the datum (Figure 3) in the southern and central parts of the study area. There is a considerable deterioration of the seismic quality towards the Northern Golan.
Seismic interpretation
Eleven seismic markers were identified and mapped in the subsurface of the GH (Figure 3). Since the borehole information in the GH area is restricted to the upper 1,400 meters, direct correlation between the seismic data and the borehole stratigraphic information is limited to the upper two horizons only: the Base-of-Basalt (H1) and the Near Top Turonian (H2). Stratigraphic identification of the deeper seismic horizons became possible because the seismic data were pre-stack depth migrated and the entire interpretation procedure took place in the geological (i.e. depth) domain. This made it possible to directly correlate the prominent seismic markers with the exposures of the Mesozoic section outcropping on the adjacent Mt. Hermon Anticline and to compare the intervals between the horizons with the thickness information derived from the deep boreholes of Northern Jordan. Hence, the stratigraphic ascription of the LC-3 horizon (H3, an electric log marker within the Lower Cretaceous) and the Near Top Jurassic horizon (H4) relies mainly on the correlation of the seismic data with the exposures of the Lower Cretaceous and Jurassic strata outcropping on the Mt. Hermon Structure. Identification of the Near Top Triassic (H5) and the three Paleozoic - Infracambrian horizons (H6 - H8) is based on the concept that the thickness of the principal stratigraphic units in the Southern Golan should be comparable to the thickness reported in the Jordanian Highlands, across the Yarmouk River, where it is controlled by a series of deep oil-exploration boreholes. Three additional reflections with limited spatial distribution were identified in different parts of the study area; they were designated as: within the Tertiary (H1b), within the Early Jurassic (H4b) and the Near Top Precambrian basement (H9). The lowest four horizons (H6 - H9) are within the scope of the current study. A detailed description of various seismic processing and interpretation aspects implemented during the study was presented by Meiler et al., 2011.
Results
Lithostratigraphic identification
Near top basement (Horizon 9)
The deepest reflection recognizable on the depth sections was tentatively assigned as the Near Top Precambrian basement (Horizon 9). The horizon was identified on several profiles, mostly in the eastern parts of the GH. It is generally absent in the western and northern parts (Figure 3), although patches of it can be scarcely observed on some lines in these areas. Horizon 9 is stratigraphically identified relying on the assumption that a smooth and gradual transition of the basement is expected between the Jordanian Highlands and the Southern Golan in the Yarmouk River area. The base of the sedimentary cover was penetrated by the AJ-1 borehole (Figures 1c & 2; Table 1), 50 km south of the study area, reaching the basement at a depth of nearly 3,800 meters beneath the surface. The boreholes closest to the study area drilled in Northern Jordan and SW Syria, i.e. ER-1A, NH-2 and BU-1 (Figure 1c), did not penetrate below the upper Paleozoic. However, the NH-1 well, located about 70 km south-east of the GH, penetrated ~1,000 m of the Saramuj and an unassigned clastic unit, which overlie the basement. Thus, it is assumed that on the southernmost profiles of the GH the basement should be found about 1 km below the Near Top Saramuj horizon (H8), corresponding to the thickness of the Saramuj clastics penetrated in NH-1 (Figures 3 & 4).
Saramuj formation and the unassigned clastic unit (Horizon 8)
Horizon 8 is interpreted as the near top of the Late Precambrian - Early Cambrian sedimentary succession, known as the Saramuj Formation and the unassigned clastic unit (Figure 5). The sequence is known in the Arabian Platform region as the oldest non-metamorphosed sedimentary sequence, consisting of polymict conglomerate and poorly sorted coarse- to fine-grained arkose, accompanied by magmatic intrusions and extrusions (Weissbrod, 2005).
Near top Burj formation (Horizon 7) and the near top Paleozoic (Horizon 6)
In four out of the seven deep boreholes of Northern Jordan and the Syrian Busra-1, the Paleozoic succession is topped by Permian strata, usually limited to several hundred meters in thickness (Table 1). Therefore, it seems reasonable to assume that in the Southern GH some few tens to several hundred meters of Permian section rest at the top of the Paleozoic succession, and Horizon 6 may roughly represent the Near Top Permian. The thickness of the Permian in the subsurface is expected to increase towards the north, as up to 600-700 m of Permian deposits were reported within the Palmyra Trough (Leonov, 2000). The Middle Cambrian Burj Formation is recognized in Syria as a prominent regional seismic interface, designated as the "D-reflector" (McBride et al., 1990). Therefore, Horizon 7 was lithostratigraphically assigned as the Near Top Burj Formation.
Structural and isopach maps
Several structural and isopach maps were compiled in order to outline the geological evolution of the GH during the Late Proterozoic - Paleozoic time span. Figure 6 presents the structural map of the Near Top Basement Horizon (H9) and the isopach map compiled for the entire Infracambrian - Phanerozoic sedimentary cover. Due to its limited appearance on the seismic sections, Horizon 9 was only partly interpreted in the subsurface of the GH area, and therefore the structural and isopach maps presented in figure 6 are restricted to the eastern and central parts of the study area. Nevertheless, the general structure and the architecture of the crystalline basement can be inferred from the maps. The depth to the Near Top Basement Horizon, given its restricted seismic appearance and the uncertainty with respect to its stratigraphic correlation, ranges in the GH area between 5,700 and 7,700 m beneath sea level (Figure 6a), or between 6,150 and 8,500 m beneath the surface topography (Figure 6b). The depth to the top of the crystalline basement in the Southern Golan is estimated to be 6 - 6.5 km. The depth to the base of the sedimentary cover increases towards the Northern Golan and the Hermon Structure, where the sedimentary succession is outlined by its thickened Mesozoic sequence (Figure 2). The thickness of the Infracambrian interval (i.e. the Saramuj Formation and the unassigned clastic units of the Upper Proterozoic) in the study area ranges from several hundred to 1,500 meters (Figure 7).
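Conceptually, each isopach map discussed here is the vertical difference between two interpreted depth horizons evaluated on a common grid. The minimal numpy sketch below illustrates that operation on small invented grids; the actual maps were, of course, computed from the interpreted 3-D depth horizons rather than from these toy arrays.

```python
import numpy as np

def isopach(depth_upper_horizon, depth_lower_horizon):
    """Thickness between two horizons given as depth-below-datum grids (metres).
    Negative values (crossing horizons) are clipped to zero."""
    thickness = np.asarray(depth_lower_horizon) - np.asarray(depth_upper_horizon)
    return np.clip(thickness, 0.0, None)

# Toy 2x2 grids standing in for, e.g., the H8 (upper) and H9 (lower) surfaces:
h8 = np.array([[5200.0, 5400.0], [5600.0, 5900.0]])
h9 = np.array([[6300.0, 6500.0], [6900.0, 7300.0]])
print(isopach(h8, h9))  # Infracambrian thickness grid in metres
```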
The structural map of Horizon 6 is presented in figure 8. The map displays the contemporary configuration of the Near Top Paleozoic (Permian?). The structural setting is dominated by a notable westward dip, from −3.5 km in the east to −7 km in the west, in the proximity of the DSFS.
Overall, based on the information derived from the deep boreholes of Northern Jordan and the Syrian Busra-1, it is reasonable to assume that the seismic interval interpreted in the GH between the Near Top Paleozoic (Horizon 6) and the Middle Cambrian Burj Formation (Horizon 7) incorporates a few tens to several hundred meters of Permian, overlying an additional several hundred meters of Ordovician to Middle - Upper Cambrian strata. The thickness of this interval varies between 500 and 1,100 m for most of the GH area, locally attaining 1,300 m (Figure 9). The cumulative thickness of the Paleozoic succession is presented in an isopach map calculated for the seismic interval H6 - H8 (Figure 10). The interpreted thickness of this interval ranges from 1,100 to 2,250 m.
Near top basement (Horizon 9)
The depth to the base of the crystalline basement in the study area ranges from ~6 km in the Southern GH to ~8.5 km in the Northern Golan (beneath the surface topography). The basement dips northwards towards the Hermon Structure (Figures 2 & 3). At the foot of Mt. Hermon the thickness of the sedimentary cover is not known, but it is assumed to exceed the 8,500 m calculated between the Qela and El-Rom areas (Figure 6), the northernmost area where the horizon is traceable and could be interpreted. In the south-western part of the Palmyride fold belt the depth to the basement was estimated at 11 km within the Palmyra Trough (Seber et al., 1993). East of the GH, outside of the Palmyrides, the depth to basement was estimated at 8 - 10 km (Rybakov and Segev, 2004). To the west of the GH, across the DSFS in the Galilee region, the thickness of the sedimentary cover attains its typical values of 6 - 8 km (Ginzburg and Folkman, 1981). Thus, considering the northward dip of Horizon 9, it is suggested that the basement continues to deepen in the Northern Golan and Mt. Hermon areas, whilst its depth beneath the Hermon Structure may attain 10 - 11 km, as was estimated by Seber et al. (1993) in the south-western parts of the Syrian Palmyrides. In the south-eastern part of the Golan the basement morphology is outlined by a significant structural uplift, referred to here as the Pezura Structure (Figures 4, 6 & 8). It rises several hundred meters above its surroundings, and its structural influence can be traced upwards within the Paleozoic, Mesozoic and also Cenozoic sedimentary units. Reconstruction of the seismic data to the Mid-Cambrian level (Horizon 7) indicates that this structure existed as a local high already in the Late Proterozoic - Early Cambrian (see Figure 11).
Saramuj formation and the unassigned clastic unit (Horizon 8)
The Late Precambrian - Early Cambrian sedimentary succession in the Arabian Platform comprises the oldest non-metamorphosed sedimentary sequence in the region, consisting of polymict conglomerate and poorly sorted coarse- to fine-grained arkose, accompanied by magmatic intrusions and extrusions (Weissbrod, 2005). The term "Infracambrian" describes this non-metamorphosed, mostly clastic sequence (Wolfart, 1967; Horowitz, 2001). Horizon 8 is interpreted as the near top of this sedimentary sequence in the subsurface of the Golan Heights.
In the Negev area of Southern Israel, a large Infracambrian sedimentary depression was reported to overlie the Precambrian basement (Weissbrod, 1980). It comprises a part of a broad marginal basin known as the Arabian-Mesopotamian Basin, which extends from the Arabo-Nubian Shield across Arabia, the Levant and Mesopotamia to the edge of the Arabian Plate along the Bitlis Suture (Weissbrod and Sneh, 2002). The basin was filled with several kilometers of immature clastics and volcanics, defined in Southern Israel as the Zenifim and Elat Conglomerate Formations. The Saramuj Formation and the unassigned clastic unit of Northern Jordan (Figure 5) both consist of clastic sediments, mainly coarse conglomerate and arkosic sandstones, as well as some volcanic components (Andrews, 1991). These units are considered both time and lithological equivalents of the Infracambrian Elat Conglomerate and the Zenifim sandstones reported from Southern Israel (Garfunkel, 2002; Hirsch and Flexer, 2005; Weissbrod, 2005). The overlying Salib Formation is very similar in composition and corresponds to the Lower Cambrian Amudei Shelomo and Timna Formations (Southern Israel), composed of predominantly clastic units. Horizon 8 is hypothesized to represent the near top of this Infracambrian sequence, which is characterized by the immature clastics of the Saramuj conglomerate followed by the Salib arkosic sandstones, similar to their southern contemporaries known as the Zenifim Formation and the Lower Cambrian Amudei Shelomo and Timna Formations; all of these units comprise a part of the above-mentioned Arabian-Mesopotamian Basin. The thickness of the Infracambrian interval (i.e. the Saramuj Formation and the unassigned clastic units of the Upper Proterozoic) in the study area ranges from several hundred to 1,500 meters (Figure 7). These Infracambrian units unconformably overlie the Near Top Basement Horizon (H9), filling the locally fault-bounded blocks (Figures 4 & 11a). These interpreted figures for the Infracambrian succession in the GH are comparable to the ~2,500 m of Zenifim Formation estimated by Weissbrod and Sneh (2002) to overlie the basement on the regional scale. The Infracambrian sedimentary section recognized in Jordan and Saudi Arabia is considered a syn-rifting succession accumulated during the extensional phase of the Late Proterozoic - Early Cambrian time span (Abed, 1985; Husseini, 1989; Best et al., 1990). The period was dominated by intra-continental rifting and wrenching (Husseini and Husseini, 1990), resulting in a series of asymmetric half-grabens with occasionally rotated basement blocks and immature syn-rift clastic deposition (Andrews, 1991; Figures 5 & 11a). The thick Infracambrian section (Figure 7) which fills the underlying faulted blocks observable in the subsurface of the GH (Figure 11) is in agreement with the idea of a possible pre-Cambrian or Early Paleozoic rifting episode that took place in North-Western Gondwanian Arabia, as suggested by the above-mentioned authors.
Pezura structure
A complex basin-and-swell configuration was proposed to prevail throughout the northern parts of Gondwanaland during the Paleozoic (Garfunkel, 1998). Several large up-doming elements related to this Paleozoic configuration were reported in the Eastern Mediterranean region: the Hercynian Geoanticline of Helez, centred in the coastal plain of Israel (Gvirtzman and Weissbrod, 1984); the Hazro structure extending across the Turkish-Syrian border (Rigo de Righi and Cortesini, 1964); and the Riyadh swell in central Saudi Arabia (Weissbrod, 2005). The elevated feature interpreted in the south-eastern corner of the GH, referred to here as the Pezura Structure (Figures 4 & 6), may represent one of the uplifted features which constituted a part of this basin-and-swell configuration, albeit on a considerably smaller scale. The uplift, followed by the notable tilting and onlapping sedimentation of younger Paleozoic strata (Figure 11c), can be related to the Hercynian orogenic episode, which is dated in Jordan as a mid-Carboniferous event (Andrews, 1991) and in Israel as a pre-Carboniferous or pre-Permian event (Gvirtzman and Weissbrod, 1984). However, figure 11b shows that the structure preceded the Middle Cambrian Burj Formation (H7) deposits, originating already in the upper Proterozoic and affecting the subsequent Paleozoic sedimentation. This is evidenced by the onlapping stratigraphic relations between H7 and H8. Thus, it seems that the Pezura structure was established as a tectonically active area already in the upper Proterozoic, and it continued to act periodically throughout the Paleozoic, as part of the Hercynian orogenic episode. The location of the presently elevated Pezura structure coincides with a formerly well developed fault-bounded depression (Figure 11a). This overlapping pattern, in which Upper Proterozoic rifting zones became regional uplifts during the Early Paleozoic, characterizes additional regional highs, such as the Rutba swell (Seber et al., 1993).
Fig. 11 (caption; beginning truncated): ... Late Paleozoic (c) stages are presented through flattening the southern section of the DS-3096 profile to the H8, H7 and H6 seismic markers, respectively (horizon legend as in figure 3). a. The reconstruction presents the Infracambrian section overlying the crystalline basement, as it appeared at the end of the deposition of the Saramuj Formation (H8). Note that in the Pezura structure area the Infracambrian Saramuj section fills the faulted blocks of the basement. b. The reconstruction presents the Late Proterozoic - Early Cambrian sections overlying the crystalline basement, as they appeared at the end of the deposition of the Burj Formation (H7). Note the uplifted Pezura structure in the area formerly outlined by a series of down-faulted blocks. c. The reconstruction presents the Late Proterozoic - Late Paleozoic sections overlying the crystalline basement, as they appeared at the end of the Paleozoic (H6). Note the additional faulting in the Pezura area, suggesting alternating tectonic activity throughout the Paleozoic.
Near top Burj formation (Horizon 7) and the near top Paleozoic (Horizon 6)
Since in most of the deep drillings adjacent to the GH the Paleozoic succession is topped by Permian strata, it is assumed here that in the Southern Golan some few tens to several hundred meters of Permian section rest at the top of the Paleozoic succession and that Horizon 6 roughly represents the Near Top Permian. On the regional scale, the Paleozoic sediments are very widespread in the north-eastern part of the Arabo-African continent (Alsharhan and Nairn, 1997; Garfunkel, 2002; Weissbrod, 2005). A large Paleozoic basin was reported in Syria, where more than 5,000 m of Cambrian - Carboniferous section was documented in the subsurface; the total thickness of the Paleozoic section in Syria locally attains 7,000 m (Krasheninnikov, 2005; Leonov, 2000). In Northern Jordan, the thickness of the Paleozoic succession reaches nearly 2,000 m in the NH-1 borehole (Table 1). In Southern and Central Israel the Paleozoic succession is highly reduced and attains a thickness of several hundred meters only (Weissbrod, 1980; Ginzburg and Folkman, 1981). Thus, the thickness of the sedimentary section interpreted in the GH within the seismic interval Horizon 6 - Horizon 8 (Figure 10) appears to be comparable to the thickness of the coeval units reported in the Northern Jordan area. It is worth noting that the eastern regional dip of the Paleozoic strata, well documented throughout the Eastern Mediterranean (Figure 11; Gvirtzman and Weissbrod, 1984; Andrews, 1991), was not observed in the subsurface of the GH.
(Figure caption fragment: ... (Andrews, 1991). The sedimentary succession is outlined by the notable eastward dip of the Paleozoic section, unconformably overlain by Mesozoic units tilted towards the west.)
Moreover, on some profiles (Figures 3, 4 & 8) the horizons attributed to the Paleozoic and Infracambrian sections (i.e. Horizons 6, 7 and 8) clearly show an inclination in the opposite direction, i.e. towards the west, whilst a slight angular unconformity appears between the Mesozoic and the Paleozoic stratigraphic packages. A possible explanation would be the existence of an uplifted structure, like the above-mentioned Pezura Structure, which locally tilted the sedimentary section to the west. However, this western inclination is clearly visible also on the northern profiles, away from the Pezura area; therefore it seems more reasonable to relate the inclination to a regional tectonic tilting which, according to the seismic data, took place during the Late Paleozoic - Early Mesozoic.
On the isopach map of the H6 - H7 seismic interval (Figure 9), values of up to 1,300 m are observed at the eastern edge of the GH, partly overlapping the line of the Pezura Fault Zone (the main fault plane is marked on figure 8; a number of unassigned individual fault segments related to the Pezura Fault Zone were not mapped). This increased H6 - H7 interval corresponds to the line of the volcanic cones covering the Golan Plateau and may suggest that plutonic intrusions occupy the lower parts of the Paleozoic succession. However, no definite seismic indications were observed on the depth sections to support this suggestion.
Summary
A series of structural and isopach maps compiled on the basis of an extensive depth-domain seismic analysis displays the Late Proterozoic - Paleozoic evolution of the GH. The depth to the base of the crystalline basement within the study area ranges from 6 km in the Southern Golan to 8.5 km in the Northern GH (beneath the surface). As the Near Top Basement Horizon dips northward, it may attain 10 - 11 km beneath the Hermon Structure, as was estimated in other parts of the Palmyrides. The deepest sedimentary section interpreted in the subsurface of the GH consists of two primary sequences:
1. The Infracambrian (Late Precambrian - Early Cambrian) Saramuj Formation and unassigned clastic units, which comprise the oldest non-metamorphosed sedimentary sequence in the region.
2. The Paleozoic section, consisting of various units attributed to the Lower Cambrian - Permian time span.
The total estimated thickness of the Infracambrian - Paleozoic succession interpreted in the subsurface of the GH ranges from 1,800 to 3,500 m (Figure 13). About 1,000 - 1,500 m of this figure corresponds to the Infracambrian deposits; its lower part (i.e. the Saramuj Fm.) is interpreted as a syn-tectonic sequence, accumulated within fault-related depressions, such as the Pezura Structure. There is a notable contrast between the Paleozoic and the subsequent Mesozoic thickness distribution patterns within the GH. The thickness map of the Paleozoic (Figure 8) does not show the typical Mesozoic zoning and north-western thickening (Meiler, 2011), but rather is characterized by a mosaic-like and irregular thickness distribution. This supports the findings in Syria, Jordan and Israel, from which it can be concluded that the Paleozoic structure of the northern Arabo-African Platform had very little in common with the structure that persisted during the following periods, which by Early Mesozoic time was already greatly influenced by the establishment of the passive continental margin to the north of the Arabian shores. Overall, it can be concluded that the stratigraphic column and the major sedimentary cycles of the Upper Proterozoic - Paleozoic interpreted in the GH closely resemble the corresponding geologic history of the adjacent Northern Jordan area. In both areas a 3 - 3.5 km thick sedimentary succession of this period is preserved in the subsurface. The Paleozoic succession found in these areas attains more than 2,000 m and differs significantly from the reduced Paleozoic succession exposed in the Southern Israel area, to the west and south of the GH. This configuration changed during the subsequent Mesozoic Era, when the depositional environment of the GH became closely affiliated with the Syrian and Israeli geologic history rather than with that of Northern Jordan. | 2018-08-17T04:36:45.615Z | 2011-12-07T00:00:00.000 | {
"year": 2011,
"sha1": "bf3b68563a9630b8a5dcee61f9d82e52e8499fdf",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/24550",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "6d4b2929186d50d8a85637058d5c00fff7ffda09",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": [
"Geology"
]
} |
256023398 | pes2o/s2orc | v3-fos-license | Changes in the Mechanical Behavior of Electrically Aged Lithium-Ion Pouch Cells: In-Plane and Out-of-Plane Indentation Loads with Varying Testing Velocity and State of Charge
The knowledge about the influence of electrical aging on the behavior of lithium-ion cells under mechanical loads is of high importance to ensure a safe use of batteries over their lifetime in electric vehicles. In order to describe the mechanical behavior in relation to electrical aging, fresh and electrically aged NCM pouch cells were investigated under different mechanical crash loads. For the first time, the aged cells' behavior under quasistatic lateral loading was taken into account. Aged cells showed lower maximum forces compared to the fresh cells. The reason for the changed mechanical cell behavior was explained by the different buckling behavior of fresh and aged cells, illustrated with experimental images. Furthermore, quasistatic and dynamic crash tests in the cell's thickness direction were performed at varying state of charge (SOC) and compared to the results of a previously published study. Independently of the testing velocity, the electrically aged cells failed at increased deformation values. This observation was justified by an increased cell thickness due to an additional softer layer, formed on the aged graphite particle surface, which was observed by means of scanning electron microscopy. Furthermore, the aged cells showed lower failure forces of up to −11% under quasistatic and dynamic loads at 0% SOC. It was also illustrated that electrical aging causes a deeper voltage drop after cell failure, which suggests a higher energy release after the internal short circuit. The investigations show that electrical aging has a significant influence on the mechanical properties of lithium-ion cells and must be taken into account in the safety assessment.
Introduction
Lithium-ion batteries (LIBs) nowadays find a wide variety of applications in different industry sectors, including consumer electronics, public or private transport and stationary energy storage systems. For the production of vehicles with alternative fuels, lithium-ion battery technology is of great interest due to its high energy density and low self-discharge rates [1]. One important aspect which needs to be considered in electric vehicle (EV) applications is the safety of LIBs. Mechanical, electrical or thermal abuse can provoke an internal short circuit (ISC) of the cells within a battery, thus leading to a further safety-critical, uncontrolled energy release [2,3] during the so-called thermal runaway (TR) process.
In order to prevent potential hazards originating from external mechanical loads (e.g., vehicle crash), several studies focused on defining mechanical load limits [4][5][6][7] of LIBs in order to assess LIBs' safety. Factors influencing the mechanical behavior of LIBs, such as the state of charge (SOC) [8][9][10][11], the testing velocity [12][13][14][15][16][17] or the loading direction [18], were also analyzed in order to increase the knowledge of LIB performance under mechanical abuse. However, most of these investigations were conducted on fresh LIBs. It is a well-known fact that LIBs are stressed by a large number of electrical charge and discharge cycles over their whole lifetime, which leads to capacity fade due to electrochemical reactions inside the cell [19]. The resulting degradation mechanisms depend on the operating conditions, like the environmental temperature, the charging rate (C-rate) and the depth of discharge (DoD), as well as on calendar aging [19,20]. These effects influence the behavior of LIBs under mechanical abuse and have to be considered in the safety assessment of LIBs. Several studies have focused on the investigation of electrically aged LIBs' mechanical properties under different conditions [21][22][23][24][25][26][27][28][29].
Kovachev et al. [25] investigated fresh and aged 100% SOC pouch cells, which had been cycled with 1C at 60 °C, under quasistatic indentation. A higher failure force was noticed for aged cells, as well as right-shifted force-displacement curves. This outcome was justified by a thickening of the aged anode layer by different decomposition products. Liu et al. [26,28] conducted mechanical indentation tests on lithium-ion pouch cells electrically cycled at a temperature of 0 °C with 0.8C. Test results of these studies showed similar trends to those observed by Kovachev et al. [25]. Sprenger et al. [27] used cells which were electrically cycled under state-of-the-art EV battery module conditions with a charging-discharging strategy based on high-performance driving profiles. A temperature range between −3 °C and 31 °C was applied. Besides a shifted force-displacement curve, Sprenger et al. reported a lower failure force of up to −29% for aged cells under cylindrical indentation at 0% SOC. The behavior was traced back to the lower mechanical strength of the aged anode and separator [27].
Despite the increasing number of studies focusing on electrically aged pouch cells, further investigations are needed in order to generate extensive insights regarding the influence of aging on battery safety. All of the discussed investigations took into consideration only the mechanical behavior of aged cells in the thickness direction, as it seems to represent the most critical loading direction for separator failure. To the authors' best knowledge, no investigations under in-plane loads have been performed on aged cells, although such loads are of high importance as they describe an underbody load case. This analysis was set as one main focus of the current publication. Secondly, no information was found on whether aging affects the anisotropic behavior of the cell, as discussed by Raffler et al. [18]. For this reason, quasistatic indentation tests over the cell's long side were conducted. A lack of knowledge was also identified regarding the effect of aging on the strain rate dependency of aged cells, which has to be considered especially for battery safety assessment in crash scenarios. Therefore, dynamic mechanical experiments in the thickness direction were carried out, similar to the quasistatic tests from Sprenger et al. [27]. In addition to all mechanical tests, further scanning electron microscopy (SEM) analyses of fresh and aged cells were conducted in order to provide a better understanding of the influence of aging on the observed dependencies.
Investigated Lithium-Ion Cell
The investigations of this study were performed on a 74 Ah large-format automotive lithium-ion pouch cell. A detailed overview of the cell's structure showing the cell's dimensions can be seen in Figure 1. In order to ensure a clearly defined description of the cell's loading directions, a local coordinate system (u, v, w) was defined by Sprenger et al. [27]. As a result, the coordinate u describes the long cell side, v the short cell side and w the cell's thickness direction. The used cell consists of several double-coated layers including graphite anodes, LiNiMnCoO2 (NCM) cathodes and a z-folded polyethylene separator with a ceramic Al2O3 coating [27].
The artificially aged cells were electrically cycled using the same aging strategy as proposed by Sprenger et al. [27]. Aging was done under real EV battery module conditions with several cells serially connected to each other. The used aging strategy was derived from an estimated high-performance customer behavior with an average discharge power of 1.1 kW and an average charging power of 0.6 kW for each cell. Charging/discharging was performed in a range between 10% and 90% SOC. A temperature range between −3 °C and 31 °C was applied [27]. Electrical aging was performed until a residual capacity of 90% was reached. For a more detailed description of the investigated cell and aging strategy, the reader is referred to [27].
Post-Mortem Analysis
Regarding the investigation of the electrical aging's influence on the mechanical properties, the cell layers and their specific aging effects were analyzed in detail. In this work, SEM images from each cell layer were used to determine morphological changes due to aging, as these changes might help to create plausible explanations for a changed behavior on cell level. The investigations were made analogously to the procedure of Sprenger et al. [27]. In order to avoid reactions with oxygen, the cells were first opened in a glove box and the layers were washed in dimethyl carbonate (DMC). In the next step, specimens of 5 mm diameter were taken from spots of the cell layers showing conspicuous aging. A Tescan MIRA3 XMU scanning electron microscope was used to investigate the specimens. Additionally, energy dispersive X-ray (EDX) measurements were conducted to determine changes in the material composition caused by decomposition through electrochemical aging [27].
Experimental Design
Aiming to determine the influence of electrical aging on the mechanical behavior of lithium-ion cells, the tests shown in Table 1 were performed. All conducted experiments were done using a ∅ 30 mm impactor. First, lateral indentation tests were used to determine the difference between fresh and aged cells under in-plane loading (v-direction). For this purpose, the cells were sandwiched between two L-shaped sample holders (S235JR) in order to keep the cell in position without applying any external force on the battery surface (see Figure 2a). To confirm the effects for in-thickness loads (w-direction) reported by Sprenger et al. [27], quasistatic cylindrical indentation tests were additionally performed. As a short-side-oriented impactor (v-direction, Figure 2c) was already used by Sprenger et al. [27], the impactor was rotated by 90° (impactor length along the u-direction, Figure 2b) in the current study, aiming to investigate how the anisotropic cell properties influence the aging's impact. Furthermore, cylindrical indentation tests along the short cell side (v-direction) were performed at v = 3 m/s at two different SOCs (0% and 100%). The measured data were compared to the quasistatic results of Sprenger et al. [27] to determine the influence of testing velocity on the mechanical performance of the investigated cells. In order to analyze the influence of aging (∆aged), testing velocity (∆vel) and SOC (∆soc) under in-thickness loads, three material parameters were defined: cell stiffness S, failure displacement d_fail and failure force F_fail. The cell stiffness S was defined as the slope of the linear region of the force-displacement curve after the compressible behavior of the cell, which is characterized by the parabolic shape at the beginning of the test curve. Failure displacement d_fail and failure force F_fail were defined at the point of ISC occurrence. A detailed description of the testing equipment used can be found in the following sections. Quasistatic loading experiments were conducted on the hydraulic press PRESTO 420, designed to achieve a maximum load of 420 kN with cross-head speeds in the range of 0.05 mm/s to 6.4 mm/s. For cylindrical indentation tests (Figure 2b,c), the test specimens were placed on top of a 1 mm thick isolating Pertinax® plate (Kaindl, HP2061) on the movable lower plate of the press. The upward movement of the lower press plate was realized via four guidance rails, positioned symmetrically in the corners of the testing chamber in order to avoid tilting of the plate during movement. Displacement data were recorded by a high-precision linear glass scale encoder with an accuracy of 1 µm. A cylindrical indenter with a diameter of 30 mm was mounted perpendicular over the long side of the cell as shown in Figure 2b. The battery sample was deformed with a testing velocity of 1 mm/s. Electrical cell failure, characterised by an ISC, was defined as the criterion of test termination. In the case of lateral indentation (Figure 2a), the test termination criterion was set to 25 mm of displacement after contact between cell and test stamp, or earlier if cell failure was detected. Thus, a maximum cell deformation of approximately 40% of the unclamped cell width was achieved. The testing speed for this test case remained unchanged at 1 mm/s with respect to the aforementioned cylindrical indentation tests. The force during quasistatic tests was logged via an NI-9237 bridge input module with a resolution of 24 bits and a sample rate of 50 kS/s using a load cell of the type GTM Serie K 500 kN and a GTM Serie K 20 kN, both of which have an accuracy class of 0.02%. The integrated 24-bit NI-9229 voltage input module (0-60 V per channel) measured the voltage signal of the cells during mechanical testing. The data acquisition frequency for all measured signals was set to 1 kHz [25,27].
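Since the three parameters defined above (cell stiffness S, failure displacement d_fail and failure force F_fail) are all read off a force-displacement curve, they can be extracted programmatically. The following sketch illustrates one possible way to do so; it is not the evaluation code used in the study, and the array names, the force window for the stiffness fit and the failure criterion (force maximum before the ISC-induced drop) are assumptions made for illustration.

```python
# Illustrative sketch (not the study's code): extracting S, d_fail and F_fail
# from a measured force-displacement curve.
import numpy as np

def cell_parameters(displacement_mm, force_kN, fit_window_kN=(150.0, 300.0)):
    """Return (S, d_fail, F_fail) from a force-displacement curve.

    Failure is taken as the global force maximum, i.e. the point where the
    internal short circuit terminates the test; the stiffness S is the slope
    of a linear fit restricted to the given force window."""
    d = np.asarray(displacement_mm, dtype=float)
    F = np.asarray(force_kN, dtype=float)

    # Failure point: force maximum before the drop caused by the ISC.
    i_fail = int(np.argmax(F))
    d_fail, F_fail = d[i_fail], F[i_fail]

    # Stiffness: linear fit of the curve within the chosen force window
    # (e.g. 150 kN - 300 kN, as used for the quasistatic long-side tests).
    lo, hi = fit_window_kN
    mask = (F >= lo) & (F <= hi) & (np.arange(F.size) <= i_fail)
    S = np.polyfit(d[mask], F[mask], 1)[0]  # slope in kN per mm

    return S, d_fail, F_fail
```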
Dynamic Testing Device
The ELLMAR test rig (see Figure 3), especially designed for dynamic battery experiments at arbitrary SOC levels, was used in this work to conduct high-speed dynamic impact tests with a velocity of 3000 mm/s. Here, the cell and the underlying isolating Pertinax® plate (Kaindl, HP2061) were fixed vertically on an 85 kg horizontal, ball-bearing-guided steel sled, which was then accelerated by an electrically driven belt within a 16 m long acceleration section. The above-mentioned sled mass was chosen in order to achieve cell failure at a constant velocity during impact. After reaching the constant target velocity at the end of the acceleration section, the sled decoupled from the belt and hit a 30 mm cylindrical impactor, mounted directly on top of a load cell. The resulting force signal was measured by a KISTLER Z20730 piezoelectric load cell with 500 kN nominal force and a sampling rate of 100 kHz. Sled displacement was recorded by a SIKO MSC500 high-precision magnetic sensor, characterized by a resolution of 1 µm and an accuracy of ±5 µm. Cell voltage was measured during all dynamic tests with a Dewetron DAQP-STG module with a sampling rate of 100 kHz and a voltage input accuracy of ±0.05%.
Results
In this section, the results of the investigated mechanical dependencies of aged cells are presented and compared to fresh cells. First, SEM images and SEM-EDX results of fresh and aged anode samples are analyzed with the goal of explaining any observed changes in the mechanical behaviour. In the next step, the behavior of fresh and aged cells under lateral indentation is discussed in greater detail and the differences are highlighted. In the third subsection, an orientation dependency assessment of the investigated cells is performed, which is based on the evaluation of the conducted cylindrical indentation tests over the cell's long side (u) in the thickness direction (w). Finally, a comparison of the dynamic indentation tests over the short cell side (v) to the quasistatic results is conducted in order to determine a possible influence of aging on the strain rate dependency of the cell's mechanical properties.
Post-Mortem Analysis
The main changes of the investigated cell caused by electrical aging were found for the anode active material in terms of a solid electrolyte interface (SEI) layer that had grown in size. This process resulted in growth of the anode's graphite layer and in an overall thicker cell. Additionally, the appearance of local lithium plating was confirmed with inductively coupled plasma optical emission spectrometry (ICP-OES) measurements in previous investigations [27]. In the current study, further SEM and SEM-EDX measurements of the anode material were performed to specify the observed changes. The results are shown in Figure 4. Investigations of the electrically aged cathode and separator material revealed no significant changes compared to the fresh samples. The SEM images (Figure 4a) of the fresh anode display a very clean structure of the graphite particles without any conspicuous features. SEM-EDX measurements in Figure 4i show that the graphite particles mainly consist of typical components like carbon and oxygen as well as slight amounts of fluorine and phosphorus. In contrast, two conspicuous morphological structures were found on the aged anode surface. Detailed images show several graphite particles of the aged anodes which are surrounded by an additional bright layer (Figure 4b). SEM-EDX measurements of this layer in Figure 4j indicate an increased fluorine and phosphorus content. Thus, the additional layer can be characterized as decomposition products between the electrolyte and the anode's SEI. The mechanism can be explained based on the work of Xiong et al. [20] as follows. During electrical charging, the insertion of lithium into the anode particle leads to internal mechanical stresses. Cracks in the SEI layer occur, which cause direct contact and side reactions between the electrolyte and the graphite material. As a result, an additional layer grows over the cell's lifetime through the accumulation of this process. This process also results in electrolyte consumption and capacity fade [20].
The second conspicuous area of the aged anode can be seen in Figure 4c. Here, various crystal-like structures can be recognized which, according to the SEM-EDX measurements shown in Figure 4k, show increased proportions of oxygen. The fact that only an increased oxygen content is present suggests possible lithium plating effects. However, this can only be assumed to be an aging mechanism, since lithium cannot be detected by means of SEM-EDX. Nevertheless, it must be taken into account that lithium plating effects have already been detected on the aged anode material of the investigated cell. Subsequently, SEM cross-section images (Figure 4d,h) were generated to investigate the impact of these aging effects on the anode's thickness. The results reveal a thickness increase of 12 µm of the investigated aged anode compared to the fresh specimen. Taking into account the number of anode layers and assuming the same growth per layer, a total increase in cell thickness of 0.5 mm results.
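The step from the measured 12 µm growth of a single aged anode to the quoted 0.5 mm increase on cell level is simple bookkeeping under the stated equal-growth assumption; the short sketch below reproduces that arithmetic (the implied layer count is inferred from the two quoted numbers, not taken from the cell specification).

```python
# Hypothetical consistency check: how many anode layers are implied by the quoted numbers?
growth_per_anode_um = 12.0      # measured on one aged anode cross section
total_growth_mm = 0.5           # quoted thickness increase on cell level

implied_anode_layers = total_growth_mm * 1000.0 / growth_per_anode_um  # ~42 layers
print(round(implied_anode_layers))
```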
Sprenger et al. [27] reported a lower mechanical strength of the aged anode under tension, which was also mentioned as one possible reason for a lower mechanical strength of aged cells under cylindrical indentation. Thus, in this study the aged copper was investigated by SEM-EDX in order to analyze the occurrence of corrosion effects due to contact with the electrolyte. A few small spots indicating a morphological change of the copper material were found (Figure 4e-g). These areas can be identified as pitting corrosion of the copper, which can be justified by the specific cavity appearance and the occurrence of several elements besides copper. SEM-EDX measurements of the holes reveal depositions of carbon, oxygen and fluorine (Figure 4l). Taking into account the results of Dai et al. [30], a reaction of the copper current collector with the electrolyte is the most probable reason for the formation of these pitting holes.
The observed corrosion effects may have led to a lower adhesion of the graphite-copper interface and a decrease in the tensile strength of the whole anode due to material loss. However, it must be taken into account that these are minor corrosion effects and the degree of influence on the mechanical strength cannot be determined with certainty.
Lateral Indentation
Lateral indentation (Figure 2a) test results are presented in Figure 5. Due to the high variation in the behavior of each cell during the individual test repetitions, all force-displacement curves are illustrated in the diagram. With the selected test setup, the cell was held in place at the bottom with two L-shaped cell holders. The cell top remained unconstrained, which allowed cell buckling to occur independently of the cell condition. For this reason, no short circuit was detected during the experiment either, since the cell was not restricted in its position within the deformation zone. As seen for the conducted tests in the thickness direction [27], a highly reduced force level can be noticed for electrically aged cells in this abuse case. This effect can be explained by the different buckling patterns of fresh and aged cells, originating from the changes to the layer structure of the anode active material. For fresh cells, a relatively large buckled area occurs in the range between 0 mm and 5 mm cell deformation, with a buckling point near the L-shaped cell holders. It is possible that fanning out of the individual cell components led to a reduced buckling strength of the laminate. With further intrusion of up to 25 mm, the buckled area is compressed in the lateral direction while the buckling point remains stationary. In contrast, the origin of the buckling point for aged cells was at a position near the test stamp, which led to a smaller buckling area on the cell surface at deformations below 5 mm. A fold-over of the aged cells was seen, the start of which was identified as the force peak seen in Figure 5a at deformations up to 5 mm. The fold-over process subsequently led to continuous deformation of the cell layers in the thickness direction with increasing intrusion up to 25 mm, where a movement of the buckling point in the lateral cell direction was observed. A lower slope of the force increase was also detected for aged cells when compared to fresh cells in the deformation range starting after the fold-over up to the set maximum deformation. The resulting slope change in the force-displacement characteristic of aged cells between 5 mm and 25 mm originates from the growth of the anode thickness, which increases the mechanical preloading within the cell. The growth of an additional layer on the anode was confirmed by the SEM-EDX measurements in Section 3.1. Although the anode swelling can to a certain extent be compensated by the flexibility of the pouch material, an overall increase in cell thickness is observed. A thicker cell with increased internal preload would in this case increase the resistance to buckling.
Cylindrical Indentation Long Side
Figure 6 shows the difference between the mechanical response of fresh and electrically cycled cells when subjected to cylindrical indentation along the long cell side (Figure 2b). The fresh and aged curves represent the average force-displacement characteristic, calculated from the mechanical response of all three conducted test repetitions. Additionally, error bars are used to visualize the low variation between the individual tests. The results illustrate a significant influence of electrical aging on the displacement at failure. A right shift of the force-displacement characteristic of approximately 0.7 mm was observed for these cells. Thus, a higher failure displacement d_fail of up to +17% was seen compared to fresh cells. In addition, a decreased failure force F_fail of −11% was measured in the current test case. Similar findings were confirmed by previous investigations published by Sprenger et al. [27], where a −29% decrease in maximum force was explained by the lower mechanical strength of the investigated cell's aged separator. The increase in failure deformation d_fail was explained by the thickening of the SEI layer which had formed on the surface of the aged anodes. In this case, a thicker anode active material implies an overall cell thickness increase, which shifts the force-displacement curve to the right. The SEM investigations in Section 3.1 confirm this theory, since the SEI change was the main aging mechanism responsible for the aged anode's thickness increase of approximately 0.5 mm when considering each anode layer of the entire cell. It has to be considered that the simplified calculation of the anode's thickness increase in Section 3.1 assumed the same thickness change for each anode layer in the cell, which may explain the difference of 0.2 mm compared to the shift in failure displacement d_fail. The stiffness S of the aged cells, defined as the slope of the linear region of the force-displacement curves in the range between 150 kN and 300 kN, decreased by about −35% after cell cycling. The reason for this effect is the formation of a more elastic SEI layer at the outer section [31], which, when initially compressed, softens the overall mechanical response of the cell. An important issue with high priority in the safety assessment of electrically aged cells is the electrical voltage drop, indicating the occurrence of an ISC inside the cell. In the case of cylindrical indentation over the long cell side, a simultaneous force and voltage drop could be identified, as often seen in the literature [27,32,33]. In this study, a deeper voltage drop of approximately 50% was observed for aged cells compared to fresh cells, which can also be explained by the loss of mechanical strength of the investigated cell's separator observed by Sprenger et al. [27]. Due to the separator's lower mechanical strength, a higher number of separator layers may have failed, which led to the deeper voltage drop. Similar effects for aged cells were also reported in several publications [26][27][28].
In addition to the decrease in maximum force F_fail and the increase in critical displacement at failure d_fail, an anisotropic behavior of the cells was seen during cylindrical indentation, regardless of the cell aging status. This effect can be traced back to the combined anisotropic effect of the separator and anode, which also showed a direction dependency during tensile testing. The observed changes to the tensile properties of the mentioned aged components are stated as the reason for the observed changes in the behavior of aged cells indented in the u- and v-direction.
Cylindrical Indentation Short Side
In order to analyze the strain-rate dependency of the investigated fresh and aged lithium-ion cells, dynamic indentation tests along the short cell side in the thickness direction (Figure 2c) were conducted with an impact velocity of 3000 mm/s and the results were compared to previously conducted quasistatic tests (∆vel), outlined in the work of Sprenger et al. [27]. Additionally, the difference between 100% SOC and 0% SOC (∆soc) and the difference between fresh and aged cells (∆aged) were investigated for both the quasistatic and dynamic load case.
In the work previously conducted by Sprenger et al. [27], a reduction of −29% of the maximum achieved force F_fail was observed for the investigated aged cells compared to fresh cells when tested at 0% SOC under quasistatic load (Figure 7a). In addition, a right-shifted force-displacement characteristic, as seen for cylindrical indentation along the long cell side (see Section 3.3), was observed. In the case of 100% SOC, a small decrease of F_fail of about −2% between fresh and aged cells could be determined. An increase in testing velocity (∆vel) resulted in an overall different curve characteristic (Figure 7b) when compared to the quasistatic load case. Four force peaks could be distinguished until cell failure for both fresh and aged cells, which were only noticeable under dynamic loading conditions. The first two peaks ("1" and "2") are represented by a force plateau, which can be associated with the influence of the electrolyte on the mechanical behaviour of the cells.
During highly dynamic load cases, the internal cell layers, which are soaked in electrolyte solution, are compressed and the electrolyte is pressed out of the material pores at high velocity, which results in the observed force plateaus. A second important difference to quasistatic loads is the increased cell stiffness S of up to +57% for fresh and +75% for aged cells. This effect can be attributed to the solid-fluid interaction between the electrolyte and the active material coatings at different loading velocities, as seen from studies comparing the mechanical behavior of dry and wet cells tested at different speeds [17,18]. During dynamic tests, the electrolyte seems to flow in the porous coating material, causing additional viscous forces [28]. This means that as the loading velocity increases, the flow of electrolyte is accelerated, which in turn results in higher material stiffness. Regardless of the aging state or SOC, two additional force plateaus ("3" and "4") can be noticed in the range between 80 kN and 120 kN under dynamic loading. A similar curve characteristic was reported for the anode material under compression [23,27]. In the mentioned study, the force plateau was clearly evoked by the graphite material's failure, which was followed by a further force increase. As no ISC was detected at this point for the dynamically tested cells in this study, local damage of the graphite under the impactor could explain the third force plateau. For all cell conditions, the last force peak ("4") indicated the point of electromechanical cell failure, which coincided with the voltage drop detected for the tested batteries. At this point, cell failure was observed at lower peak forces of up to −26% compared to the quasistatic tests, independent of the aging state and SOC. Furthermore, a decrease of the failure displacement d_fail between −14% and −17% was noticed when increasing the testing velocity.
The influence of aging on the mechanical behavior of cells during dynamic loading was clearly visible. While the shape of the force-displacement curves remained similar after aging, an observed difference was the decrease in size of the observed plateaus, which can be explained by the reduction of electrolyte content after aging. The curves of aged cells at 0% SOC showed a shift to the right (d_fail +7%), which is caused by the thickness increase of the cell due to anode growth, and a lower cell stiffness S of up to −8% compared to fresh cells. For charged cells, the change in failure displacement d_fail after aging was even more significant (+60%). The reason for this was the cell's additional thickness increase of approximately 1.78 mm when fully charged [27]. As a result, under quasistatic loading, higher failure displacements d_fail of up to +32% were noticed. For electrically aged cells at 0% SOC, a −6% decrease of the failure force F_fail at dynamic testing velocity compared to fresh cells could be observed. In the fully charged state, the failure force F_fail for aged cells was approximately +6% higher than for fresh cells. Independently of the SOC, a deeper voltage drop compared to fresh cells could be noticed. This behavior may result in a higher energy release after ISC, which can, e.g., evoke higher temperatures after an internal short circuit [26][27][28].
Finally, it can be concluded that, independently of the SOC or the testing velocity, electrical aging led to a decreased cell stiffness S and a higher failure displacement d_fail. A lower failure force F_fail for electrically aged cells was mainly seen at 0% SOC. Compared to fresh cells, aged cells showed a higher dependency on the SOC. In particular, the changes of d_fail and F_fail for aged cells were strongly influenced by the SOC. In contrast, for fresh cells a higher SOC only evoked a small increase of d_fail of up to +8% and an increased failure force F_fail of up to +5%. A summary of the parameter analysis for all conducted load case scenarios, SOC and SOH levels from both studies can be seen in Table 2.
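The entries of Table 2 are relative changes of the three parameters with respect to a reference condition (e.g., ∆aged compares an aged cell to a fresh one at otherwise identical SOC and velocity). A minimal sketch of how such percentages can be formed is given below; the numerical values are placeholders, not measured data from this study.

```python
# Sketch of forming the relative changes summarized in Table 2; values are placeholders.
def relative_change(value, reference):
    """Relative change in percent, e.g. Delta_aged = (aged - fresh) / fresh * 100."""
    return 100.0 * (value - reference) / reference

fresh = {"S_kN_per_mm": 100.0, "d_fail_mm": 3.0, "F_fail_kN": 120.0}   # placeholder
aged  = {"S_kN_per_mm":  90.0, "d_fail_mm": 3.5, "F_fail_kN": 110.0}   # placeholder

delta_aged = {key: relative_change(aged[key], fresh[key]) for key in fresh}
print(delta_aged)  # e.g. {'S_kN_per_mm': -10.0, 'd_fail_mm': 16.7, 'F_fail_kN': -8.3}
```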
Summary and Conclusions
In this study, mechanical tests of fresh and electrically aged pouch cells were conducted to investigate the influence of electrical aging on lithium-ion cells' mechanical behavior under typical crash load cases. For the first time, the impact of electrical aging under lateral cell loads was investigated. Besides electrical aging, the influence of the SOC and the testing velocity was also taken into account for loads applied to the investigated lithium-ion cells in the thickness direction. Furthermore, SEM-EDX analyses were used to explain the observed changes to the mechanical behavior of aged cells in the current and previous investigations.
It was shown by cylindrical indentation tests that electrical aging has a significant impact on the cell's mechanical properties in the thickness direction. Compared to fresh cells, a right-shifted force-displacement curve, which led to a higher failure deformation of up to +17%, was observed for aged cells at 0% SOC. SEM analyses revealed that the main mechanism responsible for this change is the aged cell's thickness increase due to the growth of the SEI on the anode active material. The observed shift of the force-displacement characteristic was clearly increased for fully charged cells, which correlated with the thickness change of the investigated cell between 0% and 100% SOC. Furthermore, it was shown that electrically aged cells at 0% SOC have a lower mechanical failure force in a range of −11% to −29% under quasistatic cylindrical indentation, dependent on the impactor orientation. Dynamic cylindrical indentation tests revealed a special curve characteristic with four conspicuous force peaks. Peaks one and two were identified as effects evoked by the compression of the electrolyte between the cell layers. These force peaks were slightly decreased for aged cells due to the electrolyte consumption. The third peak of the dynamic curve was attributed to the internal failure of the anodes' graphite layer. Finally, the fourth peak indicated cell failure by a simultaneous voltage drop. For aged cells at 0% SOC, a lower failure force of −6% was noticed. In contrast, aged cells showed a higher failure force of +6% at 100% SOC compared to fresh cells. Increasing the testing velocity evoked a higher cell stiffness and lower failure forces in every test case. It was additionally shown that, regardless of the test speed, aged cells show a steeper and deeper voltage drop after failure, which can lead to a faster and higher energy release.
Under lateral loading, neither fresh nor electrically aged cells showed an internal short circuit up to 25 mm deformation. Aged cells exhibited greatly reduced force values, which could be explained by their different buckling behavior, as observed in the image recordings of the tests. Fresh cells showed buckling close to the used specimen holder, whereas aged cells buckled closer to the impactor. For aged cells, a force drop could be identified at the beginning of the deformation, which led to a folding of the upper part of the cell. Subsequently, the cell layers were deformed in the thickness direction, resulting in reduced force values.
The most important findings of this study can be summarized as follows:
1. The investigated aged pouch cells show a right-shifted force-displacement curve, a lower stiffness and deeper voltage drops under mechanical indentation in the thickness direction compared to fresh cells. Furthermore, lower failure forces of aged cells can be noticed at 0% SOC.
2. The right-shifted force-displacement curve in the thickness direction of the cell is caused by aging effects such as the growth of the SEI layer as well as lithium plating effects. The thickness increase of the anode examined in the SEM measurements correlates with the shift of the force-displacement curve at 0% SOC. This demonstrates electrolyte consumption within the cell, which has a direct influence on the size of the plateaus in the dynamic test.
3. Under lateral loading, the fresh cell shows lower buckling stability. The most probable reason for this is the lower mechanical stability of the laminated composite. Aged cells are subjected to higher mechanical pressure in the thickness direction, which compresses the cell interior.
4. Regardless of the aging condition, the cell under study shows two significant force drops under dynamic cylindrical indentation, with the former presumably involving failure of only the anode graphite layer and the latter involving failure of the separator.
5. The influence of state of charge is similar under quasistatic and dynamic loading. Aged cells show a higher dependence on the state of charge.
6. The tensile strength of the aged anode may be reduced by aging effects such as pitting corrosion, as these effects can lead to reduced adhesion between the current collector and the active material.
For future investigations, the authors recommend a complete restraint of the cell in order to also investigate lateral loading under more realistic cell module conditions, where neighboring cells can restrict buckling of the loaded cell. In such a setup, the occurrence of a short circuit could also be observed.
Figure 1 .
Figure 1. Structure and cell dimensions of the investigated lithium-ion pouch cell referring to Sprenger et al. [27]: (a) shows the cell's length and width; (b) shows the cell's thickness and layer structure.
Figure 2 .
Figure 2. Visualization of the used experimental setup for mechanical cell testing: (a) lateral indentation test setup with test bench, L-shaped sample holders and the ∅ 30 mm cylindrical impactor; (b) cylindrical indentation test along the long cell side (u); (c) cylindrical indentation test setup along the short cell side (v) [27].
Figure 3 .
Figure 3. Illustration of the ELLMAR test rig with movable sled used for dynamic cell tests at 3000 mm/s in this study.
Figure 4 .
Figure 4. Analyzing the electrical aging mechanisms at the anode of the investigated lithium-ion cell with SEM-EDX: (a) surface image of the fresh anode; (b) first conspicuous detail of the aged anode showing decomposition products; (c) second conspicuous detail of the aged anode indicating lithium plating; (d) cross-section image and thickness measurement of the fresh anode; (e-g) surface images of the aged copper current collector with pitting corrosion effects; (h) cross-section image and thickness measurement of the aged anode; (i) SEM-EDX analysis of the fresh graphite anode; (j,k) SEM-EDX analysis of the first and second conspicuous spots of the aged anode; (l) SEM-EDX analysis of the aged copper current collector.
Figure 5 .
Figure 5. Comparison of the lateral indentation test results between fresh and aged lithium-ion cells: (a) measured results of force and electrical voltage; (b) buckling behavior of fresh and aged cells at 5 mm and 25 mm displacement.
Figure 6 .
Figure 6. Comparison of the cylindrical indentation test results along the long cell side for fresh and electrically aged lithium-ion cells at 0% SOC.
Figure 7 .
Figure 7. Comparison of the cylindrical indentation test results along the short cell side for fresh and electrically aged lithium-ion cells: (a) quasistatic results at v = 1 mm/s by Sprenger et al. [27] and (b) dynamic results at v = 3000 mm/s.
Table 1 .
Quasistatic (QS) and dynamic (DYN) mechanical tests for the characterization of the investigated lithium-ion pouch cell.
Table 2 .
Change of cell stiffness S, failure displacement d_fail and failure force F_fail and how these parameters are influenced by increasing (a) the testing velocity (∆vel), (b) the state of charge (∆soc) or (c) by using an electrically aged cell compared to a fresh cell (∆aged).
by BMK, BMDW, the Province of Upper Austria, the province of Styria as well as SFG.The COMET Program is managed by FFG.Institutional Review Board Statement: Not applicable. | 2023-01-20T16:04:16.790Z | 2023-01-17T00:00:00.000 | {
"year": 2023,
"sha1": "53346867793baeb46198721c1f81d2555be0852a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2313-0105/9/2/67/pdf?version=1675066866",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "e06491eccf0618210eef9ba869e34b05ec8877a6",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": []
} |
220611606 | pes2o/s2orc | v3-fos-license | Determination of interatomic coupling between two-dimensional crystals using angle-resolved photoemission spectroscopy
Lack of directional bonding between two-dimensional crystals like graphene or monolayer transition metal dichalcogenides provides unusual freedom in the selection of components for vertical van der Waals heterostructures. However, even for identical layers, their stacking, in particular the relative angle between their crystallographic directions, modifies properties of the structure. We demonstrate that the interatomic coupling between two two-dimensional crystals can be determined from angle-resolved photoemission spectra of a trilayer structure with one aligned and one twisted interface. Each of the interfaces provides complementary information and together they enable self-consistent determination of the coupling. We parametrise interatomic coupling for carbon atoms by studying twisted trilayer graphene and show that the result can be applied to structures with different twists and number of layers. Our approach demonstrates how to extract fundamental information about interlayer coupling in a stack of two-dimensional crystals and can be applied to many other van der Waals interfaces.
Following the isolation of graphene (a layer of carbon atoms arranged in regular hexagons) in 2004 1 , many other atomically thin two-dimensional crystals have been produced and can be stacked in a desired order on top of each other. In contrast to conventional heterostructures, in which chemical bonding at interfaces between two materials modifies their properties and requires lattice matching for stability, stacks of two-dimensional crystals are held together by weak forces without directional bonding. As a result, any two of these materials can be placed on top of each other, providing extraordinary design flexibility [2][3][4] . Moreover, subtle changes in atomic stacking, especially the angle between the crystallographic axes of two adjacent layers, can have a big impact on the properties of the whole heterostructure, with examples including the observation of Hofstadter's butterfly 5,6 and interfacial polarons 7 in graphene/hexagonal boron nitride heterostructures, interlayer excitons in transition metal dichalcogenide bilayers 8,9 , the appearance of superconductivity in magic-angle twisted bilayer graphene 10,11 and explicit twist-dependence of transport measurements in rotatable heterostructures [12][13][14] . Phenomena like these arise because the misalignment of two crystals changes the atomic registry at the interface and hence tunes the spatial modulation of the interlayer interaction. Consequently, understanding the coupling between two two-dimensional materials at a microscopic level is crucial for the efficient design of van der Waals heterostructures.
The impacts of a twisted interface and modulated interlayer coupling on the electronic properties of two-dimensional crystals include band hybridisation [15][16][17] , band replicas and minigaps due to scattering on the moiré potential 15,18,19 , charge transfer and vertical shifting of bands 17,20,21 as well as changes of the effective masses 17,20 . Variations in the interlayer coupling as a function of the twist angle, θ, were probed for example using photoluminescence, Raman and angle-resolved photoemission (ARPES) spectroscopies 20,[22][23][24] . Here, we use the last of those methods to image directly the electronic bands in trilayer graphene with one perfect and one twisted interface. From our data, we extract the interatomic coupling, t(r, z), describing coupling between two carbon atoms separated by a vector r_3D = (r, z) = (x, y, z). Such coupling functions, usually based on comparisons to ab initio calculations, can be used to determine electron hoppings in tight-binding 25,26 and continuum 27,28 models of corresponding van der Waals interfaces at any twist angle. We show that t(r, z) determined purely by measurements on one of the structures accurately describes electronic dispersions obtained for stacks with different θ and number of layers, providing an experimentally verified set of parameters to model twistronic graphene. Our approach makes use of the fact that a trilayer structure is the thinnest stack that can contain both a perfect and twisted interface. The former, due to translational symmetry, can be straightforwardly described in the real space using t(r, z). At the same time, the impact of the moiré pattern formed at the latter can be captured in the reciprocal space by considering scattering by moiré reciprocal vectors on the momentum-dependent potential t̃(q, z), which is the two-dimensional Fourier transform F[t(r, z)] of t(r, z) (see the comparison of the two cases in Fig. 1a). As a consequence, this method should enable determination of interatomic couplings for all van der Waals interfaces for which moiré effects were observed.
Results
ARPES of twisted trilayer graphene. We grew our graphene trilayers on copper foil using chemical vapour deposition 29,30 . The inset of Fig. 1b shows the intensity map of copper d-band photoelectrons which are attenuated differently by the overlying graphene layers depending on their number. This provides means to identify all of the layers in our stack, shown in the inset with different shades of grey and indicated with the red arrows. As depicted schematically in the main panel of Fig. 1b, the bottom two layers form a Bernal bilayer (2L) while the crystallographic axes of the top monolayer (1L) are rotated by an angle θ with respect to those of the layer underneath. As a result, the Brillouin zones corresponding to the bilayer and monolayer are also rotated with respect to each other, Fig. 1c. We focus here on the vicinity of one set of the corners of the two Brillouin zones, which we denote K_2 and K_1, for the bilayer and monolayer, respectively. The separation between these two points, dependent on the twist angle, defines an effective superlattice Brillouin zone, indicated in orange in the inset of Fig. 1c.
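The dependence of this corner separation on the twist angle follows from standard graphene geometry. The sketch below is illustrative only (it is not code from this work); it uses the carbon-carbon bond length quoted in the text and evaluates the K_2-K_1 separation and the approximate moiré superlattice period for a few twist angles mentioned in this study.

```python
# Illustrative sketch of the twist-angle dependence of the superlattice Brillouin zone size.
import numpy as np

R_AB = 1.46                      # carbon-carbon bond length in Angstrom (value used in the text)
A_LATT = np.sqrt(3.0) * R_AB     # graphene lattice constant

def corner_separation(theta_deg):
    """|K_2 - K_1| in 1/Angstrom for a twist angle theta given in degrees."""
    k_corner = 4.0 * np.pi / (3.0 * A_LATT)   # |K| of a single layer
    return 2.0 * k_corner * np.sin(np.radians(theta_deg) / 2.0)

def moire_period(theta_deg):
    """Approximate moire superlattice period in Angstrom for small twist angles."""
    return A_LATT / (2.0 * np.sin(np.radians(theta_deg) / 2.0))

for theta in (1.1, 9.0, 19.1):
    print(theta, corner_separation(theta), moire_period(theta))
```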
Fig. 1 (caption fragment, parts b and c): Inset of b shows photoemission intensity from the copper substrate, which is attenuated by the graphene layers above, providing a measure of graphene layer number; the red arrows indicate each of the graphene layers in the trilayer stack and the cyan line corresponds to the distance of 10 μm. c Brillouin zones of the Bernal bilayer (black) and rotated monolayer (blue) with bilayer and monolayer graphene low-energy electronic spectra shown in the vicinities of one set of the Brillouin zone corners; the inset depicts in orange the superlattice Brillouin zone and the cyan line indicates the k-space path, cuts along which are presented in Figs. 2a and 3.
In Fig. 2a, we present ARPES intensity along a cut in the k-space connecting K_2 and K_1, with the energy reference point set to the linear crossing (Dirac point) at K_1. Close to each corner, the intensity reflects the low-energy band structures of
unperturbed 2L and 1L. Because the bilayer flake is below the monolayer, signal from the former is attenuated due to the electron escape depth effect. In between the two spectra, coupling of the two crystals leads to anticrossings of the bands and opening of minigaps (marked as ε_g^I and ε_g^II in the figure). As the size of the superlattice Brillouin zone depends on the twist angle, the energy positions of the minigaps also depend on θ. Moreover, the magnitudes of the minigaps depend on the interlayer coupling between the bilayer and monolayer and also, in principle, vary with θ. However, fundamentally, all of the features in our spectrum originate in interactions between carbon atoms, be it in the same or different layers, at the twisted or aligned interface. This provides us with an opportunity to study the interatomic coupling t(r, z) in carbon materials.
Parametrising carbon-carbon interaction potential. In order to understand our data, we use a generic Hamiltonian for a van der Waals heterostructure comprised of three layers of the same two-dimensional crystal (Eq. (1)). In this Hamiltonian, the diagonal block Ĥ_0(θ_i, ε_i) describes the i-th layer at a twist angle θ_i, with on-site energies of atomic sites in this layer, ε_i. Here, because only the relative twist between any two adjacent layers is important, we have θ_1 = θ_2 = 0 and θ_3 = θ. Also, our choice of energy reference point is equivalent to ε_3 = 0 and we introduce the potential energy difference, 2u = ε_1 − ε_2, as well as the average energy, Δ = (ε_1 + ε_2)/2, of layers 1 and 2 (the charge transfer between the copper foil and the graphene layers giving rise to u ≠ Δ ≠ 0 is discussed in more detail in ref. 29). For graphene, the intralayer blocks Ĥ_0 can be straightforwardly described using a tight-binding model 31 for a triangular lattice with two inequivalent atomic sites, A and B, per unit cell and nearest neighbour coupling between them, γ_0 ≡ −t(r_AB, 0), where r_AB is a vector connecting neighbouring A and B atoms with the carbon-carbon bond length |r_AB| = 1.46 Å.
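For concreteness, a minimal sketch of how one such intralayer block can be evaluated at a given wave vector is shown below. It assumes the standard nearest-neighbour tight-binding conventions described above (two sublattices, hopping γ_0, layer rotation applied to the bond vectors) and is not the authors' code; the numerical value of γ_0 is a placeholder rather than the fitted value from Table 1.

```python
# Minimal sketch of the intralayer block H_0(theta_i, eps_i) of one graphene layer.
import numpy as np

R_AB = 1.46   # carbon-carbon bond length in Angstrom (value used in the text)

def intralayer_block(k, theta=0.0, onsite=0.0, gamma0=3.0):
    """2x2 Bloch Hamiltonian (in eV) of one layer rotated by `theta` (radians),
    evaluated at the 2D wave vector `k` (1/Angstrom). `gamma0` is a placeholder."""
    # Nearest-neighbour bond vectors from an A site, rotated together with the layer.
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    deltas = [rot @ (R_AB * np.array([np.cos(phi), np.sin(phi)]))
              for phi in (0.0, 2 * np.pi / 3, 4 * np.pi / 3)]
    f = sum(np.exp(1j * np.dot(k, d)) for d in deltas)
    return np.array([[onsite, -gamma0 * f],
                     [-gamma0 * np.conj(f), onsite]])
```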
Of more importance for us, however, are the off-diagonal blocks T̂(θ_i − θ_{i−1}) which capture the twist-dependent interlayer interactions between adjacent layers (we neglect the interaction between the bottom and the top layers, which is at least an order of magnitude weaker 32 ). As the bottom two layers are stacked according to the Bernal stacking, a real-space description of the interlayer interaction block T̂(0) is possible with the leading coupling t(0, c_0) ≡ γ_1, with interlayer distance c_0 = 3.35 Å, due to atoms with neighbours directly above or below them, as shown in Fig. 1a 33 . In contrast, we describe the coupling between the twisted layers, i = 2, 3, in the reciprocal space based on electron tunnelling from a state with wave vector k in layer 2 to a state with wave vector k′ in layer 3, with the requirement that crystal momentum is conserved up to reciprocal vectors of layers 2 and 3 34,35 . The strength of a given tunnelling process is set by the two-dimensional Fourier transform, F[t(r, z)] = t̃(q, z), of the real-space coupling t(r, z); here τ = (−|r_AB|, 0) and R̂_θ is a matrix of clockwise rotation by angle θ (see Supplementary Note 1 for more details on the construction of the Hamiltonian Ĥ). The uniqueness of a trilayer with one perfect and one twisted interface (as exemplified in Fig. 1a for the case of graphene) lies in the fact that the Hamiltonian Ĥ contains interlayer blocks based on both the real-space (T̂(0)) and reciprocal-space (T̂(θ)) descriptions, which provide complementary information and at the same time are related to each other because of the Fourier transform connection between t(r, z) and t̃(q, z). Because of this, comparison of the photoemission data with the spectrum calculated based on Eq.
(1) provides more information about the interatomic coupling t(r, z) than structures with one type of interface only. For our graphene trilayer, we compute the miniband spectrum of Ĥ (see Methods for more details) assuming a Slater-Koster-like two-centre ansatz for t(r, z) = t(|r|, z) 25 , t(r, z) = V_π(r, z) [1 − (z/|r_3D|)²] + V_σ(r, z) (z/|r_3D|)², with V_π(r, z) = −γ_0 exp[−α_π(|r_3D| − |r_AB|)] and V_σ(r, z) = γ_1 exp[−α_σ(|r_3D| − c_0)], where V_π and V_σ represent the strength of the π and σ bonding 36 , respectively, and α_π and α_σ their decay with increasing interatomic distance. In fitting our numerical results to the experimental data in Fig. 2a, we first determine the position of the 1L Dirac point, which sets the ε = 0 reference point. We then use the electronic band gap at K_2 to fix the electrostatic potential 2u and position the bilayer neutrality point halfway in the gap, establishing the potential energy shift Δ. We obtain the in-plane nearest neighbour hopping γ_0 from the slope of the 1L linear dispersion close to the Dirac point at K_1, while the direct interlayer coupling γ_1 is set by the splitting of the 2L lower valence band from the neutrality point at K_2. Finally, the decay constants α_π and α_σ are found numerically using constraints on the magnitudes of the gaps ε_g^I and ε_g^II. The miniband spectrum resulting from our model is shown in red dashed lines in Fig. 2a, the functions t(|r|, c_0) and t̃(|q|, c_0) are plotted in Fig. 2b and the corresponding values of the parameters γ_0, γ_1, α_π and α_σ are summarised in Table 1. The interatomic potential we obtain decays more rapidly in the real space (and hence slower in the reciprocal space) than suggested by computational results 25 . Importantly, the parametrization of t(r, z) does not depend on the twist angle and so should be applicable to other graphene stacks with twisted interfaces. It also does not depend on the doping level because, for the relevant range of electric fields, the electrostatic energies Δ and u do not modify the electron hoppings. At the same time, once these energies are determined for a particular stack, their influence on the band structure (shifting of the positions and magnitudes of anticrossings) is captured through the Hamiltonian Ĥ. To confirm the applicability of a single parametrization of t(r, z) to different graphene stacks, we compare in Fig. 3 the miniband spectra computed using the parameters from Table 1 to ARPES intensities measured along a similar K_2-K_1 k-space cut for, in Fig. 3a, a trilayer with θ = 9° and, in Fig. 3b, a twisted bilayer with θ = 19.1°. Our model describes the bands of both of the structures well, despite changes in the twist angle, number of layers, potentials u and Δ (which vary with growth conditions and thickness of the stack 29 and are determined for each structure individually) and the magnitudes of minigaps.
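A compact numerical illustration of the ansatz and of the real-space/reciprocal-space connection exploited in the fit is sketched below. It is not the authors' fitting code: the parameter values are placeholders rather than the fitted values of Table 1, scipy is assumed available, and the Fourier transform is evaluated by brute-force radial integration with normalisation conventions (e.g. per unit cell area) left aside.

```python
# Sketch of the Slater-Koster-like coupling t(r, z) and a numerical estimate of t~(q, z).
import numpy as np
from scipy.special import j0          # Bessel function J_0 for the radial 2D transform

R_AB, C0 = 1.46, 3.35                 # bond length and interlayer distance in Angstrom

def t_real(r, z, gamma0=3.0, gamma1=0.4, alpha_pi=3.0, alpha_sigma=3.0):
    """Coupling t(r, z) between two carbon p_z orbitals separated by in-plane
    distance r and out-of-plane distance z (energies in eV; placeholder parameters)."""
    r3d = np.sqrt(r**2 + z**2)
    v_pi = -gamma0 * np.exp(-alpha_pi * (r3d - R_AB))
    v_sigma = gamma1 * np.exp(-alpha_sigma * (r3d - C0))
    cos2 = (z / r3d) ** 2
    return v_pi * (1.0 - cos2) + v_sigma * cos2

def t_fourier(q, z, r_max=30.0, n=4000):
    """2D Fourier transform of the isotropic t(|r|, z) at |q|, via direct radial
    integration: t~(q) = 2*pi * Integral t(r) J_0(q r) r dr (area normalisation omitted)."""
    r = np.linspace(1e-4, r_max, n)
    integrand = t_real(r, z) * j0(q * r) * r
    return 2.0 * np.pi * np.trapz(integrand, r)
```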
Probing electron wave function. We assess the accuracy of our parametrization of the interatomic potential, t(r, z), further by directly modelling the ARPES intensity data (we use the approach developed in ref. 37 and applied to the graphene/hexagonal boron nitride heterostructure in ref. 38 ; see Methods and Supplementary Note 3 for further details). In graphene materials, interference of electrons emitted from different atomic sites within the unit cell provides additional information about the electronic wave function 37 . This is best visualised by ARPES intensity patterns at constant electron energy, which we present, both as obtained experimentally (top row) and simulated theoretically (bottom row), in Fig. 4 for the trilayer sample with θ = 9° and energies indicated with grey dashed lines in Fig. 3. For the map at the energy ε = 0, the two spots of high intensity indicate the positions of the valleys K_1 and K_2. For energies between 0 and −0.6 eV, the bilayer and monolayer dispersions are effectively uncoupled. The crescent-like intensity pattern in the vicinity of K_1 reflects the pseudospin of n = 1 (evidence of a Berry phase of π 39 ) of electrons in monolayer graphene. In contrast, in bilayer graphene, the low-energy band hosts massive chiral fermions 40 with pseudospin n = 2, so that the outer ring pattern in the vicinity of K_2 displays two intensity maxima, a feature best visible in panel (II). Because in our model all electron hoppings are generated naturally by t(r, z), agreement of our ARPES simulation with experimental data provides confirmation that our model and parametrization of the interatomic coupling t(r, z) lead to the correct band structure. Finally, panels (III)-(V) in Fig. 4 show the constant-energy maps in the vicinity of the minigaps which open due to hybridisation of the bilayer and monolayer bands. The merging of 1L and 2L contours in panel (III) leads to a van Hove singularity and an associated peak in the electronic density of states, similarly to the case of twisted bilayer graphene 15 and as discussed also for twisted trilayer graphene 29 (in the latter, the position of the van Hove singularity is established by tracking the minigap; the former is caused by saddle points in the electronic dispersion as the bands flatten at the anticrossings, so that every minigap is accompanied by a van Hove singularity). Overall, our simulated patterns correctly reflect the evolution of the minigap as a function of energy and wave vector as well as the measured photocurrent intensity.
Fig. 3 Modelling stacks with different twists and layer numbers.
Comparison of the ARPES intensity and the calculated electronic band structure (obtained using the parameter set in Table 1 and shown with red dashed lines) for (a) a twisted trilayer with θ = 9° and (b) a twisted bilayer with θ = 19.1°, both measured along the direction connecting Brillouin zone corners K_2 and K_1 as shown in Fig. 1c and indicated in the inset. In (a), the grey dashed lines, labelled (I)-(V), indicate energies for which constant-energy ARPES intensity maps are presented in Fig. 4.
Discussion
Our parametrization of t(r, z) is applicable to a wide range of twist angles, including the magic-angle regime 10,34 as well as the 30°-twisted bilayer graphene quasicrystal 41,42 . Notably, it yields the k-space interlayer coupling at the graphene Brillouin zone corner K, t̃(|K|, c_0) = 0.11 eV. This agrees with the values used in effective models of the low-twist limit of twisted bilayer graphene 27,34,35,43 which require t̃(|K|, c_0) as the only parameter.
Overall, our form of t(r, z) decays more rapidly in the real space (and hence slower in the reciprocal space) than usually assumed. This might explain the discrepancy between theory and experimental ARPES intensities of Dirac cone replicas observed for the case of 30°-twisted bilayer graphene in ref. 41 .
As we have shown, the same interatomic coupling t(r, z) can be used in graphene structures with different numbers of layers as, similarly to the case of perfect graphite and other layered materials, coupling to the nearest layer dominates the interlayer couplings. The continuum approach has been applied extensively to model the graphene/graphene interface, including to predict the existence of the magic angle 34 . Hence, in Supplementary Figure 1, we use our results to simulate ARPES spectra for twist angles in the vicinity of the magic angle, θ ≈ 1.1°, and show qualitative agreement with the recent experimental data 44,45 . The continuum model was also used successfully to interpret experimental observations in graphene on hexagonal boron nitride 5 as well as in homo- and heterobilayers of transition metal dichalcogenides 46,47 . Our approach allows for experimental parametrization of the interatomic coupling t(r, z) for each of these interfaces as well as for others for which the influence of neighbouring crystals can be approximated by considering the harmonics of the moiré potential 43,[48][49][50][51][52] . To comment, previous studies suggest that adapting our model to stacks of transition metal dichalcogenides requires taking into account changes in the interlayer distance as a function of the twist angle 20 . Moreover, in contrast to graphene, for which the part of t̃(q, z) most relevant to modelling twisted interfaces is that for q pointing to the Brillouin zone corner, q ≈ K, for transition metal dichalcogenides more significant changes due to interlayer coupling occur in the vicinity of the Γ point. In multilayers of 2H semiconducting dichalcogenides MX_2 (M = Mo, W, and X = S, Se), coupling of the degenerate states at the Γ point, built of transition metal d_z² and chalcogen p_z orbitals, leads to their hybridisation and splitting, which drives the direct-to-indirect band gap transition 53,54 . Using the form of t(r, z) suggested in ref. 26 for chalcogen p_z-to-p_z hopping (which dominates the interlayer coupling) in transition metal disulfides and diselenides, we computed the corresponding t̃(q, z) and obtained an estimate of t̃(Γ, c_X−X) ~ 1.2 eV for the interlayer nearest neighbour distance between chalcogen sites, c_X−X ≈ 3 Å. Taking into account the fractional contribution of the p_z orbitals to the top valence band states at Γ in a monolayer 26 , we obtain a coupling between two such states in a bilayer of ~0.4 eV. This, in turn, suggests a band splitting of ~0.8 eV, in qualitative agreement with observations [53][54][55] . This supports the idea that our model can accurately describe and parametrise interatomic coupling between materials other than graphene.
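The arithmetic behind the quoted ~0.4 eV coupling and ~0.8 eV splitting can be written out explicitly. In the sketch below, the fractional p_z weight is an assumed placeholder chosen only to reproduce the order of magnitude; its actual value is taken from ref. 26 in the text.

```python
# Hypothetical arithmetic behind the Gamma-point splitting estimate for 2H-MX2 bilayers.
t_pz_pz_eV = 1.2          # estimated t~(Gamma, c_X-X) for chalcogen p_z - p_z hopping
pz_weight = 0.58          # assumed fractional p_z contribution per state (placeholder)

coupling_eV = t_pz_pz_eV * pz_weight**2   # ~0.4 eV between the two Gamma-point states
splitting_eV = 2.0 * coupling_eV          # ~0.8 eV symmetric/antisymmetric splitting
print(coupling_eV, splitting_eV)
```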
Experimentally, our approach requires fabrication of trilayer (or thicker) stacks with one twisted and one perfect interface in order to benefit from the complementarity of the information obtained from a self-consistent real- and momentum-space description of the interfaces. Notably, building on the observations of superconductivity in magic-angle twisted bilayer graphene 10,11 , structures containing both a twisted and a perfect interface, like twisted trilayer graphene 56,57 , double bilayer graphene [58][59][60][61][62][63] or double bilayer WSe_2 64 , have recently attracted attention in their own right due to the observation of correlated electronic behaviour. Our approach provides one of the avenues to build an experimentally validated single-particle basis to study such effects. It could, in principle, also be applied to stacks of different materials, as long as one of the interfaces is commensurate and can be described in the real space in a tight-binding-like fashion. Finally, apart from continuum models, the interatomic coupling t(r, z) can also be used directly in large-scale tight-binding calculations for commensurate twist angles 25,26,[65][66][67] .
Methods
ARPES measurements. The ARPES measurements were performed at the Spectromicroscopy beamline at the Elettra synchrotron (Trieste, Italy). Before measurements, the samples were annealed at 350° for 30 minutes. The experiment was then performed at a base pressure of 10^−10 mbar in ultrahigh vacuum and at a temperature of 110 K. We used photons with an energy of 74 eV and estimate our energy and angular resolution as 50 meV and 0.5°, respectively. For each sample, we determined the twist angle θ by measuring the distance between the Brillouin zone corners K_2 and K_1, which depends on the twist angle as |K_2 − K_1| = 8π/(3√3|r_AB|) · sin(θ/2).
Theoretical calculations. We write the Hamiltonian Ĥ in Eq. (1) in the basis of sublattice Bloch states constructed of carbon p_z orbitals ϕ(r_3D) 31 , Σ_{R_l} e^{ik·(R_l + τ_{X,l})} ϕ(r_3D − R_l − τ_{X,l}), where k is the electron wave vector, X = A, B is the sublattice, R_l are the lattice vectors of layer l and τ_{X,l} points to the site X in layer l within the unit cell selected by R_l. We include in the basis all states coupled to k through T̂(θ) which are less than a distance 28π/(3√3|r_AB|) · sin(θ/2) away from it, compute the matrix elements of Ĥ in this truncated basis and diagonalize the resulting matrix numerically. In order to simulate the ARPES intensity, we project the eigenstates of the moiré Hamiltonian, Ĥ, on a plane-wave-like final state (see Supplementary Note 3 for more details and ref. 38 for a detailed discussion of this approach for the case of graphene on hexagonal boron nitride). We determine the broadening of the ARPES signal as well as the decay constant for the intensity of the Bernal bilayer signal by fitting to the experimental data.
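The two expressions used in the Methods, the twist-angle determination from the measured corner separation and the distance cutoff for the truncated basis, are straightforward to turn into code. The sketch below is an illustration under the geometry stated above, not the authors' implementation; the helper names are hypothetical and the cutoff is written as a multiple (3.5) of |K_2 − K_1|, which corresponds to the 28π/(3√3|r_AB|)·sin(θ/2) distance quoted above.

```python
# Sketch: recovering the twist angle from the measured corner separation and applying
# the distance cutoff used to truncate the plane-wave basis (assumed geometry as above).
import numpy as np

R_AB = 1.46  # Angstrom

def twist_angle_from_separation(dk):
    """Invert |K_2 - K_1| = 8*pi / (3*sqrt(3)*r_AB) * sin(theta/2); dk in 1/Angstrom."""
    prefactor = 8.0 * np.pi / (3.0 * np.sqrt(3.0) * R_AB)
    return np.degrees(2.0 * np.arcsin(dk / prefactor))

def in_basis(k, k0, theta_deg, cutoff_factor=3.5):
    """Keep a scattered wave vector k in the truncated basis if it lies closer to k0
    than cutoff_factor times the corner separation at the given twist angle."""
    dk_max = cutoff_factor * 8.0 * np.pi / (3.0 * np.sqrt(3.0) * R_AB) \
             * np.sin(np.radians(theta_deg) / 2.0)
    return np.linalg.norm(np.asarray(k) - np.asarray(k0)) < dk_max
```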
Data availability
The data used in this study are available from the University of Bath data archive at https://doi.org/10.15125/BATH-00864 68 . | 2020-07-18T15:41:24.393Z | 2020-07-17T00:00:00.000 | {
"year": 2020,
"sha1": "26af530ecdcabc916f7023d152d2b464404a78a5",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41467-020-17412-0.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "26af530ecdcabc916f7023d152d2b464404a78a5",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Physics",
"Medicine"
]
} |
230659783 | pes2o/s2orc | v3-fos-license | Characteristics of lead glass for radiation protection purposes: A Monte Carlo study
Background: Lead glass has a wide variety of applications in radiation protection. This study aims to investigate some characteristics of lead glass such as the γ-ray energy-dependent mass and linear attenuation coefficients, the half-value layer thickness, and the absorbed dose distribution for specific energy. Materials and Methods: The attenuation parameters of different lead glass types against high-energy photons (0.2-3 MeV) of gamma rays have been calculated by the Monte Carlo technique and a deterministic method. Besides, the depth dose distribution inside the volume of two cubic lead glass samples was calculated by two Monte Carlo-based computer codes, for gamma rays of 300 keV. In each part of the study, the results of the two methods have been compared. Results: Increasing the Pb concentration (weight in %) by 1% in the lead glass causes a 1.6%-3% increase in the linear attenuation coefficient, depending on the energy. However, the mass attenuation coefficient does not show significant variation for different types of lead glass, especially for the energies higher than 400 keV. Moreover, almost half of the total dose from 300 keV photons will be absorbed in the first 3.5 mm of the sample’s thickness. Conclusion: Results indicate that the Monte Carlo technique is as reliable as the deterministic methods for calculating the attenuation characteristics of the lead glass. The provided data in this investigation can be useful for radiation protection purposes, especially in the case of selecting the lead glass type and dimension based on a specific application.
INTRODUCTION
While working with ionizing radiation, the protection of living organisms and sensitive equipment against the radiation hazard is crucially needed. This protection can be done in terms of dosimetry and/or shielding against ionizing radiation. Shielding an experimental setup that contains radioactive sources with a suitable transparent shielding material protects the personnel performing the experiment from the hazard and prevents environmental interference with the experimental results, without losing visual contact.
Photon-matter interaction parameters such as the linear and mass attenuation coefficients and the half-value layer thickness (HVL) provide information for evaluating the performance of radiation shielding materials. One of the most commonly used materials in radiation shielding applications is lead glass. Being both transparent and a good attenuator of high-energy photons (due to its high density and atomic number), different types of lead glass can be applied as viewing windows at radioactive storage stations, hot cells, nuclear fuel development and reprocessing plants, and for applications related to nuclear reactors (1,2) . Furthermore, radiotherapy centers widely use lead glass as observation windows, ensuring the protection of personnel from being overexposed to ionizing radiation (3) . Therefore, studying the photon attenuation characteristics of lead glass as a radiation-shielding material is an important subject in medical physics and other radiation-related nuclear fields (1)(2)(3) .
Besides being a very good radiation-shielding material, lead glass can also provide important radiation protection -related dosimetric data, which means it can be used as a passive reusable solid-state dosimeter (4,5) . When a glass sample (lead glass in this case) is exposed to the ionizing radiation, some of its parameters (like color) will change (6) . In other words, exposure to a particular dose of ionizing radiation causes a proportional darkening in the lead glass and affects its transparency (7) . These changed parameters are proportional to the absorbed dose by the sample (8,9) . Investigation on radiation-shielding properties of different types of glass has been performed before for some photon energies (10)(11)(12) . However, studies regarding a wide energy region with the most commonly used lead glasses, which cover both dosimetry and radiation-shielding applications are very scarce. There have been also studies on the dose-related variation of the optical properties of glass samples (13) . As is known, the absorbed dose is correlated with the radioactive source type and energy, the density and elemental composition of the absorber, and the source-absorber distance. Therefore, it is important to choose the optimum density and thickness of the lead glass, to maintain both transparency and shielding characteristics.
This work aims to investigate the attenuation properties of different types of the lead glass (ZF1, ZF2, ZF3, ZF6, and ZF7) against γ-rays of 0.2-3 MeV energies, as well as the dose distribution through the volume of cubic samples of ZF1 and ZF7, for gamma rays of 300 keV energy. The attenuation characteristics provide useful information on the shielding applications of the lead glass, and the dose distribution results can be useful in dosimetry applications.
Attenuation parameters
The characteristics of the five types of lead glass are presented in table 1. The mass fractions (weight in %) of the main components (SiO2 and PbO) as well as the other materials (K2O, As2O3, etc.) are provided in this table. Since lead is a heavy and opaque element, it is obvious that increasing the Pb concentration leads to more density and less transparency in the glass samples.
To calculate the attenuation parameters, a Monte Carlo-based computer code (MCNP-4C) has been applied. MCNP is a general-purpose Monte Carlo N-Particle transport code, which is developed and maintained by Los Alamos National Laboratory (14,15) . The geometry setup, consisting of two cylindrical collimators made of pure lead (source and detector collimators) and a detection area, is identical to figure 1 of the ref. (16) . A disk-shaped sample of 1 cm thickness and 8 cm diameter, made of lead glass is located between the source and detector collimators. Point-like monoenergetic gamma sources in the energy range of 0.2-3 MeV with 200 keV increments have been located at the entrance of the source collimator, to obtain the attenuation parameters as a function of energy.
To obtain the flux over the detector cell, the F4 tally of MCNP-4C has been used. F4 is a special tally for track length estimation of cell flux. The number of tracked photons (events) was considered as 10 7 in each simulation process. For each energy value, the theoretical experiments were performed six times, for five types of the lead glass (ZF1, ZF2, ZF3, ZF6, and ZF7). At first, the simulation was done with no sample between the collimators, and the result of the F4 tally was registered as the photon flux over the detector cell in the absence of the attenuator (known as incident photons). Then, the simulation was repeated in the presence of each sample type individually, to obtain the flux of the transmitted photons. Therefore, the linear attenuation coefficient (µ) can be determined by equation (1) (17) :
µ = (1/x) ln(N0/N), (1)
where N0, N, and x are the incident photons, transmitted photons, and the sample thickness, respectively.
The mass attenuation coefficient (µ/ρ), a parameter independent of the material density, is defined by equation (2) (17) :
µm = µ/ρ, (2)
where ρ is the material density in (g/cm 3 ) and µ is the linear attenuation coefficient in (cm −1 ), obtained from equation (1). For the mass attenuation coefficient, the simulation results have been compared with the results of the XCOM (18) program. This comparison can be a validation of the modeling process. Furthermore, one can obtain a continuous mass attenuation graph by using XCOM, which may be applicable for any energy value by extrapolation of the graph. The XCOM program is freely available on the National Institute of Standards and Technology (NIST) website (19) .
The half-value layer thickness (HVL), given by equation (3), is defined as the thickness of the attenuator that reduces the intensity of photons to half of its initial magnitude (12) :
HVL = ln 2/µ. (3)
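A minimal post-processing sketch for equations (1)-(3) is given below in Python; the tally values, thickness, and density are hypothetical placeholders rather than results of this study.

```python
import math

# Post-processing sketch for equations (1)-(3): derive mu, mu/rho and HVL from a
# pair of tallied fluxes. The flux values, thickness and density are hypothetical
# placeholders, not results of this study.
N0 = 4.8e-5   # F4 tally without the sample (incident flux, arbitrary units)
N = 1.1e-5    # F4 tally with a lead-glass disk in place (transmitted flux)
x_cm = 1.0    # sample thickness (cm)
rho = 4.8     # assumed glass density (g/cm^3)

mu = math.log(N0 / N) / x_cm   # equation (1), 1/cm
mu_rho = mu / rho              # equation (2), cm^2/g
hvl = math.log(2.0) / mu       # equation (3), cm

print(f"mu = {mu:.3f} 1/cm, mu/rho = {mu_rho:.4f} cm^2/g, HVL = {hvl:.3f} cm")
```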
Dose distribution as a function of depth inside a sample
In the second part of the study, a cubic sample of 1×1×1 cm 3 has been considered to investigate the dose distribution as a function of depth inside the volume of the lead glass. The dimension of the sample is chosen arbitrarily. A point-like monoenergetic gamma source of 300 keV energy was located at 100 cm distance from the center of the mass of the sample. MCNP-4C and EGSnrc codes have been applied to perform the simulation. EGSnrc was originally released in 2000, as a complete overhaul of the Electron Gamma Shower (EGS) software package originally developed at the Stanford Linear Accelerator Center (SLAC) in the 1970s (20) . The calculation has been performed for ZF1 and ZF7 (as two extrema of the Pb concentration among the five models mentioned in table 1). The cubic lead glass sample was divided into ten identical layers of 1 mm thickness. Using the simulation codes, the total dose absorbed by each layer of the main cube has been calculated. Having a quite small sample of lead glass (each layer is 1×1×0.1 cm 3 ), the number of tracked particles was set at 10 8 in this part of the work, to reduce the relative error to a reasonable value (less than 0.1). Since Monte Carlo is a statistical-based technique, extensive statistical analysis for outputs is applied in each code. For example, ten statistical checks in MCNP are made with a pass yes/no criterion to assess the tally convergence, relative errors, figure of merit, etc. The relative error is inversely related to the number of histories and must be less than 0.1 to be evaluated as a reliable output.
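To illustrate how the layer-wise tallies are turned into the depth-dose percentages reported in the next section, the following Python sketch normalizes a set of per-layer dose values and locates the depth containing half of the total dose; the ten layer values are invented placeholders, not the simulated ZF1 or ZF7 results.

```python
import numpy as np

# Normalize per-layer energy-deposition tallies to percent of the total dose and
# locate the depth containing half of it. The ten layer values are invented
# placeholders, not the simulated ZF1/ZF7 results.
layer_dose = np.array([18.0, 14.5, 11.8, 9.6, 8.0, 6.7, 5.7, 4.9, 4.2, 3.7])
depth_mm = np.arange(1, 11)   # each of the ten layers is 1 mm thick

percent = 100.0 * layer_dose / layer_dose.sum()
cumulative = np.cumsum(percent)

for d, p, c in zip(depth_mm, percent, cumulative):
    print(f"{d:2d} mm: {p:5.1f}% of total, cumulative {c:5.1f}%")

half_depth = depth_mm[np.searchsorted(cumulative, 50.0)]
print(f"half of the absorbed dose falls within the first ~{half_depth} mm")
```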
RESULTS
The linear attenuation coefficients for ZF1, ZF2, ZF3, ZF6, and ZF7 glasses are plotted in figure 1(a) as a function of gamma-ray energy, for the energy interval 0.2-3 MeV with 200 keV increments. According to data presented in table 1 and figure 1(a), the average increasing rate of linear attenuation coefficient (in percent) has been obtained related to a 1% increase of the PbO concentration (in terms of the mass fraction) in the lead glass sample and is shown in figure 1(b).
The mass attenuation coefficients as a function of gamma-ray energy are plotted in figure 2(a), for the same energies as figure 1. Since the mass attenuation coefficient is independent of the material density, one can observe just a slight difference in mass attenuation coefficients between various types of lead glass, due to different elemental concentrations.
A comparison between the mass attenuation coefficients calculated by the Monte Carlo method (MCNP-4C) and the ones obtained from XCOM shows a very good agreement between the two different methods ( figure 2(b)).
The half-value layer thicknesses (HVL) have been calculated for ZF1, ZF2, ZF3, ZF6, and ZF7, using the equation (3) and the results are illustrated in figure 3.
To investigate the depth dose distribution inside the volume of each sample, the percentage of the total dose absorbed by each layer inside the main cube, for ZF1 and ZF7 and photons of 300 keV energy has been calculated. The results of simulation by MCNP-4C and EGSnrc are shown in figure 4. It can be found from this figure that in the first 3.5 mm thickness of the main cube, more than 45% and 55% of the total dose will be absorbed in ZF1 and ZF7 cubic samples of 1×1×1 cm 3 , respectively. This figure also shows the accordance of the results obtained by two different simulation codes, both working based on the Monte Carlo technique. The observed discrepancy between the results of MCNP-4C and EGSnrc for photons of 300 keV energy was no more than 3% for both ZF1 and ZF7.
DISCUSSION
As can be seen in figures 1(a) and 2(a), the linear and mass attenuation coefficients show an exponential decrease with increasing gamma-ray energy, which is in agreement with refs. (3,10,12) . Furthermore, increasing the concentration of Pb as a heavy element leads to a higher attenuation coefficient in these glasses. Figure 1(b) reveals this increasing rate.
It can be found from figure 1(b) that, for photons of 200 keV energy, increasing the lead concentration by 1% causes an average increase in the linear attenuation coefficient of around 3%. As the gamma-ray energy increases, this rate drops drastically. In the energy range of 0.8-3 MeV, the increase in the linear attenuation coefficient is less than 2% for a 1% increase in Pb concentration. Indeed, the maximum (3%) and minimum (1.6%) increasing rates in the studied energy interval occur at 200 keV and 1.4 MeV, respectively.
For the energy values of 0.2, 0.4, 0.6, and 0.8 MeV, the mass attenuation coefficient of ZF7 (with the highest Pb concentration) compared to ZF1 (with the lowest Pb concentration) has increased by 29.6%, 15.4%, 7.6%, and 3.8%, respectively. However, in the energy range of 1-3 MeV, the relative difference between the mass attenuation coefficients of ZF1 and ZF7 is less than 2.5% (figure 2(a)). Figure 2(b) is presented for a comparison between MCNP and XCOM as a deterministic method. The results of the two methods for all types of lead glass and in the interested energy interval comply with each other very well (less than 4% discrepancy). For brevity, this comparison is just shown for ZF7 in figure 2(b). The attenuation coefficients are higher in the low energy range in which the photoelectric effect is dominant. Then they show a rapid decrease by increasing the energy to the range corresponding to the Compton scattering region (figures 1(a) and 2).
CONCLUSION
Using the results of this study and based on both transparency and attenuation properties needed for a specific application of the lead glass, one can select the optimum concentration of the lead for a desired model. This study also shows that the Monte Carlo technique is as reliable as the deterministic methods for calculating the shielding parameters of lead glass at the interested energy interval. The provided results and data in this work can be used as references or comparable values for radiation protection purposes, in terms of both dosimetry and shielding applications. | 2021-01-06T07:02:56.544Z | 2020-10-01T00:00:00.000 | {
"year": 2020,
"sha1": "2e0445ca1a3c82db96e160a868f0ba3d7202e6e2",
"oa_license": null,
"oa_url": "http://ijrr.com/files/site1/user_files_fad21f/nakisarezakhani-A-10-2055-77-f57caa7.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "dd745eb7606d8aa770c4200159460ebdedcbe5b7",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
214372616 | pes2o/s2orc | v3-fos-license | Optimal design of foundations by means of nonlinear calculation methods
This paper proposes using the defining equations from the theory of adaptive evolution of mechanical systems (which is based on the variational principles of nonlinear structural mechanics) to design the shape and size of foundations. It presents an expression for finding the potential energy of a system and the deformation energy density, as well as the variational Lagrange equation. The paper formulates a nonlinear boundary problem solved by finite-element analysis. The solution imposes a constraint on the modulus of elasticity to take into account the physico-mechanical properties of the materials. A calculation algorithm and an APDL program are written for ANSYS. The paper also presents a solution to the problem of finding the rational foundation shape for the case of plane strain. The solution-derived rational foundation shape is shown. The authors plot the stresses and energy densities as a function of evolution at the onset and end of the iterative process. Note that the resulting foundation shape is more stable, more accurately positioned in the soil, and can carry a greater load compared to more conventional shapes.
Introduction
The choice of a foundation design for a building depends on a number of factors, including geotechnical conditions, the presence or absence of groundwater, the design of the structure itself, the loads it has to sustain, and the calculation methods. The foundation shape determines the bearing capacity, the cost-effectiveness, the constructability, and the conditions of further use.
Advancements in the methods for optimizing the foundation design must be reflected in the effective standards; any such advancement is imperative today.
As of today, foundation design uses calculations of two groups of limit states: those by strength, and those by strain. Conventionally, the foundation shape is designed and calculation-verified to suit the conditions of further use and the geotechnical conditions. Shall it be necessary to increase the bearing capacity or stability, engineers usually use variational design of specific elements, enlarge the cross-sections, or adjust the material properties.
The existing method for calculating settling and subsidence per SNiP 2.02.01-83* [1] uses a few conventions and assumptions. The disadvantages of the conventional foundation-sizing method are as follows: it produces a profile with an uneven distribution of material stiffness; it cannot precisely identify the capacity of the compressible stratum; it uses linear calculation models.
Defining Equations of a Nonlinear Structural System per TAEMS
Optimal structural design is a matter covered in a number of papers [1][2][3][4][5][6][7][8][9][10]. The theory of adaptive evolution of mechanical systems ("the TAEMS") [11] is based on synergetic principles [12,13] and can be used to rationally configure this or that object while predicting their behavior in this or that application.
This paper proposes using G.V. Vasilkov's theory of adaptive evolution of mechanical systems [11] (which is based on the variational principles of nonlinear structural mechanics) to design the shape and size of foundations [14][15][16][17].
The variational principle of nonlinear mechanics is formulated in [11]. The total potential energy of the system is given by equation (1), where e denotes the current strain energy density and e n is its standardized value.
While evolving, the system is in equilibrium. At the (n+1)th step, the variational Lagrange equation is written as equation (2), where Π n is the full potential energy of the system at the nth step.
The defining value for obtaining a more uniformly strong foundation design that better meets the strength requirements is the value at which condition (3) holds. To find the optimal foundation design, a nonlinear boundary problem is stated and solved by nonlinear iterative methods. To solve such practical problems, finite-element analysis is used [18,19].
The basic relations of finite-element analysis are as follows, where f i is the vector of bulk forces and g i is the vector of surface forces.
The finite-element stiffness matrix
Algorithm to Find the Rational Foundation Structure
To solve practical problems, the author has compiled a calculation algorithm that comprises the following operations: introduce the dimensions of the analyzed area; construct a grid of triangular elements of optimal shape for the case of plane strain and assign the initial moduli of the soil; adjust the energy density; calculate the modulus of elasticity at the (n+1) step; calculate the coefficient that accounts for the presence of reinforcements in the foundation; and plot the displacement and stress curves. The calculation stops when the relative error of the total potential energy falls below a prescribed tolerance. The calculation is carried out by finite elements using the TAEMS defining equations [11]. The calculation algorithm and an APDL program are written for ANSYS [20]. In this case, there are 18,372 triangular elements and 9,384 nodes. The required accuracy is attained after 270 iterations. The solution-derived rational continuous-footing shape is shown in Figures 1 to 4. The displacement of the rationally shaped foundation totals 3.2 cm, compared with the 5.3 cm vertical displacement of a rectangular foundation of equal size. A TAEMS-optimized foundation will, under the same conditions, settle less, be more stable, carry a greater load, and feature a more accurate depth and cross-section profile. The shape of the foundation greatly depends on the order of load application. After finding the foundation shape, the engineers finalize the design and perform calculations for limit state groups I and II (strength and strain limit states). The foundation is either a standard design or is based on reinforced-concrete structural calculations.
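The following Python sketch illustrates the flavor of this iterative procedure on a toy one-dimensional model. The modulus-update rule, the placeholder solver, and all numbers are assumptions made for illustration; they stand in for the ANSYS/APDL solve step and the TAEMS adjustment rather than reproducing the paper's exact expressions.

```python
import numpy as np

# Toy one-dimensional illustration of the iterative loop described above. The
# modulus-update rule E <- E * (e / mean(e)) and the placeholder solver are
# assumptions made for illustration; they stand in for the ANSYS/APDL solve
# step and the TAEMS adjustment, not for the paper's exact expressions.
def solve_fe(E, load):
    strain = load / E                 # placeholder constitutive response
    return 0.5 * E * strain**2        # strain-energy density per element

E = np.full(50, 20.0e6)               # initial moduli, Pa
load = np.linspace(1.0e4, 5.0e4, 50)  # element loads
energy_prev, tol = np.inf, 1.0e-4

for iteration in range(500):
    e = solve_fe(E, load)             # strain-energy density field
    total_energy = e.sum()
    if abs(total_energy - energy_prev) / total_energy < tol:
        break                         # relative change of total energy small enough
    energy_prev = total_energy
    E = E * (e / e.mean())            # drive the design toward uniform energy density
    E = np.clip(E, 1.0e6, 4.0e10)     # keep moduli within physical bounds

print(f"stopped after {iteration + 1} iterations, total energy {total_energy:.3e}")
```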
Summary
The variational principles of the structural mechanics (the TAEMS) have been used to develop a method to optimize the foundation shape and to propose a foundation cross-section shape. The model features better bearing capacity, stability, and more accurate positioning in the soil. | 2019-12-12T10:14:24.652Z | 2019-12-10T00:00:00.000 | {
"year": 2019,
"sha1": "257a732e7540b8175e74179af05757a9875b18d2",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/687/4/044032",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "8d71791b91a5aae2214353b6a98de16ededc649d",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
258212848 | pes2o/s2orc | v3-fos-license | On the Optimal Control of a Linear Peridynamics Model
We study a non-local optimal control problem involving a linear, bond-based peridynamics model. In addition to existence and uniqueness of solutions to our problem, we investigate their behavior as the horizon parameter $\delta$, which controls the degree of nonlocality, approaches zero. We then study a finite element-based discretization of this problem, its convergence, and the so-called asymptotic compatibility as the discretization parameter $h$ and the horizon parameter $\delta$ tend to zero simultaneously.
INTRODUCTION
This paper focuses on an optimal control problem with a system of constraint equations derived from peridynamics (PD), which is a contemporary non-local model in solid mechanics [46,48]. PD models do not assume differentiability (even in the weak sense) of the pertinent forces acting on a body nor of the resulting displacement vector fields, unlike their local counterparts in continuum mechanics. This feature of PD models makes them attractive for analyzing certain physical phenomena with inherent discontinuities, such as the formation of cracks in solids [51,49,50]. In this work, we will focus on the bond-based PD model, where particles in a solid are assumed to exert long-range forces on other particles within a certain radius. With this in mind, we will consider the problem of linearly deforming a [possibly heterogeneous] elastic solid occupying a domain Ω ⊂ R n to achieve a desired deformation state by applying a certain external force. The deformation field given by v(x) := x + u(x), where u is the displacement, and the external force g are related via the linearized bond-based PD model [20,35,47] given by
L δ u(x) := ∫ R n f δ (s[u](x, y), y, x) dy = g(x), x ∈ Ω,
where the vector-valued pairwise force density function f δ along the bond joining material points x and y, and the scalar linearized strain field s[u] associated with the displacement u, are given by
f δ (s[u](x, y), y, x) = H(x, y) k δ (|x − y|) s[u](x, y) (y − x)/|x − y| .
In the above, H(x, y) = 1 2 (h(x) + h(y)) serves as material coefficient for some bounded function h. The function k δ (|x − y|) is the interaction kernel that is radial and describes the force strength between material points. The parameter δ > 0, in the definition of L δ , is called the horizon and measures the degree of non-locality, i.e., the radius within which the interaction forces are considered. We assume that k δ (|x − y|) = 0 if |x − y|≥ δ; additional assumptions on the family {k δ } δ>0 will be given later.
To quantify the desirability of a displacement state u subject to the external force g, which will be our control, we introduce an objective functional I(u, g). This functional will be taken to be a sum of two parts: one measures, say, the mismatch between the displacement state u and the desired displacement field, say u des , and the other penalizes the control g and serves as a regularizer. We will delay the exact form of the objective functional until the next section, but the optimal control problem of interest in this paper can now be stated as min{I(u, g) | (u, g) ∈ X ad × Z ad }, L δ u = g in Ω, (1.1) where the admissible set X ad × Z ad will be specified in the next section. As described above, the state equation, codified by the operator L δ , will be a strongly coupled linear system of integral equations. The definition of L δ u requires knowledge of the state u outside of the domain Ω, up to a boundary layer of thickness δ. Thus, we close the state equation in (1.1) by assigning u to be a fixed displacement field u 0 in the boundary layer, which we call the nonlocal Dirichlet boundary condition.
In this work, we prove the well-posedness of (1.1) for a more general class of objective functionals and a broader class of interaction kernels k δ that include fractional-type kernels. We also study the behavior of the optimal pair (u, g) as a function of the horizon δ. In fact, we demonstrate that in the vanishing horizon limit δ → 0 + the integral equation-based optimal control problem (1.1) converges, in a certain sense, to a differential-equation-based optimal control problem. Well-posedness as well as vanishing nonlocality limit for the state equations have been studied in [35,45,56]. In addition, we consider the numerical approximation of solutions to (1.1) via the first-order optimality conditions. The discrete problem will involve two parameters: the discretization parameter h and the horizon δ. We will show that we have convergence, not only when h tends to zero, but that we also have asymptotic compatibility (see [53]), in the sense that the limit is unique regardless of the path we use to let h → 0 + and δ → 0 + . While literature on optimal control problems is immense, we cite some works that are related to the current study. The optimal control problem when the state equation is a scalar fractional or non-local equation is studied in [3,4,2,12,17,40]. The papers [7,8,1,17,22] study the finite element analysis of optimal control problems of fractional or nonlocal equations. For our approach of using the first-order optimality conditions in order to approximate the continuous problem with the corresponding discrete problems, we refer the reader to [2,15,14,17,41] for more on this subject matter. To the best of our knowledge the optimal control problem for a strongly coupled system of nonlocal equations of peridynamic-type has not been studied in the literature; the current work makes a contribution in that direction. We also mention that while the present work focuses on the basic linear bond-based peridynamic model, similar analysis can be done on the more general state-based peridynamics [48] as well as other nonlinear models, like those studied in [37]. This and other related issues will be addressed in future work.
We now outline the contents of the rest of the paper. First, Section 2 states the problems to be studied, with all notation made precise. Section 3 highlights some structural properties of the function space of interest, such as compact embedding. The framework from which the well-posedness of our local and non-local optimal control problems can be deduced is developed in Section 4. The remaining sections study the relationship between our problems as δ and h change: Section 5 considers Γ-convergence results as δ → 0 + ; Section 6 features finite element analyses for the local and non-local problems as h → 0 + ; and Section 7 proves the asymptotic compatibility of limits as δ and h both tend to 0.
PROBLEM FORMULATION
2.1. Notation and assumptions. Let us begin by introducing some notation; first, by A B we mean that there is a nonessential constant c, such that A ≤ cB. In addition, A ∼ B means A B A. We assume throughout the paper that Ω ⊂ R n is an open, bounded domain with a Lipschitz boundary, and denote Ω δ := Ω ∪ {x ∈ R n | dist(x, Ω) < δ}, where δ > 0 is the horizon parameter. By volumetric boundary we mean the boundary layer Ω δ \ Ω surrounding Ω. For any r > 0 and x 0 ∈ R n , we denote a ball centered at x 0 with radius r by B r (x 0 ). Next we provide assumptions on our kernels which are adopted from [9,40].
To properly define our function spaces and norms, we introduce some additional notation. First, given u : Ω δ → R n measurable, we let Du represent the projected difference defined as Du(x, y) := (u(x) − u(y)) · (x − y)/|x − y|; this quantity is the trace of (u(x) − u(y)) ⊗ (x − y)/|x − y| . Notice then that the linearized strain field is s[u](x, y) = Du(x, y)/|x − y|. Using these notations, the vector-valued nonlocal operator L δ is given by L δ u(x) := ∫ R n H(x, y) k δ (|x − y|) s[u](x, y) (y − x)/|x − y| dy, whenever it makes sense. We notice that for u, v ∈ C ∞ c (Ω; R n ) an integration-by-parts identity holds, see [23,Proposition A.5]. The latter defines a bi-linear form, and we understand the strongly coupled system of nonlocal equations for the state u, L δ u = g, in the weak sense as the Euler-Lagrange equation for the corresponding quadratic potential energy defined on an appropriate space of functions with a displacement field on the nonlocal boundary.
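To make the action of L δ concrete, the following Python sketch evaluates a one-dimensional analogue of the operator by midpoint-rule quadrature; the triangular kernel, the constant material coefficient h ≡ 1, and the sample displacement are illustrative assumptions rather than the choices analyzed in this paper.

```python
import numpy as np

# One-dimensional midpoint-rule evaluation of
#   (L_delta u)(x) = ∫ H(x,y) k_delta(|x-y|) s[u](x,y) (y-x)/|x-y| dy,
# with s[u](x,y) = (u(x)-u(y))(x-y)/|x-y|^2. The triangular kernel, the constant
# material coefficient h = 1 and the sample displacement are illustrative
# assumptions, not the choices analyzed in the paper.
delta = 0.1
k = lambda r: np.where(r < delta, 1.0 - r / delta, 0.0)
H = lambda X, Y: np.ones_like(X)          # homogeneous material

x = np.linspace(0.0, 1.0, 201)            # grid covering Omega_delta
dx = x[1] - x[0]
u = np.sin(np.pi * x)                      # a sample displacement field

def L_delta(u, x):
    X, Y = np.meshgrid(x, x, indexing="ij")
    r = np.abs(X - Y)
    with np.errstate(divide="ignore", invalid="ignore"):
        s = (u[:, None] - u[None, :]) * (X - Y) / r**2   # linearized strain
        integrand = H(X, Y) * k(r) * s * (Y - X) / r
    integrand[r == 0.0] = 0.0             # remove the (integrable) diagonal
    return integrand.sum(axis=1) * dx     # midpoint quadrature in y

print("L_delta u at the domain midpoint:", L_delta(u, x)[len(x) // 2])
```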
Recall that H(x, y) = ½(h(x) + h(y)) and that there are positive constants h min and h max such that h min ≤ h(x) ≤ h max for a.e. x. With this assumption on H and for g ∈ L 2 (Ω; R n ), the energy in (2.2) is finite for u : Ω δ → R n measurable such that ∫ Ω δ ∫ Ω δ k δ (|x − y|) |x − y| −2 |Du(x, y)| 2 dy dx < ∞. We denote the space of such functions, intersected with L 2 (Ω δ ; R n ), by X(Ω δ ; R n ).
We also introduce the corresponding space of functions having a zero nonlocal boundary condition as X 0 (Ω δ ; R n ) = {u ∈ X(Ω δ ; R n ) | u = 0 on Ω δ \ Ω} . It is not difficult to show that the spaces X(Ω δ ; R n ) and X 0 (Ω δ ; R n ) are normed spaces with the norm One objective of this work is to make connections between the non-local optimal control problem and a local control problem as δ → 0 + . As we will show, the corresponding bi-linear form of interest is where ·, · F is the Fröbenius inner product on matrices: It turns out that the appropriate energy space for the resulting local problem is the classical space with the natural norm and corresponding semi-norm Now, to state the optimal control problem of interest precisely, we define the pertinent objective functional. As we mentioned earlier the functional will be taken to be the sum of two terms. The first is a quality functional Q : X ad ⊂ X(Ω δ ; R n ) → [0, ∞), that assigns a certain value Q(u) to each admissible displacement field depending on a certain criteria. For example, given a desired displacement state u des , we may want a state u that matches u des as closely as possible. In this case we wish to choose u that keeps the mismatch between u and u des to the minimum. The mismatch may be defined as a weighted squared errorˆΩ γ(x)|u(x)−u des (x)| 2 dx for some 0 ≤ γ ∈ L ∞ (Ω).
Notice that by choosing γ appropriately, we may seek to match the desired state only on a portion of the domain. More generally, we would want the quality functional to have the form where the integrand F : Ω × R n → R possesses the following properties: (1) For all v ∈ R n the mapping x → F (x, v) is measurable; (2) For all x ∈ Ω the mapping v → F (x, v) is continuous and convex; (3) There exist constant c 1 > 0 and l ∈ L 1 (Ω) for which for all x ∈ Ω and all v ∈ R n . The second part of the objective functional is a cost functional associated with the external force. We seek a forcing term g whose associated displacement has the desired quality while keeping the cost as minimal as possible. Typically, we take this cost functional, C(g), to be a weighted L 2 -norm of g of the form for some 0 < Γ ∈ L 1 (Ω). To that end, we take the admissible control space to be Z ad , a nonempty, closed, convex, and bounded subset of L 2 (Ω; R n ), and it takes the form . . , n}. Without loss of generality, we shall assume that 0 ∈ Z ad . In summary, the objective function we will be working with is of the form under the above assumptions on F and Γ.
2.2.
Problem set up. Now that we have specified the different function spaces as well as the bi-linear forms of interest, we are ready to precisely pose the optimal control problems. The first one is the optimal control problem of the coupled system of nonlocal equations. Given a boundary data u 0 ∈ X(Ω δ ; R n ), the problem is finding a pair (u δ , g δ ) ∈ X(Ω δ ; R n ) × Z ad such that the objective is minimized, where the minimization is over pairs (u δ , g δ ) ∈ X(Ω δ ; R n ) × Z ad that satisfy the state equation. Here we use the notation ·, · for the L 2 -inner product. We remark that without loss of generality we may assume that u 0 = 0 in the above formulation. Indeed, if u δ solves (2.12) and we set e δ := u δ − u 0 , then e δ ∈ X 0 (Ω δ ; R n ). After noting that the map v → B δ (u 0 , v) is a bounded linear functional on X 0 (Ω δ ; R n ), the right hand side of (2.13) can be viewed as defining a duality pairing between X 0 (Ω δ ; R n ) and its dual. The objective functional, as a function of (e δ , g δ ), will still have the same form as (2.10) with an integrand F̃(x, e) = F (x, e + u 0 (x)). Notice that F̃ has the exact same properties as F . With this simplification at hand, we summarize the problem as follows.
Problem 2.2 (Non-local continuous problem). Find a pair (u δ , g δ ) ∈ X 0 (Ω δ ; R n ) × Z ad such that I(u δ , g δ ) = min I(u δ , g δ ) (2.14) where the minimization is over pairs (u δ , g δ ) ∈ X 0 (Ω δ ; R n ) × Z ad that satisfy The effective admissible class of pairs for this nonlocal optimal control problem is We are also interested in the behavior of the above nonlocal optimal control problem in the limit of vanishing nonlocality as quantified by δ which turns out to be a local problem.
Problem 2.3 (Local continuous problem).
Find a pair (u, g) ∈ H 1 0 (Ω; R n ) × Z ad such that I(u, g) = min I(u, g), (2.17) where the minimization is over pairs (u, g) ∈ H 1 0 (Ω; R n ) × Z ad that satisfy B 0 (u, v) = g, v , ∀v ∈ H 1 0 (Ω; R n ). (2.18) As before, the effective admissible class of pairs for the control problem is We now introduce notation for our finite element scheme and discretized problems. The family of meshes {T h } h>0 discretizing Ω δ is assumed to be quasi-uniform and of size h. Let X h ⊂ X 0 (Ω δ ; R n ) denote the space of continuous, piecewise linear, functions subject to the mesh with zero non-local boundary data, i.e., 20) and X δ,h will denote this same function space, albeit with a different norm. For the local discrete problem, this space will denoted as X h and equipped with the norm (2.6), and for the non-local discrete problem, this space will instead be denoted as X δ,h and equipped with the norm (2.3). Similarly, let Z h denote the piecewise constant functions with respect to our mesh, i.e., Here and henceforth, we denote the space of vector-valued polynomials of degree m as (2.22) We will use X δ,h and X h , as appropriate, to discretize the state space, and Z h to discretize the control space. Now we may state our non-local and local discrete problems.
where the minimization is over pairs (u δ,h , g δ,h ) ∈ X δ,h × Z h that satisfy The effective admissible class of pairs for the above nonlocal discrete problem is Finally we state the local discrete optimal control problem.
Problem 2.5 (Local discrete problem). Find a pair (u h , g h ) ∈ X h × Z h such that
26)
where the minimization is over pairs The effective admissible class of pairs for this local problem is Note that in each problem, the state equation governs the relationship between the force [control] and the displacement [state] that must take place in any admissible solution.
PROPERTIES OF FUNCTION SPACES
In this section we state and prove some structural properties of the function spaces X(Ω δ ; R n ) and X 0 (Ω δ ; R n ) defined in the previous section. We begin noting that the function spaces are separable Hilbert spaces with the following inner product defined for u, v ∈ X(Ω δ ; R n ): that, under the working assumption on H, we have that which we also use as a seminorm. It then follows from [39] that if u is the zero extension of u to R n then there exists a constant C = C(δ, p) > 0 such that, for any open set B containing Ω δ , we have In particular, the constant is independent of B, and we may select B := R n , where we define We now seek to demonstrate a continuous embedding result for Sobolev spaces into the space X(Ω δ ; R n ). To accomplish this, we need a quantitative version of continuity in the L 2 -norm; a local, scalar-valued analogue is discussed and proven in [9].
Proof. We first prove the desired claim in the special case where v ∈ C ∞ (R n ; R n ). Fix ξ ∈ R n \{0}. Then by the Chain Rule, the Mean-Value Theorem for integrals, and the Cauchy-Schwarz Inequality, we havê where in the last step we have used invariance of the L 2 -norm under translations, demonstrating the inequality for v ∈ C ∞ (R n ; R n ). The general case for v ∈ H 1 (R n ; R n ) follows by density.
The estimate of Lemma 3.1 will now be used to prove a continuous embedding result.
Lemma 3.2 (Continuous embedding).
For all δ > 0, we have That is, H 1 0 (Ω; R n ) ֒→ X 0 (Ω δ ; R n ), and the constant is independent of δ. Proof. Since ∂Ω is Lipschitz, for any v ∈ H 1 0 (Ω; R n ) its extension by zero outside of Ω is in H 1 0 (R n , R n ) vanishing almost everywhere outside of Ω. Now for any δ > 0, we have where we have used that supp(k δ ) ⊂ B(0, δ). Now our expression is in a form on which we can use Lemma 3.1 on the inner integral to conclude that 6) which completes the proof.
Next we show that compactly supported smooth functions are dense in X 0 (Ω δ ; R n ).
Let u ∈ X(R n ; R n ), and we claim that To this end, we compute By the Dominated Convergence Theorem we deduce Now, to handle the first integral in (3.8) we define By using the conditions on k δ and the Lipschitz continuity of ψ, the sequence |K R (x)| is uniformly bounded in R and in x. Further, K R (x) → 0 pointwise on R n as R → ∞, so we may, once again, use the Dominated Convergence Theorem to conclude that proving (3.7). Finally, with one more application of the Dominated Convergence Theorem, we see that 12) and this lets us complete the proof, with {uϕ R } ∞ R=1 as our approximating family of functions in X(R n ; R n ) with bounded support.
The following result is analogous to [ Lemma 3.4 (Mollification). Let u ∈ X(R n ; R n ) be a vector field that vanishes outside a compact subset of R n . For ǫ > 0 denote by η ǫ a standard mollifier, and u ǫ = u * η ǫ . Then, for ǫ sufficiently small, we have u ǫ ∈ X(R n ; R n ). Moreover, Since the mollifier η ǫ is even, we may use Hölder's Inequality, Jensen's Inequality, and the identitŷ to obtain the estimate and these definitions in turn imply that The proof will be complete once we show that U ǫ → U in L 2 (R n × R n ) as ǫ → 0 + . As is standard for mollifiers, u ǫ → u strongly in L 2 (R n ; R n ), and a.e. pointwise in R n , both as ǫ → 0 + . Thus by Fatou's Lemma, we get the convergence while the reverse inequality follows from sending ǫ → 0 + in (3.15). This combined with showing is enough to show the strong convergence in L 2 (R n × R n ) that we seek, so we focus on proving this weak convergence. Let V ∈ L 2 (R n × R n ) be arbitrary and define the function With this definition in mind, the Dominated Convergence Theorem tells us that (3.20) Since for all x, y ∈ R n , we can see that these functions have bounded support, and thus belong to L 2 (R n ; R n ) for all j ∈ N + . Then due to the a.e. convergence u ǫ → u, we have (3.21) which holds for all j ∈ N + . Taking a limit supremum in ǫ, the convergence in (3.21), and applying Hölder inequality gives and from this it follows that U ǫ ⇀ U in L 2 (R n × R n ), completing the proof.
We can now combine Lemma 3.3 and Lemma 3.4 to immediately obtain the density of C ∞ 0 (R n ; R n ) in X(R n ; R n ), which we state below as a corollary, see [29,Remark 4.2] and [23] for the scalar case.
For well-posedness of the state system (specifically, for stability) we shall need a nonlocal Poincaré-type inequality. In addition, to understand the behavior of our system in the limit as δ → 0 + , it is essential that the constant in this inequality is independent of δ. The following result was proven in [35], but various versions of this inequality are proved in [5, 6, 11, 21, 20, 24, 36, ?, 42, 22]. Proposition 3.6 (Nonlocal Poincaré). There exists a δ 0 > 0 and a constant C(δ 0 ) > 0 such that for all δ ∈ (0, δ 0 ] and u ∈ X 0 (Ω δ ; R n ), we have With the aid of above Poincaré-type inequality we may apply Lax-Milgram to deduce the unique solvability of the state equations of the nonlocal optimal control problem stated in the previous section. We summarize this with the following corollary. From standard linear theory, we know that the solution operator of the state equations is linear and continuous. One important fact we need to demonstrate the solvability of optimal control problems is the compactness of this solution operator. While for the discrete problems this question is trivial, for the continuous problems it needs a resolution. The compactness of the solution operator is related to the compactness of the image space which, for (2.15), is X 0 (Ω δ ; R n ); whereas for (2.18) is H 1 0 (Ω; R n ). The compactness of the latter in L 2 (Ω; R n ) is standard. Below we build a framework needed to ultimately prove the compact embedding os X 0 (Ω δ ; R n ) into L 2 (Ω δ ; R n ). This is much akin to the compact embedding results for fractional Sobolev spaces; see, for instance, [18,19]. This will largely be based on the results of [29], see also [25], which we extend to vector-valued functions using a weaker norm that only involves a projected difference quotient. To this end we introduce a definition.
The following proposition demonstrates that it suffices to show X(R n ; R n ) ⊂ L 2 (R n ; R n ) is a locally compact embedding. Proposition 3.9 (Compactness). If X(R n ; R n ) ⊂ L 2 (R n ; R n ) is a locally compact embedding, then for every bounded and open Ω ⊂ R n , and every δ > 0, the embedding X 0 (Ω δ ; R n ) ⊂ L 2 (Ω; R n ) is compact.
Proof. As we remarked earlier, for every u ∈ X 0 (Ω δ ; R n ), its extension by zero outside of Ω δ belongs to X(R n ; R n ). Moreover, [u] X(Ω δ ;R n ) = [u] X(R n ;R n ) . Now if the inclusion i : X(R n ; R n ) ⊂ L 2 (R n ; R n ) is locally compact, then in Definition 3.8, we can set K := Ω to conclude that The result now follows easily.
We now prove the local compact embedding of X(R n ; R n ) in the remaining portion of this section. We follow the argument in [29].
Then the corresponding convolution operator T W : Proof. The proof follows from [29, Lemma 3.1] after noting that for i = 1, 2, . . . , n, [T W u] i is a finite sum convolution operators which are locally compact.
Proof. For τ > 0, let j τ (ξ) := k δ (ξ) |ξ| 2 1 R n \B(0,τ ) (ξ). Then j τ ∈ L 1 (R n ) and that, by assumption on k δ , we have that j τ L 1 (R n ) → ∞ as τ → 0. We now introduce the matrix-valued function where c n is a normalizing constant that depends only on n so that R n J τ (ξ)dξ = I n , the identity matrix. (3.27) Let u ∈ X(R n ; R n ), and we claim that (3.28) We prove this via a direct calculation: rewrite u − T jτ u as Now, we calculate the L 2 (R n ; R n )-norm, and estimate it with the Cauchy-Schwarz Inequality and the pointwise inequality j τ (ξ) ≤ k δ (ξ) |ξ| 2 : [u] 2 X(R n ;R n ) . (3.30) Taking square roots in (3.30) immediately yields (3.28). Now let M ⊂ X(R n ; R n ) be a bounded set, and K ⊂ R n be compact; our proof will be complete once we show that R K (M ) ⊂ L 2 (R n ; R n ) is relatively compact. To this end, let C := sup u∈M u X(R n ;R n ) and ǫ > 0. Since k δ (ξ) |ξ| 2 / ∈ L 1 (R n ), we may take τ > 0 to be sufficiently small so that j τ L 1 (R n ) ≥ C 2 ǫ 2 . By Lemma 3.10, the set M := [R K T jτ ](M ) is relatively compact in L 2 (R n ; R n ). Thus we may use the estimate (3.28) to obtain, for any u ∈ M , From this we conclude that R K (M ) is contained within an ǫ-neighborhood of M , and which is relatively compact in L 2 (R n ; R n ) (since j τ ∈ L 1 (R n )). Thus, R K (M ) is totally bounded in L 2 (R n ; R n ), which is a sufficient condition for the local compact embedding to hold.
WELL-POSEDNESS: STATE SYSTEM AND MINIMIZATION
In this section we show existence and uniqueness of solutions for each one of the optimal control problems introduced in Section 2. The approach we use is a reduced formulation where the constrained optimization is reformulated as an unconstrained optimization of the control via the solution operator of the state equation. To facilitate that we begin by proving an abstract well-posedness result that appears in some form in [27,55]; we provide a proof for the sake of completeness.
Suppose also that S : L 2 (Ω; R n ) → Y is a compact operator, and G : Y → R is lower semicontinuous. For a given λ ≥ 0 and Z ad a nonempty, closed, bounded, and convex subset of for some non-negative Γ ∈ L 1 (Ω). Then, the optimization problem has a solution g. Furthermore, if λ > 0, S is linear, and G is convex, then (4.2) has a unique minimizer. Alternatively, if λ = 0 and G is strictly convex on its domain (with S still being linear), then the minimizer is unique.
Proof. We use the direct method of calculus of variations to show that (4.2) has a solution. First, we note that j is bounded from below. Indeed, since the second term is nonnegative for all g, it suffices to demonstrate that the first term is bounded from below. To that end, assume otherwise. of {w m } ∞ n=1 converges weakly to some w ∈ Z ad . Since S is a compact operator, Sw m k → Sw as k → ∞ strongly in Y . Since G is lower semi-continuous, we have which poses a contradiction, since G does not assume the value −∞. We henceforth denote j 0 := inf g∈Z ad j(g), and the remainder of the existence part of the proof is comprised of finding g ∈ Z ad such that j( g) = j 0 . To this end, we identify a sequence {g m } ∞ m=1 ⊂ Z ad such that lim m→∞ j(g m ) = j 0 as m → ∞. Recalling again [55, Theorem 2.11] we obtain that converges weakly in L 2 (Ω; R n ). By a density argument, it is easy to show that the weak limit has to be √ Γg. Since S is compact and G is lower semi-continuous, we have the inequality chain (4.5) Since g ∈ Z ad , it follows that j( g) = j 0 , and we have found a minimizer. The proof of uniqueness under the given additional conditions is standard since j will automatically become strictly convex. Notice that in all cases, the solution space Y is compactly embedded into L 2 (Ω; R n ). For the local problems, the embedding H 1 0 (Ω; R n ) ⋐ L 2 (Ω; R n ) is standard, while for the non-local problems we invoke Theorem 3.11 and Proposition 3.9. We thus have that the solution mapping S : L 2 (Ω; R n ) → Y is compact, and then we may write the reduced cost functionals for our problems abstractly as Note that this functional satisfies all the conditions of Theorem 4.1, which guarantees existence and uniqueness of a minimizer.
ANALYSIS IN VANISHING HORIZON PARAMETER
Having shown that, for every horizon δ ≥ 0, the nonlocal optimal control problem 2.2 has a unique solution (u δ , g δ ), we now study the behavior of the pair as δ → 0. Notice that u δ minimizes the potential energy functional over X 0 (Ω δ ; R n ). We begin with the following convergence result.
Proof. Theorem 4.1 gives existence and uniqueness of optimal pairs that minimize the energy W δ defined in (5.1). Moreover, since 0 is an admissible control, we have that W δ (u δ ) ≤ 0, and so, after rearranging we get The Cauchy-Schwarz Inequality, in conjunction with the nonlocal Poincaré inequality (3.24) and the Triangle inequality, gives us Notice that the constant in this estimate, owing to (3.24), is independent of δ. Furthermore, since {g δ } δ>0 ⊂ Z ad , it is norm bounded (and therefore has a weak limit, up to a sub-sequence), and as a consequence sup Now since u δ ∈ X 0 (Ω δ ; R n ), after extending by zero to Ω 1 (with δ = 1) we have that From this, we may use [37,Proposition 4.2] or [33, Theorem 2.5] to conclude that the {u δ } δ>0 is precompact in L 2 (Ω; R n ) and converges strongly in L 2 (Ω; R n ) to some u ∈ H 1 0 (Ω; R n ) (up to a sub-sequence).
The main question we would like to address in the remaining is whether the limiting pair (u, g) solves a corresponding limiting optimal problem. The limiting behavior of the minimizers is closely related to the variational convergence of the above parametrized energy functionals. The main tool we shall use is Γ-convergence (see [10,13,16,43] for more on properties of Γconvergence; [6, ?, 37, 38, 43] for examples of proofs of Γ-convergence for other peridynamics models. For convenience, we recall its definition here. Definition 5.2 (Γ-convergence). We say that the sequence E δ : GC1 The liminf property: Assume u δ → u strongly in L 2 (Ω; R n ). Then we have the Fatou-type inequality GC2 Recovery sequence property: For each u ∈ L 2 (Ω; R n ), there exists a sequence {u δ } δ>0 where u δ → u strongly in L 2 (Ω; R n ) and 5.1. Vanishing horizon parameter for continuous problem. We will be working on the extended linear peridynamic energy functional we now define. Let E δ : and +∞ otherwise. Similarly, define a limiting energy E 0 : 8) and +∞ otherwise. Note that since, for all δ > 0, our energy E δ is quadratic we have for all u, v ∈ X(Ω δ ; R n ).
Lemma 5.3 (Nonlocal to local).
Suppose that A ⋐ Ω and w ∈ C 2 (A, R n ). Then for any h ∈ L ∞ (Ω), we have that The proof of this can be found in [20,35,37] in some form or another.
We now state the result on the variational convergence of the parameterized energies E δ . Proof. We verify each of the conditions that comprise this definition.
Proof of GC1: Let u ∈ L 2 (Ω; R n ) be arbitrary, and {u δ } δ>0 ⊂ L 2 (Ω; R n ) be such that u δ → u strongly in L 2 (Ω; R n ); we may assume without loss of generality that lim inf δ→0 + E δ (u δ ) < ∞. That is, up to a sub-sequence we may assume that E δ (u δ ) < ∞ and using the positive lower bound on the coefficient H we have that sup δ>0ˆΩˆΩ k δ (x − y) |x − y| 2 |Du δ (x, y)| 2 dxdy < ∞. (5.10) Arguing in the same way as in the proof of [33, Theorem 2.5], we then have u ∈ H 1 (Ω; R n ), and that u δ → u strongly in L 2 (Ω; R n ).
From here we will look to find a variant of [37,Equation 37], largely repeating the lower semicontinuity part of the proof of [38,Theorem 4.4]. We first assume that h is the constant function h = 1 and prove that for any A ⋐ Ω open, we have the inequality 1 n(n + 2)ˆA 2 Sym(∇u(x)) 2 Let 0 < ǫ < dist(A, ∂Ω), and let η ∈ C ∞ 0 (B(0, 1)) be a smooth cutoff function. Define η ǫ (z) := ǫ −n η z ǫ , and define w ǫ,δ := η ǫ * u δ on A, which is in C 2 (Ā; R n ). Via a direct calculation coupled with application of Jensen's inequality, we havê Our next step will be to send δ → 0 + , leaving ǫ > 0 fixed for now. The right hand side of (5.12) is bounded by lim inf δ→0 + E δ (u δ ) (with h = 1). We compute the limit of the left hand side.
Set w ǫ := η ǫ * u. Then we observe that w ǫ,δ → w ǫ as δ → 0 + in C 1 (A; R n ) due to u δ → u in L 2 (Ω; R n ) (where ǫ > 0 is taken to be fixed for now). We use this and Lemma 5.3 to obtain that 13) The desired inequality (5.11) now follows from taking the limit in ǫ in (5.13) and combining it with (5.12).
Next we assume that h is a simple function
where we use the sub-additivity lim inf a τ + lim inf b τ ≤ lim inf(a τ + b τ ). Finally, the case of general positive h ∈ L ∞ (Ω), we select an increasing sequence of step functions s j (x), 0 ≤ s j ≤ s j + 1 ≤ h(x) that converges to h uniformly. The result then follows from direct application of the Monotone Convergence Theorem. Proof of GC2: Let u ∈ L 2 (Ω; R n ). We may assume that u ∈ H 1 (Ω; R n ). For the recovery sequence, we take u δ :=ũ ∈ H 1 (R n , R n ), which is the extension of u to R n with compact support say in Ω 1 (with δ = 1). Take a sequence {v j } ∞ j=1 ⊂ C 2 (Ω 1 ) such that v j →ũ n H 1 (Ω 1 ) as j → ∞. then using (5.9), we see that for a C > 0 That means, E δ (v j ) → E δ (ũ) as j → ∞, uniformly in δ. Using the same proof as Lemma 5.3, we see that for each j = 1, 2, . . . , lim Taking the limit in j now we have that where in the second equality we used the uniform convergence in δ.
Remark 5.5. We may follow the above approach as well as [10,Remark 1.7] to conclude that the family of energies {W δ } δ>0 , defined in (5.1) (finite on X 0 (Ω; R n )), also Γ-converges in the L 2 -topology to where g δ ⇀ g weakly in L 2 (Ω; R n ) as δ → 0 + . With Γ-convergence at hand, we recall that [16,Corollary 7.20] states if {u δ } δ>0 is a family of minimizers for {W δ } δ>0 over L 2 (Ω; R n ), and u is a limit point of this family, then u is a minimizer of W 0 on L 2 (Ω; R n ) (see also [13, Theorem 2.1]). By our previous results, this implies u ∈ H 1 0 (Ω; R n ), and moreover W 0 (u) = lim δ→0 + W δ (u δ ). (5.14) Finally, we identify what conditions to impose to identify the solution to the local optimal control problem via a limiting process.
Proof. Lemma 5.1 gives the existence of such pair (u, g) ∈ A loc . We now need to show that this pair minimizes the reduced objective functional in Problem 2.3. Let (v, f ) ∈ A loc be arbitrary, and consider, for δ > 0 the sequence (v δ , f ) ∈ A δ , i.e., of solutions to the nonlocal boundary value problem (2.15). We can repeat our argument from Lemma 5.1 with g δ = g = f , and see that v δ → v strongly in L 2 (Ω; R n ). Then, by the Dominated Convergence Theorem, we have that Now we observe that lim was chosen as the minimizers for the objective functional (2.10). Next, notice that lim to Fatou's Lemma, where we recall that strong L 2 (Ω; R n ) convergence of u δ → u implies a.e. convergence in Ω. In summary, the inequality chain concludes the proof.
5.2.
Vanishing horizon parameter for discrete problem. In order to establish the asymptotic compatibility in Section 7, one must also consider the Γ-convergence of the discrete problem. The course of proof is similar to that of Γ-convergence for the continuous problem, but one can use the fact that X h ⊂ W 1,∞ 0 (Ω; R n ) ⊂ H 1 0 (Ω; R n ) to avoid the use of mollifiers. For these reasons, we merely state the results.
We also present the discrete analogue to Theorem 5.6.
h is the family of solutions to the non-local discrete problem 2.4. Then, there is (u h , g h ) ∈ A loc h such that u δ,h → u h in L 2 (Ω; R n ) and g δ,h ⇀ g δ in L 2 (Ω; R n ). Moreover, (u h , g h ) solves the local discrete optimal control Problem 2.5.
FIRST ORDER OPTIMALITY AND DISCRETIZATION
Let us now turn our attention to first-order optimality conditions, which are the gateway to discretizing the nonlocal optimal control problem. From here onward, we assume that our integrand F (first introduced in (2.10)) is continuously Gâteaux-differentiable in the second argument. The first Gâteaux derivative will be denoted as F u . We will also denote by S δ the solution operator corresponding to the state system (2.15), and by S * δ the adjoint of S δ in the L 2 -sense. Due to Corollary 4.2, the operator S δ is well defined. Using the reduced objective functional (4.6), we recall that [55,Lemma 2.21] shows the first order necessary condition j ′ (g δ )(f − g δ ) ≥ 0 for all f ∈ Z ad , (6.1), where j ′ represents the derivative of j in some appropriate sense. This functional has two terms that need to be differentiated: for the first term, we use the Fréchet differentiability of F and the Chain Rule; the derivative of the second term comes from the Fréchet derivative of ‖ · ‖ 2 L 2 Γ (Ω;R n ) (the weighted Γ norm). See [17,Lemma 3.5] for a similar calculation corresponding to the fractional Laplacian. Inequality (6.1) can now be rewritten as (6.2), and it is standard to introduce new notation to rewrite the above as the system (6.3). Note that S δ is a self-adjoint operator, so S * δ F u (·, u δ ) = S δ F u (·, u δ ), and so p δ ∈ X 0 (Ω δ ; R n ). Furthermore, as a consequence of these conditions, in the event that Γ = 1, we obtain that g δ is given, via formula (6.4), in terms of the L 2 -projection of the adjoint p δ onto the control space Z ad ,
where P E denotes the L 2 -projection onto the set E. Notice that, owing to the assumption that the objective functional is strictly convex, these first order necessary conditions are also sufficient. We summarize the result as follows.
6.1. Error analysis for nonlocal problems. With the aid of the optimality system, we are able to perform an error analysis, which we now begin. From here on we assume, for simplicity, that With this at hand, the optimality conditions for the non-local discrete problem read: where S δ,h is the discrete solution operator, and S * δ,h is its discrete L 2 adjoint. Note that S δ,h is a self-adjoint operator, so S * δ,h F u (·, u δ,h ) = S δ,h F u (·, u δ,h ). Also as with the non-local continuous optimality conditions, it follows that g δ,h ( To ease the error analysis, define the intermediary functions u δ , p δ ∈ X 0 (Ω δ ; R n ) such that (6.7) The existence and uniqueness of these functions follows from the Lax-Milgram Theorem. More importantly, we observe that the optimal discrete state and adjoint variables are nothing but the Galerkin approximations to u δ , p δ , respectively. From this we immediately obtain, using Céa's Lemma, that We now prove error estimates for the state and adjoint.
Theorem 6.2 (State and adjoint error estimates). Suppose that (u δ,h , g δ,h ) is the solution to Problem 2.4; p δ,h solves the discrete adjoint equation in (6.5) given u δ,h ; (u δ , g δ ) is the solution to Problem 2.2; and p δ solves the continuous adjoint equation in (6.3) corresponding to the state u δ . Then we have these error estimates for the states, and the adjoints: Proof. We begin by proving (6.9). Substitute v δ := u δ − u δ in (2.15) and (6.6), and subtract those two equations to obtain Using the definition of H(x, y), Hölder's inequality, and (3.6) gives This, combined with (6.8) then yields the result. The proof of (6.10) uses the same procedure, and is thus omitted.
At this stage we must observe that the infima in (6.9) and (6.10) tend to zero as h → 0 + . This is because of density; if a rate of convergence in these terms is desired, then further regularity of u δ and p δ must be studied. For some kernels this could be done, for instance, by exploiting that u δ,h belongs to a space that is strictly smaller than the dual of X 0 (Ω δ ; R n ); see, for instance, [1,26,44]. Due to the generality we place on our kernel, we do not pursue this. It remains to estimate the difference between continuous and discrete controls, which will now be our focus.
While in general our controls only belong to L 2 (Ω; R n ), in the event we have additional regularity, we can quantify our forthcoming estimates even more. Indeed, in the local case, the projection formula g(x) = − 1 λ P Z ad (p(x)) combined with the fact that p ∈ H 1 0 (Ω; R n ) imply further regularity on the control (namely, that g ∈ H 1 (Ω; R n )). The following lemma provides a sufficient condition on the kernel for this to also be the case for nonlocal problems.
In the following result, we require s ≠ 1/2 to be able to use the Hardy-type inequality [34,Theorem 2.3]. This is essentially a technicality. Lemma 6.3 (Regularity of control for fractional-type kernels). Suppose that in the definition of Z ad , given in (2.9), the functions a and b are constants. Suppose also that, in addition to the contents of Assumption 2.1, we have that k δ (ξ) |ξ| 2 ∼ 1/|ξ| n+2s (6.13) holds for all ξ ∈ B δ (0), for some s ≠ 1/2. Then, necessarily, g δ ∈ H s (Ω; R n ).
Proof. We introduce some notation specifically for this proof. As seen in [39], we denote by ∥u∥ H s (Ω;R n ) the fractional Sobolev norm on vector fields, and denote by H s (Ω; R n ) the space of vector fields with finite fractional Sobolev norm. It has been shown in [39,Theorem 1.1] that the space χ s (Ω; R n ) of vector fields u with ∫ Ω ∫ Ω |Du(x, y)| 2 /|x − y| n+2s dy dx < ∞ coincides with H s (Ω; R n ) with comparable norms. Now since p δ ∈ X 0 (Ω δ ; R n ) and k δ satisfies (6.13), via direct calculation we have that p δ ∈ χ s (Ω; R n ), and so it is in H s (Ω; R n ). To finish the proof, we recall the component-wise, pointwise formula P Z ad (p δ ) = max{a, min{p δ , b}}, (6.14) proven in [55,Theorem 2.28], where we use the assumption that the boxing functions in Z ad are constants. It is now clear that P Z ad (p δ ) is in H s (Ω; R n ) from directly estimating the max-min expression. Moreover, ∥P Z ad (p δ )∥ H s (Ω;R n ) ≲ ∥p δ ∥ H s (Ω;R n ) . The conclusion for g δ follows from the formula (6.4).
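As an aside, the box structure of Z ad makes the projection in (6.14) a componentwise clamp, and the optimality system lends itself to a simple projected-gradient iteration. The sketch below is generic and not taken from this paper: a random symmetric positive definite matrix stands in for a discretized (nonlocal) operator, the integrand is the quadratic tracking functional F(x, u) = ½|u − u_d|², and the step size is tuned only for this toy problem.

```python
import numpy as np

# Generic sketch (not this paper's scheme): projected gradient for the reduced problem
#   min_g 0.5*||S g - u_d||^2 + 0.5*lam*||g||^2,  with a <= g <= b componentwise,
# where S = A^{-1} stands in for a discretized, self-adjoint solution operator.
rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
A = M @ M.T / n + np.eye(n)             # SPD stand-in for a stiffness matrix
u_d = rng.standard_normal(n)            # desired state (made-up data)
lam, a, b = 0.1, -1.0, 1.0
tau = 0.5                               # step size chosen for this toy problem only

def P_box(v):                           # componentwise projection onto Z_ad = [a, b]^n,
    return np.clip(v, a, b)             # i.e. the max/min clamp of (6.14)

g = np.zeros(n)
for _ in range(500):
    u = np.linalg.solve(A, g)           # state equation   A u = g
    p = np.linalg.solve(A, u - u_d)     # adjoint equation A p = u - u_d  (F_u = u - u_d here)
    g = P_box(g - tau * (p + lam * g))  # projected gradient step on p + lam*g

u = np.linalg.solve(A, g)
p = np.linalg.solve(A, u - u_d)
# At convergence the control satisfies the projection formula g = P_box(-p/lam),
# the discrete counterpart of the control formula for the adjoint stated in the text.
print("optimality residual:", np.linalg.norm(g - P_box(-p / lam)))
```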
Remark 6.4. An alternative to estimating g δ H s (Ω;R n ) directly is to use interpolation theory; see [32,Chapter 16] and [52,Chapter 25]. To see this, it suffices to recall that the H s (Ω; R n ) space is an intermediate space between H 1 (Ω; R n ) and L 2 (Ω; R n ).
Having shown that it is possible for the control to lie in a smoother space than L 2 (Ω; R n ), we can proceed with the error analysis. Again, due to the generality of the kernel we are not very explicit in this. Instead, we introduce ω : where Π 0 : L 2 (Ω; R n ) → Z h denotes the L 2 -projection onto Z h . Clearly, ω depends on the spatial dimension n, on the embedding number (or Gelfand width) of the embedding X 0 (Ω δ ; R n ) ⊂ L 2 (Ω δ ; R n ), and on the properties of P Z ad . In the setting of Lemma 6.3 a proper rate of approximation can be established.
Lemma 6.5 (Approximation with smoothness). Assume that k δ satisfies (6.13) on B δ (0) for some We can now obtain an error estimate for the control. In the following result the idea is that, once further regularity of the state/adjoint is known (which can be done for more specific kernels), and a bound on ω like the one in Lemma 6.5 is obtained, the right hand side in the estimate below can be bounded by a power of h. Theorem 6.6 (Convergence of controls). Assume that g δ is the optimal control associated with Problem 2.2, and g δ,h is the discrete optimal control associated with Problem 2.4. Then we have the estimate Proof. We follow a grosso modo the argument used to prove [17,Theorem 4.7]. We let q δ,h ∈ X δ,h be the Galerkin approximation to p δ , i.e., the solution of Similarly, U δ,h ∈ X δ,h is the Galerkin approximation to u δ : Set γ z := g δ,h in (6.3) and γ h := Π 0 g in (6.5). Adding the ensuing inequalities we obtain where I 1 := p δ − p δ,h , g δ,h − g δ and I 2 := p δ,h + λg δ,h , Π 0 g δ − g δ . Now, we write I 1 as We now claim that I 1,3 ≤ 0. Indeed, recall that p δ,h denotes the optimal adjoint state for the discrete problem, and hence it satisfies Subtracting (6.22) and (6.25) yields Set v δ,h := r δ,h − p δ,h in (6.27) to obtain Similarly, in (6.26), we set v δ,h := U δ,h − u δ,h and obtain Since the left-hand sides of (6.28) and (6.29) are identical, it follows that I 1,3 ≤ 0. By using Cauchy-Schwarz and Céa's lemma repeatedly, we also obtain the following estimates for I 1,1 and I 1,2 : Combine (6.30) and (6.31), along with the fact that I 1,3 ≤ 0, to see that Finally, by two applications of Young's Inequality, for some constant C > 0 independent of δ and h. Let us now turn our attention to estimating I 2 = p h + λg δ,h , Π 0 g δ − g δ . We write it as 34) and in turn look to control each I 2,k , k = 1, . . . , 5. Starting with I 2,1 , we write it as Since g δ ∈ P Z ad X(Ω; R n ) and p δ | Ω ∈ X(Ω; R n ), we use (6.15) and Cauchy-Schwarz to obtain As for I 2,2 , we again utilize Cauchy-Schwarz and (6.15): for some constant C > 0.
To handle I 2,3 we subtract (6.22) from (6.25), set v δ,h := p δ,h − r δ,h in the result, and obtain Applying (6.15) with w := p δ,h − r δ,h , and combining the result with (6.38) gives Use Young's Inequality and Céa's Lemma on (6.40) to obtain To control I 2,4 , we use Cauchy-Schwarz and (6.15): Then by a standard Céa's lemma argument, Finally, for I 2,5 we use (6.15) and Céa's lemma again to obtain After using Young's Inequality, and combining the estimates for I 1 and I 2 , we conclude We combine and summarize the state and adjoint error estimates as follows.
Corollary 6.7 (Error estimates). In the setting of Theorems 6.2 and 6.6, Furthermore, if the conditions of Lemma 6.5 are satisfied, then ω(h) ∼ h s .
6.2.
Error analysis for local problems. The analogue of (6.3) for the local problem is p + λg, γ − g ≥ 0, ∀γ ∈ Z ad p = S * u, u = Sg, (6.48) where by S : Z ad → H 1 0 (Ω; R n ) we denote the solution operator to problem 2.18. In a similar manner, the analogue to (6.5) for the local discrete problem is where S h : Z h → X h denotes the discrete solution operator. The analogue of (6.4) for the local discrete problem is Define the intermediary functions u, p ∈ H 1 0 (Ω δ ; R n ) such that B 0 ( u, v) = g h , v ∀v ∈ H 1 0 (Ω; R n ); (6.51) Again, these functions exist and are uniquely defined thanks to Lax-Milgram. Much like for the non-local problem, we have state and control error estimates as h → 0 + , and the proofs are virtually identical to those already presented. However, for the local problem, since g ∈ H 1 (Ω; R n ) we may employ the estimate in place of (6.15). We may also prove, in the same manner as in Section 6.1, error estimates for the discrete state, adjoint, and control. In particular, suppose (u, g) denotes the solution to Problem 2.3, while (u h , g h ) denotes the solution to the discrete Problem 2.5. Assume also that p denotes the solution to the adjoint problem (6.49), while p h solves the discrete adjoint problem 6.49. If we further denote g as the optimal control for Problem 2.3, and g h as the discrete optimal control for Problem 2.5, then we have the estimates It then follows that u h → u and p h → p in H 1 (Ω; R n ) as h → 0 + .
7. ASYMPTOTIC COMPATIBILITY
In [53] the concept of asymptotically compatible schemes for parameter-dependent linear problems was introduced. The goal of asymptotic compatibility is to guarantee that we reach the same local, continuous solution regardless of whether we send δ and h to 0 separately (in either order) or simultaneously. This broad idea has been implemented extensively in several problems, see [54,11,31,30]. Our main goal in this section is to extend this notion to nonlocal optimal control problems, and to show that our ensuing numerical schemes are indeed asymptotically compatible. We first provide a definition of asymptotic compatibility of a scheme to the optimal control problems that slightly extends [53, Definition 2.8].
Definition 7.1 (Asymptotic compatibility). We say that the family of solutions {(u δ,h , g δ,h )} h>0,δ>0 to Problem 2.4 is asymptotically compatible in δ, h > 0 if for any sequences with δ k , h k → 0, we have that u δ k ,h k → u strongly in L 2 (Ω; R n ) and g δ k ,h k ⇀ g weakly in L 2 (Ω; R n ). Here (u, g) ∈ H 1 0 (Ω; R n ) × Z ad denotes the optimal solution for Problem (2.3).
The idea behind asymptotic compatibility can be summarized by saying that the diagram in Figure 1 commutes. The asymptotic compatibility theory for linear problems developed in [53] hinges on several structural properties for the operators at hand. Since they will be also useful in our setting, we quickly verify them here as well.
Proof. The fact that finite element spaces of piecewise linear functions are asymptotically dense in H 1 0 (Ω; R n ) is well-known, thus verifying AC1. For any k ∈ N, the bound stated in AC2 follows from a standard a priori estimate and the fact that Z ad is bounded in L 2 (Ω; R n ).
The structural conditions given above guarantee the asymptotic compatibility for linear problems. Our extension regarding the asymptotic compatibility of our schemes in the setting of optimal control problems is the content of the next result. Proof. In this proof we denote {(u k , g k )} ∞ k=1 := (u δ k ,h k , g δ k ,h k ) ∞ k=1 , which is the sequence of pairs solving Problem 2.4. We also let {p k } ∞ k=1 := {p δ k ,h k } ∞ k=1 denote the sequence of solutions to the adjoint problem included in (6.5). We consider an arbitrary, non-relabeled sub-sequence of the triples {(u k , g k , p k )} ∞ k=1 , and show that it has a further sub-sequence which always converges to the same limit point. Moreover, this limit solves (6.48) and, since this uniquely characterizes the solution to Problem 2.3, asymptotic compatibility will follow.
Since {g k } ∞ k=1 ⊂ Z ad , this sequence is bounded in L 2 (Ω; R n ), and there exists a sub-sequence and a function g * so that g k ⇀ g * weakly in L 2 (Ω; R n ). Meanwhile, due to item AC2 of Proposition 7.2, the sequence {u k } ∞ k=1 is uniformly bounded, and upon taking a further, non-relabeled, sub-sequence, there exists a limit point u * ∈ H 1 0 (Ω; R n ) so that u k → u * strongly in L 2 (Ω; R n ). Since {(u k , g k )} ∞ k=1 are pairs satisfying Problem 2.4, we have for all v k ∈ X δ k ,h k that B δ k (u k , v k ) = g k , v k . (7.3) Let ϕ ∈ C ∞ 0 (Ω; R n ) be arbitrary, and denote by I k the Lagrange nodal interpolant with respect to the mesh of size h k . If w k := I k ϕ ∈ X δ k ,h k , then w k → ϕ in W 1,∞ (Ω; R n ) as k → ∞. This convergence is sufficiently strong to ensure lim k→∞ g k , w k = g * , ϕ . (7.4) Now, utilizing the definition (7.1), we write B δ k (u k , w k ) = A δ k u k , w k X 0 (Ω δ k ;R n ) * ,X 0 (Ω δ k ;R n ) = A δ k ϕ, u k X 0 (Ω δ k ;R n ) * ,X 0 (Ω δ k ;R n ) + A δ k (w k − ϕ), u k X 0 (Ω δ k ;R n ) * ,X 0 (Ω δ k ;R n ) =: I k + II k .
(7.5)
Due to item AC3 of Proposition 7.2, necessarily A δ k ϕ ∈ L 2 (Ω; R n ), and by item AC4, we have that A δ k ϕ → A 0 ϕ strongly in L 2 (Ω; R n ). Due to this and u k → u * strongly in L 2 (Ω; R n ), the term I behaves as follows: lim k→∞ A δ k ϕ, u k X 0 (Ω δ k ;R n ) * ,X 0 (Ω δ k ;R n ) = A 0 ϕ, u * H −1 (Ω;R n ),H 1 0 (Ω;R n ) . (7.6) As for II k , we may use the definition (7.2) and that u k is the solution to (2.15), along with Hölder to deduce II k = B δ k (u k , w k − ϕ) u k X(Ω δ k ;R n ) w k − ϕ X(Ω δ k ;R n ) . (7.7) Due to item AC2 the first factor is uniformly bounded in k, whereas the second factor is controlled up to a constant (uniform in k) by w k − ϕ H 1 (Ω;R n ) , due to Lemma 3.2. This factor is further bounded from above by w k − ϕ W 1,∞ (Ω;R n ) , and then the convergence of w k → ϕ in W 1,∞ (Ω; R n ) tells us that II k → 0 as k → ∞. The result is that B 0 (u * , ϕ) = g * , ϕ (7.8) for all ϕ ∈ C ∞ 0 (Ω; R n ); by density, we may then extend (7.8) to all ϕ ∈ H 1 0 (Ω; R n ). Repeating the analysis just used for the sequence of states {u k } ∞ k=1 , we identify a p * ∈ H 1 0 (Ω; R n ) so that p k → p * strongly in L 2 (Ω; R n ), and B 0 (ϕ, p * ) = u * , ϕ (7.9) for all ϕ ∈ H 1 0 (Ω; R n ). Now, we link the states, controls and adjoints, beginning as follows: due to (6.1), for each k we have that B δ k (p k , v k ) = u k , v k (7.10) for all v k ∈ X δ k ,h k , and the identity g k (x) = − 1 λ P Z ad (Π 0 p k (x)). (7.11) The next step is to show that Π 0 p k → p * strongly in L 2 (Ω; R n ). By the Triangle Inequality and the stability of Π 0 , we estimate Π 0 p k − p * L 2 (Ω;R n ) ≤ p k − p * L 2 (Ω;R n ) + Π 0 p * − p * L 2 (Ω;R n ) . (7.12) Since p k → p * strongly in L 2 (Ω; R n ), the first term in (7.12) decays to 0 as k → ∞, while the second term vanishes due to (6.15). Now, due to the convergence Π 0 p k → p * strongly in L 2 (Ω; R n ) and the projection mapping being Lipschitz, we have that − 1 λ P Z ad (Π 0 p k (x)) → − 1 λ P Z ad (p * (x)); this coupled with the weak convergence g k ⇀ g * in L 2 (Ω; R n ) lets us conclude g * (x) = − 1 λ P Z ad (p * (x)). (7.13) Since (7.8), (7.9), and (7.13) all hold, and solutions to the local continuous optimality conditions (6.48) are necessarily unique, we have that g * = g; p * = p; and u * = u. Finally, notice that this limit point (u, g, p) ∈ H 1 0 (Ω; R n ) × Z ad × H 1 0 (Ω; R n ) is independent of the original subsequence chosen, which means the entire sequence {(u k , g k , p k )} ∞ k=1 converges to (u, g, p) in the L 2 (Ω; R n ) × L 2 wk (Ω; R n ) × L 2 (Ω; R n ) topology, completing the proof. | 2023-04-20T01:15:45.353Z | 2023-04-18T00:00:00.000 | {
"year": 2023,
"sha1": "7656cbaa0c132f41dfd228b17ce4729af78eb7cc",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "7656cbaa0c132f41dfd228b17ce4729af78eb7cc",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
209526993 | pes2o/s2orc | v3-fos-license | Rapid and Sensitive Detection of Bisphenol A Based on Self-Assembly
Bisphenol A (BPA) is an endocrine disruptor that may lead to reproductive disorders, heart disease, and diabetes. Infants and young children are likely to be vulnerable to the effects of BPA. At present, the detection methods of BPA are complicated to operate and require expensive instruments. Therefore, it is quite vital to develop a simple, rapid, and highly sensitive method to detect BPA in different samples. In this study, we have designed a rapid and highly sensitive biosensor based on an effective self-assembled monolayer (SAM) and an alternating current (AC) electrokinetics capacitive sensing method, which successfully detected BPA at nanomolar levels within only one minute. The developed biosensor demonstrates detection of BPA over a range from 0.028 μg/mL to 280 μg/mL with a limit of detection (LOD) down to 0.028 μg/mL in the samples. The developed biosensor exhibited great potential as a portable BPA biosensor, and further development of this biosensor may also be useful in the detection of other small biochemical molecules.
Introduction
Bisphenol A (BPA) is an important organic chemical raw material, which is widely used in the production of plastic products and fire retardants. BPA is an endocrine disruptor, which can mimic human hormones and may lead to negative health effects [1,2]. Studies have shown that BPA can come into contact with the human body through the skin, respiratory tract, digestive tract, and other routes. After BPA enters the body, it combines with intracellular estrogen receptors and produces estrogenic or anti-estrogenic effects through a variety of reaction mechanisms, thereby affecting the endocrine, reproductive, and nervous systems, as well as causing cancer and other adverse effects [3][4][5]. Therefore, a rapid and sensitive detection method for BPA is of great significance.
At present, the detection methods of BPA mainly include liquid chromatography with electrochemical detection [6], chromatography-mass spectrometry [7,8], surface-enhanced Raman scattering [9], enzyme-linked immunosorbent assay (ELISA) [10], etc. These methods are complicated and require expensive equipment and professional personnel. The movement of biomolecules in the detection process relies on natural diffusion, which is time-consuming and cannot meet the requirements for the rapid detection of BPA. For example, Pasquale et al. [11] have proposed a method to determine BPA levels in fruit juices by liquid chromatography coupled to tandem mass spectrometry. However, it took about 15 min to accomplish the detection of BPA. Sheng et al. [12] have developed an optical biosensor based on fluorescence, but they required the addition of labels to generate the sensor response. Xue et al. [13] have reported a novel SPR biosensor that combines a binding inhibition assay with functionalized gold nanoparticles to allow for the detection of trace concentrations of BPA. However, the biosensor detected BPA in 60 min, which was too long for rapid detection.
Materials and Reagents
The buffer solution 10 × PBS was purchased from Solarbio (Beijing, China). The 10 × PBS was diluted in deionized water to make working solutions of 0.05 × PBS and 0.01 × PBS for diluting other substances. At the same time, the 0.01 × PBS was used as the background solution. 11-mercaptoundecanoic acid (MUA) was purchased from Yuanye Bio-Technology Co., Ltd. (Shanghai, China). The MUA was dissolved in anhydrous ethanol to prepare 5 mmol/L of MUA solution for forming the gold-sulfur bond. 1-(3-dimethylaminopropyl)-3-ethylcarbodiimide hydrochloride (EDC) was obtained from Sigma (St. Louis, MO, USA); EDC was dissolved in 0.05 × PBS to make 0.4 mol/L EDC solution. N-hydroxysuccinimide (NHS) was acquired from Biotopped (Beijing, China), and NHS was dissolved in 0.05 × PBS to prepare 0.1 mol/L NHS solution. Then, the two solutions were mixed at a ratio of 1 to 4. Ethanolamine was purchased from MACKLIN (Shanghai Macklin Biochemical, Ltd., Shanghai, China). Ethanolamine was diluted with 0.05 × PBS to make 1 mol/L solution for closing. The BPA antibody and antigen were purchased from QF Biosciences Co., Ltd. (Shanghai, China). The antibody was diluted with 0.05 × PBS to make two concentrations of 5.3 µg/L and 5.3 µg/mL, and the antigen was diluted in 0.01 × PBS to make solutions of 0.028 µg/mL, 0.28 µg/mL, 2.8 µg/mL, 28 µg/mL, and 280 µg/mL for testing. Antigen and antibody were stored at −20 °C. In order to guarantee the accuracy of detection, preparation of the solutions was carried out on a clean bench.
Preparation of Interdigital Microelectrodes
In this work, the interdigital microelectrodes were fabricated on silicon wafers, which had interdigitated arrays with widths of 10 µm separated by 10-µm gaps [32]. Before the detection, the interdigital microelectrodes should be modified with the following steps, as shown in Figure 1. Firstly, they are immersed in acetone for 4 min with ultrasonic cleaning, rinsed in absolute ethyl alcohol for 3 min with ultrasonic cleaning, rinsed in deionized water for 3 min with ultrasonic cleaning, and dried with a drying oven. Secondly, MUA was added to the surface of the gold electrodes for forming the Au-S bonds and realizing the self-assembled monolayer [33,34]. Then, sensors were placed in the incubator overnight with the temperature at 25 • C. Thirdly, before the EDC and NHS solution were added to the electrodes, the electrodes surface should be cleaned with absolute ethyl alcohol and blow dried with nitrogen; then, the sensors should be placed in an incubator for 2 h. After activation of the carboxyl group, the electrodes' surface should be cleaned with deionized water and blow dried with nitrogen; then, the chambers were pasted on the sensors. After that, 10 µL of antibody should be added to the surface of the sensors, which are then placed in an incubator for 3 h with the temperature at 37 • C. EDC and NHS were used as the crosslinking agent to assist in the formation of amide bonds between the carboxyl group of the self-assembled monolayer and the amino group of the BPA antibody. With these processes, the BPA antibody was immobilized onto the electrodes stably. Finally, in order to enhance the specificity of detection, ethanolamine was utilized to close the unbound active sites of the electrodes' surface. Then, the sensors were placed in an incubator for 1 h with the temperature at 25 • C. After the above steps, the modification of the electrodes surface was accomplished. The sensor needs to be cleaned after each sensing procedure; it is possible to wash off the antigen only and reuse the sensor, which indicates that the sensor has an expected reusability of five times before the performance of the sensor is believed to be dissatisfactory.
Apparatus and Methods
An impedance analyzer of model IM3536 (HIOKI, Ueda, Japan) was used as a high-precision apparatus to detect the impedance, capacitance, and resistance of the modified interdigital electrodes after different concentrations of antigen solution were dropped onto them. Firstly, different frequencies (1 kHz, 10 kHz, 20 kHz) and different voltages (100 mV, 600 mV, 1.1 V) were applied in the experiments, respectively. Here, Figure 2 shows the relationship between the normalized capacitance and different concentrations of BPA with a 10-kHz AC signal of different voltages (100 mV, 600 mV, and 1.1 V) applied to the interdigital electrodes. Finally, a 10-kHz AC signal of 600 mV was selected to be applied to the interdigital electrodes via the impedance analyzer of model IM3536 as the measuring signal. The voltages used in the experiments are rms values. In this work, before detecting antigen concentrations, each sensor was used to measure the background solution. However, the measurements of the background solution are made in this work only and would not be needed in a commercial application. We measured the background solution as the blank control group to highlight the antibody-antigen binding response in the change in capacitance of the electrodes. Then, 10 µL of antigen solution at different concentrations was added to the interdigital electrodes. The impedance, capacitance, and resistance of the modified interdigital electrodes were detected within one minute. The normalized capacitance change rate of the sensor was computed to demonstrate antigen-antibody binding, which was expressed as the slope of normalized capacitance versus time (‰/min). The slope was obtained by a least-squares linear fit. The normalized capacitance was computed as C t /C 0 , where C t is the capacitance value at time t and C 0 is the capacitance value at time zero [32].
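Since the transduction reduces to a least-squares slope of the normalized capacitance trace, the computation is straightforward to sketch. The snippet below is illustrative only: the capacitance trace and noise level are invented placeholders, not measured data.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 60.0, 61)                     # time in seconds
C = 1.0e-9 * (1.0 - 2.0e-4 * t)                    # hypothetical capacitance trace (F)
C = C + 1.0e-13 * rng.standard_normal(t.size)      # add some measurement noise

C_norm = C / C[0]                                  # normalized capacitance C_t / C_0
slope_per_s, _ = np.polyfit(t, C_norm, 1)          # least-squares linear fit
print(f"dC/dt ≈ {slope_per_s * 60.0 * 1000.0:.1f} ‰/min")
```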
Electrical Double Layers
What happens when a solution comes into contact with a solid metal surface? Helmholtz built a model to explore this question [35]. He called it the H model, shown as the left half of the dotted line in Figure 3. As is shown in Figure 3, this model is equivalent to a flat-plate capacitor, and the relationship between the charge density (σ) on one side and the potential (V) difference between the two layers is described by the following equations [35,36], where d is the distance between the centers of the positive and negative charges.
We can obtain the capacitance (C H ) of the flat-plate capacitor from Equation (1). Hence, the H model can successfully describe common electrochemical phenomena with two basic equations. However, the Helmholtz layer shows an obvious defect, as shown in Figure 3. According to the equation, C H is a constant, but in the experiment there are spreading (diffuse) layers, so C H alone cannot accurately describe the surface change. We describe the diffuse layer as C D and denote the electrical double layer (EDL) as C d [36], which will be affected by the relative potential and electrolyte concentration. From this, we can deduce that C d is equal to C H in series with C D , so the value of C d can be determined by 1/C d = 1/C H + 1/C D . As mentioned above, both the charging and discharging processes of the electrical double layer (EDL) are similar to those of the parallel plate. When the electrolyte is added to the surface of the interdigitated microelectrodes, the electrolyte will be in contact with the surface of the microelectrode, as in Figure 3. Thus, it can be treated as equivalent to the parallel plate, as shown in Figure 4 [37]. When the electrode is bare, its interface capacitance can be described by the following equation.
where ε s is the permittivity of the solution, A int is the electrode area, and λ d is the electrical double layer (EDL) thickness. When antibodies are immobilized to the surface of microelectrodes via self-assembly, the interfacial capacitance C int is expected to change to where ε t is the permittivity of the antibody, A b is the effective area after antibodies are immobilized to the surface of microelectrodes, and d ab is the antibody thickness.
In the experiments, the solution containing the antigens was added to the surface of the interdigitated microelectrodes, on which the antibodies were immobilized. When the antigens bind to antibodies, the molecular deposition on the sensor surface becomes thicker, and the interfacial capacitance C int,ab will be expressed with where ε p is the permittivity of the antigen, d ag is the antigen thickness, and A g is the effective area of the interfacial capacitor after the binding of an antigen to an antibody. Assume that the area of the interfacial capacitance is equal before and after the binding. From Equation (6), we can see that the diminution of the interfacial capacitance can be caused by the thickness increase of the dielectric layer. In this work, the biosensing utilizes the change of the interfacial capacitance C int,ab . Thus, the relative changes of the interfacial capacitance are used to detect the specific binding of antigens to antibodies. The relative change of the interfacial capacitance is ∆C/C int,ab , with ∆C = C int,ab − C int,ag . Consequently, the value of ∆C/C int,ab can be used to detect the biomolecular interactions. Beyond that, measuring the value of ∆C/C int,ab can overcome the experimental difference caused by the different surface roughness of each electrode and the difference of experimental treatment in each group, and improve the accuracy of experimental results.
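The parallel-plate picture above can be made concrete with a small numerical sketch. All layer permittivities, thicknesses, and the electrode area below are assumed placeholder values, not parameters reported in this work; the point is only that adding a bound-antigen layer in series lowers the interfacial capacitance.

```python
import numpy as np

eps0 = 8.854e-12                     # vacuum permittivity (F/m)
A = 1.0e-6                           # effective electrode area (m^2), assumed
layers = {                           # relative permittivity and thickness per layer, assumed
    "EDL":      (78.0, 1.0e-9),
    "antibody": (5.0, 10.0e-9),
    "antigen":  (5.0, 2.0e-9),
}

def plate_C(eps_r, d):               # parallel-plate capacitance C = eps * A / d
    return eps0 * eps_r * A / d

def series(*caps):                   # 1/C = sum(1/C_i) for layers stacked in series
    return 1.0 / sum(1.0 / c for c in caps)

C_before = series(plate_C(*layers["EDL"]), plate_C(*layers["antibody"]))
C_after = series(plate_C(*layers["EDL"]), plate_C(*layers["antibody"]), plate_C(*layers["antigen"]))

rel_change = (C_after - C_before) / C_before
print(f"relative capacitance change: {rel_change:+.1%}")   # negative: binding lowers C
```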
The Binding Mechanism of Interdigital Electrode Surface
In this work, we adopt the interdigital electrodes as the sensors. The interdigital microelectrodes have a finger-shaped surface pattern. At the same time, this shape can be utilized to achieve the alternating current electrokinetics (ACEK) effect. On the surface of the self-assembled electrode, there are two ways in which antigens can bind to antibodies.
In Figure 5, when the AC signal is not applied to the interdigital electrodes, the antigens binding to antibodies only depend on the deposition. In this case, only a small fraction of antigens have the chance to bind to the antibodies. This process takes a long time and does not guarantee the activity of antigens and antibodies. However, when an AC signal is applied to the interdigital electrodes, the ACEK effect is generated on the surface of the electrodes. Under the action of the ACEK effect, more and more antigens are rapidly enriched near the antibodies to promote the binding [38], thus improving the accuracy, rapidity, and sensitivity of detection. Hence, in this work, the ACEK effect was used on the electrodes to accelerate the binding.
Detection of Antigen with 5.3 µg/L Antibody
In this study, different concentrations of BPA standards (1.2 × 10 −7 mol/L to 1.2 × 10 −4 mol/L) were tested to evaluate the performance of the method. A 10-kHz AC signal of 600 mV was applied to measure the capacitance of the biosensor for 60 s. Figure 6a shows the relationship between the normalized capacitance and different concentrations of BPA. Obviously, the change of capacitance was linear, and the rate of change of the curve increased as the concentration of BPA increased, corresponding to the degree of binding between antibodies and antigens. The change rate of the normalized capacitance curves was found by least-squares linear fitting, which provided a quantitative index of antibody-antigen binding. In Figure 6a, the slope of these capacitance curves was found to be −10.6‰/min, −14.6‰/min, −24.7‰/min, and −37.7‰/min for BPA levels at 1.2 × 10 −7 mol/L, 1.2 × 10 −6 mol/L, 1.2 × 10 −5 mol/L, and 1.2 × 10 −4 mol/L, respectively.
In Figure 6b, we calculated the averages and standard deviations (SDs) of the biosensor response and demonstrated the correlation between the concentration of BPA and the change rate of the capacitance. The 1.2 × 10 −7 mol/L to 1.2 × 10 −4 mol/L BPA samples showed change rates of −11.62‰/min ± 2.08‰/min, −18.53‰/min ± 3.93‰/min, −25.07‰/min ± 2.73‰/min, and −30.43‰/min ± 2.56‰/min, respectively. In the range of 1.2 × 10 −7 mol/L to 1.2 × 10 −4 mol/L BPA, dC/dt was logarithmically dependent on the concentration of BPA. A negative linear correlation between dC/dt and the concentration of BPA was observed. The dependence is expressed as y(‰/min) = −6.912x + 3.045 with a Pearson correlation coefficient R 2 = 0.985.
In the experiments, we used the impedance analyzer IM3536 to detect the change of the capacitance of the sensor upon antigen-antibody binding. However, the measurement mechanism of the impedance analyzer IM3536 is to measure the impedance and the phase angle of the device, and the capacitance is calculated from the impedance and phase angle. Considering that the developed sensor is not a pure resistor or pure capacitor, the impedance and phase angle of the sensor will change at any moment when the sensor is immersed in solution, so the capacitance shows oscillation. However, the oscillation is within the measurement error range of the impedance analyzer IM3536. In order to mitigate these oscillations in the capacitance used in the sensor transduction, an effective way is to slow down the measurement speed or to take the average of multiple measurements.
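As a sketch of how such a log-linear calibration can be reproduced, the snippet below fits the mean change rates quoted above against log10 of the molar concentration. The fitted slope per decade is of the same order as the reported −6.912, but it will not match exactly: the reported fit presumably used the full data rather than only the four means, and the concentration units of its x-axis are not stated here, which also shifts the intercept.

```python
import numpy as np

conc_mol_per_L = np.array([1.2e-7, 1.2e-6, 1.2e-5, 1.2e-4])
rate_permille_per_min = np.array([-11.62, -18.53, -25.07, -30.43])

# least-squares fit of dC/dt against log10(concentration)
slope, intercept = np.polyfit(np.log10(conc_mol_per_L), rate_permille_per_min, 1)
print(f"dC/dt ≈ {slope:.2f} * log10(C) + {intercept:.2f}")   # roughly -6.3 ‰/min per decade

# Inverting the fitted line gives a crude concentration estimate from a measured slope:
measured = -20.0                                              # ‰/min, hypothetical reading
print(f"estimated C ≈ {10 ** ((measured - intercept) / slope):.2e} mol/L")
```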
Detection of Antigen with 5.3 µg/mL Antibody
To verify the difference in detection between electrodes modified with different antibody concentrations, 5.3 µg/mL of antibody was immobilized on the interdigital electrodes. The experimental conditions are consistent with the experiment above. Figure 7a displays the change rate of the normalized capacitance, which was found to be −11.5‰/min, −20.0‰/min, −25.5‰/min, and −30.8‰/min for BPA concentrations at 1.2 × 10 −7 mol/L, 1.2 × 10 −6 mol/L, 1.2 × 10 −5 mol/L, and 1.2 × 10 −4 mol/L, respectively. The averages and standard deviations (SDs) of the biosensor response are exhibited in Figure 7b. From it, we can see that the concentration of BPA was negatively correlated with dC/dt, and the linear correlation is expressed as y(‰/min) = −6.93x + 1.47 with a Pearson correlation coefficient of R 2 = 0.989. It follows that the electrode modified with an antibody concentration of 5.3 µg/mL can also detect BPA levels ranging from 1.2 × 10 −7 mol/L to 1.2 × 10 −4 mol/L, and the linearity and correlation are better.
Table 1 lists the quantitative results of recent publications using different methods to detect BPA. Compared with most of the recent publications, which have reported electrochemical methods to achieve the detection of BPA, the results of this work may not be better than theirs. However, in this work, we have demonstrated a novel method to detect BPA based on self-assembly technology and the AC electrokinetics effect. There have been almost no reports of detecting BPA with self-assembly technology and the AC electrokinetics effect, so this approach has great potential to detect BPA more sensitively, with a lower limit of detection, in the near future.
Table 1. Limit of detection (LOD) comparison of different methods.
Conclusions
In this work, a rapid, highly sensitive BPA biosensor based on self-assembly technology and the AC electrokinetics (ACEK) effect has been proposed. The higher the concentration of BPA solution dropped on the self-assembled interdigital electrode sensor, the larger the number of antibody-antigen binding events that occurred, and the larger the normalized capacitance change rate detected by the sensor. In this work, we used antigen-antibody-specific binding to detect BPA. At the same time, the limit of detection of the biosensor is 1.2 × 10 −7 mol/L, which is better than some existing detection methods. For example, Sun et al. [39] have used the oscillopolarographic method to detect BPA in food packaging materials with a limit of detection of 4.4 × 10 −6 mol/L. Yan et al. [40] have developed a simple and renewable nanoporous gold-based electrochemical sensor for BPA detection with a limit of detection of 4.3 × 10 −7 mol/L. The present work could successfully detect BPA at nanomolar (nM) levels, which is a higher (less sensitive) limit than that of the previous work [32], which could successfully detect BPA at femtomolar (fM) levels.
Although the results of the present work were not better than the results in the previous work, we have used a novel method (self-assembly technology) to achieve the detection of BPA. At the same time, in order to acquire better results, we are improving the conditions, processes, and materials of the experiments. The novel method has the tremendous potential to detect BPA more sensitively with a lower limit of detection. Further development of the method may provide a more convenient, highly sensitive, and effective detection for BPA in complex samples. | 2020-01-02T10:52:31.175Z | 2019-12-30T00:00:00.000 | {
"year": 2019,
"sha1": "a70e12595a4590635dd4f1142464d0739f797e0e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/mi11010041",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a70e12595a4590635dd4f1142464d0739f797e0e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
18721636 | pes2o/s2orc | v3-fos-license | Automated 3-D method for the correction of axial artifacts in spectral-domain optical coherence tomography images
The 3-D spectral-domain optical coherence tomography (SD-OCT) images of the retina often do not reflect the true shape of the retina and are distorted differently along the x and y axes. In this paper, we propose a novel technique that uses thin-plate splines in two stages to estimate and correct the distinct axial artifacts in SD-OCT images. The method was quantitatively validated using nine pairs of OCT scans obtained with orthogonal fast-scanning axes, where a segmented surface was compared after both datasets had been corrected. The mean unsigned difference computed between the locations of this artifact-corrected surface after the single-spline and dual-spline correction was 23.36 ± 4.04 μm and 5.94 ± 1.09 μm, respectively, and showed a significant difference (p < 0.001 from two-tailed paired t-test). The method was also validated using depth maps constructed from stereo fundus photographs of the optic nerve head, which were compared to the flattened top surface from the OCT datasets. Significant differences (p < 0.001) were noted between the artifact-corrected datasets and the original datasets, where the mean unsigned differences computed over 30 optic-nerve-head-centered scans (in normalized units) were 0.134 ± 0.035 and 0.302 ± 0.134, respectively.
Introduction
Optical coherence tomography (OCT) is a non-invasive imaging modality and is widely used in the diagnosis and management of ocular diseases. The new spectral-domain optical coherence tomography (SD-OCT) [1,2] scanners not only show a higher signal-to-noise ratio than the previous generation time-domain OCT scanners, but also provide close-to-isotropic volumetric images (see Fig. 1(a)) of the retina. However, these images often show large artifactual shifts of the individual A-scans, which are thought to be the result of a number of factors, including motion of the eye and positioning of the camera. Furthermore, these distortions differ along the slow scanning axis (B s -scans or yz slices) and the fast scanning axis (B f -scans or xz slices), as depicted in Figs. 1(b) and 1(c). The B f -scans often show large tilts, which are thought to be caused by imaging paraxially to the optical axis of the eye, while the high frequency artifacts seen in the B s -scans are attributed to eye and head motion. These artifacts not only make it difficult to visualize the data, but they also affect the further processing of the images. Of course, the retina in these scans is also curved due to the natural scleral curvature and while this is not an artifact, it can be advantageous to at least temporarily also remove this curvature for some applications. For instance, segmentation algorithms that incorporate learned information about the layers [3,4] do so by modeling the behavior of the surfaces of interest. But this is hard to do in the presence of unpredictable artifacts such as those that are seen in the B s -scans of the OCT images. Surface behavior is also of interest clinically, where it could be used to compare pathological changes to normal data. Bringing the data into a consistent format is also important in 3-D registration applications [5], such as registering ONH to macular scans. Here, a predictable consistent shape can have a significant impact on the result. Thus, the need to correct these artifacts and bring the dataset into a more consistent, predictable shape is compelling.
Since the correction process is intended to be a preprocessing step, it should alter as little as possible of the actual A-scans to avoid losing any information. To this end, numerous approaches have been reported that use correlation in 1-D [6] and 2-D rigid registration [7][8][9][10][11][12][13] to correct these artifacts. These registration based methods, while effective, are 2-D methods and thus, do not incorporate 3-D contextual information. The new availability of orthogonal scans has also led to methods [14, 15] that incorporate information from both scans to realign the dataset and remove motion artifact. This is an effective method that not only removes motion artifact but also reconstructs the "true" shape of the retina. However, the application of these methods is restricted by the availability of the orthogonal scans, which are not typically acquired clinically.
Garvin et al. [3], as a preprocessing step for a 3-D intraretinal segmentation algorithm, described a 3-D segmentation-based method that corrects motion artifacts by re-aligning the columns of the image with respect to a smooth "reference" plane. This reference plane is constructed by fitting a smoothing thin-plate spline (TPS) to a surface segmented in a lower resolution. The small number of control points and the large regularization term used in the spline-fit process reduces the dependence on the segmentation result, but the spline is not able to model the fast variations seen along the slow scanning axis. A smaller regularization term would have provided a closer fit to the control points, but this would increase the dependence of the artifact correction on the segmentation result.
In this paper, we describe a segmentation-based method for the correction of distortions seen along the fast and slow scanning axes in OCT retinal scans. The method focuses on correcting the artifacts along each axis separately while retaining the overall 3-D context, which makes our approach able to better address the differences in the types of artifacts characteristic of each axis. This is done by incorporating a priori information regarding the different artifacts seen along these two axial directions and correcting them using dual-stage thin-plate splines fitted to a segmented surface. Additionally, we also present a method to reconstruct the "true" scleral curvature (which is removed by the artifact correction method) given the new availability of orthogonal scans. Note that having orthogonal scans for the optional scleral curvature reconstruction step is important as the scleral curvature is severely disrupted along the slow scanning axis by motion artifact. However, not all applications need the scleral reconstruction step (such as segmentation, registration and thickness-measurement applications).
In addition to visual assessments, the artifact-correction method was validated using pairs of datasets obtained from the same eye using orthogonal fast-scanning directions and depth maps created from stereo fundus photographs. In both validation techniques, significant differences (which are visually apparent) were noted between the original datasets, datasets corrected using a single spline-fit and the datasets corrected using the proposed dual-spline fit process.
Methods
The artifact correction process needs a stable surface to which a thin-plate spline can be fit. We therefore, begin by segmenting a surface using an automated graph-theoretic approach [3], which incorporates contextual information (as the segmentation is carried out in 3-D) and also ensures the global optimality of the segmented surface. The surface between the inner and outer segments of the photoreceptor cells (depicted in Fig. 2) is used, as it can be easily and reliably detected in OCT volumes. Two stages of smoothing thin-plate splines (described below) are then used to estimate the distinct artifacts seen in OCT images. In the first stage, a smoothing thin-plate spline is used to estimate and correct the tilt artifacts commonly seen in the B f -scans. At this stage, the scleral curvature is also removed. A second spline-fit is then used to model and correct the rapidly varying motion artifact characteristic of the B s -scans.
Thin-plate splines were first formulated by Duchon [16], who compared the process to the physical bending of a thin sheet of metal. The TPS formulation [17, 18] used in our method generates a smoothing interpolation function as follows: We begin by defining f to be a function that maps R 2 to R for a given set of N points in R 2 , ν = {ν i : i = 1, 2, ..., N}. Then, the thin plate spline interpolation function s will be one that minimizes the energy equation: where x and y are the two components of θ . Let us also define Now, for the above energy equation there exists a unique minimizer s λ given by The parameters d k and c i can be estimated by solving N linear equations. Thus, a 2-D TPS can be fit to a 3-D segmented surface to obtain a smooth (where the smoothness is controlled by λ ) 3-D reference plane with respect to which the dataset is flattened. Henceforth in this paper, we shall refer to the surface obtained through the 2-D TPS fit as the 3-D spline surface, and the curve obtained through the 1-D TPS fit as the 2-D spline curve.
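The display equations referenced in this passage did not survive extraction. For orientation, a standard smoothing thin-plate-spline formulation (which the cited works [17, 18] follow up to notation; the notation below is ours, not a quotation of the elided equations) can be written as:

```latex
% Standard smoothing thin-plate spline: data (nu_i, z_i), smoothing parameter lambda >= 0.
\begin{align*}
E_\lambda(f) &= \sum_{i=1}^{N}\bigl(z_i - f(\nu_i)\bigr)^2
   + \lambda \iint_{\mathbb{R}^2}\Bigl(f_{xx}^2 + 2 f_{xy}^2 + f_{yy}^2\Bigr)\,dx\,dy,\\
U(r) &= r^2 \log r,\\
s_\lambda(\theta) &= d_0 + d_1 x + d_2 y
   + \sum_{i=1}^{N} c_i\, U\bigl(\lVert\theta-\nu_i\rVert\bigr), \qquad \theta = (x, y).
\end{align*}
```

In this standard form, the coefficients c i and d k are obtained from a linear system built from the control points (with side conditions on the c i), and increasing λ trades fidelity to the control points for smoothness, which is how the two stages described next make use of it.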
To compensate for the two different artifacts seen in the dataset, the flattening is done in two steps (see Fig. 3): 1. A 2-D TPS is fit to the surface, where the number of control points used is determined by the size of the surface along each axial dimension. At this stage, the number is set to 10% and 5% of the dimensions along the x and y axial directions, respectively, and the control points are evenly distributed along each direction. A relatively large smoothing regularization term (λ = 0.1) is used so that the 3-D spline-fit surface thus created is relatively smooth and approximates the overall curvature of the retinal surfaces seen in the B f -scans. (Experimentally, we found that values of λ between [0.07, 0.13] provide consistent results.) However, at this stage, the large regularization term used makes it difficult to accurately estimate the artifacts in the slow-scanning direction (B s -scans). The dataset is now flattened with respect to this reference plane by realigning the columns of the dataset, which eliminates the curvature seen in the B f -scans as well as any tilt artifacts that may be present.
2. The rapid variations seen in the B s -scans can be corrected in one of two ways: (a) The previously computed 3-D spline surface estimates and corrects the artifact along the fast-scanning axis, thus a single 2-D spline curve (computed using a 1-D TPS) can be used to model the artifacts in the B s -scans. The segmented surface is averaged across the fast-scanning direction to create a single 2-D vector, which approximates the artifact seen in the B s -scans. A single 1-D spline can now be fit to this vector using evenly spaced control points (totaling 25% of the axial dimension) and a relatively small regularization term (λ = 0.07) to give us a 2-D spline curve, which is used to correct the motion artifact in all the B s -scans. (Consistent results were obtained for λ = [0.01, 0.07].) Note that the initial creation of the averaged 2-D vector (across all of the B s -scans) helps to ensure that only the motion artifacts in this direction will be corrected rather than also flattening local retinal distortions, such as those from pathology. This method can be used to correct artifacts in macula scans as the volumes do not contain any regions where the surfaces are discontinuous.
(b) In the case of optic nerve head (ONH) centered scans (where the surfaces become indistinct at the optic disc), a second 2-D thin-plate spline is fit to the new surface. The 2-D TPS now uses a larger number of control points in the y-direction (25% of the axial dimension) than the x-direction (5% of the axial dimension). These control points are chosen from outside the optic disc region, where the margin is approximated using a circle (2.1mm in diameter). Note that in the case of abnormal optic discs, this margin can be more precisely (and still automatically) determined [19]. The regularization term used in this step is also smaller (λ = 0.05), enabling the resulting 3-D spline surface to model the rapid variations seen along the slow scanning axis. (Consistent results were obtained for λ = [0.03, 0.07].) The final artifact-corrected dataset is now obtained by flattening the image once more with respect to the plane obtained in one of the two ways described above.
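The flattening operation used in both stages above, realigning the A-scan columns with respect to a reference surface, can be sketched as follows. The volume indexing, the integer shifts and the use of `np.roll` are simplifying assumptions; a real implementation would pad rather than wrap around at the volume boundary.

```python
import numpy as np

def flatten_volume(volume, reference_surface, target_depth=None):
    """Flatten an OCT volume by shifting each A-scan column along z so that
    the smooth reference surface lies at a constant depth."""
    nz, ny, nx = volume.shape
    if target_depth is None:
        target_depth = float(np.mean(reference_surface))
    # Pure z-translation per column; reversible by storing the shifts.
    shifts = np.round(target_depth - reference_surface).astype(int)

    flattened = np.empty_like(volume)
    for y in range(ny):
        for x in range(nx):
            flattened[:, y, x] = np.roll(volume[:, y, x], shifts[y, x])
    return flattened, shifts
```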
Experimental Methods
The proposed artifact correction technique was validated using the following two approaches: 1. The first approach uses pairs of macula OCT scans acquired from the same eye, using orthogonal fast scanning axes. The pairs of OCT scans also made the reconstruction of the ocular curvature in OCT volumes possible. Nine pairs of macula-centered OCT scans were obtained on an SD-OCT1000 spectral-domain scanner (Topcon Corp., Tokyo, Japan) from 9 normal subjects participating in the Rotterdam Study, which is a prospective population-based cohort study investigating age-related disorders. The study population consisted of 7983 individuals aged 55 years and older living in the Ommoord district of Rotterdam, the Netherlands [20-22]. These OCT images, which were obtained as part of the latest follow-up in addition to other ophthalmic tests, had dimensions of 512 × 128 × 480 voxels obtained from a 6 × 6 × 2 mm 3 region centered on the macula.
2. The second approach uses 3-D reconstructions of the ONH from stereo fundus photographs. The stereo fundus photographs and the ONH-centered OCT scans were acquired from the same patient on the same day, from both eyes of 15 patients from the Glaucoma Clinic at the University of Iowa. The scans were obtained from a Cirrus spectral-domain scanner (Carl Zeiss Meditec, Dublin, CA, USA), and had dimensions of 200 × 200 × 1024 voxels obtained from a 6 × 6 × 2 mm 3 region centered on the ONH.
These two validation methods were used for the macula and ONH-centered scans, respectively, and provided a quantitative assessment of the two variations of the artifact-correction method.
Validation Using Paired OCT Scans with Orthogonal Fast-Scanning Axes
Since the artifacts seen in OCT images are strongly dependent on the orientation of the fast scanning axis, a pair of OCT scans acquired from the same eye with orthogonal fast scanning axes can be used to assess the accuracy of the artifact correction process. The tilt and low variation artifacts associated with B f -scans now appear in perpendicular scans in the second dataset, and the same is true of high frequency variations associated with B s -scans. Fig. 4(a) corresponds to the central B s -scan in Fig. 4(b), and it is easily seen that the artifacts in the two slices are very different. The curvature and tilt artifacts associated with the B f -scans are far easier to correct than the rapid variations seen in the B s -scans, thus, the B f -scans from one artifact corrected dataset can be used quantitatively to validate the ability of the proposed method to correct artifacts in the B s -scans of the second dataset.
The quantitative measure of the accuracy of the artifact correction process is expressed using the mean unsigned difference in the location of a particular surface before and after the artifact correction process. The surface between the inner and outer segments of the photoreceptors was segmented and used in the spline-fit, and thus is available for use in the validation as well. An absolute comparison in microns is possible since this surface is flattened to the same depth in the z-axis.
The acquisition process of the OCT datasets creates volumes that are roughly rotated by 90° with respect to each other. Thus, the datasets must first be registered to each other before any comparisons can be made. This was done by manually selecting two correspondence points in the 2-D projection images of the paired datasets. The projection images were created from a small number of slices near the segmented surface (between inner and outer segments of the photoreceptors), as the vessels are much clearer in projection images created in this manner [23]. Vessel crossings and bifurcations can be used as correspondence points. A rigid transformation can now be used to align the B f -scans of one dataset with the B s -scans of the second. It is easily apparent that two points are sufficient to compute the transformation matrix.
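As a hedged illustration of this registration step (the exact expression used in the paper is not reproduced here), a 2-D rigid transform can be estimated from two corresponding vessel landmarks as follows; the coordinates in the example call are made up.

```python
import numpy as np

def rigid_from_two_points(src, dst):
    """Estimate a 2-D rigid transform (rotation + translation) mapping the two
    source correspondence points onto the two destination points.

    src, dst : arrays of shape (2, 2) holding (x, y) coordinates of the two
               vessel landmarks in each projection image.
    Returns (R, t) with R a 2x2 rotation matrix and t a translation vector.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    # Rotation angle from the orientation of the vector joining the two landmarks.
    v_src = src[1] - src[0]
    v_dst = dst[1] - dst[0]
    theta = np.arctan2(v_dst[1], v_dst[0]) - np.arctan2(v_src[1], v_src[0])
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    # Translation aligns the rotated first landmark with its counterpart.
    t = dst[0] - R @ src[0]
    return R, t

# Hypothetical landmark coordinates picked at vessel crossings:
R, t = rigid_from_two_points([[120, 40], [300, 350]], [[45, 118], [355, 298]])
mapped_point = R @ np.array([200.0, 200.0]) + t
```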
In addition to the orthogonality of the scans, the images are also anisotropic in the x and y directions. In order to register the volumes better and make a true comparison between the segmented surface from the orthogonal scans, the projection images and the segmented surface are interpolated (along the smaller dimension) to make them isotropic. The mean unsigned difference between the segmented surface in both datasets can now be computed (within the common registered area) from the original, partially corrected (using a single 3-D spline surface) and the final artifact-corrected image.
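A possible sketch of the isotropic resampling and of the mean unsigned difference computed within the common registered area is given below; the axis conventions, voxel spacings and function names are assumptions.

```python
import numpy as np
from scipy.ndimage import zoom

def make_isotropic(surface, spacing_x, spacing_y):
    """Resample a segmented surface along its coarser direction so that the
    x and y sampling become (approximately) isotropic."""
    factor = spacing_y / spacing_x
    if factor >= 1.0:                  # y is coarser: upsample along y
        return zoom(surface, (factor, 1.0), order=1)
    return zoom(surface, (1.0, 1.0 / factor), order=1)

def mean_unsigned_difference(surface_a, surface_b, voxel_to_um, mask=None):
    """Mean unsigned difference (in micrometers) between two registered
    segmented surfaces, restricted to the common registered area."""
    diff = np.abs(surface_a - surface_b) * voxel_to_um
    if mask is not None:
        diff = diff[mask]
    return float(np.mean(diff))
```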
Validation Using 3-D Reconstructions of the Optic Nerve Head
Tang et al. [24] reported a method for the reconstruction of the shape of the ONH from stereo fundus photographs. The 3-D shape estimate is obtained by finding corresponding pixels from two stereo images of the ONH, taken from two slightly different angles. The two image planes are known to be horizontally displaced, but can be assumed to be co-planar. Since the horizontal disparity is known to be inversely proportional to the depth associated with the 3-D pixel, a depth map can be created using pixel correspondences. The depth maps thus created (Fig. 5(b)) show the shape of the retina at the ONH region, and since they are created from fundus photographs they are free of the axial artifacts associated with OCT scans.
The structure obtained from the OCT images that is most comparable to the depth maps is the location of the inner limiting membrane (ILM). Before any comparison can be made between the depth maps and the location of the ILM, we have to compensate for three important characteristics of stereo fundus photographs.
1. The fundus photographs are not in the same reference frame as the OCT images, thus the depth maps must first be registered to the OCT dataset. Vessel locations were used to guide the rigid registration of the fundus images to the 2-D projection image created from the OCT image, as described in Section 3.1.
2. The depth from stereo estimations contain noise, which is seen in the depth maps. Thus, the depth maps must first be smoothed to validate the artifact correction process. The smoothing was done so that the noise was suppressed while the shape information from the depth maps was retained. Figure 5 shows the fundus photograph, its corresponding depth map and 3-D rendering of a smoothed depth map.
3. The depth maps do not provide quantitative depth information, as these are serial stereo photographs and the stereotactic position (the angle between camera positions) is not available. Thus, the location of the segmented surface from the OCT images must be scaled and expressed in normalized units. For this, we consider the depth of the surface from the top of the OCT dataset and normalize this depth by dividing by 200. We then scale this normalized depth to match the scale of the depth maps. The z-axis depth location of the reference plane for all datasets is maintained at the same value to minimize variations between the datasets.
A pixel by pixel comparison can now be made between the smoothed depth maps and the normalized depth of the ILM in the flattened OCT datasets.
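One way this normalization and comparison could be expressed is sketched below. The min–max matching used to put the ILM depth on the depth-map scale is an assumption; the paper only states that the normalized depth is scaled to match the depth maps.

```python
import numpy as np

def normalized_ilm_depth(ilm_surface_z, depth_map, divisor=200.0):
    """Express the ILM depth in the normalized units of the stereo depth map."""
    norm = ilm_surface_z / divisor                       # normalize OCT depth
    # Match the (arbitrary) scale of the stereo depth map (assumed approach).
    scale = (depth_map.max() - depth_map.min()) / (norm.max() - norm.min())
    return (norm - norm.min()) * scale + depth_map.min()

def mean_unsigned_difference_normalized(ilm_norm, depth_map, valid_mask):
    """Mean unsigned difference computed only where the disparity map is well
    defined (and outside the optic disc)."""
    return float(np.mean(np.abs(ilm_norm - depth_map)[valid_mask]))
```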
Modeling the Shape of the Eye
The pairs of OCT datasets acquired from the same eye can be used to estimate the shape of the eye since the B f -scans, which contain information about the retinal curvature, are available along two perpendicular axes. Thus, in datasets with small or no tilts induced by the incorrect positioning of the camera, the B f -scans from one dataset can be used to "correct" the B s -scans in the second dataset, creating a dataset where the retinal surfaces now reflect a better approximation of the shape of the eye.
A 2-D TPS is fit to the segmented surface from each of the datasets to create a 3-D interpolated isotropic surface. The 3-D spline surface thus created is not only smooth and isotropic, but is also less dependent on the segmentation result. Figures 6(a) and 6(b) show the isotropic surfaces created from the datasets with the horizontal and vertical fast scanning axes, respectively. The B f -scans from the dataset with the vertical fast scanning axis can now be used to correct the artifacts in the B s -scans of the dataset with the horizontal fast scanning axis. Since the datasets are acquired from the same area of the retina, the isotropic surfaces can now be used to estimate the z-axis translations required to correct the rapid variations seen along the slow scanning axis. Correcting the artifact in this manner will retain the ocular shape of the retina. The final shape corrected surface is as shown in Fig. 6(c).
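A rough sketch of how the per-B_s-scan z-translations could be estimated from the two registered isotropic spline surfaces follows; the use of a median along the fast direction is an assumption (any robust average would do), and the array orientation is hypothetical.

```python
import numpy as np

def shape_preserving_correction(surf_h, surf_v):
    """Estimate per-B_s-scan z-translations for the dataset acquired with a
    horizontal fast-scanning axis, using the registered isotropic surface from
    the orthogonal (vertical fast axis) dataset as the shape reference.

    surf_h, surf_v : registered isotropic surfaces indexed [slow, fast],
                     expressed in the frame of the horizontal-fast volume.
    """
    # One rigid z-offset per frame: along the fast direction the two surfaces
    # should agree on average, so the residual offset is the motion artifact.
    shifts = np.median(surf_v - surf_h, axis=1)
    corrected = surf_h + shifts[:, None]
    return corrected, shifts
```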
Results
The quantitative results are summarized in Table 1. In the first validation technique (using the paired macula-centered scans), a segmented surface from the two datasets was compared before and after the artifact-correction process. The mean unsigned differences seen in the original and partially corrected (single spline-fit) datasets were 98.97 ± 39.45 µm and 23.36 ± 4.04 µm, respectively. The artifact-corrected datasets, on the other hand, only showed a mean unsigned difference of 5.94 ± 1.10 µm, which is significantly smaller (p < 0.001, from two-tailed paired t-test) when compared to the partially corrected datasets. Figure 7 shows an example of the 3-D representation of the original surface used to create the reference plane, the surface after the first spline-fit and the surface in the final artifact-corrected image. (Table 1 notes: ⋄ Mean unsigned difference was computed between the location of a segmented surface in the paired OCT datasets and expressed in µm. ⋆ Mean unsigned difference was computed between the disparity maps and the normalized depth of the ILM. Average value is expressed as mean ± standard deviation in normalized units. Differences were computed in regions where the disparity maps were well defined; the optic disc was avoided.) Figure 7(b) shows the segmented surface in a macula-centered OCT dataset after the first spline fit, and it is easy to see the periodic nature of the artifact along the y-axis. The single 2-D spline curve computed (using a 1-D TPS fit to an averaged 2-D vector) along this axis is more than sufficient to estimate this artifact and eliminate it, as can be seen in Fig. 7(c). Figures 8(a) and 8(b) show the central B f -scan and B s -scan, respectively, from a macula-centered OCT dataset before the artifact correction process. The same slices after the artifact correction process are depicted in Figs. 8(c) and 8(d).
In the second validation method, 3-D depth maps created from stereo fundus images were compared to the ILM. As the depth maps are created from the fundus photographs and can sometimes show only a small region around the optic disc, care was taken to ensure that the difference was only calculated in areas where the disparity map was well defined. The optic disc region was also avoided. The mean unsigned difference (computed in normalized units) seen in the original and partially corrected (single spline fit) datasets was found to be 0.321 ± 0.134 and 0.142± 0.036, respectively. The artifact-corrected datasets showed a mean unsigned difference of 0.134 ± 0.035, which is significantly smaller (p < 0.001, from two-tailed paired t-test) when compared to the original datasets. Figures 7(d), 7(e) and 7(f) show the segmented surface from an ONH centered dataset in the original datatset, the dataset after correction with a single 3-D spline surface, and the dataset after the second 3-D spline surface correction process, respectively. The optic disc disrupts the surfaces (as seen in Figs. 7(d) and 7(e)) necessitating the use of a second 3-D spline surface instead of a 2-D spline curve. Figure 9 shows the ILM and two slices from an ONH-centered dataset in different stages of artifact correction. The vast difference between the original and artifact-corrected images is easily apparent in the slices depicted in Figs. 9(a) and 9(b), respectively.
Discussion and Conclusion
Eliminating motion artifacts in OCT images is important as it brings the dataset into a consistent shape and makes visualization and subsequent analysis easier. While the removal of the scleral curvature seen in the B f -scans is undesirable in some applications, numerous applications such as the automated segmentation of intraretinal layers [3,4] and the volumetric registration of OCT-to-OCT [5] datasets would benefit from the consistency of an artifact-corrected dataset.
In this work, we have presented a method for the correction of axial artifacts using thin-plate splines that estimate the distortions along the fast and slow scanning axes in OCT retinal scans. Our results show that the TPS approach is effective, as it is able to create a globally smooth surface whose properties are easily controlled by the regularization parameter and the number of control points used. The proposed method aims to eliminate all of the artifacts, and therefore does not retain any information about the shape of the retina; however, given the new availability of orthogonal OCT scans on clinical OCT machines, the shape can be estimated and reconstructed. It is also important to note that the approach does not alter the data in any way other than the z-axis translation (which is a reversible transformation), and thus does not affect any measurements derived from the A-scans. The time complexity of the method largely depends on the number of control points used in the TPS fit, as the surface segmentation takes under a minute. Our implementation (in C++, on a 2.8 GHz six-core AMD Opteron processor) took a total time of 7-9 minutes to segment the surface and correct the artifacts in the OCT volumes.
The robustness of the correction procedure is evident from the results obtained on a diseased set of OCT scans, where the surface segmentation is more prone to error. While the dependence on the segmentation result is undesirable, it does provide vital information in scans where the camera has been incorrectly positioned and the retinal surfaces appear skewed. In such situations, the 3-D spline surface flattening procedure would show better results than 2-D rigid registration methods, as the 2-D registration approaches only aim to correct the motion artifacts, but do not address the tilt artifacts. Furthermore, the 2-D registration could introduce artifacts into the image in the form of unwanted translational or rotational changes, as they are not guided by 3-D contextual information. The use of SLO images [13] or fundus photography would be necessary to ensure that such artifacts do not affect the end result. Alternatively, a combination of segmentation and registration-based methods could be used to retain 3-D contextual information during the artifact correction process.
Thus, in summary, with our two-stage flattening approach, we are able to correct multiple types of axial artifacts (some for which the initial 3-D surface segmentation is necessary or useful for artifact-modeling purposes), while still demonstrating a robustness against any small local disruptions or errors in the initial segmentation result. With orthogonal scans, we are then | 2018-04-03T05:25:16.276Z | 2011-07-27T00:00:00.000 | {
"year": 2011,
"sha1": "1e8fafb5ad087187cb7971d85a5041b309354ce0",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1364/boe.2.002403",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1e8fafb5ad087187cb7971d85a5041b309354ce0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
210829548 | pes2o/s2orc | v3-fos-license | A pitfall of using the circular‐edge technique with image averaging for spatial resolution measurement in iteratively reconstructed CT images
Abstract The circular‐edge technique using a low‐contrast cylindrical object is commonly used to measure the modulation transfer functions (MTFs) in computed tomography (CT) images reconstructed with iterative reconstruction (IR) algorithms. This method generally entails averaging multiple images of the cylinder to reduce the image noise. We suspected that the cylinder edge shape depicted in the IR images might exhibit slight deformation with respect to the true shape because of the intrinsic nonlinearity of IR algorithms. Image averaging can reduce the image noise, but does not effectively improve the deformation of the edge shape; thereby causing errors in the MTF measurements. We address this issue and propose a method to correct the MTF. We scanned a phantom including cylindrical objects with a CT scanner (Ingenuity Elite, Philips Healthcare). We obtained cylinder images with iterative model reconstruction (IMR) algorithms. The images suggested that the depicted edge shape deforms and fluctuates depending on slice positions. Because of this deformation, image averaging can potentially cause additional blurring. We define the deformation function D that describes the additional blurring, and obtain D by analyzing multiple images. The MTF measured by the circular‐edge method (referred to as MTF') can be thought of as the multiplication of the true MTF by the Fourier transformation (FT) of D. We thus obtain the corrected MTF (MTFcorrected) by dividing MTF' by the FT of D. We validate our correction method by comparing the calculated images based on the convolution theorem using MTF' and MTFcorrected with the actual images obtained with the scanner. The calculated image using MTFcorrected is more similar to the actual image compared with the image calculated using MTF', particularly in edge regions. We describe a pitfall in MTF measurement using the circular‐edge technique with image averaging, and suggest a method to correct it.
| INTRODUCTION
Iterative reconstruction (IR) algorithms have been widely implemented for clinical computed tomography (CT). IR methods can reduce image noise, which is mainly caused by radiation quantum fluctuation in the CT projection data, while maintaining (or improving) the spatial resolution. [1][2][3][4] Most IR algorithms incorporate statistical models (photon and noise statistics) and the scanner geometry and optics, and have nonlinear properties. 5,6 It has been reported that the nonlinear properties cause spatial resolution variability depending on image noise levels and object contrast. [7][8][9] Therefore, the modulation transfer function (MTF), one of the most comprehensive metrics for spatial resolution, measured using traditional approaches with high contrast wires or beads, is not applicable for characterizing the spatial resolution of clinical IR images. Richard et al. developed a new MTF measurement approach, called the "circular-edge technique," using a low-contrast cylindrical object. 7 This technique has been widely used for MTF measurements of IR images. Most of the studies using this technique computed the average of the consecutive cross-sectional images of the cylinder and/or the average of many images acquired from repeated scans to improve the signal-to-noise ratio. 1,[9][10][11][12] Because of the intrinsic nonlinearity of IR algorithms, the resulting image properties are complicated compared with those of filtered back projection (FBP) images. Leipsic et al. 13 reported that in cardiac CT angiography, reconstructions obtained using adaptive statistical iterative reconstruction (ASIR) differ in appearance from traditional FBP images, exhibiting a different noise texture and smoothed borders. Singh et al. 14 observed a step-like artifact at tissue interfaces (such as the margins of the liver, spleen, and blood vessels) in abdominal CT images reconstructed using ASIR. The imaging at border/edge regions using IR algorithms is potentially sensitive to slight fluctuations in the CT projection data, including noise. In a phantom study, Li et al. 9 obtained multiple IR images using repeated scans, and assessed the standard deviation of CT values locally in the edge regions of circular objects. They considered this standard deviation as "edge-noise," and found that the edge-noise was greater than the standard deviation computed for uniform regions; thereby suggesting a specific anomaly in edge regions. The object edge shape depicted in IR images may deform slightly with respect to the ideal shape (circle) and fluctuate in repeated scans; this effect is one potential reason for the increased edge-noise. Averaging multiple images can reduce the image noise, but does not effectively improve the deformation and fluctuation of the object edge shape depicted in the IR images. When applying image averaging with the circular-edge technique, the occurrence of edge shape deformation may adversely affect MTF measurements.
The aim of this study is to address this issue and propose a method to correct the MTF measured using the circular-edge technique. To verify the validity of the proposed method, we compared the computed images obtained by applying the convolution theorem using the corrected MTF with the true images obtained by the CT scan.
2.A | Equipment and imaging parameters
We used the sensitometry module (CTP404) included with the Catphan 600 phantom (The Phantom Laboratory, Salem, NY). The module consists of eight cylindrical objects; we used two objects made from Delrin (approximately 350 HU at 120 kVp) and polystyrene (PS) (approximately −30 HU at 120 kVp). The background CT value was approximately 100 HU at 120 kVp. We placed the phantom in the center of the scanner field of view (FOV) such that the cylinder was parallel to the z direction, and therefore perpendicular to the x-y scanning plane. We scanned the phantom with a multidetector row CT scanner (Ingenuity Elite, Philips Healthcare, the Netherlands) at 120 kVp, 100 mA, with a one-second rotation time, a pitch of 1.17, and a detector configuration of

A CT image is characterized by the spatial resolution of the system. When considering a CT image of a uniform cylindrical object placed parallel to the z direction (perpendicular to the x-y scanning plane), the resulting image is expressed as follows: [15][16][17]

I(x, y) = O(x, y) * PSF(x, y),  (1)

where O(x, y) is an object function of a circular shape with uniform density, and PSF(x, y) is the two-dimensional (2D) point spread function (PSF). The operator * is the 2D convolution. Because of the uniform circular shape of the cylinder, the cross-sectional image I(x, y) does not change with the slice position along the z-axis. However, we observed differences between consecutive slice images reconstructed using IMR (Fig. 1). To describe a practical image generation system that includes the object-shape deformation present in IMR images, we make several assumptions and modify Eq. (1) as follows.
First, we include a deformation between each of the cross-sectional images and the original circular shape in the object function.
Thus, we write Eq. (1) for each cross-sectional slice i as

I_i(x, y) = O′_i(x, y) * PSF(x, y),  (2)

where I_i(x, y) is the i-th slice image and O′_i(x, y) is the deformed object function for that slice. The average of n consecutive slice images is then

Ī(x, y) = (1/n) Σ_i I_i(x, y) = [(1/n) Σ_i O′_i(x, y)] * PSF(x, y),  (3)

and the averaged (deformed) object function can be expressed as the ideal object blurred by a function D(x, y),

(1/n) Σ_i O′_i(x, y) = O(x, y) * D(x, y),  (4)

where D(x, y) is a blurring function whereby the blurring originates from the deformations in O′_i(x, y). Therefore, we refer to D(x, y) as the deformation function. By applying Eqs. (4) to (3), we obtain

Ī(x, y) = O(x, y) * D(x, y) * PSF(x, y).  (5)

The circular-edge technique is based on Eq. (1), assuming the ideal circular shape of O(x, y) and the isotropy of the in-plane resolution; this provides an MTF that is equivalent to the Fourier transformation of PSF(x, y). That is, the resultant MTF is written as MTF(w) = |F[PSF(x, y)]|, where F is the Fourier transform, u and v are the spatial frequency coordinates in the x and y directions, respectively, and w = √(u² + v²) is the spatial frequency in the radial direction. When applying image averaging with the circular-edge technique, the measured image follows Eq. (5) rather than Eq. (1); the measured MTF, denoted MTF′(w), is therefore the true MTF multiplied by the frequency characteristics of the deformation function, MTF′(w) = MTF(w) · D̃(w), where D̃(w) is the Fourier transformation of D(x, y). We applied the circular-edge technique to obtain the frequency characteristics of the deformation function D(x, y) (Fig. 3).
2.C.2 | MTF correction method
The corrected MTF is obtained by dividing the measured MTF by the frequency characteristics of the deformation function, MTF_corrected(w) = MTF′(w) / D̃(w). The difference between MTF′(w) and MTF_corrected(w) obtained for the IMR Body reconstruction algorithms is illustrated in Fig. 3. To validate the correction, images calculated on the basis of the convolution theorem using MTF′(w) and MTF_corrected(w) were compared with the actual images obtained with the scanner. We calculated the RMSEs for these comparisons for all 200 slice images, and the average RMSEs are shown in Table 2. The average RMSEs corresponding to MTF_corrected(w) were smaller than those corresponding to MTF′(w) under all conditions.
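In code, the correction amounts to a pointwise division of the measured radial MTF by the normalized frequency characteristics of D; the clipping constant below is an assumption used only to keep the division numerically stable.

```python
import numpy as np

def correct_mtf(mtf_measured, d_tilde, eps=1e-6):
    """Divide the MTF measured with the circular-edge technique by the
    frequency characteristics of the deformation function D.

    mtf_measured : 1-D array, MTF'(w) sampled on radial frequencies w.
    d_tilde      : 1-D array, Fourier transform magnitude of D on the same
                   frequencies, normalized so that d_tilde[0] == 1.
    """
    d = np.clip(d_tilde, eps, None)   # avoid division by (near) zero
    return mtf_measured / d
```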
| DISCUSSION
We considered that images reconstructed using the IMR algorithm potentially deform from the ideal object shape, depending on the noise in the edge regions (Fig. 1). This deformation is caused by the intrinsic nonlinearity of the IMR algorithms, and was not observed in Fig. 3(e). The MTF' and MTF corrected used for calculating image (g) and (i) are shown in Fig. 3(f).
Large deformations were observed in the images reconstructed using the algorithm Body SharpPlus compared with those reconstructed using Body Routine (Fig. 1). The deformations caused blurring in the averaged image, reducing the frequency characteristics of D(x, y) (Fig. 3). The effective object function could still be generated (Fig. 2), even when using a lower-contrast object (the contrast of the PS relative to the background was approximately −130 HU). However, when using a considerably lower-contrast object, improvements in generating the effective object function might be necessary.
| CONCLUSION
We demonstrate a pitfall in the circular-edge technique accompanied with image averaging for MTF measurement, particularly when using an edge-enhancement type IMR algorithm. To address this issue, we made several assumptions, modified the equation for the image generating system, and proposed a method to correct the MTF. We confirmed the validity of the proposed method by comparing the calculated images based on the corrected MTF with the actual (true) images. When using an edge-enhancement type IMR algorithm, the MTF correction method improves the results obtained using the circular-edge technique.
This work was supported by JSPS KAKENHI Grant Number
JP17K09059. We thank Irina Entin, M. Eng., from Edanz Group (www.edanzediting.com/ac) for editing a draft of this manuscript.
CONFLICT OF INTEREST
The authors have no relevant conflict of interest to disclose. | 2020-01-21T14:02:31.133Z | 2020-01-20T00:00:00.000 | {
"year": 2020,
"sha1": "b91ae17bfdb9b2568f1874b561ff082cf4ed9d15",
"oa_license": "CCBY",
"oa_url": "https://aapm.onlinelibrary.wiley.com/doi/pdfdirect/10.1002/acm2.12821",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "42d9e41598a2691f0437c4139dff29db25293991",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
247348562 | pes2o/s2orc | v3-fos-license | Frequent Eating Out and 10-Year Cardiovascular Disease Risk: Evidence from a Community Observatory in Malaysia
Despite increasing mortality rates from cardiovascular diseases (CVDs) in low- and middle-income countries, information on the estimation of 10-year CVD risk remains to be sparse. Therefore, this study was aimed at predicting the 10-year CVD risk among community dwellers in Malaysia and at identifying the association of distal (socioeconomic characteristics) and proximal (lifestyle practices) factors with 10-year CVD risk. We calculated the 10-year CVD risk score among 11,897 eligible respondents from the community health survey conducted by the South East Asia Community Observatory (SEACO) using the Framingham risk score (FRS). The findings indicate that 28% of respondents have a high chance of having CVD within the next ten years. After adjusting for the age of respondents, demographic and socioeconomic factors such as gender, ethnicity, marital status, education, income, and occupation had an association with the 10-year CVD risk. In addition, frequent eating out had an association with 10-year CVD risk, while physical activity was found to have no association with predicted CVD risk. CVD remained among the top five mortality causes in Malaysia. Health promotion strategies should emphasize the importance of having home-cooked meals as a healthy dietary behavior, to reduce the mortality rate among Malaysians due to CVDs.
Introduction
Cardiovascular diseases (CVDs) account for the majority of noncommunicable disease (NCD) deaths across the globe [1]. NCDs contributed to about 73% of the total deaths in Malaysia, with the largest contributor being CVDs (35%) in 2015 [2]. Ischaemic heart disease remained among the five principal causes of deaths in Malaysia (15.6%) in 2019 [3].
In addition, the prevalence of obesity (17.7%), hypercholesterolemia (47.7%), and diabetes (17.5%) had increased among Malaysian adults [4]. Since individuals may have one or more CVD risk factors and/or chronic disease conditions, the progression of a particular CVD risk factor cannot accurately predict future CVD risk at the population level. The Framingham risk scoring (FRS) model can predict the CVD outcomes "fairly well" in the Asian population [5][6][7][8][9]. However, there remain to be limited findings on the 10-year CVD risk predicted by using a representative population sample in Asian countries with developing economies such as Malaysia.
In the related literature, the association of behavioural or lifestyle exposures such as unhealthy diets and physical inactivity with individual CVD risk factors such as obesity [10][11][12], hypertension [13,14], diabetes [11], and hyperlipidaemia [14,15] has been documented. However, past studies investigating the relationship of lifestyle factors such as diet and physical activity with 10-year CVD risk are lacking. Hence, this study was aimed at predicting the 10-year CVD risk among Malaysians and at identifying the association of distal (socioeconomic characteristics) and proximal factors (lifestyle practices) with 10-year CVD risk.
Study Design and Data
Collection. This study utilized data from the community health survey 2013 collected by the South East Asia Community Observatory (SEACO). SEACO is a health and demographic surveillance system (HDSS) established in Segamat, Johor, Malaysia [16]. This surveillance site covers approximately 44,900 individuals that reside in 13,400 households in the baseline enumeration (household census) conducted in 2012 [16]. SEACO conducted house-to-house interviews to obtain the demographic and socioeconomic status (e.g., education, age, ethnicity, and income) and self-reported information on lifestyle practice and dietary behaviour (e.g., smoking, physical activity, and frequency of eating outside). Anthropometric measurements (e.g., height, weight, blood pressure, and random blood glucose) were also conducted as part of the home-based health screening among respondents aged 35 years and above. Data collection was undertaken from August 2013 to July 2014. The total number of respondents in this survey was 25,184 [16].
Selection of Respondents.
The respondents with no history of CVD were eligible for this FRS prediction model. From among 25,184 respondents, only 13,804 had blood pressure measurement. In addition, we considered the exclusion of the respondents that answered "Yes" to the following questions: (1) "Have you ever been told by a doctor/ medical assistant that you have heart disease?" or (2) "Have you ever been told by a doctor/medical assistant that you have had a stroke?". However, the number of respondents was further reduced to 11,897 because of missing information ( Figure 1). We included all eligible participants in our data analysis.
2.3. Cardiovascular (CVD) Risk Score. The outcome variable was modified FRS point proposed by D'Agostino et.al [17] to determine the 10-year CVD risk. To estimate the 10-year CVD risk score, the researchers utilized nonlaboratory predictors: age (in years), body mass index (BMI), antihypertension medication use, systolic blood pressure (SBP), smoking status, and diabetes mellitus status. Respondents were classified as smokers if they reported "currently smoking." Anthropometric measurements such as height (cm) and weight (kg) were obtained through home-based screening during interviews. Body mass index (BMI) was calculated by dividing the weight with height in metres squared. Three blood pressure (BP) readings were taken using the Omron HEM 7120 E Blood Pressure Monitor M2 Basic Digital Intellisense following the standard STEPwise guideline [18]. However, only the second and third BP readings were averaged and used in the data analysis, as the first reading may overestimate the mean BP [19].
The classic FRS was previously validated in Malaysia by Ng and Chia [20]. Meanwhile, Su et al. [9] predicted and compared the 10-year CVD risk among low-income urban dwellers in Metropolitan Kuala Lumpur using both the classic and modified FRS, in which both models reported similar findings. We followed the steps proposed by D'Agostino et al. in the prediction of the CVD risk. First, the FRS point from each category was identified and summed up. Then, the total FRS points obtained were converted into 10-year CVD risk, classified as low (≤6%), moderate (7 to 20%), and high (>20%) [9,17].
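The nonlaboratory predictors and the risk classification described above can be summarized in a short sketch. The conversion from summed FRS points to a risk percentage follows the published tables of D'Agostino et al. and is not reproduced here; the function names are illustrative.

```python
def bmi(weight_kg, height_cm):
    """Body mass index from weight (kg) and height (cm)."""
    h = height_cm / 100.0
    return weight_kg / (h * h)

def mean_sbp(readings):
    """Average of the 2nd and 3rd systolic readings; the 1st reading is
    discarded because it may overestimate the mean blood pressure."""
    return (readings[1] + readings[2]) / 2.0

def risk_category(ten_year_risk_percent):
    """Classify the predicted 10-year CVD risk:
    low (<=6%), moderate (7-20%), high (>20%)."""
    if ten_year_risk_percent <= 6:
        return "low"
    if ten_year_risk_percent <= 20:
        return "moderate"
    return "high"
```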
2.4. Demographic and Socioeconomic Characteristics of Respondents. The demographic variables included were age, gender, ethnicity (Malay; Chinese; Indian; Aborigine; and others), and marital status (never married; married; separated/divorced; widowed/widower; and others). Socioeconomic variables included were income (below RM 1000; RM1000-RM1999; RM2000-RM2999; and RM3000 and above), highest education level attained (no formal education; primary; secondary; tertiary; and other [e.g., religious school and international school]), and occupation (paid employee; self-employed; homemaker; not working; pensioner; and other). The education status of the respondents was also used as a proxy of the level of health literacy [21,22].
2.5. Dietary and Lifestyle Practice. Lifestyle practice included frequency of eating out and level of total physical activity. Frequency of meals eaten outside acted as a proxy for healthy eating habits [23,24]. The level and intensity of physical activity were measured by the validated Malay version of the Global Physical Activity Questionnaire (GPAQ) by WHO [25]. The categorisation of the level of total physical activity was low, moderate, and high according to the GPAQ guideline [25].
2.6. Statistical Analysis. Data were analysed using IBM SPSS version 24. Descriptive statistics presented the characteristics of the respondents and variables used in deriving FRS scores. Chi-square tests were conducted to examine the association between 10-year CVD risks with the selected independent variables. Multiple linear regression outlined the association between the 10-year CVD point scores with the demographic and socioeconomic variables and lifestyle behaviour.
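A schematic of the two analyses is given below; the column and variable names are illustrative, not those of the SEACO dataset, and the analysis in the paper was performed in SPSS rather than Python.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2_contingency

def chi_square_association(df, exposure, risk_col="cvd_risk_category"):
    """Chi-square test of association between a categorical exposure and the
    10-year CVD risk category."""
    table = pd.crosstab(df[exposure], df[risk_col])
    chi2, p, dof, _ = chi2_contingency(table)
    return chi2, p, dof

def risk_score_regression(df):
    """Multiple linear regression of the FRS point score on demographic,
    socioeconomic and lifestyle variables, adjusted for age."""
    model = smf.ols(
        "frs_points ~ age + C(gender) + C(ethnicity) + C(education)"
        " + C(income_group) + C(occupation) + C(eating_out) + C(physical_activity)",
        data=df,
    ).fit()
    return model.summary()
```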
Ethics
Approval. The study was approved by the Monash University Human Research Ethics Committee (MUHREC) (Project ID: 13142). The respondents were given an explanatory sheet and a consent form by the data collectors (DCs) from SEACO during the face-to-face interviews. The DCs conducted the interviews and performed home-based screening after the respondents agreed and signed the consent form.
Results
A total of 11,897 respondents without a history of CVDs were included in this study. Table 1 presents the demographic, socioeconomic, and dietary and lifestyle practice data of the respondents. About 35% of the respondents were aged 60 and above. Approximately 57% were female. The majority of the respondents were Malay (63.4%), followed by Chinese (25.1%) and Indian (9.6%). Most of the respondents (88.6%) had at least primary or secondary education. About 53% of the respondents earned below RM1000 on a monthly basis. The majority of the respondents (76%) reported a frequency of eating out of less than 6 meals per week, while about 90% of respondents had a low level of total physical activity. Table 2 summarises the characteristics of the variables used for constructing the FRS model. The majority of the respondents (87.3%) reported that they did not take antihypertensive medication. About 86% were nonsmokers, and about 90% were nondiabetic patients. The mean age was about 55 years old, and the average of the SBP reading was 132.6 mmHg. The mean BMI among the respondents was 26.6 kg/m². The mean predicted CVD risk was 11.29 (95% CI, 11.19 to 11.39). Table 3 summarises the demographic, socioeconomic, and dietary and lifestyle practice by CVD risk. The results indicate that about 28% and 43% of the respondents were at high and moderate risks of CVD, respectively. Compared to other age groups, respondents aged 70 and above had a higher risk of getting CVD. Male respondents had a high CVD risk. Malay, Chinese, and Aborigine respondents had a high CVD risk compared to the Indian ethnicity. Respondents who were widowed/widower, without formal education, or earned RM1,000-RM1,999 monthly were predicted to have a high CVD risk. The results also show that the risk of getting CVD was quite evenly distributed across all categories for frequency of eating outside per week and level of total physical activity. Table 4 presents the association between 10-year CVD risk and demographic, socioeconomic, and dietary and lifestyle practice, adjusted for age. This study was aimed at identifying the impact of lifestyle practices on CVD risk prediction among a semirural population; hence, two independent models are presented as follows. Model 1 comprised CVD risk and demographic and socioeconomic variables, while model 2 comprised model 1 in addition to physical activity and dietary practice. By presenting the two models, this study distinguishes the effect of demographic and socioeconomic status and lifestyle practices on CVD risk prediction among the respondents. In model 1, female respondents had lower 10-year CVD risk points (b = −1.456, 95% CI, -1.636 to -1.277) as compared to males. Chinese (b = −0.765, 95% CI, -0.918 to -0.613) and Indian (b = −0.251, 95% CI, -0.471 to -0.031) respondents had lower 10-year CVD risk points as compared to Malay respondents, while the Aborigine group (b = 1.797, 95% CI, 1.183 to 2.410) had higher CVD risk points compared to Malay respondents. Only respondents that were widowed/widower had higher CVD risk as compared to never married respondents (b = 0.831, 95% CI, 0.456 to 1.207). Respondents that studied primary/secondary level (b = −0.875, 95% CI, -1.249 to -0.501), tertiary level (b = −1.088, 95% CI, -1.572 to -0.603), and other types of schools (b = −0.714, 95% CI, -1.220 to -0.207) had a lower CVD risk as compared to those who had no formal education.
Respondents who earned more than RM1,000 monthly had a lower CVD risk as compared to those who earned less than RM1,000 per month. In terms of occupation status, the CVD risk points were higher among the self-employed (b = 0.218, 95% CI, 0.017 to 0.418), homemakers (b = 0.459, 95% CI, -0.245 to 0.672), those who reported not working (b = 0.757, 95% CI, 0.499 to 1.016), and other unspecified occupations (b = 0.455, 95% CI, 0.169 to 0.742), as compared to paid employees.
In model 2, those who reported eating solely at home (b = −0.342, 95% CI, -0.560 to -0.124) or eating out less frequently (1 to 5 meals per week) (b = −0.574, 95% CI, -0.781 to -0.367) had lower predicted CVD risk points as compared to those who ate out 11 times or more per week. There was no association between physical activity and CVD risk prediction. The findings on distal factors are similar to model 1.
Discussion
The prevalence of predicted CVD risk among the respondents was 28.9%, 42.9%, and 28.2% for low, moderate, and high CVD risk, respectively. The prevalence of CVD risk was lower as compared to a study done in Kuala Langat (a semirural area) in 1993, wherein 55.8% of males and 15.1% of females reported to have a high CVD risk [6]. In contrast, our study shows that 47.2% of males and 13.6% of females had a high CVD risk. However, the prevalence of CVD risk in our study was higher as compared to other past studies in the context of Malaysia. Su et. al. [9] reported that there were 21.8% and 38.9% of respondents who had a high and moderate CVD risk, respectively, among urban dwellers in Kuala Lumpur in 2012. The Prospective Urban Rural Epidemiology (PURE) project conducted in 2008 reported that the prevalence of high CVD risk was only 16% [11]. Ahmad et. al. [26] utilized the NHMS data from 2006 until 2015 to determine the prevalence of CVD risk among Malaysians by using the WHO/ISH risk prediction chart. The authors discovered that the prevalence of high CVD risk (>40%) increased among female respondents aged 70 to 79 with time (11.1% in 2006 to 15.3% in 2015) [26].
The current study shows that there was an association between demographic, socioeconomic, and dietary and lifestyle practice with CVD risk. Older respondents had a higher CVD risk, which is an inevitable event as a process of ageing [26][27][28]. Older individuals encountered more health issues and sickness such as hypertension [28] and diabetes [27], which are CVD risk factors. Male respondents were found to have a higher CVD risk compared to females. This finding is consistent with the past studies conducted in Malaysia [5,6,9]. Meanwhile, Malay, Chinese, and Aborigine respondents had a higher CVD risk compared to the Indian ethnicity. This finding is contrary to the previous studies in Malaysia, where only Malays had a higher predicted CVD risk as compared to other ethnic groups [5,9]. Meanwhile, Amiri et al. [27] discovered that Indians had a lower risk of having more than one CVD risk factor, while another study showed that Indian males aged 45 and above had higher odds of having more than three CVD risk factors due to dietary and lifestyle practices as well as genetic factors [14]. Moreover, respondents who were widowed/widower were reported to have a higher CVD risk. The finding was consistent with the previous study done in Kuala Lumpur, where married individuals and widows/widowers had a higher CVD risk [9]. In this study, respondents without a formal education had a higher prevalence of CVD risks. Previous studies have shown that low education attainment is one of the contributing factors to CVD risk factors [11,14]. Individuals with lower education attainment tend to have a lower monthly income [29]. Therefore, respondents who earned RM1,000-RM1,999 monthly in this study had a higher prevalence of CVD risk. This is because individuals with a low monthly income will have limited access to health services and encounter a financial burden in obtaining medical support [29]. Subsequently, they tend to lower their own concerns and perceptions on their health conditions [29]. In our study, respondents who were not working or worked as other unspecified occupation had a higher CVD risk. This finding is contrary to that by Amiri et al. [27], where paid employees had a higher CVD risk. Another study in Malaysia found that homemakers had a higher prevalence of CVD risk, attributable to unhealthy lifestyles and dietary practices [14].
The novel finding in our study is the inclusion of Aborigines in the prediction of 10-year CVD risk. They were found to have a higher CVD risk. Previous studies on 10-year CVD risk among Malaysians mainly focused on major ethnic groups such as Malay, Chinese, and Indian [5,9,11], and information on Aborigines and minority communities was often excluded in the research evidence reported. Hence, we strongly recommend inclusive NCD health policies and strategies that take into consideration minority communities and Aborigine groups.
Education attainment of respondents is highly associated with the CVD risk. For instance, respondents with a higher education level (tertiary) had a lower CVD risk as compared to those who had no formal education. This result is consistent with some prior studies [5,11,30], who report that those with a lower education level had a higher predicted CVD risk. Individuals with a lower education level had a poorer understanding of health information, or they had a lower health literacy in general [14]. Individual's health behaviour could be affected by their health literacy [21]. Individuals with higher education attainment had a higher level of health literacy, where some information and knowledge on health required the need of an individual to read and understand the content [21]. In addition, some past studies proved that health literacy served as a mediator or pathway by which education affects health [21,22].
In our study, individuals with a higher income (RM1,000 and above) had a lower predicted CVD risk. A low income will limit the individual access to health services. Therefore, the individual's health condition is often neglected [29]. Our finding shows that respondents that worked jobs other than paid employees had a higher CVD risk, which in line with the study by Su et. al [9]. This might be due to paid employees having a stable source of income and insurance coverage that can enhance the access for health services and thus reduce the prevalence of CVD risks [9]. It is noteworthy that individuals with a lower educational attainment had a lower source of income, resulting to limited access to healthcare services [29]. Hence, the predicted CVD risk was high among those with a low education level, those with unstable jobs or unemployed, and those with a low income.
Many studies emphasized the importance of physical activity [31][32][33] and nutrition intake [34,35] in CVD prevention and risk reduction. Our study shows that there is no significant association between physical activity and CVD risk, similar to the study by Yang et al. [36]. However, the finding was contrary to the study done by Yadav et al. [37], where two weeks of a yoga-based lifestyle intervention reduced the predicted CVD risk score by 11%. Meanwhile, a previous study on civil servants in South-Western Nigeria using the WHO prediction chart showed that those who were physically inactive had a 2.4 times higher risk of developing CVD as compared to those who were physically active [38]. Another study in Korea found that young Korean women (below 40 years old) had a high predicted CVD risk due to an unhealthy lifestyle such as smoking, obesity, or sedentary activity [39]. However, the occurrence of CVD is not only influenced by physical activity; socioeconomic status and dietary habits also play an important role in preventing CVD risk. Hence, the insignificant association of physical activity with 10-year CVD risk in our study does not imply that physical activity is not an important aspect in preventing CVD. Since our data focused on reported physical activity, objective measurements of physical activity should be included in future data collection, and this would provide more robust information.
Respondents who reported eating solely at home or eating out less often (less than 6 meals per week) had a lower CVD risk. This might be due to the food choices practised by Malaysians [40]. According to the Household Expenditure Survey Report 2019, people who reside in rural areas use about a quarter of their income on raw foods and ingredients, while those who live in urban areas only spend 16% of their income on similar products [41]. This indicates that people in rural areas have a higher tendency to eat home-cooked food, which is consistent with the findings of this study. The report found that the foods and goods that were most purchased included rice, chicken, eggs, vegetables, fish, beef, and fruits that are essential and nutritious for the body [41]. Many past studies have proven that diets with a low sugar intake help to prevent CVD [34,42]. Refined carbohydrates (e.g., white rice and white flour), sweetened beverages, high sodium intake, and saturated fats increased CVD risk [34,43]. In Malaysia, easily accessible fast food outlets [44] and the habit of frequently eating out have resulted in an increased trend of consuming unhealthy fast foods, which are energy dense and high in fat and sodium content [45][46][47]. Meanwhile, people who eat out tend to choose popular food that is high in fats and sodium such as nasi lemak (rice cooked with coconut milk), pasta, chicken rice, and rice with thick and rich gravies. This food choice could increase the CVD risk among those who frequently eat outside. A past study done in Malaysia also concluded that incorrect and disordered eating patterns among Malaysians were associated with the occurrence of obesity, which was one of the risk factors of CVD [48].
This study found that education played an important role in reducing CVD risk. By improving their socioeconomic status through education [29], people are also able to gain knowledge and understand the concept of healthy eating [14]. Therefore, health education should be introduced through formal and informal curricula. Besides that, encouraging home-cooked meals should be a key message for health promotion. This is also in line with the National Plan of Action for Nutrition of Malaysia III 2016-2025 (NPANM III), where the government emphasized promoting healthy eating and active living [4]. Various strategies such as promoting nutrition activities via mass media and conducing advocacy and awareness on Malaysian Healthy Plate concepts through various activities (campaigns, talks, exhibitions, etc.) were implemented in promoting healthy eating [4]. However, these activities are more focused on urban areas. Healthy eating information and education for the rural population should be considered as a priority area for NCD prevention and control. Apart from the Internet, the government should spread health information through broadcasting, newspapers, campaigns, and other offline outlets. This is because older people face difficulty in finding information online, and this causes a low level of health literacy among elderly [49]. Increasing the level of health literacy among older adults is essential, as they are exposed to more health risks and limited access to digital health information [50]. Besides that, financial aid support for low-income groups should be implemented to reduce the financial burden for seeking medical help among the poor. For instance, "Skim Peduli Kesihatan for the bottom 40% income group" (Healthcare Scheme B40 or PeKa B40) was founded as about 48% of the B40 group aged 40 and above had at least one NCD that was often undiagnosed [51].
The strength of this study is the large population representative sample that consisted of the major and minor ethnic groups in the country. This study reported cumulative CVD risk in 10-year prediction, which is a more pragmatic approach than measuring individual risk factors. However, there are some limitations inherent in this study. The use of a self-reported assessment may lead to measurement and recall bias, where the respondents might underreport or overreport the lifestyle practices, particularly the level of physical activity.
Conclusions
In summary, there is an association of demographic, socioeconomic, and dietary practice with 10-year CVD risk. CVD was one of the five principal causes of death in Malaysia in 2019 [3]. Health information interventions should be age-friendly so that individuals of different age categories can access and utilize the information easily. Increasing the level of health literacy to adopt healthy eating practices among Malaysians should be the focus of the government, in an attempt to reduce CVD risk. Comparison of the calorie, fat, and sodium content of home-cooked and outside meals should be included in health education materials to promote healthy eating at home. Besides the major ethnic groups, public health interventions should also focus on minority communities. It is essential to raise the public's awareness of healthy diets and lifestyle to reduce the rate of mortality due to CVDs.
Data Availability
Data requestors will need to fill in an online application form from the SEACO website (https://www.monash.edu.my/ seaco/research-and-training/how-to-collaborate-with-seaco). All the application will go through the SEACO Review Committee.
Conflicts of Interest
The authors declare no conflict of interest. | 2022-03-10T16:20:34.887Z | 2022-03-07T00:00:00.000 | {
"year": 2022,
"sha1": "c3ef5d21478d790477b1917aafc4d1032528f811",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/bmri/2022/2748382.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e2b21648a4f662afe330ee39a6465e9f785d13ad",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118610825 | pes2o/s2orc | v3-fos-license | Splitting of the monolayer out-of-plane A'1 Raman mode in few-layer WS2
We present Raman measurements of mono- and few-layer WS2. We study the monolayer A'1 mode around 420 cm(-1) and its evolution with the number of layers. We show that with increasing layer number there is an increasing number of possible vibrational patterns for the out-of-plane Raman mode: in N-layer WS2 there are N Gamma-point phonons evolving from the A'1 monolayer mode. For an excitation energy close to resonance with the excitonic transition energy we were able to observe all of these N components, irrespective of their Raman activity. Density functional theory calculations support the experimental findings and make it possible to attribute the modes to their respective symmetries. The findings described here are of general importance for all other phonon modes in WS2 and other layered transition metal dichalcogenide systems in the few layer regime.
Introduction
The last few years have seen a spectacular increase in interest in layered transition metal dichalcogenides (TMDs). While the properties of the 3D bulk materials have been well known for decades, the possibility of thinning them down towards the monolayer has given rise to an entirely new research area. Their structural formula M X 2 (with M being a transition metal and X a chalcogenide) comprises metals, semimetals, semiconductors and superconductors [1]. What they have in common is that, with approaching the 2D limit, a whole world of intriguing properties such as extraordinarily large exciton binding energies [2], robust valley polarization [3] and large spin orbit splitting [4] opens up. These properties pave the way for potential use of TMDs for applications in digital electronics and optoelectronics [5], in energy conversion and storage [6] as well as in spintronics [7]. Most tungsten and molybdenum based TMDs exhibit a transition from an indirect to a direct bandgap semiconductor when being thinned down to the monolayer, resulting in high intensity photoluminescence [8][9][10]. Raman spectroscopy is one of the most powerful tools in characterizing nanomaterials. For few-layer (FL-) TMDs such as FL-WS 2 it allows for instance to exactly determine the number of layers. To be able to extract all the information that the measured Raman spectra have to offer, it is of primary importance to have a common ground on which to categorize the different Raman modes as a function of the number of layers. There is a large number of publications on monolayer TMDs [11][12][13][14] and the respective bulk materials [12,15,16], but only few articles concentrate on the transition from monolayer to bulk, e.g. the evolvement of the Raman signatures with the number of layers. In these articles moreover, many use the symmetry of the bulk to assign the Raman features [10,[17][18][19]; only recently there have been some reports that take into account the different symmetries of even number of layers (even N ) and odd number of layers (odd N ) TMDs [20,21]. However they only focus on Raman modes that are allowed in first-order scattering. In this work we will show that (i) it is important to distinguish between even and odd number of layers and (ii) that when the excitation energy is in resonance with the first optical transition of the investigated material it is necessary to consider the full set of phonons. We study the splitting of the monolayer out-of-plane A 1 mode in FL-WS 2 in particular and are able to observe layer dependent Raman signatures comprising Raman active and inactive modes. They allow for an easy identification of the number of layers via Raman spectroscopy. Moreover a general systematic behavior for the splitting of the monolayer Raman modes is proven experimentally for the first time and supported by DFT calculations. The findings described in this work should be expandable to other FL-TMDs.
Experimental
The samples are prepared from a bulk WS 2 crystal (HQ Graphene, Groningen, Netherlands) using the mechanical exfoliation technique. To enhance the optical contrast, the crystals are exfoliated onto a 90 nm SiO 2 /Si wafer. Raman measurements on WS 2 samples were done at room temperature in backscattering geometry with a Horiba Jobin Yvon LabRAM HR spectrometer using a confocal setup with a 100x objective and excitation wavelengths of 457 nm and 633 nm. To avoid sample heating, the laser power was kept below a maximum of 120 µW; a 1800 lines mm −1 grating was used to ensure a high spectral resolution of around 1 cm −1 . First, Raman spectra were taken with the setup described above and calibrated with neon lines. In a second step the measurements were repeated in subpixel (6sp) mode. There, each spectrum is taken several times, while each time the spectrometer is shifted by a step size which is smaller than a pixel value. This technique does not increase the spectral resolution; however, by providing more data points per wavenumber, it significantly improves the signal-to-noise ratio. The spectra acquired in subpixel mode were then shifted in frequency to match the calibrated Raman spectra obtained in the regular single-window mode. For better comparison, in Figs. 2, 3, 4, the spectra were normalized to the intensity of the out-of-plane mode. Atomic force microscopy (AFM) images were acquired using a Park Systems XE-100 setup with commercial silicon tips in tapping-mode configuration. Images were taken with 256x256 px resolution (4x4 µm). An exemplary AFM analysis of a FL-WS 2 sample is shown in Fig. 1.
Step heights between subsequent layer numbers were typically around 0.7 nm, close to the experimental value of the interplanar spacing of WS 2 layers [22]. Down to the monolayer, an offset of typically around 1 nm between substrate and sample was observed, which is probably due to the presence of adsorbates in between the substrate and the sample and was taken into account in the analysis presented in Fig. 1.
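As a small illustration of the layer-number assignment from the AFM data described above, the sketch below converts measured heights into layer counts using the ≈0.7 nm interlayer spacing and the ≈1 nm substrate offset attributed to adsorbates. The numbers come from the text, but the helper functions and the exact partitioning of the offset are illustrative assumptions, not part of the original analysis.

```python
INTERLAYER_NM = 0.7   # typical step height between subsequent layer numbers (text, Fig. 1)
OFFSET_NM = 1.0       # typical substrate-sample offset attributed to adsorbates

def layer_difference(step_height_nm):
    """Number of layers separating two terraces of the same flake."""
    return round(step_height_nm / INTERLAYER_NM)

def absolute_layer_count(height_above_substrate_nm):
    """Layer count of a terrace measured against the bare substrate.

    Assumes measured height ~ adsorbate offset + N * interlayer spacing; the exact
    partitioning of the ~1 nm offset is an assumption, so counts near the monolayer
    limit should be cross-checked against optical contrast or Raman data.
    """
    return max(1, round((height_above_substrate_nm - OFFSET_NM) / INTERLAYER_NM))

print(layer_difference(1.4))        # ~2 layers between terraces
print(absolute_layer_count(3.1))    # ~3 layers above the substrate
```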
The phonon frequencies of monolayer and FL-WS 2 at the Γ-point were calculated in the framework of density functional (perturbation) theory on the level of the local density approximation (LDA) as implemented in the CASTEP code [23]. We treated the W(4d,5s) and the S(3s,3p) states as valence electrons using norm-conserving pseudopotentials with a cutoff energy of 800 eV. All reciprocal space integrations were performed by a discrete k-point sampling of 18x18x1 k-points in the Brillouin zone. Starting from fully symmetric model geometries of one to five layers of AB-stacked WS 2 , we fully optimized the lattice constants and atomic positions until the residual forces between atoms were smaller than 0.01 eV/Å and the stresses on the cell boundaries were smaller than 2.5×10 −3 GPa. The obtained in-plane lattice constants slightly increased with layer number from a value of 3.141 Å for 1L-WS 2 to 3.143 Å for 5L-WS 2 , in good agreement with the experimental in-plane lattice constant of 3.15 Å in bulk WS 2 [24]. Interactions of the sheet with residual periodic images due to the 3D boundary conditions were minimized by maintaining a vacuum layer of at least 20 Å.
Results
WS 2 , like MoS 2 , crystallizes in the 2H trigonal prismatic structure where the tungsten atoms are sandwiched between two layers of sulfur atoms. Intralayer bonds are of covalent nature, while the interlayer interaction is governed by weak van der Waals forces. Bulk WS 2 has the D 6h point group; the 6 atoms per unit cell result in 18 phonon modes at the Γ-point of the hexagonal Brillouin zone [25]. In the monolayer and for an odd number of layers (odd N), the symmetry is reduced to the D 3h point group. Therefore, odd N WS 2 does not have a center of inversion, and the Γ-point phonon modes transform according to the irreducible representations of this point group. For an even number of layers (even N), WS 2 possesses a center of inversion and the symmetry is described by the point group D 3d . Let us now first consider the case where the excitation wavelength is far from resonance. Figure 2 (a) shows Raman spectra of one to five WS 2 layers and the bulk material taken with an excitation wavelength of 457 nm. The two main Raman modes are the A 1 and A 1g mode around 420 cm −1 for odd and even N, respectively, and the E and E 1 2g mode around 355 cm −1 for odd and even N, respectively. A clear upshift with increasing number of layers is seen for the out-of-plane A 1 /A 1g mode, whereas the in-plane E/E 1 2g mode slightly softens. This has been observed before for WS 2 [19,26] as well as for other TMDs [10,17,27,28]. The stiffening of the out-of-plane A mode is explained by the increasing interlayer interaction and the subsequent rise in restoring forces on the atoms with the number of layers [17]. The same should hold for the E mode, although to a lesser extent, as the atoms move in-plane and the influence of interlayer interaction is thus expected to be smaller. However, the opposite trend is observed and has been attributed to dielectric screening of long-range Coulomb interactions [12]. Figure 2 (b) depicts the change in frequency of the out-of-plane A and in-plane E mode with the number of layers and the frequency difference between the two modes, which increases from 60.4 cm −1 to 65 cm −1 from the monolayer to the bulk. If the exciting light is far from resonance, the spectra are dominated by Raman modes allowed in first-order scattering, as has been reported previously for WS 2 nanotubes [29]. If we focus on the out-of-plane A modes, it is evident that the mono-, bilayer and bulk spectra show a single peak, whereas for three and more layers there is at least one additional mode appearing as a low-energy shoulder of the dominant Raman feature. This apparent splitting of the out-of-plane mode is due to the fact that for monolayer and bulk WS 2 there is only one Raman active A 1 and A 1g mode, respectively; for few layers, starting with three layers, more than one Raman mode becomes allowed, see also Ref. [21]. To further explore these new Raman modes, which are reported here for the first time in FL-WS 2 , we analyze the Raman spectra of the same samples taken under the resonance condition. Figure 2 (c) shows a photoluminescence spectrum of monolayer WS 2 at 457 nm excitation wavelength. The photoluminescence signal is more than two orders of magnitude larger than the Raman signal and has maximum intensity around 625 nm. From previous experiments it is known that the first optical transition energy of bulk WS 2 is constituted by the A exciton around 633 nm [29]. Therefore, with 633 nm excitation wavelength, we are close to the A excitonic resonance for mono- and few-layer WS 2 .
In the remainder of this work we will focus on the out-of-plane A mode around 420 cm −1 . Of the four Raman modes allowed in bulk WS 2 D 6h symmetry, the E 2 2g is too low in frequency to be observed here, the E 1g mode is not allowed in backscattering geometry, and the E 1 2g mode around 350 cm −1 overlaps with a second order mode, which dominates the spectra in resonance [16]. As we will show below, the out-of-plane mode is clearly separated from other Raman features and shows sidebands that can readily be explained by the respective symmetries of even or odd number of layers. Figure 3 shows the region of the out-of-plane Raman mode of few-layer WS 2 taken with 633 nm excitation wavelength. Similarly to the spectra shown at 457 nm excitation wavelength, the upshift of the Raman mode with the number of layers is evident. More importantly though, there are striking differences in the shape of the Raman mode. For FL-WS 2 the structure of the out-ofplane mode is complex. In contrast to the spectra described above, already the bilayer spectrum shows more than one component. More and more sidebands arise for increasing number of layers but they get weaker for n > 5 and vanish for very thick flakes (n = N ). This again underlines the special role played by few-layer samples: for n = 1 and n = N the symmetry of WS 2 allows only one A 1 (n = 1) and A 1g (bulk) Raman mode. The comparison with the out-of-resonance spectra (457 nm excitation wavelength) illustrates that, even though shoulders of the main Raman peak are observed at 457 nm excitation as well, an increased number of well pronounced sidebands to the main Raman peak appears mainly for Raman measurements in resonance with the A exciton. We fit the spectra with Lorentzian profiles, see Fig. 4; for clarity spectra of even and odd N are shown in separate graphs. In Fig. 4 (a) the bilayer A 1g mode spectrum possesses a low energy shoulder that has not previously been observed. Its appearance is surprising since the only expected Raman active vibration is the A 1g mode, where both layers vibrate in-phase according to the monolayer A 1 mode. We will show below that this second peak is indeed the infrared active A 2u mode, where the two layers vibrate out-of-phase. In the four-layer (4L) spectrum the dominant A 1g mode shifts up with respect to the bilayer, and a prominent shoulder is seen at almost the same fre-quency as the shoulder in the bilayer spectrum. The 4L spectrum is best fitted with four Lorentzians to account for the Raman intensity between the two stronger Raman features. In the spectra of odd N WS 2 , a similar pattern is revealed. In Fig. 4 (b) the trilayer (3L) and five-layer (5L) spectra are shown. The former consists of the expected A 1 peak as the most significant contribution and a pronounced low-energy shoulder. A third Lorentzian fits the plateau between the main peaks. The 3L spectrum cannot be properly fitted with only two Lorentzians. From the experience gained from the spectra investigated above, the five-layer spectrum is fitted with five Lorentzians, two of which fill up the region between the main A 1 peak and the two low-energy shoulders. From the spectra for layer numbers of n = 1 to n = 5 it thus seems that there are always N components to the out-of-plane A mode, where N is the number of layers. We have observed sidebands on the lower energy side of the dominant Raman peak also for higher layer numbers as exemplary shown for the spectrum of an around 10 layer thick flake. 
But they are rather weak and cannot be fitted following the pattern described above.
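To make the fitting procedure described above concrete, the sketch below fits a sum of N Lorentzian profiles to a spectral window around the out-of-plane mode with scipy. The synthetic data and the initial guesses are placeholders; in practice the calibrated measured spectra would be loaded instead of being generated.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, x0, gamma, amp):
    """Single Lorentzian line with center x0, HWHM gamma and peak amplitude amp."""
    return amp * gamma**2 / ((x - x0)**2 + gamma**2)

def multi_lorentzian(x, *params):
    """Sum of N Lorentzians; params = (x0_1, gamma_1, amp_1, ..., x0_N, gamma_N, amp_N)."""
    y = np.zeros_like(x)
    for i in range(0, len(params), 3):
        y += lorentzian(x, *params[i:i + 3])
    return y

# synthetic stand-in for a 3L spectrum: main A1 peak plus a low-energy shoulder and a plateau
x = np.linspace(405, 430, 500)
true = multi_lorentzian(x, 419.5, 0.9, 1.0, 416.0, 1.0, 0.35, 417.8, 1.2, 0.15)
y = true + np.random.normal(0, 0.01, x.size)

# initial guesses: three components, as used for the trilayer in the text
p0 = [419.5, 1.0, 1.0, 416.0, 1.0, 0.3, 418.0, 1.0, 0.1]
popt, _ = curve_fit(multi_lorentzian, x, y, p0=p0)
print(np.round(popt.reshape(-1, 3), 2))   # fitted (center, HWHM, amplitude) per component
```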
DISCUSSION
Recently, careful analysis of few-layer TMDs has led to the observation of Raman modes that are neither seen in the bulk nor in the monolayer [10]. Some of them appear as shoulders to Raman modes that are allowed in first order for bulk and monolayer, like the bulk A 1g mode discussed here. Others are Raman inactive or not allowed in backscattering geometry in bulk and monolayer, like the bulk B 2g and E 1g modes [10,21,30,31]. For the first case, to the best of our knowledge, there is only one article that explicitly shows a splitting of the first-order A 1g mode with the number of layers in few-layer MoSe 2 [10]. The 3L and 4L samples show two components, for the 5L sample a third component is seen. For WSe 2 the overlap of the bulk E 1 2g and A 1g mode makes an observation of such shoulders impossible [21,30], and for the case of MoTe 2 the intensity of the A 1g mode appears to be too small to resolve a multipeak structure [28]. For MoS 2 there is only little literature on resonance Raman spectra, and despite some asymmetry in the shape of the out-of-plane Raman mode, a splitting similar to the one investigated in this work, is not observed [32,33]. Terrones et al. [21] calculate the optical phonons for a number of FL-TMDs, among them MoS 2 and WS 2 , but do not discuss phonons other than the Raman active ones. In order to have a theoretical background to the experimentally observed appearance of more than just the Raman active vibrational modes in the spectra of FL-WS 2 , we have done calculations employing density functional theory (DFT). A better insight into the atomic displacements corresponding to the phonon modes of FL-TMDs is given in Fig. 5, where schematic drawings of all possible vibrations evolving from the monolayer A 1 mode in WS 2 from n = 1 − 5 are depicted, based on the DFT calculations. While there is only one possibility in monolayer WS 2 for the sulfur atoms to vibrate against each other with a fixed tungsten atom in between, a splitting of this mode occurs for bilayer WS 2 . Since bilayer WS 2 possesses a center of inversion, there is a Raman active A 1g mode, where the two layers vibrate in phase, and an infrared active A 2u mode, where the two layers vibrate out of phase. As the latter is not Raman active it is not seen in Raman spectra taken with excitation wavelengths far from resonance [ Fig. 2 (a)]. However, it is observed for the resonance Raman spectrum (Figs. 3 and 4), albeit with weaker intensity than the dominant A 1g mode. Several possible reasons for this unusual behavior are discussed below. As the two layers interact more strongly for the in-phase vibration, the A 1g mode has a slightly higher frequency than its infrared active counterpart.
FIG. 5: Schematic drawing of all possible vibrational modes in the out-of-plane mode region of FL-WS 2 (Raman and infrared) for one (1L) to five (5L) layers, together with the symmetry assignments taken from our DFT calculations. The displacement patterns are ordered with increasing frequency from left to right. Taking into account the full set of possible vibrational patterns helps to study the splitting of vibrational modes with increasing layer number and to attribute the modes to the features seen in the Raman spectra.
For 4L-WS 2 , each of the two bilayer modes again splits up into a Raman active A 1g and an infrared active A 2u mode. The spectrum is still governed by the in-phase vibration of all four layers (A 1g ), but there is a second Raman active A 1g mode that has the outer layers vibrating out of phase with the inner ones, thus retaining the inversion symmetry of the overall structure. It is interesting to note that this lower lying A 1g mode in the 4L spectrum has almost the same frequency as the infrared active mode in the bilayer, a pattern that will also be observed for odd N, see below. In addition, we identify the two small shoulders on the lower-frequency side of the two Raman active modes with the A 2u modes [Fig. 4 (a)], the lowest lying with neighboring layers vibrating out of phase, the other one with the two upper layers vibrating out of phase with the two lower layers. In odd N WS 2 there is obviously again the possibility of all layers vibrating in phase and out of phase. In contrast to even N WS 2 , where the pure out-of-phase vibration is not Raman active, for odd N both in- and out-of-phase vibrations are Raman active and possess A 1 symmetry. For trilayer WS 2 , the atomic displacement vectors of these two modes are shown on the right and left side of the third panel of Fig. 5. In the spectrum depicted in Fig. 4 (b), the lower lying A 1 mode accounts for the strong shoulder at approximately 416 cm −1 of the main A 1 mode (in-phase vibration). In between the modes a plateau is evident that is not accounted for if the spectrum is only fitted with two Lorentzians. The origin of the plateau is attributed to an infrared active A 2 mode, with the middle layer fixed and the sulfur atoms of the top and bottom layers vibrating out of phase (see Fig. 5, third panel, middle). The same approach can now be used for the analysis of the five-layer WS 2 spectrum. Two shoulders to the main A 1 peak can be identified and, following the pattern of Fig. 5, attributed to another two Raman active A 1 modes. In between the Raman active modes, two very weak features belong to infrared active A 2 vibrations. Coming from the trilayer WS 2 , the two modes with the highest frequency in the five-layer material can be imagined as
stemming from a splitting of the main Raman active A 1 mode in trilayer. The same can be said about the lower frequency Raman active mode in the trilayer that splits up into the low frequency A 1 and A 2 mode in five layers. The only infrared active A 2 mode in the trilayer spectrum changes symmetry in the five layer and has infrared active components. In total, in N layers, each of the monolayer phonons splits up into N phonon modes [30,31]. Table 1 lists the experimentally obtained Raman frequencies of all out-of-plane vibrations derived from the A 1 mode at 418.8 cm −1 for few-layer WS 2 . Where two or more samples with the same number of layers were measured, the average value is given in Table 1. In all these cases deviations from the given frequencies are less than 0.3 cm −1 . Again, the table illustrates that Raman and infrared active modes are alternating irrespective of the layer number. Additionally, not only the main Raman active component with A 1 /A 1g symmetry for odd and even N appears to exhibit increased frequency with increasing number of layers but also the other components (2nd row and below in Table 1) follow the same trend. This is illustrated in Fig. 6 (a), where the tabulated frequencies are plotted against the number of layers. The main Raman peak in the spectra shown in Figs. 2-4 is seen stiffening in frequency from the monolayer to five layers (red circles, connected by a dashed line to guide the eye). Starting with the bilayer an infrared active mode comes into play that also appears for higher layer numbers. It also increases in frequency, thus following the behavior of the main Raman component due to increased force constants with increasing number of layers (blue squares, connected by a dashed line to guide the eye). The same is seen for a second Raman active feature starting with three layers and another infrared active mode starting with four layers. Interestingly, the frequencies observed in the mono-, bi-and trilayer are almost exactly repeated when the layer number is increased by two, underlining the close relation of the out-of-plane modes even though the symmetry and Raman/infrared activity changes with the layer number. In contrast, the position of the lowest frequency mode stays almost constant from three layer onwards. This mode always has neighboring layers moving out-of-phase, but neighboring sulfur atoms from adjacent layers moving in-phase. As a result, the nearest neighbor force constants determining the frequency of this mode will not change significantly for larger number of layers. In Fig. 6 (b), the frequencies calculated with DFT are plotted against the layer number. Despite a slight overestimation of absolute frequencies and a smaller magnitude of the splitting of the modes, the experimental results are well reproduced. As a whole, the splitting of the out-of-plane monolayer A 1 mode in FL-WS 2 results in a fan-like shape showing similarity to Fig. 5 in Ref. [20]. Zhang et al. [20] investigated the evolvement of the low-frequency rigid layer C (displacement along the c-direction) and LB (layer breathing) modes in FL-MoS 2 , a first report on these interlayer modes in few-layer MoS 2 can be found in Ref. [34]). With the support of a simple atomic chain model the authors found a behavior similar to the one described here for the out-of-plane vibration in WS 2 . Obviously, rigid layer vibrations only appear starting from the bilayer. 
Strictly speaking, the difference from all other optical modes in FL-TMDs is that there are consequently only N-1 possible vibrations (where N denotes the layer number) for the rigid layer modes (E 2 2g and B 2g symmetry in bulk MoS 2 ). Of course, the "N modes for N layers" rule is restored again if one adds the acoustic modes. The acoustic E and A 2 modes of the monolayer are the origin of the low-frequency vibrations in FL-TMDs and will split up into N components for N layers (among them an A and an E type mode with zero frequency). The behavior described above should in principle be observable in other 2H-TMDs as well. Terrones et al. [21] predict an increased splitting of the Raman active components in the order WSe 2 , MoSe 2 , WS 2 and MoS 2 . What distinguishes WS 2 from other prominent 2H-TMDs like WSe 2 and MoS 2 is that the bulk A 1g and E 1 2g modes are well separated in energy and that no second-order Raman features overlap with the A 1g mode, thus making it easier to resolve the splitting of the out-of-plane mode into Raman and infrared active components. More importantly, measuring in resonance with the optical transition appears to be a necessary condition to observe the full set of vibrational modes. There is a lack of studies on Raman spectra of other FL-2H-TMDs measured under resonance conditions; often the corresponding excitation wavelengths are avoided because in the monolayer case the Raman features are obscured by the strong photoluminescence signal. In FL-WS 2 in particular, the characteristic shape of the out-of-plane mode in resonance Raman spectra can be used as a fingerprint region to unambiguously identify the number of layers. For WS 2 measured under resonance excitation, even in the bulk material, the A 1g out-of-plane mode is accompanied by a small shoulder that is attributed to the silent B 1u mode [12,16,29]. It had been previously attributed to a LA(K)+TA(K) combination mode [35], but the participation of two acoustic phonon modes in this frequency region seems unlikely, especially in light of more recent DFT calculations showing that nowhere in the Brillouin zone do the acoustic phonons reach values above 200 cm −1 [12]. The in-phase and out-of-phase vibrational pattern of the A 1g /B 1u pair in bulk WS 2 finds its counterpart in the variety of in- and out-of-phase vibrations observed in FL-WS 2 . In WS 2 nanomaterials, the B 1u mode gains in Raman intensity and its evolvement can be followed in WS 2 nanomaterials under pressure [29,36], in different layer orientations in thin films [37] and in different diameter WS 2 nanotubes [29,38]. In these cases, a strong resonance behavior is observed as well, much like in FL-WS 2 : the modes not allowed in a first-order Raman process appear most strongly when the excitation energy is in or close to resonance with the optical transitions. In an earlier work on WS 2 nanotubes [29], we found that the curvature-induced strain and the resultant crystal symmetry distortion were responsible for the activation of silent modes. Here, in quasi-2D materials, the situation in the absence of curvature effects is different; strain due to substrate-sample interaction is assumed to play only a very minor role in Raman spectroscopy on supported FL-TMDs [20].
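The "N modes for N layers" counting discussed above can be rationalized with a toy model in the spirit of the linear chain picture used for the rigid-layer modes: each layer carries one intralayer A1-type oscillator, and neighboring layers are weakly coupled, so diagonalizing the resulting tridiagonal dynamical matrix yields N slightly split frequencies. The base frequency and coupling strength below are illustrative choices; the sketch reproduces the number and rough magnitude of the split components, not their ordering or Raman/infrared character, which require the full displacement patterns from DFT.

```python
import numpy as np

def split_frequencies(n_layers, omega0=419.0, coupling=1.2):
    """Frequencies (cm^-1) of the N modes derived from the monolayer A1 mode.

    Each layer is modeled as an oscillator at omega0; nearest-neighbor interlayer
    coupling shifts and splits the modes. Couplings enter the dynamical matrix in
    frequency-squared units for simplicity.
    """
    w2 = omega0**2 * np.eye(n_layers)
    for i in range(n_layers - 1):
        # interlayer spring stiffens both partners and couples them
        w2[i, i] += coupling * omega0
        w2[i + 1, i + 1] += coupling * omega0
        w2[i, i + 1] = w2[i + 1, i] = -coupling * omega0
    return np.sort(np.sqrt(np.linalg.eigvalsh(w2)))

for n in range(1, 6):
    print(n, np.round(split_frequencies(n), 1))
# one frequency for 1L, two for 2L, ..., five for 5L, fanning out with layer number
```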
Instead, a closer look at the nature of the excitonic transition leading to the resonantly enhanced Raman intensity in the spectra of FL-WS 2 can provide means to elucidate the appearance of the infrared-active components of the A 1 /A 1g Raman mode around 420 cm −1 . For 633 nm excitation wavelength, the phonons couple to the A exciton situated at the K point of the Brillouin zone. Owing to the combination of strong intralayer bonding in two dimensions and weak interlayer interaction in the third dimension, the exciton wavefunctions, like many properties of layered TMD systems, are expected to be very anisotropic. A recent work on the orientation of luminescent excitons in FL-MoS 2 , isostructural to FL-WS 2 , reveals them to be confined entirely in-plane without significant expansion in the stacking direction of individual layers [39]. This was further substantiated by density-functional theory (DFT) calculations in the local-density approximation (LDA) that explicitly showed the wavefunction of the A exciton in multilayer MoS 2 to be spread out over a large area in two dimensions but with negligible density in neighbouring layers [40]. For the resonant Raman process discussed here this means that even for a layer number of more than one, the phonon couples to an A exciton localized primarily in one of the layers. If the N layers in FL-WS 2 are to be treated approximately as N individual monolayers for the specific case of the A excitonic resonant Raman process, N allowed Raman modes with similar intensities are to be expected in the region of the monolayer A 1 Raman mode. They are still split in frequency due to the interlayer interaction. Here, all N components are identified in the Raman measurements presented in this article, but the infrared-active components -speaking from the N -layer symmetry point of view -are always weaker in intensity than the Raman active components. Thus we conclude that the Raman selection rules are at least weakened but not completely broken. This is supported by the fact that the infrared-active components gain in intensity relative to the main Raman peak but still appear as shoulders rather than as individual peaks. Far from the A excitonic resonance, few-layer WS 2 cannot be treated as N individual monolayers and the Raman selection rules following from the few-layer symmetry strictly apply. This is also in agreement with recent findings on newly observed Raman modes in FL-MoS 2 in resonance with the C exciton [31].
Conclusion
In summary, we have shown experimentally as well as theoretically that the out-of-plane A 1 mode of the WS 2 monolayer splits up in the few-layer regime into N components for N layers. Despite the fact that only N/2 of them for even numbers of layers and (N+1)/2 for odd numbers of layers are Raman active, the full set of phonon modes is observed when the laser excitation energy is close to the A excitonic transition energy. A possible explanation for this unusual behavior is presented by taking into account the in-plane orientation of the A exciton wavefunction involved in the resonant Raman scattering process. The stiffening of the main out-of-plane phonon mode with N is also followed by all other components successively added with increasing number of layers. By resonant Raman scattering measurements one can conclusively identify the number of layers in a specific sample by simply counting the number of components of the out-of-plane A mode. The detailed analysis of the evolvement of the A 1 mode of monolayer WS 2 presented here should in principle be applicable to (i) all other Raman modes of (ii) all layered materials in the few-layer regime. | 2015-03-31T21:38:33.000Z | 2015-03-31T00:00:00.000 | {
"year": 2015,
"sha1": "f6bd3857d088c30f3265ccd0daa904019571fc0d",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1504.00049",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f6bd3857d088c30f3265ccd0daa904019571fc0d",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
18210428 | pes2o/s2orc | v3-fos-license | Primary hepatoid carcinoma of the biliary tree: a radiologic mimicker of Klatskin-type tumor
Abstract Hepatoid carcinomas are a group of neoplasms with features resembling hepatocellular carcinomas. Although extremely rare, an increasing number of cases arising from various organs have been reported within the last decade. Differentiating these tumors from cholangiocarcinoma when they are located in the biliary tree is not only a radiologic challenge but also critical, because treatment modalities and operative strategies depend on the exact nature of the tumor. We report a unique case in the literature of a 67-year-old Caucasian female who presented with obstructive jaundice due to an obstructing mass seen at the common hepatic duct on imaging, with no preceding history of cirrhosis but with an increased serum α-fetoprotein (AFP) level, in whom a differential diagnosis from cholangiocarcinoma in a non-cirrhotic liver was particularly difficult given the combination of tumor location and solitary nature. Radiologists may include ectopic hepatoid adenocarcinomas in the differential consideration of an obstructing tumor in the biliary tree, especially in patients with increased serum AFP levels.
Introduction
Hepatoid carcinoma was first described as a specific type of primary gastric carcinoma by Ishikura et al. [1] with the most frequent site of this carcinoma being the stomach. Hepatoid carcinoma is a variant of adenocarcinoma associated with hepatic differentiation. It is generally composed of adenocarcinomatous and hepatocellular carcinoma (HCC)-like foci and the latter component has the full spectrum of the morphologic and functional features of HCC. It is rare, with incidence accounting for 0.38% of all gastric cancers and much less in other organs. Hepatoid adenocarcinoma is an aggressive carcinoma carrying a poor prognosis compared with the other common tumors in these organs [1] . Rare cases of hepatoid carcinoma have been described in a variety of organs including the esophagus, lung, gallbladder, ovary, cecum, and adrenal cortex [2,3] . Radiologic mimics of cholangiocarcinoma described previously include a heterogeneous group of entities that includes benign conditions as well as malignant tumors such as HCC, metastases, melanoma, lymphoma, leukemia, and carcinoid tumors. The imaging findings of these benign and malignant entities may be indistinguishable from those of cholangiocarcinoma [4,5] . In most cases, a definitive diagnosis can be established only with histopathologic examination [6] . A small case series of ectopic hepatoid carcinoma has been described previously in non-biliary locations [7] . We described yet another differential possibility, although rare, but which in the proper clinical setting must be included in the differential consideration of cholangiocarcinoma.
In reporting this case, we would like to emphasize the role of imaging as a problem-solving tool for analysis of tumors of the biliary tree, including hepatoid carcinoma of the biliary tree.
Case report
A 67-year-old Caucasian female presented with painless jaundice and a history of weight loss and generalized fatigue. Physical examination was within normal limits. Laboratory analysis revealed an alkaline phosphatase level of 689 IU/L. Alanine aminotransferase and aspartate aminotransferase levels were 209 IU/L and 285 IU/L, respectively. Total bilirubin was 8.2 mg/dL, with a direct bilirubin of 6.0 mg/dL and an indirect bilirubin of 2.2 mg/dL. Alpha-fetoprotein (AFP) levels were increased at 2478.0 mg/L (normal: 0-8 mg/L). The hepatitis panel was negative for hepatitis B and hepatitis C. The patient denied a history of alcohol abuse. Past medical history was significant for diabetes, hypertension, hyperlipidemia, and coronary artery disease. Past surgical history was significant for appendectomy and cholecystectomy.
Initial sonographic imaging showed mild dilatation of the intra- and extrahepatic bile ducts. Color and pulsed Doppler imaging demonstrated hepatopetal flow in the portal vein, and no abnormal hepatic masses were seen on gray-scale imaging. A subsequent computed tomography (CT) examination incidentally showed a large duodenal diverticulum which was thought to be arising from the second portion of the duodenum. A subtle soft tissue density was seen at the junction of the right and left hepatic ducts at the porta hepatis, with mild dilatation of the intrahepatic bile ducts above the obstruction (Fig. 1). No CT evidence of cirrhosis, such as serrated liver borders, an enlarged portal vein, or venous collaterals, was observed. Further evaluation with magnetic resonance (MR) imaging and magnetic resonance cholangiopancreatography (MRCP) was recommended.
Dynamic MR imaging showed no evidence of an intrahepatic mass in early or delayed phases. Dedicated MRCP images, on both the source and the reconstruction images, showed an irregular filling defect in the region of the right hepatic duct, just before the junction with the left hepatic duct to form the common hepatic duct, which did not have the smooth appearance of a stone. The intrahepatic biliary ducts were mildly dilated above the obstructing mass ( Fig. 2A).
Endoscopic retrograde cholangiopancreatography (ERCP) workup was performed and brushings from the region of the obstructing lesion were obtained. Subsequent histopathologic analysis showed hepatoid epithelial malignancy composed of large cells with rounded nuclei, prominent nucleoli, and moderate to large amount of eosinophilic cytoplasm (Fig. 3A). The neoplastic cells formed irregularly shaped nests and cords with occasional canalicular structures containing bile. Mitotic division figures were prominent. The tumor cells were positive for low molecular weight cytokeratin (Cam 5.2), Hepar-1, and demonstrated a canalicular staining pattern (Fig. 3B). CD34 analysis demonstrated a sinusoidal staining pattern surrounding the tumor cells. The tumor was negative for CK7 and CK20. The ectopic hepatoid carcinoma may have arisen from ectopic hepatic tissue in the biliary tree. At this point the patient was thought to have a histologically proven tumor at the bifurcation of the right and left hepatic ducts with increased AFP levels. An internalexternal biliary drainage catheter was placed.
On a percutaneous T-tube cholangiogram performed after one cycle of chemotherapy, there was an irregular polypoid filling defect at the bifurcation of the left and right hepatic ducts, which was thought to be causing partial obstruction of the right hepatic duct and delayed filling, suspicious for interval growth of the tumor infiltrating the duct in a region more typical for a Klatskin-type tumor (Fig. 2B).
The patient underwent four rounds of chemotherapy and refused a fifth and opted for hospice treatment.
Discussion
Hepatoid carcinoma is a primary neoplasm exhibiting features of HCC in terms of morphology, immunohistochemistry, and behavior. Hepatoid carcinoma is confirmed by its histomorphologic similarity to hepatocellular carcinoma (HCC), with markedly increased levels of serum AFP. Since the bile ducts and the liver originate from the same endodermal tissue, biliary epithelium has an ability to differentiate into hepatic cells, resulting in a tendency for AFP production [8] . A prompt and accurate diagnosis of hepatoid carcinoma is important because the prognosis is very poor compared with that of common types of adenocarcinoma.
Ectopic hepatoid carcinoma of the biliary tree can be confused with extrahepatic spread of HCC although the latter have advanced intrahepatic tumor and rarely metastasize to the biliary tree. A few cases of small HCCs measuring less than 3 cm with tumor thrombi in the biliary tree have been described previously in the literature [9] . However, on imaging studies these cases showed a small associated hepatic mass. Our case did not demonstrate any evidence of a hepatic mass on CT or MR imaging studies. Furthermore, the intrabiliary mass seen on CT, T-tube cholangiogram, and MRCP did not demonstrate extrabiliary invasion.
The rare hepatoid carcinoma of the biliary tree may mimic Klatskin-type cholangiocarcinoma with its shared clinical features such as old age, anatomic location, and aggressive behavior. A particular polypoid variant of cholangiocarcinoma is infrequently found in both the intraand extrahepatic ducts. The histologic type of this tumor is mostly papillary adenocarcinoma with intraluminal growth. The tumor is depicted as an intraluminal polypoid mass at both ultrasound and CT and as a polypoid filling defect at cholangiography [10] . Whether ectopic hepatoid carcinoma can resemble other morphologic forms of cholangiocarcinoma described by Lim et al. [11] such as periductal infiltrating and mass-forming cholangiocarcinoma is unclear; however, it can mimic the polypoid form of cholangiocarcinoma in radiologic appearance and tumor location as seen in our case. On CT imaging, periductal infiltrating cholangiocarcinoma may show a concentrically thickened common duct wall with enhancement, whereas our case showed no evidence of biliary duct wall enhancement on CT (Fig. 1).
Intraluminal polypoid tumors of the intra-and/or extrahepatic bile ducts are generally associated with partial obstruction and dilatation of the bile ducts [11] similar to the observations in our case. However, the polypoid form of cholangiocarcinoma may have avid mucin secretion in the biliary tree and normal serum AFP levels in contrast to hepatoid carcinoma, which has increased AFP levels and is a non-mucin secreting tumor as demonstrated in our case on ERCP examination [11] .
The differentiation between these different pathologic entities is important because no effective adjuvant therapy exists for cholangiocarcinoma, and unless clear indications of nonresectability exist, most patients should be considered for surgical exploration. On the other hand, hepatoid carcinoma, although also aggressive in biologic behavior, can be treated with chemotherapy. In conclusion, although the final diagnosis is confirmed with clinicopathologic findings similar to other benign and malignant entities of the biliary tree that can mimic cholangiocarcinoma, radiologists may include ectopic hepatoid carcinomas in the differential consideration of an obstructing tumor in the biliary tree especially in patients with increased AFP levels. Furthermore, imaging can help differentiate the primary ectopic hepatoid carcinoma of the biliary tree from small HCC with bile duct tumor thrombus even before a histopathologic diagnosis is confirmed. | 2018-04-03T02:52:53.524Z | 2010-10-08T00:00:00.000 | {
"year": 2010,
"sha1": "23d5250178ee26213ceb0ff3273c755646cbef85",
"oa_license": null,
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2999407/pdf/ci100027.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "23d5250178ee26213ceb0ff3273c755646cbef85",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
244102892 | pes2o/s2orc | v3-fos-license | PESTO: Switching Point based Dynamic and Relative Positional Encoding for Code-Mixed Languages
NLP applications for code-mixed (CM) or mix-lingual text have gained significant momentum recently, the main reason being the prevalence of language mixing in social media communications in multi-lingual societies like India, Mexico, Europe, parts of the USA, etc. Word embeddings are basic building blocks of any NLP system today, yet word embeddings for CM languages remain unexplored territory. The major bottleneck for CM word embeddings is switching points, where the language switches. These locations lack context, and statistical systems fail to model this phenomenon due to high variance in the seen examples. In this paper we present our initial observations on applying switching point based positional encoding techniques for CM languages, specifically Hinglish (Hindi-English). Results are only marginally better than SOTA, but it is evident that positional encoding could be an effective way to train position-sensitive language models for CM text.
Switching Points: The Bottleneck
Switching Points (SPs) are the positions in CM text where the language switches. Consider the text -aap HI se HI request EN hain HI (request you to). Here, when the language switches from Hindi to English (se HI request EN ) a HI-EN (HIndi-ENglish) SP occurs. Correspondingly, an EN-HI SP occurs at request EN hain HI . In this work we look at sentiment analysis of CM languages, specifically Hinglish, through the lens of language modeling. We propose PESTO -a switching point based dynamic and relative positional encoding. PESTO learns to emphasize switching points in CM text. Our model marginally outperforms the SOTA.
Background -Dataset and Positional Encoding
Data and SOTA: The SentiMix task at SemEval 2020 (Patwa et al. 2020) released 20K Hinglish tweets, which are annotated with word-level languages and sentence-level sentiment, i.e., positive, negative, or neutral. Liu et al. (2020a) achieved the SOTA (75% f1 score) by fine-tuning a pretrained XLM-R using adversarial training. Vaswani et al. (2017) introduced Positional Encoding (PE) for language modeling. PE serves as an added feature along with the word embeddings, providing both relative and absolute positional relations between a target word and its context words.
Absolute Positional Encoding (APE)
Sinusoidal PE: A predefined sinusoidal vector p i ∈ R d is assigned to each position i. This p i is added to the word embedding w i ∈ R d at position i, and w i + p i is used as input to the model. In this way, the Transformer can differentiate words coming from different positions and assign each token position-dependent attention (Vaswani et al. 2017). Sin/cos functions are used alternately to capture odd/even numbered positions in a sequence (equation 1).
Dynamic PE: Instead of using periodic functions like sin/cos, Liu et al. (2020b) proposed to learn a dynamic function at every encoder layer that can represent the positional info. A function θ(i) is introduced which can learn positional info with gradient flow (equation 2).
Relative Positional Encoding (RPE)
Shaw, Uszkoreit, and Vaswani (2018) introduced a learnable parameter a l j−i which learns the positional representation of the relative position j−i at encoder layer l. This helps the model to capture relative word order explicitly (equation 3).
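Since equations 1-3 are referenced above but not reproduced in this text, the standard forms from the cited works are sketched below for reference. The notation (model dimension d, layer index l) follows the description above and the original papers; these are reconstructions, not verbatim copies of the equations in PESTO.

```latex
% Eq. (1), sinusoidal APE (Vaswani et al., 2017): even dimensions use sin, odd use cos
PE_{(i,\,2k)}   = \sin\!\big(i / 10000^{2k/d}\big), \qquad
PE_{(i,\,2k+1)} = \cos\!\big(i / 10000^{2k/d}\big)

% Eq. (2), dynamic PE: a learnable per-layer function replaces the fixed sinusoid
x_i^{l} = w_i + \theta^{l}(i)

% Eq. (3), relative PE (Shaw et al., 2018): a learned embedding of the offset j-i
% enters on the key side of the attention logits at layer l
e_{ij}^{l} = \frac{(x_i W^Q)\,(x_j W^K + a^{l}_{\,j-i})^{\top}}{\sqrt{d}}
```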
Switching Point based Positional Encoding
We introduce a novel, switching point based PE. Consider the Hinglish sequence gaaye HI aur HI dance EN kare HI . SP based indices (SPI) are built in two ways: i) We set the index to 0 whenever an SP occurs; the indexing, which would normally be {0, 1, 2, 3}, becomes {0, 1, 0, 0}. ii) We consider Hindi as our base language and English as the mixed language, and set the index to 0 only when the shift is from the base language (L1) to the mixed language (L2); the resultant index would then be {0, 1, 0, 1}. A minimal sketch of this indexing is given below.
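The following Python sketch illustrates the two SPI schemes on the example above. The function names (spi_reset_all, spi_reset_l1_to_l2) and tag strings are illustrative choices of ours, not identifiers from the PESTO codebase.

```python
def spi_reset_all(lang_tags):
    """Scheme (i): restart the positional index at 0 at every switching point."""
    indices, pos = [], 0
    for t, tag in enumerate(lang_tags):
        if t > 0 and tag != lang_tags[t - 1]:   # any language switch
            pos = 0
        indices.append(pos)
        pos += 1
    return indices

def spi_reset_l1_to_l2(lang_tags, base="HI", mixed="EN"):
    """Scheme (ii): restart the index only on a base-language -> mixed-language switch."""
    indices, pos = [], 0
    for t, tag in enumerate(lang_tags):
        if t > 0 and lang_tags[t - 1] == base and tag == mixed:
            pos = 0
        indices.append(pos)
        pos += 1
    return indices

# "gaaye/HI aur/HI dance/EN kare/HI"
tags = ["HI", "HI", "EN", "HI"]
print(spi_reset_all(tags))        # [0, 1, 0, 0]
print(spi_reset_l1_to_l2(tags))   # [0, 1, 0, 1]
```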
Switching Point based Dynamic PE (SPDPE)
We introduce a function S(l i ), which takes the word-level language labels as input and returns the SPI. Instead of passing the index i directly to θ, we use θ(S(l i )) to dynamically learn the PE based on the SPI (equation 4).
PESTO -Switching Point based Dynamic and Relative PE (SPDRPE)
Here, in addition to the SPDPE, we use a learnable parameter a l j−i , which encodes the relative position j−i at encoder layer l. This encoding approach learns representations dynamically based on SPs along with the embedding a l j−i , so that it can also capture relative word order (equation 5). A sketch combining the two components is given after this paragraph.
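A minimal PyTorch sketch of how the two ingredients could be combined is shown below: a learned embedding table plays the role of θ(S(l_i)) in equation 4, and a second table indexed by the clipped offset j−i plays the role of a^l_{j−i} added to the attention logits in equation 5. The module and argument names, the clipping distance, and the exact way the relative bias enters the scores are our own assumptions for illustration; they are not taken from the released PESTO code.

```python
import torch
import torch.nn as nn

class SwitchingPointAttention(nn.Module):
    """Single-head attention with SP-based dynamic PE and a relative position bias."""
    def __init__(self, d_model, max_pos=64, max_rel=16):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(d_model, d_model) for _ in range(3))
        self.theta = nn.Embedding(max_pos, d_model)        # theta(S(l_i)), eq. 4
        self.rel_bias = nn.Embedding(2 * max_rel + 1, 1)   # a_{j-i}, eq. 5
        self.max_rel = max_rel
        self.scale = d_model ** -0.5

    def forward(self, word_emb, spi):
        # word_emb: (B, T, d); spi: (B, T) switching-point indices from S(l_i)
        x = word_emb + self.theta(spi)                     # SP-dynamic positional encoding
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = torch.matmul(q, k.transpose(-1, -2)) * self.scale   # (B, T, T)
        T = word_emb.size(1)
        offset = torch.arange(T)[None, :] - torch.arange(T)[:, None]  # j - i
        offset = offset.clamp(-self.max_rel, self.max_rel) + self.max_rel
        scores = scores + self.rel_bias(offset).squeeze(-1)           # add a_{j-i}
        attn = scores.softmax(dim=-1)
        return torch.matmul(attn, v)

# toy usage: one sentence of 4 tokens, SPI = [0, 1, 0, 1]
layer = SwitchingPointAttention(d_model=32)
out = layer(torch.randn(1, 4, 32), torch.tensor([[0, 1, 0, 1]]))
print(out.shape)   # torch.Size([1, 4, 32])
```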
Models
Baselines -Word2Vec, Multi Head Attention (MHA): We choose Word2Vec as the baseline since it does not capture position info. We also choose the attention mechanism, which is widely used to capture relational dependencies, to see its effect over SPs. We experiment with two window lengths: i) 3, to capture the local window of dependency, and ii) 12, to see whether it can learn anything from the whole sentence; 12 is the average length of sentences in our corpus. PESTO Overall Architecture: The local dependencies from skipgram Word2Vec (trained from scratch), along with the SPI obtained from SPDRPE, are passed to a 12-headed transformer-based encoder layer. On top of the transformer, a 1D CNN is used to get the sentence-level representation. We also obtain a sentence embedding using a tf-idf weighted average of the Word2Vec embeddings. Finally, we concatenate the representations of the CNN and the tf-idf sentence embedding and pass them to a dense layer which applies softmax to predict the sentiment. The architecture of PESTO is shown in Fig. 1; a condensed sketch of this pipeline follows below. We train the entire model (2 encoder layers) from scratch, without using any pre-trained model.
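The condensed PyTorch sketch below mirrors the pipeline described above (encoder output, 1D CNN pooling, concatenation with a tf-idf sentence embedding, dense softmax head). The dimensions, the use of a standard TransformerEncoderLayer as a stand-in for the SP-aware encoder sketched earlier, and all names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PestoStyleClassifier(nn.Module):
    """Sentence-level sentiment head: encoder -> 1D CNN -> concat tf-idf embedding -> softmax."""
    def __init__(self, d_model=64, n_classes=3, tfidf_dim=64, n_heads=4, n_layers=2):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        self.cnn = nn.Conv1d(d_model, d_model, kernel_size=3, padding=1)
        self.head = nn.Linear(d_model + tfidf_dim, n_classes)

    def forward(self, token_emb, tfidf_sent_emb):
        # token_emb: (B, T, d) word2vec + SP positional encoding; tfidf_sent_emb: (B, tfidf_dim)
        h = self.encoder(token_emb)                      # contextualized tokens
        h = self.cnn(h.transpose(1, 2)).amax(dim=-1)     # (B, d) max-pooled CNN features
        feats = torch.cat([h, tfidf_sent_emb], dim=-1)   # fuse with tf-idf sentence embedding
        return self.head(feats).log_softmax(dim=-1)      # class log-probabilities

model = PestoStyleClassifier()
logits = model(torch.randn(2, 12, 64), torch.randn(2, 64))
print(logits.shape)   # torch.Size([2, 3])
```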
Results
PESTO achieves a 75.56% F1 score and outperforms the SOTA (Tab. 1). The main reason for this is that it learns SPs by aggregating both relative and dynamic PE within a variable-length MHA framework. PESTO is able to assign more weight to the switching point weather EN achaa HI (Fig. 2). The experiments were conducted on Google Colab. The code is available at https://github.com/mohammedmohsinali/PESTO.
Conclusion
In this paper we report initial experiments on the Hinglish sentiment analysis problem through the lens of language modeling. We argued that SPs are the major bottleneck for CM. Our contributions can be summarized as follows: i) we introduce the idea of switching-point based positional encoding; ii) we propose a relative switching point dynamic positional encoding technique named PESTO, which yields better results than the SOTA; iii) it is also noteworthy that PESTO achieves SOTA results without any heavy pre-trained language model, whereas all the SOTA models in the SentiMix task used models like BERT or XLNet.
"year": 2021,
"sha1": "c1793d88db202bd416cab2776224f17e4da7457a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "c1793d88db202bd416cab2776224f17e4da7457a",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
264888661 | pes2o/s2orc | v3-fos-license | Novel Postoperative Hypofractionated Accelerated Radiation Dose-Painting Approach for Soft Tissue Sarcoma
Purpose Hypofractionated radiation therapy (RT) offers benefits in the treatment of soft tissue sarcomas (STS), including exploitation of the lower α/β, patient convenience, and cost. This study evaluates the acute toxicity of a hypofractionated accelerated RT dose-painting (HARD) approach for postoperative treatment of STS. Methods and Materials This is a retrospective review of 53 consecutive patients with STS who underwent resection followed by postoperative RT. Standard postoperative RT dosing for R0/R1/gross disease with sequential boost (50 Gy + 14/16/20 Gy in 32-35 fractions) were replaced with dose-painting, which adapts dose based on risk of disease burden, to 50.4 and 63, 64.4, 70 Gy in 28 fractions, respectively. The first 10 patients were replanned with a sequential boost RT approach and dosimetric indices were compared. Time-to-event outcomes, including local control, regional control, distant control, and overall survival, were estimated with Kaplan-Meier analysis. Results Median follow-up was 25.2 months. Most patients had high-grade (59%) STS of the extremity (63%) who underwent resection with either R1 (40%) or close (36%) margins. Four patients experienced grade 3 acute dermatitis which resolved by the 3-month follow-up visit. The 2-year local control, regional control, distant control, and overall survival were 100%, 92%, 68%, and 86%, respectively. Compared with the sequential boost plan, HARD had a significantly lower field size (total V50 Gy; P = .002), bone V50 (P = .031), and maximum skin dose (P = .008). Overall treatment time was decreased by 4 to 7 fractions, which translated to a decrease in estimated average treatment cost of $3056 (range, $2651-$4335; P < .001). Conclusions In addition to benefits in cost, convenience, and improved biologic effect in STS, HARD regimen offers a safe treatment approach with dosimetric advantages compared with conventional sequential boost, which may translate to improved long-term toxicity.
Introduction
Soft tissue sarcomas (STS) are a diverse group of tumors that arise from the mesenchymal or connective tissue and account for 1% of all adult malignancies in the United States. 1 Because of their rarity and heterogeneity, these tumors represent a significant treatment challenge.Although oncologic excision remains the mainstay of treatment for STS, the addition of radiation therapy (RT) is often recommended to reduce the risk of local failure. 2Intensity modulated radiation therapy (IMRT), rather than conventional external beam radiation therapy, is commonly used in the postoperative setting after demonstrating significant benefits in local control and avoidance of nearby organs-at-risk (OARs). 3Although the total dose and treatment volume depend on clinical factors, conventional RT fractionation of 1.8 to 2 Gy per fraction is most commonly used for STS.
Hypofractionated accelerated RT holds several potential advantages in the treatment of STS. The lower total fractions can improve patient convenience and lower costs, 4 while also limiting population interactions during the ongoing global pandemic. [10][11][12][13][14] The utility of postoperative hypofractionation has been explored with brachytherapy, with local control rates of ≈90% for high-grade STS receiving 30 to 50 Gy over 1 week. 15 Although this dose is commonly prescribed to a 2 × 1 cm expansion of the tumor bed, there are 125% to 200% isodose lines near the source that simultaneously escalate doses at the highest area of recurrence risk (eg, tumor bed). 16 In the postoperative setting, outside of brachytherapy, the utilization of hypofractionation with simultaneous dose escalation is not well characterized.
To harness the benefits of hypofractionation while limiting normal tissue toxicity risk, we created a novel accelerated simultaneous integrated boost (SIB) regimen to replace the standard postoperative RT approach in STS (2 Gy per fraction with a cone down sequential boost), which was inspired by the dosimetric advantages of postoperative brachytherapy.This approach, termed "hypofractionated accelerated radiation dose-painting" (HARD), adjusts the dose delivered per day by the volume's clinical risk of disease burden.The low-risk volume is treated with a 50.4Gy base and a dosepainted volume receiving 63, 64.4, or 70 Gy in 28 fractions for R0, R1, or gross disease, respectively.In this way, the novel HARD technique has the potential to optimize local control (LC) through dose-escalation of the high-risk area while minimizing dose to nearby OARs.The present study evaluates the acute toxicity of this HARD technique, along with the difference in expected long-term toxicity via a dosimetric comparison of the first ten SIB plans to their standard sequential RT boost counterparts.
Methods and Materials
This is a retrospective review of a prospectively maintained database of 53 patients with STS who underwent resection followed by postoperative RT with the HARD approach, from October 2019 to June 2022, with an accelerated plan of 50.4 Gy as the base target dose and the dose-painted volume receiving 63, 64.4, or 70 Gy in 28 fractions for R0, R1, or gross disease after surgery, respectively.Of note, our practice has standardized postoperative treatment planning (e.g.magnetic resonance imaging, MRI) for patients at high local recurrence risk, to account for gross residual/recurrent disease after surgery, before the start of RT.The dose for postoperative radiation is 63 and 64.4 Gy for negative and positive surgical margins, respectively, unless gross disease was identified on treatment planning imaging.In cases where re-resection was not deemed feasible, gross disease was dose escalated to 70 Gy in 28 fractions.Equivalent dose in 2 Gy per fractions (EQD 2 ) was calculated assuming an a/b of 4 to 10. 5,17,18 A computed tomography (CT) simulation was performed with ≤3 mm slices, and immobilization with a vac-lock or aquaplast system was used.Presurgical MRI, when available, was fused to the CT to delineate areas at risk.Gross tumor volume (GTV) was commonly defined by the T1 post contrast, whereas a T2 fat-saturated or STIR image was used to determine extent of initial peritumoral edema.Radio-opaque wires were used to delineate the scar and drain sites.The clinical target volumes (CTV) were defined by risk of microscopic disease, either low (CTV1: 50.4 Gy) or intermediate (CTV2: 63-64.4Gy).CTV1 was defined as 3 to 4 cm expansion along the muscle/subcutaneous tissue with a 1.5 cm radial expansion from the preoperative GTV and tumor bed, respecting anatomic boundaries (eg, bone, fascia, compartments, organs), including surgically manipulated tissue (eg, scars and drains).CTV2 was a 2 cm by 1 to 1.5 cm expansion from GTV/tumor bed.Residual/recurrent GTV was planned to 70 Gy in 28 fractions (2.5 Gy per fraction).Planning target volume (PTV) was a 3 to 5 mm expansion from CTV or GTV, excluding 3 mm from skin surface if skin was not initially involved.Treatment was planned for PTV V100>95% and minimum point dose (0.03 cc) >95% of the prescribed dose, although V95>95% and 90% minimum dose were allowed to meet organ at risk constraints.All patients were planned using intensity modulated radiation therapy (IMRT) with volumetric modulated arc therapy, and daily CT image-guided radiation therapy.Acute toxicity during and after radiation treatment were reported as per CTCAE (version 5).The present study was approved by the institutional review board of the University of South Florida and Moffitt Cancer Center.
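For reference, the standard linear-quadratic conversion behind the EQD 2 values mentioned above is EQD 2 = D (d + α/β)/(2 Gy + α/β), where D is the total dose delivered in fractions of size d. The worked numbers below use the 63 Gy in 28 fractions (2.25 Gy per fraction) arm as an example and are our own arithmetic, not values reported in the study.

```latex
\mathrm{EQD}_2 \;=\; D\,\frac{d + \alpha/\beta}{2\ \mathrm{Gy} + \alpha/\beta}
\qquad\text{e.g.}\qquad
63\ \mathrm{Gy}\times\frac{2.25 + 4}{2 + 4} \approx 65.6\ \mathrm{Gy},
\qquad
63\ \mathrm{Gy}\times\frac{2.25 + 10}{2 + 10} \approx 64.3\ \mathrm{Gy}.
```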
Dosimetric analysis
Using the same treatment volumes, planning system, dose constraints, and prescription goals, a comparison plan was generated for the first 10 patients with a standard sequential approach with 50 Gy as the base target dose with the boost volume receiving an additional 14, 16, or 20 Gy in 32, 33, or 35 fractions for R0, R1, or gross disease, respectively.The sequential counterpart was planned using the same planning structures, beam energy, beam geometry, treatment planning system, and dose calculation algorithm.A fixed number of iterations were then performed (100) using planning parameters (weighting and cGy values) that were a direct ratio to that of the clinical SIB treatment plan.The HARD regimen and the sequential boost plan counterpart for each patient were then compared for differences in dosimetric indices, including the V40, V50, and maximum dose to the joint and bone, V20, V25, and maximum dose to the skin strip, and the field size (volume receiving ≥50 Gy).Treatment plans were optimized and calculated with the treatment planning system used at the time of patient treatment, including collapsed cone dose calculation in Pinnacle (version 14.6; Phillips), Tomotherapy Phillips ACQ SIM, and Monte Carlo dose engine in Raystation v11A (Ray-Search Labratories, Stockholm, Sweden).
Cost analysis
A sample of patients with STS who received postoperative RT were queried, and the technical fees charged were used to estimate a cost per fraction.The average cost per fraction was then used to extrapolate the cost difference for the HARD regimen of 28 fractions compared with the conventional sequential fractionation regimens of 32, 33, or 35 total fractions.
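As a quick sanity check of the fraction-based cost comparison described above, the snippet below reproduces the per-regimen differences from the total technical fees quoted later in the Results; the per-regimen savings are derived directly from those totals, while any cohort-weighted average depends on the case mix and is not recomputed here.

```python
# total technical fees quoted in the Results section (USD)
hard_total = 17515            # 28-fraction HARD regimen
sequential_totals = {32: 20166, 33: 20810, 35: 21850}

for n_fractions, total in sequential_totals.items():
    diff = total - hard_total
    print(f"{n_fractions} fx vs 28 fx HARD: savings ${diff}")
# -> $2651, $3295, $4335; the cohort-averaged savings reported by the study is $3056
```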
Statistics
Descriptive statistics were used to summarize the patient and treatment characteristics of the cohort.Timeto-event outcomes were estimated with Kaplan-Meier analysis from the date of current diagnosis and included LC, regional control, distant control (DC), and overall survival (OS).A local recurrence was defined as a recurrence occurring within high dose PTV (PTV_6300, PTV_6440, PTV_7000), a regional recurrence as outside the high dose PTV but within the 50% isodose line of the low dose PTV (PTV_5040), and distant recurrence as a recurrence beyond the 50% isodose line (eg, lymph node or distant progression, or skip metastases).The Cox proportional hazard model was used for univariate and multivariate analysis to identify significant predictors of DC.Dosimetric variables and estimated treatment costs were compared between the sequential and HARD approaches via the Wilcoxon signed-rank test.The reverse Kaplan-Meier method was used to calculate the median follow-up. 19Statistical analyses were performed using JMP 15 (SAS Institute Inc, Cary, NC).
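The time-to-event estimates and paired dosimetric comparisons described above map onto standard library calls; the sketch below shows the general pattern with the lifelines and scipy packages on placeholder arrays, since the patient-level data are not part of this text.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from scipy.stats import wilcoxon

# Kaplan-Meier estimate of, e.g., local control (placeholder follow-up data)
months = np.array([12.0, 25.2, 30.1, 8.4, 40.0])
event = np.array([0, 0, 1, 0, 0])            # 1 = failure observed, 0 = censored
kmf = KaplanMeierFitter()
kmf.fit(months, event_observed=event, label="local control")
print(kmf.survival_function_.tail(1))

# paired comparison of a dosimetric index between HARD and sequential-boost plans
hard_v50 = np.array([812, 640, 955, 701, 588], dtype=float)        # cc, placeholder values
sequential_v50 = np.array([960, 733, 1100, 820, 659], dtype=float)
stat, p = wilcoxon(hard_v50, sequential_v50)
print(f"Wilcoxon signed-rank p = {p:.3f}")
```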
Results
Four patients (7.5%) experienced grade 3 toxicities, all of which were acute radiation dermatitis which resolved by the 3-month follow-up visit. All 4 patients who experienced a grade 3 toxicity were either current (N = 1) or former (N = 3) smokers, and no patients with a nonsmoking history experienced grade 3 toxicity. No patients experienced grade 4 or 5 toxicity.
Cost analysis
The estimated total technical fees for radiation therapy treatment for the conventional 32, 33, and 35 fraction regimens were $20,166, $20,810, and $21,850, respectively, compared with $17,515 for the HARD regimen.The average difference in cost for the cohort was $3056 (range, $2651-$4335, P < .001).
Discussion
Postoperative hypofractionated RT offers practical benefits for patients over conventional fractionation in cost, 4 convenience, and reduced population interaction during the current global pandemic. 20 Additionally, the low α/β ratio of STS makes the disease particularly suitable for hypofractionated RT, where the higher dose per fraction provides a higher biologically effective dose to these relatively radioresistant tumors. 5,7 Despite these advantages, the role of hypofractionated RT in the postoperative treatment of STS is not well characterized, and it is potentially limited due to the risk of long-term toxicity. 12 Herein, we demonstrate the safety and efficacy of an isotoxic postoperative RT approach, using HARD, for risk-adapted dosing in STS.
In a recent clonogenic survival assay analysis of 14 sarcoma cell lines, Haas et al found that while the median α/β was 4.9 Gy, the radiosensitivity varied considerably by histology, with an α/β of <4 Gy in 6 cell lines. 17 We recently highlighted the heterogeneity in radiosensitivity within STS in a study using the radiosensitivity index (RSI), a 10-gene signature validated to estimate the intrinsic radiosensitivity of tumors. 22 The RSI not only confirmed the relative radioresistance of STS, but it also identified a highly radioresistant subset of STS tumors with a lower estimated α/β of 3.29 Gy that may benefit from dose escalation. The higher dose per fraction has a more pronounced effect on tumors with lower α/β but poses a risk of long-term sequelae (α/β = 3).
(Table abbreviations: KPS = Karnofsky performance status; NOS = not otherwise specified; XRT = external beam radiation therapy.)
In sarcoma, prior studies have described hypofractionated accelerated postoperative RT techniques,24-26 and others have described a dose-painting approach,27 but this study is the first to describe the clinical outcomes for a combined hypofractionated accelerated approach with dose-painting technique ("HARD") for postoperative STS. This technique is a safe targeted BED escalation to the high-risk volumes, with a relatively lower BED to the surrounding normal tissue, thus improving the therapeutic window compared with a standard sequential boost approach.
There is a growing trend toward using preoperative RT, and it is often weighed against the higher risk of wound complications and implications for the patient. In patients with a significant wound healing risk (eg, diabetes, smoking history, superficial/subcutaneous tumors, peripheral vascular disease), or where the difficulty in wound healing recovery outweighs the potential late toxicity benefit with preoperative RT (eg, poor KPS, older age), postoperative RT can be considered. Even in situations best suited for neoadjuvant RT, patients may have a preference for upfront surgery due to tumor-related pain, ulceration, or personal choice. Postoperative RT, in comparison to preoperative RT, requires larger treatment fields with higher RT doses, which are associated with an increased risk of long-term toxicity (eg, joint stiffness, edema, and fibrosis).28,29 Thus, these late effects are more sensitive to a higher dose per fraction.6,21 Although the present study lacks the long-term follow-up to adequately address late toxicity concerns, the HARD regimen was associated with significant dosimetric advantages compared with the standard sequential boost plan (Table 3). In particular, the first 10 patients treated with the HARD approach had a smaller 50 Gy field size compared with their sequential boost counterparts, a parameter that was predictive of subcutaneous fibrosis and joint stiffness in the NCIC SR2 trial.29 In addition, HARD allowed sparing around weight-bearing bones at risk, translating to a significantly lower V50, a known predictor for osteoblast cell death and fracture.30 This is consistent with prior dosimetric studies that showed improved target coverage and reduced OAR doses achieved with postoperative IMRT SIB techniques.31,32 With sarcoma's low α/β, hypofractionation and improved dose conformality (eg, avoiding "cold spots") are the keys to mitigating the risk of recurrence.3 After the standard base treatment, the sequential boost's low dose beyond the tumor bed increases the overall field size (50 Gy isodose line) and the dose to the adjacent targets, whereas the HARD approach allows a steep dose drop-off without the concern of hyperfractionating any of the areas at risk. These dosimetric indices suggest that long-term benefit may be possible when using HARD compared with conventional sequential fractionation.
The HARD regimen was associated with a low risk of acute toxicity, as only 4 patients (7.5%) experienced acute grade 3 radiation dermatitis, which resolved by the 3-month follow-up, and no patients experienced grade 4 or 5 toxicity. In a recent study of 90 patients with STS treated with standard postoperative RT with or without concurrent chemotherapy, Greto et al found higher rates of acute toxicity, with 17% of patients experiencing grade 3 dermatitis.33 In the NCIC SR2 trial28 comparing preoperative RT and postoperative RT in patients with STS, there was a 68% rate of grade 2 or greater acute skin toxicity for those in the postoperative group, which is consistent with our results. Five-year local control rates for STS treated with surgery followed by postoperative RT range from 83% to 100%.22,28,34,35 With a limited follow-up, our results suggest that the HARD regimen may achieve similar local control, with no incidences of local failure and only 4 regional failures (1 within the prior PTV_5040, 3 outside of PTV_5040 but within the prior 50% isodose line). This is despite the high-risk population in the present study, with a majority of large, grade 3, recurrent tumors with close or positive margins (Table 1). In addition, there were 7 patients with gross disease identified on treatment planning imaging before the start of radiation therapy, where re-resection was not feasible; these included 1 patient with a positive margin resection, 2 patients with gross nodal disease, and 4 patients who developed gross disease at the surgical bed after a negative margin resection. Gross disease was dose escalated to 70 Gy without evidence of local recurrence. Longer follow-up is required to confirm the efficacy of the accelerated HARD regimen, although the dose-painting approach has the potential to improve local control with an increased BED while effectively sparing the adjacent organs at risk (Table 3, Fig. 2). Much of our cohort has a high distant recurrence risk (86% stage II-IV), as predicted in previous studies,36,37 which is likely reflected in the 32% 2-year distant recurrence rate observed. Although many patients were at a high distant recurrence risk, 44 of 53 did not receive any chemotherapy because of patient decision or being deemed a poor chemotherapy candidate by a medical oncologist. Of note, the contributing factors that may have precluded a patient from receiving chemotherapy included patient age (≥70 years, n = 19), prior chemotherapy history (n = 2), KPS ≤70 (n = 6), and comorbidities (eg, chronic kidney disease [n = 5], significant coronary artery disease [n = 10]). On multivariate analysis accounting for size, grade, margins, and gross disease after surgery, we found that only poor KPS and advanced clinical stage were associated with poor DC (Table E1).
The accelerated HARD regimen reduces the total number of fractions from 32-35 to 28, and the difference of 4 to 7 fractions has significant implications not only for patient convenience but also for health care cost. This difference of 4 to 7 total fractions translates to an estimated health care cost savings of $2651 to $4335 per patient, in addition to the avoided costs of travel, lodging, childcare, and lost wages that each patient would incur by undergoing an additional week of radiation therapy. Additionally, the HARD regimen offers a potential benefit in reducing exposure to the health care system and potential infection for cancer patients, who are often immunocompromised.
There are several important limitations of the present study, including its nonrandomized, retrospective nature from a single institution. The limited follow-up of the present study restricted the ability to assess long-term local control and toxicity. In addition, our results are limited by the significant heterogeneity of the present cohort, especially within STS histology.
Conclusion
The HARD regimen offers a safe, condensed postoperative RT approach for STS, with similar LC and acute toxicity as historic data. With significant dosimetric benefits, including a smaller field size and lower dose to surrounding structures, HARD may improve long-term toxicity compared with standard postoperative RT with a sequential boost, though longer follow-up is required.
Figure 1
Figure 1 Kaplan-Meier curves depicting (A) overall survival, (B) regional control, and (C) distant control from the date of the current diagnosis.
Figure 2
Figure 2 Postoperative radiation plan with isodose lines for a single patient with lower extremity soft tissue sarcoma, treated with the hypofractionated accelerated radiation dose-painting plan to 50.4 Gy in 28 fractions with a simultaneous integrated boost to 63 Gy (A-B), compared with the conventional plan to 50 Gy with a sequential boost of 14 Gy (C-D). Note the significantly larger field size (denoted with the orange, or 50 Gy, isodose line) and the 50 Gy overlap with bone (black dotted box) in C and D. (E) Dose-volume histogram comparing the hypofractionated accelerated radiation dose plan (solid line) and the conventional plan (dotted line) in the high-risk planning target volume (red), low-risk planning target volume (blue), bone (green), and skin strip (purple).
Table 1
Patient and tumor characteristics
Abbreviations: KPS = Karnofsky performance status; NOS = not otherwise specified; XRT = external beam radiation therapy.
Table 2
Treatment characteristics
Table 3
Dosimetric matched paired analysis of HARD versus sequential boost IMRT Abbreviations: CI = confidence interval; IMRT = intensity modulated radiation therapy; Int = intermediate; PTV = planning target volume; SIB = simultaneous integrated boost. | 2023-11-02T15:16:09.543Z | 2023-10-01T00:00:00.000 | {
"year": 2023,
"sha1": "46e37e785bdae33c13a467b2d9a3cde7f6cc079a",
"oa_license": "CCBY",
"oa_url": "http://www.advancesradonc.org/article/S2452109423002191/pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "f05e028e9e6b80f1efeb731bdd88239137ccc420",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
59620749 | pes2o/s2orc | v3-fos-license | Increasing Imaging Resolution by Non-Regular Sampling and Joint Sparse Deconvolution and Extrapolation
Increasing the resolution of image sensors has been a never-ending struggle for many years. In this paper, we propose a novel image sensor layout which allows for the acquisition of images at a higher resolution and with improved quality. For this, the image sensor makes use of non-regular sampling, which reduces the impact of aliasing. Therewith, it allows for capturing details which would not be possible with state-of-the-art sensors with the same number of pixels. The non-regular sampling is achieved by rotating prototype pixel cells in a non-regular fashion. As not the whole area of the pixel cell is sensitive to light, a non-regular spatial integration of the incident light is obtained. Based on the sensor output data, a high-resolution image can be reconstructed by performing a deconvolution with respect to the integration area and an extrapolation of the information to the insensitive regions of the pixels. To solve this challenging task, we introduce a novel joint sparse deconvolution and extrapolation algorithm. The combination of non-regular sampling and the proposed reconstruction allows for achieving a higher resolution and therewith an improved imaging quality.
I. INTRODUCTION
Looking at the development of camera systems in the past years, an ongoing pursuit for higher resolutions can be discovered. Up to now, this has mainly been achieved by increasing the number of pixels used in the image sensor inside the camera. That is to say, more and more light-sensitive elements are used for obtaining a high-resolution sensor and therewith allowing for high-resolution imaging. However, this practice has the disadvantage that an increase of the number of pixels typically comes along with a higher price and a larger power consumption. Furthermore, a reasonable reduction of the size of the individual pixels is only possible up to a certain point due to photometric limits [1]. Hence, if one wants to avoid increasing the actual sensor dimensions, the number of pixels per area is limited.
One way for increasing the resolution of images is to estimate high-resolution information from the acquired image by post-processing, even if the underlying sensor is not able to theoretically resolve such fine details. All these post-processing operations belong to the group of super-resolution (SR) techniques. Accordingly, SR techniques can be divided into three groups. First, this is the classical multi-image SR [2]. The algorithms from this group exploit that multiple images of the same object or scene are taken consecutively. Based on small displacements caused by movement, a higher resolution can be recovered. However, these techniques are not applicable if only a single image is available. The same holds for the second group of SR techniques, which is multi-camera SR [3]. There, the information from multiple cameras at different positions can be used for increasing the resolution. Multi-camera SR is able to achieve a very high quality but apparently, it can only be applied if more than one camera is available. Unlike the former two groups which rely either on multiple images in time or in spatial direction, the third group can also work on single images. Thus they are called single-image SR [4] algorithms. These algorithms exploit properties like self-similarity of the acquired object or scene [5] or use information from training data sets [6]-[8] for estimating a high-resolution image. In the case that these properties are fulfilled, single-image SR algorithms are able to achieve a very high quality. However, if the underlying assumptions are not met, they fail. All SR concepts have in common that they rely on the quality of the actual output of existing image sensors and on the limitations that come along with it. Thus, we are actually aiming at increasing the resolution of the image sensor directly, instead of applying post-processing SR techniques.
Common image sensors are designed on the two-dimensional repetition of a prototype pixel cell. Hence, these sensors perform a regular two-dimensional sampling and due to this the resolution is limited by aliasing. Thus, for increasing the achievable resolution, the image sensor has to be modified in order to reduce the influence of aliasing. For this, we have proposed a slightly modified custom image sensor in [9] which allows for a higher imaging quality if used in combination with an appropriate reconstruction algorithm. The modification of the sensor consists of a mask which is put on the image sensor and non-regularly shields three quarters of every pixel. Using this, the acquisition is carried out effectively on a grid twice as fine as the underlying sensor. However, this is achieved at the cost that three quarters of the pixels with respect to the fine grid are missing. Nevertheless, using the reconstruction proposed in [9] or the improved Frequency Selective Reconstruction [10], a very high reconstruction quality can be achieved. However, this technique has the drawback that by masking three quarters of every pixel, only one quarter of the sensor area remains sensitive to light, therewith lowering the overall sensitivity.
Even though the non-regular shielding allows for a higher resolution, sacrificing so much of the sensitivity is not acceptable for practical image sensors. Looking at state-of-the-art image sensors, it becomes obvious that one always tries to make most of the area sensitive to light and the objective hence is to bring the fill-factor close to one. This can be achieved for example by applying techniques like backside illumination [11], [12] or the use of microlenses [13]-[17]. However, these techniques again lead to a regular sensing and therewith the resolution is limited by aliasing.
Nevertheless, the influence of aliasing can be reduced by non-regular sampling. As shown in [18] and [19], a non-regular or a random sampling can be used for reducing the visible influence of aliasing. However, this is not limited to visual features, as we have shown in [10]. As discussed there, a higher resolution can be achieved by applying non-regular sampling since the aliasing does not lead to a repetition of the spectral components but rather to a noise-like floor in the spectrum. And this floor can be suppressed by an appropriate reconstruction using a priori signal knowledge.
In this paper, we want to propose a new image sensor layout scheme which allows for a non-regular sampling but at the same time does not decrease the sensitivity as significantly as [9]. The proposed scheme does not require a new technology, but rather the sensors can be designed using existing tool chains and manufacturing processes. The new layout results from non-regularly rotating a prototype pixel cell and therewith obtaining a non-regular orientation of the light-sensitive areas. As the non-regular placement does not directly lead to a higher resolution, we also propose a novel reconstruction algorithm which is called Joint Sparse Deconvolution and Extrapolation (JSDE). By using the combination of a non-regularly sampling sensor and this fitting reconstruction algorithm, images with four times the resolution of the underlying sensor can be reconstructed.
The paper is structured as follows. In the next section, the proposed sensor layout is discussed in detail and it is shown how a non-regular sampling sensor can be derived from a prototype pixel cell. In Section III, the novel JSDE algorithm is outlined and it is shown how this algorithm can recover a high-resolution image from the output of the non-regular sampling sensor. As the algorithm can also be interpreted in the Compressed Sensing (CS) framework [20], [21], the section also contains a brief discussion about the relationship between JSDE and CS algorithms. Afterwards, in Section IV simulation results are provided in order to show the effectiveness of the combination of a non-regularly sampling sensor together with an appropriate reconstruction. This section also provides a comparison to alternative acquisition concepts and reconstruction algorithms. Finally, a conclusion is provided in Section V together with a short outlook to further developments.
II. PROPOSED SENSOR LAYOUT
Looking at image sensors, it can be observed that they are typically designed on the basis of a prototype pixel cell, as shown exemplarily in Figure 1, that is repeated many times in horizontal and vertical direction. Accordingly, the light-sensitive pixels are placed on a regular two-dimensional grid. In this context, it always has to be kept in mind that the individual pixels are not sensitive to light over their whole area but also contain insensitive regions resulting from the circuitry within each pixel. Nevertheless, by using techniques like backside illumination [11], [12] and / or microlenses [13]-[17] a fill-factor of nearly 100% can be achieved. Using this, one gets an array of rectangular light-sensitive areas and the acquisition process can be regarded as an integration of the incident light on identical areas located on the two-dimensional regular grid.
This acquisition process limits the resolution of imaging sensors in two ways. First, the integration over the light-sensitive areas of every pixel inherently produces a low-pass characteristic, therewith attenuating high spatial frequencies and image details. Second, and more severely, the resolution is limited by aliasing from the sampling process. The regular placement of the pixels leads to classical aliasing. Due to this, high spatial frequencies get mapped to low frequencies, distorting the image quality strongly. In order to avoid this, either a special anti-aliasing filter has to be placed in front of the image sensor or the used lens has to suppress high frequencies in such a way that the impact of aliasing is small.
In order to resolve this and to be able to achieve a higher resolution with image sensors, we propose a novel sensor layout. This new layout in combination with the proposed joint sparse deconvolution and extrapolation reconstruction algorithm explained in the next section yields a higher image quality. The basic idea of the novel layout is to use a non-regular placement of the light-sensitive areas. As we have shown in [10], non-regular sampling has the advantage that the aliasing does not lead to a repetition of the spectrum, or respectively, to a mapping of high frequencies to low ones. Instead, non-regular sampling produces a noise-like floor in the frequency domain. Using a sparsity-based reconstruction algorithm, this noise-like floor can be suppressed by exploiting the sparsity property of image signals [22].
In [9], we have already proposed a concept for modifying a regular image sensor to perform a non-regular sampling. For this, a mask is overlaid on the sensor, non-regularly shielding three quarters of every pixel. Using a fitting reconstruction algorithm, we have been able to show that a higher image quality can be achieved than would be possible with the underlying regular sensor. However, this technique has the disadvantage that by shielding three quarters of every pixel, sensitivity is lost and in some cases the reconstruction can yield small artifacts.
Fig. 2. Small area of an image sensor with non-regularly rotated prototype pixel cells. The light-sensitive photo diodes are shown in light-gray while the insensitive circuitry is shown in dark-gray.
In order to cope with both problems, we propose a novel sensor layout which is able to capture more of the incident light and at the same time allows for a higher quality in the reconstruction step. Looking at a typical pixel cell as shown in Figure 1, it can be observed that the light-sensitive area forms an L-shaped region roughly covering three quarters of the pixel, while the insensitive part which contains the transistors covers only one quarter. This structure of a pixel cell can be used directly for performing a non-regular sampling. While it is common up to now to just repeat this pixel cell over the whole sensor, we propose to rotate the pixel cells non-regularly, as shown for a small area of the whole sensor in Figure 2. Using this, one obtains an integration of the incident light over non-regularly rotated regions.
Integrating the light over these non-regularly rotated L-shaped areas has the big advantage that it is possible to recover an image with a higher resolution by using an appropriate reconstruction algorithm. For this, two tasks have to be solved: first, the integration over the spatially varying pixel areas has to be undone. Second, at the positions of the light-insensitive pixel areas, no information is available. Thus, the signal has to be extrapolated into these unavailable regions. The sparsity-based JSDE algorithm which we propose in the next section is able to solve both problems at the same time and allows for a high reconstruction quality.
Even though the proposed sensor layout seems to fit best with pixel cells that contain an L-shaped light-sensitive area, it can also be applied to pixel cells which have a different layout or for pixel cells with micro-lenses or backside-illumination. In this case, a mask would have to be imprinted by using micro-lithography. The mask has to make one quarter of every pixel insensitive to light in a non-regular fashion, therewith achieving a layout as shown in Figure 2.
Both possibilities, that is to say, either rotating the pixel cells or using masks, are easy ways to achieve a non-regular sampling and could be included in state-of-the-art image sensor design and manufacturing processes. It is only necessary to define four prototype pixel cells with corresponding connections and place them on the image sensor plane. The only disadvantage of the proposed sensor layout is that the fill-factor of the pixel cells is limited to 75%. However, as a significantly higher image quality can be achieved by the subsequent reconstruction algorithm, this is a reasonable price to pay. And for sensors which already do not make use of all the incident light, non-regularly placing the sensitive areas allows for a higher imaging quality, for free.
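To illustrate how such a layout could be specified in practice, the sketch below (not from the paper) assigns one of the four prototype-cell orientations to every pixel from a seeded pseudo-random sequence, so that the position of the insensitive quadrant is known to the later reconstruction; the seeded scheme is purely an assumption for demonstration.

```python
# Hypothetical generation of the non-regular layout: for every sensor pixel, one of the four
# rotated prototype cells (i.e., the position of the insensitive quadrant) is drawn from a
# reproducible pseudo-random sequence, so that the layout is known to the reconstruction.
import numpy as np

def insensitive_quadrant_map(rows, cols, seed=42):
    rng = np.random.default_rng(seed)
    return rng.integers(0, 4, size=(rows, cols))   # 0..3: index of the insensitive quadrant

layout = insensitive_quadrant_map(600, 600)        # e.g., for a 600 x 600 pixel sensor
```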
As mentioned above, for making use of the benefits from a non-regular sampling, an appropriate reconstruction algorithm is required. In the next section we outline the novel JSDE algorithm in detail. This algorithm is able to solve both challenges outlined above and therewith allows for the reconstruction of high-resolution images.
III. RECONSTRUCTION BY JOINT SPARSE DECONVOLUTION AND EXTRAPOLATION
As discussed above, the non-regular sampling can only lead to an improved imaging quality if it is combined with a fitting reconstruction algorithm. Therefore, we propose the sparsity-based Joint Sparse Deconvolution and Extrapolation (JSDE) in the following. For this, we will first have a look at the mathematical description of the acquisition process and the relationship between the sensor signal and the high-resolution image signal to be reconstructed. Afterwards, the actual reconstruction by JSDE is outlined.
A. Mathematical Description of the Acquisition Process
In the left half of Figure 3, a part consisting of 5 × 3 pixels of the non-regularly sampling image sensor is shown. In general, the sensor is of size X̃ × Ỹ and the output of the sensor is depicted by the signal s̃[x̃, ỹ] with spatial coordinates x̃ and ỹ. In this context, the tilde denotes the low-resolution signal. The objective of JSDE is to reconstruct the image on a high-resolution grid with twice the spatial resolution in vertical and horizontal direction. That is to say, every pixel of the signal s̃[x̃, ỹ] should be expanded into four new pixels. The target signal is depicted by s[x, y] and is of size X × Y with X = 2X̃ and Y = 2Ỹ. The right side of Figure 3 shows a small part consisting of 10 × 6 pixels of the target signal to be reconstructed.
The relationship between the sensor signal s̃[x̃, ỹ] and the target signal s[x, y] to be reconstructed can be described by an aggregation of the high-resolution pixels over the light-sensitive quadrants of every sensor pixel, as formalized block-wise below. Apparently, this is an underdetermined problem which cannot be solved directly. However, the problem can be solved by exploiting certain signal properties. For this, the proposed JSDE algorithm uses the sparsity property of image signals as shown in the following.
B. Solution of the Underdetermined Problem by JSDE
For reconstructing the signal on the high-resolution grid, JSDE first performs a division of the signal s[x, y] into blocks. The signal in the block is depicted by f[m, n] with spatial coordinates m and n. A block located at position (x_o, y_o) in the image signal s[x, y] can be accessed by the relationship f[m, n] = s[x_o + m, y_o + n]. The block of size B together with a neighborhood of width W pixels forms the reconstruction area L. Except for cases where the considered block is located at the boundary of the image, area L is square and in general it is of size M × N pixels. All these coordinates and sizes are defined with respect to the fine grid. The corresponding block of the sensor signal s̃[x̃, ỹ] is depicted by f̃[m̃, ñ] and is half the size in vertical and horizontal direction compared to the block f[m, n] from the high-resolution signal.
In order to describe the relationship between the non-regularly sampled signal and the target high-resolution signal, the signals are vectorized. By scanning the sensor signal f̃[m̃, ñ] in column order, we obtain the vector f̃. Accordingly, all the pixels of f[m, n] are aligned in the vector f. However, it has to be noted that the mapping between the two-dimensional signal f[m, n] and the vector f is not column-wise; rather, a column-wise scan is first performed inside each of the related large pixels before the scan proceeds to the next pixel. Figure 4 illustrates this scan order.
Fig. 5. Relationship between sensor signal f̃ and desired high-resolution signal f.
As every pixel of the desired high-resolution image relates to one quadrant of one pixel of the low-resolution sensor signal, in each case three of the four high-resolution pixels contribute to one pixel of the sensor signal.
The relationship between f̃ and f is shown in Figure 5 and can also be described by the block diagonal aggregation matrix A, leading to f̃ = Af. (3) Regarding the examples shown in Figures 4 and 5, the corresponding aggregation matrix A is given in (4). Of course, for the aggregation of the whole signal f to f̃, matrix A is larger and it always is of size MN/4 × (MN). Apparently, it is not possible to recover f from f̃ directly, since A cannot be inverted. As stated above, the inversion is an underdetermined problem consisting of two sub-problems to be solved. First, a spatially varying deconvolution has to be performed which inverts the integration of the three light-sensitive quadrants into the output of the pixel of the image sensor. And second, the signal amplitude at the positions where the sensor is insensitive to light has to be estimated, which can be regarded as an extrapolation problem. For solving both tasks and for recovering the signal on the high-resolution grid, we propose the novel JSDE which exploits the fact that image signals can be sparsely represented in the frequency domain [22].
For the reconstruction, JSDE aims at generating the parametric model g[m, n] as a weighted superposition of basis functions ϕ_k[m, n] with corresponding expansion coefficients. By using the same scan order as shown in Figure 4, the model can also be vectorized to g and the basis functions to ϕ_k. For generating the model, JSDE uses an iterative approach that is related to the Frequency Selective Reconstruction (FSR) [10] and its precursor Frequency Selective Extrapolation [23]. However, it has to be noted that FSR is only able to perform an extrapolation, while JSDE can additionally perform a spatially varying deconvolution and therewith can be regarded as a generalization of FSR. In order to generate the model, the distribution matrix D is required that controls which pixels of signal f belong to which pixels of the low-resolution signal f̃. Distribution matrix D is closely related to aggregation matrix A and also is a block diagonal matrix. However, D carries blocks of four ones and can be used for directly expanding f̃ to the high-resolution grid and therewith can be regarded as the dual operation to aggregation matrix A. For the example shown in Figures 4 and 5, the distribution matrix D follows accordingly, with one block of four ones per sensor pixel. Using the aggregation matrix A and the distribution matrix D, the residual r = D(f̃ − Ag) between the available signal f̃ and the model g can be determined with respect to the high-resolution grid.
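To make the roles of A and D concrete, the following sketch builds both matrices for one small vectorized reconstruction area under the scan order of Figure 4. Since the example matrices are not reproduced here, averaging the three sensitive quadrants is an assumption that matches the simulation setup described later, and the randomly drawn insensitive quadrants are placeholders.

```python
# Illustrative construction of the aggregation matrix A and the distribution matrix D for a
# reconstruction area of M x N fine-grid pixels (the four quadrants of each large pixel are
# consecutive in the vectorization, as in Fig. 4).
import numpy as np

M, N = 4, 4                           # fine-grid size of the area (must be even)
n_large = (M * N) // 4                # number of large (sensor) pixels in the area
rng = np.random.default_rng(0)
mask = rng.integers(0, 4, n_large)    # per large pixel: which quadrant is insensitive (placeholder)

A = np.zeros((n_large, M * N))        # low-res <- high-res: averages the 3 sensitive quadrants
D = np.zeros((M * N, n_large))        # high-res <- low-res: expands each value to 4 positions
for p in range(n_large):
    quads = np.arange(4 * p, 4 * p + 4)
    sensitive = quads[quads != 4 * p + mask[p]]
    A[p, sensitive] = 1.0 / 3.0
    D[quads, p] = 1.0                 # block of four ones, the dual expansion operation

f_high = rng.random(M * N)            # a vectorized fine-grid block (placeholder content)
f_low = A @ f_high                    # simulated sensor output of the block
```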
During the model generation process, a spatial weighting function w[m, n] (9), similar to [24], is used; it decays isotropically with the distance from the center of the block to be reconstructed for (m, n) ∈ A and is zero for (m, n) ∈ B. Here, set A subsumes all the pixels relating to light-sensitive quadrants, whereas set B contains the insensitive quadrants. This weighting function is required for two reasons. First, the pixels from the fine grid which relate to the light-insensitive areas can be excluded from the model generation. Since these pixels do not contribute to the measured amplitudes in the underlying low-resolution pixels, they contain no information and have to be excluded from the model generation. Second, the isotropic weighting assigns a lower weight, and therewith less influence on the model generation, to pixels lying farther away from the block to be reconstructed than to pixels located closer to the considered block.
For generating the model of the signal, the weighted residual energy E_w is considered. E_w describes how well the generated model on the fine grid fits the available signal on the coarse grid, subject to the weighting function w[m, n]. By scanning w[m, n] in the same way as f[m, n] shown in Figure 4, we obtain the vector w and the diagonal matrix W = diag(w). Using this, the weighted residual energy can be calculated by E_w = r^H W r with r = D(f̃ − Ag). In this context, (·)^H denotes the conjugate transpose, or respectively, the Hermitian of a matrix or vector.
As mentioned above, the model g is generated iteratively and the currently considered iteration is depicted by ν. Initially, the model is set to zero, g^(0) = 0, and the residual is set to the expanded low-resolution signal r^(0) = Df̃. Then, in every iteration, one basis function to be added to the model is selected and the corresponding expansion coefficient is estimated. In order to determine the basis function to be added to the model in the current iteration, a weighted projection of the residual onto the basis functions is carried out. For calculating the weighted projection, the weighted residual energy E_{w,k}^(ν) is regarded which would result if basis function ϕ_k was selected in iteration ν. Based on this, the projection coefficients p_k^(ν) as output of the weighted projection can be calculated for all indices k.
The projection coefficient p_k^(ν) is obtained by minimizing the weighted residual energy E_{w,k}^(ν). For this, the Wirtinger calculus is used and the partial derivatives are set to zero. Here, (·)* denotes the complex conjugate. Setting the partial derivative of E_{w,k}^(ν) with respect to the conjugate projection coefficient to zero, as in (13), yields the projection coefficient. For determining which basis function to actually add in iteration ν, the one is selected that would be able to reduce the weighted residual energy the most, subject to the frequency prior q_k. The frequency prior is used for favoring the selection of low-frequency basis functions over high-frequency basis functions in the case of ambiguities. The influence of the frequency prior on the model generation is explained in detail in [10], [25]. While in [10] a prior inspired by the optical transfer function (OTF) of imaging systems is used, an adaptive frequency prior is proposed in [25]. For JSDE, the simple OTF-inspired prior is sufficient; q_k is defined in terms of two substitute variables k_1 and k_2 which allow for a compact representation. A two-dimensional plot of the frequency prior q_k with respect to the substitute variables k_1 and k_2 is provided in Figure 6.
Using this, the basis function to select is the one that minimizes E_{w,k}^(ν) subject to the frequency prior, and the index u^(ν) of the selected basis function follows accordingly. The steps for obtaining this result can be found in the Appendix.
After having selected the basis function to be added in the current iteration, the update of the expansion coefficient has to be determined. For this, the orthogonality deficiency compensation proposed in [26], [27] is used. This procedure reduces the interference between the different basis functions and therewith yields a stable estimation. For this, only a portion of the projection coefficient is added to the expansion coefficient. Even though directly using the projection coefficient for the update would reduce the residual energy the most, reducing its influence by orthogonality deficiency compensation yields a better modeling. As shown in [27], a constant factor γ can be used as a good approximation of the elaborate compensation of the orthogonality deficiency proposed in [26]. Using this, the model and the residual are updated with γ times the projection coefficient of the selected basis function. The steps of selecting the basis function to be added, estimating the coefficient, and updating the model and residual are repeated for a pre-defined number of iterations I. After the model generation has finished, the samples corresponding to the currently considered block are extracted from the model and placed in the high-resolution image. Finally, the reconstruction proceeds to the next block. In order to provide a compact overview of the model generation of JSDE, Algorithm 1 shows a pseudo code of the modeling.
Algorithm 1. Pseudo code for the model generation of the Joint Sparse Deconvolution and Extrapolation algorithm. Signals are vectorized according to the scan order shown in Figure 4. Input: block f̃ of the low-resolution signal, sizes M and N, aggregation matrix A, distribution matrix D, and weighting matrix W according to the sensor layout, basis functions ϕ_k, number of iterations I, and compensation factor γ. The iterative model generation loops over ν = 1, ..., I, selecting one basis function per iteration and updating the model and residual. Output: model g = g^(I) with respect to the fine grid.
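The following Python sketch mirrors the structure of Algorithm 1 for a single block. It is a simplified illustration rather than the authors' implementation: the exact projection normalization, the concrete frequency prior, and the basis set are simplifying assumptions here.

```python
# Simplified sketch of the JSDE model generation for one block: greedy selection of basis
# functions with spatial weighting w, a low-pass frequency prior q, and a compensation
# factor gamma for the orthogonality deficiency. Not the authors' reference implementation.
import numpy as np

def jsde_block(f_low, A, D, w, basis, q, n_iter=100, gamma=0.5):
    """f_low: vectorized sensor block; A/D: aggregation/distribution matrices;
    w: spatial weights on the fine grid (0 for insensitive quadrants);
    basis: columns are candidate basis functions on the fine grid; q: frequency prior."""
    W = np.diag(w)
    g = np.zeros(basis.shape[0], dtype=complex)       # model on the fine grid, g(0) = 0
    r = (D @ f_low).astype(complex)                   # initial residual r(0) = D f~
    for _ in range(n_iter):
        num = basis.conj().T @ (W @ r)                # weighted projections of the residual
        den = np.einsum('ij,ij->j', basis.conj(), W @ basis).real
        p = num / den                                 # projection coefficients p_k
        gain = q * (np.abs(num) ** 2 / den)           # achievable energy reduction, weighted by prior
        u = int(np.argmax(gain))                      # basis function selected in this iteration
        g += gamma * p[u] * basis[:, u]               # partial (compensated) model update
        r -= gamma * p[u] * (D @ (A @ basis[:, u]))   # residual update w.r.t. the fine grid
    return g.real                                     # model g(I); the block samples are cut out of it
```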
C. Relationship to Compressed Sensing
Looking at the model generation process of JSDE outlined above, it can be observed that there are several similarities to reconstruction algorithms used within the Compressed Sensing (CS) framework [20], [21]. Indeed, JSDE can also be seen in this framework as it generates a sparse model of the signal that is measured. In this context, the aggregation matrix A could be interpreted as the sensing matrix and the pixel amplitudes of the low-resolution sensor can be regarded as the output of linear measurements of the high-resolution signal.
Fig. 7. Considered sensor layouts: Large pixels with ≈ 100% fill-factor, non-regular quarter sampling [9] with ≈ 25% fill-factor, regular three-quarter sampling with ≈ 75% fill-factor, and the proposed non-regular three-quarter sampling with ≈ 75% fill-factor.
In [10], the relationship between the FSR algorithm and CS is discussed in detail. As FSR can be regarded as a precursor of the proposed JSDE, this relationship also holds for JSDE. Accordingly, JSDE belongs to the group of greedy algorithms whose most prominent representative is Matching Pursuits (MP) [28]. However, by incorporating the spatial weighting function and the frequency prior, JSDE is also related to CS algorithms that make use of prior knowledge, as for example the ones proposed in [29], [30]. Furthermore, the use of a block-wise processing makes JSDE also related to block-wise CS algorithms [31], especially the ones that use overlapping blocks [32].
In the next section, simulation results are provided for showing the effectiveness of JSDE and its ability to recover a high-resolution image from a non-regular sampling sensor. We also show that the combination of non-regular sampling and a fitting reconstruction yields a superior quality and is able to outperform other sampling and reconstruction concepts.
IV. SIMULATION RESULTS
A. Simulation Setup
The purpose of this section is to show how well the proposed non-regular sampling in combination with the reconstruction by JSDE can be used for achieving a higher imaging quality. In order to obtain a meaningful evaluation, this section also provides simulation results for different sensor layouts in combination with various reconstruction algorithms. As layouts for the sensor, four different schemes are considered, as shown in Figure 7. First, these are large pixels with ≈ 100% fill-factor, which could be achieved for example if backside illumination or microlenses are used. Second, the non-regular quarter sampling [9] with ≈ 25% fill-factor is considered. Furthermore, two different cases are examined where a three-quarter sampling with ≈ 75% fill-factor is considered. This relates to the use of a prototype pixel cell as shown in Figure 1 without microlenses. This prototype pixel cell can either be identically repeated or, as proposed, non-regularly rotated in order to place the insensitive area in different quadrants.
In order to simulate the behavior of the different sensing technologies, high-resolution images are taken and every time 2 × 2 pixels of this image are combined. Depending on the considered sensor layout, the four pixels are either just averaged as for the large pixel case, or one of the four pixels is selected for the non-regular quarter sampling [9] case. For the latter two cases, three of the four samples are averaged while the fourth is discarded. For the regular sampling case, always the same three pixels of a 2 × 2 group are averaged while for the non-regular sampling case, the considered pixels over which the averaging is carried out change.
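A minimal sketch of this measurement simulation is given below; the choice of which quadrant is always discarded in the regular case and the seeding of the non-regular patterns are assumptions for illustration.

```python
# Sketch of the sensor simulation described above: each 2x2 group of high-resolution pixels
# is reduced to one sensor value according to the four considered layouts.
import numpy as np

def simulate_sensor(img, layout, seed=0):
    """img: 2-D array with even dimensions; layout in {'large', 'quarter', 'reg34', 'nonreg34'}."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    blocks = img.reshape(h // 2, 2, w // 2, 2).transpose(0, 2, 1, 3).reshape(h // 2, w // 2, 4)
    if layout == 'large':                   # ~100% fill-factor: average all four quadrants
        return blocks.mean(axis=2)
    if layout == 'quarter':                 # non-regular quarter sampling: keep one random quadrant
        pick = rng.integers(0, 4, blocks.shape[:2])
        return np.take_along_axis(blocks, pick[..., None], axis=2)[..., 0]
    if layout == 'reg34':                   # regular 3/4 sampling: always drop the same quadrant
        drop = np.full(blocks.shape[:2], 3)     # dropping the last quadrant is an arbitrary choice
    else:                                   # proposed non-regular 3/4 sampling: drop a random quadrant
        drop = rng.integers(0, 4, blocks.shape[:2])
    total = blocks.sum(axis=2) - np.take_along_axis(blocks, drop[..., None], axis=2)[..., 0]
    return total / 3.0                      # average over the three remaining quadrants
```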
In order to reconstruct the high-resolution images from the sensor output, different algorithms are evaluated. Of course, which algorithm to actually use depends on the regarded sensor layout. For the large pixel case, where the light-sensitive areas are square and extend over the whole pixels, the easiest way to obtain an image on the high-resolution grid would be to apply a pixel enlargement (PE). That is to say, the amplitude of the measured pixel is assigned to all four underlying pixels on the high-resolution grid. This can also be regarded as a nearest neighbor interpolation. Alternatively, we also test a bicubic upsampling (BIC) by a factor of two in both directions. As the objective is to reconstruct the image on a grid of a higher resolution and this task can also be solved by SR algorithms, we also include three single-image SR algorithms in the evaluation. These are the sparse dictionary-based algorithms from Yang (SR-Yang) [6] and Zeyde (SR-Zeyde) [8] on the one hand, and the regression-based algorithm from Kim (SR-Kim) [7] on the other hand.
For the case that the sensor layout using the non-regular quarter sampling [9] is used, an extrapolation is required for reconstructing the missing pixels with respect to the high-resolution grid. For this, FSR [10] is considered as it can be seen as an ancestor of JSDE. Furthermore, Steering Kernel Regression (KR) [33] and the Constrained Split Augmented Lagrangian Shrinkage Algorithm (CLS) [34] are applied for solving this task.
For both three-quarter sampling cases, the same reconstruction algorithms can be used. First, this is again a pixel enlargement (PE) where the output of averaging over the three quadrants just is assigned to all four quadrants. The second method is to apply a modeling by Matching Pursuits (MP) [28] which performs a greedy sparse modeling. The block size and the basis functions for MP are selected as for JSDE. However, unlike JSDE, MP does not include a spatial weighting function, a frequency prior or the orthogonality deficiency compensation. Since MP already is a rather old sparse modeling algorithm, we also include the Generalized Approximate Message Passing (GAMP) [35] as an algorithm for generating a sparse model, given the acquired signal. Finally, the proposed JSDE is applied and evaluated. Since both sensor layouts require a joint deconvolution and extrapolation for reconstructing the image on the high-resolution grid, the novel JSDE can be applied in both scenarios.
The following subsection is devoted to the realization of the JSDE algorithm and especially to the question which parameters to select for the model generation. Afterwards, a comparison of the image quality in terms of PSNR and SSIM [36] follows. This kind of comparison is also used for evaluating the sensitivity of the different sensor layouts. Subsequent to this, a comparison in terms of resolution is provided which shows the superiority of the non-regular sampling concept in contrast to regular sampling. After this, a short subsection follows which discusses the runtime and therewith the computational complexity of the algorithms, before visual results are provided in the last subsection.
B. Selection of Reconstruction Parameters
Regarding the JSDE algorithm as it is proposed in Section III, it can be observed that the algorithm requires several parameters which have to be determined. As mentioned in the preceding section, the model generation of JSDE is related to FSR, even though the latter is only able to perform an extrapolation and no spatially varying deconvolution. Nevertheless, the parameters given in [10] can be used as a good starting point for determining the parameters for JSDE. As the available image information is uniformly distributed over the whole image area, a data-dependent processing order of the blocks as proposed in [10], [37] is not required and JSDE can operate in a fixed line-scan order of the image blocks.
As a fitting of the parameters to the underlying data set shall be avoided, the determination of the parameters has to be carried out on a data set which is independent of the actual test data set used later. Hence, we have used the Kodak test data base [38] for determining the parameters, while the TECNICK image data base [39] has been used for the subsequent evaluation given in the next subsection.
For determining the actual parameter set, a large number of different parameter combinations is evaluated using images from the Kodak test data base. As the proposed sensor design currently only considers the acquisition of the luminance, only the luminance component of the images is considered in the simulations as well. Apparently, a full search of the parameter space is not feasible. Hence, we have used the parameters from [10] as a starting point and varied them. As metric for the evaluation, the PSNR between the original high-resolution image and the reconstructed image is used.
The simulations on the Kodak image data base reveal that a block size of B = 4 samples and a border width of W = 14 samples yield a high reconstruction quality. For the actual model generation, I = 100 iterations should be carried out and the weighting function should decay with ρ = 0.7. The orthogonality deficiency compensation should be performed with γ = 0.5. In order to provide a compact overview, Table I lists all selected parameters.
Fortunately, none of the parameters is very critical and a variation around the selected values is possible without heavily affecting the reconstruction quality. In order to prove this, Figure 8 shows three plots of the average reconstruction quality where either the number of iterations I, the compensation factor γ, or the decay factor ρ is varied while all other parameters are selected according to Table I.
C. Evaluation of the Image Quality in Terms of PSNR and SSIM
Using the above-determined parameters, simulations have been carried out on an independent test data set. For this, the TECNICK image data base [39] has been used. The high-resolution images have a size of 1200 × 1200 pixels. By combining 2 × 2 pixels at a time according to the four considered sensor layouts shown in Figure 7, all the low-resolution images consist of 600 × 600 pixels.
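A sketch of how such an objective evaluation could be scripted with scikit-image is given below; the data range of 1.0 assumes images normalized to [0, 1].

```python
# PSNR and SSIM between the original high-resolution image and a reconstruction.
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(original, reconstructed):
    psnr = peak_signal_noise_ratio(original, reconstructed, data_range=1.0)
    ssim = structural_similarity(original, reconstructed, data_range=1.0)
    return psnr, ssim
```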
In Table II, the average image quality in terms of PSNR and SSIM is listed for the TECNICK image data base for the above mentioned sensor layouts and the considered reconstruction strategies. It can be seen that the proposed non-regular three-quarter sampling in combination with the novel JSDE reconstruction yields a very high objective image quality, outperforming all other combinations except for the SR algorithms.
Especially compared to the case of large pixels in combination with bicubic interpolation, a gain of more than 0.5 dB can be achieved. Comparing the proposed non-regular three-quarter sampling to the non-regular quarter sampling [9], it can be observed that by applying JSDE for the reconstruction, a gain of more than 0.7 dB is possible, and at the same time three times the light is collected compared to the quarter sampling case.
Regarding the reconstruction output of JSDE for the case that the three-quarter sampling is carried out in a regular or in a non-regular fashion, it can be observed that the non-regular rotation of the pixel cell yields a gain of more than 0.4 dB. This is due to the fact that the non-regular placement of the light-sensitive regions does not produce the typical aliasing components and allows for the reconstruction of even very fine details. Looking at the results for MP and GAMP, it can be seen that the latter two produce a significantly lower quality than JSDE and, especially for the regular case, completely fail. This can be explained by the fact that MP and GAMP have no frequency prior included. Thus, for the regular sampling case these algorithms are not able to distinguish between the selection of low-frequency and high-frequency basis functions and the actual selection process is determined by numerical inaccuracies. Hence, MP and GAMP produce a lot of artifacts in this special case, resulting in a very low quality. In this context, it always has to be kept in mind that a better approximation of the available signal does not necessarily come along with a higher reconstruction quality. Accordingly, as MP and GAMP do not employ the spatial weighting function, the frequency prior, or the orthogonality deficiency compensation, they approximate the available signal better than JSDE. However, they are not able to achieve a higher reconstruction quality. Another interesting aspect which can also be discovered is that PE already allows for a decent reconstruction quality for the three-quarter sampling cases. Thus, it would also be possible to apply an elaborate reconstruction such as JSDE offline, while just PE is used for a preview.
Even though the non-regular three-quarter sampling achieves a high reconstruction quality, it is still outperformed in PSNR and SSIM if the SR algorithms are applied on the large pixel case. As outlined above, single-image SR algorithms are able to achieve a high quality by exploiting the self-similarity in images or similarities to trained data sets.
Since the training has been carried out on images with a similar characteristic as the test data, the quality achieved by the considered algorithms is rather high. However, in all the cases when content is considered that does not fulfill these properties, they fail. More importantly, even though the SR algorithms can improve the PSNR, they are not able to increase the actual resolution of an imaging system. The spatial resolution is always limited by the sampling in the image sensor and, as shown in the subsection after the next, cannot be improved by single-image SR algorithms.
D. Evaluation of the Sensitivity
Independent of whether the three-quarter sampling is achieved by placing the insensitive parts of a pixel cell in one quadrant or by using a mask in front of the pixels, it has to be considered that a fill factor of only 75% can be used. In contrast to this, common image sensors with backside illumination and / or microlenses can achieve fill factors of up to roughly 100%. Accordingly, the proposed three-quarter sampling has a lower sensitivity and hence is more prone to shot noise. In order to assess the influence of the reduced fill factor of the proposed concept on the actual imaging quality, simulations with superimposed noise have been carried out. For this, shot noise following a Poisson distribution has been added to the pixels in order to simulate the variation of the incident light. In order to simulate the noise, a full well capacity of 10,000 e− has been assumed. Compared to state-of-the-art imaging sensors, this is a quite low full well capacity; however, in order to actually assess the effects, this more challenging scenario has been considered. Additionally, in order to simulate thermal and readout noise, Gaussian distributed noise with a standard deviation of 25 e− has been added. Table III lists the resulting image quality. Apparently, the image quality is lower compared to the noiseless case. Furthermore, it can be observed that in the case where the large pixels are used, the loss is very small, whereas for non-regular quarter sampling [9], the highest loss arises. The large loss of the non-regular quarter sampling is caused by the small area sensitive to light. Since the proposed non-regular three-quarter sampling integrates the light over a larger area, it is less sensitive to noise and the degradation caused by the noise is moderate.
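A sketch of this noise model is given below; the mapping of normalized intensities to photo-electron counts and the way the fill factor scales the collected charge are assumptions of the sketch, not specifications from the paper.

```python
# Shot noise (Poisson) with a full well capacity of 10,000 e- plus Gaussian read noise with a
# standard deviation of 25 e-, applied to an ideal sensor signal normalized to [0, 1].
import numpy as np

def add_sensor_noise(signal, fill_factor, full_well=10_000, read_sigma=25.0, seed=0):
    rng = np.random.default_rng(seed)
    electrons = signal * full_well * fill_factor             # expected photo-electrons per pixel
    noisy = rng.poisson(electrons).astype(float)              # shot noise
    noisy += rng.normal(0.0, read_sigma, size=signal.shape)   # thermal / readout noise
    return np.clip(noisy / (full_well * fill_factor), 0.0, 1.0)
```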
E. Evaluation of the Image Resolution
The evaluation of the quality in terms of PSNR and SSIM is only one possibility to measure the image quality. Aside from this, the achievable image resolution has to be evaluated. PSNR and SSIM only provide an overall quality score but do not express the ability of a system to resolve even very fine details. In order to assess this property, different evaluations have been carried out and are discussed in the following. In order to allow for a compact presentation, only a subset of the aforementioned combinations of sensor layouts and well-performing reconstruction algorithms is considered in this subsection. For obtaining a representative set of combinations, the large pixel case is considered together with BIC and SR-Zeyde [8], the non-regular quarter sampling together with FSR [10] and KR [33], and for the regular and non-regular three-quarter sampling, PE and the proposed JSDE are considered.
The first option to evaluate the image resolution is to apply the different combinations on a resolution test chart and determine how well different details can be recognized. For this, the EIA-1956 Resolution Test Chart is considered. In Figure 9, details of the reconstructed images are shown. It can be observed that all regular sensor layouts lead to severe aliasing and that small details cannot be separated well. Only the non-regular quarter and three-quarter sampling in combination with FSR and the proposed JSDE are able to acquire even fine details and separate the converging lines up to the smallest distance. In direct comparison between the quarter sampling and the three-quarter sampling case, it can be discovered that FSR achieves a higher resolution and that the converging lines can be separated more precisely. In doing so, also a higher PSNR can be achieved. However, the non-regular quarter sampling in combination with FSR also introduces some ringing and some ragged edges at the letters. Nevertheless, the three-quarter sampling in combination with JSDE also is able to resolve even fine details and does not produce these artifacts.
In order to quantify the abilities of the different sampling concepts and reconstruction algorithms, further tests have been carried out with line patterns of different spatial frequency. For measuring the resolution of the different combinations, the modulation transfer function is considered, which measures the contrast of the reconstructed signal for different spatial frequencies. The contrast C is defined as C = (I_max − I_min) / (I_max + I_min), with I_max being the maximum amplitude in the reconstructed image and I_min being the minimum amplitude. In Figure 10, the contrast is plotted with respect to the frequency of the line pattern, defined relative to the spatial sampling frequency with respect to the low-resolution grid. It can be seen that both non-regular sampling layouts are able to achieve a high contrast up to the sampling frequency of the low-resolution grid if combined with FSR, or respectively, JSDE. The non-regular quarter sampling achieves an almost perfect reconstruction in this case. Nevertheless, the non-regular three-quarter sampling also is able to yield a very high contrast and to resolve even very fine structures. Of course, due to aliasing the regular sampling with large pixels can only cover frequencies up to half the sampling frequency and the contrast drops significantly if the Nyquist frequency is exceeded. This also cannot be resolved by single-image SR algorithms and it can be observed that in this case the resolution is limited by the underlying large pixels. For high spatial frequencies, the SR algorithm can only achieve a resolution similar to the BIC case.
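The contrast measurement can be illustrated with a short sketch; the use of a sinusoidal line pattern is an assumption, as the exact pattern type is not specified here.

```python
# Michelson contrast C = (Imax - Imin) / (Imax + Imin) of a reconstructed line pattern whose
# frequency is given relative to the sampling frequency of the low-resolution grid.
import numpy as np

def michelson_contrast(img):
    i_max, i_min = float(img.max()), float(img.min())
    return (i_max - i_min) / (i_max + i_min)

def line_pattern(size, rel_freq):
    """Vertical sinusoidal pattern; rel_freq = 1.0 corresponds to the low-resolution sampling rate."""
    x = np.arange(size)
    row = 0.5 + 0.5 * np.cos(2 * np.pi * rel_freq * x / 2.0)  # low-res pixel pitch = 2 fine pixels
    return np.tile(row, (size, 1))
```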
In order to show that the ability to resolve fine details is not direction-dependent, Figure 11 shows details of the test image Zoneplate, which is a rotation-symmetric chirp and is shown at the left side. This test image covers the whole frequency range in all directions and is well suited for measuring to what extent high-frequency content can be recovered. Comparing the images, it can be discovered that only the non-regular quarter sampling in combination with FSR and the proposed non-regular three-quarter sampling in combination with JSDE are able to cover the whole frequency range and acquire even very fine details. All other algorithms produce strong aliasing artifacts for high frequencies, also leading to a significant loss in PSNR and SSIM. Regarding the case of regular three-quarter sampling in combination with JSDE, it can be seen that aliasing artifacts occur there as well. However, they are not an output of the reconstruction process, but rather result from the regular sampling pattern.
F. Evaluation of the Computational Complexity
For evaluating the computational complexity of the different reconstruction algorithms, runtime tests have been carried out with MATLAB R2016b on an Intel Xeon E5-1620 v2 equipped with 32 GB RAM. As some of the reconstruction algorithms make use of MEX-files, the comparison is not completely fair; nevertheless, it provides a good impression of the overall complexity. In order to get reliable results, 10 runs have been carried out and always the same image of size 1024 × 1024 pixels has been reconstructed. In Table IV, the average runtime in seconds is provided for the different combinations of sensor layouts and reconstruction algorithms. It can be observed that the runtime spreads over several orders of magnitude, from simple reconstruction algorithms like PE and BIC to the proposed JSDE. Currently, JSDE is the most complex algorithm, even though its complexity is of the same order as that of the other sparse modeling algorithms MP and GAMP. Even though the runtime of JSDE looks poor at first glance, the algorithm has potential for high speed-ups. For this, the relationship of JSDE to FSR could be exploited and a frequency domain implementation of JSDE similar to FSR [10] is envisaged for the future. Using this strategy, as shown in [40] for FSR, even a real-time operation is possible.
Table IV. Average runtime in seconds. Large pixel: SR-Yang [6] 734.99, SR-Kim [7] 557.98, SR-Zeyde [8] 20.89. Non-regular quarter sampling [9]: FSR [10] 208.78, KR [33] 137.16, CLS [34] 52.84. Regular three-quarter sampling: PE 0.014, MP [28] 19794.67, GAMP [35] 13142.46, JSDE 20524.30. Non-regular three-quarter sampling: PE 0.44, MP [28] 20464.58, GAMP [35] 8092.05, JSDE 20523.00.
Fig. 12. Visual results for details of 400 × 250 pixels (left two columns) and 120 × 75 pixels (right two columns) of different test images from the TECNICK image data base [39]. The sensor layouts large pixels (LP), non-regular quarter sampling (1/4-NonReg), regular three-quarter sampling (3/4-Reg), and non-regular three-quarter sampling (3/4-NonReg) are considered in combination with different reconstruction algorithms. (Please pay attention, additional aliasing may be caused by printing or scaling. Best to be viewed enlarged on a monitor.)
G. Evaluation of the Visual Quality
Aside from the objective evaluation in terms of PSNR/SSIM and resolution provided above, the visual quality of the reconstructed images is of course of high importance. For demonstrating the visual quality of the different concepts, Figure 12 shows the output for different combinations of sensor layouts and reconstruction algorithms. In order to allow for a compact presentation, only the same subset of combinations between sensor layouts and reconstruction algorithms as used in Subsection IV-E is considered again. The image details have been selected to represent very different content, and it can be seen that the considered algorithms behave differently with respect to the content to be reconstructed. Furthermore, for each image, the PSNR and the SSIM are provided. However, in many cases the measured PSNR fits neither the actual resolution of the image nor the visual impression.
Comparing the different images, it can be observed that except for the non-regular sampling layouts reconstructed by FSR and JSDE, all pairs of sensor layout and reconstruction algorithm suffer either from aliasing or from introduced artifacts. Looking at the output from the large pixels and the SR algorithm, it can be seen that this combination yields a high quality in many cases. However, whenever it comes to very fine details, this combination suffers from aliasing, leading to a reduced visual quality. If the non-regular quarter sampling case with FSR reconstruction and the proposed non-regular three-quarter sampling in combination with JSDE are examined in detail, it can be observed that FSR introduces some very small ringing artifacts which are not visible for the JSDE case. Hence, the proposed concept is more sensitive to light and also yields a higher visual image quality.
V. CONCLUSION AND OUTLOOK
In this paper, we have proposed a novel strategy for modifying image sensors in order to increase the imaging resolution. By applying a non-regular sampling, the limitations of image sensors caused by aliasing can be avoided. The proposed sensor layout is still based on state-of-the-art technologies and only very small changes of the design tool chain and workflow are required. This is achieved by defining a prototype pixel cell which is non-regularly rotated. Using this, the light falling on the sensor pixels is integrated over non-regularly placed regions, therewith achieving a non-regular sampling. In order to achieve a higher resolution with the proposed sensor, a reconstruction to a high-resolution grid is required. For this, we have proposed the Joint Sparse Deconvolution and Extrapolation (JSDE) as a sparsity-based algorithm which allows for solving the underdetermined problem that comes along with the reconstruction. By performing a spatially varying deconvolution in combination with an extrapolation, JSDE is able to achieve a very high image quality. Using this, non-regular sampling in combination with JSDE is able to outperform classical sensor layouts and reconstruction algorithms in terms of resolution and achieves a high visual quality.
Future research aims at combining the proposed non-regular sensor layout with super-resolution techniques. As the non-regular sampling is directly able to acquire more high-frequency information than regular image sensors, this might also be beneficial for reconstruction algorithms which are more data-driven than the generic JSDE. Furthermore, the extension of the non-regular sampling concept to color imaging will be investigated. For this, on the one hand, a direct combination of the proposed sensor concept with a Bayer pattern can be considered. On the other hand, an extension of the non-regular sampling concept to the placement of the color filters is foreseen. Finally, the manufacturing of a non-regular sampling sensor is envisioned. With such a sensor, actual measurements would be possible, proving the potential of the proposed concept in real-world applications. | 2019-02-09T14:21:04.259Z | 2019-02-01T00:00:00.000 |
"year": 2022,
"sha1": "541f82c8d968812f7c9312ab1a46e7990262a146",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "1345f04574f3a6f2366dd09dbf19d5cda3cce16f",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
119369379 | pes2o/s2orc | v3-fos-license | STM observation of a box-shaped graphene nanostructure appeared after mechanical cleavage of pyrolytic graphite
A description is given of a three-dimensional box-shaped graphene (BSG) nanostructure formed/uncovered by mechanical cleavage of highly oriented pyrolytic graphite (HOPG). The discovered nanostructure is a multilayer system of parallel hollow channels located along the surface and having quadrangular cross-section. The thickness of the channel walls/facets is approximately equal to 1 nm. The typical width of channel facets makes about 25 nm, the channel length is 390 nm and more. The investigation of the found nanostructure by means of a scanning tunneling microscope (STM) allows us to draw a conclusion that it is possible to make spatial constructions of graphene similar to the discovered one by mechanical compression, bending, splitting, and shifting graphite surface layers. The distinctive features of such constructions are the following: simplicity of the preparation method, small contact area between graphene planes and a substrate, large surface area, nanometer cross-sectional sizes of the channels, large aspect ratio. Potential fields of application include: ultra-sensitive detectors, high-performance catalytic cells, nanochannels for DNA manipulation, nanomechanical resonators, electron multiplication channels, high-capacity sorbents for hydrogen storage.
Introduction
From the moment of graphene discovery until the present time, several methods of its preparation have been suggested [1,2,3,4,5]. Among the suggested methods, the method of mechanical exfoliation of graphene planes from highly oriented pyrolytic graphite (HOPG) [1,5] deserves a special mention, since mechanical exfoliation of graphene planes apparently underlies the mechanism of formation of the spatial box-shaped graphene (BSG) nanostructure described in the present work.
A surface of HOPG having an unusual appearance is presented in Fig. 1 [6]. The surface has been either formed or uncovered after mechanical cleavage. As a rule, plane atomically smooth areas with sizes from several hundreds of nanometers to several microns are produced after cleaving this sort of graphite [7]. In the case considered, the graphite surface represents a multilayer system of parallel hollow channels whose plane facets/walls are apparently graphene sheets.
A periodical microstructure that appeared after mechanical cleavage of HOPG is described in work [8]. The microstructure is a system of parallel folds periodically repeating through approximately 100 µm. The width of a fold area makes about 2 µm. The microstructure consists of several graphite layers and reaches 1-2 µm in depth.
The microstructure and the detected nanostructure have some similarities: both structures extend in one dimension, periodically repeat, their folds are formed across the cleaving front, and they both have layer shifts and channels with quadrangular cross-section. The observed similarities may imply similarity of the processes of formation of those surface structures, i. e., scalability of the phenomenon when passing from micrometer to nanometer fold sizes.
The main objectives of the presented work are (1) Demonstration of the existence of BSG nanostructure.
(2) Analysis of the sizes and morphology of elements of BSG nanostructure.
(3) Development of a possible mechanism (a qualitative model) of the BSG nanostructure formation.
(4) A brief estimation of the prospects of possible applications of BSG nanostructure (to prove the need of further research).
Theoretical analysis and computer modeling of the discovered nanostructure as well as attempts at its reproduction are planned to be implemented at the next stages of the research. Based on the study of the BSG nanostructure, possible areas of its application were defined: detectors, catalytic cells, nanochannels of fluidic devices, nanomechanical resonators, multiplication channels of electrons, hydrogen storage and some others.
The notions of a channel wall and a channel facet used below are close to each other. Wall, as a rule, refers to a flat surface common to two adjacent channels. Facets usually refer to outer flat surfaces of the upper channel layer.
Experimental observations
The following experimental facts point out the small thickness of the walls/facets of the detected nanostructure. First, the direct measurement of the wall thickness (see the white arrows in Fig. 1(a)) of an "open" channel gives a size of order of 1 nm (an open channel is the one that has no top facets). Second, the direct measurement of the facet thickness (see the black arrows in the inset) also gives a size of order of 1 nm.
Third, during the raster scanning, the STM tip seems to cause plastic deformation of the box-shaped nanostructure for some regions even with tunneling currents <1 nA. The latter can only take place in case of thin enough facets/walls of the nanostructure. In particular, one of the possible signs of such deformation is a flattening of top facets of the box-shaped structure. The flattening looks like a notable decrease of the slope of a nanostructure facet.
As an example, one of such places is outlined with a curvilinear contour in Fig. 1(a). Similar formations are well seen on the facets of the neighboring channels. The top facet of a channel inside the frame in Fig. 1 is shown in Fig. 2 at a higher magnification.
Moreover, the small thickness of the facets/walls is pointed out by the fact of damage (or plastic deformation) of several regions (outlined with ovals) of the box-shaped nanostructure. The damage happened after changing the fast scanning direction by 90° (compare the regions outlined with ovals in (b) with the same regions in (a)). Most likely, the stiffness of the facets/walls in the y direction turned out to be insufficient to withstand the force influence from the STM tip. Noteworthy is that during the initial scanning, with the fast direction coinciding with the x axis, the region with the overhanging edge (bottom oval) did not break and did not bend plastically, although notable scanning faults caused by its mechanical instability were registered. (Figure caption fragment: lateral shift of the upper graphene layer relative to the underlying layer makes 11 nm; fast-scan direction is the x axis.)
Let us recall that between the STM tip and the surface under investigation, besides the tunneling current registered during the STM measurement, a force interaction takes place [7,9,10]. Herein, the larger the tunneling current I tun (set point) is, the closer the microscope tip is located to the surface at the same applied bias voltage U tun and, in turn, the greater the forces acting between the tip apex and the surface.
Specific faults that appeared during the scanning are a fourth sign pointing to the small thickness of the nanostructure facets/walls. These faults appear as narrow streaks from one to several raster lines in width (see the area located above the horizontal line in Fig. 1(a)). The streaks are oriented exactly along the raster lines. Such faults in microscope operation could be taken for a damage/modification of the surface resulting from the above mentioned forces acting between the STM tip and the surface, since a damage/modification of the surface often leads to unstable scanning. However, the next scanning of the same surface areas revealed no signs of damage/modification and, after switching the fast scanning direction from x to y, the faults disappeared altogether (see Fig. 1(b)).
The faults under consideration could also have been interpreted as a result of random surface contamination. Such contamination is often either introduced from the outside or consists of nanoscale debris of the nanostructure originated from cleaving/scanning [7]. The nanoscale debris causes unstable microscope operation by falling under the probe and/or by sticking to it. The practice of STM measurements, however, shows that the presence of any contamination, as a rule, would make scanning with atomic resolution either completely impossible or extremely unstable.
Meantime, a quite stable atomic resolution was obtained (see Fig. 3) at the subsequent scanning of the upper facet of the nanostructure with high magnification (small scanning step) near the middle of the area enclosed in a frame in Fig. 2. It is well seen in Fig. 2 that the area of the facet flattening, which had been previously considered as a single whole, under a higher magnification turned out to consist of several conditionally plane areas having slightly different slopes. The borders of the plane areas are composed of bent sections of graphene plane formed during a plastic deformation.
Taking into account the above, the nature of the observed streaks can be as follows. If the scanned surface is the surface of a very thin membrane, then the forces applied by the STM tip while moving along the surface cause its elastic deformation (bending) [11,12]. For example, under attractive van der Waals forces, the membrane bends toward the tip, thus increasing the tunnel current. The microscope feedback loop immediately attempts to compensate the change in current by moving the scanner Z manipulator away from the surface. As a result, a not really existing increase in topography height will be observed on the obtained image.
Under the action of repulsive forces, the membrane, on the contrary, bends away from the tip. In this case, trying to reach the set value of the tunneling current, the microscope feedback system moves the scanner's Z manipulator toward the surface, thus increasing the membrane deformation even more. At a certain moment, the membrane reacting force is increased so much that it becomes equal to the tip pressing force and the tunneling current reaches the set value. As a result, a lowering of the topography height will be observed on the obtained image which is absent on the real surface. Taking into account the abrupt dependence of the tunneling current upon the size of the tunnel junction, the pointed out topography changes can be rather strong. (Fig. 3 caption fragment: fast-scan direction is the x axis; the Fourier spectrum given in the inset shows six maxima typical of graphite, with the hexagon composed of the six maxima significantly distorted by thermal drift and residual deformations of the lattice.)
It is just the described type of tip-surface force interaction that takes place in Fig. 1(a) in the form of the above fault. Abrupt topography falls (up to 6 nm in depth) and steep topography rises (up to 4 nm in height) are clearly seen where they should not be, judging by the adjacent scan lines.
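A toy numerical sketch (not part of the original analysis) of how such a constant-current feedback loop converts membrane bending into apparent topography is given below; the exponential current-gap model, the decay constant and the deflection profile are assumptions chosen only for illustration.

import numpy as np

# Constant-current STM toy model: the tunneling current varies as exp(-2 * kappa * gap).
kappa = 10.0                                                        # assumed decay constant, 1/nm
x = np.arange(200)                                                  # scan positions along a raster line
true_height = np.zeros_like(x, dtype=float)                         # flat membrane, nm
deflection = 0.3 * np.exp(-((x - 100) / 20.0) ** 2)                 # bending toward the tip, nm

# The feedback keeps the current at its set point, i.e. it keeps the tip-to-membrane
# gap constant, so the recorded tip height simply follows the deflected surface.
constant_gap = 0.5                                                  # nm, fixed by the chosen set point (assumed)
recorded_topography = true_height + deflection + constant_gap
print(recorded_topography.max() - recorded_topography.min())        # apparent 0.3 nm bump absent from the real topography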
It is worth noting that the force interaction between an STM tip and a surface may possess a hysteresis. In that case, the force interaction has an inelastic character [11], which may point to a rearrangement of the graphene lattice structure and/or a relative sliding of the graphene layers. In certain conditions, hysteresis of the force interaction can cause oscillations of the microscope feedback loop.
The analysis of the obtained STM images thus shows that the facets/walls of the channel of the box-shaped nanostructure are thin membranes (nanomembranes) with the typical thickness of 1 nm. Although the measured thickness of 1 nm corresponds to three graphene layers (the distance between neighboring graphite/graphene layers makes 3.35 Å [13]), the actual facet/wall thickness of the nanostructure can be a couple of graphene layers or even a single layer. This occurs because of some peculiarities of nanostructure wall formation (see splitting into sublayers in Section 5), widening of the objects due to interaction with a sidewall of the STM tip as well as deformation of the nanostructure itself in the locations where the thicknesses of the facets are being measured.
Because of small sizes of the elements of the box-shaped nanostructure, the membranes under consideration have characteristic oscillations of very high frequency [12,14,15,16,17]. As a result, these oscillations cannot pass through the low-frequency microscope servosystem. For the same reasons, the membranes cannot be excited by outside acoustic or seismic disturbances with frequencies entirely within the low-frequency spectral range.
It is well known that even as big objects as micromembranes and microcantilevers can get excited by thermal fluctuations already at room temperature [17,18,19,20]. Besides, free oscillations of a nanomembrane can be excited/damped by the mechanical force exerted by the tip. For example, at the membrane edges (facet cuts of the open channels), high-frequency excitation can occur when the tip goes down from the membrane and, vice versa, it can be damped when the tip goes up onto the membrane. In both cases, the side surface of the tip would participate in the interaction. As a result, the root-mean-square level of noise would noticeably increase at the points corresponding to the edges of the membranes (see Figs. 1 and 2).
In Fig. 1(a), the hollow channels of the BSG nanostructure are oriented toward the scan axis x at the angle α=62.7° and the facet cuts of the open channels make the angle β=143.8° with the axis x. The appearance and the analysis of the cross-sections of the discovered channels have shown that the graphene facets/walls of the nanostructure are not perfect planes [21] and the form of the channel cross-section is close to a parallelogram (see Fig. 4).
The large diagonal of the parallelogram is nearly parallel to the horizontal plane (basal plane). By the STM scans, the following was approximately determined: the mean channel depth d=8±1 nm, the mean size of width projection of the small channel facet w x =18±1 nm, and the mean size of width projection of the large channel facet W x =28±1 nm, the channel length L makes 390 nm and more.
Analysis of the observations
Since the nanostructure under consideration looks periodic, we may expect well-noticeable maxima, corresponding to the observed periodicity, to be present in the two-dimensional Fourier spectrum of the nanostructure. Fig. 1(a) gives a good idea of the directions along which some possible periodicities could propagate. These directions are defined by the angles α+90°=152.7° and β-90°=53.8°. Along the first of the above directions, the channels themselves repeat periodically; along the second one, the cuts of the upper facets do, which run nearly perpendicular to these channels. The angle α+90° apparently points out the direction of the cleaving front movement. The cleaving front refers to the moving line of contact of the adhesive tape with the graphite surface.
As expected, the spatial period f 3 −1 is very close to the manually measured sum of the projections of the widths of the upper facets w x +W x =46 nm. However, the direction this oscillation propagates along differs from the expected direction by more than 13°. Since the Fourier spectrum is capable of the best estimate of the spatial period mean value, more precise values of the width projections of the facets, w x =18.9 nm and W x =29.4 nm, are further used in calculations (to the mean values w x and W x measured manually, 0.9 nm and 1.4 nm were added, respectively, so that the total facet width w x +W x would be numerically equal to the found period f 3 −1 ). The spatial period f 2 −1 relates to the periodicity in the locations of the cross cuts of the upper facets; its direction γ 2 coincides quite well with the direction β-90°. The origination of the spatial period f 1 −1 is not that obvious, though. As this oscillation propagates strictly in the horizontal direction γ 1 , some relation should be assumed between the oscillation and the movement along a raster line. However, the frequency f 1 , having a comparable amplitude and oriented along x −1 , is also present on the Fourier spectrum built for the scan in Fig. 1(b), where the fast-scan direction coincides with the y direction.
(Fig. 4 caption: cross-sectional view of the "open" (low profile) and the "closed" (upper profile) parts of a channel of the box-shaped nanostructure. Matching of the profiles shows that the cross-section shape of the discovered nanostructure is close to a parallelogram. The section locations are shown in Fig. 1(a) with thick lines of corresponding colors. Mean channel depth d=8±1 nm, mean sizes of width projections of the small w x =18±1 nm and large W x =28±1 nm facets of the channel.)
(Fourier spectrum caption fragment: the directions of oscillation propagation corresponding to the found periods are γ 1 =0.0°, γ 2 =48.1°, γ 3 =139.5°; for better visualization, the spectrum image is normalized in the vertical plane with the use of a nonlinear (logarithmic) scale.)
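As an illustrative sketch (not the processing actually used for the measured spectra), dominant spatial periods and their propagation directions can be read off a two-dimensional Fourier spectrum as follows; the synthetic topography and all numerical values are assumptions standing in for the measured scan.

import numpy as np

n, step = 512, 1.0                                        # pixels and nm per pixel (assumed)
y, x = np.mgrid[0:n, 0:n] * step
period, direction = 46.0, np.deg2rad(152.7)               # nm and propagation direction (assumed)
z = np.cos(2 * np.pi * (x * np.cos(direction) + y * np.sin(direction)) / period)

spectrum = np.abs(np.fft.fftshift(np.fft.fft2(z)))
freqs = np.fft.fftshift(np.fft.fftfreq(n, d=step))
mask = (freqs[None, :] ** 2 + freqs[:, None] ** 2) > 0    # suppress the DC component
ky, kx = np.unravel_index(np.argmax(spectrum * mask), spectrum.shape)
fx, fy = freqs[kx], freqs[ky]
print("spatial period:", 1.0 / np.hypot(fx, fy), "nm")
print("propagation direction:", np.degrees(np.arctan2(fy, fx)) % 180.0, "deg")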
From the obtained sizes d, w x , and W x , the widths of the small facet w=19.3 nm and of the large facet W=29.7 nm (w/W≈2/3), as well as the parallelogram angles ϕ 1 =19.7°, ϕ 2 =160.3°, and ϕ 3 =12.0°, were calculated. The model representation of the BSG nanostructure and its typical sizes are given in Fig. 6.
It is rather difficult to suggest an unambiguous description of the mechanism responsible for the spatial box-shaped nanostructure formation based on the available data only. For example, it is still unclear whether the detected nanostructure was originated inside the HOPG body during its crystallization and then simply unsealed (uncovered) by the manual cleavage or whether this nanostructure was formed immediately during the mechanical cleaving in a surface layer. In case the nanostructure has formed immediately during the cleaving, it is unclear whether the HOPG specimen had some structure peculiarities in the considered area by the moment of the nanostructure formation. Those peculiarities could be intercalations, an ordered pattern of defects, etc., which had allowed the observed nanostructure to be formed at the moment of the cleavage.
In the context of the above, noteworthy is the fact that the facets/walls that appeared on the surface make up the nanostructure almost completely consisting of plain areas. The prevalence of plain areas is another fact proving that graphene sheets are the main structural component of the considered nanostructure.
In the scientific literature there are many reports where images of complex dislocation networks observed with an STM on HOPG surface are presented [9,22]. As a rule, dislocation networks observed with an STM are not registered with an atomic-force microscope (AFM). This fact means that the dislocation network is connected with some electronic properties of the HOPG sample and is physically located under the surface rather than upon it. In this regard, a question arises: whether the observed box-shaped nanostructure is really sort of a dislocation network.
Analysis of the published dislocation networks shows that the difference in height of the registered topography makes several Ångströms. In the observed nanostructure the difference in height makes 12-15 nm after elimination of a mean tilt and smoothing. Moreover, the appearance of the nanostructure (the shape of the elements, their mutual location and bulkiness) does not match any of the known substantially flat dislocation networks. Thus, according to the signs mentioned above, the detected box-shaped nanostructure may not be admitted as a dislocation network.
It is difficult to say anything definitive about the real number of the formed layers of the channels of the box-shaped nanostructure based only on the available data. At least two layers of the channels are clearly recognizable in Fig. 1. The lowermost layer can be observed when moving along the diagonal connecting the left top and the right bottom corners of the scan (red and blue cutting lines). This layer is partially opened (blue cutting line). In parallel with this layer, yet another channel layer located above is well seen in the right top corner of the scan, which is partially opened as well. The two layers of the box-shaped channels are shown schematically in Fig. 6.
In parallel to the upper facets of the lower layer of the channels in the left bottom corner of the scan, two graphene layers are located (see Fig. 1). They overlap one another and have a thickness of about 1 nm each. Those graphene layers partially cover the upper facets of the lower layer of the channels. The edge of one of the graphene layers is shown in Fig. 2 with higher magnification. The fact of existence of these two clearly distinguishable graphene layers is yet another sign pointing out the small thickness of the facets/walls of the found nanostructure.
It is well seen in Fig. 2 that the upper graphene layer is shifted relative to the lower layer by approximately 11 nm in the lateral plane. Considering the profile of the nanostructure channels, it becomes obvious that an empty space should be formed between the layers after lateral shifting (see Section 5). The existence of the empty space and the force applied by the microscope tip to the upper graphene layer during scanning may explain why the plane upper facets of the channels were found somewhat deformed.
As was noted above, atomic resolution is possible (see Fig. 3). The hexagon formed by the maxima is strongly distorted by thermal drift [23,24,25]. Moreover, the hexagon is probably distorted by residual deformations that appeared during formation of the structure and its scanning. In the absence of any distortions, the considered hexagon is regular. In order to precisely determine, using an STM, the degree of residual strain in the lattice of graphene that the facets of the box-shaped nanostructure are formed of, the method of feature-oriented scanning (FOS) [23,24,25] should be applied. A distinctive feature of FOS is in situ elimination of drift influence on the scanning results. It is worth noting that atomic resolution was really obtained on the surface of a thin membrane consisting of 2-3 graphene layers. This fact again confirms (indirectly) the high rigidity of graphene structures [1,2,3].
By comparing directions α and β of the box-shaped nanostructure with the crystallographic directions θ on the facet surface, we can say with a high degree of confidence that the direction α of the nanostructure channels approximately coincides with the direction θ 3 of the graphene plane and the direction β of the facet cuts of the channels approximately coincides with the direction θ 1 of the graphene plane [26]. Some discrepancies between the angles α and θ 3 , β and θ 1 can be accounted for by differences in drift velocities (thermal drift + creep) [24,25], which probably took place as the scans shown in Fig. 1(a) and Fig. 3 were being acquired. Taking into consideration that the found values of lattice constants a 1 , a 2 , and a 3 are more like a=2.46 Å than like a=1.42 Å, we can suggest that the number of graphene layers in this particular facet is no less than two and that the relative location of the layers is exactly the same as that of the adjacent layers in graphite (ABAB stacking).
Formation mechanism
Below is a qualitative description of the probable formation mechanism of the detected BSG nanostructure. It is assumed that the box-shaped nanostructure arises as a result of a mechanical cleaving performed by an adhesive tape. Fig. 8 shows the HOPG cleaving method that possibly enables the formation of the searched for box-shaped nanostructure. At first glance, the method might seem just insignificantly different from the existing one. Nevertheless, there are several specific peculiarities, namely: (1) a small-valued (about 12°) cleavage angle ϕ 3 defined as the tilt of the small facet of the nanostructure channel (see Fig. 6); (2) the position of the adhesive tape on the graphite surface so the cleavage front be approximately parallel to one of the crystallographic directions of the lattice (see Fig. 7); (3) the setting of a minimal external cleaving force F and keeping that force constant during the whole process.
Let us make a detailed analysis of the cleaving process. To begin with, let us consider some short-length (tens of nanometers) section AB of the cleaving surface directly adjacent to the current position of the cleavage front.
The cleavage front passes through the point A normally to the plane of Fig. 8. The action of the external cleaving force F is transferred through the adhesive tape to a thin surface layer of the considered section AB (pos. 1). Under the influence of a lateral component of the cleaving force, the graphite crystal lattice will undergo elastic compression at the section AB.
When this compression reaches a certain ultimate value, the mechanical stability of the thin graphite surface layer is lost: the layer bends, forming a nanofold, and the graphene layers in the fold split and shift relative to each other (see Fig. 9). Moreover, the layer located at the upper facets of the channel is shifted relative to these facets by approximately 11 nm in the direction defined by the angle α+90°. The layer located above the mentioned layer is also shifted relative to that layer by approximately the same value in the same direction. By the way, the movement direction of the cleavage front in Fig. 7 is chosen exactly based on this observation. The existing shift is undoubtedly a direct confirmation of the possibility that the split graphite layers can shift in a fold relative to each other at the nanoscopic scale.
Thus, the presented facts point out that the graphene nanostructure consisting of one or more layers of channels can be formed as a result of a relative shift (sliding) of the split graphene layers in a fold under the action of the cleaving force F. The angle at which the force F is applied apparently should be determined by the angle ϕ 3 (see Fig. 6), i. e., the slope of the small facet of the nanostructure channel to the horizontal plane (basal plane of graphite). The condition that the external cleaving force F should be set to the minimum value is dictated by the relatively slow, consecutive nature of the processes: compression-bending of the surface graphite layer, nanofold formation and splitting, relative shifts of the graphene layers in the nanofolds. The condition of keeping a constant value of the cleaving force F during the entire process is to ensure that the elements of the BSG nanostructure be created uniform.
The proposed formation mechanism enables fabrication of nanochannels not only with different transverse sizes (see pos. 2 and 3 in Fig. 9) but with a varying cross-section as well. To fabricate nanochannels having a varying cross-section, during the relative shift of layers, the cleaving front should be involved, besides the translational motion, in a slight rotational motion around the axis perpendicular to the small facet plane.
(Fig. 9 caption: simplified formation mechanism of the channel layers of the box-shaped nanostructure (cross-sectional view). Two channel layers appear from three split-in-folds graphene layers during relative shifting (sliding) of these layers along the plane of the small facet under the action of a cleaving force F. ϕ 3 is the angle of force application, i. e., the tilt of the small facet plane to the basal plane.)
Discussion
The detected nanostructure has been formed as a result of a number of inelastic deformations. At the moment of the nanostructure formation, the ultimate relative elongation apparently approached the maximum permissible level for graphene (13% for the "armchair" orientation; 20% for the "zigzag" orientation) [11,29,30] or even exceeded it at some locations (see the structure defects in Fig. 1 in the form of ruptured upper facets). Immediately as the box-shaped nanostructure is being formed, the high stresses in its elements relax through inelastic mechanisms: sliding of the cleaving front, bending of the graphite/graphene plane layers, splitting of the layers in the folds, shifting of the split layers relative to each other, structural rearrangements in the graphene layer [30], and in the extreme case through a complete breakage of C-C bonds. In the absence of inelastic mechanisms, the whole steady spatial nanostructure, which we observe, could not have been formed since after the external force is removed, it would simply have returned to the initial state -a "stack" of graphene sheets.
Single crystal graphite versus HOPG
As shown above, the cleaving force should be oriented relative to graphite crystal lattice in such a way that the cleaving front be parallel to any of the three crystallographic directions of the basal plane. The easiest way to maintain a certain orientation of the cleaving front is to use single crystal graphite (SCG) or Kish graphite [31,32] instead of HOPG. The point is that HOPG macrosample is a polycrystal, where the normals to the basal planes (c directions) of all the crystallites are oriented nearly along the same direction (mosaic spread is tenths of a degree) and other directions (a and b) of the crystallites are randomly oriented. Therefore, with HOPG, it is impossible to immediately set the required orientation of the cleaving front relative to crystallographic directions and so we should only rely on a chance that somewhere at the surface a crystallite exists with the necessary orientation.
Therefore, in case HOPG is applied, in order to detect the sought for box-shaped nanostructure, the entire cleavage area has to be looked through. Actually a search should be performed for a crystallite that satisfies the above condition of cleaving front orientation. This conclusion means that the described method of box-shaped nanostructure fabrication on HOPG is rather time-consuming in the sense of searching for the prepared nanostructure itself.
The rare character of spontaneous formation of the BSG nanostructure while cleaving HOPG is confirmed by the circumstances of the nanostructure discovery. The box-shaped nanostructure was first found during trials and refinements of the method of distributed calibration of a probe microscope scanner [25] based on the FOS approach [23,24]. During these works, the overall measurement time made up more than a year of continuous scanning in automatic mode. Approximately once a day, an overview 2×2 µm scan was carried out at a new location of the sample. It is worth noting that while conducting the distributed calibrations, the HOPG sample was cleaved, in fact, as rarely as about once in 2-3 months [25]. Interestingly, most of the structures previously published in the scientific literature, including various superlattices [33,34,35], were observed on the HOPG surface when the measurements were being taken.
Moiré superlattices inside nanochannels
Considering that the box-shaped nanostructure is formed as a result of a relative shift of the graphene layers, one may suppose that moiré superlattices [33,34,35] are likely to appear in the contact area of these layers (see Fig. 10). Although the shift of the layers in the BSG nanostructure occurs mostly in parallel to a graphite crystallographic direction, the formed moiré pattern will not necessarily be a system of 1D fringes. Since the cleaving front oriented at the angle α is not strictly parallel to the crystallographic direction θ 3 of the upper graphene layer (see Section 4, Fig. 7), the graphene layers forming the box-shaped nanostructure may have rotated relative to each other at an angle of order α-θ 3 =4.7°. Such angles are sufficient for the formation of a 2D hexagonal superlattice having a period of several nanometers and corrugations up to 2 nm [33,34,35].
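For a rough check of the quoted period, the standard estimate for the period of a twist-induced hexagonal moiré pattern, D = a/(2 sin(θ/2)), can be evaluated; the formula and the numbers below are only an illustrative estimate, not a measurement on the discovered structure.

import numpy as np

a = 0.246                       # graphene lattice constant, nm
theta = np.deg2rad(4.7)         # assumed relative rotation of the layers
moire_period = a / (2 * np.sin(theta / 2))
print(round(moire_period, 2), "nm")   # about 3 nm, i.e. a period of several nanometers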
Note that unlike the hexagonal moiré pattern, whose formation requires only a rotation of one graphene layer relative to the other, the formation of a pattern of moiré fringes requires that one graphene layer be stretched/compressed relative to the other layer. Deformation of the contacting graphene layers simultaneously along the x and y directions also leads to the formation of a 2D centered-hexagonal superlattice. Since tensile/compressive strains can only be small from the physical point of view [29,30], they cause moiré patterns with large periods. By changing the parameters of the graphene superlattices formed on the inner surface of the channels of the box-shaped nanostructure, it is possible to modify the energy spectrum of electrons in these areas [36] as well as to control the adsorption properties of the nanochannels [34]. Moreover, the hexagonal superlattices can be used as a template for making ordered nanostructures [37] on the inner surface of the nanochannels.
Additional methods to control the formation of BSG nanostructure
Although the formation of the nanostructure considered is a pure result of a random unmanaged process, a number of factors apparently predetermined its appearance. Among the factors are: the specific orientation of the cleavage front relative to crystallographic directions of the graphite basal plane, the specific value of a cleaving force and the specific relationship between the lateral and the vertical components of the cleaving force as well as the specific direction of the cleaving force relative to the basal plane.
Moreover, by implementing a certain pattern of mechanical stresses/defects on and/or near the graphite surface that weaken the bonds between the graphite planes in some surface areas and strengthen them in other areas, an attempt could be made to take a better control of the process of the box-shaped nanostructure formation (folding, layer splitting and shifting). In order to implement such stress/defect pattern, some of the already known physical and/or chemical methods could be applied. Among them are electron/ion bombardment [38,39], intercalation [32], substrate thermal deformation [40,41], surface "cutting" by means of catalytic hydrogenation [42] or local probe oxidation [43], etc.
Covering inner surface of nanochannels
The suggested mechanism of BSG nanostructure formation also implies that if HOPG is able to intercalate [32] some substance into the surface layer then it is possible, if necessary, to cover (modify) the internal surface of the channels of the box-shaped nanostructure with an atomic layer of that substance. At the first stage, after the BSG nanostructure formation, the covering appears at least on two walls of the channel. At the second stage, the covering material is transferred onto the other two walls by means of annealing in vacuum. Intercalation of atoms that form a dielectric layer allows fabrication of nanochannels with upper and lower parts isolated from each other, so the nanochannels can be used as electrodes (for example, to apply a transverse electric field [44]).
Possible applications
The practical importance of the discovered phenomenon consists in the fact that rather complicated multilayer hollow 3D nanostructures of graphene do exist in principle and that they can be fabricated by using original graphite as a raw stock. It is well known that graphene is especially worthwhile being a thin (literally atomic) graphite layer completely separated from a substrate [16,45]. Otherwise, this material degenerates into a regular, yet very thin, carbon film, which per se can be easily fabricated by the contemporary well-developed methods of molecular-beam epitaxy (MBE) [46,47] or chemical vapor deposition (CVD) [39,48].
What is important for practical application of the detected nanostructure is that the cross-section of the formed channels can vary widely. Unlike the nanopores existing in graphene [49], the nanopores (nanochannels) of BSG nanostructure are not perpendicular to the basal plane but are parallel to it. Moreover, the edge of the open nanopore (see Fig. 1) is so sharp that it is able to "resolve", for instance, single nucleotide bases in a DNA molecule while translocating through the nanopore [49].
Conclusions
The key points of the research can be summarized as follows: (1) A previously unknown 3D box-shaped graphene nanostructure has been detected on highly oriented pyrolytic graphite after mechanical cleavage.
(2) The discovered nanostructure is a multilayer system of parallel hollow nanochannels having quadrangular cross-section with typical width of a nanochannel facet 25 nm, typical wall/facet thickness 1 nm and length 390 nm and more.
(3) An original mechanism has been proposed that qualitatively explains the formation of the nanostructure detected. To elaborate a more detailed mechanism, a more detailed investigation of the nanostructure is required including computer modeling and attempts of intentional fabrication.
(4) Applications are revealed where the use of the box-shaped graphene nanostructure may lead to new scientific results or improve performance of existing devices. | 2019-04-13T16:32:15.964Z | 2016-11-14T00:00:00.000 | {
"year": 2016,
"sha1": "e58fff9817a79e77e317eb8b254d8a496581ed8e",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1611.04379",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "62e8e5924098386783b6ffca357a3f7f8b7903df",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
16226788 | pes2o/s2orc | v3-fos-license | The Maxflow problem and a generalization to simplicial complexes
The problem of Maxflow is a widely developed subject in modern mathematics. Efficient algorithms exist to solve this problem; that is why a good generalization may permit these algorithms to be understood as a particular instance of solutions in a wider class of problems. In the last section we suggest a generalization in the context of simplicial complexes that reduces to the problem of Maxflow in graphs when we consider a graph as a simplicial complex of dimension 1.
Introduction
The problem of Maxflow was formulated by T.E. Harris in 1954 while studying the Soviet Union's railway network, under a military research program financed by the RAND (Research and Development) Corporation. The research remained classified until 1999. The Maxflow problem is defined on a network, which is a directed graph together with a real positive capacity function defined on the set of edges of the graph and two vertices s, t called the source and the sink. A flow is another function of this type that respects capacity constraints and a Kirchhoff's law type restriction on each vertex except the source and sink. The net flow of a flow is defined as the amount of flow leaving the source. The problem of Maxflow is to find a flow with maximum net flow on a given network. In the first section we will define these concepts clearly and present basic results in the subject.
Throughout the second section, we will present three different algorithms for the solution of Maxflow. In 1956, L. Ford and D. Fulkerson devised the first known algorithm for the problem, which, with a suitable choice of augmenting paths, runs in polynomial time. The algorithm works starting with the zero flow and finding paths from source to sink where flow can be augmented preserving the flow and capacity restrictions. We then analyze a more efficient algorithm developed by A. Goldberg and E. Tarjan in 1988. This algorithm works in a different fashion, starting with a preflow, a function saturating edges adjacent to the source, and then pushing excess of flow to vertices estimated to be closer to the sink. At the end of the algorithm the preflow becomes a flow and, in fact, a maximum flow. Finally we describe Dorit Hochbaum's pseudoflow algorithm, which is among the most efficient algorithms known to date for the Maxflow problem.
In the third section we show the usefulness of this subject and present three applications of the theory of network flow. First we show how well-known theorems in combinatorics, such as Hall's Marriage theorem, can be proven using Maxflow results. We then show how to find a set of maximal chains in a poset with certain properties using the results in the previous sections. Finally we describe an algorithm for image segmentation, an important subject in computer vision, that relies on the relation between a maximum flow and a minimum cut.
In the last section we suggest a generalization in the context of simplicial complexes that reduces to the problem of Maxflow in graphs when we consider a graph as a simplicial complex of dimension 1.
Flow in a network
There are many equivalent ways to define the objects needed to state our problem. We will work with the following: a network is a pair (G, c), where G = (V, E) is a simple directed graph with two distinguished vertices, a source s and a sink t, and c : E → R ≥0 is a capacity function on the edges. A flow on a network is a function f on the edges satisfying 0 ≤ f (e) ≤ c(e) for every edge e and the conservation condition that, at every vertex other than s and t, the total flow entering the vertex equals the total flow leaving it. The net flow |f | of a flow f is the total amount flowing out of the source.
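As a small illustrative sketch (the dictionary encoding and the toy network are assumptions, not part of the paper), these definitions can be turned directly into a feasibility check:

def is_flow(vertices, capacity, flow, s, t):
    # capacity and flow map directed edges (u, v) to nonnegative numbers
    if any(flow[e] < 0 or flow[e] > capacity[e] for e in capacity):
        return False                                  # capacity constraint violated
    for v in vertices:
        if v in (s, t):
            continue
        inflow = sum(val for (a, b), val in flow.items() if b == v)
        outflow = sum(val for (a, b), val in flow.items() if a == v)
        if inflow != outflow:                         # Kirchhoff-type conservation
            return False
    return True

def net_flow(flow, s):
    # |f|: the total amount flowing out of the source
    return sum(val for (a, b), val in flow.items() if a == s)

capacity = {("s", "a"): 3, ("a", "t"): 2, ("s", "t"): 1}
flow = {("s", "a"): 2, ("a", "t"): 2, ("s", "t"): 1}
print(is_flow({"s", "a", "t"}, capacity, flow, "s", "t"), net_flow(flow, "s"))   # True 3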
The problem of MAXFLOW
Given a network (G, c), the MAXFLOW problem is to find a flow f of maximum net flow.
Theorem 2.5. Every network (G, c) admits a flow of maximum net flow.
Proof. Identify each flow with the vector of its values on the m edges of G; let ψ denote this identification and F * ⊆ R m its image. The edge capacity constraints imply that F * is bounded, and the flow conservation constraints imply that it is closed, hence F * is compact. The map f * ∈ F * → |ψ −1 (f * )| ∈ R is a linear map, hence continuous. As it is defined on a compact set, it achieves a maximum value, say at f max . ψ −1 (f max ) is then a flow of maximum value.
The previous theorem shows that, in fact, MAXFLOW is a linear programming problem, the most important results of which can be proved with LP theory. We discuss this formulation in detail in what follows.
Definition 2.6. Let G = (V, E) be a simple directed graph, φ its incidence function and (G, c) a network. Let n := |V |, m := |E|, v : [n] → V be an enumeration of the vertices and e : [m] → E be an enumeration of the edges. We define the incidence matrix Φ v,e with respect to the enumerations v, e as the n × m matrix with entries Φ v,e (i, j) = 1 if v(i) is the tail of e(j) (the edge leaves v(i)), Φ v,e (i, j) = −1 if v(i) is the head of e(j) (the edge enters v(i)), and Φ v,e (i, j) = 0 otherwise. From now on, we suppose a network (G, c) has fixed enumerations v, e of vertices and edges. We take n as the number of vertices, m as the number of edges and suppose that v(1) = s, v(n) = t; then we refer simply to the incidence matrix as Φ.
Lemma 2.7. Given a network (G, c), the problem of MAXFLOW is equivalent to the following LP problem: maximize Φ 1 x subject to Φ * x = 0 and 0 ≤ x ≤ c * , where Φ 1 is the first row vector of the matrix Φ, Φ * is the matrix that results from Φ by deleting the first and last rows, I m is the identity matrix of size m (so the capacity constraint can also be written I m x ≤ c * ) and c * := [c(e(i))] i is the vector of edge capacities.
Proof. Follows from the definition 2.3 and the proof of theorem 2.5.
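A minimal sketch of Lemma 2.7 in code is given below: the incidence matrix Φ is assembled for a toy network and the resulting linear program is solved with scipy; the example network and the use of scipy.optimize.linprog are assumptions made for illustration only.

import numpy as np
from scipy.optimize import linprog

vertices = ["s", "a", "t"]                       # v(1)=s, v(n)=t
edges = [("s", "a"), ("a", "t"), ("s", "t")]
cap = np.array([3.0, 2.0, 1.0])                  # c*, the vector of edge capacities

n, m = len(vertices), len(edges)
phi = np.zeros((n, m))                           # incidence matrix Phi
for j, (u, w) in enumerate(edges):
    phi[vertices.index(u), j] = 1.0              # edge e(j) leaves u
    phi[vertices.index(w), j] = -1.0             # edge e(j) enters w

# maximize Phi_1 x  subject to  Phi* x = 0  and  0 <= x <= c*
res = linprog(c=-phi[0], A_eq=phi[1:-1], b_eq=np.zeros(n - 2),
              bounds=list(zip(np.zeros(m), cap)), method="highs")
print(-res.fun)   # value of a maximal flow, 3.0 for this toy network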
Definition 2.8. For a linear program max{c T x : Ax ≤ b, x ≥ 0} (called the primal problem) the dual program is defined as min{b T y : A T y ≥ c, y ≥ 0}. We will compute the dual program of MAXFLOW. A cut of a network (G, c) is a partition (S, S̄) of the vertex set V with s ∈ S and t ∈ S̄; its capacity C(S, S̄) is the sum of the capacities of the edges going from S to S̄. The problem of MINCUT is to find a cut of (G, c) of minimum capacity. Two of the most important results are the following.
Theorem 2.12. MAXFLOW=MINCUT. This means the net flow of a maximal flow is equal to the capacity of a minimal cut.
In order to prove these results, we will see that the dual program of MAXFLOW is a relaxation of the MINCUT problem and use the following:
Theorem 2.13 (weak duality). Let x * and y * be feasible solutions to a primal problem and its dual, respectively; then c T x * ≤ b T y * .
Proof. For y 1 , y 2 ≥ 0 define g(y 1 , y 2 ) := max x {c T x + y 1 T (b − Ax) + y 2 T x}. Clearly, for any feasible x * and y 1 , y 2 ≥ 0, c T x * ≤ g(y 1 , y 2 ), since both added terms are nonnegative at x = x * .
Rearranging terms we have g(y 1 , y 2 ) = b T y 1 + max x (c − A T y 1 + y 2 ) T x, where y 1 ∈ R m and y 2 ∈ R n . Then we have g(y 1 , y 2 ) = b T y 1 whenever A T y 1 − y 2 = c and g(y 1 , y 2 ) = +∞ otherwise. Now minimizing over y 1 , y 2 ≥ 0 we have, for any feasible x * , c T x * ≤ min y 1 ,y 2 ≥0 g(y 1 , y 2 ). By the previous observation this is equivalent to min{b T y 1 : A T y 1 ≥ c, y 1 ≥ 0}, and this is the dual problem, as we wanted to show.
Theorem 2.14 (strong duality). If the primal problem has an optimal solution x * , then the dual problem also has an optimal solution y * and c T x * = b T y * .
To find the dual of our problem, we state it in standard form: maximize Φ 1 x subject to Φ * x ≤ 0, −Φ * x ≤ 0 and I m x ≤ c * , with x ≥ 0. Then we find the dual, which after further inspection is equivalent to a problem with n − 2 variables unrestricted in sign, one for each vertex v(i) ≠ s, t, called v i , and m nonnegative variables y j , one for each edge e j , minimizing Σ j c(e j ) y j . These restrictions translate to the following set of inequalities: for an edge e j from v(a) to v(b) with a, b ∉ {1, n}, y j + v a − v b ≥ 0; for an edge leaving the source, y j − v b ≥ 1; for an edge entering the source, y j + v a ≥ −1; for an edge leaving the sink, y j − v b ≥ 0; and for an edge entering the sink, y j + v a ≥ 0. We can define v 1 = −1 and v n = 0 and write all the equations in the form y j ≥ v b − v a for every edge e j = (v(a), v(b)).
Lemma. There exists a cut of (G, c) whose capacity is at most the optimal value of this dual program.
Proof. Let χ ∈ [−1, 0] be a random variable with uniform distribution. Define a random variable for each edge e j = (v(a), v(b)) by X j = 1 if v a ≤ χ < v b and X j = 0 otherwise. Note that this assignment defines a random cut.
Indeed, the set S = {v(i) : v i ≤ χ} contains s (since v 1 = −1 ≤ χ) and almost surely does not contain t (since v n = 0 > χ), so (S, S̄) is a cut, and the edge e j crosses it exactly when X j = 1. For every edge e j = (v(a), v(b)) we have P(X j = 1) = P(v a ≤ χ < v b ) ≤ max(0, v b − v a ) ≤ y j ; then by the restrictions of the problem, we get E[C(S, S̄)] = Σ j c(e j ) P(X j = 1) ≤ Σ j c(e j ) y j . As the expected value of the random cut capacity is less or equal to the optimal value of the problem, there exists a cut of capacity less or equal to the optimal value.
This proves that the dual of MAXFLOW is in fact a relaxation of MINCUT and we get, by strong duality, that MAXFLOW = MINCUT.
Given a flow f, every pair of vertices (u, v) is assigned a residual capacity c f (u, v), the amount of additional flow that can still be sent from u to v without violating the capacity constraints. Any such pair with residual capacity greater than zero is called a residual edge. Note that the residual capacity is always greater or equal to zero. We define the residual graph G f as the graph with vertex set that of V (G) and edge set the set of residual edges.
An augmenting path with respect to a flow f is a path from s to t in the residual graph G f , and A denotes the minimum residual capacity along such a path. If an augmenting path exists, define F by increasing f by A along the path and leaving it unchanged on any other pair of vertices. One can easily check that F is a flow, and that |F | = |f | + A, so f is not a maximal flow; conversely, if no augmenting path exists, f is maximal (Lemma 3.6). This theorem is the basic result needed to state the Ford-Fulkerson algorithm. Starting with the zero flow, as long as there exists an augmenting path with respect to the current flow, we can increase the value of the flow by A as defined in the above proof.
Lemma 3.7. If (G, c) is such that c(u, v) ∈ Z then the algorithm terminates.
Proof. At each step of the algorithm, the value is increased by A ≥ 1 so a maximal flow is reached after a finite number of steps.
As corollaries we get Corollary 3.8. If the capacities of a network are integers, then the value of the maximal flow is an integer and there exists a maximal flow with f (u, v) ∈ Z for every edge (u, v).
Corollary 3.9. If the capacities of a network are rational numbers, then the algorithm terminates.
In fact, there are examples of networks with irrational capacities such that the algorithm never terminates; moreover, the value of the flow at each step does not converge to the actual value of the maximal flow, so our algorithm must require that the capacity be at least a rational-valued function. The running time of the algorithm then depends on the way the augmenting paths are chosen. There are many ways to find an augmenting path, like the shortest augmenting path or the largest bottleneck (value of A) augmenting path, that lead to a polynomial time algorithm.
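A compact sketch of the augmenting-path idea using breadth-first search (i.e. shortest augmenting paths, one of the choices just mentioned) is given below; the adjacency-dictionary encoding and the toy network are assumptions for illustration.

from collections import deque

def max_flow_augmenting_paths(capacity, s, t):
    # capacity: dict {u: {v: c(u, v)}}; residual capacities are updated in place
    residual = {u: dict(adj) for u, adj in capacity.items()}
    for u, adj in capacity.items():
        for v in adj:
            residual.setdefault(v, {}).setdefault(u, 0)    # reverse edges start at zero
    value = 0
    while True:
        parent, queue = {s: None}, deque([s])
        while queue and t not in parent:                   # BFS for a shortest augmenting path
            u = queue.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return value                                   # no augmenting path: the flow is maximal
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)  # the value A from the proof above
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        value += bottleneck

capacity = {"s": {"a": 3, "t": 1}, "a": {"t": 2}, "t": {}}
print(max_flow_augmenting_paths(capacity, "s", "t"))        # 3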
The Goldberg-Tarjan algorithm
The Goldberg-Tarjan algorithm [2] is another polynomial time algorithm, with a different approach to the problem of finding a maximal flow. Instead of increasing the flow along augmenting paths, it starts with a preflow, a function on V × V which satisfies the capacity and antisymmetry constraints and has a nonnegative excess of flow at each vertex, and then pushes excess flow toward vertices estimated to be closer to the sink. Next we formalize these concepts following Goldberg-Tarjan's article [2].
Definition 3.11. For a network (G, c) and a preflow f on the network, we redefine the residual capacity c f (u, v) of a pair (u, v) with respect to f in the analogous way. If c f (u, v) > 0 we call such a pair a residual edge. We define the residual graph as the directed graph having vertex set V and edge set the set of residual edges.
Note there are similarities with definition (3.4) but in this definition we are working with a preflow rather than a flow.
Definition 3.13. Given a preflow f on a network (G, c), a valid labeling is a function d : V → Z ≥0 such that d(s) = n, d(t) = 0 and d(v) ≤ d(w) + 1 for every residual edge (v, w).
It can be shown that for any vertex v, if d(v) < n then d(v) is a lower bound on the distance from v to t in the residual graph, and if d(v) ≥ n then d(v) − n is a lower bound on the distance to s in the residual graph [2]. This labeling of the vertices permits the algorithm to push excess flow to vertices that are estimated to be closer to the sink and, if needed, to return flow to vertices estimated to be closer to the source.
Now we define the basic operations, push and relabel, that the main algorithm uses.
Push. Let v be an active vertex (a vertex other than s and t with positive excess) and let (v, w) be a residual edge with d(v) = d(w) + 1. The push operation sends min(e(v), c f (v, w)) units of flow from v to w.
Relabel. Let v be an active vertex such that for any residual edge (v, w) we have d(v) ≤ d(w). The relabel operation sets d(v) := min{d(w) + 1 : (v, w) is a residual edge}.
As initial preflow we take the function f such that for any v ∈ V, f (s, v) = c(s, v) and zero everywhere else. It is readily checked that this is a preflow. As an initial labeling of the vertices we take d(s) = n and zero everywhere else. As long as there is an active vertex v, either an operation of push or relabel is applicable to v. When there are no more active vertices the algorithm terminates, the preflow becomes a flow and, in fact, it is maximal. Details of the proof of correctness and termination of the algorithm can be found in [2]. We show only correctness assuming termination.
Lemma 3.15. If f is a preflow and d is any valid labeling for f then the sink t is not reachable from s in G f .
Theorem 3.16. If the algorithm terminates and d is a valid labeling for f with finite labels, then f is a maximal flow.
Proof. At the end of the algorithm there are no active vertices; since the labels are finite, this means that all vertices other than s and t have zero excess, so f is a flow. By lemma (3.15) and lemma (3.6) this flow is in fact maximal.
One important remark about this algorithm is the fact that it always works (it terminates and it is correct) no matter what type of capacity function we are dealing with. The Ford-Fulkerson algorithm, in contrast, fails to terminate in some cases where the capacity function is not rational. It is also important to note that the algorithm relies only on local operations; that is, the operations depend on and modify only parameters related to a small part of the graph. This allows a parallel implementation of the algorithm that takes advantage of multicore processors. A special implementation of such an algorithm terminates after O(n 2 m) steps.
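The following is a minimal, unoptimized sketch of the push and relabel operations described above (generic vertex selection, dictionary-encoded graph); it is an illustration under these assumptions rather than the implementation analyzed in [2].

def push_relabel_max_flow(capacity, s, t):
    # capacity: dict {u: {v: c(u, v)}} with nonnegative entries
    nodes = set(capacity) | {v for adj in capacity.values() for v in adj}
    n = len(nodes)
    resid = {u: {} for u in nodes}
    for u, adj in capacity.items():
        for v, c in adj.items():
            resid[u][v] = resid[u].get(v, 0) + c
            resid[v].setdefault(u, 0)
    excess = {u: 0 for u in nodes}
    label = {u: 0 for u in nodes}
    label[s] = n                                          # initial valid labeling
    for v, c in list(resid[s].items()):                   # initial preflow saturates edges out of s
        if c > 0:
            resid[s][v] = 0
            resid[v][s] += c
            excess[v] += c
    active = [v for v in nodes if v not in (s, t) and excess[v] > 0]
    while active:
        u = active[-1]
        pushed = False
        for v, c in resid[u].items():
            if c > 0 and label[u] == label[v] + 1:        # push along an admissible residual edge
                delta = min(excess[u], c)
                resid[u][v] -= delta
                resid[v][u] += delta
                excess[u] -= delta
                excess[v] += delta
                if v not in (s, t) and v not in active:
                    active.append(v)
                pushed = True
                if excess[u] == 0:
                    break
        if not pushed:                                    # relabel: lift u just above its lowest residual neighbour
            label[u] = 1 + min(label[v] for v, c in resid[u].items() if c > 0)
        if excess[u] == 0:
            active.remove(u)
    return excess[t]                                      # net flow of the resulting maximal flow

capacity = {"s": {"a": 3, "t": 1}, "a": {"t": 2}}
print(push_relabel_max_flow(capacity, "s", "t"))          # 3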
Hochbaum's pseudoflow
Dorit Hochbaum's pseudoflow algorithm [3] is an algorithm with a different approach to the maximum flow problem. Instead of directly finding a maximum flow, it first solves the maximum blocking cut problem, then a maximum flow is recovered. Although the most complicated of the three, it is also the most efficient. We follow [3]: The concept of pseudoflow drops the conservation of flow constraint, preserves the capacity constraint on the edges of the graph and the antisymmetry constraint on V × V . We define the residual capacity and residual graph in the same manner we did with flows and preflows.
We define the excess of flow at a vertex e(v) as in definition ( 3.12).
Now we consider a pseudoflow f on G st and a rooted spanning tree with root r, T of G ext such that ii) For every arc in E\T , f is either zero or saturates the arc.
iii) In every branch all downward residual capacities are strictly positive.
iv) the direct children of r are the only vertices that do not have zero excess.
Definition 3.19. A spanning rooted tree with root r of G ext that satisfies the previous conditions is called a normalized tree. Note that this is an undirected graph.
A child r i of r is classified as strong if its excess is strictly positive and as weak otherwise. A vertex v is called weak or strong if it has a weak or strong ancestor, respectively.
As we mentioned earlier, Hochbaum's algorithm first solves the maximum blocking cut problem, which we state next. Problem: For a directed, weighted graph G = (V, E) with vertex weights w(v) for each vertex v, and arc capacity function c(a, b) defined for every (a, b) ∈ E, find S ⊂ V such that the surplus Σ v∈S w(v) − C(S, S̄) is maximum, where C(S, S̄) denotes the sum of the capacities of the arcs going from S to its complement S̄. Such a set is called a maximum surplus set and (S, S̄) is called a maximum blocking cut.
The key is the relation between a maximum blocking cut in G and a minimum cut in G st , given by Lemma 3.20; this is proven in [3] following an article by Radzik [4]. The following lemma (Lemma 3.21), also found in [3], is fundamental for the correctness of the algorithm: it asserts that if a normalized tree has no residual arcs from strong to weak vertices, then the set of strong vertices is a maximum surplus set. For a normalized tree T, if the set of strong vertices S satisfies the condition in lemma (3.21), the tree is called optimal. The algorithm starts with a normalized tree related to a pseudoflow f on G st . There are multiple choices of such a tree. We will start with a simple normalized tree. It corresponds to a pseudoflow f saturating A(s) and A(t) on G st . In this normalized tree every vertex in V forms an independent branch. The set of strong vertices are those adjacent to the source.
By lemma (3.21), it is desirable to reduce the residual capacity from strong to weak vertices; therefore, with each iteration of the algorithm, a residual edge from S to S̄ is chosen, called a merger arc (edge). If such an edge does not exist, then the tree is optimal and the set of strong vertices forms a maximum blocking cut. If there is one, then this edge becomes a new edge of the tree and the edge joining the root of the strong branch to r is removed from the tree. Then the excess of the root of the strong branch is pushed upwards until it reaches the root of the weak branch. Note that this path is unique.
It is not always possible to push the total of the excess along an edge. If there is an edge, say (a, b), that does not have enough residual capacity to push the excess, then such an edge is removed (split) from the tree, and a (the tail of the edge) becomes the root of a new strong branch with excess equal to the excess pushed minus the residual capacity of the edge. This is done in such a way that the property that only roots of branches may have nonzero excess is maintained throughout the running of the algorithm. The remaining excess at b continues to be pushed in the same fashion until it reaches the root of the weak branch or until it reaches another edge that does not have enough residual capacity, and the process is repeated. This process ensures that the tree is normalized at the end of each iteration.
Termination of the algorithm follows from the next lemma: Lemma 3.22. At each iteration of the algorithm either the total excess of the strong vertices is strictly reduced or the number of weak vertices is reduced.
Proof. Recall from the properties in definition (3.19) that all downward residual capacities of edges are positive. After appending a merger edge to the tree and removing the edge joining the root of the strong branch r_s to r, the path from r_s into the weak branch becomes an upward path with positive residual capacity on each of its edges, so some positive amount of excess arrives at the weak branch being merged. Then either some positive amount of excess arrives at the root of the weak branch, in which case the total excess is strictly reduced, or there is some edge in the weak branch without enough residual capacity. In the latter case that edge is split and its tail becomes a strong vertex. Note that if some weak vertex becomes strong in this fashion, then all of its descendants, including the former strong branch, become strong; hence if such an operation takes place, the number of weak vertices is strictly reduced. Now let M+ = C({s}, V) be the sum of capacities in A(s) and M− the sum of capacities in A(t); by the final comment in the previous lemma we see that any iteration that reduces the total excess is separated from another iteration of the same type by at most n iterations.
For integer capacities, a bound on the number of iterations then follows immediately. Since the problem is symmetric in s and t, reversing the directions of all edges of the graph and interchanging s and t gives an equivalent problem, so an analogous bound for integer capacities follows as well. In order to solve our initial problem we now have to recover a maximum flow from the pseudoflow and the maximum blocking cut obtained after the algorithm terminates, as it is not guaranteed that the pseudoflow is a flow at termination. In what follows we describe how to recover such a maximum flow. Proof. Suppose f is such that |f| > 0. To get a feasible flow on the original network, we have to get rid of excesses at strong nodes and deficits at strictly weak nodes. For any strong vertex v_s, as long as f(v_s, t) > 0 we have that (t, v_s) ∈ E_f is part of the residual network. Hence by lemma (3.27) we have a residual path from t̄ to s̄ that contains the edge (t, v_s). Increasing the flow on such a path by an amount δ equal to the minimum of the residual capacities along the path decreases the excess of v_s by the same amount. After one such step, either the vertex v_s reaches zero excess or the process can be repeated, again by lemma (3.27). This is a process analogous to flow decomposition on the reversed graph. After termination there are no vertices other than t with positive excess.
In the same fashion, the remaining flow is decomposed until the positive deficits at strictly weak vertices are disposed of. This must be done via t, as it is the only vertex sending a positive amount of flow to t̄. After termination all vertices except s and t have zero excess. Deleting s̄ and t̄ from the graph leaves us with a feasible flow on G_st.
Corollary 3.28. A maximum flow can be recovered from an optimal normalized tree with pseudoflow f.
Proof.
For an optimal tree we have C_f(S, S̄) = 0; that is, there are no residual edges directed from strong to weak vertices. Hence, following the previous argument, excesses at strong vertices can be disposed of using only paths traversing strong nodes. Moreover, there are no edges directed from a weak to a strong vertex carrying positive flow, as otherwise the reverse edge would have residual capacity greater than zero, a contradiction. So by the proof of theorem (3.26) the remaining deficits at weak vertices are disposed of using only paths traversing weak vertices. It then follows that C_f(S, S̄) = 0 still holds after recovering a flow f such that c(v, w) = f(v, w) for v ∈ S, w ∈ S̄, and as a consequence |f| = C(S, S̄). By lemma (3.20) and lemma (3.21), (S, S̄) is a minimum cut. This shows that |f| is maximum.
Applications
There are many not-so-obvious applications of maximum flow algorithms and results to different pure and applied topics. We present three interesting problems that can be solved using the previous results.
Hall's Marriage Theorem
Let G = (V ∪ W, E) be a bipartite graph, where V ∩ W = ∅ and |V| = |W| = n. Label the vertices in V as v_1, . . . , v_n and the vertices in W as w_1, . . . , w_n. A perfect matching on G is a permutation σ ∈ S_n such that [v_i, w_{σ(i)}] ∈ E for every i = 1, . . . , n. We prove the following using the Maxflow–Mincut theorem (2.12).
So the capacity of a minimum cut is n. By theorems (2.12) and (3.8) there exists a maximum flow f with integer values and net flow |f| = n. As there are only n edges out of the source s and n edges into the sink t, all of capacity 1, they must be saturated. By conservation of flow and the fact that the flow is integral, for any v ∈ V there exists exactly one w ∈ W such that f(v, w) = 1. Again by conservation of flow and integrality, for any w ∈ W there exists exactly one v ∈ V such that f(v, w) = 1. This shows that the edges directed from V to W carrying a flow of 1 define a perfect matching on G.
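The argument can be illustrated with a short sketch in Python. The network construction (unit-capacity arcs s → v_i, v_i → w_j for each edge [v_i, w_j] of G, and w_j → t) follows the text; the plain BFS augmenting-path routine and all names are our own choices, and any max-flow algorithm would do.

```python
from collections import deque

def max_flow(n_nodes, cap, s, t):
    """Edmonds-Karp style max flow; capacities given as dict {(u, v): capacity}."""
    flow = {}
    def residual(u, v):
        return cap.get((u, v), 0) - flow.get((u, v), 0) + flow.get((v, u), 0)
    adj = {u: set() for u in range(n_nodes)}
    for (u, v) in cap:
        adj[u].add(v); adj[v].add(u)
    value = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:            # BFS for a shortest augmenting path
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and residual(u, v) > 0:
                    parent[v] = u; q.append(v)
        if t not in parent:
            return value, flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v)); v = parent[v]
        delta = min(residual(u, v) for u, v in path)   # bottleneck residual capacity
        for u, v in path:
            back = min(flow.get((v, u), 0), delta)     # cancel reverse flow first
            flow[(v, u)] = flow.get((v, u), 0) - back
            flow[(u, v)] = flow.get((u, v), 0) + delta - back
        value += delta

def perfect_matching(n, edges):
    """V = {0..n-1}, W = {0..n-1}; edges is a list of pairs (i, j) with v_i ~ w_j."""
    s, t = 2 * n, 2 * n + 1                      # node ids: V = 0..n-1, W = n..2n-1
    cap = {(s, i): 1 for i in range(n)}
    cap.update({(n + j, t): 1 for j in range(n)})
    cap.update({(i, n + j): 1 for i, j in edges})
    value, flow = max_flow(2 * n + 2, cap, s, t)
    if value < n:
        return None                              # no perfect matching exists
    return {i: j for i, j in edges if flow.get((i, n + j), 0) == 1}

print(perfect_matching(3, [(0, 0), (0, 1), (1, 1), (1, 0), (2, 2)]))
# prints one perfect matching, e.g. {0: 0, 1: 1, 2: 2}
```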
Counting disjoint chains in finite posets
Definition 4.4. A finite poset P := (P, ≤) is a finite set P together with a partial order ≤ on P. We say that P has a 0̂ (respectively a 1̂) if there exists an element x ∈ P such that x ≤ y (respectively x ≥ y) for every y ∈ P. A chain is a subset c := {x_0, . . . , x_n} ⊂ P in which any two elements are comparable. Given a finite poset P, we say that a chain C is maximal if C ∪ {x} is not a chain for any x ∈ P\C. Clearly any maximal chain contains 0̂ and 1̂. We say that y covers x in the poset if x < y and there exists no z such that x < z < y. We say that a set {C_i} of chains is cover-disjoint if, whenever y covers x, the pair {x, y} belongs to at most one chain C_i. We would like to find a subset S of the set of maximal chains such that S is cover-disjoint and |S| is maximum.
One possible way of doing this is to proceed in a greedy fashion, finding one such chain and then repeating the process in the remaining part of the poset. We note that this may not lead to a family of maximum size, as the example in Figure 2 suggests.
Instead, we consider an associated network P_st where s = 0̂, t = 1̂, V = P, and (x, y) ∈ E whenever y covers x. We define a capacity function with value 1 on every edge.
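A sketch of this construction (assuming the poset has 0̂ and 1̂, and reusing the max_flow routine from the matching sketch above; the relabelling of elements as integers is our own):

```python
# Build the network P_st from a finite poset given by its cover relations.
# `covers` is a set of pairs (x, y) meaning "y covers x"; `bottom` and `top`
# play the roles of 0-hat and 1-hat.  Every edge gets capacity 1, so a maximum
# flow counts a largest cover-disjoint family of maximal chains.

def chain_network(elements, covers, bottom, top):
    index = {x: i for i, x in enumerate(elements)}       # relabel as 0..n-1
    cap = {(index[x], index[y]): 1 for (x, y) in covers}
    return len(elements), cap, index[bottom], index[top]

# The "diamond" poset 0 < a, b < 1 has two cover-disjoint maximal chains.
n, cap, s, t = chain_network(['0', 'a', 'b', '1'],
                             {('0', 'a'), ('a', '1'), ('0', 'b'), ('b', '1')},
                             '0', '1')
value, _ = max_flow(n, cap, s, t)     # max_flow as in the earlier sketch
print(value)                          # 2
```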
Image segmentation
The problem of segmenting a given image is that of partitioning its pixels into two sets, the foreground and the background, so that each forms a coherent region. There are several other problems that go under the label of image segmentation. In the following we show how to formulate this problem and how to solve it using flow optimization on a network, following T.M. Murali's lecture notes [6]. Throughout this section we denote a directed edge as (v, w) and an undirected edge as [v, w].
We define a finite undirected graph G = (V, E) where V ⊂ Z+ × Z+ is the set of pixels of an image and the set of edges E joins each pixel (x, y) ∈ V to its neighbors N_(x,y) = {(x ± 1, y), (x, y ± 1)} ∩ V. Each pixel v carries values a_v and b_v, the probabilities of belonging to the foreground and to the background respectively, and each pair of neighbors v, w carries a separation penalty p[v, w]. We look for a partition (A, B) of V into foreground A and background B such that q(A, B) = Σ_{v∈A} a_v + Σ_{v∈B} b_v − Σ p[v, w], where the last sum runs over neighbor pairs with exactly one endpoint in A, is maximized. The idea is that if a_v > b_v it is preferable to place v in the foreground, and if a pixel v has most of its neighbors in the foreground, it is preferable to place v in the foreground as well. Such probabilities are given in the problem; however, different choices of these values may lead to better or worse segmentations. For instance, if one is interested in isolating a small object in a large background, the best choice is to take higher values for the probability function a_v.
In order to construct such sets, one must assign to the foreground (background) those vertices with a higher probability of belonging to the foreground (background), while keeping the total penalty of the boundary between foreground and background small. We want to formulate this problem as a Mincut problem. To do so we have to overcome some difficulties, namely that we are working with an undirected graph rather than a capacitated network, and with a function to be maximized rather than minimized.
Since Σ_{v∈V}(a_v + b_v) is a constant, maximizing q(A, B) is the same as minimizing q′(A, B) = Σ_{v∈A} b_v + Σ_{v∈B} a_v + Σ p[v, w], with the last sum again over neighbor pairs separated by the partition. Now we consider a directed graph obtained by adding a source s and a sink t. We then have a network where the source (sink) is connected to each pixel by an edge with capacity equal to a_v (b_v), and where each undirected edge [v, w] between neighbor pixels is replaced with two antiparallel edges (v, w), (w, v), both with capacity equal to p[v, w]. It then follows immediately that the capacity of a cut (A, B) in this network equals q′(A, B), and by lemma (4.6) we arrive at the next result: a minimum cut in this network yields an optimal segmentation. There is only one difficulty left to overcome: as we must deal only with simple graphs, we must eliminate the antiparallel edges. We do this by adding, for each pair of neighbor vertices v, w, two new vertices c_vw, c_wv and replacing the antiparallel edges with the edges (v, c_vw), (c_vw, w), (w, c_wv), (c_wv, v), all with capacity equal to p[v, w]. It is readily checked that finding a maximum flow on this new graph is equivalent.
Then we solve the problem by finding a maximum flow on such network using either the Ford-Fulkerson or Goldberg-Tarjan algorithm and then recovering a minimum cut using theorem (3.6).
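A sketch of the network construction for a small grid follows (the names, the 4-neighbourhood convention, and the toy values are our own choices):

```python
# Sketch of the segmentation network for a grid of pixels.
# a[v] / b[v] are the foreground / background values and p[v, w] is the
# separation penalty between 4-neighbours v and w.  Antiparallel edges are
# avoided by inserting the auxiliary vertices c_vw described above.

def segmentation_network(width, height, a, b, p):
    cap = {}
    s, t = 'source', 'sink'
    for x in range(width):
        for y in range(height):
            v = (x, y)
            cap[(s, v)] = a[v]                   # source -> pixel, capacity a_v
            cap[(v, t)] = b[v]                   # pixel -> sink, capacity b_v
            for w in ((x + 1, y), (x, y + 1)):   # right and upper neighbours
                if w in a:                        # w lies inside the image
                    pen = p[(v, w)]
                    cvw, cwv = ('c', v, w), ('c', w, v)
                    # replace the undirected edge [v, w] by two directed
                    # length-2 paths, all four arcs with capacity p[v, w]
                    cap[(v, cvw)] = cap[(cvw, w)] = pen
                    cap[(w, cwv)] = cap[(cwv, v)] = pen
    return cap, s, t

# A 2x1 image: the left pixel is likely foreground, the right one background.
a = {(0, 0): 5, (1, 0): 1}
b = {(0, 0): 1, (1, 0): 5}
p = {((0, 0), (1, 0)): 2}
cap, s, t = segmentation_network(2, 1, a, b, p)
print(len(cap))   # 8 arcs; a minimum s-t cut now separates fore- from background
```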
A Generalization of Maxflow
We would like to define a more general optimization problem that reduces to the Maxflow problem on graphs, and then try to generalize the optimization algorithms studied in the previous sections. Given a network (G, c) we can consider the graph G* that results from appending the edge (s, t) with infinite capacity; the problem of finding a maximum flow on the original network is then equivalent to finding a maximum circulation on G*, that is, a positive function on the edges of G* satisfying the capacity constraints and the flow conservation constraint at every vertex. In this case the objective function |f| is the amount flowing through the edge with infinite capacity. Two orderings [v_0, . . . , v_d] and [v_{σ(0)}, . . . , v_{σ(d)}] of the vertices of a d-simplex X are identified if and only if σ is even. This partitions the set of orderings into two equivalence classes that we call orientations of X. To choose an orientation for X is to choose one of these classes, which we call the positive orientation, and we then say that X is oriented. We denote an oriented simplex as X = (v_0, . . . , v_d).
Preliminaries
Notation. For a simplicial complex X we let X^(d) be the set of its d-dimensional simplices. Definition 5.6. The boundary operator ∂_d : C_d → C_{d−1} is the homomorphism defined on the basis by $\partial_d(v_0, \dots, v_d) = \sum_{i=0}^{d} (-1)^i (v_0, \dots, \hat{v}_i, \dots, v_d)$, where $\hat{v}_i$ means deleting that term.
Elements of C d are called d-chains. Elements of the subgroup ker(∂ d ) are called d-cycles.
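A small worked example in Python/NumPy (the vertex labels and the increasing-order orientation convention are our own choices): the boundary matrices of the full simplex on four vertices, together with a check that ∂_d ∂_{d+1} = 0.

```python
import numpy as np
from itertools import combinations

# Boundary matrices of the full simplex on vertices {0, 1, 2, 3}, with every
# simplex oriented by the increasing ordering of its vertices.

def boundary_matrix(d_simplices, dminus1_simplices):
    """Column j holds the boundary of the j-th d-simplex."""
    row = {s: i for i, s in enumerate(dminus1_simplices)}
    D = np.zeros((len(dminus1_simplices), len(d_simplices)), dtype=int)
    for j, s in enumerate(d_simplices):
        for i in range(len(s)):
            face = s[:i] + s[i + 1:]              # delete the i-th vertex
            D[row[face], j] = (-1) ** i           # with sign (-1)^i
    return D

verts = range(4)
edges = list(combinations(verts, 2))
tris  = list(combinations(verts, 3))
tets  = list(combinations(verts, 4))

d1 = boundary_matrix(edges, [(v,) for v in verts])
d2 = boundary_matrix(tris, edges)
d3 = boundary_matrix(tets, tris)

print((d1 @ d2 == 0).all(), (d2 @ d3 == 0).all())   # True True
```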
Higher Maxflow
Definition 5.7. A d-dimensional network is a triple (X, T, c) where
1. X is a simplicial complex of pure dimension d, all of whose facets have chosen orientations;
2. T ∈ X^(d) is a distinguished oriented simplex of dimension d satisfying the source condition: for every oriented d-simplex σ which intersects T in a (d−1)-dimensional simplex τ, the signs of τ in ∂σ and ∂T are opposite;
3. c : X^(d) → R_+ is a capacity function.
Remark 5.8. The source condition is a generalization to the assumption that on a network (G, c) every edge incident with the source is directed out of it and every edge incident with the sink is directed into it. The source simplex T is the generalization of an appended edge directed from sink to source with infinite capacity.
Definition 5.9. A flow on a network (X, T, c) is a function f : X^(d) → R_+ satisfying the following properties: 1. f is a weighted cycle, that is, $\partial_d\big(\sum_{\sigma \in X^{(d)}} f(\sigma)\,\sigma\big) = 0$; 2. f satisfies the capacity constraints, f(σ) ≤ c(σ) for every σ ≠ T. Remark 5.10. The condition that f is a weighted cycle is a generalization of the conservation of flow condition. To see this, for a (d−1)-simplex τ and a d-simplex σ let [τ : σ] ∈ {−1, 0, +1} denote the sign with which τ appears in ∂σ; this is, in fact, a generalization of the incidence function defined in section (2.1). Then the condition that f is a weighted cycle is equivalent to $\sum_{\sigma \in X^{(d)}} [\tau : \sigma]\, f(\sigma) = 0$ for every τ ∈ X^(d−1). This comes from the fact that, after appending the edge (t, s) to a network (G, c) with flow f, one must define f(t, s) = |f| so that conservation of flow holds at all vertices, including s and t.
(HMax-Flow.) The higher max flow problem asks for the maximum possible amount f(T) which can be carried by a flow on a network (X, T, c).
Remark 5.13. A 1-dimensional network is a capacitated graph (with T the edge from t to s), and HMax-Flow reduces to Max-Flow on graphs.
Remark 5.14. If all the capacities are 1, X is a triangulated orientable d-manifold and T is any top-dimensional simplex, then the fundamental top-dimensional cycle is a flow with f(T) = 1; this gives an HMax-flow of value 1. This follows directly from the definition of an oriented manifold.
As an LP problem
Even in higher dimensions, the problem is described by a finite set of linear equalities and inequalities in a finite-dimensional vector space, so it can be stated as a linear program as we did in section (2.2). We continue with the convention that n = |X^(d−1)| is the number of (d−1)-simplices and m = |X^(d)| the number of d-simplices. Writing f ∈ R^m for the vector of values of a flow, the problem reads: maximize f(T) subject to ∂f = 0, f ≥ 0 and f(σ) ≤ c(σ) for σ ≠ T, where c ∈ R^m is the vector c_i = c(e_i).
Computing the dual, we find that it can be stated as a minimization problem in variables v ∈ R^n, one for each (d−1)-simplex, and η ∈ R^m, one for each facet.
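As a concrete illustration, the primal LP can be handed to an off-the-shelf solver. The sketch below uses scipy.optimize.linprog; the capacities, the choice of T, and the coherent (outward) orientation of the tetrahedron boundary are our own choices, and the capacity bound is dropped on T, in line with Remark 5.8.

```python
import numpy as np
from scipy.optimize import linprog

# Facets of the boundary of the tetrahedron on {0,1,2,3}, first oriented by
# the increasing vertex order and then reoriented coherently by `signs`.
tris  = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
signs = np.array([-1, 1, -1, 1])             # coherent (outward) orientations

D2 = np.zeros((len(edges), len(tris)), dtype=int)
row = {e: i for i, e in enumerate(edges)}
for j, s in enumerate(tris):
    for i in range(3):
        face = s[:i] + s[i + 1:]
        D2[row[face], j] = (-1) ** i
D2 = D2 * signs                               # boundary matrix in the oriented basis

cap = np.array([3.0, 2.0, 5.0, 4.0])          # illustrative capacities c(sigma)
T = 0                                         # the first facet plays the role of T

c_obj = np.zeros(4); c_obj[T] = -1.0          # linprog minimizes, so maximize f(T)
bounds = [(0, None) if j == T else (0, cap[j]) for j in range(4)]
res = linprog(c_obj, A_eq=D2, b_eq=np.zeros(len(edges)), bounds=bounds)
print(round(-res.fun, 6))                     # 2.0: the minimum capacity among the other facets
```

On this complex the 2-cycles are multiples of the fundamental cycle, so the optimum is simply the smallest capacity among the facets other than T, which the solver confirms.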
Further examples and conjectures
Example. Towards a good definition of a cut on a generalized d-dimensional network, we look for feasible solutions of the dual of this problem; we are mostly interested in integer solutions. In this example we find both ∂ and the full constraint matrix of the primal to be totally unimodular, so the existence of integer optimal solutions is assured. We suggest the following definition.
for any σ ≠ T. So, given the cut and the values assigned to the (d−1)-face variables, we can define the dual variables η_σ corresponding to the facets accordingly; in this way it is readily checked that the solution is dual feasible. We define the capacity of a cut (S, S′) as the value of the dual objective function at this solution, which is a weighted sum of the capacities of the facets σ ≠ T with weights η_σ. In this way the capacity of a cut is always an upper bound on the value of a maximum flow, by weak duality. It is natural to ask whether the minimum of the capacities over all cuts equals the value of a maximum flow. We attempt to show this using a probability argument analogous to the one in section (2). As in section (3), we can extend the capacity function defined on X^(d) to the set of all orientations of elements of X^(d).
Definition 5.16. The residual complex X_f is the (multi)simplicial complex whose facets are those x ∈ X^(d) ∪ −X^(d) such that the residual capacity of x, c_f(x) := c(x) − f(x), is strictly positive. By definition, if the capacity function is not identically zero, the residual complex is a pure multisimplicial complex of dimension d.
Recall from lemma (3.6) that a flow is not maximal if there exists a simple (loop-free) path from s to t in the residual graph. After appending the edge (t, s) this becomes: a flow is not maximal if there is a simple cycle containing (t, s) in the residual graph. This motivates the following definition. Definition 5.17. Given a d-dimensional network (X, T, c) and a flow f on X, an augmenting cycle is a d-cycle A = Σ_i c_i X_i with c_i ∈ Z_+, X_i ∈ X_f, and X_i = T for some i. Proof.
The fact that f + f′ is positive is clear. Proof. Let A = Σ_i c_i X_i be an augmenting cycle with X_i ∈ X_f and c_i positive, and let m = min_i c_f(X_i)/c_i. The capacity constraints hold by the definition of m, and since X_i = T for some i, the value of the flow is strictly increased.
We would like to prove the converse of this lemma in order to devise a first algorithm for Higher Maxflow optimization. Example. Consider two tetrahedra oriented by the outward normal, with a common facet T, over the vertex set {1, 2, 3, 4, 5}. Computing the incidence matrix, we find that it too is totally unimodular, hence the problem has integer optima. In the cases we have examined, this optimum equals the sum of the minimum capacities over the two tetrahedra, and it is achieved after finding two augmenting cycles, one for each tetrahedron. When the flow is maximum there is no augmenting cycle in the residual complex.
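Total unimodularity of such small matrices can be checked by brute force; the utility below (our own, exponential in the matrix size and meant only for toy examples) tests every square submatrix:

```python
import numpy as np
from itertools import combinations

def is_totally_unimodular(M):
    """Brute force: every square submatrix has determinant -1, 0 or 1."""
    M = np.asarray(M, dtype=float)
    rows, cols = M.shape
    for k in range(1, min(rows, cols) + 1):
        for r in combinations(range(rows), k):
            for c in combinations(range(cols), k):
                d = round(np.linalg.det(M[np.ix_(r, c)]))
                if d not in (-1, 0, 1):
                    return False
    return True

# The incidence (boundary) matrix of the directed path s -> a -> t is TU ...
path_incidence = [[-1, 0], [1, -1], [0, 1]]
# ... while this classical matrix has a 3x3 submatrix of determinant 2.
not_tu = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]
print(is_totally_unimodular(path_incidence), is_totally_unimodular(not_tu))  # True False
```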
Question 5.21. When is the incidence matrix of a d dimensional network totally unimodular?
We will cite some important theorems that give insight into the status of question (5.21) and ultimately show that the answer is negative in general; we also find a certain family of networks where it does hold. For a subcomplex X_0 ⊂ X we define the group of relative chains of X modulo X_0 as the quotient C_d(X)/C_d(X_0) := C_d(X, X_0). The map ∂_d induces a map $\partial^{(X,X_0)}_d : C_d(X, X_0) \to C_{d-1}(X, X_0)$ which also satisfies $\partial^{(X,X_0)}_d \partial^{(X,X_0)}_{d+1} = 0$, so we define the relative homology groups as $H_d(X, X_0) = \ker(\partial^{(X,X_0)}_d)/\mathrm{Im}(\partial^{(X,X_0)}_{d+1}) := Z_d(X, X_0)/B_d(X, X_0)$.
The following result by Dey–Hirani–Krishnamoorthy gives a partial answer to our question. Theorem 5.22. [5] For a finite simplicial complex triangulating a (d + 1)-dimensional compact orientable manifold, [∂_{d+1}] is totally unimodular irrespective of the orientations of the simplices.
The answer to question (5.21) is "not always". Consider the following counterexample. Counterexample: [5] For a certain simplicial complex triangulating the projective plane, the matrix [∂_2] is not totally unimodular. This is not yet a counterexample for our setting, as we also need a facet that can serve as a source. However, if we take the two-dimensional sphere, positively oriented and triangulated with a consistent choice of orientations, then we may declare any facet of that complex to be the source; joining this complex to the triangulation of the projective plane at a vertex, we obtain a network whose matrix [∂_2] contains a submatrix that is not totally unimodular.
The following theorem, also due to Dey–Hirani–Krishnamoorthy, characterizes the totally unimodular matrices arising from a boundary operator. Theorem 5.23. [5] For a finite simplicial complex X, the matrix [∂_{d+1}] is totally unimodular if and only if H_d(L, L_0) is torsion free for all pure subcomplexes L_0 ⊂ L of X of dimensions d and d + 1, respectively. The next theorem yields another family of simplicial complexes where total unimodularity holds, namely the family of d-dimensional complexes embeddable in R^d.
Theorem 5.24. [5] Let K be a finite simplicial complex embedded in R d+1 , then H d (L, L 0 ) is torsion free for all pure subcomplexes L 0 and L of dimensions d and d + 1 respectively.
It would be helpful to find more general families of simplicial complexes where unimodularity holds. Using theorem (5.23) we may give an alternative proof of the fact that for graphs, the incidence matrix is totally unimodular.
Theorem 5.25. For a directed graph G, the incidence matrix [∂ 1 ] is totally unimodular.
Proof. Suppose G is connected. Let L_0 ⊂ L be two subcomplexes of G of dimensions 0 and 1, respectively. We want to show that H_0(L, L_0) is torsion free. Now (L, L_0) is a good pair, which implies that H_0(L, L_0) ≅ H̃_0(L/L_0), a free abelian group determined by the number of connected components of L/L_0, and in particular torsion free. The result follows by theorem (5.23).
Definition 5.26. Let X be a simplicial complex. A pair of facets (F, F′) of X is a leaf if for every facet H ≠ F of X we have F ∩ H ⊂ F′. A simplicial tree is a connected simplicial complex X such that every subset of facets of X contains at least one leaf.
Lemma 5.27. Let X be a pure simplicial complex of dimension 2, and suppose that H_1(L) = 0 for every pure subcomplex L ⊂ X of dimension 2. Then [∂_2] is totally unimodular.
Proof. Let L_0 ⊂ L be pure subcomplexes of dimensions 1 and 2, respectively. We have the long exact sequence in homology of the pair (L, L_0). Since H_1(L) = 0 by hypothesis, exactness gives that the map H_1(L, L_0) → H_0(L_0) is injective. As H_0(L_0) is torsion free, so is H_1(L, L_0). The result follows by theorem (5.23).
Lemma 5.28. Let X be a pure simplicial complex of dimension 2 such that there exist disjoint facets {T 1 , . . . , T m } such that X\{T 1 , . . . , T m } is a simplicial tree. Then for any pure subcomplex L ⊂ X of dimension 2, H 1 (L) = 0.
Proof. Let L be a pure simplicial complex of dimension 2 with facets {F_1, . . . , F_l}. Define V as the pure simplicial complex with facets {F_1, . . . , F_l}\{T_1, . . . , T_m}. By assumption, V is a two-dimensional subcomplex of a two-dimensional simplicial tree, so it is also a simplicial tree. Any simplicial tree is contractible, and homology groups are homotopy invariant. Now for any facet F of L that is not in V there are two cases: either every one-dimensional face of F is in V, or not. In the first case, after contracting V, F forms a sphere. In the second case F has at least one one-dimensional face not in V, and F can be contracted to F ∩ V. Hence after contracting V we see that L is homotopy equivalent to a wedge of spheres, so that H_1(L) = 0.
Conjecture 5.20 remains without answer. | 2012-12-05T11:16:54.000Z | 2012-12-05T00:00:00.000 | {
"year": 2012,
"sha1": "ac8b2c730e12c0422abf2473f4762dcafea68d50",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "ac8b2c730e12c0422abf2473f4762dcafea68d50",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
108290128 | pes2o/s2orc | v3-fos-license | Feeling safe or unsafe in psychiatric inpatient care, a hospital-based qualitative interview study with inpatients in Sweden
Background A major challenge in psychiatric inpatient care is to create an environment that promotes patient recovery, patient safety and good working environment for staff. Since guidelines and programs addressing this issue stress the importance of primary prevention in creating safe environments, more insight is needed regarding patient perceptions of feeling safe. The aim of this study is to enhance our understanding of feelings of being safe or unsafe in psychiatric inpatient care. Methods In this qualitative study, interviews with open-ended questions were conducted with 17 adult patients, five women and 12 men, from four settings: one general psychiatric, one psychiatric addiction and two forensic psychiatric clinics. The main question in the interview guide concerned patients’ feelings of being safe or unsafe. Thematic content analysis with an inductive approach was used to generate codes and, thereafter, themes and subthemes. Results The main results can be summarized in three themes: (1) Predictable and supportive services are necessary for feeling safe. This concerns the ability of psychiatric and social services to meet the needs of patients. Descriptions of delayed care and unpredictable processes were common. The structured environment was mostly perceived as positive. (2) Communication and taking responsibility enhance safety. This is about daily life in the ward, which was often perceived as being socially poor and boring with non-communicative staff. Participants emphasized that patients have to take responsibility for their actions and for co-patients. (3) Powerlessness and unpleasant encounters undermine safety. This addresses the participants’ way of doing risk analyses and handling unpleasant or aggressive patients or staff members. The usual way to act in risk situations was to keep away. Conclusions Our results indicate that creating reliable treatment and care processes, a stimulating social climate in wards, and better staff-patient communication could enhance patient perceptions of feeling safe. It seems to be important that staff provide patients with general information about the safety situation at the ward, without violating individual patients right to confidentiality, and to have an ongoing process that aims to create organizational values promoting safe environments for patients and staff. Electronic supplementary material The online version of this article (10.1186/s13033-019-0282-y) contains supplementary material, which is available to authorized users.
Background
A major challenge in psychiatric inpatient care is to create an open and rehabilitative environment that promotes patient recovery, patient safety and a good working environment for staff. Staff members need to take positive risks in their work with patients by gradually bringing back responsibility and initiative to the patient [1,2]. At the same time, violence in the ward may negatively affect patient recovery [3,4], staff health [5,6], and the organization [7]. Therefore, it is important to create a safe environment through primary preventive interventions so that both staff and patients can feel safe.
Most health care in Sweden is publicly funded and run by regional councils. Over the past few decades, Sweden, like many other Western societies, has the latest decades invested in outpatient care and made a radical reduction in the number of beds in psychiatric inpatient care. As a result, the proportion of inpatients receiving coercive care due to serious psychiatric conditions has increased [8,9]. In psychiatric inpatient care in Sweden, 83% of nursing staff have reported experiences of violence; 47% in the previous 6 months [10]. It is, therefore, necessary to minimize violence, including self-harm and coercive measures, in order to create a safe environment. Research on the prevention of violence implies that management and staff ability to create a good ward environment has a crucial influence on the risk of violence [11]. The wide variation in the use of coercive measures can not only be explained by patient diagnoses or other patient variables [12]. Instead, it appears that some institutions are more successful than others in creating a safe environment that helps to minimise the frequency of coercive measures [13,14]. Situations where staff members need to restrict patient freedom or deny patients their wishes have been found to explain 39% of violence from patients in psychiatric inpatient care in Europe [15]. The Safewards Model includes six domains that can be used by management and staff as a basis for modifying risk factors: the physical environment, the staff team, the patient community, patient characteristics, outside hospital and the regulatory framework, [11,13]. The program is widely used and available in seven languages [16]. On an organizational level, the management can give prerequisites for the prevention of violence through good managerial policies, organizational values and an efficient organization with a clear purpose of care [13,14]. These organizational factors affect the ward structure and make it easier for ward managers and staff to create consistent and reasonable ward rules. It is essential that staff learn to communicate with patients effectively and in a caring manner as well as endeavouring to understand the reason or trigger for a patient's aggression. Other preventive measures are to take care of agitated patients at an early stage and to use de-escalation methods when appropriate [13,17,18].
In this study, the focus is on the patients and the factors that make them feel safe or unsafe in psychiatric inpatient care. Being safe as a patient is not only about physical safety but also about the broader context of the general atmosphere of feeling safe in the ward. Many features in guidelines and programs to manage violence are in line with the recovery approach such as involving patients in all decisions about their care and treatment and improving their experience of staying in the ward [19] forming supportive relationships, giving hope, meaningful activities and developing coping skills [13,20]. This implies that work with safety and quality of care needs to be integrated in order to be beneficial to patients in mental health care [21]. In studies of patient views on psychiatric inpatient wards, patients have reported that they appreciate staff who communicate and create a therapeutic relationship with them. Such staff can promote a sense of trust and safety [22][23][24][25][26][27], help to reduce patient anxiety [26] and resolve conflicts [27,28]. Good communication with patients can result in patients feeling valued and more human [26,29,30]. On the other hand, factors such as staff not being seen enough by the patients, lack of communication and staff not showing understanding of the patient's illness, as well as stigmatizing remarks have resulted in patients feeling that they are not being respected and are less valuable than other humans [22,31,32], which can lead to violent behaviour [27]. Rules of the ward can create conflicts if they are difficult to understand, are rigid and their application is perceived as arbitrary [26,32,33]. Patients may perceive that they have to adapt to the ward environment, have days without meaningful activities and accept changes in medication without being consulted [22,27,32,[34][35][36]. They may perceive that if they do not adapt, or show negative emotions, they might be subjected to coercive measures or be frightened by staff into adapting [22,[31][32][33]. Patients can notice that staff are not always sensitive in detecting patients whom they consider being a safety risk [37]. Sometimes they feel they are being stalked by another patient [38,39], have problems with co-patients using alcohol or drugs or are victims of theft of personal possessions at the ward [38]. In these situations, it is important to have an own room to go to, as a lack of personal space has been described as problematic [27,32,38,39]. Patients reported feeling safe from others outside the clinic and at lower risk for self-harm, in addition, male staff gave a higher sense of physical protection than female staff [39].
Guidelines and programs stress the importance of primary prevention in creating safe environments. Most of the studies referred to above aimed to describe how patients perceive psychiatric inpatient care in general.
Since only a few of the studies have focused on patients' own perceptions of feeling safe, there is a need for further research in order to understand what we should improve to achieve effective primary prevention. When we interviewed staff, they emphasized that creating a relationship and good communication were prerequisites both for good care and for primary violence prevention [40]. In the present study we wanted to interview patients in these wards and ask them how they perceive safety issues. The aim of this study is to enhance our understanding of feelings of being safe or unsafe in psychiatric inpatient care from a patient perspective in the ward environment, by letting patients freely express their views on safety issues.
Participants and settings
We interviewed 17 patients treated in four inpatient settings for adults in three Swedish regions; one general psychiatric, one psychiatric addiction and two forensic psychiatric clinics; one with low, and the other with medium, security class. The first three clinics mentioned had patients in need of psychiatric in-patient care from the respective surrounding catchment areas. Patients in all of these clinics were cared for because they had a psychiatric diagnosis. Substance users were found in all clinics, but the psychiatric addiction clinic also had special competence to care for patients with co-occurring addiction and psychiatric problems. The medium security clinic had a high perimeter security in comparison to the low security clinic. None of them had staff with police authority. The medium security clinic had patients from surrounding catchment areas and also from other areas of Sweden. During the time of the study, all clinics intended to provide single rooms for all patients. We used purposive sampling and chose these settings to get a wide range of clinics. For this study, patients were recruited by the ward managers in the psychiatric clinics who gave them verbal and written information. If the patient agreed to participate, we were informed. We wanted to get as wide a range as possible regarding age and gender. Managers asked patients whom they deemed able to participate, who had begun to recover and had been patients for so long that they had experiences to share. Patients also received written information about the study. Five of the participants were women and 12 were men; their ages ranged from their twenties up to 67 years.
Design and procedure
This is an interview study focusing on the question of feeling safe or unsafe, with some optional open-ended questions. Thematic content analysis with an inductive approach was used to generate codes and thereafter themes and subthemes.
An interview guide was created after a review of literature and conversations about the subject with some members of one of the Fountain Houses in Sweden who had experience of psychiatric inpatient care. Our approach in the study was not to define what a safe or unsafe ward environment could be, nor what situations patients might perceive as violent or threatening, but rather to let the patients define the concepts of safe/unsafe environments and identify situations where these could apply. The main question in the interview guide were about patients' perceptions of feeling safe or unsafe in the ward (see Additional file 1). The instruction was that the interviewer would let the interview revolve around this issue, but there were some optional open-ended questions and four areas that we could ask about to keep the conversation going. These were on how the patient perceived the importance of (1) the ward's physical design, (2) the ward's routines and rules, (3) the staff 's approach to patients and (4) the presence of other patients in relation to feeling safe or unsafe. We also asked if they had encountered situations that they perceived as threatening and/or violent. A normal length of an interview was around 50 min (from 30 min to 1¼ h); 16 interviews were recorded and transcribed verbatim. One patient did not permit recording; notes were taken during this interview.
Analysis and interpretation
Thematic content analysis with an inductive approach [41,42] was applied by listening to, and reading, the interviews several times in order to get an overall picture of the material. In the continued reading, we searched places in the text that somehow addressed our research question. Any such place in the text was marked as a meaning unit and moved to a coding sheet. A summary was then made of the meaning unit by giving each unit a code and/or a brief description that was as close to the content as possible. These open codes were then organised under higher order headings. Once this was done, the interpretation work began by creating subthemes/ themes with the higher order headings and codes as a basis. The number of preliminary subthemes/themes was gradually reduced. A tentative longer result section, with many quotes, was written in Swedish with relatively many themes and sub-themes. This was used for a discussion with the authors as to which quotes and theme names were adequate and how these would be translated into English. When the final interpretation of the material was done, themes and subthemes were chosen which could give a thick description and best describe the relevant material. The original text was available throughout the analysis and we could go between the whole, and parts of the text. All stages of the process were made by at least two persons; first independently and then discussing and reaching consensus. Important decisions were taken jointly by all five authors. VP is a social worker and Ph.D., TW is a psychiatrist and Ph.D., UH is a registered nurse and Ph.D. student, IN is a psychologist and LK is a social scientist and Ph.D. Our pre-understanding of the subject comes from our professional education, personal experiences of psychiatry and our review of the literature.
Results
The main results of the analysis with focus on patients' feelings of safety and unsafety in the ward can be summarized in three themes and nine subthemes ( Table 1). The first theme, predictable and supportive services are necessary for feeling safe, concerns patients' experiences with psychiatric services and their perceptions of its ability as a healthcare provider to meet the needs of the patients for treatment and care. The second theme communication and taking responsibility enhance safety is about how the participants perceived the daily life at the ward, which was often perceived as being socially poor and boring with non-communicative staff. The third theme, powerlessness and unpleasant encounters undermine safety is about how participants addressed problems with encounters that are hard to handle and how they perceived that staff members and co-patients acted in these situations.
Predictable and supportive services are necessary for feeling safe
Participants described an unpredictable treatment process of not knowing whether they would receive adequate medication, other treatment, or care and support. There were descriptions of treatment and care processes that the participants perceived as safe and predictable, but more often the processes were characterized by uncertainty and concern about what would happen after discharge from the ward without appropriate support from psychiatric or social services. Substance users in psychiatric addiction care and other clinics stressed these problems more often than other patients. I can't do my social planning in a public toilet. It's a bit hard to do planning there. My plan is just to get hold of something that will drown my thoughts.
The care described related almost exclusively to medical treatment and waiting for the effect of medication. Several participants described an organization which did not have the capacity to handle the whole patient populations' needs. They reported of occasions when they themselves or others did not have access to psychiatric care despite experiencing an acute need for care. It was difficult to gain access to treatment, resulting in a delayed treatment, which at times could lead to intoxication by the patient. Once they were admitted to the ward, it was often hard to get an opportunity to meet a doctor or a social worker and it was even harder to get a long-term treatment plan.
The participants expressed a need for structure and routines in the wards and often experienced the daily routines, such as fixed time meals and medication, as something positive. The constant presence of staff contributed to safety.
There are always people around you, you don't have to be alone, alone with your thoughts, and there is nothing to use if you want to hurt yourself either. It feels like there is always someone to talk to. There is a lot of staff in this ward so there is always someone.
Some participants perceived the risk of being exposed to violent behaviour and self-harm as low in the ward in comparison to life on the outside. It felt safe that the door was locked and they had their own contact person and strict routines helping the participants to structure their lives.
Yes, it's when the doors close and you're left there standing on your own. It's really easy to say you're feeling fine when you're in here with people around you who're engaged in your wellbeing.
Closed doors and ward rules could also be perceived as negative if staff members were too rigid or just enforced them to demonstrate their power. One problem, which mainly affected patients in addiction care, was that the mixture of patient groups caused irritation. A mixture of substance users undergoing detoxification and severely mentally ill patients who were disruptive could cause irritation among substance users. This could lead to the patient leaving the ward even though the person felt that they were not ready to leave. Participants had a desire for a friendly ward climate where staff and patients can have a normal social life with a home-like environment, a climate where it was possible to talk and joke with each other and with staff members, but most often there were no social activities around which patients and staff could meet. When activities were promised, there was a relatively high risk that they would be cancelled. In particular, in forensic psychiatry, participants reported that staff did not take into account that the institution was their home. The physical ward environment was also often seen as being problematic since it, together with staff behaviour, constantly reminded patients of being in a closed institution. There were participants with a long experience of psychiatric care who had confidence in the development of psychiatry. They said the staff members are friendlier today than in the 1990s, with fewer abuses and less use of belts, i.e. use of physical restraints.
Because when I came here the ward was more open and brighter with friendly and engaged staff, who were out among us and talking. I can't really say "us", but me anyway…
Communication and taking responsibility enhance safety
Participants had a desire for communicative staff members who were available at the ward. Some reported about staff members who did not communicate at all and some could only chat about the weather. Many participants felt that there were only one or two staff members that they had confidence in and who were communicative. A good conversation could mean a lot to the participant, but often they only got very little time to talk undisturbed with their favourite staff member.
Sit down with me so I feel you're here for me. Don't go when someone is distressed. It shouldn't be that when a staff member is sitting and talking to me, someone else comes and disturbs us-some other member of staff just asking a simple question like: "Where's Anna?", for example. She shouldn't disturb
our conversation because then I'll be offended. If I'm in a conversation, a good conversation, it means a lot to me. I can feel a bit better just for the time being.
In particular, night staff were criticized for not communicating with patients. Participants also noticed that communication between professional groups, for example between nurses and doctors, sometimes failed. Staff were also often more interested in their own mobiles and computers than communicating with patients. Asking and waiting was a big part of everyday life at the ward. Participants pointed out that the staff did not like to be disturbed by patients asking questions.
When the milk is finished, I shouldn't have to go into them when they're sitting and eating at the same time as us and stand in the doorway feeling humiliated and thinking that I'm disturbing them while they're eating in there. I feel like they're sitting there, I shouldn't disturb their lunch… It's hard to go into them. Disturbing them: "Now the milk is finished. " A potential conflict situation. It just takes a thing like that to trigger a threatening situation. For the feeling I get is: "We don't have time for you just now when we're in here. Don't bother us while we're eating. " And that sets it off. I have discovered that a lot of conflict arises from this kind of situation.
When patients asked the staff about something they often got the reply: "I'll get back to you" or they were referred to another staff member. After the patient got the opportunity to ask the question, they often had to wait quite a long time before it was dealt with by staff. Participants noted that many patients were irritated by this slow process and that it could lead to conflicts between staff and patients, upcoming conflicts that staff seldom noticed.
Participants stressed the importance of taking responsibility, as a patient, for your own actions. Silent patients risked isolation in the ward since staff seldom talked to them. Sometimes participants took responsibility for talking with these patients. They also commented that patients were responsible for not acting out too badly against the staff.
Powerlessness and unpleasant encounters undermine safety
So I have no conflicts like that, but there is a lady here who stalks me and gives me drawings all the time. It's tough but there is no conflict so I choose to keep away.
Keeping away was the main strategy that many patients used in order to deal with different problems in the ward. Staff sometimes told them to go to their rooms if there was a risk of violence. Participants described how they did risk analyses of co-patients and staff members. Having your own room was described as being valuable because you could go there when you perceived a copatient or staff member as unpleasant or aggressive.
I haven't felt personally that I've experienced fear. At the same time, I have wondered: What's happened now? And how far is he prepared to go? How angry does he get? Is he so angry that he can hit someone? Because it was nearly… yes, you become hesitant.
Participants were aware of the fact that there was a care hierarchy in which the patient was at the bottom. They described powerlessness in relation to staff and there were some descriptions of oppressive behaviour from the staff such as the subtle use of power, violations or threats.
"No, we said 9 o'clock. " Yes, but, my partner's already here and it's five to nine. Can I not just go out now? "No, at 9 o'clock. " And then they hinder you. I've been subjected to that before. Then they delay you, so it ends with the clock being five pasts nine because the nurse doesn't want to be here when she knew I'd to be out at 9 o'clock.
Participants reported that it was difficult to not to react negatively to staff members with a negative approach. Patients had learnt, particularly in forensic psychiatry, that they have a lot to lose if they showed negative emotions. There were some descriptions of how certain staff refused to "see" patients and also descriptions of stigmatizing behaviour; staff members, for example, using hand sanitizer directly after touching the patient or the patient's belongings. Participants felt an expectation from the staff that they should not be disturbed too much by patients with questions or worries. They thought that if they did so, there could be negative consequences and, as a result, participants had self-restrictions about talking with the staff. Some female participants expressed that they had been afraid of specific male staff members.
Participants described experiences of unpleasant or violent co-patients. Violence from co-patients was rare at the ward, but some had witnessed violence between co-patients or between staff and a co-patient. The main problem was that some co-patients were perceived as unpleasant or scary; especially female patients who described scary experiences.
However, there's a guy who has come in now that goes about like this. He checks that the coast is clear and then talks about horrendous assaults on women all the time.
A female participant from a forensic psychiatric unit was more worried about her male friends than herself, since violence with severe consequences most often occurred between males. Participants from forensic units described more stable wards but, at the same time, they seemed to be more aware than other participants that a serious conflict may occur. A participant had been subjected to a murder attempt in his room by a co-patient after inviting this patient to his room. Staff knew that this patient and another patient were seen as a threat towards him but did not inform the patient about the risk before the incident. Many participants reported, however, that the staff often handled frightened and aggressive patients in a good way. They described how staff often successfully met these patients by acting and talking in a way that made the patient calm. After incidents, staff could talk to patients in order to calm down the situation, but the participants often lacked information about what had happened and whether there was a current or future risk. This lack of information made it difficult for them to assess future risks at the ward; which some of the patients actively did. According to participants, the staff referred to secrecy as an explanation for not giving information to patients.
Discussion
Participants expressed that they would feel safe if they had a predictable treatment and care process and the ward had a friendly ward climate with supportive routines. They wished they could have communicative staff, access to information they needed and be trusted to take responsibility. In critical situations, they took responsibility off themselves by keeping away from danger when necessary. The importance of the relationship between individual staff members and patients is often emphasized in nursing research. According to participants in this study too, these relations are important, but it would be a mistake to focus solely on the relationship between staff members and patients. Participants emphasised the capacity of the psychiatric organization for giving predictable treatment and care, a good psychical environment and social climate as key factors for feeling safe in the ward. We did not ask specific questions about the treatment and care process in the interviews but, despite that, the participants talked a lot about these issues. These findings gave the basis for the theme predictable and supportive services are necessary for feeling safe. Participants were well aware of the lack of beds in inpatient care and some claimed that this was a reason why they sometimes received delayed care. The lack of access to a safe and effective psychiatric service is a global problem in terms of patient safety [43,44]. Swedish health care has problems with delayed care, reasons for this may be that different care organisations fail to co-operate around patients' needs [45] and that Sweden has few beds in comparison to many other OECD countries [46,47]. In order to create sustainable care processes, patients need to be invited to care planning and given relevant information, thereby enhancing the patients' feelings of safety. In this study and other studies, patients have reported a lack of information about their treatment, their rights, or the reason why they are in coercive care [26,28]. They have also reported that they are not often invited to take part in their own treatment and care [22,25,26,30], despite the fact that they would like to be involved and get feedback on the process of recovery [48,49]. Participants were aware of organizational problems in psychiatric outpatient care and social services and as a result of these problems, they suspected that after discharge, they would not get the treatment, care and support necessary in order to recover. Another problem described was that the participant or other patients left the ward prematurely if they did not feel capable of handling the situation regarding problematic co-patients. This is consistent with previous research; the mixture of patients at a ward can be a source of triggers for violence and absconding [11,24].
This study confirms earlier studies that socially poor and boring wards with non-communicative staff create distress [11,50] and are, according to participants, something that could trigger aggression from patients, while communication and taking responsibility enhance safety. Access to communicating staff members with whom patients can talk about their experiences can be a way of making the patients situation understandable and promote the feeling of safety [49]. In another study with staff members in three of the participating clinics in this study, staff emphasized the importance of establishing a relationship with patients, talking to them and indicated that knowing each other had a preventing effect on violence [40]. Despite this awareness of the importance of communication, the participants in this study did not perceive the staff as communicative in general, nor providing them with information. Participants were assessing risks at the ward regarding co-patients and staff members. Just like another study [39], they considered this to be difficult since they did not receive information about risks from the staff.
Participants described that powerlessness and unpleasant encounters undermine safety. Some participants with a long experience of psychiatric care were positive about the development of psychiatry. They perceived less abuse of patients and felt more respected by staff nowadays than a few decades ago. Participants were impressed by the ability of some staff members to interact with aggressive or frightened patients in a calm manner and using de-escalation methods. Some positive results in this study, in contrast to some other studies, was that no one reported problems with theft or problems with copatients using drugs or alcohol [38], nor did they feel that patient safety was dependent on the presence of male staff [39]. Research confirms what a female participant said in the result section, namely, that men are more likely to be involved in violent incidents with more severe consequences than women, although a female-only ward may have at least as many incidents of violence and selfharm as a mixed ward [51]. At the same time, several participants reported inappropriate behaviour by other staff members, such as aggressive, stigmatizing or oppressive behaviour. This is in contrast to Stenhouse's study [39], where patients only reported problems with staff who did not engage in patient work, but not about inappropriate behaviour. Some stories in our study give the impression that there are staff members who seem to try to provoke violent behaviour from patients. In most of these cases, the patients understood that it was no idea to protest since it could end with coercive measures, which were also reported as occurring. Some participants were exposed to stigmatizing behaviour by staff members, which is serious since feelings of being stigmatized might delay recovery and is also correlated with suicide [52]. If staff members in our study did communicate risks with patients concerning violence, it was only after violent incidents, and then the purpose was to calm down the patient emotionally rather than giving facts about the situation or security measures. Staff behaviour can trigger or de-escalate violence in the ward, so it is important to create organizational values which can strongly counteract provocative, oppressive and stigmatizing behaviour toward patients and start to combat stigma [25]. The management can give support to the staff team by providing the conditions for a good functioning ward with a clear purpose [11,53]. This should include staffing levels that are sufficient for safety, improving staff responses to violence and giving them opportunities to spend time with patients [53]. With organizational support and a supportive ward manager, staff teams would find it easier to modify their anxiety and frustration which would promote staff-patient interaction and safety [11,53]. These kind of interventions could give the fundament for cohesive staff teams that are more content with their psychosocial climate at work, are morally committed, and show more understanding and positive appreciation towards patients, thereby also creating a ward climate that can have a positive primary prevention effect on violence [10,53].
This study had a variety of different kind of wards with different specialities, different treatment, different patient groups, different staff groups and differences in management styles and organizational values. Our aim was to describe feelings of being safe in psychiatric inpatient care in general. Since we only had one clinic with few patients from each specialty, we cannot draw conclusions about differences between specialties. A weakness of the study was that we had no control over the inclusion process of participants or the exact number of patients that refused to participate. It was the managers who decided when a patient was sufficiently healthy to be interviewed. We did not collect extensive information about the patients because the act of just writing their name on the consent form felt difficult for some patients. We managed to get a spread in age, but not an even gender distribution. Instead, the study contributes to our insight into how patients perceive feelings of being safe or unsafe during their stay in a psychiatric ward and can give some ideas about what we should improve to achieve effective primary prevention.
To sum up, patients in this study were worried about the unpredictable treatment and care process, a boring social climate in wards, and a lack of communication with staff. All these factors are important from the primary violence prevention and recovery perspective. There are programs such as Safewards [11] and Star Wards [20] that have several elements that participants in this and other studies have requested. These elements include creating a socially friendly climate, starting treatment in form of talking therapies and self-care programs immediately, involving patients in their care planning and giving them the opportunity to talk with a staff member every day. A development in line with this could make patients feel more safe and secure since many wards implementing these programs have experienced changes, with less patient aggression than earlier [20,54].
Many patients in this study were afraid of discharge and wondered whether they would receive adequate treatment and care. For substance users, in particular, but probably also for other groups, it may be necessary to have services that are more assertive, with improved coordination of health and social services in order to have the capacity to secure treatment and care processes [55,56].
Participants in this study tried to assess the future risk of violent incidents. It is, therefore, important that staff provide patients with general information about the safety situation and what measures the staff team can take. This kind of safety issue can develop into an ethical dilemma between respecting an individual patient's right to confidentiality and other patients' need of information about safety. Despite some very positive results with a number of patients describing communicative staff members and the use of de-escalation methods, there were also far too many descriptions of staff with an aggressive approach, indicating that some units have organizational values that might inadvertently lead to some staff members behaving aggressively or stigmatizing patients. It seems there is a need for ongoing work, not only with implementation of de-escalation techniques but also with organizational values [25,53,57].
Conclusions
Our results indicate that creating reliable treatment and care processes, a stimulating social climate in wards, and better staff-patient communication could enhance patient perceptions of feeling safe. It seems important that staff provide patients with general information about the safety situation at the ward, without violating individual patients' right to confidentiality, and that there be an ongoing process that aims to create organizational values promoting safe environments for patients and staff. | 2019-04-08T18:24:23.467Z | 2019-04-08T00:00:00.000 | {
"year": 2019,
"sha1": "805ac03dbe3693250b851746e93095d61ff9c096",
"oa_license": "CCBY",
"oa_url": "https://ijmhs.biomedcentral.com/track/pdf/10.1186/s13033-019-0282-y",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "805ac03dbe3693250b851746e93095d61ff9c096",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233274248 | pes2o/s2orc | v3-fos-license | Lattice Doping of Lanthanide Ions in Cs2AgInCl6 Nanocrystals Enabling Tunable Photoluminescence
The Beijing Municipal Key Laboratory of New Energy Materials and Technologies, School of Materials Science and Engineering, University of Science and Technology Beijing, China; Laboratory of Crystal Physics, Kirensky Institute of Physics, Federal Research Center KSC SB RAS, Russia; Department of Engineering Physics and Radioelectronics, Siberian Federal University, Russia; Department of Physics, Far Eastern State Transport University, Russia; The State Key Laboratory of Luminescent Materials and Devices, Guangdong Provincial Key Laboratory of Fiber Laser Materials and Applied Techniques, School of Materials Science and Engineering, South China University of Technology, China
Introduction
Lead halide perovskites have become a legend in the history of materials science for emerging optoelectronic applications due to their tunable emissions, high photoluminescence quantum yield (PLQY), easy solution processability, and so on [1][2][3][4]. Nevertheless, considering their lead toxicity and low stability, it is urgent to seek environmentally friendly semiconductor materials in this field. In this context, lead-free halide perovskites, with lower toxicity and higher stability, have attracted great interest [5][6][7][8][9]. There are many choices for the replacement of Pb 2+ by other benign metal ions, including the incorporation of isovalent Sn 2+ ions [10] and the substitution of trivalent Bi 3+ or Sb 3+ ions forming compositions such as Cs 3 Bi 2 Cl 9 [11][12][13]. However, those materials are either limited by stability challenges [14] or exhibit lower electronic mobility because of their lower-symmetry non-perovskite structure [15]. A different way to address the challenge is to replace two Pb 2+ ions with one monovalent cation (B + ion) and one trivalent cation (B 3+ ion), forming the three-dimensional (3D) double perovskite structure [16]. The possible combinations of various cations account for the diversity of lead-free double perovskites and make them the most promising alternative for optoelectronic applications [17].
Lead-free halide double perovskites with the general formula A 2 B + B 3+ X 6 (A = Cs + ; B + = Cu + , Ag + , Na + ; B 3+ = Bi 3+ , Sb 3+ , In 3+ ; X = Cl − , Br − , I − ) crystallize in a cubic unit cell with the space group Fm-3m [18]. Among them, Cs 2 AgBiX 6 and Cs 2 NaBiCl 6 possess an indirect band gap, leading to a low absorption coefficient and weak photoluminescence (PL) emission [19,20]. In contrast, Cs 2 AgInCl 6 , which inherits the relatively good performance of the lead halide perovskites mainly owing to its direct band gap, has drawn increasing attention after its discovery by Giustino et al. [21] and Zhou et al. [22] and the milestone work on white light emitters by Luo et al. [7]. Cs 2 AgInCl 6 is reported to have a long carrier lifetime, easy solution processability, and a direct band gap with a parity-forbidden transition that results in a low PLQY (<0.1%); a full account of the research history of Cs 2 AgInCl 6 has been summarized recently [23]. The poor PLQY has been improved by different doping and alloying strategies [7,[24][25][26]. Nevertheless, the PL of Cs 2 AgInCl 6 nanocrystals (NCs) shows a broadband spectral profile owing to its origin in self-trapped excitons (STEs) [27]. Therefore, exploring doped Cs 2 AgInCl 6 NCs with improved PLQY and tunable emission remains a main challenge. Generally, lanthanide (Ln 3+ ) ions would be the most suitable dopants because of their rich and unique PL emissions in the visible to near-infrared range [28,29], which could be utilized to achieve tunable luminescence and increased PLQY [30]. Moreover, the successful incorporation of rare earth ions into lead-based halide perovskites [31,32] and the structural similarity between lead-based and lead-free perovskites (both with six-fold octahedral coordination) have provided the reference and opportunity to conduct further lanthanide doping studies on Cs 2 AgInCl 6 NCs [33][34][35].
In this work, different lanthanide ions (Ln 3+ = Dy 3+ , Sm 3+ , Tb 3+ ) were successfully incorporated into Cs 2 AgInCl 6 perovskite NCs through the hot-injection method developed by our group [26]. Dy 3+ , Tb 3+ , and Sm 3+ ions were verified to occupy the In 3+ site in the Cs 2 AgInCl 6 lattice. The introduction of these rare earth ions endowed Cs 2 AgInCl 6 with diverse PL emissions in the visible region. Benefiting from the energy transfer process, Sm 3+ /Tb 3+ -codoped Cs 2 AgInCl 6 NCs achieved tunable emission from green to yellow orange, and a fluorescent pattern was made from the as-prepared NC-hexane inks by spray coating to show their potential application in fluorescent signs and anticounterfeiting technology. This work expands the PL emissions of lead-free perovskite NCs through lanthanide ion doping, making them more competitive, and will promote a wider regulation of their optical properties and novel photonic applications in energy-related materials. (14 mL), OA (1 mL), OLA (1 mL), and HCl (0.28 mL). The reaction solution was heated to 120°C and degassed by alternating vacuum and N 2 for 1 h. Then, the mixture was heated to 260°C under N 2 . The as-prepared hot (150°C) Cs-oleate solution (0.8 mL) was quickly injected into the solution. After ~20 s, the system was transferred to an ice-water bath. The crude sample was centrifuged at 8000 rpm for 4 min, discarding the supernatant. Next, the precipitate was dispersed in hexane and centrifuged again at 5000 rpm for 4 min, keeping the supernatant. The final NCs were precipitated with ethyl acetate by centrifuging for 4 min at 10000 rpm. For Sm 3+ - and Tb 3+ -codoped samples, different doping concentrations (5 mol%, 10 mol%, 20 mol%, and 40 mol%) of Sm 3+ were added at a fixed concentration of Tb 3+ (0.108 mmol).
2.4. Characterization. X-ray diffraction (XRD) measurements were carried out on an Aeris X-ray diffractometer (PANalytical Corporation, Netherlands) equipped with a 50,000 mW Cu Kα radiation source, after dropping concentrated nanocrystal hexane solutions onto silicon substrates. Transmission electron microscopy (TEM) images and energy-dispersive X-ray spectroscopy (EDS) analyses were acquired on a JEM-2010 transmission electron microscope at a voltage of 120 kV equipped with an energy-dispersive detector, for which the samples were prepared by dropping dilute nanocrystal hexane solutions onto ultrathin carbon film-mounted Cu grids. Steady-state photoluminescence (PL) spectra, photoluminescence excitation (PLE) spectra, and PL decay spectra were recorded using a FLS920 fluorescence spectrometer (Edinburgh Instruments Ltd., U.K.) equipped with the Xe900 lamp, nF920 flash lamp, and a PMT detector. UV-visible absorption spectra were collected using a Hitachi UH4150 UV-vis-near IR spectrophotometer. Elemental contents were determined by inductively coupled plasma mass spectrometry (ICP-MS) after treating samples with a wet digestion method. X-ray photoelectron spectroscopy (XPS) was carried out on an ESCALAB 250Xi instrument (Thermo Fisher). The PL quantum yields were obtained on a Hamamatsu absolute PL quantum yield spectrometer C11347 Quantaurus-QY.
Structural Analysis of Ln 3+ (Ln = Dy, Tb, Sm)-Doped Cs 2 AgInCl 6 NCs. Ln 3+ ion (Dy 3+ , Sm 3+ , Tb 3+ )-doped Cs 2 AgInCl 6 NCs were synthesized by a hot-injection method at 260°C, as illustrated in Figure S1. The X-ray diffraction (XRD) patterns showed that all the doped samples possessed a pure phase (Figure 1(a)), and all of their peaks were indexed by a cubic cell (Fm-3m) with parameters close to those of Cs 2 AgInCl 6 (Figures 1(b)-1(e)) [21]. This indicated that the incorporation of Ln 3+ ions into Cs 2 AgInCl 6 does not change the phase structure. To verify the location of the Ln 3+ ions, Rietveld refinement was performed using TOPAS 4.2 software. The refinements were stable and showed low R factors (Table S1). The coordinates of atoms and main bond lengths are given in Tables S2 and S3, respectively. It was found that the cell volumes of the compounds increased with Ln 3+ ion doping (Figure 1(f)), consistent with the doped NCs retaining the same perovskite structure as Cs 2 AgInCl 6 . The existence of doped Dy 3+ , Sm 3+ , and Tb 3+ ions in Cs 2 AgInCl 6 NCs could be confirmed by energy-dispersive X-ray (EDS) analysis and the corresponding elemental mapping images (Figure S3). The high-resolution TEM (HRTEM) images in Figures 1(g)-1(i) revealed that the incorporation of Ln 3+ ions did not induce the formation of crystal defects, and the clear lattice fringes with increasing lattice spacings of 3.75 Å, 3.8 Å, and 3.9 Å for the Dy 3+ -, Sm 3+ -, and Tb 3+ -doped NCs, respectively, corresponded to the (022) interplane distance (3.7 Å) of Cs 2 AgInCl 6 . The increased interplane distances further indicated the successful incorporation of Dy 3+ , Sm 3+ , and Tb 3+ ions.
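As a rough consistency check on the lattice-fringe values quoted above, the (022) interplanar spacing of a cubic cell follows directly from d_hkl = a / sqrt(h^2 + k^2 + l^2). The short sketch below applies this textbook relation; the cubic lattice parameters used are placeholder values chosen only to reproduce the fringe spacings mentioned in the text, not the refined parameters of Tables S1–S3.

```python
import math

def d_spacing_cubic(a, h, k, l):
    """Interplanar spacing d_hkl (angstrom) of a cubic lattice with parameter a (angstrom)."""
    return a / math.sqrt(h**2 + k**2 + l**2)

# Placeholder lattice parameters (angstrom); the refined values are given in Table S1.
lattice_params = {
    "Cs2AgInCl6 host (illustrative literature value)": 10.47,
    "Dy-doped (hypothetical)": 10.61,
    "Sm-doped (hypothetical)": 10.75,
    "Tb-doped (hypothetical)": 11.03,
}

for name, a in lattice_params.items():
    # d(022) = a / sqrt(8); ~3.70 A for the host, increasing with lattice expansion.
    print(f"{name}: d(022) = {d_spacing_cubic(a, 0, 2, 2):.2f} angstrom")
```

With these placeholder parameters, d(022) evaluates to roughly 3.7, 3.75, 3.8, and 3.9 Å, illustrating how a modest lattice expansion maps onto the fringe-spacing increase reported in the HRTEM analysis.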
To further characterize the chemical compositions of the Ln 3+ -doped Cs 2 AgInCl 6 NCs, X-ray photoelectron spectroscopy (XPS) measurements were carried out. As shown in the XPS survey spectra (Figure 2(a)), the signals of Cs, Ag, In, and Cl were clearly observed in every sample. The respective high-resolution XPS spectra are presented in Figures 2(b)-2(e); they show changes in the chemical environments of In 3+ and Cs + for the samples doped with Ln 3+ ions, while for Ag 3d the spectra showed almost the same peak position for the undoped and the three Ln 3+ ion-doped Cs 2 AgInCl 6 NCs. Moreover, relatively weak signals peaking at 167.9 eV, 1085 and 1110 eV, and 167.3 eV are observed in Figure 2(f), corresponding to the binding energies of Dy 4d, Sm 3d, and Tb 4d, respectively [36,37]. The weak signals may be due to the small amount of lanthanide ions on the surface. Combined with the XRD analysis, these results further indicated that the Ln 3+ ions were successfully doped into the perovskite host lattice and located at the In 3+ site, altering the local coordination structures.
Optical Properties of Ln 3+ (Ln = Dy, Tb, Sm)-Doped Cs 2 AgInCl 6 NCs. The optical features of the as-prepared Ln 3+ -doped Cs 2 AgInCl 6 NCs were investigated (Figure 3). All samples showed a strong absorption starting at around 350 nm and peaking at ~310 nm (Figure 3(a)). Additionally, it is clear that there was a red shift of the excitonic absorption peak with Ln 3+ ion doping, which could be ascribed to the size increase of the NCs. The optical band gaps of 3.83 eV, 3.85 eV, and 3.88 eV for the Dy 3+ -doped, Sm 3+ -doped, and Tb 3+ -doped NCs were quantified from Tauc plots of (αhν) 2 , which were calculated from the corresponding absorption spectra (Figure 3(b)). The decrease in optical band gaps compared with the ~4 eV of undoped Cs 2 AgInCl 6 NCs [26] could be attributed to the lattice expansion of the doped NCs [38]. Doped with different lanthanide ions, the as-synthesized NCs present variable emission (Figure 3(c)). The sharp peaks therein correspond to the intrinsic transitions 4 F 9/2 - 6 H J (J = 15/2, 13/2, 11/2) for Dy 3+ ions, 4 G 5/2 - 6 H J (J = 5/2, 7/2, 9/2, 11/2) for Sm 3+ ions, and 5 D 4 - 7 F J (J = 6, 5, 4, 3) for Tb 3+ ions, respectively. All the PLE spectra monitored at the respective peak positions of the three Ln 3+ ions were almost the same, matching closely the PLE spectrum of the Cs 2 AgInCl 6 NC host reported in the previous work by Alivisatos et al. [39] and by our group [26]. This indicated that the emissions of the Ln 3+ -doped NCs most likely originate from an efficient energy transfer from the Cs 2 AgInCl 6 NC host to the energy levels of the Dy 3+ , Sm 3+ , and Tb 3+ ions [40], as illustrated in Figure S4. The PL decay curves of the three lanthanide ion-doped samples were measured (Figure 3(d), Table S4) and fitted; the calculated lifetimes for the Dy 3+ -doped, Sm 3+ -doped, and Tb 3+ -doped NCs were 3.29 ms, 8.1 ms, and 8.45 ms, respectively, consistent with recent reports on these lanthanide ion-doped luminescent materials [41,42].
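The band gaps quoted above come from Tauc analysis, i.e., extrapolating the linear region of (αhν)² versus hν to the energy axis for a direct-allowed transition. A minimal sketch of such an extrapolation is given below; the absorbance data and the fitting window are synthetic placeholders, not the measured spectra of this work.

```python
import numpy as np

def tauc_direct_bandgap(hv_eV, alpha, fit_window):
    """Estimate a direct band gap by linear extrapolation of (alpha*hv)^2 vs hv.

    hv_eV      : photon energies in eV
    alpha      : absorption coefficient (arbitrary units; only the intercept matters)
    fit_window : (lo, hi) photon-energy range of the linear Tauc region
    """
    y = (alpha * hv_eV) ** 2
    lo, hi = fit_window
    mask = (hv_eV >= lo) & (hv_eV <= hi)
    slope, intercept = np.polyfit(hv_eV[mask], y[mask], 1)
    return -intercept / slope  # photon energy where the linear fit crosses zero

# Synthetic example only: a direct-gap-like absorption edge near ~3.85 eV.
hv = np.linspace(3.0, 4.5, 300)
alpha = np.sqrt(np.clip(hv - 3.85, 0.0, None)) / hv  # toy model so that (alpha*hv)^2 ~ (hv - Eg)
Eg = tauc_direct_bandgap(hv, alpha, fit_window=(3.9, 4.3))
print(f"Estimated direct band gap: {Eg:.2f} eV")
```

Run on real absorption data, the same extrapolation would reproduce values such as the 3.83–3.88 eV gaps reported for the doped NCs, provided the fitting window is restricted to the genuinely linear part of the Tauc plot.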
Tunable Luminescence of Sm 3+ - and Tb 3+ -Codoped Cs 2 AgInCl 6 NCs. Energy transfer between codoped lanthanide ions in one system is a general strategy to achieve tunable luminescence. We designed controlled experiments by doping Tb 3+ ions in Cs 2 AgInCl 6 NCs with different amounts of Sm 3+ ions (Figure 4). The overall amounts of Sm 3+ and Tb 3+ dopants were determined by ICP-MS measurement. As shown in Figure 4(a), all samples showed a strong absorption starting at ~350 nm and peaking at around 310 nm. The PLE spectra of Cs 2 AgIn (0.89-x) Cl 6 :0.11Tb,xSm NCs were almost the same when monitored at 548 and 605 nm, further suggesting that the emissions of the Sm 3+ and Tb 3+ ions were also derived from the efficient energy transfer from the Cs 2 AgInCl 6 NC host to the lanthanide ions (Figure 4(b)). Figure 4(c) reveals the PL emission for different amounts of Sm 3+ -doped Cs 2 AgIn (0.89-x) Cl 6 :0.11Tb NCs under excitation at 311 nm. The PLQYs were measured to be 5.9%, 5.5%, and 5.0%, respectively, corresponding to Sm 3+ concentrations of 3%, 5%, and 11%. With the increase in the amount of Sm 3+ dopants, the PL intensity of the Tb 3+ emission decreases, while the PL intensity of the Sm 3+ emission increases first and then decreases. Thus, the emission colors could be tuned from green to yellow orange. The weakening of the Sm 3+ emission was attributed to the concentration quenching effect. To reveal the variation trend of the PL intensity more directly, the PL spectra were normalized, as shown in the inset of Figure 4(c). It was found that the normalized peak intensity of the Tb 3+ ions decreased while the luminescent intensity of the Sm 3+ ions increased gradually. These results indicated the possible occurrence of Tb 3+ → Sm 3+ energy transfer in Cs 2 AgInCl 6 NCs. Moreover, the decay curves of 11%Tb 3+ /xSm 3+ (x = 0, 2%, 3%, 5%, and 11%)-codoped Cs 2 AgInCl 6 NCs, recorded for the Tb 3+ 548 nm emission under 311 nm excitation, are shown in Figure 4(d) to investigate the energy transfer process from Tb 3+ to Sm 3+ ions. The lifetimes calculated from Figure 4(d) and Table S5 for xSm 3+ (x = 0, 2%, 3%, 5%, and 11%)-doped Cs 2 AgIn (0.89-x) Tb 0.11 Cl 6 NCs were 8.77, 8.39, 8.12, 7.70, and 7.35 ms, respectively, showing that with the increase in the concentration of Sm 3+ ion dopants, the fluorescence lifetime of the Tb 3+ ion emission decreased gradually. This evidence further confirmed the existence of an energy transfer channel from Tb 3+ to Sm 3+ ions in Cs 2 AgInCl 6 NCs. Sm 3+ emission decays, monitored at 605 nm emission under 311 nm excitation, are also shown in Figure 4(e). It was found that with the increase in the doping amount of Sm 3+ ions, the fluorescence decays became faster, attributed to the concentration quenching effect of the Sm 3+ ion dopants. In addition, we used Bi 3+ -doped Cs 2 AgIn (0.89-x) Tb 0.11 Cl 6 :xSm NCs to make fluorescent signs by spray coating. Bi 3+ ion incorporation can shift the excitation to 365 nm for wider application, as shown in our previous work [34]. The scheme of the spray coating process is demonstrated in Figure 4(f), in which different NC-hexane solutions were atomized into very small droplets from the nozzle with high-pressurized nitrogen gas. Then, the droplets were deposited onto the PMMA substrate, forming the desired uniform, stable, and high-resolution patterns.
The fluorescence patterns with tunable emissions shown on the right side of Figure 4(f) could respond to 365 nm UV excitation, revealing the potential application of lanthanide ion-doped Cs 2 AgInCl 6 NCs in the field of anticounterfeiting technology and fluorescent signs.
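Before the overall discussion, the Tb 3+ → Sm 3+ transfer inferred from the shortening donor lifetimes above can be roughly quantified with the standard donor-lifetime relation η = 1 − τ/τ₀, where τ₀ is the Tb 3+ lifetime without Sm 3+. The short sketch below applies this textbook relation to the lifetimes quoted in the text; the relation is a common assumption, not an analysis performed in the paper itself.

```python
# Tb3+ (548 nm) lifetimes in ms for x = 0, 2%, 3%, 5%, 11% Sm3+ codoping (values from the text).
tau0 = 8.77                       # donor lifetime without Sm3+ acceptors
taus = {0.02: 8.39, 0.03: 8.12, 0.05: 7.70, 0.11: 7.35}

for x, tau in taus.items():
    eta = 1.0 - tau / tau0        # donor-lifetime estimate of the Tb -> Sm transfer efficiency
    print(f"x(Sm3+) = {x:.0%}: eta_ET ~ {eta:.1%}")
```

Under this assumption, the efficiency rises monotonically with the Sm 3+ content, which is consistent with the qualitative trend of growing Sm 3+ emission at the expense of the Tb 3+ emission until concentration quenching sets in.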
Discussion
In conclusion, we demonstrated the successful lattice doping of various lanthanide ions, including Dy 3+ , Tb 3+ , and Sm 3+ , into lead-free perovskite Cs 2 AgInCl 6 NCs through the hot-injection method. Structural refinements confirmed that the Dy 3+ , Tb 3+ , and Sm 3+ ions occupied the In 3+ site, and the TEM images and XPS analysis further verified this result. The introduction of Ln 3+ doping endowed Cs 2 AgInCl 6 with diverse PL emissions in the visible region. Benefiting from the energy transfer process, Sm 3+ /Tb 3+ -codoped Cs 2 AgInCl 6 NCs achieved tunable emission from green to yellow orange, and a fluorescent pattern was made from the as-prepared NC-hexane inks by spray coating to show their application in fluorescent signs and anticounterfeiting technology. This work extends the study of lanthanide ion doping in lead-free halide perovskite Cs 2 AgInCl 6 NCs and further enables wider regulation of their optical properties and applications in energy-related materials.
Data Availability
All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials. Additional data related to this paper may be requested from the authors. | 2021-04-16T20:39:44.560Z | 2021-02-24T00:00:00.000 | {
"year": 2021,
"sha1": "bab048a5c159db3cb86c9589b3f0631849208658",
"oa_license": "CCBY",
"oa_url": "https://downloads.spj.sciencemag.org/energymatadv/2021/2585274.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "bab048a5c159db3cb86c9589b3f0631849208658",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
226289969 | pes2o/s2orc | v3-fos-license | Enhancing Low-Quality Voice Recordings Using Disentangled Channel Factor and Neural Waveform Model
High-quality speech corpora are essential foundations for most speech applications. However, such speech data are expensive and limited since they are collected in professional recording environments. In this work, we propose an encoder-decoder neural network to automatically enhance low-quality recordings to professional high-quality recordings. To address channel variability, we first filter out the channel characteristics from the original input audio using the encoder network with adversarial training. Next, we disentangle the channel factor from a reference audio. Conditioned on this factor, an auto-regressive decoder is then used to predict the target-environment Mel spectrogram. Finally, we apply a neural vocoder to synthesize the speech waveform. Experimental results show that the proposed system can generate a professional high-quality speech waveform when setting high-quality audio as the reference. It also improves speech enhancement performance compared with several state-of-the-art baseline systems.
INTRODUCTION
Recently, deep neural networks (DNNs) have been widely used in many speech-generation tasks, such as text-to-speech synthesis [1] and voice conversion [2]. Such deep-learning-based, data-driven approaches typically require a large amount of expensive, high-quality speech data for training. Although self-recorded or publicly found speech can be obtained relatively easily, it is often recorded in uncontrolled environments using non-professional microphones, where acoustic noise, room reverberation, and poor frequency characteristics of the recording device inevitably degrade audio quality.
To enhance low-quality recordings, traditional speech enhancement methods, e.g., weighted linear prediction error (WPE) [3], spectral subtraction [4], and log-MMSE speech magnitude estimator [5], have been extensively studied. These signal-processing methods are typically developed for a single application scenario, such as speech denoising (e.g. [4,5]), de-reverberation (e.g. [3]), or audio effect adaptation (e.g. equalization [6]). Although one can combine denoising, de-reverberation, and equalization methods to sequentially address each subproblem, Mysore et al. [7] pointed out that such intuitive combination would degrade audio quality due to undesired synergy between processes. For example, a sound equalizer might amplify background noise by wrongly amplifying noisy-frequency components, which causes conflict with the speech denoising process. The performance of such traditional methods is still far from satisfactory.
Our goal for this work is to automatically enhance low-quality recordings by simultaneously removing noise and reverberation and applying pleasing audio effects. More specifically, we propose an encoder-decoder neural network to directly transform the input recordings to sound as if they were produced under other recording conditions (e.g. in a professional studio). Recording conditions, including noise, reverberation, microphone characteristics, and audio effects, are jointly considered, which we collectively refer to as the channel factor. The target recording condition is derived from an additional reference audio and can be represented as the channel factor (embedding) using a channel modeling (CM) network. In the inference stage, the input recording first has its original channel characteristics filtered out by an adversarially trained encoder, and is then passed to the decoder to predict the target-environment Mel spectrogram, conditioned on the channel factor obtained from the reference. Finally, we use a WaveRNN vocoder to generate a speech waveform from the predicted Mel spectrogram. With this flexible framework, we can not only enhance low-quality audio by providing high-quality clean audio as the reference but also transfer the audio effect, e.g., adapting the input recording to a new reverb effect if we designate the appropriate reference audio. This paper is organized as follows: Section 2 reviews the relevant work. Section 3 describes the proposed system and gives details of each network component. Section 4 presents the experimental results. We conclude our work and discuss future directions in Section 5.
RELATED WORK
Many DNN-based speech enhancement methods [8,9] have recently been proposed for directly predicting clean speech representations from noisy input, and they significantly outperform traditional methods. Nevertheless, most of these methods operate on the magnitude spectrogram and disregard the phase. Consequently, noisy phase distortion is inevitably introduced by the inverse short-time Fourier transform (ISTFT), which degrades performance. To overcome such limitations, end-to-end waveform models, such as SEGAN [10,11] and Wavenet [12], have attracted significant attention. Very recently, Su et al. [13,14] proposed a Wavenet-based waveform-to-waveform mapping system for speech enhancement and achieved good results. Compared with time-frequency features, however, raw waveform samples are redundant [15] and therefore difficult to model, and the trained system tends to be more susceptible to overfitting.
Different from previous studies [11,13,14], we choose the Mel spectrogram on which to operate since this high-level representation is much smoother than raw samples and easier to handle. To alleviate the phase distortion introduced by ISTFT, we use the state-of-the-art WaveRNN vocoder [16] to synthesize the waveform. Maiti et al. [17] also proposed applying neural vocoders for speech enhancement. Unlike their work, we do not focus on investigating the effect of different neural vocoders in the waveform-synthesis stage but on studying the model architecture and training objective in the spectrogram-enhancement stage. For waveform synthesis, we simply select WaveRNN as the vocoder.
SYSTEM OVERVIEW
The system diagram is illustrated in Fig. 1. It consists of three main components: an encoder, a channel modeling (CM) network, and a decoder. In addition, a WaveRNN vocoder works separately as the waveform synthesis module.
Encoder
The encoder is designed to filter out the channel characteristics from the input audio. More specifically, the input audio is first transformed to a magnitude spectrogram by STFT and then passed to the encoder to produce channel-invariant features. The encoder consists of a six-layer 2D CNN, each layer with batch normalization, ReLU activation, and zero padding; its layer parameters are listed in Table 1. Table 1. Parameters of encoder. Kernel shape of 2D CNN layers is represented as [kernel size tuple, stride tuple, output channels]. T and F denote the number of frames and frequency bins, respectively.
[Table 1 — columns: Layer, Input shape, Kernel shape / Nodes, Output shape]
To encourage the encoder to produce channel-invariant features, inspired by [18] and [19], we introduce a channel classifier (#1) as a discriminator for adversarial training. It consists of one uni-directional LSTM (Uni-LSTM) layer with 400 nodes and one fully-connected layer with a softmax layer, which predicts the channel type (recording condition) of the input audio. In the training stage, this classifier is optimized to accurately predict the channel type by minimizing the crossentropy classification loss. On the other hand, the encoder is optimized oppositely to maximize the classification loss to prevent the produced features from encoding channel information. This adversarial training encourages the encoder to filter out the channel information from its input.
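A minimal sketch of the alternating adversarial update described above is shown below, assuming PyTorch. The module definitions, tensor shapes, and frame pooling are placeholders (the paper's classifier is a Uni-LSTM over frames), and only the learning rates follow the text; this is an illustration of the training scheme, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Placeholder modules: a frame-level encoder and channel classifier #1 (7 channel types assumed).
encoder = nn.Sequential(nn.Linear(513, 256), nn.ReLU(), nn.Linear(256, 256))
classifier_c1 = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 7))

opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-4)   # model learning rate from the text
opt_cls = torch.optim.Adam(classifier_c1.parameters(), lr=2e-4)  # classifier learning rate from the text
ce = nn.CrossEntropyLoss()

def adversarial_step(spec, channel_label):
    """spec: (batch, frames, freq_bins); channel_label: (batch,) integer channel type."""
    # Step 1: update the classifier so it predicts the channel type from the encoder output.
    with torch.no_grad():
        z = encoder(spec)                      # supposedly channel-invariant features
    logits = classifier_c1(z.mean(dim=1))      # simple frame pooling for a clip-level prediction
    loss_cls = ce(logits, channel_label)
    opt_cls.zero_grad(); loss_cls.backward(); opt_cls.step()

    # Step 2: update the encoder to *fool* the classifier, i.e., maximize its loss.
    z = encoder(spec)
    logits = classifier_c1(z.mean(dim=1))
    loss_adv = -ce(logits, channel_label)      # negated cross-entropy = adversarial objective
    opt_enc.zero_grad(); loss_adv.backward(); opt_enc.step()
    return loss_cls.item(), loss_adv.item()
```

A call such as `adversarial_step(torch.randn(8, 100, 513), torch.randint(0, 7, (8,)))` exercises both updates; over training, the classifier loss stays high because the encoder keeps removing channel cues from its output.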
Channel Modeling
The channel modeling (CM) network explicitly extracts the channel factor from the reference audio. Its structure is shown in Fig. 2. We use this model structure since it effectively encodes latent factors [20,21]. Instead of using a one-hot code, the channel factor can be automatically encoded as a neural code from the reference audio, which enables the system to deal with the unseen channel condition and unlabelled reference audio. Moreover, the CM network can be jointly optimized with other neural components, which further provides better results.
The network takes as input the magnitude spectrogram computed from the reference. It consists of a six-layer 2D CNN each with a 5×5 kernel, 2×2 stride, batch normalization, and ReLU activation. The output channels are set to 32, 32, 64, 64, 128, and 128, respectively. A uni-directional gated recurrent unit (GRU) layer with 128 nodes follows the last CNN layer, producing an intermediate feature. Next, a channel token layer is added, which consists of 12 trainable channel tokens and a multi-head attention module [22]. Specifically, each token has 256 dimensions, and the number of attention heads is set to 8. The intermediate feature output by the GRU layer is fed to the channel token layer and serves as the query vector, then the attention module calculates the similarity (weight) between the query and each token. Finally, the channel factor (vector) is formed as the weighted sum of these channel tokens.
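The channel-token layer described above behaves like a global-style-token module: a bank of trainable embeddings is attended over by the reference-derived query, and the attention output is the channel factor. A minimal sketch under that interpretation is shown below (PyTorch); the dimensions follow the text (12 tokens, 256-dim, 8 attention heads, 128-dim GRU query), but the class and variable names are ours, not the authors'.

```python
import torch
import torch.nn as nn

class ChannelTokenLayer(nn.Module):
    """12 trainable channel tokens + multi-head attention; the channel factor is the weighted token sum."""
    def __init__(self, query_dim=128, token_dim=256, n_tokens=12, n_heads=8):
        super().__init__()
        self.tokens = nn.Parameter(torch.randn(n_tokens, token_dim) * 0.1)
        self.attn = nn.MultiheadAttention(embed_dim=token_dim, num_heads=n_heads,
                                          kdim=token_dim, vdim=token_dim, batch_first=True)
        self.query_proj = nn.Linear(query_dim, token_dim)   # lift the GRU output to the token dimension

    def forward(self, ref_embedding):
        # ref_embedding: (batch, query_dim), e.g. the final GRU state of the reference encoder.
        q = self.query_proj(ref_embedding).unsqueeze(1)                           # (batch, 1, token_dim)
        kv = self.tokens.unsqueeze(0).expand(ref_embedding.size(0), -1, -1)       # (batch, 12, token_dim)
        channel_factor, attn_weights = self.attn(q, kv, kv)                       # weighted sum of tokens
        return channel_factor.squeeze(1), attn_weights.squeeze(1)                 # (batch, 256), (batch, 12)

# Usage sketch: a batch of 4 reference embeddings produced by the GRU layer.
layer = ChannelTokenLayer()
z_c, weights = layer(torch.randn(4, 128))
```

Because the tokens are shared across all references, an unlabelled reference under an unseen condition is still mapped to a combination of the learned tokens, which is what allows the system to handle unseen channels.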
To better disentangle channel and speaker identities from the reference audio, we introduce two additional classifiers, channel classifier (#2) and a speaker classifier. Both are fully-connected networks with one 256-node hidden layer followed by a softmax layer to predict the channel type or speaker identity. Note that, unlike channel classifier (#1) used with the encoder, classifier (#2) here encourages the channel factor to be more informative about the channel, while the speaker classifier serves as an adversarial discriminator to filter out speaker information from the channel factor.
Decoder
The auto-regressive decoder shown in Fig. 3 is used to produce the target-environment Mel spectrogram, which shares similar channel characteristics to those of the reference audio. The extracted channel factor is first repeatedly concatenated to the encoder output in every time frame. The resulting concatenated features are processed by a Bi-LSTM layer with 256 nodes, and then passed to a Uni-LSTM layer with 512 nodes. Four fully-connected layers are sequentially added, each with 80 nodes, to produce the 80-dimensional Mel spectrogram. Similar to Tacotron2 [23], we add a 2-layer Pre-Net each with 256 nodes for the auto-regressive process. The produced Mel spectrogram from the previous time step is processed through Pre-Net and fed into the Uni-LSTM layer for the prediction of the current time step. A five-layer convolutional Post-Net module used in Tacotron2 is also introduced to predict the Mel spectrogram residual to improve the overall reconstruction.
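The sketch below summarizes the auto-regressive loop described above: the channel factor is broadcast over every frame, the previous Mel frame is fed back through the Pre-Net, and the Post-Net residual is omitted for brevity. Layer sizes follow the text, but the exact wiring (e.g., where the Pre-Net output enters the Uni-LSTM) is our assumption, not a verified reproduction of the authors' model.

```python
import torch
import torch.nn as nn

class DecoderSketch(nn.Module):
    """Illustrative auto-regressive decoder: Bi-LSTM conditioning + Uni-LSTM + 4 FC layers."""
    def __init__(self, enc_dim=256, ch_dim=256, mel_dim=80):
        super().__init__()
        self.blstm = nn.LSTM(enc_dim + ch_dim, 256, bidirectional=True, batch_first=True)
        self.prenet = nn.Sequential(nn.Linear(mel_dim, 256), nn.ReLU(),
                                    nn.Linear(256, 256), nn.ReLU())
        self.lstm = nn.LSTMCell(2 * 256 + 256, 512)          # Bi-LSTM output + Pre-Net output
        self.proj = nn.Sequential(nn.Linear(512, 80), nn.ReLU(), nn.Linear(80, 80), nn.ReLU(),
                                  nn.Linear(80, 80), nn.ReLU(), nn.Linear(80, mel_dim))

    def forward(self, enc_out, channel_factor):
        # enc_out: (B, T, enc_dim); channel_factor: (B, ch_dim), repeated at every frame.
        B, T, _ = enc_out.shape
        cond = torch.cat([enc_out, channel_factor.unsqueeze(1).expand(-1, T, -1)], dim=-1)
        cond, _ = self.blstm(cond)                            # (B, T, 512)
        h, c = torch.zeros(B, 512), torch.zeros(B, 512)
        prev = torch.zeros(B, 80)                             # "go" frame at t = 0
        mels = []
        for t in range(T):                                    # frame-by-frame auto-regression
            x = torch.cat([cond[:, t], self.prenet(prev)], dim=-1)
            h, c = self.lstm(x, (h, c))
            prev = self.proj(h)
            mels.append(prev)
        return torch.stack(mels, dim=1)                       # (B, T, mel_dim), before the Post-Net
```

At training time one would typically feed the ground-truth previous frame (teacher forcing) instead of `prev`; at inference the loop runs exactly as written.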
WaveRNN Vocoder
To avoid introducing the noisy phase, a WaveRNN vocoder is used to directly generate the waveform from the Mel spectrogram. We use a speaker-independent WaveRNN, which effectively generalizes to unseen speakers. Since this vocoder is trained using a large external corpus, compared to the deterministic ISTFT, such a robust neural waveform model has better tolerance to the prediction error of spectrogram, and thus is able to generate a higher quality waveform.
Training Objective
To state the training objective of our proposed system, we review each component shown in Fig. 1 and use the following definitions: the encoder Φ e encodes the input spectrogram o 1:Ti of T i frames into the channel-invariant features z 1:Ti e , the CM network Φ c extracts the channel factor z c from the reference spectrogram, and the decoder Φ d predicts the target Mel spectrogram, which is trained against the ground-truth Mel spectrogram with the main mean squared error (MSE) objective. In addition to the main MSE objective, we add the following three objectives: L enc ch = CE(D c1 (z 1:Ti e ), c in ), L cm ch = CE(D c2 (z c ), c ref ), and L cm spk = CE(D s (z c ), s ref ), where CE denotes the cross-entropy loss, and D c1 , D c2 , and D s denote channel classifier #1, #2, and the speaker classifier, respectively. The channel types of the input and the reference are represented as one-hot labels, i.e., c in and c ref , and the speaker label of the reference is denoted as s ref .
As explained in previous sections, L enc ch is used as the adversarial training objective to filter out the channel information from the encoder output z 1:Ti e . We also use L cm ch as an auxiliary objective and L cm spk as an adversarial objective, to encourage the channel factor z c to encode more channel information but less speaker information. In the training stage, the neural components (i.e. Φ e , Φ c , and Φ d ) and the classifiers (i.e. D c1 , D c2 , and D s ) are optimized alternately. At one training step, we optimize the three classifiers individually by minimizing their corresponding cross-entropy objectives, L enc ch , L cm ch , and L cm spk . At the next training step, we fix the classifiers and jointly optimize all three neural components with a weighted combination (Eq. 8) of the MSE objective and the three objectives above, where α, β, and γ are hyper-parameters controlling the weights of the different sub-objectives.
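Since the explicit form of Eq. 8 did not survive in this copy of the text, the sketch below encodes one plausible reading of it: the Mel-spectrogram MSE plus the auxiliary channel loss, minus the two adversarial losses. Both the sign convention and the pairing of α, β, γ with the individual losses are our assumptions; only the weight values (1.0, 0.2, 0.05) are taken from the text.

```python
def generator_objective(l_mse, l_enc_ch, l_cm_ch, l_cm_spk,
                        alpha=1.0, beta=0.2, gamma=0.05):
    """Assumed combined objective for the generator update (encoder, CM network, decoder):
    minimize reconstruction and the auxiliary channel loss, and *maximize* (hence subtract)
    the adversarial channel and speaker classification losses."""
    return l_mse - alpha * l_enc_ch + beta * l_cm_ch - gamma * l_cm_spk
```

In an alternating scheme, this scalar would be minimized on the generator step while the three classifiers are updated separately on their own cross-entropy losses, as described above.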
Dataset
The DAPS (device and produced speech) dataset [7] was used in our experiments. It provides aligned recordings of high-quality speech 1 and a number of versions of low-quality speech 2 , which are affected by noise, reverberation, and microphone response. Specifically, it consists of 20 speakers (10 female and 10 male) reading 5 excerpts each from public domain stories. To prepare the training set, we selected 4 of the 5 excerpts narrated by 18 of the 20 speakers under 7 of the 10 recording conditions, and then split the corresponding recordings into shorter segments, which resulted in 23,555 audio clips. The remaining 1 excerpt, 2 speakers (1 female and 1 male), and 3 conditions were used to form the test set, which resulted in 228 audio clips. Thus, all the content, speakers, and recording conditions of the tested speech were unseen in the training set. The three tested real-world recording conditions were: (1) ipad livingroom, recording done by an iPad Air in a living room; (2) ipadflat office, recording done by an iPad Air placed flat in an office; and (3) iphone bedroom, recording done by an iPhone 5S in a bedroom.
Implementation
All audios were resampled at 16kHz. We used STFT to compute the spectrogram with a Hanning window size of 50 ms and a hop size of 12.5 ms, and the spectrogram was power-law compressed [24] with a power of 0.3. For WaveRNN vocoder, we used a public speaker-independent model 3 , which was pretrained sufficiently with more than 900 speakers selected from the LibriTTS corpus [25]. We slightly fine-tuned the model, with the high-quality studio recordings in the training set, to make the model adapt to the studio audio effect. Note that the speakers and content of the tested recordings were still unseen to the fine-tuned WaveRNN vocoder. Although the primary target of this work is to enhance low-quality recordings, we implemented audio effect transfer, e.g., transferring the iPhone recording in the bedroom to the iPad recording in the office, within one unified system. As shown in Fig. 1, the decoder can predict not only the Mel spectrogram in studio quality but also that under other recording conditions, depending on the reference audio 4 . This architecture enables us to augment training data with diverse combinations of input and reference pairs. Since the system learns to disentangle the channel factor and adapt to various recording conditions, we expect that it can reduce overfitting and improve overall performance. Therefore, each audio clip under 7 training recording conditions was combined with 3 different types of references: one high-quality recording (as primary training target) and two recordings that were randomly selected from the other 6 conditions. This extended the original training set and resulted in a total of 70,665 (23,555 × 3) training examples. For the test set, we set the high-quality recording only as the reference since our ultimate target is to examine if the low-quality input can be enhanced. The Adam optimizer [26] was used for training, with learning rates of 0.0001 and 0.0002 for the model and its classifiers, respectively. Hyper-parameters α, β, and γ in Eq. 8 were set to 1.0, 0.2, and 0.05, respectively.
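The feature extraction described at the start of this subsection (16 kHz audio, 50 ms Hann window, 12.5 ms hop, power-law compression with exponent 0.3) can be sketched as follows. SciPy is used here purely for illustration and applies its own window normalization, so this is a hedged approximation of the pipeline rather than the authors' exact code; function and parameter names are ours.

```python
import numpy as np
from scipy.signal import stft

def compressed_magnitude(wave, sr=16000, win_ms=50.0, hop_ms=12.5, power=0.3):
    """Magnitude spectrogram with a Hann window and power-law compression |X|**power."""
    nperseg = int(sr * win_ms / 1000)            # 800 samples at 16 kHz
    hop = int(sr * hop_ms / 1000)                # 200 samples
    _, _, X = stft(wave, fs=sr, window="hann", nperseg=nperseg,
                   noverlap=nperseg - hop, boundary=None, padded=False)
    return np.abs(X) ** power                    # shape: (freq_bins, frames)

# Shape check on one second of random audio (placeholder input).
spec = compressed_magnitude(np.random.randn(16000))
print(spec.shape)   # roughly (401, 77) with these settings
```

The compressed magnitude (rather than the raw magnitude) is what the encoder and CM network would consume, with the WaveRNN vocoder handling waveform reconstruction from the predicted Mel spectrogram.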
Evaluated Systems
We conducted an ablation study on the proposed system. Several speech enhancement baseline systems were also reimplemented, making a total of seven systems compared in our experiments. We describe and notate each system as follows: • ED: A simplified version of our proposed system that is composed only of encoder and decoder modules. The decoder only predicts the high-quality Mel spectrogram as its prediction target.
• ED+CM: Another simplified version that is composed of encoder, decoder, and CM modules. No classifiers and corresponding training objectives were used for training. Following the work of [27], we improved this system by conditioning the encoder with the input's channel factor 5 .
• FULL (ED+CM+Classifiers): Our complete proposed system shown in Fig. 1, which consists of an encoder, decoder, CM network, and three classifiers. Auxiliary (Eq. 6) and adversarial (Eq. 5 and Eq. 7) training objectives were integrated through these three classifiers.
• Linear-ISTFT: This system shares the same settings as FULL, except the decoder output was changed to linear spectrogram. Instead of WaveRNN, we synthesized the waveform using ISTFT with the noisy phase.
• Wavenet: A waveform-to-waveform mapping system based on Wavenet [13]. We reimplemented it with the same model architecture and training objective (L1 loss on log spectrogram).
• WPE: A state-of-the-art speech de-reverberation baseline, which estimates a linear filter to minimize the weighted linear prediction error [3].
• WPE+L: An integrated system that sequentially combines WPE for de-reverberation and a standard log-MMSE speech magnitude estimator [5] for denoising.
Objective Evaluations
We first evaluated each system with objective measures. We used the short-time objective intelligibility (STOI) score [28] to measure speech intelligibility and three composite scores (CSIG, CBAK, and COVL) [29] to measure enhancement quality. CSIG, CBAK, and COVL are mean opinion score (MOS) predictions of speech distortion, noise distortion, and overall quality, respectively. The evaluation results are listed in Table 2. As shown, the FULL system consistently improves its two simplified versions (ED and ED+CM) for all measures, which indicates both the CM network and classifiers play important roles in our proposed system. It also significantly outperforms time-domain Wavenet and the two signal-processing baselines (WPE and WPE+L). WPE+L system performs much worse than WPE. This is mostly because the log-MMSE estimator suppresses noise too aggressively even though the noise level of the DAPS dataset is not high, therefore it degrades speech quality. We found that FULL system is worse than Linear-ISTFT in terms of CBAK and COVL. The probable reason is that the vocoder-generated speech has more artifacts than the ISTFT-generated one. However, most of these artifacts introduced by the neural vocoder do not affect human perception, as has been observed in our previous work [21]. To comprehensively evaluate each system, we further conducted the following subjective listening tests.
Subjective Evaluations
We conducted crowdsourced listening tests for the subjective evaluations. Specifically, we chose 120 (20 audios × 3 conditions × 2 genders) of the 228 tested audio clips for each system 6 . Participants (165 individuals) were asked to rate the quality of each anonymized audio from 1-5 (five-point Likert scale) for the mean opinion score (MOS). For reference, the raw (with low quality) and studio versions of each audio were also provided to the participants before rating. Each audio was rated ten times to avoid human bias.
The listening results are shown in Fig. 4. The Mann-Whitney U test [30] reveals that the proposed FULL system significantly outperforms the other systems with p-values all lower than 1e-7. It is noteworthy that unlike the objective results, FULL system shows a higher score than Linear-ISTFT, which means the WaveRNN module successfully improves the quality of the synthetic waveform. This also indicates that although the artifacts introduced by WaveRNN degrade the objective results, they do not affect human subjective evaluations. More interestingly, we can see that the FULL system outperforms ED in both subjective and objective tests, even though the task of FULL system is more challenging: the extra channel factor should be disentangled, and the output Mel spectrogram can be not only in studio quality but also in other acoustic characteristics based on the provided reference. Such additional learned knowledge related to channel information does benefit the FULL system and improves its performance.
Beyond Enhancement: Audio Effect Transfer
In addition to speech enhancement, the proposed system can also realize audio effect transfer: transferring the input recordings to sound as if they were recorded in another environment. To achieve this, we only need a few or even one reference audio recorded under the corresponding desired channel (recording) condition.
Instead of using a one-hot code, the CM network automatically encodes the channel factor from an arbitrary reference, and the decoder can then predict the target-environment Mel spectrogram conditioned on this factor. Figure 5 gives a visualization of learned channel factors for different reference recordings under the three tested unseen conditions and one studio condition. We used the t-SNE transformation [31] to project the 256-dimensional channel factor into 2 dimensions. We can see that the learned factors are clearly clustered based on their channel conditions 7 , which indicates that the CM network can effectively discriminate unseen reference audios and produce representative factors. Therefore, it enables the system to deal with unlabelled references under unseen channel conditions. With this system, we can further control the transferred effect (e.g. reverberation level) by flexibly scaling the channel factor. Examples of transferred Mel spectrograms are given in Fig. 6, where we aimed to transfer a studio recording to sound as if it were recorded in the (unseen) iphone bedroom condition. Instead of feeding a reference audio, the applied channel factor ẑ c was pre-computed through linear interpolation of two factors using Eq. 9.
ẑ c = (1 − α) z pro c + α z iph c , (9) where z pro c and z iph c denote the channel factors extracted from a professional studio recording and an iphone bedroom recording, respectively, and α is a scale value that ranges from 0 to 1. We successfully controlled the transferred effect from less reverberant (Fig. 6(c)) to more reverberant (Fig. 6(d)) by increasing the scale value α. We can also see that the transferred Mel spectrogram in Fig. 6(d) shares a similar audio effect (or channel characteristics) with the ground-truth transfer target in Fig. 6(b).
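Assuming Eq. 9 is the convex combination reconstructed above (the equation itself was lost in this copy, so the exact form is inferred from the surrounding description), the interpolation amounts to a one-line operation on the two factor vectors:

```python
import numpy as np

def interpolate_channel_factor(z_pro, z_iph, alpha):
    """Blend a studio channel factor toward the iphone_bedroom factor; alpha in [0, 1]."""
    return (1.0 - alpha) * np.asarray(z_pro) + alpha * np.asarray(z_iph)
```

Feeding the blended vector to the decoder in place of a reference-derived factor is what produces the gradual dry-to-reverberant transition shown in Fig. 6(c)–(d).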
CONCLUSIONS
In this paper, we proposed a system to enhance low-quality voice recordings. Specifically, the channel factor disentangled from a high-quality reference recording is used to guide the system to predict the enhanced Mel spectrogram, which is then transformed to the enhanced waveform via a WaveRNN vocoder. Experimental results show that our system works well and outperforms several state-of-the-art baselines. Moreover, we show that it can be flexibly extended to transform the input recording into not only professional studio quality (as our primary target) but also with other acoustic (or channel) characteristics based on the reference we designate. Our future work includes improving the naturalness of the predicted Mel spectrogram by adopting a generative adversarial network-based spectrogram discriminator [13,14]. In addition, we found that the synthetic waveform has lower perceived quality if it is synthesized from the reverberant spectrogram. This is because our neural vocoder was pretrained only with high-quality but dry waveforms. We plan to alleviate this issue by using a recently proposed reverberationaware vocoder [32], which is able to model reverberation and generate a reverberant waveform with high perceived quality. | 2020-11-11T02:00:47.208Z | 2020-11-10T00:00:00.000 | {
"year": 2020,
"sha1": "d76fbc1dc9f33a3e8a3f2638a89c7833d02f44bb",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2011.05038",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d76fbc1dc9f33a3e8a3f2638a89c7833d02f44bb",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
233664765 | pes2o/s2orc | v3-fos-license | Robotic-Assisted Minimally Invasive Surgery in Children
Currently, minimally invasive surgery (MIS) includes conventional laparo-thoracoscopic surgery and robot-assisted surgery (RAS), or robotic surgery. Robotic surgery is performed with robotic devices, for example the Da Vinci system from Intuitive Surgical, which has a miniaturized camera capable of image magnification and a three-dimensional view of the surgical field; its instruments are articulated with 7 degrees of freedom of movement, and the surgeon operates in a sitting position at a surgical console near the patient. Robotic surgery has seen an enormous surge in use in adults, but it has been accepted more slowly for children, although it offers important advantages in complex surgeries. The areas of application of robotic surgery in the pediatric population include urological, general, thoracic, oncological, and otorhinolaryngological surgery; the largest application has been in urological surgery. There is evidence that robotic surgery in children is safe, and it is important to offer its benefits. Intraoperative complications are rare, and the frequency of postoperative complications ranges from 0–15%. Recommendations for the implementation of a pediatric robotic surgery program are included. The future will be fascinating, with upcoming advancements in robotic surgical systems, the use of artificial intelligence, and digital surgery.
Introduction
Pediatric robotic surgery offers unique challenges within this rapidly advancing field. There has been a slow rate of uptake within most pediatric surgical centers around the world, due both to financial constraints and to difficulties associated with equipment primarily designed for adults. The ergonomics required for the da Vinci® master-slave-type platform are currently challenged by the small working space in very small children.
Currently, there are three options for surgical treatment for a wide variety of pathologies in the pediatric population, open surgery (traditional) and MIS, which include: conventional laparo-thoracoscopic surgery and RAS.
Minimally invasive techniques are applicable in more than 60% of abdominal and thoracic operations in children, and according to evidence-based data and ethical principles can be used properly [1].
In 1994, the first robotic system used in the urological practice known as AESOP was introduced. Later, the evolution of these devices would bring the Zeus system and finally the Da Vinci system while continuously increasing their precision and effectiveness [2].
Since these initial reports, robotic surgery has seen widespread application within the adult population, especially in urologic and gynecologic procedures. As is often the case for new devices, technology, and therapeutic options in surgery, the application of robotic surgery for children has occurred more slowly than in adults. This caution is due in part to technical limitations with developing appropriately sized instruments for the pediatric patient; however, in recent years broader implementation has been seen [3][4][5][6].
In April 2001, Meininger et al. [7] published the first cases of RAS in children. The first of these two Nissen fundoplication procedures was reported as occurring in July 2000 [7][8][9][10]. Shortly afterward, the first robotic urological procedure in a child was undertaken in March 2002 by Peters et al. (personal communication, July 2002) who performed a pyeloplasty using the da Vinci® [11,12]. Since then to date, more than 70 different surgical techniques have been published [13,14].
Currently, the only robotic system that is approved for pediatric use is the da Vinci Surgical System (Intuitive Surgical, Sunnyvale, CA) [7]. The da Vinci robot is well suited for children of all ages, including infants and newborns, using careful preoperative planning, this allows the da Vinci to be used for numerous procedures in small children [14,15].
The evolution of conventional laparoscopic surgery highlights the transitory stages that follow adoption and diffusion of surgical innovation [16][17][18]. RAS was introduced to the specialty of pediatric surgery following initial case reports in the early 21st century. Subsequently, this promising surgical technology has undergone a formative 10-year period of introduction, development, early dispersion, exploration and preliminary assessment [13].
Cundy et al. [13], performed a 2013 systematic literature search for all reported cases of RAS in children during an 11-year period. During this time, 2,393 procedures in 1,840 patients were reported and the most prevalent gastrointestinal, genitourinary, and thoracic procedures were fundoplication, pyeloplasty, and lobectomy, respectively.
Due to the limitations of conventional laparoscopic surgery in pediatric patients, expert pediatric surgeons should only perform the more complex or reconstructive laparoscopic techniques [19].
The safety of RAS in children is reported to be similar to that of open procedures, and the outcomes are at least equivalent to conventional laparoscopy [47]. Smaller children and infants require special considerations when discussing robotic surgery [48].
Numerous case reports, case series, and comparative studies have unequivocally demonstrated that robotic surgery in children is safe [13].
In systematic investigations of databases of pediatric RAS, the global surgical conversion rate was 4.7% [22], and a net overall surgical conversion rate of 2.5% was reported [13]. In published studies of pediatric RAS, intraoperative complications are infrequent, and in the postoperative period the frequency varies from 0 to 15% [22,[49][50][51].
The primary disadvantage of robotic surgical technology in pediatric surgery is related to the size of the surgical robot and its associated instruments [4,5,46]; the robotic instruments are only available in two sizes, 8 mm and 5 mm. Similarly, robotic endoscopes (lenses) are currently only available in 12.0 mm and 8.5 mm sizes.
The cost analysis for the use of the robot is not strictly measured by numerical cost in dollars, but should be considered as value equating to quality (as defined by positive outcomes/cost). Naturally, there is the initial cost of purchasing and maintaining the robot itself, as well as the increased costs from the disposable robotic equipment and the longer operative times [4]. Other factors associated with the robotic portion of a procedure should also be noted, such as increased operating room or anesthesia time, staff training, and the cost of marketing campaigns [62].
In contrast, patient and parent satisfaction, as well as emotional and professional benefits, should also be considered when evaluating cost/satisfaction of this type of investment [63]. One study found that it takes at least 3 to 5 cases per week in a program to demonstrate a net gain from robotic surgery [64].
Other cost analyses suggest that robotic surgeries are more expensive than those associated with laparoscopic or open surgery [65,66]. However, RAS is associated with a 2% decrease in anastomotic leaks [67,68]. This reduces hospitalization and the costs of managing the resulting surgical morbidity, and benefits the earlier return of the patient to the workforce [66]. In addition, by preferentially performing difficult and complex cases in which robotic surgery adds value to patient care, hospitals that have a robotic system can make it the most cost-effective option. In some regions, such as Latin America, costs represent a major obstacle to the advancement of robotic surgery in children, especially in private hospitals.
A short hospital stay, prudent use of instruments, reduced operating room times, and competent robotic teams reduce costs [69]. Therefore, future comparative analyses of outcomes in children should include financial factors such as the loss of human capital of parents [70].
Applications
Robotic surgery has been used in almost all pediatric surgical subspecialties, including urology, general surgery (gastrointestinal-hepatopancreatobiliary), thoracic, oncology, and otorhinolaryngology. Among pediatric disciplines, robotic surgery is used most frequently in urology.
The best indications for robotic surgery are procedures that require a small surgical field, fine and precise dissection, and secure intracorporeal sutures [71]. RAS has special application in complex and reconstructive surgery; for these procedures, surgeons often jump directly from the open technique to RAS [14]. In otorhinolaryngology, RAS with a transoral approach is particularly useful for masses of the tongue base [72]. Furthermore, a wide spectrum of surgical procedures has been performed with RAS in children [13].
Urologic robotic surgery
To date, the application of MIS in pediatric urology has evolved over more than 30 years [73]. Urology has the highest acceptance of robotic surgery within pediatrics. The first use of robotics in children was a pyeloplasty for ureteropelvic junction (UPJ) obstruction, because the ureteropelvic anastomosis was a technical challenge using conventional laparoscopic surgery [11,12].
A systematic bibliographic search was carried out of all published cases of pediatric robot-assisted urological surgery between 2003 and 2016. A total of 151 publications reporting 3688 procedures in 3372 patients were identified. The most reported procedures were pyeloplasty (1923), ureteral reimplantation (1120), heminephrectomy (136), and nephrectomy or nephroureterectomy (117). There were 16 countries and 48 institutions represented in this literature [6].
We will approach pediatric surgical urological pathology based on the anatomy of the urinary tract as follows: (i) upper urinary tract, (ii) lower urinary tract, and (iii) miscellaneous procedures.
Nephrectomy
In pediatric patients, complete or partial nephrectomies are indicated more frequently for benign diseases and less frequently for malignant diseases. Indications for RAS nephrectomy in benign disease include multicystic dysplastic kidney disease and non-functioning (excluded) kidneys due to various pathologies, such as UPJ obstruction and reflux nephropathy, among others. Indications for malignant tumors, particularly Wilms tumor, are increasing and are being legitimized through corresponding treatment protocols, with surgery performed while adhering strictly to oncological surgical rules [74].
In nephrectomy, the initial step is the dissection and exposure of the renal pedicle, followed by its ligation and division. In the next step, the kidney is completely freed from its surrounding tissue. Subsequently, dissection of the ureter is performed; in the case of radical nephroureterectomy, it should be carried down to the bladder. The kidney is extracted through the umbilical access; in the case of nephrectomy for a tumor, the use of a collection bag is mandatory, and the specimen is removed through a Pfannenstiel incision. Finally, lymph node sampling is crucial for surgical staging and guiding subsequent treatment.
Partial nephrectomy
Ureteral duplication is the most common congenital abnormality of the urinary tract. Partial nephrectomy for a benign indication is performed for the resection of a deficient or non-functional fraction of a duplex system, which can cause or be associated with obstruction and hydronephrosis, dysplasia, megaureter, ureterocele, and vesicoureteral reflux. Heminephroureterectomy is performed in cases with a refluxing system [73]. It is recommended, before surgery, to place a stent in the ureter to be preserved (for easy identification during dissection). If the ureter of the remaining fraction is to be reimplanted or if an ectopic ureter is to be followed into the deep pelvis, the robot is repositioned between the patient's legs and redocked [75].
Pyeloplasty
Robot-assisted pyeloplasty is the most common procedure performed robotically in pediatric patients, both within urology and overall [76]. The excellent experience with robot-assisted pyeloplasty has challenged other approaches as a new standard for the treatment of UPJ obstruction.
Dismembered pyeloplasty (Anderson-Hynes) includes resection of the UPJ and reduction of the renal pelvis. In the technique, the ureter is incised and spatulated laterally to provide sufficient ureteral wall length to achieve a wide side-to-side anastomosis. Once the anterior layer of the pelvic-ureteral anastomosis has been sutured, an antegrade transanastomotic double-J stent is passed. J-Vac transabdominal drainage was used in the surgical bed.
Patients undergoing robotic pyeloplasty have a shorter hospital stay, and less need for analgesics; however, there is no difference in the success rate of robotic pyeloplasty in comparison to the other two approaches [77][78][79].
In robotic pyeloplasty the learning curve is much shorter. This allows some surgeons to transition from the open pyeloplasty to the robotic approach without any prior laparoscopic experience with this technique [80].
Pyeloplasty in infants weighing less than 10 kg has been performed successfully. A multi-institutional study of 60 infants less than 12 months old reported a 91% success rate and an 11% complication rate, similar to other studies in larger children and adults [81]. These findings are consistent with the author's personal experience.
Also, the retroperitoneal robotic approach is indicated mainly for patients with previous abdominal surgery, when adhesion syndrome is suspected, and it has been validated for pyeloplasty and other techniques in this anatomical area [82].
Ureteroureterostomy
The procedures performed included pyeloureterostomy for incomplete duplication and lower pole UPJ obstruction, and ipsilateral ureteroureterostomy along with distal ureterectomy for obstruction in a dysplastic upper pole with ureteral ectopia, for the treatment of duplex anomalies and reconstruction of obstructed dilated ureteral segments [83]. This can also be applied to the lower ureter in duplex systems, where it helps to avoid reimplantation of disparate ureters in the same tunnel. Transperitoneal robotic ureteroureterostomies have also been reported for mid-ureteric strictures and for the correction of retrocaval ureters [84,85]. In addition, with robotic assistance, the removal of a large ureteric stone at any level, with stent placement and closure, is relatively simple, using the Mikulicz procedure to close the ureterotomy or a spatulated anastomosis.
Ureterocalicostomy
Ureterocalicostomy is a potential and technically feasible option in patients with UPJ obstruction and significant lower pole caliectasis; it is often reserved for patients with a failed pyeloplasty and a minimal pelvis, or patients with an exaggerated intrarenal pelvis [86]. A ureterocalicostomy is a procedure in which the ureter is sutured to the lowermost calyx of the kidney. It is a salvage operation that should be in the arsenal of every surgeon operating on the UPJ [87]. The robotic approach is a good option.
Extravesical ureteral reimplantation
The most commonly performed procedure in the lower urinary tract in children is antireflux ureteral reimplantation [13]. Indications for the surgical treatment of pediatric vesicoureteral reflux include severe urinary tract infections while on continuous antibiotic prophylaxis, renal scarring, and worsening or non-resolving vesicoureteral reflux. Robotic ureteral reimplantation can be done by an extravesical or intravesical approach and, of these, the extravesical approach is much more widely reported [88,89]. The extravesical procedure is a ureteral reimplantation according to the well-established Lich-Gregoir technique for achieving an antireflux mechanism. This technique is an accepted alternative to endoscopic treatment and open reimplantation techniques in pediatric patients [73]. However, open surgery remains the gold standard for ureteral reimplantation [90]. The long-term results of the antireflux procedure are evaluated in terms of preservation of differential renal function, absence of urinary tract infections, and adequate urinary drainage, with a follow-up of more than one year [91]. In a prospective study of children undergoing robotic extravesical ureteral reimplantation at eight academic centers from 2015 to 2017, 143 patients (199 ureters) were included. The majority of ureters (73.4%) had grade III or higher vesicoureteral reflux preoperatively. Radiographic resolution was present in 93.8% of ureters. Robotic ureteral reimplantation should be considered as one of several viable options for management of vesicoureteral reflux in children [92].
Appendicovesicostomy (Mitrofanoff)
Complete bladder emptying in children with bladder emptying dysfunction (neuropathic bladder) is achieved with clean intermittent catheterization (CIC). In 1980, Mitrofanoff described his technique of a continent appendicovesicostomy for patients in whom transurethral CIC cannot be carried out for any reason. When medical therapy fails in the neuropathic bladder, surgery aims to preserve upper tract function and social continence. The objective was a cystostomy with a continent opening that is easy to catheterize, associated with closure of the vesical neck. The tip of the appendix is opened into the bladder at the end of an antireflux submucosal tunnel and the other end is brought to the skin. The bladder neck is usually closed in the same operation. The continence of the vesicostomy is total and the comfort obtained is excellent [93].
The surgical technique is analogous to the Lich-Gregoir technique, to create an antireflux mechanism. The appendicocutaneostomy can be placed in the umbilicus or in the right lower abdominal quadrant [73]. Robotic continence procedures have been shown to be a safe and effective alternative [94]. An important point is to assess whether a simultaneous bladder augmentation is performed [95].
In patients with neurogenic bowel and bladder secondary to spinal dysraphism, who tend to have multiple limb spasms and spinal scoliosis, RAS is a good option [96]. Complex lower urinary tract reconstruction is defined as reconstruction of the bladder neck or of catheterizable continent channels, or both, as well as the creation of an antegrade Malone continence enema for better management of constipation [97].
Augmentation cystoplasty
Augmentation cystoplasty is often performed in the context of other reconstructive procedures such as appendicovesicostomy or bladder neck reconstruction. Bladder augmentation can be performed using a megaureter when nephrectomy is anticipated. At present, ileocystoplasty represents the currently accepted standard of care [73]. In the robotic technique, a 20 cm segment of ileum is selected and isolated. Intestinal continuity is restored and, postoperatively, the bladder is drained with a suprapubic tube, a urethral catheter, and another catheter through the Mitrofanoff channel [98]. Another tissue option for bladder augmentation is the sigmoid colon; this technique significantly improved urodynamic parameters, such as bladder accommodation and filling pressure, in children with myelomeningocele-associated neurogenic bladder [99].
Pediatric urology miscellaneous procedures
The miscellaneous pediatric urology procedures are surgeries in the pelvic area, a narrow field that is ideal for the robotic approach. There are reports of RAS for excision of symptomatic bladder diverticulum [36], excision of symptomatic or malignant urachal cysts [100], excision of posterior urethral diverticula, mainly after surgical reconstruction of imperforate anus [101], removal of the prostatic utricle, a malformation due to incomplete regression of the Müllerian ducts [102], and varicocele repair, a condition that has a significant association with infertility [103].
General surgery (gastrointestinal and hepatopancreatobiliary)
RAS in general surgery and thoracic surgery has not yet reached the magnitude that it has in pediatric urology. Robotic procedures that have been reported include fundoplication, cholecystectomy, choledochal cyst resection, hepatectomy, colectomies, and proctectomy with ileal pouch-anorectal anastomosis [104]. Other techniques are Thal fundoplication and salpingo-oophorectomy [8], and the Soave pull-through procedure for Hirschsprung's disease [105]. Less common applications include RAS for the treatment of duodenal obstruction, such as the Ladd procedure in intestinal malrotation, duodenojejunostomy for superior mesenteric artery syndrome [106], repair of congenital duodenal atresia [107], and gastroduodenal obstruction due to trichobezoar [14].
Fundoplication
Fundoplication is the most widely performed and reported robotic-assisted surgery in pediatric general and thoracic surgery [3].
When comparing conventional laparoscopic primary fundoplication and RAS in children, there were no differences between the two groups in terms of operative time, length of hospital stay, conversions, and complications. The conclusion is that RAS is a safe alternative to conventional laparoscopic surgery [111]. Regarding the advantages of RAS, a systematic review of primary fundoplication showed that postoperative complications are reduced in the robotic group, because RAS offers greater dexterity and precision in the subphrenic space than laparoscopy [112]. In addition, RAS plays an important role in difficult cases, such as obese patients, large hiatal hernias, and redo fundoplication [113,114]. On the other hand, with conventional laparoscopy, only skilled pediatric surgeons resolve difficult cases [114].
Choledochal cyst resection
Choledochal cyst resection and reconstructive Roux-en-Y hepaticojejunostomy are technically complex, and only in Southeast Asian centers is there extensive experience with the laparoscopic technique. In the rest of the world's pediatric centers, most of these surgeries are performed with the open technique [115].
In 2006, the first pediatric RAS choledochal cyst resection was reported [116]. Since then and up to 2019, several authors have reported cohorts of 1 to 39 pediatric patients undergoing RAS choledochal cyst resection [109]. A recent publication reported 70 cases with RAS and 70 cases by conventional laparoscopy and concluded that RAS choledochal cyst excision and hepaticojejunostomy were associated with better short-term intraoperative and postoperative outcomes, proving the safety and feasibility of RAS in children with choledochal cysts [117]. The ideal treatment for children with choledochal cyst nowadays is MIS, either laparoscopic, by expert pediatric surgeons, or RAS, in institutions where the technology is available. If neither situation is present, the author recommends continuing with the open approach to offer children the greatest safety and effectiveness [109].
Kasai procedure
The Kasai procedure can be ideal for RAS because it is a complex technique, and the robot provides ideal instrumentation to dissect the porta hepatis and find the portal plate [118]. To date, there are very few reported cases of the Kasai operation performed with RAS for biliary atresia. The experience is larger with conventional laparoscopy, especially in Southeast Asian countries, where the pathology is more frequent than in other parts of the world [115].
Pancreatic pathology
There are very few publications on pancreatic pathology in children treated with RAS; we found only case reports about tumor enucleation, distal pancreatectomy, subtotal pancreatectomy, and pancreaticoduodenectomy. Traditional open surgeries have been largely replaced by MIS, including laparoscopic surgery and RAS.
RAS distal spleen-sparing pancreatectomy is safe and feasible in pediatric patients with insulinoma [119]. Also, robotic enucleation is indicated in small neuroendocrine tumors of the pancreas. This technique provides the dual benefits of minimal invasiveness and good preservation of the pancreatic parenchyma. The experience has demonstrated the feasibility and safety of the RAS enucleation, with an excellent curative effect for pediatric insulinoma [120,121].
Soave pull-through
Hirschsprung's disease (HSCR) has also been shown to benefit from robotic surgery; the outcome of the totally robotic Soave pull-through for HSCR is promising. This technique is particularly suitable for older HSCR patients, even those requiring redo surgery, and represents a valid alternative for HSCR patients. RAS has been used in cases of total colonic aganglionosis, aganglionosis reaching the hepatic flexure, or disease limited to the rectosigmoid, and its versatility has been confirmed. The published results are promising; continence was scored from excellent to good in all patients who could be evaluated in this regard [105]. In the first series of infants weighing less than 6 kg who underwent the Swenson procedure with RAS, morbidity did not increase [122].
Treatment of duodenal obstruction
Superior mesenteric artery syndrome is a rare condition that results from intermittent functional obstruction of the third part of the duodenum. The diagnostic criteria are clinical, radiological and endoscopic. The classic approach has been open surgery [123]. There are case reports of robotic Roux-en-Y duodenojejunostomy as a surgical option for the treatment of this condition [106,124].
Robotic repair of congenital duodenal atresia may help overcome the obstacles presented by traditional rigid laparoscopic instruments, owing to the difficulty of constructing a precise duodenal anastomosis; with robotic surgery the procedure is relatively straightforward [107]. Regarding gastroduodenal obstruction due to trichobezoar in children, several laparoscopic reports can be found. We operated with RAS on a 12-year-old girl weighing 23 kg with pica and a psychological disorder, with success and without postoperative morbidity [14].
Cholecystectomy
Elective robot-assisted cholecystectomy is relatively prevalent in the literature [13]. Multiport robotic cholecystectomy and single-site robotic cholecystectomy are the approach options. Robotic cholecystectomy is safe and effective and serves as an excellent introductory procedure for pediatric surgeons considering the development of a pediatric robotic surgery program, useful for training [125].
Splenectomy
Splenectomy remains the mainstay of treatment for the sequelae of pediatric hereditary hematologic disorders. These conditions can lead to splenomegaly, medically refractory cytopenias, and dependence on transfusions. Laparoscopic splenectomy is the standard of surgical care. Robot-assisted splenectomy is an option and is associated with a shorter length of hospital stay compared to laparoscopic splenectomy [126].
Gynecological surgery
There are case reports and series documenting a variety of robotic gynecological surgeries in children with favorable results. Procedures consisted of ovarian cystectomies, oophorectomies for ovarian masses, and salpingo-oophorectomy for gonadal dysgenesis [127]. In addition, robotic resection of mature cystic teratomas and mucinous ovarian tumors has been reported. It is an easy and safe technique in selected patients, and also for the treatment of complex gynecological diseases [128]. Surgeries in the pelvis have a reduced field of work and are ideal for the robotic approach.
Heller's cardiomyotomy for achalasia
Achalasia is rare in children. Surgical options include open, laparoscopic, and robotic approaches, and Heller's myotomy remains the treatment of choice. Concomitant partial posterior fundoplication is suggested for all patients. Heller's robotic myotomy for esophageal achalasia in children has been shown to be safe and effective. Both laparoscopic and robotic esophageal myotomy are comparable in their results. However, robotic surgery is superior in terms of avoiding mucosal perforation; this complication occurred in 16% of patients in the laparoscopic group [129][130][131].
Management for anorectal malformations
In anorectal pull-through for anorectal malformations, robotic technology assists the pediatric surgeon by increasing dexterity and precision of movement. This is important in anorectal malformation surgery, where the dissection of the fistula and the pull-through of the rectum into the muscular complex are crucial to achieve future continence. RAS permits easier closure of the fistula, improves the reconstruction technique, and minimizes trauma to important surrounding structures, providing better visualization of the muscular complex. Robotic anorectal pull-through makes use of fundamental concepts learned from decades of open repair of high anorectal malformations and combines them with modern advances in surgical instrumentation and techniques [132].
Thoracic robotic surgery
Global experience with thoracoscopic surgery in children spans more than 30 years, compared to the much shorter experience with robot-assisted thoracic surgery (RATS). The learning curve of thoracoscopy is longer than that of RAS. Thoracic MIS reduces the risk of thoracic and spinal deformities after lung resection in children. Lobectomy is one of the robotic techniques most frequently performed in children [133].
Early publications on RATS in children reported cardiovascular techniques such as patent ductus arteriosus (PDA) closure and vascular ring division [134,135]. In 2000, Le Bret et al. [134] reported 56 children operated on for surgical PDA closure, 28 cases with thoracoscopy and 28 cases with a robotic approach, using the ZEUS robotic surgical system (Computer Motion, Inc., Goleta, CA, USA). Their results were comparable with both approaches.
In a systematic literature search by Cundy et al. [13] of reported cases of robotic surgery in children, comprising 2393 procedures, thoracic procedures accounted for 3.2% (77 surgeries and 12 different techniques), and the conversion rate for thoracic procedures was 10%. In this report, the five most frequent RATS procedures were lobectomy (18), thymectomy (14), benign mass excision (9), diaphragmatic plasty (8), and malignant tumor resection (5).
There are three reported series with a greater number of cases, each with 11 RATS procedures in children (33 in total). In order of frequency, the procedures include resection of tumor masses (8), lobectomy (7), diaphragmatic plication (4), diaphragmatic plasty (3), correction of esophageal atresia (3), resection of bronchogenic cysts (3), and single procedures of segmentectomy, esophageal duplication resection, pleural and lung biopsies, gastric tube/esophagoplasty, and Heller myotomy. Overall, there were 6 (18%) conversions to open surgery in neonatal patients and 3 (9%) postoperative complications. The neonatal thorax represents the greatest obstacle to the adaptation of the 5 or 8 mm robotic platform instruments [20,133,136]. In RATS, children weighing more than 4 kg are more easily treated [15].
Pulmonary lobectomy
The most common RATS procedure in children is lobectomy. The first publication on robotic lobectomy including pediatric cases was by Park et al. [137] in 2006. Series with few cases of segmental lung resections and lobectomies have been published with excellent results, with conversions occurring mainly in the earliest attempts [14,15,133,136]. Regarding the disadvantages of RATS lobectomy, a prolonged total operative time has been reported, but without a negative effect, since it did not increase postoperative morbidity and mortality [138].
Congenital diaphragm abnormalities, including eventration and Morgagni and Bochdalek diaphragmatic hernias, have been successfully repaired through the use of conventional MIS. However, some reports have shown a high recurrence rate for some defects. Robotic surgery is an alternative for closing diaphragmatic hernias more efficiently [139].
Some authors prefer the thoracic approach to repair Bochdalek's diaphragmatic hernia, but infants weighing less than 2.5 kg are better treated with the abdominal approach. The author performed one case of Morgagni's diaphragmatic hernia and another case of Bochdalek's diaphragmatic hernia via the abdominal route. Robotic assistance allows the surgeon to more easily reach this area to suture diaphragmatic defects [139].
Acquired anomalies, such as diaphragmatic paralysis, can also be resolved with RATS [14,139].
Thymectomy
Radical thymectomy is part of the comprehensive treatment of myasthenia gravis. The feasibility and effectiveness of robotic thymectomy are evident in a cohort study [140]. In addition, performing an "early thymectomy" (within a year of diagnosis) resulted in higher remission rates compared to "late thymectomy" [141], including minimizing the adverse effects of immunosuppression in pediatric patients [142].
In recent studies including 49 children, thoracoscopic thymectomy was also shown to be safe for children with juvenile myasthenia gravis (JMG) [143,144]. Two other studies, with 9 and 18 children, reported the same results [145,146]. Robotic thymectomy is a safe procedure, with few complications and no mortality. Thymectomy should be offered as part of multimodal therapy for treating children and adolescents with acetylcholine receptor antibody-positive JMG [146].
Other robotic thoracic procedures
There are RATS publications of other specific procedures, such as tracheopexy for the treatment of severe tracheomalacia [147], and reports of pediatric cases of resection of a bronchogenic cyst [148,149].
Oncologic robotic surgery
Presently, the use of MIS in patients with cancer is progressing. However, the role of MIS in children with solid neoplasms is less clear than it is in adults. Although the use of diagnostic MIS to obtain biopsy specimens for pathology is accepted in pediatric surgical oncology, there is limited evidence to support the use of MIS for the resection of malignancies (solid tumors) in the thorax and abdomen in children [150].
Open surgery remains the main technique for the resection of solid tumors in children. RAS offers technical and ergonomic advantages that can make MIS more achievable in this environment, allowing benefits for both the patient and the surgeon. Reduced postoperative recovery time and faster initiation of adjuvant therapy are the most important benefits for the patient [104].
A systematic search of multiple electronic databases identified 23 publications reporting a total of 40 cancer cases. The indications for surgery comprised more than 20 different pathologies. One third of the tumors were malignant. Most of the procedures involved abdominal or retroperitoneal tumors in adolescent patients. There were two isolated oncological adverse events: one tumor spillage and one case of residual disease. The evidence is limited to case reports and small case series only. Pediatric cancer surgery is an area of opportunity for robotic surgery. Its technical challenges create the opportunity to develop robotic approaches that meet the demands of complex cancer procedures [151].
Thoracic tumors
Anecdotally, the robot appears to be well adapted to complex mediastinal dissection and has been used in the excision of a left ventricular myxoma [152] and in the excision of a complex massive leiomyoma of the esophagus [153]. The robot offered excellent visualization and ease of resection. The other case, a complex massive retrocardiac esophageal leiomyoma, was successfully removed using RAS. In the latter case, intraoperative esophagoscopy and transillumination were useful adjuncts to identify the esophagus and develop a safe extramucosal dissection plane.
There is a publication with five pediatric patients with a mean age of 9.8 years and weight of 41.5 kg, who underwent robotic resection of a mediastinal thoracic mass, including a ganglioneuroma, ganglioneuroblastoma, teratoma, germ cell tumor, and a large inflammatory mass of unclear etiology. The application of RATS in malignant solid tumors in children in selected cases is an option, but oncological surgical principles should be applied [154].
Abdominal tumors
There are mostly individual case reports for robot-assisted abdominal oncological surgery in children.
Neuroblastoma is the most common extracranial solid tumor in children and the most common malignancy in infants. Complete resection is curative in low-stage disease. Robotic surgery makes it possible to skeletonize the abdominal blood vessels within the tumor and to resect the tumor piecemeal, including in stage IV retroperitoneal neuroblastoma [155,156].
Juvenile cystic adenomyoma is the focal presence of ectopic endometrial glands and stroma within the uterine myometrium. In one reported case, a 15-year-old adolescent girl underwent RAS excision of a 4 cm cyst; the uterus was closed in four layers and the postoperative period was uneventful [157].
Management of rhabdomyosarcoma: a 22-month-old, 8-kg boy with an embryonal rhabdomyosarcoma of the urinary bladder and prostate was treated with robot-assisted radical cystoprostatectomy, and the postoperative course was uneventful [158]. Another application of RAS is the dissection of retroperitoneal lymph nodes in selected pediatric and adolescent patients with paratesticular rhabdomyosarcoma or germ cell tumor of the testicle; a report of one case of each of these conditions describes treatment with good results. The robotic approach to extended lymph node dissection is suitable [159].
Robotic partial nephrectomy has been reported in appropriately selected children with renal cell carcinoma. However, there are limited reports of laparoscopic or robotic partial nephrectomy for cancer surgery in children. RAS allows for an oncologically sound resection of partial nephrectomy, as well as extended lymph node dissection [160].
Robotic adrenalectomy is an increasingly used procedure in patients with a variety of surgical adrenal lesions, including adenomas, aldosteronomas, pheochromocytomas, and adrenal gland metastases. Emerging literature also supports the role of RAS in partial adrenalectomy [161]. With robotic partial adrenalectomy, successful preservation of adrenocortical function is achieved [162].
RAS is an emerging technique for the treatment of pancreatic neoplasms. Robotic spleen-preserving distal pancreatectomy can be considered in younger patients presenting with a solid pseudopapillary tumor in the distal pancreas, as an alternative to open pancreatectomy [163]. One report describes 15 adolescents with pancreatic head tumors treated with MIS: pancreaticoduodenectomy was performed, 10 cases with conventional laparoscopic surgery and 5 cases with RAS. The pathological diagnoses were solid pseudopapillary neoplasms (8), neuroendocrine neoplasms (3), intraductal papillary mucinous neoplasm (1), cystic fibroma (1), serous cystadenoma (1), and Ewing's sarcoma (1). Six patients presented postoperative complications. The median follow-up was 37 months. The patient with Ewing's sarcoma was diagnosed with liver metastasis 41 months after surgery and died 63 months after surgery. All other patients survived without a tumor [164].
In robotic gynecological surgery for girls with ovarian disease, the ideal is to maintain the morphology of the ovary, which is beneficial for the recovery of postoperative ovarian function, especially in benign diseases. In centers where robotic surgery is available, ovarian tumors are a suitable entry procedure [128].
Robotic surgery can also be used in supportive care in pediatric oncology including placement of gastrostomy tubes and ovarian transposition [104].
The fundamental oncological principles of no tumor spillage and total resection of tumor margins can be adhered to with RAS; a specific concern is the lack of haptics having an impact on the surgeon's ability to differentiate cancerous from healthy tissue. However, it has been noted that the loss of tactile feedback is very well compensated for by the excellent optical system [158]. Cancer patients are necessarily followed for recurrences, and only long-term prospective studies of robotic resections can guarantee adherence of RAS to oncological principles.
Contraindications in children for MIS in tumors, including robotic surgery, are large or fragile tumors that carry a high risk of fracture and tumor spillage, significant adhesions from previous operations, and significant deterioration of respiratory or cardiovascular physiology [104].
Otorhinolaryngology
Pediatric robotic surgery has been used least frequently in otorhinolaryngology [72]. Until now, the majority of RAS applications in otorhinolaryngology have used a transoral approach, particularly useful for masses of the base of the tongue. Open surgery can facilitate access to the oropharyngeal region, including the base of the tongue, but can lead to the morbidity of splitting the lip and jaw or require pharyngotomy. As a result, the robotic transoral approach is being used [165]. In the near future, we believe that transoral robotic surgery may become the gold standard.
In a publication of pediatric cases of robotic transoral surgery including 41 patients, with ages between 2 months and 19 years, the techniques were lingual tonsillectomies (16), lingual and lingual-based tonsillectomies (9), 2 malignant diseases in the oropharynx (high-grade undifferentiated sarcoma and biphasic synovial sarcoma), a thyroglossal duct cyst at the base of the tongue, laryngeal cleft cysts (11), a posterior glottic stenosis, and a surgery for congenital true vocal cord paralysis. One minor intraoperative complication occurred. No patient required postoperative tracheostomy. The conversion rate was 9.8% [166].
Author's experience in robotic surgery
From March 2015 to January 2021, a prospective registry of cases has been maintained since the beginning of our experience. We have performed 258 robot-assisted laparoscopic and thoracoscopic surgeries (RALTS) in 227 patients (224 children and 3 adults) in a public hospital and two private hospitals in Mexico City. Regarding gender, 52.4% of patients were male and 47.6% female. The averages (and ranges) of age, weight, and height were 79.5 months (2 to 204), 26.8 kg (4.4 to 102), and 114.5 cm (55 to 185), respectively. The smallest patient was 2 months old, weighed 4.4 kg, and measured 57 cm in height; a left pyeloplasty was performed. The adult patients were 31, 63, and 64 years old.
In the robotic thoracic surgery group, in order of frequency, the techniques performed were lobectomy 4 (40%), diaphragmatic plication or plasty 4 (40%), a bronchogenic cyst resection (10%), and a pleural biopsy (10%). In this robotic thoracic surgery group, the conversion rate was 20% and postoperative complications occurred in 10%. In this group of RAS, 5 different techniques were performed.
In the robotic oncological surgery group, the techniques performed were adrenalectomy in 2 cases (one for adenoma and one for pheochromocytoma) and single cases of anterior mediastinal teratoma resection, Ewing tumor resection, stage 3 Wilms tumor resection in a horseshoe kidney, partial gastrectomy for carcinoid tumor, retroperitoneal lipoma resection, and conservative resection of an ovarian cyst. In this robotic cancer surgery group, the conversion rate was 12.5% and there were no complications. In this group of RAS, 8 different techniques were performed. The adult cases were pheochromocytoma, adrenal adenoma, and carcinoid tumor.
Planning
The success of a pediatric robotic surgery program (PRSP) depends on a well-structured plan. Implementing a PRSP requires institutional support and a comprehensive, detail-oriented plan that takes into account training, supervision, cost, and case volume. Given the lower prevalence of robotic surgery in children, in many cases it may be more feasible to implement pediatric robotic surgery within an adult robotic surgery program. The pediatric surgery team determines its goals for volume expansion, surgical case selection, surgeon training, and surgical innovation within the specialty. In addition to the clinical model, a robust economic model that includes marketing must be present, especially in private hospitals [167].
Development of the program
The development of a robotic surgery program is associated with significant initial costs due to the initial investment in the robotic surgical system [168]. Adequate surgical volume is essential for both feasibility and ensuring adequate results for patients [64]. The surgeon should start with less complex index cases and gradually progress to more advanced reconstructive procedures with growing experience [61].
Less complex cases, such as a fundoplication, are excellent robotic training cases not only for surgeons and anesthesia personnel, but also for technical and nursing personnel assisting in the operating room [169].
Additionally, robotic cholecystectomy is a suitable procedure for first few surgeries when pediatric surgeons are beginning robotic surgery [125]. It is imperative to have a core group of specific personnel familiar with robotic procedures to increase efficiency. Adequate and systematic performance of the entire team in simple cases, then translates into better performance in more complex cases.
It is estimated that approximately 100 cases are required for a surgical team to obtain consistent results in pediatric robotic surgery [167]. The learning curve for each procedure varies but is shorter than with laparoscopy; for example, for robotic pyeloplasty it is 15 to 20 cases to obtain similar results and surgical success [170]. Experience shows that in complex or reconstructive techniques, such as pyeloplasty, ureteral reimplantation, biliodigestive reconstruction, and pulmonary lobectomy, among others, surgeons using the open approach switch to the robot-assisted approach.
Robotic pediatric surgery team
There are three main actors involved in the implementation of a pediatric robotic surgery program: i. Surgeons and anesthetists, ii. Nurses and iii. Administration [168].
Successful robotic surgery has been described as requiring four elements: i. a good understanding of the surgical procedure, ii. excellent surgical skills, iii. frequent teamwork training, and iv. trocar placement [171]. Adequate surgical volume is critical both for feasibility and to ensure good patient outcomes. Cases should be performed once a week to maintain surgical skill and advance to more advanced reconstructive procedures.
There has been a growing role for simulation and surgical training. Currently, the robotic surgery simulators available for training are the Mimic and da Vinci simulators. The simulators evaluate the skills in the different tasks that the surgeon performs. It is desirable that surgeons have previous experience in conventional laparo-thoracoscopy.
Training, accreditation and credentialing
Training and accreditation. At present, the certification process to become a robotic surgeon depends on the manufacturer. Intuitive Surgical (Sunnyvale, CA, USA), the manufacturer of the da Vinci Surgical System, has a separate training program that takes surgeons from console setup to the monitoring phase of initial cases with support from a proctor.
This process should be more structured, and a curriculum for robotic surgeons should be created; this is essential for the training and objective evaluation of future robotic surgeons. The plan should include the definition of outcomes, specific training tasks and their validation, as well as the establishment of measurements and approval criteria to improve the quality of robotic surgery [172]. Academic organizations and hospital institutions can lead the implementation of a structured curriculum.
An accreditation proposal for the robotic surgeon is the following. After the Intuitive Surgical training program (step 1), the surgeon performs the first five cases with a co-surgeon (step 2), who has the dual role of preceptor and supervisor: he or she assesses the surgeon who is learning, imparts new skills, and takes control of the operative case if the clinical situation warrants it (the tutor allows the trainee to gain robotic experience safely in the first index cases). This is followed by 6 to 10 cases in which the tutor/supervisor acts as a bedside assistant (step 3). The preceptor/supervisor reports to the institution's robotics committee on the skills and progress of the trainee, evaluating whether independent practice can be continued by the surgeon (step 4), based on the favorable evaluation of the preceptor [167].
The author's experience supports this accreditation proposal, so that the learning curve of the surgeon who is starting out in robotic surgery is a satisfactory experience, and the patient is offered the greatest safety even during the learning-curve stage.
Program information data log
Data collection is very important. Collecting, analyzing, and presenting data prospectively to institutional colleagues, at a minimum, allows objective analysis of results, comparative studies against other approaches, and publication.
The future of robotic surgery in children
Recently, the Senhance Robotics System (Transenterix, Morrisville, NC) has begun offering 3 mm instrument sizes, which could make robotic surgery more technically feasible for even the smallest pediatric patients. Although not currently approved for use in pediatric surgery, the Transenterix platform was evaluated in an experimental study in which surgeons were able to successfully perform intracorporeal suturing and knot tying in body cavities as small as 90 ml, and the instruments could be inserted directly without the need for ports, reducing the required distance between ports [5]. This Transenterix platform has haptic feedback.
With advancing technology and the demand for more compact robotic platforms, the future of robotic surgery will doubtlessly bring a reduction in instrument size and an improvement in haptic feedback. This puts the pediatric patient, and in particular the newborn, at the forefront. Reconstructive surgery, such as esophageal and intestinal anastomosis, which requires a delicate and more magnified approach, will benefit enormously from these advances. The pediatric and neonatal patient must be at the forefront of research into the future of robotic surgery [173].
We are at a dawn of a new age in surgery, as we witness the dramatic growth in robotic surgery. The proliferation and commercialization of new robotic surgical systems over the next few years will drive competition, lower cost, and accelerate the adoption of these technologies [174].
Artificial intelligence. More sophisticated systems will track the surgeon's movements and patient data and synchronize with outcomes data to provide us with early warning systems for complications. One more interesting aspect is how these systems will participate in the surgical decision-making process in real time. We are already gathering data on tissue perfusion, helping us decide on the appropriate location for an anastomosis. Additionally, using artificial intelligence, real-time data will be collected from many sources, including electronic medical records, anesthesia monitoring systems, video images, and surgeon data for making decisions that we will increasingly rely on [174].
Digital surgery (Surgery 4.0), the next frontier of surgery, is defined as the convergence of surgical technology, real-time data and artificial intelligence. Following previous waves of disruption, which saw the transition from open (Surgery 1.0) to laparoscopic surgery (Surgery 2.0), and from laparoscopic surgery to robotic surgery (Surgery 3.0), the digital paradigm in surgery is bringing unprecedented changes to the century-old field. The power of linked data and advancements in artificial intelligence are beginning to make a real impact in the way surgeries are performed, reducing well-documented variability in surgical process and outcomes.
Companies, investors, surgeons, and health systems are racing to accelerate the digitization of surgery in order to dramatically improve patient outcomes while reducing cost and inefficiencies, improving patient access, reducing inequities between populations, improving quality, and delivering more personalized surgical care; digital surgery is the next apex in surgery [175].
Verb Surgical is building a digital surgery platform that combines robotics, advanced visualization, advanced instrumentation, data analysis, and connectivity. Surgery 4.0 or digital, which seeks to achieve less invasive and smarter interventions, "marks the beginning of a true democratization of the discipline". The Verb Surgical platform will be an option in the near future of digital surgery [175, 176].
Conclusions
In this chapter, the definition, characteristics, advantages, benefits, limitations, and applications of robot-assisted surgery in children are addressed, as well as the surgical areas of its application in the pediatric population, which include urological, general, thoracic, oncological, and otorhinolaryngological surgery.
To date, there are multiple publications that demonstrate that robotic surgery in children is safe and effective, and it is important to offer children its benefits. However, a frequent conclusion of published studies on robotic surgery in children is the impossibility of carrying out comparative studies with all the scientific rigor, which makes it impossible to reach solid conclusions about the advantages and benefits in the pediatric population.
Robotic surgery preferably applied to difficult and complex cases adds value to patient care, and is an important balancing factor against the apparently higher cost (main drawback), compared to open and laparo-thoracoscopic surgery.
The author has included his results in pediatric robotic surgery; compared with other published series of similar cases, the experience is favorable and encouraging.
Globally, to date, few pediatric surgeons have adopted robot-assisted surgery, in contrast to the larger number of pediatric urologists, who have brought its benefits to more children. To date, robotic surgery has been applied less often to malignant tumors in children.
Recommendations for the implementation of a pediatric robotic surgery program are included. With robotic assistance, it is important to mention that the learning curve is shorter than with laparo-thoracoscopic surgery. It is necessary for each institution to establish the curriculum for the accreditation and credentialing of the robotic surgeon. A proposal is included.
The future will be fascinating with upcoming advancements in robotic surgical systems, the use of artificial intelligence, and digital surgery.
Author details
Mario Navarrete-Arellano, Hospital Central Militar and Hospital Angeles Lomas, Mexico City, Mexico
*Address all correspondence to: drcirugiaroboticamx@gmail.com | 2021-05-05T00:09:57.313Z | 2021-03-09T00:00:00.000 | {
"year": 2021,
"sha1": "c3b4ee6e0ee6fdba5ab4b73568476955c8a3ee2b",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/75621",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "628ceb278393eb2426afce1d12f536885680c073",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119617603 | pes2o/s2orc | v3-fos-license | The cotype zeta function of $\mathbb{Z}^d$
We give an asymptotic formula for the number of sublattices $\Lambda \subseteq \mathbb{Z}^d$ of index at most $X$ for which $\mathbb{Z}^d/\Lambda$ has rank at most $m$, answering a question of Nguyen and Shparlinski. We compare this result to recent work of Stanley and Wang on Smith Normal Forms of random integral matrices and discuss connections to the Cohen-Lenstra heuristics. Our arguments are based on Petrogradsky's formulas for the cotype zeta function of $\mathbb{Z}^d$, a multivariable generalization of the subgroup growth zeta function of $\mathbb{Z}^d$.
Introduction
A fundamental problem in the field of subgroup growth is understanding the number of subgroups of finite index $n$ in a fixed group $G$. In many cases, analytic properties of the subgroup growth zeta function $\zeta_G(s)$ provide useful information. This is the Dirichlet series
(1.1) $\zeta_G(s) = \sum_{H} [G : H]^{-s}$,
where $H$ ranges over all finite index subgroups of $G$. If the number of subgroups in $G$ of index $n$ grows at most polynomially, then the Dirichlet series defining $\zeta_G(s)$ converges absolutely for $\operatorname{Re}(s)$ sufficiently large. An analytic continuation of the series and knowledge of the locations and orders of its poles would provide information on asymptotics for the number of subgroups of index less than $X$ as $X \to \infty$.
One of the most basic examples is the subgroup growth zeta function of the integer lattice $\mathbb{Z}^d$, which turns out to have a simple expression as a product of Riemann zeta functions:
(1.2) $\zeta_{\mathbb{Z}^d}(s) = \zeta(s)\zeta(s-1)\cdots\zeta(s-(d-1))$.
See the book of Lubotzky and Segal for five proofs of this fact [19]. Since $\zeta(s)$ has a simple pole at $s = 1$, standard Tauberian techniques immediately give the asymptotic
(1.3) $\#\{\Lambda \subseteq \mathbb{Z}^d : [\mathbb{Z}^d : \Lambda] < X\} \sim \dfrac{\zeta(2)\zeta(3)\cdots\zeta(d)}{d}\, X^d$
as $X \to \infty$.
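As a quick illustration of (1.2) and (1.3), the following Python sketch (our own addition, not part of the paper; the function names are ours) checks the smallest nontrivial case d = 2, where the number of sublattices of index n equals the divisor sum sigma(n), the n-th Dirichlet coefficient of zeta(s)zeta(s-1).

    # Minimal numerical check of (1.2) and (1.3) for d = 2, assuming the standard
    # correspondence between sublattices of Z^2 and 2x2 Hermite normal forms
    # [[a, b], [0, c]] with 0 <= b < c (see Proposition 1.3 below).
    from math import pi

    def sublattice_counts(max_index):
        """counts[n] = number of sublattices of Z^2 of index n."""
        counts = [0] * max_index
        for a in range(1, max_index):
            for c in range(1, (max_index - 1) // a + 1):
                counts[a * c] += c  # c choices for the off-diagonal entry b
        return counts

    def sigma(n):
        return sum(d for d in range(1, n + 1) if n % d == 0)

    counts = sublattice_counts(2000)
    assert all(counts[n] == sigma(n) for n in range(1, 60))

    X = 1999
    total = sum(counts[1:])                  # sublattices of index <= X
    predicted = (pi ** 2 / 6) / 2 * X ** 2   # (zeta(2)/2) X^2, the d = 2 case of (1.3)
    print(total / predicted)                 # ratio approaches 1 as X grows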
1.1. The proportion of lattices with given corank. A number of more refined questions about the distribution of sublattices of $\mathbb{Z}^d$ can be asked. Motivated by work of Nguyen and Shparlinski [21], we investigate the distribution of sublattices of $\mathbb{Z}^d$ whose cotype has a certain form. The cotype of a sublattice $\Lambda \subseteq \mathbb{Z}^d$ is defined as follows. By elementary divisor theory, there is a unique $d$-tuple of positive integers $(\alpha_1, \ldots, \alpha_d) = (\alpha_1(\Lambda), \ldots, \alpha_d(\Lambda))$ such that the finite abelian group $\mathbb{Z}^d/\Lambda$ is isomorphic to the sum of cyclic groups
(1.4) $(\mathbb{Z}/\alpha_1\mathbb{Z}) \oplus (\mathbb{Z}/\alpha_2\mathbb{Z}) \oplus \cdots \oplus (\mathbb{Z}/\alpha_d\mathbb{Z})$,
where $\alpha_{i+1} \mid \alpha_i$ for $1 \le i \le d-1$. We call the $d$-tuple $\alpha(\Lambda) := (\alpha_1(\Lambda), \ldots, \alpha_d(\Lambda))$ the cotype of $\Lambda$. The largest index $i$ for which $\alpha_i(\Lambda) \neq 1$ is called the rank of $\mathbb{Z}^d/\Lambda$ and the corank of $\Lambda$. By convention, $\mathbb{Z}^d$ has corank 0. A sublattice $\Lambda$ of corank 0 or 1 is called cocyclic, i.e., $\mathbb{Z}^d/\Lambda$ is cyclic, or equivalently, $\Lambda$ has cotype $([\mathbb{Z}^d : \Lambda], 1, \ldots, 1)$. Nguyen and Shparlinski study the distribution of cocyclic sublattices of $\mathbb{Z}^d$ and pose several related questions. Let $N^{(m)}_d(X)$ be the number of sublattices $\Lambda$ of $\mathbb{Z}^d$ of index less than $X$ such that $\Lambda$ has corank at most $m$. In particular, $N^{(1)}_d(X)$ is the number of cocyclic sublattices of $\mathbb{Z}^d$ of index less than $X$. Throughout this paper we use $\prod_p$ to denote a product over all primes. Rediscovering a result of Petrogradsky [22] by more elementary means, they prove an asymptotic formula for $N^{(1)}_d(X)$. Comparing this to the asymptotic (1.3) for all sublattices, Nguyen-Shparlinski and Petrogradsky both observe that the probability that a "random" sublattice of $\mathbb{Z}^d$ is cocyclic is about 85% for $d$ large.
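To make the definition of cotype and corank concrete, the following sketch (ours, not from the paper) computes them for the sublattice spanned by the rows of an integer matrix, using only the classical characterization of elementary divisors as quotients of gcds of minors.

    # Cotype alpha(Lambda) and corank of Lambda = row span of an integer matrix M,
    # via the gcds D_k of all k x k minors: the elementary divisors are
    # d_k = D_k / D_{k-1} with d_1 | d_2 | ... | d_n, and the cotype lists them
    # in decreasing order, alpha_1 = d_n, ..., alpha_n = d_1.
    from itertools import combinations
    from math import gcd
    from functools import reduce

    def det(M):
        # Laplace expansion; fine for the small matrices used here.
        if len(M) == 1:
            return M[0][0]
        return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
                   for j in range(len(M)))

    def cotype(M):
        n = len(M)
        D = [1]  # D_0 = 1
        for k in range(1, n + 1):
            minors = [det([[M[i][j] for j in cols] for i in rows])
                      for rows in combinations(range(n), k)
                      for cols in combinations(range(n), k)]
            D.append(reduce(gcd, (abs(m) for m in minors)))
        elem = [D[k] // D[k - 1] for k in range(1, n + 1)]  # d_1 | d_2 | ... | d_n
        return elem[::-1]                                    # (alpha_1, ..., alpha_n)

    def corank(M):
        return sum(1 for a in cotype(M) if a != 1)

    M = [[2, 1, 0], [0, 2, 0], [0, 0, 3]]   # an index 12 sublattice of Z^3
    print(cotype(M), corank(M))             # [12, 1, 1] 1 -- this sublattice is cocyclic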
Nguyen and Shparlinski conclude their paper by stating that it would be of interest to obtain similar asymptotic formulas for $N^{(m)}_d(X)$ for $m > 1$ and to show that the sublattices of corank $m$ form a negligible proportion of all sublattices of $\mathbb{Z}^d$ when $m$ is sufficiently large.
In this paper we show the following theorem (Theorem 1.1).
We recall in Section 2 the definition of the q-binomial coefficient $\binom{d}{i}_{p^{-1}}$. Dividing by the number of all sublattices of index less than $X$, as given in (1.3), gives the proportion of sublattices with corank at most $m$.
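Since q-binomial coefficients recur throughout the paper, here is a small self-contained sketch (ours; it is generic and not specific to any formula in the paper) that computes them as polynomials and checks two standard properties.

    # Gaussian (q-)binomial coefficients, with two sanity checks:
    # the q-Pascal recurrence and the specialization at q = 1.
    import sympy as sp

    q = sp.symbols('q')

    def q_binom(d, i):
        """binom(d, i)_q = prod_{j=0}^{i-1} (1 - q^(d-j)) / (1 - q^(j+1))."""
        if i < 0 or i > d:
            return sp.Integer(0)
        num = sp.Integer(1)
        den = sp.Integer(1)
        for j in range(i):
            num *= 1 - q ** (d - j)
            den *= 1 - q ** (j + 1)
        return sp.cancel(num / den)

    d, i = 5, 2
    lhs = q_binom(d, i)
    rhs = q_binom(d - 1, i - 1) + q ** i * q_binom(d - 1, i)
    assert sp.simplify(lhs - rhs) == 0           # q-Pascal recurrence
    assert lhs.subs(q, 1) == sp.binomial(d, i)   # ordinary binomial at q = 1
    print(sp.expand(lhs))  # 1 + q + 2*q**2 + 2*q**3 + 2*q**4 + q**5 + q**6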
For example, the proportion of sublattices of $\mathbb{Z}^d$ of corank at most 2 converges to approximately 99.4% as $d \to \infty$, and the proportion of sublattices of $\mathbb{Z}^d$ of corank at most 3 converges to approximately 99.995%. Therefore, while sublattices of any fixed corank have positive density among all sublattices of $\mathbb{Z}^d$, they become sparser as the corank grows. This confirms an expectation of Nguyen-Shparlinski. Also of interest in our work is the method of proof. Nguyen and Shparlinski prove their results by counting solutions of linear congruence equations. Our proofs extend Petrogradsky's methods and make systematic use of the cotype zeta function of $\mathbb{Z}^d$, which he introduced in [22]. This is a multivariate generalization of the subgroup growth zeta function $\zeta_{\mathbb{Z}^d}(s)$ from (1.2). Petrogradsky computes it explicitly in terms of permutation descent polynomials.
1.2. Coranks of lattices and cokernels of matrices in Hermite normal form. Throughout this paper, for a ring $R$ we let $M_d(R)$ denote the $d \times d$ matrices with entries in $R$. For a finite abelian group $G$, we write $(G)_p$ for its Sylow $p$-subgroup. Consider the distribution on finite abelian $p$-groups of rank at most $d$ that chooses a group $G$ of rank $r$, where $r \le d$, with the probability specified in (1.8); we denote this distribution by $P^d_p$. It follows from a result of Cohen and Lenstra [8, Theorem 6.1] that the right-hand side of (1.7) is equal to the product over all primes $p$ of the probability that a group chosen from $P^d_p$ has rank at most $m$. Motivated by famous conjectures of Cohen and Lenstra on distributions of Sylow $p$-subgroups of class groups of number fields [8], Friedman and Washington prove that the distribution of cokernels of $d \times d$ random matrices with entries in the $p$-adic integers $\mathbb{Z}_p$, drawn from Haar measure on the space of all such matrices, is the distribution defined by (1.8) [12, Proposition 1]. Stanley and Wang show that this distribution arises in the study of Smith normal forms of random $d \times d$ integer matrices with entries chosen uniformly from $[-k, k]$, as $k \to \infty$. The Smith normal form of an integer matrix carries the same information as its cokernel. As $k \to \infty$, each entry is uniformly distributed modulo $p^r$ for each prime power, so this distribution of cokernels matches the one studied by Friedman and Washington, and therefore is equal to the one defined by (1.8). Going from a result for a single prime to a result involving infinitely many primes is often challenging. Stanley and Wang prove that the probability that the cokernel of a random integer matrix chosen from the model described above has rank at most $m$ is given by the right-hand side of (1.7) [30, Theorem 4.13]. The proof uses nontrivial results from number theory of Ekedahl and Poonen on greatest common divisors of outputs of multivariable polynomials [11,23].
We now interpret Corollary 1.2 in terms of cokernels of special classes of random integer matrices. A nonsingular $M \in M_d(\mathbb{Z})$ with entries $a_{ij}$ is in Hermite normal form if: (1) $M$ is upper triangular, and (2) $0 \le a_{ij} < a_{jj}$ for $1 \le i < j \le d$. We recall a basic fact about lattices and matrices in Hermite normal form (Proposition 1.3): there is a bijection between the set of sublattices of $\mathbb{Z}^d$ of index less than $X$ and nonsingular $d \times d$ matrices in Hermite normal form with determinant less than $X$.
Let $H_d(\mathbb{Z}) \subseteq M_d(\mathbb{Z})$ denote the subset of nonsingular matrices in Hermite normal form and $H_d(X) \subseteq H_d(\mathbb{Z})$ denote the subset of these matrices with determinant less than $X$. By Proposition 1.3, Corollary 1.2 is equivalent to the following statement.
In Section 4 we consider the distribution of Sylow p-subgroups of cokernels of matrices in Hermite normal form, giving an explanation for this result.
Theorem 1.5. Let $G$ be a finite abelian $p$-group of rank $r \le d$. Then
$\lim_{X \to \infty} \operatorname{Prob}_{H \in H_d(X)}\big((\operatorname{cok}(H))_p \cong G\big) = \operatorname{Prob}_{M \in M_d(\mathbb{Z}_p)}\big(\operatorname{cok}(M) \cong G\big)$.
We note that this result does not directly imply Corollary 1.4 because of subtleties involved in going from a single prime to a product over all primes. The main point of Theorem 1.5 is that Sylow $p$-subgroups of cokernels of matrices in Hermite normal form follow the same distribution as Sylow $p$-subgroups of cokernels of Haar random matrices in $M_d(\mathbb{Z}_p)$. This fits in with universality results for cokernels of families of random integer and $p$-adic matrices due to Wood [37]. However, Wood's results do not directly imply Theorem 1.5 because, in the language of [37, Definition 1], matrices in Hermite normal form are not $\epsilon$-balanced, as many entries are fixed to be 0.
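The following rough experiment (our own construction, not taken from the paper) illustrates the flavor of Theorem 1.5 for d = 3 and p = 2. Since the rank of the Sylow 2-subgroup of cok(M) equals 3 minus the rank of M over F_2, we compare the corank-mod-2 distribution over Hermite normal forms of bounded determinant with that of matrices chosen uniformly from M_3(F_2), which serves as a proxy for Haar measure on M_3(Z_2). Agreement is only expected in the limit of a large determinant bound, so the numbers match only approximately.

    # Compare the F_2-corank distribution of cok(H) for H in H_3(X) with that of a
    # uniform matrix over F_2 (a stand-in for a Haar random matrix in M_3(Z_2)).
    from itertools import product
    from collections import Counter

    def rank_mod2(M):
        rows = [sum((x % 2) << (2 - j) for j, x in enumerate(r)) for r in M]
        rank = 0
        for bit in (4, 2, 1):
            piv = next((i for i in range(rank, 3) if rows[i] & bit), None)
            if piv is None:
                continue
            rows[rank], rows[piv] = rows[piv], rows[rank]
            for i in range(3):
                if i != rank and rows[i] & bit:
                    rows[i] ^= rows[rank]
            rank += 1
        return rank

    # (a) exact corank distribution for uniform matrices over F_2
    haar = Counter(3 - rank_mod2([v[0:3], v[3:6], v[6:9]])
                   for v in product(range(2), repeat=9))

    # (b) corank distribution over Hermite normal forms of determinant < X
    X = 60
    hnf = Counter()
    for a1, a2, a3 in product(range(1, X), repeat=3):
        if a1 * a2 * a3 >= X:
            continue
        for b12, b13, b23 in product(range(a2), range(a3), range(a3)):
            hnf[3 - rank_mod2([[a1, b12, b13], [0, a2, b23], [0, 0, a3]])] += 1

    for r in range(4):
        print(r, haar[r] / sum(haar.values()), hnf[r] / sum(hnf.values()))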
1.3. Related work. More general functions of the type considered in this paper have their origin in Igusa's study of zeta functions of algebraic groups [15]. This work has become an essential tool in the theory of zeta functions of groups and rings and has been extended in various directions. See for example the paper of du Sautoy and Lubotzky [10] and the further references listed in Section 5.2. In this context, Petrogradsky's local zeta function and generalizations thereof arise naturally as multivariate $p$-adic integrals. This point of view is developed in Voll [35], both in the context of counting subgroups of a finitely generated torsion-free nilpotent group and in the study of their representation zeta functions. Further generalizations of Igusa's local zeta functions are introduced in Klopsch-Voll [16] and Schein-Voll [26], leading to numerous applications in subgroup, subring and representation growth; see e.g. [5,6,18,27,31,33].
In the theory of automorphic forms, related functions appear in the computation of Fourier coefficients of Eisenstein series on $\mathrm{GL}_{2n}$ and the local singular series of an $n$-by-$n$ square matrix, as noted in the papers of F. Sato [25] and Beineke-Bump [1]. Sato raises the interesting open question of finding a corresponding relation between local singular series and the enumeration of subgroups of an abelian group in the symplectic case.
Outline of the paper. We review Petrogradsky's work in Section 2. In Section 3 we prove our main results on the distribution of the corank. In Section 4 we prove Theorem 1.5. The utility of the cotype zeta function in the resolution of these corank problems suggests that it may be fruitful to introduce multivariate Dirichlet series to address analogous subgroup and subring growth problems in a broader context. We elaborate on this and present some further concluding remarks in Section 5.
Definition 2.1. [22] Let $d$ be a positive integer and let $a_\alpha(\mathbb{Z}^d)$ be the number of subgroups $\Lambda \subseteq \mathbb{Z}^d$ of cotype $\alpha$. We define the cotype zeta function of $\mathbb{Z}^d$:
$\zeta_{\mathbb{Z}^d}(s_1, \ldots, s_d) = \sum_{\alpha} a_\alpha(\mathbb{Z}^d)\, \alpha_1^{-s_1} \alpha_2^{-s_2} \cdots \alpha_d^{-s_d}$.
Note that $\zeta_{\mathbb{Z}^d}(s, \ldots, s) = \zeta_{\mathbb{Z}^d}(s)$, so this multivariable function generalizes the subgroup growth zeta function of $\mathbb{Z}^d$. The subgroup growth zeta function of $\mathbb{Z}^d$ has an Euler product, and Petrogradsky shows that this multivariable generalization has one as well.
where the local factor for each prime $p$ is defined analogously. One of the main results of [22] is the computation of the local factors of the cotype zeta function of $\mathbb{Z}^d$ in terms of permutation descents and q-binomial coefficients. We fix some notation and recall basic properties of these combinatorial objects following [22, Section 3], where we set $\lambda_{k+1} = 0$ and $\lambda_0 = d$. Note that $d = m_0 + m_1 + \cdots + m_k$. We define the following polynomials in $q$, with the sum taken over all nonempty subsets $\lambda \subseteq N_{d-1}$.
The polynomials $w_{d,\lambda}(q)$ that arise have been studied extensively in the combinatorial literature. The first part of Theorem 2.5 below is stated in [22].
Definition 2.4. Let $\pi \in S_d$ be a permutation. We call $i \in \{1, 2, \ldots, d-1\}$ a descent of $\pi$ provided that $\pi(i) > \pi(i+1)$. For $\pi \in S_d$, let $D'(\pi)$ denote its set of descents.
A pair (i, j) is called an inversion of π if and only if i < j and π(i) > π(j). Let inv(π) denote the number of inversions of π.
Note that d cannot be a descent of a permutation in S d .
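As a small illustration of Definition 2.4 (ours, independent of the paper's results), the code below computes descent sets and inversion numbers and checks the classical identity $\sum_{\pi \in S_d} q^{\mathrm{inv}(\pi)} = \prod_{i=1}^{d}(1 + q + \cdots + q^{i-1})$, an inversion generating function of the same general flavor as the polynomials appearing in Theorem 2.5.

    # Descent sets D'(pi), inversion numbers inv(pi), and the inversion
    # generating function of the symmetric group S_d.
    from itertools import permutations
    import sympy as sp

    q = sp.symbols('q')

    def descent_set(pi):
        """D'(pi) = { i in {1, ..., d-1} : pi(i) > pi(i+1) }."""
        return {i + 1 for i in range(len(pi) - 1) if pi[i] > pi[i + 1]}

    def inv(pi):
        """Number of inversions of pi."""
        return sum(1 for i in range(len(pi))
                     for j in range(i + 1, len(pi)) if pi[i] > pi[j])

    d = 4
    lhs = sum(q ** inv(pi) for pi in permutations(range(1, d + 1)))
    rhs = sp.Integer(1)
    for i in range(1, d + 1):
        rhs *= sum(q ** k for k in range(i))
    assert sp.expand(lhs - rhs) == 0

    print(descent_set((3, 1, 4, 2)), inv((3, 1, 4, 2)))  # {1, 3} 3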
Theorem 2.5.
(1) There exists a number $N \ge |\lambda|$ such that $w_{d,\lambda}(q)$ is a polynomial in $q$ with nonnegative integer coefficients of the form
(2) We have that
(3) We have that
To conclude this section, we compare the results of Petrogradsky described here to the work of du Sautoy and Lubotzky [10], which builds on earlier work of Igusa [15]. Theorem 5.9 of [10], specialized to $G = \mathrm{GL}_d$ and $\rho$ the standard representation, gives (2.5). (The result of [10] is specialized to a single variable, but the multivariate extension is obvious.) Petrogradsky's proof uses a cotype-preserving bijective correspondence between finite index subgroups $\Lambda$ of $\mathbb{Z}^d$ and subgroups of the finite group $\mathbb{Z}^d/N\mathbb{Z}^d$, where $N = \alpha_1(\Lambda)$. The number of the latter can be expressed in terms of q-binomial coefficients [4]. On the other hand, du Sautoy and Lubotzky interpret the $p$-part of the zeta function as a $p$-adic integral over $\mathrm{GL}_d(\mathbb{Z}_p)$, which they compute using the Iwahori decomposition. This leads to a sum over the (affine) Weyl group equivalent to (2.5). These ideas have been further developed to prove local functional equations for zeta functions of nilpotent groups and other Igusa-type zeta functions; see, e.g., [16,35].
Density results for the corank
We begin by introducing the Dirichlet series counting sublattices of $\mathbb{Z}^d$ of corank less than or equal to $m$. This is given by
$\zeta^{(m)}_{\mathbb{Z}^d}(s) = \sum_{\Lambda \subseteq \mathbb{Z}^d,\ \operatorname{corank}(\Lambda) \le m} [\mathbb{Z}^d : \Lambda]^{-s}$.
Recall that a sublattice of corank at most $m$ will have cotype $(\alpha_1, \alpha_2, \ldots, \alpha_d)$ with $\alpha_{m+1} = \cdots = \alpha_d = 1$. Therefore, in terms of Petrogradsky's expression for the cotype zeta function given in Theorem 2.3, we obtain an explicit expression for $\zeta^{(m)}_{\mathbb{Z}^d}(s)$. The analytic properties of $\zeta^{(m)}_{\mathbb{Z}^d}(s)$ will lead to our desired density results.
To complete the proof of Proposition 3.1, it remains to evaluate one further expression. In order to go further we need the intermediate result of Lemma 3.3 below.
This lemma (Lemma 3.2) will be used in the proof of Lemma 3.3 below. We note in passing that setting $e = 0$ and letting $n \to \infty$ yields the generating series for partitions in terms of the Durfee number generating series.
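For the reader who wants to see the Durfee-square phenomenon concretely, the following short script (ours; it verifies the classical Durfee square identity, which is how we read the remark above) checks the identity as truncated power series.

    # Verify, as truncated power series, the classical Durfee square identity
    #   prod_{k>=1} 1/(1 - q^k) = sum_{m>=0} q^(m^2) / prod_{k=1}^{m} (1 - q^k)^2.
    N = 30

    def mul(a, b):
        c = [0] * (N + 1)
        for i, ai in enumerate(a):
            if ai:
                for j, bj in enumerate(b):
                    if i + j <= N:
                        c[i + j] += ai * bj
        return c

    def geom(k):
        """Series of 1/(1 - q^k), truncated at degree N."""
        s = [0] * (N + 1)
        for e in range(0, N + 1, k):
            s[e] = 1
        return s

    lhs = [1] + [0] * N
    for k in range(1, N + 1):
        lhs = mul(lhs, geom(k))          # partition generating function

    rhs = [0] * (N + 1)
    m = 0
    while m * m <= N:
        term = [0] * (N + 1)
        term[m * m] = 1
        for k in range(1, m + 1):
            term = mul(term, mul(geom(k), geom(k)))
        rhs = [x + y for x, y in zip(rhs, term)]
        m += 1

    assert lhs == rhs
    print(lhs[:11])  # p(0), ..., p(10) = [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]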
The argument below is similar to that given in Section 4.1 of Stasinski-Voll [32]. We give a proof for completeness.
Proof of Lemma 3.3. We argue by induction on $i$. The base case $i = 1$ is immediate. Assume the identity is true for all $i_0$ satisfying $1 \le i_0 < i$. We remove the contribution of $\mu = \emptyset$ from the left-hand side of (3.8) and write it as a double sum, where we have used the identity in the final step. We continue by using the inductive hypothesis on the inner sum, i.e., the expression in (3.9), and see that the left-hand side of (3.8) is equal to an expression in which we have used the subset-of-a-subset identity. Comparing with the right-hand side of (3.8), we are reduced to proving an identity, which we can write a little more nicely: this is the case $e = 0$ of Lemma 3.2.
3.2. Conclusion of the proof of Proposition 3.1. We return to the evaluation of the remaining expression.
By Lemma 3.3, the above sum restricted to subsets with largest element i yields Noting that i = 0 corresponds to the contribution of µ = ∅, we sum over all i to obtain . Now taking the product over p cancels the zeta factors in (3.4) and we are left with .
This concludes the proof of Proposition 3.1.
3.3.
The density of sublattices of corank at most m. Theorem 1.1, the asymptotic expression for the number of sublattices with corank at most m, follows immediately from Proposition 3.1 and the analytic continuation statements from Theorem 2.3. We note the value of the constant term in the expression (1.3); taking the quotient of this term with the constant term in Theorem 1.1 completes the proof of Corollary 1.2.
Sylow p-subgroups of cokernels of matrices in Hermite normal form
The goal of this section is to prove the second of the equivalent statements in Theorem 1.5. Our strategy for determining the distribution of Sylow p-subgroups of cokernels of matrices in Hermite normal form is to relate this distribution to the distribution of cokernels of Haar random p-adic matrices. Haar measure on the p-adic integers Z_p gives rise to Haar measure on M_d(Z_p), normalized so that the total volume is 1. More concretely, each matrix entry can be chosen independently with respect to Haar measure on Z_p. Throughout the rest of this section, we use Prob_{M ∈ M_d(Z_p)}(·) to denote the probability that a Haar random matrix M ∈ M_d(Z_p) has some property. This is equal to the volume of the subset of M_d(Z_p) consisting of matrices with this property. We give an example from the introduction. Recall the distribution P^d_p on finite abelian p-groups of rank at most d defined in (1.8). Any α ∈ Z_p can be written uniquely as α = p^e u where e ∈ Z_{≥0} and u ∈ Z_p is a unit. In this case, we write v_p(α) = e. Note that |cok(M)| = p^e if and only if v_p(det(M)) = e. We will use the following analogue of Proposition 1.3 for matrices with entries in Z_p. Write each relevant entry as a power of p_i times a factor u_j with p_i ∤ u_j, and let H_{p_i} be the upper-triangular matrix with entries b_{jk} defined so that: With this definition, it is clear that if H ∈ H_d(Z), then each H_p ∈ H_d(Z) as well. The following proposition follows from an application of the Chinese remainder theorem.
5.1. Subgroup and subring growth zeta functions.
We may also try to construct multivariate Dirichlet series to study subgroup growth for other groups. A first case of potential interest is the discrete Heisenberg group H_3. The normal subgroup zeta function of H_3 is known explicitly, and a multivariate generalization of this series might give more refined information on the distribution of the finite groups which arise as quotients of H_3. Similar questions can be asked for subring growth. For example, we expect that the cotype subring zeta function of Z^3 can be used to show that, in contrast to the case studied here, most of the subrings of Z^3 (ordered by index) are not cocyclic. In a nonabelian setting, the Lie ring sl_2(Z) has an explicitly computed zeta function in which the sum is over all finite index Lie subrings of sl_2(Z) and P(x) = (1 + 6x^2 − 8x^3)/(1 − x^3) [9]. It would be interesting to compute the cotype subring zeta function of sl_2(Z) and use it to find the density of Lie subrings with cyclic quotient. Klopsch and Voll compute the subring zeta functions of all 3-dimensional Lie algebras over Z_p in a uniform manner [17]. Their techniques, in particular, should allow one to compute the cotype zeta function for both H_3 and sl_2(Z).
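To illustrate the probabilistic setup, the sketch below approximates a Haar random matrix M ∈ M_d(Z_p) by drawing entries uniformly modulo p^N for a large precision N and tabulates v_p(det(M)), so that the observed frequencies estimate Prob(|cok(M)| = p^e). The precision, dimension and number of trials are arbitrary choices made only for this illustration.

```python
import random
from collections import Counter

def vp(n, p):
    """p-adic valuation of a nonzero integer n."""
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e

def det_int(M):
    """Exact determinant of a small integer matrix by cofactor expansion."""
    d = len(M)
    if d == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det_int([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(d))

def valuation_frequencies(p=2, d=3, precision=12, trials=20000, seed=0):
    """Estimate Prob(v_p(det M) = e) for Haar random M in M_d(Z_p); entries are
    sampled uniformly mod p^precision, which determines the valuation whenever
    it is smaller than the precision (the key 'precision' stands for '>=')."""
    random.seed(seed)
    mod = p ** precision
    counts = Counter()
    for _ in range(trials):
        M = [[random.randrange(mod) for _ in range(d)] for _ in range(d)]
        det = det_int(M) % mod
        counts[precision if det == 0 else vp(det, p)] += 1
    return {e: counts[e] / trials for e in sorted(counts)}

# The frequency at e = 0 should be close to prod_{i=1..d} (1 - p^{-i}),
# the probability that the reduction of M mod p is invertible.
print(valuation_frequencies())
```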
5.2.
Zeta functions of classical groups. The subgroup growth zeta function ζ_{Z^d}(s) of Z^d also arises in the more general context of the zeta functions associated to algebraic groups studied by Hey, Weil, Tamagawa, Satake, Macdonald and Igusa [14, 36, 34, 24, 20, 15]. For G a linear algebraic group over Q_p and a rational representation ρ : G → GL_n they define (5.4) Z_{G,ρ}(s) = ∫_{G^+} |det ρ(g)|^s dg, where G^+ = ρ^{−1}(ρ(G(k)) ∩ M_n(O_p)) and O_p is the ring of integers of Q_p. When G = GL_n and ρ is the natural representation, Z_{G,ρ}(s) is just the p-part of the subgroup growth zeta function ζ_{Z^d}(s). In more recent work, du Sautoy and Lubotzky [10] show that Z_{G,ρ}(s) for more general G and ρ continues to have an interpretation as a generating series counting substructures of algebras. We take an explicit example from Bhowmik-Grunewald [2], see also [3, Theorem 12]. Let β be the alternating bilinear form on a 2n-dimensional space associated to the block matrix ( 0 −I_n ; I_n 0 ). | 2017-08-28T22:07:01.000Z | 2017-08-28T00:00:00.000 | {
"year": 2017,
"sha1": "031e2c8c9adfef6d1c1d39bb6e3839ee04ed0a1d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "db8e56a297f7304990b0e4c787a4d9bad550d88c",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
253179350 | pes2o/s2orc | v3-fos-license | Changes to Urinary Proteome in High-Fat-Diet ApoE−/− Mice
Cardiovascular disease is currently the leading cause of death worldwide. Atherosclerosis is an important pathological basis of cardiovascular disease, and its early diagnosis is of great significance. Urine bears no need nor mechanism to be stable, so it accumulates many small changes and is therefore a good source of biomarkers in the early stages of disease. In this study, ApoE-/- mice were fed a high-fat diet for 5 months. Urine samples from the experimental group and control group (C57BL/6 mice fed a normal diet) were collected at seven time points. Proteomic analysis was used for comparison within the experimental group and for comparison between the experimental group and the control group. The results of the comparison within the experimental group showed a significant difference in the urinary proteome before and after a one-week high-fat diet, and several of the differential proteins have been reported to be associated with atherosclerosis and/or as biomarker candidates. The results of the comparison between the experimental group and the control group indicated that the biological processes enriched by the GO analysis of the differential proteins correspond to the progression of atherosclerosis. The differences in chemical modifications of urinary proteins have also been reported to be associated with the disease. This study demonstrates that urinary proteomics has the potential to sensitively monitor changes in the body and provides the possibility of identifying early biomarkers of atherosclerosis.
Introduction
Atherosclerosis (AS) is the primary pathological basis of cardiovascular disease (CVD) [1], which is the leading cause of death in the world today [2]. In 2015, more than 17 million people died of cardiovascular disease, accounting for 31% of all deaths worldwide [3]. The impact of atherosclerosis is more significant during its late stages, during which it induces a series of fatal consequences such as myocardial infarction and stroke [4]. Therefore, its early diagnosis is of vital importance.
Urine is an ideal source of early biomarkers because biomarkers are measurable changes related to biological processes regulated by homeostasis mechanisms, and urine can accumulate these early changes [5]. This conclusion has been confirmed by many related studies. For example, in a glioblastoma animal model constructed by injecting tumour cells into the brains of rats, changes in the urine proteome occurred before magnetic resonance imaging reflected the changes caused by the tumour [6]. Similarly, studies have confirmed that even if only approximately 10 cells are subcutaneously inoculated in rats, the urinary proteome can change significantly [7]. In addition, urine is more accessible and non-invasive to obtain [8].
The use of animal models avoids the influence of genetic, environmental and other factors on the urinary proteome, and it is easier to judge the early stages of atherosclerosis and identify biomarkers [9]. Apolipoprotein E (ApoE) plays an important role in maintaining the normal levels of cholesterol and triglycerides in serum by transporting lipids in the blood [10]. Mice lacking ApoE function develop hypercholesterolemia, increased very-low-density lipoprotein (VLDL) and decreased high-density lipoprotein (HDL), exhibiting spontaneous atherosclerotic lesions.
The urine samples were centrifuged at 12,000 g for 40 min to remove the supernatant, precipitated using 3 times their volume of ethanol overnight, and then centrifuged at 12,000 g for 30 min. The protein was resuspended in lysis buffer (8 mol/L urea, 2 mol/L thiourea, 25 mmol/L dithiothreitol and 50 mmol/L Tris). The protein concentration was measured using the Bradford method. Urine proteolysis was performed using the filter-aided sample preparation (FASP) method [13]. The urine protein was loaded on the filter membrane of a 10 kDa ultrafiltration tube (PALL, Port Washington, NY, USA) and washed twice with UA (8 mol/L urea, 0.1 mol/L Tris-HCl, pH 8.5) and 25 mmol/L NH4HCO3 solution; 20 mmol/L dithiothreitol (DTT, Sigma, St. Louis, MO, USA) was added for reduction at 37°C for 1 h, and then 50 mmol/L iodoacetamide (IAA, Sigma, St. Louis, MO, USA) was used for alkylation in the dark for 30 min. After washing twice with UA and NH4HCO3 solutions, trypsin (Promega, Fitchburg, WI, USA) was added at a ratio of 1:50 for digestion at 37°C for 14 h. The peptides were passed through Oasis HLB cartridges (Waters, Milford, MA, USA) for desalting and then dried by vacuum evaporation (Thermo Fisher Scientific, Bremen, Germany).
Spin-Column Peptide Fractionation
The digested samples were redissolved in 0.1% formic acid and diluted to 0.5 μg/μL. Each sample was used to prepare a mixed peptide sample, and a high-pH reversed-phase fractionation spin column (Thermo Fisher Scientific, Waltham, MA, USA) was used for separation. The mixed peptide samples were added to the chromatographic column and eluted with a step gradient of 8 increasing acetonitrile concentrations (5, 7.5, 10, 12.5, 15, 17.5, 20 and 50% acetonitrile). Ten effluents were finally collected by centrifugation and were dried with vacuum evaporation and resuspended in 0.1% formic acid. In this study, iRT reagent (Biognosis, Schlieren, Switzerland) was used to calibrate the retention time of the extracted peptide peaks, which were added to ten components and each sample at a volume ratio of 10:1.
LC-MS/MS Analysis
An EASY-nLC 1200 chromatography system (Thermo Fisher Scientific, Waltham, MA, USA) and Orbitrap Fusion Lumos Tribrid mass spectrometer (Thermo Fisher Scientific, Waltham, MA, USA) were used for mass spectrometry acquisition and analysis. The peptide sample was loaded onto the precolumn (75 µm × 2 cm, C18, 2 µm, Thermo Fisher) at a flow rate of 400 nL/min and then separated using a reversed-phase analysis column (50 µm × 15 cm, C18, 2 µm, Thermo Fisher) for 120 min. The mobile phase with a gradient of 4%-35% (80% acetonitrile + 0.1% formic acid + 20% water) was used for elution. A full MS scan was acquired within a 350-1500 m/z range with the resolution set to 120,000. The MS/MS scan was acquired in Orbitrap mode with a resolution of 30,000. The HCD collision energy was set to 30%. The mass spectrum data of 10 components separated by the reversed-phase column and all the samples obtained by enzymatic hydrolysis were collected in DDA mode.
Label-free DIA Quantification
The DDA collection results of the above 10 components were imported into Proteome Discoverer software (version 2.1, Thermo Scientific, Waltham, MA, USA) and searched against the database using the following parameters: mouse database (released in 2019, containing 17,038 sequences) with the iRT peptide sequence attached, trypsin digestion, a maximum of two missed cleavage sites, parent ion mass tolerance of 10 ppm, fragment ion mass tolerance of 0.02 Da, methionine oxidation set as variable modification, cysteine carbamidomethylation set as fixed modification, and protein false discovery rate (FDR) set to 1%. The PD search result was used to establish the DIA acquisition method, and the window width and number were calculated according to the m/z distribution density.
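One simple way to derive variable-width isolation windows from the precursor m/z density is quantile-based splitting, so that each window covers roughly the same number of observed precursors. The sketch below only illustrates that idea; it is not the exact procedure implemented in the acquisition software, and the m/z range and window number are taken from the scan settings above purely for illustration.

```python
import numpy as np

def dia_windows(precursor_mz, n_windows, mz_min=350.0, mz_max=1500.0):
    """Split [mz_min, mz_max] into n_windows variable-width DIA windows whose
    boundaries follow the empirical precursor m/z distribution (quantiles)."""
    mz = np.clip(np.asarray(precursor_mz, dtype=float), mz_min, mz_max)
    edges = np.quantile(mz, np.linspace(0.0, 1.0, n_windows + 1))
    edges[0], edges[-1] = mz_min, mz_max        # pin the outer boundaries
    return list(zip(edges[:-1], edges[1:]))

# Toy example: precursors concentrated around 650 m/z yield narrower windows there.
rng = np.random.default_rng(0)
toy_mz = rng.normal(650.0, 120.0, size=5000)
for lo, hi in dia_windows(toy_mz, n_windows=8):
    print(f"{lo:7.1f} - {hi:7.1f}")
```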
Sixty-nine peptide samples were put into DIA mode to collect mass spectrometry data. Spectronaut™ Pulsar X (Biognosys, Schlieren, Switzerland) software was used to process and analyse mass spectrometry data [14]. Based on the DDA search result pdResult file and the 10 DDA raw files, we created a spectrum library; the raw files collected by DIA were imported for each sample to search the library. The high-confidence protein standard was a peptide q value < 0.01, and the peak area of all fragment ions of the secondary peptide was used for protein quantification.
Protein Chemical Modifications Search
pFind Studio software (version 3.1.6, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China) was used to perform label-free quantitative analysis of the DDA collection results of enzymatic hydrolysis samples [15]. The target search database was the Mus musculus database downloaded from UniProt (updated September 2020). During the search process, the instrument type was set as HCD-FTMS, the enzyme was fully specific trypsin, and up to 2 missed cleavage sites were allowed. The "open-search" mode was selected, and the screening condition was that the FDR at the peptide level was less than 1%. The data were analysed using both forward and reverse database search strategies. After the initial screening, a restricted search method was used for verification.
Statistical Analysis
The missing abundance values were imputed using the K-nearest neighbour (KNN) method [16], and CV value screening (CV < 0.3) [17] was performed on the mass spectrometry results. The two-sided unpaired t-test was used for the comparison between each set of data. Comparison within the experimental group and comparison between the experimental group and the control group at the same time points were screened for differential proteins. The screening criteria were as follows: fold change (FC) between the two groups ≥1.5 or ≤0.67 and p < 0.05. At the same time, the samples in each two groups were randomly combined, and the average number of differential proteins in all permutations and combinations was calculated according to the same criteria as normal screening (Table S1), ensuring that differential proteins were generated by differences between groups rather than arising at random.
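The fold-change and t-test screening described above can be sketched as follows. The data layout (a proteins-by-samples abundance matrix per group) and the toy numbers are assumptions made for illustration; the KNN imputation and CV filtering steps are omitted here.

```python
import numpy as np
from scipy import stats

def screen_differential_proteins(group_a, group_b, fc_up=1.5, fc_down=0.67, p_cut=0.05):
    """Boolean mask of differential proteins: fold change (mean of B over mean of A)
    >= fc_up or <= fc_down, together with a two-sided unpaired t-test p < p_cut.
    group_a, group_b: arrays of shape (n_proteins, n_samples)."""
    fc = group_b.mean(axis=1) / group_a.mean(axis=1)
    _, p = stats.ttest_ind(group_a, group_b, axis=1)
    return ((fc >= fc_up) | (fc <= fc_down)) & (p < p_cut)

# Toy example: 5 proteins, 5 samples per group; proteins 2, 3 and 5 are shifted.
rng = np.random.default_rng(0)
a = rng.lognormal(mean=5.0, sigma=0.2, size=(5, 5))
b = rng.lognormal(mean=5.0, sigma=0.2, size=(5, 5)) * np.array([[1.0], [2.0], [0.5], [1.0], [3.0]])
print(screen_differential_proteins(a, b))
```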
The proportions of different types of chemical modification sites out of the total number of modification sites were calculated, and the data between each two groups were compared by two-sided unpaired t-tests. The screening criteria were FC between the two groups ≥1.5 or ≤0.67 and p < 0.05.
The DAVID database (https://url.cy/0E13rJ) [18] was used to perform functional enrichment analysis on the differential proteins that were screened. The significance threshold of p < 0.05 was adopted. All methods were performed in accordance with the relevant guidelines and regulations.
Histopathology
The Oil Red O staining results of the whole aortas of 6-month-old ApoE −/− mice fed a high-fat diet for 5 months were compared to those of 6-month-old mice fed a normal diet. The average percentage of stained areas in the experimental group was 17.78 ± 2.14% (n = 6), and the average percentage in the control group was 0.88 ± 0.34% (n = 4), p = 0.0004 (Figure 2).
Differential Protein Screening and Functional Annotation
The experimental group and the control group had 69 samples from seven time points (W0/W1/M1/M2/M3/M4/M5) for non-labelled LC-MS/MS quantification (one sample in the experimental group for W0 was insufficient). A total of 592 proteins were identified with at least 2 unique peptides at FDR < 1%, and an average of 360 urine proteins were identified for each sample. The heatmap (Figure S1) of all the samples shows that it is hard to discriminate samples of different time points or groups as a whole, which indicates that there are great differences among individuals. The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium (https://url.cy/qevTk1 (accessed on 10 August 2022)) via the iProX partner repository [19] with the dataset identifier PXD027610.
Comparison within the Experimental Group
Short-Term Effects of a High-Fat Diet
To identify the effects of a high-fat diet, after a week of a high-fat diet in ApoE −/− mice, urine samples collected from W0 and W1 were compared and analysed. The volcano plot of proteins is shown in Figure S2. A total of 12 proteins were significantly upregulated and 15 proteins were significantly downregulated at W1 (Table 1). Among them, 21 proteins or their family members have been reported to be associated with lipids.
GO analysis of these 27 proteins by DAVID showed that most of the annotated biological processes were related to lipid metabolism and glucose metabolism (Figure 3). At the same time, the differential proteins between W1 and W0 in the control group (Table S2) did not enrich for any significant changes in biological processes, indicating that the physiological state of mice did not change significantly at W1, while only a week of a high-fat diet induced huge changes in the animal urinary proteome, further demonstrating that the urinary proteome sensitively reflects changes in the body.
Urinary Proteome Changes in the Whole Process
Compared to W0, 51/69/86/65/88 proteins changed significantly at M1/M2/M3/M4/M5 in the experimental group, respectively. The volcano plots of proteins are shown in Figure S2. The Venn diagram (Figure 4) shows that a total of 17 proteins changed significantly at all five time points, and the DIA quantitative results show that these 17 proteins exhibited the same change trend at these time points. Another 18 proteins changed significantly at the last four time points, and the trend of change was the same at each time point (Table S3). Among them, 26 proteins or their family members have been previously reported to be related to lipid metabolism or cardiovascular diseases. Major urinary proteins (MUPs) are members of the lipocalin family, which can bind and transport various lipophilic molecules in the blood and other body fluids [20]. Knockout of mouse trefoil factor 2 protects against obesity in response to a high-fat diet [21]. Angiotensinogen plays a key role in fat cell metabolism and inflammation development [22].
Alpha1-antitrypsin has been reported as a biomarker of atherosclerosis [23]. It has been reported that CCN4 (cellular communication network factor 4) promotes the migration and proliferation of vascular smooth muscle cells by interacting with α5β1 integrin [24], which plays a vital role in the occurrence and development of atherosclerosis. Regular monitoring of vitamin B12 status may help prevent atherosclerosis-related diseases, and transcobalamin 2 can carry vitamin B12 [25]. Regenerated islet-derived protein 3β, an inflammatory marker, is of great significance for the recruitment of macrophages and for tissue repair [26]. The level of α-2-HS-glycoprotein is positively correlated with atherosclerotic substitution parameters, such as intima-media thickness (IMT) and arteriosclerosis [27]. The literature shows that gelsolin stabilizes actin filaments by binding to the ends of filaments, preventing monomer exchange. Its downregulation indicates that the cytoskeleton of vascular smooth muscle cells in the human coronary atherosclerotic medium is dysregulated [28]. It has been reported that SCUBE2 may play an important role in the progression of atherosclerotic plaques through Hh signal transduction [29]. Type I collagen is an early biomarker of atherosclerosis [23].
Igκ chain V-III region PC 7043, Igκ chain V-II region 26-10 and immunoglobulin κ constant are all involved in the adaptive immune response. The haptoglobin polymorphism is related to the prevalence and clinical evolution of many inflammatory diseases, including atherosclerosis [30]. MHCII antigen presentation has an important protective function in atherosclerosis [31]. Interleukin-18 plays a key role in atherosclerosis and plays a role in appetite control and the development of obesity [32]. According to the literature, compared to healthy controls, LAMP-2 gene expression and protein levels in peripheral blood leukocytes of patients with coronary heart disease are significantly increased [33]. T-cadherin is essential for the accumulation of adiponectin in neointima and atherosclerotic plaque lesions [34]. Kidney androgen-regulated protein has also been reported in the urine of ApoE −/− mice fed a high-fat diet [23]. Fibronectin is an indicator of connective tissue formation during atherosclerosis [35]. Peripheral arterial occlusive disease (PAOD) is one of the primary manifestations of systemic atherosclerosis, and transthyretin and complement factor B are potential markers for monitoring plasma PAOD disease [36]. Serotransferrin plays an important role in atherosclerosis [37]. The differential expression of serine protease inhibitor A3 in blood vessels is significantly related to human atherosclerosis [38]. Prolactin plays a role in the proliferation of vascular smooth muscle cells, and the proliferation of vascular smooth muscle cells is a characteristic of cardiovascular diseases such as hypertension and atherosclerosis [39].
The abovementioned differential proteins that continually changed during the whole process were analysed using DAVID for GO analysis (Figure 4), and the enriched biological processes are also shown in the figure.
The major urinary protein-induced lipid metabolism-and glucose metabolism-related biological processes changed significantly; the acute phase reaction has been reported in the literature to be related to atherosclerosis [40]. The positive regulation of fibroblast proliferation also changed significantly, and vascular damage and dysfunction of adipose tissue around blood vessels promotes vasodilation, fibroblast activation and myofibroblast differentiation [41]. Wound healing is also related to atherosclerosis [42]. The extracellular matrix gives atherosclerotic lesion areas tensile strength, viscoelasticity, and compressibility [43]. There are also reports showing correlation between osteoporosis and atherosclerosis [44]. The ERK1/ERK2 pathway is involved in insulin (INS) and thrombin-induced vascular smooth muscle cells, which play important roles in proliferation [45]. Cell adhesion also plays an important role in atherosclerosis [46].
The comparison within the experimental group avoids the influence of genetic and dietary differences on the experimental results to the greatest extent, but the influence of biological growth and development is difficult to avoid. The results show that there are a variety of proteins that change continually throughout the progression of the disease and that are closely related to the disease. It is worth noting that the differential proteins obtained using this comparison method and the biological processes and pathways enriched by them exhibit a high degree of overlap at different time points, which may make it difficult to enhance early diagnosis of the disease, so follow-up comparison between the experimental group and control group was performed.
Comparison between the Experimental Group and the Control Group
Comparison of the results between the experimental group and the control group at the same time points showed that 44/16/54/23/48/57/46 differential proteins were obtained at W0/W1/M1/M2/M3/M4/M5, respectively. The details of the proteins are shown in Table 2, the volcano plots of proteins are shown in Figure S2, and the overlap of differential proteins at different time points is shown in Figure S3. Comparing between the experimental group and control group, there were significant differences in the differential proteins at each time point, but they were all closely related to lipids and cardiovascular diseases.
The differential proteins were analysed by DAVID for GO analysis, and the biological processes that changed significantly at different time points are shown in Figure 5. The biological processes related to lipid metabolism and glucose metabolism in the experimental group were significantly different from those in the control group at W0. At W0 and M4, the immune-related processes were significantly different. Differential proteins at M1 were primarily enriched in cell adhesion-related processes, while at M2, they were primarily enriched in redox reaction-related processes. At M3, wound healing began to appear, and there were many adhesion-related processes. In addition to a large number of immune-related processes, the positive regulation of fibroblast proliferation and the negative regulation of angiogenesis also appeared at M4. The processes related to phagocytosis and proteolysis began to appear at M5.
Effects of Genetic Factors
At W0, before a high-fat diet was administered to the experimental group, the only difference between the two groups was genetic factors. There were already significant differences in the biological processes related to lipid and glycometabolism, indicating that ApoE gene knockout greatly affects the lipid transport in mice in the experimental group, which is reflected by the urinary proteome very early. Acute phase reactions, immune responses, cytokines and proteolysis are also closely related to atherosclerosis [47][48][49][50].
Urinary Proteome Changes during Whole Process
The literature shows that during the early stages of atherosclerosis, low-density lipoprotein (LDL) particles accumulate in the arterial intima, and are thereby protected from plasma antioxidants and undergo oxidation and other modifications and have proinflammatory and immunogenic properties. Classic monocytes circulating in the blood can exhibit anti-inflammatory functions and bind to the adhesion molecules expressed by activated endothelial cells to enter the inner membrane. Once in the inner membrane, monocytes can mature into macrophages, which express scavenger receptors that bind to lipoprotein particles and then become foam cells, finally forming the core of atherosclerotic plaques. T lymphocytes can also enter the inner membrane to regulate the functions of natural immune cells, endothelial cells and smooth muscle cells. The smooth muscle cells in the media can migrate to the inner membrane under the action of leukocytes to secrete extracellular matrix and form a fibrous cap [51]. During the exploration of this experiment, at week 1, the differentially expressed proteins between the experimental and control groups were related to the differentiation of epithelial cells, and cell adhesion was enriched in M1 macrophages, which may be related to the adhesion of monocytes. Differential proteins between the experimental and control groups at M2 were related to biological processes associated with redox, which may be related to the redox of LDL particles. Cell adhesion also changes at M3, which may involve the recruitment of phagocytes. Numerous immune-related biological processes changed in M4, indicating the participation of immune cells such as T cells. The regulation of fibroblast proliferation may be related to the formation of fibrous caps. Enriched results revealed that proteolysis changed significantly at M5. It has been reported that activated macrophages can secrete proteolytic enzymes and degrade matrix components. The loss of matrix components may subsequently lead to plaque instability and increase the risk of plaque rupture and thrombosis [52]. Fibrin dissolution also plays an important role in the development of atherosclerosis [53].
The biological processes of the enrichment of differential proteins at different time points can correspond to the occurrence and development of atherosclerosis, indicating that the urinary proteome has the potential to be used to monitor the disease process.
After a week of a high-fat diet in the experimental group, the protein kinase B signalling pathway changed. It has been reported to play an important role in the survival, proliferation and migration of macrophages and may affect the development of atherosclerosis [54]. After a month of a high-fat diet, many biological processes underwent significant changes. Studies have shown that urinary sodium excretion is the decisive factor in carotid intima-media thickness, which is an indicator of atherosclerosis [55]. The classical pathway of complement activation is also related to atherosclerosis [56]. Copper and isotypic cysteine can interact to generate free radicals, thereby oxidizing LDL, which has been found in atherosclerotic plaques [57]. At M2, oestrogen is also reported to have a variety of anti-atherosclerotic properties, including affecting plasma lipoprotein levels and stimulating the production of prostacyclin and nitric oxide [58]. At M3, wound healing is also associated with atherosclerosis [42]. For the biological processes that changed at M4, the ERK1/ERK2 pathway plays an important role in the proliferation of vascular smooth muscle cells induced by insulin (INS) and thrombin [45]. Alternative pathways of complement activation and major histocompatibility complex family II have been reported to be associated with atherosclerosis [59,60]. In the enrichment of differential proteins at M5, chaperone-mediated autophagy (CMA) plays an important upstream regulatory role in lipid metabolism [61].
Chemical Modifications of Proteins
To further explore the effect of a high-fat diet on chemical modifications of urine proteins, a total of 15 samples were selected at three time points (EW0/EM5/CM5). After the raw data (.raw) were searched with open-pFind software, the analysis results were exported in pBuild.
A total of 923 different chemical modification types were identified in 15 samples, of which 468 chemical modification types were identified in the EW0 group, 748 chemical modification types were identified in the EM5 group, and 611 chemical modification types were identified in the CM5 group.
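The group-wise bookkeeping used here (modification types unique to one group and present in at least a minimum number of its samples, versus types observed in both groups) amounts to simple set operations. The sketch below assumes one set of modification-type names per sample; the example names and thresholds are illustrative only.

```python
from collections import Counter

def summarize_modifications(group1, group2, min_samples1, min_samples2):
    """group1, group2: lists of sets, one set of modification types per sample.
    Returns (types unique to group1 seen in >= min_samples1 samples,
             types unique to group2 seen in >= min_samples2 samples,
             types observed in both groups)."""
    counts1 = Counter(m for sample in group1 for m in sample)
    counts2 = Counter(m for sample in group2 for m in sample)
    unique1 = {m for m in counts1 if m not in counts2 and counts1[m] >= min_samples1}
    unique2 = {m for m in counts2 if m not in counts1 and counts2[m] >= min_samples2}
    shared = set(counts1) & set(counts2)
    return unique1, unique2, shared

# Toy example with three samples per group.
ew0 = [{"Oxidation[M]", "Deamidated[N]"}, {"Oxidation[M]"}, {"Oxidation[M]", "Acetyl[K]"}]
em5 = [{"Oxidation[M]", "Carbamyl[K]"}, {"Carbamyl[K]"}, {"Oxidation[M]", "Carbamyl[K]"}]
print(summarize_modifications(ew0, em5, min_samples1=2, min_samples2=2))
```

Shared types would then be compared between groups by their site proportions, as in the statistical analysis described above.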
An unsupervised cluster analysis of all modifications found that the CM5 group was well distinguished from the other two groups (Figure 6). The percentages of different modification types in the EW0 group and the EM5 group were quantified to identify the modification changes that occurred in the comparison within the experimental group. Among them, one modification type was unique to the EW0 group and existed in more than four samples (the total number of samples was five), and 23 modification types were unique to the EM5 group and existed in more than five samples (the total number of samples was six); there were 68 types of modifications shared by the two groups that had significant differences (FC ≥ 1.5 or ≤0.67, p < 0.05). At the same time, the proportions of different types of modified sites in the CM5 group and the EM5 group were quantified, and the difference between the experimental group and the control group was analysed. Among them, eight modification types were unique to the CM5 group and existed in more than three samples (the total number of samples was four), and 19 modification types were unique to the EM5 group and existed in more than five samples (the total number of samples was six). There were 72 types of modifications that were shared by the two groups that had significant differences (FC ≥ 1.5 or ≤0.67, p < 0.05) (see Table S5 for details).
Figure 5. Functional annotation of differential proteins at different time points between the experimental and control groups (p < 0.05). When the experimental group is compared to the control group, there is a large difference in W0, demonstrating that the urinary proteome reflects even slight differences between the groups. In the subsequent control results at each time point, the degree of overlap in the differential proteins is small, but they are mostly related to lipids and cardiovascular diseases. The enriched biological processes also correspond to the progression of atherosclerosis, indicating that the urinary proteome is useful to monitor the disease process. However, as mentioned before, this type of comparison does not take the influence of diet and other factors into account.
To reduce the false negative influence caused by the open search mode, a restricted search method was used for verification. Modification types that accounted for the top five modification sites in the open search were fixed; modification types that were unique in a group and existed in each sample and modification types that had been reported related to lipids in the literature were selected.
Twenty modifications in the EM5-EW0 comparison and 25 modifications in the EM5-CM5 comparison were selected, and the proportion of modified sites in the total number of sites was calculated (Table S6). The screening criteria were FC ≥ 1.5 or ≤ 0.67.
Among the changes observed in the comparison within the experimental group, many studies have shown that carbamylated proteins are involved in the occurrence of diseases, especially atherosclerosis and chronic renal failure [138]. The kynurenine pathway is the primary pathway of tryptophan metabolism and plays an important role in early atherosclerosis [139]. The oxidation of proline can form glutamate semialdehyde, and glutamate semialdehyde is closely related to lipid peroxidation [140]. Elevated plasma homocysteine has also been widely studied as an independent risk factor for atherosclerosis [141]. Obstruction of the sulphur dioxide/aspartate aminotransferase pathway is also known to be involved in the pathogenesis of many cardiovascular diseases [142]. The Delta_H(2)C(3) modification of lysine also refers to acrolein addition +38, and acrolein and other α- and β-unsaturated aldehydes are considered to be mediators of inflammation and vascular dysfunction [143]. CHDH modification of aspartic acid and NO_SMX_SIMD modification of cysteine have not been reported to be related to atherosclerosis and may act as potential modification sites.
Although it was not verified in a restricted search, there are also studies claiming that the interruption of cell signals mediated by electrophiles is related to the occurrence of atherosclerosis and cancer. HNE and ONE and their derivatives are both active lipid electrophile reagents that inhibit the release of proinflammatory factors to a certain extent [144]. Nε-carboxymethyl-lysine (CML) has been reported to accumulate in large amounts in the tissues of diabetes and atherosclerosis, and glucosone aldehyde is related to its formation [145]. Benzyl isothiocyanate salt has been reported to inhibit lipid production and fatty liver formation in obese mice fed a high-fat diet [146]. It has been reported in the literature that thiazolidine derivatives have a positive effect in the treatment of LDLR(−/−) atherosclerotic mice [147]. In addition, the carboxyethylation of lysine has also changed, and some research indicates that the degree of carboxymethylation and carboxyethylation of lysine in the plasma of diabetic mice is significantly increased [148]. Changes in the expression of fucosylated oligosaccharides have been observed in pathological processes such as atherosclerosis [149]. In addition, the phosphorylation modification of tyrosine is related to the formation of esters, which may also be involved in lipid metabolism and the occurrence and progression of diseases [150].
In the differential modifications between the experimental group and the control group, some of the significantly changed modifications also changed in the comparison within the experimental group. In addition, the Delta_H(2)C(2) modification at the N-terminus of the amino acid also refers to acetaldehyde +26. In addition, acetaldehyde stimulates the growth of vascular smooth muscle cells in a notch-dependent manner, promoting the occurrence of atherosclerosis [151]. Advanced protein glycosylation is an important mechanism for the development of advanced complications of diabetes, including atherosclerosis. Hydroimidazolone-1 derived from methylglyoxal is the most abundant advanced glycosylation end-product in human plasma [152]. In addition, the guanidine modification of lysine may also be related to atherosclerosis [153].
Although not verified in the restricted search, an increasing number of studies have shown that short-chain fatty acids and their homologous acylation are involved in cardiovascular disease, and the proportions of 2-hydroxyisobutyrylation, malonylation and crotonylation in the experimental group were significantly increased [154]. Nε-carboxymethyl-lysine (CML) has been reported to accumulate in large amounts in the tissues in diabetes and atherosclerosis, and its induced PI3K/Akt signal inhibition promotes foam cell apoptosis and the progression of atherosclerosis [155]. In addition, glucosone is closely related to its formation, the proportion of which also increased significantly in the experimental group. Oxidation of tyrosine produces dihydroxyphenylalanine (DOPA), and the protein binding DOPA in tissues is elevated in many age-related pathological diseases, such as atherosclerosis and cataract formation [156].
As mentioned above, the comparison within the experimental group avoids the influence of genes, diet and other factors on the urinary proteome, but it may be affected by the growth and development of the organisms themselves. Comparison between the experimental group and control group avoids the influence of development but cannot avoid factors such as diet. The identification results of chemical modifications of urine proteins showed that regardless of whether comparison within the experimental group was adopted, the modification status changed greatly and was closely related to lipids and cardiovascular diseases. In comparison, differences between the experimental group and control group may be more obvious.
Conclusions
This study explored changes in urinary proteomics of high-fat-diet-fed ApoE −/− mice. The results of comparison within the experimental group showed that even after only one week of a high-fat diet, while the urinary proteome of the control group had not significantly changed, the urinary proteome of the experimental group had changed significantly, and most of the enriched biological pathways were related to lipid metabolism and glycometabolism, indicating that the urinary proteome has the potential for early and sensitive monitoring of biological changes. Most of the proteins and their family members that change continually in disease progression have been reported to be related to cardiovascular diseases and/or can be used as biomarkers. The results of the comparison between the experimental group and the control group show that the biological processes enriched by differential proteins at different time points correspond to the occurrence and development of atherosclerosis, indicating that the urinary proteome has the potential to be used to monitor the disease process. The differential modification types in the comparison within the experimental group and the comparison between the experimental and control groups have also been reported to be related to lipids and cardiovascular diseases and can be used as a reference for identifying new biomarkers.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/biom12111569/s1: Table S1, Screening results of random combinations of urine samples; Table S2, Differential proteins between week 1 and week 0 samples in the control group; Table S3, Details of continuously changing differential proteins in the comparison within the experimental group; Table S4, Differential proteins between adjacent time points of the experimental group; Table S5, Details of differential modifications in two comparisons by open search; Table S6, Results of limited search of modifications; Figure S1, Heatmap of all samples; Figure S2, Volcano plots of proteins; Figure S3, Venn diagram of differential proteins at different time points between the experimental group and the control group.
Author Contributions: Y.H. performed the experiments, analysed the data, contributed reagents/ materials/analysis tools, prepared figures and/or tables, authored or reviewed drafts of the paper and approved the final draft. W.M. and J.W. performed the experiments and contributed reagents/materials/analysis tools. Y.L. analysed the data and contributed reagents/materials/analysis tools. Y.G. conceived and designed the experiments, authored or reviewed drafts of the paper, and approved the final draft. All authors have read and agreed to the published version of the manuscript.
Informed Consent Statement: Not applicable.
Data Availability Statement: The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium (http://proteomecentral.proteomexchange.org (accessed on 10 August 2022)) via the iProX partner repository with the dataset identifier PXD027610.
Conflicts of Interest:
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 2022-10-28T15:07:35.892Z | 2022-10-26T00:00:00.000 | {
"year": 2022,
"sha1": "216cebca45d32baaef24ab729459b014db06e662",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2218-273X/12/11/1569/pdf?version=1666787009",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b06e4f742e45cc428919390d9aa5d598589a40c2",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
115171922 | pes2o/s2orc | v3-fos-license | Is nonrelativistic gravity possible?
We study nonrelativistic gravity using the Hamiltonian formalism. For the dynamics of general relativity (relativistic gravity) the formalism is well known and called the Arnowitt-Deser-Misner (ADM) formalism. We show that if the lapse function is constrained correctly, then nonrelativistic gravity is described by a consistent Hamiltonian system. Surprisingly, nonrelativistic gravity can have solutions identical to relativistic gravity ones. In particular, (anti-)de Sitter black holes of Einstein gravity and IR limit of Horava gravity are locally identical.
I. INTRODUCTION
We use the Hamiltonian formalism [1], [2], [3], [4] for the dynamics of nonrelativistic gravity in Wheeler-DeWitt superspace [5]. The formalism leads naturally to the study of consistency of the nonrelativistic gravity. The equations of the rate of change of energy and momentum are computed. As is well known, the relativistic theory is characterised by identically zero energy rather than just the total integrated energy being zero [6], [7]. A question arises: Can one generalise nonrelativistic theories and recover an identically zero energy condition? In other words: Can one generalise the lapse function from being a function of time only to a function of space and time? We show that the answer is negative, unless a very strong consistency condition is satisfied. Thus, generically, the lapse function of consistent nonrelativistic theories must be time dependent only.
The approach is applicable to Hořava's recently proposed theory of gravity [8], [9]. In particular, we show that there are no new (anti-)de Sitter black hole solutions. In fact, the theory has the same solutions as Einstein gravity in empty and flat space if λ = 1.
The DeWitt metric on M is given by [5], where λ is a constant, tr(k) = g^{ab} k_{ab}, (k × k)_{ab} = k_{ac} g^{cd} k_{db}, and k · k = tr(k × k). The metric G has an inverse metric G^{−1}.
B. Hamiltonian formalism
We investigate a dynamical system on TM given by an invariant action, where X (the shift vector field) is a time-dependent vector field on M, N (the lapse function) is a function of t only, i.e. N(t) is a constant function in the space of real-valued functions F(M), L_X is the Lie derivative, and the potential V(g) ∈ F_d(M) is a scalar density. The canonical momenta conjugate to g_{ab} are π^{ab} = p^{ab} dµ(g) = δS/δġ_{ab} = (k^{ab} − λ tr(k) g^{ab}) dµ(g), and the Hamiltonian is determined by H(g, π) = G^{−1}(π, π) + V(g). The Hamiltonian equations have the following form [2], [3], written in terms of a map B and its adjoint B*. Here we follow [2] and consider the potential V as a function of the undifferentiated metric coefficients g that do not appear in the Christoffel symbols Γ, and of the Christoffel symbols, and we write V(g, Γ).
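The momentum–velocity relation quoted above, π^{ab} = (k^{ab} − λ tr(k) g^{ab}) dµ(g), can be inverted whenever λ ≠ 1/d: since tr(π) = (1 − λd) tr(k), one gets k^{ab} = π^{ab} − λ/(λd − 1) tr(π) g^{ab} (densities suppressed). The numerical sketch below checks this under the simplifying assumption g_{ab} = δ_{ab} and shows why λ = 1/d is degenerate; it is only an illustration, not a computation from the paper.

```python
import numpy as np

def momentum_from_velocity(k, lam):
    """pi = k - lam * tr(k) * g, with the flat metric g = identity assumed."""
    return k - lam * np.trace(k) * np.eye(k.shape[0])

def velocity_from_momentum(pi, lam):
    """Inverse of the map above; only valid when lam != 1/d."""
    d = pi.shape[0]
    return pi - lam / (lam * d - 1.0) * np.trace(pi) * np.eye(d)

rng = np.random.default_rng(1)
d, lam = 3, 0.7
A = rng.standard_normal((d, d))
k = (A + A.T) / 2.0                              # a symmetric "velocity" k_ab

pi = momentum_from_velocity(k, lam)
print(np.allclose(velocity_from_momentum(pi, lam), k))     # True when lam != 1/d

# At the critical value lam = 1/d the trace part of k is projected out:
pi_crit = momentum_from_velocity(k, 1.0 / d)
print(np.isclose(np.trace(pi_crit), 0.0))                   # tr(pi) vanishes identically
```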
C. Constraints
The invariance of the Hamiltonian with respect to the spatial diffeomorphisms implies the following [2] for an arbitrary vector field X. Therefore, we have the corresponding conservation law (constraint). Then from (2) we get (5), but not necessarily a much stronger constraint as in relativistic gravity. As is well known [6], [7], in any topologically invariant theory (6) holds rather than just (5). But is it possible to impose (6) on nonrelativistic gravity? In order to answer this question, let us compute the rate of change of H and I along a solution of (3) for general N(x, t) and X(x, t). It is straightforward to show that (7) holds (cf. [2]). Incidentally, (7) is equivalent to the Dirac canonical commutation relations (cf. [10], [2], [3]). Let us define [3] C_H = {(g, π) ∈ T*M | H(g, π) = 0}, C_I = {(g, π) ∈ T*M | I(g, π) = 0}, and C = C_H ∩ C_I = {(g, π) ∈ T*M | H(g, π) = 0, I(g, π) = 0}.
If (g(0), π(0)) ∈ C, then we have (g(t), π(t)) ∈ C_I for all t for which the solution exists, but (g(t), π(t)) ∈ C for all t if and only if the restriction of A_N to C ⊂ T*M vanishes, i.e. the following condition (9) holds for all N: A_N(g(t), π(t))|_C = 0.
If one assumes that N is a function of x and t for a nonrelativistic theory, then the theory will be consistent if and only if (9) holds. This is a very strong condition. However, (9) does not hold for all N and a general potential V(g). We know one theory (possibly the only one if λ = 1/d), that of general relativity, satisfying the condition. If (9) does not hold, then the Hamiltonian system is not consistent. Hence, (6) cannot be imposed and one has to consider N as a function of t only. In that case (7) can be written in an equivalent form. Thus, it is obvious that nonrelativistic gravity is possible, provided one considers a lapse function that depends on time only, a projectable function (see [9]). If one generalises the lapse function, then the only meaningful, consistent theory is Einstein gravity. However, even if (9) does not hold for all solutions, it can hold for specific solutions. Indeed, there could exist solutions with A(g(t), π(t))|_C = 0, and then H(g(t), π(t)) = 0 and I(g(t), π(t)) = 0. These types of solutions would mimic relativistic ones. They will be called Lorentz symmetry recovering (LSR) solutions.
D. Examples
Let us consider some important (non)relativistic theories.
Einstein gravity. For the relativistic potential with arbitrary λ we have F_{ab} = −(R_{ab} − (1/2) R g_{ab} + Λ g_{ab}) dµ(g), and where div Y = Y^a_{|a} and ∆f = −g^{ab} f_{|ab}. Thus, we see that λ = 1 and λ = 1/d are critical values, as noted in [8], [9]. Theories with λ ≠ 1 are very different from Einstein gravity, because of the last term in (12). The DeWitt metric's dependence on λ is crucial too. If λ = 1, then A_N(g, π)|_C = 0 and full relativistic gravity is recovered. Therefore, one is free to choose a space and time dependent lapse function.
Hořava gravity [8], [9]. We consider a more general potential. For simplicity, we assume that λ = 1 and the spatial metric is flat, R_{ab} = 0, and then it is trivial to show that all solutions are LSR ones. Moreover, there is a bijection between solutions of Hořava and Einstein gravity. In particular, for a spherically symmetric metric, all solutions are locally equivalent to the Schwarzschild-Kottler solution in Lemaître coordinates [11], for example for m > 0 and Λ > 0. Thus, there are no "new" (A)dS black hole solutions in Hořava gravity. One will find new solutions if one considers a space and time dependent lapse function, but then the theory becomes inconsistent. However, nonflat geometries are not necessarily LSR solutions.
III. CONCLUSIONS
The Hamiltonian formalism is used to study nonrelativistic gravity. The evolution (7) for H and I is derived, and a consistency condition (9) is proposed. It is shown that if one considers a lapse function that depends on time only, then nonrelativistic gravity is possible and is described by a consistent Hamiltonian system. A typical nonrelativistic gravity will be an inconsistent theory if we assume a space and time dependent lapse function. One could conjecture that only Einstein gravity is consistent with a space and time dependent lapse function if λ = 1. The other possibility is Hořava gravity if λ = 1/d (see [8], [9]).
The results of the paper can be extended to include field theories coupled to gravity. One is tempted to extend the approach and investigate the nonrelativistic Wheeler-DeWitt equation [5], ∫_M [ G^{−1}(δ/δg, δ/δg) − V(g) ] Ψ(³g) = 0.
All of these directions will be investigated in further study and hopefully a more important question, "Is physically meaningful nonrelativistic gravity possible?" will be answered. Similar issues with different assumptions are discussed in [12], [13], [14].
Note added.-While this work was being prepared for submission, we became aware of [15] where similar questions are addressed. | 2009-07-17T10:00:31.000Z | 2009-05-26T00:00:00.000 | {
"year": 2009,
"sha1": "04a66e24311a3035c2b5aec0c6e438b288c9ac15",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0905.4204",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "04a66e24311a3035c2b5aec0c6e438b288c9ac15",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
7901603 | pes2o/s2orc | v3-fos-license | Statistical Analysis and Optimization of Acid Dye Biosorption by Brewery Waste Biomass Using Response Surface Methodology
Biosorption of Acid Yellow (AY 17) and Acid Blue (AB 25) was investigated using a biomass obtained from a brewery industrial waste, spent brewery grains (SBG). A 2^4 full factorial response surface central composite design with seven replicates at the centre point, and thus a total of 31 experiments, was employed for experimental design and analysis of the results. The combined effect of time, pH, adsorbent dosage and dye concentration on the dye biosorption was studied and optimized using response surface methodology. The optimum contact time, pH, adsorbent dosage and dye concentration were found to be 45 min, 6, 0.5 g and 75 mg/L respectively for the maximum decolorization of AY 17 (97.2%), and 40 min, 2, 0.4 g and 75 mg/L respectively for the maximum decolorization of AB 25 (97.9%). A quadratic model was obtained for dye decolourization through this design. The experimental values were in good agreement with predicted values and the model developed was highly significant, the coefficient of determination (R^2) being 0.89 and 0.905 for AY 17 and AB 25 respectively. Experimental results were analyzed using the analysis of variance (ANOVA) statistical approach.
Introduction
Dyes are intensely coloured substances used for the dyeing of various materials such as textiles, paper, leather, hair, foods, drugs, cosmetics, plastics and many more. They are retained on these materials by physical adsorption, salt or metal complex formation, mechanical retention in solution, or by the formation of covalent chemical bonds. The colour of a dye is due to electronic transitions between various molecular orbitals, the probability of these transitions determining the intensity of the colour. Textile dyes are also designed to be resistant to fading by chemicals and light. They must also be resilient to both high temperatures and enzyme degradation resulting from detergent washing. For these reasons, degradation of dyes is typically a slow process.
The effluents arising from textile and dyeing industries are among the most problematic to treat, not only for their high chemical and biological oxygen demands, suspended solids and toxic compounds, but also for colour, which is the first contaminant to be recognized by the human eye. Dye wastewater is usually treated by physical or chemical processes for colour removal. These include chemical coagulation/flocculation, precipitation, ozonation, adsorption, oxidation, ion exchange, membrane filtration and photodegradation. These methods for colour removal from effluents have high operating costs and limited applicability (Cooper, 1993). In recent years, biological decolourization has been considered as an alternative, economical and eco-friendly method. This has led many researchers to search for effective, economical and eco-friendly alternative materials such as Chitin (McKay et al., 1983); Silica (McKay, 1984); hardwood sawdust (Asfour et al., 1985); Bagasse pith (McKay et al., 1987); Fly ash (Khare et al., 1987); Paddy straw (Deo, 1993); Rice husk (Lee & Low, 1997); Slag (Ramakrishna & Viraraghavan, 1997); Chitosan (Juang et al., 1997); Palm fruit bunch (Nasser, 1997); and Bone char (Ko et al., 2000). Thus research is still ongoing to develop alternative low-cost adsorbents to the activated carbon mostly used in industry. In the present study, spent brewery grains (SBG), which are abundantly available as waste in the brewery industry, were therefore tried and tested as a biosorbent.
Except for a few studies in the literature on colour removal, only traditional methods of experimentation have been followed to study the effects of all variables; these are lengthy, largely trial-and-error processes and also require a large number of experimental combinations to obtain the desired results. In addition, obtaining the optimum conditions, i.e., the point at which maximum % colour removal could be achieved, is almost beyond the scope of such methods. The traditional step-by-step approach, although widely used, involves a large number of independent runs and does not enable us to establish the multiple interacting parameters. This method is also time-consuming and material-consuming, and requires a large number of experimental trials to estimate the effects, which are then unreliable. Specifically designed experiments that optimize the system with a smaller number of runs are therefore needed. These limitations of the traditional method can be eliminated by optimizing all the affecting parameters collectively through statistical experimental design (Montgomery, 1991).
So, in the present study, experiments were designed by incorporating all important process variables, namely time, pH, adsorbent dosage and initial dye concentration, using the statistical design software Minitab 14 (USA). Experimental design allows a large number of factors to be screened simultaneously to determine which of them has a significant effect on % colour removal. A polynomial regression response model shows the relationship of each factor to the response as well as the interactions among the factors. These factors can then be optimized to give the maximum response (% colour removal) with a relatively small number of experiments. In this context, an approach using statistically designed experiments for finding the optimum conditions for maximum % colour removal is discussed in detail. The corresponding interactions among the variables were studied and optimized using a central composite design together with response surface and contour plots.
Biosorbent and Adsorbate
The brewery industry waste, spent brewery grains, was obtained from Mohan Breweries and Distilleries Limited, Chennai, India, and dried at 60 °C for 12 hours. The synthetic textile dyes Acid Yellow 17 and Acid Blue 25 were obtained from Sigma-Aldrich Chemicals Private Ltd., India, and were used without further purification; their chemical structures are shown in Fig. 1 and Fig. 2. All chemicals and reagents used for the experiments were of analytical grade and supplied by Qualigens Fine Chemicals.
Preparation of biomass
Spent brewery grains, taken from Mohan Breweries and Distilleries Limited, Chennai, India, were suspended in 1 M sulphuric acid solution (20 g of SBG per 100 mL of acid solution) for one hour. The suspension was then filtered and the acid solution discarded. The biomass was washed with distilled water many times until it was completely free of acid and then dried at 60 °C for 24 hours. The dried biomass was ground, sieved to 270 mesh size and stored for further use in the experiments. As seen in Fig. 3, the scanning electron micrograph (SEM) shows the porous structure of the biosorbent.
Batch Experiments
Stock solutions (1000 mg/L) of each dye (AY 17 and AB 25) were prepared in double-distilled water and diluted as required to the working concentration. The required pH was adjusted with 0.1 N HCl or 0.1 N NaOH, and pH was measured using a pH meter (Elico, model LI 120, Hyderabad, India). Dye concentration was measured using a UV-Vis spectrophotometer (HITACHI U 2000) at the wavelength corresponding to the maximum absorbance of each dye, λmax = 401.5 nm for AY 17 and λmax = 600 nm for AB 25. The dye solution (50 mL) at the desired concentration and pH was contacted with the desired adsorbent dosage in 250 mL Erlenmeyer flasks. The flasks were kept under agitation in a rotating orbital shaker at 150 rpm for the desired time. Experiments were performed according to the central composite design (CCD) matrix given in Table 2. The response was expressed as % colour removal, calculated as described below.
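The printed expression for the response did not survive extraction; assuming the definition conventionally used in dye biosorption studies, the % colour removal would take the form

\text{\% colour removal} = \frac{C_0 - C_e}{C_0} \times 100,

where C_0 is the initial dye concentration (mg/L) and C_e the residual (equilibrium) dye concentration after contact. This standard form is given here as a reconstruction, not as a quotation from the original.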
Factorial experimental design
The parameters contact time, pH, adsorbent dosage and dye concentration were chosen as independent variables, with dye removal efficiency as the output response. The independent variables, experimental ranges and levels for AY 17 and AB 25 removal are given in Table 1 and Table 2. A 2^4 full-factorial experimental design, with seven replicates at the centre point and thus a total of 31 experiments, was employed in this study. The centre point replicates were chosen to verify any change in the estimation procedure, as a measure of precision. The experimental plan showing the coded values of the variables together with the dye removal efficiency is given in Table 3. The analysis focused on how the colour removal efficiency is influenced by the independent variables, i.e., time (X1), pH (X2), adsorbent dosage (X3) and dye concentration (X4). The dependent output variable is the removal efficiency. For statistical calculations, the variables Xi were coded as xi according to a linear coding relationship, and the behaviour of the system was described by a quadratic (second-order) equation; standard forms of both are sketched below. The results of the experimental design were studied and interpreted with the statistical software MINITAB 14 (PA, USA) to estimate the response of the dependent variable.
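The coding relationship and the quadratic model equation are not reproduced in the extracted text; in standard central composite design practice, and consistent with how the coded factors x1-x4 are used later in the paper, they take the forms

x_i = \frac{X_i - X_0}{\Delta X_i}, \qquad
Y = \beta_0 + \sum_{i=1}^{4}\beta_i x_i + \sum_{i=1}^{4}\beta_{ii} x_i^{2} + \sum_{i<j}\beta_{ij} x_i x_j + \varepsilon,

where X_0 is the value of X_i at the centre point, ΔX_i is the step change, Y is the predicted % colour removal, the β terms are the regression coefficients and ε is the error term. These are assumed standard forms, not the paper's own equation numbering or coefficients.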
3.1 Response Surface Methodology (RSM)
The most important parameters affecting the efficiency of a biosorption process are contact time, pH, adsorbent dosage and dye concentration. In order to study the combined effect of these factors, experiments were performed at different combinations of the physical parameters using statistically designed experiments.
The main effects of each parameter on dye removal are given in Fig. 4 and Fig. 5 for AY 17 and AB 25 respectively. From the figures, it was observed that the maximum removal occurred at 60 min for AY 17 and 45 min for AB 25. This indicates that the longer the contact time between the dye and the adsorbent, the higher the equilibrium removal efficiency. Maximum adsorption occurred in the acidic pH range for both acid dyes. This may be due to the strong electrostatic attraction between the positively charged surface of the SBG and the anionic dyes AY 17 and AB 25. Acid dyes are also called anionic dyes because of the negatively charged chromophore group. As the initial pH increases, the number of negatively charged sites on the biosorbent surface increases and the number of positively charged sites decreases. A negative surface charge does not favour the biosorption of dye anions due to electrostatic repulsion (Namasivayam and Kavitha, 2002). In general, acid dye uptake is much higher in acidic solutions than in neutral and alkaline conditions.
It was observed that the removal efficiency of both dyes, AY 17 and AB 25, increases as the adsorbent dosage increases. This may be due to the increase in the available active surface area of the adsorbent. The removal efficiency of AY 17 decreases with increasing dye concentration, owing to the limited surface area of the adsorbent available to the increasing number of dye molecules, whereas for AB 25 it increases with initial dye concentration over the studied range up to 175 mg/L. Using the experimental results, the regression model equations (second-order polynomials) relating the removal efficiency to the process parameters were developed and are given in Eq. (4) and Eq. (5) for AY 17 and AB 25 respectively.
The fitted regression equations for the output response of AY 17 and AB 25 are given as Eq. (4) and Eq. (5), respectively, with the coefficient estimates listed in Tables 4 and 5. Apart from the linear effect of each parameter on dye removal, the RSM also gives insight into the quadratic and interaction effects of the parameters. These analyses were done by means of Fisher's F-test and Student's t-test. The Student's t-test was used to determine the significance of the regression coefficients of the parameters. The P-values were used as a tool to check the significance of each of the interactions among the variables, which in turn may indicate the patterns of the interactions among the variables. In general, the larger the magnitude of t and the smaller the value of P, the more significant the corresponding coefficient term (Montgomery, 1991). The regression coefficients and the t- and P-values for all the linear, quadratic and interaction effects of the parameters are given in Table 4 and Table 5 for AY 17 and AB 25. It was observed that for AY 17 the coefficients for the linear effects of adsorbent dosage and dye concentration (P = 0.000, 0.001) were highly significant and the coefficient for the linear effect of time was the least significant, while for AB 25 the linear effects of pH and time (P = 0.000, 0.004, respectively) were highly significant and the coefficient for the linear effect of adsorbent dosage was the least significant. For AY 17, the coefficients of the quadratic effects of pH and dye concentration (P = 0.002, 0.130) were the most significant and the coefficient of the quadratic term of time (P = 0.865) was the least significant. For AB 25, the coefficients of the quadratic effects of time and pH (P = 0.000, 0.042) were the most significant and the coefficient of the quadratic term of dye concentration (P = 0.737) was the least significant.
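The kind of second-order fit and coefficient testing described above can be reproduced outside Minitab. The sketch below is purely illustrative: the design matrix is rebuilt from the stated design (2^4 factorial, 8 axial, 7 centre runs), but the response values are synthetic placeholders, not the paper's data, and the model structure mirrors the standard quadratic RSM form rather than Eq. (4) or Eq. (5) themselves.

```python
import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Rebuild a 2^4 central composite design in coded units: 16 factorial points,
# 8 axial points (alpha = 2) and 7 centre replicates -> 31 runs, as in Table 3.
factorial = list(itertools.product([-1, 1], repeat=4))
axial = [tuple(2 * s if i == j else 0 for i in range(4))
         for j in range(4) for s in (-1, 1)]
centre = [(0, 0, 0, 0)] * 7
design = pd.DataFrame(factorial + axial + centre,
                      columns=["x1", "x2", "x3", "x4"])

# Synthetic % colour removal response standing in for the measured data
# (placeholder values only -- NOT the experimental results of the paper).
design["removal"] = (80 + 3 * design.x1 - 5 * design.x2 + 6 * design.x3
                     - 2 * design.x4 - 1.5 * design.x2 ** 2
                     + 2 * design.x1 * design.x2
                     + rng.normal(0, 1.5, len(design)))

# Full second-order response-surface model: linear, squared and two-factor
# interaction terms, analogous to the quadratic model fitted in Minitab 14.
formula = ("removal ~ x1 + x2 + x3 + x4 "
           "+ I(x1**2) + I(x2**2) + I(x3**2) + I(x4**2) "
           "+ x1:x2 + x1:x3 + x1:x4 + x2:x3 + x2:x4 + x3:x4")
fit = smf.ols(formula, data=design).fit()
print(fit.summary())  # coefficient estimates with t- and P-values (cf. Tables 4-5)
```

Run on the actual Table 3 data, the summary output would correspond to the coefficient, t and P values reported in Tables 4 and 5.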
The coefficients of the interactive effects of AY 17 among the variables did not appear to be very significant in comparison to the interactive effects of AB 25.However, the interaction effect between time and pH (P = 0.000) and time and adsorbent dosage (P = 0.131) were found to be significant for AB 25.The significance of these interaction effects between the variables would have been lost if the experiments were carried out by conventional methods.
The optimum values of the process variables for the maximum removal efficiency of both dyes, AY 17 and AB 25, are shown in Table 6. These results are in close agreement with those obtained from the response surface analysis, confirming that RSM can be used effectively to optimize the process parameters of complex processes through the statistical design of experiments. Although several studies on the effects of parameters on adsorption have been reported in the literature, only a few attempts have been made to optimize them using statistical optimization methods. The predicted values (from the model equations) were compared with the experimental results and the data are shown in Table 3.
Analysis of Variance (ANOVA)
The statistical significance of the ratio of the mean square due to regression to the mean square due to residual error was tested using analysis of variance (ANOVA). ANOVA is a statistical technique that subdivides the total variation in a set of data into component parts associated with specific sources of variation for the purpose of testing hypotheses on the parameters of the model (Segurola et al., 1999). According to the ANOVA results in Table 7 and Table 8 for AY 17 and AB 25, the F statistic values for all regressions were high. A large value of F indicates that most of the variation in the response can be explained by the regression model equation. The F statistic value of 9.24 is greater than the tabulated F(14, 16) value (2.38), which indicates that the second-order polynomial equation (4) is highly significant and adequate to represent the actual relationship between the response and the variables, with a high coefficient of determination (R = 0.9433; R^2 = 0.89) for AY 17. The F statistic value of 10.9 is greater than the tabulated F(14, 16) value (3.14), which indicates that the second-order polynomial equation (5) is highly significant and adequate to represent the actual relationship between the response and the variables, with a high coefficient of determination (R = 0.9513; R^2 = 0.905) for AB 25.
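As a quick cross-check of the tabulated values quoted above, the critical F value for 14 and 16 degrees of freedom can be looked up programmatically. The snippet below is a small illustrative sketch: the 5% significance level is an assumption, and the F statistics are simply the values reported in the text.

```python
from scipy.stats import f

# Critical F value for 14 numerator and 16 denominator degrees of freedom
# (14 model terms, 31 - 14 - 1 = 16 residual df) at the 5% level.
f_crit = f.ppf(0.95, dfn=14, dfd=16)
print(f"F_crit(14, 16; alpha = 0.05) = {f_crit:.2f}")

# Compare the reported regression F statistics against the critical value.
for dye, f_stat in {"AY 17": 9.24, "AB 25": 10.9}.items():
    verdict = "significant" if f_stat > f_crit else "not significant"
    print(f"{dye}: F = {f_stat} -> regression {verdict} at the 5% level")
```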
The associated P-value is used to judge whether the F statistic is large enough to indicate statistical significance. A P-value lower than 0.05 indicates that the model is considered statistically significant (Kim et al., 2003). The P-values for almost all of the regressions for both acid dyes, AY 17 and AB 25, were lower than 0.01. This means that at least one of the terms in the regression equation has a significant correlation with the response variable. The ANOVA table also shows a term for residual error, which measures the amount of variation in the response data left unexplained by the model. These results indicate that the form of the model chosen to explain the relationship between the factors and the response is adequate.
The response surface and contour plots estimating the removal efficiency as a function of the independent variable pairs adsorbent dosage and pH, and pH and dye concentration, are shown in Figs. 6 and 7 for AY 17 and Figs. 8 and 9 for AB 25, respectively. The contour plots show the relative effects of any two variables when the remaining variables are kept constant. The maximum predicted yield is indicated by the surface confined within the smallest curve of the contour diagram (Gopal et al., 2002).
Figs. 10-13 depict the experimental and model-predicted removal efficiencies. The predictive capacity of the models was also evaluated in terms of the relative deviation (RE_Exp - RE_Pred)/RE_Exp. With a few exceptions, the predicted values showed good agreement (within 3% error) with the experimental data shown in Table 3.
Conclusions
The present investigation clearly demonstrated the applicability of SBG as a biosorbent for AY 17 and AB 25 dye removal from aqueous solutions. Experiments were carried out covering a wide range of operating conditions. The influence of time, pH, adsorbent dosage and initial dye concentration was critically examined. It was observed that the percentage removal efficiency is significantly influenced by time, pH, adsorbent dosage and initial dye concentration. A 2^4 full factorial central composite experimental design was applied. The experimental data were analyzed using response surface methodology and the individual and combined effects of the parameters on colour removal efficiency were evaluated. Regression equations for removal efficiency were developed from the experimental data and solved using the statistical software Minitab 14. The model predictions were in good agreement with the experimental observations. Under the optimal values of the process parameters, around 97.2% and 97.9% colour removal was achieved for the AY 17 and AB 25 dyes respectively using SBG. This study clearly showed that response surface methodology is a suitable method for identifying the operating conditions that maximize dye removal.
Figure 2. The chemical structure of AB 25 dye
Figure 5. Main effects plot of parameters for AB 25 removal
Figure 8. Response surface plot of AB 25 dye removal (%) showing interactive effect of pH and dye concentration
Table 1. Experimental range and levels of independent process variables for AY 17 removal
Table 2. Experimental range and levels of independent process variables for AB 25 removal
Table 3. Full factorial central composite design matrix for AY 17 and AB 25 removal
Table 4. Estimated regression coefficients and corresponding T- and P-values for AY 17
Table 5. Estimated regression coefficients and corresponding T- and P-values for AB 25
Table 6. Optimum values of the process parameters for maximum efficiency for AY 17 and AB 25 | 2017-09-08T14:50:51.310Z | 2009-03-19T00:00:00.000 | {
"year": 2009,
"sha1": "6fad0c2379af1177955447ee4669f0c7fd9731c2",
"oa_license": "CCBY",
"oa_url": "https://ccsenet.org/journal/index.php/mas/article/download/1251/1214",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "6fad0c2379af1177955447ee4669f0c7fd9731c2",
"s2fieldsofstudy": [
"Chemistry",
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
270037460 | pes2o/s2orc | v3-fos-license | Between acute medicine and municipal alcohol treatment: Cross-sectoral collaborations regarding patients with alcohol problems
Aim: The aim was to examine cross-sectoral collaborations of a Danish emergency department (ED) and two municipal treatment centres in the uptake area regarding patients with alcohol problems. Methods: The study was a qualitative exploratory study. We conducted individual interviews with ED nurses and secretaries (n = 21) and group interviews (n = 2) in municipal alcohol treatment centres with three and four participants, respectively. Interviews were analysed, first with qualitative content analysis, then by applying the analytical concept "boundary object". Results: Three themes emerged: (1) Responsibilities in practice; (2) Professional contrasts; and (3) The social nurse in a unique position. The themes illuminated the low degree of collaboration characterising the intersectoral work. Blurred responsibilities, challenged communication and an acute versus long-term focus were some of the factors not supporting cross-sector collaboration. However, the function of the social nurse was highly appreciated in both sectors and plays a central role. Nonetheless, implicit limitations of this function entail that not all patients with alcohol problems are referred and handled within an ED setting. Conclusions: Overall, we found a lack of collaborative work between healthcare professionals in the ED and municipalities regarding patients with alcohol problems. However, the "social nurse" function was greatly valued in both sectors for its mediating role, as healthcare professionals in both sectors experienced a lack of organisational structures supporting a collaborative network, perceived temporal barriers, limited knowledge exchange and differences in approaches to patients.
Background
Excessive alcohol use causes a wide range of illnesses that have consequences both at a personal and societal level (WHO, 2018b).In Denmark, new estimates show that nearly 10% (n = 401,682) of the adult population has moderate alcohol problems and approximately 67,000 people are estimated to have severe alcohol problems (SIF, 2023).Approximately 2% of all Danish hospital contacts are alcohol related (SST & SSI, 2015).In 2013, there were 131,264 alcohol-related contacts registered in the hospitals (SSI & SST, 2015) and it has been found that 17% of patients in somatic wards scored >8 with the AUDIT screening tool (equivalent to hazardous use or above) (Schwarz et al., 2019).The prevalence of alcohol use in Danish emergency departments (EDs) is unknown, but a study from the UK has shown that 40.1% of patients exceeded this limit (Drummond et al., 2014).In 2018, 17,583 people were in public treatment for alcohol dependency (Sundhedsdatastyrelsen, 2020), and 20% were referred from either a somatic or psychiatric hospital (Schwarz et al., 2018).These numbers leave a considerable treatment gap.Furthermore, it is known that the time lapse between onset of the alcohol problem and treatment is 10-18 years (Chapman et al., 2015;Kessler et al., 2001); therefore, it is crucial to shorten this period and explore possibilities for better cross-sector collaborations and referral practices to ensure more people enter treatment earlier.Of admitted patients in an ED, 68% are discharged directly to their home without being hospitalised to a specialty ward (AHH).This means that ED settings play a key role in the cross-sectoral collaboration and are an important venue for preventive measures in at-risk patient groups.
Screening, Brief Intervention and Referral to Treatment (SBIRT) is one of the most examined preventive public health models intending to bridge the detection of alcohol problems in nontreatment seeking populations to specialty treatment services (SAMHSA, 2011).For a successful implementation, the literature calls for "seamless transitions" and strong referral networks in the "Referral to Treatment" (RT) component of SBIRT (Del Boca et al., 2017;Madras et al., 2009;SAMHSA, 2011SAMHSA, , 2013)).However, evidence is lacking that the BI component is effective in linking individuals to specialty treatment (Glass et al., 2015;Kim et al., 2017;Schwarz et al., 2019).In addition, the RT component itself is also understudied (Berger et al., 2017;Cucciare & Timko, 2015;Glass et al., 2017), as it is often not the primary outcome in randomised controlled trials (RCTs) within this field (Glass et al., 2015).Thus, its effectiveness is therefore questionable (Glass et al., 2015;Saitz, 2015).Barriers to SBIRT have been reported at patient, provider and system levels (Broyles et al., 2012;Cucciare & Timko, 2015;Gargaritano et al., 2020;Johnson et al., 2011).Although patient barriers mainly are connected to shame and stigma entering treatment (Farhoudian et al., 2022;Finn et al., 2023), a review of provider barriers in-hospital found personal discomfort, lack of knowledge, and time and resources being some of the most frequent barriers (Gargaritano et al., 2020).Further, the responsibility for addressing problematic alcohol use has been problematised by nurses, suggesting other specialised professionals to intervene and handle alcohol problems (Broyles et al., 2012).However, many of these studies reflect either the setting of primary care or different somatic hospital wards, are focused on all SBIRT components (with minor focus on the "RT" part) or consist of surveys.
At the system level, cross-sector collaborations and continuity in patient trajectories have been pivotal topics in healthcare for decades, but the complex nature of the required coordination to accomplish this still causes numerous challenges and is found inadequate (Seemann & Gustafsson, 2016;WHO, 2018a).In general, literature on cross-sector collaborations in other areas of healthcare has shown difficulties in transitions and communication, and the supposedly two collaborative sectors have been described as "two worlds" (Coleman & Berenson, 2004;Petersen et al., 2019) with their different culture, including goals and motivators.Further, the conditions for collaboration are not always supported by the underlying organisational structures (Høgsgaard, 2016;Kousgaard et al., 2019).Knowledge of integrated care and collaborative models supporting a cross-sector trajectory are comprehensive and diverse (WHO, 2016).However, it is not always possible to transfer a collaborative model that works in one setting into another (VIVE, 2018).This could be due to contextual factors and the fact that interventions need to be adapted or tailored to the local context (Kirk et al., 2021;Waltz et al., 2019;Wensing et al., 2011).In implementation science, examining context from micro to macro levels in a preimplementation phase is highlighted in determinant frameworks, since it impacts implementation outcomes and degree of success, and can generate more implementable solutions to a longstanding problem (Nilsen & Bernhardsson, 2019).
To sum up, only a few of the patients who need treatment are referred from hospitals to alcohol treatment. Further, the referral-to-treatment (RT) component of SBIRT is the least studied, and cross-sector collaborations and contextual factors as determinants for implementation are qualitatively understudied between EDs and municipalities. Alcohol treatment pathways thus appear challenged, and the aim of the present study was to examine the perspectives of both sectors together to gain a better understanding with which to improve and design sustainable cross-sector collaborations.
Aim
The aim of the present study was to examine perceptions of existing cross-sectoral collaborations regarding patients with alcohol problems from healthcare professionals' perspectives in an ED and two associated municipalities.With this approach we can explore possibilities for future cross-sectoral interventions aimed at patients with alcohol problems.
Research questions
▪ How are cross-sector collaborations regarding patients with alcohol problems experienced in an ED and in municipal alcohol treatment centres, respectively?▪ Which challenges occurand how are they managedin the cross-sectoral work?
Setting
The healthcare system in Denmark is tax-financed with free and, in principle, equal access for all citizens. The healthcare system works at three levels: national; regional (n = 5); and municipal (n = 98). Hospitals are governed by the regions and are responsible for somatic and psychiatric hospital treatment and care. The municipalities are responsible for preventive healthcare initiatives (such as dental services for all children aged <18 years, healthcare centres/clinics, family nursing, etc.), rehabilitation, home care services and nursing homes. In relation to alcohol, since 2007, according to §141 of the Danish Health Act (Indenrigs- og sundhedsministeriet, 2019), the treatment and prevention of alcohol problems are overall a responsibility of the municipalities, while hospitals are responsible for acute alcohol-related hospitalisations, such as severe detoxifications, withdrawal symptoms and alcohol-related co-morbidity. With the responsibility distributed among multiple municipalities, national quality standards for treatment have been called for, since a huge variation in provided quality and staff competences was found in a recent comparison (SST, 2019).
The present study was conducted in an ED at a university hospital in the Capital Region and in two municipalities.The hospital covers an area of approximately 500,000 citizens and collaborates with 10 municipalities in the uptake area.The ED receives nearly 200 patients daily and almost 70% of them are discharged directly from the ED, without entering a specialised inpatient unit (AHH).
A function relevant for this study is the "social nurse", originating in Denmark in 2006, and currently 1-2 social nurses are employed at each hospital in the Capital Region (Dideriksen et al., 2019).Social nurses are registered nurses (BA level, 3.5 years) with preferably more than 5 years of experience with the target group.They are considered specialists and the majority also hold a higher academic degree, such as a Master's degree.Social nursing is based on the principles of harm reduction with a holistic and health-related approach (Dideriksen et al., 2019).The aim of social nursing is to reduce inequalities for socially marginalised patients with diverse health-related problems, such as homelessness, multi-morbidity, severe alcohol dependency, drug addiction or psychiatric disorders.A central part of the job is to provide professional guidance to healthcare professionals (HCPs) regarding treatment of alcohol and substance use (Dideriksen et al., 2019).Hence, social nurses are usually present in the different wards several times per week, but they are not part of frontline staff, and a referral is needed.
The two participating municipalities in this study have <40,000 inhabitants (Muni A) and >500,000 inhabitants (Muni B).Both municipalities are a part of the hospital's uptake area and operate their own alcohol treatment facilities, in contrast to other municipalities in the Capital Region that buy services from external private alcohol treatment providers.Thus, both municipalities have their own facilities and provide a broad range of therapeutic sessions (individual and group), medical treatment, detoxifications and a variety of social support.They provide an individual treatment plan, often consisting of a combination of medical and psychosocial treatment.The treatment facilities are interdisciplinary, with psychologists, nurses, doctors, physiotherapists and social workers.
Study design
This study is a qualitative exploratory study examining context as a determinant in cross-sectoral collaborations regarding patients with alcohol problems in an ED and two municipal alcohol treatment centres. With this qualitative approach, it is possible to draw attention to those elements in the cross-sectoral collaboration that work as barriers or enablers in future implementation efforts (Green & Thorogood, 2009; Nilsen & Bernhardsson, 2019).
Participants and data collection
Semi-structured single interviews (Kvale & Brinkmann, 2014) with ED nurses and medical secretaries (n = 21) were held in January and February 2020. Sampling for interviews was purposive, and subsets of the interview data have been used in previous studies in combination with other data material (Sivertsen et al., 2021, 2023), while the data for this paper have not been used previously. Ward managers of nurses and medical secretaries acted as gatekeepers (Green & Thorogood, 2009) and, on scheduled days during the data collection period, the interviewer was placed in a room near the ED during working hours. The HCPs on duty on those days were asked to participate in the study and were interviewed in turns. Nurses (n = 11) had a mean of 5.8 years of seniority in the ED (range 1-16), whereas medical secretaries (n = 10) had 12.9 years (range 1-33). Interviews varied in length with a mean duration of 31 min (range 21-60 min). Every participant received written and oral information about the study and signed consent forms.
Two group interviews (Green & Thorogood, 2009) with healthcare professionals from municipalities were conducted in their respective municipal alcohol treatment facilities in August 2020.Conducting interviews in preexisting groups of colleagues is suitable for examining common experiences, opinions and social group norms (Kitzinger, 1995).In collaboration with treatment facility managers, participant sampling was purposive.Participants all worked in the alcohol treatment facilities and had experiences with ED collaboration.Three persons participated in Municipality A: the manager of the treatment facility; a doctor; and a nurse.In Municipality B, a manager, one doctor and two nurses participated.Both interviews had a duration of 2 h.
Interview guides were developed for the two sectors based on previous findings from an ethnographic study with 150 h of observation from an ED (Sivertsen et al., 2021).Here, it was found that cross-sector collaborations and HCP possibilities for actions ought to be explored further.Examples of questions are shown in Table 1.
Both single and group interviews were conducted by DMS.In group interviews, JWK acted as observer/co-interviewer. Interviews were recorded on a digital voice recorder and later transcribed verbatim.
Data analysis
First, the material was read and re-read to gain an overall understanding of the content.All interviews were inductively coded using the Qualitative Content Analysis (Graneheim & Lundman, 2004).All transcriptions were divided into meaning units and subsequently the essence was condensed at a manifest level.Next, an interpretative and latent level was applied and codes that illuminated subthemes were provided.The first author coded the data material (207 pages) and JWK validated the work; discrepancies were discussed until reaching consensus.Finally, themes were developed in an iterative and interpretative process and discussed among authors until reaching consensus.This methodology provides insights at theoretical and latent levels and has a transparent analytical process that increases validity (Graneheim & Lundman, 2004).(Table 2) The second-order analysis is inspired by the analytical concept of "boundary object" (Star & Griesemer, 1989) to explain and nuance the findings further.After re-reading the coded data material, the results were interpreted in an iterative process with this analytic perspective.The concept originates from the work of Star & Griesemer studying the collaboration of different actors and their divergent perspectives of flora and fauna species in a Museum of Zoology (Star & Griesemer, 1989).Here a boundary object is described as: "objects which are both plastic enough to adapt to local needs and the constraints of the several parties employing them, yet robust enough to maintain a common identity across sites […] They have different meanings in different social worlds but their structure is common enough […] to make them recognizable" (Star & Griesemer, 1989).Further, the term boundary should be understood as "a shared space" rather than an edge or a border (Leigh Star, 2010).Thereby, a boundary object has taken various forms in different studies and is not necessarily described as a material object, a "thing", such as fieldnotes, specimens or maps (Star & Griesemer, 1989), but can also be a practice, e.g., care pathway (Haland et al., 2015), an idea or theory (Fox, 2011), or even a patient or human body (Bishop & Waring, 2019;Kirk et al., 2024;Mol, 2002).Further, boundary objects are often most relevant to study from an organisational level (Leigh Star, 2010).In this study, "the patient with an alcohol problem" is considered the boundary object, since the patient is present in both social worlds (EDs and municipal treatment centre), adapts to these worlds and is perceived differently in these places due to different representations of "identity".The patient thereby constitutes a shared entity that crosses boundaries, further accounted for in the analysis.
Ethical considerations
The present study was approved by the Data Protection Agency in the Capital Region of Denmark (2012-58-0004 and VD-2018-229, I-Suite: 6471). According to the Danish National Committee on Health Research Ethics, there is no need for ethical approval for interview studies. The study was conducted according to the Helsinki Declaration (WMA, 2013): all participants received written and oral information regarding the purpose of the study, confidentiality, voluntary participation, the right to withdraw and anonymity before they gave their written consent. The results are presented according to the Consolidated criteria for reporting qualitative research (COREQ) (Tong et al., 2007).
Results
The overarching and unifying topic, corresponding to the aim of the study, is the crosssector collaboration between the ED and the two municipalities.Based on the qualitative content analysis, three themes and seven subthemes emerged (see Figure 1), describing and explaining the existing collaboration from different perspectives.The main themes are: 1) Responsibilities in practice; 2) Professional contrasts; and 3) The social nurse in a unique position.In the following, we analyse these themes by illustrating how "the patient with an alcohol problem" can be interpreted as a boundary object (Star & Griesemer, 1989) across sectors, and further how this boundary object, as a shared object, can be used to understand the collaboration of these distinct social worlds.
Theme 1: responsibilities in practice
This theme concerns the HCPs' reflections on their own and their collaborators' responsibilities in connection to alcohol treatment and how they interpret and practise this daily.The results showcase some of the discrepancies in perceptions of HCPs working in the two sectors, but also provides a picture of HCPs' visions for treating patients with alcohol problems in a shared responsibility.
Subtheme: who will take care of them?
This subtheme concerns the HCPs' perspective on responsibility for the patients between municipalities and hospitals. Most ED nurses express that patients do not have sufficient health literacy or motivation to seek help on their own after discharge, and they are uncertain of what the patients are discharged to and who "catches them" outside the hospital. The nurse is aware that the patient (as a boundary object) is a shared part of another social world outside the hospital. The descriptions of the patients present them as passive, as someone who cannot take care of themselves and is not motivated to seek treatment. Further, there is insecurity about whether anyone will take care of the patient and, furthermore, about what can be provided: what are the capabilities of their municipal collaborators? The impression is that they work in parallel worlds without necessarily coordinating patients' trajectories. In the interviews, responsibilities are discussed among all participants, but their placement is unclear. Some ED nurses are of the opinion that too much responsibility has been placed on the municipalities after the healthcare reform (see the description under "Setting"); others believed that the GP would be the preferred solution. Further, they argue that patients with an alcohol problem should not be held responsible for seeking help themselves. On the other hand, municipalities suggest that the hospital should be responsible for this connection. Again, the patient role is perceived as passive by both sectors. This view may be due to previous experiences with patients' inability to act, or to moral/ethical concerns. Either way, it exempts the patient from responsibility.
Between professions in the ED, some nurses problematise that they sometimes can feel left alone with the responsibility for the patients' trajectories and call for secretaries and doctors to take part in this responsibility as well.Between sectors, a doctor in the municipality raises some issues regarding the professional responsibility of ED doctors, in terms of referring to municipal follow-up treatment if this is a possibility.
My point is that the Hippocratic oath also applies to doctors who treat "addicts".If the patient shows signs of a tumour, the doctor needs to deal with it.One cannot say: "I can't deal with that".It's a no-go!(Muni A, 1) According to this quote, the doctor has experienced that not all ED doctors consider addiction as a disease that needs treatment.Or, at least, they do not know how to, and then omit responsibility.These views are contradictory and showcase the flexible nature of a boundary object, and how different versions or degrees of the disease regarding the same patient can be perceived in the eyes of different professionals.Overall, legal responsibilities were more often articulated by participants from municipalities who all referred to §141 in the Danish Health Act, saying that all citizens have a right to treatment.They explain that they welcome everyone who seeks their help, regardless of the severity and then assesses whether treatment should be initiated.
Subtheme: dream scenario
This subtheme contains the HCPs' ideas for changes in structures and new opportunities for care and treatment for patients with alcohol problems, reflecting inadequacies in the current system.Many participants used expressions like "I wish…" or "I hope…" when they elaborated on this topic.A recurring topic was that patients with alcohol problems should not be in an ED.HCPs feel that they have no time for them, that the care given is not appropriate and that they are misplaced in the ED.
They shouldn't be here.They should be in a place where someone could take much better care of them and could follow up on it.This is just a revolving dooryou are in and then you are out.So maybe they should go somewhere else.That would be the best.(ED nurse,2) This view can both be interpreted as selfreflection from a professional expressing concern of a patient group, but on the other hand it represents a reality where patients are objectified and "moved around" as the system demands.In contrast, according to the municipalities, the patients with the most severe alcohol problems do not want to be in the ED and they describe how these patients have an aversion of being admitted and how municipal HCPs try to avoid a hospitalisation by using close monitoring and relational work in the municipalities.Even though preventing an admission is overall beneficial, supporting patients' aversions may expand the gap between sectors and potentially create distrust among them.
Many HCPs are aware of this gap and suggests that some in-hospital reorganisation should be done to optimise alcohol treatment.Several ED nurses have a dream of a specialised hospital unit only for patients with substance use, staffed with dedicated doctors, nurses and social workers.Some suggest a kind of hybrid between a somatic and a psychiatric unit, responsible for the complex treatment, which could be placed in either a somatic hospital ward with therapeutic treatment available or at a psychiatric ward with competences in acute somatic conditions.
The municipalities highlight the benefits of an outreach program where employees from the alcohol treatment centre could visit the hospitalised patient and make an agreement of entering treatment after discharge.
We can help more people into treatment by meeting them in the ward, hand them a leaflet and say: it is me you will meet next Monday. I believe that would be the best-case scenario. (Muni A, 2) Accordingly, this approach can provide the patient with a feeling of safety and ensure that the patient is not lost between hospital and municipalities, making it a shared responsibility. With these more integrated initiatives the boundary object would be described as positive, since a positive boundary object enables the exchange of ideas between collaborating groups, yields a common language and enhanced knowledge, and can even bring harmony to a previous dispute (Fox, 2011).
Theme 2: professional contrasts
This theme concerns the HCPs' perception of their own professional roles and their core tasks.It further showcases boundary challenges and how barriers for collaboration exist due to assumptions about each other, lack of knowledge exchange and temporal structures.
Subtheme: acute versus planned treatment
Reflecting on their possibilities for action, the HCPs in the ED explain that besides services offered by the social nurse (see Theme 3), they lack opportunities and knowledge of what can be done.As one says: "I see it as fire extinguishing, then they [patients] come again, then we put out the fire again" (ED secretary, 2).Some of the HCPs in the ED are questioning what can be expected in an acute setting, other than their usual offers: detoxification; monitoring of withdrawal symptoms; and medication.In addition, many refer to a leaflet with contact information to a local alcohol treatment clinic, which is occasionally given to the patients.The presence of the leaflet in the ED, highlights the awareness of a shared space and the implicit potential of the leaflet to change the patient trajectory and lead the patient (the boundary object) in the intended direction of a planned treatment.Thereby, it represents a concrete artefact symbolising the pathway between sectors.
I have just handed out a leaflet with a telephone number on it. I don't know of any other options besides that. (ED nurse, 5) A nurse strongly opposed the wording in these leaflets. The term "abuse" (da. misbrug) is written everywhere, which the nurse believes will be a barrier for patients with alcohol problems: they will never go there, because they will relate it to "drug abuse".
It says Substance Abuse Centre on our leaflets, but when Mr. Smith who drinks too much red wine is admitted, he does not feel that he belongs in a centre like that with all the drug abusers […] It is a lost cause […] what the hell have they been thinking!(ED nurse, 6) Earlier quotes characterised the patients in rather passive roles.Here, there is a clear distinction, and the patient is personified with a name, providing the patient with a more active role and a will of his own.However, most nurses in the ED address their main role as caregivers in the acute phase securing both the triage and fundamental care.The acute condition is the main priority, which maintains the perception of the patient in a passive role.
Well, you can say that I'm supposed to provide the basic nursing and check for withdrawal symptoms and all those things… But we're in an acute situation and they are often in a very bad condition.(ED nurse, 2) If a nurse assesses the patient to be too ill for discharge, the nurses explain that they must have good arguments in line to convince the doctor, which can sometimes be an impossible task.
If I assess that a patient cannot be discharged, then even if the doctor believes that no more can be done, we can still try to keep them for a couple of days […] It is not because we want to 'throw them out the door', but we sometimes have to.If we cannot argue why they should stay hospitalised, they'll be discharged.(ED nurse, 3) The patients' problems are present in multiple versions, and professions consider them differently.In this light, the ED becomes a trading zone for negotiations between professions in everyday practice.Assessments and decisions are dynamic and not always stable; however, due to asymmetrical power relations, the doctor has the final mandate in these matters.From the HCPs' perspective, many of the patients with alcohol problems are repeatedly admitted, which causes a fatigue or powerlessness in the HCPs.Therefore, they argue that it can sometimes seem pointless to try to postpone a discharge, since patients will most likely return anyway.This connects with the previous quote of the revolving door and supports the interpretation of the passive object being sent back and forth.
In opposition to the HCPs in the ED, municipalities are more optimistic about treatment, they work with long-term goals and have faith that the patient eventually will succeed, even after several relapses.Professionally, they would prefer to see patients at a much earlier stage than is usually the case.The municipalities call for earlier detection in hospitals and closer collaboration; this way they can intervene earlier and prevent a patient's personal deterioration.Here, the patient is spoken of in a more active role, if it is possible to arrive at an earlier stage than they usually do.
We welcome those who are not on the ropes, so we can work with the family dynamics, and the children and the employment.(Muni B, 3) However, they are aware that it is a schism to request this early referral from the hospital ED, since they know resources and possibilities are sparse, and foremost the cure of the patient is not happening during the hospitalisation but requires a long-term commitment.
Subtheme: knowledge transfer
This subtheme deals with barriers for crosssectoral communication experienced by the two sectors.Several HCPs from the ED call for more simplified communication pathways and supportive IT solutions to deliver brief notices, instead of long and detailed care schedules (da.plejeforløbsplan), which is the existing tool, although not frequently used.
I know, we can make a care schedule and send it to the municipality in agreement with the patient telling the municipality to visit [name].But I don't think we really do that (…) I think most of them are left undone.(ED nurse, 1) In contrast to the comprehensive care schedules, the amount of information needed from the municipalities to start treatment up again after discharge is actually very limited.Besides acceptance from the patient, all they need is the patient's name, social security number, a brief description of the problem and a medicine status.
We are almost never notified when our patients are discharged.We have no idea.We have no idea what medicine they have been given, and we rarely can see it in the database.If he got 400 mg last time and I give him 100 mg, the treatment fails.The patient cannot tell me what he got."I got some little round pills"they have no idea.And it's such a shame, since we really want to cooperate.(Muni B,3) They explain that this lack of communication around discharge causes inappropriate consequences for the patients and their treatment will likely fail.Hence, municipalities urge hospital EDs to notice them, so they can follow-up on treatment.Some point out that cross-sectoral collaborative work has primarily been based on personal networks and not a formalised collaboration.
A care schedule is very seldom used, even though it is the agreed tool for exchanging information.As mentioned in methods, a boundary object can also be a "thing", and if a care schedule is perceived flexible enough to adapt to both sectors while simultaneously being robust enough to keep a common identity (Star & Griesemer, 1989), then it can be interpreted as such.However, in opposition to the earlier mentioned positive boundary object, the inability to create knowledge exchange would categorise it as an ineffective boundary object (Fox, 2011).
Subtheme: temporal barriers
This subtheme relates to the temporal barriers and availability as experienced by the healthcare professionals.ED HCPs call for increased accessibility to the municipalities, such as expanded opening hours and a 24-h phone, not only a few hours on weekdays.They describe how municipal treatment facilities are not open when patients need them the most, and from an ED perspective these opening hours are experienced as problematic.Consequently, patients will end up in the ED if help is not provided in the municipalities, since they are open around the clock.
Many alcohol units are closed on holidays and weekends when people are most vulnerable and maybe are having a bottle of vodka during Easter or Christmas, then there is no one to help them.I think it's a shame if somebody is sad on Christmas Eve and only the emergency department is there to take care of them.(ED secretary, 9) Seemingly, the two systems work in different and unaligned structures.This unalignment is experienced both from the ED perspective, as shown above, and in municipalities that are challenged by the daily hospital routines: It is a practical question because rounds often take place around noon where they agree to discharge, but our telephone hours end at noon.The patients have lunch [at the hospital] between 12 and 1 pm and then they are ready to leave, but then we are closed.It's incoherent.(Muni B,2) Occasionally, the staff from municipality asks if the ED staff will consider keeping a patient for the weekend, but since the ED has a constant flow of patients, queries like this cannot always be met.
Consequently, these temporal barriers are disruptions in the patient trajectory and there seems to be a lack in the choreography of their work regarding their common boundary object (the patient).Instead, they work in parallels.Further, as presented in the subtheme "acute vs. planned treatment", the perspectives for treatment are different in the two sectors, while EDs work in short-term perspectives and the municipalities have a long-term perspective in their approach to the patient.
Theme 3: the social nurse in a unique position
This theme concerns the function of the social nurse since this particular service played a dominant part in the dataset.The theme is divided into two subthemes: "The mediator" and "A limited resource", describing the HCPs' need for a function like this, but also the potential pitfalls of this function and how it may affect the patient (as the boundary object).
Subtheme: the mediator
The importance of the social nurse function is emphasised in most interviews.The social nurse clearly fulfils a need in the ED providing specialist knowledge and care.This function is mentioned in very positive terms and is believed to be an almost certain guarantor of a good discharge.
They are damn good.They are SO good because they know who to call if problems occur after discharge.And they have such good contact with these patients.And often, they know them already, which is extremely important.(ED nurse, 10) By saying "these patients" and "they know them", a distance from the patient is made.To boost ED HCPs' competences, the social nurse occasionally teaches HCPs about substance use, different treatment facilities, medication, withdrawal symptoms and so on.However, the HCPs still find it difficult to navigate in this field and to gain an overview.One explains: I appreciate the social nurses who can provide patients with the relevant offers.I think it's a jungle, knowing what is relevant and where to find it.They try to teach us, but I still consider it a jungle.(ED nurse, 1) It is often mentioned that the social nurse takes over responsibility for the patient trajectory and handles all multidisciplinary and municipal collaborative work in relation to patients with a known substance use.This contrasts with the intended guiding role.The social nurse takes on a mediating role, being a link between the two sectors and conduct negotiations between many stakeholders on behalf of the patient (the boundary object) serving as a boundary actor, further elaborated in the discussion.The function is described as a huge relief for the department and therefore frequently used.HCPs explain that even though they might know which offers exist, it is better if the social nurse is involved, as she or he is updated with contact numbers and structures in the different municipalities in the hospital's catchment area.The social nurse will know which contacts to call and which strings to pull.
The social nurses are a big help regarding the interdisciplinary work and what to offer the patients after discharge.They are always updated with recent changes and know which offers exist and how to send the patients on their way.I always use her for those things.(ED nurse, 2) Both municipalities have the same perception of the function and praise those trajectories where the social nurse has been involved.They are satisfied with the trajectories when they have been contacted by the social nurse from the hospital in advance of a discharge, but also the other way around: I try to use the social nurses as often as I can.I call in and say: "now Peter is here again".
[Imitating social nurse]: "Great, I will check on him". I think it works fine! (Muni B, 3) However, municipalities draw attention to the fact that their collaborations may only concern the patients with severe substance use and severe social problems. They have no experience in collaborating about patients who "solely" have an alcohol problem.
Subtheme: a limited resource
Even though the social nurse is highly appreciated in both sectors, several participants point to the fact that even though the social nurse tries to see as many patients as possible, she cannot see all patients with alcohol problems. Some suggest that she primarily treats or takes care of the patients with the most severe and obvious problems.
If somebody is really far out, they can talk to a social nurse, and they are in demand! They are very busy. So, we only use them in very, very severe cases. (ED secretary, 5) Therefore, priorities are sometimes made regarding which patients to refer to the social nurse, in consideration of her time and resources. This means that ED HCPs will select those patients most in need of the social nurse's services, usually in cases of a suspicion of social problems in combination with a substance use (which is the core focus of this function), whereas patients who have been admitted for, for example, detoxification, are not necessarily referred to the social nurse.
The social nurses take care of the most vulnerable or homeless, those who have the biggest problems. But not necessarily those who are admitted because they drink and have problems because of that. They end up getting help for 2 days [detoxification] and then they are left to their own devices. (ED nurse, 3) Hence, it is a matter of case-by-case subjective assessment, based on the nurses' clinical judgement and experience, whether the social nurse is involved in the patient trajectory or not. Some patients are expected to have a more active role and be responsible for getting help themselves. The autonomy of the patient is highlighted as an important argument for not always sending a referral. I don't want to paternalise [name] in room 9.3 to talk to the social nurse because he drinks six beers a day and has mild withdrawal symptoms. The social nurse is an offer for those who want it. But I do not want to impose anything on anyone. (ED nurse, 1) According to the ED nurses, the social nurse should only be perceived as an offer, and if the patient does not want to accept this, there will be no moral judgement or paternalism. The patients' right to autonomy is highly valued, both in terms of alcohol intake and the desire for help. On the other hand, insisting on autonomy can easily tip into omission if decisions on whether to contact the social nurse are made on subjective assessments. Further, another reason not to contact the social nurse could be that nurses perceive it as an irreversible passage point (Callon, 1986; Star & Griesemer, 1989): once you have referred to the social nurse, you cannot withdraw. It is documented in the patients' journals, and they are now in a certain category, with the risk of being stigmatised in future encounters in the hospital system.
As shown in the previous subtheme, all participants mentioned the function of the social nurse in very positive terms, but both municipalities highlighted the fragility of this position.Usually, there is one social nurse (sometimes two) covering an entire hospital, which means that if the social nurse has a day off, is on a course or on vacation, the collaborative work is likely to fail.
…and then she may be gone on a Friday, but the patients get discharged anyhow and then collaboration is ruined immediately. If she isn't present just once, then there is no cooperation. It is sad, since they are good when at work. (Muni A, 3) Likewise, it was pointed out that if a phone call is missed or an email is left unread, the risk of adverse events increases, since the function is only linked to one person. In this perspective, the social nurses' competences become an obligatory passage point (Callon, 1986; Star & Griesemer, 1989) for the boundary object. The positioning is central, and to achieve the desired goals of social nurse services, it is a prerequisite that all stakeholders accept the necessity of the passage point and comply with its terms for certain actions to follow.
Discussion
This explorative study presented three themes, describing the collaboration between the two sectors in relation to patients with alcohol problems as inadequate. A lack of opportunities and clarity in the division of responsibilities was experienced by staff in EDs and municipalities. ED HCPs, in particular, were uncertain of the possibilities and structures in the municipal alcohol treatment facilities. Both sectors faced barriers complicating the intersectoral work, such as temporal aspects, professional approaches and organisational structures in acute versus long-term treatment. One of the main findings in this study was that the cross-sectoral collaboration was largely linked to one highly appreciated function: the social nurse. We found that the social nurse had a significant role as a mediator in the intersectoral collaboration in relation to patients with alcohol problems, since this function fulfils a need of the HCPs by providing specialist knowledge and taking over responsibility. However, the results also showed that not all patients with alcohol problems are referred to the social nurse. HCPs assess which patients could benefit most from this type of help, based on the core focus of the function: socially marginalised patients (Dideriksen et al., 2019). With this core focus, the social nurse is entitled to assess and prioritise whether patients (as boundary objects) are within the target group. In the results, we described how the social nurse is positioned as a mediator and a passage point to further services. In addition to this, Cramer et al. describe how "boundary work" and "boundary actors" occur constantly in organisations (Cramer et al., 2018). The boundary term is used to indicate a difference between and within professions, but it can be used as a strategy to connect units, or as a barrier protecting autonomy, status and the control of resources (Cramer et al., 2018; Gieryn, 1983). In other words, these boundaries are negotiable, and the boundary actor acts as a mediator connecting two worlds, but with a strong position to include or exclude patients according to certain patient group categories (Cramer et al., 2018). From this perspective, the social nurse could be described as a "boundary actor", being highly appreciated and bridging the acute, disease-focused and flow-driven environment of an ED with the holistic approach that governs the social nurse's work. An ethnographic study examining the handling of patients with alcohol problems in an ED showed that HCPs tend not to recognise the broad spectrum of alcohol problems and mainly focus on patients with severe substance dependency, often in combination with psychiatric disorders and social problems, since these patients demand extra time and energy from staff with their "inappropriate" behaviour (Sivertsen et al., 2021). Hence, these patients are likely the same as those referred to the social nurse, since the social nurse, as the boundary actor, takes over responsibility for the trajectory of the "hard-to-treat" patients. As shown in the results, a patient who drinks six beers a day and shows mild withdrawal symptoms may not necessarily be referred. This decision may be due to a subjective assessment by the nurse or based on prior experiences of negotiating whether a patient was in the target group of a social nurse. The "minor" alcohol problems of patients not perceived as "hard-to-treat" may fulfil criteria for dependency, but this category of patients may only receive treatment for the cause of their admission, not their alcohol problem. When this happens for
patients with dependency, this will most likely be the case for the less identifiable patients with hazardous and harmful use as well (Sivertsen et al., 2021). If the social nurse is the primary link between hospitals and municipalities in relation to patients with alcohol problems, patients referred from hospital to treatment centres will be skewed towards socially marginalised patients. This was also an observation point from the municipalities, where there was a wish that citizens could enter treatment at an earlier stage of their alcohol misuse. Broadening the target group and incorporating preventive aspects into this function would remediate this. Internationally, social workers and social service providers in the ED fulfil some of the same tasks as the social nurse (Craig & Muskat, 2013; Gehring et al., 2022; Moore et al., 2017). There is a considerable overlap in practice roles such as case manager, counsellor and problem-solver (Moore et al., 2017). However, in Denmark, social nurses and social workers differ in terms of educational background and work under different legislations: the Health Act and the Act of Social Services, respectively. Moreover, the number of social workers in hospitals has decreased drastically over recent decades (Harsløf et al., 2016). In the results, it was suggested that a person from the alcohol treatment facility could visit the hospital on a regular basis and plan with eligible patients to enter treatment. This has been done in a Danish RCT study (the Relay Model), where a therapist from a municipal alcohol treatment centre showed up at four different somatic in-hospital wards, met patients screened to be at risk, offered Brief Intervention and provided information about available treatment options after discharge (Schwarz et al., 2019). With the aim of improving referrals from hospital to municipal alcohol treatment, they found a significantly higher probability of treatment attendance 18 months after discharge in those patients who had received the intervention. However, even though the Relay Model had tried to overcome known provider-level barriers towards SBIRT, the study concluded that the overall number of people attending was relatively small, that the considerable efforts of this intervention did not add up to the outcomes, and recommended rethinking whether general hospitals are suitable for SBIRT (Schwarz et al., 2019). Results like this highlight the importance of separating intervention outcomes from implementation outcomes (Proctor et al., 2011).
Leaflets are another way to inform about possibilities for entering treatment. We found that, besides the social nurse, ED HCPs felt that they lacked opportunities to help patients with alcohol problems. Leaflets were occasionally given to patients, but the wording was experienced as creating barriers, since patients associated the word "abuse" with "drug abuse". Such an experience could prevent a nurse from ever delivering such a leaflet. The terms used in relation to severe alcohol use (or substance use), which are highly stigmatised health conditions, affect the likelihood of people seeking help (Volkow et al., 2021). Hence, if leaflets are not aimed directly at their target group and the wording stops people from entering treatment, HCPs might feel that they have done something when delivering the leaflet, but the effect of it is questionable; at least changing the wording would be a starting point and an "easy picking". However, it is still not known whether patients would enter municipal treatment even if the wording were different and despite the "open door policy", since alcohol treatment is heavily stigmatised and only few patients have the confidence and motivation needed to navigate the system (Andreasson et al., 2013; Gilburt et al., 2015; May et al., 2019). Further, a Cochrane review showed that the effect of printed educational material on changing practices and outcomes is low (Giguere et al., 2020). In our study, HCPs from both sectors agreed that patients should not be responsible for seeking help themselves. The municipalities argued that the ED should be responsible for establishing connections to alcohol treatment facilities, whereas ED HCPs suggested that, ideally, general practitioners should be responsible. In the Consolidated Framework for Collaboration Research, having a shared vision is one of the main constructs (Calancie et al., 2021); therefore, future implementation efforts for cross-sectoral alcohol initiatives should initially focus on strategies to attain a common goal. Further, we found challenges in the existing collaboration due to different approaches to the patients and to knowledge transfer, both in terms of temporal barriers and in the shared electronic medication record (in Danish: Fælles Medicin Kort, FMK), in which municipalities were rarely able to see which medicine was given during a hospitalisation or a potentially updated medication status. In the SBIRT literature, cross-organisational communication and the necessity of integration with the electronic medical health record are highlighted as important facilitators for continuous referral-to-treatment processes (Broyles et al., 2012; Vendetti et al., 2017).
Lack of time is frequently highlighted as a main barrier for implementing new interventions (Gargaritano et al., 2020; Geerligs et al., 2018; Vendetti et al., 2017), but the different dimensions of time and temporality are often not described. We found that certain temporal structures characterised both sectors, and that participants experienced different temporal barriers in relation to this. Reddy et al. described temporality in work in terms of three temporal features: trajectories, rhythms and horizons (Reddy et al., 2006). Temporal trajectories are used to describe temporal logics following a "structured timeline", focusing on the sequence of work activities. An example of a temporal trajectory from this study is when patients are admitted for detoxification and are discharged as soon as they are stable. To be efficient, EDs must comply with requirements for length of stay and the maintenance of patient flow; therefore, they often use the term "fully treated" when the admission diagnosis is treated (Bendix Andersen et al., 2018; Kirk & Nilsen, 2015). Further, it can be argued that patients with alcohol problems may be perceived as "flow-stoppers" by ED staff (Kirk & Nilsen, 2016; Sivertsen et al., 2021). Several findings in this study can be related to temporal rhythms, which characterise work at a collective level and the repeated patterns of work. The HCPs in both sectors use their knowledge of these recurring patterns in the planning of care and treatment activities; for instance, in access to phone services, knowledge of rounds and discharge procedures, or when municipalities try to advocate for a hospitalisation over the weekend, since they are closed for the weekend themselves. A place like an ED has multiple rhythms in relation to people, activities and interactions, which collectively form "a complex temporal fabric" (Reddy et al., 2006). This complexity becomes clear when the temporal activities of the two sectors are not aligned in, for example, opening hours, which has consequences for the patients, who may experience feeling "lost" in a fragmented healthcare system. This leads to the question: Is the organisational structure designed for patients' needs? Future cross-sector interventions aimed at people with alcohol problems should avoid siloing and work towards coordinated treatment. Further, findings should be incorporated in a process of tailoring the intervention and in the selection of implementation strategies (Powell et al., 2015).
Strengths and limitations
A strength of the study is that it presents both the hospital sector and the primary care sector (alcohol treatment facilities in the municipalities) in a joint analysis to highlight attention points in the existing collaborations. Another strength is that no other peer-reviewed studies have, to our knowledge, described the use of the social nurse function from the HCPs' perspectives in relation to everyday practice in an ED and specifically in relation to alcohol problems. Future studies on this function should focus on the referral practices (who, how, when and why), characteristics of the referred patients and potentially rejected patients, and examine the impact on HCPs' feeling of responsibility when the social nurse is involved in the patient trajectory. Even though it is a Danish concept, it can be compared to other "bridge building" functions, such as case managers, social workers, alcohol specialist nurses or alcohol liaison services. Therefore, results are likely transferable to those similar functions.
A limitation was that in the group interviews, managers were present, which may have influenced the power balance in the group, possibly hindering someone from expressing his or her opinion (Kitzinger, 1995). However, the managers were also part of the daily functions in the centres; therefore, they were also regular colleagues. Further, even though the two municipalities in this study varied in size, they had the same structure in terms of having their own setup for alcohol treatment facilities within the municipal organisation. Other municipalities buy services from external private facilities, and it is possible that the results would have been different if those municipalities had been interviewed. Another limitation was that hospital doctors were not interviewed in relation to these cross-sectoral collaborations. We chose not to do so in this study, since they are mainly responsible for sending referrals, while the direct contact and coordination with municipalities is often a task for nurses or secretaries. However, future studies of ED collaborations should also include doctors' perspectives, since part of examining patients is to ask about alcohol and tobacco use and, as such, to deliver a platform for discussing issues related to this. Finally, patients' perspectives were not examined in this study, including their perceptions of the coherence and continuity in the intersectoral work and whether they feel that they receive the needed help. These considerations would be of interest in future studies.
Conclusions
The existing collaboration between an ED and municipal alcohol treatment facilities is characterised by a lack of knowledge of each other's services and professional differences in the approach to the patient. HCPs in EDs describe a lack of possibilities, and both sectors experience collaborations as influenced by temporal structures, which complicate the intersectoral work. The underlying organisational structures that govern their work counteract a shared goal and shared responsibilities, which challenges collaborative work. Results show that besides the highly valued "social nurse", the strength of collaborative networks is low. This position has a unique status in supporting cross-sector collaborations; however, primarily socially marginalised patients with severe alcohol problems are referred to the social nurse. There are no current practices or collaborations aimed at the broad spectrum of alcohol use, besides a leaflet that is delivered occasionally. This means that the present organisational structures, and the way these structures are managed by HCPs, are a co-producing factor in not handling the group of patients with "less severe" alcohol problems.
When they are discharged, I often wonder what are we sending them home to?[…] Who will take care of them?(ED nurse, 7)
Figure 1. Themes and subthemes in the cross-sector collaborations between emergency departments and municipal alcohol treatment centres regarding patients with alcohol problems.
Table 2. Illustrations of the analytic process in qualitative content analysis.
PHYSICOCHEMICAL PROPERTIES AND ANTIMICROBIAL EFFECTS OF ROSELLE COROLLA, ONION PEELS AND PEANUT SKINS ANTHOCYANINS
Anthocyanins would make ideal natural food colourants with additional nutritional benefits; however, stability is a hindering factor. The stability, physicochemical properties and biological activities of anthocyanins extracted from onion peels, peanut skins or roselle corolla were investigated. Crude anthocyanins were extracted using two different solvent systems (distilled water and ethanol acidified with HCl 1.5 N, 85:15, V/V). Roselle corolla pigment extracted with acidified ethanol was the highest in phenolic and anthocyanin contents compared with the water extract. The aqueous extracts of roselle corolla, onion peels and peanut skin showed activities against Gram-positive (Staphylococcus aureus and Bacillus subtilis) and Gram-negative (Pseudomonas aeruginosa and Escherichia coli) bacteria. Aqueous extracts of roselle corolla and peanut skin inhibited mycelial growth of Fusarium oxysporum. About 78% of the aqueous extract pigments of onion peels were retained when heated at 75°C. Stability under different light sources showed a general decline in pigment retention of the samples over the time period for all extracts. However, roselle corolla extracted with acidified ethanol showed more stability under different light treatments compared with the distilled water extract. The extracted pigments were stable against oxidizing agents, whereas retention decreased gradually when treated with cane sugar or salt.
INTRODUCTION
Colour is one of the most important characteristic attributes affecting the consumer's acceptance of food, since it gives the first impression of food quality. Natural colours are extracted from renewable sources such as plant materials, insects, algae, etc., while synthetic colours are manufactured chemically. Nowadays, polyphenols receive considerable attention due to their positive effects on health, such as preventing cardiovascular, inflammatory and neurological diseases (Silva et al., 2007). Many convenience foods such as confectionery products, gelatin desserts, snacks, cake, pudding, ice cream and beverages would be colourless, and would thus appear undesirable, without the inclusion of colourants (Abou-Arab et al., 2011).
Anthocyanins, as a subgroup of polyphenols, have been under investigation in recent years, and the sources of anthocyanins that are widely used in the food industry as natural colourants and as an alternative to synthetic colourants are considerable. In addition to their colouring efficiency, increasing evidence suggests that anthocyanins are not only non-toxic and non-mutagenic, but also have a wide range of therapeutic properties (Lozovskaya et al., 2012). Anthocyanins are among the most important water-soluble plant pigments found in higher plants.
Anthocyanin pigments are a good source of natural food colourants; however, they are known to be very unstable. The factors that most commonly affect their stability are pH, temperature, light and storage. An additional benefit of using anthocyanins as food colourants is the biological role they play, such as antioxidant activity (Amr and Al-Tamimi, 2007). Ensuring the chemical stability of anthocyanins has become a focal point in recent studies, as there is an abundance of potential industrial applications. A major benefit would be to substitute synthetic colourants and dyes with stable anthocyanins (Castaneda-Ovando et al., 2009).
Hibiscus sabdariffa is an annual herbaceous shrub from the Malvaceae family (Mahadevan et al., 2009), and is cultivated in both tropical and sub-tropical regions around the world. The plant is described as being red stemmed with serrated leaves and red corolla. Although H. sabdariffa is termed an under-utilized crop, it is commonly used in households for its traditional medicinal properties and is processed into various edible products such as jam, jelly and tea (Sipahli et al., 2017). H. sabdariffa is known for its various pharmaceutical, nutritional and traditional medicinal properties and is said to be a rich source of anthocyanins (Patel, 2014).
Onion (Allium cepa L.) is one of the oldest and most frequently cultivated food plants, highly valued for its pharmacological properties, such as antioxidant, antimicrobial and antitumor ones, reduction of cancer risk and protection against cardiovascular diseases (Ly et al., 2005). Though it is not specifically considered as a medicinal herb, the onion has shown health-promoting effects based on its secondary metabolites, such as flavonoids, to which the strong antioxidant properties of onion have been attributed (Lachman et al., 2003; Nuutila et al., 2003).
By-products of the peanut industry which include peanut plant leaves, roots, hulls and skins have also been identified as rich sources of phytochemicals, suggesting that the bioactivity found in fruits and vegetables could possibly be present, although currently these plant parts have little economic value (Francisco and Resurreccion, 2008). Of these materials, peanut skins are most commonly used as low cost fillers in animal feed but are known to have an astringent taste and anti-nutrient properties (Hill, 2002). The antioxidant activity of peanut skins has been reported (Ballard et al., 2009;Nepote et al., 2005), but there are no reports in the scientific literature regarding the relationship between antioxidants, their activity, and anti-inflammatory properties of peanut skins.
Anthocyanins would make ideal natural food colourants with additional nutritional benefits; however, stability is a hindering factor (Sipahli et al., 2017). In view of this stability issue, the aim of the present study was to investigate the physicochemical properties and the antibacterial and antifungal effects of the major anthocyanins extracted from roselle corolla, onion outer peels or peanut skins.
Plant Materials
Roselle (Hibiscus sabdariffa L.), peanut (Arachis hypogaea L.) and onion (Allium cepa L.) were used as sources of natural anthocyanins. The dried corolla of roselle was purchased from a local market at Zagazig, Egypt. Peanut skins were removed from peanut seeds. Dry outer peels of red onions were used for analysis; they were also obtained from a local market at Zagazig, Egypt.
Extraction of pigments
Acidified ethanol (ethanol : 1.5 N HCl, 85:15, V/V) and distilled water were used as solvents for the extraction of pigments from roselle corolla, dry outer onion peels and peanut skins. Extracted pigments were obtained according to the procedure described by Pouget et al. (1990). Ten grams of each plant material powder were immersed in 200 ml of each tested solvent and kept at 4°C overnight. The mixture was filtered through filter paper (Whatman No. 1), and the filtrates were collected and lyophilized. The yield of the lyophilized extracts on a dry weight basis was calculated from W1, the weight of the extract after evaporation of the solvent, and W2, the weight of the residue.
Anthocyanin determination
Total anthocyanin contents of roselle, onion peels or peanut skins were estimated according to the protocol described by Du and Francis (1973), where a known volume of the filtered extract was diluted to 100 ml using the extracting solvent. The colour intensity was measured at 520 and 535 nm for water and acidified ethanol, respectively, using a spectrophotometer (JENWAY 6405 UV/VIS, England). The total anthocyanin content, expressed as cyanidin-3-glucoside, was calculated using the following equation: Total anthocyanins (mg/100 g) = (A × Df) / (Ws × 5.99) × 100, where A is the absorbance, Df the dilution factor and Ws the sample weight.
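For readers who want to script this calculation, a minimal sketch is given below. It simply evaluates the formula quoted above; the function name, the example numbers and the unit handling are illustrative assumptions and not part of the original protocol.

```python
def total_anthocyanins_mg_per_100g(absorbance, dilution_factor, sample_weight_g):
    # Total anthocyanins referred to cyanidin-3-glucoside, following
    # Total anthocyanins (mg/100 g) = (A x Df) / (Ws x 5.99) x 100
    return (absorbance * dilution_factor) / (sample_weight_g * 5.99) * 100

# Illustrative numbers only: absorbance 0.45 at lambda_max, 10-fold dilution, 10 g sample
print(total_anthocyanins_mg_per_100g(0.45, 10, 10))  # ~7.5 mg/100 g
```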
Total soluble solids
The total soluble solids (TSS) of samples were estimated according to (Horwitz and Latimer, 2000).
Total phenolic content (TPC) determination
Total phenolic content was determined using the Folin-Ciocalteu assay (Kähkönen et al., 1999). Samples (300 µl) were transferred into test tubes, followed by 1.5 ml of Folin-Ciocalteu's reagent (10-fold dilution) and 1.2 ml of sodium carbonate (7.5% W/V). The tubes were allowed to stand for 30 min and the absorbance was measured at 765 nm. Total phenolics were expressed as gallic acid equivalents in mg per 100 g of dry material. The calibration equation for gallic acid was Y = 0.0009X + 0.214 (R² = 0.9679), where Y is the absorbance, X is the concentration of gallic acid in µg/ml and R² is the correlation coefficient.
Total flavonoids content (TFC) determination
Total flavonoid content was measured according to the method of Ordonez et al. (2006) with some modification. A 2 ml aliquot of a 2 g/100 ml AlCl3 ethanol solution was added to 500 µl of the extract (1000 µg/ml). After 60 min, the absorbance at 420 nm was recorded. Total flavonoid content, expressed as quercetin equivalents (QE), was calculated based on the calibration curve Y = 0.0012X + 0.008 (R² = 0.944), where X is the concentration (µg QE), Y is the absorbance, and R² is the correlation coefficient.
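Assuming the two calibration curves above are used to convert measured absorbances back into concentrations, a small helper could look as follows; the function names and example readings are hypothetical, and the curves are only valid within their calibration ranges.

```python
def gallic_acid_equivalent(absorbance):
    # Invert the TPC calibration curve Y = 0.0009*X + 0.214 -> X in ug GAE/ml
    return (absorbance - 0.214) / 0.0009

def quercetin_equivalent(absorbance):
    # Invert the TFC calibration curve Y = 0.0012*X + 0.008 -> X in ug QE
    return (absorbance - 0.008) / 0.0012

print(gallic_acid_equivalent(0.520))  # ~340 ug GAE/ml (illustrative reading)
print(quercetin_equivalent(0.260))    # ~210 ug QE (illustrative reading)
```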
Antimicrobial activity evaluation
Microbial and fungal strains were obtained from the Plant Department, Faculty of Science, Zagazig University, Egypt. Extracts in acidified ethanol and distilled water at different concentrations were evaluated individually as antibacterial agents against two Gram-positive (Staphylococcus aureus and Bacillus subtilis) and two Gram-negative bacteria (Pseudomonas aeruginosa and Escherichia coli) by the conventional well-diffusion assay (Nanda and Saravanan, 2009). The pure cultures of bacterial strains were sub-cultured in nutrient broth at 37°C on a rotary shaker at 200 rpm. The exponential-phase cultures of these strains were adjusted to a concentration of 1 × 10⁹ CFU ml⁻¹. Each strain was spread uniformly onto individual plates using sterile cotton swabs. Wells of 6 mm diameter were made on Müller Hinton Agar (MHA) plates using a gel-puncturing tool. Aliquots (30 µl) of the extract solution (100, 200, 500, 1000 and 2000 µg/ml) were transferred into each well. After incubation at 37°C for 24 hr, the diameter of the inhibition zone was measured using a transparent ruler. The effect of the same extracts on the mycelial growth of Fusarium oxysporum was also evaluated at different concentrations (100, 200, 500, 1000 and 2000 µg/ml) using the poisoned food technique (Yahyazadeh et al., 2009). A 6 mm mycelial agar plug from a 7-day-old culture of Fusarium oxysporum was placed at the center of each potato dextrose agar (PDA) plate and calculated volumes of the tested substances were added to achieve the previously mentioned concentrations. Approximately 0.05% (V/V) Tween-80 was then added to the media. Petri dishes were sealed with parafilm and incubated for 7 days at 25°C. The diameter (mm) of the colony zone was measured with a caliper.
The extent of growth reduction (%) was calculated as follows: Growth reduction (%) = (CLG − TLG) / CLG × 100, where CLG is the linear growth of the control and TLG is the linear growth of the treatment.
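A one-line sketch of this calculation is shown below; the colony diameters used in the example are made up for illustration.

```python
def growth_reduction_percent(control_linear_growth, treatment_linear_growth):
    # Growth reduction (%) = (CLG - TLG) / CLG x 100
    return (control_linear_growth - treatment_linear_growth) / control_linear_growth * 100

print(growth_reduction_percent(62.0, 38.5))  # ~37.9 % (illustrative diameters in mm)
```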
Pigment stability
The stability of anthocyanins extracted either with distilled water or acidified ethanol from onion peels, peanut skins and roselle corolla was investigated according to Tan et al. (2011) and Sipahli et al. (2017).
Pigment stability under heat
The heat stability of a 0.005 g/100 ml pigment solution was measured after treatment in a thermostatically controlled bath at 25, 50 and 75°C for different periods (0.5, 1 and 2 hr). The samples were held at each temperature for the specified time and then cooled immediately in an ice bath. Subsequently, the absorbance of the solutions was recorded at λmax. The percentage retention of anthocyanins was calculated as follows: Pigment retention (%) = (absorbance after heating / absorbance before heating) × 100
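A corresponding sketch for evaluating the heat-stability measurements is shown below; the absorbance readings are hypothetical and only serve to show how the retention values would be computed for the three heating periods.

```python
import numpy as np

def pigment_retention_percent(abs_before, abs_after):
    # Pigment retention (%) = (absorbance after heating / absorbance before heating) x 100
    return np.asarray(abs_after) / abs_before * 100

abs_before = 0.62                # reading at lambda_max before heating (hypothetical)
abs_after = [0.58, 0.55, 0.48]   # readings after 0.5, 1 and 2 h at one temperature
print(pigment_retention_percent(abs_before, abs_after))  # [93.5 88.7 77.4] %
```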
Pigment stability under light
The 0.005 g/100 ml pigment solutions were held under natural light, in a dark place, or under ultraviolet light at a distance of 30 cm for specific times (1–4 days), and the absorbance was determined at λmax.
Pigment stability under chemical stress
The effect of KMnO4 or H2O2 on the stability of the pigment was measured. Ten ml of a 5 mg/100 ml pigment solution and 50 ml of different concentrations of KMnO4 (20–100 mg/ml) or H2O2 (10, 20 and 30%) were mixed, and the absorbance of the mixture was then determined at λmax. The effects of sugar or salt on the stability of the pigment were also measured. Solutions of cane sugar or salt (NaCl) were prepared at 0.5 g/100 ml and then mixed with 10 ml of a 5 mg/100 ml pigment solution. The absorbance of the solutions at λmax was measured every 20 min.
Statistical Analysis
All data were subjected to ANOVA using the MSTAT-C statistical package according to (Gad, 2001). Different letters in the tabulated data or above the bars in the figured data indicate significant differences by Fisher's Protected LSD test at (P < 0.05).
Yield of Extract
Two solvents were compared in order to select the most effective one for extracting the pigments of roselle corolla, onion peels or peanut skin. The yields of anthocyanin pigments recovered from roselle, onion peels or peanut skin with the two solvents (distilled water and acidified ethanol) are shown in Table 1. In general, acidified ethanol was more effective than distilled water in the case of onion peels or peanut skin. The highest yield of anthocyanin was observed for roselle corolla extracted by distilled water (27.43 mg/100 g). These results agree with those reported by Mattuk (1998) and Sipahli et al. (2017).
Total phenolic and flavonoid contents in extracted residue
Total phenolic contents (TPC) of all extracts were determined by the Folin-Ciocalteu method and found to vary (Fig. 1). The highest total phenolic content was observed in roselle corolla pigment extracted by acidified ethanol (88.88 mg/ml) compared with the distilled water extract. The present results are in keeping with those obtained by Abou-Arab et al. (2011), who found that the total phenolic content of H. sabdariffa extracted by HCl-acidified ethanol was the highest compared with other solvents, whereas Sindi et al. (2014) reported that a lower phenolic content was observed for H. sabdariffa anthocyanins extracted by methanol. Dry outer peels of onions showed the lowest phenolic content (4.44 mg/ml) among all acidified ethanol extracts. Anthocyanin pigment extracted by distilled water showed a lower phenolic content for all tested plant materials compared with the acidified ethanol extract. The same trend was observed for the flavonoid content, where the acidified ethanol extract showed the highest amount of flavonoids compared with the distilled water extract.
Total anthocyanin and total soluble solid contents
Anthocyanin pigment and total soluble solid contents recovered with the two different solvents are shown in Fig. 2. The highest amounts of anthocyanins were observed in roselle with acidified ethanol and distilled water (2.078 and 3.877 mg/100 g, respectively), followed by the aqueous extract of onion peel (0.635 mg/100 g). Peanut skin showed the lowest amount of anthocyanins (0.208 mg/100 g) with distilled water. The same trend was observed for the total soluble solids.
Antimicrobial Activity
The antibacterial activity of roselle, onion peel or peanut skin extracted by acidified ethanol was examined at different concentrations (100–2000 µg/ml) and the results are listed in Table 2. The minimum inhibitory concentration (MIC) of the roselle acidified ethanol extract against the four studied bacteria was 200 µg/disc, whereas it was 500 µg/disc for the onion peel acidified ethanol extract. The MICs of the peanut skin acidified ethanol extract against Gram-positive and Gram-negative bacteria were 200 µg/disc and 500 µg/disc, respectively.
Table 2. Inhibition zone diameters (mm) induced in Gram-positive and Gram-negative bacteria using the agar well diffusion assay under the influence of different concentrations (100–2000 µg/ml) of acidified ethanol extracts from roselle, onion peels and peanut skin
Stability of pigments under heat stress
The aqueous and acidified ethanol extracts from roselle, onion peel or peanut skin were heated at 25, 75 and 100°C for 0.5, 1 and 2 hr, and the pigment retention (PR) was measured spectrophotometrically (Fig. 4). There was no significant (p < 0.05) difference in pigment retention of the roselle distilled water extract, while the acidified ethanol extract retained the most pigment at the 75 and 100°C heat treatments (Fig. 4). Comparatively, there was a significant (p < 0.05) difference in retention between the onion peel distilled water and acidified ethanol extracts. Pigment retention of the onion peel distilled water extract showed slight degradation over time at 75 and 100°C compared with 25°C (Fig. 4), while the acidified ethanol extract showed a significant decline, especially at 75 and 100°C after 2 hr (Fig. 4), indicating that most of the pigments were degraded at 100°C (Amr and Al-Tamimi, 2007). The same results were observed for the pigment isolated from peanut skin with distilled water and acidified ethanol (Fig. 4). The rate of anthocyanin degradation upon heating increases because reacting molecules come closer together when the extract is concentrated (Kırca et al., 2007). Colour changes were also observed upon heating over time; in all of the samples, the colour decreased in intensity. Heat stability could possibly be improved by increasing the anthocyanin concentration, removal of oxygen and the inactivation of enzymes (Hellström et al., 2013).
Stability of pigments under light stress
Stability of anthocyanins under light is a very important aspect because it informs storage conditions (Sipahli et al., 2017). Under different light treatments, the results showed a general decline in pigment retention over the time period for all extracts (Fig. 5). However, the roselle acidified ethanol extract showed more stability under the different light treatments compared with its distilled water extract (Fig. 5). The peanut skin acidified ethanol extract also showed increased pigment stability compared with its distilled water extract (Fig. 5). In contrast, the onion skin distilled water extract showed significantly higher pigment stability compared with the acidified ethanol extract under the different light conditions over the time period (Fig. 5). Amr and Al-Tamimi (2007) found that their samples retained 84% of the pigment when subjected to dark conditions for 10 days.
Effect of KMnO4 or H2O2 stress on pigment stability
The effect of KMnO4 at different concentrations (20–100 mg/ml) on the stability of pigments extracted from roselle, onion peel and peanut skin with distilled water or acidified ethanol was examined and the results are shown in Fig. 6. Pigment retention gradually increased with increasing KMnO4 concentrations (20–100 mg/ml) for all tested samples. The same results were observed for all tested samples when treated with H2O2 (Fig. 7) at different concentrations (10, 20 and 30%). It can be concluded that all extracted pigments are stable to KMnO4 and H2O2.
Effect of sugar or salt (NaCl) stress on pigment stability
The effects of cane sugar (0.5%) and salt (0.5%) on the stability of the pigment stored for different periods (20–80 min) were investigated and the results are shown in Figures 8 and 9. Pigment retention was gradually reduced with increasing storage time (20–80 min) for all tested samples when treated with cane sugar (0.5%). The same results were observed for all tested samples when treated with salt (0.5%).
5D-Tracking of a nanorod in a focused laser beam - a theoretical concept
Back-focal plane (BFP) interferometry is a very fast and precise method to track the 3D position of a sphere within a focused laser beam using a simple quadrant photo diode (QPD). Here we present a concept of how to track and recover the 5D state of a cylindrical nanorod (3D position and 2 tilt angles) in a laser focus by analyzing the interference of unscattered light and light scattered at the cylinder. The analytical theoretical approach is based on Rayleigh-Gans scattering together with a local field approximation for an infinitely thin cylinder. The approximated BFP intensities compare well with those from a more rigorous numerical approach. It turns out that a displacement of the cylinder results in a modulation of the BFP intensity pattern, whereas a tilt of the cylinder results in a shift of this pattern. We therefore propose the concept of a local QPD in the BFP of a detection lens, where the QPD center is shifted by the angular coordinates of the cylinder tilt. ©2014 Optical Society of America OCIS codes: (120.0120) Instrumentation, measurement, and metrology; (140.7010) Laser trapping; (260.3160) Interference; (290.0290) Scattering; (070.0070) Fourier optics and signal processing. References and links 1. A. P. Bartko and R. M. Dickson, “Imaging three-dimensional single molecule orientations,” J. Phys. Chem. B 103(51), 11237–11241 (1999). 2. K. I. Mortensen, L. S. Churchman, J. A. Spudich, and H. Flyvbjerg, “Optimized localization analysis for singlemolecule tracking and super-resolution microscopy,” Nat. Methods 7(5), 377–381 (2010). 3. S. Stallinga and B. Rieger, “Position and orientation estimation of fixed dipole emitters using an effective Hermite point spread function model,” Opt. Express 20(6), 5896–5921 (2012). 4. M. Böhmer and J. Enderlein, “Orientation imaging of single molecules by wide-field epifluorescence microscopy,” J. Opt. Soc. Am. B 20(3), 554–559 (2003). 5. P. J. Pauzauskie, A. Radenovic, E. Trepagnier, H. Shroff, P. D. Yang, and J. Liphardt, “Optical trapping and integration of semiconductor nanowire assemblies in water,” Nat. Mater. 5(2), 97–101 (2006). 6. M. E. J. Friese, T. A. Nieminen, N. R. Heckenberg, and H. Rubinsztein-Dunlop, “Optical alignment and spinning of laser-trapped microscopic particles,” Nature 394(6691), 348–350 (1998). 7. E. L. Florin, J. K. H. Horber, and E. H. K. Stelzer, “High-resolution axial and lateral position sensing using twophoton excitation of fluorophores by a continuous-wave Nd alpha YAG laser,” Appl. Phys. Lett. 69(4), 446–448 (1996). 8. P. C. Seitz, E. H. K. Stelzer, and A. Rohrbach, “Interferometric tracking of optically trapped probes behind structured surfaces: a phase correction method,” Appl. Opt. 45(28), 7309–7315 (2006). 9. Y. Nakayama, P. J. Pauzauskie, A. Radenovic, R. M. Onorato, R. J. Saykally, J. Liphardt, and P. Yang, “Tunable nanowire nonlinear optical probe,” Nature 447(7148), 1098–1101 (2007). 10. D. B. Phillips, J. A. Grieve, S. N. Olof, S. J. Kocher, R. Bowman, M. J. Padgett, M. J. Miles, and D. M. Carberry, “Surface imaging using holographic optical tweezers,” Nanotechnology 22(28), 285503 (2011). 11. A. A. M. Bui, A. B. Stilgoe, T. A. Nieminen, and H. Rubinsztein-Dunlop, “Calibration of nonspherical particles in optical tweezers using only position measurement,” Opt. Lett. 38(8), 1244–1246 (2013). 12. S. J. Parkin, G. Knöner, T. A. Nieminen, N. R. Heckenberg, and H. Rubinsztein-Dunlop, “Picoliter viscometry using optically rotated particles,” Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 
76(4), 041507 (2007). 13. D. G. Grier, “A revolution in optical manipulation,” Nature 424(6950), 810–816 (2003). 14. M. Speidel, L. Friedrich, and A. Rohrbach, “Interferometric 3D tracking of several particles in a scanning laser focus,” Opt. Express 17(2), 1003–1015 (2009). 15. D. Ruh, B. Tränkle, and A. Rohrbach, “Fast parallel interferometric 3D tracking of numerous optically trapped particles and their hydrodynamic interaction,” Opt. Express 19(22), 21627–21642 (2011). 16. K. Dholakia and T. Cizmar, “Shaping the future of manipulation,” Nat. Photonics 5(6), 335–342 (2011). 17. S. H. Simpson and S. Hanna, “Optical trapping of spheroidal particles in Gaussian beams,” J. Opt. Soc. Am. A 24(2), 430–443 (2007). 18. F. Borghese, P. Denti, R. Saija, M. A. Iati, and O. M. Marago, “Radiation torque and force on optically trapped linear nanostructures,” Phys. Rev. Lett. 100, 163903 (2008). 19. P. B. Bareil and Y. Sheng, “Angular and position stability of a nanorod trapped in an optical tweezers,” Opt. Express 18(25), 26388–26398 (2010). 20. S. H. Simpson and S. Hanna, “First-order nonconservative motion of optically trapped nonspherical particles,” Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 82(3), 031141 (2010). 21. Y. Cao, A. B. Stilgoe, L. Chen, T. A. Nieminen, and H. Rubinsztein-Dunlop, “Equilibrium orientations and positions of non-spherical particles in optical traps,” Opt. Express 20(12), 12987–12996 (2012). 22. A. Irrera, P. Artoni, R. Saija, P. G. Gucciardi, M. A. Iatì, F. Borghese, P. Denti, F. Iacona, F. Priolo, and O. M. Maragò, “Size-scaling in optical trapping of silicon nanowires,” Nano Lett. 11(11), 4879–4884 (2011). 23. P. J. Reece, W. J. Toe, F. Wang, S. Paiman, Q. Gao, H. H. Tan, and C. Jagadish, “Characterization of semiconductor nanowires using optical tweezers,” Nano Lett. 11(6), 2375–2381 (2011). 24. O. M. Maragò, P. H. Jones, F. Bonaccorso, V. Scardaci, P. G. Gucciardi, A. G. Rozhin, and A. C. Ferrari, “Femtonewton force sensing with optically trapped nanotubes,” Nano Lett. 8(10), 3211–3216 (2008). 25. L. Dixon, F. C. Cheong, and D. G. Grier, “Holographic deconvolution microscopy for high-resolution particle tracking,” Opt. Express 19(17), 16410–16417 (2011). 26. A. Pralle, M. Prummer, E. L. Florin, E. H. K. Stelzer, and J. K. H. Hörber, “Three-dimensional high-resolution particle tracking for optical tweezers by forward scattered light,” Microsc. Res. Tech. 44(5), 378–386 (1999). 27. A. Rohrbach, C. Tischer, D. Neumayer, E. L. Florin, and E. H. K. Stelzer, “Trapping and tracking a local probe with a photonic force microscope,” Rev. Sci. Instrum. 75(6), 2197–2210 (2004). 28. G. Volpe, G. Kozyreff, and D. Petrov, “Backscattering position detection for photonic force microscopy,” J. Appl. Phys. 102(8), 084701 (2007). 29. R. Huang, I. Chavez, K. M. Taute, B. Lukic, S. Jeney, M. G. Raizen, and E.-L. Florin, “Direct observation of the full transition from ballistic to diffusive Brownian motion in a liquid,” Nat. Phys. 7(7), 576–580 (2011). 30. L. Friedrich and A. Rohrbach, “Improved interferometric tracking of trapped particles using two frequencydetuned beams,” Opt. Lett. 35(11), 1920–1922 (2010). 31. H. Kress, E. H. K. Stelzer, and A. Rohrbach, “Tilt angle dependent three-dimensional-position detection of a trapped cylindrical particle in a focused laser beam,” Appl. Phys. Lett. 84(21), 4271–4273 (2004). 32. L. Friedrich and A. Rohrbach, “Tuning the detection sensitivity: a model for axial backfocal plane interferometric tracking,” Opt. Lett. 37(11), 2109–2111 (2012). 33. 
A. Rohrbach and E. H. K. Stelzer, “Optical trapping of dielectric particles in arbitrary fields,” J. Opt. Soc. Am. A 18(4), 839–853 (2001). 34. M. M. Tirado, C. L. Martinez, and J. G. Delatorre, “Comparison of theories for the translational and rotational diffusion coefficients of rod-like macromolecules. Applications to short DNA fragments,” J. Chem. Phys. 81(4), 2047–2052 (1984). 35. A. Rohrbach, H. Kress, and E. H. K. Stelzer, “Three-dimensional tracking of small spheres in focused laser beams: influence of the detection angular aperture,” Opt. Lett. 28(6), 411–413 (2003). 36. M. Pelton, M. Z. Liu, H. Y. Kim, G. Smith, P. Guyot-Sionnest, and N. F. Scherer, “Optical trapping and alignment of single gold nanorods by using plasmon resonances,” Opt. Lett. 31(13), 2075–2077 (2006). 37. C. Selhuber-Unkel, I. Zins, O. Schubert, C. Sönnichsen, and L. B. Oddershede, “Quantitative optical trapping of single gold nanorods,” Nano Lett. 8(9), 2998–3003 (2008).
Introduction
In recent years, optical tracking of nanorods and dipole emitters has attracted considerable interest in various disciplines. On the one hand, position and orientation tracking of dipolar light emitters such as fluorophores has led to significant progress in localization microscopy techniques (e.g. STORM, PALM), enabling super-resolution optical imaging in three dimensions [1][2][3] or in biophysical single-molecule experiments [4]. On the other hand, nano-sized cylindrical rods can serve as flexible building blocks in nano-technology because of various optical and electrical properties, which can be controlled by their bulk material, size and environment [5]. Furthermore, nanorods show strong potential as probes for photonic force microscopy to measure local hydrodynamics, to scan surfaces [6][7][8][9][10][11] or to determine visco-elastic environments [12], especially in the bio-sciences.
The most promising tool to manipulate these nanorods in five dimensions (3 directions of displacement, 2 orientations) is optical tweezers, which can easily be moved in 3D space, can be multiplexed in space and time [13][14][15] or can be reshaped by computer holograms [16].
Nanorods are advantageous for optical trapping because of their typically upright orientation due to an increased volume overlap with the axially extended laser focus. This leads to an increased overall polarizability and increased optical forces relative to spherical probes of comparable volume [17][18][19][20][21]. The ability to measure changes in displacement and orientation, either due to Brownian motion or due to external forces and torques, makes optically trapped nanorods a multi-modal and very sensitive sensor for the bio-nano-sciences [22]. This has been achieved recently with video microscopy, but the slow frame acquisition rates often make it impossible to measure microsecond position changes and millisecond relaxation times, as relevant for many applications. Optically trapped nanorods have been tracked in their 2D position with quadrant photo diodes (QPD) [23] or with additional separation of the 3D position and 2D orientation fluctuation relaxations [24].
However, the simultaneous tracking of the 3D position and of the orientation of a nanorod without post-processing of camera images has not been achieved yet, neither with coherent nor with incoherent imaging [25].
The fastest and most precise 3D tracking technique is back focal plane (BFP) interferometry [26][27][28]. Although the tracking range is limited to the extent of a laser focus, tracking rates of more than 1 MHz [29] and precisions of 1–5 nm [27] can be achieved.
In this study we demonstrate theoretically how to achieve 5D tracking of a cylindrical probe in a highly focused laser beam by using back-focal plane interferometry. We show both numerically and analytically that the 3D position and 2D orientation of the nanorod can be determined over a sufficiently large range of displacements and orientations, approximately independently of the other dimensions. Due to a simple mapping scheme, hardly any post-processing is required, enabling online monitoring of the particle fluctuations.
Back focal plane interferometric tracking
The concept of back focal plane (BFP) interferometry to track the 3D position of a sphere is extended to track the 5D state of a cylindrical particle in a focused laser beam. This method exploits the interference of the light scattered at the particle and the unscattered light, which is captured by a detection lens DL (see Fig. 1). A sensor in the BFP records the interference intensity from the scattered and unscattered electric field.
Spherical particles
The intensity distribution I(k_x, k_y, b) in the BFP results from the interference of the unscattered and the scattered field, I(k_x, k_y, b) = |Ẽ_i|² + |Ẽ_s|² + 2|Ẽ_i||Ẽ_s| cos(ΔΦ(k, b)) (1), where the position-dependent difference ΔΦ(k, b) of the phase of the incident and scattered field can be separated for small displacements b into 3 nearly orthogonal phases ΔΦ_j(b_j), which only depend on sphere displacements in direction b_j (with j = x, y, z): ΔΦ(k, b) ≈ ΔΦ_x(b_x) + ΔΦ_y(b_y) + ΔΦ_z(b_z). A small displacement b_j is a shift of not more than about half the FWHM extent of the focus in the specific direction j = x, y, z.
Typically, two quadrant photo diodes (QPD) are used [30] for the 3D tracking of a spherical particle. QPD #1 is completely illuminated by I(k_x, k_y, b). The three position signals S_j(b) are extracted by integrating the intensity over a certain area, which is determined by a filter function H_j(k_x, k_y): S_j(b) = ∫∫ I(k_x, k_y, b) H_j(k_x, k_y) dk_x dk_y. The filter H_x(k_x, k_y) generates the difference of the signals of the upper two and the lower two QPD quadrants, whereas H_y(k_x, k_y) generates the difference of the signals of the left and right quadrants. H_z(k_x, k_y) simply generates the sum of all four quadrants to provide the axial position signal S_z.
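To make the filter functions concrete, the following sketch computes the three signals from a sampled BFP intensity on a (k_x, k_y) grid. The array names and the discretisation are assumptions for illustration; the quadrant assignment follows the convention described above, and the signs are only defined up to the calibration factors g_j.

```python
import numpy as np

def qpd_signals(I_bfp, kx, ky):
    """Quadrant-photo-diode-like position signals from a sampled BFP intensity
    I_bfp of shape (len(ky), len(kx)). H_x: upper minus lower halves,
    H_y: left minus right halves, H_z: sum of all four quadrants."""
    KX, KY = np.meshgrid(kx, ky)
    Sx = I_bfp[KY > 0].sum() - I_bfp[KY < 0].sum()   # difference of upper and lower halves
    Sy = I_bfp[KX < 0].sum() - I_bfp[KX > 0].sum()   # difference of left and right halves
    Sz = I_bfp.sum()                                 # sum signal for the axial position
    return Sx, Sy, Sz
```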
For small displacements we assume the position signals to be linear and mutually orthogonal, S_j(b) ≈ g_j b_j + S_0j. The calibration factor g_j is the detector sensitivity in the direction j = x, y, z. The intensity offset S_0j is typically zero for the lateral directions and can be subtracted in the z-direction. Orthogonality means that a position signal for the displacement in one direction is independent of the displacement in all other directions, such that ∂S_j/∂b_k ∝ δ_jk. The off-diagonal entries of the sensitivity matrix ĝ are negligibly small for small particle displacements.
Cylindrical particles
For non-spherical particles, such as cylinders or ellipsoids, their orientation is also of interest. This requires two further orientation signals, which are more complicated to extract from the interference pattern I(k_x, k_y, b), since the phase changes due to particle re-orientations and particle displacements are coupled. For a cylinder whose long axis is tilted, for example, the position signals then lose their property of being linear and orthogonal [31].
As indicated in Fig. 1, in total five signals S_x(b), S_y(b), S_z(b), S_θ(b), S_φ(b) are required, according to the spatial state of a cylinder defined by its state vector b (generalized coordinate vector), which is composed of a vector of translation b_t and a vector of rotation b_r. We define the center of the focus as the origin of the Cartesian coordinate system. The angle between the optical axis and the cylinder axis is the polar angle b_θ. The azimuthal angle is b_φ. The rotation angle b_ψ about the cylinder axis cannot be detected because of the cylinder's intrinsic symmetry. We define b = (b_t; b_r) = (b_x, b_y, b_z; b_θ, b_φ).
Rayleigh-Gans Theory
A useful approach to calculate the fields scattered at particles smaller than or equal to the wavelength is the Rayleigh-Gans theory, also known as the Born approximation. This approach assumes a single change of the k-vector of each incident plane wave (component), corresponding to the approximation that the field inside the particle does not change its angular spectrum, but only its amplitude. This amplitude is controlled by the polarizability α, which scales with the volume V of the particle. The Rayleigh-Gans theory requires that the maximum phase shift of the incident field induced by a particle of length L and of refractive index n_s relative to the surrounding medium with index n_m is small, i.e. k L (n_s/n_m − 1) << 1. We start with the inhomogeneous Helmholtz equation for the electric field, (∇² + k²) E(r) = −k² α s(r) E(r) (7), which is characterized by a shape function s(r) describing the spatial extent of the scatterer with refractive index n_s, such that s(r) = 1/V inside and s(r) = 0 outside the scatterer. In general, however, the polarizability α is a tensor. To simplify the math in this study and to better illustrate the ideas of our strategy, we use the scalar approximation of the electric fields and the polarizability. The total scalar field E(r) that solves Eq. (7) can be separated into an incident and a scattered field, E(r) = E_i(r) + E_s(r), with the scattered field given by the Fredholm integral E_s(r) ≈ α k² ∫ G(r − r′) s(r′) E_i(r′) d³r′.
Here, we have applied the Rayleigh-Gans approximation by replacing the total field E(r) by the incident field E_i(r) in the Fredholm integral. Essentially, the scattered field is a superposition of spherical waves G(r) driven with the local amplitude E_i(r) at every position within the volume of the scatterer. The scalar Green's function G(r) is a solution of the homogeneous Helmholtz equation. Using the convolution symbol (*), the approximated scattered field in the focal plane (FP) can be written as E_s(r, b) ≈ α k² G(r) * [s(r, b) E_i(r)]. For a particle displaced and reoriented by the vector b, the shape function depends on two vector variables, s(r, b).
Scatter spectra in the Fourier domain
Since our tracking scheme is based on BFP detection, we take the 3D Fourier transform of the scattered field, Ẽ_s(k, b) = α k² G̃(k) · FT[s(r, b) E_i(r)](k), which simplifies in the case of an incident plane wave E_i(r) = E_0i exp(i k_i·r) to Ẽ_s(k, b) ∝ α k² E_0i G̃(k) s̃(k − k_i, b). The form factor s̃(k, b) is the Fourier transform of the shape function, which is s(r, b) = 1/V inside the particle and s(r, b) = 0 outside. For a non-tilted cylinder of length L, of diameter D and of volume V = L π (D/2)², it is s̃_0(k) = sinc(k_z L/2) · 2 J_1(k_⊥ D/2)/(k_⊥ D/2), with k_⊥² = k_x² + k_y², where J_1 is the 1st-order Bessel function and sinc(x) = sin(x)/x. If the cylinder is translated by b_t, s(r − b_t), we find the form factor modulated by a phase factor exp(−i k·b_t); the form factor for the general position state b can be expressed as s̃(k, b) = s̃_0(R'[b_r]·k) · exp(−i k·b_t). It is advantageous that the operations for tilting and translating can be separated into two factors. The first term s̃_0(R'[b_r]·k) describes the rotation of the scatterer by b_r and is real for symmetric scatterers such as cylinders. The second is a pure phase modulation and describes the scatterer's translation b_t from the center of the focus. The rotation matrices used are the standard rotations by the polar angle b_θ and the azimuthal angle b_φ. The convolution term in Eq. (10) for an arbitrary incident wave reads FT[s(r, b) E_i(r)](k) = s̃(k, b) * Ẽ_i(k). Therefore the scattered field in k-space for an incident plane wave is proportional to α k² E_0i G̃(k) s̃(k − k_i, b), with G̃(k) being the Fourier transform of G(r) as defined by Eq. (20). Assuming that the cylinder is a thin nanorod of length L and with diameter D << λ, it can be assumed to be a δ-like needle oriented in the z-direction. Therefore the form factor is infinitely wide in the lateral directions and only depends on the k_z component. The form factor of a very thin cylinder is s̃_0(k) = sinc(k_z L/2). Now the Euler rotation of the cylinder simplifies massively, since only the rotated component k_z' enters. Hence, we find for the rotated k_z-component k_z' = (k_x cos b_φ + k_y sin b_φ) sin b_θ + k_z cos b_θ. For a cylinder in the center of the focus and tilted only in the k_x k_z plane (b_φ = 0), the form factor reduces to s̃_0(k, b_θ) = sinc[(k_x sin b_θ + k_z cos b_θ) L/2]. The Fourier transform of the Green's function is the Ewald sphere, a spherical cap with G̃(k) ∝ δ(|k| − k). The determination of the scattered field in k-space as described by Eq. (16) can be illustrated graphically as shown in Fig. 2. In a so-called Ewald construction, the spherical δ-like surface G̃(k) is multiplied on the form factor s̃_0(k, b_θ), which is shifted by the k-vector of the incident wave k_i. Figure 2(a) displays a thin cylinder of length L = 2λ in the center of the focus, tilted by b_θ = 20°, and an incident plane wave with k_i = (0, 0, k). Figure 2(b) shows the form factor in gray scale, revealing the shape of a sinc function also tilted by b_θ = 20° and shifted upwards by k_i. The Ewald sphere is displayed as a black circle, whereas the part of the half circle representing the forward-scattered field captured by the detection lens with NA_det = 0.9 is colored in red. The intersection is projected onto the k_x-axis and is displayed schematically as Ẽ_s(k_x).
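The form factors described above are straightforward to evaluate numerically. The sketch below is a minimal illustration under the assumptions that constant prefactors are dropped and sinc(x) = sin(x)/x, as in the text; the function and variable names are illustrative, and scipy is assumed to be available.

```python
import numpy as np
from scipy.special import j1

def sinc(x):
    # sin(x)/x with sinc(0) = 1 (note: numpy.sinc is sin(pi x)/(pi x))
    return np.sinc(np.asarray(x) / np.pi)

def form_factor_cylinder(kx, ky, kz, L, D):
    """Form factor of an untilted cylinder of length L and diameter D
    (constant prefactors omitted)."""
    kr = np.hypot(kx, ky) * D / 2
    radial = np.where(kr < 1e-12, 1.0, 2 * j1(kr) / np.where(kr < 1e-12, 1.0, kr))
    return sinc(kz * L / 2) * radial

def form_factor_thin_rod(kx, ky, kz, L, b_theta, b_phi=0.0):
    """Thin-rod (D << wavelength) form factor: only the rotated k_z component enters."""
    kz_rot = (kx * np.cos(b_phi) + ky * np.sin(b_phi)) * np.sin(b_theta) + kz * np.cos(b_theta)
    return sinc(kz_rot * L / 2)
```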
Interference of the incident and scattered angular field spectrum
In the following the angular spectrum representation of the fields 0 ( , ) ( ) propagating in positive z-direction is used, which is obtained by projecting the fields () located on the positive (negative) half of the Ewald sphere () G k with k z > 0 ( () G k with k z < 0) onto the k x k y plane.This allows to use the more compact formulation: Hence, the angular spectrum of the scattered field for an incident plane wave (Eq.( 16)) is , , , The interference intensity consists of three terms as denoted in Eq. ( 1).Since the incident intensity will be canceled out or subtracted as will be shown in the next section, the 5D position vector b has to be extracted from the remaining two terms.In typical experimental situations the intensity of the incident field is removed electronically such that |Ẽ i | 2 will be removed in the following.The relevant difference intensity reads: Here we used the fact that the phase of the scattered field Φ s of a higher refracting particle in the Rayleigh-Gans-regime is π/2 behind the phase of the incident field Φ i such that The angular spectrum representation of a highly focused incident field without considering apodization is with the Heavyside step function defined by step(x) = 1 if x 0 and step(x) = 0 otherwise.NA = n m sin(α m ) is the numerical aperture of the focusing lens.This corresponds to the field distribution in the pupil plane of an objective lens.
Calculation of the focused incident field
The only slightly more complicated part in describing Eq. (23) is the complex amplitude of the scattered field E_s(b_t) at the scatterer position b_t, which is defined by the amplitude of the focused incident field. E_i(b_t) can be well described by a Fourier transform of the pupil plane. This method is very flexible since it allows one to consider many relevant focusing aspects, but requires numerical computation. Alternatively, Gaussian beam optics can be used, which is a paraxial approximation, but is helpful in our context, where interferometric tracking principles are to be developed. The field of a focused Gaussian beam can be written with a phase function that accounts for the Gouy phase shift of in total ΔΦ = π along the axial extent of the focus. Here, A_0 is the field strength, W_0 the beam waist at z = 0, W(z) the beam radius, R(z) the radius of curvature of the wave-front, and z_0 = k W_0^2/2 is the Rayleigh length. The beam waist can be expressed by the NA of the lens such that W_0 = 2^(1/2) λ/(π NA). From this it is possible to get a reasonable value for the complex amplitude at the center of the scatterer.
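As a simple illustration of how the local amplitude at the rod centre can be estimated, the Python sketch below evaluates a scalar paraxial Gaussian beam including the Gouy phase, using the textbook beam parameters and the waist relation W_0 = 2^(1/2) λ/(π NA) quoted above. It is a sketch under the stated paraxial assumption, not a reproduction of the paper's focused-field expression, and the numerical values in the usage line are arbitrary examples.

    import numpy as np

    def gaussian_beam_field(x, y, z, wavelength, NA, n_m=1.33, A0=1.0):
        """Scalar paraxial Gaussian beam with Gouy phase (textbook form, focus at z = 0)."""
        k = 2.0 * np.pi * n_m / wavelength               # wave number in the medium
        W0 = np.sqrt(2.0) * wavelength / (np.pi * NA)    # beam waist, W0 = 2^(1/2) lambda / (pi NA)
        z0 = k * W0**2 / 2.0                             # Rayleigh length
        Wz = W0 * np.sqrt(1.0 + (z / z0)**2)             # beam radius W(z)
        gouy = np.arctan(z / z0)                         # Gouy phase, totalling pi across the focus
        r2 = x**2 + y**2
        curvature = 0.0 if z == 0 else k * r2 / (2.0 * z * (1.0 + (z0 / z)**2))  # k r^2 / (2 R(z))
        amplitude = A0 * (W0 / Wz) * np.exp(-r2 / Wz**2)
        return amplitude * np.exp(1j * (k * z + curvature - gouy))

    # local field approximation: incident amplitude at an assumed rod centre b_t = (0, 0, 0.1 um)
    E_centre = gaussian_beam_field(0.0, 0.0, 0.1e-6, wavelength=1.064e-6, NA=1.2)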
Local Field Approximation
Computing the scattered fields of a cylindrical particle in a highly focused laser beam is a complicated task. However, a particle which is much smaller than the wavelength is hardly affected by the spatial variation of the incident field across its extent. Therefore one can use the approximation of a local field with a mean phase at the center of the particle. Consequently, we assume that the particle "sees" an incident plane wave, which means that an incident plane wave is scattered according to Eq. (11). To account for the focused incident field we take the complex amplitude E_i(r = b_t) of the focused beam. This approximation has turned out to provide scattering cross-sections for Rayleigh-Gans particles which are not more than 20% away from the rigorously calculated scattered fields (Rohrbach, unpublished data).
Applying the local field approximation with k i = (0,0,k), we can insert Eqs. ( 16) and ( 25) into Eq.( 23) to obtain the interference part () , sin Φ, The interference phase ΔΦ t (k,b t ) is determined by the phase of the scattered field translated by b t .Furthermore, we disregard the small changes of the lateral wavefront curvature in the local field approximation such that Φ i (k,b t ) = Φ i (k z ,b z ).We find [32]: Across the circular BFP of the detection lens, defined by with factor B = E i0 αk 2 /(2π) 3 .The shape of the interference term in the BFP is determined by a sinc function for a cylinder tilt and by a sine function for a cylinder shift.These characteristic intensity distributions can be illustrated by applying the thin cylinder approximation of Eq. ( 19) with b = 0.For a cylinder in the beam center (b t = 0) the interference term sin(ΔΦ t (k x ,k y ,0)) = 0 disappears and we find For tilt angles b θ < 30° and in the paraxial approximation, where This equation is illustrated in Figs.The diameter of the sinc[(-k x 2 -k y 2 )L/(4k)] function is determined by the length L of the cylinder, such that the bright area becomes narrower for longer cylinders (see Fig. 4).Now, how does an additional shift b t of the cylinder change the intensity distribution in the BFP?To illustrate the multiplication with the sin[ΔΦ t (k x ,k y ,b t )] function as described in Eq. ( 29), we extend Eq. ( 33) for the case b t = (b x ,0,0) such that , , , 0 sin / 2 sin The combination of a cylinder shift and tilt is displayed in Fig. 5
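The qualitative rule derived here (a rod tilt shifts the bright sinc-shaped region in the BFP, a rod shift modulates it with a sine of the interference phase) can be visualized with the toy model below. The displacement of the disc with tilt and the form of the modulation are simplified placeholders rather than the exact expressions of Eqs. (29)-(34), and the prefactor B is omitted.

    import numpy as np

    def bfp_difference_image(L, wavelength, n_m, NA_det, b_x=0.0, b_y=0.0,
                             tilt_kx=0.0, tilt_ky=0.0, n_grid=256):
        """Toy BFP difference image: a tilted sinc-shaped disc times a sine shift modulation."""
        k = 2.0 * np.pi * n_m / wavelength          # wave number in the medium
        k_max = 2.0 * np.pi * NA_det / wavelength   # aperture radius in k-space set by the detection NA
        kx, ky = np.meshgrid(np.linspace(-k_max, k_max, n_grid),
                             np.linspace(-k_max, k_max, n_grid))
        aperture = (kx**2 + ky**2) <= k_max**2
        # a rod tilt displaces the sinc-shaped bright region by (tilt_kx, tilt_ky) in the BFP
        arg = ((kx - tilt_kx)**2 + (ky - tilt_ky)**2) * L / (4.0 * k)
        sinc_disc = np.sinc(arg / np.pi)            # sin(arg)/arg
        # a lateral rod shift modulates the pattern with a sine of the interference phase
        modulation = np.sin(b_x * kx + b_y * ky)
        return aperture * sinc_disc * modulation

    img = bfp_difference_image(L=0.8e-6, wavelength=1.064e-6, n_m=1.33, NA_det=0.9,
                               b_x=0.1e-6, tilt_kx=1e6)

For b_x = b_y = 0 the modulation term vanishes, consistent with the statement above that the interference term disappears for a cylinder in the beam center.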
Comparison with rigorous numerical approach
The analytical approach presented here contains a number of approximations, which were necessary in order to derive a qualitative relationship between a cylinder tilt and shift and the interference intensity in the BFP. The qualitative correctness of the analytically approximated intensity I_diff(k_x,k_y,b) was therefore compared to rigorous numerical calculations using the in-house simulation software LightWave(R). Here, a highly focused incident field with NA = 1.2 [33] was scattered at a cylinder of finite thickness D within the Rayleigh-Gans theory [31]. The numerical results confirm the principle derived above, namely that a cylinder shift/tilt in the FP results in a signal modulation/shift in the BFP, as illustrated in Fig. 7. However, there are cases in which the intensities I_diff(k_x,k_y,b) for different parameters look similar. Comparing, for example, Figs. 8(d) and 8(f), one can see a slight clockwise rotation of the bipolar signal, which results from the spherical modulation of an axial cylinder shift. This effect, for instance, leads to an over-estimation of the cylinder shift along y.
Five-dimensional tracking with a local QPD
The goal of this study is to develop a tracking scheme, which allows to extract 5 signals S Based on the general BFP interference detection scheme introduced by Eq. ( 3), the spatial filter function ) ( ) ( , ) ( , , , , ( , , () , ) I k k .The radius of this circular region depends on the cylinder length and shall be denoted as k L (see Fig. 9).This region can be described by a circular step function as .The intensity above the threshold is defined as From this the center of mass vector k c = (k xc ,k yc ) is obtained by the operation: Having determined c k , the concept of a local QPD can be applied, which evaluates the bipolar signal modulation within the circular region.In other words, the difference of the upper and lower half of the integrated detector area ( | |)( describes the horizontal cylinder displacement.The first three (translational) components of the 5D filter function read: The last two (rotational) components of the 5D filter function are: ,, a tan( / ) After having found the 5D spatial filter function H(k x ,k y ), we can express the relation between the 5D state of the nanorod and the corresponding tracking signal as an extension to Eq. ( 3): which is e.g. for displacements in direction b x : , 0 sin 2 sin 2 1
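A possible numerical reading of the "local QPD" idea is sketched below: the difference image is thresholded, the centre of mass k_c of the bright region is computed, the two tilt signals are taken from the polar coordinates of k_c, and quadrant-type half-difference signals are evaluated inside a circular mask of radius k_L centred on k_c. The threshold, the axial-signal definition and the assignment of half-differences to displacement axes are simplifying assumptions; the paper's 5D filter functions H(k_x,k_y) are more detailed.

    import numpy as np

    def local_qpd_signals(img, kx, ky, k_L, threshold=0.0):
        """Extract (S_x, S_y, S_z, S_theta, S_phi) from a BFP difference image (toy version)."""
        bright = np.where(img > threshold, img, 0.0)
        total = bright.sum()
        if total == 0:
            raise ValueError("no intensity above the threshold")
        # centre of mass k_c of the bright circular region encodes the rod tilt
        k_xc = (bright * kx).sum() / total
        k_yc = (bright * ky).sum() / total
        S_phi = np.arctan2(k_yc, k_xc)          # azimuthal tilt signal from the polar angle of k_c
        S_theta = np.hypot(k_xc, k_yc)          # polar tilt signal from the length of k_c
        # "local QPD": half-difference signals inside a circle of radius k_L centred on k_c
        mask = ((kx - k_xc)**2 + (ky - k_yc)**2) <= k_L**2
        local = img * mask
        S_x = local[kx > k_xc].sum() - local[kx <= k_xc].sum()   # left/right half difference
        S_y = local[ky > k_yc].sum() - local[ky <= k_yc].sum()   # upper/lower half difference
        S_z = local.sum()   # simplistic axial signal; the paper uses a dedicated central region
        return S_x, S_y, S_z, S_theta, S_phi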
Tracking results
The five-dimensional configuration space contains many different combinations of cylinder positions and tilts, as well as corresponding interference patterns, which could be analysed. However, to illustrate that the classical Fourier relation (a shift/tilt in the FP results in a modulation/shift in the BFP) holds, only some typical cylinder states are shown here. For a cylindrical nanorod optically trapped in a highly focused beam, typical displacements are not larger than the extent of the focus, and polar tilt angles are smaller than b_θ = 30° due to restoring forces or torques that increase linearly with the displacement b_t or tilt angle b_θ, respectively. The limits in b_t and b_r correspond roughly to the range of displacements where the linear relation of signals S_j ≈ S_0j + g_jj b_j can be assumed, as is also known from the BFP tracking of spheres. Figure 10 displays the typical sine-like signals for rod displacements. Although the slopes vary slightly, the linear dependency for rod displacements smaller than 0.2 µm can be seen.
The iso-signal grid representation
In the following, the results from numerical simulations are presented, which have been computed on a grid of translations and tilt angles (the full grid is specified below, before the caption of Fig. 11). Here, the simplest case is shown: a non-tilted cylinder shifted over a range of 0.4 µm × 0.8 µm (Fig. 11(a)) and a centered cylinder, which is rotated over a range of 90° × 30° (Fig. 11(b)).
Here p_0ij normalizes the probability distribution to 1. σ_j = (k_B T/κ_j)^(1/2) is the standard deviation of the Gaussian distribution in direction b_j (j = x,y,z,θ) and results from the equipartition theorem (with k_B T as the thermal energy). The coupling of translation and rotation is not considered. The probability distributions of the coordinates are assumed to be mutually independent. The signals for axial displacements and a polar angle tilt do couple, as shown in Fig. 14, where both iso-signal lines are tilted by roughly the polar tilt angle of the cylinder. In this case the sensitivity matrix of Eq. (35) is not diagonal, and the signals for a change in b_i can be expressed as a linear combination over the coupled coordinates with off-diagonal sensitivities g_ij.
Calibration of the tracking system and error estimate
The standard technique to obtain the trap stiffnesses κ jj and the detector sensitivities g jj is to use the Langevin calibration method [27] for a particle diffusing in a harmonic potential.
Here, the κ_jj can be measured via an autocorrelation function AC[b_j(t)], which decays exponentially in a harmonic potential W(b_j) with autocorrelation time τ_jj = γ/κ_jj. One can solve for κ_jj by using the translational drag coefficients, which are known from the dimensions D and L of the cylinder. Here η is the fluid's viscosity and the δ are geometry-dependent correction factors [34].
The detector sensitivities g_jj = σ_sj/σ_j can be obtained from the standard deviations of the position or angle probability density, which are σ_j = (k_B T/κ_j)^(1/2) according to the equipartition theorem, and from the widths σ_sj of the measured signal histograms. The histograms are generated from the trajectories S_j(t) = S_j(b_j(t)), which are measured for a couple of seconds. Since the g_jj are never constant across the diffusion volume, the widths σ_sj and the sensitivities g_jj represent mean values.
Therefore the reconstructed nanorod state b_j^rec is obtained as b_j^rec ≈ S_j σ_j/σ_sj = S_j/g_jj. This results in a tracking error Δb_j. In our computer simulation we assumed realistic values for σ_j = (k_B T/κ_j)^(1/2), as shown by the background gray colors in Fig. 14. From this we obtained the corresponding widths σ_sj and thereby g_jj = σ_sj/σ_j. Since we know the real input value b_j in the simulation, the error Δb_j = b_j^rec - b_j can be estimated. This is shown for some typical states of a nanorod in Fig. 15.
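The calibration chain described in this section can be summarized in a short Python sketch: the autocorrelation time of a measured trajectory gives τ = γ/κ, equipartition gives σ_j = (k_B T/κ_j)^(1/2), and the sensitivity follows as g_jj = σ_sj/σ_j. The 1/e estimate of the autocorrelation time and the slender-body drag formula are standard simplifications standing in for the exponential fits and the drag factors of [34].

    import numpy as np

    kB_T = 4.1e-21  # thermal energy at room temperature, in J

    def autocorrelation_time(signal, dt):
        """Crude 1/e estimate of the decay time of the normalized autocorrelation of S_j(t)."""
        s = signal - signal.mean()
        ac = np.correlate(s, s, mode="full")[s.size - 1:]
        ac = ac / ac[0]
        below = np.where(ac < np.exp(-1.0))[0]
        return below[0] * dt if below.size else float("nan")

    def rod_drag_parallel(eta, L, D, delta=-0.2):
        """Slender-body drag of a rod along its axis; delta stands in for the factors of [34]."""
        return 2.0 * np.pi * eta * L / (np.log(L / D) + delta)

    def calibrate_axis(signal, dt, gamma):
        """Trap stiffness kappa_jj and detector sensitivity g_jj for one coordinate."""
        tau = autocorrelation_time(signal, dt)
        kappa = gamma / tau                 # tau_jj = gamma / kappa_jj in a harmonic potential
        sigma = np.sqrt(kB_T / kappa)       # equipartition width of the position distribution
        sigma_s = signal.std()              # width of the measured signal histogram
        return kappa, sigma_s / sigma       # g_jj = sigma_sj / sigma_j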
Discussion and conclusion
We have presented a theoretical concept of how to recover the 5-dimensional state b = (b_t, b_r) of a cylindrical nanorod (3D position b_t and two angles b_r) from the interference pattern of unscattered light and light scattered at the cylinder. In particular, we show for the first time that the orientation tracking of a nanorod in a focused laser beam is also possible with the established concept of BFP interferometry. Several difficulties had to be overcome, which might be one of the reasons why no such tracking concept has been presented before.
Although rigorous scattering computations for a tilted cylinder in a highly focused laser beam have been published [20,31], corresponding to a solution of the forward problem, the backward problem, the recovery of both the 3-D position and the 2-D orientation, could not be solved. More precisely, the required direct relation between a nanorod displacement or tilt in the FP and the corresponding change of the interference pattern in the BFP has not been revealed. To uncover this relation, we have developed an analytical model based on the Rayleigh-Gans scattering theory. Since the interference patterns have to be evaluated in the BFP of the detection lens, the electric fields are derived in k-space or in the angular spectrum representation, respectively. The analytical representation of the form factor for a tilted cylinder was simplified by the approximation that the cylinder of finite length is infinitely thin. In addition, we used a local field approximation, i.e. an incident plane wave, to calculate the scattered field spectrum, since the lateral phase of the incident field does not change much for typical displacements of an optically trapped nanorod. This operation results in a shift of the form factor by the incident k-vector. The local change of the phase and amplitude of the incident focused field along z was computed by Gaussian beam optics including the Gouy phase shift.
The interference pattern in the BFP was then obtained by the spectrum of a highly focused incident beam and by the approximated spectrum of the scattered field.From the analytical formula for the interference intensity and the corresponding 1D and 2D plots, it turned out that a nanorod displacement results in the modulation of the BFP interference intensity, whereas a tilt of the nanorod results in a shift of the BFP intensity.The results of our model with above mentioned approximations were confirmed by a rigorous numerical approach for a dielectric cylinder with L = 0.8µm = λ/n and D = 0.1µm, which were obtained by the in-house developed simulation software LightWave (R) .
Over a reasonably wide range of displacements b_t and/or tilts b_r, the 5 resulting signals S(b) ≈ S_0 + ĝ·b are roughly linear in b and roughly orthogonal to each other. Only for larger nanorod displacements and angles do the signals become more nonlinear and begin to couple. This is especially pronounced for a tilted cylinder displaced in the axial z-direction.
However, there are a number of means of how to reduce the inter-signal coupling and to increase the linear tracking range.Similar to the approaches that have been applied successfully for the tracking of spheres [30,35], a spatial filter (function) in the BFP of the detection lens might help to improve the 5D tracking of small cylinders in the focal region of a highly focused beam.
Nanorods can align horizontally, i.e. parallel to the strongest component of the electric field inside the focus due to a polarization induced torque [36,37].By inspecting Fig. 2 similar scattered fields and interference patterns can be expected for a horizontally tilted and shifted cylinder, provided that the cylinder is shorter than the focal diameter, i.e.L < λ/n m .However, further investigations are necessary to test whether in this case the resulting tracking signals are unique, linear and orthogonal.
The experimental realization of our theoretical concept remains open and is challenging, if a fast tracking rate of about 100 kHz is the goal.It appears to be difficult to record the intensity pattern by simply two QPDs, delivering eight signals in total.A straightforward approach seems to be the usage of fast cameras with a small number of pixels, which are on the market, also for the popular 1064nm trapping wavelength requiring InGaAs as sensor material.It remains to be shown, whether the here presented tracking concept can be verified under experimental conditions.
Nevertheless, the fast 5D tracking of cylindrical nanorods will enable a manifold of applications reaching from non-equilibrium local probe measurements to surface scanning with optically trapped, needle-like probes similar to AFM imaging.Furthermore, if nanorods are used as building blocks for nano-scaled systems, the observation of their thermal state fluctuations is indispensable for a controlled assembly.
xy I k k b , described by the Fourier plane coordinates k x and k y , changes uniquely with the position b(t) = (b x (t), b y (t), b z (t)) of the particle roughly over the extent of the laser focus.The intensity distribution ( , , ) xy I k k b in the BFP is a superposition of the incident ( , ) b and provides the lateral position signals S x (b) and S y (b), whereas QPD #2 only records the central part of ( , , ) xy I k k b and provides the axial signals S z (b).
Fig. 1 .
Fig. 1.Setup scheme for trapping and tracking.A cylinder is optically trapped in a laser focus and changes its center position and orientation due to external forces or thermal fluctuations.The interference intensity pattern of scattered and unscattered light is recorded by a sensor in the back focal plane (BFP) of the detection lens (DL).Zoom: a translated and tilted cylinder, described by a position vector bt and an orientation vector br = (bθ, b).
The explicit condition reads ΔΦ = L k_0 (n_s - n_m) << 2π, where k_0 is the wave number in vacuum and k = n_m k_0 the wave number in the medium. The polarizability in the scalar case according to the Clausius-Mossotti relation reads α = 3V (n_s^2 - n_m^2)/(n_s^2 + 2 n_m^2).
Fig. 2 .
Fig. 2. Rayleigh-Gans scattering of infinitely thin cylinder of length L. a) A plane wave with wave-vector ki incident on a tilted cylinder as local field approximation for the center of a focused field.b) The tilted form factor 0 ,, ( ) x z s k k b of the cylinder (as background in grey scale) is shifted by ki relative to the Ewald circle with radius ki = ks.The overlap (red circle area) defines the part of the angular spectrum of the forward scattered field Ẽs(kx) that is detected by a lens with NAdet = 0.9.c) The scattered field spectrum Ẽs(kx) as intersecting line between Ewald circle and form factor.
The equation is illustrated in Figs. 3(a)-3(c) for a cylinder at b_t = 0 and for different tilt angles b_θ = 0, 10°, 20°. The |sinc|^2 function is shown as a bright circular region in the BFP and is shifted linearly with increasing tilt angle b_θ, as displayed in Fig. 3(d). The principle also holds for b_φ ≠ 0, as shown in Figs. 3(e), 3(f), where b_φ can be read out by the polar angle of the intensity's center of mass.
Fig. 3
Fig. 3. Intensity difference I_diff(k_x,k_y) in the BFP of the detection lens for a tilted cylinder. a-c) The flat-top-like intensity maximum is shifted sideward if the cylinder is tilted (b_θ > 0, b_φ = 0). d) Corresponding intensity line scans. e,f) For b_φ ≠ 0 the center of mass of I_diff is shifted in the corresponding azimuthal direction.
Fig. 4 .
Fig. 4. Influence of a cylinder's length on the intensity difference I_diff(k_x,k_y). With increasing length L the width of the flat-top-like intensity maximum (red circle) decreases. The tilt angle b_θ is defined by the length of the circle's center vector (arrow).
The combined states are shown in gray scale with three line scans each on the right side. The multiplication with either sin(b_x k_x), as shown in Figs. 5(a)-5(c), or with sin(b_y k_y), as in Fig. 5(d), reveals a modulation of the |sinc| function in the BFP, which is approximately linear with the cylinder displacement in the FP. Again, the tilt of the cylinder in the FP results in a shift of the circular region. This principle implies a 5D detection scheme, which is achieved by the method of a local quadrant photo-diode (QPD).
Fig. 5 .
Fig. 5. Tracking signals for a thin cylinder, which is both shifted and tilted. Left column: scheme for shifted and tilted cylinders. Center column: corresponding intensity difference I_diff(k_x,k_y) in the BFP. Right column: intensity line scans I_diff(k_x,0) illustrate the signal shift for a cylinder tilt and the bipolar signal modulation for a cylinder shift. A displacement b_z of the cylinder in the axial direction results in a spherical modulation of I_diff(k_x,k_y,b_t) with the axial phase.
Fig. 6
Fig. 6. Intensity difference I_diff(k_x,k_y,b) for axially displaced thin cylinders with arbitrary positions and orientations, described by the state vector b = (b_t, b_r) = (b_x,b_y,b_z,b_φ,b_θ). A cylinder shift in the axial direction results in a spherical modulation of the signal.
Fig. 7 .
Fig. 7. Rigorously computed intensities I_diff(k_x,k_y,b) for states b = (b_x,b_y,b_z,b_θ,b_φ) of a cylinder with finite thickness. The cylinder length is L = 0.8 µm and the diameter is D = 0.1 µm. The cylinder displacements are in units of 0.1 µm. The round pattern I_diff(k_x,k_y,b) is modulated in three cases and is shifted by the positions SP in e,f) to account for cylinder tilts.
+
x , S y , S z , S θ , S for position and orientation out of the BFP interference intensity in the back focal plane, const.In addition, the desired scheme shall provide a roughly linear relationship between a cylinder state b j and a signal S j S 0j + g jj b j , but also signals which are approximately independent of each other (j = x,y,z,θ,).The detector sensitivities g jj define an diagonal matrix such that provide the 5 state variables b j .As pointed out and illustrated in the last section, the tilt angles (b θ ,b ) can be extracted from the center of mass position k c = (k xc ,k yc ) of the circular intensity region of (
Fig. 9 .
Fig. 9. Intensity read out with a local QPD.a) Computed intensity difference cylinder displacement and the difference of the left and right half, ( | |)(
Fig. 10 .
Fig. 10.Tracking signals Si(b) for a nanorod with different state vectors b. a) Lateral signals Sx(bx) for two different lateral shifts by and tilts bθ .b) Lateral signals Sx(bx) for two different axial shifts bz and tilts bθ.c) Axial signals Sz(bz) for two different lateral shifts bx and tilts bθ .The linear detection range is marked with a box.
The grid covers translations b_x = -0.2…0.2, b_y = -0.2…0.2, b_z = -0.2…0.6, in µm, with increment 0.1 µm, and tilt angles b_θ = 0…30° with increment 10° and b_φ = 0…90° with increment 15°, which yields 5 x 5 x 9 x 4 x 7 = 6300 different states of a cylinder. Two-dimensional contour plots S_i(b_i,b_j) and S_j(b_i,b_j) (i,j = x,y,z,θ,φ) are overlaid to illustrate that most of the signals are approximately linear and pairwise orthogonal to each other, such that S_i(b_i,b_j) ≈ g_ii b_i and S_j(b_i,b_j) ≈ g_jj b_j. Linearity is demonstrated by an equidistant spacing of the grid lines; orthogonality is shown by an orthogonal intersection of vertical and horizontal lines of equal signals (iso-lines), which are plotted in different colors for i and j in Fig. 11.
Fig. 11 .
Fig. 11.Position and orientation signals of a cylinder in a focused laser beam.An assumed probability density of states is underlayed in the background in gray scale.a) Iso-lines Sx(bx,by) and Sz(bx,by) of a vertical cylinder centered in y-direction.b) Iso-lines S(b, bθ) and Sθ(b, bθ) of a tilted cylinder in the center of the focus.The smallest polar angle is bθ = 10°, since azimuth angles b are not defined for bθ = 0.The gray shaded areas in the background of the contour plots indicate the probability densities p(b i ,b j ) to find the nanorod in the corresponding states (positions or orientations) assuming a harmonic potential W(b i ,b j ) ½κ i b i 2 + ½κ j b j 2 or a linear restoring force F j (b i ) = -/b j W(b i ), respectively.The probability densities p(b i ,b j ) are defined according to Boltzmann statistics as independent such that p(b i ,b j ) = p(b i )p(b j ).The probability distribution for the orientation azimuth angle b is p(b ) = p 0 = 1/(2π) since no force or torque restores the nanorod along b if polarization effects are disregarded.The overlays of the signals S x/z (b θ ,b x/z ) and S θ (b θ ,b x/z ) in Fig. 12 as well as the signals S x/z (b ,b x/z ) and S (b ,b x/z ) in Fig. 13 reveal that only weak signal coupling occurs according to our computer simulations.However, for axial displacements b z = 0.2µm the signal iso-lines S x (b θ/ ,b x ) in Fig. 12(b) and Fig. 13(b) are rather oblique.
Fig. 12 .Fig. 13 .
Fig. 12. Linearity and orthogonality of position and orientation signals Sx/z(bθ,bx/z) and Sθ(bθ,bx/z) of a shifted and tilted cylinder in a focused laser beam.a), b) Iso-signal lines (in a.u.) with and without axial cylinder shift.c),d) Iso-signal lines (in a.u.) with and without lateral shift.
in b i the sensitivities g ii (b i ) become space variant.In general all states of the cylinder couple with each other depending on the strength of b i such that tilt and axial shift can be understood by inspecting Figs.8(b) and 8(f), where the modulations of interference intensities are not independent in x,y and z.
Fig. 14 .
Fig. 14. Coupling of linear position and orientation signals S_x(b_x,b_z) and S_z(b_x,b_z) of a shifted and tilted cylinder in a focused laser beam. An assumed Gaussian probability density of states, resulting from linear restoring forces, is underlaid in the background in gray scale.
Fig. 15 .
Fig. 15. Tracking errors for displacements and tilts of a cylinder (L = 0.8 µm, D = 0.1 µm, n_s = 1.57). a) Absolute tracking error for lateral displacements b_x for different state vectors b. b) Absolute tracking error for tilt angles b_θ for different state vectors b. Results were calculated with the simulation software LightWave(R).
"year": 2014,
"sha1": "9b33f7a2b12700935a75138525c7a0248a4d9d8d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1364/oe.22.006114",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "f71ce25798221b77064b31ee4b656ff8a6ad2c4e",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine",
"Physics"
]
} |
Hawking Radiation from Small Black Holes at Strong Coupling and Large N
In a previous work an approximate static metric was found of a test black string that stretches from the boundary to the horizon of the planar Schwarzschild-AdS_5 geometry. This is the gravity dual of the Unruh state for \mathcal{N}=4, SU(N) super Yang-Mills theory on a 4-dimensional Schwarzschild background, at large N and large 'tHooft coupling. We compute the holographic stress tensor of the gravitational solution and it turns out to possess many essential features of the Unruh state for weakly-coupled Hawking radiation, such as the appearance of a negative energy density near the black hole horizon and a positive energy density at infinity. It also confirms recent results that at leading order in N, the expectation value of the stress tensor in the Unruh state is finite on both the future and past horizons, and that at this order there are no flux terms as is expected in the black droplet phase.
Introduction
One of the interesting areas the AdS/CFT correspondence [1] could explore is the area of quantum black holes, or Hawking radiation [2,3,4]. To do so one must look for black hole solutions in AdS spacetimes with the boundary condition that the induced metric on the AdS boundary is that of a black hole [5]. Two types of such solutions were conjectured to exist [5,6]; black funnels or black droplets. Black funnels are black holes with connected horizons that extend from the boundary to horizon of the planar Schwarzschild-AdS geometry -they connect with the planar black hole horizon in a shoulder-like configuration. Black droplets, on the other hand, are black holes with disconnected horizons, that is, they extend from the boundary of AdS down to some point in the bulk where they close off (or cap off) in a smooth way before they reach the planar black hole horizon; the planar black hole gains some deformation as a result of the droplet suspended above it. Black funnels and droplets are the gravitational duals of different vacuum states of N = 4, SU (N ) super Yang-Mills theory on black hole backgrounds, at large N and large 't Hooft coupling. There are some physical differences between them though. Black funnels (as the horizon is connected) are dual to a deconfined plasma which is strongly coupled to the boundary black hole, that is, energy trasfer is quick between them, of order O(N 2 ). Black droplets on the other hand (as the two horizons in the bulk are disconnected) are dual to a deconfined plasma which is coupled weakly to the boundary black hole, that is, energy transfer is slow between them, of order O(1), see [5,7] for further details. A sharp phase transition is expected between the two phases which will be mediated by critical geometries of the kind proposed in [8,9,10].
In general, the temperature of the boundary deconfined plasma can be different from the temperature of the boundary black hole, depending on the sizes of the planar black hole and the boundary black hole, respectively. In [5,11] the two temperatures were taken to be equal, and so this was dual to the Hartle-Hawking vacuum state, describing thermal equilibrium between the plasma and the boundary black hole. In [12], the authors constructed, numerically, a black droplet solution where the two temperatures are different: a boundary black hole at a finite temperature and a plasma at zero temperature, corresponding to a black droplet suspended above the extremal Poincare horizon of AdS. This was argued to be the dual of the Unruh or the Boulware vacuum states. In this paper we work in 5 spacetime bulk dimensions and focus on the droplet phase, on the case where the two temperatures are different. We take a finite (non-zero) temperature planar black hole in the bulk and a high-temperature boundary black hole. In other words, the temperature of the boundary black hole is much higher than the finite temperature of the surrounding plasma. This corresponds to the Unruh vacuum state, which is the steady state in which the black hole only radiates and does not absorb positive energy, a process of black hole evaporation. In the bulk, this corresponds to a thin and long black droplet (or equivalently a test black string) extending from the boundary to the horizon of the planar Schwarzschild-AdS_5 geometry [8] (see Fig. 1). As was shown in [8], in the Lorentzian section the black string (or the thin and long droplet) caps off smoothly just at the planar horizon, while it has a cone structure there in the Euclidean section, reflecting the fact that the two disconnected objects are at different temperatures.
In this work we take the bulk solution found in [8] and compute from it the holographic stress tensor using the familiar prescription of [13]. By the AdS/CFT correspondence, this classically computed stress tensor at the AdS_5 boundary is equivalent to the expectation value of the renormalized stress tensor, in the Unruh state, of N = 4, SU(N) super Yang-Mills theory on a fixed 4-dim Schwarzschild black hole, at large N and large 't Hooft coupling. We find that this energy-momentum tensor shares many features with stress tensors computed for weakly coupled Hawking radiation [3]. For example, it has the essential feature of Hawking radiation that there are negative energy densities near the black hole horizon and positive ones at infinity [3,14,15]. This is the manifestation of particle creation: one particle with negative energy enters the black hole and its partner with positive energy escapes to infinity. This is the way a black hole loses mass and evaporates, by absorbing negative energies. We also find that it is covariantly conserved and that it satisfies the correct trace anomaly [13,16,17]. Yet, as our boundary theory is strongly coupled, we should of course also expect some differences from the weakly coupled cases studied extensively in the past. In this regard we find, in agreement with [12,18,19], that at leading order in N the stress tensor is finite everywhere (in the Unruh state). In particular, we find it finite on both the future and past horizons. We also find that at this leading order in N the stress tensor has no flux terms, that is, there is no radiation from the boundary black hole to infinity. The latter point confirms that black droplets are, indeed, dual to a plasma which is weakly coupled to the black hole, in the sense that a flux term, or a radiation term from the black hole, would have sped up the exchange of heat between the black hole and the external plasma.
The paper is organized as follows. We begin in section 2 by introducing the bulk metric, found in [8], for a static test black string which extends from the boundary to the horizon of the planar black hole. In section 3 we compute the holographic stress tensor from the bulk metric. In section 4 we discuss the properties of the stress tensor and we compare it to stress tensors in the literature for weakly coupled Hawking radiation, and also to stress tensors for strongly coupled cases found recently. In section 5 we conclude by some comments.
Bulk metric
We start by reviewing the approximate static metric -which we already obtained in [8] that describes a test black string dangling from the boundary to the horizon of the planar Schwarzschild-AdS 5 black hole, see Fig.1. This is a solution of the Einstein equations with negative cosmological constant in 5 dimensions, where R AdS is the radius of curvature of AdS 5 . The solution [8] describes how to immerse a probe black string, with local metric, in the planar Schwarzschild-AdS 5 geometry. The planar geometry can be written more conveniently (in coordinates adapted to the black string) as where the red-shift function of the black brane is, By the coordinate change v = t + r/ f (z) one can go back from (2.3) to the familiar form of the black brane geometry, Before writing down the solution we want to emphasize that in our solution the black string horizon is much smaller than the radius of curvature of AdS 5 , namely, r 0 /R AdS << 1, where the last plays the role of the small parameter in our solution, and which tells that our black string is a test object. We assume also that the planar black hole horizon is of the same order of magnitude as the AdS 5 radius, that is, µ 1/4 ∼ R AdS . Note that the last two assumptions imply that the temperature of the boundary black hole, T B.H ∼ 1/r 0 , is much higher than the temperature of the surrounding plasma, T plasma ∼ 1/R AdS . Namely, T B.H >> T plasma . The solution [8], expanded up to second order in derivatives around an arbitrary z = constant surface (we denote the surface by z = z c and assume that z c > µ 1/4 ) is given by the following metric, 1 where ǫ is a formal parameter that counts the number of derivatives with respect to z, or equivalently counts the number of powers of r 0 R AdS in each term. In other words, as is explained in [8], each derivative with respect to z brings out a factor of r 0 R AdS . The non-vanishing components of h and the z-dependent radius of the black string is given by Note that 2M is the radius of the boundary black hole since r 0 (z) → 2M as z → ∞.
One should understand all quantities, which appear in the above solution, as expanded in derivatives (with respect to z) around the arbitrary surface z = z c . For example, It is worth mentioning that this solutions is regular everywhere in the Lorentzian section, and in special, the black string caps off smoothly at the planar black hole horizon. However, the solution is conical in the Euclidean section at the point where the black string intersects the planar black hole, and that is because each object has a different temperature. Moreover, the above solution satisfies the correct boundary conditions, see [8] for details. In particular, for large values of z c the solution reduces to the familiar AdS 5 black string, and hence it induces (up to a conformal factor) a 4−dim Schwarzschild geometry on the AdS 5 boundary.
Holographic Stress Tensor
In this section we take the above bulk solution and compute from it the holographic stress tensor (the boundary stress tensor) using the prescription of [13]. That is, we use the well-known formula (3.1), where γ_ab is the induced metric on the z = z_c surface, Θ_ab = (∇_a n_b + ∇_b n_a)/2 is the extrinsic curvature of that surface (n_a is an outward-pointing normal vector to the surface), and E_ab is the Einstein tensor with respect to γ_ab. The key step in our calculation of the boundary stress tensor is to notice that if we multiply both sides of (3.1) by the quantity r_0^2, it becomes manifest that the three terms on the right-hand side are of different orders in the small parameter of our system, r_0/R_AdS. Therefore, one can write down eq. (3.2), where the ǫ parameter, as discussed above, indicates the order of each term (see Appendix A for more details on this step). Hence, upon plugging our solution (2.6) into eq. (3.2), and thereafter taking the limit z_c → ∞, one obtains the resulting stress tensor. Note that, as it should be, the stress tensor is proportional to z_c^{-2}, guaranteeing a finite mass density on the boundary. Since that is the end of our calculation, we now set the formal parameter ǫ back to unity. We also take the boundary metric to be the Schwarzschild metric, and so we need to make the conformal transformation γ_ab → (R_AdS^2/z_c^2) γ_ab. As a result, the stress tensor transforms as T_ab → (z_c^2/R_AdS^2) T_ab, and it takes the desired form (3.5), expressed through functions a(r), b(r), c(r), d(r) multiplying dv^2, 2 dv dr, dr^2 and r^2 dΩ_2^2, which is the main result of our work.
Properties of the Stress Tensor
The stress tensor found above should be identified, according to the AdS/CFT correspondence, with the expectation value of the stress tensor, in the Unruh vacuum, of N = 4, SU(N) super Yang-Mills theory on the 4-dim Schwarzschild background, at large N and large 't Hooft coupling (see [5]): ⟨T_ab⟩_ren = T_ab (4.1). The result for the stress tensor obtained above is at leading order in the large N limit. By looking at the dimensionful factor that multiplies the stress tensor (3.5), and by using the relation R_AdS^3/G_5 = 2N^2/π, one can rewrite the stress tensor accordingly. As expected from a classical bulk solution, it gives the leading-order part, O(N^2), of the CFT stress tensor [5,12]. Even though this stress tensor is non-vanishing at this leading order in N, we will see later that it contains no flux terms, or in other words, there is no transport of energy from the black hole to infinity at this order [25,26,27,28]. As explained for example in [7], for black droplets heat transport occurs as a result of a bulk quantum process (Hawking radiation in the bulk) which appears at order O(N^0).
Finite and covariantly conserved
The Eddington-Finkelstein coordinates, (υ, r), used above to write the boundary metric and the stress tensor, make regularity and finiteness of tensors on the future horizon manifest. Note that the stress tensor (3.5) is finite everywhere. In particular, it is finite at long distances from the boundary black hole, at r >> 2M, and at the future horizon of the boundary black hole, that is, at r = 2M. The (υ, r) coordinates are, nevertheless, not appropriate for treating the past horizon. To treat the past horizon one can, for example, use the (u, r) coordinates instead, where as usual u = t − r* and the tortoise coordinate is r* = r + 2M log[r/2M − 1]. One can easily check that the stress tensor remains finite in the (u, r) coordinates, implying finiteness at the past horizon as well. In short, we have found that at this leading order in N the expectation value of the stress tensor in the Unruh state is finite everywhere at and outside the horizon.
In this regard, we would like to comment that even though the Unruh state is commonly known (in a free field theory) to be divergent on the past horizon, we find here that (at strong coupling and large N) it is finite there at leading order in N. Similar results for the Unruh and Boulware states at large N and strong coupling were found in [12,18,19]. On this point, the authors of [12] expect that the stress tensor will regain the divergence on the past horizon when subleading terms in N are included.
As for covariant conservation, one can check, by direct calculation, that our boundary stress tensor is covariantly conserved with respect to the boundary metric (the Schwarzschild metric), This is, in fact, in accord with what one expects from the dual quantum field theory point of view; the renormalized stress tensor T ab ren is covariantly conserved, since it is derived from an effective action [3]. In [16] the authors start their search for the general form of the renormalized stress tensor (in the Schwarzschild background) by looking for a general solution for the above conservation equation, under the assumption of staticity and spherical symmetry. In our case, however, it is to be noted that we derive the conservation equation instead of assuming it.
The stress tensor at infinity
Far away from the boundary black hole, that is, for r >> 2M , the stress tensor (3.5) reduces to the stress tensor of thermal plasma at temperature T = µ 1/4 If we perform a boundary coordinate transformation to the Schwarzschild coordinates, given by v = t + r * = t + r + 2M log[r/2M − 1], then the above asymptotic stress tensor will take the more familiar form, However, as the parameter M which characterizes the boundary black hole does not appear in the asymptotic expression for the stress tensor we must conclude that the thermal plasma at infinity (dual to the planar black hole in the bulk) is not influenced by the black hole at this leading order O(N 2 ). Nevertheless, as the fall-off of the next-to-leading order components of the stress tensor goes like M/r, one concludes that the interaction between the black hole and the plasma is not as weak as is typical for the black droplet phase. In [12] it was found that the fall-off is 1/r 5 , which clearly implied weak interaction. The reason for this difference is that our black droplet is a critical one, as it touches the planar black hole in the bulk, whereas the droplets studied in [12] are strictly above the Poincare horizon. This means that our black droplet interacts stronger with planar black hole than the droplets of [12], and this is reflected in the boundary theory by having a stronger interaction between the plasma at infinity and the boundary black hole in our case.
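For orientation, the "more familiar form" of an equilibrium stress tensor for a strongly coupled N = 4 SYM plasma at temperature T is, in the standard normalization (which may be absorbed differently into Eq. (4.5) here),

    \langle T^{a}{}_{b}\rangle_{\mathrm{thermal}} \;=\; \frac{\pi^{2} N^{2} T^{4}}{8}\,\mathrm{diag}(-3,\,1,\,1,\,1),

i.e. an energy density ρ = 3π^2 N^2 T^4/8 with isotropic pressure p = ρ/3, which is traceless as required for a conformal plasma.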
Trace anomaly
Renormalized stress tensors for weakly coupled conformal field theories have been extensively studied (see [3]), and they are known to have a trace anomaly. According to [13,17] the trace anomaly in the strongly coupled case takes exactly the same form as in the weakly coupled case. In 4 spacetime dimensions, which is the case of our interest, it takes the following form, (4.6) For our system, we have found that the trace anomaly does not appear up to the order we are working to. That is, our stress tensor (3.5) gives, Our result (4.7) is consistent with eq.(4.6) because up to the order we are working to (we did not compute the back-reacted metric) the boundary metric is pure Schwarzschild (R ab = 0) and therefore eq.(4.6) will give T a a = 0, confirming our result. Non-zero contributions to eq.(4.6) may appear only after computing the back-reacted metric. The corrections to the Schwarzschild metric due to the backreaction will be of order M 2 /R 2 AdS , and so, eq.(4.6) will give T a a = O(M 4 /R 4 AdS ), which means that non-zero contributions to the trace-anomaly, if any, will be of order O(M 4 /R 4 AdS ) at least.
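For completeness, the form of the four-dimensional trace anomaly usually quoted for N = 4 SYM at large N, which appears to be the content of Eq. (4.6), is (up to scheme-dependent total derivatives and an overall sign fixed by curvature conventions)

    \langle T^{a}{}_{a}\rangle \;=\; \frac{N^{2}}{32\pi^{2}}\left(R_{ab}R^{ab}-\tfrac{1}{3}R^{2}\right),

which indeed vanishes on the Ricci-flat Schwarzschild boundary metric, consistent with Eq. (4.7).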
Negative energy density near the horizon
Here we are going to show that the energy density of our system displays a crucial ingredient of stress tensors that describe Hawking radiation. Namely, we are going to show that near the horizon of the black hole there is a region with a negative energy density while near infinity the energy density is positive. This is a manifestation of particle creation. A pair of particles is created near the black hole horizon, one particle with negative energy enters the black hole horizon while its partner with positive energy escapes to infinity. In particular, that is the way a black hole is expected to lose its mass and evaporate; by absorbing negative energies.
Let us look at the energy density of the boundary theory, where we take u^a = (1 − 2M/r)^{−1/2} δ^a_υ, the 4-velocity of a static observer. This gives Eq. (4.9). In Fig. 2 we have plotted the energy density, and there one can see that in the region near the black hole horizon, r ∈ [2M, r_1 ≈ 3M), the energy density is negative, while in the region r ∈ (r_1 ≈ 3M, ∞) the energy density is positive. The figure shows the (dimensionless) energy density plotted with respect to r̃ = r/2M. Note that the energy density is finite at both the horizon and at infinity. Note furthermore the negative energy density in the region r̃ ∈ [1, ≈ 1.5).
No flux at leading order in N
Here we compute the energy-momentum current J^a, defined as in [23] in terms of the stress tensor and the 4-velocity u^a of an observer in the boundary spacetime. Again, let us take u^a = (1 − 2M/r)^{−1/2} δ^a_υ. It is straightforward to check that the stress tensor (3.5) then gives J^r = 0, which shows that there is no radial flux. Note that this is equivalent to showing that T^r_υ = 0. As said before, this result agrees with recent results for similar calculations at strong coupling, see [12,18,19]. In reference [19] the authors gave the black droplet phase the name "jammed phase", since it behaves more like a solid, with no flow, rather than a fluid (black funnels). Yet, one expects that flux terms will appear at subleading orders in N, and they will lead to an exchange of heat between the boundary black hole and the surrounding plasma.
Final Comments
Most works so far on constructing black droplet solutions -and more generally on constructing static black hole solutions in AdS that induce black hole metrics on the boundary -relied on numerical calculations (see [5] and references therein). In the work [8] we provided the first analytical example of a black droplet in AdS_5 spacetime and here, in this work, we provide its dual stress tensor. The construction of the analytical solution of the black droplet is possible due to the small parameter r_0/R_AdS of our system, which allowed for the perturbative construction. The calculation of the holographic stress tensor was straightforward and we obtained a simple and compact stress tensor with the following important features: the stress tensor is static, covariantly conserved, regular on both future and past horizons, it gives a negative energy density near the black hole horizon and a positive energy density at infinity, and it contains no energy transfer terms at this (leading) order. These features agree with expectations and results in similar settings [5,7,12]. It is worth concluding with the comment that this agreement gives further evidence that the analytical solution found in [8] indeed describes a black droplet.
A Details on the calculation of the stress tensor
In this appendix I am going to explain the passage from eq. (3.1) to eq. (3.2). Remember first that ǫ is a formal parameter which is inserted wherever there is a factor r_0/R_AdS; it simply counts the number of powers of r_0/R_AdS and helps in organizing the calculations. Upon multiplying eq. (3.1) by r_0^2 one obtains the corresponding dimensionless expression. Now, as said above, wherever we see a factor r_0/R_AdS we insert ǫ, and so we obtain eq. (3.2). To show that everything is consistent, let us see now how the calculation proceeds. Look at the 3 dimensionless quantities r_0^2 E_ab, r_0(Θ_ab − Θγ_ab), and γ_ab in the above expression and expand them in derivatives as well (as is done to all quantities in such calculations), e.g. r_0^2 E_ab = e_ab(x) + ǫ (z − z_c) e^(1)_ab(x) + ǫ^2 (z − z_c)^2/2 e^(2)_ab(x), with e_ab(x) = r_0^2 E_ab|_{z=z_c}, e^(1)_ab(x) = ∂_z(r_0^2 E_ab)|_{z=z_c}, and e^(2)_ab(x) = ∂_z^2(r_0^2 E_ab)|_{z=z_c}. This shows the way the calculation is organized, and it makes it clear that the addition of the ǫ's in going from eq. (3.1) to eq. (3.2) is indeed consistent.
"year": 2013,
"sha1": "89d95fdc2cc388bc62c999b3b9e5ce6fe1f9c413",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1306.0086",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "89d95fdc2cc388bc62c999b3b9e5ce6fe1f9c413",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Monitoring of antimicrobial usage among adult bovines in dairy herds of Punjab, India: A quantitative analysis of pattern and frequency
The present study aimed to evaluate the antimicrobial usage (AMU) pattern in dairy herds of Punjab, India. The on-farm quantification of AMU in adult bovine animals by the manual collection of empty drug containers (“bin method”) along with the records of the treatment was carried out in 38 dairy farms involving 1010 adult bovines for 1 year from July 2020 to June 2021. The farm owners were asked to record the antibiotic treatments as well as to deposit empty antibiotic packaging/vials into the provided bins placed at the farms. A total of 14 different antibiotic agents in 265 commercial antibiotic products were administered to the dairy herds during the study. A total of 179 (67.55%) administered products contained antimicrobials of “critical importance” as per the World Health Organization (WHO). Mastitis (54.72%), followed by the treatment of fever (19.62%), reproductive problems (15.47%), and diarrhea (3.40%) accounted for the majority of drugs administered in the herds during the study period. The most commonly used antibiotics were enrofloxacin (89.47% herds; 21.51% products), followed by ceftriaxone (50% herds; 12.83% products), amoxicillin (50% herds; 12.83% products), oxytetracycline (55.26% herds; 11.70% products), and procaine penicillin (47.37% herds; 12.83% products). The highest quantity of AMU [in terms of antimicrobial drug use rate (ADUR)] was observed for ceftiofur, followed by ceftriaxone, procaine benzyl penicillin ceftizoxime, enrofloxacin, cefoperazone, amoxicillin and ampicillin. A total of 125 (47.17%) products contained “highest priority critically important antimicrobials” (HPCIA) and 54 (20.37%) products contained “high priority critically important antimicrobials”. In terms of overall number of animal daily doses (nADD), the highest priority critically important antimicrobials (HPCIA) of the WHO such as third-generation cephalosporins and quinolones, respectively accounted for 44.64 and 22.35% of the total antibiotic use in the herds. The bin method offers an alternative to monitoring AMU as a more accessible approach for recording the actual consumption of antimicrobials. The present study, to the best of our knowledge, is the first of its kind to provide an overview of the qualitative and quantitative estimate of AMU among adult bovines from India.
KEYWORDS: antimicrobial usage, bin method, bovines, dairy, milk
1. Introduction
There is a projected rapid rise in the global human population to 9.8 billion by 2050, where almost half of the world's population growth is expected in developing countries (1). The population growth is generating a huge demand for livestock products, particularly for milk in developing countries, which is predicted to increase by 62% by 2050 (2)(3)(4). This increased demand for livestock products has promoted intensive livestock farming with high antimicrobial use for therapeutics, prophylaxis, as well as for growth promotion, which may lead to the emergence of antimicrobial resistance (AMR) (5-7). By 2030, the livestock industry is projected to account for 70% of the total antimicrobial use (AMU) globally, and antibiotic use in the animal husbandry sector of India has been predicted to double by this period (8).
As notified by the World Health Organization (WHO), the judicious use of antimicrobials especially "Critically Important Antimicrobials (CIAs) for human medicine" is crucial for AMR mitigation as well as for public health security (9). The categorization of the antimicrobials into "critically important", "highly important" and "important" in the WHO list of "Critically Important Antimicrobials for human medicine" (WHO CIA list) aims to ensure the prudent use of medically important antimicrobials for humans in the animal husbandry sector (10). In line with the various global action plans on combating AMR, the Government of India have also launched the "National Action Plan on Antimicrobial Resistance" (NAP-AMR) in 2017 (11), with one of the aims to optimize the use of antimicrobials in animals by restricting the use of antibiotics which are critically important for humans. However, there are implementation gaps in the NAP-AMR, as the field-level regulatory measures are still in the initial stages (12).
India stands fifth in terms of veterinary antimicrobial consumption in food animals measured in terms of veterinary antimicrobial sales data (13). However, there could be considerable bias in estimating the AMU based on sales data of veterinary antibiotics as it gives limited information on the number and species of animals treated, the condition of their use or the duration of treatment (14). Thereby, it is crucial to have a proper assessment of antibiotic usage in the animal husbandry sector at the regional as well as national levels, which can serve as a basis for the risk assessment of AMR.
The quantification of AMU at the farm-level represents an important step toward antibiotic stewardship as it provides detailed information on the quantity of antimicrobial use (AMU) at the level of end-user (farmer) and/or prescriber (veterinarian) (15). However, the estimation of quality on-farm AMU data remains challenging in many countries due to various factors such as poor animal health surveillance data, unavailability of treatment records, unauthorized use of antimicrobials, less awareness among farmers etc. (16)(17)(18).
To quantify the AMU at the farm level, various indices have been proposed (19). Some of the widely used metrics used are animal daily dose (ADD), antimicrobial drug use rate (ADUR), and used animal daily dose (UADD) (18,(20)(21)(22). The animal daily dose (ADD) in terms of grams/day for an animal can be obtained by multiplying the recommended "defined daily dose for animals" (DDD kg ) of a drug for its main indication in a specified species by the approximate weight of an adult animal (23,24). The number of animal daily doses (nADD) can be derived by dividing the total amount (mg or g) of medicine used by ADD, which is the product of actual animal weight and the standard dosage (24). The ADUR is equivalent to "daily doses per 1000 animal-days", i.e., "nADD/1000 animal-days", and is considered as a standardized measure for reporting ADD (6,20,25). ADUR is a time-sensitive measure which is not affected by the number of animals, and is useful in comparing AMU among the herds (21).
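As a minimal illustration of these dose-based metrics, the short Python sketch below computes the number of animal daily doses (nADD) and the antimicrobial drug use rate (ADUR, nADD per 1,000 animal-days) for a single antimicrobial from the total mass administered. The DDDkg value, animal weight, herd size and follow-up period in the usage lines are illustrative assumptions, not values from this study.

    def n_add(total_mg_used, ddd_kg, animal_weight_kg):
        """Number of animal daily doses: total mass used / (DDDkg x standard animal weight)."""
        return total_mg_used / (ddd_kg * animal_weight_kg)

    def adur(total_mg_used, ddd_kg, animal_weight_kg, n_animals, n_days):
        """Antimicrobial drug use rate: nADD per 1,000 animal-days at risk."""
        animal_days = n_animals * n_days
        return n_add(total_mg_used, ddd_kg, animal_weight_kg) / animal_days * 1000.0

    # illustrative only: 900 g of a drug with an assumed DDDkg of 10 mg/kg/day,
    # a 500 kg adult bovine, and a herd of 40 animals followed for 365 days
    example_adur = adur(total_mg_used=900000, ddd_kg=10, animal_weight_kg=500,
                        n_animals=40, n_days=365)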
If the exact administered dose and detailed data on the antibiotic application are known, the used daily dose (UDD), which is the dose of a drug administered per kilogram per day, can be calculated (25,26). The UDD can be used to calculate the used animal daily dose (UADD) (in mg/day), which is the product of the animal weight and the UDD (mg/kg/day). The UADD can only be calculated from detailed data on antibiotic administration, and such metrics are considered representative of the actual field-level use of the drug, since the treatment duration, weight and number of diseased animals vary between treatments (15,24). Further, the number of used animal daily doses (nUADD) can be derived by dividing the total amount (mg or g) of medicine used by the UADD, which is the product of the actual animal weight and an estimate of the daily dose used for that antibiotic (22). As the UDD reflects variation from the defined daily doses, the ratio of the UDD to the "defined daily dose for animals" (DDDkg) provides an estimate of the deviation of the dose administered during treatment from the recommended dosage (27).
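To make the arithmetic behind these metrics concrete, the short Python sketch below works through a single hypothetical treatment; all numbers (the DDDkg, weight, amount and duration) are invented for illustration and are not taken from this study or from the ESVAC tables.

# Hypothetical single treatment of one adult cow (all values invented for illustration)
ddd_kg = 10.0          # recommended dose, DDDkg (mg/kg/day)
animal_weight = 400.0  # actual weight at treatment (kg)
amount_used = 36000.0  # total active substance administered over the treatment (mg)
days_treated = 3       # treatment duration (days)

add = ddd_kg * animal_weight                        # ADD = 4,000 mg/day expected for this animal
n_add = amount_used / add                           # nADD = 9 "standard" daily doses
udd = amount_used / (animal_weight * days_treated)  # UDD = 30 mg/kg/day actually given (one treated animal)
uadd = udd * animal_weight                          # UADD = 12,000 mg/day actually given
n_uadd = amount_used / uadd                         # nUADD = 3 used daily doses
dose_ratio = udd / ddd_kg                           # 3.0, i.e., three times the recommended dose

In this invented case the same amount of drug yields 9 nADDs but only 3 nUADDs, which is exactly the kind of divergence between standard-dose and actual-dose metrics discussed below.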
Though there are studies from developed nations on tracking antibiotic usage on farms, there are limited studies from India and other developing nations assessing AMU (20,21,28). Moreover, to date, there is little knowledge about the amount of HPCIAs used in the dairy sector of India (9). On-farm recording of AMU using available methods such as the "bin method" and veterinary prescription records can measure the actual amount of antibiotics used on the farm (21). Earlier studies have found the "bin method" to be a suitable tool for monitoring AMU on farms, with better compliance than veterinary prescription records, particularly when the study period is >6 months (29,30). Further, studies have shown that AMU data from the "bin method" can suitably measure the antimicrobials administered by farmers and are efficient for detecting the practice of unauthorized use of antimicrobials (20,21,31).
The present study targets the state of Punjab in India, which has the highest per capita milk availability and is one of the leading milk-producing states in the country (32). The continuous rise in the demand for milk within Punjab as well as from neighboring states generates demand-associated production pressure among dairy farmers in the region. This has led to a shift from household dairy herds to commercial, intensive dairy farming. Therefore, it is important to assess the AMU in dairy herds of Punjab to obtain a picture of the quantity, frequency of administration, and types of antibiotics used in dairy animal production in the state. In light of this background, the objective of the present study was to evaluate the pattern and frequency of AMU among adult bovine animals using the "bin method" along with treatment history records from Punjab dairy farms over a duration of 12 months.
Materials and methods
Herd enrolment
In the present study, forty-five farm owners were contacted through farm visits, and thirty-nine agreed to participate. The farms were selected by convenience and purposive sampling in order to include farms from different geographical regions of Punjab. The farm owners were informed about the purpose of the study and provided their consent for the use of the antimicrobial usage (AMU) data of their farms. The AMU data from the dairy farms were collected monthly from July 2020 until June 2021, i.e., for a period of 1 year. In total, 39 dairy farms [20 household-level herds (5 to 20 animals, mainly managed with manual labor by family members) and 19 commercial farms (more than 20 animals with semi- or fully-mechanized farm operations)] were selected to monitor the AMU pattern. However, one commercial farm withdrew from the study after 3 months of enrolment, so the study was completed on 38 farms. The total adult bovine population in the selected 38 herds comprised 1,010 animals, including both cattle (n = 519) and buffaloes (n = 491). Heifers and calves were not included in the calculation of antibiotic mass as they represented only a minor share of the total AMU in the targeted dairy herds; moreover, this population changes frequently in the herds due to regular sale or purchase (33).
AMU data collection
The AMU data of the targeted herds were collected by placing forty-liter receptacles with round swing tops on the selected farms. The receptacles were placed at a convenient location on each farm, and the farm owners as well as farm workers were instructed to place the empty containers of all drugs used for treating animals into these receptacles. In the study region, the concerned veterinarians, para-veterinarians and unauthorized practitioners were also requested to deposit empty drug packets in the bins. In case of an incomplete or one-time use of an antimicrobial (where the vial was not emptied), the treatment prescriptions were requested to be placed in the receptacles instead.
The receptacles on the participating farms were emptied at monthly intervals, and data were recorded on the product name, volume or weight used, concentration of the product, and the number of drug vials deposited in the receptacle. Information about the number and category of animals treated (species, age, and approximate weight), the number of days treated, the route of administration, the person administering the medicines, and the reasons for treatment was obtained from the farm owners at monthly intervals along with the empty vials of the used medicines.
Data analysis
All the data were entered in an Excel spreadsheet (Microsoft Corporation, Redmond, Washington, USA, 2016). The contents of the bins were quantified by calculating the total amount of antibiotics administered as the weight (mg) of the active substance used in the animals. The frequency of the different antibiotics (active substances) used was calculated by counting the empty vials deposited in the bins along with the treatment history records obtained from the farm owners and/or farm workers at monthly intervals. The metrics used for AMU quantification were the animal daily dose (ADD), the antimicrobial drug use rate (ADUR) and the used animal daily dose (UADD).
The animal daily dose (ADD) refers to the dosage in g/day for an animal and was obtained by multiplying the recommended average daily dose of the active pharmaceutical ingredient (DDDkg) by the approximate weight of an adult animal (20,23,24). The defined daily dose (DDDkg) designates the mg/kg/day dosage obtained from the DDDvet calculations, which are the recommended values for each target species provided by the European Surveillance of Veterinary Antimicrobial Consumption (ESVAC)/European Medicines Agency (EMA) (34,35). For products without prescribed DDDvet measures, the on-label recommended dosage was used (36). Drugs with multiple antibiotic compounds, such as combinations of sulfonamides with trimethoprim, amoxicillin-clavulanic acid/sulbactam/tazobactam, benzylpenicillin-benzathine, and benzylpenicillin-procaine, were interpreted as a single active substance (25). For trimethoprim-sulphonamide combinations, the DDDvet was calculated for the minor substance, i.e., trimethoprim (37).
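As an illustration of how such DDDkg assignments, including the combination-product rule, might be organized in code, a minimal Python sketch is given below; the substance names and numeric values are placeholders only and do not reproduce the ESVAC/EMA DDDvet figures used in the study.

# Placeholder DDDkg values (mg/kg/day); real values come from the ESVAC/EMA DDDvet
# tables or, where missing, from the on-label recommended dosage.
ddd_kg_table = {
    "amoxicillin": 10.0,
    "trimethoprim": 4.0,
    "benzylpenicillin": 10.0,
}

# Combination products are counted as a single active substance; for
# trimethoprim-sulphonamide combinations the DDDvet of the minor substance
# (trimethoprim) is used.
combination_rule = {
    "trimethoprim-sulphonamide": "trimethoprim",
    "amoxicillin-clavulanic acid": "amoxicillin",
    "benzylpenicillin-procaine": "benzylpenicillin",
}

def lookup_ddd_kg(product):
    """Return the DDDkg (mg/kg/day) used for a product, applying the combination rule."""
    substance = combination_rule.get(product, product)
    return ddd_kg_table[substance]

print(lookup_ddd_kg("trimethoprim-sulphonamide"))  # 4.0, the placeholder trimethoprim DDDkg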
The ADD was calculated for each antibiotic administration by multiplying the recommended average daily dosage (DDDkg) of the antibiotic by the actual weight (kg) of the treated animal at exposure (24). As country-specific standard weights were not available and animal weights at treatment might differ substantially, the weight at treatment recorded for each animal in the present study was used for estimating the ADD. Parenteral antibiotic formulations were calculated per kilogram using the recorded individual weights of the animals at exposure. In the study, the median body weight of the adult dairy cow was 400 kg (mean 421 kg, min 300 kg, max 520 kg) and that of the adult buffalo was 500 kg (mean 525 kg, min 425 kg, max 650 kg). For intramammary products, the ADD of antibiotic "A" was calculated as ADD = MG_DDDA × U_DDDA × F_DDDA, where MG_DDDA is the dose (mg or IU) contained in a milliliter or an intramammary syringe of compound "A", U_DDDA is the number of milliliters used in each administration, and F_DDDA is the number of times per day the compound is administered (38).
The amount of antibiotic administered to the animal during each treatment, as recorded from the collection bin, was then divided by the calculated ADD (the product of the expected dosage and the average animal weight) to yield the nADD. These calculations were performed individually for each observation, and the nADD at the herd level was estimated for each antibiotic agent by adding all drug-specific nADDs (21). Further, the herd-level ADUR of various antibiotic groups was measured as the number of ADD/1,000 animal-days using the formula described below (20,21).
ADUR (ADD/1,000 animal-days) = [Active ingredient used in the study period (g) × 1,000] / [ADD × Number of days in the study period × Number of animals at risk]

The amount of active substance actually administered to the animal was calculated using the metric of UDD in mg/kg/day (25). The UDD for each antibiotic during each treatment was calculated separately for each data entry by dividing the actual amount of antibiotic compound administered (mg) by the number of treated animals times the average actual weight of the treated animals and the treatment duration in days (39, 40). The formula used is:

UDD (mg/kg/day) = Weight of active substance (mg) / [No. of treated animals × Average weight (kg) × Treatment duration (days)]

The used animal daily dose (UADD) was obtained as the product of the animal weight and the used daily dose (UDD). Further, the number of used animal daily doses (nUADD) of each antibiotic was calculated as described by Flor et al. (22) by dividing the amount of each antibiotic used by the UADD, which is the product of the UDD and the animal weight at treatment (22). Similar to the nADD, the nUADD at the herd level was estimated for each antibiotic agent by adding all drug-specific nUADDs. Moreover, the UDD/DDDkg ratios were also calculated to quantify antibiotic consumption and the correctness of the administered dosage, with a ratio between 0.8 and 1.2 considered correct dosing; values <0.8 and >1.2 were interpreted as under-dosing and overdosing, respectively (25).
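As a purely illustrative sketch of how these herd-level quantities relate to one another, the following Python fragment computes the nADD, nUADD, the UDD/DDDkg dose classification and a pooled ADUR for two invented treatment records; the field names, DDDkg values and herd figures are hypothetical and are not the data analyzed in this study.

# Each record describes one antibiotic treatment event (illustrative values only).
records = [
    {"substance": "enrofloxacin", "amount_mg": 10000.0, "ddd_kg": 2.5,
     "weight_kg": 400.0, "n_animals": 1, "days": 5},
    {"substance": "ceftriaxone", "amount_mg": 30000.0, "ddd_kg": 10.0,
     "weight_kg": 500.0, "n_animals": 1, "days": 3},
]

herd_size = 30    # animals at risk in the hypothetical herd
study_days = 365  # length of the monitoring period (days)

n_add_total = 0.0
n_uadd_total = 0.0
for r in records:
    add = r["ddd_kg"] * r["weight_kg"]                                     # ADD (mg/day)
    udd = r["amount_mg"] / (r["n_animals"] * r["weight_kg"] * r["days"])   # UDD (mg/kg/day)
    uadd = udd * r["weight_kg"]                                            # UADD (mg/day)
    n_add_total += r["amount_mg"] / add                                    # nADD
    n_uadd_total += r["amount_mg"] / uadd                                  # nUADD
    ratio = udd / r["ddd_kg"]                                              # UDD/DDDkg
    if 0.8 <= ratio <= 1.2:
        dosing = "correct dosing"
    elif ratio < 0.8:
        dosing = "under-dose"
    else:
        dosing = "overdose"
    print(r["substance"], round(ratio, 2), dosing)

# ADUR expressed per 1,000 animal-days (pooled across substances here;
# the study reports it per antibiotic agent).
adur = n_add_total * 1000 / (herd_size * study_days)
print("herd nADD:", round(n_add_total, 1),
      "nUADD:", round(n_uadd_total, 1),
      "ADUR (ADD/1,000 animal-days):", round(adur, 2))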
Descriptive statistics, an unpaired t-test, and graphical illustrations were produced using Microsoft Excel 2016.
Results

Herd characteristics
The majority of the farms enrolled in the study (84.21%, n = 32) were mixed-species dairy farms comprising both cattle and buffaloes, whereas 5 were exclusively cattle farms and one was a buffalo farm. Further, 52.63% (n = 20) of the farms were household-level herds comprising <20 animals, whereas 47.37% (n = 18) were commercial farms comprising more than 20 animals. In the selected herds, a total of 208 animal health-related cases requiring antibiotic use were reported over the 1-year study, of which mastitis was the most frequently reported disease condition (50.48%, n = 105), followed by fever (20.67%, n = 43), reproductive problems (17.31%, n = 36), and diarrhea (4.32%, n = 9). Around 15 miscellaneous conditions were reported, such as indigestion, inflammation, injury, skin infection, abscess, teat obstruction, and edema, each accounting for a negligible percentage of the total cases. Concerning the personnel administering regular treatment on the farms, para-veterinarians were involved in treatment in 34.21% of herds (n = 13), followed by "unauthorized practitioners" (frequently called "private doctors" in the villages), who provided treatment in 26.32% of the herds (n = 10). The farm owners themselves administered treatment in 23.68% of the herds (n = 9), while 15.79% of herds (n = 6) were treated by veterinarians. However, dairy farmers consulted veterinarians for the treatment of all complicated cases (e.g., dystocia, recurring mastitis, severe injuries, fractures) and for treatment failures in cases attended by para-veterinarians, unauthorized practitioners or the farmers themselves.
Description of antimicrobial active ingredients
In the selected farms, a total of 265 commercial antibiotic products containing 14 different antibiotics belonging to 9 groups were identified, the majority of which were injectable preparations. Of the total antibiotic products administered, the highest number were administered for mastitis (54.72%), followed by the treatment of fever (19.62%), reproductive problems (15.47%), and diarrhea (3.40%). A total of 18 (6.79%) products were administered for various miscellaneous conditions such as skin infection, abscess, indigestion, inflammation, injury, teat obstruction, and edema.
Quantitative estimates of antimicrobial usage (AMU)
The AMU, described in terms of nADD, ADUR and nUADD, along with the UDD/DDDkg ratio of the antibiotics used in common disease conditions on the selected farms, is provided in Tables 3 and 4. On grouping antibiotic classes according to their prioritization by the WHO, penicillins, followed by third-generation cephalosporins (which belong to the "HPCIA" category) and aminopenicillins (grouped under "high priority"), were used in the highest quantities in the herds (Figure 2). In the case of mastitis, penicillin (25.56%), followed by third-generation cephalosporins (22.02%) and aminopenicillins (20.01%), were the most used in herds in terms of nADD. In the case of reproductive problems, 25.78% of the antibiotics used comprised first-generation cephalosporins, followed by aminopenicillins (20.13%), quinolones (15.28%), and third-generation cephalosporins (14.68%). In the case of fever, oxytetracycline (34.44%), followed by enrofloxacin (20.79%) and aminopenicillins (12.15%), accounted for the largest share of nADDs. Sulphonamides (96.34%) were the most used in cases of diarrhea. In terms of overall nADD, the largest amounts of antibiotic used in the herds were ceftriaxone, followed by enrofloxacin, ceftiofur and penicillin, accounting for 22.08, 21.91, 12.70, and 12.07% of the total amount of antibiotics used, respectively (Figure 2). On comparing the household-level and commercial farms, there was a significant difference in antibiotic usage in terms of nADD (p < 0.007). The nADD used in household-level and commercial farms, along with different conditions in dairy herds, is depicted in Figure 3 and Supplementary Figure 1. In the household-level herds, enrofloxacin, followed by ceftiofur and amoxicillin, were the most used antibiotics, and in commercial dairy farms, enrofloxacin, followed by ceftriaxone, procaine penicillin, and amoxicillin, were the most used antibiotics in terms of nADD. The nADD was highest for enrofloxacin in both household-level herds and commercial dairy farms, with an overall nADD of 33.92 for enrofloxacin in household-level herds and 58.52 in commercial dairy farms during the study period.
The antibiotic use by quantity, measured as ADUR in terms of nADD per 1,000 animal-days, was highest for ceftiofur, followed by ceftriaxone, procaine benzylpenicillin, ceftizoxime, enrofloxacin, and cefoperazone (Table 4). In the present study, the lowest ADUR was reported for gentamicin, sulphonamides, and metronidazole. The highest median UDD (mg/kg/day) was observed for cefoperazone, followed equally by ceftriaxone, ceftizoxime, and amoxicillin. In terms of UADD, the highest amount of use was recorded for procaine penicillin (24.69%), followed by enrofloxacin (16.50%) and ceftriaxone (14.77%) (Table 3, Figure 2). In terms of overall nUADD, HPCIAs such as third-generation cephalosporins and quinolones accounted for 25.27% and 16.86% of the total antibiotic use in the herds, respectively (Figure 2).
Discussion
Robust monitoring systems for data collection and for understanding antimicrobial usage and consumption are crucial for addressing AMR in the animal husbandry sector as well as in humans, since many antibiotic classes are shared between the two sectors (14, 34). In concordance with earlier studies (40,42,43), the present study analyzed annual AMU data on dairy farms using bins for the collection of empty drug containers along with treatment histories collected directly from the farm owners. In developing countries like India, antibiotic doses are often not administered in adherence to the standard pharmacopoeia recommendations for each target species; therefore, on-farm quantification of antimicrobials can represent a more accurate measure for quantifying AMU (44).
Earlier studies have reported that the "bin method" of AMU data collection has good to excellent reliability for injectable and intramammary products and is potentially preferable in countries like India, where obtaining veterinary sales data is difficult (45). One advantage of the bin dataset is that "over-reporting" of AMU is less likely, as only used empty vials from the bins are recorded (46). However, erroneous over-reporting may occur if a subset of the animal population in which antibiotics are used is not taken into account, and the method is labor-intensive and resource-demanding, making its routine application difficult. Farmers found the bin method convenient; however, under-reporting may occur if researchers do not periodically collect the data from the farm or fail to motivate the farmers to deposit the empty vials, or if a new worker who is unaware of the ongoing study joins the farm during the study (28,45). Theoretically, treatment records are considered a precise method of measuring AMU in well-managed herds; however, the practical feasibility of this method requires constant commitment and effort from the people associated with the dairy farm, otherwise it may result in incomplete data recording (47). Hence, the present study combined the bin method with treatment histories collected from farm workers to strengthen the results. The present study points toward the high use of critically important antimicrobials (CIAs) in animal production in the study region, where around 67.55% (179/265) of the administered products contained antimicrobials of "critical importance", particularly for diseases such as mastitis, reproductive problems and fever in bovines. In accordance with an earlier study by Firth et al.
(48), in which the use of "HPCIA" in the treatment of mastitis was reported to vary from 10-80%, the present study also found that 52.42% of the total drugs administered for mastitis belonged to the "HPCIA" category. Similarly, a study from Germany reported that more than 32% of the antibiotics used during lactation were "HPCIA" (49). In the present study, cephalosporins, particularly third-generation cephalosporins, made up 29.66% of antibiotic use in mastitis, 29.27% of use in reproductive problems, and 7.69% of use in fever. Similarly, in Austrian dairy farms, 3rd- and 4th-generation cephalosporins were most frequently used, particularly for the treatment of mastitis and foot diseases (50), and 3rd-generation cephalosporins accounted for 75% of intramammary antimicrobials used in Wisconsin dairy farms during 2016-2017 (41). In a study on dairy farms in the United Kingdom, the use of highest priority, critically important antimicrobials (fluoroquinolones, third- and fourth-generation cephalosporins and colistin) was found to be predominant (45). An earlier study on veterinarians from India also reported high usage of HPCIAs such as quinolones (76.8%) and third-generation cephalosporins (47.8%) in dairy herds (51). Similarly, the present study revealed higher use of quinolones and third-generation cephalosporins, both in terms of frequency and quantity of use. The high use of quinolones in India could have contributed to the increased resistance toward fluoroquinolones and cephalosporins among Gram-negative and Gram-positive bacteria in the country (52). Such AMU data at the regional level help to identify trends in antimicrobial usage and serve to inform health policy makers in initiating evidence-based responses to this public health issue. When quantifying antimicrobial use in animals, the choice of metrics and denominators is complex; numerous weight-based and dose-based metrics are widely used (19), and no single method is considered ideal in all situations (53). The present study employed AMU quantification based on different metrics, namely the animal daily dose (ADD and nADD), the antimicrobial drug use rate (ADUR), and the used animal daily dose (UADD and nUADD). In line with earlier studies in which AMU quantification from the same dataset varied depending on the metric calculated, the present study also found variation in the AMU quantification of the same herd data based on the standard dosage and the actual dosage, in terms of nADD and nUADD, respectively (24). Similar to the present study, deviations between the UDD and DDDkg have been reported in previous studies (14,22). Variation in the estimates of AMU can arise from under- or overdosing by the treatment provider, or from using standardized weights, since the animal weight at the time of treatment may differ from the standardized weight (22). In the present study, the daily dose metrics, the nADD and the nUADD, were calculated based on the specific (estimated) live weight of the animal at treatment instead of standardized weights, which may be country- and livestock sector-specific; AMU calculations using more specific weights for the animals at exposure have been found to be more precise (15,24,54).
The state of Punjab in India has a primarily agrarian economy, with dairying as an important source of income for farmers (32). With the increase in commercial dairy herds, an increase in antibiotic consumption is also expected in the region. The present study reports higher use of antibiotics in commercial farms, particularly antibiotics such as enrofloxacin, third-generation cephalosporins (ceftriaxone, cefoperazone, ceftizoxime), the tetracycline oxytetracycline, and benzylpenicillin. This increase in antibiotic use with the scale of operation can be attributed to the direct marketing of veterinary antibiotics to farm owners and the stocking of antibiotics on farm premises, particularly in the case of commercial farms (44,55). There is also a socio-economic basis that encourages irrational antibiotic use by farmers: easy access to antibiotics and the drive for profits and fewer losses have increased non-prescription antibiotic consumption, often at the expense of good husbandry practices (56).
In the present study, the farmers reported that in many cases the antibiotics were administered by unauthorized personnel such as para-veterinarians, unauthorized practitioners and the farmers themselves. Earlier studies have also reported that antibiotic use in the dairy sector of India is dominated by para-veterinarians, unauthorized practitioners, and the dairy farmers themselves (51,57). The predominance of informal practitioners has been widely reported in the health systems of low- and middle-income countries, including India (58). In the present study, only 15.79% of the herds were primarily treated by veterinarians, highlighting the need to strengthen veterinary services in the country. The treatment of animals by unauthorized personnel (i.e., para-veterinarians, unauthorized practitioners and farmers themselves) in the targeted farms could explain the overdosing of certain antibiotics in the present study, particularly the higher-generation cephalosporins such as ceftizoxime and ceftiofur, and the underdosing of many antibiotics, such as penicillin, oxytetracycline and gentamicin, which warrants immediate action to promote judicious antibiotic usage. Apart from this, a multitude of other possible factors, such as poor farm biosecurity and inadequate infection control practices, along with a lack of compliance with regulatory frameworks, could have resulted in the indiscriminate use of antibiotics in the targeted dairy herds. The quantification of antibiotic usage is considered crucial for assessing animal husbandry practices and the effectiveness of ongoing stewardship programs. Since metrics based on actual dosage require measurement of the actual dose rate, the treatment duration and the weight of animals at exposure, as was available in this study, they are costly and time-consuming and may not always be feasible at the national level (23,59). However, detailed recording of AMU data, as in the present study, is recommended on sentinel farms, when feasible, to complement national AMU data (60). The data of the present study can be further used to determine the associations between antibiotic usage and associated resistance, which can inform necessary improvements in the existing AMU/AMR surveillance programs. Such region-specific studies can guide policymakers in the formulation of evidence-based stewardship and awareness programs among the stakeholders (e.g., veterinarians, para-veterinarians, farmers).
Limitations
In most of the herds in the present study, antibiotics were administered by unauthorized personnel rather than veterinarians, which might have led to inappropriate treatment durations in many cases and thereby to under- or overestimation of the used daily doses compared with herds treated by veterinarians. Further, although we tried to select farms from various regions of Punjab, the inherent limitations of convenience and purposive sampling could have introduced selection bias into the study. Moreover, antibiotic usage was calculated for the herds considering only adult bovine animals, disregarding the contribution of other age categories in those herds, even though they are present in only minor proportions. In this context, further studies are needed to determine the contribution of calves and heifers to antimicrobial use in India.
Conclusion
The present study relied on farm-level recording of antimicrobial usage (AMU) using various metrics, i.e., the animal daily dose (ADD), the antimicrobial drug use rate (ADUR), and the used animal daily dose (UADD). Around 67.55% of the products administered contained antimicrobials of "critical importance" as per the WHO list, and of those, 47.17% of products contained "highest priority critically important antimicrobials" (HPCIA). The study also reports deviations of the used daily doses from the recommended dosages of various antibiotics. These findings highlight the widespread use of HPCIA in treatment in the animal husbandry sector as well as the widely prevalent practice of animal treatment by unauthorized personnel, which necessitates prompt action from the government as well as the various stakeholders to promote prudent antibiotic usage in animal husbandry. Moreover, such epidemiological studies at a large scale are recommended to generate evidence-based data on AMU and related trends, which may provide insights for tailor-made strategies to curb AMR at the regional and national levels.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary material, further inquiries can be directed to the corresponding author.
Ethics statement
The individual permission of the farm owners was obtained for their voluntary participation in the study. The identity of the participants was kept confidential throughout the study. | 2023-03-30T13:29:57.685Z | 2023-03-30T00:00:00.000 | {
"year": 2023,
"sha1": "01226e459f028c49a465d224335cbcf4f76ad5ef",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "01226e459f028c49a465d224335cbcf4f76ad5ef",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
81852954 | pes2o/s2orc | v3-fos-license | Developing and preliminary evaluation of a general academic course on traffic health and safety
Homayoun Sadeghi-Bazargani1 ID , Mohammad-Hossein Somi1, Mina Golestani1 ID , Mousa Amiri2, Saeideh Ghaffarifar3 ID , Saeid Aslanabadi1 ID , Ali Taghizadieh1 ID , Shahryar Behzad-Basirat2, Naser Mikaili1, Seyed Abdolreza Mortazavi-Tabatabaei4, Saeede Sheikhi1* ID , Hamid Soori4 ID , Davoud Khorasani-Zavareh4 ID , Soheil Saadat5 ID , Mashyaneh Haddadi5, Mohammad Asghari Jafarabadi6 ID , Hamid Allahverdipour6 ID , Javad Babaeie7, Seyed Kazem Shakouri8 ID , Ali Meshkini8 ID , Fatemeh Bakhtari6 ID , Zakieh Piri7, Leila Jahangiry6 ID , Parvin Sarbakhsh6 ID , Kavous Shahsavari Nia8 ID , Gholam Hossein Safari6 ID , Mostafa Farahbakhsh1 ID , Amir Mohammad Navali8, Abdolhassan Kazemi6 ID , Saleh Heidarian1, Forouzan Rezapur-Shahkolai9 ID , Reza Hekmatshoar10 ID , Majid Fallahi10 ID , Hasan Ghodsi11, Maryam Mazaheri12 ID , Behzad Jafarinia12 ID , Khalil Pourebrahim1, Sajjad Alihemmati1, Ali Ahmadian13, Ahad Safayi1, Farhad Torbati6, Malek Ghorban Niyati7, Mirbahador Yazdani1, Nooshin Hushian1 ID , Fahimeh Bakhtyari1 ID
Introduction
Based on World Health Organization reports, over 1.25 million people die annually in traffic accidents worldwide. Traffic accidents are the ninth leading cause of death worldwide, with most victims aged 15 to 29. Overall, 90% of traffic accidents happen in low- and middle-income countries, which comprise 82% of the global population and account for half of the world's vehicles.1 The Iranian Legal Medicine Organization (ILMO) reported that over 14 000 deaths and about 290 000 road traffic injuries (RTIs) occurred in 2016 in Iran.2 RTIs not only impose heavy financial burdens on national and global economies but also affect families. Families that lose their breadwinners in RTCs, or whose breadwinner becomes disabled, gradually fall into poverty.3 Therefore, concentrating on RTCs and training in safety promotion methods seems important. As studies suggest, RTIs originate from three main intervening factors: human, environmental and vehicular. Human factors include age, sex, skill, sleepiness, driving focus, experience and drug influences; vehicular factors include design, production and maintenance; and road/environmental factors include road geometric features, traffic control tools, traffic signs, road friction, weather and visibility.4 Human errors play an important role in RTIs. Reports indicate that only 1% of RTIs are due to "technical problems of vehicles" or "road safety problems"; the remaining 99% of accidents happen because of human error.5,6 According to a report by the Iranian police, several highlighted factors, among which human errors are the most important, have been responsible for over 500 000 deaths during the past 20 years in Iran. These highlighted factors included use of cell phones, eating and drinking while driving, driving at unsafe speeds, lack of concentration, unsafe following distance, sudden changes in lanes, running red lights, insufficient experience, driving after taking medications, and illegal passing. Meanwhile, slippery and freezing road surfaces along with rain or snow are among the most important environmental factors.7 The mortality rate of RTIs in Iran is 30 per 100 000 population. This rate, compared to the world statistics (23 per 100 000), ranks Iran first in the world.8
Objectives
In Meeting 762 of the Supreme Cultural Revolution Council (SCRC), headed by President Hassan Rouhani, a preparation plan for higher education in health sciences was presented. After the plan was presented to the steering committee for a comprehensive scientific map of the country and some required reformulations were made, it was approved by the SCRC. According to the preparation plan for higher education in health sciences, all medical universities of the country and affiliated centers were clustered into 10 zones based on population, facilities and human indexes. As a result, some universities were classified as focal points for international missions and others for national missions.9 Developing traffic knowledge in the community was assigned to Zone #2. Initially, the requirements were announced by the Ministry of Health and Medical Education (MOHME) for Zone #2, with Tabriz University of Medical Sciences (TUOMS) as its focal point; the medical universities of Urmia, Ardabil and Maragheh were included as member universities, all of which met the criteria needed to undertake this responsibility. Finally, a memorandum of understanding (MoU) was signed between the National Road Traffic Knowledge Development Trustee (NRTKDT) and MOHME, and the center formally began its activities. In a short period, the team conducted considerable educational and research plans with the participation and cooperation of Iranian medical universities and other related organizations. Accordingly, through developing a strategic plan, the NRTKDT continues to seek to achieve its important goals and objectives as its main priorities. The goals are promoting knowledge and awareness about RTIs nationwide and developing national and international ties for the development of knowledge about RTIs. The objectives include designing and holding educational programs for target groups, transferring, translating and implementing knowledge about RTIs, and developing the necessary infrastructure to train national and regional professional human resources.10 To reach the first objective, a one-credit educational course on safety and traffic knowledge was designed to be taught as a compulsory course at the university level.
Materials and Methods
The one-credit course on safety and traffic will be presented as a general and obligatory course to all students once during their college years. The program will pass through a trial period of 1-3 years in 20 universities, and the final assessment of its implementation will be forwarded to the SCRC for further development. The design and implementation process of this course is as follows (Table 1): after conducting a needs assessment, an educational curriculum was designed, reviewed and finalized by national experts. The main objective of developing this curriculum was to inform the affected groups, especially university students, who are considered the intellectuals and potential managers of the country, and the public in general. Moreover, the ultimate goal of designing this course is to decrease the number of RTIs. The educational objectives of the curriculum include: 1. Teaching the importance, epidemiology and status of traffic accidents and safety in Iran and the world; 2. Identifying risk factors of traffic accidents and injuries; 3. Understanding traffic safety norms and basic traffic laws at national and international levels; 4. Teaching safety measures and the requisite traffic-related skills to students. Other elements of the curriculum include a teaching method, a student assessment method, teaching resources and learning opportunities. Various universities were invited to indicate whether they were ready to participate in the program by presenting the course to their students. Meanwhile, they were requested to nominate teachers (Table 2) and students (Table 3) who were willing to teach and to take the course, respectively. Since the scope of the program was very broad, it was difficult to find a sufficient number of traffic experts to teach the course. Therefore, it was decided to design a teacher training program for the trainers. Information on teachers and students is presented in Table 4. The teachers were trained in two 5-day courses, and those who successfully passed received a certificate enabling them to teach the traffic safety course in universities. The traffic safety course was then administered as a pilot in four phases. In the first phase, the course was presented to three classes at the BSc and MSc levels in the Faculty of Management and Medical Informatics as well as the Health Faculty of TUOMS, Tabriz, Iran. In the second phase, the course was presented in the Faculty of Management and Medical Informatics, the Health Faculty and some other faculties of TUOMS. The target group for this phase consisted of BSc, MSc and PhD students. A pretest was given to students to assess their level of knowledge and establish a baseline for determining the course effectiveness. The students were divided into five groups, and a workshop-like class was held for one week in May 2017. For each group, at least five teachers from different fields of study taught the course. At the end of the course, a posttest was given to the students. A survey questionnaire was also filled out by the student participants, and the results will be presented in the near future. The final exam, with similar questions for all groups, was designed by an expert in traffic knowledge development, and the answer sheets were graded by the teachers. The content validity of the measure was assessed and confirmed by a panel of experts. The internal consistency of the measure was assessed and confirmed by Cronbach's alpha (alpha > 0.7). In the third phase, the course was presented to nine other medical universities. This phase is currently in process.
In the fourth phase, the course will be presented at a number of non-medical universities, in addition to all medical universities in the country. Finally, the comments of all national authorities and experts in the MOHME and the Ministry of Science will be collected and the results forwarded to the SCRC. Upon the council's confirmation, the course will also be presented at all Iranian universities.
Data analysis
Statistical analysis was done using SPSS software (version 17, SPSS Inc., Chicago, IL, USA). The data are presented as frequencies (percentages) for categorical variables.
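The analyses themselves were run in SPSS; purely as an illustration of the computations involved in the item-frequency summaries and the Cronbach's alpha check mentioned above, a minimal Python sketch with invented item responses (not the study data) is shown below.

from statistics import pvariance

# Invented binary item responses (1 = correct, 0 = incorrect) for 5 students x 4 items
pre_test = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
]

# Percentage of incorrect answers per item (the kind of frequency reported for the pre-test)
n_students = len(pre_test)
incorrect_pct = [100 * sum(1 - s[i] for s in pre_test) / n_students
                 for i in range(len(pre_test[0]))]
print("incorrect (%) per item:", incorrect_pct)

def cronbach_alpha(scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(scores[0])
    item_vars = [pvariance([row[i] for row in scores]) for i in range(k)]
    total_var = pvariance([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

print("Cronbach's alpha:", round(cronbach_alpha(pre_test), 2))  # about 0.74 for these invented data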
Results
The pre-test results (Table 4) show that the students' knowledge of items including ways of preventing accidents, safe following distances at different speeds, and road safety norms was lower than for other items: almost 90% of the students answered these items incorrectly at the pre-test. Almost 80% of students provided incorrect answers to items on car safety norms, the role of bumpers in accidents, crossing over street railways, pro-environment driving, and traffic laws. Surveys also revealed that two items, "safety of pedestrians" and "first aid in RTCs", were considered the most useful and applicable, followed by "road safety norms", "car safety norms", and "traffic rules".
Discussion
Iran is ranked as the fifth country worldwide in fatal RTIs.
Based on a report by the ILMO, in 2016 almost 14 000 people were killed and 290 000 were injured due to RTIs in Iran. According to the World Health Organization, the global number of fatal RTIs is estimated to be 1.25 million people.2 RTIs are the ninth leading cause of mortality worldwide, with most victims aged 15 to 29. As mentioned earlier, 90% of these deaths happen in low- to middle-income countries, which account for 82% of the global population and 50% of global vehicles.3 This indicates that these countries need to devote more time and energy to developing knowledge about traffic safety, and Iran, with such a high rate of RTIs, is among them. One of the responsibilities of public education systems is to promote public knowledge in different fields.
Developing public knowledge on general issues, along with conducting specialized research studies, is very important at institutes of higher education. General courses in higher education curricula are a formal part of a BSc degree, and students in all fields need to pass them.
At present, there are 23 credits of general courses at the BSc level in Iranian universities.11 Considering the importance of RTIs and people's role in traffic issues, it seems necessary to provide a university course on traffic safety education. General courses are not limited to one specific group of students; that is, all students at different universities are required to pass them.
Hence, including this course among the general courses in universities could promote traffic knowledge among families and the community.
One might say that people can learn traffic-related issues over time through normal social interactions or while obtaining a driver's license. But it should also be noted that, under the Iranian system of obtaining a driver's license, rules are not strictly followed and education is not holistic. In addition, all age groups have to learn traffic rules, because even those who are not going to drive a car encounter cars as pedestrians. Therefore, they should be informed about RTIs and other related subjects, and universities are one of the main places for accomplishing this aim. The results of the pretests in the pilot for the university students confirm this as well.
Unlike in countries such as Sweden and Russia, there are currently no traffic-related courses in Iranian universities. For example, a "Traffic Epidemiology" course is a general course in Swedish universities. There are two main goals for developing traffic knowledge: 1. Establishing health and traffic knowledge programs at the MSc and PhD levels; 2. Teaching a course on traffic and safety as a one-credit general and compulsory course in all fields of study. The aim of this course is to promote the knowledge of individuals in the community. Consequently, gaining this knowledge can increase people's demands on the responsible authorities to improve the quality of traffic-related settings. Currently, the course is being implemented as a pilot study at ten universities of medical sciences nationwide, and after its approval by the SCRC, it will be taught at all universities. Based on the results of the surveys, pre-tests, and comments of students who took this course, topics such as pedestrian safety, pro-environment driving and safety norms were the items with the highest priority. Accordingly, it seems that some changes could be made to the current curriculum; for instance, new material about RTIs could be added and other topics could be consolidated. According to student feedback, this course was more applicable than other general courses. This satisfaction may prove to be an advantage for expanding the course across universities.
Conclusion
Considering the importance of RTIs and the human element in traffic-related issues, it seems necessary to provide university courses on traffic safety education.
Table 1. Design and implementation phases of the traffic safety course:
1. Developing the unique educational and national textbook called "Safety and Traffic"
2. Sending announcements to target and interested universities
3. Holding the first teacher training course
4. Running the first phase of the pilot study by holding and presenting the course in three classes of the Faculty of Management and Medical Informatics and the Faculty of Health of TUOMS in the fall semester of 2016-2017
5. Holding the national teacher training course
6. Assessment and issuing of certificates
7. Running the second phase of the pilot study by presenting the course in 13 classes in the spring semester of 2017
8. Running the third phase of the pilot study by presenting the course at other national universities of medical sciences in the spring semester of 2018
9. Running the fourth phase of the pilot study by deciding on implementation of the course at all universities of medical sciences and several non-medical universities in the fall semester of 2017-2018
10. Collecting the comments of the experts and authorities of both involved ministries
11. Finalization
12. Sending the package to the SCRC | 2018-12-17T05:12:33.250Z | 2018-06-30T00:00:00.000 | {
"year": 2018,
"sha1": "a228fbfe48a583910008d743e3852aa1d9f27f86",
"oa_license": "CCBY",
"oa_url": "https://rdme.tbzmed.ac.ir/PDF/RDME_20958_20180113101540",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "a228fbfe48a583910008d743e3852aa1d9f27f86",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science",
"Medicine",
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255496557 | pes2o/s2orc | v3-fos-license | Breakthrough of glycobiology in the 21st century
As modern medicine began to emerge at the turn of the 20th century, glycan-based therapies advanced, and DNA- and protein-centered therapies became widely available. The research and development of structurally defined carbohydrates have led to new tools and methods that have sparked interest in the therapeutic applications of glycans. One of the latest omics disciplines to emerge in the contemporary post-genomics age is glycomics. In addition to providing hope for patients and people with different health conditions through a deeper understanding of the mechanisms of common complex diseases, this new specialty in system sciences has much to offer to communities involved in the development of diagnostics and therapeutics in medicine and the life sciences. This review focuses on recent developments that have pushed glycan-based therapies into the spotlight in medicine and the technologies powering these initiatives, which can be taken as among the most significant successes of the 21st century.
KEYWORDS: glycobiology, glycan, carbohydrate, nanotechnology, drug development, vaccine
Glycobiology and glycomics
Over the past few decades, the use of computational modeling in the field of glycobiology has grown due to the rising popularity of glycobiology and glycomics (1). One of the newest omics disciplines to emerge in the contemporary post-genomics age is glycomics. In addition to providing hope for patients and people with different health conditions through a deeper understanding of the mechanisms of common complex diseases, this new specialty in system sciences has much to offer to communities involved in the development of diagnostics and therapeutics in medicine and the life sciences (2). Glycomics is the branch of glycobiology that focuses on defining the structure and function of glycans in living organisms, while glycobiology is the study of the structure, function, and biology of carbohydrates (1,3). The emerging discipline of "systems glycobiology" (the impact of systems biology on the study of the glycome) is promising, given the current availability of advancing wet-lab techniques in the fields of glycobiology and glycomics (4). Glycans are long carbohydrate-based polymers made up of repeating monosaccharide monomer units joined by glycosidic linkages. All cells in nature appear to contain complex and varied glycans, which are crucial to all biological systems; in living things, glycans play physical, structural, and metabolic roles (4). The last century saw a significant expansion in our understanding of the biochemistry and biology of proteins and nucleic acids (5, 6).
Genomics revolution and biotechnology
Scientific interest in understanding the characterization, function, and interaction of other essential biomolecules of the cell, such as DNA transcripts, proteins, lipids, and glycans, has grown due to the genomics revolution and the advent of high-throughput technologies (7). High-throughput technology enables the production of massive amounts of data for omics analyses, for example genomics, transcriptomics, proteomics, phenomics, and metabolomics (8). At present, the growth of these technologies and their applications goes hand-in-hand with the growth of bioinformatics (9). Glycopeptide-based antibiotics (GBAs) such as Vancomycin, Teicoplanin, Telavancin, Ramoplanin, and Decaplanin, as well as Corbomycin, Complestatin, and the antitumor antibiotic Bleomycin, are another breakthrough of this century (10). Two blockbuster drugs, Acarbose (Bayer) and Heparin, along with the influenza treatment drug Tamiflu (Oseltamivir, Roche), stand out as monosaccharide-based drugs that have been used therapeutically for a long time, saving many lives (11,12).
Studies in glycobiology have grown as high-throughput technologies have advanced, allowing for fast cell screening. Additionally, more sophisticated analytical methods and data processing tools offer the chance to enhance high-throughput approaches for glycan screening as disease markers and for characterizing glycan structures in therapeutic proteins (13). Nanotechnology, in turn, is a process of modifying matter at a scale close to the atomic level to create novel structures, materials, and devices. This technique offers advances in science across a wide range of industries, including manufacturing, consumer goods, energy, transportation, food safety, environmental science, and medicine, among many others (14).
Glyco-nanoparticles and nanotechnology
Nanoparticles (NPs) are currently gaining a lot of attention due to their uses in biology and medicine. The primary biological applications include the identification, estimation, separation, purification, and characterization of biological molecules and cells, as well as the use of fluorescent biological labels, MRI contrast enhancement agents, pathogen and protein detection, DNA probing, tissue engineering applications, tumor targeting, and the targeted delivery of drugs, genes, and small molecules (15).
The development of powerful tools with diagnostic, therapeutic, and analytical applications through the use of nanotechnology has changed the approach of the biomedical sciences to fighting human diseases (16). Millions of lives are saved annually through vaccination, which is a success story in global health and development. More than 20 deadly diseases (such as polio, tetanus, influenza, hepatitis B and hepatitis A, rubella, Hib, measles, whooping cough, pneumococcal disease, rotavirus, mumps, chickenpox, diphtheria, and, since the end of 2021, COVID-19) can now be prevented by vaccines, allowing individuals of all ages to live longer (17,18). A milestone intervention in medical history has been the development of vaccines against cancer, such as the HPV (human papillomavirus) vaccine, which prevents cervical, vaginal, vulvar and anal cancers, genital warts, and oral cancer. Likewise, the hepatitis B vaccine is used against existing liver cancer (a so-called therapeutic vaccine, or immunotherapy) (12, 19).
Nanotechnology also focuses on hybrid materials made of inorganic nanostructures and biomolecules (20)(21)(22). Synthetic scaffolds made of iron oxide, noble metal, and semiconductor nanoparticles have been used to multimerize glycans and increase their affinity for receptors. The physical features of hybrid materials, such as magnetism and fluorescence, have led to applications in sensing, delivery, and imaging, with characterization by, e.g., ultraviolet-visible (UV-Vis) spectroscopy, infrared (IR) spectroscopy, elemental analysis, nuclear magnetic resonance (NMR), transmission electron microscopy (TEM), and X-ray photoelectron spectroscopy (XPS) (23). Likewise, magnetic nanoparticles (MNPs), such as iron oxide and manganese oxide nanoparticles (MONPs), are of particular interest as contrast agents for magnetic resonance imaging (MRI). MRI uses a radio frequency (RF)-induced electromagnetic field to generate internal tomographic tissue pictures; modification of that field's signal by particles (called "contrast") allows their location to be detected. Targeted magnetic, photodynamic, and gene therapies have all been used to battle cancer using nanocarriers based on heparin and heparin derivatives (24). Multifunctional gold NPs, including polysaccharide-functionalized gold NPs, have been developed for various applications, including imaging, photodynamic treatment, and apoptosis activation in metastatic cells (24,25). Likewise, heparin's capacity to inhibit blood clot formation has enabled significant medical advances since World War II, such as heart transplants, renal dialysis, and coronary artery dilations (angioplasties) (26).
Glyco-science and therapeutics uses
In recent decades, various functions of glycans in biological systems have been discovered due to the growing glycoscience study. Numerous scientific fields, including immunology, development and differentiation, biopharmaceuticals, cancer, fertility, blood types, infectious illnesses, etc., have identified significant roles of glycans" (9). Glycan receptors are being targeted to treat viral diseases. The antiviral drugs Zanamivir (Relenza) and Oseltamivir (Tamiflu) are perhaps the most successful sugar-based medications these days. Likewise, carbohydrate-based antivirus medicines are Remedesivir, Molnupiravir, Azvudine, Entecavir, Telbivudine, Clevudine, Sofosbuvir, and Maribavir; those drugs are competitive neuraminidase ligands that bind to the enzyme and prevent the virus particle from being released from host cells (16, 27).
Due to their diversity, glycans have a wide range of biological functions and play essential roles in many physiological and pathological processes, including cell division, differentiation, and tumour formation (28,29). Glycans are essential biomarker candidates for many diseases, including cardiovascular diseases, immune system deficits, genetically inherited disorders, various cancer types, and neurological diseases, carry information in biological system (30)(31)(32)(33). During the onset and progression of these disorders, altered glycan expression is seen, brought on by improperly controlled enzymes, including glycosyltransferases and glycosidases. As a result, altered glycan structures may be helpful in the early detection of certain disorders. Glycans play an important role in illness diagnosis and management. Still, they can also be employed therapeutically as markers to identify and isolate particular cell types and as targets for developing new medications (3,34).
Glycoinformatics databases and glycosylation
Numerous biological processes, such as cell growth and development, tumour growth and metastasis, immunological detection and response, cell-to-cell communication, and microbial pathogenesis, are significantly influenced by glycosylation. Indeed, glycosylation is one of the most prevalent and critical post-translational modifications of proteins (35,36). Several factors can influence and modify glycosylation, including genetic determinants, monosaccharide nucleotide levels, cytokines, metabolites, hormones, and ecological factors (35)(36)(37)(38)(39). To get a full picture of the entire biological system, it is crucial to integrate omics methods such as proteomics, genomics, transcriptomics, and metabolomics into the field of glycobiology (35,36,40). Additionally, a wide variety of glycoinformatics resources and databases are now available to investigate glycans and glycosylation pathways, which is also one of the breakthrough achievements of the century (13).
Chromatography, diagnostics and therapeutics
Several methods have been developed and used in recent years to determine the structure of glycans to various degrees of detail (41). A conventional approach involves radioactively labelling the glycoconjugates, followed by enzymatic or chemical treatments and anion-exchange, gel filtration, or paper chromatographic analysis. Studies using nuclear magnetic resonance (NMR), gas chromatography with mass spectrometry (GC-MS), and other methods have been carried out extensively. In recent years, simple chromatography methods have been replaced by high-performance liquid chromatography (HPLC) and ultra-performance liquid chromatography (UPLC), and fluorescence labelling has taken the place of radioactive labelling. Chromatography columns, for example graphitized carbon, reversed-phase (RP), anion-exchange, normal-phase, or hydrophilic interaction resins, can be utilized in conjunction with the appropriate enzymatic or chemical treatments (9,42). Column chromatography is among the most precise and widely used separation and purification methods and can be used to separate and purify both solid and liquid materials. The extraction of pesticides from samples of animal origin (made up of lipids, waxes, and pigments) has been aided by column chromatography. The chromatography process is also used in medicine, for example to produce the peptide hormone pramlintide (an analog of amylin), which is used to treat diabetes (43). Various glycan detection technologies have shed light on the nature of several diseases, including COVID-19, diabetes, cancer, and congenital abnormalities (44)(45)(46).
Numerous diseases that afflict humans are treated, cured, or even prevented using information from DNA. Researchers have already worked on gene sequencing to find specific genes that cause diseases, allowing them to develop remedies. The development of biomedicine has been greatly aided by gene therapy. The health community and the general public believe the human genome draft sequencing will enable researchers to provide cures or at least effective therapies for all ailments (47).
A more detailed and exact understanding of skin ageing has become possible through recent progress in glycobiology, driven by cutting-edge technological advancements. The field of longevity and anti-ageing has been revolutionized by anti-ageing healthcare technology (48,49). Cutting-edge technology refers to the latest and most advanced version of a technology or application that makes a function easy, cost-effective, reliable, and fast; such technology can be software that is regarded as a "game changer". The cloud, containers, AI (artificial intelligence), and machine learning are all considered cutting-edge technologies (50). Patient care services, chronic disease management, and patient health initiatives, including rapid virus testing during COVID-19, digital diagnostics, telehealth, drug delivery, vaccine development, skin grafting, and cancer and diabetes treatment, are examples of applications (51,52). Glycans are now undeniably proven to be essential skin elements and play a critical function in skin homeostasis. Glycans, which are essential for skin health, also change qualitatively and quantitatively as we age (53).
Conclusion
Since the beginning of modern medicine in the last century, the genomics revolution, glycoinformatics databases, biotechnology, and developments in chromatography, including the progress of glycan-based therapies, have advanced rapidly. Research and development of structurally defined carbohydrates have led to the use of new tools and methods that have fueled interest in the therapeutic applications of glycans. DNA- and protein-centered therapies became widely used and progressed toward success. However, more precise targets for glycomimetics need to be found. The study of complex glycosylated structures, particularly glycoproteins, has driven significant developments in synthetic procedures, analytical tools, and high-resolution biophysical approaches in glycobiology. Nanoparticles and other polyvalent structures have been developed to improve the avidity and therapeutic potential of specially formulated glycopeptides, which can be considered among the biggest successes of the 21st century.
Author contributions
GM was responsible for the concept, design, and drafting of the manuscript. CT and XX contributed to collecting the information and reviewed the manuscript. JZ reviewed the final version of the manuscript. All authors contributed to the article and approved the submitted version. | 2023-01-07T15:24:55.565Z | 2023-01-05T00:00:00.000 | {
"year": 2022,
"sha1": "77a7c180a817c3acd62dc9000d9937887126821f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "77a7c180a817c3acd62dc9000d9937887126821f",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
251858245 | pes2o/s2orc | v3-fos-license | Effects of Different Acids Dopant on the Electrochemical Properties of Polyaniline Cathode in Aluminum-ion batteries
Polyaniline (PANI) as a cathode material has attracted great attention for aluminum-ion batteries (AIBs) due to its low raw-material cost, simple synthesis, good chemical stability, and electrochemical reversibility. However, the effect of different doping acids on the electrochemical performance of PANI cathodes has not been investigated. Here we prepared PANI using strong acids, medium-strong acids, and a weak acid as dopants. Results show that the specific capacity of PANI doped with strong acids reaches 180 mAhg-1 at a current density of 100 mAg-1, which is much higher than that of PANI doped with medium-strong acids (100-120 mAhg-1) or a weak acid (<60 mAhg-1). The electrochemical properties of the PANI electrode materials are mainly affected by their crystallinity, doping degree, conductivity, and morphology.
Introduction
Aluminum-ion batteries (AIBs) are a very promising type of energy storage device due to their high capacity, high charge transfer efficiency, low cost, and high safety. [1][2] The key to AIB research is the cathode material. Up to now, two types of materials have been intensively investigated. One is graphitic materials, [3] which present a limited capacity for storing large AlCl4- anions (0.528 nm) [4] in the graphitic interlayer space. This characteristic makes it difficult to further improve their capacity, which is usually lower than 120 mAhg-1. The other is metal dichalcogenides, [5] whose initial discharge capacities are significantly high, reaching up to 300-500 mAhg-1, but whose cycle stability is rather poor: their capacity usually decreases to about 100 mAhg-1 or lower after 100-500 cycles.
Therefore, cathode materials with a higher capacity and stable performance are required. Conductive polymers [6] are an important class of electrode materials for metal-ion batteries due to their high theoretical specific capacity and rapid, reversible redox reactions throughout their entire three-dimensional bulk phase. Among the conducting polymers, polyaniline (PANI) has been favored due to its low raw-material cost, facile synthesis, good chemical stability, and electrochemical reversibility. [7,8] In the preparation of PANI by a simple chemical oxidation method, the doping acid is the key to the transformation of PANI from the insulating state to the conducting state [9]. However, the effects of crystallinity, conductivity, and microstructural differences on the electrochemical performance of PANI doped with different acids have not been systematically investigated.
In this work, the oxidative polymerization method was used to prepare PANI with strong acids (hydrochloric acid and methanesulfonic acid), medium-strong acids (tartaric acid, citric acid, and hydrofluoric acid), and a weak acid (formic acid) as dopants. The electrochemical behavior of the PANI samples was systematically studied, and the fundamental reasons for their performance differences were explored by X-ray diffraction (XRD), Raman spectroscopy, powder resistivity measurements, scanning electron microscopy (SEM), and other methods.
The sample crystal structures were analyzed via powder X-ray diffraction measurements (XRD, D8 Advance, Bruker, Germany) by using the CuKα radiation at 40 kV and 30 mA. The Raman spectra were recorded by using a 532 nm-laser excitation at room temperature (HORIBA HR Evolution). The microscopic morphology and the microstructure of the samples were observed via field-emission scanning electron microscopy (SEM, Sirion 200, FEI, Netherlands). The conductivity of powder samples was measured by powder resistivity instrument (SZT-D, ST2253, China) under the pressure of 40MPa.
Preparation of PANI with different acid doping materials (AD-PANIs).
A given acid was added to water and mixed evenly to give a 1 mol/L acid solution. A volume of 41.7 mL of this acid solution was stirred at 0 °C for 30 min. Afterwards, 0.8 mL of aniline and 8.3 mL of ammonium persulfate solution were added to the mixture and stirred vigorously for 24 h at 0 °C. The suspension was then aged at 85 °C for 72 h. The obtained product was filtered through a polyester fiber (Carpenter Co.) and washed with deionized water several times to remove impurities. The wet product was then lyophilized, and the dried compound was designated x-PANI, where x represents the type of acid dopant. The PANI samples doped with hydrochloric acid (HCl), methanesulfonic acid (MSA), tartaric acid (TA), citric acid (CA), hydrofluoric acid (HF), and formic acid (FA) are named HCl-PANI, MSA-PANI, TA-PANI, CA-PANI, HF-PANI, and FA-PANI, respectively.
Fabrication of the electrochemical cell.
The samples were ground with acetylene black and polyvinylidene fluoride (PVDF) in a mass ratio of 6:3:1 to prepare the cathodes. After adding methylpyrrolidone as the dispersing agent, the mixture was coated onto a rounded molybdenum current collector of 12 mm diameter, which was dried at 80 °C under vacuum for 12 h. The loading of active material was in the range of 1.5-2 mg. An aluminum foil (99.99%) was used as the anode. The AIBs were assembled in a customized Swagelok-type cell in an argon-filled glove box at room temperature. One piece of glass fiber paper (Whatman 934-AH) was placed between the Al anode and the cathode. A quantity of 80 μL of ionic liquid electrolyte was added to the cell to wet the separator. The preparation method of the ionic liquid electrolyte is consistent with the previous literature. [7]
Electrochemical measurements.
The galvanostatic charge/discharge measurements were performed on a LANHE battery tester. Cyclic voltammetry (CV) measurements were conducted at different scan rates over a range of 0.1-2.3 V versus Al/AlCl4- on an electrochemical workstation (CHI 660E, Chenhua Instrument Corporation, China) in a three-electrode mode, where the reference and counter electrodes consisted of Al foil and the working electrode of the samples coated on a Mo foil.
Results
The structure, crystallinity, and doping degree of the six doped PANIs were characterized by X-ray powder diffraction (XRD) analysis and Raman spectroscopy. In the Raman spectra of the AD-PANIs (Fig. 1a), five typical peaks (*) generated by PANI can be found at 800, 1165, 1400, 1471-1505, and 1590 cm-1, which correspond to substituted benzene ring deformation, C-H bending of the quinoid ring, C-N stretching, C=N stretching of the quinoid ring, and C-C stretching of the benzenoid ring, respectively. [10,11] In addition, the peak at 1218 cm-1 is attributed to the stretching vibration of the C-N single bond, and the characteristic peak between 1300 cm-1 and 1370 cm-1 corresponds to the C-N+ stretching vibration (#) of the delocalized polaron charge carrier, which indicates that PANI is in the doped state. [12] The XRD patterns show that HCl-PANI and MSA-PANI have the highest crystallinity [14]; TA-PANI and CA-PANI follow, and HF-PANI and FA-PANI have the weakest crystallinity. In addition, all PANIs have obvious peaks at 20.9°, indicating that acid doping is beneficial to the orderly arrangement of PANI molecular chains along the direction parallel to the chain length. However, only HCl-PANI and MSA-PANI exhibit an enhanced intensity of the peak at 25°, suggesting that doping with HCl and MSA may also facilitate the growth of PANI perpendicular to the chain direction. [15] This may effectively improve the conductivity and charge transfer rate of the materials, thereby improving the rate performance and specific capacity of the cathode. To assess the suitability of AD-PANI for AIBs, its electrochemical performance as a cathode material was evaluated in an AIB cell. Fig. 2a shows the galvanostatic charge-discharge curves of the six doped PANIs at a current density of 100 mAg-1 after cycling stabilization (50 cycles). There is a long, smooth voltage plateau (~1.0-2.0 V vs Al/AlCl4-) in the charge process and a declining plateau (2.0-1.0 V vs Al/AlCl4-) in the discharge process, indicating that AD-PANI cathodes can store energy in AIBs. The specific capacity and plateaus of the HCl-PANI and MSA-PANI electrodes are significantly better than those of the other acid-doped PANIs, followed by TA-PANI and CA-PANI; the specific capacity of HF-PANI is lower, and FA-PANI shows no obvious plateau, indicating that the stronger the acid, the higher the specific capacity of the doped PANI. The cycling performance of the PANI electrodes doped with different acids (Fig. 2b-g) shows that the strong-acid-doped samples have high initial specific capacities (up to 184 mAhg-1 for HCl-PANI and 180 mAhg-1 for MSA-PANI at 100 mAg-1) and good cycle stability; HCl-PANI performs best, with a discharge capacity retention of 72% after 200 cycles, which may be due to its more suitable structure. As shown in Fig. 3, MSA-PANI presents a sheet-like stacked structure with some holes, which gives it a higher initial capacity, even surpassing the specific capacity of HCl-PANI after 5 cycles. However, the porous structure collapses easily, resulting in poor cyclic stability. In contrast, HCl-PANI presents a short-rod-like structure, which balances the exposure of active sites with structural stability, showing good specific capacity and cyclic stability.
Compared with the strong-acid-doped PANIs, the organic medium-strong-acid-doped PANIs show good cyclic stability but relatively low capacity: TA-PANI reaches 109 mAhg-1 (Fig. 2d) and CA-PANI reaches 119 mAhg-1 at 100 mAg-1 (Fig. 2e). It is worth noting that the cycling capacity curves of the AIB cathode materials obtained by doping PANI with polyhydroxycarboxylic acid molecules such as TA and CA share a common feature: the capacity requires an activation process over the first few cycles before it stabilizes. The reason for this phenomenon is that the carboxylate anions, with their strongly polar groups, enhance the interactions between polyaniline molecular chains during doping, so the PANI molecules tend to agglomerate [16]; the internal PANI molecules cannot effectively adsorb AlCl4- in the early cycles because of limited diffusion, which leads to the low capacity in the initial cycles. In addition, although the weak-acid-doped FA-PANI has a strong capacity retention ability, its specific capacity is less than 60 mAhg-1 after stabilization, and its Coulombic efficiency during cycling is also unsatisfactory. Combined with the preceding XRD characterization, it can be concluded that the strong-acid-doped PANIs have better electrochemical performance because they have better crystallinity and conjugation length along both the parallel and perpendicular molecular chain directions, which can effectively improve the conductivity and charge transfer rate of the materials [15]; the Nyquist plots (Fig. 4a) and the conductivity tests (Table 1) also support this. The powder conductivity test revealed that the electrical conductivity of MSA-PANI and HCl-PANI is apparently higher than that of the other acid-doped PANIs. As shown in Figure 4a, the Nyquist plots of the AD-PANIs consist of a depressed semicircle at high frequency and an inclined line at low frequency. Usually, a smaller semicircle diameter means a lower charge transfer resistance (Rct), and a higher slope of the inclined line indicates a lower Warburg resistance (Rw). MSA-PANI and HCl-PANI exhibit relatively lower Rct and Rw than the other PANIs, demonstrating that doping with MSA and HCl enables easier transfer of electrons and electrolyte ions at the PANI-electrolyte interface. Moreover, the kinetics of PANI can be further explored through the cyclic voltammetry (CV) curves of HCl-PANI (Fig. 4b). The anodic and cathodic peaks gradually shift toward slightly more positive and more negative potentials, respectively, as the scan rate increases, indicating weak polarization. These observations reveal that the redox reactions have excellent kinetics. The relation between the peak current and the scan rate can be expressed by the power law i = a·v^b, or equivalently log(i) = log(a) + b·log(v), where i is the response current (in mA), v is the scan rate (in mV s-1), and a and b are adjustable parameters. As the inset in Fig. 4b shows, the fitted parameter b is about 0.51, close to 0.5, indicating that the kinetics of the PANI electrodes in AIBs are mainly diffusion-controlled. [16]
Figure 4. (a) Nyquist plots for the AD-PANI cathodes (the inset is an enlarged view of the high-frequency region); (b) typical CV curves of the HCl-PANI cathode at various scan rates from 1 to 50 mV s-1 (the inset shows the relation between the peak current and the scan rate); (c) specific capacities and Coulombic efficiencies of the HCl-PANI cathode at different current densities from 100 mAg-1 to 1000 mAg-1. Such excellent kinetics and low charge transfer resistance translate into the good rate capability of HCl-PANI (Fig. 4c): even at a current density of 1 Ag-1, the specific capacity of the HCl-PANI cathode decreases by only about 40% relative to that at 100 mAg-1, and when the current density is adjusted back, its capacity can still be maintained at 130 mAhg-1.
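The b-value analysis described above can also be reproduced numerically. The following is a minimal sketch, not taken from the paper, using hypothetical scan-rate/peak-current pairs; it simply fits the power law i = a·v^b on a log-log scale, where b near 0.5 points to diffusion-controlled kinetics and b near 1 to surface-controlled (capacitive) behaviour.

```python
# Minimal sketch (illustrative data, not the authors' measurements):
# fit log10(i) = log10(a) + b*log10(v) to estimate the b-value from CV peak currents.
import numpy as np

scan_rates = np.array([1, 2, 5, 10, 20, 50])                     # v in mV s^-1 (hypothetical)
peak_currents = np.array([0.31, 0.44, 0.70, 0.99, 1.40, 2.21])   # i in mA (hypothetical)

# np.polyfit with degree 1 returns [slope, intercept]; the slope is the b-value.
b, log_a = np.polyfit(np.log10(scan_rates), np.log10(peak_currents), 1)
print(f"b = {b:.2f}")   # ~0.5 for these illustrative data, i.e. diffusion-controlled, as reported for HCl-PANI
```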
Conclusions
In summary, when strong-acid-doped PANIs are used as the cathode materials of AIBs, the specific capacity can reach 180 mAhg-1 at 0.1 Ag-1, the capacity retention of HCl-PANI remains 72% after 200 cycles, and the electrochemical performance is better than that of PANI doped with medium-strong or weak acids under the same conditions. The AD-PANIs differ in crystallinity, doping degree, resistivity, and morphology; higher crystallinity and doping degree, lower resistivity, and a stable structure result in better electrochemical performance. | 2022-08-27T15:08:01.551Z | 2022-08-21T00:00:00.000 | {
"year": 2022,
"sha1": "4409f035ca7a6b92feddb26f1215b6994178f9e1",
"oa_license": "CCBYNC",
"oa_url": "https://drpress.org/ojs/index.php/HSET/article/download/1332/1263",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "fff3592f00c60ec0498ea4f9b16951f766f73349",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry",
"Engineering"
],
"extfieldsofstudy": []
} |
220713580 | pes2o/s2orc | v3-fos-license | Common mental disorders and its associated factors and mental health care services for Ethiopian labour migrants returned from Middle East countries in Addis Ababa, Ethiopia
Background The migration of young Ethiopian men and women to the Middle East countries was mainly for economic reasons. The migration was largely irregular, which posed a wide range of unfavorable life conditions for some of the migrants. The overall objective is to assess common mental disorders and their associated factors among Ethiopian migrants returned from the Middle East countries and to describe mental health care services targeting these migrants. Methods The study employed a mixed-methods approach. For the quantitative part, a systematic random sampling technique was used to select a sample of 517 returnees. An interviewer-administered questionnaire based on the Self Report Questionnaire-20 was used to collect data from respondents. The qualitative study employed a phenomenological study design to describe mental health care services. Key informant interviews and non-participant observation techniques were used to collect qualitative data. Results The prevalence of common mental disorder among Ethiopian migrants returned from the Middle East countries was found to be 29.2%. Education (AOR=2.90, 95%CI: 1.21, 6.94), physical abuse (AOR=12.17, 95%CI: 5.87, 25.22), not getting salary properly and timely (AOR=3.35, 95%CI: 1.47, 7.63), history of mental illness in the family (AOR=6.75, 95%CI: 1.03, 43.95), detention (AOR=4.74, 95%CI: 2.60, 8.62), guilty feeling for not fulfilling goal (AOR=9.58, 95%CI: 4.43, 20.71), and denial of access to health care (AOR=3.20, 95%CI: 1.53, 6.67) were significantly associated with a common mental disorder. Shelter-based and hospital-based mental health care services were rendered for a few return migrants with mental disorders. The services primarily targeted female return migrants. Conclusion The prevalence of common mental disorder was high among migrants returned from the Middle East countries. Despite the high burden of mental distress, only a small proportion of return migrants with mental illness is getting mental health care services.
Background
People have migrated from one place to another since the start of human existence [1]. Even though human migration is not a new phenomenon, it has changed significantly in number and nature with the growth of globalization, including the ease of international transport and communication, the push and pull factors of shifting capital, effects of climate change, and periodic political upheaval, including armed conflict [2]. Migration has been increasing largely at the international level especially since the last decade [3]. Between 1990 and 2017, the number of international migrants worldwide rose by over 105 million, or by 69 percent. Globally, there were an estimated 258 million international migrants in 2017 [4]. Migration report estimates that if migration continues to increase at the same pace as in the last 20 years, the number of international migrants worldwide could be as high as 405 million by 2050 [5]. Africa is often seen as a continent on the move, with people escaping poverty, environmental disaster, or violent conflict. About 31 million Africans, or little more than 3 percent of the continent's population, have migrated internationally [6].
There is evidence that outward migration has increased in Ethiopia in recent years [7]. Ethiopians leave their country either as regular or irregular migrants. Regular migration is defined by the International Organization for Migration (IOM) as "migration that occurs through recognized, authorized channels". IOM also defines irregular migration as "movement that takes place outside the regulatory norms of the sending, transit and receiving countries". Data from the Ministry of Labour and Social Affairs (MoLSA) indicate that approximately 460,000 Ethiopians migrated legally from their country between 2008 and 2013, mostly to the Middle East with the majority going to Saudi Arabia (79%), Kuwait (20%) and others to UAE and other countries [8]. The exact number of Ethiopian migrants to the Middle East is unknown as two-thirds of them migrate through irregular means [9]. It is estimated that up to 500,000 Ethiopian women are migrating to the Middle East for domestic work annually [10]. Ethiopian migrants to the Middle East are driven to migrate primarily for economic reasons [11]. The process of migration can lead to a whole spectrum of physical and mental health disorders [12]. There are numerous reports that many migrants are victims of fraud, forced labour, and physical, sexual, and psychological abuse by their employers or by traffickers, and a significant number develop psychological problems [7]. Many Ethiopian women working in domestic service in the Middle East face severe abuses, including physical and sexual assault, denial of salary, sleep deprivation, withholding of passports, confinement, and even murder [13].
Similar to the outward migration, return migration to Ethiopia has increased in the past decade [10]. Ethiopians return to their country for various reasons, such as to reunite with family or friends, for investment, or through repatriation and deportation [8]. The recent deportation of 170,000 Ethiopian migrants from the Kingdom of Saudi Arabia is one example of large-scale return migration in the country [14]. The deportation of the above undocumented migrants was accompanied by severe human rights abuses, including arbitrary detention, theft of migrants' belongings, rape, beatings, and killings, which traumatized many of those who returned to Ethiopia [7]. Return migrants may have exceptional and increased mental health needs resulting from painful or traumatic experiences that they might face during the process of migration and/or during their stay in the country of destination [15]. A growing body of evidence shows that Ethiopian returnees from different Middle East countries often have a variety of psychological disorders, as they experience diverse problems at the various stages of their migration [16]. This, in turn, makes mental health a serious concern among Ethiopian migrants returning from the Middle East countries [9].
Studies suggest that mental health care and rehabilitation services urgently need to be expanded to return migrants in Ethiopia [17]. However, return migration and mental health have received far too little attention in policy and crisis-intervention programs, despite the large number of Ethiopian migrants with various mental health issues returning from the Middle East countries. The National Mental Health Strategy, which integrates mental health into primary health care systems to provide comprehensive, accessible, and affordable mental health care for the public, does not specifically address the special mental health care needs of return migrants [18].
Even though hazards to general health, and specifically to mental health, rank among the top experiences of trafficked and migrant returnees, the services addressing the health, and specifically the mental health, needs of returnees during the recovery phase of the migration process are sparse [9,16,17]. Therefore, this research aimed to study the magnitude of common mental disorders and associated factors, as well as the mental health care practice targeting Ethiopian migrants returning in large numbers from the Middle East countries.
Conceptual framework
The constructs of this conceptual framework were extracted from the body of literature in the area. The framework depicts CMD, or mental distress, in return migrants as a result of socio-demographic characteristics, pre-departure risk factors, traumatic life experiences, goal-related perceptions, and access to health care in the destination country (Fig. 1).
Study design and setting
The study employed a mixed-methods research approach. A cross-sectional study design was used in the survey to assess the prevalence of common mental disorder and its determinants among Ethiopian migrants returning from the Middle East countries. The qualitative research applied a phenomenological study design to describe the dimensions of the mental health care services provided to return migrants, as well as the opportunities and challenges of the organizations providing these services.
The study was conducted in Addis Ababa city which was the main gate to Ethiopian returnees from different parts of the world including those from the Middle East region via Bole International Airport. It is the hub for various actors working on rehabilitation and reintegration of return migrants. The main organizations working on rehabilitation and reintegration of return migrants in Addis Ababa include Addis Ababa City Administration Social and Labour Affairs Bureau, International Organization for Migration, Agar Ethiopia, Good Samaritan Association, Nolawi Services and St. Amanuel Mental Specialized Hospital. The study was conducted in the period between November 10, 2017, and December 21, 2017.
Population
The population for the quantitative study was Ethiopian labour migrants returning from the Middle East countries who were staying in the provisional center in Addis Ababa and who were conscious and not critically ill. The population of the qualitative study was staff of the Addis Ababa City Administration Labour and Social Affairs Bureau, Agar Ethiopia, Good Samaritan Association, Nolawi Services, St. Amanuel Specialized Mental Hospital, and the Ministry of Health who specifically work on mental health care of return migrants from the Middle East countries.
Sample size and sampling technique
The sample size was calculated using the single population proportion formula, where p is the prevalence rate of common mental disorder in the population (27.6%), based on a recent study conducted by Habtamu, Minaye, and Zeleke in 2017 [16]; d is the margin of error to be tolerated (4%); and Z(1-α/2) is the reliability factor corresponding to the 95% confidence level. The computed sample size was 480 and, with a 10% non-response rate, the final sample size became 528. A systematic random sampling technique was used to select samples from Ethiopian return migrants. A list of 1059 return migrants who arrived during the study period was obtained from a register of the National Disaster Risk Management Commission, the government body that coordinated tasks in the provisional center at that time. The sampling interval (k) was calculated by dividing the number of returnees (1059) by the sample size (528), giving a sampling interval of 2. A lottery was drawn between the first two names of return migrants on the register to randomly select the first respondent; then every second migrant on the register was selected to be sampled and interviewed. For the qualitative part, a purposive sampling technique was employed to select participants and institutions. Participants and institutions with relevant and rich information were selected deliberately for the study. Information about relevant staff for the study was obtained by asking the leaders of each organization. Information saturation was used to determine the point at which data collection ended. Ten key informant interviews were conducted with staff of the Addis Ababa City Administration Labour and Social Affairs Bureau, Agar Ethiopia, Good Samaritan Association, Nolawi Services, St. Amanuel Specialized Mental Hospital, and the Ministry of Health.
The key informants were focal persons from each organization who had relevant and rich information about the matter. In the same way, three observations were conducted at Agar Ethiopia, Good Samaritan Association, and St. Amanuel Specialized Mental Hospital.
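For concreteness, the sample-size and sampling-interval arithmetic described earlier in this subsection can be written out as follows (a restatement of the figures already given, with Z(1-α/2) = 1.96 for a 95% confidence level):

```latex
n = \frac{Z_{1-\alpha/2}^{2}\; p\,(1-p)}{d^{2}}
  = \frac{(1.96)^{2} \times 0.276 \times 0.724}{(0.04)^{2}} \approx 480,
\qquad
n_{\text{final}} = 480 \times 1.10 = 528,
\qquad
k = \left\lfloor \tfrac{1059}{528} \right\rfloor = 2 .
```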
Study variables and data collection procedure
Our dependent variable was common mental disorder, defined as the presence of 8 or more of the 20 symptoms of mental distress in the SRQ-20. Socio-demographic characteristics, pre-departure factors, the experience of traumatic events, access to health care, and goal-related perceptions were assessed quantitatively. The qualitative study explored the depth and dimensions of the mental health care services available. It also explored the opportunities and challenges the actors encounter. Contents that were related to mental health care services, opportunities, and challenges were identified.
A face-to-face interview was used to collect data from respondents in the quantitative study. A structured questionnaire based on the WHO Self Report Questionnaire-20 (SRQ-20) was used to assess mental distress among sample respondents over the past 30 days. The questionnaire was translated into the Amharic language. Selected participants were asked for their willingness to participate in the study. Data were collected from migrants who fulfilled the inclusion criteria: being conscious, not being critically ill, and being at least 18 years old. Three data collectors with a bachelor's degree in psychology collected data from respondents.
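As an illustration of how the SRQ-20-based outcome described above can be operationalized, the following is a minimal sketch (not the authors' code); it assumes a pandas DataFrame with hypothetical binary item columns srq_1 ... srq_20 (1 = symptom reported in the past 30 days):

```python
# Minimal sketch: derive the common mental disorder (CMD) outcome from SRQ-20 items,
# using the cut-off of 8 or more symptoms applied in this study.
import pandas as pd

SRQ_ITEMS = [f"srq_{i}" for i in range(1, 21)]  # hypothetical column names

def srq20_score(df: pd.DataFrame) -> pd.Series:
    """Total number of SRQ-20 symptoms endorsed (0-20)."""
    return df[SRQ_ITEMS].sum(axis=1)

def cmd_case(df: pd.DataFrame, cutoff: int = 8) -> pd.Series:
    """1 if the respondent endorses `cutoff` or more symptoms, else 0."""
    return (srq20_score(df) >= cutoff).astype(int)
```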
Two different data collection techniques were used to collect qualitative data: key informant interviews and observation. The key informant participants were mental health practitioners with rich experience in providing mental health services for the return migrants. Key informant interviews were conducted to collect rich information about mental health care services from highly relevant staff in the organizations that provide rehabilitative and mental health care services for return migrants. The key informant interview process was guided by an interview guide, and the interviews were tape-recorded to capture all information given by the interviewees. After completion of each key informant interview, the facilitators thanked each participant for his/her willingness and time. A non-participant observation technique was used for the observation, which aimed to gather first-hand information on mental health services by observing sites where the actual services are rendered. The first-hand information gathered through observation included the general setting of the service provider site, the types of mental health services provided at the site, the availability of adequate space for each service, the availability of the required materials and facilities for each service, the appearance and condition of mental health care clients, and the interaction between service providers and clients. A structured observation checklist was used to guide the observation, and field notes were taken immediately after each observation. The investigator gathered the qualitative data from the study participants and institutions.
Data quality management
For quantitative data quality, a standard WHO questionnaire with acceptable validity and reliability was used to collect data from respondents. The data collectors were trained for two days on data collection tools and data collection procedures. A filled questionnaire was checked for completeness and consistency by the investigator during data collection. Data entry was done in Epi-Data software to minimize data entry errors.
To enhance the trustworthiness of the qualitative research, triangulation of data collection methods was used to elicit information about the same issue through key informant interviews and observation, and the data from the two methods were triangulated during interpretation. The researcher kept records of the research process as it was undertaken, providing an audit trail that could be reviewed later for different aspects of the research. An extensive description of the setting and participants of the study was provided. In addition, a detailed description of the findings was provided, with adequate supporting evidence in the form of quotes from participants' interviews.
Data analysis procedures
Quantitatively collected data were entered into EpiData version 3.1. The data were edited and cleaned carefully. Then, the dataset was exported to SPSS version 20 for analysis. Descriptive statistics were run to summarize the background characteristics of the respondents, to determine the prevalence of CMD, and to examine the distribution of the specific symptoms contained in the SRQ-20 among the respondents. Logistic regression models were fitted to identify factors associated with CMD. Analyses of associations for CMD focused on the presence or absence of the disorder, taking a score of eight or above on the SRQ-20 as the cut-off point. The screening criterion for variables to be included in the multivariable regression was a P-value <0.25 in the bivariate regression model. The level of significance of association in the logistic regression model was set at a P-value <0.05.
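The two-step modelling strategy described above (bivariate screening at p < 0.25, then a multivariable model with adjusted odds ratios and significance at p < 0.05) could be sketched as follows. This is an illustrative outline, not the authors' SPSS procedure; the DataFrame, the binary outcome column cmd, and the predictor names are hypothetical and assume 0/1 coding of each variable.

```python
# Minimal sketch of the analysis strategy: bivariate screening, then multivariable logistic regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

CANDIDATES = ["education", "physical_abuse", "salary_problem", "family_mental_illness",
              "detention", "guilt_unmet_goal", "denied_health_care"]  # hypothetical variable names

def fit_cmd_models(df: pd.DataFrame, candidates=CANDIDATES):
    """Bivariate screening (p < 0.25) followed by a multivariable logistic model for CMD."""
    screened = [v for v in candidates
                if smf.logit(f"cmd ~ {v}", data=df).fit(disp=False).pvalues[v] < 0.25]
    model = smf.logit("cmd ~ " + " + ".join(screened), data=df).fit(disp=False)
    aor = np.exp(model.params)       # adjusted odds ratios (AORs)
    ci = np.exp(model.conf_int())    # 95% confidence intervals
    return screened, aor, ci
```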
For the qualitative data, the tape-recorded key informant interviews were first transcribed in Amharic and then translated into English. The transcribed notes were edited, formatted, and saved as a text file. The transcripts were then imported into OpenCode version 4 software, codes were assigned to segments of the text, and the codes were subsequently categorized into four themes. The field notes taken during observations were summarized, categorized, and analyzed based on the site of observation. Finally, the findings from the two data collection techniques were triangulated.
Socio-demographic Characteristics
A total of 517 return migrants participated in the cross-sectional survey, with a response rate of 97.9%. More than half (56.9%) of the respondents were female. The mean age of the respondents was 27.5 (SD = ±5.0) years. Regarding religion, over half (56.3%) of them were Muslim, followed by Orthodox (30.2%). In terms of education, 13.0% of the respondents did not have formal education and 37.9% had attended primary education. Most of the respondents were from the Amhara (33.3%), Oromia (27.7%), and Tigray (22.4%) regions of Ethiopia, and a little more than half (53.2%) of the return migrants were from rural areas of these regions (Table 1).
Pre-departure Factors
The majority (86.5%) of the respondents did not know the language of the destination country before their migration. Only a few (11%) of them claimed that they had had the required skill for the type of work they supposed to do in the destination country. Regarding family pressure to migrate, the vast majority (89.2%) did not mention family pressure as a reason for their decision to migrate to the Middle East countries. Nearly half (48.2 %) of the respondents or their families took a loan to cover the cost of their journey to the Middle East country (Table 2).
Experience of Traumatic Events and Access to Health Care
Nearly one-third (32%) of return migrants reported that they had experienced physical abuse during their journey or in the destination country in the Middle East. More than half (55.3%) of the returnees reported that they had encountered verbal abuse. Among the female returnees, 68 (23.1%) reported sexual abuse during their journey or their stay in the destination country. Only one-fifth (20.7%) of the respondents reported getting their salary timely and properly. Regarding confiscation of passports, 86.1% of the returnees reported that their passport had been held forcefully by their employer. A third of the return migrants (32.9%) were refused permission to communicate with their family by telephone. Among the total respondents, 36.4% were detained either during their journey or in the destination country in the Middle East. More than sixty percent of the respondents reported feeling guilty for not meeting the primary goal of their migration. Slightly more than three-fourths (76.8%) of the returnees were denied access to health care by their employer during their stay in the host Middle East country (Table 3).
Prevalence of Common Mental Disorders
The prevalence of common mental disorder among Ethiopian migrants returned from the Middle East countries was found to be 29.2% (95% CI= 25.3, 33.3). The most frequent symptoms of common mental disorder reported by the respondents were: frequent headache (40.6%), feel unhappy (40.4%), nervousness (40.2%), bad sleep (39.7%), poor appetite (39.3%), feel tired all the time (35.4%) and easily tired (34.8%). Slightly more than a fifth (21.5%) of the respondents reported that they felt worthless in the past 30 days and suicidal ideation was reported by 17.4 % of the return migrants who participated in the study (Table 4).
Factors Associated with Common Mental Disorder
In the multivariable logistic regression model, education (AOR=2.90, 95%CI: 1.21, 6.94), physical abuse (AOR=12.17, 95%CI: 5.87, 25.22), not getting salary properly and timely (AOR=3.35, 95%CI: 1.47, 7.63), history of mental illness in the family (AOR=6.75, 95%CI: 1.03, 43.95), detention (AOR=4.74, 95%CI: 2.60, 8.62), guilty feeling for not fulfilling goal (AOR=9.58, 95%CI: 4.43, 20.71), and denial of access to health care (AOR=3.20, 95%CI: 1.53, 6.67) were significantly associated with common mental disorder (see Table 5).
Qualitative Research Findings
A total of ten participants were interviewed in the qualitative interviews. Half of them were female. Four main themes were identified from the content analysis of qualitative research. These were (i) the mental health problems of return migrants through providers' eyes, (ii) mental health care services being rendered for return migrants (iii) the existing opportunities for mental health care providers, and (iv) the challenges encountered by mental health care providers. Direct quotes from transcripts are provided to illustrate these themes. Excerpts or quotations from interviews with participants are identified by a code corresponding to Table 6.
The Mental Health Problems of Return Migrants through Providers' Eyes
The mental health problem of return migrants was vast and increasing from time to time. Participants agreed that female migrants were more affected by mental illness than their male counterparts. Those with mental health problems were brought to service providers in disturbing health conditions. The majority of return migrants were suffering from anxiety and depression and few of them were diagnosed with a severe form of mental illness like schizophrenia. Among the participants a female psychiatric nurse from Amanuel mental hospital who was also working in Agar Ethiopia supported the above idea by expressing: "In my understanding, almost all return migrants are affected by a mental health problem. The problem is immense and increasing from time to time. Most of the time, they are suffering from depression. They are also suffering from acute psychotic disorders. Few of them are suffering from a chronic psychotic disorder such as schizophrenia" (P4).
Another female participant from Good Samaritan Association agreed with the aforementioned idea by describing: "Most of the returnees coming to us were disoriented and traumatized. Some of them were with physical injuries. Some were with bad odour from their mouths and blood in their urine… Few were tested positive for HIV and TB"(P2).
Many Ethiopian labor migrants took an unsafe route in their migration to the Middle East countries.
Some of the participants claimed the Ethiopian government's ban of migration to the Gulf Arab countries exacerbated the illegal migration to the region. Most of the migrants were poorly prepared for the working and living conditions in the destination countries making them prone to abuse and mistreatment during their journey and at their destination. For most of the migrants, their work and stay in the employer house were full of exploitation and violation of their rights. The Kafala sponsorship system that practiced in the region also played a role in the exploitation and right violations of Ethiopian labour migrants in the Gulf Arab countries. A male participant working in Nolawi Services explained the poor preparation of Ethiopian labour migrants for the supposed domestic work and life in the destination country: "Ethiopian migrants do not have appropriate skills for the work they supposed to do. They do not have pre-departure orientation or training on the skill required for the work. They do not know the basic Arabic language. They are unaware of the culture of the destination country. They directly go to Arab countries without basic training and preparation. They are not aware of the working condition in the destination country" (P9).
A female psychiatric prescriber from Amanuel hospital, also working in the Good Samaritan Association, described the abuse and mistreatment that Ethiopian labour migrants faced: "Most of the time, the mistreatment and abuse are started from here in Ethiopia by traffickers. During their journey to the Arab countries, they face rape, torture, insult, and more. Once they arrive at the Arab country, the mistreatment and abuse are continued. They are not allowed to communicate with family and friends. They are starved. They are forced to drink unclean pipe water used for cleaning purposes" (P1).
Mental Health Care Services Being Rendered for Return Migrants
Ethiopian return migrants with mental health problems received mental health care services in two settings, namely a rehabilitation center and a mental hospital. According to the participants and the observations by the investigator, there were only three organizations that provided mental health care services for return migrants. Two of them were providing rehabilitation center- or shelter-based mental health services; the remaining one provided hospital-based mental health care.
Rehabilitation Center or Shelter Based Services
The rehabilitation center-based mental health care services were provided by Agar Ethiopia and the Good Samaritan Association. The two organizations were providing basic mental health care and rehabilitation services to returnees with mental illness. The services provided by the two organizations were shelter, food, hygienic materials, clothes, medical service through referral, psychological counseling, recreational therapy (although not well organized), reunification with family, life skill training, vocational skill training, and economic strengthening through linking with micro-finance institutes. The rehabilitation centers were aimed at providing mental health care services for female return migrants only, and only a few female migrants with severe mental illness received the services. Among the participants, a staff member of Agar Ethiopia explained the services his organization was providing: "We are providing a range of mental health care related services to return migrants with mental problems. The services we are providing to them are food, shelter, counseling, recreational therapy, life skill training, vocational skill training, reunification with family, and economic empowerment. We provide mental health care services for only female returnees at our rehabilitation center. Only a few of the returnees with mental problems are brought to our rehabilitation center as most of them do not show apparent signs of severe mental illness. The majority of returnees with mental illness are left on the street."(P3) During observation of Agar Ethiopia's rehabilitation center, the investigator observed 8 return migrants with mental illness. All of them were female, and one of them had a baby. The medium-sized premises of the center were neat and free from bad odor and hazardous objects at the time of observation. Similarly, while observing the Good Samaritan Association's rehabilitation center, the investigator observed 5 mentally ill return migrants, all of them female. The investigator observed paintings made by mentally ill return migrants hung on one side of the wall. The center was a one-storey building with a small compound.
Hospital-Based Services
St Amanuel hospital provided outpatient and inpatient medical services for the general public with mental illness. The hospital rendered mental health care services for return migrants with mental illness who were mainly brought by Agar Ethiopia, Good Samaritan Association, and Ethiopian Airports Enterprise. These services were: psychiatric assessment, prescription of medication or biological therapy, psychological counseling, ward admission, and reunification with a family. A female psychiatric nurse of Amanuel hospital described the type of services the hospital rendered for return migrants: "Amanuel hospital is providing a range of mental health care services for return migrants with mental illness. The hospital provides psychiatric assessment, counseling services, prescription of medication, inpatient services through ward admission, outpatient services, and reunification with family" (P6).
The observation in Amanuel hospital revealed that it was situated in one of the most bustling areas of the city. The investigator observed so many mentally ill male and female patients wearing a hospital gown. There were separate wards for male and female patients. The hospital compound was crowded with many outpatient and inpatient clients and their families/relatives. The waiting areas around examination rooms and registration/card rooms were full of patients and their families and relatives.
The Existing Opportunities for Mental Health Care Providers
There were limited opportunities for organizations working on the mental health care of return migrants. The increased attention given to mental health globally and nationally was considered an opportunity for the expansion of mental health care services. The limited support provided by the government was helping to strengthen the capacity of the actors working on the mental health care of return migrants. The presence of Amanuel mental hospital in Addis Ababa was an opportunity to diagnose and treat mentally ill return migrants brought by the two organizations working on the mental health care of return migrants. Participants said that the dedication of staff and leaders in handling the challenging task of caring for mentally ill return migrants was an opportunity for organizations working in the area. Among the participants, an Agar Ethiopia staff member described the support his organization received from the government as a noteworthy opportunity: "The city government is supporting us in a limited way. For instance, AA BOLSA is working closely with us as it is part of their responsibilities. The Addis Ababa City Civil Society Agency has donated us a vehicle. Addis Ababa City Disaster Preparedness Bureau has granted us an emergency fund of one and a half million birr. Addis Ababa City Council is on the process of providing us 3 hectares of land for construction of rehabilitation center" (P3).
A male staff of Amanuel hospital who was also serving as psychiatric staff in Agar Ethiopia explained the attention the Ethiopian government gave to mental health: "The government is working to improve mental health care services in the country. It trains a large number of mental health professionals. Also, it decentralizes mental health care services to different levels of government health facilities" (P5).
The Challenges Encountered by Mental Health Care Providers
Mental health care providers have encountered many challenges in their efforts to deliver mental health care services to return migrants. Inadequate funding to expand services to all needy returnees was a major challenge for most of the providers. The limited capacity to expand mental health care services to all needy return migrants was another challenge for the actors in the area. Overcrowding and a shortage of beds in the hospital were a chronic problem for providing mental health care services for returnees and the general public. Almost all return migrants came to the hospital unaccompanied by family or relatives, which posed a challenge in assessing, treating, and following them up. Lack of awareness about mental illness and stigma against mentally ill persons were also among the challenges that mental health care providers encountered. Lastly, the exaggerated need for economic support among return migrants who had recovered from mental illness posed a challenge for the actors struggling with meager resources. Agar Ethiopia staff noted that the lack of adequate space was a bottleneck for them in providing services for all needy return migrants: "We do not have enough space to provide mental health care service for all needy return migrants including male returnees" (P3).
One of Amanuel hospital staff also described the aforementioned challenge as follow: "There is a shortage of bed to admit all needy return migrants with severe mental illness even though efforts to admit this group of the society are in place" (P5).
Discussion
The prevalence of CMD among Ethiopian labour migrants returned from the Middle East countries was found to be 29.2% (95% CI= 25.3, 33.3). A similar study reported a prevalence rate of CMD to be 27.6 % among Ethiopian migrants returned from the Middle East and South Africa [16]. However, the prevalence of CMD in the current study is higher than that in the general population and working adults in the country which was ranging from 11.7% to 17.7% [19][20][21]. Findings of the qualitative research of this study indicated that the majority of return migrants are suffering from depression and anxiety disorders and few of them were diagnosed with a severe form of mental illness like schizophrenia. The findings of this study were in line with the study conducted in Nepal with repatriated migrants from the Gulf Arab countries and Asian countries which found that Nepalese foreign labor migrants were predominantly affected by depressive disorder and anxiety disorder [22].
The study revealed that the migrants were vulnerable to traumatic experiences and migration and adjustment related stressors. They were exposed to abuses, mistreatment, exploitation, and right violations. This was in line with a study from Sri Lanka conducted on women migrant workers returned from Middle East countries [23]. The participants in this study experienced a high level of abuse during their journey or at their stay in the Middle East countries. About 23.1%, 32%, and 55.3% of return migrants experienced sexual, physical, and verbal abuses respectively. A study conducted in Lebanon found that sexual, physical, and verbal abuses were detected in 12.5%, 37.5%, and 50.0 % of female foreign domestic workers respectively [24]. In the same way, a study from Nepal reported that 40.9% of return migrants had faced abuses at their workplace in the Middle East countries [25]. Labour migrants in the Middle East countries often encounter barriers to accessing appropriate health care [26]. In this study, only 23.2% of migrants had access to health care while they were in the Middle East countries. Similarly, a study from Nepal shows that only 12.9% of respondents reported that they had received health services after falling ill whilst in domestic work abroad [25].
Results from the multivariable analysis show that education, physical abuse, detention, salary earning, history of mental illness in the family, guilty feeling for not fulfilling expectation, and denial of access to health care were significantly associated with CMD adjusting for other possible confounding factors. In the same way, the qualitative findings identified mal-adaptation, individual susceptibility, severe abuses, painful experiences, and guilty feeling for not fulfilling the goal of the migration as causes of mental distress in Ethiopian labour migrants returned from the Middle East countries. A qualitative study done on return migrants in Ethiopia revealed that sexual violence, physical violence, emotional abuse, starvation, imprisonment, and difficulty adapting to a different culture were sources of mental trauma of Ethiopian labour migrants in the Middle East countries [17].
The study identified that only a few organizations were providing rehabilitation and reintegration services for mentally ill female returnees, with very limited resources reaching a small segment of the larger population of migrant returnees [21]. Increased focus on mental health nationally and internationally, the training of a relatively large number of mental health professionals, the decentralization of mental health care services, support from the government, the presence of St. Amanuel mental hospital, and the dedication of mental health care staff are existing opportunities for the actors working on the mental health care of return migrants. The major challenges of the actors identified in this study are lack of adequate funding, limited capacity to expand services, the difficulty of assessing and following unaccompanied cases, low awareness of and stigma towards mental illness, and a high need for economic support. Some of these challenges are also reported by other studies. One study reported that mental health care actors are struggling to find consistent funding to maintain their services and only provide assistance to those in dire need [27]. Another study indicated that these organizations are overwhelmed by huge demand with limited capacity [21].
Strength and limitations of the study
The strength of this study lies in the mixed-methods design, which enabled it to assess the magnitude of mental distress among return migrants and the different dimensions of the mental health care services available to them using quantitative and qualitative approaches. The study also has a limitation: the findings may not reflect the situation among all migrants returning from elsewhere, as it focused only on those returning from Middle Eastern countries and during a specific time period.
Conclusion
The prevalence of CMD is high among Ethiopian migrants who returned from the Middle East countries. Lack of pre-migration preparation and unsafe migration, along with the Kafala sponsorship system, expose Ethiopian labour migrants to abuse, mistreatment, exploitation, and rights violations. A high proportion of Ethiopian labour migrants to the Middle East countries experienced traumatic events, including physical abuse, verbal abuse, sexual violence, detention, and denial of salary.
The factors found to be significantly associated with CMD in return migrants were education, physical abuse, salary earning, history of mental illness in the family, detention, guilty feeling for not fulfilling expectations, and denial of access to health care. Despite the high burden of mental disorder among return migrants, only a few organizations were working on mental health care targeting returnees, with very limited resources reaching a small segment of return migrants with mental health problems. The mental health care services were primarily targeted at and provided for female return migrants, while male return migrants with mental disorders were neglected. Lack of adequate funding, limited capacity to expand services, the difficulty of assessing and following up unaccompanied patients, low awareness of and stigma towards mental illness, and returnees' high need for economic support are the main challenges for the organizations working in the area. | 2020-07-23T15:05:44.167Z | 2020-07-23T00:00:00.000 | {
"year": 2020,
"sha1": "3f17528f59b4be80d532170fc3ba9c7c30b7fde0",
"oa_license": "CCBY",
"oa_url": "https://bmchealthservres.biomedcentral.com/track/pdf/10.1186/s12913-020-05502-0",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3f17528f59b4be80d532170fc3ba9c7c30b7fde0",
"s2fieldsofstudy": [
"Sociology",
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
15736569 | pes2o/s2orc | v3-fos-license | Towards a better integrated stroke care: the development of integrated stroke care in the southern part of the Netherlands during the last 15 years (Special 10th Anniversary Edition paper).
Introduction Stroke care is complex and often provided by various healthcare organisations. Integrated care solutions are needed to optimise stroke care. In this paper, we describe the development of integrated stroke care in the region of Maastricht during the last 15 years. Description of integrated care case Located in the south of the Netherlands, the region of Maastricht developed integrated stroke care to serve a population of about 180,000 people. Integration was needed to improve the continuity, coordination and quality of stroke care. The development of integrated care in Maastricht was a phased process. The last phase emphasized early discharge from hospital and assessing the best individual rehabilitation track in a specialized nursing home setting. Discussion and lessons learned The development and implementation of integrated stroke care in the region of Maastricht led to fewer days in hospital, more patients being directly admitted to the stroke unit and an earlier start of rehabilitation. The implementation of early discharge from the hospital and rehabilitation assessment in a nursing home led to some unforeseen problems and lessons learned.
Introduction
There will be a marked increase in the number of stroke patients in Europe over the next decades [1]. By the year 2020, 250 per 100,000 inhabitants of the Netherlands will suffer from a stroke, often with subsequent permanent disabilities and handicaps as a consequence [2]. In terms of costs, stroke is among the most expensive diseases in the Netherlands with a total of 1.5 billion euros accounting for 2.2% of total annual health care costs [3]. Today, optimising stroke care in order to satisfy the demands for care, to enhance patient satisfaction and to be cost-effective is an important field of research worldwide [4][5][6].
Several randomised controlled trials have already shown that stroke care organised in hospital stroke units leads to a reduction in mortality, less dependency on care and a decrease in long-term institutionalised care [7]. During the last decade in the Netherlands, in addition to the development of hospital stroke units, there has been a trend towards the development of integrated regional stroke services, leading to more integrated care for stroke patients, to increase satisfaction among patients and caregivers, and last but not least, also leading to more cost-effective care [8]. This trend fitted also in an international trend towards integrated stroke care services [9,10].
Nowadays, in accordance with the Helsingborg Declaration on European stroke strategies, stroke patients in the Netherlands are part of a continuous care chain from the moment the stroke occurs [11]. This continuous care chain is often embedded in stroke services which are an organisational model of integrated care for stroke patients. Integrated care can be seen as the result of multi-pronged efforts to promote a coherent set of methods and models on the funding, administrative, organizational, service delivery and clinical levels, designed to create connectivity, alignment and collaboration within and between the cure and care sectors, to enhance quality of care and quality of life, consumer satisfaction and system efficiency for patients with complex problems which cut across multiple services, providers and settings [12].
The last decade before the millennium was extremely important for the development and innovation of stroke care in the Netherlands. Changes were necessary because healthcare in the Netherlands, as well as in many other Western countries, was very fragmented. Stroke care lacked continuity, coordination, and communication often resulting in long hospital stays for stroke patients [13]. To improve this situation, a better coordination and cooperation between professional caregivers, often working for different care organisations, was strived for. The development of organised stroke care in the Netherlands, with specific stroke units and stroke services, started in the 1990s, stimulated by the Dutch Heart Association, which also released publications describing a step-by-step setup for stroke units and stroke services [14,15]. National guidelines were developed, providing stroke care professionals with evidence-based recommendations for delivering optimum care [16,17]. To further facilitate the implementation of stroke services, the Dutch Institute for Healthcare Improvement started a series of breakthrough projects for stroke care. In this nationwide effort, different regions were supported in implementing the integrated delivery of stroke service care [18].
Currently there are just over 100 general hospitals in the Netherlands, of which more than 70 participate in providing services for stroke patients. Although there are regional differences, most of these stroke services are collaborations between a general hospital, one or more nursing homes, a rehabilitation clinic and home care organisations. Most of the Dutch stroke services are affiliated with a knowledge network ("Kennisnetwerk CVA Nederland") that strives towards implementing the goals set by the Helsingborg Declaration.
Besides providing chronic continuing care for somatic and psychogeriatric patients, nursing homes in the Netherlands have a specific geriatric rehabilitation function, whereas rehabilitation centres focus primarily on the rehabilitation of younger patients, who can cope with a more intense rehabilitation programme. Accordingly, Dutch nursing homes play a substantial role in integrated stroke care service, especially in the rehabilitation of elderly stroke patients. After hospital discharge, 32% of stroke patients return to their home, 9% are discharged to a rehabilitation centre and 31% are rehabilitated in a nursing home [8]. Dutch nursing homes employ their own nursing, paramedical and psychosocial staff and, in contrast to other Western countries, the medical treatment of nursing home patients is an officially recognized medical discipline, and nursing home physicians are specifically trained in this specialization. The nursing home sector in the Netherlands is mainly a non-profit sector, covered by a mandatory (national) insurance system for all citizens, the Exceptional Medical Expenses Act [19]. In 2001, with extra funding from the Exceptional Medical Expenses Act the stroke rehabilitation function of nursing homes was stimulated even further.
In the Netherlands, the development of integrated stroke care has stimulated the promotion of integrated care for other specialist services as well. Especially in the comprehensive care for diabetic patients and in the care for frail and disabled elderly people, who often have complex care needs due to multiple co-morbidities, integrated care programs are now being developed and implemented nationwide [20,21].
From 1996 onwards, integrated stroke care services in Maastricht as well as in other regions started to thrive, due to the expected effectiveness of thrombolysis as a treatment for stroke [23]. This encouraged hospitals to enlarge their stroke unit capacity, enabling every stroke patient to be admitted directly to the stroke unit after the onset of stroke. In order to better coordinate the flow of stroke patients through the health care chain, the integrated care model for stroke patients was designed, later evolving into the stroke service Maastricht. The stroke service Maastricht involves collaboration between general practitioners, neurologists, rehabilitation specialists, nursing home physicians, psychologists, nursing staff, district nursing, physiotherapists, speech therapists, occupational therapists and dieticians working for the academic hospital, the nursing home, the rehabilitation centre and in primary healthcare.
The total development process was characterised by four phases. During the first phase, which started in 1996, the focus was on achieving a better degree of cooperation between caregivers within the academic hospital itself. Next to this, caregivers of regular community care were involved. A protocol was developed in which the care process was described from the moment of stroke onset until discharge to the home situation and a collaborative training programme for visiting nurses, physiotherapists and general physicians was setup.
The goals set were:

1. The development of a care process in which as many stroke patients as possible could be admitted directly to the stroke unit of the academic hospital, as quickly as possible after the onset of stroke.
2. The duration of hospital stay for stroke patients should be as short as possible.
3. Community care, treatment and follow-up should start immediately after hospital discharge.
The second phase in the development of the stroke service Maastricht started in the year 2000. During this phase the emphasis was on the structured participation of the nursing homes in the region. The two nursing homes which in fact already participated in the stroke service, but not in a structured way, were willing to reserve a total of 21 beds for older stroke patients who could be discharged from the academic hospital but couldn't yet return home. The two nursing homes committed themselves to admitting stroke patients within 10 days after referral from hospital. To facilitate this fast transition, an agreement had to be reached with the central indicating commission for care (CIZ). In the Netherlands, the CIZ is charged with the assignment of care provided by nursing homes. Normally this means that patients need to be visited by a CIZ employee before being approved for rehabilitation in a nursing home.

In comparison to the approaches to integrated stroke care in other countries, for instance the development of hyper-acute stroke units (HASUs) in Great Britain, the Dutch experience differs because of the unique abilities and positioning of Dutch nursing home care.
HASUs are developed to enable more patients being treated with thrombolytic drugs by concentrating acute care for stroke patients in a few specialised centres, enabling admission and treatment of stroke patients 24 hours a day, 7 days a week. Patients admitted to a HASU will receive acute care for up to 72 hours after which they will be transferred to a stroke unit, also in the hospital setting, for further care and rehabilitation.
In the region of Maastricht, stroke patients are able to receive acute stroke care 24 hours a day, 7 days a week in the academic hospital and subsequently they are transferred to a nursing home for further assessment and rehabilitation.
This paper describes the development and changes over time of integrated stroke care in the south of the Netherlands, specifically in the region of Maastricht. In this development process, several phases can be distinguished which give insight into national and local factors that play a role in the integration of stroke care in the Netherlands. Parts of the changes in this development process were related to evaluations performed. In the last phase of this development, the stroke service underwent the last reformation, which will be described in detail.
Towards integrated stroke service care in the Maastricht region
The Maastricht region has about 180,000 inhabitants; it is situated in the southernmost part of the Netherlands, close to the borders of Belgium and Germany. Maastricht has only one hospital, with 715 beds; this hospital provides standard medical care for the region, and also serves as an academic centre for about 1.1 million inhabitants. In 2010, 365 stroke patients were admitted to the academic hospital and received care within the stroke service Maastricht. The mean age of these stroke patients was 70 years (standard deviation 15).
Integrated care for stroke patients was not available in Maastricht before 1996. Stroke patients were treated by various individual health care providers without any coordination. In that period, the average hospital stay for stroke patients was 28 days, during which the patient received little rehabilitation therapy. In view of the importance of starting rehabilitation as soon as possible after stroke, this represented suboptimal care for recovery [22].
During such an assessment visit, the CIZ employee judges the clinical information from the hospital related to the functional status and prognosis of the patient. However, this may take a couple of days and lead to an unnecessary delay in the care process. Therefore it was agreed that stroke patients could be admitted directly to the nursing home, without waiting for a CIZ employee visit. The official indication could be provided at a later date.
This phase in the development of the stroke service Maastricht in fact ended with the results of a study conducted in Maastricht. This study compared stroke service care in Maastricht with care for stroke patients in a region without a stroke service. The results showed that 6 months after stroke 64% of the surviving patients in Maastricht could be discharged to their own homes, in comparison with 42% in the care as usual group, which was more fragmented and without any coordination [24].
In 2002 the third phase started. In this phase specific attention was paid to further improving the quality of stroke care by implementing all relevant recommendations from the most recent national guidelines on rehabilitation after stroke [17]. In addition, much work was done on improving communication and coordination between professional caregivers within and amongst organisations participating in the stroke service, by improving for instance, the quality of the transitional information. Agreement was also reached on which clinimetric tests should be used throughout the care chain. Clinimetric tests like the Assessment of Motor and Process Skills (AMPS), Barthel Index (BI) or the mini-mental state examination (MMSE) provide information on different functional levels. Using the same clinimetric tests at set times makes it possible to monitor a patient's progress and improve communication between caregivers about the condition of the patient as well as about changes in this condition. Furthermore, care after discharge from the nursing home was improved as well, and structural education on the handling of psychological and behavioural effects of stroke was initiated for the patients and their caregivers.
After this phase, the development of organised stroke care in the region of Maastricht had resulted in a complete stroke service model, with the participation of an (academic) hospital, a large nursing home organization, a rehabilitation centre and a home care organisation and the model complied with the required model of stroke services in the Netherlands. Figure 1 depicts the model of Dutch stroke care in that time.
The fourth phase was developed after an evaluation of the integrated stroke service in 2004, which will be discussed below.
Evaluation of the integrated stroke service Maastricht
Integrated stroke care in the region of Maastricht is constantly being monitored, not only by an implemented electronic registration system that enables the gathering of a set of important indicators on the quality of stroke care, but also by means of scientific studies which are regularly being carried out [24][25][26]. All evaluations are initiated by a steering committee consisting of representatives of all health organisations participating in the stroke service.
In 2004, the integrated stroke care service in Maastricht and its surrounding region was analysed scientifically for the first time, because the average hospital stay of a stroke patient still amounted to 12 days and not all stroke patients could be admitted directly to the hospital's stroke unit. The study, carried out by Vos et al. [25], consisted of a process analysis, the identification of bottlenecks, the setting of goals and the selection as well as the implementation of coordination measures. The effects were measured by means of length of hospital stay and the number of patients admitted to non-specialised wards. This analysis identified several barriers in the existing care process.
Description of the redesigned integrated stroke service Maastricht
The redesigned integrated stroke service Maastricht involves a critical care pathway for stroke patients admitted to the academic hospital. In this redesigned care pathway every stroke patient is admitted directly to the hospital stroke unit. Most are referred by general practitioners and brought to the emergency ward of the hospital by ambulance, but some come on their own initiative without first consulting a general practitioner. In the emergency ward acute diagnostic tests take place. In cases of confirmed stroke, the patient will be admitted to the stroke unit of the academic hospital, where further diagnosis and treatment, including thrombolysis if indicated, are performed.
Subsequently, the redesigned care model consists of a strict discharge regime for all stroke beds from the neurology ward of the academic hospital. All necessary tests and treatment in the hospital should be performed within 5 days after admission. Thereafter, in principle, all patients, regardless of their age, will be discharged to the stroke ward of the nursing home, where a comprehensive assessment takes place ( Figure 2). Only patients who can be discharged home within 5 days after admission and those who are medically unstable will not be transferred from the hospital to the nursing home within 5 days.
The nursing home physician examines each patient immediately on arrival in the nursing home and initiates the assessment program. In this program a multidisciplinary team consisting of a psychologist, physiotherapist, occupational therapist, speech therapist and trained nurses examine the patient, performing a structured assessment protocol. Following this assessment, the team will meet within five days of the patient's admission to make recommendations for a rehabilitation program specifically tailored to the patient. Their advice will be based on admission and discharge criteria formulated by the various care providers participating in the stroke service. There is a structured possibility for the nursing home physician to consult a rehabilitation physician if needed. After the multidisciplinary meeting, the patient and his family will be informed about the proposed rehabilitation track; if they approve this track can be started.
There are three options for a rehabilitation track after the assessment in the nursing home:

1. Rehabilitation at home, with home care and outpatient treatment provided by therapists from primary healthcare, or day care rehabilitation in a hospital or nursing home: in cases of fast functional recovery after stroke, with the availability of adequate informal care and a safe environment at home.
2. Inpatient rehabilitation in a nursing home: in cases where stroke patients need a prolonged rehabilitation trajectory of a lower intensity.
3. Inpatient rehabilitation in a rehabilitation centre: in cases where the patient is in need of high-intensity rehabilitation and/or reintegration into regular work activities.
A first barrier involved the insufficient capacity of the stroke unit of the academic hospital. Because of this, 31% of stroke patients were not admitted directly to the stroke unit. A second barrier was presented by the time needed for the initial diagnostic tests (such as a CT scan, Echo Doppler of the carotid artery, cardiac echo, or a 24-hour ECG) and medical consultations to be carried out in the academic hospital. These diagnostic tests and consultations should have been carried out at the time of admission, but actually this took approximately three days. A third barrier was formed by the low frequency of the multidisciplinary meetings, which took place only once a week. The multidisciplinary meetings are meant to evaluate the triage process and to determine the further rehabilitation track for each individual patient. The low frequency of the meetings caused an increase in the length of hospital stay for patients who otherwise could have been discharged home earlier.
A final barrier was formed by the waiting times for admission to the rehabilitation clinic and the nursing homes. All these barriers resulted in an average hospital stay of 12 days, of which on average 3 were superfluous from a medical perspective.
The identification of these barriers mandated a further redesign of the integrated stroke service Maastricht. This can be seen as the fourth phase in the development of the stroke service. Even more than in the past, the emphasis in this phase was laid on faster discharge from the academic hospital by better coordination and planning of initial diagnostic tests and consultations.
Apart from this, the multidisciplinary assessment and its related multidisciplinary meetings, to determine the best rehabilitation track (triage phase), which originally took place in the hospital, were transferred to the nursing home. In addition, the existing protocol for the rehabilitation of stroke patients in the nursing home had to be extended to incorporate an initial multidisciplinary assessment.
Because patients would be discharged much faster from hospital in the adapted model, the flow of patients to the nursing home was expected to increase, and therefore more nursing home beds were needed for assessment and rehabilitation. Accordingly, nursing home management decided to enlarge the nursing home stroke ward from 21 to 30 beds. Moreover, all 30 stroke beds were positioned in a single nursing home ward.
To assure that this nursing home stroke ward was able to receive new stroke patients at all times, the ward's patient outflow had to be guaranteed. Therefore, stroke patients who had finished their rehabilitation in the nursing home but could not be discharged home were given priority in finding a permanent bed for continuing long-term care in a residential or nursing home ward of the participating nursing home organisation.
The redesigned integrated stroke service Maastricht is displayed in Figure 3. This new care model for stroke patients was implemented in January 2006. During the first four months following implementation, data on the duration of hospital stay and admission directly to the stroke unit were collected for all stroke patients admitted to the academic hospital. The data showed that the duration of hospital stay had decreased to an average of 7.3 days and that the percentage of stroke patients who could not be admitted directly to the stroke unit had decreased to 2% [25]. Although this study showed a decrease in hospital stay and in the number of patients who could not be admitted directly to the stroke unit, the study did not take into account functional outcomes at patient level, quality of life and satisfaction with care. Accordingly, the question remained whether in the new care model hospital stay was decreased without having a negative effect on other outcomes, such as the patient's functional level, quality of life or satisfaction with care. To answer these questions and to depict the total costs of this stroke care model, a study is presently investigating the cost-effectiveness of the new care model [26]. This cost-effectiveness study consists of an effect evaluation, an economic evaluation and a process evaluation. The design of this study involves a non-randomised comparative trial for two groups. The participants are followed for six months from the time of stroke. The main outcome measures of the effect evaluation are quality of life and daily functioning. In addition, an economic evaluation will be performed from a societal perspective. A process evaluation will be carried out to evaluate the feasibility of early discharge and assessment in a nursing home, as well as the experiences and opinions of patients and professionals. The first results of this study can be expected as early as July 2012.
Lessons learned
Before the implementation of the last redesigned stroke service model could begin, stroke care professionals of different backgrounds worked together to define appropriate adaptations of the initial stroke care protocol. In the new stroke care protocol, admission and discharge criteria were formulated for every link in the stroke care chain and agreement was reached on what tests should be done by which professional at what point of time. Furthermore, the information needed for the effective transition of patients throughout the care chain was evaluated and adjusted. Despite this careful preparation of the new stroke care protocol, the implementation brought forward some unforeseen problems. These problems were expressed in contacts with the different stake holders of the stroke service including patients and health care professionals.
First, the relative unfamiliarity of patients in the region of Maastricht and its surroundings with the possibilities of assessment and rehabilitation in a nursing home caused a problem. Experiences with the first patients showed that in general patients didn't associate a nursing home with a quick discharge to their own home, but with a long or even permanent residency. Therefore, some patients initially refused their admission to the nursing home, but the hospital staff almost always succeeded in convincing them that this was the fastest way of starting rehabilitation. To change patients' views and to actually show the possibilities of rehabilitation in a nursing home, a better way of providing information to patients and their caregivers was arranged. Verbal information given by the hospital nursing staff was supplemented by a DVD which showed the different rehabilitation tracks in detail. This DVD was given to every stroke patient who was admitted to the academic hospital and to their primary caretaker.
Second, for the healthcare professionals working in the new stroke care model, early discharge from the academic hospital in combination with assessment in the nursing home implied a shift in tasks. Some professionals in the hospital lost their function in the assessment of stroke patients, when that was adopted by the professionals in the nursing home. As an earlier study by van Raak showed, this can be perceived as a threat by some of the hospital professionals [27]. For instance, the rehabilitation specialists in the hospital, who lost their coordinating role in the triage process, had some difficulty in adapting to this shift, particularly in relation to their decision-making power. In the old stroke care model, the rehabilitation specialist coordinated the decision on the type of rehabilitation track the stroke patient should follow after hospital discharge. In the new care model, the triage function of the academic hospital and the related multidisciplinary meetings were transferred to the nursing home team, supervised by the nursing home physician, with only a consulting role for the rehabilitation specialist. In practice this occasionally caused a difference in opinion, but subsequent adequate communication always led to a patient friendly solution.
Third, in the new model, the patient's transfer from the hospital to the nursing home is coordinated by the "discharge office of the academic hospital". A staff member of this office visits the patients prior to discharge, informs them of the rehabilitation track to be followed and arranges transfer, if needed. This function is vital for maintaining an adequate and continuous patient flow. But because the two employees consigned to this task initially hadn't coordinated their working hours, transfers could not always be planned in time. A better coordination of working hours solved this problem.
Fourth, another unforeseen problem was that the transport of the patients from the academic hospital to the nursing home hadn't been discussed with the ambulance service before the start of the new care model. Because the ambulance service maintained previously made arrangements, patients often arrived at the nursing home too late in the day to start the assessment on arrival. This problem was solved by making good additional arrangements with the ambulance service.
Fifth, labelling extra beds for stroke patient assessment in the nursing home meant that the hospital became less 'vulnerable' to fluctuations in patient flows, in contrast to the nursing home, which needed extra capacity to cope with patient flow fluctuations. In times of low demand for stroke beds, the hospital was able to fill its beds with other neurology patients whereas the nursing home could not. In order to fulfil their part in the stroke service, the management of the nursing home was willing to keep their designated beds, even when unoccupied, and bear the subsequent financial losses. Nowadays these problems are solved by additional reimbursement for nursing homes.
Reviewers

"Optimizing integrated stroke care means knowing and using the abilities of different healthcare providers for a common purpose. In the Netherlands, nursing homes, with their unique ability to equally participate in the rehabilitation of mostly elderly patients, take away the pressure from acute care providers, not only as part of a stroke service but also as part of other integrated care models." (Grace Warner, PhD, Dalhousie University, Halifax, Nova Scotia, Canada)
It can be concluded that by gradually altering the structure of the conventional stroke service model we have created a new care model that, based on evidence elsewhere, we expect to shorten the duration of hospital stay and lead to lower costs. Moreover this new model may have positive effects on patients' functional outcomes, quality of life and satisfaction with care.
Currently we are investigating the added value of this new model. If the expected positive effects are established, the model might also be tested in integrated care models related to other chronic diseases. In this respect we can think of patients with chronic heart failure or of elderly patients who often stay hospitalised unnecessarily long because of their multimorbidity and complex care problems. | 2016-05-04T20:20:58.661Z | 2012-05-25T00:00:00.000 | {
"year": 2012,
"sha1": "e2d3e74c50dbe0232c576ea492cfe64bce5387d4",
"oa_license": "CCBY",
"oa_url": "http://www.ijic.org/articles/10.5334/ijic.744/galley/1556/download/",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e2d3e74c50dbe0232c576ea492cfe64bce5387d4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
122344150 | pes2o/s2orc | v3-fos-license | Forces from the rear: deformed microtubules in neuronal growth cones influence retrograde flow and advancement
The directed motility of growth cones at the tip of neuronal processes is a key function in neuronal path-finding and relies on a complex system of interacting cytoskeletal components. Despite intensive research in this field, many aspects of the mechanical roles of actin structures and, in particular, of microtubules throughout this process remain unclear. Mostly, force generation is ascribed to actin–myosin-based structures such as filopodia bundles and the dynamic polymer gel within the lamellipodium. Our analysis of microtubule buckling and deformation in motile growth cones reveals that extending microtubule filaments contribute significantly to the overall protrusion force. In this study, we establish a relationship of the local variations in stored bending energy and deformation characteristics to growth cone morphology and retrograde actin flow. This implies the relevance of microtubule pushing and deformation for general neurite advancement as well as steering processes.
Introduction
At the tip of outgrowing neuronal processes (axons), a highly complex sensory-motile structure termed a growth cone (GC) develops to perform guided growth of the neurite driven by the dynamic cytoskeleton. The GC converts external signals (e.g. gradients of chemoattractants/-repellants, electrical and mechanical stimuli) into internal rearrangements of the self-assembling biopolymers of the cytoskeleton. These are mainly actin filaments (F-actin) and tubulin microtubes (microtubules (MTs)). This in turn influences GC morphology and enables directional changes. It is well established that the interplay of actin and MT dynamics in the GC is crucial for the directed outgrowth of neuronal extensions [1][2][3][4]. Since actin filaments are rather flexible biopolymers with a persistence length of l P ≈ 17 µm [5,6], only structures built from multiple filaments are mechanically relevant for GC dynamics. These are found in the peripheral domain (P-domain) of the GC where a dense sheet (∼500-600 nm in height) of dynamically cross-linked actin filaments (often referred to as actin gel) is interspersed with radially oriented bundles of aligned filaments termed filopodia. MTs, in contrast, have a comparatively high rigidity with l P > 700 µm [7] and thus individually influence local GC dynamics. They are usually bundled within the neurite shaft from which a subpopulation extends into the P-domain where they splay apart in all angular directions.
These exploring MTs encounter various obstacles on their polymerization path formed by the aforementioned actin structures. During all phases of GC motility, the actin gel polymerized at the leading edge is transported toward the central domain (C-domain), mainly through myosin motor proteins that attach to F-actin and act as contractile force dipoles [8]. This retrograde flow (RF) opposes the growth of exploring MTs [9,10] and can lead to their deformation and back-transport within the P-domain [11]. However, while the retrograde movement of the actin network obstructs MT outgrowth, radial filopodia can provide optimal polymerization pathways for these filaments [12]. In regular growth phases, the actin bundles in filopodia are predominantly aligned with the RF. MTs attached to and oriented along those bundles reduce the flow-exposed area to their cross-section and thus minimize the forces opposing their advancement. In a way, dynamic actin structures both hinder and promote the exploration of peripheral regions by MTs. This already indicates a system of highly complex interactions. Not all MTs that are able to reach the P-domain of a GC have to be aligned with filopodia [13], but due to the reduced structural support, free MTs have a higher tendency to buckle under the forces they exert against the flow of actin material. An overview of the complex GC structure is shown in figure 1. For further reading regarding GC structure and function, we recommend the review by Lowery and Vactor [14].

Figure 1. GC structure. F-actin in the P-domain polymerizes and pushes against the leading edge. At the same time it is moving backward in a RF driven by motor protein activity in the T-zone. In the C-domain, filaments depolymerize and replenish the pool of monomers available for (re-)polymerization. Fluctuations in RF and polymerization lead to a net advancement or retraction of the edge. MTs originate in the axon and push into the GC where their advancement is opposed by the flow of actin material.
Previous studies have shown that GCs with reduced myosin activity contain a larger number of MTs that reach the P-domain [4]. Axons exposed to actin depolymerizing agents are still able to grow out [15] and even reach greater lengths [16]. These early findings already point out that the actin machinery is not the only motor driving neurite outgrowth. In fact, the competition of two counteracting systems seems necessary for maintaining a regulated balance. Contractile forces of the active actin-myosin network are opposed by a dynein-MT-based counterpart driving extension by pushing the whole machinery from within [10,17]. The forward forces of polymerizing MTs in vitro are in the pN range [18,19]. In combination with dynein or other MT-based motors, it is more than likely that they also contribute to the overall forces required for GC advancement.
MTs that explore areas outside the C-domain without the mechanical support of filopodia tend to buckle due to the compressive forces originating from counteracting MT and actin dynamics. Forces exceeding the critical buckling limit F crit result in the transition from a straight to a deformed configuration [20,21]. After the buckling event, MTs get further deformed and (like springs) are able to store substantial amounts of bending energy. In the situation given, one has to consider that these MTs are mechanically supported by the actin gel they are embedded in. This additional support allows MTs to bear and exert even higher forces without buckling than one would expect for 'free' filaments, e.g. in an aqueous solution [22,23]. The fact that extending MTs are at the threshold between un-deformed and buckled states makes them excellent inherent tools to investigate the forces acting in cellular motility processes.
Our analysis of MT deformation shows that forces generated by MTs invading the GC periphery are in the same order of magnitude as the overall GC protrusion force. Hints at a correlation between MT energy density and GC protrusion as well as RF dynamics support the idea that MT distribution and deformation rate are linked to neurite advancement and turning, which might also have implications for mechanical path-finding.
Microtubule curvature analysis in growth cones
We analyzed the deformed shape of MTs in the GCs of NG108-15 cells transfected with an MT-binding fluorescent marker using a semi-automated detection algorithm based on that described in [24] (see also section 4). From the fluorescent images, the position of the MT center line was extracted and the local curvature was calculated. The cells were additionally transfected with RFP LifeAct plasmids to visualize actin structures and GC morphology. Figure 2 shows a laser scanning image (a) and an overlay of the actin cytoskeleton channel with the color-coded result of the MT curvature analysis (b) for a section of a GC. The full time lapse series of a large stationary GC can be seen in supplementary movie M1 (online supplementary data available from stacks.iop.org/NJP/15/015007/mmedia). In agreement with previous studies we could observe dynamic instability behavior, the bending and looping of single MTs or bundles as well as numerous cases of MTs aligned with filopodia in the P-domain (see examples in figures 2(c)-(f)). The curvature of MTs we analyzed typically ranged from 0 (straight) to 2 µm⁻¹ (corresponding to a radius of curvature of 0.5 µm). With this tool at hand we were able to further investigate the deformation of buckled MTs in the P-domain as well as the overall relation between MT curvature and GC dynamics.

For the calculation of the critical force F_crit required to initiate buckling, individual MTs in the P-domain of GCs were selected and separately analyzed. Rods embedded in an elastic medium tend to buckle in higher modes of deformation with smaller amplitudes, reducing the energy dissipated in medium deformation. Hence, instead of the total length of the rod (as is the case for classical Euler buckling), the buckling wavelength λ_b is the characteristic length scale that determines the critical force [22]. This wavelength was measured by detecting the positions on the filament with the highest curvature (red circles in figure 3(a)). Assuming a sine wave as the underlying function for the buckling deformation, the distance along the contour between these points corresponds to half the buckling wavelength. Note that the distance along the filament contour corresponds to the axial distance between these points at the beginning of buckling. This is important since we may analyze MTs at different post-buckling states and need to approximate the initial buckling wavelength. Applying this method we found an average buckling wavelength of λ_b = 4.02 ± 1.48 µm (n = 51; all the values presented are means ± standard deviation unless otherwise stated). To compare our result with theoretical considerations, the buckling event can be described within constrained buckling theory, which relates λ_b to the flexural rigidity κ = EI of the rod and the elastic modulus G of the surrounding medium [23]. Using an elastic modulus of G ≈ 27 Pa [25] and a flexural rigidity κ between 0.4 and 2.15 × 10⁻²³ N m² for MTs [5,7], the theoretical wavelength would be between 3.03 and 4.62 µm.
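The geometric part of this wavelength estimate can be made concrete with a short sketch (Python, shown below). It computes the local curvature of a discretised filament contour, locates the curvature maxima and doubles their mean arc-length spacing to approximate λ_b. The function names and the synthetic test contour are illustrative only; this is not the original Matlab analysis.

import numpy as np
from scipy.signal import argrelextrema

def arc_length(x, y):
    # cumulative arc length along the contour
    ds = np.hypot(np.diff(x), np.diff(y))
    return np.concatenate(([0.0], np.cumsum(ds)))

def local_curvature(x, y):
    # parametrisation-invariant curvature from first and second finite differences
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return np.abs(dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5

def buckling_wavelength(x, y):
    s = arc_length(x, y)
    c = local_curvature(x, y)
    peaks = argrelextrema(c, np.greater, order=5)[0]   # positions of curvature maxima
    if len(peaks) < 2:
        return np.nan
    half_wavelengths = np.diff(s[peaks])               # contour distance between maxima
    return 2.0 * np.mean(half_wavelengths)             # lambda_b is roughly twice that distance

# illustrative test: a sinusoidally buckled filament with a 4 um wavelength
s = np.linspace(0, 12, 600)                            # um
x, y = s, 0.3 * np.sin(2 * np.pi * s / 4.0)
print(buckling_wavelength(x, y))                       # ~4 um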
Having confirmed that the observed wavelengths are well within the expected range, the corresponding forces were computed using F_crit = 8π²κ/λ_b² [21,23]. This results in an average critical buckling force of 147.34 ± 89.82 pN per MT. However, F_crit is the force necessary to buckle a previously straight MT, which might not always be given in the dynamic environment of a motile GC. Thus, to avoid over-estimation of the forces, we additionally calculated the (smaller) restoring force F_res that is exerted by an already buckled rod against further axial compression. F_res depends on the contour length L of the buckled rod (in our case L ≈ λ_b) and the distance d between its endpoints in the deformed state [26]. This yields F_res = 69.49 ± 43.97 pN for a single MT. Interestingly, the measured values of λ_b (and thus the calculated values for F_crit and F_res) are not normally distributed but produce two distinct peaks (figure 3(b)). These are located at 2.81 ± 0.64 and 4.39 ± 0.39 µm (combined normal fits), indicating two populations of MTs buckled under different conditions. The corresponding force peaks are located at 76.77 and 208.90 pN for F_crit and at 35.25 and 101.40 pN for F_res.
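For orientation, the critical force belonging to a given buckling wavelength can be evaluated directly from the expression above. The short sketch below (Python) does so for the quoted range of MT flexural rigidities; the numbers are order-of-magnitude estimates that depend on the assumed κ and are not meant to reproduce the exact values reported here.

import numpy as np

def f_crit(kappa, lam):
    # critical buckling force for a rod in an elastic medium: F_crit = 8*pi^2*kappa/lambda^2
    return 8 * np.pi ** 2 * kappa / lam ** 2

kappas = (0.4e-23, 2.15e-23)        # MT flexural rigidity range quoted in the text, N m^2
for lam_um in (2.81, 4.02, 4.39):   # buckling wavelengths reported above, um
    lam = lam_um * 1e-6             # convert to metres
    forces_pN = [f_crit(k, lam) * 1e12 for k in kappas]
    print(f"lambda_b = {lam_um} um -> F_crit between {forces_pN[0]:.0f} and {forces_pN[1]:.0f} pN")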
Bending energy stored in deformed microtubules
As a measure for MT deformation in different areas of the GC, the bending energy per unit length dU/ds was calculated in terms of elastic beam bending theory [20,21]. The energy stored in the infinitesimal element ds of a deformed elastic filament with a circular cross-section can then be approximated by dU = (κ/2)(dφ/ds)² ds, with the local curvature dφ/ds derived from our image analysis. In total we analyzed 13 image series of active GCs with recording times between 100 and 600 s, resulting in curvature map series as shown in supplementary time lapse movie M1 (online supplementary data available from stacks.iop.org/NJP/15/015007/mmedia). The mean values for dU/ds in each frame of each series were calculated and then averaged over time. The overall mean for the bending energy per length stored in the GCs' MTs is (1.00 ± 0.54) × 10⁻¹⁹ J µm⁻¹ (n = 13), corresponding to (23.35 ± 12.61) k_BT per µm or (0.0142 ± 0.0077) k_BT per tubulin dimer in the tube wall lattice (based on approximately 1640 dimers per µm [27]).
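Written out for a discretised contour, this amounts to summing (κ/2)·C² over the contour elements. The sketch below (Python) does this and expresses the result in units of k_BT; the curvature value, segment length and κ are illustrative choices, with a uniform curvature of 0.1 µm⁻¹ giving an energy density of the same order as the mean reported above.

import numpy as np

kB_T = 4.11e-21                    # thermal energy at room temperature, J
kappa = 2.15e-23                   # MT flexural rigidity, N m^2 (upper literature value quoted in the text)

def bending_energy(curvature_per_um, ds_um):
    # U = sum over contour elements of (kappa/2) * C^2 * ds, with C converted to 1/m and ds to m
    c = np.asarray(curvature_per_um) * 1e6
    return 0.5 * kappa * np.sum(c ** 2) * (ds_um * 1e-6)

# illustrative segment: 4 um of filament bent at a uniform curvature of 0.1 per um
curvature = np.full(200, 0.1)
u = bending_energy(curvature, ds_um=4.0 / curvature.size)
print(u / kB_T, "kB T in total;", (u / 4.0) / kB_T, "kB T per um")   # ~26 kB T per um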
The total amount of bending energy stored in all detected MTs of a GC ranges from 607 to 9409 k B T with a mean value of 2568 k B T. The scatter in these total energy values partially results from large variations in total MT mass. The P-domain of large GCs can hold tens of MTs while smaller versions only contain a few single filaments. Moreover, not all MTs present in the GC are detectable with our algorithm. Hence, these numbers only constitute a lower boundary for MT bending energies in GCs.
Bearing in mind the large forces we could ascribe to single deformed MTs, one would expect a correlation between the protrusion of certain areas of the cone and the local deformation of MTs in that area. This was investigated by dividing GCs into different regions of interest (ROIs) and comparing the average bending energy per length to the overall lamellipodium area change in the corresponding ROI. An example of ROI selection can be seen in figure 4(a). Typically, the GC was divided into two hemispheres and the actin-covered area within these ROIs was evaluated. For a better comparison between GCs of different sizes, all values of one sequence were normalized to the initial area in the first frame recorded. For the GC in figure 4(a) the respective relative area versus time plot can be found in figure 4(b). MT curvature was again measured in terms of bending energy per µm (figure 4(c)). Subsequently, the time-averaged dU/ds in each ROI was normalized to the overall mean found in the GC under investigation. Thus, relative energy values >1 (<1) correspond to above-average (below-average) MT curvature. As shown in figure 4(d), a clear trend emerges connecting lower MT bending energy with higher area losses (lamellipodium retraction), while regions with higher MT energy remain stationary or gain area (advancement).
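The two normalisations described here (area relative to the first frame, energy relative to the GC-wide mean) can be written compactly; the sketch below (Python) shows the idea on made-up numbers for two ROIs. All input values and names are placeholders, not data from the study.

import numpy as np

# per-frame actin-covered area (um^2) and mean bending energy per length (J per um) for two ROIs
area = {"left": np.array([120.0, 118.0, 115.0, 110.0]),
        "right": np.array([100.0, 103.0, 106.0, 108.0])}
du_ds = {"left": np.array([0.6e-19, 0.7e-19, 0.6e-19, 0.5e-19]),
         "right": np.array([1.3e-19, 1.4e-19, 1.5e-19, 1.4e-19])}

overall_mean_energy = np.mean(np.concatenate(list(du_ds.values())))

for roi in area:
    rel_area_change = area[roi][-1] / area[roi][0] - 1.0          # normalised to the first frame
    rel_energy = np.mean(du_ds[roi]) / overall_mean_energy        # >1 means above-average MT deformation
    print(roi, "relative area change:", round(rel_area_change, 3), "relative energy:", round(rel_energy, 2))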
Relating retrograde actin flow to microtubule deformation
In all states of GC motility (advancing, stationary, retracting), retrograde actin flow is directed against MT extension. Thus, it is likely that the deformation of MTs is directly related to the local speed of retrograde actin movement. A comparison of subsequent frames of a time series using feature-tracking and cross-correlation algorithms allows one to determine RF velocity within the P-domain of a GC. In three of the GCs under investigation, we were able to detect RF velocities ranging from 1 to 5 µm min −1 , which is in good agreement with values previously measured in GCs of the same cell type [28,29]. We found that RF varies temporally and spatially and often increases for tens of seconds in confined areas. We evaluated the MT behavior in GC sections where such bursts of RF activity occurred and observed an increase in MT deformation as a response to RF velocity increases. Figures 5(a) and (b) display a pair of MTs exposed to RF. While one filament is stabilized by a filopodium, the other gets deformed after a temporal increase in RF velocity. This deformation process is generally observed after RF bursts. In total, we were able to identify seven similar events, all relating higher MT deformation to RF bursts. An example can be seen in figure 5(c) where three bursts are followed by delayed deformation peaks. The relaxation of deformed MTs is accompanied by phases of decreased RF. The time delay between the bursts that mostly occur in the distal part of the P-domain and the corresponding peak in MT deformation ranges between 10 and 25 s and can be ascribed to transport phenomena transmitting peripheral actin flow activity to more centrally located MTs.
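The velocity estimate behind such RF maps can be illustrated with a minimal tile-matching sketch (Python, using scikit-image): one tile of a frame is located within a slightly larger search window of the following frame by normalised cross-correlation, and the displacement is converted into a velocity. Tile size, search margin, pixel size and frame interval below are placeholders rather than the values used in the original analysis.

import numpy as np
from skimage.feature import match_template

def tile_velocity(frame_a, frame_b, top_left, tile=32, margin=8, px_um=0.1, dt_s=5.0):
    # locate a tile of frame_a inside a larger search window of frame_b via normalised cross-correlation
    r, c = top_left
    template = frame_a[r:r + tile, c:c + tile]
    window = frame_b[r - margin:r + tile + margin, c - margin:c + tile + margin]
    score = match_template(window, template)
    dr, dc = np.unravel_index(np.argmax(score), score.shape)
    shift_px = np.array([dr - margin, dc - margin])          # displacement between frames, pixels
    return shift_px * px_um / (dt_s / 60.0)                  # velocity, um per minute

# illustrative frames: a random actin-like texture displaced by 2 pixels between frames
rng = np.random.default_rng(0)
frame_a = rng.random((128, 128))
frame_b = np.roll(frame_a, shift=(2, 0), axis=(0, 1))
print(tile_velocity(frame_a, frame_b, top_left=(48, 48)))    # ~[2.4, 0.0] um per minute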
Discussion
Important functions in neurite outgrowth have often been ascribed to the distribution of rigid MTs in the rather soft periphery of neuronal GCs [1,30]. In this section, we address the question of what contribution MTs can make to the overall forces generated by an advancing GC.
Microtubule buckling
Comparing the lower-estimate values F_res ≈ 30 pN of our study with the ≈ 100 pN that a GC can exert against an AFM cantilever (effective cross section: 1.15 µm²; Fuhs et al [25]) clearly highlights the relevance of single MT pushing forces for the advancement of the whole GC. Unexpectedly, the distribution of measured buckling wavelengths features two distinct peaks corresponding to two different forces responsible for the buckling events. If we assume constant MT stiffness, the two different buckling conditions must originate from variations in the surrounding actin gel. The lamellipodium is generally not homogeneous since two different states (on/off) of actin polymerization regulate edge dynamics [28,31] and result in different types of networks. These differ strongly in actin network structure and density [32], which very likely leads to two typical values for network elasticity. Calculating the respective actin network shear moduli with a constant MT bending rigidity of 2.15 × 10⁻²³ N m² [5] yields G_1 ≈ 33 Pa for the first peak and G_2 ≈ 200 Pa for the second peak. The latter, higher value, however, has to our knowledge never been reported in studies investigating GC elasticity [25,33]. Treating the lamellipodium as an active medium appears to be a more promising approach when aiming to explain the bimodal distribution. In their theoretical work on MTs in active actin gels, Kikuchi et al [34] show that a contractile environment can either weaken or strengthen an embedded stiff filament. The effect depends on the orientation of actin fibers and contractile elements relative to the MT (anchoring). While perpendicular anchoring leads to an effective weakening of the MT, predominantly parallel alignment creates additional mechanical support and impedes buckling. In relation to our findings, this implies that MTs buckled at longer wavelengths (smaller forces) are embedded in actin networks with fibers mainly oriented perpendicular to the MT, and those buckled at smaller wavelengths (higher forces) are effectively stiffened by parallel aligned fibers in their environment. Such populations of aligned and more randomly oriented actin filaments were observed earlier in electron microscopy studies of GCs [35]. Unfortunately, the orientation of actin filaments within the lamellipodium is not accessible with the applied techniques and requires further investigation.
Microtubule pushing as a unique motility mechanism
In comparison with other motile cell types the protrusion forces of GCs are rather low. Two studies employing identical AFM setups report ≈ 450 Pa forward pressure for epidermal fish keratocytes 4 [36] and ≈ 90 Pa for NG108-15 GCs [25]. In the context of our results, this means that already three MTs per µm 2 at the leading edge of a GC are sufficient to generate the aforementioned pressure. In addition, the lamellipodium of GCs is particularly soft [33]. Considering this, the contribution of pushing MTs appears even more significant for GC motility. In the dynamic lamellipodium of keratocytes, MTs are completely absent and these fast and persistently moving cells rely exclusively on the actin-myosin machinery to generate forces [37] which indicate that MTs are not essential for large forward forces but rather help to increase the system's versatility. GCs apparently have modified motility mechanisms that incorporate pushing MTs as important contributors to directed force generation. This in turn can be related to the fundamentally different biological functions of GCs and keratocytes. While the latter perform stable directed motility with a little reorientation and are optimized for high protrusion velocities, the former undergo frequent morphological changes related to sensitive path-finding, branching and pausing processes. The strong influence of MT dynamics on GC motility has direct implications for mechano-sensitive growth mechanisms as they have been described for various cell types [38][39][40]. An additional pushing component essentially alters the initial conditions for any force-dependent guidance mechanism. This may also be relevant for the repeatedly observed but still controversially discussed 'inverse' durotaxis of neurons. This phenomenon describes the tendency of neuronal processes to advance faster and branch more frequently on softer substrates [41,42], whereas other cell types (e.g. fibroblasts) preferentially migrate toward stiffer regions [38]. In GCs the traction generated by substrate coupled retrograde actin flow is complemented by anterograde MT pushing. This generally facilitates the forward motion of neuronal extensions in the soft environment of brain and nerve tissue and could be a possible explanation for their unique preference for highly compliant substrates in vitro.
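As a quick numerical check of this statement, using the pressure and per-MT force quoted above and the identity 1 Pa = 1 pN per µm²:

pressure_pa = 90.0            # GC protrusion pressure from the AFM study cited above, Pa (= pN per um^2)
force_per_mt_pn = 30.0        # lower-estimate restoring force per deformed MT, pN
print(pressure_pa / force_per_mt_pn, "MTs per um^2")    # 3.0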
Relating microtubule deformation to growth cone morphology and advancement
We also found indications of a correlation between local MT deformation rates (measured as stored bending energy per MT length dU/ds) and morphological changes of the GC. Our observations indicate a correlation between higher MT deformation and GC advancement that raises some questions. As mentioned above, previous studies report that straight MTs aligned with filopodia support advancement and, in the case of non-uniform distribution, turning of GCs [4,12,43,44]. Our data suggest that the occurrence of aligned and straight MTs in the P-domain is not the only indicator of directional changes. Apparently, a higher rate of bending can also be related to an increase in protrusion of certain areas of the GC. It is known that local forces can be transmitted over tens of micrometers via MT deformation [45]. Insight into whether actin dynamics are influenced by highly curved MTs or the increase in bending energy is a result of an otherwise triggered boost in RF or MT-actin cross-links (e.g. proteins of the spectraplakin family [46]) can be gained from the time correlations shown in figure 5(c). Temporal increases in actin flow velocity ('bursts') can be related to time-delayed increases in MT deformation followed by a drop in RF velocity. It is reasonable to assume that strongly deformed MTs which expose larger cross-sectional areas to RF and require increasing forces for further deformation hinder actin back-transport, which would explain the transient nature of bursts. RF is simply slowed down by highly deformed MT filaments. This in turn would result in accelerated edge protrusion, which is regulated by the balance between actin polymerization (forward) and RF. Eventually, this means that both filopodia-aligned MTs that directly push and a population of strongly deformed MTs that slow down actin back-transport support edge protrusion, and contribute to directed GC motility. The suggested interplay of actin and MT dynamics is schematically represented in figure 6.
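One simple way to quantify the delay between a flow burst and the subsequent deformation peak is the lag of maximal cross-correlation between the two time series. The sketch below (Python) illustrates this on synthetic traces; it is an illustration of the kind of analysis described, not the procedure used in the study, and the sampling interval and example signals are placeholders.

import numpy as np

def lag_of_max_correlation(rf_velocity, mt_energy, dt_s):
    # positive lag: MT deformation peaks follow RF bursts by lag seconds
    a = (rf_velocity - rf_velocity.mean()) / rf_velocity.std()
    b = (mt_energy - mt_energy.mean()) / mt_energy.std()
    xcorr = np.correlate(b, a, mode="full")
    lags = np.arange(-len(a) + 1, len(a))
    return lags[np.argmax(xcorr)] * dt_s

# synthetic example: a deformation trace trailing a flow burst by ~15 s, sampled every 5 s
t = np.arange(0, 300, 5.0)
rf = 2.0 + np.exp(-((t - 100) / 15.0) ** 2)            # flow burst around t = 100 s
mt = 1.0 + np.exp(-((t - 115) / 15.0) ** 2)            # deformation peak ~15 s later
print(lag_of_max_correlation(rf, mt, dt_s=5.0), "s")   # ~15 s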
Concluding remarks
Over the last decades, neuronal research has become a fast progressing interdisciplinary field and many details of GC motility and the underlying molecular processes have been revealed. Nevertheless, many interesting aspects remain to be studied including the influence of microtubule deformation on mechano-sensitive regulation processes and the details of actin-MT cross-linking throughout GC protrusion. The data presented in our study add a new point of view to the understanding of MTs as mechanical components in neuronal GCs and, in particular, their role in force generation and distribution. Adaptations of existing concepts will be required to incorporate the often neglected MT deformation energies and pushing forces resulting in a more complete picture of GC mechanics.
Cell culture and image acquisition
The NG108-15 hybrid cell line exhibits certain characteristic features of nerve cells, such as differentiation and the spurting of neurite-like processes, which are known to form synapses that are functional on the presynaptic side [47]. We chose this cell line since these cells readily respond to transfection treatments and their well-pronounced GCs constitute ideal model systems for investigations of the underlying cytoskeleton. NG108-15 neuroblastoma cells were purchased from ATCC (Manassas, VA, USA) and cultured in standard growth medium composed of Dulbecco's modified Eagle medium supplemented with 10% fetal calf serum and 1% penicillin/streptomycin solution (all purchased from PAA, Pasching, Austria). For image acquisition, cells were seeded on custom-made glass bottom Petri dishes or µ-slide 18-wells (IBIDI, Martinsried, Germany) and supplied with phenol red-free Leibovitz's L-15 medium with 2% B-27® supplement (both from Invitrogen, Darmstadt, Germany). Cells were transiently co-transfected with mCherry-LifeAct plasmids (IBIDI, Martinsried, Germany) for F-actin visualization and pCS2+/EMTB-3XGFP plasmids (kindly provided by the group of Ewa Paluch, Max Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany) for MT visualization. Transfections were performed 24-72 h prior to image acquisition with liposome-based Metafectene® Easy (Biontex, Martinsried, Germany) according to their standard protocol. All images were captured on a Leica TCS SP5 confocal laser scanning microscope equipped with an HCX PL APO CS 63.0 × 1.40 oil immersion objective (Leica Microsystems, Wetzlar, Germany).

Figure 6. Actin-MT interactions. MTs push from within the C-domain and fulfill two functions in the periphery: direct pushing (when aligned) and obstructing RF (when deformed). In area 1 MTs exert large pushing forces F_MT, get deformed and store bending energy (springs). This slows down RF and a larger portion of actin polymerization forces at the edge is converted into forward motion. This area thus is less likely to retract, resulting in net edge advancement. In region 2, with short, un-deformed MTs that do not reach the periphery, F_MT is small, RF persists and the edge more likely retracts.
Microtubule curvature analysis
MT deformation analysis was performed using custom written Matlab (MathWorks, Natick, MA, USA) software. The script is based on suggestions by Brangwynne et al [24] for a semi-automated MT tracking algorithm. Briefly, the user manually marks the approximate MT contour which is then used as a basis for refined position determination. The algorithm evaluates multiple image intensity line profiles perpendicular to the estimated MT contour and searches for Gaussian intensity peaks which are interpreted as filament centers. The set of refined center locations is subsequently fitted by either a modified spline model (complex MT contours in the GC) or by sine functions (buckling analysis). From these fits, the local curvature at each point of the filament can be derived for further analysis.
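The centre-refinement step can be illustrated in a few lines (Python, using scipy): a Gaussian is fitted to a single intensity line profile sampled perpendicular to the estimated contour, and the fitted centre gives the sub-pixel filament position. This is a schematic stand-in for the Matlab implementation described above; the example profile and parameter values are invented for illustration.

import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, centre, sigma, offset):
    return amplitude * np.exp(-((x - centre) ** 2) / (2 * sigma ** 2)) + offset

def refine_centre(profile, pixel_size_um):
    # fit a Gaussian to one intensity profile taken perpendicular to the estimated MT contour
    x = np.arange(len(profile)) * pixel_size_um
    p0 = (profile.max() - profile.min(), x[np.argmax(profile)], pixel_size_um, profile.min())
    popt, _ = curve_fit(gaussian, x, profile, p0=p0)
    return popt[1]                      # sub-pixel centre position along the profile, in um

# illustrative profile: a filament cross-section centred at 0.53 um, sampled every 0.1 um
rng = np.random.default_rng(1)
x = np.arange(15) * 0.1
profile = gaussian(x, 200.0, 0.53, 0.12, 20.0) + rng.normal(0.0, 2.0, x.size)
print(refine_centre(profile, pixel_size_um=0.1))     # ~0.53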
Retrograde actin flow analysis
RF detection was performed using custom-written Matlab (MathWorks, Natick, MA, USA) algorithms. Briefly, the image is rasterized and each image tile is compared with a somewhat larger area in the subsequent frame of the series through cross-correlation methods. Thus, prominent actin features can be traced over multiple frames and their velocity is the basis for later interpolation steps that deliver the RF maps depicted in figure 5(b). The algorithm was initially developed by Timo Betz and is described in greater detail in [48]. | 2019-04-20T13:09:14.191Z | 2013-01-01T00:00:00.000 | {
"year": 2013,
"sha1": "a933857378e5253f527b79b5ade404ae8312a877",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1367-2630/15/1/015007",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "a2a682c315362a7c9a9bb482d3d06b8eaf822bca",
"s2fieldsofstudy": [
"Biology",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
46386477 | pes2o/s2orc | v3-fos-license | Identification of a Domain Involved in ATP-gated Ionotropic Receptor Subunit Assembly*
P2X receptors are ATP-gated ion channels found in a variety of tissues and cell types. Seven different subunits (P2X1-P2X7) have been molecularly cloned and are known to form homomeric, and in some cases heteromeric, channel complexes. However, the molecular determinants leading to the assembly of subunits into P2X receptors are unknown. To address this question we utilized a co-immunoprecipitation assay in which epitope-tagged deletion mutants and chimeric constructs were examined for their ability to co-associate with full-length P2X subunits. Deletion mutants of the P2X2 receptor subunit were expressed individually and together with P2X2 or P2X3 receptor subunits in HEK 293 cells. Deletion of the amino terminus up to the first transmembrane domain (amino acid 28) and beyond (to amino acid 51) did not prevent subunit assembly. Analysis of the carboxyl terminus demonstrated that mutants missing the portion of the protein downstream of the second transmembrane domain could also still co-assemble. However, a mutant terminating 25 amino acids before the second transmembrane domain could not assemble with other subunits or itself, implicating the missing region of the protein in assembly. This finding was supported and extended by data utilizing a chimera strategy that indicated TMD2 is a critical determinant of P2X subunit assembly.
Ligand-gated ion channels (ionotropic receptors) are oligomeric protein complexes formed by the specific association of homologous subunits (1). All of the known ionotropic receptors are members of families composed of multiple subunit genes, each encoding proteins with unique biophysical and pharmacological properties (2). Co-assembly among the subunit proteins of any particular family results in the formation of homo- and/or hetero-oligomeric receptors. For some families, this subunit-subunit interaction occurs in a promiscuous fashion, whereas in other families, there appear to be specific restrictions placed on which productive interactions are allowed (3)(4)(5). The biochemical and cellular mechanisms underlying assembly of subunits into functional receptor complexes are complex and poorly understood (for review, see Refs. 6 and 7).
Receptor families formed by subunits containing four transmembrane domains (4TMD) 1 have been extensively used as models to investigate the rules governing ionotropic receptor assembly and stoichiometry. The best studied is the muscle nicotinic acetylcholine receptor (nAChR), which contains four different subunits that combine to form pentameric heterooligomers (8). The nAChR appears to be assembled using a stepwise pathway, and critical motifs involved in initial subunit assembly events reside in the large extracellular aminoterminal domain of each subunit (9 -11). However, this receptor type has rigid constraints placed on the composition and stoichiometry of its constituent subunits such that only particular subunit-subunit interactions are allowed as intermediates and as final complexes (4,7). Such constraints are not observed in other 4TMD receptor families, and this divergence in assembly rules raises a concern of whether the process found for the nAChR is representative of a template used by all ionotropic receptors.
Unlike the 4TMD receptors mentioned above, the ionotropic receptors for extracellular ATP (P2X receptors) have been found to have a much different topological arrangement. P2X subunits have only two TMDs, yielding a topology that places their amino-and carboxyl termini in the intracellular compartment and the loop connecting the two TMD extracellular (12,13). Recent evidence indicates that, like the 4TMD receptors, P2X subunits can participate in the formation of homo-and hetero-oligomeric channel assemblies (14 -17). Thus, the P2X receptors represent a new and perhaps simpler model system for the investigation of mechanisms involved in the subunit assembly of other members of the ligand-gated receptor family.
In an effort to identify the domain(s) of the P2X subunit protein that are critically important for productive subunit assembly, we systematically examined the involvement of the amino terminus, carboxyl terminus, and TMDs in subunit-subunit interaction between subunits transiently expressed in HEK 293 cells. Using a co-immunoprecipitation assay to analyze homomeric and heteromeric assembly of a combination of truncated and chimeric constructs from different P2X subunits, we report here that in contrast to the nAChR, it is the second TMD of P2X subunits that carries a critical determinant of specific subunit-subunit interactions.
EXPERIMENTAL PROCEDURES
DNA Constructs-The full-length cDNA for the rat P2X 2 receptor subunit was obtained from Dr. D. Julius, whereas the cDNAs for P2X 1 , P2X 3 , and P2X 6 receptor subunits were cloned as described previously (17). PCR-based mutagenesis (36 cycles at 94°C for 15 s, 55°C for 1 min, 72°C for 1 min) was used to incorporate the FLAG (DYKDDDDK) epitope into the amino terminus, following the first methionine (P2X 2 -NFLAG), or into the carboxyl-terminal end, followed by a stop codon (P2X 2 -CFLAG), of the P2X 2 subunit. After PCR mutagenesis, restriction fragments containing epitope-tagged sequences were digested with appropriate restriction enzymes and subcloned into the mammalian expression vector pRK-5. Amino-terminal truncations ΔN7, ΔN14, ΔN22, ΔN28, ΔN35, and ΔN51 (numbers indicate the quantity of residues deleted starting from the wild type-initiating Met) were generated using P2X 2 -CFLAG as a PCR template, a flanking 3′ primer, and 5′ oligonucleotide primers containing methionine codons and the specific sequences of the new amino termini. Because construct ΔN51 starts in the extracellular loop of P2X 2 , we fused the signal peptide sequence (MRGSLCLALAASILHVSL) of the rat α7 nicotinic acetylcholine receptor subunit (18) to the P2X 2 sequence beginning at amino acid (aa) 52 to direct the membrane translocation of the new amino-terminal domain. Carboxyl-terminal-truncated subunits ΔC387, ΔC380, ΔC370, ΔC362, and ΔC304 (numbers indicate the last amino acid position before the stop codon) were created using P2X 2 -NFLAG as a PCR template with 5′ flanking primers and 3′ oligonucleotide primers containing stop codons in addition to the specific carboxyl terminus sequences. The HA (YPYDVPDYA) epitope was inserted into the carboxyl termini of P2X 2 , P2X 3 , and P2X 6 by PCR-based mutagenesis, and these constructs were subcloned into pRK-5.
Chimeric constructs between P2X 1 and P2X 3 were constructed by performing two successive PCR amplifications using 5′ and 3′ primers containing overlapping regions between P2X 1 and P2X 3 . The primers used had the following sequences: for 1.3.1, post-TMD1 overlap oligos encoded aa VYEK/AYQV (corresponding to aa 50-53 for P2X 1 , 47-50 for P2X 3 ), and the pre-TMD2 overlap oligos encoded aa AGKF/DIIP (aa 313-316 from P2X 3 , aa 327-330 from P2X 1 ); for 3.1.3, the post-TMD1 overlap oligos encoded aa LHEK/GYQT (aa 43-46 from P2X 3 , aa 54-57 from P2X 1 ), and the pre-TMD2 overlap oligos encoded aa AGKF/NIIP (aa 323-326 from P2X 1 , aa 317-320 from P2X 3 ). These reactions yielded two constructs: chimera 1.3.1 consists of the amino terminus and first TMD of P2X 1 followed by the extracellular region of P2X 3 up to 2-4 aa from the second TMD, at which point the protein reverts to P2X 1 until the final stop codon. Chimera 3.1.3 consists of the amino terminus and first TMD of P2X 3 , the extracellular domain of P2X 1 , and second TMD and carboxyl terminus of P2X 3 . These chimeras were tagged at the carboxyl terminus with the FLAG epitope as described above. All constructs were verified by DNA sequencing with the dideoxynucleotide chain termination method using the deaza-T7 Sequenase kit from Amersham Pharmacia Biotech.
Transfection of HEK 293 Cells-cDNAs encoding the wild type and mutant P2X subunits were expressed individually or in different combinations in HEK 293 cells. 35-mm dishes containing 3 × 10^5 cells were incubated with 1 µg of total cDNA mixed with 6 µl of LipofectAMINE (Life Technologies, Inc.) in 1 ml of serum-free medium. After 5 h at 37°C, the medium was replaced with MEM, and cells were incubated for another 40-48 h.
Electrophysiological Recordings-Whole cell recordings were obtained from single HEK 293 cells using AxoPatch 200 series amplifiers (Axon Instruments, Foster City, CA) and low resistance electrodes (1-2 megaohms). The typical holding voltage was −40 mV. Recording pipettes were filled with the following intracellular solution (in mM): 150 CsCl, 10 tetraethylammonium chloride, 5 EGTA, 10 HEPES, pH 7.3 with CsOH. The extracellular solution was (in mM): 150 NaCl, 1.0 CaCl2, 1 MgCl2, 10 glucose, 10 HEPES, pH 7.3 with NaOH. ATP (30 µM) was applied by manually moving the electrode and attached cell into the line of flow of solutions exiting an array of inlet tubes lying side by side.
Biotinylation, Immunoprecipitation, and Western Blot Experiments-Confluent monolayers of HEK 293 cells in 35-mm dishes were washed three times with phosphate-buffered saline and then incubated with gentle agitation for 30 min at room temperature with 0.5 ml of 0.5 mg/ml Sulfo-NHS-LC-biotin (Pierce) in phosphate-buffered saline supplemented with 0.1 mM HEPES, pH 8.0. The reaction was quenched by incubating the cells for an additional 10 min with 50 mM ammonium chloride in phosphate-buffered saline. The cells were then washed three times in phosphate-buffered saline and incubated in solubilization buffer (1% Nonidet P-40, 1 mM phenylmethylsulfonyl fluoride, 1 mM 4-(2-aminoethyl)-benzenesulfonyl fluoride, 10 µg/ml leupeptin in phosphate-buffered saline) at 4°C for 1 h. Immunoprecipitation was carried out using the M2 anti-FLAG antibody (5 µg/ml) (Sigma) in the presence of 50 µl of protein G Gamma-Bind-agarose (Amersham Pharmacia Biotech). Immunoprecipitates were washed five times with solubilization buffer and resuspended in protein sample buffer. Samples were boiled for 5 min, and proteins were analyzed by SDS-polyacrylamide gel electrophoresis, followed by transfer to nitrocellulose filters. The filters were blocked overnight in TBS-T (20 mM Tris, pH 7.6, 145 mM NaCl, 0.05% Tween 20) containing 2% bovine serum albumin and incubated for 1 h with streptavidin coupled to horseradish peroxidase. Filters were washed extensively in TBS-T, and immunoreactivity was visualized by enhanced chemiluminescence using an ECL kit (Amersham Pharmacia Biotech). In some experiments, transfected HEK 293 cells were solubilized without immunoprecipitation, analyzed by SDS-polyacrylamide gel electrophoresis, and transferred to nitrocellulose filters. Filters were incubated with 10 µg/ml of the anti-FLAG antibody. After several washes with TBS-T, filters were incubated with peroxidase-conjugated sheep anti-mouse antibody for 1 h. Filters were washed extensively in TBS-T, and immunoreactivity was detected with the ECL detection kit.
RESULTS
Role of the Amino Terminus in P2X 2 Homo- or Hetero-oligomerization-Similar to all other known ligand-gated ion channels, P2X subunits can form homo- and/or hetero-oligomeric receptor channels when expressed in heterologous systems such as HEK 293 cells (17). As the amino-terminal region of 4TMD ionotropic receptors has been implicated in subunit assembly (10), we wanted to determine whether this domain was also involved in P2X subunit assembly. We therefore created deletion mutants of P2X 2 lacking the first 7, 14, 22, and 28 aa (designated ΔN7 through ΔN28, Fig. 1A). In each case an initial methionine was engineered into the truncated sequence to ensure correct translation. These nested deletions cover the portion of the protein up to the first TMD (predicted to span aa 32-52, approximately), and the truncated proteins were tagged with the FLAG epitope at the carboxyl terminus to allow for immunodetection after expression in HEK 293 cells. As seen in Fig. 1B, all deletion mutants were efficiently synthesized and were of the predicted size. When the four deletion mutants were tested for functionality, only the ΔN7 mutant gave whole cell currents gated by ATP (Fig. 1C). Currents from cells expressing ΔN7 were similar to those observed in cells transfected with the full-length P2X 2 receptor and, superficially, the activation and deactivation kinetics for the responses did not differ from wild type P2X 2 . To determine whether the lack of function of the other three deletion mutants was a result of disruption in the mutants' protein processing and/or sorting, leading to the absence of receptor molecules on the plasma membrane, cells transfected with these constructs were incubated with Sulfo-NHS-LC-biotin. This compound binds to free amino groups of proteins and, because it is membrane impermeant in intact cells, can be used to distinguish between cell surface and intracellular proteins. Membranes labeled with Sulfo-NHS-LC-biotin were solubilized with 1% Nonidet P-40 and subjected to immunoprecipitation with the anti-FLAG antibody. Precipitates were then separated by SDS-polyacrylamide gel electrophoresis and transferred to nitrocellulose, and biotinylated protein was detected with streptavidin. As shown in Fig. 1D, all four amino-terminal deletion mutants yielded labeled proteins when transfected, indicating that they were present on the cell surface. This suggested that the absence of functional expression was not because of lack of their delivery to the cell membrane.
Next, we examined the ability of the amino-terminal deletion mutants to co-associate with full-length P2X 2 receptor subunits. Because assembly would be predicted to depend upon the interaction of specific recognition sequences between subunits, we hypothesized that if the amino-terminal domain did contain a motif critical for P2X subunit-subunit interaction, then deletion mutants missing such a motif would fail to interact with full-length P2X subunits. To test this hypothesis, we utilized a co-immunoprecipitation assay in which each of the FLAGtagged P2X 2 deletion constructs were individually co-expressed with the full-length P2X 2 subunit tagged with the HA epitope. Deletion mutants were then immunoprecipitated with anti-FLAG antibody, and the presence of P2X 2 .HA was detected on Western blots using the anti-HA antibody. This strategy pro-vided the means to specifically isolate and analyze stable complexes consisting of mutant and wild type subunits.
Membranes from transfected cells were solubilized with detergent (1% Nonidet P-40), and P2X receptor complexes were immunoprecipitated with the anti-FLAG monoclonal antibody. As a positive control, we also tested the co-expression of P2X 2 .FLAG with P2X 2 .HA. As expected, P2X 2 .FLAG co-precipitated with P2X 2 .HA, and this interaction was observed only when both constructs were co-expressed in cells, as simply mixing lysates from cells expressing individual subunits did not result in co-precipitation ( Fig. 2A). Similarly, immunoprecipitation of each of the deletion mutants with the anti-FLAG antibody also resulted in the co-precipitation of full-length P2X 2 .HA (also Fig. 2A). Thus, deletion of the amino terminus of P2X 2 up to the first TMD did not prevent homomeric association of P2X 2 receptor subunits.
Because P2X 2 can also co-associate with P2X 3 to form heterooligomeric receptors, we tested whether the amino terminus of P2X 2 might be involved in hetero-oligomeric assembly by cotransfecting each of the deletion mutants with P2X 3 .HA. In each case we detected specific and stable interactions (Fig. 2B). These results indicate that in the absence of the amino terminus, the mutant P2X 2 subunits still contained recognition sequences directing both homo-and hetero-oligomeric channel assembly.
Role of the Intracellular Carboxyl-terminal Domain of P2X 2 in Subunit Assembly-Because the previous experiments ruled out the amino-terminal domain as being critical for assembly, our attention next turned to the carboxyl terminus. To investigate the role of this region in P2X receptor assembly, we made a series of deletions of the P2X 2 receptor subunit at amino acid positions 362, 370, 380, and 387 (designated ΔC362 through ΔC387, Fig. 3A) by introducing stop codons immediately following those residues. These constructs were tagged with the FLAG epitope at the amino terminus. All carboxyl-terminal-truncated subunits expressed well in HEK 293 cells and gave proteins of the appropriate molecular size (Fig. 3B). Whole cell recordings from HEK 293 cells expressing each of these mutants revealed that progressive deletions of the carboxyl terminus of P2X 2 to amino acid 362 (which is just on the presumed intracellular side of the TMD2 region, thought to span from aa 328 to 356) did not prevent the formation of functional ATP-gated channels (Fig. 3C). Although the kinetics of activation and desensitization for the ΔC387, ΔC380, and ΔC370 mutants compared with wild type channels were not altered, there was an appreciable difference in both the size and the rate of desensitization of currents given by ΔC362.
(Figure legend fragment:) The cells were then lysed with a buffer containing 1% Nonidet P-40. FLAG-tagged subunits were immunoprecipitated with the anti-FLAG antibody, and HA-tagged subunits were detected by Western blot with the anti-HA antibody. Mixing experiments were performed from cells expressing individual subunits before immunoprecipitation (lanes 1, 3, 5, 7, and 9).
The fact that all of these carboxyl-terminal mutants were functional implied that they retained the ability to form homooligomers, and indeed we found that these carboxyl-terminal deletion mutants immunoprecipitated co-transfected P2X 2 .HA (Fig. 3D). These interactions occurred only after co-expression of the full-length and mutant subunits and were not detectable when lysates from cells expressing individual subunits were mixed before immunoprecipitation. These findings were also extended to hetero-oligomeric assembly properties, as specific and stable interactions were observed between each of the carboxyl-terminal deletion mutants and P2X 3 .HA (Fig. 3E). These results therefore provide direct evidence that the bulk of the intracellular carboxyl-terminal tail of P2X 2 does not play a critical role in either homo-or heteromeric channel assembly.
Investigation of Transmembrane Domain Involvement in the Oligomerization of P2X Subunits-As deletion of either of the intracellular domains did not prevent the oligomerization of P2X 2 subunits, we next tested the possibility that one or both of the TMDs were important for this process. This was accomplished through the engineering of two different constructs. For the first construct, the first TMD was removed by deleting the amino terminus to aa 51 (ΔN51). To enable the proper membrane orientation of the ΔN51 protein (which begins at an extracellular site), we fused the signal peptide from the rat α7 nAChR (MRGSLCLALAASILHVSL) to the truncated P2X 2 sequence beginning at aa 51. To test the role of the second TMD, we generated a construct in which a stop codon was introduced at aa 304 (ΔC304), resulting in a protein missing both the second TMD and carboxyl terminus. Both constructs were epitope-tagged with FLAG at their nondeleted termini and are shown in Fig. 4A. Neither of these mutants produced functional ATP-gated channels. 2 As shown in Fig. 4B, ΔN51 was expressed at high levels in transfected cells. Its apparent molecular weight was decreased after tunicamycin treatment (Fig. 4B), indicating that the protein was glycosylated and had achieved the appropriate transmembrane orientation. This mutant was able to co-assemble with either P2X 2 .HA or P2X 3 .HA (Fig. 4C), thus effectively ruling out TMD1 as being critical for assembly. In contrast, ΔC304 did not assemble with either P2X 2 .HA or P2X 3 .HA (Fig. 4D). Additionally, no self-assembly of ΔC304 was observed in cells co-transfected with two different ΔC304 constructs, differentially tagged with the FLAG and HA epitopes (Fig. 4D), thus ruling out the possibility that the negative co-assembly results were because of the mutant protein preferentially associating with itself rather than with the full-length wild type proteins. Another possible explanation for the lack of assembly could be that the truncated protein is not inserted into the membrane correctly. However, tunicamycin treatment of cells transfected with ΔC304 produced a downward shift in the apparent molecular mass of the protein (Fig. 4E), indicating that glycosylation had occurred. This in turn suggests that the truncated protein was inserted into the endoplasmic reticulum membrane in the appropriate orientation, as the only N-glycosylation consensus sites present in the protein exist in the extracellular loop. Taken together, the results of studies using both of these truncated proteins support the hypothesis that the region of the P2X 2 subunit lying between aa 304 and aa 362 is critical for co-assembly, with an obvious candidate for such an interaction being TMD2.
(Figure legend fragment:) ... (lanes 1, 3, 5, and 7). E, co-immunoprecipitation between FLAG-tagged carboxyl-terminal P2X 2 deletion mutants and P2X 3 .HA. HEK 293 cells were transfected with ΔC387.FLAG/P2X 3 .HA, ΔC380.FLAG/P2X 3 .HA, ΔC370/P2X 3 .HA, or ΔC362.FLAG/P2X 3 .HA combinations and analyzed as described in D.
Role of TMDs in Subunit Assembly-To verify the role of TMD2 in assembly, we used intact proteins rather than truncated ones so that obvious concerns regarding appropriate subunit folding would be reduced. Therefore, a chimeric approach was chosen to investigate the question of TMD2 involvement in assembly. We have previously reported that the hetero-oligomeric association of different P2X subunits is selective (17). One result pertinent to our experimental design was that P2X 6 could co-immunoprecipitate either P2X 1 or P2X 2 but not P2X 3 when co-expressed in HEK 293 cells. We reasoned that exploiting these intrinsic subunit properties through the use of chimeric receptor subunits would enable us to establish that it is a TMD and not the extracellular domain that is critical for productive P2X subunit assembly. In designing the chimeric constructs, we were cognizant that two subunits with such widely differing biophysical and pharmacological properties as P2X 2 and P2X 3 could yield problematic chimeric proteins. So instead of P2X 2 , we chose to use the P2X 1 subunit, which functionally is nearly identical to P2X 3 . Two chimeric constructs were then engineered: 1.3.1, in which only the extracellular domain was from P2X 3 and the rest of the flanking protein from P2X 1 , and 3.1.3, in which the extracellular domain of P2X 1 was flanked by the amino and carboxyl termini of P2X 3 (shown schematically in Fig. 5A). We inserted the FLAG epitope into the carboxyl termini of both constructs and then tested whether either of these chimeric constructs could coprecipitate with P2X 6 .HA when co-expressed in HEK 293 cells. As Fig. 5B shows, the chimera 1.3.1.FLAG was able to co-precipitate P2X 6 .HA when co-transfected, whereas 3.1.3.FLAG could not. In contrast, both chimeras were able to individually coprecipitate P2X 1 .HA (Fig. 5C). This supports the contention that a TMD contains a critical motif for subunit-specific assembly.
(Figure legend fragment:) ... 5). Cells were incubated with 10 µg/ml tunicamycin (lanes 2 and 5), and control and treated cells were analyzed as described in B. In lanes 3 and 6, lysates from cells transfected with the respective constructs were immunoprecipitated (IP) and detected with the anti-FLAG antibody.
DISCUSSION
Most previous studies investigating the processes underlying the assembly and formation of ionotropic receptors have used the 4TMD muscle-nAChR as the model system. Such studies have demonstrated that assembly occurs in a stepwise fashion, with the four monomeric subunits assembling into specific hetero-oligomeric intermediates through interactions of motifs present on their amino-terminal domains before the mature pentameric assembly is formed (6,7). However, not all ionotropic receptors have a similar stoichiometry or topology to the nAChRs (e.g. ionotropic glutamate or P2X receptors). Therefore, we sought to determine whether the knowledge garnered from the nAChR studies had general applicability to other ionotropic receptors, especially the P2X receptor.
We and others have previously established that P2X 2 subunits can assemble into homo- and/or hetero-oligomeric channels when co-expressed in HEK 293 cells (e.g. 17,19). As the initial step to examining the molecular determinants involved in the homo- and hetero-oligomerization of P2X receptors, a series of amino- and carboxyl-terminal deletion mutant subunits was constructed and assayed for the ability to form functional channels and to specifically associate with wild type subunits in HEK 293 cells. Nested deletions of the initial 28 amino acids of the amino terminus of P2X 2 were found not to prevent homo- or hetero-oligomeric channel assembly. Identical results were obtained using the carboxyl-terminal-truncated subunits, in which the construct with the most minimal intracellular tail (of some 10 aa) was still able to co-assemble with either P2X 2 or P2X 3 .
Interestingly, our data indicate that although the amino terminus does not play a critical role in the membrane insertion and plasma membrane targeting of the protein, it does influence the functional attributes of the receptor. With the exception of ΔN7, all other amino-terminal deletions resulted in nonfunctional channels despite the fact that these proteins were expressed on the cell surface. One possible mechanism underlying these results is that the agonist binding and/or gating properties of the mutant channels are altered through effects mediated by the first TMD. In contrast to the results with the amino-terminal deletions, we found that removal of almost the entire intracellular carboxyl-terminal domain did not result in loss of channel function. However, the results from the carboxyl-terminal nested deletions did show that the desensitization properties of the P2X subunit can be modulated by a stretch of eight amino acids just downstream of TMD2. This stretch of primary sequence contains a number of charged and aromatic amino acids, and future site-directed mutagenesis experiments should be useful in delineating the roles that these residues play in receptor desensitization.
As neither the amino- or carboxyl-terminal domains appeared to be important for assembly, the next step was to investigate the role of either of the TMDs. We were unable to observe any interaction between a deletion mutant subunit lacking the second transmembrane domain (ΔC304) and either of the wild type subunits. This lack of interaction was not the result of improper targeting and insertion of this protein into the endoplasmic reticulum membrane as evidenced by its glycosylation (as the only N-linked consensus sites are present in the extracellular domain of the protein). In fact, the inability of this mutant to even co-assemble with itself suggests that it does not contain recognition sequences required for subunit assembly. We could not, however, rule out the possibility that the lack of co-assembly by ΔC304 was the result of an inadequate secondary or tertiary structure. An additional consideration was a report by Kim et al. (20), in which they described the self-assembly into tetramers of a bacterial fusion protein containing the extracellular domain of P2X 2 . Therefore we also wanted to rule in, or out, the importance of the extracellular domain of full-length P2X subunits in their assembly.
For those reasons, our next step was to use an alternative approach for verifying the deletion mutant analysis. The truncation studies had pointed to the area of the P2X 2 protein next to and including the second TMD as carrying a pivotal determinant for subunit co-assembly. We therefore constructed chimeras derived from P2X 1 and P2X 3 so that only the extracellular domain, which lies between the two TMDs, was exchanged and then tested for their co-assembly with P2X 6 (which will co-assemble with P2X 1 but not P2X 3 ). These experiments demonstrated that an extracellular loop comprised of P2X 1 sequence (construct 3.1.3) was not sufficient to direct co-assembly with P2X 6 and that an extracellular domain derived from P2X 3 (construct 1.3.1) did not prevent co-assembly with P2X 6 . In both cases, the individual chimeras did co-assemble with P2X 1 , thus demonstrating that the failure of the 3.1.3 chimera to co-assemble with P2X 6 is not because of a general impairment of assembly but rather because of the lack of a domain promoting or stabilizing specific subunit-subunit interactions. These findings lead us to two conclusions. First, they demonstrate that the extracellular region on its own is not sufficient for assembly of full-sized subunits to occur, and second, they support the idea that a TMD is involved in allowing productive subunit-subunit interaction to occur. When these results are combined with those from the deletional studies, the most parsimonious explanation is that the TMD2 of P2X subunits contains a critical determinant for productive subunit co-assembly. This postulate does not rule out other areas of the protein as being involved in assembly, but it assumes that such domains play a more subservient role in allowing and/or maintaining subunit assembly than does TMD2.
(Figure legend fragment:) ... FLAG/P2X 6 .HA, 3.1.3.FLAG/P2X 6 .HA, or P2X 3 .FLAG/P2X 6 .HA as indicated. Transfected cells were lysed with 1% Nonidet P-40, and FLAG-tagged subunits were immunoprecipitated with the anti-FLAG antibody. HA-tagged subunits were then detected by Western blot with the anti-HA antibody. Mi represents the mixing of lysates from cells transfected with the individual subunits before immunoprecipitation with the anti-FLAG antibody, whereas Co indicates the co-expression of subunits.
In summary, we report here that the assembly of the ATP-gated P2X receptors is dependent upon a motif(s) present in the second TMD of the protein. This is in marked contrast to what has been shown for the 4TMD muscle-nAChR, in which the amino terminus has been shown to be important for assembly. Thus, the findings presented in this report demonstrate that not all ligand-gated ion channels undergo assembly using a generic process. | 2018-04-03T06:20:53.301Z | 1999-08-06T00:00:00.000 | {
"year": 1999,
"sha1": "e4d472c0799c67062da280aa8896ac42151fbd28",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/274/32/22359.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "9b5bf234a40fbabbe4791b89898868fc8def4132",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
164678604 | pes2o/s2orc | v3-fos-license | Lysosomal degradation of GMPPB is associated with limb‐girdle muscular dystrophy type 2T
Abstract Objective GDP‐mannose pyrophosphorylase B (GMPPB) related phenotype spectrum ranges widely from congenital myasthenic syndrome (CMS), limb‐girdle muscular dystrophy type 2T (LGMD 2T) to severe congenital muscle‐eye‐brain syndrome. Our study investigates the clinicopathologic features of a patient with novel GMPPB mutations and explores the pathogenetic mechanism. Methods The patient was a 22‐year‐old woman with chronic proximal limb weakness for 9 years without cognitive deterioration. Weakness became worse after fatigue. Elevated serum creatine kinase and decrements on repetitive nerve stimulation test were recorded. MRI showed fatty infiltration in muscles of lower limbs and shoulder girdle on T1 sequence. Open muscle biopsy and genetic analysis were performed. Results Muscle biopsy showed myogenic changes. Two missense mutations in GMPPB gene (c.803T>C and c.1060G>A) were identified in the patient. Western blotting and immunostaining showed GMPPB and α‐dystroglycan deficiency in the patient's muscle. In vitro, mutant GMPPB forming cytoplasmic aggregates completely colocalized with microtubule‐associated protein 1 light chain 3‐II (LC3‐II), a classical marker of autophagosome. Degradation of GMPPB was accompanied by an upregulation of LC3‐II, which could be restored by lysosomal inhibitor leupeptin. Interpretation We identified two novel GMPPB mutations causing overlap phenotype between LGMD 2T and CMS. We provided the initial evidence that mutant GMPPB colocalizes with autophagosome at subcellular level. GMPPB mutants degraded by autophagy‐lysosome pathway is associated with LGMD 2T. This study shed the light into the enzyme replacement which could become one of the therapeutic targets in the future study.
Introduction
GDP-mannose pyrophosphorylase B (GMPPB) related phenotype spectrum ranges widely from congenital myasthenic syndrome (CMS) and limb-girdle muscular dystrophy type 2T (LGMD 2T) to severe congenital muscle-eye-brain syndrome associated with α-dystroglycanopathy. [1][2][3][4] GMPPB catalyzes an early step of the glycosylation pathway in most human tissues, including muscle and brain. 1 Under GMPPB catalysis, the sugar donor GDP-mannose is synthesized from mannose-1-phosphate and GTP. 5 The GDP-mannose then participates directly or indirectly in N-glycosylation, O-mannosylation, C-mannosylation, and glycosylphosphatidylinositol-anchor formation. 1 In the neuromuscular junction, the precursors of dystroglycan undergo N- and O-glycosylation and proteolytic processing to generate two subunits (α-dystroglycan and β-dystroglycan), prior to working as a central component of the dystrophin-glycoprotein complex. [6][7][8] The latter complex is responsible for linking the extracellular matrix to the cytoskeleton. 7,8 To date, 49 mutations in the GMPPB gene have been documented to cause diseases, most of which are associated with muscular disease, including LGMD 2T (also known as LGMD R19 GMPPB-related 9 ) or overlapping with CMS. However, a small fraction of GMPPB mutations may lead to severe congenital phenotypes, such as mental retardation, epilepsy, cerebellar dystrophy, microcephaly, and retinal dysfunction. Hitherto, the exact pathogenesis of GMPPB-related α-dystroglycanopathy remains elusive and the management remains symptomatic; however, a proportion of patients may respond to medications used to treat CMS. 2 Here we identify two novel mutations in the GMPPB gene in one patient presenting an overlap between LGMD 2T and CMS. On the basis of thorough clinical, pathological, and genetic analysis, we aimed to functionally investigate the pathogenesis of the GMPPB-related spectrum.
Participants
We identified a patient fulfilling the diagnosis of LGMD and CMS according to proximal muscle weakness and decrements in repetitive nerve stimulation (RNS) test at low rate stimulation. The patient and her parents were clinically examined.
Standard protocol approvals, registrations, and patient consents
The ethics committee of Rui Jin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China approved the study. All participants provided written informed consent.
Mutation analysis
Genomic DNA was extracted using a standard phenol/chloroform extraction protocol. Healthy individuals (n = 200) of matched geographic ancestry were included as normal controls. Exome sequencing was performed for the patient, using Agilent SureSelect v6 reagents for capturing exons and the Illumina HiSeq X Ten platform. Alignment to human genome assembly hg19 (GRCh37) was carried out, followed by recalibration and variant calling. Population allele frequencies from public databases of normal human variation (dbSNP, ESP6500, and 1000 g) were used to initially filter the data to exclude variants with greater than 1‰ frequency. The variants were further interpreted and classified according to the American College of Medical Genetics and Genomics (ACMG) Standards and Guidelines. 10 In this segment, two neurogeneticists analyzed the inheritance pattern, allele frequency (from: 1000 g, ESP6500, dbSNP, ExAC and 200 in-house ethnically matched healthy controls), amino acid conservation, and pathogenicity predictions [PolyPhen-2 (http://genetics.bwh.harvard.edu/pph2), SIFT (http://sift.jcvi.org), and MutationTaster (http://www.mutationtaster.org)]. Putative pathogenic variants were further confirmed by Sanger sequencing.
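For illustration, the frequency filter and recessive-model screen described above can be sketched in a few lines of Python/pandas. The annotation-table layout, the column names, and the 0.001 allele-frequency cut-off used below are assumptions made for this example and are not taken from the actual analysis pipeline.

import pandas as pd

variants = pd.read_csv("proband_annotated_variants.tsv", sep="\t")   # hypothetical annotation table

freq_cols = ["af_1000g", "af_esp6500", "af_dbsnp"]    # population allele-frequency columns
pred_cols = ["polyphen2", "sift", "mutationtaster"]   # in-silico pathogenicity predictions

# keep rare variants: highest reported population frequency at most 0.001
rare = variants[variants[freq_cols].fillna(0).max(axis=1) <= 0.001]

# keep protein-altering candidates supported by at least two of the three predictors
damaging = rare[
    rare["consequence"].isin(["missense", "stopgain", "frameshift", "splicing"])
    & (rare[pred_cols].isin(["damaging", "probably_damaging", "disease_causing"]).sum(axis=1) >= 2)
]

# recessive model: genes with two heterozygous hits (possible compound heterozygote) or one homozygous hit
candidate = damaging.groupby("gene")["genotype"].apply(
    lambda g: ((g == "het").sum() >= 2) or (g == "hom").any()
)
print(candidate[candidate].index.tolist())   # candidate gene list, screened here in silico only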
Myopathology
We performed an open biopsy of the right deltoid in the patient. The muscle tissue was frozen and then cut into 7 µm sections. These sections were stained according to standard histological and enzyme histochemical procedures with hematoxylin and eosin (HE), modified Gomori Trichrome (MGT), periodic acid-Schiff (PAS), oil red O (ORO), nicotinamide adenine dinucleotide tetrazolium reductase, succinate dehydrogenase, cytochrome c oxidase, and esterase.
Protein was extracted from the muscle tissue lysate, and the expression level of GMPPB was detected by western blotting with an anti-GMPPB antibody.
Cell culture, transfection, and Western blotting
The HEK 293T cell line was obtained from the Cell Bank of Chinese Academy of Sciences (www.cellbank.org.cn) and maintained in Dulbecco's Modified Eagle Medium (DMEM) with 10% fetal bovine serum (FBS) and 1% penicillin/streptomycin (PS) at 37°C in a humidified incubator with 5% CO2. One day before transfection, cells were plated at 150,000 cells per well in a 6-well culture dish. The next day, cells were transfected with 2.5 µg of EGFP control plasmid DNA or GMPPB-EGFP wild-type (GMPPB-WT) or mutant (c.803T>C and c.1060G>A) plasmid DNA using Lipofectamine 3000 transfection reagent (Invitrogen). Moreover, another four mutations reported previously as causative were set as positive controls (c.781C>T, c.877C>T, c.1034T>C, and c.1108G>C 2,4,11,12 ). Forty-eight hours later, the cells were lysed in radioimmunoprecipitation assay (RIPA) buffer (Beyotime) to extract protein for western blot analysis. Cell lysates were diluted in an equivalent volume of 6X SDS-PAGE Sample Loading Buffer (Beyotime) for protein denaturation. For cell lysates, equal volumes were run on 12% SDS polyacrylamide gels. Total GMPPB levels were detected using the anti-GFP antibody (1:2500, GFP-1010, AVES). GAPDH primary antibody (1:1000, 2118-14C10, Cell Signaling Technology) was used to ensure equal protein loading. Blots were then incubated with anti-chicken and anti-rabbit secondary HRP-conjugated antibodies (1:5000, Beyotime), and bands were detected by enhanced chemiluminescence using Western Blot Enhancer reagents (Thermo Scientific).
For the analysis of the localization and the impact of mutant GMPPB on muscle cells, C2C12 myoblast cells (Cell Bank of Chinese Academy of Sciences) were also transfected with the constructs mentioned above. For the pharmacological treatment assay, C2C12 cells were incubated for 12 or 18 h in the presence or absence of the lysosomal inhibitor leupeptin (100 µg/mL, Sigma-Aldrich). After treatment, cells were lysed and subjected to western blot analysis of GMPPB protein with the anti-GFP and anti-GAPDH antibodies mentioned above. In addition, microtubule-associated protein 1 light chain 3 (LC3), the marker of autophagy, was detected with anti-LC3B (D11) (rabbit [3868], 1:1000 for western blotting, Cell Signaling Technology).
Quantitative real-time PCR
Total RNA was extracted from HEK 293T cells transfected with the respective plasmids using a standard method with TRIzol Reagent (Invitrogen) and reverse-transcribed using the PrimeScript RT reagent Kit (Takara) according to the manufacturer's instructions. To determine whether the GMPPB variants affect the mRNA level, we designed three pairs of primers (Table S1), namely 2F/3R (at exons 2 and 3, respectively), 5F/5R (at exon 5), and 9F/9R (at exon 9), to verify the amount of GMPPB mRNA.
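The text above does not state how the qPCR signals were normalized; relative transcript levels from such assays are commonly expressed with the 2^-ΔΔCt method against a reference gene. The following sketch is only a worked illustration of that calculation, with GAPDH as an assumed reference and invented Ct values.

def fold_change(ct_target_mut, ct_ref_mut, ct_target_wt, ct_ref_wt):
    # 2^-ΔΔCt: ΔCt is target minus reference; ΔΔCt compares mutant-transfected with WT-transfected cells
    dct_mut = ct_target_mut - ct_ref_mut
    dct_wt = ct_target_wt - ct_ref_wt
    return 2.0 ** -(dct_mut - dct_wt)

# e.g. primer pair 5F/5R, hypothetical mean Ct values from triplicates
print(fold_change(21.3, 17.8, 21.1, 17.7))   # ~0.93, i.e. no appreciable change in mRNA level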
Clinical findings
The patient was a 22-year-old female with recurrent exertional muscle fatigue and walking slowly for 9 years. She was born of full-term spontaneous vaginal delivery with normal birth weight (3.3 kg). She achieved sitting, standing, and walking alone at 8, 10, and 12 months old, respectively. Initially, muscle weakness was demonstrated by difficulties in climbing stairs, standing up from squatting and sitting up from lying. Then, muscle weakness became continuous and progressed slowly to proximal upper limbs, especially in combing hair and dressing, which could be aggravated by labor and mildly relieved after resting. During school days (after the age of 13), poor performances of physical education examinations were recorded, such as running, ball games, and rope skipping. At the age of 22, she had normal strength in neck flexion (5/5 on a medical research council scale graded 0-5), reduced strength in iliopsoas (3 + /5), proximal upper limbs and proximal lower limbs (4 + /5), and normal strength in distal limbs. Muscle tone and tendon reflex in four limbs were reduced. Ptosis, nystagmus, dysarthria, ataxia, myokymia, muscle pain, and dyspnea were not noticed. In the exercise fatigue test, the patient was asked to elevate her right lower limb repeatedly, followed by muscle strength reexamination, showing further deterioration of strength in iliopsoas (3/5), quadriceps femoris (3/5), and posterior femoral muscles (4/5). Then we recorded the time spent on standing up from squatting before and after neostigmine treatment (intramuscular, 1 mg): before injection = 8 sec, 10 min after injection = 5 sec, 20 min after injection = 3 sec, 30 min after injection = 4 sec. We also reexamined the muscle strength with partial alleviation of strength 10 min after injection: iliopsoas (4 À /5), quadriceps femoris (5/5), and posterior femoral muscles (5/5); 20 min after injection: iliopsoas (4/5), quadriceps femoris (5/5), and posterior femoral muscles (5/5); 30 min after injection: iliopsoas (4 À /5), quadriceps femoris (4 + /5), and posterior femoral muscles (4 + /5). She walked slowly with mild waddling gait. Mental psychological and cognitive tests were normal. During The electromyogram showed motor unit potential (MUP) with small, short multiphase in deltoid, biceps brachii, and iliopsoas. As shown in Figure 1A, before medical treatment, RNS tests performed on bilateral deltoids showed 45.2% and 36.0% amplitude decrement on the left and right side, respectively. After administration with pyridostigmine bromide (30 mg, 40 min), the decremental responses still existed, with 39.4% on the left side and 36.0% on the right side. Nerve conduction was in normal range. Muscle MRI showed mild to moderate increased signal intensity mainly involving the posterior muscles in lower limbs, muscles of pelvic girdle and shoulder girdle on the T1-weighted sequences (Fig. 1B-C). The degree of muscle fatty infiltration was quantified as semimembranosus = 1 point, biceps femoral muscle = 2 points, rectus femoris gracilis = 1 point, left anterior tibial muscle = 2 points, right anterior tibial muscle = 3 points, left peroneal muscle = 2 points, right peroneal muscle = 3 points, trapezius = 1 point, and teres major muscle = 2 points. 13,14 The patient reported moderate improvement on muscle weakness after prescribing with 30 mg of pyridostigmine bromide for three times per day. 
One year after regular treatment with pyridostigmine bromide, reexamination showed muscle strength of iliopsoas (4−/5), quadriceps femoris (4+/5), and posterior femoral muscles (4+/5).
Myopathological findings
Muscle pathology of the patient showed a marked variation in fiber size with atrophy, hypertrophy, hypercontracted fibers, as well as regeneration by HE (Fig. 1D, left panel), without an increase in lipid by ORO or abnormal glycogen storage by PAS. Immunohistochemical staining of R-dystrophin (Fig. 1D, right panel), N-dystrophin, and C-dystrophin was normal. Under electron microscopy, the myofibrillar arrangement was slightly disordered, with dissolution and degeneration of individual myofibrils. Occasionally, autophagosomes could be observed on longitudinally cut muscle specimens, without autophagic vacuoles or other disease-specific pathological changes (Fig. 1E). Immunofluorescence disclosed the normal distribution of α-dystroglycan on the sarcolemma and a cytoplasmic distribution of GMPPB in muscle fibers from a non-disease control (Fig. 1G and H). In contrast, the patient's muscle exhibited a mosaic deficiency of α-dystroglycan, and GMPPB could hardly be detected (Fig. 1G and H). Likewise, the results of western blotting showed significantly lower levels of α-dystroglycan and GMPPB protein in the patient's muscle compared with the control's (Fig. 1I).
Mutant protein detection
The protein level of GMPPB-WT/Mut was examined by western blotting after transfecting the GMPPB constructs into HEK 293T cells, showing the c.1060G>A was about 78.8% of the WT, and the c.803T>C was even lower (57.1%) through three independent repeated experiments ( Fig. 2A). In immunofluorescence staining, the c.803T>C and c.1060G>A mutant GMPPB tended to form punctate aggregates compared with the diffuse distribution of wildtype (Fig. 2B).
Furthermore, we also tested the level of LC3 in C2C12 cells transfected with GMPPB-WT/Mut (Fig. 2C, left panel). Since LC3-II is more stable and more closely related to autophagosomes than LC3-I, the consensus on autophagy detection is that levels of LC3-II should be compared to an internal control (such as actin or GAPDH), but not to LC3-I. 15 In the blotting, the LC3-II/GAPDH ratio in the two mutant groups was higher than in the WT or vector (VT) group (Fig. 2C). To further determine whether the mutant GMPPB was degraded via the autophagy-lysosome pathway, we treated the cells with the lysosomal inhibitor leupeptin for 12 h and observed that the decreased c.803T>C and c.1060G>A protein levels were restored; when leupeptin was maintained for a longer time (18 h), the restoration was further promoted (Fig. 2D). In addition, both mutant GMPPB and LC3, appearing as aggregates, completely colocalized with each other in the mutant groups in immunolabeling, whereas in the WT group both GMPPB and LC3 spread uniformly in the cytoplasm (Fig. 2F). In comparison with the VT (Fig. 2G) and WT groups (Fig. 2H), the cells transfected with c.803T>C (Fig. 2I) and c.1060G>A (Fig. 2J) had an increased number of lysosomes (L) and autophagosomes (AP) under electron microscopy. However, the results of qPCR did not show a statistical difference between WT, c.803T>C, and c.1060G>A (Fig. S1A). In vitro, GMPPB aggregates were surrounded by or colocalized with lysosomes labeled by LAMP1 (Fig. S1B). The four mutations selected from the previous literature also had similar behavioral characteristics (Fig. S1B-D).
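As a generic illustration of the quantification logic mentioned above (LC3-II or GMPPB band signals normalized to GAPDH and then expressed relative to the wild-type lane, averaged over independent repeats), a minimal Python sketch follows. The intensity values are invented placeholders, not measured data, and the procedure is not claimed to be the authors' exact analysis.

import numpy as np

def fold_of_wt(target, loading, wt_index=0):
    # normalize each lane to its loading control, then to the wild-type lane
    ratio = np.asarray(target, float) / np.asarray(loading, float)
    return ratio / ratio[wt_index]

# lanes ordered WT, c.803T>C, c.1060G>A; three hypothetical independent repeats
gmppb = [[1.00, 0.61, 0.82], [0.95, 0.55, 0.74], [1.05, 0.56, 0.81]]
gapdh = [[1.00, 1.02, 0.98], [0.97, 1.00, 0.96], [1.01, 0.99, 1.02]]

per_repeat = np.array([fold_of_wt(t, l) for t, l in zip(gmppb, gapdh)])
print(per_repeat.mean(axis=0))   # mean fold of WT across the repeats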
Discussion
We described a Chinese female patient presenting limbgirdle muscle weakness with mild fluctuation, without intellectual disability or epilepsy, due to compound heterozygous GMPPB mutations (c.803T>C and c.1060G>A). Her parents each carrying one of the two variants were not clinically affected. Similarly, in 2017, Luo et al. documented four cases harboring distinct features of LGMD as well as neuromuscular junction defect. 11 However, in 2015, Belaya et al. described eight cases with predominant muscle weakness fluctuation. 2 Among these, the myasthenic component could benefit from pyridostigmine treatment.
The GMPPB gene, encoding GDP-mannose pyrophosphorylase B, has two transcript variants NM_013334 and NM_021971, with eight and nine exons, respectively. The first seven exons of both transcripts are identical. The eighth exon in NM_013334 is 396 bases long encoding 131 amino acids. The middle 27 amino acids are spliced out in NM_021971 to separate exon 8 and 9. Thus, NM_013334 encodes protein with 387 amino acids (Isoform 1 in Fig. 3), whereas NM_021971 encodes protein with 360 amino acids (Isoform 2 in Fig. 3). It has been confirmed that Isoform 2 is expressed at much higher levels than Isoform 1 in the skeletal muscle and the central nervous system. 1 To date, no pathogenic mutations have been identified in the 27 amino acids that are specific to Isoform 1. Thus, Isoform 2 is commonly used for the annotation of variants described in most of the related studies. 1,2,11,16,17 Thus far, a total of 51 mutations in GMPPB have been associated with the disease, including 42 missense variants, 5 nonsense, 2 frameshift and 2 splicing (Fig. 3). Although no clear genotype-phenotype correlations have been clarified and mutations may locate through the whole length of the protein, a total of 15 mutations can be classified with relatively high risk to cause severe phenotype with not only myopathy but also mental retardation, cerebellar involvement, and epilepsy (red in Fig. 3). 1,2,4,12,16,18 As shown in Figure 3, GMPPB consists of two important domains, N-terminal catalytic domain and Left-handed parallel beta-Helix (LbH) domain. 1 Comparing with other parts of the protein, causative mutations mostly concentrated in LbH domain (32% per bp) indicating an evolutionary highly conservative sequence with essential physiological function. [1][2][3]12,16,18,19 Thus far, several compelling reports have confirmed GMPPB mutation could cause the protein aggregates and expression impairment. 1,2,11 However, it remains elusive about the accurate location of GMPPB aggregates and the mechanism of mutant degradation. In this work, we likewise observed low levels of GMPPB and a-dystroglycan immunostain in the patient's muscle. In vitro, two mutations exhibited varied degrees of aggregation and degradation accompanied by the upregulation of LC3-II, which could be blocked and reversed by lysosomal inhibitor. Furthermore, the insoluble aggregates of GMPPB mutants completely colocalized with LC3-II (the membrane type of LC3). In eukaryotes, autophagy-lysosome and ubiquitin-proteasome system (UPS) are two major quality control and recycling mechanisms responsible for cellular homeostasis. [20][21][22] The UPS is responsible for the degradation of short-lived proteins and soluble un/mis-folded proteins, whereas autophagy-lysosome eliminates longlived proteins, insoluble protein aggregates and degenerated organelles and intracellular parasites. [22][23][24][25] In addition, another four mutations documented before (c.781C>T, c.877C>T, c.1034T>C, and c.1108G>C, NM_013334), also have similar behaviors with the mutations reported in this paper. In this work, we support the mutant GMPPB can be distinguished by autophagy-lysosome and lead to protein degradation through lysosomaldegradation pathway.
Conclusion
This work identified two novel GMPPB mutations causing an overlap between LGMD 2T and CMS, without mental retardation and epilepsy. Moderate improvement was gained after pyridostigmine bromide treatment (30 mg, three times/day). The mutations lead to abnormal GMPPB distribution and reduced expression, as well as a-dystroglycan deficiency. This study provides the initial evidence that mutant GMPPB colocalizes with autophagosome and can be degraded through autophagy-lysosome pathway. Considering GMPPB-related LGMD 2T is generally due to enzyme deficiency and has intimate associations with lysosomal degradation, we suppose the enzyme replacement could become one of the therapeutic targets in the future, especially for the patients with severe phenotype. The other coauthors report no disclosures relevant to the manuscript. | 2019-05-26T13:35:34.020Z | 2019-05-08T00:00:00.000 | {
"year": 2019,
"sha1": "e7b4c40eb3fb7af5988ea0af5963013b16421f8e",
"oa_license": "CCBYNCND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/acn3.787",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e7b4c40eb3fb7af5988ea0af5963013b16421f8e",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253021088 | pes2o/s2orc | v3-fos-license | Current development and future challenges in microplastic detection techniques: A bibliometrics-based analysis and review
Microplastics have been considered a new type of pollutant in the marine environment and have attracted widespread attention worldwide in recent years. Plastic particles with particle size less than 5 mm are usually defined as microplastics. Because of their similar size to plankton, marine organisms easily ingest microplastics and can threaten higher organisms and even human health through the food chain. Most of the current studies have focused on the investigation of the abundance of microplastics in the environment. However, due to the limitations of analytical methods and instruments, the number of microplastics in the environment can easily lead to overestimation or underestimation. Microplastics in each environment have different detection techniques. To investigate the current status, hot spots, and research trends of microplastics detection techniques, this review analyzed the papers related to microplastics detection using bibliometric software CiteSpace and COOC. A total of 696 articles were analyzed, spanning 2012 to 2021. The contributions and cooperation of different countries and institutions in this field have been analyzed in detail. This topic has formed two main important networks of cooperation. International cooperation has been a common pattern in this topic. The various analytical methods of this topic were discussed through keyword and clustering analysis. Among them, fluorescent, FTIR and micro-Raman spectroscopy are commonly used optical techniques for the detection of microplastics. The identification of microplastics can also be achieved by the combination of other techniques such as mass spectrometry/thermal cracking gas chromatography. However, these techniques still have limitations and cannot be applied to all environmental samples. We provide a detailed analysis of the detection of microplastics in different environmental samples and list the challenges that need to be addressed in the future.
Introduction
Since the invention of plastic in the 1950s, plastic products have been widely used, and the environment is flooded with plastic waste. 1,2 It is estimated that by 2060 there will be approximately 1.55-26.5 billion tons of plastic waste. 3 In 2004, the term "microplastics" was first introduced, referring to plastic particles smaller than 5 mm. 4 Compared to large plastics, tiny microplastics are widely distributed in the environment and are more easily ingested by organisms. Microplastics are prevalent in aquatic organisms such as fish and shellfish and are subject to food chain transport, 5 thus posing a potential threat to the health of organisms and ecosystems. Microplastics are broadly divided into two categories, primary microplastics, and secondary microplastics. The former refers to nanoplastics added to toiletries, biomedical products, waterproof coatings, and nanomedicines. 6 The latter refers to plastic particles formed from plastic waste under the external effects of light, mechanical action, chemical and biological degradation. 7 Table 1 shows the main categories of microplastics. Investigations have shown that micro/nanoplastics are widely present in water, sediment, soil, and atmosphere. 8,9 In addition to areas of intense human activity, micro/nanoplastics have been found in uninhabited mountainous areas and even in surface water, sea ice, benthic organisms, and penguin gastrointestinal tracts in the North and South Poles. 10 The global plastic pollution situation is becoming increasingly critical, and it is estimated that microplastics in the subtropical convergence zone will increase by a factor of two or four by 2030 or 2060 compared to the present. 11 Studying the ecological and environmental health impacts of micro-and nano-sized plastics requires effective detection and monitoring methods. The extraction, separation, and determination of plastic particles of all sizes, especially nanoscale particles, from complex environmental and biological samples is a hot topic of current research. Although there are many techniques and methods, there is a lack of uniform standards for the extraction, qualitative and quantitative analysis of microplastics from different sample matrices. 26 Therefore, there is an urgent need to develop efficient and pervasive extraction, identification, and quantification methods to obtain comparable data. 27 Bibliometric analysis is a literature and information mining method based on mathematical statistics. It can reflect research trends and hotspots through clustering relationships of keywords in the literature and has become an important tool for global analysis in various scientific fields [28][29][30][31][32][33][34][35][36][37] . As an emerging environmental pollutant, microplastics are receiving increasing attention worldwide. Therefore, it is necessary to update the bibliometrics of this topic. To date, bibliometric analyses on microplastics have focused on the development of the entire field. We believe that how microplastics are detected is a very important part of its assessment. Only an accurate measurement of microplastics can give an indication of its impact on the environment. Therefore, we have paid special attention to the development of detection techniques in microplastics in this bibliometric work.
Materials and methods
Two bibliometric software packages have been used in this systematic literature review. The first is CiteSpace, developed by Dr Chaomei Chen, a professor at the Drexel University School of Information Science and Technology [38][39][40][41]. CiteSpace 6.1R2 was used to calculate and analyze all documents. COOC is another emerging bibliometric software package. 42 COOC 12.6 was used for the analysis of annual publications and keyword co-occurrence. We used the Web of Science Core Collection as the database to assure the integrity and academic quality of the studied material. The terms "microplastics detection", "microplastics sensor" and "microplastics quantification" were used as the "Topic". The retrieval period was not restricted, and the date of retrieval was December 30, 2021. A total of 696 articles were retrieved, spanning the years 2012 to 2021.

Table 1. Main categories of microplastics and their typical sources:
Low density polyethylene (LDPE): plastic bags, bottles, fishing nets, straws, etc. 14,15
High density polyethylene (HDPE): milk and juice cans, cosmetic packaging, etc. 16,17
Polyvinyl chloride (PVC): plastic film, plastic cups, etc. 18,19
Polyethylene terephthalate (PET): bottles, etc. 20
Polypropylene (PP): rope, bottle caps, etc. 21,22
Polystyrene (PS): food containers, plastic utensils, etc. 23
Polyamide (PA): fishing nets, etc. 24
Foam polystyrene (EPS): buoys, bait boxes, disposable cups, etc. 25

Developments in the research field

Literature development trends

Figure 1 shows the annual and the cumulative number of publications on microplastic detection techniques between 2012 and 2021. As seen from the figure, the detection of microplastics did not become an immediate object of research with its conceptualization.
Since the introduction of microplastics in 2004, no detection techniques for it were reported until 2012 (this does not mean that microplastics could not be detected in previous work). Harrison et al. 43 investigated the applicability of Fourier transform infrared spectroscopy in detecting microplastics. Fossi et al. 44 proposed phthalates as a tracer of microplastic ingestion when investigating whether baleen whales ingest microplastics during their filter-feeding activities. Imhof et al. 45 constructed a precipitate separator to improve the density separation method. This method allows the separation of different ecologically relevant size classes of plastic particles from sediment samples. At the same time, they identified and quantified microplastics using micro-Raman (μ-Raman) spectroscopy, verifying that the recovery rate of this separation technique is significantly higher than that of classical density separation devices and froth flotation commonly used in the industry. The topic of microplastic detection has gradually gained attention since 2015, and the number of annual publications has started to exceed 10. After that, the topic started to enter a very rapid development, with more than 100 annual publications in 2019. By 2021, the annual number of articles on this topic has reached 263. Although we did not include data on new publications in 2022 in this bibliometric survey, the trend of continued increasing publications has not been diminished based on the available data (as of June 2022). From a bibliometric analysis, this topic is entering a phase of rapid development, attracting many scholars. There is no doubt that a large amount of interest in microplastic detection technology is inseparable from the fact that microplastics are a hot topic in the environmental field today. The update of old detection techniques and the establishment of new methods are generally based on the widespread interest in the analyte. Figure 2 shows the top 10 journals that published the most papers regarding microplastic detection techniques. It can be seen that Marine Pollution Bulletin published the most significant number of papers, accounting for 12.64% of all papers on this topic. In second place was Science of the Total Environment, with 77 papers accounting for 11.06% of the total. More than half of the journals in Figure 2 are affiliated with environmental science. In addition, Analytical and Bioanalytical Chemistry and Analytical Methods are journals related to analytical chemistry, and Analytical Methods in particular focuses on new analytical assay techniques. This demonstrates that the detection of microplastics has now attracted the attention of not only environmental scientists but has also involved analytical chemists. Figure 2 also includes the Journal of Hazardous Materials, which published 23 papers on this topic. This journal mainly publishes papers related to materials harmful to humans and the environment. Microplastics, a series of tiny forms of polymeric materials, have received so much attention in recent years precisely because of the pollution and toxicity they produce in the environment.
Journals, cited journals and research subjects
In addition to the number of papers a journal publishes on the topic, the frequency with which the journal is cited by papers related to the theme is also an important indicator. Table 2 shows the top 15 cited journals on microplastic detection techniques. It can be seen that most of the journals in Figure 2 are also included in Table 2, except for the Journal of Hazardous Materials. The papers published in the Journal of Hazardous Materials are most likely about the analysis of different environmental samples with different detection techniques. These works do not necessarily provide improvements and innovations in the methodology of detection. Therefore, they are not widely cited in papers on the topics we set. Journals in analytical chemistry are further represented in Table 2 with the additional inclusion of TrAC Trends in Analytical Chemistry and Analytical Chemistry. On the other hand, comprehensive journals are also covered in Table 2, including Scientific Reports, Science, and PLOS ONE. These journals do not necessarily publish a large number of papers on the topic, but the articles that appear in them have an indirect impact on it. For example, papers on the detection of other substances published in analytical chemistry-related journals have indirectly inspired the detection of microplastics. The analysis results in Figure 2 and Table 2 show that microplastic detection techniques mainly attract scholars from two fields: environmental science and analytical chemistry. In addition to the journals related to these two fields, the coverage of microplastics in comprehensive journals significantly impacts the investigation of this topic.
Although the most important journals on this topic and the fields to which they belong can be known from Figure 2 and Table 2, they do not present the most cutting-edge advances. Therefore, we analyzed journals that published on this topic for the first time in 2020 and 2021 (Table 3). Environmental science and analytical chemistry-related journals remain the most dominant areas in the table. It is also worth noting that the new journals appearing in 2020 include a series of journals related to food science, including Food Bioscience, Food Chemistry, Food Control, and Food Packaging and Shelf Life. This reflects that detecting microplastics in food has become a more important direction of investigation in 2021. Gündoğdu et al. 46 investigated the presence of microplastics in mussels sold in five cities in Turkey, where μ-Raman was used for the quantitative analysis. Another study 47 proposed an improved method for the detection of microplastics in white wines, in which μ-Raman was used for the first time to identify microplastic particles in complex beverages. Huang et al. 48 examined the extent of PS and PVC contamination in chicken based on attenuated total reflection mid-infrared spectroscopy (ATR-MIR) combined with chemometric techniques. Kedzierski et al. 49 investigated whether food trays made of extruded PS could generate microplastic contamination between the meat and the sealing film. Several microscopy and instrumentation-related journals appeared in 2021, including Micron, Microscopy Research and Technique, and Microsystems & Nanoengineering. Schmidt et al. 50 proposed a detection method for nanoplastics smaller than 1 μm. Using a correlation between scanning electron microscopy (SEM) and μ-Raman, they identified nanoplastics in the 100 nm range in various environments. Qian et al. 51 tried to combine sparse particle localization and miniaturized mass sensing functions on a microelectromechanical system (MEMS) chip to realize the analysis of sparse particles. The use of 4-dimethylamino-4′-nitrostilbene (DANS) fluorescent dyes for detecting microplastics was proposed by Sancataldo et al. 52 DANS staining can provide access to different detection and analysis strategies based on fluorescence microscopy. Meanwhile, some pharmacology and toxicology journals also appeared in 2021, including Regulatory Toxicology and Pharmacology and Pharmaceutics. The category of the published papers can reflect the evolution of the topic. Figure 3 shows the evolution of the categories of microplastic detection techniques over time. In terms of the geographical distribution of publications, the leading country contributed about 15% of all papers, and both Germany and the USA contributed more than 10% of the papers. This is due to the fact that this topic has attracted a great deal of academic attention and thus different countries have made considerable contributions to the topic. On the other hand, although a range of countries contributes to this topic, it is clear from the figure that Europe is the most actively involved region. This enthusiasm can also be observed in the timezone view (Figure 5). The first countries to study this topic in 2012 were Italy and England. China and the USA joined in 2015. In Asia, Japan and South Korea are also interested in this topic. South Korea entered the survey on this topic in 2016, while Japan did not join until 2018. Since the topic is still on the rise, many countries are getting involved every year. Singapore, Turkey, Iceland, the United Arab Emirates, Ukraine, Romania, Peru, Croatia, Lithuania, and Cyprus published their first papers on this topic in 2021.
Figure 6 illustrates the cooperation network between the different institutions on this topic. As can be seen, this topic has formed two main important networks of cooperation. The first cooperative network was mainly led by the Institut Francais de Recherche pour l'exploitation de la mer (IFREMER), Carl von Ossietzky Universitat Oldenburg, University of Toronto, Aalborg University and the Technical University of Denmark. This collaborative network covers many research institutions and universities in European countries and North America. Another collaborative network is led by Chinese institutions.
Keyword analysis and evolution of the field
A keyword analysis is often used in bibliometrics to understand the different directions under a topic. However, the fifteen most frequently occurring keywords in microplastic detection techniques do not include techniques and analytical methods. This is because a paper often contains five or more keywords. Microplastic detection techniques constitute only one of the directions of microplastics research, so papers of this type contain many keywords related to the environment and the samples studied. For this reason, we purposely screened the keywords and listed only those related to detection technology (Table 4).
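To make the notion of keyword statistics used here more concrete, the following minimal Python sketch counts keyword frequencies and pairwise co-occurrences from per-paper keyword lists; the keyword lists shown are hypothetical placeholders, not data from this survey, and the snippet only mimics the kind of counting performed by tools such as COOC or CiteSpace.

```python
from collections import Counter
from itertools import combinations

# Hypothetical author-keyword lists for three papers (illustration only).
papers = [
    ["microplastics", "ftir", "density separation", "sediment"],
    ["microplastics", "raman spectroscopy", "nile red", "fish"],
    ["microplastics", "ftir", "nile red", "seawater"],
]

# Keyword frequency: how many papers mention each keyword.
freq = Counter(kw for kws in papers for kw in set(kws))

# Co-occurrence: how often two keywords are assigned to the same paper.
cooc = Counter()
for kws in papers:
    for a, b in combinations(sorted(set(kws)), 2):
        cooc[(a, b)] += 1

print(freq.most_common(5))
print(cooc.most_common(5))
```

Clustering in the bibliometric software is then performed on such a co-occurrence matrix, which is why keyword screening matters before interpretation.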
Visual classification is commonly used to identify microplastics in the environment, but the method has poor reliability and low accuracy. To obtain an accurate count of microplastics in the environment, fluorescent staining methods are used to aid in identifying microplastics. Usually, Nile red is used to fluorescently label microplastics. This method has the advantages of fast staining and a strong fluorescence signal. Therefore, it appears very frequently in the keywords of this topic. For example, Shim et al. 53 proposed a method for staining microplastics with Nile Red in 2016. They found that 5 mg/L of Nile Red in hexane can effectively stain plastics and make them identifiable by their green fluorescence. This technique can identify PE, PP, PS, polycarbonate, polyurethane (PU), and poly(ethylene-vinyl acetate). However, PVC, PA, and polyester cannot be identified. For more efficient identification using fluorescent staining, the counting of microplastics in fluorescence images can be performed with purpose-developed software. 54 Other studies have proposed staining microplastics with further fluorescent dyes [55][56][57]. In these works, the Nile Red staining technique is often used as a control group to verify the feasibility and advancement of the new technique.
FTIR and μ-Raman are the two most commonly used spectroscopic techniques for microplastics. These techniques have been widely used for the chemical identification of microplastics in water, sediment, and organism samples. In a study by Lusher et al., 58 FTIR spectroscopy was successfully used to identify the presence of microplastics in fish. The main components of these microplastics include PA, semi-synthetic cellulose materials, and rayon. However, this method is unable to identify irregular microplastics (reflectance FTIR measurements of irregularly shaped materials can have refractive errors 43 ). Attenuated total reflection (ATR)-FTIR can facilitate the identification of irregularly shaped microplastics but only applies to the analysis of plastic particles larger than 500 μm. 43 To address this problem, Löder et al. 59 applied focal plane array-based micro-FTIR imaging to identify microplastics in environmental samples. This technique can detect plastic particles smaller than 20 μm and cover a larger filter surface area than conventional FTIR. However, analyzing the entire sample filter surface with high spatial resolution can be very time-consuming. Therefore, further optimization of FTIR spectroscopy is essential for detecting small particles of microplastics in complex environmental samples. Like FTIR spectroscopy, μ-Raman is a common technique for the analysis of microplastics. μ-Raman can detect plastic particles down to 1 µm and responds better to nonpolar plastics such as PP and PE. 60 The use of μ-Raman combined with microscopy allows the identification of microplastics of various sizes. 61 This technique does not destroy the sample while meeting the analytical needs of complex samples, but it cannot handle samples exhibiting fluorescence (e.g., residues from biological sources). In this case, purifying the sample before μ-Raman measurements is recommended to remove fluorescent residues. Mass spectrometry is often used with thermal cracking gas chromatography (Pyr-GC/MS). Pyr-GC/MS can identify the chemical composition of microplastics by analyzing their characteristic thermal degradation products. It accurately identifies the polymer type by comparing it to a pyrolysis reference map of a known pure polymer. 62 The advantage of this technique is that it does not require sample pretreatment and allows simultaneous determination of the type of plastic polymer and associated plastic additives. However, this technique only allows the analysis of one particle per cycle, which is not suitable for analyzing samples from complex environments. Mass spectrometry can also be combined with thermogravimetric analysis and thermal desorption gas chromatography (TED-GC/MS). This technique is able to handle complex environmental samples. For example, Dümichen et al. 63 successfully identified PE microplastics from soil, suspended solids, and mussels using TED-GC/MS. Effectively separating microplastics from samples such as water, sediment, and organic matter is a crucial step for subsequent detection and analysis. Density separation is widely used to separate low-density plastic particles from dense sand, slurry, sediment, and other samples. Therefore, it has become a high-frequency keyword in this topic. A variety of high-density solutions have been used to separate microplastics from environmental samples. The most commonly used solution is saturated NaCl. 64 It is inexpensive and non-toxic, but it is only suitable for microplastics with densities below 1.20 g/cm³.
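The practical meaning of these solution densities can be illustrated with a short sketch that checks which common polymers are expected to float in a given flotation liquid; the polymer and solution densities used below are approximate, typical literature values included only for illustration.

```python
# Approximate densities in g/cm^3 (typical literature values, for illustration only).
polymer_density = {"PP": 0.90, "LDPE": 0.92, "HDPE": 0.96, "PS": 1.05,
                   "PA": 1.14, "PET": 1.38, "PVC": 1.40}
solution_density = {"saturated NaCl": 1.20, "ZnCl2": 1.60, "NaI": 1.80}

# A particle is recoverable by flotation only if it is less dense than the solution.
for sol, rho_sol in solution_density.items():
    floats = [p for p, rho in polymer_density.items() if rho < rho_sol]
    print(f"{sol} ({rho_sol} g/cm^3) recovers: {', '.join(floats)}")
```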
However, some high-density microplastics, such as PVC and PET, could not be completely separated. 65 To overcome this drawback, Nuelle et al. 66 developed a two-step method using air-induced NaCl solution overflow for pre-extraction followed by additional flotation using a sodium iodide solution. The results showed that the recovery rates of all plastic pellets (PE, PP, PVC, PET, PS, PUR) ranged from 91% to 99%, except for the recovery rate of expanded PE, which was 68%. In addition, saturated sodium polytungstate solutions have been shown to isolate certain high-density microplastics. 67 Similar results can be achieved with other high-density solutions. For example, ZnCl2 solution has a density range of 1.50-1.70 g/cm³ and can extract almost all microplastic particles, but it is toxic and relatively expensive. 45,68 NaI alone can also separate high-density microplastics, but it readily reacts with cellulose filters, darkening them and thus affecting visual recognition. 69 Cluster analysis of keywords provides further insight into the different directions of investigation in this topic. Figure 7 shows that 15 clusters were formed after clustering the keywords. Table 5 describes the clusters and their ID, size, silhouette, and respective keywords.
Based on the bibliometric analysis, the detection of microplastics currently encompasses the following techniques:

1. The most basic method used for microplastic identification is the visual inspection method. This method uses the human eye or microscope to observe the microplastics and count them according to their size and dimensions. Microplastics with particle diameters of 1 mm to 5 mm can be directly identified and analyzed using visual inspection. Due to the differences in color, structure, and other characteristics of the microplastics, there will be some influence on the identification results of the visual inspection method.

2. If microplastic samples cannot be observed by visual inspection, they can be identified by microscopic techniques. Compared with a visual inspection, light microscopy is more convenient and efficient, but at the same time, the error is also more significant. Microplastics are analyzed and identified under ordinary light microscopy with an error rate of about 20%. In the case of colorless, transparent microplastics, the error rate would be over 70%. Scanning electron microscopy uses electron microscopy for analysis and identification, and the magnification of electron microscopy is greater than that of ordinary light microscopy. However, this method requires the sample to be solid, non-toxic, and non-radioactive. Also, this method has high requirements for the laboratory environment.

3. Microplastics can be cleaved to produce unique pyrolysis profiles, so their chemical composition can be analyzed using Pyr-GC/MS. However, different polymers may also produce the same or similar pyrolysis products, which can overlap in the pyrolysis profile and produce errors in the final results.

4. The best current method for determining microplastics is FTIR. It can obtain specific polymer information from the characteristic spectra of microplastics and can identify the type of microplastics in the environment. However, the sample must be dried when using this method to avoid interference from moisture and impurities in the sample.

5. Due to the diversity of microplastic types, different microplastics have their own unique Raman spectral profiles. Compared with FTIR, μ-Raman has a broader range of applications and better sensing ability. However, μ-Raman is susceptible to the interference of fluorescent substances.
Conclusion and perspectives
The detection of microplastics has gradually gained attention since 2015. So far, this topic has shown a growing trend year by year. The detection of microplastics has now attracted the attention of not only environmental scientists but also analytical chemists. Environmental science and analytical chemistry-related journals remain the most dominant areas in this topic. It is also worth noting that the new journals appearing in 2020 include a series of journals related to food science. The results of the geographical analysis point to European scholars being the most active in this topic, which has formed two main important networks of cooperation. Reliable identification of microplastic particles in various environmental matrices is still limited. First, some microplastics are present at trace levels in the environment, thus requiring detection techniques with high sensitivity. At the same time, these microplastics occur as mixtures of different polymer types, which are difficult to distinguish quickly by detection techniques. Separation and concentration are effective ways of addressing these two challenges, but no well-established protocol exists.
Based on the bibliometric analysis, we believe that the following directions are the priorities that need to be overcome for the future development of microplastic detection technology.
1. Challenges remain in the concentration and detection of microplastics in water and air samples. The development of detection techniques requires new strategies for the analysis of these two types of environmental samples.

2. It is challenging to meet the requirements of sensitivity and resolution using a single analytical technology. Analytical techniques based on thermal cracking reactions coupled with GC-MS are most likely to be developed as a method that satisfies both detection sensitivity and specificity when supported by appropriate sample pre-treatment.

3. It is important to establish a standard protocol for the detection of microplastics.
Detection techniques at this stage do not even use uniform concentration units for microplastics. Therefore, the establishment of a uniform set of criteria would allow for comparability between the results of different studies.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/ or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/ or publication of this article: This work was supported by the Zhejiang Key Laboratory of Ecological and Environmental Monitoring, Forewarning and Quality Control, National Natural Science Foundation of China, (grant number EEMFQ-2021-2, 22108265, 42173073) | 2022-10-21T06:18:05.609Z | 2022-10-01T00:00:00.000 | {
"year": 2022,
"sha1": "d468a03e506464e38dfdab6d99338bc5f0a22146",
"oa_license": "CCBYNC",
"oa_url": "https://rgu-repository.worktribe.com/preview/1787446/JIN%202022%20Current%20development%20and%20future%20(VOR).pdf",
"oa_status": "GREEN",
"pdf_src": "Sage",
"pdf_hash": "828e55f85ec665df72868d7902e0d62dc2fc9c12",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
240533752 | pes2o/s2orc | v3-fos-license | Combined Full-Reference Image Quality Metrics for Objective Assessment of Multiply Distorted Images
In recent years, many objective image quality assessment methods have been proposed by different researchers, leading to a significant increase in their correlation with subjective quality evaluations. Although many recently proposed image quality assessment methods, particularly full-reference metrics, are in some cases highly correlated with the perception of individual distortions, there is still a need for their verification and adjustment for the case when images are affected by multiple distortions. Since one of the possible approaches is the application of combined metrics, their analysis and optimization are discussed in this paper. Two approaches to combining metrics have been analyzed, based on the weighted product and on the proposed weighted sum with additional exponential weights. The validation of the proposed approach, carried out using four currently available image datasets containing multiply distorted images together with the gathered subjective quality scores, indicates a meaningful increase in the correlation of the optimized combined metrics with subjective opinions for all datasets.
Introduction
The increasing popularity and availability of relatively cheap cameras, as well as electronic mobile devices equipped with visual sensors, has undoubtedly caused a dynamic growth in the applicability of image and video analysis to many tasks. Some obvious examples are related to video surveillance, traffic monitoring, video inspection and diagnostics, video-based navigation of mobile robots, or even autonomous vehicles. Other applications are related to non-destructive testing, data fusion from various sensors, and many others, including modern Industry 4.0 solutions. Another factor influencing the growing popularity of image analysis is the development of freeware libraries, such as OpenCV, that make it possible to perform many tasks in real time, especially with hardware support provided by modern Graphics Processing Units (GPUs).
Nevertheless, machine and computer vision algorithms typically utilize natural images, which may be subject to various distortions occurring not only during their acquisition but also caused by, e.g., lossy compression or the presence of transmission errors. This situation is typical for modern electronic devices, such as cameras, phones, and other gadgets, where image data are subject to several nonlinear transformations before recording. In such a case, the ability to detect such distortions and assess the overall image quality is an important challenge for ensuring the reliability of the results obtained from image analysis.
In the recent several years, many objective image quality assessment (IQA) metrics have been proposed, which may be divided into three major groups: full-reference (FR), which require the knowledge of the original "pristine" image without any distortions, no-reference (NR) methods, also known as "blind" metrics, and less popular reduced-reference (RR) approaches, which assume a partial knowledge of the original (reference) image. Although NR methods are the most desirable, their universality and correlation with subjective opinions of the human observers, provided as Mean Opinion Scores (MOS) or Differential MOS (DMOS) values in IQA databases, are typically significantly lower in comparison to FR methods. A more detailed analysis of many metrics and their comparisons for various widely accepted datasets containing reference and distorted images together with subjective quality scores may be found in some recent survey papers [1][2][3][4].
There have been numerous attempts to improve the correlation between FR metrics and MOS (or DMOS). One way to do this is to design so-called combined metrics [5][6][7][8] that jointly employ several metrics (that we call elementary) in one way or another. In practice, one needs easily computable metrics and a simple way of combining them, similarly to the cases of 3D printed surfaces [9] or remote sensing images [10]. Because of this, the goal of this paper is to put forward a family of combined metrics that can be optimized for assessing the quality of images with multiple distortions. To the best of our knowledge, such optimization has not yet been carried out for the available databases containing only images with multiple distortions. Previously developed combined metrics [5,6,8,11,12] concern only singly distorted images.
The most common types of distortions that an ideal IQA metric should be sensitive to are blurring artifacts, various types of noise, and lossy compression artifacts. Although more than 20 types may be distinguished in some IQA datasets containing singly distorted images, e.g., 24 types in the TID2013 dataset [13] including color-related distortions, the combinations provided in the multiply distorted IQA datasets are limited to a few kinds of them. Typically, these are combinations of blur, noise, JPEG/JPEG 2000 artifacts, and contrast change. These five common types of distortions have been used, e.g., in the MDID database [14] discussed in Section 3.
Considering the interference of individual distortions and their influence on the perceived image quality, the usefulness of some metrics designed for singly distorted images for the development of the combined metrics highly correlated with subjective quality assessment of multiply distorted images is not obvious and should be verified experimentally.
The rest of the paper is organized as follows: Section 2 contains the overview of some elementary metrics, typically applied for the quality assessment of singly-distorted images, whereas four publicly available multiply-distorted image datasets used in experiments are presented in Section 3. Section 4 is related to the description of the idea of combined metrics and the proposed approach with experimental results discussed in Section 5. Section 6 concludes the paper.
Overview of Some Elementary Metrics
The performance of a combined metric depends on the following elements:
• The number of the combined elementary metrics;
• Which metrics are combined;
• How the metrics are combined;
• What images are used in testing.
Hence, we start by recalling modern elementary metrics. Development of modern visual quality metrics, replacing the "classical" pixel-based approaches such as Mean Square Error (MSE) or Peak Signal-to-Noise Ratio (PSNR), started in fact in 2002 with the idea of the Universal Image Quality Index (UQI) [15], followed by its improvement widely known as the Structural SIMilarity (SSIM) [16], implemented also in the multi-scale version (MS-SSIM) [17].
The general formula describing the idea of the SSIM, sensitive to three main types of distortions, i.e., luminance, contrast and structural distortions, may be expressed as

SSIM(A, B) = \frac{2\mu_A\mu_B + C_1}{\mu_A^2 + \mu_B^2 + C_1} \cdot \frac{2\sigma_A\sigma_B + C_2}{\sigma_A^2 + \sigma_B^2 + C_2} \cdot \frac{\sigma_{AB} + C_3}{\sigma_A\sigma_B + C_3},    (1)

where \mu_A, \mu_B, \sigma_A, \sigma_B and \sigma_{AB} denote the local means, standard deviations and covariance of the compared images A and B, and the default values of the stabilizing constants (preventing the instability of results for dark and flat image areas) for 8-bit grayscale images are: C_1 = (0.01 × 255)^2, C_2 = (0.03 × 255)^2 and C_3 = C_2/2. The above computations are performed using the sliding window approach and the final metric is the average of the local similarities. This approach was also the basis for some other similarity-based metrics, leading to a further increase of the correlations between the objective quality scores and subjective MOS or DMOS values provided in various IQA datasets (typically containing only singly distorted images). Some such examples, used also in this paper, are: information content weighted SSIM (IW-SSIM) and IW-PSNR [18], Complex Wavelet SSIM (CW-SSIM) [19], Feature SIMilarity (FSIM) [20], Quality Index based on Local Variance (QILV) [21], as well as a color version of SSIM (CSSIM), SSIM4 and its color version CSSIM4 [22], belonging to the group of SSIM-based metrics with additional predictability of image blocks.
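A minimal sketch of this sliding-window computation is given below; it uses the common two-constant form of SSIM (in which the choice C_3 = C_2/2 merges the contrast and structure terms into a single factor) and a uniform local window instead of the Gaussian weighting of the reference implementation, so the resulting values are only indicative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ssim_map(a, b, win=8, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Local SSIM map between two 8-bit grayscale images (uniform-window sketch)."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    mu_a = uniform_filter(a, win)
    mu_b = uniform_filter(b, win)
    var_a = uniform_filter(a * a, win) - mu_a ** 2
    var_b = uniform_filter(b * b, win) - mu_b ** 2
    cov_ab = uniform_filter(a * b, win) - mu_a * mu_b
    num = (2 * mu_a * mu_b + c1) * (2 * cov_ab + c2)
    den = (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)
    return num / den

# The final score is the average of the local similarities:
# score = ssim_map(reference, distorted).mean()
```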
A good illustration of the exemplary modifications of the SSIM might be the QILV metric [21], expressed as

QILV(A, B) = \frac{2\mu_{V_A}\mu_{V_B} + C_1}{\mu_{V_A}^2 + \mu_{V_B}^2 + C_1} \cdot \frac{2\sigma_{V_A}\sigma_{V_B} + C_2}{\sigma_{V_A}^2 + \sigma_{V_B}^2 + C_2} \cdot \frac{\sigma_{V_A V_B} + C_3}{\sigma_{V_A}\sigma_{V_B} + C_3},    (2)

where \sigma_{V_A V_B} denotes the covariance between the variances of two images (V_A and V_B, respectively), \sigma_{V_A} and \sigma_{V_B} are the global standard deviations of the local variance, with \mu_{V_A} and \mu_{V_B} being the mean values of the local variance. Another example may be FSIM [20], based on the local similarity defined as

S_L(x) = \frac{2 \, PC_A(x) \, PC_B(x) + T_1}{PC_A^2(x) + PC_B^2(x) + T_1} \cdot \frac{2 \, GM_A(x) \, GM_B(x) + T_2}{GM_A^2(x) + GM_B^2(x) + T_2},    (3)

where T_1 and T_2 are the stability constants preventing the division by zero and x is the sliding window position. The two main components are the phase congruency (PC), being a significance measure of a local structure, and the gradient magnitude (GM) as a complementary feature extracted using the Scharr edge filter. The final metric should be calculated according to the formula

FSIM = \frac{\sum_x S_L(x) \cdot PC_m(x)}{\sum_x PC_m(x)},    (4)

where PC_m(x) = max(PC_A(x), PC_B(x)) and x denotes each position of the local window on the image plane A (or B).
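Assuming the phase congruency and gradient magnitude maps have already been computed elsewhere, the FSIM-style pooling of Equations (3) and (4) reduces to a few array operations, as in the following sketch; the constants correspond to values reported as defaults for FSIM, but they are treated here only as placeholders.

```python
import numpy as np

def fsim_pool(pc_a, pc_b, g_a, g_b, t1=0.85, t2=160.0):
    """FSIM-style score from precomputed phase congruency (PC) and gradient (GM) maps."""
    s_pc = (2 * pc_a * pc_b + t1) / (pc_a ** 2 + pc_b ** 2 + t1)
    s_g = (2 * g_a * g_b + t2) / (g_a ** 2 + g_b ** 2 + t2)
    s_l = s_pc * s_g                   # local similarity, Equation (3)
    pc_m = np.maximum(pc_a, pc_b)      # pooling weight PC_m
    return float((s_l * pc_m).sum() / pc_m.sum())   # Equation (4)
```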
Another approach, originating from information theory, assumes the use of natural scene statistics (NSS) combined with a measurement of the mutual information between the subbands in the wavelet domain, proposed by Sheikh and Bovik as Visual Information Fidelity (VIF) metric [23]. Its simplified multi-scale pixel domain version (VIFp) requires fewer computations, although it does not allow the orientation analysis. Both methods are based on the earlier idea of Information Fidelity Criterion (IFC) [24]. A lower computational complexity metric, known as DCT Subbands Similarity (DSS) [25] utilizes the fact that statistics of DCT coefficients change with the degree and type of image distortion. Another motivation for its authors has been the popularity of the 2D DCT as many image and video coding techniques are based on block-based DCT transforms, particularly originating from JPEG and MPEG standards.
A combination of steerable pyramid wavelet transform and SSIM, known as IQM2, was proposed by Dumic et al. [26], where the kernel with two orientations was applied to achieve the best performance preserving low computational demands.
A different approach to the perceptual IQA was proposed by Wu et al. [27], utilizing the internal generative mechanism (IGM) adopting a Bayesian prediction model and decomposing the image into predicted and disorderly portions. It was assumed that the first part may be assessed using the SSIM-like methods, whereas the degradation on disorderly uncertainty may be predicted using the PSNR. Both parts should be further nonlinearly combined to acquire the final quality score.
Chang et al. [28] proposed the method based on the independent feature similarity (IFS) simulating the properties of the Human Visual System (HVS), particularly useful for the quality prediction of images with color distortions. Due to the possible use of the partial information from the reference image (based on the use of Independent Component Analysis-ICA), this method can also be considered as an example of the RR approach. Another metric based on the HVS, known as Perceptual SIMilarity (PSIM) was proposed as a four-step method [29] and partially verified using two multiply distorted databases. It is based on the extraction of gradient magnitude maps for both compared images followed by calculations of their multi-scale similarities and measurement of chromatic channel degradations and final pooling.
Alternatively, authors of the Sparse Feature Fidelity (SFF) metric [30] assumed transformation of images into sparse representations in the primary visual cortex to detect the sparse features by the feature detector trained by the ICA algorithm using natural image samples. They used feature similarity and luminance correlation components to simulate jointly visual attention and visual threshold. The other metric based on sparse representations, known as UNIQUE [31], utilized an unsupervised learning approach. Interestingly, in the preprocessing step, a color space selection is performed (conversion into YCbCr model is suggested with replacement of the Cb chrominance by the green channel) followed by random patch sampling, forming the vector containing 64 elements for each of three channels, further normalization using a mean subtraction and a whitening operation. The additional extension by analyzing the learned weights was proposed as the MS-UNIQUE metric [32]. Both metrics were trained using randomly selected patches from the ImageNet database. Further extension of such a training-based approach, particularly using deep learning CNN approaches [33,34], is also possible; however, it still requires a relatively large amount of training data available mainly in the singly distorted IQA datasets.
An interesting metric, utilizing gradient similarity, chromaticity similarity, and deviation pooling, was proposed as the Mean Deviation Similarity Index (MDSI) [35], where the color distortions were measured using a joint similarity map of two chromatic channels. Another attempt to use the gradient similarity has been proposed by Xue et al. [36], known as Gradient Magnitude Similarity Deviation (GMSD).
Reisenhofer et al. [37] proposed the use of the Haar wavelet decomposition to develop another HVS-based perceptual similarity metric, known as HaarPSI. This metric is based on the use of six 2D Haar wavelet filters extracting the horizontal and vertical edges on different frequency scales and may be considered as a simplification of FSIM [20]. Another feature-based method, known as RVSIM [38], utilizes Riesz transform (similarly as earlier RFSIM [39]) together with visual contrast sensitivity, whereas the CVSSI metric [40] is based on the similarity of contrast and visual saliency (VS), forming the final score with the use of weighted standard deviations of the local contrast quality map and the global VS quality map.
Considering the topic of this paper, the above overview of elementary metrics is limited to the FR algorithms demonstrating a high prediction accuracy for the four considered multiply distorted IQA datasets, obtained without any nonlinear fitting functions (e.g., logistic or polynomial ones). Although a few metrics oriented for the quality assessment of multiply distorted images have been recently proposed, e.g., using gradient detection [41], in some cases, their codes are not publicly available or they belong to the group of "blind" methods, such as the method based on phase congruency [42]. Therefore, the results presented in this paper are focused on the combination of better-known elementary metrics with available codes, originally developed for singly distorted images.
In addition to the above-mentioned metrics, some of the IQA methods, which have led to an improved performance applied in the combined metrics, include: WSNR [43], PSNRHMA [44], VSNR [45], Visual Saliency-Induced Index (VSI) [46], Multiscale Contrast Similarity Deviation (MCSD) [47], spectral residual similarity (SR-SIM) [48] and Wavelet Based Sharp Features (WASH) [49]. Some other recently proposed metrics used in experiments have been developed originally for the quality estimation of screen content images, such as SIQAD [50] and SCI_GSS [51], as well as for the reduced-reference image quality assessment of contrast change (RIQMC) [52].
Since some of the methods presented above are designed for the direct use with color images only and the others require the use of grayscale ones, all the calculations for the latter ones have been made using MATLAB's rgb2gray conversion, according to the ITU-R BT.601-7 Recommendation, after rounding to three decimal places.
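For reference, a NumPy equivalent of this grayscale conversion step might look as follows; the weights are the BT.601 luma coefficients rounded to three decimal places, and the function is only an illustrative stand-in for MATLAB's rgb2gray.

```python
import numpy as np

def rgb_to_gray_bt601(rgb):
    """Weighted luma conversion of an RGB array (H, W, 3), BT.601-style weights."""
    # 0.299, 0.587 and 0.114 are the BT.601 weights rounded to three decimal places.
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
```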
Multiply Distorted Image Quality Assessment Datasets
The development of new IQA datasets is a quite challenging and time-consuming task, especially assuming conducting perceptual experiments involving many observers for a relatively large number of distorted images. Hence, among many IQA datasets, only a few of them, such as, e.g., TID2013 [13], containing numerous images subject to several types of distortions, may be considered as widely accepted by the community. Unfortunately, most of the databases developed several years ago do not contain images with more than a single distortion applied simultaneously, and most of the metrics developed and verified using such datasets predict the quality of multiply distorted images with relatively low accuracy.
As stated by Chandler [2], one of the main challenges in the multiply distorted IQA is the fact that the developed metrics should consider not only the joint effects of distortions on the image but also the effects of distortions on each other. Hence, considering the practical usefulness of metrics that would be able to predict the visual quality of multiply distorted images with the possibly highest accuracy, some other datasets have been developed to fill this research gap.
The first of such datasets, provided by the Laboratory for Image and Video Engineering (LIVE) from Texas University at Austin, referred to as LIVEMD [53], contains two groups of doubly distorted images. The first group deals with a blur followed by JPEG lossy compression, whereas the second one contains blurred images due to defocusing corrupted further by a white noise to simulate sensor noise. Each group contains 225 images, however, some of them are in fact singly distorted, hence only the subset of 270 multiply distorted images has been used in experiments carried out in our paper.
Another dataset, known as MDID13 [54], contains 12 natural color reference images and 324 images corrupted simultaneously by distortions that may take place during the acquisition, compression, and transmission of images. Six standard definition reference images (768 × 512 pixels) originate from the Kodak database, whereas the other six high definition images (1280 × 720) are the same as in the LIVEMD dataset. The testing images contain the three-fold mixtures of blurring, JPEG compression, and noise, being complementary to the LIVEMD, where only two-fold artifacts are used. Subjective scores have been provided by 25 inexperienced observers using two viewing distances due to different image sizes and the single-stimulus (SS) method according to the ITU-R BT.500-12 Recommendation.
The third database used for the verification of the proposed approach is known simply as MDID [14]. It contains 20 reference images (cropped to 512 × 384 pixels without scaling) and 1600 distorted images. The images are corrupted by the combinations of five distortions, namely Gaussian noise (GN), Gaussian blur (GB), contrast change (CC), JPEG, and JPEG2000 lossy compression. Each distorted image has been obtained from the respective reference image applying random types and random levels of distortions. The MOS values have been provided by 192 subjects who participated in the subjective rating. Sample images from the MDID database affected by various combinations of distortions with different levels are presented in Figure 1 with the reference image marked by the red frame. The last dataset, developed in the Imaging and Vision Laboratory at the University of Milano-Bicocca, is known as IVL_MD or MDIVL database [55]. It contains two groups of images: 400 images with noise and JPEG distortions, as well as 350 images with blur plus JPEG distortions, together with corresponding MOS values. The distorted images, subjectively evaluated by 12 observers using the SS method, have been obtained from 10 reference images that have the size of 886 × 591 pixels.
There are also other databases containing images with multiple distortions, e.g., LIVE in the Wild Image Quality Challenge database, containing widely diverse authentic image distortions [56]. However, this database does not offer reference images and, therefore, it does not allow calculating FR metrics that are needed in our case.
Comparing the four publicly available multiply distorted IQA databases, the most relevant one is undoubtedly the MDID database [14], not only because of the largest number of images and distortion types but also considering the numerous human observers involved in perceptual experiments. Therefore, the experimental results obtained for this dataset should be considered as the most important. On the other hand, due to the greater diversity of distortions and higher number of images, the expected correlation values are lower than for the other datasets.
To provide a comparison of the performance of the best elementary (individual) metrics for each of the above databases, the Pearson Linear Correlation Coefficients (PCC) between the raw objective scores (i.e., without any additional nonlinear fitting) and subjective MOS/DMOS values have been calculated, illustrating the prediction accuracy. Additionally, Spearman Rank Order Correlation Coefficients (SROCC) and Kendall Rank Order Correlation Coefficients (KROCC) have been calculated to illustrate the prediction monotonicity of each elementary metric.
The obtained performance for selected elementary metrics, including the best performing ones, is presented in Table 1, where the top three results for each dataset are marked with bold font. As can be easily noticed, various methods demonstrate the best performance for various datasets, also differing in prediction accuracy measured by PCC and prediction monotonicity indicated by rank order correlations. Although not all results obtained for elementary metrics have been provided in the paper, the values of over 50 of them have been calculated for the four considered datasets. Additionally, the correlation results obtained for all databases, weighted by the number of images in each of the considered datasets, have been presented. Therefore, the weights (before normalization) are 270 for LIVEMD (excluding the singly distorted part of the database), 324 for MDID13, 1600 for MDID, and 750 for MDIVL, respectively. Hence, the most "universal" elementary metrics seem to be VIF, DSS, and IW-SSIM, providing the highest aggregated correlations and being a good starting point for the development of the combined metrics.
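The evaluation protocol used here (PCC on raw metric scores, rank-order correlations, and an image-count-weighted aggregation across datasets) can be sketched in a few lines of Python with SciPy; the arrays of metric values and subjective scores are placeholders, and absolute correlation values are used because some metrics decrease while quality (or DMOS) increases.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau

def correlations(metric, mos):
    """Absolute PCC, SROCC and KROCC between raw metric values and subjective scores."""
    return (abs(pearsonr(metric, mos)[0]),
            abs(spearmanr(metric, mos)[0]),
            abs(kendalltau(metric, mos)[0]))

def aggregated_pcc(per_dataset):
    """Image-count-weighted PCC; per_dataset is a list of (metric, mos) array pairs."""
    weights = np.array([len(m) for m, _ in per_dataset], dtype=float)
    pccs = np.array([correlations(m, s)[0] for m, s in per_dataset])
    return float((weights / weights.sum()) @ pccs)
```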
Combined Metrics and the Proposed Approach
Ideally, an FR metric has to provide a linear dependence between metric values and MOS. Less strictly, the dependence between MOS and a metric should be monotonic (desirably, a larger metric value corresponds to a larger MOS). However, for many existing elementary metrics, these dependences are far from ideal. As examples, Figure 2 presents scatter plots of MOS vs. some elementary FR metrics for the considered databases (scatter plots in the left column). As one can see, the dependences can be nonlinear (as shown in the scatter plot of IQM2 vs. MOS), different metrics have different ranges of variation (many metrics vary within the limits from 0 to 1, but not all), and some "outliers" (large displacements of some points with respect to most of the others) may occur as well. These properties give rise to problems in the aggregation of several elementary metrics into a combined one.
The idea of the combined metrics is motivated by the complementary properties of different elementary metrics, which may demonstrate a "sensitivity" to various kinds of distortions to varying degrees. Hence, it has been assumed that their nonlinear combination may remove the necessity of the nonlinear fitting proposed by the Video Quality Experts Group (VQEG) to increase the linear correlation between the subjective and objective scores. Some initial attempts were made to combine the metrics for singly distorted images by the optimization of weighting exponents for the product of three metrics [5] using the TID2008 database, although during further experiments, one of the metrics was replaced by FSIM, forming the Combined Image Similarity Index (CISI) [6], being the weighted product of MS-SSIM [17], VIF [23] and FSIM [20].
A multi-metric fusion based on the regression approach applied for some older elementary metrics was proposed in the paper [7] with the additional context-dependent version utilizing the machine learning approach to determine the context automatically. Nevertheless, the verification of results was made using the TID2008 dataset only.
Another approach to multi-metric fusion is based on the use of genetic algorithms for the combination of metrics [11], although modeled as their weighted sum instead of their product that may limit the possibility of avoiding the additional nonlinear fitting. Hence, a similar approach was also used for the weighted products of elementary metrics [12], leading to further improvements.
The use of neural networks for the combination of elementary IQA metrics was proposed in the paper [8], where a randomly selected half of the TID2013 dataset was used for training. This approach utilized six elementary metrics, leading to a significant increase of the SROCC chosen as the optimization criterion. Nevertheless, similarly to the other cases, the combined metrics have been used only for the assessment of singly distorted images. Additionally, a potential application of deep learning methods would require the development of larger training datasets containing also the subjective quality scores for multiply distorted images. Therefore, a combination of existing metrics using a relatively simple model is expected to be a well-performing solution also for multiply distorted images. To provide a simple form of the combined metric which would not require the additional nonlinear regression, e.g., using the logistic function, the strategy based on the weighted product of elementary metrics has been initially chosen in this paper with PCC as the optimization criterion. Although, in some cases, prediction monotonicity may be more important than the prediction accuracy itself, we have verified experimentally that the optimization of weighting exponents using the PCC values as the criterion also provides high SROCC values. During the experiments, it turned out that the performances obtained in the opposite case are not always good enough. Another reason for the use of the PCC for raw scores without prior nonlinearity fitting was the flexibility of the proposed approach, making it possible to control all weights simultaneously in a single optimization procedure. Considering the various dynamic ranges of elementary metrics, as well as the DMOS and MOS values in each dataset, the use of the PCC does not require additional normalization of their values. Hence, the assumed formula of the combined metric may be expressed as:

CM = \prod_{i=1}^{N} Q_i^{w_i},    (5)

where N is the number of elementary metrics denoted as Q_i, and w_i are their exponential weights, obtained as the result of optimization conducted using MATLAB's fminsearch function.
Although the application of the assumed method of metrics' combination provides encouraging results, the selected fusion of metrics based on their weighted product does not always lead to fully satisfactory performance. Hence, a novel fusion model has been investigated based on the sum of the exponentially weighted metrics, where each component of the sum has an additional weight. The proposed formula may be presented as:

CM^{+} = \sum_{i=1}^{N} a_i \cdot Q_i^{w_i},    (6)

where the additional weights a_i have been introduced to make the combined metric even more flexible and increase its correlation with subjective quality scores provided in state-of-the-art datasets for multiply distorted images.
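A possible Python/SciPy analogue of the optimization procedure described here, assuming a matrix Q of positive elementary metric values (one column per metric) and a vector of subjective scores, is sketched below; it mirrors the Nelder-Mead search with the absolute PCC as the criterion, but it is an illustration rather than the authors' MATLAB (fminsearch) implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import pearsonr

def cm_product(Q, w):
    """Equation (5): weighted product of elementary metrics; Q has shape (n_images, N)."""
    return np.prod(Q ** w, axis=1)

def cm_plus(Q, a, w):
    """Equation (6): sum of exponentially weighted metrics with multipliers a."""
    return (a * Q ** w).sum(axis=1)

def fit_cm_plus(Q, mos):
    """Optimize a and w by Nelder-Mead, maximizing |PCC| with the subjective scores."""
    N = Q.shape[1]   # elementary metric values are assumed to be positive

    def neg_abs_pcc(params):
        a, w = params[:N], params[N:]
        return -abs(pearsonr(cm_plus(Q, a, w), mos)[0])

    x0 = np.concatenate([np.ones(N) / N, np.ones(N)])
    res = minimize(neg_abs_pcc, x0, method="Nelder-Mead")
    a, w = res.x[:N], res.x[N:]
    return a / a.sum(), w   # normalize the multipliers so that sum(a) = 1
```

Since the PCC is invariant to a global rescaling of the combined scores, normalizing the multipliers after the optimization, as done in the paper, does not change the achieved correlation.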
Results of Optimization
Using the weights a in Equation (6), different ranges of metrics' variation are taken into account (i.e., a specific normalization is performed). Using both the a and w coefficients, the combined metric can be optimized, i.e., better values of PCC and/or SROCC can be achieved in comparison to the elementary metrics used as inputs for the combined metric.
An initial verification of the usefulness of the proposed approach for the FR quality assessment of multiply distorted images has been made primarily for the metrics listed in Table 1 using the four considered datasets independently. All initially considered metrics providing the PCC values below the bottom limits assumed for all datasets have been excluded from initial experiments (i.e., at least one of the conditions should be fulfilled by each metric to be included in further experiments). The values of these limits for PCC are: 0.7 for LIVEMD, 0.8 for MDID13, 0.85 for MDID and 0.8 for MDIVL. The relatively low limit for the LIVEMD dataset is caused by removing the singly distorted images from the analysis leading to a decrease of the correlation values for this dataset. Nevertheless, in some cases, combinations of two or three "worse" metrics might provide better results in comparison to the combination of one of them with the best performing elementary metric. Therefore, in the second stage of experiments, all combinations of two and three metrics have been tested for all datasets. To limit the number of possible combinations reasonably, several "best" combinations have been chosen as the basis for further increase of the number of metrics.
The optimization of exponential parameters w i for the combined metrics CM as well as the multipliers a i and exponents w i for the proposed CM + formula has been conducted using the derivative-free method without constraints based on the Nelder-Mead simplex method implemented in MATLAB's fminsearch function. Finally, all multipliers a i in the proposed CM + formula have been normalized so that ∑ a i = 1.
As the "best" combinations of two, three and more metrics for individual databases differ from each other, they are presented in Table 2 separately for each dataset. Analyzing the obtained results, it can be noticed that a meaningful increase of the prediction accuracy has been achieved for all datasets even using the "best" combination of two or three elementary metrics using the weighted product of metrics denoted as CM. The use of more additional elementary metrics further improves the obtained results in terms of the PCC significantly and, in some cases, may lead to a slight decrease of the prediction monotonicity (lower values of SROCC and KROCC). The results of the application of the proposed CM + metrics based on the normalized sum of the exponentially weighted elementary metrics are presented in Table 3, where higher correlations in comparison to respective CM metrics are marked by bold font. As may be noticed, the obtained performance of the proposed combined metrics is better for three datasets and slightly worse for the MDID database. An additional comparison of the linearity of the achieved correlation (without the necessity of any additional nonlinear mapping) is presented in the scatter plots shown in Figure 2.
However, it should be kept in mind that many elementary metrics have various properties and various dynamic ranges, hence, the trends shown in the various plots may be reversed to each other. For some of these metrics, smaller values indicate higher quality whereas the opposite is true for some other metrics. Since the maximum absolute value of the PCC has been considered as the objective function, the presentation of the scatter plots using the raw scores of these metrics may present both "negative" and "positive" trends. It is dependent on the obtained results of the optimization and the elementary metrics which have been used in the final combined metric. As in two datasets the DMOS values have been provided as the subjective scores, whereas the inventors of the other two datasets have used the MOS values, the original values-different for different datasets-have been used in the paper and are presented in all scatter plots included in the paper. The scale of all obtained combined metrics depends on the raw scores of individual metrics and the obtained results have not been normalized. It should also be noted that the high DMOS values typically represent poor quality whereas high MOS values indicate a high quality of images. As it may be observed, results of the CM7 + metric obtained for the MDID2013 dataset vary noticeably less than for the three other databases. Nevertheless, highly linear relationships between the subjective and objective quality scores are achieved mainly for the proposed CM + metrics for all considered databases. Some differences in the dynamic ranges of the combined metrics, particularly using the CM formulas, result from the use of various types of metrics and different weights obtained after the optimization procedure.
An additional comparison of the performance of the proposed approach has been made using some other combined metrics, previously developed for singly distorted images, applied for the datasets containing only multiply distorted images. The obtained experimental results for three such datasets (MDID2013, MDID, and MDIVL) are presented in Table 4. Since four Regression-based Similarity (rSIM) metrics [11] have been actually designed as the weighted sum of individual metrics, the additional nonlinear regression with the use of the logistic function has been applied using the coefficients provided in [11]. As one can see, our approach provides sufficiently better results than the approaches proposed in [11,12].
Since the metrics used in "best" combinations for various datasets differ, an additional cross-database validation has been conducted applying the combined metrics optimized for a single database for the assessment of images from the other three datasets. The obtained validation results are presented in Table 5, where the better performance results than obtained for the best elementary metrics for each dataset are marked with bold font. As it may be observed, the application of some of the combined metrics obtained for the MDIVL dataset does not lead to satisfactory results for the others. Table 4. Comparison of results obtained for three major datasets using some combined metrics originally designed for singly distorted images with the "best" elementary metrics and the proposed methods. Performance of all metrics is expressed as Pearson, Spearman and Kendall correlation coefficients between the subjective quality scores and objective metrics. Better results from two alternatives are marked with bold font.
Nevertheless, from a practical point of view, a final recommendation of a "universal" combined metric suitable for all databases would be desirable. Therefore, some additional experiments have been conducted using the "aggregated" correlation as the goal function. The "aggregated" correlation has been calculated as the weighted sum of the four correlations computed for each dataset, where the number of images in each dataset has been used as its weight (before normalization), similarly to the treatment of the elementary metrics shown in Table 1. The results obtained for both proposed families of combined metrics are presented in Table 6. It is worth noting that, even considering all four databases, the correlations are higher than those achieved by the other combined metrics for single datasets, as shown in Table 4. Analyzing the presented results, the advantages of the novel approach based on the weighted sum of metrics, leading to the CM+ family, may be observed for most metrics (the better results of the two alternatives are marked in bold font). Another interesting observation is that the "best" combinations in the CM+ family utilize different elementary metrics than in the case of the CM family. In some cases, due to the use of more parameters, it is also possible to achieve similar correlations using the CM+ approach with a smaller number of combined elementary metrics than with the CM family.
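A minimal sketch of the "aggregated" goal function described above, assuming it is simply the image-count-weighted mean of the per-dataset absolute correlations; the way the per-dataset scores are gathered is a placeholder, not code from the paper:

```python
# Aggregated (weighted) correlation: each dataset contributes in proportion to its number of images.
import numpy as np
from scipy.stats import pearsonr

def aggregated_correlation(objective_scores_per_dataset, subjective_scores_per_dataset):
    sizes = np.array([len(s) for s in subjective_scores_per_dataset], dtype=float)
    weights = sizes / sizes.sum()  # normalize image counts so the weights sum to 1
    correlations = [abs(pearsonr(q, s)[0])
                    for q, s in zip(objective_scores_per_dataset, subjective_scores_per_dataset)]
    return float(np.dot(weights, correlations))
```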
The graphical illustration of the correlation between the "best universal" combined metric CM+7 and the subjective scores for the individual datasets is provided in Figure 3, where the lowest correlation, obtained for LIVEMD, may easily be observed. Nevertheless, as it contains the lowest number of images, this dataset may be considered the least significant. The highly linear relationships between the subjective evaluations and the objective metric achieved for the three major datasets (PCC = 0.9387 for MDID, PCC = 0.8911 for MDID2013, and PCC = 0.9122 for MDIVL, respectively, as shown over the plots in Figure 3) confirm the validity of the proposed approach. These results are still better in comparison to the results obtained for some alternative combined metrics presented in Table 4. The weights obtained for the elementary metrics used in CM+7 according to Formula (6), which have different properties and various dynamic ranges, are provided in Table 7.
Table 6. Performance of the "best" elementary, and "universal" CM and CM+ metrics for all four databases in view of the aggregated (weighted) correlation with subjective scores. The better correlations of the two families of combined metrics are marked in bold font.
The conducted experiments have confirmed the hypothesis that the specificity of multiply distorted images requires a combination of different metrics, since some of the previously proposed hybrid approaches have led to worse performance even in comparison to the "best" elementary metrics. Additionally, the application of the combination model proposed in the paper increases their performance meaningfully for most of the datasets considered in the paper, as well as for all datasets treated as a whole. The application of the proposed approach makes it possible to improve both the quality prediction accuracy measured by the PCC and the prediction monotonicity reflected by both rank-order correlations (SROCC and KROCC).
Conclusions
Image quality assessment of multiply distorted images is still a challenging area of research, as many elementary metrics designed using IQA databases with singly distorted images have poor performance for multiply distorted ones. The application of combined metrics makes it possible to increase the obtained performance; however, the results achieved using one of the available databases are not always directly applicable to the others. Therefore, our future research will concentrate on some other fusion strategies, including the use of genetic algorithms and neural networks for this purpose. Different approaches to feature extraction and network training are possible; however, as stated in [34], "the training set has to contain enough data samples to avoid overfitting". Meanwhile, even the application of relatively simple fusion models, as proposed in this paper, makes it possible to achieve much better results than can be achieved with a single metric.
Analyzing the results presented for the four available databases considered together, a significant increase of the aggregated correlation with subjective scores may be observed, not only in comparison to elementary metrics but also with the use of some other combined metrics, proposed earlier for images with single distortions. Those results confirm the practical usefulness and universality of the proposed approach, particularly the novel CM + metrics.
Since the proposed fusion model itself is not computationally demanding, the overall efficiency does not decrease significantly, assuming the possibility of parallel calculation of the elementary metrics. The only exception may be related to memory limitations that would hinder the parallel computation of elementary metrics for large images. The time and memory requirements depend on the hardware used and the image size. With parallel computation of the metrics (e.g., 7 metrics on 8 independent threads), the calculation time of the final combined metric is nearly the same as that of the "slowest" elementary metric being used.
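A rough sketch of this parallel-evaluation idea; the elementary metric functions and the final combination step are placeholders, and for metrics implemented in pure Python a process pool may be needed instead of threads:

```python
# With one worker per elementary metric, the runtime of the combined metric is dominated
# by the "slowest" elementary metric rather than by the sum of all their runtimes.
from concurrent.futures import ThreadPoolExecutor

def combined_quality(image, reference, elementary_metrics, combine):
    """elementary_metrics: callables f(image, reference) -> float; combine: the fitted CM/CM+ formula."""
    with ThreadPoolExecutor(max_workers=len(elementary_metrics)) as pool:
        futures = [pool.submit(metric, image, reference) for metric in elementary_metrics]
        scores = [future.result() for future in futures]
    return combine(scores)
```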
The next step of this research might be the application of CNN-based metrics trained using images affected by multiple distortions. Despite the different "nature" of multiply distorted images compared to those affected by a single distortion, this direction of future research might be promising and will be considered. Nevertheless, its significant limitation is the necessity of developing some larger datasets containing multiply distorted images that may be used for training purposes.
Finally, considering the presence of multiple distortions in images captured by many electronic devices equipped with vision sensors, the proposed approach may be useful in various electronic systems used for image and video analysis purposes.
Conflicts of Interest:
The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
Abbreviations
The following abbreviations are used in this manuscript: | 2021-10-20T16:02:56.778Z | 2021-09-14T00:00:00.000 | {
"year": 2021,
"sha1": "8f667780c53c183bf675c31fbdf3e16214002cce",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-9292/10/18/2256/pdf?version=1631609574",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "6245f823ae678bcd7f2be9f1e5d4d115f45e2796",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
3036494 | pes2o/s2orc | v3-fos-license | Arrhythmogenic right ventricular cardiomyopathy (ARVC): cardiovascular magnetic resonance update
Arrhythmogenic Right Ventricular Cardiomyopathy (ARVC) is one of the most arrhythmogenic forms of inherited cardiomyopathy and a frequent cause of sudden death in the young. Affected individuals typically present between the second and fourth decade of life with arrhythmias coming from the right ventricle. Pathogenic mutations in genes encoding the cardiac desmosome can be found in approximately 60% of index patients, leading to our current perception of ARVC as a desmosomal disease. Although ARVC is known to preferentially affect the right ventricle, early and/or predominant left ventricular involvement is increasingly recognized. Diagnosis is made by combining multiple sources of diagnostic information as prescribed by the "Task Force" criteria. Recent research suggests that electrical abnormalities precede structural changes in ARVC. Cardiovascular Magnetic Resonance (CMR) is an ideal technique in ARVC workup, as it provides comprehensive information on cardiac morphology, function, and tissue characterization in a single investigation. Prevention of sudden cardiac death using implantable cardioverter-defibrillators is the most important management consideration. The purpose of this paper is to provide an updated review of our understanding of the genetics, diagnosis, current state-of-the-art CMR acquisition and analysis, and management of patients with ARVC.
Introduction
Arrhythmogenic right ventricular cardiomyopathy (ARVC) is an inherited cardiomyopathy characterized by fibro-fatty replacement of predominantly the right ventricular (RV) myocardium, which predisposes patients to life-threatening ventricular arrhythmias and RV dysfunction [1][2][3]. ARVC is present in up to 20% of individuals who experience sudden cardiac death (SCD) before the age of 35 years and is even more common among athletes who die suddenly [2,4]. The disease has been reported to have prevalence of 1 in 2000 to 5000 individuals, although some reports estimate the real prevalence could be as high as 1 in 1000 in certain regions of the world due to underrecognition [5,6]. Over the past decade, genetic testing for ARVC-associated mutations in five desmosomal genes and several non-desmosomal genes has become clinically available [7]. Inheritance is typically autosomal dominant with incomplete penetrance and variable expressivity [7][8][9]. Affected patients classically present between the second and fourth decade of life with ventricular arrhythmias coming from the RV [3]. However, SCD can occur as early as in adolescence, whereas mutation carriers may also remain asymptomatic throughout life [3,10,11].
Imaging modalities commonly used for ARVC evaluation include echocardiography, cardiovascular magnetic resonance (CMR), and RV angiography. Both echocardiography and angiography have significant limitations in assessing the RV due to its complex geometry [12]. Over the last decade, CMR has emerged as the imaging modality of choice in ARVC, allowing for non-invasive morphological and functional evaluation, as well as tissue characterization in a single investigation [13,14]. In spite of its low prevalence, ARVC accounts for a disproportionately high percentage of referrals for CMR. Unfortunately, many imaging centers have little experience with evaluating ARVC, and gaining experience is difficult because of the low prevalence of disease. The aim of this article is to review current knowledge of ARVC that is useful for CMR interpretation. Our emphasis will be on an update of issues relating to CMR diagnosis of ARVC [15], including ARVC diagnostic criteria and common regional morphological and functional abnormalities in this disease.
Update on ARVC diagnosis
Diagnosis of ARVC may be challenging, as no single modality is sufficiently specific to establish ARVC diagnosis. Therefore, multiple sources of diagnostic information are combined in a complex set of diagnostic criteria. The original "Task Force" criteria (TFC), described in 1994 [16], largely relied on qualitative parameters and were shown to be insensitive to the disease especially in early stages [17][18][19][20]. In addition, imaging criteria were not specific, and led to many false positive diagnoses.
In 2010, modifications to the criteria were proposed (Table 1) [21]. These modifications had two purposes: (1) to improve the specificity of the diagnostic criteria by including quantitative metrics for ARVC diagnosis, and (2) to improve sensitivity of diagnosis in individuals who have a high likelihood of inherited/genetic disease. Specifically, quantitative parameters were included for imaging criteria, endomyocardial biopsy, and (signal-averaged) ECG. In addition, the revised TFC now include an ARVC-associated pathogenic mutation as a major criterion towards ARVC diagnosis. These changes to the TFC have resulted in increased sensitivity for inherited/genetic disease, while maintaining satisfactory specificity [22][23][24].
CMR protocol for ARVC
The CMR protocol that we recommend for ARVC evaluation is shown in Table 2. The protocol has been designed to evaluate the RV for abnormalities in structure and tissue characterization while enabling quantitative evaluation. For black blood imaging, fast spin echo or turbo spin echo imaging sequences are ideal. The RV free wall and RV outflow tract are best evaluated in the axial black blood images. The stack of axial images should include the entire RV. This can be accomplished in 6-8 slices at intervals (slice thickness + gap) of about 1 cm. Most of the diagnostic information is obtained in the slices centered on the middle of the RV. Obtaining an excessive number of image slices will adversely prolong the examination.
For cine imaging, steady state free precession (SSFP) imaging is preferred at 1.5 Tesla. Insufficient information at 3 Tesla is available to determine if SSFP or fast gradient echo (FGRE) is superior. Quantitative analysis of the RV and left ventricle (LV) is performed on short axis images. Thus, 10-12 slices encompassing the entire ventricular volume must be obtained. We prescribe these images beginning approximately 1 cm above the valve plane and increment towards the apex of the ventricles. Cine images should also be obtained in standard long axis views of the LV. Some sites prefer to also acquire a vertical long axis view of the RV. Finally, we routinely obtain a stack of transaxial images of the RV at the same slice positions as the black blood images described above. Given modern CMR scanners, the temporal resolution of cine images is typically about 40 msec.
Delayed gadolinium images are best obtained using phase-sensitive inversion recovery (PSIR), a sequence which does not depend upon identifying the precise inversion time (TI) [25]. In patients with significant ventricular ectopy, a low dose of a beta-blocker (metoprolol 25-50 mg) is recommended for arrhythmia suppression during the CMR scan.
CMR TFC and their derivation
A major addition to the revised TFC was the inclusion of quantitative measurements for imaging criteria. The revised CMR TFC now require presence of both qualitative findings (RV regional akinesia, dyskinesia, dyssynchronous contraction) and quantitative metrics (decreased ejection fraction or increased indexed RV end-diastolic volume) ( Table 1).
Quantitative values for RV volume and function for TFC were derived from a comparison of ARVC probands with normal healthy volunteers that were included in the Multi-Ethnic Study of Atherosclerosis (MESA) [26]. To ascertain cutoff values, RV dimension and function from 462 normal MESA participants were compared to 44 probands in the North American ARVC registry [21]. Major criteria (RV ejection fraction ≤40% or indexed RV end-diastolic volume ≥110 mL/m2 for men and ≥100 mL/m2 for women) were chosen to achieve approximately 95% specificity. Cutoffs with high specificity invariably result in lower sensitivity; major CMR criteria have a sensitivity of 68 to 76% [27]. Minor criteria (RV ejection fraction 40-45% or indexed RV end-diastolic volume 100-110 mL/m2 for men and 90-100 mL/m2 for women) had a higher sensitivity (79 to 89%), but a consequently lower specificity (85 to 97%) [27].
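The quantitative thresholds quoted above can be summarized in a short, hedged sketch; note that the actual revised CMR Task Force criterion also requires a qualitative regional wall motion abnormality (akinesia, dyskinesia, or dyssynchronous contraction), which this illustration deliberately omits, and boundary handling simply follows the inequalities as quoted:

```python
# Sketch of the quantitative part of the revised CMR Task Force Criteria; a qualitative
# regional wall motion abnormality must also be present for the imaging criterion to be met.
def cmr_quantitative_criterion(sex, rv_ef_percent, rv_edv_indexed_ml_m2):
    """Classify RV ejection fraction (%) and indexed RV end-diastolic volume (mL/m2)."""
    major_edv = 110 if sex == "male" else 100
    minor_edv = 100 if sex == "male" else 90
    if rv_ef_percent <= 40 or rv_edv_indexed_ml_m2 >= major_edv:
        return "major"
    if rv_ef_percent <= 45 or rv_edv_indexed_ml_m2 >= minor_edv:
        return "minor"
    return "none"

# Example: a man with RV EF 43% and indexed RV EDV 105 mL/m2 meets only a minor criterion.
assert cmr_quantitative_criterion("male", 43, 105) == "minor"
```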
Impact of new TFC on diagnostic yield
Several studies report on the impact of the revised TFC on diagnostic yield specifically for CMR [22,[28][29][30]. Unanimously, these studies showed a decrease in the prevalence of major and minor CMR criteria in the modified TFC compared to the original TFC. This corresponded to a decrease in sensitivity in most of these studies [22,29]. Interestingly, although sensitivity decreased, the positive predictive value (PPV) increased with the revised TFC, as shown by Femia et al. (PPV increase from 23% with the original criteria to 55% with the revised criteria) [30]. This may largely be due to the inclusion of quantitative CMR criteria. Vermes et al. report that 97% of subjects with minor changes according to the original CMR criteria did not meet revised CMR criteria [29]. In addition, a low sensitivity of CMR for ARVC diagnosis is understandable in the context of recently published data indicating that electrical abnormalities precede structural changes detected by CMR in ARVC [31][32][33]. This emphasizes the concept that ARVC evaluation should not be solely based on any one test, in particular CMR.
Limitations of quantitative evaluation of the right ventricle: the revised task force criteria
Including quantitative metrics as a component of the CMR TFC has been an important contribution to ARVC evaluation; however, some limitations exist. First, although quantitative measures reduce the subjectivity of the CMR TFC, there is substantial inter-reader variability for the RV. In our experience, two physician readers had excellent agreement (within ~5% of reference values) only after training on approximately 100 CMR cases [34]. In clinical practice, we expect it would be difficult to achieve reproducibility of less than 10% for RV parameters. Cutoffs for the ARVC criteria were derived from the MESA study, which used the FGRE technique, whereas a majority of study subjects in the US ARVC study had SSFP cine images. RV volumes by SSFP are at least 10% larger than those measured with the FGRE technique [35][36][37][38][39]. SSFP images provide superior contrast between blood and endocardium at the endocardial border, with less blood flow dependence [35,36,40]. Also, the revised TFC used MESA subjects whose mean age was 60 years. ARVC subjects in the Task Force study were 20-30 years younger on average. Since that time, Chahal et al. determined that, among MESA participants aged 45 to 84 years, RV end-diastolic volume decreased 4.6% per decade [34]. A very similar percentage of approximately 4% decrease per decade was obtained by Maceira et al. using the SSFP technique in subjects 20 to 80 years old [41]. Adjusting for body surface area did not remove the dependence of RV volume on age.
Fortuitously, the issues of the pulse sequence and the older reference population in MESA approximately balance each other. As an example, the CMR cutoff values for RV size in ARVC (Table 1) are ≥110 mL/m2 (male) or ≥100 mL/m2 (female) for the major criteria. These values closely correspond to the 95th percentile confidence limits of RV volumes for normal subjects less than 60 years old [41]. Thus, until further studies are available, we feel that the current RV metrics in the revised TFC remain highly relevant. Equally important, further developments are necessary to improve the reproducibility of RV quantification by CMR. When evaluating younger patients, CMR physicians should keep in mind that the size of the RV is expected to be ~10% larger in a 20-year-old compared to a 40-year-old patient.
Common findings in ARVC by CMR
Most of our information about structural abnormalities in ARVC comes from studies in subjects with a predominant RV phenotype (Figure 1) [42][43][44]. Abnormalities in the RV in ARVC have been extensively described (reviewed in [15]). Besides global reduction in RV function and enlargement of the RV, more subtle regional disease of the RV has been variously described in the literature using a variety of terms (including focal bulges, microaneurysms, segmental dilatation, regional hypokinesis, etc.). In the current TFC, the terms "akinesia" (lack of motion), "dyskinesia" (abnormal movement: instead of contracting in systole, that segment of myocardium bulges outward in systole), and "dyssynchronous" (regional peak contraction occurring at different times in adjacent myocardium) are used for all imaging modalities (CMR, echocardiography, and angiography) to describe regional wall motion abnormalities in ARVC. Microaneurysms are not explicitly described in the revised TFC for CMR; the Task Force members considered that this finding was overused and misinterpreted by CMR physicians, resulting in false positive diagnoses. However, microaneurysms are characterized by regional akinesia or dyskinesia in the revised criteria. The location of regional wall motion abnormalities of the RV was not addressed in the revised TFC. We now recognize that the distal RV (from the moderator band to the apex in long axis views) shows highly variable contraction patterns in the normal individual. Therefore, in ARVC, we emphasize the significance of regional wall motion abnormalities in the subtricuspid region. An excellent example of this is the so-called "accordion sign" that represents a focal "crinkling" of the myocardium (Figure 2) [45,46]. In terms of the TFC, the accordion sign is due to a small region of highly localized myocardium with dyssynchronous contraction.
The changing spectrum of ARVC: The triangle of dysplasia displaced
Since the first report in 1982, regional abnormalities in ARVC were thought to localize to the RV inflow tract, outflow tract, and apex, collectively referred to as the "Triangle of Dysplasia" [1]. This concept was based on RV angiographic findings and autopsy data in a series of 24 patients with ARVC. In subsequent years, studies describing structural changes in ARVC focused on abnormalities in these three regions [47]. It is important to note that these observations were made in tertiary centers, without the advantage of genetic testing, and without sensitive TFC.
The last decade has witnessed a paradigm shift in our perception of regional structural involvement in ARVC. Autopsy series have shown predominant fibro-fatty infiltration on the epicardial surface, suggesting that the disease starts in the epicardium and progresses to the endocardium [47]. In 2004, Marchlinski et al. were one of the first to note preferential subtricuspid involvement in ARVC [48]. These results were confirmed in multiple studies using CMR [49,50], echocardiography [51], and electroanatomic voltage mapping [50,52,53].
Recently, Te Riele et al. provided a series of 80 ARVC mutation carriers who underwent CMR and/or endo- and epicardial electroanatomic voltage mapping [54]. Structural abnormalities in this cohort preferentially located to the epicardial subtricuspid region and basal RV free wall, whereas the RV apex and endocardium were relatively spared. In addition, the authors reported that the LV lateral wall was significantly more often involved than the RV apex, especially among subjects with early disease. This led the authors to coin the "displacement" of the RV apex from the Triangle of Dysplasia [54]. Although preferential involvement of the subtricuspid region and LV lateral wall has been described before in separate ARVC reports, the focus on sparing of the RV apex is novel. This is particularly important, as the RV apex is part of the classically described Triangle of Dysplasia.
Figure 1 Four-chamber (top panels) and short-axis (bottom panels) bright blood images in an ARVC subject with predominant right ventricular abnormalities. End-diastolic images are shown in the left panels, end-systolic images in the right panels. Note subtricuspid dyskinesia in the end-systolic four-chamber image (arrow), and right ventricular free wall aneurysms (i.e. both systolic and diastolic bulging) in the short-axis image (arrows).
The changing spectrum of ARVC: Left ventricular involvement
The advent of genetic testing and use of sensitive TFC have significantly enhanced our appreciation of the wide phenotypic spectrum of ARVC, and increased our awareness of non-classical (including left-dominant and biventricular) phenotypes. As a result, we now know that some ARVC subjects have early and predominant LV involvement (Figure 3) [19,[55][56][57]. LV involvement has even been reported in 76% of ARVC subjects, of whom the majority had advanced disease [58]. The disease is, therefore, increasingly being referred to as "Arrhythmogenic Cardiomyopathy".
In 2010, Jain led a study investigating LV regional dysfunction using CMR tagging, and found that LV peak systolic strain was lower in ARVC subjects compared to controls [59]. Sen-Chowdhry et al. recently published data supporting a genetic association between left-dominant ARVC and classical right-sided ARVC [57]. In their study, the authors showed that one-third of genotyped ARVC patients with a left-dominant phenotype have a pathogenic mutation in the ARVC-related desmosomal genes. Phenotypic variations of predominant RV and LV involvement even coexisted in the same family.
LV involvement in ARVC may manifest as late gadolinium enhancement (LGE), often involving the inferior and lateral walls without concomitant wall motion abnormalities [55,57,60]. Septal LGE is present in more than 50% of cases with left dominant ARVC, in contrast to the right dominant pattern in which septal involvement is unusual [55]. In addition, LV fatty infiltration was shown to be a prevalent finding in ARVC, often involving the subepicardial lateral LV and resulting in myocardial wall thinning (Figure 4) [54,61]. Early data by Dalal et al. already showed that LV fat in the lateral wall is very specific for ARVC mutation carriers [45]. Future studies are necessary to confirm these data, and further our understanding of LV abnormalities in ARVC.
Figure 2 Regional contraction abnormality in the subtricuspid region. End diastolic (left) and end systolic image (right) show the so-called "accordion sign" in an ARVC mutation carrier. Regional dyssynchronous contraction in the subtricuspid region is a readily recognized qualitative pattern of abnormal RV contraction.
Late gadolinium enhancement in ARVC evaluation
Myocardial LGE is a well-validated technique for assessment of myocardial fibrosis. Given that one of the pathologic hallmarks of ARVC is fibro-adipose replacement of the myocardium [47], it is important to note that LGE is not incorporated in the current diagnostic TFC. Although the Task Force did recognize the presence of LGE in many patients with ARVC, several limitations withheld its inclusion in the diagnostic criteria. First, detection of LGE in the RV is greatly hampered by the thin RV wall. A high variability between centers resulted in limitations of LGE in the multi-center US ARVC study. In ARVC, RV wall thinning is pronounced [42], which makes the LGE technique less reliable than for the LV. Second, distinguishing fat from fibrosis by LGE sequences is challenging, which makes its interpretation highly subject to the CMR physician's experience. Last, LV LGE is non-specific, and has a wide differential diagnosis.
While these limitations exist, LGE may be very useful in ARVC evaluation ( Figure 5). RV LGE has been observed in up to 88% of patients [66][67][68], while LV LGE was reported in up to 61% of cases [50,69]. Importantly, before LGE can be included in a future iteration of the TFC, more data regarding the specific patterns of LGE that distinguish ARVC from other cardiomyopathies is necessary. Also, improved methods to determine fibrosis in the thin RV wall are needed. Until such a method emerges, use of LGE in clinical practice should be considered as diagnostic confirmation, not sole evidence of ARVC disease expression. LGE CMR is also extremely useful when ARVC is excluded due to other cardiomyopathy such as sarcoidosis.
LGE may also be useful in management of ARVC patients. Tandri et al. showed excellent correlation of RV LGE with histopathology and inducible ventricular arrhythmias on electrophysiologic study [68]. As such, identification of LGE by CMR may provide guidance for electrophysiologic studies and endomyocardial biopsy. However, it is important to note that recent studies correlating LGE with electroanatomic scar revealed that LGE is less sensitive for the detection of low voltage areas than endocardial mapping during electrophysiologic study [50,70].
Misdiagnosis of ARVC using CMR
Misdiagnosis of ARVC is a well-recognized problem. A prior study has shown that more than 70% of patients who were referred to Johns Hopkins Hospital from outside institutions with a diagnosis of ARVC did not actually meet diagnostic TFC [71]. In many cases, CMR misinterpretation is the cause of over-diagnosis in ARVC [71,72]. It is important to realize that, although CMR may be regarded the standard of reference for evaluation of RV morphology and function, the use of CMR alone is not the "gold standard" for ARVC diagnosis. Rather, the TFC prescribe the use of multiple diagnostic tests. Great caution must be employed when the only abnormality in a presumed ARVC patient is found on CMR, as it is uncommon for ARVC patients to have a normal ECG and Holter monitor but an abnormal CMR [32].
A proper understanding of common CMR abnormalities and patterns of disease in ARVC is invaluable for accurate CMR evaluation. Previous reports have extensively focused on fibro-fatty myocardial replacement, wall thinning, RVOT enlargement, and RV dilatation and dysfunction in ARVC [42][43][44]. As one of the pathologic hallmarks of ARVC, intramyocardial fat accumulation was thought to be highly sensitive for the disease. Unfortunately, multiple reports have shown that intramyocardial fat was not reproducible even among experienced readers, constituting an important cause of misdiagnosis in ARVC [44,71,73,74].
Furthermore, normal variants as well as other pathologic conditions may mimic ARVC. Important normal variants that were previously mistaken for ARVC are pectus excavatum [75], apical-lateral bulging of the RV free wall at the insertion of the moderator band [76], and the "butterfly apex", a normal anatomical variant of separate RV and LV apices causing the RV apex to look dyskinetic [77]. We have found the butterfly appearance of the apex to be more common on horizontal long axis views at inferior levels ( Figure 6). In addition, a prominent band of pericardial connective tissue that joins the RV free wall to the posterior sternum may lead to misinterpretation of RV wall motion: this "tethered" portion of the RV free wall remains static in location and may be misinterpreted as RV dyskinesia (Figure 7). Additionally, pathologic disorders such as myocarditis and sarcoidosis may mimic ARVC [62,63,78]. Further testing to specifically exclude these conditions should be strongly considered, especially in the presence of LV dysfunction [79].
ARVC: a desmosomal disease
Beginning with the seminal discovery of mutations in plakoglobin in 2000 [80], the past decade has witnessed the identification of mutations in five genes encoding the cardiac desmosome [81][82][83][84]. In recent reports, desmosomal mutations are found in up to 60% of ARVC cases [8,23,24,85]. Among US ARVC patients, the most common gene involved is plakophilin-2 [85], followed by desmoglein-2, desmocollin-2, and desmoplakin [86]. Prevalence of mutations is similar in Europe [8,87], although desmoplakin mutations are more prevalent in the United Kingdom and Italy [10,57]. Desmosomes are complex multiprotein structures providing mechanical [88] and electrical [89] continuity to adjacent cells. Mechanical uncoupling in ARVC is accompanied by cell death and regional fibrosis, which causes the monomorphic arrhythmias typically associated with ARVC. In addition, electrical uncoupling through gap junction remodeling and sodium channel dysfunction may lead to significant activation delay [90,91], which increases the propensity to functional block and arrhythmia. The exact mechanism by which these mutations cause the highly arrhythmogenic phenotype in ARVC has been the subject of many hypotheses, which are extensively reviewed elsewhere [20,47]. The List of Genes Section shows an overview of genes associated with ARVC. In addition to desmosomal mutations, mutations in non-desmosomal genes have been identified in ARVC [92][93][94][95][96][97][98]. These non-desmosomal genes include, among others, desmin, titin, lamin A/C, and phospholamban, which are commonly mutated in subjects with dilated cardiomyopathy (DCM) [99][100][101]. Although the distinction between ARVC and DCM has important implications for clinical practice, guiding both diagnostics and treatment, a considerable overlap of these conditions is increasingly recognized [102]. Compared to DCM, patients with left-dominant ARVC often have significant ventricular arrhythmias, disproportionate to the morphological abnormalities and impaired LV systolic function [57]. In addition, inflammatory processes such as (viral) myocarditis may mimic left-dominant ARVC [103]. In myocarditis, T2-weighted imaging may detect tissue edema, which is usually absent in ARVC [104]. In addition, fast spin-echo T1-weighted images during the first minutes after contrast injection may be useful to detect myocardial hyperemia and muscular inflammation suggestive of myocarditis [104,105]. In equivocal cases, invasive studies such as electroanatomic mapping and endomyocardial biopsy may provide a more definite diagnosis [63].
Impact of genetics on clinical ARVC management
With the identification of ARVC-causing mutations, integration of genetic testing into clinical practice is now proliferating. Currently, its main applications are confirmatory testing in index patients and cascade screening of families [106]. ARVC is generally transmitted as an autosomal dominant trait with incomplete penetrance and variable expressivity. A recent study by Cox et al. confirmed that asymptomatic mutation-carrying relatives have a 6-fold increased risk of developing ARVC compared to relatives of a proband without a pathogenic mutation [8]. However, it is important to realize that 50-70% of mutation carriers will never develop disease expression [7,23,107], and that severity of disease may vary greatly, even among members of the same family [11] or those carrying the same mutation [9]. In contrast, a negative genetic test result in a proband does not exclude the possibility of disease, nor does it exclude the possibility of a genetic process in the individual or family [106,108]. Because of the complexities associated with interpreting genetic test results in ARVC, the inclusion of genetic counseling prior and subsequent to genetic testing has been strongly recommended [106,109].
Figure 7 Misdiagnosis of ARVC - Axial and short-axis bright blood images in a control subject. Note the "tethering" of the mid right ventricular free wall to the anterior chest wall (arrows), giving the right ventricle a dyskinetic appearance.
CMR in ARVC genotype-phenotype correlations
Over the last decade, several genotype-phenotype correlations in ARVC have been proposed, but large-scale studies confirming these observations are yet to come. Recently, patients with multiple desmosomal mutations were shown to have a more severe clinical course with more ventricular arrhythmias and more heart failure than subjects with a single mutation [10,86]. In addition, individuals with a mutation in the phospholamban or desmoplakin gene (especially when involving the C-terminus of desmoplakin) have significant left-dominant/biventricular disease expression and a high prevalence of heart failure [19,98,110]. LV involvement in these patients often manifests as LGE in an LV circumferential, mid-myocardial pattern extending to the right side of the septum [111]. An example is shown in Figure 3. This left-dominant ARVC pattern should not be confused with LV involvement that occurs in advanced stages of right-dominant ARVC. These right-dominant ARVC subjects (often plakophilin-2 mutation carriers) commonly have focal LV disease involving the lateral LV wall with only mild or moderate LV dysfunction [111]. Large-scale studies from collaborative international registries are necessary to further unravel genotype-phenotype associations in ARVC.
Update on ARVC management
ARVC management is directed towards symptom reduction, delay of disease progression, and prevention of SCD.
Because of a lack of randomized trials comparing ARVC treatment options, management recommendations in ARVC are largely based on clinical judgments and results from retrospective registry-based studies. Mainstay therapies consist of conservative measures (exercise restriction), beta-blocking and antiarrhythmic agents, implantable cardioverter-defibrillator (ICD) implantation, and radiofrequency ablation of ventricular arrhythmias. Evidence for a potential role of exercise in ARVC expression and disease progression is accumulating. Many ARVC patients are highly athletic and those who participate in competitive sports have a 5-fold increased risk of arrhythmic death compared to non-athletes [112]. Recently, James led a study on the role of exercise in ARVC development, showing that endurance exercise and frequent athletics increases the risk of arrhythmias and heart failure in ARVC mutation carriers [113]. This important piece of evidence highlights the importance of exercise restriction in ARVC patients and those at risk of developing disease.
Once the diagnosis of ARVC is established in a patient, the most important decision is whether to implant an ICD for prevention of SCD. It is now standard of care for ARVC subjects with prior sustained ventricular arrhythmia to undergo placement of an ICD [114,115]. Studies have shown that these patients have a high incidence rate of appropriate ICD discharges of up to 70% during a mean follow-up of 3-5 years [115,116]. Unfortunately, guidelines for ICD implantation among subjects without prior ventricular arrhythmia are less clear. Recent reports suggest an important role for CMR in risk stratification of these patients [32,117,118]. In their study, Deac et al. showed that an abnormal CMR was an independent predictor of arrhythmic events [118]. Also, it was recently shown that the revised CMR TFC have a high negative predictive value for arrhythmic occurrence in ARVC [32].
Arrhythmia control in ARVC is often achieved by pharmacologic treatment. Beta-blockers and class III antiarrhythmic drugs (sotalol, amiodarone) have been shown to be successful in reducing arrhythmia burden and likelihood of ICD discharge [119,120]. In addition, radiofrequency ablation for ventricular arrhythmia in ARVC has gained enormous popularity over the last years. Although the results of endocardial ablation have been moderate [121], good arrhythmia control (but not complete cure) has been obtained using epicardial ablation [52,[122][123][124]. This is understandable, given the primary (sub)epicardial location of the abnormal substrate in ARVC. CMR with LGE may be useful in planning of these procedures, by providing information on the presence and distribution of ventricular scar [125].
Future directions
CMR evaluation in ARVC is a moving target. New CMR sequences such as high-resolution T1 mapping are promising tools to detect early, subtle changes in the RV. In addition, quantification of RV regional wall motion abnormalities and evaluation of inter-and intraventricular dyssynchrony may provide novel tools for early detection of ARVC. The genetic era allowed for ARVC genetic testing using comprehensive cardiomyopathy panels and whole exome sequencing, which are likely to significantly impact our knowledge of the genetic basis of ARVC and the overlap with other cardiomyopathies. Furthermore, genotype-phenotype correlation studies may guide our quest for genetic and environmental modifiers in this disease. Lastly, basic research in in vitro and animal models may have an important impact on our knowledge of ARVC pathophysiology. Results from these studies may open the path to modification of the abnormal substrate in ARVC, allowing for definite prevention of clinical disease manifestation and/or progression.
Conclusion
ARVC is a rare but important cause of SCD in the young and in athletes. The disease is inherited as an autosomal dominant trait with incomplete penetrance and variable expressivity. Because of the inherent risk of potentially lethal arrhythmias, correct diagnosis and early detection of ARVC are essential. This is critically important, because with the advent of genetic testing, the population of at-risk individuals is rapidly increasing. Clinical ARVC diagnosis is facilitated by a complex set of diagnostic criteria which were first described in 1994 and updated in 2010 to increase sensitivity for early disease. As the non-invasive "gold standard" for RV evaluation, CMR plays an important role in clinical ARVC workup. Recent studies have shown that RV involvement in ARVC often manifests as regional wall motion abnormality or global ventricular dysfunction, whereas LV involvement is often observed as LGE and/or fatty infiltration without concomitant wall motion abnormalities. ARVC preferentially affects the basal RV and lateral LV, while sparing the RV apex. Once diagnosis of ARVC is established, the most important management decision is whether to implant an ICD for prevention of SCD. Future studies are necessary to further unravel the pathophysiologic attributes of disease and provide insights into genotype-phenotype correlations in ARVC.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
AT, HT, and DB conceived the study, performed the background research and review, and drafted the manuscript. All authors read and approved the final manuscript. | 2017-06-27T06:28:21.367Z | 2014-07-20T00:00:00.000 | {
"year": 2014,
"sha1": "17fb47bc6a8a32d8e7d55888653e098078c2a094",
"oa_license": "CCBY",
"oa_url": "https://jcmr-online.biomedcentral.com/track/pdf/10.1186/s12968-014-0050-8",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "172c266d3a4573f9c74b514a6079e6590854b48f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
18999551 | pes2o/s2orc | v3-fos-license | Fare's fair? Concessionary travel policy and social justice
This paper argues that transport has an important influence on individuals’ welfare and therefore transport policy can be readily analysed from social justice and welfare policy perspectives – yet only rarely ever is. The paper develops a justice framework in which to assess the ‘fairness’ of the eligibility criteria used in concessionary fare policies – specifically the justice principles of need, desert, equality, option choices and affordability. The paper examines a concessionary bus fares policy from a social justice perspective, including an empirical assessment of who in practice benefits most from it and how these findings measure against justice principles.
Introduction
Transport policy has not featured to any significant extent in the analysis of welfare policy. Nevertheless, transport has been recognised as a key obstacle to welfare, wellbeing and social inclusion for many people. As in other areas of welfare policy, questions of entitlement to, and distribution of the benefits arising from, state-provided or state-subsidised transport schemes come to the fore in an analysis of the social justice implications of transport provision. Thus, transport is a highly relevant, but neglected, field of enquiry in the study of welfare policy, poverty and social justice.
The overall aim of the paper is to explore transport and transport policy as an area of welfare policy. Two objectives for analysis have been developed to address this aim. First, to empirically evaluate ex-post the impact of a concessionary bus fare policy in terms of who benefits and to what extent. Second, to assess the social justice implications of the impacts of this policy.
The remainder of the paper is structured as follows. The following section reviews the treatment of transport issues in existing literature on poverty and social justice. The next section 'Transport and the principles of justice' develops a justice framework in which to assess the 'fairness' of the eligibility criteria used to determine who benefits from transport services. The paper then outlines as a case study the National Concessionary Travel Scheme (NCTS) for older people in Scotland and reflects on the justice issues and framing of that particular policy. The following section outlines data and methods used to estimate the impact of the scheme in terms of who benefits and to what extent. The following section reports the results of the empirical analysis. The final section of the paper discusses the implications of the results for the fairness of the NCTS in particular, and offers further reflections on the nature of transport more generally as a welfare policy and its social justice implications.
Transport in poverty and social justice research
A number of studies have found links between poor transport and poor access to jobs, goods and services, making it difficult for some people to fully participate in society (Hine and Mitchell, 2003;Preston and Rajé, 2007;Church et al., 2000). Problems often arise when people are not able to afford, access or drive a car in an increasingly car-orientated world. Some people living in 'edge-city' or rural locations have difficulty accessing a range of transport. Groups with lower transport mobility include young people, older people, women, ethnic minorities and disabled people.
Travel difficulties are cited as obstacles to participation in a range of activities central to social inclusion, including training and education, employment, shopping and leisure (Social Exclusion Unit (SEU), 2003). The issue of poor access to transport and/or low proximity to key destinations such as jobs and post-school education and training institutions and the social disadvantage that can arise has been given much attention in the United States and to a lesser extent and with different emphases in the United Kingdom and elsewhere in Europe. In the US, there has been a strong emphasis on race, for example the entrapment of African Americans and immigrant populations in central city neighbourhoods and ensuing isolation from emerging suburban and 'edge-city' loci of lower-skilled jobs and other facilities (Ihlanfeldt, 1993;Raphael, 1998;Liu and Painter, 2011).
A strong 'transport justice' literature in the US highlights a range of inequalities in a variety of transport issues, including: access to transport (particularly the benefits of access to a car) by race, gender and age; patterns in exposure to transport pollution; racial and class differences in fatalities and injuries arising from transport accidents; and the severance of neighbourhoods by freeways and railroads (Gwynn and Thurston, 2001;Clifton and Lucas, 2004;Wier et al., 2009;Mindell and Karlsen, 2012). Research in the UK and Europe more generally has tended to focus on 'transport and social exclusion', highlighting mainly obstacles to the use of public transport and the isolation faced by some groups and some locations, particularly older people, remote rural communities, ethnic minorities and disabled people (Owen and Green, 2000;Gray et al., 2006;Shergold and Parkhurst, 2012). On neither side of the Atlantic, however, has transport policy been a sustained area of scholarship within literatures on social policy and welfare services. This is surprising, given that transport itself can be considered a substantial 'welfare service' and that access to adequate transport is a vital component of achieving social inclusion and wellbeing.
Transport and the principles of justice
In order to critically assess the effectiveness and equity/justice of any policy, including transport policies, a fundamental question is: 'what outcomes are sought and for whom?' In relation to concessionary bus fares, the 'answers' could include: a) improving social inclusion by increasing the mobility of mobility-constrained individuals, for example along lines of income, age, race, ethnicity, gender and disability; b) redistribution of resources in favour of bus users, on the basis that low-income and other disadvantaged groups are over-represented among bus users; and c) environmental benefits through encouraging switches from car to bus.
The importance of access to transport in achieving social inclusion and wellbeing has been alluded to above. In relation to redistribution, given that some low-income individuals and households spend a non-trivial proportion of income on travel and are more likely to travel by bus, subsidising public transport has the potential to alter the overall distribution of disposable income. In relation to environmental benefits from reduced car use, the effectiveness of financial incentives alone in producing large-scale behaviour change has been questioned (Graham-Rowe et al., 2011). However, irrespective of any behaviour change that may or may not arise, financial instruments are used by governments to give reward and penalty signals to citizens of desired behaviours and wider values and deserts.
The eligibility criteria used in a concessionary bus travel policy therefore are of direct and substantial significance for issues of poverty and social justice. The principles of justice (need, desert, equality, etc.) can be applied just as readily to a concessionary travel policy as to a conventional area of social policy, such as social security or taxation (see Table 1). What constitutes 'need', 'desert' and so on of course is subjective, multi-facetted, contradictory and ultimately derives from underlying cultural and ideological values. In relation to transport, Farrington and Farrington (2005) argue that an 'acceptable' level of mobility is determined by societal norms around travel behaviour and the geographic locations of 'destinations' (homes, jobs, retail, leisure, etc.). Therefore, the examples listed in Table 1 serve solely to illustrate the principles and do not imply any judgements on what specific criteria might be considered 'just', necessary or appropriate in any particular society or context.
In relation to achieving social inclusion and alleviating poverty, who is in need of free or subsidised bus travel? Logically, the answer to this question is those who meet all of the following criteria: a) without access to a car; b) on low income; c) requiring to travel; and d) sufficiently able-bodied to make use of a bus (and therefore benefit from the provision of free bus travel). Therefore, in order to assess which groups in society are in greatest need of assistance from a concessionary bus fares policy (and therefore those whom policymakers may wish to target), it is necessary to consider the distribution of these four characteristics (a-d) across social categories such as income, race, ethnicity, gender and disability.
Before discussing the distribution of these characteristics, the relationship between each and an individual's potential to benefit from free bus travel is considered. Lack of access to a car can be expected, ceteris paribus, to lead to an increased need to use a bus. Low income can also be expected, ceteris paribus, to increase the risk of either foregone mobility (that is, not travelling in order to save money) or impoverishment resulting from a high proportion of income being spent on travel. Both the requirement to travel and disability have more ambiguous relationships with level of bus use. A greater requirement to travel might increase bus use, but equally may encourage car ownership and therefore serve to reduce bus use. Similarly, disability can be expected to either increase or decrease bus use for a given individual. On the one hand, some conditions limit bus use (despite improvements in the accessibility of public transport systems). On the other hand, some conditions lead to driving cessation, which may in turn increase bus use (provided of course an individual remains able to use a bus).
What is known about the distribution of these four characteristics across society? First, those without access to a car are most likely to be younger, older, disabled, on a low income, from an ethnic minority, female, or living in a city (Power, 2012). Second, those on low income are most likely to be younger, older, disabled, ethnic minorities and women, although many individuals with low or no personal income of course have access to household resources. The third characteristic, requirement to travel, is more difficult to measure because the requirement or need can be short-circuited by inability to travel, making the requirement to travel difficult to observe. Using the amount of actual travel as a proxy for requirement or need to travel, the greatest levels of travel mobility are found among those aged 30-50, those on higher incomes and men (although gender differences have narrowed substantially) (Tilley, 2013). Finally, the greatest prevalence of 'able-bodiedness' occurs among younger people, steadily declining with age (Noble, 2000). Poor health and impairment are more prevalent among lower socio-economic groups, although socio-economic disparities in 'able-bodiedness' are dwarfed by age differences.
A case study: the National Concessionary Travel Scheme (NCTS) in Scotland
Table 1. Principles of justice applied to concessionary bus travel: illustrative eligibility criteria.
Activity needs
Those who have a requirement to travel, e.g.:
• having a long commute;
• to attend regular hospital appointments;
• to undertake mandatory unpaid activities outside the home, e.g. jury service, reporting to social security offices;
• to access leisure and cultural facilities and social activities.
Just deserts
Those who deserve to be rewarded, e.g.:
• for undertaking unpaid 'beneficial' activities outside the home, e.g. attending college, caring for relatives;
• for contributing to transport provision, e.g. employees of bus companies.
Equal shares
Redistribution towards greater equality across groups (income, race, ethnicity, gender, etc.) in, for example:
• level of use of/benefit from state-subsidised services, incl. public transport;
• overall levels of mobility by all modes of travel.
Option choices
Encouraging and rewarding desirable travel behaviours, e.g.:
• scrapping a polluting car;
• becoming a non-car owner;
• travelling regularly by bus.
Affordability
Supporting those who struggle to pay bus fares, e.g.:
• those on a low income;
• those who spend a high proportion of their income on bus travel.
Across the UK there are policies of providing older and disabled people with a concessionary travel pass which allows travel free of charge on bus services, with the bus operator being reimbursed by the state. The policy of free bus travel aims to improve older people's mobility by eliminating the financial cost of public transport as borne
by the user. The development of this policy has been based on the assumption that the cost of transport is a barrier to mobility for older people, due to low incomes as a result of relying on pensions as well as, for some, not being able to drive. Concessionary travel for older people is a universal benefit, in that all people over a certain age are eligible. As it is based on age, it assumes that all older people have similar mobility requirements. It is often politically justified in terms of 'just deserts', on the basis that the recipient will usually have paid tax during their working life. The National Concessionary Travel Scheme (NCTS) that is in place in Scotland is a universal policy in so far as all people over the age of 60 are eligible. Eligibility to claim the concessionary pass is only based on age, irrespective of activity needs or affordability. However, it is, of course, consistent with the universalist principle of entitlement -albeit restricted to a certain age group. In addition, however, disabled people of all ages are also entitled to free bus travel in Scotland.
Concessionary travel passes will no doubt be meeting a need for some older people. Without this pass, some older people may struggle to travel and therefore to access services. For example, the maximum state pension available is £110.15 per week (Gov.uk, 2013) and a single bus fare costs an average of £1.89 in urban areas and £1.96 in non-urban areas (TAS Partnership Ltd, 2012), meaning that travelling every day by bus costs around £25 per week, representing nearly a quarter of state pension income. As a consequence, older people on low incomes could either spend a relatively high proportion of income on transport, with implications for poverty, or be discouraged from travel, with implications for social exclusion.
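As a back-of-the-envelope check of this affordability point (assuming one return journey, i.e. two single fares, per day; the fare and pension figures are those quoted above):

```python
# Rough weekly cost of daily bus travel relative to the maximum basic state pension.
state_pension_per_week = 110.15  # GBP per week (Gov.uk, 2013 figure quoted above)
for area, single_fare in [("urban", 1.89), ("non-urban", 1.96)]:
    weekly_cost = 2 * single_fare * 7          # two single fares a day, seven days a week
    share_of_pension = weekly_cost / state_pension_per_week
    print(f"{area}: about £{weekly_cost:.2f} per week, {share_of_pension:.0%} of the pension")
# Prints roughly £26-27 per week, i.e. close to a quarter of state pension income.
```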
Concessionary travel schemes have been in existence in the UK since the late 1960s (Headicar, 2009) yet are only weakly embedded within UK mainstream welfare policy, which is dominated by concerns with education, health, social security, and to a lesser extent pensions and housing. Older people are able to travel for free on public transport in all UK countries. In Scotland, free national concessionary bus travel has been available since April 2006 to older people aged 60 and over, as well as eligible disabled people, with the aim of promoting social inclusion by allowing improved access to services by increasing mobility by bus (Transport Scotland, 2009). Prior to this, schemes were provided at the discretion of each local authority. Table 2 summarises the changes that have been made to the Scottish concessionary travel scheme policy between 1999 and 2008.
This paper focuses on evaluating mobility before and after April 2006 as major changes were made to the scheme when local area boundary and peak hour restrictions were removed. These changes were a significant expansion of the policy and therefore provide a 'natural experiment' on which to base an empirical analysis of the impacts of the policy. The changes also served to increase awareness of the scheme, as the introduction of the new national scheme was quite widely publicised.
Due to the universal availability of the scheme, anyone meeting the age criterion, irrespective of income, is eligible to claim. Not all older people have low incomes, so this policy makes very broad generalisations about the income and mobility needs of this population. Concessionary travel for older people, when considered as a welfare policy, is highly unusual and has more in common with the universalist post-war welfare settlement than with twenty-first century welfare policies, which are characterised by increased targeting on those in greatest need. It is a universal policy: every citizen (who meets the age criterion, of course) is entitled to free bus travel. Since the 1980s, means-testing has become more widespread in the UK social security system, which makes concessionary travel for older people unusual. Age is a very weak proxy for an individual's level of need for financial assistance to use public transport: many older people travel by car or can afford to pay for the bus. Age is a political, rather than a needs-based, category when it comes to concessionary travel policy. Older people are seen as having 'earned' their concessions by paying tax throughout their working lives and are represented as in 'need' due to advancing years and the onset of frailty. This combination of desert and assumed need results in a presumption of entitlement.
In recent years in the UK there has been particular focus on the increasingly unequal distribution of wealth between generations (Willetts, 2010), which had been noted particularly in relation to housing and higher education costs and access to benefits, but exacerbated by high youth unemployment since the recession. Therefore cost as a barrier to transport may become more significant among some younger people. As the price of bus fares has increased above the rate of inflation, the de facto benefit older people are receiving is rising, while younger people using public transport have to pay increased fares. As a result there are concerns that the existing NCTS is neither effective in targeting those most mobility-constrained (many of whom are under 60 years of age and therefore not eligible) nor fair, in that many who are eligible are not on low incomes. Higgs and Gilleard (2010) highlight that there has been a renewed interest in inter-generational justice in modern welfare states due to population ageing and the financial crisis in 2008 and the resulting economic slowdown.
Although some research has found that quality of life has improved among older people from the introduction of concessionary travel (Rye and Mykura, 2009; Andrews, 2011; Hirst and Harrop, 2011) as well as health improvements (Coronini-Cronberg et al., 2012; Webb et al., 2012), these benefits have been contested due to the small numbers reporting them and high costs involved in providing the scheme (Rye and Carreno, 2008b).
There appears to have been very little social inclusion impact arising from concessionary travel in Scotland as for many there are non-financial barriers to using public transport (Rye and Carreno, 2008a; 2008b; Transport Scotland, 2009). For example, aspects of the built environment can present barriers to accessing public transport (Marsden et al., 2008; Wennberg et al., 2009; Risser et al., 2010; Hess, 2012).
Quantitative evaluations of the NCTS have focused on levels of uptake of the concessionary pass and not on level of use of the pass. Prior to the introduction of the national scheme, uptake of concessionary travel passes under the old local schemes was highest among lower-income groups, women, those without access to a car and urban dwellers. Since the expanded national travel scheme was introduced, overall uptake has increased, with the greatest increases among groups with previously low levels of uptake: higher-income groups, men, car owners and rural dwellers (Rye and Mykura, 2009; Scottish Government, 2009; Baker and White, 2010; Dargay and Liu, 2010; Humphrey and Scott, 2012). Only one study has examined the impact of the scheme on levels of bus use (Dargay and Liu, 2010), but without any controls for other factors associated with bus use or counterfactual trends as used in the analysis reported in this paper. Dargay and Liu (2010) found that while trips with the pass rose by 25 per cent, bus trips made by eligible individuals fell by 10 per cent. Without controlling for a counterfactual trend, it is not possible to conclude from these figures how many, if any, of the increase in the number of concessionary trips are additional trips that would not have been made in the absence of the scheme.
Data and methods
This paper considers the impact of the Scottish NCTS and the appropriateness and consequences of providing free bus travel, especially in light of important changes to the health and wealth of retired people. Scotland is used as a case study as there is available data to conduct a 'natural experiment' due to the substantial enhancement of the scheme in 2006.
The analysis was undertaken using data from the Scottish Household Survey (SHS). This is a repeated cross-sectional survey of the composition and characteristics of households in Scotland. The survey includes a one-day travel diary element, enabling travel patterns to be linked to household characteristics. Other surveys with a travel diary, such as the UK's National Travel Survey, have much smaller sample sizes that would not support the analysis of specific sub-groups, such as older bus travellers differentiated by a range of personal characteristics. Data are available from 1999 to 2008 and as the NCTS was rolled out nationally in Scotland in 2006, data are available to consider how the policy is being taken up by different users. The SHS is based on a random sample from the Postcode Address File (PAF) generating 31,000 interviews spread evenly over two years. Aggregating data from all available years into a single dataset gives a final sample of older people (aged 60+ years) of over 26,000.
For the purpose of this research, mobility refers to distance travelled and frequency of trip-making that takes place outside the residential home in order to acquire goods or services or to take part in activities. Trip rates, distance travelled and mode of transport used are common indicators of mobility (Tacken, 1998;Rosenbloom, 2004;Páez et al., 2007). Changes in daily distance travelled per week are important in this study because as incomes have risen, car ownership has increased, leading to higher car use and greater distance travelled (Lucas and Jones, 2009;Metz, 2010). The analysis only focuses on daily mobility rather than less frequent longer distance travel. All older respondents were included in the analysis, even if there were no trips made, to be able to accurately assess mobility trends.
To estimate what might have happened without the existence of the NCTS policy, a statistical technique called 'difference-in-difference'¹ was used. This technique compares the differences in behaviour between two time periods as well as the 'difference in difference' between a 'control' and an 'intervention' group over the time periods, which, it is argued, reveals the policy impact. The intervention group comprised those aged 60 and over who stated that they held a concessionary pass. The control group were those aged 60 and over without a concessionary pass. Since uptake of the pass is voluntary, it is possible that those who take up the pass would have increased their bus use in the absence of the policy, introducing a bias into the quasi-experimental design of this part of the analysis. However, the 'difference-in-difference' approach takes account of the change predicted to have taken place in the absence of the policy (based on the temporal change observed in the 'control' group). We can therefore have a degree of cautious confidence that the results are free from a strong bias effect. The two time periods considered were 1 January 1999-31 March 2006 (before the national roll-out of the NCTS policy) and 1 April 2006-31 December 2008 (after the national roll-out of the policy in April 2006). Previous research has shown that higher incomes generally lead to higher levels of car ownership (and therefore driving licences) and greater distances travelled (Pooley et al., 2005; Lucas and Jones, 2009; Metz, 2010). Residential area type also influences mobility levels (Gray et al., 2008). Therefore these factors have been included as independent variables in the regression models. All analysis has been disaggregated by gender, given gender differences in mobility patterns (Rosenbloom, 2006; Su and Bell, 2012). Trip frequency and distance travelled as car-driver and as bus passenger were used as indicators of mobility.
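A minimal sketch of the difference-in-difference logic described above is given below. The tiny synthetic dataset and variable names are illustrative only and are not the SHS field names; the real models also controlled for income, car access and area type, and were estimated separately by gender.

```python
# Minimal sketch of a difference-in-difference estimate of a policy impact.
# The synthetic data and names below are placeholders, not SHS variables.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    # post = 1 if the travel-diary day falls on/after 1 April 2006
    "post":        [0, 0, 0, 0, 1, 1, 1, 1],
    # pass_holder = 1 for the 'intervention' group (holds a concessionary pass)
    "pass_holder": [0, 0, 1, 1, 0, 0, 1, 1],
    # bus trips recorded in the one-day travel diary
    "bus_trips":   [1, 0, 2, 1, 1, 0, 3, 2],
})

model = smf.ols("bus_trips ~ post + pass_holder + post:pass_holder", data=df).fit()

# The coefficient on the interaction term is the difference-in-difference
# estimate of the policy impact on bus trips.
print(model.params["post:pass_holder"])
```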
Findings: impacts of the NCTS set against broader mobility trends
Broader mobility trends have been assessed to provide context to the changes in mobility related to the NCTS. Figure 1 presents the median distance travelled per week (km) by age group, comparing men and women at two points in time: 1995-97 and 2006-08. Over time, distance travelled has increased among older age groups aged 60 and over, while it has declined among younger age groups, particularly among men belonging to the youngest age groups.
Figure 1: Median distance travelled per week (km) excluding commuting, business and education trips by gender and age group from 1995 to 2008 (3-year moving averages with 95% confidence intervals)
Source: Tilley, 2013.
During 1995-97 median distance travelled was greater among men than women, and this was true of all age groups. However, during 2006-08, for younger age groups (below 40 years), women travelled further than men. These findings highlight important gender differences in mobility; however, the nature of these differences has changed over time among some age groups. The greater mobility of older men largely reflects better access to transport resources among that generation (Rosenbloom, 2006), while the contemporary greater mobility of younger women may reflect more complex activity schedules undertaken by women compared to men.
To further explain this finding, Figure 2 shows that in Britain, since 1975/76, there has been an increase in the proportion of both men and women holding a full driving licence across most age groups. However, the biggest increases in driving licence holding have been among older people and women.
Since 1975/76, the proportion of men aged 60-69 years holding a driving licence has increased from 58 per cent to 89 per cent in 2010. Among women, the increase has been even greater, rising from 15 per cent in 1975/76 to 69 per cent in 2010. These dramatic increases are also reflected in the proportions of men and women aged 70 and over who hold a driving licence. For men this increased from 32 per cent in 1975/76 to 78 per cent in 2010, while for women, it rose from 4 per cent to 41 per cent. These are very significant figures indicating that the older population are highly mobile compared to those of 30 years ago.
A reduced rate of driving licence holdership is, however, observed among the 17-20 age groups and 21-29 age groups, most notably among men. Among these younger age groups cost factors are the main barriers to learning to drive (DfT, 2010). To assess whether the NCTS in Scotland is increasing mobility, therefore promoting social inclusion among older people, a closer examination of the trends relating to concessionary pass holding is required.
From 1999-01 to 2006-08 the proportion of older people holding a concessionary bus pass increased from 65 per cent to 88 per cent, representing the increased scope and generosity of the concessionary scheme.
To assess whether the Scottish NCTS could be promoting social inclusion through increased mobility, the socio-economic characteristics of concessionary pass holders are explored. Figure 3 shows that concessionary pass holders aged 60 and over who also hold a driving licence has increased among both men and women. Figure 4 also presents a similar pattern in relation to household car access. Figure 5 shows that the proportion of older people holding a concessionary pass and belonging to the two highest income groups (£15,000 per annum and over) has increased over time, while it has declined among the lowest income group.
Although older people are eligible for concessionary travel there are differing trends between older and younger age groups. The trend of increasing driving licence holdership among older people (DfT, 2010) could be partly reflected in the increasing proportion of concessionary pass holders with a driving licence. In addition, the proportion of concessionary pass holders with access to a car has also increased over time.
The trends relating to concessionary pass holders are important to note: while rates of driving licence holding and car access are increasing among them, there is a reducing rate of driving licence holding among younger age groups, who are not eligible for free bus travel.
In terms of the effect of the Scottish NCTS on mobility after the policy became administered nationally, Table 3 shows that bus trips increased significantly among older women with a concessionary travel pass (0.25, p-value < 0.1) and declined among men with a concessionary travel pass (-0.16), although not significantly. Table 4 shows that weekly distance travelled by bus increased among older men (1.06 km) and older women (4.69 km) holding a concessionary pass, although not significantly.
Tables 5 and 6 present the results of car-driver use among older concessionary pass holders. Table 5 shows that car driver trips increased among both men and women (0.14 and 0.03 trips per week respectively), although not significantly. Table 6 shows that distance travelled as car-driver increased among older men after the policy extension (3.85 km), while it fell among older women (-2.90 km). Again, these results were not statistically significant, suggesting that the NCTS had only a small impact in shifting mode of travel from car to bus.
Overall, the mobility of older people appears to have increased by bus after the NCTS policy was applied to national travel in April 2006, in particular among older women. However, despite the provision of free bus travel through the concessionary travel scheme, the car remains important for the mobility of older people, particularly for older men.
Impacts and effectiveness of the NCTS
Older car users are increasingly taking up the concessionary travel scheme. This suggests that the majority of older people with concessions may not be dependent on public transport. Thus, concessionary bus travel based on age is not effective at targeting those in need of help with bus fares. However, women have increased bus use as a result of the scheme more than men, and this may go some way to counteracting overall lower mobility among older women arising from lower car use. The results suggest that the NCTS could be suppressing potentially higher car use among older people, albeit only on a relatively small scale. This could in part reflect increasing mobility among older people over time, particularly as driving licence holding has increased. Recent research has argued that increasing mobility among older people is due to the ageing of the 'baby boomer' cohort who, generally, are associated with higher car use (Coughlin and Reimer, 2006; Coughlin, 2009). These findings may also be related to higher incomes among older people; again, this is particularly associated with the boomer cohort. As this cohort ages, there will be a greater proportion of older people belonging to higher income groups who are able to claim a concessionary travel pass.
The higher and increasing mobility of older people as a result of the ageing boomer cohort, compared to lower levels of mobility among younger cohorts adds to the debate regarding the appropriateness of the provision of concessionary travel to all older people based solely upon age. As the mobility of older people increases, bus use appears to be decreasing in favour of the private car.
Transport justice and older people
The justice principles assessed from a transport perspective in Table 1 are now used as a framework in which to assess the justice implications of the NCTS. This is set out in Table 7. On the one hand, older women have benefited from the scheme more than men. Since women were more mobility-constrained than men prior to the scheme, the scheme in this sense is meeting need and addressing gender inequality. On the other hand, increasing numbers of more affluent older people with access to a car are making use of free bus travel provided through the NCTS. This is happening in the face of declining mobility among young people who receive little support with the costs of bus travel and none on a nationally-consistent basis. In meeting the activity needs of those who have a requirement to travel, the NCTS is particularly ineffectively targeted. First, the removal of commuting trips at retirement substantially reduces the need to travel, making older people a low-need group for mobility support. Second, mobility is falling among younger people but rising among older people, consistent with unmet needs rising among younger people but falling among older people.
In terms of rewarding 'just deserts', older people often undertake unpaid childcare for grandchildren, which may require travel and society may consider deserving of reward. More generally, older people usually have paid tax during their working lives and experience a drop in income at retirement so in this sense could also be seen as 'deserving'.
In promoting equality in mobility, the NCTS has reduced age inequality in mobility by increasing the mobility of older people. It has also reduced gender inequality in mobility within the older population by increasing the mobility of older women. In encouraging and rewarding desirable travel behaviours, the NCTS has contributed to a shift from car to bus among older people with associated environmental benefits, albeit on a small scale. In addressing affordability issues, the NCTS has increased bus use among all income groups. However, higher-income groups among those eligible for the NCTS benefit most because they travel more than lower-income groups.
Some people, of a range of ages, require support with transport mobility to avoid social exclusion. Although concessionary fares are a universal benefit and available to all those aged 60 and over in Scotland, this population group varies greatly in terms of socio-economic characteristics. Older people are widely assumed to have lower incomes as a result of relying on pensions. However, many continue to participate in the labour market for a number of years beyond the age of 60, yet the universal nature of the scheme does not take this into account. Not all older people require free bus travel and many are able to access facilities and participate in social networks without welfare state assistance. The concessionary scheme could be deemed unequal as it is provided at the expense of other age groups who, it could be argued, are in more social need of receipt of such benefits, for example due to higher levels of unemployment experienced among the youngest adults.
Activity needs
Those who have a requirement to travel: • the removal of commuting trips at retirement substantially reduces the need to travel, making older people a low-need group for mobility support; • mobility is falling among younger people but rising among older people, consistent with unmet needs rising among younger people but falling among older people; • to prevent loneliness and maintain social capital, older people require social contact.
Just Deserts
Those who deserve to be rewarded: • older people often undertake unpaid childcare for grandchildren, which may require travel; • older people usually have paid tax during their working lives and experience a drop in income at retirement.
Equal Shares
Redistribution towards greater equality across groups: • the NCTS has reduced age inequality in mobility by increasing the mobility of older people; • the NCTS has reduced gender inequality in mobility within the older population by increasing the mobility of older women.
Option choices
Encouraging and rewarding desirable travel behaviours: • the NCTS has contributed to a shift from car to bus among older people, albeit on a small scale.
Affordability
Supporting those who struggle to pay bus fares: • the NCTS has increased bus use among all income groups; • however, higher-income groups among those eligible for the NCTS benefit most because they travel more than lower-income groups.
While the concession for older people is protected by national statute, concessionary fares that are available for younger people are discretionary and have been cut by local authorities in conjunction with the rising cost of public transport. This cost is becoming a barrier for younger people accessing transport (Bourn, 2013).
Transport as a welfare service
For some groups, such as younger people (including for getting jobs and training), disabled people, women and older people, transport has an important role to play in avoiding social exclusion. Therefore, transport policy needs to be considered as of relevance to, even an integral part of, the welfare state. Considerable sums of public (and private) money are invested in transport infrastructure, services and concessionary fares. The analysis reported in this paper has demonstrated that the benefits of concessionary fares are unequally distributed, and this is also likely to be the case for investment in transport infrastructure and services. Some groups in society will benefit more than others. Given the central implications of transport for wellbeing and equality, transport policy is a crucial, but rather neglected and perhaps 'Cinderella', policy area in relation to understanding poverty and social justice.
Notes 1 Difference-in-difference multiple regression models compare changes in behaviour between two time periods and two groups for a simple setup. The 'intervention' group is exposed to a 'treatment' in the second period but not in the first. The 'control' group is not exposed to 'treatment' during either period. For more details see Tilley (2013). | 2016-03-22T00:56:01.885Z | 2016-06-01T00:00:00.000 | {
"year": 2016,
"sha1": "0818e152b24e6451768c502b892364a6bf1536e5",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.1332/175982715x14418059634901",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "f4da2d04bc606a0c9197ef67afb3d5242d16dd59",
"s2fieldsofstudy": [
"Political Science",
"Sociology"
],
"extfieldsofstudy": [
"Economics"
]
} |
119342831 | pes2o/s2orc | v3-fos-license | The evolution of the radial gradient of Oxygen abundance in spiral galaxies
The aim of this work is to present our new series of chemical evolution models computed for spiral and low-mass galaxies of different total masses and star formation efficiencies. We analyze the results of the models, in particular the evolution of the radial gradient of oxygen abundance. Furthermore, we study the role of the infall rate and of the star formation history in the variations of this radial gradient. The relations between the O/H radial gradient and other spiral galaxy characteristics, such as size or stellar mass, are also shown. We find that the radial gradient is mainly a scale effect which basically does not change with redshift (or time) if it is measured within the optical radius. Moreover, when it is measured as a function of a normalized radius, it shows a similar value for all galaxy masses, with a dispersion around an average value which is due to the differences in star formation efficiencies, in agreement with the idea of a universal O/H radial gradient.
Introduction
The elemental abundances in spiral and low-mass galaxies are lower in the outer regions than in the inner ones, showing a well-characterized radial gradient defined by the slope of a least-squares straight line fitted to the radial distribution of these abundances along the galactocentric radius (Shaver et al. 1983; Zaritsky et al. 1994; Henry & Worthey 1999). These radial gradients seem to correlate with other characteristics of their galaxies. Thus, they are flatter in early-type galaxies than in late-type ones, and they also seem steeper in low-mass galaxies than in bright massive disks. This radial gradient is considered an evolutionary effect, that is, it comes from a difference in enrichment between the more evolved regions (the inner parts of the disk) and the less evolved zones of the outer disks. In this way, a flat gradient implies a more rapid evolution than in disks where the gradient is steeper, as shown in Mollá, Ferrini & Díaz (1996) for a set of models for some nearby galaxies. These models resulted in a steep radial gradient for NGC 300 or M 33, while M 31 had a flatter gradient than our Milky Way Galaxy (MWG) and other similar galaxies, such as NGC 628 or NGC 6946. Since the evolution modifies the level of enrichment of a given region or galaxy, it is expected that the radial gradient also changes with time and, therefore, that high- and intermediate-redshift galaxies would show a steeper radial gradient than at the present time, at least when measured in dex kpc −1. This result was obtained in Mollá, Ferrini & Díaz (1997), and again later in Mollá & Díaz (2005), hereafter MD05, and was also supported by the Planetary Nebulae (PN) O/H abundance data (Maciel, Costa & Uchida 2003) and by open stellar cluster metallicities for different age bins. Cosmological simulations for a MWG-like galaxy also obtained a similar behavior (Pilkington et al. 2012). However, when a correct feedback is included in these simulations, the radial gradient turns out to have a very similar slope for all times/redshifts (Gibson et al. 2013). In turn, the most recent PN data (Stanghellini & Haywood 2010, Magrini et al. 2016), refined to estimate their ages and distances with more precision, now give the same result: there is no evidence of evolution of the radial gradient with time for the MWG nor for other nearby spiral galaxies, at least until z = 1.5. Simultaneously, there are, however, some recent observational data which estimate the abundances of galaxies at high and intermediate redshift and which obtain a plethora of different radial gradients, with values as different as -0.30 dex kpc −1 or +0.20 dex kpc −1 (Cresci et al. 2010, Yuan et al. 2011, Queyrel et al. 2012, Jones et al. 2013, Genovali et al. 2014, Jones et al. 2015, Xiang et al. 2015, Anders et al. 2016). It is therefore necessary to revise our chemical evolution models and analyze in detail the evolution of this radial gradient not only for the MWG, but also for different galaxies.
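To make the quantity under discussion concrete, a minimal sketch of how such a slope is obtained follows; the abundance values below are invented for illustration, and the fit is simply the ordinary least-squares straight line described above.

```python
# Minimal illustration of how an abundance gradient is usually quoted:
# the slope of a least-squares straight line fitted to 12+log(O/H) versus
# galactocentric radius.  The data points below are invented.
import numpy as np

radius_kpc = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])   # galactocentric radius
oh = np.array([8.75, 8.66, 8.58, 8.49, 8.40, 8.31])       # 12 + log(O/H)

slope, intercept = np.polyfit(radius_kpc, oh, 1)
print(f"O/H gradient: {slope:.3f} dex/kpc")                # ~ -0.044 dex/kpc here

# The same slope expressed per effective radius (normalised gradient),
# for an assumed effective radius of 5 kpc:
r_eff = 5.0
print(f"Normalised gradient: {slope * r_eff:.3f} dex per R_eff")
```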
Chemical evolution model description
We have computed a series of 76 models applied to spiral galaxies with dynamical masses in the range 5 × 10 10 -10 13 M ⊙ (with a mass step in logarithmic scale of ∆ log M = 0.03), which implies disk total masses in the range 1.25 × 10 8 -5.3 × 10 11 M ⊙ or, equivalently, rotation velocities between 42 and 320 km s −1. The radial distributions of these masses are calculated through equations from Salucci et al. (2007), based on the rotation curves and their decomposition into halo and disk components. The scenario is the same as in MD05, with the total mass in a spherical region at time t = 0, which falls onto the equatorial plane and forms the disk. The gas infall rates are computed by taking into account the relationship between halo mass and disk mass in order to obtain, at the end of the evolution, disks as observed, with the adequate mass. They turn out to be higher in the centers of disks (bulges) and lower in the disks, decreasing towards the outer regions, as expected in an inside-out scenario. However, the evolution with redshift is very similar for all disk regions, with differences only in the absolute values of the infall rates, which decrease smoothly with decreasing z, except for the central regions, for which the infall changes strongly with z, more similarly to the cosmological simulation results found for early-type and spheroidal galaxies. Details about these resulting rates are given in Mollá et al. (2016). Within each galaxy we assume that there is star formation (SF) in the halo, following a Schmidt-Kennicutt power law on the gas density with an index n = 1.5. In the disk, however, we have a star formation law in two steps: first molecular clouds form from diffuse gas, then stars form from cloud-cloud collisions (or by the interaction of massive stars with the molecular clouds surrounding them). In our classical standard models from MD05, we treated these processes as depending on the volume of each region and a probability factor or efficiency for each one. For the halo SF, an efficiency constant for all galaxies is assumed. The process of interaction of massive stars with clouds is considered local, and we use the same approach for all galaxies. The two other efficiencies, defining the molecular cloud and star formation processes, are modified simultaneously from one model to another, with values between 0 and 1. In this new series of models we have also used an efficiency to form stars from molecular clouds, but to convert diffuse gas into the molecular phase we have tried six different methods: two based on the same efficiency method as in MD05, called STD and MOD, and four based on different prescriptions from Blitz & Rosolowsky (2006), Krumholz et al. (2008, 2009), Gnedin & Kravtsov (2011), and Ascasibar et al. (in preparation), so-called BLI, KRU, GNE, and ASC, respectively. More details about these calculations and their implementation in our code are given in Mollá et al. (submitted), where we have applied the models to the MWG and have checked which of these techniques gives the best results when compared with the observational data. Our results indicate that the ASC technique shows a behavior in better agreement with the data than the others, in particular for the HI/H 2 ratio along the radius or as a function of the gas density. The stellar yields are selected as derived in Mollá et al. (2015) among 144 different combinations with which we calculate a MWG model to see which of them is the best in reproducing the MWG data. We chose the Gavilán et al. (2005, 2006) stellar yields for low- and intermediate-mass stars, the ones from Limongi & Chieffi (2003) and Chieffi & Limongi (2004) for massive stars, and the Kroupa (2002) IMF. The Supernova type Ia yields from Iwamoto et al. (1995) are also used.
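A minimal sketch of the Schmidt-Kennicutt-type law assumed for the halo star formation is given below; the efficiency value and gas densities are placeholders, not the values adopted in the model grid, and the two-step disk law is not reproduced here.

```python
# Minimal sketch of a Schmidt-Kennicutt-type star formation law with n = 1.5,
# as assumed for the halo phase.  Efficiency and gas densities are placeholders.
import numpy as np

def sfr_schmidt_kennicutt(gas_density, efficiency, n=1.5):
    """Star formation rate proportional to the gas density to the power n."""
    return efficiency * gas_density**n

gas = np.array([0.5, 1.0, 2.0, 5.0])    # gas density in arbitrary units
print(sfr_schmidt_kennicutt(gas, efficiency=0.01))
```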
Evolution of the O/H radial gradient in MWG
In Fig. 1 we represent the O/H radial gradient as a function of redshift for the six MWG models calculated in Mollá et al. (2017) with the different prescriptions for the HI to H 2 conversion, as explained above. In the top panels we show these gradients in dex kpc −1, computed in panel a) with the whole radial range for which we have calculated the models. In panel b) we have the gradients calculated with regions for which R ≤ 2.5 R eff. We may see that in this last case the gradient is basically constant along z, with very small differences among models. In both panels we have included the MWG data which give this evolution along z: PN from Stanghellini & Haywood (2010); open clusters from Cunha et al. (2016); and stellar abundances from Anders et al. (2016). We have also shown the cosmological simulation result for a Milky Way-like galaxy from Gibson et al. (2013, G13). We see that, in order to reproduce the data, it is necessary to compute the gradient within the optical radius. In the bottom panels we show the gradients obtained using a normalized radius (R/R eff). In panel c), where we again use the whole radial range, we see a different behavior between the STD and MOD models (which use an efficiency to form molecular clouds) and all the others, which use a prescription to convert HI into H 2 that depends on the gas, stellar or total density and/or on the dust through the metallicity. In these last cases the effective radius increases more slowly than in our standard models, thus producing a strong radial gradient when regions outside the optical disk are included in the fit. In panel d), where only regions with R ≤ 2.5 R eff are used, we find again a very constant radial gradient along the redshift. In fact, this value is in very good agreement with the one found by Sánchez et al. (2014, S14) as a common gradient for all the CALIFA survey galaxies. The growth of the disk in the different models is shown in Fig. 2 with data as labelled. We see, as said before, that the ASC model is the one where the radius increases most slowly, while the STD and MOD models started very early to show a large disk. GNE, BLI and KRU show an intermediate behavior. Although the data we show in Fig. 2 refer mainly to bulges and disks from early-type galaxies, it seems clear that ASC is the model closest to the observations.
Figure 2. Evolution of the effective radius R eff with redshift z. Data are from Trujillo et al. (2007), Buitrago et al. (2008), and Margalef-Bentabol et al. (2016), as blue triangles, red squares, and black dots, respectively.
Evolution of the O/H radial gradient in spiral and low mass galaxies
In Fig. 3 we show the radial gradients computed for different galaxy masses, as labelled, in a similar way to Fig. 1. In panel a), as in Fig. 1a, the radial gradient computed for all radial regions is shown. It is clear that each galaxy has its own evolution, with the smallest one showing the most different behavior. Each galaxy has a different radial gradient, with the most massive ones showing the flattest distributions (∼ −0.05 dex kpc −1 for all z), while the smallest has the steepest gradient (∼ −0.20 dex kpc −1). However, when only radial regions within the optical radius are used to compute the radial gradient, a very different behavior arises: all gradients are approximately constant with z for galaxies with log(M vir ) ≥ 11.65, although with the same trend as before: the more massive the galaxy, the flatter the gradient. In the lowest-mass galaxies, the moment at which the disk begins to grow is evident: at z = 2.5 for log M vir = 11.35 and at z = 0.5 for log M vir = 11.05. When we represent the gradients measured as a function of the effective radius, we see that they steepen with decreasing z when all radial regions are used (Fig. 3c) and, again, a very smooth evolution along z for all galaxies appears when only the optical disk is used to fit the gradient (panel d). The average value in this case is ∼ −0.13 dex R eff −1, similar to the value found by Sánchez et al. (2014), supporting their claim that a universal radial gradient appears for all galaxies. A common radial gradient is easily obtained by drawing O/H at the present time as a function of R/R eff for all galaxy masses and efficiencies (larger than 0.002) in the same plot, as we show in Fig. 4. We see that, effectively, as Sánchez et al. (2014) found, the same radial gradient is obtained for all models when R/R eff ≤ 2.5, with a dispersion given by the differences in the star formation efficiencies around an average radial distribution. Since it seems quite evident that the radial gradient is a scale effect due to the star formation rate, which is measuring the stellar disk growth, we would expect a correlation between this O/H radial gradient measured in dex kpc −1 and the scale length of the disk or any other quantity defining the size of the disk. We plot in Fig. 5, right panel, this correlation for all our models with different galaxy masses and with six different values of the efficiency to form stars from molecular clouds, which are coded with different colored dots. The correlation is clear for all effective radii larger than 1.25 kpc. If the effective radius is smaller than this value, our code, working with radial regions 1 kpc wide, is not able to calculate a radial gradient or an effective radius. This theoretical correlation supports the observational one found by Bresolin & Kennicutt (2015) between the radial gradient and the scale length of the disks (their Fig. 3). These authors claim in that work that all galaxies, even the low surface brightness galaxies, share a common abundance radial gradient when it is expressed in terms of the exponential disk scale-length (or any other normalization quantity).
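The following short illustration shows why a roughly universal gradient per effective radius implies a dex kpc −1 gradient that scales with 1/R eff, as in Fig. 5; the universal value and the effective radii used are only indicative.

```python
# If the gradient is (roughly) universal when expressed per effective radius,
# the gradient in dex/kpc simply scales with 1/R_eff, which is the correlation
# shown in Fig. 5.  The value of -0.1 dex per R_eff is the order quoted in the text.
import numpy as np

universal_grad = -0.1                          # dex per effective radius (indicative)
r_eff_kpc = np.array([1.5, 3.0, 5.0, 8.0])     # assumed effective radii of different disks

grad_dex_per_kpc = universal_grad / r_eff_kpc
for r, g in zip(r_eff_kpc, grad_dex_per_kpc):
    print(f"R_eff = {r:4.1f} kpc  ->  gradient = {g:+.3f} dex/kpc")
```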
Conclusions
Figure 5. The O/H radial gradient measured in dex kpc −1 , as a function of the inverse of the effective radius, 1/R eff . Each color shows a different efficiency ǫ * .
The conclusions can be summarized as: • A grid of chemical evolution models with 76 different total dynamical masses in the range 10 10 to 10 13 M ⊙ is calculated.
• 10 values of efficiencies ǫ * to form stars from molecular clouds are used with values 0 < ǫ * < 1. But we find that useful values are only the first six-seven of them with ǫ * > 0.002.
• The best combination, the Kroupa (2002) IMF + Gavilán et al. (2006) + Chieffi & Limongi (2003, 2004) yields, is used. The stellar yields + IMF may modify the absolute abundances on a disk, but they do not change the radial slope of the abundance distributions of disks. • Using the Shankar et al. (2006) prescriptions for M halo /M disk , we obtain the necessary infall rates to reproduce the radial profiles of galaxy disks. • Different prescriptions for the conversion of HI to H 2 are used, finding that the ASC model is the best one.
• The slope of the oxygen abundance radial gradient for a MWG-like model, when it is measured for R < 2.5 R eff , has a value of −0.06 dex kpc −1 , which is around −0.10 dex R eff −1 when it is measured using a normalized radius. • This same slope is also obtained for all efficiencies and all galaxy masses, in excellent agreement with CALIFA results, supporting the idea of a universal radial gradient for all galaxies when measured as a function of a normalized radius.
• The slope does not change very much along z when the infall rate is as smooth as the one we have obtained recently, compared with old models with a stronger evolution. | 2016-12-10T22:15:15.000Z | 1900-10-01T00:00:00.000 | {
"year": 2016,
"sha1": "48a46fc444c34ec074f5968f464eb9d3bcb8ae07",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "48a46fc444c34ec074f5968f464eb9d3bcb8ae07",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
250099140 | pes2o/s2orc | v3-fos-license | Recent Development in Nanoconfined Hydrides for Energy Storage
Hydrogen is the ultimate vector for carbon-free, sustainable green energy. While being the most promising candidate to serve this purpose, hydrogen exhibits a series of characteristics making it particularly difficult to handle, store, transport and use in a safe manner. Researchers' attention has thus shifted to storing hydrogen in its more manageable forms: the light metal hydrides and related derivatives (ammonia-borane, tetrahydridoborates/borohydrides, tetrahydridoaluminates/alanates or reactive hydride composites). Even then, the thermodynamic and kinetic behavior faces either too-high energy barriers or sluggish kinetics (or both), and an efficient tool to overcome these issues is nanoconfinement. Nanoconfined energy storage materials are the current state-of-the-art approach in the hydrogen storage field, and the current review aims to summarize the most recent progress in this intriguing field. The latest reviews concerning H2 production and storage are discussed, and the shift from bulk to nanomaterials is described in the context of the physical and chemical aspects of nanoconfinement effects in the obtained nanocomposites. The types of hosts used for hydrogen materials are divided into classes of substances, and the means of hydride inclusion in said hosts and the classes of hydrogen storage materials are presented, together with their most recent trends and future prospects.
Introduction
The 21st century has been marked by tremendously important technological breakthroughs, yet the massive expansion of industrialization has led to a deepening scarcity and skyrocketing prices of fossil fuels and energy raw materials, concomitant with continual atmospheric pollution [1]. In the context of ever-increasing energy demands and the serious downsides of using fossil fuels, hydrogen has emerged over the past decades as a true and relevant promise of a carbon-free, green energy source for the world. However, hydrogen has a very low boiling point (20.4 K) at 1 atm, which severely restricts its use in the native form, except in some high-pressure, cryogenic tanks that themselves pose additional energetic costs and safety risks regarding charging, transport and storage [1]. To circumvent the downfalls of using molecular dihydrogen (H 2 ), scientists have turned their attention and research focus to hydrogen-containing compounds, in the form of metal hydrides and related materials, which in turn feature higher thermal stability, safer handling, no fuel loss upon storage and, overall, produce the cleanest energy known today. The fuel of the future should ideally produce no carbon-containing by-products, exhibit time- and property-related endurance over 1500 dehydrogenation-rehydrogenation cycles and, most importantly, feature a gravimetric hydrogen capacity of at least 5.5 wt.% (DOE's target set for 2025) [1-6]. The use of fossil fuels will eventually be phased out.
Characterization Methods: Old, New, and Their Pitfalls
Traditionally, hydrogen storage materials follow a typical characterization protocol involving structural (XRD), elemental (XPS) and morphological (SEM, TEM, N 2 sorption isotherms) analyses, together with the recording of hydrogenation data (PCI curves) [8]. Recently, a fundamental issue regarding elucidation of the local environment of hydrogen in energy materials has revealed fast sample spinning 1 H NMR high-resolution spectroscopy as an appropriate tool to quantitatively characterize hydrogenated TiZrNi quasicrystals [30]. Kweon et al. showed, by employing fast-spinning NMR spectroscopy, that neutral hydrogen is surrounded by metal atoms shifting gradually from Zr to Ti and then Ni with increasing hydrogen content [30]. 1 H magic-angle spinning (MAS) NMR spectra have shown real promise for tuning electronic characteristics in a Ba-Ti oxyhydride, and could become a tool to investigate hydrogen occupation in the vicinity of the nuclei (negative Knight shift, indicative of the interaction of conduction band electrons and the probe nucleus) [31]. A potential downside of using this technique is the high sensitivity to sample temperature, which was shown to increase due to fast rotor spinning, with a direct effect on the main peak width. Thus, additional precautions need to be undertaken to account for the effect of the sample temperature increase when using fast-spinning NMR spectroscopy [31].
Correct understanding of interfacial phenomena occurring during hydrogen storage is now termed as hydrogen spillover effect (HSPE). First discovered in 1964, it describes the migration of hydrogen atoms produced by H2 decomposition on an active site, and it allows for a more insightful view on the dynamic behavior of hydrogen in energy storage materials [7]. While molecular orbital energy computations showed unfavorable energy for H atom spillover on non-reducible supports, recent studies have shown that HSPE is indeed possible on inert supports such as siloxanic materials (SiO2) [7]. This bears a direct effect on hydrogen storage materials such as metal hydrides confined in mesoporous silica supports, where the spillover distance is limited to very short distances of ~10 nm [7].
Interestingly, developing tools to characterize metal hydrides during hydrogenation cycles has led to a summary of soft (X-ray absorption, XAS; X-ray emission spectroscopy, XES; resonant inelastic soft X-ray scattering, RIXS; X-ray photoelectron spectroscopy, XPS) and hard (X-ray diffraction, XRD) X-ray techniques used to this end (Figure 1) [32]. Soft X-ray techniques (100-5000 eV) are particularly appealing for tracking mechanistic behavior and intermediate product formation during hydrogenation studies, with direct influence over hydrogen storage capacity. XAS measurements, for instance, are bulk- or surface-sensitive, and show 3d transition metal (TM) L-edges corresponding to the transition of a 2p electron to an unoccupied 3d orbital, hence enabling monitoring of oxidation state changes during hydrogen release (+n...0) and uptake (0...+n) [32]. Similarly, TM-catalyzed alanates (2 mol%-catalyzed NaAlH 4 ) showed in XAS measurements the Al and Na K-edges and the Ti L-edge consistent with a Ti-like state throughout the hydrogen release/uptake cycles, but with clear differences in the Al state, which may undergo various intermediate states (Al/NaAlH 4 /Na 3 AlH 6 ) [32]. Quasi-elastic neutron scattering (QENS) studies have been undertaken to establish hydrogen dynamics in nanoscale sodium alanate NaAlH 4 and showed that fitting QENS data to a Lorentzian function can yield two dynamic states of hydrogen, concluding that even at 77 °C there is a high percentage (18%) of mobile hydrogen atoms in nano-NaAlH 4 [33]. As an alternative to the conventional pressure-composition-temperature (PCT) method typically used to characterize thermodynamic parameters for hydride-based systems, a less complex investigation method has been described for MgH 2 -based materials: thermogravimetric analysis (TGA) [34]. This method relies on cycling the hydride under a flowing gas of constant hydrogen partial pressure, and the TGA curves are further analyzed using the van't Hoff equation to obtain the absorption/desorption enthalpies, which, in the case of VTiCr-catalyzed Mg/MgH 2 materials, showed good agreement with traditional PCT results [34]. Other recent research established a nano-Pd patched surface of Pd 80 Co 20 to afford one of the most sensitive optical hydrogen sensors (fast response of <3 s, high accuracy of <5%, and very low limit of detection of 2.5 ppm) [35]. Employing interpretable machine learning could also help formulate general design principles for intermetallic hydride-based systems, being used to validate limited data from the HydPARK experimental metal hydride database and stressing the recommendation for experimental groups to report ∆H, ∆S, P eq , T and V cell [27].
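A minimal sketch of the van't Hoff analysis mentioned above is given below, fitting ln(P_eq/P0) against 1/T to extract the enthalpy and entropy of hydrogen absorption; the plateau pressures are invented, roughly Mg/MgH2-like, purely to illustrate the fit.

```python
# Minimal sketch of a van't Hoff analysis: ln(P_eq/P0) = dH/(R*T) - dS/R,
# with dH and dS referring to the hydrogen absorption reaction.
# The plateau pressures below are invented, roughly Mg/MgH2-like.
import numpy as np

R = 8.314                                   # J mol^-1 K^-1
T = np.array([573.0, 598.0, 623.0, 648.0])  # K
P_eq = np.array([1.3, 2.6, 4.9, 8.9])       # bar (invented plateau pressures); P0 = 1 bar

slope, intercept = np.polyfit(1.0 / T, np.log(P_eq / 1.0), 1)
dH = slope * R / 1000.0        # kJ per mol H2 (negative for absorption)
dS = -intercept * R            # J per mol H2 per K
print(f"dH = {dH:.1f} kJ/mol H2, dS = {dS:.1f} J/(mol K)")
```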
Valero-Pedraza et al. characterized the hydrogen release from ammonia borane nanoconfined in mesoporous silica by means of Raman-mass spectroscopy, which confirmed hydrogen release from AB at lower temperatures, fewer BNHx gaseous fragments in nanoconfined samples and a lack of polyiminoborane formation during thermolysis [36]. The study also pointed to silica-hydride interactions, which were identifiable based on modifications in the Raman spectra [36].
However, analysis of the literature data also points to several weaknesses in applying traditional characterization methods that have not yet been tuned for current nanosized materials [15,34,37,38]. For instance, AB (ammonia borane) hydrogenation studies showed many inconsistencies [38]. By assessing TGA data in the literature, Petit and Demirci urge caution when evaluating ammonia borane weight loss (and consequently hydrogen release), as this was found to be highly dependent on the operating conditions (semi-closed/open reactor) and was shown to erroneously indicate a different hydrogen release temperature onset and hydrogen wt.% [38].
Surrey et al. conducted a critical review of a paper discussing electron microscopy observation of elementary steps in MgH 2 release mechanisms [37]. In this work, they debunked the general assumption that TEM microscopy can be used as such, without further adjustment of the testing methodology, in the case of hydrogen storage materials such as MgH 2 . The issue was serious, as it led the initial authors to misinterpret TEM observations by disregarding the key aspect of electron-beam-induced dehydrogenation of MgH 2 [37]. In a cascade chain of errors, the beam-induced heating that produced dehydrogenation also led to a false interpretation of SAD (selected area diffraction) data, which only showed hollow MgO shells deprived of the Mg core, an effect actually ascribed to the nanoscale Kirkendall effect. As a result, it was apparent that the sample actually measured did not even contain MgH 2 any longer [37].
In line with the issues raised above, Broom and Hirscher discussed the necessary steps for reproducible results in hydrogen storage research [15].
Bulk vs. Nanomaterials
After first entering the research outlook of scientists worldwide in 1996, nano-sized hydrides have seen a wide expansion, mainly due to several important kinetic and thermodynamic improvements of nanoconfinement over the bulk counterparts [4,8,14,16,18,21-23,27,28]. Over time, nanoconfinement has emerged as a reliable tool not only for tuning thermodynamic and kinetic behavior at the nanoscale, but also for altering reaction pathways, lowering or even suppressing side-reactions and side-products, while also affording better size control of the particles over several hydrogen release/uptake cycles (Figure 2).
Figure 2. Main features of bulk and nanoconfined materials for hydrogen storage, exemplified for the case of an intensively studied hydride, MgH2 (inset reprinted/adapted with permission from Ref. [65]. 2022, Elsevier).
Types of Hosts
Confining LiBH 4 by a melt impregnation technique in nanoporous silica MCM-41 (1D, d pore < 2 nm) or SBA-15 (2D-ordered pore structure, d pore = 5, 7 and 8 nm) of different pore sizes reveals an interesting interfacial effect governing Li + and BH 4 − ion mobility [87]. Using solid-state NMR ( 1 H, 6 Li, 7 Li and 11 B), Lambregts et al. showed that, as a result of nanoconfinement, two distinct fractions of LiBH 4 coexist in a temperature-dependent equilibrium (Equation (2)): LiBH 4 (lower mobility, pore core) ⇌ LiBH 4 (higher mobility, near pore wall). The high-mobility LiBH 4 is located near the silica pore walls, whereas the LiBH 4 of lower mobility is located towards the pore's core; the theoretical thickness of the layer near the pore wall was estimated based on a core-shell model of LiBH 4 @SBA-15, as t = r p (1 − √f lower mobility ), where r p is the pore radius and f lower mobility is the fraction of low-mobility LiBH 4 (assuming cylindrical pores). The dynamic layer thickness is temperature-dependent, and increases from 0.5 nm (30 °C) to 1.2 nm (110 °C). Here again the calorimetric data were found to overestimate the highly mobile LiBH 4 layer thickness (1.9 nm), pointing out the need for care when deriving the same parameter from different techniques [87]. While the 6,7 Li NMR spectra were too complex for unequivocal deconvolution, the 1 H and 11 B NMR spectra clearly show two components throughout the investigated temperature range (30-130 °C), consistent with the two LiBH 4 fractions of different ion mobility [87].
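A small sketch of the core-shell estimate discussed above follows; the square-root form reflects our reading of the model as assuming cylindrical pores, and the low-mobility fractions used are placeholders rather than the published NMR values.

```python
# Sketch of the core-shell estimate used above: if a fraction f of the confined
# LiBH4 forms the low-mobility core of a cylindrical pore of radius r_p, the
# thickness of the mobile interfacial layer is t = r_p * (1 - sqrt(f)).
# The fractions below are placeholders, not the published NMR values.
import numpy as np

def mobile_layer_thickness(r_pore_nm, f_low_mobility):
    """Interfacial layer thickness for a cylindrical pore (core-shell model)."""
    return r_pore_nm * (1.0 - np.sqrt(f_low_mobility))

r_pore = 4.0                                   # nm, i.e. the 8 nm SBA-15 pores
for f in (0.77, 0.60, 0.49):                   # placeholder low-mobility fractions
    print(f"f = {f:.2f} -> t = {mobile_layer_thickness(r_pore, f):.2f} nm")
```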
Melt impregnation of NaBH4 in MCM-41 at 560 °C led to a drastic surface area decrease from 1110.9 m 2 g −1 (pristine MCM-41) to 3.5 m 2 g −1 (nanocomposite NaBH4@MCM-41), and to a 78% pore filling attested by pore volume decrease (1.02 cm 3 g −1 to 0.02 cm 3 g −1 ) [74]. Interestingly, some amount of sodium perborate NaBO4 resulting from unavoidable oxidation of the borohydride with silanol (Si-OH) groups is the main additional phase detected by XRD, confirming no significant additional phases due to melt impregnation at >500 °C. The dehydrogenation onset peak for NaBH4 was reduced by nanoconfinement from 550 °C (bulk) to 520 °C (nanocomposite) [74]. Due to the insulating nature of boron oxide phase (NaBO4), the ionic conductivity did not improve the same way it does for LiBH4, and remained largely the same (7.4 × 10 −10 S cm −1 ). This 10-fold increase in ionic conductivity that only lasts up to 70 °C for the nanocomposite is attributed to the presence of larger dodecaborate ions B12H12 2− whose distinct presence was signaled in 11 B NMR spectra by an additional sharp peak at −15.58 ppm (NaBH4@MCM-41) vs. −41.95 ppm (for pristine BH4-) (Figure 3) [74].
Figure 3. Possible decomposition pathways for bulk NaBH 4 (a,b) and for melt-impregnated, nanoconfined NaBH 4 (c).
The organic-inorganic hybrid poly(acrylamide)-grafted mesoporous silica nanoparticles (PAM-MSN) have been evaluated as functionalized nanoporous hosts for tuning the hydrogen release/uptake behavior of ammonia borane (AB), which started to desorb hydrogen in the said nanocomposite at a lower temperature with respect to pristine AB; this effect was further enhanced by functionalization of the mesoporous silica shell with carboxylic -COOH groups [88].
2D-ordered mesoporous silica with cylindrical pores (SBA-15) was successfully used by Yang et al. for enhancing the ionic conductivity of a mixed-anion borohydride, Li 2 (BH 4 )(NH 2 ). By following a melt infiltration procedure, the Li-ion conductivity of Li 2 (BH 4 )(NH 2 )@SBA-15 was increased to 5 × 10 −3 S cm −1 at 55 °C [89]. A marked kinetic improvement of hydrogen release (∆T = 70 °C) was recently reported by Rueda et al. by confinement of ammonia borane (AB) in silica aerogel through simultaneous aerogel drying and AB gas antisolvent precipitation using compressed CO 2 , achieving an AB loading of up to 60 wt.% [90].
Gas Selective-Permeable Polymers
Attempts to restrict oxygen and moisture exposure of active hydrogenation sites in hydride materials have been made through the engineered approach of covering the hydride materials with a layer of H 2 -permeable polymer [88,127,156,173-175]. This approach proved to be very successful, provided that the hydride coverage was indeed complete (Table 6).
MXene Type | Hydrogen Storage Material | Nanoconfinement Method | Ref.
Catalytic Effects of Doping the Host and/or Substitution of the Hydride Species
Improvements on hydrogen release/uptake cycles have often been explored in conjunction with utilization of catalysts used to either dope the host, or the hydride material. This strategy is based on formation of active sites for hydrogenation reaction to occur, or is sometimes ascribed to the formation of a reactive intermediate species [19,68,92,102,[111][112][113]117,125,128,151,160,161,163,[195][196][197]. In addition, cation substitution or anion substitution in complex hydrides has been employed to reduce energy barriers and improve overall recyclability of the hydride materials (Table 8).
Other approaches start from the organometallic precursor of the metal, which undergoes reduction (with H 2 or another reductant, such as LiNp) typically after impregnation into the porous host. (Equation (4))
Melt Infiltration
Melt infiltration of complex hydrides has been widely used to introduce the active hydride material into nanoporous hosts. This technique has the advantage of requiring no solvent (so it involves fewer steps), but the hydride material must melt at a sufficiently low temperature, and the infiltration is carried out under H2 pressure in order to avoid the onset of the dehydrogenation reaction.
Solvent Infiltration
Solvent infiltration has become the method of choice as it achieves pore filling of the porous scaffold at temperatures that are near ambient, provided that a suitable solvent for the material has been identified. This is typically an issue, as solubility data on complex hydrides are rather scarce, and usually their solubility in ether-like solvents is limited [16].
Solvent-Assisted Ball-Milling
Nanoconfinement of hydride-based materials in nanoporous hosts has the potential advantage of bypassing the slow kinetics of their bulk counterparts, thus enabling a shorter refueling time in pursuit of the DOE's current targets [5,6]. Very high surface area supports (MOFs, activated carbons) afford good hydrogen sorption capacities, but since the adsorption is mainly governed by physisorption, it is only relevant at 77 K. At this low temperature, a rough estimation (Chahine's rule) is that, for pressures that would occupy all adsorption sites (exceeding 20 bar), the expected storage capacity is ~1 wt.% per 500 m2 g−1 and scales proportionally with the specific surface area [8]. Ball milling (with or without a solvent) can introduce the hydride material into the porosity of the employed scaffold. The process is energy-intensive and can proceed with an important increase in the local sample temperature, and therefore it is carried out in steps (for instance, 20 min milling followed by a 10 min pause allowing controlled cooling).
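As a quick illustration of the Chahine-rule estimate quoted above (roughly 1 wt.% H2 per 500 m2 g−1 of surface area at 77 K once all adsorption sites are occupied), a minimal sketch follows; the surface-area values are generic placeholders, not measurements from the cited works.

```python
def chahine_estimate_wt_percent(ssa_m2_per_g: float) -> float:
    """Rule-of-thumb excess H2 uptake at 77 K: ~1 wt.% per 500 m2/g of
    specific surface area (relevant above roughly 20 bar)."""
    return ssa_m2_per_g / 500.0

# Placeholder surface areas for a typical activated carbon and a high-porosity MOF
for ssa in (1100.0, 3100.0):
    print(f"{ssa:6.0f} m2/g -> ~{chahine_estimate_wt_percent(ssa):.1f} wt.% H2 at 77 K")
```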
Metal Hydrides and Their Recent Nanoconfinement Studies
Pristine metal hydrides have recently been comprehensively reviewed, and the results show promising trends upon nanoconfinement [213].
LiH
Alkali metal hydrides have been used for catalytic reactions, but have attracted attention due to their lightweight characteristics as well as their high gravimetric hydrogen content. However, their high thermal stability makes them less attractive in their pure form; LiH, for instance, melts at 689 °C and decomposes at 720 °C into Li and H2 (Equation (5)). Alkali metal hydrides have unusually high decomposition temperatures due to their salt-like nature (LiH, mp = 689 °C; NaH, mp = 638 °C; KH, mp ~400 °C, with K vaporizing in the H2 current). Given their high decomposition temperatures, alkali metal hydrides require kinetic and thermodynamic destabilization (Table 10). Recently, a series of strategies have been utilized to produce nanosized LiH, although not all attempts dealt with hydrogen storage applications [114,133,198,214-216]; some utilized LiH-containing nanocomposites for their Li-storage capacity, as in a novel Co(OH)2-LiH anode material [217]. Even when dealing with potential hydrogen storage materials like LiH + MgB2, studies have focused on the phase-evolution process and its XPS tracking rather than on the collection of hydrogen storage data [198]. Still, XPS data pointed to the presence of LiBH4, Mg(3−x)/2Lix(BH4)x or Li-borate species, on account of the multiple LiH-containing peaks identified [198]. At near-surface regions, LiBH4 or mixed Li-Mg borohydrides can form at temperatures 100 °C below the threshold for hydrogenation of MgB2; expectedly, LiBH4 production scales with the LiH content of the starting composite (Equation (6)) [198].
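For orientation, the theoretical gravimetric hydrogen content implied by Equation (5)-type decompositions (MH → M + ½H2) follows directly from the molar masses. The short sketch below reproduces the familiar values for LiH, NaH and KH and is included only as a numerical illustration, not as data from the cited studies.

```python
# Theoretical H2 weight fraction released by full decomposition MH -> M + 1/2 H2
MOLAR_MASS = {"Li": 6.94, "Na": 22.99, "K": 39.10, "H": 1.008}

def hydride_wt_percent_h2(metal: str) -> float:
    m_hydride = MOLAR_MASS[metal] + MOLAR_MASS["H"]
    return 100.0 * MOLAR_MASS["H"] / m_hydride

for metal in ("Li", "Na", "K"):
    print(f"{metal}H: {hydride_wt_percent_h2(metal):.1f} wt.% H2")
# LiH: ~12.7 wt.%, NaH: ~4.2 wt.%, KH: ~2.5 wt.%
```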
Sun et al. have shown that harnessing the plasmonic thermal heating effect of Au nanoparticles could lead to light-induced dehydrogenation of Au@LiH nanocomposites, which showed a 3.4 wt.% mass loss ascribed to dehydrogenation [214]. Au NPs dispersed on the surface of LiH, Mg or NaAlH4 all showed marked improvements in hydrogenation studies. The preparation of Au/LiH composites involved LiH suspension in THF under sonication and overnight stirring at 500 rpm, after which a THF solution of HAuCl4 was added and stirring continued for an additional 24 h, leading to the Au/LiH material after centrifugation and overnight drying by the Schlenk line technique. Hydrogen absorption was carried out under 14.8 atm H2, while desorption was conducted under 0.2 atm pressure, utilizing Xe lamp illumination affording a local temperature of 100 °C [214].
Overcoming kinetic and thermodynamic barriers in the complex Li-N-H system (Equation (7)) led White et al. to study the effect of Li3N on the behavior of the LiNH2 + 2LiH composite [215]. On this occasion, a kinetic analysis showed that the rate-limiting step is the formation of H2(g) at the surface of the core-shell structure Li2NH@Li3N [215]. Again, the use of TEM measurements was shown to be inappropriate for LiNH2 materials, due to decomposition upon prolonged electron beam exposure. The equilibria shown in Equation (7) already occur upon the exposure of Li3N to 10 bar H2 (200 °C, 2 h), but not at one bar H2, which only altered the α-to-β ratio of Li3N [215].
Considering the gravimetric hydrogen densities required by DOE standards, LiH, MgH2 and AlH3 are the main binary systems proposed to date [216]. Silicon doping of LiH has shown a drastic reduction in decomposition temperature (∆T = 230 K); the doped material could store up to 5 wt.% H2, with release at 490 °C [216]. A nanostructured electrode of Co(OH)2 and silica was recently employed in Li-conductivity studies and showed the formation of active LiH species, although the material was not investigated for its hydrogen storage properties [217].
A series of Li-based materials was investigated by Xia et al., who grafted LiH on graphene by in situ reduction of nBuLi with H2 (110 °C, 50 atm), producing LiH@G. This nanocomposite LiH@G was further treated with B2H6 or AB/THF, and novel LiBH4@G and LiNH2BH3@G nanocomposites were thus obtained (Equation (8)) [114].
The 2D LiH nanosheets were about 2 nm thick and afforded a 6.8 wt.% H2 storage when loaded at 50 wt.% in the graphene-based nanocomposite, which retained its structural integrity upon further hydride-to-borohydride transformation (Figure 4) [114].
The morphology was tracked by SEM analysis and XRD, while hydrogenation data by TGA confirmed a modest 1.9 wt.% hydrogen storage (Figure 5). This nanoconfinement approach in high surface area carbon (HSAG) of pore size 2-20 nm showed a strong thermodynamic improvement, allowing hydrogen release at 340 °C in LiH@HSAG rather than at the high 680 °C of pristine LiH [133].
Zhang et al. have dispersed TM-oxides (TiO2 in particular) on amorphous carbon to achieve excellent, reversible hydrogen storage capacity, releasing 6.5 wt.% hydrogen (85.5% of that of pristine MgH2) in 10 min at 275 °C (Figure 7) [95]. Notably, the activation energies for desorption (Ea,des) and absorption (Ea,abs) have been considerably reduced compared to bulk magnesium hydride (Figure 7a). In a multi-fold enhancement strategy, MgH2 was first dispersed on carbon (MgH2 + C), which showed only modest improvements (<1 wt.% H2) over bulk MgH2, with no dehydrogenation in the same timespan (Figure 7c); TiO2 was then used as an additive for MgH2 to yield MgH2 + TiO2 NPs composites, which surprisingly released ~6 wt.% H2 in 10 min [95]. Driven by these enhancements, nanocomposites of the type MgH2 + TiO2 SCNPs/AC were synthesized, which further improved hydrogen release/uptake: even at 50 °C, ~1.5 wt.% H2 is released over the course of 20 min, whereas at 125 °C (~4.8 wt.%) and at 200 °C (6.5 wt.%) the kinetics are sped up considerably (Figure 7c-e). Rehydrogenation occurs within 5 min at 200 °C, with full recovery of the hydrogen storage capacity (6.5 wt.%). In addition, no appreciable hydrogen storage loss was recorded up to the 10th cycle (Figure 7f) [95]. Using an FeCo nanocatalyst (mean size of 50 nm), Yang et al. synthesized MgH2 + nano-FeCo composites able to recharge to 6.7 wt.% hydrogen in one minute at 300 °C, and to desorb 6 wt.% (9.5 min, 300 °C) (Figure 8) [201]. In fact, even treatment under H2 backpressure at 150 °C produced 3.5 wt.% absorption in 10 min (Figure 8b). This highlights the importance of the catalyst chosen, but also of its morphology (nanosheets in the case of nano-FeCo). Plotting the Arrhenius equation also yielded the apparent activation energies: Ea,des = 65.3 ± 4.7 kJ mol−1 (a 60 kJ mol−1 reduction from pristine MgH2) and Ea,abs = 53.4 ± 1.0 kJ mol−1 (Figure 8d). Gratifyingly, the FeCo-catalyzed magnesium hydride composite was able to rehydrogenate fully and was tracked over the course of 10 hydrogen release/uptake cycles (Figure 8h) [201]. Using a nanoflake Ni catalyst, Yang et al. synthesized MgH2 + 5 wt.% Ni composites able to store 6.7 wt.% hydrogen (desorption at 300 °C in 3 min) (Figure 10). The absorption was also very fast, achieving 4.6 wt.% at 125 °C in 20 min under 29.6 atm H2 [202].
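The apparent activation energies quoted in this section come from Arrhenius-type analysis of isothermal kinetic data (ln k versus 1/T). As a generic illustration of that fitting step, and not the authors' code, a minimal sketch follows; the rate constants and temperatures below are made up solely to show the procedure.

```python
import numpy as np

R = 8.314  # J mol^-1 K^-1

# Hypothetical isothermal rate constants k (1/s) at several temperatures (K)
T = np.array([498.0, 523.0, 548.0, 573.0])
k = np.array([2.1e-4, 6.8e-4, 1.9e-3, 5.0e-3])

# Arrhenius: ln k = ln A - Ea/(R*T); a linear fit of ln k vs 1/T gives slope = -Ea/R
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea_kJ_mol = -slope * R / 1000.0
print(f"Apparent activation energy ~ {Ea_kJ_mol:.0f} kJ/mol")
```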
The results also translate into much lowered activation energies (Arrhenius plot), with Ea,des = 71 kJ mol−1 [202]. These Ni-catalyzed results have been explained by means of the Mg2Ni intermediate, an intermetallic well known in Mg-Ni systems, which rapidly absorbs H2 to form Mg2NiH4 and thus functions as an effective "hydrogen pump" (Figure 10a) (Equation (10)) [202].
Following the thermodynamic prediction that smaller NPs will show the most pronounced destabilization, Zhang et al. have produced ultrafine MgH2 that was able to release and recharge hydrogen at ambient temperature, with a very high hydrogen storage capacity of 6.7 wt.% (Figure 9) [222]. This capacity was checked over 50 cycles and showed virtually the same high-capacity behavior (Figure 9). The conditions employed for reversible behavior were 360 min at rt (6.7 wt.%), or 60 min at 85 °C (6.7 wt.%), under 30 bar H2. This unexpectedly high storage capacity (65.6 g H2/L) surpasses even DOE's requirement (50 g H2/L), and was possible solely on account of the well-designed size restriction of MgH2 to the nanoscale [222].
Decomposition of n Bu 2 Mg typically used as an organometallic precursor to Mg/MgH 2 NPs can follow two different steps, depending on the reaction temperature (Equations (11) and (12)).
However small it might be, nanosized matter in general is also more reactive towards various gases and substrates, and the Mg/MgH2 coupled system is no exception. Previous examples have overcome this downside by either pressing the nano-powders into pellets or capping them with other reagents. There are, however, many reports where MgH2 has been introduced into the porosity of a carbonaceous host, such as the 3D activated carbon utilized by Shinde et al. to achieve a reversible hydrogen storage of 6.63 wt.% (Figure 11) [137]. The nanocomposite MgH2@3D-C not only stored 6.63 wt.% hydrogen under relatively mild conditions (five minutes, 180 °C), but the desorption was likewise fast (6.55 wt.%, 75 min, 180 °C), and, perhaps more importantly, the nanoconfined MgH2 was air-stable thanks to the protective carbon shell [137]. The transition metal dispersed into the 3D carbon contributes decisively to the observed enhanced kinetics and improved thermodynamic behavior, in the order Ni > Co > Fe. Running in a continuous regime, the nanocomposite was able to cycle for about 435 h (more than 18 days) without an appreciable decrease in the hydrogen storage capacity (Figure 11) [137].
While typically reduction in n Bu 2 Mg infiltrated into a nanoporous host to afford MgH 2 NPs is carried out in heterogeneous conditions (under H 2 pressure), Shinde used a mixed reductant system: TEA ((HOCH 2 CH 2 ) 3 N)/NH 2 NH 2 hydrazine to reduce Mg(II) to Mg(0) [137]. The synthetic procedure is nicely followed in Figure 11, and in this case, both scanning electron microscopy (SEM) and transmission electron microscopy (TEM) could be used for characterization, since the electron beam no longer hits directly the MgH 2 NPs; thus, the risk of in-situ decomposition during data acquisition is minimized (Figure 11). The hydrogen storage capacity exceeds 6 wt.% in case of Ni-NPs deposited in the 3D-AC (MHCH-5), confirming the beneficial and synergistic role of Ni when used in conjunction with MgH 2 . The plausible intermediate Mg 2 Ni forms the coupled system Mg 2 Ni/Mg 2 NiH 4 during hydrogenation, and this can be held responsible for the superior cycling behavior in case of MgH 2 @3D-AC (MHCH)-5(Ni), whereas this type of intermetallic is not common for Co or Fe [137].
The self-assembled MgH2 NPs are well embedded into the carbonaceous host, which plays a critical role in the overall performance of MHCH-5. It is implied, based on the thermal conductivity data (Figure 11h), that the carbon shell is important. The high thermal conductivity (70 W/mK), many times higher than that of MgH 2 NPs themselves, induces a lower temperature gradient in the sample and a high heat transfer coefficient, thus contributing to the exemplary behavior of the sample during hydrogenation cycling [137].
The reaction of LiH and AlCl3 was shown to be greatly sped up by using 0.1 molar equivalents of TiF3, whereby the final product obtained after five hours of milling under Ar pressure was a nanocomposite of composition α-AlH3/LiCl-TiF3 [203]. Duan et al. have shown the critical role of TiF3, which acted as a seed crystal for α-AlH3. The pressure was also a crucial factor, as running the reaction under lower gas pressure only led to Al metal formation, without the envisioned hydridic phase (Equation (13)) [203].
Thermodynamic data showed a Gibbs free energy for the expected α-AlH3 formation of ∆G = −269 kJ mol−1, making the reaction thermodynamically possible at 298 K [203]. Furthermore, tracking the reaction by solid-state 27Al NMR spectra has shown the complex behavior of the reactive mixture (Figure 12) (Equation (14)).
The kinetics are vastly improved, and raising the temperature above 120 °C allows for complete dehydrogenation in roughly 10 min (Figure 12).
After five hours of ball milling under Ar pressure and dehydrogenation at 160 °C for 600 s, the final composite (Figure 13) shows nanosized AlH3 (the mean size of α-AlH3 was 45 nm, without traces of agglomerates).
The phase composition already shows formation of Al, consistent with the dehydrogenation reaction that had occurred. The report also highlighted the important role of the fluoride additive, as TiF 3 reduced E a of H-desorption to 52.1 kJ/mol [203].
Nanoconfinement of alane in a Cr-based MOF (MIL-101) with Al-doping has led to a nanocomposite able to store and recharge 17.4 mg H2/g (equivalent to 1.74 wt.% H2) at 298 K (ambient) and 100 bar H2 [40]. The introduction of alane inside the MIL-101 pores was made via solvent infiltration from a THF solution of AlH3. In fact, the pristine MOF MIL-101 (3148 m2 g−1, 2.19 cm3 g−1 and 2.5-3 nm pores) was shown to store 0.55 wt.% H2 under the same conditions. The hydrogen release profiles from the investigated samples show the improvement in hydrogen release performance brought by nanoconfinement of AlH3 in the MOF pores (Figure 14) [40]. The gravimetric storage capacity (17.4 mg H2 g−1 composite) was rather low considering DOE's goals, due to the inability to increase Al-doping of the framework without crystallinity loss, and the role of the AC additive became apparent in order to enhance hydrogen interaction with the confined Al NPs [40].
In an attempt to improve upon the previous results, Duan switched the nano-host to MWCNTs (multi-walled carbon nanotubes) with high pore textural characteristics (550 m2 g−1, 6-8 nm diameter) and obtained, by ball-milling xMgH2 + AlH3 (x = 1-4), nanocomposites MgH2/AlH3@CNT of crystal size 40-60 nm with enhanced hydrogen release.
The Al metal produced in the first dehydrogenation stage of the composite (Figure 16) reacts with the MgH2 not yet dehydrogenated to yield an intermetallic Al12Mg17 phase, which was confirmed by XRD data (Equation (15)).
The reactions involved in the authors' mechanistic proposal also allowed computation of the apparent activation energies (by the Kissinger plot), which were 97.3 kJ mol−1 for MgH2 and 61.4 kJ mol−1 for AlH3 (Figure 16c).
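For context, the Kissinger method extracts an apparent activation energy from the shift of the DSC/TGA peak temperature Tp with heating rate β, via ln(β/Tp²) = −Ea/(R·Tp) + const. The sketch below illustrates that fit with made-up heating rates and peak temperatures; it is not the authors' data or code.

```python
import numpy as np

R = 8.314  # J mol^-1 K^-1

# Hypothetical heating rates (K/min) and corresponding desorption peak temperatures (K)
beta = np.array([2.0, 5.0, 10.0, 20.0])
Tp = np.array([545.0, 558.0, 569.0, 581.0])

# Kissinger: ln(beta / Tp^2) = -Ea/(R*Tp) + const; slope of the fit gives -Ea/R
y = np.log(beta / Tp**2)
slope, _ = np.polyfit(1.0 / Tp, y, 1)
Ea_kJ_mol = -slope * R / 1000.0
print(f"Kissinger apparent activation energy ~ {Ea_kJ_mol:.0f} kJ/mol")
```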
Wang et al. showed the potential of nanosizing by introducing AlH3 into HSAG (injection of an Et2O solution of freshly made AlH3, obtained from the metathesis of LiAlH4 and AlCl3) [44]. Considering the 14 wt.% AlH3 loading of the AlH3@HSAG composite (by ICP-OES), the expected hydrogen capacity was 1.4 wt.%. However, only 15% of the Al behaved reversibly, and thus only an overall 0.25 wt.% storage could be attributed to the nanoconfined AlH3 [44]. Interestingly, during sample preparation the composite was heated at 65 °C under Ar to yield the α-AlH3 polymorph and minimize spontaneous decomposition of AlH3 [44]. Either way, the reduction of the dehydrogenation onset to ~60 °C (release between 60 and 270 °C, with a peak at 165 °C) shows the effect of nanosizing, effectively lowering the hydrogen release temperature by 50 °C [44].
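The "expected hydrogen capacity" quoted above follows from the AlH3 loading and the theoretical hydrogen content of alane (3 × 1.008 / 30.0 ≈ 10 wt.%). The arithmetic sketch below only makes that bookkeeping explicit; the 14 wt.% loading and 15% reversible fraction are taken from the text, everything else is standard molar-mass arithmetic.

```python
M_Al, M_H = 26.98, 1.008
wt_frac_H_in_AlH3 = 3 * M_H / (M_Al + 3 * M_H)     # ~0.101, i.e. ~10.1 wt.% H in alane

loading = 0.14                                      # 14 wt.% AlH3 in AlH3@HSAG (ICP-OES)
expected = loading * wt_frac_H_in_AlH3 * 100        # expected composite capacity, wt.%
reversible = 0.15 * expected                        # only ~15% of the Al behaved reversibly

print(f"expected: {expected:.1f} wt.% | reversible: {reversible:.2f} wt.%")
# expected ~1.4 wt.%; reversible ~0.21 wt.%, of the same order as the ~0.25 wt.% reported
```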
The EELS spectra of AlH 3 @CTF-biph and AlH 3 @CTF-bipy confirm that both contained aluminum, thus AlH 3 introduction in the CTF-based frame was achieved. However, inherent oxidation had also occurred so the Al 2 O 3 presence was also recorded by EELS data [51]. Although alane introduction into CTF-biph and CTF-bipy porosity was confirmed by N 2 sorption isotherms (Figure 18), there was no reversibility in the case where CTF-biph was used as host [51].
Figure 19. The XRD pattern of the dehydrogenated 0.9AlH3-0.1Li3N composite (a), the hydrogen release profile under isothermal conditions (100 °C) of (1 − x)AlH3-xLi3N (x = 0, 0.05, 0.1, 0.15) (b), and the calculated apparent activation energy (c). Reprinted/adapted with permission from Ref. [206]. 2022, Wiley-VCH GmbH.
Figure 19b shows the isothermal dehydrogenation of (1 − x)AlH3-xLi3N (x = 0.05, 0.1, 0.15) at 100 °C, confirming a decrease in H2 wt.% with increasing Li3N content. The XRD pattern confirms that the sole dehydrogenation product of the composite is metallic Al (Figure 19a). The onset of dehydrogenation was conveniently reduced to 66.8 °C (0.95AlH3-0.05Li3N), thus approaching an operating regime suitable for FCEs. The beneficial role of lithium amide was confirmed by the apparent Ea, which is strongly reduced (Figure 19c) [206].
TM-Hydrides
While main group metal hydrides are attractive due to metal abundance and the low atomic weight of the metal (hence a higher wt.% H2 storage capacity), some TM (transition metal) systems have also been recently investigated by employing nanosizing effects (Table 13) [79,97,169,200,212,216,234]. The simplest and most classical model system for studying the TM-H interaction is the Pd-H system [200,234]. While its gravimetric storage capacity is too low for vehicular applications, the nature of the Pd...H interaction has shed new light on thermodynamic predictions in Pd NPs forming PdHx, estimating cluster expansion, Pd/Pd...H phase boundaries, phase transitions (>400 K) and interfacial free energies by DFT methods [200,234]. Pd is often thought of as being able to absorb H2 like a sponge, reversibly taking up more than 1000 times its own volume. In short, the interaction of H2 with palladium comprises H-H dissociation into atomic [H] and diffusion of [H] into the Pd bulk, where it occupies the free interstitial sites of the fcc Pd lattice, forming either the α-phase PdHx (x < 0.03, rt) or the hydridic β-phase PdHx (x > 0.03) [200]. The catalytic role of Pd hydride has recently been harnessed in a complex Pd hydride, CaPdH2, for the semi-hydrogenation of CnH2n−2 (alkynes) to CnH2n (alkenes) [79].
The most stable reversible capacity during cycling was achieved for the 0.95 MgH2-0.05 TiH2 nanocomposite, which shows fast kinetics and does not fall below 4.8 wt.% even after 20 cycles (Figure 21). Additionally, no Mg-ETM-H ternary phases were observed [169].
Conclusions and Outlook
The urgency of a green, renewable and sustainable fuel to replace fossil fuels is more pressing today than ever. Metal hydrides are materials that possess intrinsically high gravimetric and volumetric hydrogen storage capacities, but their sluggish kinetics and poor thermodynamics still constitute an obstacle to the wide acceptance of their use in the fuel of the future. However, various strategies have been explored recently, and perhaps the greatest returns derive from basic shifts in thinking: oriented growth of MgH2 on catalytically active substrates, size reduction of metal hydrides to a few nm where thermodynamic destabilization works best, or the use of a new class of 2D-structured catalysts (MXenes); all have shown unexpectedly good results. There is clearly room for improvement in the fascinating field of metal hydrides, and research efforts ought to concentrate on improving nanoparticle system design and on careful selection of the incorporating matrix and of the hydrogenation/dehydrogenation catalysts, from both an economic and a feasibility point of view. Given the raw material scarcity, but also the reactivity and particular characteristics of some complex hydrides (such as the volatility of Al(BH4)3 or the extreme toxicity of Be(BH4)2), the optimal hydrogen storage material will likely be based on magnesium nanoconfined in a carbonaceous host and/or catalyzed by Ti-based catalysts (such as TiO2, TiO, or MXenes). The realistic application of metal hydride systems is conditioned by a number of factors: (i) the discovery of a material that displays reliably reversible behavior in hydrogenation studies; (ii) consistent performance across hundreds of H2 absorption/desorption cycles; (iii) lower activation energies and consequently faster absorption/desorption kinetics and improved thermodynamics; (iv) consistently fast kinetics for fast refueling; (v) thermodynamic stability and material integrity to afford safe storage in a fuel tank; (vi) reasonable resistance to air and/or moisture; (vii) a moderately easy synthesis route, preferably comprising few steps; (viii) access to sufficient raw materials and a limited amount of CRM (critical raw materials) used; (ix) reliable scale-up of the lab demonstrator to a multi-kW tank capable of driving a vehicle for 500 km or more; (x) strong safety precautions and implementation of technological parameters to afford a tank capable of storing, releasing and withstanding high H2 pressures (of more than 100 atm). Within this framework, the EU directives to limit CRM usage are expected to drive the research towards more abundant metal sources such as Mg or Al (Mg was also included in the list of CRMs from 2020, although currently it can be obtained in sufficient quantities). Noble metal catalysis (like Pd) will probably not become a commercial way of speeding up hydrogen delivery or the recharging of hydride-based fuels, due to the associated cost. Other catalysts like MXenes can be produced on a larger scale, but the Ti-based material could also soon face shortages. Nanoconfinement still offers general improvements across the board for hydride-based materials, but the choice of host is limited; among the classes of hosts presented in the current review, the most promising are carbonaceous frameworks and MOFs. Carbon-based materials can be tailored morphologically for hydride inclusion, and their cost is modest; however, this must be considered with care, since a zero-carbon policy might soon imply that carbon should not be used as a host any longer.
Even though it releases no CO2 into the atmosphere, there will be an associated cost for the treatment of the end-of-life C-based fuel, and so the carbon footprint will not be negligible.
Considering these material, performance, safety and cost restrictions, the final choice for a viable, sustainable hydride-based material is a delicate one and only validation through a scaling-up proven in an operational environment could confirm whether it can be used on a large-scale tank for vehicular applications and afterwards adopted by industry. The ultimate goal is, without a doubt, to approach as much as possible the reversible, theoretical hydrogen capacity, and this is a joint venture of all the above considerations. | 2022-06-29T15:23:52.266Z | 2022-06-26T00:00:00.000 | {
"year": 2022,
"sha1": "aef8f0518517c0d318e94212d5210fd49ed2f907",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/23/13/7111/pdf?version=1656491358",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bf68734cfa3b7964b4eadfc3e289f72cad37558b",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
12055377 | pes2o/s2orc | v3-fos-license | Sense Amplifier Comparator with Offset Correction for Decision Feedback Equalization based Receivers
A decision feedback circuit with integrated offset compensation is presented in this paper. The circuit is built around the sense amplifier comparator. The feedback loop is closed around the first stage of the comparator, resulting in minimum loop latency. The feedback loop is implemented using a switched capacitor network that picks one of several pre-computed voltages to be fed back. The comparator's offset that is to be compensated for is added in the same path; hence, an extra offset correction input is not required. The circuit is used as a receiver for a 10 mm low-swing interconnect implemented in UMC 130 nm CMOS technology. The circuit is tested at a frequency of 1 GHz and consumes 145 µA from a 1.2 V supply at this frequency.
In this technique, a hard decision is made on the input in every clock cycle. This decision is scaled and subtracted from the input before the next sampling event. The scaling factor is chosen based on the amount of previous bit ISI in the input data. If the initial decision is correct, this effectively erases the memory of the previous bit. The delay involved in making the hard decision, scaling it and subtracting from the next input limits the maximum frequency of operation of this circuit. Further, in scaled technologies the decision devices have inherent offset that needs to be compensated for. In this paper, we propose a DFE circuit built around the Sense amplifier comparator that has low loop latency and features integrated offset compensation. DFE is a simple technique and has found wide applications from low power to high performance communication systems. DFE has been proposed as an effective way of extending the bandwidth of repeaterless low swing interconnects (1,2). DFE has also been used to correct for errors in digital systems (5), for implementing low power logic circuits based on pass transistor logic (6) and for enhancing bandwidth of flip-flops (7). A sense amplifier comparator (8) is used in most of these circuits as it can achieve high speed at low power consumption. When used for sampling low swing data, these comparators need offset compensation. Previous works have implemented this using an auxiliary input to the core comparator (2,3). In high speed designs where the loop delay becomes the bottleneck, look-ahead-dfe is used (2,4). In look-ahead-dfe, multiple comparators make decisions on the input data, each assuming a possible value of the previous decision. This increases the number of comparators needed, each requiring its own offset compensation circuit as well.
In this paper, we propose a DFE circuit that has low latency and integrated offset compensation. The feedback loop is built with a switched capacitor circuit, driven by the first stage of the sense amplifier, which picks from pre-computed inputs for the feedback. The offset to be corrected is added to the same feedback input, removing the need for an extra offset correction input to the comparator. The circuit is designed and fabricated in UMC 130 nm CMOS technology for a data rate of 1GHz. A double differential architecture, with a differential main input and differential feedback input, is used. For testing the equalizer, the comparator is used as a receiver of a 10 mm on-chip interconnect with a capacitively coupled low swing transmitter reported by Mensink et al. in (1).
The paper is organized as follows. The concept of switched capacitor DFE with offset compensation is discussed in Section 2. The circuit implementation details are then discussed in Section 3, which is followed by results in Section 4. Section 5 then concludes the paper.
DFE with switched capacitor feedback
In the time domain, the output y[n] of the DFE circuit can be expressed in terms of the comparator input x[n] as y[n] = sign(x[n] − αy[n − 1]). Here, y[n − 1] is the hard decision made by the comparator in the previous cycle and α is a constant less than 1, chosen depending on the amount of ISI present in the input data. The difference equation is a high-pass function, which compensates for the ISI produced by the low-pass nature of the interconnect. Since y[n] is a hard decision, the term αy[n − 1] can take only one of two values, i.e. +α or −α.
The analysis so far assumes an ideal comparator. Practically, comparators also suffer from offset, which needs to be corrected. To compensate for the inherent offset of the comparator, the offset correction Voffset can be added within the same feedback, i.e. the quantity fed back becomes αy[n − 1] − Voffset.
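To make the difference equation concrete, a short behavioral simulation of a one-tap DFE with offset compensation is sketched below. This is a generic model, not the fabricated circuit: the channel ISI coefficient, tap weight, comparator offset and noise level are illustrative placeholders, and the sign of the offset-correction term simply follows the way the comparator offset is modeled here, which may differ from the paper's sign convention.

```python
import random

def dfe_one_tap(tx_bits, isi=0.3, alpha=0.3, comp_offset=0.05,
                v_offset=0.05, noise_sigma=0.02, seed=1):
    """Behavioral one-tap DFE: each received sample carries ISI from the
    previous transmitted bit, the comparator adds its inherent offset, and
    the feedback subtracts the scaled previous decision together with an
    offset-correction term sized to cancel the modeled comparator offset."""
    rng = random.Random(seed)
    prev_tx, prev_dec, decisions = 1, 1, []
    for b in tx_bits:
        x = b + isi * prev_tx + rng.gauss(0.0, noise_sigma)    # channel output with previous-bit ISI
        v = (x + comp_offset) - (alpha * prev_dec + v_offset)  # effective comparator input
        y = 1 if v > 0 else -1                                 # hard decision y[n]
        decisions.append(y)
        prev_tx, prev_dec = b, y
    return decisions

gen = random.Random(7)
tx = [gen.choice([-1, 1]) for _ in range(2000)]
rx = dfe_one_tap(tx)
print("bit errors:", sum(a != b for a, b in zip(tx, rx)))
```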
We implement the DFE circuit using a switched capacitor circuit that uses the comparator output to select, for the feedback, from pre-computed voltages corresponding to −α − Voffset and +α − Voffset. Since most applications use a differential input architecture, a comparator with a double differential input, i.e. with a differential main input and a differential feedback input, is used. Such an implementation needs two pre-computed differential bias inputs with different common modes for the feedback network to pick from. Hence, a total of 4 distinct bias voltages is needed. This is explained in the following.
When y[n − 1] = +1, the differential feedback voltages V+fb and V−fb take one pre-computed pair of values; similarly, when y[n − 1] = −1, a second pre-computed pair is selected.
Figure 2: Conceptual block diagram of the DFE circuit with offset correction. The circuit has a differential main input and a differential feedback input.
Here, V cm is the common mode of the feedback input. This is illustrated graphically in Fig. 2, along with the block diagram of the comparator with a double differential input.
To summarize, the differential amplitude of the feedback voltages corresponds to the feedback factor α, while the common modes of the two differential pairs are skewed by the offset to be corrected, as illustrated in Fig. 2.
We use the sense amplifier based comparator in the DFE circuit. The circuit diagram of the first stage of the comparator is shown in Fig. 3. An additional input transistor pair is used for the feedback input (3).
The second stage of the comparator is an SR slave latch (8). Prior implementations of DFE using this comparator have used the slave latch output for the decision feedback (1). We instead use the outputs of the first stage, the S and R nodes, for the decision feedback. The circuit implementation is discussed in the next section.
Circuit implementation
In this section we shall discuss the circuit implementation. The first subsection will describe the implementation of the comparator and the second subsection will describe the bias voltage generation. The circuit was designed in UMC 130 nm CMOS technology.
Comparator with DFE
As discussed in the previous section, we use the sense amplifier comparator. The feedback network is a switched capacitor circuit driven by the first stage of the comparator, which is shown in Fig. 5.
Figure 5: Feedback network using the S and R signals as the select lines for an analog multiplexer. Low Vt transistors are used for the select switches.
In every clock cycle the comparator is reset, i.e. both S and R are precharged to VDD. This puts the multiplexer in a high-impedance state. During input sample evaluation, one of S or R falls lower than the other, and the output of the analog multiplexer generates the scaled version of the resolved bit. This output is held dynamically on the parasitic capacitance of the node as the comparator precharges for the next cycle. Hence, the evaluation in the next cycle subtracts the scaled previous bit value. Since the select transistors spend only a short time in the ON state before the precharge phase of the next clock cycle begins, the time available for the output to change states is limited. Low Vt transistors are used as switches in order to improve the selector performance.
One of the difficulties of using S and R for driving the feedback comes from the very large common mode swings on these signals due to the pre-charge cycle. Hence, the feedback input needs to have a good common mode rejection ratio. The first stage of the sense amplifier comparator is modified to bias the feedback transistors with a tail current source. The modified first stage is shown in Fig. 6. The dimensions of the transistors are also shown in Fig. 6, where all dimensions are in µm and unless specified otherwise, the length of the transistors is minimum which is 120 nm. The feedback voltages are chosen taking into account the gain of the feedback input of the comparator, relative to the main input pair's gain. In this design, the feedback network is designed to have half the gain of the main input pair.
V CM generation
The feedback input pair needs a bias current source. This input is biased with the common mode of the main data input. In this way the relative strengths of the main and the feedback inputs track each other. This bias is derived from the receiver termination, which is shown in Fig. 7. This is the same termination circuit reported in (9).
Generation of feedback voltages
A 5 bit resistive-string digital-to-analog converter (DAC) is used to generate the four bias voltages. The resistor string, driven by a current source, generates 32 voltage levels.
Figure 6: DFE circuit implemented in UMC 130 nm technology, with transistor dimensions. All dimensions are in µm. Unless explicitly mentioned, the length of all transistors is the minimum, which is 120 nm.
Switch matrices, built from transmission-gate switches, are used in the DAC to generate the required four bias outputs. The outputs of the switch matrices are buffered with a single-stage opamp. Four digital words, each 5 bits wide, are used to select the appropriate bias voltages for the outputs. Two digital inputs, one for the offset and another for the feedback tap weight, are used to generate the required 4 digital words that drive the switch matrices. Of these, the offset control input is a 5 bit control word and the feedback factor is a 4 bit control word. Here, REGof is a 5 bit word that can take values from "01000" to "10111", used to select the offset, and β = α/2 is a 4 bit word that can take values from "0000" to "1000", used to choose the tap weight. This arrangement allows equal dynamic range for the offset and the feedback tap weights. Depending on expected offsets and desired tap weights, unequal splits can also be considered.
Figure 7: Line termination circuit. The common mode is used to bias the feedback input tail transistor.
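To illustrate the bookkeeping only, the sketch below maps a 5-bit offset code and a 4-bit tap-weight code onto four codes of a 32-level resistor-string DAC. The mapping used here (offset code shifting the common mode of each differential pair, β setting half the differential swing) is a plausible reading of the description above, not the exact formula from the paper, and the full-scale voltage is a placeholder.

```python
def dac_level(code: int, v_full_scale: float = 1.2) -> float:
    """Ideal 5-bit resistor-string DAC: 32 evenly spaced levels up to full scale."""
    assert 0 <= code <= 31
    return v_full_scale * code / 31.0

def bias_codes(reg_of: int, beta: int, mid: int = 16):
    """Hypothetical mapping: the offset word skews the common mode of each
    differential pair, while beta sets half the differential swing."""
    off = reg_of - mid
    return {
        +1: (mid + off + beta, mid + off - beta),   # (V+fb, V-fb) codes when y[n-1] = +1
        -1: (mid - off + beta, mid - off - beta),   # (V+fb, V-fb) codes when y[n-1] = -1
    }

codes = bias_codes(reg_of=18, beta=3)
for decision, (p, n) in codes.items():
    print(decision, round(dac_level(p), 3), round(dac_level(n), 3))
```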
The DFE circuit was implemented using a resistive DAC to generate the bias voltages. This implies that a feedback factor of up to ∼ 0.2 is possible with this implementation.
Results
The circuit was designed and fabricated in UMC 130 nm CMOS technology. The total circuit area including the bias generation circuits is 91µm×52µm. Fig. 9 shows a photograph of a bare die of the fabricated circuit.
The circuit was tested at a frequency of 1 GHz with a supply of 1.2 V. The circuit consumes 145 µA of current at this frequency. First, the DFE feedback factor was set to zero and the offset was swept to find the code which showed the widest bathtub curve. After the offset code was found, the measurement was repeated with higher values of the feedback factor. The width of the bathtub curve increases by 15% for the highest tap weight. Fig. 11 shows the eye diagram of the recovered data when the data is sampled at the minimum-BER sampling instant. From layout-extracted simulations, the loop delay is found to be 350 ps.
Figure 11: Measured eye diagram of the recovered data.
Conclusions
In this paper, we report a low latency DFE circuit with integrated offset compensation, built around a sense amplifier comparator with a switched capacitor feedback network. The switched capacitor circuit uses signals from the first stage of the sense amplifier comparator for selecting from precomputed bias voltages, thus resulting in low latency. The bias voltages are programmed for the sum of the DFE feedback weight and the offset to be corrected. This allows DFE and offset correction with the same feedback input, avoiding an extra offset correction input in the comparator. A 5 bit DAC, along with a little logic circuitry, is used to generate the required four bias voltages. The circuit is designed, fabricated and tested in UMC 130 nm CMOS technology. | 2017-02-03T16:06:21.000Z | 2017-02-03T00:00:00.000 | {
"year": 2017,
"sha1": "8363bea7e355ea2576cb9a27232a7cea35bda9d3",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1702.01067",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "32f5815adf97bddf22b0050d33189799814a5afe",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering",
"Computer Science"
]
} |
202076668 | pes2o/s2orc | v3-fos-license | The Apoptotic Pathway Induced by Vincristine in Mouse Proliferating and Resting Normal Lymphocytes and Lymphoma Cell Line
Background: Most anti-cancer drugs target mitosis and induce apoptosis in cancerous cells. In the immune system, proliferation and apoptosis of lymphocytes are indeed essential modulating elements. Objectives: In this study, we have investigated the effect of vincristine on normal resting and normal proliferating lymphocytes in comparison with cancerous cells. Methods: Resting and proliferating splenocytes from mice and BCL1 (a mouse lymphoma cell line) were cultured with different concentrations of vincristine for 48 hours, and cell lysates were prepared. The activity of caspases 3, 8, and 9 in the cell lysates was measured using the specific chromogenic substrates DEVD-pNA for caspase 3, IETD-pNA for caspase 8, and LEHD-pNA for caspase 9, with the activity calculated as µmol/min/mg protein. Results: In the BCL1 cell line, the activity of caspases 8, 9 and 3 increased in the presence of vincristine (5 µg/mL). In resting splenocytes, however, only a mild increase in caspase 9 activity was observed, without any change in the activity of caspases 8 or 3. In the same situation, the activity of caspases 3 and 9 (but not caspase 8) was elevated in proliferating cells exposed to vincristine. Nearly similar results were obtained with higher concentrations of vincristine (up to 20 µg/mL). Conclusions: The results suggest that vincristine may induce the internal pathway of apoptosis in normal and cancerous cells, while the extrinsic pathway was induced only in cancerous cells. On the other hand, the effects are highly dependent on the activation status of normal cells, which affirms that responding immune cells should be more seriously considered when the side effects of anticancer drugs are estimated.
Background
Vincristine, a key drug for the treatment of lymphoid malignancies, belongs to the Vinca alkaloids, the second-most-used class of cancer drugs. It has been widely used as a potent chemotherapeutic agent in the treatment of various cancers such as lymphoma; however, it has common side effects including nausea, vomiting, weight loss, diarrhea, bloating, mouth sores, dizziness, headache, hair loss, and also neutropenia and peripheral neuropathy (1-3).
Many anti-cancer chemotherapy drugs target mitosis and hence induce apoptosis in cancerous cells; vincristine, for example, induces mitotic arrest as a microtubule inhibitor (4,5). Therefore, cells undergo more apoptosis when they are dividing more actively. However, many studies on the effects of vincristine consider only normal resting lymphocytes, e.g. peripheral blood mononuclear cells, so the effect on normal activated lymphocytes remains unseen, even though these normal proliferating cells may be more susceptible to the cytotoxic effects of the drug. It is noteworthy that proliferation and apoptosis of lymphocytes are essential parts of immune modulation, and apoptosis induction by chemotherapy drugs may include the normal proliferating lymphocytes responding to malignancies or other invaders.
Caspases (cysteine-dependent aspartate-specific proteases) are the main players in apoptosis. Two distinct yet interconnected signaling pathways control apoptosis by activating caspases. The intrinsic apoptotic pathway engages caspase 9 via members of the BCL-2 protein family and the mitochondria in reaction to severe cellular damage or stress, and is mediated by a multimeric adaptor complex known as the Apaf-1 apoptosome (6). The extrinsic pathway, however, usually activates caspase 8 via cell-surface death receptors (7). The activation of caspase 8 may lead to caspase 9 activation as well (8,9). Caspase 8 has anti-tumor roles, and increased levels of caspase 3 in tumor cells induce apoptosis (10,11).
The detailed mechanism involved in the induction of apoptosis by vincristine is not yet well defined, but it is caspase-dependent (12). Its apoptosis induction or caspase activation in normal proliferating lymphocytes is also unclear. We have previously shown that 10 µg/mL (and more) of vincristine caused cell death in both resting and proliferating lymphocytes in vitro and that the cells underwent apoptosis (double staining with acridine orange and ethidium bromide) (13). However, the apoptotic pathway induced by vincristine in proliferating and resting normal lymphocytes and in lymphoma cell lines has not been clearly compared.
Objectives
The aim of this study is to evaluate the effect of vincristine on the activity of the main apoptotic caspases (3, 8, and 9) in resting and proliferating lymphocytes and in a cancerous lymphoma cell line (B-cell lymphoma BCL1).
Splenocytes
The spleens of 8- to 10-week-old BALB/c mice were removed under sterile conditions and the splenocytes separated as a cell suspension and centrifuged, then resuspended in 2 mL Tris-buffered (0.2%) ammonium chloride (0.83%) and incubated for 2 minutes at room temperature to remove red blood cells. Immediately after that, 2 mL of fetal calf serum (FCS) (Gibco) was added, the cells were centrifuged at 300 g for 10 minutes, and washed twice with RPMI medium (Gibco). Cell viability, as determined by the Trypan blue exclusion method, exceeded 95%. The cells were resuspended in RPMI medium with 10% FCS and cultured in 96-well microplates at 2 × 10^5 cells per well.
Vincristine, at various concentrations (3.7, 5, 10 and 20 µg/mL, depending on the experimental design), was added with or without 25 µg/mL concanavalin A (Con A) as a lymphocyte stimulator (5 wells were assigned to each condition) and incubated for 48 hours at 37°C, 5% CO2.
Cell Line
BCL1 cells were purchased from the Pasteur Institute Cell Bank (Tehran, Iran, code C551). The cells were cultured and passaged at 37°C, 5% CO 2 in 10% FCS supplemented RPMI1640 Medium. 2 × 10 4 cells were added to each well of the 96-well microplate and cultured overnight, then, the supernatants were removed and the desired concentrations of vincristine were added (5 wells for each condition) and incubated for 48 hours at 37°C, 5% CO 2 .
Cell Lysates
Briefly, after treatment with the desired doses of vincristine for a specific time period (48 hours), the cells were washed and then lysed in a lysis buffer composed of 20 mM PIPES (piperazine-N,N'-bis(2-ethanesulfonic acid)), 10 mM KCl (potassium chloride), 2 mM MgCl2 (magnesium chloride) and 4 mM DTT (dithiothreitol), pH 7.4; the protease inhibitors PMSF (phenylmethylsulfonyl fluoride, 20 µL) and leupeptin (1 µL) were added per 1 mL of lysis buffer. To obtain the total protein of each sample, 100 µL of this cocktail was added to each 10^6 splenocytes or 10^5 BCL1 cells and incubated on ice for 30 minutes. Then, the cells were passed through an insulin syringe 10 times and the lysates were clarified by centrifugation at 3500 g (4°C) for 15 minutes. The supernatants were collected and frozen at -70°C, and the total protein of each sample was determined by the Bradford method (14).
The activity, expressed as µmol/min/mg protein, was calculated whenever necessary. The experiments were conducted on three independent samples of cultured cells.
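For readers wanting to reproduce the unit conversion, the sketch below turns a time course of pNA absorbance into an activity in µmol/min/mg protein using a standard-curve slope. It is only a generic illustration: the absorbance readings, standard-curve slope and protein amount are placeholders, not data from this study.

```python
import numpy as np

# Placeholder absorbance readings of released pNA at 405 nm, taken every 10 min
time_min = np.array([0, 10, 20, 30, 40])
a405 = np.array([0.02, 0.09, 0.17, 0.24, 0.31])

std_slope_abs_per_umol = 8.0   # standard curve: absorbance units per umol pNA (placeholder)
protein_mg = 0.05              # protein in the reaction, from the Bradford assay (placeholder)

# Linear rate of absorbance increase -> umol pNA released per minute -> per mg protein
abs_per_min, _ = np.polyfit(time_min, a405, 1)
activity = abs_per_min / std_slope_abs_per_umol / protein_mg
print(f"caspase activity ~ {activity:.3f} umol/min/mg protein")
```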
Statistical Analysis
Statistical differences were assessed by ANOVA (analysis of variance) and Tukey's post hoc test, and P < 0.05 was considered statistically significant.
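As a generic illustration of this analysis step (not the authors' code), a one-way ANOVA followed by Tukey's HSD can be run as below; the group labels and activity values are placeholders rather than measurements from this study.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder caspase-activity values (umol/min/mg protein), three independent cultures per group
groups = {
    "control": [0.9, 1.1, 1.0],
    "vcr_5ug": [3.3, 3.6, 3.4],
    "vcr_20ug": [6.5, 6.9, 6.7],
}

f_stat, p_val = f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4g}")

values = np.concatenate([np.asarray(v, dtype=float) for v in groups.values()])
labels = np.concatenate([[name] * len(v) for name, v in groups.items()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))  # all pairwise comparisons
```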
BCL1 (Cancerous Cell): Caspase 8, 9 and 3 Activation
As shown in Figure 1A, in vincristine-treated cells (5 µg/mL, 48-hour incubation), the activity of caspase 3 was significantly augmented, as assayed in cell lysates. Vincristine also caused an increase in the activity of caspases 8 and 9, detectable 20 minutes after adding substrate to the cell lysate (Figure 1B and C). The activity of the caspases was calculated using a standard curve with known concentrations of substrates and presented as µmol/min/mg protein (table of Figure 1). In the normal culture of BCL1 cells, no caspase 3 activity was observed; however, in the presence of vincristine, the activity of caspase 3 was 4.87 ± 0.88 µmol/min/mg protein, more than that of either caspase 8 or caspase 9 (3.29 ± 0.06 and 3.53 ± 0.09 µmol/min/mg protein, respectively). At a lower concentration of vincristine (3.7 µg/mL), the activity of both caspases 8 and 9 was lower but detectable (2.18 ± 0.87 and 1.58 ± 0.07 µmol/min/mg protein, respectively), and no caspase 3 activation was observed.
Normal Resting Splenocytes: Low-Level Activation of Caspase 9
As shown in Figure 2, vincristine at 5 µg/mL induced a low-level activation of caspase 9 in normal resting lymphocytes (from mouse spleen), but no activity of caspase 8 or caspase 3 was observed.
Higher concentrations of vincristine were also examined and the caspase activity was calculated as µmol/min/mg protein presented in table of Figure 2.
As shown, in the absence of vincristine (normal culture), resting lymphocytes have no caspase 3 activity; however, in the presence of vincristine, caspase 3 activity was observed at 10 and 20 µg/mL (5.95 ± 0.50 and 6.73 ± 0.28 µmol/min/mg protein, respectively), but not at 5 µg/mL of vincristine. There was no increase in caspase 8 activity of normal resting cells at any concentration of vincristine used, but caspase 9 activity was elevated dose-dependently, i.e. 1.99 ± 0.29, 3.93 ± 0.12 and 13.49 ± 0.63 µmol/min/mg protein at 5, 10 and 20 µg/mL of vincristine, respectively.
Normal Proliferating Splenocytes: Activation of Caspase 9 and 3
As shown in Figure 3, in normal proliferating lymphocytes, vincristine at 5 µg/mL induced both caspase 3 and caspase 9 activity, but not caspase 8.
Vincristine at higher concentrations (8.5 and 20 µg/mL) had no observable effect on caspase 8 activity; however, it significantly augmented the activity of caspases 9 and 3 (table of Figure 3). The highest level of caspase activity was observed for caspase 9 when vincristine was administered at 20 µg/mL (23.81 ± 0.02 µmol/min/mg protein).
Caspase Activities
A comparison of the effect of vincristine (5 µg/mL) on the various caspases (3, 8, and 9) in the various cells (cancerous BCL1, normal resting and activated lymphocytes) is presented in Figure 4A. The activity of caspases 3 and 9 in activated lymphocytes is clearly comparable with that in BCL1 cells, although resting cells have no caspase 3 activity and caspase 8 activity is seen only in BCL1. As shown in Figure 4B, caspases 3 and 9 are activated at higher concentrations (20 µg/mL) in both resting and stimulated cells, although the activity is about twice as high in activated cells.
Discussion
Apoptosis induction is one of the important mechanisms in cancer therapy (16). Vincristine binds to β-tubulin close to the guanosine triphosphate (GTP)-binding sites (the vinca domain) of the β-α-tubulin heterodimers (17). Prolonged mitotic arrest leads to phosphorylation-mediated inactivation of BCL-2 and BCL-XL. Inactivation of antiapoptotic BCL-2 proteins promotes activation of BAX (BCL2-associated X protein) and BAK (Bcl-2 homologous antagonist killer), cleavage of caspases 9 and 3, and caspase-dependent apoptosis (18). Vincristine induces distinct death programs in primary ALL cells depending on the cell-cycle phase, and cells in G1 are particularly susceptible to perturbation of interphase microtubules (19). Pharmacokinetic studies of vincristine in patients with cancer have shown variable characteristics (from a few minutes to hundreds of hours in serum) and also neurotoxic side effects. It is able to accumulate intracellularly to a few hundred times the extracellular concentration (20-23). Apoptosis and proliferation of lymphocytes are among the most important features of the immune system (24,25). Lymphocytes start to proliferate in response to invaders (including malignancies) before exerting any effector function (26,27). Therefore, anti-mitotic drugs such as vincristine are expected to affect normal proliferating lymphocytes much more than normal resting lymphocytes; however, these normal resting cells from the peripheral blood are usually the ones used to study the adverse effects of vincristine on lymphocytes (28,29). Thus, evaluating the effect of an anti-tumor drug (especially an anti-lymphoma drug) on activated normal lymphocytes could be informative.
Figure 1. Caspase activity in vincristine-treated BCL1 cells, assayed colorimetrically using DEVD-pNA for caspase 3 (A), IETD-pNA for caspase 8 (B), and LEHD-pNA for caspase 9 (C) as substrates. The absorbance was read at 10 min intervals. Supernatants of untreated cells (without vincristine) were considered as a negative control, the sample without substrate was designated "control 1" and the one without supernatant "control 2". The results are presented as mean ± SD (very low SD in some experiments makes the bars hardly visible) and the activity of caspases 3, 8, and 9 in the BCL1 lymphoma cell line is shown as µmol/min/mg protein (table).
We have previously reported that different concentrations of vincristine induce cell death and apoptosis in resting and proliferating spleen lymphocytes. The cytotoxicity of vincristine was determined by the MTT assay, and the percentage of apoptotic cells was also determined using acridine orange/ethidium bromide double staining. The toxic effect of vincristine on normal cells was highly dependent on the time and activation status of the cells. The IC50, according to the MTT test, was about 3.6 µg/mL for BCL1 and 10.6 and 8.5 µg/mL for resting and proliferating lymphocytes, respectively (13). In this study, we compared the effect of vincristine on the activity of the main apoptotic caspases (3, 8, and 9) in resting and proliferating lymphocytes in addition to the cancerous lymphoma cell line (BCL1).
According to our results, vincristine (5 µg/mL) induces the activity of both caspase 8 and caspase 9 (and caspase 3) in the BCL1 cell line. These results are in agreement with various studies reporting that vincristine induces caspase 3 (or 7) and caspase 8 activation in various cell lines (30-33) and promotes the expression of caspase 9 (and caspase 3) in human neuroblastoma cells (34).
It is noteworthy that the lower concentration caused a gentle increase in the activity of caspases 8 and 9 in the BCL1 cell line but did not result in immediate caspase 3 activation. This may be due to weak activation of caspases that fails to reach the threshold required to activate caspase 3 (35). An in vitro study showed that increased cytotoxicity with alterations in the G1 and S cell cycle phases may occur without detectable differences in apoptosis (36).
In resting splenocytes, only a mild increase in caspase 9 activity was observed, without any change in the activity of caspase 8 or 3. This can be attributed to the low toxicity of vincristine on normal lymphocytes. In the same situation, the activity of caspases 3 and 9 was elevated in proliferating cells exposed to vincristine (1.5-fold). The results indicate that activated lymphocytes are more susceptible to apoptosis. The same results were obtained with higher concentrations of vincristine (up to 20 µg/mL), i.e. no caspase 8 activity in non-cancerous splenocytes (resting or proliferating) and augmented activity of caspases 9 and 3 in both. It can likewise be concluded that the caspase 8 pathway was not easily activated in normal lymphocytes by vincristine.
Regarding the importance of lymphocyte proliferation in response to various pathogens and also tumor cells, paying attention to the activation status of these cells leads to more admissible results.
Figure 3. Vincristine at 5 µg/mL was added to 2 × 10^5 ConA-stimulated splenocytes and cultured for 48 hours; cell lysates were prepared and caspase activity was assayed colorimetrically using DEVD-pNA for caspase 3 (A), IETD-pNA for caspase 8 (B), and LEHD-pNA for caspase 9 (C) as substrates. The absorbance was read at 10 min intervals. Supernatants of untreated cells (without vincristine) were considered as a negative control, the sample without substrate was designated as "control 1" and that without supernatant as "control 2". The results are presented as mean ± SD (very low SD in some experiments makes the bars hardly visible) and the activity of caspases 3, 8, and 9 in the BCL1 lymphoma cell line is shown as µmol/min/mg protein (table).
Figure 4 legend (fragment): ... or stimulated (with ConA) were cultured in the presence or absence of vincristine for 48 hours; cell lysates were prepared, caspase activity was assayed colorimetrically, and the caspase activity was calculated using standard curves (fold increase relative to resting lymphocytes at 5 µg/mL vincristine). | 2019-09-10T00:27:59.900Z | 2019-08-19T00:00:00.000 | {
"year": 2019,
"sha1": "edeb92eb8230e579b70283a93f7354216d0248c5",
"oa_license": "CCBYNC",
"oa_url": "https://zjrms.kowsarpub.com/cdn/dl/05426a2c-f965-11e9-b8f1-e746e023ab12",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "afb19d915e4d537096556d580ec2899d63a2cd09",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
6129990 | pes2o/s2orc | v3-fos-license | When Neuroscience ‘Touches’ Architecture: From Hapticity to a Supramodal Functioning of the Human Brain
In the last decades, the rapid growth of functional brain imaging methodologies allowed cognitive neuroscience to address open questions in philosophy and social sciences. At the same time, novel insights from cognitive neuroscience research have begun to influence various disciplines, leading to a turn to cognition and emotion in the fields of planning and architectural design. Since 2003, the Academy of Neuroscience for Architecture has been supporting ‘neuro-architecture’ as a way to connect neuroscience and the study of behavioral responses to the built environment. Among the many topics related to multisensory perceptual integration and embodiment, the concept of hapticity was recently introduced, suggesting a pivotal role of tactile perception and haptic imagery in architectural appraisal. Arguments have thus risen in favor of the existence of shared cognitive foundations between hapticity and the supramodal functional architecture of the human brain. Precisely, supramodality refers to the functional feature of defined brain regions to process and represent specific information content in a more abstract way, independently of the sensory modality conveying such information to the brain. Here, we highlight some commonalities and differences between the concepts of hapticity and supramodality according to the distinctive perspectives of architecture and cognitive neuroscience. This comparison and connection between these two different approaches may lead to novel observations in regard to people–environment relationships, and even provide empirical foundations for a renewed evidence-based design theory.
Keywords: neuroscience, architecture and design, sensory perception, vision, touch, hapticity, supramodality, review
In recent years, novel methodologies to explore the neurobiological bases of mind and behavior have inspired the fields of architecture (e.g., Mallgrave, 2011), planning and urban studies (Portugali, 2004, 2011; van der Veen, 2012; de Lange, 2013), geography (Anderson and Smith, 2001), social sciences and the humanities (Leys, 2002) to open toward cognitive neuroscience and, more specifically, to brain imaging. Novel interdisciplinary fields with the 'neuro-' prefix have thus recently emerged, such as neuro-economy, neuro-law, neuro-marketing, and even neuro-architecture. A neuroscientific approach to the most diverse fields has proven able to offer experimentally based evidence to different domains, often confirming, reviewing or integrating previous theoretical notions. Yet, when promoting any dialog among disciplines, caution must be urged against certain conceptual ambiguities, as we shall see in this commentary.
NEUROSCIENCE AND ARCHITECTURE
In architecture, new awareness of the complexity of cognitive and emotional processes involved in the daily experience of designed environments has rapidly grown. Such interest also led to the foundation of the Academy of Neuroscience for Architecture (ANFA) in 2003 in San Diego. Since then, various important contributions have emerged from both fields (Eberhard, 2008; Mallgrave, 2011; Robinson and Pallasmaa, 2015).
Provocatively, we may argue that neurophysiology and design started influencing one another during the Renaissance, when anatomists and designers shared their education, studies and the same cultural milieu: while Vesalius, Descartes and Willis explored the functional and structural characteristics of the central nervous system, laying the grounds for the subsequent scientific revolution, artists such as Leonardo Da Vinci and Andrea Mantegna spent their days in anatomical observations, visionary hydraulic projects, painting and architectural design.
Since then, design studies and life sciences have been continuously inspiring each other, but only recently have they started to truly share interdisciplinary theoretical and methodological perspectives. Nowadays, the contribution of neuroscientists is actively influencing the architectural debate. For instance, Albright (2015) is approaching design with a neuroscientific perspective on perception and aesthetics. Suggestions on the role of embodied cognition through the mirror neuron system in aesthetic response (Freedberg and Gallese, 2007) are taken into account in architectural essays (Mallgrave, 2012; Pallasmaa, 2012; Robinson and Pallasmaa, 2015), and Zeki's neuroaesthetic theories are being discussed within the architectural field (Mallgrave, 2011). Arbib (2012, 2015) is directly addressing designers with suggestions on sensory perception that could have an impact on design practice. A specific topic now emerging in the neuro-architectural debate deals with the relationship between sensory experience and architectural perception. The role of non-visual perceptual modalities, and specifically of touch, is currently arousing great interest (e.g., Pallasmaa, 2005). Here, we specifically focus on how the recent neuroscientific evidence of a modality-independent processing of sensory information could actually lead to a 'sensory intensification' (i.e., visual and non-visual appreciation of designed spaces) in architectural design.
SENSORY INTENSIFICATION IN ARCHITECTURAL THEORY: THE CONCEPT OF HAPTICITY
In the past, many architectural theorists already speculated about the body-architecture relationship, usually in formal theories lacking any experiential or perceptual bases, as in the famous cases of the 'golden-ratio' (Markowsky, 1992; Höge, 1995; Falbo, 2005) or other 'natural' formal principles, such as those inspired by the supposed preference for natural, living forms (the so-called 'biophilia hypothesis' - for a critical assessment see Joye and De Block, 2011).
The phenomenological philosophy of Maurice Merleau-Ponty (1964) initiated a theory postulating the embodiment of the built environment into our daily sensorial experience. Similarly, the Danish architect Steen Eiler Rasmussen (1964) favored the importance of perceiving and appreciating architectural features through different sensory modalities, such as in the subtle haptic cues mediated by visual perception: for instance, visual cues on textures and shapes are also able to convey haptic information, such as roughness, smoothness or weight, and thus to gratify the eye through sensorimotor imagery (Figure 1A). Other authors supported an even tighter relationship between architectural design and embodied cognition, as well as architectural experience and bodily self-consciousness (Mallgrave, 2011; Pasqualini et al., 2013). For instance, the architect Yudell claimed that the visual rhythm of the urban landscape could actually affect body motion (e.g., our walking pace) and excite our imagination toward an enhanced interaction with environmental elements, as in fantasizing about climbing non-existent steps when looking at the unusually textured facade of a skyscraper (in: Bloomer and Moore, 1977).
Currently, multisensory perceptual integration and the role of the sense of touch in architectural design are being explored through the notion of hapticity. The term hapticity is commonly defined as "the sensory integration of bodily percepts" (Pallasmaa, 2000, 2005) and it suggests a pivotal role of tactile-based (i.e., generally non-visually based) perception and imagery in the architectural experience. The Finnish architect and theorist Pallasmaa hypothesizes the existence of an "unconscious tactile ingredient in vision" (Pallasmaa, 2005) that would be fundamental in architectural appreciation and would exalt touch as the primordial sensory modality.
In this view, even though touch and vision remain intrinsically interwoven in object form and spatial perception, tactile sensations would constitute the core of architectural appraisal (Figure 1B). In this sense, for example, it is common to refer to a comfortable and relaxing space as a 'warm' place. In this regard, Pallasmaa just recently stressed the importance of sensory experience and our ability to catch complex atmospheres and moods "through simultaneous multi-sensory sensing" (Pallasmaa, 2012). The anthropologist Hall (1966) also emphasized the lack of appeal among designers for the role of haptic sensations, even when visually presented, in bonding people with their environment. Similarly, the architect Sara Robinson (2015) recently reconsidered the privileged link between haptic sensations and emotion.
FIGURE 1 | (A) According to the notion of hapticity, visual cues (e.g., textures or shapes) are able to convey tactile information (e.g., roughness or consistency). Left, top and bottom: edgy shape and texture. Right, top and bottom: smooth shape and texture. Of note, neuroscientific observations showed that the same perceptual information is often processed in a supramodal manner, i.e., independently of the modality through which that sensory content is acquired. (B) What are the implications of supramodal processing when perceiving architecture, such as the facades of the Beauvais Cathedral (Beauvais, France - on the left) or of the Casa Milà (Barcelona, Spain - on the right)? Has visual appreciation of architecture any non-visual (e.g., tactile) implications as well?
Consistently, theorists in the architectural field recently advised against the overemphasis on vision as the primary source of aesthetic appreciation, which may result in a biased design methodology (O'Neill, 2001; Mallgrave, 2011). Similarly, the neuro-architectural framework claims that the lack of expertise on multi-sensorial appreciation represents a serious limitation in the current design methodology and struggles for a "sensory intensification" in architectural design (Van Kreij, 2008). On the contrary, most practicing architects typically rely on visual representations both during the design process (e.g., sketches and technical drawings) and the subsequent phase of project communication to the public or the client (e.g., 3D models and renders). Moreover, architects rely almost solely on pictures and drawings (in architectural magazines or books) to establish their personal aesthetics and design method (Wastiels et al., 2013).
NON-VISUAL PERCEPTION AND SUPRAMODALITY IN THE HUMAN BRAIN
Visual information plays a crucial role in shaping the manner in which we represent and interact with the world around us. In fact, for sighted people, vision is so pervasive that they find it hard to imagine a world that does not reach them through their eyes. Thanks to the omnipresence of such kind of perceptual information, sighted people tend to think of themselves as 'visual beings.' Through preferred metaphors, languages often suggest the dominance of vision over other modalities to construct conceptual knowledge. In English, for example, knowing and seeing are often used interchangeably in daily conversation, with expressions such as 'I see what you mean,' 'can you see my point?' or 'seeing is believing.' In ancient Greek, the verb root 'to know' was used as the past tense of the verb root 'to see,' which lacked its own past tense, so that "I saw" was the equivalent of "I knew." Consequently, the great majority of psychophysical and neuroscientific studies have been historically focused on the characterization of visual perception and on the dissection of the different steps of visual information processing (e.g., Firestein, 2012) and only recently has non-visual perception started to attract some attention (e.g., Klatzky and Lederman, 2011; Ricciardi and Pietrini, 2011; Ricciardi et al., 2014a; Lacey and Sathian, 2015).
In particular, although vision offers distinctive and unique pieces of information (e.g., colors, perspective, shadows, etc.), several observations indicate that vision might not be so necessary to form a proficient mental representation of the world around us. Indeed, individuals who are visually deprived since birth show perceptual, cognitive, and social skills comparable to those found in sighted individuals (Ricciardi et al., 2006, 2009, 2014a; Pietrini et al., 2009; Ricciardi and Pietrini, 2011; Handjaras et al., 2012, 2016; Heimler et al., 2015). Chris Downey is an architect, Esref Armagan is a painter, Peter Eckert is a photographer: all of them are blind people and yet perfectly capable of successfully conducting their professional lives.
In recent years, functional brain imaging allowed neuroscientists to look at the brains of visually deprived individuals in vivo to explore the effects of lack of vision on the formation of proper mental representations. Notably, the question of the extent to which vision is really necessary for the human brain to function, and thus to represent the surrounding world, has recently extended its reach toward a few architectural theorists (Robinson and Pallasmaa, 2015).
Most neuroscientific studies conducted on blind individuals have primarily focused on the structural and functional compensatory plastic rearrangements occurring as a consequence of sensory loss. In sight-deprived individuals, the 'unisensory' visual occipital cortex structurally rewires to accommodate non-visual sensory inputs (e.g., Cecchetti et al., 2015), while showing functional cross-modal responses to several non-visual perceptual and cognitive tasks (e.g., Amedi et al., 2005; Frasnelli et al., 2011; Heimler et al., 2014). The loss of a specific sensory modality, such as vision, represents a unique opportunity to understand the real extent to which the brain morphological and functional architecture is programmed to develop independently of any visual experience. Neuroimaging protocols have been suggesting that distinct perceptual tasks evoke comparable patterns of brain responses between congenitally blind and sighted individuals: for instance, both groups show overlapping responses in the ventral temporo-occipital cortex when visually or non-visually recognizing object forms, in the middle temporal area when discriminating motion across sensory modalities and in the dorsal occipito-parietal region when processing spatial information and spatial representations (Amedi et al., 2001, 2002; Pietrini et al., 2004; Ricciardi et al., 2007; Bonino et al., 2008, 2015; for a review: Ricciardi and Pietrini, 2011; Handjaras et al., 2012, 2016; Heimler et al., 2014; Ricciardi et al., 2014a,b).
The sharing of an active 'visual' area both in sighted and blind participants across visual and tactile task modalities implies a more abstract, supramodal representation of specific information content. Supramodal brain regions may share a representation of the perceived stimuli independent of the input format from the sensory modality conveying the information to the brain (Figure 2).
As vision has long been considered crucial to explore and represent external sensory stimuli (that are processed along a segregated, but hierarchically organized, network of brain areas), supramodal responses were first assessed within the well-known visual functional pathways (e.g., Milner and Goodale, 1995; Goodale and Milner, 2006; Handjaras et al., 2012). Supramodality has more recently been shown to be involved in integrated semantic representations and affective processing, ranging from action understanding to emotional and social functioning (Ricciardi et al., 2013, 2014a; Handjaras et al., 2015, 2016; Leo et al., 2016). Consequently, a more general 'supramodal mechanism' advances from simpler low-level to more complex sensory information toward more abstract, 'conceptual' representations. Therefore, according to this perspective, distinct elements of form and space in architectural perception may be processed and represented in highly specialized brain regions in a sensory modality-independent manner. In this sense, assessing the consistency or roughness of a material may recruit a supramodal neural content independently of the sense involved. The same may happen when exploring a complex object only by actively touching it. Rasmussen (1964) provided many examples which could be construed as supramodal architectural experiences ante litteram: he claimed, for instance, that just looking at the surface of a wall could evoke sensations of weightiness or lightness, hardness or softness.
On these premises, Mallgrave (2011) approached the supramodal hypothesis as a possible neural explanation of hapticity. As a matter of fact, by supporting the view of a more abstract nature of information representation, supramodality could theoretically comprehend and thus represent the neural correlate of hapticity and consequently provide the theoretical basis for its empirical investigation.
Nonetheless, if it is evident that vision is not solely responsible for spatial appraisal and perception as hapticity would imply, the notion of supramodality, in line with the intuition of a 'sensory intensification' in architectural appraisal (Van Kreij, 2008), further implies a more comprehensive overview on the embodiment of architectural experiences, shifting the balance beyond immediate sensory perception -not limited to a single sensory modality -toward higher cognitive, more abstract representations involving semantic, emotional and even social processing.
The conceptual potential of hapticity may not have been fully characterized yet, and therefore not fully exploited by architects. In addition, stating the predominance of the tactile sensory modality may be wrong. In fact, touch is constrained both spatially and temporally, as compared to vision. By definition, haptic perception happens in sequence, within a limited perceptual range and only through direct contact with the perceived object (Pons et al., 1987). In addition, the sense of touch relies more on specific properties, such as surface texture, than on global ones, such as shape or localization in space (e.g., Lakatos and Marks, 1999; Podrebarac et al., 2014). On the other hand, vision relies on a parallel sensory processing, able to provide a comprehensive, 'gestaltic' perception over a distance and on a wider spatial extent (e.g., Gibson, 1979). Furthermore, functional neuroanatomy and psychophysiology demonstrated a perceptual and cognitive dominance of vision over other sensory modalities (Sereno et al., 1995; Gross, 1998).
Nonetheless, neuroscientists have recently referred to touch in a way that may take hapticity into account. From a phylogenetic perspective touch is an 'earlier' sense, developing prior to vision (even bacteria have it). Touch is a key element in communicating emotions and intimacy, maintaining and reinforcing social bonds (Suvilehto et al., 2015) and evidence shows that tactile stimulation accelerates brain development in infants (Guzzetta et al., 2009).
Touch could even entail emotional involvement with inanimate objects (e.g., Hornik, 1992) and, from a functional perspective, it has been proven that the somatosensory cortices and the action recognition network show vicarious activations during non-visual socially relevant interactions (for a review: Keysers et al., 2010). Most importantly, haptic perception is crucial in determining a 'sense of presence,' which refers to the perception "of being immersed in the surrounding environment," whereas vision often does not (Bracewell et al., 2008; Slater et al., 2009). As neuroscientists and architectural designers, we may ask ourselves whether environment appraisal indeed relies on such a sensation of 'being there' (or 'in touch,' as it were) as the notion of hapticity seems to indicate, and to what extent it does so. Because the theorists of hapticity supported their idea of a multimodal sensing in the architectural experience by relying on the neuroscientific evidence that visual and non-visual information is equally processed and represented in the human brain, design decisions can truly integrate such knowledge to enhance architectural experience embracing the whole of the different sensory modalities. For instance, a recent study showed that symmetry is represented in the lateral occipital cortex in a supramodal fashion (Bauer et al., 2015), and many other design-relevant properties remain to be investigated.
TOWARD AN EMPIRICAL RESPONSIBILITY PRINCIPLE IN ARCHITECTURE?
Since we spend the most part of our lives in buildings, our environment would greatly benefit from a perspective on architectural and urban design that is shared by both the architect and the neuroscientist. However, we must bear in mind that when dealing with the scientific method that characterizes life sciences, as suggested by Mallgrave (2015), architects must be prepared to address unexpected and possibly unwelcome empirical realities.
In fact, while the 'neuro-turn' has been welcomed by some architects as a way to "humanize" buildings (Pallasmaa, 2012) or to enhance architectural experience (Mallgrave, 2011), in other fields the same shift provoked an opposite reaction: some historians and sociologists see the fascination for neurosciences as a menace to human diversity and creativity (Fitzgerald and Callard, 2014), as a deeper knowledge of the molecular and neural correlates of human mind and behavior would prompt stereotyped approaches to design.
Many socially relevant research questions could be explored by neuroscience and architecture in synergy (see for instance: Pasqualini et al., 2013; Vartanian et al., 2013, 2015; Choo et al., 2016). Whereas currently the outcomes of this dialog and contamination between architecture and neuroscience are hardly predictable, we believe in the paramount importance of sharing knowledge among disciplines. Actually, the dialectics between the notions of hapticity and supramodality that we have described in this essay is a clear example of the weaknesses and potential strength of sharing theoretical models and terms. So, although hapticity suggests a primacy of touch that evidence from neuroscience does not fully support, it also highlights the urge for a deeper understanding of processing or integration of multiple sensory modalities in environmental perception and appraisal. Actually, the comparison between these two different, but complementary approaches, may lead to novel observations regarding the people-environment relationships (e.g., concerning the architectural elements that may evoke the 'sense of presence'), and even provide empirical foundations for a renewed evidence-based design theory (e.g., characterizing which visual and haptic cues evoke similar percepts or dissecting the role of each sensory modality in processing spatial information).
Such ambiguity of terms demands clarity. Many scientific fields that have matured toward the establishment of accepted methods had to come to terms with theoretical uncertainties such as those faced by architectural theorists and researchers right now. In scientific investigation, more accurate conceptual and linguistic choices should be made, in order to provide a common ground for the involved disciplines: specific terms must be preferred to fashionable and evocative ones, and evidence-based demonstrations should overcome speculations [Lilienfeld et al., 2015; see Franz (2005) as an example of such an approach].
No infatuation for neuroscience will bring beneficial change to the architectural field if even eminent theorists still rely on verbal descriptions and speculations. On the contrary, if a paradigm shift awaits architecture, it cannot rely on a turn to neuroscience alone: architectural researchers now need to embody the ethos of empirical responsibility.
AUTHOR CONTRIBUTIONS
All authors listed, have made substantial, direct and intellectual contribution to the work, and approved it for publication. | 2017-05-05T08:51:53.086Z | 2016-06-09T00:00:00.000 | {
"year": 2016,
"sha1": "6c20d302487a68e90da52e4eb58f9588aa919100",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2016.00866/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6c20d302487a68e90da52e4eb58f9588aa919100",
"s2fieldsofstudy": [
"Psychology",
"Biology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
16026654 | pes2o/s2orc | v3-fos-license | Circulating biomarkers to monitor cancer progression and treatment
Tumor heterogeneity is a major challenge and the root cause of resistance to treatment. Still, the standard diagnostic approach relies on the analysis of a single tumor sample from a local or metastatic site that is obtained at a given time point. Due to intratumoral heterogeneity and selection of subpopulations in diverse lesions, this will provide only a limited characterization of the makeup of the disease. On the other hand, recent developments in nucleic acid sequence analysis allow the use of minimally invasive serial blood samples to assess the mutational status and altered gene expression patterns for real-time monitoring in individual patients. Here, we focus on cell-free circulating tumor-specific mutant DNA and RNA (including mRNA and non-coding RNA), as well as current limitations and challenges associated with circulating nucleic acid biomarkers.
Introduction
Tumor heterogeneity that enables malignant progression by evolutionary selection is also the major cause of emergent resistance during cancer treatment. Yet, we rely on few standard diagnostic tumor biopsies for the characterization of a given cancer. These specimens will provide only a partial characterization of the overall makeup of the dynamic systemic disease cancer represents, with intratumoral and interlesional heterogeneity as well as emerging host responses [1]. Tumor heterogeneity is generally accepted as following Darwinian evolutionary principles (Fig. 1), where genetic heterogeneity within a cancer cell population translates into a range of phenotypes that includes distinct surface marker expression, metabolism, proliferation, apoptosis, invasion, angiogenesis, drug sensitivity, antigen presentation or organotropism of cell subpopulations present in a given tumor [2,3]. Selective pressure and selection of cancer cell subpopulations are generally thought to drive increasing heterogeneity during tumor growth and metastatic spread (Fig. 2). Additionally, phenotypic plasticity of cancer stem cells in response to changes in the tumor microenvironment contributes to heterogeneity [4].
A striking example that illustrates intratumoral heterogeneity was recently described for kidney cancer specimens that revealed distinct expression of an autoinhibitory domain of the mTOR kinase and multiple tumor-suppressor genes (i.e. SETD2, PTEN and KDM5C). Additionally, this study demonstrated extensive heterogeneous mutational profiles in 26 out of 30 tumor samples from four renal cell carcinoma patients [5]. Another illustrative example of intratumoral/intermetastatic tumor heterogeneity is the extensive whole-genome sequencing analysis of a patient with breast cancer and brain metastasis. Four different tissue samples (the primary tumor, blood, brain metastasis and xenografts) showed tumor heterogeneity at a low frequency even in the primary tumor [6]. Therefore, a single tumor biopsy will underestimate the mutational landscape due to intratumoral/interlesional mutational and phenotypic heterogeneity. These concepts and additional examples were reviewed recently [7].
What are circulating biomarkers
Capturing and analysis of circulating biomarkers is an alternative method to gain insight into the molecular makeup of a cancer in a given patient. Historically, circulating biomarkers have been observed and studied since the late 1800s in the form of circulating tumor cells (CTCs) [8]. However, extensive study of CTCs did not occur until the mid-20th century, when studies of circulating tumor cells showed that the presence of CTCs in cancer patients was correlated with poorer prognosis and with progression-free and overall survival [9-11].
Here we will discuss cell-free circulating tumor-specific mutant DNA and RNA (including mRNA and non-coding RNA; Fig. 3) due to recent improvements in the sensitivity and analysis scope that impacted the potential of these approaches significantly. A review of circulating tumor cells, circulating proteins, and metabolites will not be included here.
Circulating tumor DNA (ctDNA)
Circulating, cell-free DNA (cfDNA), i.e. fragments of DNA found in the cell-free blood compartment, was first described in 1948 [12], but cell-free DNA fragments that originate from tumor cells (ctDNA) were not well characterized until the late 1980s [13]. The origin of ctDNA has not been well defined yet, but it is thought to result from cell death. The presence of ctDNA has been correlated with overall tumor burden and disease activity [14,15]. Somatic oncogenic Ras, p53 and other cancer-related gene mutations, as well as promoter hypermethylation of tumor suppressor genes, have been detected and measured in several different cancers including, but not limited to, colon, small cell and non-small cell lung cancer, melanoma, kidney and hepatocellular carcinoma [16].
Fig. 1. Branching of a cancer evolutionary tree. This model is similar to animals' phylogeny. A (red) represents a common tumorigenesis event, often characterized by common driver mutations. B (green) is the first, C (orange) and D (yellow) are subsequent branch evolutionary events.
It is believed that ctDNAs are the result of apoptosis. Nucleosomes play essential roles in the fragmentation of DNA during programmed cell death, and a recent study developed a genome-wide nucleosome map that showed ctDNA fragments bearing footprints of transcription factors in specific tissues [17]. Additionally, ctDNA from cancer patients demonstrated a distinct pattern of nucleosome spacing, suggesting a contribution of ctDNA from non-hematopoietic tissues, unlike cfDNA from healthy counterparts, whose nucleosome spacing pattern mostly reflects lymphoid and myeloid tissues.
Fig. 2. Selection of cancer subpopulations during tumor progression and treatment. Both genetic and environmental factors influence tumorigenesis and cancer evolution. Selection will enhance cell growth, proliferation, invasion, metastasis, immune evasion and reduce apoptosis. Clones with unfavorable compositions of genetic or epigenetic alterations (blue) will be eliminated after primary therapy. Resistant clones (pink) with survival advantages are indicated. Orange: normal cells; colored outline: pre-malignant lesion; blue, pink, green, dark brown: different malignant clones.
Fig. 3. Circulating biomarkers. Circulating cell-free (plasma/serum) biomarkers include nucleic acids, extracellular vesicles, proteins and metabolites from all metastatic sites as well as normal organ physiologic turnover or the impact of systemic drug treatment. Each organ contributes wild-type DNA to the circulation and organ metastatic seeds will shed mutant DNA. Circulating microRNAs, exosomal RNAs and long non-coding RNAs thus reflect the overall host-tumor crosstalk.
Detection methods and sensitivity
ctDNA detection methods have improved substantially during the past few decades. In the early 1990s, recovery of ctDNA was performed by conventional polymerase chain reaction, followed by Sanger sequencing. However, recovery of ctDNA was often inconsistent and was considered inferior to other biomarkers, including circulating tumor cells (CTCs) and cancer-related protein markers (i.e. alpha-fetoprotein, lactate dehydrogenase). The main obstacle in the detection of ctDNA is the relatively low abundance per milliliter of blood examined. Conventional methods of PCR detection and pyrosequencing have their lower limit of detection at 10% ctDNA copies in the bulk of background normal DNA (Table 1). Similarly, the early-2000s methods of next-generation sequencing and quantitative PCR (qPCR) lowered the limit of detection to approximately 1-2% and enhanced detection performance in hematologic malignancies, i.e. Bcr-Abl fusion transcripts in chronic myelogenous leukemia from circulating leukemic cells. Nevertheless, the detection of ctDNA in patients with solid tumors using these techniques remained problematic. The first successful molecular technique for the identification of ctDNA was the introduction of Beads, Emulsion, Amplification and Magnetics (BEAMing) [18,19], which consisted of emulsion PCR with streptavidin-coated beads in every PCR compartment, followed by recovery of tagged amplicons and fluorescent oligohybridization of the mutation of interest. (See Table 2.) More recent methods using droplet digital PCR [20] and targeted panels of amplicon sequencing [21] platforms improve ctDNA recovery and further decrease the lower limit of detection to approximately 1 in 10,000 copies (0.01%). Droplet Digital PCR (ddPCR) takes advantage of partitioning the PCR amplification reactions into approximately 10,000 to 20,000 independent polymerase reactions per tube. This bypasses the dependence on reverse transcription and amplification efficiency, and avoids the need for data normalization between each sample [22] according to the Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines, all of which are prone to analytical error. Direct measurement of mutant DNA copies further minimizes the errors of relative quantification by qPCR and streamlines the analysis with fewer additional steps.
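As a concrete illustration of how partition counts translate into copy numbers, the sketch below applies the Poisson correction that is standard practice in ddPCR analysis (it is not described in the studies cited above): the fraction of positive droplets gives the mean number of target copies per droplet, and the mutant and wild-type channels can then be combined into a fractional abundance. The droplet counts and channel assignments are hypothetical.

```python
import math

def copies_per_partition(n_positive, n_total):
    """Poisson-corrected mean copies per droplet from the fraction of positive
    partitions (standard ddPCR estimator: lambda = -ln(1 - p))."""
    p = n_positive / n_total
    return -math.log(1.0 - p)

def mutant_allele_fraction(mut_positive, wt_positive, n_total):
    """Fractional abundance of the mutant allele from droplet counts in the
    mutant and wild-type detection channels of the same well."""
    lam_mut = copies_per_partition(mut_positive, n_total)
    lam_wt = copies_per_partition(wt_positive, n_total)
    return lam_mut / (lam_mut + lam_wt)

# hypothetical well with ~20,000 droplets: 12 mutant-positive, 9,500 wild-type-positive
maf = mutant_allele_fraction(mut_positive=12, wt_positive=9500, n_total=20000)
print(f"mutant allele fraction ~ {maf:.4%}")
```

Because each droplet is read as an end-point reaction, such an estimate does not depend on amplification efficiency or on a calibrator sample, which is the practical advantage over relative qPCR quantification noted above.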
PCR-based assays do carry limitations related to their detection methods. The number of ctDNA targets that can be detected in one assay is limited: the number of fluorescence acquisition channels available often dictates the number of multiplexed droplet PCR amplification and probe-hybridization reactions. BEAMing is labor-intensive and requires both streptavidin-bead emulsion PCR and flow cytometry, thus decreasing productivity and the possibility of high-throughput analyses. Also, only known, targeted mutations are measured in BEAMing or ddPCR analysis. This also generates a challenge in situations where the amount of template DNA is limited and multiple mutations may be emerging.
Genome-wide approaches to assess global ctDNA in the circulation have gained significant attention, because only a fraction of patients has known cancer-related driver mutations, i.e. EGFR, BRAF or KRAS. However, initial efforts to utilize shotgun approaches with whole-exome sequencing to identify and measure ctDNA were difficult because ctDNAs are fragmented and degraded in the circulation. This further complicates the validation of variant calling in extensively fragmented DNA samples [23]. A new method that utilizes multiple-tiered mutation analysis based on somatic mutations found in non-small cell lung cancer in The Cancer Genome Atlas (TCGA), i.e. cancer personalized profiling by deep sequencing (CAPP-Seq) [24], has improved ctDNA detection. In a set of 96 patients with stage II-IV NSCLC, the authors reported 96% specificity for mutant allele frequency, with a lower limit of detection of 0.02%. This method remains dependent on tumor volume and the type of cancer assessed, due to differences in the amount of quantifiable ctDNA between cancer types.
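The detection limits quoted above ultimately come down to distinguishing a handful of mutant-supporting reads from sequencing error. The sketch below is a deliberately simplified, hypothetical stand-in for that statistical filtering (the actual CAPP-Seq model is more elaborate and position-specific): it computes the observed mutant allele fraction and asks whether that many mutant reads are plausible under an assumed background error rate. The read counts, error rate and threshold are illustrative assumptions.

```python
from scipy.stats import binom

def screen_low_frequency_variant(alt_reads, depth, background_error=1e-4, alpha=1e-6):
    """Report the mutant allele fraction at a position and whether it exceeds an
    assumed per-base background error rate (one-sided binomial tail test).

    alt_reads        : reads supporting the mutant allele
    depth            : total reads covering the position
    background_error : assumed per-base background error rate
    alpha            : significance threshold for calling the variant
    """
    maf = alt_reads / depth
    # probability of observing >= alt_reads mutant reads from error alone
    p_value = binom.sf(alt_reads - 1, depth, background_error)
    return maf, p_value, p_value < alpha

# hypothetical position covered by 25,000 reads, 20 of which support the mutant allele
maf, p, called = screen_low_frequency_variant(alt_reads=20, depth=25000)
print(f"MAF = {maf:.3%}, p = {p:.2e}, variant called: {called}")
```

In practice, reaching allele fractions near the 0.02% figure quoted above also requires driving down the background error itself and integrating evidence across many patient-specific mutations, which is part of what targeted deep-sequencing panels are designed to do.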
Clinical application
ctDNA is found at relatively high concentrations in the peripheral circulation of patients with metastatic cancer compared with localized disease [16]. Also, the presence and amount of ctDNA in the circulation is independent of the presence or concentration of CTCs [16], suggesting independent mechanisms of shedding of ctDNA and CTCs. Moreover, the ctDNA concentration reflects the response to chemotherapy or molecular targeted therapy [25,26]. These findings will still need to be tested for their clinical implications.
Cancer screening
Conventionally, cancer-related protein markers have been used to monitor patients with a limited set of cancers for recurrent disease, i.e. CA-125 in ovarian cancer, AFP for hepatocellular carcinoma, carcinoembryonic antigen (CEA) for colorectal adenocarcinoma, or lactate dehydrogenase (LDH) for malignant melanoma. Unlike germ-cell tumors, where cancer-related protein markers are highly sensitive and specific to cancer activity, the majority of cancer-related proteins, i.e. LDH, remain only screening tools for cancer recurrence without adequate specificity.
ctDNA is more abundant in the circulation in metastatic cancers than in early-stage disease, and the prevalence of ctDNA detected in patients with no radiographic evidence of metastasis varies between 49% and 78%, compared with 86-100% in metastatic disease [27].
An alternative method to monitor disease activity is through the detection of unique sets of single-nucleotide point mutations specific to the patient as indicators of disease activity. Also, identification of a patient's specific somatic chromosomal translocations through high-throughput sequencing ("personalized analysis of rearranged ends", PARE) or through next-generation, matched-pair sequencing analysis has recently been established [28-31]. This approach uses tumor-specific somatic rearrangements as personalized biomarkers to monitor disease activity, with the notion that all tumor cells carry structural chromosomal rearrangements that are not present in normal tissue or in the circulation. A major potential limitation of this personalized biomarker monitoring is the stability of each biomarker during the treatment course, as the detected biomarker could possibly represent passenger mutations/rearrangements that can undergo negative selection and disappear as the tumor progresses.
Prognostic markers
Earlier studies used restriction fragment-length polymorphism and polymerase chain reaction (RFLP-PCR) assays on circulating DNA to selectively detect circulating mutant KRAS in patients with non-small cell lung cancers. This correlated with the presence of KRAS mutations in tumors and with poorer prognosis for overall survival [32]. Several subsequent studies using newer and more sensitive detection methods have confirmed the correlation between ctDNA burden and survival. For example, in a cohort of 69 patients with metastatic colorectal cancers with detectable KRAS ctDNA, a higher concentration of ctDNA correlated with a poorer survival rate, independent of ECOG performance status and CEA level [27]. Another series also demonstrated the prognostic significance of increased levels of ctDNA, which were related to poor overall survival in patients with metastatic breast cancer, a relationship that could not be found between the level of CA15-3 and metastatic breast cancer survival [28,33]. The ctDNA concentration has been linked to disease burden, prognosis, and response to therapy. The utility of ctDNA as a prognostic biomarker has been extended to different types of cancers, for example cervical cancer [34], colorectal cancer [35,36], pancreatic cancer [37-39], and melanoma [40,41].
Predictive markers
Predictive biomarkers that can guide treatment decisions have been sought to identify subsets of patients who would be "exceptional responders" to specific cancer therapies, or individuals who would benefit from alternative treatment modalities. An example of ctDNA as a potential predictive biomarker is the measurement of O6-methyl-guanine-methyl-transferase (MGMT) promoter methylation from ctDNA in glioblastoma multiforme (GBM) patients. This would determine potential benefits from adjuvant alkylating chemotherapy such as temozolomide or dacarbazine, in addition to standard post-operative adjuvant radiation [42,43]. Identification of plasma ctDNA with MGMT methylation using methyl-BEAMing and bisulfite-pyrosequencing techniques in metastatic colorectal cancers demonstrated 86% agreement of MGMT methylation status between the tumor and ctDNA analyses, with the most methylated allele in the tissues being present in the circulation. Additionally, MGMT methylation status in ctDNA was associated with improved median PFS (2.1 vs. 1.8 months; p = 0.08) [44]. Analysis of tumor-specific ctDNA could thus facilitate the detection of emerging mutations conferring resistance to molecular targeted therapy, and could help tailor the appropriate treatment based on mutations detected in the tumor or in the circulation. Sundaresan et al. [45] demonstrated that the use of ctDNA, complemented by mutation analyses of CTCs and tumor biopsies, can improve the detection rate of the T790M EGFR mutation that confers resistance to molecular targeted therapy of non-small cell lung cancers with first- and second-generation EGFR tyrosine kinase inhibitors. ctDNA can also be incorporated into prospective clinical studies to identify predictive markers of response to cancer therapy, with stratifications based on the underlying somatic mutations that render subjects susceptible to specific targeted therapies (e.g. the BRAF L597 mutation in cutaneous melanoma with a MEK inhibitor, or PIK3CA mutations in solid tumors with PIK3CA inhibitors) or indicate emerging resistant subclones.
Treatment monitoring
Several studies have utilized ctDNA as a marker of metastatic disease activity to monitor disease response and overall disease burden. In one study, a total of 30 out of 52 patients with metastatic breast cancers were found to have somatic variants in their tumors, either by targeted gene sequencing or by whole-genome paired-end sequencing. 97% of patients had measurable ctDNA, compared with 78% for CA 15-3 and 87% for CTCs. The trend of serial ctDNA levels appeared to correlate with radiographic response to therapy. In comparison, fluctuations of CTCs were not informative when the number of CTCs was below 5 cells/ml, and CA 15-3 changes in response to cancer treatment were only small.
Application of ctDNA for treatment monitoring and surveillance could be useful in certain malignancies where there is no optimal method of screening and surveillance, such as pancreatic cancer or ovarian cancer. Pereira et al. [46] suggested the potential utility of ctDNA as an early screening and surveillance tool for gynecologic malignancies.
Limitations
While ctDNA monitoring could offer potential improvements in non-invasive cancer treatment monitoring, there are inherent limitations related to ctDNA tumor markers. ctDNA demonstrates a strong correlation with tumor burden but is not always detectable in peripheral blood. Most studies have shown an approximately 70-80% concordance between tumor somatic mutations and the presence of ctDNA in the circulation [25,47].
ctDNA quantification is highly dependent on pre-analytical specimen handling. While it is possible to recover ctDNA at a comparable concentration between 2-4 h and 24 h of processing time [25,48], several studies have demonstrated significant changes in the mutant-to-wild-type DNA ratios between specimens processed within 2-4 h of blood collection relative to processing at 24 h. There is also no consensus on the method of ctDNA quantification or on how ctDNA targets should be selected from the multiple mutations detected in the cancer genome.
The source of ctDNA should also be standardized, either serum or plasma. Prior studies [49,50] demonstrated a discrepancy in ctDNA concentrations between serum and plasma samples. ctDNA concentrations were consistently lower in plasma compared to serum, due to possible loss of circulating DNA during purification, as coagulation and other proteins are eliminated during specimen preparation.
While ctDNA could be useful in the early detection of cancer recurrence, a potential major limitation is the lack of a consensus on the next step of management following detection of ctDNA in individuals without radiographic evidence of cancer recurrence or relapse. A good example is CA-125, a protein biomarker for ovarian cancer, in the MRC OV05/EORTC 55955 trial [51], in which 529 of 1442 ovarian cancer patients who had completed their chemotherapy and whose CA-125 had returned to the normal range were randomized to either early or delayed treatment upon recurrence of CA-125 above twice the upper normal limit. Despite earlier treatment based on elevated CA-125 levels, there was no difference in overall survival (median overall survival 25.7 months (95% confidence interval (CI), 23.0-27.9) in the early treatment arm vs. 27.1 months (95% CI, 22.8-30.9) in the delayed treatment arm, with a hazard ratio (HR) of 0.98 (95% CI, 0.80-1.20; p = 0.85)). This finding led to a recommendation against treatment decisions based on CA-125 alone without radiographic or physical evidence of disease recurrence.
Similarly, lead-time bias is another major challenge for early cancer screening tools, as previously described for yearly low-dose CT scans for lung cancer screening and routine PSA monitoring in prostate cancer [52-54]. Further research should be performed to validate the utility of ctDNA as a potential biological marker in prospective trials.
Types of circulating cell-free RNA: messenger RNA
Circulating messenger RNAs (mRNA) in human cancer patients were first described in the 1990s in patients with different types of cancers, i.e. gastric cancer, pancreatic cancer [55], nasopharyngeal carcinoma [56] and melanoma [57]. Because mRNAs play a critical role in intracellular protein translation, it is likely that extracellular mRNAs reflect the status of intracellular processes and are conceivably potential biomarkers for cancer diagnosis or therapeutic monitoring. Later studies reported various coding RNAs in plasma or serum from patients with cancer, and levels of circulating cell-free mRNAs (cf-mRNA) were found to be predictive of clinical outcome [58,59] and disease prognosis [60,61]. However, extracellular circulating mRNAs are subject to degradation, instability, low abundance, and intracellular mRNA contamination from specimen processing [62,63]. Thus, the reproducibility and utility of cf-mRNA as a biomarker is severely limited.
Types of circulating cell-free RNA: non-coding RNA
Non-coding DNA sequences are actively transcribed into non-coding RNAs consisting of long non-coding RNAs (lncRNA), microRNAs (miRNA), short interfering RNAs (siRNAs), and piwi-interacting RNAs (piRNA), among other non-coding RNA species. Unlike mRNA, the function of non-coding RNA is the regulation of gene expression. The vast majority of observations in the field of circulating RNAs involve miRNAs and lncRNAs; however, as increasing amounts of RNA sequencing data are being generated, it is becoming clear that piRNAs and snoRNAs in human plasma are gaining importance.
Piwi-interacting RNAs (piRNA)
PiRNAs are single-stranded, 26-31-nucleotide-long RNAs which can repress transposons and target mRNAs, mediated by binding to PIWI proteins. PIWI proteins belong to a subfamily of Argonaute proteins. piRNA biogenesis is Dicer- and Drosha-independent [64]. Although piRNAs have been studied only recently, it is known that piRNAs are a large class of small non-coding RNAs in animal cells and it is thought that there are many thousands of distinct piRNAs. According to the piRNABank (http://pirnabank.ibab.ac.in/stats.html) there are more than 32,000 unique piRNAs. In addition to their role in maintaining the integrity of germ-line DNA, piRNAs are found to be deregulated in cancer [65]. piRNAs are highly abundant in human plasma [63]. Plasma levels of piR-019825 were found to be deregulated in patients with colorectal cancer, whereas piR-016658 and piR-020496 were associated with prostate cancer patients, and plasma levels of piR-001311 and piR-016658 were found to be dysregulated in patients with pancreatic cancer [63]. Despite their large quantities, the role of piRNAs in the circulation has not been studied and still needs to be elucidated.
Small nuclear and small nucleolar RNA (snRNA and snoRNA)
snRNAs and snoRNAs comprise a large number of non-coding RNA species of 60-300 nucleotides in length that are transcribed from intervening sequences of protein-coding genes (a.k.a. host genes). snRNAs are important in RNA-RNA remodeling and spliceosome assembly. snoRNAs are involved in post-transcriptional modification of ribosomal RNA and play integral roles in the formation of small nucleolar ribonucleoprotein particles (snoRNPs), which are important for cellular regulation and homeostasis. There are two major classes of snoRNA: box C/D snoRNAs, a.k.a. SNORDs (containing box C (RUGAUGA) and D (CUGA) motifs), and box H/ACA snoRNAs, a.k.a. SNORAs (containing a box H (ANANNA) motif and ACA elements) (reviewed in [66]). Perturbation of snRNA and snoRNA expression has been documented in different types of cancers. The ratio of U6 snRNA to SNORD44 snoRNA was noted to be higher in breast cancer patients regardless of disease status or staging. SNORD112-114 are overexpressed in acute promyelocytic leukemia, and the same snoRNAs are suppressed under all-trans retinoic acid-mediated differentiation [67]. There is also enrichment of U22, U3, U8 and U94 box C/D snoRNAs in human breast cancer cell lines [68] and over-expression of both SNORD and SNORA species in lung adenocarcinoma and squamous cell carcinoma [69,70]. Additionally, a study of snoRNAs in non-small cell lung carcinoma demonstrated increased expression of SNORD33, SNORD66, SNORD73B, SNORD76, SNORD78, and SNORA41; a subset of the overexpressed snoRNAs (SNORD33, SNORD66 and SNORD76) was reliably detectable in the NSCLC patients' plasma at a significantly higher level compared to healthy controls or COPD patients. However, there remains a paucity of data on snRNAs and snoRNAs as potential diagnostic, prognostic or predictive markers.
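The U6/SNORD44 ratio mentioned above is typically derived from qPCR cycle-threshold (Ct) values. As a minimal sketch of that arithmetic, the snippet below computes a relative abundance with the common 2^-ΔCt formula, assuming roughly equal amplification efficiencies for both assays; the Ct values are hypothetical and not taken from the cited study.

```python
def relative_abundance(ct_target, ct_reference):
    """Relative abundance of a target RNA versus a reference RNA from qPCR
    cycle-threshold values, assuming ~100% efficiency for both assays (2^-dCt)."""
    delta_ct = ct_target - ct_reference
    return 2.0 ** (-delta_ct)

# hypothetical Ct values measured in the same serum-derived cDNA sample
ct_u6 = 24.1       # U6 snRNA (target)
ct_snord44 = 27.6  # SNORD44 snoRNA (reference)
ratio = relative_abundance(ct_u6, ct_snord44)
print(f"U6/SNORD44 ratio ~ {ratio:.1f}")
```

Comparing such ratios between patient groups would additionally require attention to the normalization and reporting issues raised by the MIQE guidelines discussed earlier.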
Long non-coding RNAs (lncRNA)
The lncRNAs are defined as > 200 nucleotides in length and classified into five subclasses, which include intergenic, intronic, sense overlapping, anti-sense, and bidirectional lncRNAs [71]. LncRNAs regulate the expression of protein-coding genes, functioning at the level of splicing, chromatin remodeling, transcriptional control and post-transcriptional processing after binding to DNA, RNA or proteins [72]. Dysfunction of lncRNAs is associated with a wide range of diseases. Experimentally supported lncRNA-disease associations are collected and curated in the publicly available LncRNADisease database, which contains sequence annotations, descriptions of lncRNA functions and organ-specific expression levels [73]. LncRNADisease also curates lncRNA-interacting partners at various molecular levels, including protein, RNA, microRNA and DNA. Several thousand RNA transcripts have been identified as lncRNAs [74], and their expression is tissue-specific [75], involving growth, metabolism and cancer metastasis [76]. Despite the paucity of data on circulating lncRNAs, the interest in circulating lncRNAs in human cancer has grown recently [77-81]. In renal cell cancer, levels of plasma lncARSR are higher than those of healthy blood donors; lncARSR levels decreased after tumor resection and were elevated upon tumor relapse [82]. Moreover, the authors showed that high pre-therapy plasma lncARSR levels could predict which patients would suffer from progressive disease during sunitinib therapy. This could indicate that circulating lncRNAs have the potential to serve as predictive biomarkers for the clinical benefit of cancer therapy.
Interestingly, the ratio of different RNA transcripts within exosomes differs from their cells of origin, suggesting that lncRNAs are transported into exosomal vesicles in a tightly regulated manner [83]. For example, circulating levels of lncRNA H19 are elevated in patients with gastric cancer compared with healthy controls, and plasma H19 lncRNA expression was reduced postoperatively in patients with elevated pre-operative levels [84]. However, there was no correlation between the expression of H19 in plasma and primary tumor tissues. This discrepancy may be due to decreased RNA integrity in plasma and reduced RNA quality and degradation in formalin-fixed paraffin-embedded (FFPE) tissues. Interestingly, there was no difference in H19 expression between tumor and paired non-cancerous tissues in FFPE samples. These findings provide evidence of different tissues of origin for each circulating lncRNA, e.g. the lymphatics, the cardiovascular or nervous system, circulating peripheral blood cells or hematologic stem cells. This implies that circulating lncRNAs can provide information about the tumor-host microenvironment and crosstalk, and thus reflect the systemic nature of cancer. A study using sera from gastric cancer patients suggested that circulating CUDR, PTENP1 and LSINCT-5 lncRNA expression could distinguish patients with gastric cancer as early as stage 1 from healthy subjects and from patients with gastric ulcers, although there was no association between the lncRNAs and tumor characteristics (location, size, and TNM staging) [85].
microRNA
Mature microRNAs (miRNAs) are highly conserved short strands of non-coding RNA derived from hairpin precursor transcripts [86]. After cleavage of primary microRNA (pri-miRNA) transcripts by the Drosha/DGCR8 complex, nuclear-to-cytoplasmic transport, and maturation by DICER1 [87,88], 21-24-nucleotide-long, double-stranded mature miRNAs are formed. One of the mature miRNA strands binds predominantly to the 3′ untranslated region (UTR) of mRNA to regulate protein translation. Additionally, miRNAs can also bind to the open reading frame (ORF) or 5′ UTR of target mRNAs to repress or activate translational efficiency [89][90][91][92]. Small RNAs involved in translational regulation via an antisense RNA-RNA interaction were first described in Caenorhabditis elegans [93]. To date, more than 2500 human mature miRNAs have been identified and annotated [94], and more than half of human protein-coding genes are likely regulated by a miRNA [95].
miRNAs are dysregulated in cancer and play crucial roles in cell proliferation, apoptosis, metastasis, angiogenesis and tumor-stroma interactions [96]. Dysregulated miRNAs can function both as oncogenes (e.g. miR-155, miR-21, miR-221, miR-222, the miR-106b-93-25 cluster, and the miR-17-92 cluster) and as tumor suppressors (e.g. miR-15, miR-16, let-7, miR-34, miR-29, miR-122, miR-125a-5p and miR-1343-3p), depending on their downstream targets [63,97]. Many human miRNA genes are located at chromosomal sites that are susceptible to chromosome breakage, amplification and fusion with other chromosomes [98]. Additionally, alterations in RNA-binding proteins and cell signaling pathways contribute to cancer through miRNA expression changes, and mutations in core components of the miRNA biogenesis machinery can promote oncogenesis [87]. It has recently been shown that mutant KRAS in colon cancer cell lines leads to decreased Ago2 secretion in exosomes, and that Ago2 knockdown resulted in decreased secretion of let-7a and miR-100 in exosomes whilst cellular levels of the respective miRNAs remained unchanged compared to control cells [99].
A systematic expression analysis of 217 mammalian miRNAs in 334 samples, including multiple human cancers, revealed extensive diversity in miRNA expression across cancers and a large amount of diagnostic information encoded in a relatively small number of miRNAs. More than half of the miRNAs (129 out of 217) had lower expression levels in tumors compared to normal tissues, irrespective of cell type [100]. miRNA expression profiles allow classification of poorly differentiated cancers and identification of tumors of unknown tissue origin [100]. In subsequent studies, profiling miRNA expression improved cancer diagnosis and helped identify the tissue of origin in carcinomas whose primary site could not be determined by standard histology or immunohistological analyses [101,102].
miRNAs are present and stable in the peripheral circulation. The first report on miRNA expression in the circulation in 2008 described detection of four placenta-associated miRNAs (miR-141, miR-149, miR-299-5p, and miR-135b) in maternal plasma during pregnancy, after which the level decreased following delivery [103]. In 2008, a study demonstrated increased levels of circulating miR-21, miR-155 and miR-210 expression in patients with diffuse large B-cell lymphoma (DLBCL) compared to healthy controls [104]. Mitchell et al. also showed that circulating serum miR-141 could distinguish patients with advanced prostate cancer from healthy controls [105].
The vast majority of research on circulating miRNA signatures in oncology is focused on diagnostics [106], in which patients with cancer are compared to healthy individuals. Given the substantial inter-individual differences in patients' genetic backgrounds, in addition to the heterogeneous nature of cancer, using cf-miRNAs as cancer diagnostic biomarkers will remain challenging.
The origin of cf-miRNA is heterogeneous. miR-21 is a good example to illustrate this point. Although the release of miR-21 into the circulation is correlated with a multitude of cancer types, it is also highly expressed in activated T-cells and associated with inflammation and wound healing [107][108][109]. Elevated circulating miR-21 levels therefore do not merely reflect tumor presence; they can also reflect the host response to the tumor, which is important in predicting disease progression. Moreover, there are often discordances between cf-miRNA signatures and the paired tumor tissue [106]. Assuming that the quality of miRNA measurements is not determined by the efficacy of RNA extraction, this suggests that cancer-associated cf-miRNA deregulation is more likely to reflect the systemic response to the presence of cancer. Indeed, several studies have shown that cf-miRNAs are predominantly derived from blood cells [110] and the endothelium [111] in addition to the tumor.
Cancer progression and systemic drug therapy involve many organ systems and are not limited to the primary tumor. This makes cf-miRNAs attractive biomarkers for monitoring cancer progression and drug efficacy. For instance, in serum obtained pre-surgically from patients with early-stage colorectal cancer, a panel of 6 circulating miRNAs can predict cancer recurrence [112]. Changes in cf-miRNA patterns within the same patient can be monitored over time during therapy. Evidence of the utility of cf-miRNAs as indicators of cancer therapy response has been accumulating over the last few years [113][114][115]. Cf-miRNAs are likely to surpass the clinical utility of conventional protein markers such as CA-125, CA19-9 and PSA, and of radiographical techniques, which have low sensitivity and specificity and are not designed to characterize cancer at a genetic level.
Modes of RNA transport into the circulation
Human serum contains ribonucleases (RNases) that originate from leukocytes and the pancreas and catalyze the cleavage of bonds between ribonucleotides. Levels of serum RNases are elevated in patients with cancer [116]. Despite this abundance of RNases, circulating RNAs have been found to be unexpectedly stable against RNase degradation, as long as the uncentrifuged blood is stored at 4°C and plasma is processed within 6 h. Also, a single freeze/thaw cycle produces no significant effect on the RNA concentration of plasma or serum [117].
One explanation for the circulating RNAs' stability is encapsulation by protective membrane bound vesicles. These vesicles consist of a lipid bilayer membrane surrounding a small cytosol and are separated into three types: exosomes, microvesicles (MVs, ectosomes or microparticles), and apoptotic bodies (ABs). Each vesicle type can originate from normal or cancerous cells, transfer molecular cargo to both neighboring and distant cells, and modulate cellular behaviors involved in physiology and pathology [118][119][120].
Exosomes were first identified as vesicles with 5′-nucleotidase activity in 1981 by Trams et al. [121] and later described as 30 to 100 nm vesicles of endosomal origin [122]. An attempt to profile the ribonucleic material enclosed within exosomes isolated from the plasma of 3 healthy human blood donors was made using small RNA sequencing libraries designed to capture small non-coding RNAs of 20-40 nucleotides in length [123]. This analysis was recently repeated in a larger cohort of human subjects and generated similar results: the plasma exosomal RNA species are made up of 40.4% mature miRNAs, 40% piRNAs, 2.1% mRNAs and 2.4% lncRNAs [63]. In a recent RNA sequencing analysis of human plasma from 40 individuals, 669 miRNAs, 144 piRNAs and 72 snoRNAs were found to be expressed above one read per million [124].
Interestingly, bovine miRNAs were detected in human plasma exosomes. However, their origin remains to be elucidated, since it is unknown whether dietary miRNAs can enter the human circulation through the gastrointestinal system. Microvesicles are larger vesicles (50 to 1000 nm) created through direct budding from the plasma membrane; in addition to the lipids, cytokines, growth factors, membrane receptors and nucleic acids that exosomes also carry, they contain metalloproteases [119]. Exosomes can be separated from vesicles of different sizes using ultracentrifugation at different speeds, with the larger vesicles pelleting at lower speed than the smaller ones [118]. ABs are 500 to 2000 nm in diameter and are released by cells undergoing apoptosis; they may contain genomic DNA fragments and histones in addition to RNAs. Tumor-derived mRNA associated with apoptotic bodies remains stable in serum, in contrast to free tumor cell-derived mRNA mixed into serum samples, which is lost even when the mRNA is extracted rapidly, i.e. within 1 min after incubation [125,126]. Extracellular vesicles play a critical role in cancer, since they can contain oncogenes, mutated tumor suppressor genes, hypoxia-related molecules, angiogenic factors, immune regulatory proteins, RNAs, and various metabolites; the field of extracellular vesicle research in cancer biology is expanding fast.
Despite the protection provided by extracellular vesicles against RNA degradation, miRNA in plasma can pass through 0.22 μm filters and remain in the supernatant after ultracentrifugation, indicating the non-vesicular origin of a portion of extracellular miRNA [127]. This phenomenon is explained by the fact that miRNA can be transported bound to proteins, in addition to being carried by vesicles. One example of a miRNA-carrying particle is high-density lipoprotein (HDL). HDL can carry both exogenous and endogenous miRNAs to recipient cells, resulting in direct targeting of mRNA reporters [128], and HDL-mediated delivery of miRNAs to recipient cells is dependent on scavenger receptor class B type 1. Furthermore, Nucleophosmin (NPM1, nucleolar phosphoprotein B23, numatrin) is thought to be involved both in the miRNA export process and in protecting extracellular miRNAs from RNase digestion [129]. Another study describes that a large portion of plasma miRNAs cofractionated with protein complexes rather than with vesicles and that miRNAs were sensitive to protease treatment of plasma, indicating that protein complexes protect circulating miRNAs from plasma RNases [130]. Argonaute2 (Ago2) is present in plasma and is the key effector protein of miRNA-mediated RNA silencing. Importantly, the identification of extracellular Ago2-miRNA complexes in plasma raises the possibility that cells release a functional miRNA-induced silencing complex into the circulation. Irrespective of the packaging of circulating RNAs, extracellular RNA secretion is an active and tissue-specific phenomenon, which makes these RNAs biologically significant. Isolation of RNA from plasma or serum without prior separation into subsets is likely to capture all compartments, including membrane-derived vesicles and protein-bound molecules. Since circulating RNAs are biologically functional regardless of the type of carrier, the complete assemblage should be analyzed for their utilization as informative biomarkers.
Method of detection for circulating RNA
There are multitudes of commercial RNA isolation kits available that serve their purpose adequately [131]. Cf-RNA yields are low compared to levels of RNAs of cellular or tissue origin, and depending on the desired type of RNA, diverse methods can be used for either total RNA or exclusively small RNA isolation.
The gold standard for RNA quantitation is qRT-PCR, and this applies to circulating RNAs as well. The required input for this assay is as low as a few nanograms of RNA, which makes qRT-PCR attractive for detection of low-abundance cf-RNA. Whether based on Taqman, Locked-Nucleic-Acid or Sybr-Green technology, RNA-specific qPCR is overall sensitive, the specificity of the assay is high, and results are obtained within a day. A relatively novel technology called Droplet Digital PCR (ddPCR, Bio-Rad™) is described above and can also be applied to RNA; it enables highly reproducible quantitation of low-abundance RNAs. The limitations of RNA-specific qPCR are its low throughput, the lack of suitable housekeeping gene normalizers, and its unsuitability for the discovery of new miRNAs.
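For readers unfamiliar with how relative quantitation from qRT-PCR data is usually reported, the sketch below computes a fold change with the widely used 2^-ΔΔCt approach; the Ct values and the choice of reference assay are hypothetical and are not taken from any study cited here.

```python
def fold_change_ddct(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    """Relative quantitation by the 2^-ddCt method.

    dCt  = Ct(target) - Ct(reference), within each sample group
    ddCt = dCt(case) - dCt(control)
    fold change = 2 ** -ddCt
    """
    d_ct_case = ct_target_case - ct_ref_case
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_case - d_ct_ctrl
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values for a circulating miRNA in patient vs. control plasma,
# normalized against an arbitrary reference assay (e.g. a spike-in control).
print(fold_change_ddct(ct_target_case=26.1, ct_ref_case=22.0,
                       ct_target_ctrl=28.4, ct_ref_ctrl=22.3))
```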
Broad gene expression arrays allow for higher throughput, as they can include several hundreds of target RNAs in a single assay. Arrays are based on either qPCR or hybridization technologies and are commercially offered by ABI, Agilent, Affymetrix, Exiqon, NanoString, Toray, MiRXES and Illumina, among many other companies. These assays require 30-100 ng RNA input. It has been reported that qRT-PCR-based arrays performed better than hybridization platforms with respect to limits of miRNA detection [132]. Adequate data normalization and analysis requires experience and can take several days.
For RNA discovery beyond detection of known target genes, RNA sequencing is necessary. However, cDNA library preparation may introduce sample bias. Deep sequencing with the use of small RNA-cDNA libraries is suitable for shorter RNAs; however, for adequate mRNA and lncRNA transcript discovery, longer-read sequencing may be more suitable.
Recently, next-generation sequencing (NGS) has emerged as an unbiased alternative option with greater dynamic range of detection, increased sensitivity and reproducibility. NGS platforms can overcome fundamental problems with array-based platforms, which rely on hybridization of RNAs to pre-specified probes and therefore have a small dynamic range of detection and limited capacity to discover new ncRNA species. It is also important to realize that cross-comparison of ncRNAs, especially miRNAs, between different platforms remains problematic due to (i) enrichment of ncRNA species that are below or exceed the detection limit, (ii) amplification bias, and (iii) false positive detection from non-full-length RNA sequencing. For example, the NanoString miRNA detection platform, which utilizes solution-based hybridization and a fluorescence-based barcode digital counting system, showed only moderate correlation (Spearman's ρ = 0.49) [133] with an NGS platform using the Illumina TruSeq Small RNA protocol, in which libraries underwent pre-amplification followed by size selection and multiplexed sequencing on shared flow cells.
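As a minimal illustration of the kind of cross-platform comparison described above, the sketch below computes a Spearman rank correlation between two sets of counts for the same miRNAs; the counts are simulated, so the resulting coefficient is illustrative only and is not the value reported in [133].

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical digital counts for the same 200 miRNAs measured on two platforms;
# the values are simulated only to illustrate the comparison.
ngs_counts = rng.lognormal(mean=5.0, sigma=1.5, size=200)
nanostring_counts = ngs_counts * rng.lognormal(mean=0.0, sigma=1.2, size=200)

rho, p_value = spearmanr(ngs_counts, nanostring_counts)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3g}")
```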
Validation of results obtained by any of the aforementioned methods is necessary, and this is usually done with qPCR. Although collection of larger sample numbers is achieved in multi-institutional studies, acquiring robust data is usually problematic in this setting due to technical differences in blood processing, RNA isolation and quantitation methods. Strict methodological standardization must be applied to generate informative circulating RNA data.
Clinical application of circulating RNA
Circulating cell-free RNA has a major potential as a cancer biomarker. A number of RNA species are deregulated as a result of the uncontrolled cell proliferation, stromal remodeling and immune regulation that define cancer. Distinct alteration in circulating RNA reflects dysregulation of cancer immunity, cell growth, proliferation and stromal interaction. Given the systemic nature of cancer, its biology should be studied in the context of the host response, which makes cf-RNAs suitable complementary tools. Besides the non-invasive nature of blood sampling, liquid biopsies allow for serial sample collection at different time points relative to treatments. This is particularly valuable with respect to the promising cancer immune therapy research that relies on the host response.
Summary and future direction of circulating biomarkers
Circulating biomarker development is a fledgling but rapidly growing field in cancer research. Circulating biomarkers will continue to evolve with ongoing improvements in detection limits, decreases in the amount of nucleic acid template required, expansion of the number of genes available for analysis, and reductions in operating cost and time. An overall estimation of tumor characteristics from a snapshot of circulating nucleic acids will no doubt support treatment decisions and the monitoring of cancer, given the dynamic nature of the disease and its heterogeneity. However, the major challenge in biomarker discovery is validation in prospective clinical studies to assess clinical impact. Finally, until circulating biomarkers are thoroughly validated, compared with standardized assessments of treatment response (i.e. RECIST criteria), and the problems of standardization, tumor-liquid biopsy discrepancies and lead-time bias are overcome, they remain experimental and represent an interesting set of research tools. | 2018-04-03T03:51:14.834Z | 2016-06-01T00:00:00.000 | {
"year": 2016,
"sha1": "504d8285e247b12b019f3c251ee24ca82cad969a",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.csbj.2016.05.004",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8918dc3791425014689c609dac6fbdb08f1bda4b",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
86328306 | pes2o/s2orc | v3-fos-license | Why is marsh productivity so high? New insights from eddy covariance and biomass measurements in a Typha marsh
Researchers have a poor understanding of the mechanisms that allow freshwater marshes to achieve rates of net primary production (NPP) that are higher than those reported for most other types of ecosystems. We used an 8-year record of the gross primary production (GPP) and NPP at the San Joaquin Freshwater Marsh (SJFM) in Southern California to determine the relative importance of GPP and carbon use efficiency (CUE; the ratio of total NPP to GPP, calculated as NPP GPP⁻¹) in determining marsh NPP. GPP was calculated from continuous eddy covariance measurements and NPP was calculated from annual harvests. The NPP at the SJFM was typical of highly productive freshwater marshes, while the GPP was similar to that reported for other ecosystem types, including some with comparatively low NPPs. NPP was weakly related to GPP in the same year, and was better correlated with the GPP summed from late in the previous year's growing season to early in the current growing season. This lag was attributed to carbohydrate reserves, which supplement carbon for new leaf growth in the early growing season of the current year. The CUE at the SJFM for the 8-year period was 0.61 ± 0.05. This CUE is larger than that reported for tropical, temperate, and boreal ecosystems, and indicates that high marsh NPP is attributable to a high CUE and not a high GPP. This study underscores the importance of autotrophic respiration and carbon allocation in determining marsh NPP.
Introduction
Freshwater marshes, also known as reed swamps or reed beds, have among the highest rates of net primary production (NPP) reported for terrestrial ecosystems (Westlake, 1963; Whittaker, 1975; Keefe, 1972; Bradbury and Grace, 1983; Mitsch and Gosselink, 1993; Valiela, 1995; Keddy, 2000). Marsh NPP can be as high as that of tropical forests and intensive agricultural ecosystems, but the physiological mechanisms that drive high wetland production are poorly understood. Ecosystem NPP represents the balance between carbon uptake by photosynthesis (gross primary production, GPP) and carbon loss by autotrophic respiration (Ra). There are two likely, non-mutually exclusive explanations for the reports of high productivity by marshes. First, freshwater marshes may have high GPP, which directly leads to high rates of NPP. Alternatively, freshwater marshes may have a high carbon use efficiency (CUE), which allows for high rates of NPP even though GPP is not atypical.
Marsh vegetation and environments have unique attributes that may favor high GPP. These traits include abundant resources, such as nutrients and water (Keefe, 1972;Bradbury and Grace, 1983), and plant canopies with vertically orientated leaves (Jervis, 1969;Longstreth, 1989). Wetlands accumulate nutrients (Bowden, 1987;Childers, 2006), such as nitrogen and phosphorus, which are positively related to leaf photosynthetic capacity (Wright et al., 2004). Carbon gain comes at the expense of water loss, and the high water table associated
with wetlands decreases the chance of water stress, which can reduce leaf photosynthetic capacity, induce leaf senescence, and decrease the length of the growing season (Morgan, 1984). Marshes are dominated by species with vertical leaf orientation, such as Cattail (Typha spp.) and Bullrush (Scirpus spp.). Canopies with vertically oriented leaves allow for greater light penetration into the canopy, and are less prone to self-shading than canopies with horizontally oriented leaves (Sheehy and Cooper, 1973). These factors would lead one to hypothesize that marshes create environments that are suited to maximize GPP. However, our ability to test this hypothesis is limited by the lack of marsh GPP measurements.
Although GPP determines the amount of carbon fixed by the canopy, plant growth (i.e. NPP) is ultimately controlled by the conversion efficiency of photosynthate to plant biomass (Amthor, 1989). This conversion efficiency is known as the Carbon Use Efficiency, and is thought to be largely controlled by plant respiration (Van Iersel, 2003). The regulation of respiration by photosynthetically derived sugars and the coupling of respiration to photosynthesis at long timescales (Dewar et al., 1998) have led some to hypothesize that CUE is constant across ecosystems (Waring et al., 1998; Gifford, 2003). However, methodological limitations associated with deriving CUE (Medlyn and Dewar, 1999), the lack of CUE data from a variety of ecosystem types, and a limited understanding of the physiological mechanisms that drive a constant CUE have led others to question the idea that CUE is constant (Amthor, 2000; DeLucia et al., 2007). Moreover, it remains to be determined how CUE could remain constant when carbon allocation to leaves, roots, and stems differs between ecosystems, and the respiratory cost of maintaining and constructing these different plant tissues varies widely (Penning de Vries et al., 1974; Poorter and Villar, 1997; Chapin, 1989).
Marsh vegetation and environments have several features that may lower the respiratory requirements of the plants and lead to a high CUE relative to other terrestrial systems. Marsh sediments are reduced and contain high amounts of ammonium relative to nitrite and nitrate (Bowden, 1987). Constructing plant tissues with ammonium rather than nitrate can reduce plant respiratory costs by 13% because ammonium does not have to be reduced for incorporation into amino acids, as is the case with nitrate (Poorter and Villar, 1997). Wetland macrophytes allocate a disproportionate portion of their carbon to leaves rather than stems or roots (Gustafson, 1976;Lorenzen et al., 2001), and CUE has been shown to increase with increased investment in leaves relative to roots (DeLucia et al., 2007). Anaerobic conditions created by waterlogged soils are unfavorable habitats for mycorrhizae (Peat and Fitter, 1993). Since mycorrhizae can decrease plant productivity and increase photosynthetic rates (Dunham et al., 2003), the lack of mycorrhizae in wetlands may result in an increased CUE.
We used 8 years of eddy covariance data and peak biomass harvests, and 2 years of belowground biomass harvests, in a Southern California marsh to determine the relative importance of GPP and CUE in determining marsh NPP. The eddy covariance method provides a measure of the net ecosystem exchange of CO 2 (NEE), which can be used to determine whole ecosystem GPP. Recent studies have paired eddy covariance observations with simultaneous measurements of primary production to understand the relationship between carbon uptake and plant growth (Arneth et al., 1998;Curtis et al., 2002;Rocha et al., 2006;Gough et al., 2008). Our goal was to use this strategy to determine the physiological mechanisms that allow marshes to attain high NPP. In this study, we consider carbon fluxes (GPP) and carbon allocation as the direct controllers of NPP, while nutrients and water availability are considered indirect controls that impact NPP by altering GPP or CUE.
Site description
The study was conducted at the San Joaquin Freshwater Marsh (SJFM) reserve located on the University of California, Irvine campus in coastal Orange County (33°39′44.4″N, 117°51′6.1″W) (see Goulden et al., 2007; in press for details). The site was dominated by Cattail (Typha latifolia L.) and water levels were managed for research and wildlife habitat. The SJFM was flooded annually to a depth of ~1 m in the winter of most years, after which water levels gradually declined through evapotranspiration or subsurface drainage (Rocha, 2008). The lone exception to this pattern in the last 10 years occurred in 2004, when the marsh remained dry year-round because of concern about the West-Nile virus and a management decision to reduce mosquito habitat.
2.2. Calculating GPP from eddy covariance observations

NEE was measured using the eddy covariance method (see Goulden et al., 2007; in press for details). Quality of the eddy covariance data was dependent on several conditions, including adequate turbulent mixing (u* > 0.20 m s⁻¹), instrument functioning, and adequate sampling of the ecosystem given the wind direction. Data that did not meet these conditions were excluded from the analysis, and data gaps were filled subsequently. Analysis of the energy budget closure at the SJFM indicated the raw turbulent energy flux measurements underestimated the true energy flux by ~20% (Goulden et al., 2007). This percentage of unaccounted flux is similar to that observed in many other eddy covariance studies, and is presumably caused by transport in low-frequency circulations that are underestimated by a 30-min averaging interval (Mahrt, 1998; Twine et al., 2000). The 20% underestimation of energy flux is generally interpreted as an indication that the CO2 flux is similarly underestimated by 20%, so CO2 fluxes were increased by 20% to account for this underestimation (Twine et al., 2000).
Gross primary production was calculated from eddy covariance derived daytime growing season NEE by separately considering the day and night observations (Goulden et al., 1997). NEE represents the sum of two component fluxes: GPP and total ecosystem respiration (R). At night, GPP is zero and NEE (NEE_night) is equal to total ecosystem respiration (NEE_night = R). The difference between NEE_day and NEE_night can be used to calculate GPP, provided that a realistic approach is adopted for extrapolating nocturnal respiration (Modeled NEE_night) to daytime periods (Eq. (1)):

GPP = NEE_day - Modeled NEE_night    (1)

There is no standardized approach to modeling daytime respiration, and researchers have used a variety of approaches. Some approaches may overestimate daytime respiration and GPP because the respiration model is parameterized with cool nocturnal temperatures and extrapolated to warmer daytime temperatures; other approaches may underestimate respiration and GPP because the empirical model poorly represents the diel and seasonal changes in respiration.
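A minimal sketch of the flux partitioning in Eq. (1), assuming half-hourly NEE values with uptake taken as positive and using the "simple" approach of treating mean nocturnal NEE as modeled daytime respiration; the numbers and the sign convention are illustrative and are not the SJFM data.

```python
import numpy as np

def gpp_simple(nee, is_daytime):
    """Partition half-hourly NEE into GPP using the 'simple' approach:
    daytime respiration is taken as the mean nocturnal NEE, and
    GPP = NEE_day - modeled NEE_night (Eq. (1)).

    Assumed sign convention: uptake is positive, so nocturnal NEE
    (respiratory loss) is negative; adjust for your own data.
    """
    nee = np.asarray(nee, dtype=float)
    is_daytime = np.asarray(is_daytime, dtype=bool)
    r_night = np.nanmean(nee[~is_daytime])          # modeled NEE_night
    gpp = np.where(is_daytime, nee - r_night, 0.0)  # GPP is zero at night
    return gpp

# Hypothetical half-hourly record: negative NEE at night, positive NEE by day.
nee = [-2.1, -1.9, 6.5, 9.8, 7.2, -2.3]
day = [False, False, True, True, True, False]
print(gpp_simple(nee, day))
```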
We calculated GPP using several approaches to test for methodological sensitivity. We used four empirical models of ecosystem respiration (simple (Rocha and Goulden, in press), linear, Q 10 , and the restricted form of Lloyd and Taylor; see Table 1 in Richardson et al., 2006 for details) with air temperature to calculate GPP from NEE day . The simple model used average NEE night to calculate daytime respiration, while the linear model used an empirical relationship between NEE night and air temperature to model daytime respiration. The Q 10 and restricted form of Lloyd and Taylor are empirical exponential models that use air temperature to calculate ecosystem respiration during the day. We also tested the sensitivity of GPP to integration time by using integration times of 15 and 25 days. Respiration models were chosen because they represented a broad range of commonly used respiration model functions to derive GPP from NEE.
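The temperature-response functions below sketch the Q10 model and the restricted form of the Lloyd and Taylor model in the sense of Richardson et al. (2006); the parameter values are hypothetical and would in practice be fitted to nocturnal NEE and air temperature before being extrapolated to daytime periods.

```python
import numpy as np

def resp_q10(t_air, r_ref, q10, t_ref=10.0):
    """Q10 model: R = R_ref * Q10 ** ((T - T_ref) / 10), T in degrees C."""
    return r_ref * q10 ** ((t_air - t_ref) / 10.0)

def resp_lloyd_taylor(t_air, r_ref, e0=308.56, t_ref=10.0, t0=-46.02):
    """Restricted Lloyd & Taylor form with E0 and T0 fixed at the commonly
    used values; temperatures in degrees C."""
    return r_ref * np.exp(e0 * (1.0 / (t_ref - t0) - 1.0 / (t_air - t0)))

# Hypothetical parameters: base respiration of 3 umol CO2 m-2 s-1 at 10 C.
t = np.array([5.0, 15.0, 25.0])
print(resp_q10(t, r_ref=3.0, q10=2.0))
print(resp_lloyd_taylor(t, r_ref=3.0))
```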
Annual GPP was calculated by subtracting modeled total ecosystem respiration from NEE day and integrating GPP over the course of a year. Small gaps were filled using a Michaelis-Menten hyperbolic regression between GPP and solar radiation (<20 days long). A longer gap in 2000 was filled by combining the Michaelis-Menten equation with an estimate of leaf phenology based on the empirical relationship between NEE and reflected radiation. On the whole, 15% of the GPP data were filled, which is typical for a long-term eddy covariance record. These analyses yielded 12 estimates of GPP per year for a total of 108 estimates of GPP for the 1999-2007 record. These estimates were used to calculate the uncertainty in GPP.
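A sketch of the Michaelis-Menten (rectangular hyperbola) light-response fit used for gap filling, with simulated radiation and GPP values standing in for the SJFM record; the parameter names alpha and gpp_max are mine, and the fitted curve is then used to predict GPP for missing periods.

```python
import numpy as np
from scipy.optimize import curve_fit

def light_response(par, alpha, gpp_max):
    """Rectangular hyperbola (Michaelis-Menten form) relating GPP to light."""
    return (alpha * par * gpp_max) / (alpha * par + gpp_max)

rng = np.random.default_rng(1)

# Hypothetical half-hourly data: incoming solar radiation (W m-2) and GPP.
par = rng.uniform(0.0, 1000.0, size=500)
gpp_obs = light_response(par, alpha=0.05, gpp_max=25.0) + rng.normal(0, 1.0, 500)

params, _ = curve_fit(light_response, par, gpp_obs, p0=(0.01, 20.0))
alpha_hat, gpp_max_hat = params

# Fill a gap by predicting GPP from radiation during the missing period.
missing_par = np.array([150.0, 600.0, 900.0])
print(light_response(missing_par, alpha_hat, gpp_max_hat))
```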
Peak biomass observations
We sampled living plants from 1999 to 2007 within thirty 0.25 m² quadrats along a 91.5-m transect that radiated to the southwest of the eddy covariance tower during September (the month of peak biomass). Plants were pulled from the ground, clipped below the crown to remove rhizomes and roots, taken to the lab, oven dried at 65 °C for 2-3 days and weighed. We partitioned growth into leaves, stems, inflorescences, and the crown base. We conducted additional harvests in November 2006 and 2007 in eight 1 m² plots to determine the proportional allocation to aboveground and belowground biomass. Belowground organs (crown bases, rhizomes, and coarse roots) were excavated and separated according to organ type and age. Belowground biomass produced in a given year was identified by texture and color (Jervis, 1969; Gustafson, 1976). Sorted material was oven dried at 65 °C for 2-3 days and weighed.
Calculating NPP from biomass observations
NPP was calculated from observations of peak biomass. We used the growth of leaves, stems and inflorescences as a measure of aboveground net primary production (ANPP) because aboveground components of Typha (i.e. leaves, stems, inflorescences) are produced and senesce every year. Crown bases are also produced every year, and these were harvested and counted as a portion of belowground NPP (BNPP). Coarse root and rhizome production was not measured every year, and we used the average allocation ratios between the crown base and the rhizomes and roots during 2006 and 2007 to estimate the remaining BNPP components during the other years.
The plants at the SJFM are herbaceous perennials that store starch reserves over winter and subsequently use this starch for leaf growth during spring. Carbohydrate storage can complicate the determination of NPP because growth can include carbon that was fixed in a previous year (Gustafson, 1976; Roxburgh et al., 2005). Consequently, we calculated a conservative NPP by correcting NPP estimates for potential double counting of the starch reserve (S_R). S_R in harvested plants was calculated as 50% of the previous year's BNPP, and was based on two independent studies that showed that the end-of-season starch pool in Typha comprises 45-47% of crown, root, and rhizome weight (Gustafson, 1976; Kausch et al., 1981). S_R in 2005 (i.e. the year after the dry down) was estimated from BNPP in 2003. S_R in 1999 was estimated by the average ratio of corrected to uncorrected NPP during the flooded years. This conservative approach to calculating NPP assumes that all of the previous year's starch is used for growing some of the current year's aboveground tissue, and that there is no metabolic cost to convert starch into tissue (Eq. (2)):

NPP = ANPP + BNPP - S_R    (2)

We compared the SJFM's NPP from 1999 and 2005 to NPP values in the Osnabrück dataset (http://www.esapubs.org/archive/ecol/E081/011/) (Esser et al., 2000). This was done to determine if NPP at the SJFM was comparable to that reported for other marshes and ecosystems. The Osnabrück dataset is a compilation of 700 estimates of NPP for natural ecosystems worldwide. Productivity in the Osnabrück dataset is reported as grams of dry weight and was converted to carbon using a conversion factor of 0.45. We only used data from sites that reported species composition, ecosystem type, and author.
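A minimal sketch of the conservative NPP correction in Eq. (2), assuming the starch reserve is 50% of the previous year's BNPP as described above; the biomass values are hypothetical.

```python
def conservative_npp(anpp, bnpp, bnpp_prev, starch_fraction=0.5):
    """Conservative NPP correcting for carbon potentially remobilized from the
    previous year's starch reserve (Eq. (2)):

        S_R = starch_fraction * BNPP(previous year)
        NPP = ANPP + BNPP - S_R
    """
    s_r = starch_fraction * bnpp_prev
    return anpp + bnpp - s_r

# Hypothetical biomass increments in g C m-2 yr-1.
print(conservative_npp(anpp=1400.0, bnpp=600.0, bnpp_prev=550.0))
```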
2.5. Constraining the NPP to GPP ratio

Ecosystem CUE can be calculated using several approaches. Our first estimate of CUE used linear regression to calculate NPP/GPP as the slope of the relationship between NPP and GPP. Regressions were forced through the origin whenever the intercept was not significant at the 95% confidence level. The 95% confidence interval for NPP/GPP was constructed with a Scheffe multiplier based on the F-distribution (Ramsey and Schafer, 2002). Least squares regressions were carried out with Sigmaplot 8.0 (SPSS, Chicago, IL). This approach is consistent with previous approaches used to calculate CUE (Waring et al., 1998; Litton et al., 2007), but is limited because sampling uncertainty in NPP and GPP can markedly alter the slope. Another approach for calculating ecosystem CUE is to integrate NPP and GPP over a long period and calculate the ratio between the two quantities. The limitation of this approach is that errors in the NPP/GPP are proportional to the measurement variability and uncertainty in NPP and GPP. Consequently, we used bootstrap analysis to calculate the uncertainty and 95% confidence interval for ecosystem CUE. The bootstrap technique includes a measure of the uncertainty in the ecosystem CUE by incorporating the variance associated with NPP and GPP, repeatedly sampling these estimates with replacement, and calculating a probability distribution for NPP/GPP. Ecosystem CUE was calculated by integrating our measures of NPP and GPP from 1999 to 2007 using the following equation:

CUE = Σ NPP / Σ GPP    (3)

The bootstrap analysis repeatedly sampled annual GPP and NPP 1000 times and calculated the CUE by integrating these estimates over the 9-year time period and recalculating the CUE. One thousand of these estimates were then randomly chosen to represent the CUE probability distribution for the SJFM. This distribution represented the potential range of the CUE and allowed for the calculation of a 95% confidence interval for the mean (DiCiccio and Efron, 1996). Analyses were accomplished using the Bootstrap MATLAB Toolbox (http://www.csp.curtin.edu.au/downloads/bootstrap_toolbox.html) (Zoubir, 1993).
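The sketch below mirrors the bootstrap logic described above; the original analysis used the Bootstrap MATLAB Toolbox and also incorporated within-year variance in NPP and GPP, whereas this simplified version only resamples years with replacement, and the annual values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical annual NPP and GPP (g C m-2 yr-1) for a multi-year record.
npp = np.array([1500., 1700., 1400., 1800., 1600., 900., 1650., 1550.])
gpp = np.array([2500., 2700., 2400., 2900., 2600., 1500., 2650., 2550.])

n_boot = 1000
cue_samples = np.empty(n_boot)
for i in range(n_boot):
    idx = rng.integers(0, len(npp), size=len(npp))    # resample years with replacement
    cue_samples[i] = npp[idx].sum() / gpp[idx].sum()  # CUE = sum(NPP)/sum(GPP), Eq. (3)

mean_cue = cue_samples.mean()
ci_low, ci_high = np.percentile(cue_samples, [2.5, 97.5])
print(f"CUE = {mean_cue:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```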
Carbon allocation and biomass partitioning
Belowground harvests in 2006 and 2007 revealed that carbon was mostly allocated to aboveground biomass (Table 1). Most of the carbon allocated belowground was used for the growth of crown bases, which represented 22-33% of total biomass. Rhizomes and roots represented a smaller fraction of total biomass (~7-13%). The average ratio of belowground to total biomass at the SJFM was consistent with that reported for other Typha-dominated communities (reported range: 32-70%; Keefe, 1972; Gustafson, 1976; Bradbury and Grace, 1983). These results demonstrate that our annual collections of aboveground green biomass and crown bases captured 87-94% of the total biomass produced in a given year. This increases confidence in our measurement of NPP, and indicates that the uncertainty in NPP associated with our estimates of rhizome and root production is less than 10%.
ANPP, BNPP, and NPP
Cattail production exhibited marked interannual variability (Table 2; see also Rocha and Goulden, in press).

3.3. GPP
Annual GPP at the SJFM also exhibited marked interannual variability (Table 2; see also Rocha and Goulden, in press; Rocha et al., in press). Annual estimates of GPP for the SJFM were minimally influenced by the models used to calculate respiration (Table 3). The simple respiration model produced the lowest estimates of GPP, while the restricted Lloyd and Taylor and Q10 respiration models produced the highest. Changing the integration time also led to differences in annual GPP. The Lloyd and Taylor respiration model with long integration times tended to overestimate GPP because errors in the estimation of GPP increased with increasing temperature. The simple model with short integration times may have underestimated GPP because it incorporated a lower temperature sensitivity. Nonetheless, our results demonstrate that estimates of annual GPP are robust and support previous work that estimated a 10% uncertainty for eddy covariance based annual GPP (Hagen et al., 2006).
3.4. What is the relationship between NPP and GPP at the SJFM?
The amount of biomass produced in a given year was weakly related to the amount of gross carbon uptake in the same year (r2: 0.33; p: 0.14) (Fig. 1A). The poor correlation between GPP and NPP may be attributed to the physiological characteristics of Cattail. Cattails are rhizomatous perennials that allocate a large proportion of their assimilated carbon to carbohydrate storage as starch (Kausch et al., 1981; Gustafson, 1976). Carbon assimilated in a prior year can be stored belowground and remobilized to supplement growth of new leaves in the current year (McNaughton, 1974; Gustafson, 1976; Dickson, 1991; Carbone and Trumbore, 2007). If NPP depends on a proportion of the prior year's GPP, then year-to-year differences in carbohydrate storage and translocation that result from interannual GPP variability could decouple the relationship between NPP and GPP in a given year. Consequently, we hypothesized that incorporating carbon uptake in the previous year would improve the correlation between NPP and GPP.
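To make the lagged-window analysis used in the next paragraph concrete, the sketch below correlates annual NPP with GPP summed over 12-month windows ending in successive calendar months; the monthly GPP and annual NPP series are simulated, so the pattern of correlations is illustrative only.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(5)

# Hypothetical monthly GPP (g C m-2 month-1) for 1999-2007 and annual NPP.
months = pd.date_range("1999-01", "2007-12", freq="MS")
gpp_monthly = pd.Series(rng.uniform(50, 350, size=len(months)), index=months)
npp_annual = pd.Series(rng.uniform(1200, 1900, size=9), index=range(1999, 2008))

# Correlate NPP with GPP summed over the 12-month window that ends in month
# `end_month` of the current year (e.g. 7 = previous August to current July).
for end_month in range(1, 13):
    sums = []
    for year in npp_annual.index:
        end = pd.Timestamp(year=year, month=end_month, day=1)
        window = gpp_monthly.loc[end - pd.DateOffset(months=11):end]
        sums.append(window.sum())
    r, p = pearsonr(sums, npp_annual.values)
    print(f"window ending month {end_month:2d}: r = {r:+.2f}, p = {p:.2f}")
```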
The incorporation of the previous year's carbon uptake markedly improved the correlation between NPP and GPP (Fig. 1B). Statistically significant relationships between NPP and GPP were observed by incorporating the previous year's late growing season gross carbon uptake. Defining GPP as the amount of carbon uptake from the previous August to the current July (GPP August-July) resulted in the best relationship between annual GPP and NPP (r2: 0.76; p: 0.01). Carbohydrates comprise 45% of belowground tissue and can supplement 15% of the carbon used for the current year's peak biomass (Gustafson, 1976; Kausch et al., 1981). The improved correlation between NPP and GPP from the previous year's August to the current year's July implies that carryover from carbohydrate reserves is important in driving year-to-year differences in NPP. Consequently, we used the slope from this relationship to derive NPP/GPP at the SJFM.

[Fig. 1 caption] Relationship between annual GPP and NPP for the SJFM (A) (r2: 0.33; p: 0.14). Pearson correlation coefficients for NPP and GPP calculated using a 12-month period lagged with a time step of a month (B). Black bars denote statistically significant relationships at the 95% confidence level.

3.5. NPP to GPP ratio
The NPP to GPP ratio (the CUE) was robust and independent of the methodology used to derive it. The CUE derived from the slope between NPP and GPP August-July was 0.65 with a 95% confidence interval of 0.14 (Fig. 2A). The bootstrapping technique, which accounted for the variation in GPP and NPP, produced a statistically similar CUE (Fig. 2B). The CUE followed a normal probability distribution and ranged from 0.53 to 0.69. The average CUE from the bootstrapping approach was similar to that derived from the linear regression and was 0.61 with a 95% confidence interval of 0.05. We believe that the bootstrapped average NPP/GPP provided the best estimate of ecosystem CUE because it was less sensitive to uncertainty in NPP and GPP, and also included a degree of variability in annual measures of NPP and GPP. It should be noted that this CUE is much higher than the range reported by Waring et al. (1998).
4.2. Is high SJFM NPP attributable to a high GPP?
We compared GPP data summarized in Falge et al. (2002) and elsewhere with the SJFM GPP from 1999 to 2007 to determine whether high NPP at the SJFM was associated with high annual GPP (Fig. 4). The GPPs for coniferous forests, grasslands, crops, deciduous forests and the SJFM were broadly comparable, while tropical rainforests had the highest GPPs (3200 g C m⁻² year⁻¹). The comparison between tropical forest and SJFM production is particularly striking. The GPP of an Amazonian tropical forest was 130% greater than that at the SJFM, whereas the total NPP at the tropical forest was 20% less than that at the SJFM (Figueira et al., in press). The annual rates of GPP observed at the SJFM were within the range of those reported for most other ecosystem types (Table 2; Fig. 4), including ones with comparatively low NPP. These results indicate that the GPP at the SJFM was similar to that reported for a variety of ecosystem types, and imply that high rates of marsh NPP are not a result of high GPP. We also considered whether internal CO2 recycling might increase GPP in a way that would not be detected by eddy covariance. Constable et al. (1992) found high concentrations of CO2 in the aerenchyma of cattail and hypothesized that this CO2 could be used as a supplementary source of carbon to accelerate photosynthesis and yield high NPP. However, studies using 14C on Typha and other plants with aerenchyma (i.e. Scirpus lacustris, Cyperus papyrus, Allium cepa) have shown that little (i.e. 0.25-2.2%) of the CO2 in the aerenchyma is recycled and used for photosynthesis (McNaughton and Fullem, 1970; Singer et al., 1994; Byrd et al., 1995). The isotopic signature of leaf δ13C at the SJFM confirmed that CO2 recycling does not play a major role in increasing GPP. Leaves that reassimilate respired CO2 should have an unusually negative leaf δ13C (Vogel, 1978). However, the leaf δ13C at the SJFM was typical of that reported for C3 plants (Goulden et al., 2007; Smith and Epstein, 1971), indicating that the eddy covariance measurements did not underestimate GPP due to internal CO2 recycling. In summary, our analysis indicates that GPP cannot explain the high NPP at the SJFM.
4.3. Does the SJFM have a high CUE?
Comparing NPP/GPP between ecosystems indicates that the SJFM's CUE is high. The CUE we observed at the SJFM is much higher than has been reported for boreal (range: 0.23-0.45; mean: 0.31 ± 0.02) and temperate (range: 0.07-0.68) ecosystems (Amthor, 2000; DeLucia et al., 2007). There are few studies that report both NPP and GPP for marsh ecosystems with which to compare our estimates.

[Figure caption] NPP (g C m⁻² year⁻¹) at the SJFM (white box) compared to NPP observed in other ecosystem types (gray boxes). Boxes encompass the median and the 25th and 75th percentiles, while error bars encompass the 10th and 90th percentiles. Outliers are denoted as closed circles. The width of the bar is proportional to the sample size (n), with n = 26 for temperate coniferous forests and n = 5 for lowland tropical rainforests. Data from the Osnabrück database (Esser et al., 2000).

The CUE that we observed at the SJFM is comparable to CUE estimates from other freshwater marshes, and indicates that a high conversion efficiency of assimilated carbon to growth explains the high rates of NPP observed in freshwater marshes. Our conclusion that the SJFM has a high CUE is conservative and includes several measures of uncertainty. Estimates of GPP were corrected for energy balance closure and were largely insensitive to gap-filling method. The use of peak biomass and the exclusion of leaf and root turnover may have underestimated both NPP and CUE by 12% (cf. Dickerman et al., 1986). The similarity of CUE between the SJFM and other Cattail systems indicates that our results are consistent and representative. The greatest uncertainty in our conclusions results from the treatment of carbohydrate storage in calculating NPP. We may have overestimated carbohydrate storage because not all of the starch pool is available for growth (Gustafson, 1976; Kausch et al., 1981; Chapin et al., 1990). However, our conclusions remain conservative because overestimation of carbohydrate reserves decreases NPP and CUE, indicating that the CUE calculated from this approach is lower than the "true" CUE.
4.4. Summary: why are marshes so productive?
We found no evidence that the high rates of marsh NPP are a result of high gross photosynthetic rates. Rather, we attribute the previous reports of high marsh productivity to a high carbon use efficiency. Our conclusions are conservative and are not biased by the assumptions used in estimating GPP or the carbon use efficiency for the following reasons: (1) our estimate of GPP is constrained with the application of several gap filling techniques and all estimates are comparable with rates observed in other ecosystems with lower productivity, (2) carbon use efficiency was higher than observed for other ecosystems, despite the potential for underestimating CUE, and (3) a high carbon use efficiency is the only mechanism that can account for a high NPP and average GPP. NPP was poorly correlated with total photosynthesis in the same year, but incorporating a portion of the previous year's late growing season gross production into the calculation of GPP markedly improved the relationship between NPP and GPP. This improved relationship highlighted the importance of carbohydrate storage and translocation in determining NPP at the SJFM. This study underscores the importance of respiration and carbon allocation in determining marsh productivity and stresses the need to further understand the interaction between these two factors and NPP. | 2019-03-30T13:10:14.653Z | 2009-01-04T00:00:00.000 | {
"year": 2009,
"sha1": "2a2eeb1a8908fd3f06e3725717b0ef8c56e975b5",
"oa_license": "CCBY",
"oa_url": "https://escholarship.org/content/qt0jb5j8sv/qt0jb5j8sv.pdf?t=mrmu8n",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "24fe858dd5c36275de7e6f54e07a2f823ab5b0a8",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
26624963 | pes2o/s2orc | v3-fos-license | Pregnant women’s preferences for mode of delivery questionnaire: Psychometric properties
Introduction: The rate of caesarean delivery is increasing worldwide, and maternal beliefs may influence the mode of delivery. This study aimed to validate a questionnaire on pregnant women's preferences for mode of delivery. Materials and Methods: This was a cross-sectional study conducted in spring 2011 in public and private health care centers in Ahvaz, Iran. A total of 342 low-risk pregnant women were included. After careful item generation and assessment of content and face validity, a 62-item measure was developed and subjects completed the questionnaire. Reliability was estimated using internal consistency, and validity was assessed through face, content, construct and discriminant validity. Data were analyzed using exploratory factor analysis, t-test, and correlations in SPSS 16. Results: Content and face validity showed almost perfect results (content validity ratio = 1 and content validity index = 1). The exploratory factor analysis indicated a 7-subscale measure (eigenvalue >1, factor loading >0.4), and discriminant validity revealed satisfactory results (P < 0.05) for 6 out of 7 subscales. Internal consistency, as measured by Cronbach's alpha coefficient, was acceptable for the subscales. Conclusions: In general, the findings suggest that this newly generated scale is a reliable and valid questionnaire for assessing pregnant women's preferences for mode of delivery. However, further studies are needed to establish stronger psychometric properties for the questionnaire.
INTRODUCTION

Mode of delivery is defined as choosing either vaginal or caesarean section (C-section) delivery. [1] Vaginal delivery is the natural method of birth, though about 10% of normal deliveries may be complicated, and caesarean section delivery is then suggested to prevent either maternal or fetal morbidity and mortality. [2][3][4] However, nowadays, many C-sections are performed upon maternal request with no medical cause.

The rate of caesarean section delivery is rising worldwide, and in some countries it has become part of the culture. [5,6] The World Health Organization (WHO) recommended that no more than 10-15% of pregnancies should be terminated by C-section. [7] Some individual and cultural factors may affect the rate of C-section. [8] The term "elective caesarean section delivery" refers to those C-section deliveries which are performed with no medical cause. [9] It has been well documented that mortality and morbidity for C-section deliveries are greater than for normal vaginal delivery. C-section delivery also increases expenses up to 3 times. [10]
Protecting mothers from unnecessary medical technologies is one of the WHO strategies to promote maternal health. [11] The International Confederation of Midwives has announced that performing caesarean section deliveries with no medical indication is immoral. [12] Although reducing the rate of elective C-section delivery has been considered by health authorities, this rate is increasing in some countries. [13] In the USA, the caesarean delivery rate increased from 20.7% in 1996 to 31.1% in 2006 and to 32.9% in 2009. [14] In Arab countries, this rate is reported to be 15%. [15] According to the results of one study, this rate in Iran is about 50%, showing that Iran is far from the rate advocated by the WHO; therefore, it seems crucial to conduct studies that focus on the reasons for such increases and to promote programs to reduce this health issue. [14]

Considering C-section as a behavior, before any intervention to reduce the rate of C-section deliveries it is essential to understand the reasons for this behavior. [2] Because of the importance of values and beliefs in directing behavior, understanding the underlying elements of behavior is necessary for any health promotion program. As such, a valid and reliable tool is needed to extract personal values and beliefs. Taking previous studies into consideration, there is no exact measure of maternal beliefs: only two studies, mostly focused on the cognitive aspects of behavior, exist. Considering that the nature of human behavior is very complex, with many psychosocial factors affecting it, the available tools do not adequately capture the maternal factors that influence the mode of delivery. [2,16] Therefore, designing a reliable and valid questionnaire to extract the psychological factors related to women's preferences for mode of delivery seems essential. To do so, the results of previous studies can be very helpful. [17]

Fear and anxiety are among the most frequent reasons for choosing C-section, as women may consider themselves at risk of probable morbidities. [5,[18][19][20] Many studies confirmed that negative beliefs are the main reasons for choosing a given mode of delivery. [9,21] Such beliefs, such as perceived threat, as well as the evaluation of benefits and risks, are key constructs of the health belief model (HBM). This model, which is based on behavioral science theory, is an intrapersonal health education model composed of theoretical constructs such as perceived susceptibility, perceived benefit, perceived barriers, and self-efficacy. [20] In addition to these factors, some researchers believe that pain intolerance is another factor affecting the choice of delivery mode; that is to say, this factor is inconsistent with the self-efficacy construct of the HBM. [22,23] Based on the above, the HBM can be an appropriate model for designing the materials. Since it is unlikely that one specific model can predict behavior adequately, it is recommended that other components and beliefs be taken into consideration to gain a more comprehensive understanding. In this regard, some studies demonstrated that the opinions of physicians, midwives, and relatives, as well as following the fashion, are very significant factors in choosing C-section delivery. [5,15,24] This concept is consistent with the construct of normative beliefs found within the theory of planned behavior.
Hence, the aim of this study was to develop a questionnaire to assess pregnant women's preferences for mode of delivery. It was hoped this might help to fill the gaps and perhaps contribute to the existing literature on the topic.
MATERIALS AND METHODS
This was a cross-sectional study carried out in 2011 in public and private health care centers in Ahvaz, in the southwest of Iran. By combining previous theory-based questionnaires and reviewing related textbooks, the researchers developed a questionnaire, which was piloted in a small sample of pregnant women; internal consistency was measured (Cronbach's alpha: 0.70). Several methods were used to verify the validity and reliability of the questionnaire: (1) extracting items from the related texts and questionnaires and interviewing women; (2) estimating content validity based on experts' viewpoints; (3) evaluating face validity based on pregnant women's opinions; (4) using exploratory factor analysis (EFA) to assess construct validity; (5) measuring discriminant validity; and (6) evaluating reliability using Cronbach's alpha.
In the first stage, using the published literature and the viewpoints of professional faculty members, the researchers designed a 62-item questionnaire. The items, based on selected constructs of the HBM and the normative beliefs construct of the theory of planned behavior, were designed to evaluate factors affecting the choice of mode of delivery.
In the second stage, to assess content validity qualitatively, the questionnaire was given to 10 experts and their corrective comments were applied. The content validity ratio (CVR) and content validity index (CVI) were then calculated to assess validity quantitatively, and the results were used to ensure that the best items were selected. For this purpose, the 10 experts, including 4 health education specialists, 4 midwives, and 2 other health experts, were asked to rate each item on a three-point scale (necessary, useful but not necessary, and unnecessary).
Based on their answers, the CVR was calculated; for each item the acceptable CVR cut-off was more than 0.62. [20] Quantitative face validity was evaluated through the impact score, calculated for each item by multiplying the importance of the item by its frequency; impact scores >1.5 were considered suitable. To measure the CVI, the items were reviewed by a panel of experts and rated for simplicity, relevance, and clarity on a four-point Likert-type scale. The CVI of each item was calculated and, as recommended, values of ≥0.80 were considered acceptable. [20] At the end of this stage, 49 items remained. Each item is rated on a five-point Likert scale ranging from strongly agree to strongly disagree, giving a possible score of 1-5 per item.
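To make these screening rules concrete, the short Python sketch below computes the CVR, the item-level CVI, and the impact score for a single hypothetical item; the panel size of 10 experts and the cut-offs mirror the description above, but the individual ratings are invented purely for illustration and are not the study's data.

```python
import numpy as np

# Hypothetical ratings for one candidate item (invented for illustration).
necessity = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 1])    # 10 experts: 1 = "necessary"
relevance = np.array([4, 3, 4, 4, 3, 4, 4, 3, 2, 4])    # experts, 4-point relevance scale
importance = np.array([5, 4, 5, 4, 5, 4, 3, 5, 4, 5])   # women, 5-point importance scale

n_experts = necessity.size
# Lawshe's content validity ratio: (n_essential - N/2) / (N/2); keep if above ~0.62 for 10 experts
cvr = (necessity.sum() - n_experts / 2) / (n_experts / 2)
# Item-level CVI: proportion of experts rating the item 3 or 4 on relevance; keep if >= 0.80
cvi = np.mean(relevance >= 3)
# Impact score: frequency of "important" answers (4 or 5) times the mean importance; keep if > 1.5
impact_score = np.mean(importance >= 4) * importance.mean()

print(f"CVR = {cvr:.2f}, I-CVI = {cvi:.2f}, impact score = {impact_score:.2f}")
```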
In the third step, to assess construct validity and to determine the factor structure of the questionnaire, the questionnaire was administered to a non-random (convenience) sample of 342 pregnant women referred to public and private health care centers. The inclusion criteria were: being aged 18-35 years, having a history of pregnancy without adverse outcomes, not suffering from chronic diseases during the present pregnancy, and having no history of fertility problems. Recorded demographic characteristics included the age and education of the pregnant women and their husbands, gestational age, and monthly family income.
Statistical analysis
Data were analyzed with descriptive and inferential tests using SPSS 15 (SPSS Inc., Chicago, IL, USA). EFA was performed to identify the underlying relationships between the measured variables, using the set of observed variables to identify a set of latent constructs. The Kaiser-Meyer-Olkin (KMO) test was applied to determine the adequacy of the sample size, and a corrected item-total correlation of >0.4 was considered sufficient. [25]

Discriminant validity

Discriminant validity of the instrument was assessed using known-groups comparison, testing how well the questionnaire discriminates between pregnant women with different intentions for their mode of delivery in the absence of a medical indication (either C-section or vaginal delivery). In total, 112 women (31.6%) chose C-section delivery and 120 participants (33.9%) chose vaginal delivery as their definite preference. An independent t-test was used to verify discriminant validity between these two groups.
Reliability
Internal consistency of the instrument was assessed using Cronbach's alpha coefficient; alpha values of ≥0.70 were considered satisfactory. In addition, the correlation of each item with its intended factor was assessed (P < 0.05).
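For reference, a minimal sketch of the Cronbach's alpha computation is shown below; the simulated 342 x 4 response matrix merely stands in for one subscale of the real data, so the function name, seed, and numbers are illustrative assumptions.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_variances.sum() / total_variance)

# Simulated 5-point Likert answers for a 4-item subscale (342 respondents, as in the study).
rng = np.random.default_rng(0)
latent = rng.normal(size=(342, 1))                                  # shared trait
items = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(342, 4))), 1, 5)
print(f"alpha = {cronbach_alpha(items):.3f}")                       # >= 0.70 would be judged satisfactory
```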
Ethics
The ethics committee of Ahvaz Jundishapur University of Medical Sciences approved the study. Informed consent was obtained from participants.
RESULTS
In total, 342 pregnant women completed the questionnaire. The mean age of the women was 23.9 (±4.07) years, and the mean gestational age was 32.1 (±4.3) weeks. The participants' demographic characteristics are shown in Table 1.
The validity analysis showed good values of the CVR (0.86), CVI (0.84), and impact score (IS = 5) for the items. In the qualitative face validity assessment, all participants reported that they had no problems reading and understanding the items. After the content validity phase, 42 items remained for the next stage of the validation process.
The Kaiser-Meyer-Olkin measure of sampling adequacy was 0.738, indicating that the sample size was adequate for factor analysis.
Principal component analysis with VARIMAX rotation was performed on the items, resulting in a seven-factor solution. Table 2 shows the rotated factor matrix of these seven factors and the factor loading of each of the 21 items.
To assess discriminant validity, the scores of each factor were compared between groups defined by their intention to have either vaginal delivery or C-section (Table 3). The t-test results indicated significant differences between the pregnant women who chose C-section delivery and those who chose vaginal birth in self-efficacy, false impression of the benefits of C-section delivery, exaggerating the risks of vaginal delivery, perceived susceptibility, normative beliefs, desire for acceptance, and in the total score (P = 0.001). No such difference was found for the health professionals' opinion factor (P = 0.19). The mean scores of the group choosing vaginal delivery were higher in all of these dimensions; overall, the mean total score was 68.92 ± 9.78 in the group choosing vaginal delivery versus 55.38 ± 8.58 in the group intending to undergo C-section delivery [Table 3].
Cronbach's alpha (internal consistency) for all 21 items of the questionnaire was 0.747. The values for self-efficacy, false impression of the benefits of C-section delivery, exaggerating the risks of vaginal delivery, and normative beliefs were above 0.7, indicating high internal reliability for these dimensions. The remaining dimensions, perceived susceptibility, desire for acceptance, and health professionals' opinion, showed lower values (0.649, 0.534, and 0.332, respectively) [Table 2].
DISCUSSION
The aim of this study was to evaluate the overall psychometric properties of a questionnaire on pregnant women's preferences for mode of delivery. Based on the findings, the developed questionnaire yielded a seven-factor solution. To date, no specific research has been found that focuses exclusively on the behavioral beliefs related to the mode of delivery.
The first subscale extracted from the factor analysis was self-efficacy. This is a key construct within many health education theories and appears to be the most fundamental behavioral construct related to the choice of delivery method. [25-32] Self-efficacy refers to an individual's perception of his or her competence to perform a specific behavior successfully and is derived from Bandura's social learning theory. Self-efficacy can predict health behaviors: for any given behavior, it can motivate individuals to engage in, or even to change, that behavior. Therefore, recognizing this construct helps to explain individual differences in health behaviors. [22,23] The second and third extracted factors were the false perception of the benefits of C-section delivery and the exaggeration of the risks of vaginal birth, respectively. [30] Despite many studies demonstrating that both mother and baby are at greater risk in C-section delivery than in vaginal birth, many people still perceive C-section delivery as carrying less risk than vaginal delivery. [9] Penna et al. showed that high socioeconomic class and awareness of the delivery time were other important reasons why women chose C-section delivery. [7,30] It therefore seems that an appropriate educational program could help pregnant women to appreciate the advantages and disadvantages of vaginal delivery and C-section.
Therefore, it seems necessary that pregnant women be taught accurately about the advantages and disadvantages of C-section delivery. Penna and colleagues and Soltani and Sandall showed that social welfare and control over the exact time of delivery and hospital discharge were the main reasons for women's tendency toward C-section delivery. [7,30] Perceived susceptibility was identified as the fourth factor. It refers to one's perception of the risk or the chances of contracting a disease or health condition; individuals who perceive that they are susceptible to a particular health problem will engage in behaviors to reduce their risk of developing it. Hajian et al. found that when individuals know about the risks of C-section delivery, they are more likely to choose vaginal delivery when there is no medical indication. [19] Consistent with the findings of this research, studies by Penna and Arulkumaran, Liu et al., and Angeja et al. found normative beliefs to be an important factor in pregnant women's decision making about the mode of delivery. [5,30,32] The opinions of the woman's spouse, family, friends, and other close contacts are very influential, which is why such people should also be invited to educational classes.
The sixth factor was the desire for acceptance. Although this factor is not mentioned in any behavioral model, [23] the related item of distorted body image is a variable mentioned in other studies. [33,34] The health professionals' opinion was the last extracted factor. Although the discriminant validity results showed no significant difference between the women choosing C-section delivery and those who chose vaginal delivery, this factor was found to be influential in the studies of Turner et al. and Guittier et al. [35-37] Turner et al. showed that the opinions of midwives were very effective in the choice of the mode of delivery. [37] One possible explanation for the nonsignificant discriminant validity of this subscale is that, while other studies considered midwives' opinions, this study investigated the opinions of all health professionals. Since staff opinions can change pregnant women's preferences for the mode of delivery, educational classes focusing on professional ethics and social skills should also be held for health professionals. [7] Although Cronbach's alpha and the reliability of most factors were at high levels, desire for acceptance showed weak reliability, and the health professionals' opinion subscale was unacceptable in terms of reliability. Following Carmines and Zeller, the number of items is one of the important determinants of Cronbach's alpha, so this result can be explained by the fact that these two factors had only 2 items each. They also note that the mean inter-item correlation is another way to evaluate reliability; their cross-table suggests an expected alpha between 0.333 and 0.572 for such cases. With only 2 items and a mean inter-item correlation of 0.355, the desire-for-acceptance subscale showed a somewhat higher alpha of 0.524. According to this reasoning, if the number of items were doubled to four, alpha would lie between 0.5 and 0.727, and with six items it would rise to 0.60-0.80. [38] Based on Carmines and Zeller's expectations, the alpha for the health professionals' opinion subscale, with 2 items and a mean correlation of 0.247, should be about 0.333; in this study it was measured as 0.332. If the number of items were increased to 4 or 6, the alpha level would be expected to increase to about 0.500 or 0.600. Many experts also believe that the alpha coefficient increases with sample size. [39] It is therefore recommended that further studies examine the effect of these factors.
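The relationship invoked here between the number of items, the mean inter-item correlation, and the expected alpha can be illustrated with the standardized (Spearman-Brown-type) alpha formula. This is a generic reliability formula rather than the exact cross-table cited above, so the extrapolated values below are an assumption-based illustration; the 0.355 correlation is the one reported for the two-item desire-for-acceptance subscale.

```python
def standardized_alpha(k, mean_r):
    """Standardized Cronbach's alpha for k items with mean inter-item correlation mean_r."""
    return k * mean_r / (1 + (k - 1) * mean_r)

# Two-item subscale with mean inter-item correlation 0.355 (desire for acceptance).
for k in (2, 4, 6):
    print(k, round(standardized_alpha(k, 0.355), 3))
# 2 items -> 0.524, matching the alpha reported for this subscale;
# lengthening the subscale to 4 or 6 items would be expected to raise alpha toward roughly 0.69-0.77.
```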
Limitation
The lack of a confirmatory factor analysis is a limitation of this study.
CONCLUSIONS
In general, the findings suggest that this newly generated scale is a reliable and valid specific questionnaire for assessing pregnant women's preferences for mode of delivery. However, further studies are needed to establish stronger psychometric properties for the questionnaire. | 2018-04-03T06:21:12.661Z | 2017-04-19T00:00:00.000 | {
"year": 2017,
"sha1": "2129e4d1b8acb8a3e8adbdc583954f1660f5209f",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/2277-9531.204738",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "8fc5fd3086c1959d5865ebf10d0eaeeb2694d1fa",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55905872 | pes2o/s2orc | v3-fos-license | Renewable energy strategies to overcome power shortage in Kurdistan Region of Iraq
The aim of this paper is to investigate the possibility of applying renewable energy strategies in the Kurdistan Region of Iraq to overcome the shortage of electricity supply. Finding alternative renewable sources could overcome the problem. Renewable energy would also reduce CO2 emissions in the cities, which are considered the main source of pollution, and would thus contribute to reducing the effects of global warming. Based on a survey of the literature, the study investigates direct solar renewable energy through two of the main renewable technology categories for producing electricity. Photovoltaic and wind power technologies can both be deployed in the region to overcome the power shortage.
Introduction
Renewable energy has become the main source of electricity in many developed and developing countries. Its use is increasing, and recently large investments have been made to enable countries to produce renewable energy more cost-effectively. Eighteen countries produce more than half of their electric power needs from renewable energy; Norway, Iceland, New Zealand, Canada, and Brazil are among them (Meisen and Garzke, 2008). Renewable energy can provide our planet with low-carbon, cost-effective power. Economic progress and national development are related to the demand for energy, and in the last few years energy demand has increased. Although the Kurdistan area is rich in oil, it has experienced energy shortages from the beginning of oil exploration in the region until today.
The Kurdistan Region of Iraq faces power blackouts lasting more than half of the day in some seasons because of the shortage in electricity production. The economic crisis and the political conflicts between Kurdistan and the central government in Baghdad have magnified the problem, and investments in the power sector, which mainly depended on producing power through non-conventional resources, came to an abrupt halt in 2014. The paper addresses two questions: 1) to what extent could renewable energy be a remedy for Kurdistan's energy shortage, and 2) which renewable energy technology is effective in solving the power shortage problem? The aim of the study is to assess the possibility of applying renewable energy strategies in the Kurdistan Region of Iraq in order to overcome the shortage of electricity supply.
The paper seeks to identify the potential of the Kurdistan region to use renewable energy to overcome the energy shortage it has faced for a long time. The study attempts to evaluate the use of renewable energy in the Kurdistan region as an alternative to fossil fuels, based on its geographical location and climate. The paper hypothesizes that if renewable energy is applied as an alternative or complementary source of power, then the KRG (Kurdistan Regional Government) can improve the power situation in the Kurdistan region. To date, no papers have investigated this topic.
Renewable Energy
"Renewable energy" is the energy that flows through the environment in a natural way. There are many sources of these energy flows such as solar radiation incident on the earth's surface. Direct solar radiation is naturally the main cause for a number of other energy flows as; wind energy results from thermal inclination or variation across the earth's surface due to radiation incidence; wave energy results from the effects of winds on the oceans; and biomass energy, which is the chemical energy concealed in living organisms like a plants produces through photosynthesis process.
Part of the solar radiation is also converted to latent heat, which results in the potential energy embedded in the hydrological cycle. There are other sources of energy flows apart from solar radiation, such as the gravitational force between the earth and the moon, which is known as tidal power, and geothermal energy, which comes from the heat of the earth's core. The paper will focus on two of these renewable energy sources, solar power and wind power, as strategies to be applied, because of their obvious presence in the Kurdistan region's climate.
Solar Power
Solar energy is the most abundant permanent energy resource on earth and has two forms: direct (solar radiation) and indirect (wind, biomass, hydro, ocean, etc.). This paper is limited to the direct use of solar radiation and focuses on solar photovoltaic (PV) technology to produce energy. The direct solar resource is massive: solar radiation is absorbed on earth at an average rate of 120,000 TW (1 TW = 1 terawatt = 1×10^12 watts) (Lewis, 2007, p.816). However, this resource reaches the earth as a relatively diffuse flow, partly because of scattering by fine water droplets, fog, and mist (Georgescu-Roegen, 1975). A solar photovoltaic system is a technology that converts sunlight directly into electricity using the photoelectric effect, whereby light causes matter to emit electrons. Many researchers have investigated the advantages of solar power for residential, commercial, and industrial consumption over the last twenty years. Japan and Germany pioneered large-scale electricity generation by solar photovoltaics (PV) in the last decade of the twentieth century, and both countries were leaders in the production of solar power technologies. Recently, China has developed substantial solar power capacity, taking advantage of cheap labor and government subsidies, which has decreased the cost of solar power generation. The environmental and economic advantages of solar photovoltaic technology are obvious: reducing the use of fossil fuels such as coal, petrol, and natural gas to generate power will reduce CO2 emissions and mitigate the effects of global warming.
Nowadays, the cost of power generation with conventional solar PV technologies has been significantly reduced because of the development and increased efficiency of solar power technologies; in the USA, for example, the cost of power has fallen through the use of PV panels to generate electricity. On the other hand, solar technologies also have disadvantages, including land degradation, effects on the aesthetic value of buildings using the technology, and the chemical effects of their materials (Abolhosseini, et al., 2014). The efficiency of commercial PV panel systems is between 15% and 20% (Kazem, et al., 2014, p.735).
Wind Power
Wind is one of the world's fastest-growing electricity sources. Wind energy is the collective kinetic energy of the moving air; the flow of wind against turbine blades creates a rotational movement that can be used to generate electricity. Wind turbines used for generating electricity have two or three blades fixed on a horizontal axis and generate power according to the wind speed. In 1996, installed wind power capacity reached 6,000 MW; more than 3,000 MW of this capacity was in Europe and much of the remainder in the USA. Germany made giant leaps in this field: in 1994 it had an installed capacity of around 632 MW, and by the beginning of 1998 this had reached 2,079 MW. Performance in this field continues to improve and costs continue to fall; researchers estimate that the global market for wind energy will reach $133 billion by 2020 (Jackson, 2007).
Kurdistan Region Of Iraq
The Kurdistan Region is located in south-west Asia, in the north-east of Iraq. The region borders Turkey to the north, Syria to the west, Iran to the east, and the rest of Iraq to the south. It consists of three governorates: Erbil (Hawler), the capital of the region; Sulaymani, the cultural capital; and Duhok. The region lies within the Federal Republic of Iraq according to Article 62 of the Iraqi Constitution. The area of the region is 42,812 km2 without the disputed areas such as Kirkuk, Khanaqin, and Shangar; the total area including these disputed areas is around 73,618 km2, which represents almost 17% of the total area of the Republic of Iraq. The Kurdistan region lies between longitudes 42°25' E and 46°15' E and between latitudes 34°42' N and 37°22' N. The areas of the three governorates, Sulaymani, Erbil, and Duhok, are 17,023 km2, 15,074 km2, and 11,715 km2, respectively (Rashid, 2014).
The existing power generation scenario in the Region
The demand for power in the Kurdistan region has increased dramatically within the last decade. The total power demand in the Region, based on official data, was 829 MW (megawatts) in 2004 and had increased to 3,279 MW by 2012; that is, demand increased approximately fourfold in less than a decade (Ali, et al., 2015). Power production in the Kurdistan region also increased during the last decade, but the data show a shortfall in power supply of 579 MW in 2012, meaning that only about 82% of total power demand was met by the government. Demand has increased further since then, especially after the massive displacement of people from other parts of Iraq to the region since 2014 because of the fighting against ISIS. The KRG (Kurdistan Regional Government) can supply electricity for between 12 and 19 hours per day, a figure that varies with the seasons and with demand; the remaining hours are covered by private generators.
Figure 1. The Production of Electricity in Kurdistan Region of Iraq
Source: (KRG, 2013)

According to the Ministry of Planning of the Kurdistan region of Iraq, five billion US dollars (US$ 5 billion) are required to overcome the electricity shortage in the region by 2020.
Power generation in the Kurdistan region mainly depends on natural gas and on hydropower from the Darbandekhan and Dukan dams, which have a combined design capacity of 650 MW but whose output was at times reduced to 186 MW and 146 MW, respectively, in 2006 (The Kurdistan region, 2009). The region also receives electricity from neighboring countries such as Iran and Turkey, but the regional government has a vision of exploring the potential of renewable energy sources. According to this vision, renewable energy may be costly because of the initial cost of building the plants that use it, so steps toward renewable generation technology should proceed carefully; despite the environmental advantages of these technologies, they must also be evaluated financially (Ministry of Planning-KRG, 2013).
Renewable energy in Kurdistan Region
Kurdistan's geographical position gives it a favorable situation regarding solar energy potential. The region lies between latitudes 34°42' N and 37°22' N, a zone with highly abundant solar irradiation (Saeed and Qadir, 2010). The solar energy potential of the region can be evaluated from the annual solar radiation: the average for the Kurdistan region is 6,318.83 MJ/m2/year, equal to 1,755.23 kWh/m2/year, or about 4.81 kWh/m2/day (Abdul-Wahid, et al., 2010). Such an average level of solar energy is encouraging for establishing a grid-connected PV system to reduce the electricity shortage problem. Regarding the wind resource in the Kurdistan region, the climatic data indicate that in winter Kurdistan is influenced by Mediterranean cyclones that move east to north-east over the region, while Arabian Sea cyclones move northward over the Persian Gulf, creating high humidity and bringing recognizable amounts of precipitation to the Kurdistan region. Occasionally, European winter cyclones move eastward to the south-east of Turkey and over the mountainous parts of Kurdistan, bringing substantial amounts of rain and snow. In summer, the Kurdistan region is influenced by Mediterranean anticyclones and subtropical high-pressure belts and centers; the subtropical high-pressure centers that move from north to east and west to north above the Arabian Peninsula bring dusty winds to the region. The daily temperature can drop to -10 °C in the cold season, and in the hot season the daily maximum can exceed 50 °C (Saeed, 2012). These conditions expose the region to large amounts of wind, giving it the potential to use wind to generate power.
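The daily figure quoted above follows directly from the annual irradiation; the short conversion below (plain Python, using only the values already given in the text) makes the arithmetic explicit.

```python
# Converting the reported annual solar irradiation into daily terms (1 kWh = 3.6 MJ).
annual_MJ_per_m2 = 6318.83
annual_kWh_per_m2 = annual_MJ_per_m2 / 3.6       # about 1755.2 kWh/m^2/year
daily_kWh_per_m2 = annual_kWh_per_m2 / 365       # about 4.81 kWh/m^2/day
print(round(annual_kWh_per_m2, 2), round(daily_kWh_per_m2, 2))
```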
Methods And Materials
The methodology employed in this paper is a qualitative study based on a literature review of several credible pieces of research. The paper selected studies on renewable energy carried out in different places inside the Kurdistan region in order to obtain a more comprehensive understanding of the subject. The main aim of this research is to assess the potential of renewable energy to overcome the power shortage in the Kurdistan region and to evaluate the effect of engagement with renewable energy on the problems the region faces today: a shortage in power production, a financial crisis, and environmental challenges. We limit the study to the direct solar category and focus on solar photovoltaic (PV) and wind power technology, drawing on the literature and the empirical tests reported in these two specific fields. We then extract indicators for judging the usability of these two systems to solve the region's problems by reviewing the results reported in the literature. To reach proper findings, we quantify the outcomes in three directions: power generation availability, financial considerations, and environmental impact.
Solar Photovoltaic technology potential in Kurdistan Region
Recently, several studies of solar energy and its aspects in the Kurdistan region of Iraq have been carried out. Data for Erbil city were obtained from the solar radiation database PVGIS-CMSAF. The results demonstrate that the potential of photovoltaic panel technology to produce electricity varies with the time of day and the period of the year (see Table 1).
According to the table, the average rate of power output in the Erbil area over the year is 0.79 kWh/m2. Another study, carried out in Koya city, investigated the potential of a PV system to produce 200 kW of power; the results showed a total output equal to 0.3 kWh/m2, with a total area covered by PV panels of 1,400 square meters. The Photovoltaic Geographical Information System (PVGIS) software was used in that work to evaluate the irradiation rate and optimal inclination and to assess the average obtainable energy. The PV system producing 200 kW costs US$ 326,340 including maintenance and operation. A recent study in Sulaymani investigated the CO2 emission reduction achievable by using a solar photovoltaic (PV) project to supply a 315 kW load, comparing the emissions of electricity produced by PV panels with those of electricity produced from fossil fuels. The study showed that the average CO2 emission for PV is 0.105 kg CO2/kWh, while coal gives 0.909 kg CO2/kWh. These values can be used to estimate the reduction in CO2 emissions achieved by using PV to produce electricity compared with systems that use fossil fuels.
Wind Power technology potential in Kurdistan Region
Many studies have been carried out on wind power generation in the Kurdistan region. One of these investigated several stations and reported the wind speed duration per year suitable for producing power from wind. The results showed that several places in the Kurdistan region have good wind potential. The highest wind speed was found in the Chamchamal area, fluctuating between 13.66 m/s (meters per second) in August and 6.50 m/s in February, and the average output power per square meter in this area exceeded 1 kWh/m2 for 9 months of the year (see Table 2). The method applied in this study was a simulation with the code HOMER, a computer model developed by the National Renewable Energy Laboratory of the U.S. Department of Energy that helps evaluate design options for both grid-connected and off-grid power systems (Husami, 2007). The results of that study are promising for wind power in the region (source: Husami, 2007). Another piece of research in the Kurdistan region aimed to identify potential barriers to wind farm development and to assess the capacity for large-scale wind projects. ArcGIS software was applied for this purpose to determine optimal areas for wind projects based on several factors, such as proximity to an electrical grid connection and the availability of service roads to transport large turbine equipment. The research suggested areas within 30 km of a 132 kV substation or power station, and several places were identified and proposed for wind power development in the Kurdistan region. The proposed places were identified by two systems: the geographical coordinate system (longitude/easting and latitude/northing) and the global UTM system (see Table 3) (Esmael, et al., 2013).
The Garmyan zone, located in the south of the Kurdistan region, has also been investigated for wind power generation; places such as Kirkuk, Kalar, Khanaqin, and Touz Khormato were examined. The study chose levels of 10 meters and 50 meters above the ground for this assessment. The annual wind energy densities were weak for Kirkuk, Kalar, Khanaqin, and Touz Khormato at the 10-meter level, and were 292.32, 696.87, 695.39, and 671.93 kWh/year/m2, respectively, at the 50-meter level. The results showed that the wind power in this zone is too small for grid-connected power system purposes; wind power there can be applied only for agricultural applications, such as powering well-water pumps down to 100 meters depth using windmills, especially in summer and spring when the higher wind speeds were recorded (Ibrahim and Saeed, 2010).
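To give a feel for how strongly wind energy density depends on wind speed, the sketch below applies the standard kinetic-energy relation P/A = 0.5·ρ·v³ to the two Chamchamal mean speeds quoted earlier. The relation, the assumed air density of 1.225 kg/m³, and the use of a single mean speed (rather than the full speed distribution and turbine efficiency) are textbook simplifications, not figures taken from the cited studies.

```python
RHO = 1.225              # assumed air density, kg/m^3
HOURS_PER_YEAR = 8760

def annual_wind_energy_density(v_mean_m_per_s):
    """Rough annual wind energy density in kWh/m^2/year from a mean wind speed.

    Uses P/A = 0.5 * rho * v^3 with the mean speed only, so it ignores the speed
    distribution and any turbine efficiency; intended only as an order-of-magnitude check.
    """
    power_density_w_per_m2 = 0.5 * RHO * v_mean_m_per_s ** 3
    return power_density_w_per_m2 * HOURS_PER_YEAR / 1000.0

for v in (6.50, 13.66):          # February and August mean speeds reported for Chamchamal
    print(f"{v} m/s -> {annual_wind_energy_density(v):,.0f} kWh/m^2/year")
```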
The average size of grid-connected wind turbines is around 1.16 MW, with 2 to 3 MW typical in new projects and even larger machines available, such as REPower's 5 MW wind turbine. Wind turbines grouped together are referred to as "wind farms"; a wind farm consists of the turbines themselves plus site-access roads, buildings, and the grid-connection point. Onshore wind farms typically cost between USD 1,800/kW and USD 2,200/kW in major markets, although wind farms have cost as low as USD 1,300 to USD 1,400/kW in China and Denmark (International Renewable Energy Agency, 2012). Environmentally, the IEA (International Energy Agency) estimates the average CO2 emission from wind generation at 0.015 kg/kWh (Thomas and Harrison, 2015).
Discussion And Findings
According to the literature review, the power need of the Kurdistan region is 3,279 MW and the actual available power from all sources is 2,700 MW; hence the shortage is around 579 MW. These data date back to the end of 2012, so actual needs today are greater, but the paper works with this number as credible published data, and the proposed solutions are flexible and applicable in the same way to a larger future shortage. Theoretical analysis of recent studies of photovoltaic (PV) panel potential in the Kurdistan region demonstrates that this system is applicable and promising in several areas of the region. The average power output over the year was recorded as 0.30 kWh/m2 in some places and up to 0.79 kWh/m2 in specific locations, and the efficiency of the system is between 15% and 19%. The cost of a PV project generating 200 kW was estimated at US$ 326,340, i.e., US$ 1,631.7 per kW as initial cost plus maintenance, and the studies also indicate that each kW requires about 7 square meters of land. Hence, the total cost of overcoming the region's electricity shortage with PV panels would be around US$ 944,754,300, and the total land to be covered would be roughly 4,053,000 square meters, or about 4.05 square kilometers. The average CO2 emission for PV is 0.105 kg CO2/kWh, while coal gives 0.909 kg CO2/kWh and natural gas about 0.5 kg CO2/kWh. The paper compares the carbon dioxide emission per 1 kWh of electricity generated by this system with that of natural gas, since natural gas is the main source of electricity generation in the Kurdistan region; natural-gas-powered electricity generation has CO2 emissions roughly half those of coal, about 0.5 kg/kWh (Parliamentary Office of Science and Technology, 2006). The carbon footprint of PV electricity production is therefore only 21% of that of natural gas generation, which is a clear environmental advantage (see Table 4). The findings also show that the average power output in some places in the Kurdistan region exceeded 1 kWh/m2 for 9 months of the year, the exceptions being December, February, and April. Recent research has identified many places in Kurdistan as optimal for wind farm projects; software-based analysis of their climatic potential and their accessibility to main roads and the national electrical grid confirmed the potential of those places. The results are therefore encouraging for grid-connected wind power technology to help overcome the electricity shortage in the region. The literature review determined that the southern part of the Kurdistan region of Iraq, called the Garmian zone, is not very promising: its climatic characteristics are not suitable for a grid-connected power system, and it could be useful only for small-scale power generation for agricultural purposes, such as supplying well-water pumps down to 100 meters depth, or for other limited uses through windmills.
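The PV cost and land-area figures in the previous paragraph can be reproduced with a few lines of arithmetic; the sketch below uses only the numbers already quoted in the text (the 579 MW shortage and the 200 kW Koya estimates), so it is a consistency check rather than new data.

```python
# Reproducing the PV sizing arithmetic from the figures quoted in the text.
shortage_kW = 579 * 1000                    # 579 MW shortfall
cost_per_kW = 326_340 / 200                 # US$/kW from the 200 kW Koya estimate (~1631.7)
area_per_kW = 1400 / 200                    # m^2/kW from the same study (7 m^2/kW)

total_cost_usd = shortage_kW * cost_per_kW          # ~944.8 million US$
total_area_km2 = shortage_kW * area_per_kW / 1e6    # ~4.05 km^2
print(f"{total_cost_usd:,.0f} US$, {total_area_km2:.2f} km^2")
```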
The studies favor hub heights of 10 to 50 meters above the ground for better results. The CO2 emission of electricity generated by this technology is between 0.012 and 0.015 kg/kWh. Turbines of 2 MW or 3 MW, or even 5 MW, could be installed in the Kurdistan region; 2 MW turbines are suggested here as the easier and more practical solution, avoiding construction and technological obstacles. The initial (installation) cost of wind capacity is between US$ 1,800 and US$ 2,200 per kW according to the reviewed literature, and the study assumes US$ 2,000 per kW as an average. Thus, the total cost of overcoming the region's electricity shortage would be US$ 1,158,000,000, a figure that includes establishing the wind turbine farms, service roads, grid connections, and site buildings. About 290 wind turbines would be needed, and the carbon dioxide emission of wind-generated electricity would be only 3% of that of electricity generated from natural gas (see Table 5). According to the Ministry of Planning of the Kurdistan region, the region needs US$ 5,000,000,000 to overcome the problem; the present results indicate that the initial cost of solving the problem through renewable energy is US$ 1,158,000,000, roughly a fifth of the expected budget.
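The corresponding wind-farm figures, and the emission ratios used in the comparison with natural gas taken up in the next paragraph, follow the same pattern; all inputs below are the values quoted in the text, and the 2,000 US$/kW average is the assumption stated above.

```python
import math

# Wind-farm sizing and CO2 comparison using the figures quoted in the text.
shortage_MW = 579
turbine_MW = 2
cost_per_kW = 2000                                   # assumed average of the 1800-2200 US$/kW range

n_turbines = math.ceil(shortage_MW / turbine_MW)     # 290 turbines
total_cost_usd = shortage_MW * 1000 * cost_per_kW    # 1,158,000,000 US$

co2_gas, co2_wind, co2_pv = 0.5, 0.015, 0.105        # kg CO2 per kWh
print(n_turbines, f"{total_cost_usd:,}",
      f"wind/gas = {co2_wind / co2_gas:.0%}",        # 3%, i.e. a ~97% reduction
      f"PV/gas = {co2_pv / co2_gas:.0%}")            # 21%, i.e. a ~79% reduction
```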
Renewable energy would give a 97% (wind power) to 79% (PV panels) reduction in CO2 emissions compared with using natural gas to generate electricity. This significant reduction in the carbon footprint would benefit the global climate; hence, international financial support for such projects would be helpful in reducing global warming.
Combining both systems would strengthen the supply by drawing on more than one renewable resource to generate electricity.
Conclusion
Theoretical analyses of several studies and pieces of research were carried out. The results show that the Kurdistan region has good potential for applying renewable energy strategies to generate electricity. The potential fluctuates from place to place according to micro-climatic characteristics, and it also changes with the season and even within a single day. The paper concludes that both renewable energy systems, photovoltaic and wind power technologies, can be deployed in the region to overcome the power shortage. The initial cost is relatively high and approximately the same for both systems, but it is almost one-fifth of the budget estimated by the Kurdistan regional government to solve this problem.
These projects become cheaper when a payback period of 20 to 25 years, the assumed lifetime of such projects, is taken into account, indicating that they are more economical than other types of solutions. The suggested renewable energy strategies for generating electricity would also have a positive impact on the environment: the carbon footprint would be reduced by more than 93% with wind power technology and by 79% with the solar photovoltaic system compared with using natural gas to generate electricity. Consequently, the global environment would improve, which is a reason for other countries to provide financial support. Involving both technologies to overcome the electricity shortage in the region would be the more responsible decision, as it diversifies the resources.
The study opens the door for researchers to investigate the potential of other renewable energy sources in the Kurdistan region of Iraq as alternative or complementary solutions to the power shortage. | 2018-12-11T13:15:28.843Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "25ca7ceef494c1d63fa0cdda8becbf895a49bda5",
"oa_license": "CCBYSA",
"oa_url": "https://scindeks-clanci.ceon.rs/data/pdf/0350-0373/2017/0350-03731702007A.pdf",
"oa_status": "GOLD",
"pdf_src": "Unpaywall",
"pdf_hash": "25ca7ceef494c1d63fa0cdda8becbf895a49bda5",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Economics"
]
} |
205896932 | pes2o/s2orc | v3-fos-license | A new approach to constructing efficient stiffly accurate exponential propagation iterative methods of Runge-Kutta type (EPIRK)
The structural flexibility of the exponential propagation iterative methods of Runge-Kutta type (EPIRK) enables construction of particularly efficient exponential time integrators. While the EPIRK methods have been shown to perform well on stiff problems, all of the schemes proposed up to now have been derived using classical order conditions. In this paper we extend the stiff order conditions and the convergence theory developed for the exponential Rosenbrock methods to the EPIRK integrators. We derive stiff order conditions for the EPIRK methods and develop algorithms to solve them to obtain specific schemes. Moreover, we propose a new approach to constructing particularly efficient EPIRK integrators that are optimized to work with an adaptive Krylov algorithm. We use a set of numerical examples to illustrate the computational advantages that the newly constructed EPIRK methods offer compared to previously proposed exponential integrators.
Introduction
Stiff systems of differential equations of the form

$$u'(t) = f(u(t)), \qquad u(t_0) = u_0, \qquad (1.1)$$

are routinely encountered in a wide variety of scientific and engineering applications. Obtaining the numerical solution to this problem over a time interval that is long compared to the fastest scales in the system is a challenging task that has traditionally been addressed with implicit time integrators. Implicit methods have better stability properties than explicit techniques and thus allow numerical integration of (1.1) with larger time steps. However, while an implicit method can outperform an explicit scheme, it too is affected by the stiffness of the problem, which manifests itself in the solution of the implicit equations at each time step. A general stiff system of type (1.1) is typically solved with an implicit method that has a Newton iteration embedded within each time step. Each Newton iteration in turn requires approximation of the product of a rational function of the Jacobian with a vector, $(I - c\, f'(u))^{-1} v$, where $c$ is a constant, $I$ is the $N \times N$ identity matrix, and $u$ and $v$ are $N$-dimensional vectors. For a general stiff matrix $f'(u)$, the method of choice to approximate $(I - c\, f'(u))^{-1} v$ is usually a Krylov-projection based algorithm such as GMRES. Stiffness of the matrix $f'(u)$ results in slow convergence of any Krylov-projection-type algorithm. Developing an efficient preconditioner is often essential to making an implicit Newton-Krylov time integrator sufficiently fast, but constructing such a preconditioner can be a difficult and sometimes impossible task. Consequently, the development of more efficient time integrators is an important problem in numerical analysis. Exponential integrators have received attention over the past decade as an alternative to implicit methods for stiff systems of type (1.1). Just like implicit methods, exponential integrators possess good stability properties, but they require evaluation of exponential-like, rather than rational, matrix function-vector products. Using a Krylov-projection based method to evaluate an exponential-like function can save a significant amount of computational time compared to the rational function evaluation needed for an implicit integrator. The exponential propagation iterative methods of Runge-Kutta type (EPIRK) framework has been introduced to enable construction of particularly efficient exponential methods. The general formulation of the EPIRK methods is

$$U_{ni} = u_n + \alpha_{i1}\,\psi_{i1}(g_{i1} h_n A_{i1})\, h_n f(u_n) + h_n \sum_{j=2}^{i-1} \alpha_{ij}\,\psi_{ij}(g_{ij} h_n A_{ij})\, \Delta^{(j-1)} r(u_n), \qquad i = 2, \ldots, s,$$
$$u_{n+1} = u_n + \beta_1\,\psi_{s+1,1}(g_{s+1,1} h_n A_{s+1,1})\, h_n f(u_n) + h_n \sum_{j=2}^{s} \beta_j\,\psi_{s+1,j}(g_{s+1,j} h_n A_{s+1,j})\, \Delta^{(j-1)} r(u_n), \qquad (1.2)$$

where $h_n = t_{n+1} - t_n$ is the time step and $\Delta^{(k)}$ denotes the $k$th forward difference vector. As described in [25], each matrix $A_{ij}$ can be either the full Jacobian $J_n = f'(u_n)$ or a part of the full Jacobian if the operator $f(u)$ can be partitioned in some meaningful way. For example, we can set $A_{ij} = L$ when the right-hand-side operator in (1.1) can be partitioned as $f(u) = Lu + N(u)$ with the stiffness contained in the linear portion $L$. The function $r(u)$ can either be the nonlinear remainder $r(u) = f(u) - f(u_n) - J_n(u - u_n)$ or $r(u) = N(u) - N(u_n)$ for the partitioned operator $f(u)$. To obtain a fully exponential EPIRK integrator, the functions $\psi_{ij}(z)$ are chosen to be linear combinations of the exponential-like functions

$$\varphi_0(z) = e^z, \qquad \varphi_k(z) = \int_0^1 e^{(1-\theta)z}\, \frac{\theta^{k-1}}{(k-1)!}\, d\theta, \quad k \ge 1.$$

It is also possible to choose some of these functions to be rational, $\psi_{ij}(z) = 1/(1-z)$, to derive implicit-exponential integrators [20,25], which can be used in cases when an efficient preconditioner is available for the full Jacobian $J_n$ or its stiff part $L$.
The main advantages of the EPIRK framework (1.2) are the flexibility of the choices for $A_{ij}$ and $\psi_{ij}(z)$ and the degrees of freedom in constructing particular integrators, represented by the coefficients $\alpha_{ij}$, $\beta_j$ and $g_{ij}$. In particular, as shown in [26,20], optimizing the coefficients $g_{ij}$ shrinks the spectrum of the corresponding scaled matrix $g_{ij} h_n A_{ij}$ and therefore results in significant computational savings by speeding up the convergence of the Krylov projection algorithm used to approximate $\psi_{ij}(g_{ij} h_n A_{ij})v$. A number of numerical studies showed that the EPIRK methods performed well on stiff problems [23,24,20]. However, in previous publications the derivation of the EPIRK schemes and the general convergence theory were based on classical rather than stiff order conditions. Methods constructed using stiff order conditions are a subset of classically accurate schemes that do not suffer from order reduction for certain classes of problems.
In this paper we demonstrate that the stiff order conditions and convergence theory developed in [4,11,13] can be extended to the EPIRK methods. We derive stiff order conditions for EPIRK methods of nonsplit (or unpartitioned) type, i.e. the most general version of the EPIRK integrators with $A_{ij} = J_n$ and $r(u) = f(u) - f(u_n) - J_n(u - u_n)$. We present a systematic way to solve the resulting stiff order conditions and show how the flexibility of the EPIRK framework can be utilized to construct particularly efficient schemes. The paper is organized as follows. Section 2 describes how the stiff order conditions and the convergence theory from [12] can be extended to include the EPIRK methods. This section also includes an explanation of the differences between the EPIRK framework and the exponential Rosenbrock methods for which the theory was originally developed. In Section 3 we propose a new optimization approach and procedure to solve the stiff order conditions for EPIRK methods to derive a range of efficient fourth- and fifth-order schemes. In particular, we construct EPIRK methods that can be particularly efficient when used together with an adaptive Krylov-projection algorithm, currently the most general and efficient way to estimate the exponential matrix function-vector products. Finally, Section 4 contains numerical tests that validate the performance of the newly derived methods and demonstrate the relative efficiency of these techniques compared to previously proposed schemes.
EPIRK and exponential Rosenbrock methods
In this paper we focus on the nonsplit, or unpartitioned, [24,20] EPIRK schemes for the general problem (1.1). The unpartitioned EPIRK methods are constructed from (1.2) by setting $A_{ij} = J_n$ to obtain

$$U_{ni} = u_n + \alpha_{i1}\,\psi_{i1}(g_{i1} h_n J_n)\, h_n f(u_n) + h_n \sum_{j=2}^{i-1} \alpha_{ij}\,\psi_{ij}(g_{ij} h_n J_n)\, \Delta^{(j-1)} r(u_n), \qquad i = 2, \ldots, s,$$
$$u_{n+1} = u_n + \beta_1\,\psi_{s+1,1}(g_{s+1,1} h_n J_n)\, h_n f(u_n) + h_n \sum_{j=2}^{s} \beta_j\,\psi_{s+1,j}(g_{s+1,j} h_n J_n)\, \Delta^{(j-1)} r(u_n), \qquad (2.1)$$

with $r(u) = f(u) - f(u_n) - J_n(u - u_n)$. Classical order conditions were derived for EPIRK schemes in [24], and these methods were shown to be efficient for stiff problems [8,27]. The extension of the theory to stiff order conditions presented below will enable us to construct stiffly accurate EPIRK schemes that can be proved to be convergent even for unbounded operators $J_n$.
The stiff order conditions and the corresponding convergence theory have been developed in [5,12,11] for the exponential Rosenbrock methods. While the original formulation of the exponential Rosenbrock methods was first proposed in [18], the full development of this class of integrators, including the derivation of the classical and stiff order conditions along with the convergence theory, was not carried out until the resurgence of interest in exponential methods over the past several decades [4,5,12]. For reasons of implementation efficiency and theoretical convenience, the original formulation of the exponential Rosenbrock methods was recast in [5] in the form

$$U_{ni} = u_n + c_i h_n \varphi_1(c_i h_n J_n) f(u_n) + h_n \sum_{j=2}^{i-1} a_{ij}(h_n J_n)\, D_{nj}, \qquad i = 2, \ldots, s,$$
$$u_{n+1} = u_n + h_n \varphi_1(h_n J_n) f(u_n) + h_n \sum_{i=2}^{s} b_i(h_n J_n)\, D_{ni}, \qquad (2.2)$$

where $N_n(u) = f(u) - J_n u$, $D_{nj} = N_n(U_{nj}) - N_n(u_n)$, and the coefficients $a_{ij}(z)$ and $b_i(z)$ are linear combinations of the functions $\varphi_k(z)$. The structural difference between (2.1) and (2.2) is the use of the $g_{ij}$ coefficients in (2.1) and the fact that the second term on the right-hand side of each stage is allowed to be a more general function $\psi_{i1}(z)$ rather than being restricted to $\psi_{i1}(z) = \varphi_1(z)$ as in (2.2). Note that $D_{nj} = r(U_{nj})$. Any exponential Rosenbrock method can be written in EPIRK form, and any EPIRK method can be written in an extended exponential Rosenbrock form if the differences mentioned above are taken into account. To make it more straightforward to apply the stiff order conditions theory developed for exponential Rosenbrock methods to EPIRK integrators, we re-write (2.1) in the extended exponential Rosenbrock form by using the expansion of the forward differences in terms of the stage remainders,

$$\Delta^{(j-1)} r(u_n) = \sum_{k=0}^{j-1} (-1)^{j-1-k} \binom{j-1}{k}\, r(U_{n,k+1}), \qquad U_{n1} = u_n, \quad r(u_n) = 0,$$

and collecting the terms corresponding to each $r(U_{ni})$ in every stage. Then (2.1) can be expressed as

$$U_{ni} = u_n + \alpha_{i1}\,\psi_{i1}(g_{i1} h_n J_n)\, h_n f(u_n) + h_n \sum_{j=2}^{i-1} a_{ij}(h_n J_n)\, r(U_{nj}), \qquad i = 2, \ldots, s,$$
$$u_{n+1} = u_n + \beta_1\,\psi_{s+1,1}(g_{s+1,1} h_n J_n)\, h_n f(u_n) + h_n \sum_{i=2}^{s} b_i(h_n J_n)\, r(U_{ni}), \qquad (2.4)$$

where the functions $a_{ij}(z)$ and $b_i(z)$ are determined by the coefficients $\alpha_{ij}$, $\beta_j$, $g_{ij}$ and the functions $\psi_{ij}(z)$ of (2.1). We additionally define $\psi_{i1}(z) = \sum_{k=1}^{s} p_{i1k}\, \varphi_k(z)$. We have now incorporated the $g_{ij}$ coefficients into the definitions of $a_{ij}(z)$ and $b_i(z)$ and extended the second term on the right-hand side of each stage to a general function $\psi_{i1}(z)$. Later we will show how these generalizations of the exponential Rosenbrock methods to the EPIRK framework offer added flexibility that allows for the derivation of more efficient methods. To illustrate this reformulation, a simple three-stage EPIRK method can be re-written in the extended exponential Rosenbrock form in exactly this way. Due to the close relationship between EPIRK and exponential Rosenbrock methods outlined above, most of the theory from [12] applies to EPIRK directly. However, this generalization has to be handled with care, since additional results are needed to account for the more general form of the EPIRK schemes. Below we outline the theory for the reader's convenience and present more detail in places where the differences between EPIRK and exponential Rosenbrock methods result in distinct expressions and, ultimately, modified stiff order conditions.
Analytical framework.
As in [12], the analysis is based on the theory of strongly continuous semigroups in a Banach space $X$ with norm $\|\cdot\|$. For the reader's convenience we state the assumptions from [12] that form the basis for the stiff order conditions and the convergence theory; we also outline the main ideas of the theory. For our analysis we consider (1.1) written in the linearized form

$$u'(t) = L u(t) + N(u(t)), \qquad J = f'(u) = L + N'(u). \qquad (2.9)$$

The following two assumptions are then made about the operators $L$ and $N(u)$:

Assumption 1 ([12]). The linear operator $L$ is the generator of a strongly continuous semigroup $e^{tL}$ in $X$.

Assumption 2 ([12]). We assume that (1.1) possesses a sufficiently smooth solution $u : [0, T] \to X$ with derivatives in $X$ and that the nonlinearity $N : X \to X$ is sufficiently often Fréchet differentiable in a strip along the exact solution.
Given these assumptions, it can be shown that the Jacobian $J$ in (2.9) is the generator of a strongly continuous semigroup ([17], Chap. 3.1). This implies that there exist constants $C$ and $\omega$ such that

$$\|e^{tJ}\|_{X \leftarrow X} \le C e^{\omega t}, \qquad t \ge 0, \qquad (2.10)$$

holds uniformly in a neighborhood of the exact solution. Furthermore, it can be concluded from this result that $\varphi_k(h_n J)$, and consequently their linear combinations $a_{ij}(h_n J)$ and $b_i(h_n J)$, are bounded operators. Assumption 2 also implies that the Jacobian in (2.9) satisfies a Lipschitz condition in a neighborhood of the exact solution.
Note that problems with homogeneous or no-flow boundary conditions satisfy Assumptions 1 and 2. In general, non-homogeneous boundary conditions (including time-dependent ones) do not necessarily satisfy Assumption 1. As a simple example, consider a semigroup $T(t) = e^{tA}$ defined over the Banach space $X = L^2(\mathbb{R})$ of square-integrable functions. Assumption 1 requires the semigroup to be strongly continuous. By the Hille-Yosida theorem, a necessary condition for the semigroup to be strongly continuous is that the domain $D(A)$ of the infinitesimal generator $A$ of the semigroup is dense in $X$. Thus, it is necessary to restrict the domain $D(A)$ in order to ensure that the semigroup $e^{tA}$ is strongly continuous. If we assume that the solutions $w(t)$ of $w'(t) = Aw$ belong to a subspace $C_0^\infty(\mathbb{R}) \subset X$, it is possible to prove that the semigroup $e^{tA}$ is strongly continuous [17]. This is not necessarily the case for the subspace of functions $w(t)$ with non-homogeneous boundary conditions. The analysis quantifying how much order reduction one can expect for problems with non-homogeneous boundary conditions has been done for standard Runge-Kutta and Rosenbrock methods in [15,16]. Developing a similar fractional-order convergence theory for exponential methods, or developing exponential schemes that do not exhibit order reduction for problems with non-homogeneous boundary conditions, are non-trivial tasks which we plan to address in our future research. In this paper we address the issue by using numerical examples to illustrate how much order reduction one can expect if the boundary conditions are non-homogeneous.
The derivation of the stiff order conditions and the convergence theory for problems which satisfy Assumptions 1 and 2 then proceeds as follows.
Local error and stiff order conditions for EPIRK methods.
Derivation of the stiff order conditions and the proof of convergence require analysis of the local error and construction of expressions for the approximate and exact solutions. To accommodate the differences between EPIRK and exponential Rosenbrock methods, the derivation of the expressions for the numerical solution in [12] has to be adjusted. Thus, we choose to present this derivation in more detail and simply restate other results from [12] that are used without alteration. The key idea in the convergence theory is to express the error in terms of operators that are bounded. While the Jacobian operator $J_n$ can potentially be unbounded, expressions involving only derivatives of the solution $u_n^{(k)}$ and/or the nonlinearity $\partial^k N / \partial u^k$ are bounded given Assumptions 1 and 2. The following formulas, which connect these two groups of operators, help make these transitions. Consider the linearization of (1.1) along the exact solution $\tilde{u}_n = u(t_n)$,

$$u'(t) = \tilde{J}_n u(t) + \tilde{N}_n(u(t)), \qquad (2.12)$$

where $\tilde{J}_n = f'(\tilde{u}_n)$ and $\tilde{N}_n(u) = f(u) - \tilde{J}_n u$ as in (2.13). From (2.13) we obtain the identities (2.14), and using these identities along with repeated differentiation of (2.12) we obtain the relations (2.15), where $\tilde{u}_n^{(k)}$ denotes the $k$th derivative of the exact solution of (2.12). More generally, we obtain (2.16), which shows that $\tilde{J}_n \tilde{u}^{(k)}$ is bounded (due to Assumption 2). This result and the identities (2.14) are key to deriving the stiff order conditions. First, following the procedure in [12], we carry out one integration step of (2.4) with the exact solution $\tilde{u}_n$ used as the initial value to express the numerical solution in the form (2.17)-(2.18).
We now begin computing the Taylor expansion of (2.17) by first expanding $r_{ni}$ as a Taylor series around $\tilde{u}_n$. Using (2.14) we obtain the expansion (2.19)-(2.20), whose remainder is bounded and of order $R_{ki} = O(h_n^{k+1})$ by Assumptions 1 and 2. Note that the expression (2.20) corresponds to formula (3.7) in [12] with the functional coefficients $\psi_{i1}$, $a_{ij}$ generalized. Substituting (2.19) into (2.17) we obtain (2.21). The following lemmas are analogues of Lemmas 3.1 and 3.2 in [12]. These results allow us to obtain the expansion of (2.21) while avoiding terms containing powers of the possibly unbounded operator $\tilde{J}_n$.
Lemma 1. Under Assumptions 1 and 2, the expansion (2.22) holds for all $t \ge 0$ and, furthermore, (2.23) holds.
Proof. Using $f(\tilde{u}_n) = \tilde{u}_n'$, the recurrence relation for the $\varphi_k$ functions, and the formulas (2.15), we obtain the expansion in (2.22), where the last equality holds due to the boundedness of $\tilde{J}_n \tilde{u}_n^{(3)}$. Equation (2.23) then follows directly by using these expansions of the $\varphi_k(z)$ in $\psi_{i1}(z) = \sum_{k=1}^{s} p_{i1k}\, \varphi_k(z)$.

Lemma 2. Under Assumptions 1 and 2, analogous expansions hold for the stage quantities $V_i$.

Proof. Inserting (2.20) into (2.19) for $k = 2$ and using Lemma 1 with $t = g_{1j} h_n$ gives an intermediate expansion; substituting this into (2.20) and applying Lemma 1 once more yields the desired result.

We now use these expansions of $V_i$ and insert the resulting expressions into (2.21) with $k = 4$ to obtain (2.29). The Taylor expansion of the exact solution, (2.30), is borrowed directly from [12]. Subtracting (2.30) from (2.29) gives the expression (2.31) for the local error $\tilde{e}_{n+1} = \hat{u}_{n+1} - \tilde{u}_{n+1}$, with $\Psi_i(z)$ given by (2.27). From (2.31) we can easily read off the stiff order conditions by requiring that the terms of a given order (three, four, or five) vanish. These conditions are given in Table 1. To eliminate the first-order term in the error we must have $\beta_1 \psi_{s+1,1}(g_{s+1,1} z) = \varphi_1(z)$; if this condition is satisfied, the resulting method is of stiff order two. Throughout the rest of the paper we set $\beta_1 = g_{s+1,1} = 1$ and $\psi_{s+1,1} = \varphi_1$. In Section 3 we will show how the greater flexibility of the stiff order conditions for EPIRK methods, compared to the exponential Rosenbrock methods, leads to the construction of more efficient techniques.
Table 1: Stiff order conditions for EPIRK methods, listing each condition with its reference label and order. Note that Z and K are arbitrary square matrices and $\Psi_i$ is given by (2.27).
Convergence.
The majority of the convergence proof presented in [12] for the exponential Rosenbrock (EXPRB) methods can be applied directly to the EPIRK schemes. The only exception is Lemma 4.5 in [12]; in order to obtain the same result as in that lemma we need an additional assumption on the coefficients. Assumption 3 below allows a less restrictive choice of coefficients compared to EXPRB methods while still enabling us to proceed with the convergence proof in the same way as in [12]:

Assumption 3. Suppose the coefficients of an EPIRK scheme satisfy, for each $i$ and all $k$, one of the following:

$$\alpha_{i1} p_{i1k} = g_{i1} \quad \text{or} \quad \alpha_{i1} = g_{i1} \quad \text{or} \quad p_{i1k} = g_{i1}. \qquad (2.32)$$

Given this assumption, we then need to modify Lemma 4.5 and its proof as described in the Appendix. With this modification the convergence is proved exactly as in [12] and results in the following Theorem 3 (see Theorem 4.1 in [12]).

Theorem 3. Let the initial value problem (1.1) satisfy Assumptions 1 and 2. Consider for its numerical solution an explicit exponential propagation iterative method of Runge-Kutta type (2.4) that fulfills Assumption 3 and the order conditions of Table 1 up to order $p$ for some $3 \le p \le 5$. Then, under the stability assumption (4.16) of [12], the method converges with order $p$. In particular, the numerical solution satisfies the error bound

$$\|u_n - u(t_n)\| \le C \sum_{j=0}^{n-1} h_j^{p+1}$$

uniformly on $t_0 \le t_n \le T$. The constant $C$ is independent of the chosen step size sequence.
Using Theorem 3 we now construct specific stiffly accurate methods of order four and five.
Construction of new schemes
In this section we demonstrate how the flexibility of coefficients in the EPIRK framework can be used to construct efficient stiffly accurate schemes. The improvement in computational cost comes from constructing a method particularly optimized for a given algorithm for evaluating the exponential matrix function-vector products. Evaluation of exponential-like matrix function and vector products ψ(A)v constitutes the largest computational cost of an exponential integrator. The EPIRK methods were originally introduced to minimize the number of such evaluations required per time step as well as to reduce the cost of each of these evaluations by carefully selecting the exponential functions in the products [24,8,10,20]. In [26] the authors derived particularly efficient EPIRK methods that use the adaptive Krylov algorithm to approximate products ψ(A)v. While several techniques have been introduced to evaluate ψ(A)v, the adaptive Krylov algorithm remains one of the most general and cost-efficient ways to estimate these terms if no a priori information is available about the spectrum of A. Thus we adopt this algorithm in constructing the new stiffly accurate EPIRK methods here.
We begin with a description of our method of choice, the adaptive Krylov algorithm, and discuss the structural requirements that application of this method imposes on a time integrator. A systematic approach to solving the order conditions is then offered to derive appropriate three-stage methods of order four and five. Specific EPIRK schemes are constructed following this technique.
Adaptive Krylov algorithm and its implementations
The computational cost of the standard Krylov-projection based algorithm to approximate terms of type ψ_ij(g_ij h_n J_n)v scales as O(m^2), where m is the size of the Krylov basis required to achieve a prescribed accuracy. The value m, in turn, depends on the spectrum of the matrix, and it is expected that the computational cost of the Krylov projection will increase with the time step size h_n. Thus, for a given error tolerance it might actually be more efficient to integrate with a smaller time step rather than encounter large Krylov bases. An alternative and more efficient approach was proposed in [22,14]. The adaptive Krylov algorithm seeks to evaluate linear combinations of the type shown in (3.1). The idea is to replace computing one large Krylov subspace of size m with a finite number of smaller Krylov subspaces of sizes m_1, m_2, ..., m_K. In [14] it was observed that expressions like (3.1) can be computed in a way that replaces the evaluation of terms like ϕ_p(A)b_p with something that requires fewer Krylov vectors, like ϕ_p(τ_k A)b_p with 0 < τ_k < 1. For example, consider a discretization of the interval into K sub-steps. Then we would have to compute K Krylov projections at computational cost proportional to O(m_1^2) + O(m_2^2) + ... + O(m_K^2), which can be more efficient than computing ϕ_p(A)b_p at computational complexity O(m^2). Obviously, if K gets too large, the total cost of computing K smaller Krylov subspaces may exceed the cost of computing only one large Krylov basis. Therefore the efficiency of the algorithm depends on the choice of step sizes τ_k. As the optimal choice varies, an algorithm was developed in [14] to choose these step sizes adaptively. Further details and information can be found in [14,22,26,10].
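The sub-stepping idea can be illustrated with a small, self-contained sketch. The snippet below is only a hypothetical illustration (names such as expm_phi1v are ours, not taken from any published implementation): it verifies, for the ϕ_1 case with fixed equal sub-steps, that t_end ϕ_1(t_end A)b can be propagated exactly over sub-intervals via e^{τA}u + τϕ_1(τA)b, which is the kind of relation the sub-stepping in [14] builds on. The dense expm calls merely stand in for the Krylov approximations; the actual adaptive algorithm also chooses the τ_k and Krylov dimensions on the fly.

```python
import numpy as np
from scipy.linalg import expm

def expm_phi1v(M, b):
    """Return (exp(M), phi_1(M) @ b) from one augmented matrix exponential:
    exp([[M, b], [0, 0]]) = [[exp(M), phi_1(M) b], [0, 1]]."""
    n = M.shape[0]
    aug = np.zeros((n + 1, n + 1))
    aug[:n, :n] = M
    aug[:n, n] = b
    E = expm(aug)
    return E[:n, :n], E[:n, n]

rng = np.random.default_rng(0)
n = 50
A = -np.diag(np.linspace(1.0, 200.0, n)) + 0.1 * rng.standard_normal((n, n))  # stiff-ish test matrix
b = rng.standard_normal(n)

# Reference value computed "in one shot" over [0, 1]:  u(1) = phi_1(A) b.
_, u_ref = expm_phi1v(A, b)

# Sub-stepped evaluation: u solves u' = A u + b, u(0) = 0, and over each sub-step
# u(t_k + tau) = exp(tau A) u(t_k) + tau * phi_1(tau A) b  holds exactly.
u = np.zeros(n)
for tau in np.diff(np.linspace(0.0, 1.0, 6)):   # K = 5 equal sub-steps, purely for illustration
    E_tau, p1b = expm_phi1v(tau * A, b)
    u = E_tau @ u + tau * p1b

print(np.linalg.norm(u - u_ref) / np.linalg.norm(u_ref))  # agrees to rounding error
```

In the real integrator each sub-step would be approximated with a (small) Krylov projection of the matrix τ_k A rather than a dense exponential, which is where the O(m_k^2) cost terms quoted above come from.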
In [26], specific EPIRK methods were designed to efficiently employ the adaptive Krylov algorithm. By enforcing the requirement ψ_ij(z) = ϕ_k(z) for fixed j and i = 2, ..., s − 1, we can construct an s-stage adaptive-Krylov-based EPIRK scheme that requires only s Krylov projections per time step [26,10]. This requirement is necessary since, if a linear combination (3.1) consists of a single term ϕ_k(tJ)b_k, it can be computed as ϕ_k(tJ)b_k = u(t)/t^k for any real value t. Hence we can compute ϕ_k(g_ij hJ)b_k = u(g_ij)/g_ij^k for fixed j with just one Krylov evaluation (as long as the g_ij are included in the set of times {t_k}, k = 0, ..., K). We refer to this implementation as "vertical exponential-adaptive-Krylov" or "vertical exponential-Krylov" because the "columns", i.e. the terms of each stage with shared vectors b_k, are computed with one Krylov subspace. Note that the last term of the last stage is not restricted to a single ϕ-function but rather can be any linear combination of ϕ-functions, since it does not share a vector with any terms of the previous stages.
The flexibility and choice of g_ij coefficients in EPIRK methods allows for further improvement in overall computational cost. In particular, by taking these coefficients to be smaller than 1 we would effectively reduce the size of the Krylov basis since t_end < 1. When implementing vertical adaptive Krylov we have, for each fixed j = 1, ..., s, t_end = max_{i in [2, j+1]} g_ij. Therefore an efficient vertical Krylov EPIRK scheme has max_i g_ij < 1 for each j = 2, ..., s. The classical order conditions offer enough freedom to choose small g-coefficients. However, the same approach is difficult to use to construct the stiffly accurate methods, since the stiff order conditions require that max_i g_ij = g_{(s+1)j} = 1, as shown in Lemma 4.
Proof. It must be the case that there is at least one b_j(Z) such that b_j ≠ 0 for some j = 2, ..., s, otherwise the method can only be of order two. We can then solve conditions (C1)-(C3) for the non-zero b_j(Z) functions. The resulting b_j(Z) are simply linear combinations of the ϕ_3(Z), ϕ_4(Z) functions. By substituting these solution(s) for the b_j's into (2.5) we find that ψ_{(s+1)k}(g_{(s+1)k} Z) = A_3 ϕ_3(Z) + A_4 ϕ_4(Z) for some A_i in R. Since this must hold for all Z in R^{n×n}, we can conclude that g_{(s+1)k} = 1.
Even though the stiff order conditions are restrictive with respect to the g-coefficients in the last stage, we still have flexibility with the g's in the internal stages and will use them to reduce the computational cost. Our approach is to modify the implementation of the adaptive Krylov algorithm from computing "vertically" to computing "horizontally" or in a "mixed" way, as described below.
The "horizontal" exponential-adaptive-Krylov, or exponential-Krylov, algorithm is intended to compute all terms in each stage with one Krylov evaluation. In contrast to the vertical Krylov, here we compute along the "rows" (i.e., compute ψ_{i_0 j}(z)b_j for fixed i_0 and j = 1, ..., i_0 − 1). Since each term will have a different vector b_j, the only way for an s-stage method to require only s Krylov evaluations per time step is to enforce the condition that any non-zero exponential terms in a given stage must share the same g_ij-value. As an example, let us consider the internal stage (3.2) of a five-stage EPIRK method, where ψ_51(z) = ϕ_1(z), ψ_52(z) = ϕ_2(z) + ϕ_3(z), and ψ_54(z) = ϕ_3(z). Using the recurrence relation (2.25) we can express (3.2) as (3.3). Since the vectors b_1, b_2, b_4 are not the same, we must have g_51 = g_52 = g_54, so (3.3) can be written in the form (3.4). Then the adaptive Krylov algorithm can be used to compute (3.4) with the possibility of taking g_51 < 1. Note that, due to Lemma 4, all projections in the vertical method require the adaptive Krylov algorithm to integrate over the interval [0, 1]. Thus the savings associated with smaller g-coefficients, which are equivalent to reducing the integration interval in the adaptive Krylov algorithm to [0, g], are not possible for the vertical methods. Thus the horizontal version of the method with coefficients g_i1 < 1 carries computational savings compared to the vertical method.
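As a concrete illustration of this grouping step, the following sketch is purely illustrative bookkeeping with hypothetical names: it collects the vectors of a stage whose ψ-functions share the same g-coefficient into one vector per ϕ_k, so that the whole stage can be handed to a single adaptive-Krylov evaluation of the combination Σ_k ϕ_k(g h J) c_k.

```python
import numpy as np

def group_stage_terms(terms):
    """Collect the vectors of one EPIRK stage whose psi-functions share the same
    g-coefficient into one vector c_k per phi_k, so the whole stage can be passed
    to a single (adaptive-Krylov) evaluation of  sum_k phi_k(g*h*J) c_k.
    `terms` is a list of (weights, b) pairs, where `weights` maps a phi-index k to
    its scalar coefficient p_k in  psi_ij(z) = sum_k p_k phi_k(z)."""
    c = {}
    for weights, b in terms:
        for k, p in weights.items():
            c[k] = c.get(k, 0.0) + p * b
    return c

# The hypothetical stage discussed in the text: psi_51 = phi_1, psi_52 = phi_2 + phi_3,
# psi_54 = phi_3, acting on (already weighted) vectors b1, b2, b4 with g_51 = g_52 = g_54.
b1, b2, b4 = np.ones(4), 2.0 * np.ones(4), 3.0 * np.ones(4)
c = group_stage_terms([({1: 1.0}, b1), ({2: 1.0, 3: 1.0}, b2), ({3: 1.0}, b4)])
# c[1] = b1, c[2] = b2, c[3] = b2 + b4: one Krylov call evaluating
# phi_1(g h J) c[1] + phi_2(g h J) c[2] + phi_3(g h J) c[3] covers the whole stage.
```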
We can also use a combination of both vertical and horizontal Krylov adaptation to develop an EPIRK scheme. The idea is to compute the last stage horizontally and the internal stages vertically, or with a combination of vertical and horizontal approaches, depending on what yields the most optimized scheme. This alleviates the drawbacks of strictly implementing vertical Krylov or horizontal Krylov for stiffly accurate EPIRK schemes and opens more possibilities for customization of methods as well as for improving the efficiency of a particular scheme. After solving the order conditions below, we will construct a specific mixed exponential-adaptive-Krylov, or exponential-Krylov, method that will serve as an illustrative example of this idea.

Note that any EPIRK method can be implemented in a vertical, horizontal or mixed way; however, the schemes can be constructed to be particularly optimized for a given implementation. Thus, for example, we will call an EPIRK scheme optimized for a mixed implementation a mixed exponential-Krylov method, but in the numerical tests section we test such a method with either vertical, horizontal or mixed implementation and demonstrate that the integrator optimized for the mixed implementation and applied in this way is the most efficient.
Solving the order conditions
Our primary objective is to construct new and more efficient stiffly accurate EPIRK schemes. We focus on constructing three-stage methods. Below we solve the order conditions given in Table 1 to obtain new general classes of three-stage fourth- and fifth-order EPIRK methods. For each of these classes, the remaining free parameters are then chosen to obtain specific schemes, each targeted for a specific version (vertical, horizontal or mixed) of the adaptive Krylov algorithm. We begin by considering a three-stage EPIRK method in EXPRB form (2.7) where ψ_i1(z) = p_i11 ϕ_1(z) + p_i12 ϕ_2(z) + p_i13 ϕ_3(z) for i = 2, 3.
Fourth-order methods
From the conditions given in Table 1, solutions for the coefficients of such three-stage schemes can be obtained. The flexibility with the remaining parameters makes these methods appealing. For example, the choice a_32(Z) ≡ 0 and ψ_21(Z) = ψ_31(Z) = ϕ_1(Z) yields the structure necessary in order to construct a vertical, horizontal, or a mixed exponential-Krylov method. These methods will require three Krylov projections per time step when implementing the vertical or horizontal versions, whereas it is possible to construct a mixed exponential-Krylov scheme that only requires two projections. The removal of a whole projection each time step can significantly reduce the overall cost. Another computation-saving feature of the fourth-order schemes is the ability to choose both g_21 and g_31. The choice g_21 = 1/2 and g_31 = 2/3 leads to the three-stage fourth-order method EPIRK4s3A (3.8). Another fourth-order method can be obtained similarly by taking ψ_21(Z) = ψ_31(Z) = ϕ_2(Z). Different g-coefficients were specified, but they were chosen to be comparable to those above, to obtain the EPIRK4s3B method (3.9). Note that method EPIRK4s3B lies outside of the set of exponential Rosenbrock methods because it uses the ϕ_2(z) function in the internal stages. In the numerical experiments section we will show that the performance of EPIRK4s3B is very similar to that of EPIRK4s3A. This illustrates that the EPIRK form allows for more flexibility in constructing the methods.
In Section 4 it will be shown that (3.8) performs particularly well when implemented in a horizontal or a mixed exponential-Krylov way. The flexibility in constructing this fourth-order method allows building a scheme that can even be computationally favorable compared to three-stage fifth-order methods, as shown below.
Fifth-order methods
Our construction of methods of order five is built upon the solutions obtained above which satisfy conditions (C1)-(C3). A fifth-order method must additionally satisfy (C4)-(C8); upon inserting (3.6), it is found that there is no solution which is able to satisfy these conditions for all Z in R^{n×n}. However, as noted in [12] for EXPRB methods, with additional regularity assumptions the convergence results hold under weaker assumptions on the coefficients of the method. Similar results can be obtained for the general EPIRK form by considering a simplified set of conditions (C4*)-(C8*) given in Table 2 that replace conditions (C4)-(C8). The regularity assumption on the operators in (2.9) needed to prove convergence in this case is the same as for EXPRB methods, but is stated in a less compact form as follows.
Assumption 4. The operator L and the nonlinearity N from (2.8) are such that the operators appearing in (2.9) are uniformly bounded on X for all 2 ≤ i ≤ s.
Given Assumption 4, we can now prove convergence of methods that satisfy conditions (C4*)-(C8*), provided the stability requirement (4.16) in [12] holds. The major aspects of the proof are exactly the same as in [12], but several modifications are needed, as we describe below.

Theorem 5 (Theorem 4.2 [12]). Let the initial value problem (1.1) satisfy Assumptions 1, 2, and 4. Consider for its numerical solution an EPIRK method (2.4) which satisfies Assumption 3, the order conditions (C1)-(C3) of Table 1 and (C4*)-(C8*) of Table 2. Then, under the stability assumption (4.16 in [12]), the method is convergent of order 5. In particular, the numerical solution u_n satisfies the stated error bound uniformly on t_0 ≤ t_n ≤ T. The constant C is independent of the chosen step size sequence.

Proof. We begin by considering the terms of O(h_n^5) in (2.31), each of which can be written in the stated form, where C_{1,j}, C_{2,j} are real and j = 1, ..., 4. Using the recurrence relation (2.25) we have, for each j = 1, ..., 5, an expression in which ξ_{5,j} is a bounded operator. Since the simplified conditions (C4*)-(C8*) are satisfied, we have ξ_{5,j}(h_n J̃_n) = 0 + h_n J̃_n ξ_{5,j}(h_n J̃_n). Substituting this back into (2.31) and using Assumption 4 yields that each term is of order O(h_n^6). Therefore our error is ẽ_{n+1} = O(h_n^6).
Table 2: Simplified stiff order conditions. Note that Z and K are arbitrary square matrices, Ψ_i is given by (2.27), and ψ_{3,i}(Z) = Σ_{j=2}^{i−1} a_ij(Z).
As a result of Theorem 5, a stiffly accurate three-stage fifth-order EPIRK scheme can be constructed. Let us consider the solutions obtained for the fourth-order schemes together with (C4*)-(C8*). We first note that the simplified conditions (C6*) and (C7*) become equivalent and are used to solve for g_21. The expressions in (3.17) give rise to the use of general ψ_i1 functions and the ability to construct horizontal exponential-Krylov methods. The coefficients given by (3.18) simplify (3.5) by setting ψ_i1(z) = ϕ_1(z). This simplification provides these methods with the structure necessary for the construction of a mixed exponential-Krylov scheme. For this reason we consider each case separately and construct methods specifically designed for the two different approaches: horizontal and mixed.
Beginning with (3.17), the remaining condition to satisfy is (C8*).

Table 3: Stiff order conditions for EPIRK methods and EXPRB. Note that Z and K are arbitrary square matrices and Ψ_i is given by (2.27).

Since a_32(Z) is a function of both g_31 and g_21, a horizontal exponential-Krylov method is not possible unless one of the terms vanishes (since g_21 ≠ g_31). We thus introduce a new condition by setting one of these terms to zero. The first term in the second internal stage already uses g_31 in the evaluation of ϕ_1(Z), and therefore we seek to remove the term associated with ϕ_3(g_21 Z) in (3.21). With the use of Mathematica, only one solution was found which did not violate any of the conditions or specifications.
With all conditions satisfied, the remaining parameters g_31, p_212, and p_311 can be chosen freely. In order to optimize our g-coefficients, we use (3.15) to help choose g_31. The relationship (3.15) can be reduced under the conditions for a fifth-order method to g_21 = 3(5 g_31 − 4) / (5(4 g_31 − 3)).
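Since this reduced relationship is given in closed form, it can simply be tabulated to guide the choice of coefficients. The short sketch below does exactly that; it is an illustration of ours, not part of the original derivation, and it avoids the singular value g_31 = 3/4 where the denominator vanishes.

```python
def g21_of_g31(g31):
    # Relationship (3.15) reduced under the fifth-order conditions, as stated above.
    return 3.0 * (5.0 * g31 - 4.0) / (5.0 * (4.0 * g31 - 3.0))

for g31 in (0.0, 0.25, 4.0 / 9.0, 0.5, 0.6):
    print(f"g31 = {g31:.4f}  ->  g21 = {g21_of_g31(g31):.4f}")
# g31 -> 0 gives g21 -> 0.8, and g21 -> 0 forces g31 -> 0.8, reproducing the trade-off
# read off from Figure 1; the value 4/9 is the choice made for g31 in the next paragraph.
```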
From the plot of this relationship in Figure 1 it can be seen that minimizing either g_21 or g_31 results in the other coefficient approaching eight tenths. Furthermore, Figure 1 shows that minimizing one of these coefficients is the best choice computationally. Thus let us specify g_31 = 4/9, p_212 = 1 and p_311 = 1 to obtain our first horizontal scheme.

Mixed exponential-Krylov methods
For a mixed exponential-Krylov method, we propose using the vertical exponential-Krylov approach to compute the internal stages and the horizontal exponential-Krylov approach for the last stage (see Table 4). For a three-stage method, this type of mixed exponential-Krylov method requires Ψ_21(z) = Ψ_31(z) = ϕ_k(z) for some fixed k in N. The simplification from coefficients (3.18) satisfies this requirement with k = 1. Therefore we obtain a three-stage fifth-order mixed exponential-Krylov method by additionally satisfying this requirement. A further result from this simplification is that a stiffly accurate fifth-order three-stage EPIRK method is also a fifth-order three-stage EXPRB method. Thus any three-stage EXPRB scheme is of a mixed exponential-Krylov type. As an example, and for our numerical experiments, we will consider a fifth-order three-stage method from [12], EXPRB53s3 (Table 4).
Table 4: EXPRB53s3 and grouping of terms for mixed exponential-adaptive-Krylov
Variable time-stepping
Variable time-stepping has been used with both the vertical implementation of EPIRK and EXPRB methods in [23] and [6], respectively. For both of these classes of methods, the approach to implementing an efficient variable step-size mechanism was to embed a lower-order error estimator into a higher-order method in such a way that both rely on the same internal stages and do not require additional Krylov projections per time step. While this is possible for the vertical Krylov implementation, the horizontal and mixed implementations are limited by computing the final stage horizontally, where one Krylov projection is used for its approximation and accounts for the specific coefficients of that stage. Thus, the implementation of variable time-stepping in this manner will require an extra Krylov projection each time step in order to calculate the error estimator. To further reduce the computational cost of the horizontal and mixed implementations with variable time-stepping, the adaptive Krylov algorithm has to be modified. This is the goal of our current research, but in this paper we restrict our attention to the existing adaptive Krylov algorithm as in [14].

Since the horizontal and vertical implementations require the same number of projections per time step, the cost of an additional projection can offset any computational gains from optimized g-coefficients in the horizontal implementation. However, the mixed implementation of methods like (3.8) requires fewer projections each time step than the vertical implementation of a method with the same number of stages. Therefore the mixed implementation with the extra Krylov evaluation would still be competitive with the vertical implementation, since both then use the same number of projections each time step. As an example, we can embed the third-order method (3.23) into EPIRK4s3A and use it as our error estimator for both the vertical and mixed implementations.

To efficiently approximate (3.23), the vertical method needs to use the same Krylov bases that are computed each time step for (3.8). In [27] methods were restricted to using single ϕ-functions for terms that shared the same vector. By modifying the implementation we can account for multiple ϕ-functions and approximate the terms (32ϕ_3(h_n J_n) − 144ϕ_4(h_n J_n)) h_n r(U_n2) and 8ϕ_3(h_n J_n) h_n r(U_n2) in (3.8) and (3.23) using the same Krylov basis. After computation of the Krylov basis for ϕ_4(h_n J_n) h_n r(U_n2), an approximation of ϕ_3(h_n J_n) h_n r(U_n2) can then be obtained by using the recurrence relation ϕ_k(z) = zϕ_{k+1}(z) + 1/k!.
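The reuse of a single Krylov result via this recurrence can be sketched as follows. The helper phi_k_v below is only a stand-in (a truncated Taylor series, adequate for the small, modest-norm test matrix used here) for the Krylov approximation in the actual integrator, and all names are ours.

```python
import math
import numpy as np

def phi_k_v(M, v, k, nterms=30):
    """phi_k(M) @ v via the Taylor series sum_{j>=0} M^j v / (j+k)!.
    Adequate only for the small test matrix below; in the integrator this quantity
    would instead come from an (adaptive) Krylov approximation."""
    out = np.zeros_like(v, dtype=float)
    Mjv = v.astype(float)
    for j in range(nterms):
        out += Mjv / math.factorial(j + k)
        Mjv = M @ Mjv
    return out

rng = np.random.default_rng(1)
n = 30
hJ = 0.05 * rng.standard_normal((n, n))    # plays the role of h_n * J_n
v = rng.standard_normal(n)                 # plays the role of h_n * r(U_n2)

w4 = phi_k_v(hJ, v, 4)                               # quantity the Krylov basis was built for
w3_recurrence = hJ @ w4 + v / math.factorial(3)      # phi_3(z) = z * phi_4(z) + 1/3!
w3_direct = phi_k_v(hJ, v, 3)
print(np.linalg.norm(w3_recurrence - w3_direct))     # agrees to rounding error
```

The same one-line recurrence step is all that is needed to recover the ϕ_3 term of (3.8) and (3.23) once the ϕ_4 term has been approximated, which is why no additional Krylov projection is required for it.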
Numerical Experiments
The numerical experiments presented below are designed to address several objectives. First, we want to demonstrate the performance of the new stiffly accurate EPIRK schemes. Second, we will confirm our claim that implementing the horizontal and/or mixed exponential-adaptive-Krylov algorithm for stiffly accurate methods can offer significant computational savings. Third, we examine the accuracy of the integrators on problems which do not satisfy Assumptions 1 and 2. Finally, we conclude the section with tests that illustrate the performance of the variable time-stepping version of the methods.
The integrators employed in our numerical experiments are: EPIRK4s3A (3.8), EPIRK4s3B (3.9), EPIRK5s3 (3.22), EXPRB53s3 (Table 4) and one classically (non-stiff) derived method, EPIRK5P1 from [27]. Each of the stiffly accurate methods is implemented in its vertical, horizontal and mixed exponential-Krylov versions as follows:
• EPIRK4s3A - vertical, horizontal, and mixed; these three implementations of the same fourth-order method demonstrate the advantages of the mixed or horizontal forms;
• EPIRK4s3B - mixed; this is an EPIRK method that cannot be written in the exponential Rosenbrock form;
• EPIRK5s3 - horizontal; this fifth-order method has been derived specifically to take advantage of the horizontal form;
• EXPRB53s3 - vertical and mixed; this fifth-order method has been designed to take advantage of a mixed form.
The classically derived EPIRK5P1 method has been included to illustrate the difference in performance compared to the stiffly accurate schemes adapted to particular implementations. We begin by describing the test problems that we will be using and by verifying the theoretically predicted order of all the integrators used in our experiments. The simulations and results are then detailed.
Test problems
Our numerical experiments are conducted on a select subset of the test problems used in [8]. For each of these test problems, numerical comparisons with previously derived exponential schemes not included here can be found in [24,8,4,12]. In all of the problems presented below the ∇² term is discretized using standard second-order finite differences.
Figure 2: Log-log plots of error vs. time step size. For convenience, the lines with slopes equal to three (dashed), four (dash-dotted), and five (dotted) are shown. Panel (e): 2D Brusselator (N = 300²).
Comparative performance
The results of our numerical experiments are presented and analyzed to address the comparative performance of the different exponential-Krylov implementations and of the new stiffly accurate EPIRK schemes themselves. Our comparisons are based on the analysis of precision diagrams for the following simulations:
• ADR: N = 400² with h = 0.01, 0.005, 0.0025, 0.00125, 6.25e−4,
• Allen-Cahn: N = 500² with h = 0.5, 0.25, 0.1250, 0.0625, 0.03125,
• Semilinear parabolic: N = 1000 with h = 0.1, 0.05, 0.0250, 0.0125, 0.00625,
• Gray-Scott: N = 400² with h = 0.01, 0.005, 0.0025, 0.00125, 6.25e−4,
• Brusselator: N = 300² with h = 0.5, 0.25, 0.1250, 0.0625, 0.03125,
where N and h correspond to the spatial discretization and time-step sizes, respectively. The precision diagrams are given in Figure 3. Previously published performance comparisons such as [9] addressed computational issues characteristic of EPIRK methods in general, such as, for example, the C-shape of the precision graphs, which is induced by the scaling of the computational complexity of the Krylov algorithm with respect to the size of the time step. Here we concentrate on numerical experiments demonstrating the properties of the stiffly accurate EPIRK methods optimized with respect to a particular implementation.

Overall, the figures verify that the performance of each method depends strongly on the number of Krylov evaluations and on the size of the interval [0, g] (which depends on the chosen g-coefficients) that the adaptive Krylov method has to traverse. For example, consider the horizontal (dashed) and vertical (dotted) implementations of the fourth-order EPIRK4s3A (diamond). The same number of adaptive-Krylov evaluations per time step were taken (three), but Figure 3 shows a considerable difference in the overall computational cost. By comparing the CPU times for each time step we can easily see how much computational savings are obtained by using the horizontal adaptive-Krylov algorithm. Table 5 (a) displays the maximum, minimum, and average of the cost of EPIRK4s3A-Vert compared to the cost of EPIRK4s3A-Horz over all time steps. Considering all the test problems, the vertical implementation of EPIRK4s3A costs on average 129% of the cost of the horizontal implementation. Similar results are also found when comparing the fifth-order EXPRB53s3-Vert with the specifically constructed EPIRK5s3-Horz (Table 6 (b)). As we predicted, these savings come from the ability of the horizontal implementation to make use of g_ij < 1 coefficients by reducing the Krylov basis size. While this advantage should be observed relative to the vertical implementation of any closely related method of the same order and same number of stages, the amount of savings will depend on the coefficients of the method. The vertical and horizontal implementations of EPIRK4s3A require three Krylov evaluations each time step. The mixed implementation of EPIRK4s3A only requires two Krylov projections, and therefore it is expected that this method will further increase the savings compared to the vertically implemented EPIRK4s3A. Our numerical experiments confirm that EPIRK4s3A-Mix has a clear advantage over both its horizontal and vertical implementations and can offer up to 50% savings (compared to its vertical implementation). The maximum/minimum/average of the per-time-step comparisons are given in Table 6 and plots of CPU execution time versus error in Figure 3. In the case where the numbers of Krylov evaluations are the same (i.e., for the fifth-order three-stage methods), the mixed implementation can still offer computational savings over the vertical implementation, but this depends strongly on the g-coefficients. For example, the difference in performance between the mixed and vertical implementations of EXPRB53s3 is much smaller due to g_31 = 9/10. For this value the resulting intervals [0, g] are nearly the same and no significant savings are obtained. We will pursue further strategies for optimizing coefficients for horizontal and mixed implementations in our future research.

We now turn to comparing the performance of the schemes themselves. While there is no clearly dominant fifth-order method in regards to computational cost, we do see that EXPRB53s3 is slightly more accurate for all problems. The more interesting comparison is that of the fourth-order method with the fifth-order schemes. For a prescribed accuracy, the fourth-order mixed (and horizontal) EPIRK4s3A can offer significant (up to 64%) savings in comparison to the fifth-order methods (of any implementation). In Table 7 we list the CPU execution times for each method and each test problem for various tolerances. For any set tolerance we see that the mixed implementation of EPIRK4s3A can achieve this level of accuracy at a fraction of the cost of any of the fifth-order methods. A simple justification is that the conditions for a stiffly accurate fifth-order method are far more restrictive than for a fourth-order scheme. Thus the additional flexibility of stiffly accurate fourth-order schemes allows for more customization and design of methods which optimize the efficiency.
Non-homogeneous boundary conditions
As mentioned in Section 2.2, problems with non-homogeneous boundary conditions do not necessarily satisfy the assumptions of our framework and therefore the stiff order is not guaranteed. The purpose of this section is to show that order reduction occurs and to identify how much of a reduction to expect for these problems. We perform simulations with the following test problems:
• Allen-Cahn 2D: Neumann boundary conditions with initial and boundary values given by u = 0.4 + 0.1(x + y) + 0.1 sin(3/2 πx) sin(5/2 πy).
• Brusselator 2D: Dirichlet boundary conditions with the stated initial and boundary values.
• 1D degenerate nonlinear diffusion [21]: Dirichlet boundary conditions u(−23, t) = 1 and u(50, t) = 0, with the stated initial conditions.
The same spatial discretization and time-step sizes were used for the Brusselator problem as in the previous section. The Allen-Cahn and degenerate nonlinear diffusion problems were run with time-step sizes h = 0.05, 0.0250, 0.0125, 0.00625, 0.003 and respective discretization sizes of N = 500² and N = 1000.

Figure 4 displays the log-log plots of time-step size versus error, and Table 8 lists the approximate order exhibited by each of the methods for every test problem. While some of the methods achieve full order for some problems, generally the results illustrate that a reduction of order is possible even if the method is stiffly accurate. The extent of the order reduction ranges from 0.03 to 1.34. Such reduction is expected since a similar phenomenon occurs for implicit methods. A theory presented in [15,16] allows one to quantify the extent of order reduction for Rosenbrock methods. We plan to pursue the development of a similar theory for exponential integrators applied to non-homogeneous problems in our future research.
Variable time-step comparisons
We present here the results of our variable time-step experiments on the test problems described in Section 4.1. In addition to the stiffly accurate schemes from Section 3.3, we also consider the fifth-order classical (non-stiff) EPIRK5-P1 method with a fourth-order error estimator [9]. We used the same configuration for our experiments as in [9]. For each problem, five runs were made with the following absolute and relative tolerances: Atol = Rtol = 10⁻², 10⁻³, 10⁻⁴, 10⁻⁵, 10⁻⁶. The resulting diagrams of CPU execution time versus error are displayed in Figure 5. The classically derived EPIRK5-P1 proves to be the most efficient method across all the problems, but it has the potential drawback of suffering from a reduction of order, as seen with the semilinear parabolic problem. This further confirms the need for more efficient stiffly accurate methods, and it illustrates the need for a more refined theory that predicts how much order reduction can be expected for a given problem and a chosen integrator.
Conclusions and future work
We have extended the stiff order conditions and convergence theory of [12] for EXPRB methods to EPIRK-type methods. We offered a different approach to solving the stiff order conditions that allows the construction of efficient schemes of several types, particularly when these methods are used in conjunction with the adaptive Krylov algorithm. Using the generality of the EPIRK framework we constructed new stiffly accurate fourth- and fifth-order schemes and numerically confirmed that they achieve their full predicted order of accuracy on a set of test problems. Our numerical experiments further showed that the new technique of deriving horizontal or mixed EPIRK schemes does offer improved computational savings compared to previously derived (EPIRK & EXPRB) methods. For a given exponential method, however, the most efficient implementation will depend on its coefficients and the structure of the problem under consideration. We are currently working on a modified adaptive Krylov algorithm that provides more computational savings for horizontal and mixed optimized EPIRK schemes. Development of better guidelines for constructing or choosing the most efficient integrator for a given problem is a goal of our future investigations. We also plan to extend and develop the stiff order conditions theory for partitioned (or split) EPIRK and implicit-exponential type methods. Finally, more research is needed to investigate whether stiffly accurate exponential integrators that do not suffer from order reduction can be developed for problems with non-homogeneous boundary conditions.

Lemmas 4.1 through 4.5 in [12] provide bounds for the different terms in this expression. All of these lemmas are directly applicable to the EPIRK methods except for Lemma 4.5. Here we present a modified proof of Lemma 4.5 that accounts for the fact that EPIRK methods employ the general ψ-function rather than the ϕ_1-function as in the exponential Rosenbrock methods. To motivate the lemma we begin by applying Lemma 4.4 in [12], where E_ni = U_ni − Ũ_ni is the difference between the numerical solutions obtained from (2.4) and (2.17). Our desired estimate for P_n is obtained by bounding E_ni in terms of e_n. The bound found in [12] for EXPRB methods only holds for methods whose internal stages strictly use the ϕ_1-function in the first term. With the additional assumption that the method satisfies Assumption 3, we prove that the same bound holds for any linear combination of ϕ-functions, as long as the global errors e_n remain in a sufficiently small neighborhood of 0 and h_n ≤ C_H.
Proof. Without loss of generality, and for the sake of presentation, we will assume the method satisfies p_i1k = g_i1 of Assumption 3 for each i and all k. We begin by proving the estimate for the terms (a_ij(h_n J_n) − a_ij(h_n J̃_n)) r̃_nj in (A.9). Using the identity f(u) − f(ũ_n) = J_n e_n + N_n(u_n) − g_n(ũ_n), condition (2.32) and the recurrence relation (2.25), (A.9) can be expressed as (A.10). The estimate (A.8) then follows from the positive scalability and sub-additivity of the norm, the estimates of Lemmas 4.1 and 4.3 in [12], and the boundedness of f(ũ_n) = ũ_n, ϕ_k(h_n J) and a_ij(h_n J_n). Now we can prove (A.6). Since e_n is assumed to remain in a sufficiently small neighborhood of 0, there exists 0 < δ < 1 such that ||e_n|| < δ for all n. This implies that, for each n, ||e_n||^2 ≤ ||e_n||, and the required bound follows. The estimate (A.7) now follows from (A.5) and (A.6).
With these conditions and (3.7) satisfied, methods of the form (3.5) define a new class of stiffly accurate fourth-order three-stage methods.

Lemma 6. Under Assumptions 1-3, for all i, we have
||E_ni|| ≤ C||e_n|| + C h_n ||e_n||^2 + C h_n^5,  (A.6)
||P_n|| ≤ C||e_n|| + C||e_n||^2 + C h_n^6.  (A.7)

In the proof of Lemma 6, adding and subtracting α_i1 h_n p_i1k ϕ_k(g_i1 h_n J_n) f(ũ_n) for each k = 1, ..., s allows E_ni to be written as e_n plus terms involving ϕ_k(g_i1 h_n J_n)(N_n(u_n) − N_n(ũ_n)), the differences (ϕ_k(g_i1 h_n J_n) − ϕ_k(g_i1 h_n J̃_n)) f(ũ_n), and the terms a_ij(h_n J_n)(r_nj − r̃_nj) and (a_ij(h_n J_n) − a_ij(h_n J̃_n)) r̃_nj; each of these is then estimated using the bounds established above, and an induction over the stages with ||e_n|| < δ and h_n ≤ C_H yields the constants in (A.6).

Table 6:
Cost of horizontal and vertical implementations in comparison to mixed implementation to (A.2) and obtain the preliminary estimateP n ≤ Ch n e n + C e n 2 + n2 ≤ C e n + Ch n e n 2 ≤ C e n + Ch n e n (A.11) by using (A.8) with i = 2. Assuming E ni−1 ≤ C 1 e n + C 2 h n e n 2 we obtain + e n + (C 1 e n + C 2 h n e n 2 ))(C 1 e n + C 2 h n e n 2 ).By expanding the terms and using the assumption that e n < δ we arrive atE ni ≤ (C + h 2 n C 1 ) e n + (C + C 2 h 2 n + C 1 + C 2 1 + C 2 h n + 2C 2 C 1 h 2 n + C 2 2 h 2 n )h n e n ≤ (C + C 2 H C 1 ) e n + (C + C 2 C 2 M + C 1 + C 2 1 + C 2 C M + 2C 2 C 1 C 2 M + C 2 2 C 2 M )h n e n =max(C H , 1) and C = max((C + C 2 H C 1 ), (C | 2016-08-02T03:29:38.000Z | 2016-04-03T00:00:00.000 | {
"year": 2016,
"sha1": "a836f6f040540af3868296fc3d28c1cb6fa52fc6",
"oa_license": "publisher-specific-oa",
"oa_url": "http://manuscript.elsevier.com/S0021999116303217/pdf/S0021999116303217.pdf",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "a836f6f040540af3868296fc3d28c1cb6fa52fc6",
"s2fieldsofstudy": [
"Mathematics",
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
236935195 | pes2o/s2orc | v3-fos-license | Predictors for one-year outcomes of cardiorespiratory fitness and cardiovascular risk factor control after cardiac rehabilitation in elderly patients: The EU-CaRE study
Introduction Studies on effectiveness of cardiac rehabilitation (CR) in elderly cardiovascular disease patients are rare, and it is unknown, which patients benefit most. We aimed to identify predictors for 1-year outcomes of cardiorespiratory fitness and CV risk factor (CVRF) control in patients after completing CR programs offered across seven European countries. Methods Cardiovascular disease patients with minimal age 65 years who participated in comprehensive CR were included in this observational study. Peak oxygen uptake (VO2), body mass index (BMI), resting systolic blood pressure (BPsys), and low-density lipoprotein-cholesterol (LDL-C) were assessed before CR (T0), at termination of CR (T1), and 12 months after start of CR (T2). Predictors for changes were identified by multivariate regression models. Results Data was available from 1241 out of 1633 EU-CaRE patients. The strongest predictor for improvement in peak VO2 was open chest surgery, with a nearly four-fold increase in surgery compared to non-surgery patients. In patients after surgery, age, female sex, physical inactivity and time from index event to T0 were negative predictors for improvement in peak VO2. In patients without surgery, previous acute coronary syndrome and higher exercise capacity at T0 were the only negative predictors. Neither number of attended training sessions nor duration of CR were significantly associated with change in peak VO2. Non-surgery patients were more likely to achieve risk factor targets (BPsys, LDL-C, BMI) than surgery patients. Conclusions In a previously understudied population of elderly CR patients, time between index event and start of CR in surgery and disease severity in non-surgery patients were the most important predictors for long-term improvement of peak VO2. Non-surgery patients had better CVRF control.
Introduction
Background (item 2):
• Scientific background and explanation of rationale
• Theories used in designing behavioral interventions
Methods
Participants (item 3):
• Eligibility criteria for participants, including criteria at different levels in the recruitment/sampling plan (e.g., cities, clinics, subjects)
• Method of recruitment (e.g., referral, self-selection), including the sampling method if a systematic sampling plan was implemented
• Recruitment setting
• Settings and locations where the data were collected
Interventions (item 4):
• Details of the interventions intended for each study condition and how and when they were actually administered, specifically including:
• Unit of assignment (the unit being assigned to study condition, e.g., individual, group, community)
• Method used to assign units to study conditions, including details of any restriction (e.g., blocking, stratification, minimization)
• Inclusion of aspects employed to help minimize potential bias induced due to non-randomization (e.g., matching): n.a. n.a. n.a.
Please note: Black numbers relate to page numbers of manuscript, red numbers to other publications from EU-CaRE study as referenced in the bibliography of the manuscript.
TREND Statement Checklist
Blinding (masking) (item 9): Whether or not participants, those administering the interventions, and those assessing the outcomes were blinded to study condition assignment; if so, a statement regarding how the blinding was accomplished and how it was assessed.
Unit of Analysis (item 10):
• Description of the smallest unit that is being analyzed to assess intervention effects (e.g., individual, group, or community)
• If the unit of analysis differs from the unit of assignment, the analytical method used to account for this (e.g., adjusting the standard error estimates by the design effect or using multilevel analysis)
Statistical Methods (item 11):
• Statistical methods used to compare study groups for primary outcome(s), including complex methods for correlated data
• Statistical methods used for additional analyses, such as subgroup analyses and adjusted analysis
• Methods for imputing missing data, if used
• Statistical software or programs used
Participant flow 12
Flow of participants through each stage of the study: enrollment, assignment, allocation and intervention exposure, follow-up, analysis (a diagram is strongly recommended):
• Enrollment: the numbers of participants screened for eligibility, found to be eligible or not eligible, declined to be enrolled, and enrolled in the study
• Assignment: the numbers of participants assigned to a study condition
• Allocation and intervention exposure: the number of participants assigned to each study condition and the number of participants who received each intervention
• Follow-up: the number of participants who completed the follow-up or did not complete the follow-up (i.e., lost to follow-up), by study condition
• Analysis: the number of participants included in or excluded from the main analysis, by study condition
• Description of protocol deviations from study as planned, along with reasons
Recruitment (item 13): Dates defining the periods of recruitment and follow-up
Baseline Data (item 14):
• Baseline demographic and clinical characteristics of participants in each study condition
• Baseline characteristics for each study condition relevant to specific disease prevention research
• Baseline comparisons of those lost to follow-up and those retained, overall and by study condition
• Comparison between study population at baseline and target population of interest
Baseline equivalence (item 15): Data on study group equivalence at baseline and statistical methods used to control for baseline differences: n.a.
Numbers analyzed (item 16):
• Number of participants (denominator) included in each analysis for each study condition, particularly when the denominators change for different outcomes; statement of the results in absolute numbers when feasible
• Indication of whether the analysis strategy was "intention to treat" or, if not, description of how non-compliers were treated in the analyses
Outcomes and estimation (item 17):
• For each primary and secondary outcome, a summary of results for each study condition, and the estimated effect size and a confidence interval to indicate the precision
• Inclusion of null and negative findings
• Inclusion of results from testing pre-specified causal pathways through which the intervention was intended to operate, if any
Ancillary analyses (item 18): Summary of other analyses performed, including subgroup or restricted analyses, indicating which are pre-specified or exploratory
Adverse events (item 19): Summary of all important adverse events or unintended effects in each study condition (including summary measures, effect size estimates, and confidence intervals)
Interpretation (item 20):
• Interpretation of the results, taking into account study hypotheses, sources of potential bias, imprecision of measures, multiplicative analyses, and other limitations or weaknesses of the study
• Discussion of results taking into account the mechanism by which the intervention was intended to work (causal pathways) or alternative mechanisms or explanations
• Discussion of the success of and barriers to implementing the intervention, fidelity of implementation
• Discussion of research, programmatic, or policy implications
Generalizability (item 21): Generalizability (external validity) of the trial findings, taking into account the study population, the characteristics of the intervention, length of follow-up, incentives, compliance rates, specific sites/settings involved in the study, and other contextual issues
Overall Evidence (item 22):
General interpretation of the results in the context of current evidence and current theory | 2021-08-07T06:18:10.685Z | 2021-08-05T00:00:00.000 | {
"year": 2021,
"sha1": "65ba26de1683c726171f48ffc350435e2194b76d",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0255472&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8dfdd33c82395e70a9c91c016b2ab29ac6f39ce2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248111526 | pes2o/s2orc | v3-fos-license | Analysis of Concentration Levels of Atmospheric Pollutants in Warri, Nigeria
A critical environmental problem facing the Niger Delta region is Air Pollution. This study therefore analyses concentration levels of atmospheric pollutants in the region. Statistical analysis of CH 4 and O 3 concentrations for the period of 2003 to 2012 and NO 2 and CO 2 concentrations for the period of 2011 to 2014 were carried out. The results showed that concentration levels of the pollutants were lower during the rainy season than during the dry year time. This is due to higher occurrences of atmospheric instability during the rainy season. On the other hand, ozone (O 3 ) concentration reached its peak value during the peak period of the rainy season unlike the other pollutants. In all likelihood, some of the ozone-depleting substances such as aerosols and atmospheric hydrogen chloride become soluble in water and are being washed off by precipitation during rainy season, thereby leading to increased tropospheric ozone concentration during the rainy season. The study also revealed a steady increase in the concentration of CO 2 within the period of investigation. This steady increase in CO 2 can be traced to the alarming increase in anthropogenic activities which appreciably increases the amount of CO 2 in the atmosphere. Methane (CH 4 ) had higher standard deviation values than carbon dioxide (CO 2 ), meaning that on a per molecule basis, a proportional rise in CH 4 is much more effective as a greenhouse gas than a similar increase in CO 2 . However, CO 2 has a greater effect than CH 4 on climate change owing to its higher atmospheric concentration. The Mann-Kendall rank statistics of the atmospheric pollutants revealed that the standardization variables U(t i ) and U'(t i ) have a sequential fluctuating behavior around a zero level.
Introduction
Air pollution is the addition of harmful substances known as air pollutants to the atmosphere, resulting in damage to the natural or built environment, human health, and quality of life. The major sources of air pollution in the Niger Delta area are gas flaring, traffic emissions and industrial emissions [1].
Since Nigeria's discovery of oil in the 1950's, the country (especially the Niger Delta region) has been suffering the undesirable environmental repercussions of oil development [2]. Nigeria is accountable for about 46% of Africa's total gas flared per tonne of oil produced and has the highest record (19.79%) of natural gas flaring globally [3]. [4] carried out a comparison of concentrations of ambient air pollutants in Lagos and in the Niger Delta region. He concluded that concentration levels of the pollutants were highest in the Niger Delta region. [5] undertook an air quality assessment of the Niger Delta. The study revealed that the levels of volatile oxides of carbon, sulphur and nitrogen exceed existing Federal Environmental Protection Agency (FEPA) limits for CO: 10 ppm, SO 2 : 0.01 ppm and NO 2 : 0.04 -0.06 ppm. Also, [6] examined air samples obtained from 16 communities in the Niger Delta region for their suspended particulate matter (SPM) composition. The study showed that the particulate load was above the World Health Organization (WHO) specification for both PM 2.5 and PM 10 annual mean and 24-h mean (PM 2.5 : 10 μg/m 3 annual mean, 25 μg/m 3 24-h mean; PM 10 : 20 μg/m 3 annual mean, 50 μg/m 3 24-h mean). Furthermore, [7] undertook an assessment of the atmospheric levels of PM 10 in Port Harcourt. The study revealed that the trend in the seasonal PM 10 concentration levels was dry > transition > wet. Even though some amount of work has been done on the air quality assessment of some other parts of the Niger Delta area, not much work has been undertaken on the analysis of emission levels of atmospheric pollutants in Warri which is one of the major hubs of petroleum activities in the Niger Delta region. Understanding the extent of the emission of atmospheric pollutants in Warri could assist in the mitigation of air pollution in the Niger Delta area.
Study Station
The city of Warri (5.52˚N, 5.75˚E) is a major center of petroleum activities in southern Nigeria. It has a population of over 311,970 (2006 census) [8]. The climate is marked by two different seasons: the rainy season (May to October) and the dry season (November to April). Figure 1 is the map of Delta state showing gas flaring sites and highlighting study station (Warri). The area is characterized with annual rainfall amount of about 2768.8 mm with rainfall periods varying from January to December. Over the course of the year, temperature typically varies from 20.56˚C to 31.11˚C and is rarely below 16.11˚C or above 33.33˚C. The daily methane (CH 4 ), carbon dioxide (CO 2 ), nitrogen dioxide (NO 2 ) and tropospheric ozone (O 3 ) concentrations data used in this study were obtained from the National Aeronautics and Space Administration (NASA).
Method
Monthly and annual averaging of the daily pollutant concentrations (NASA data) within the period of investigation was carried out. Statistical analysis of CH4 and O3 concentrations for the period of 2003 to 2012 and of NO2 and CO2 concentrations for the period of 2011 to 2014 was then performed. The sequential version of the Mann-Kendall rank statistics was used to analyze the atmospheric pollutant data in order to identify long-term trends. Its application involves the following steps in sequence (a minimal code sketch of this procedure is given after the list):
• The values x_i of the initial series are substituted by their ranks y_i, set up in ascending order.
• The magnitudes of y_i (i = 1, ..., N) are compared with y_j (j = 1, ..., i − 1). At each comparison, the number of cases y_i > y_j is counted and denoted n_i.
• A statistic t_i is given by t_i = Σ_{j=1}^{i} n_j. (1)
• The test statistic t_i has mean E(t_i) = i(i − 1)/4 and variance Var(t_i) = i(i − 1)(2i + 5)/72.
• The values of the statistic u(t_i) in sequence are then calculated as u(t_i) = (t_i − E(t_i)) / √Var(t_i).
• Likewise, the values of u'(t_i) are calculated backward, starting from the end of the series.
Table 1 and Table 2 show the values of average monthly concentration of CH4 (ppmv) and O3 (ppmv), respectively, for the period of 2003 to 2012, while Table 3 and Table 4 show the values of average monthly concentration of NO2 (ppmv) and CO2 (ppmv), respectively, for the period of 2011 to 2014. Table 5 shows the values of average annual concentration of CH4 (ppmv) and O3 (ppmv) for the period of 2003 to 2012, while Table 6 shows the values of average annual concentration of NO2 (ppmv) and CO2 (ppmv) for the period of 2011 to 2014. The accompanying figures show the graphs of average annual concentration levels of the atmospheric pollutants within the period of investigation. Table 9 shows the Mann-Kendall rank statistics for NO2.
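A minimal sketch of the sequential Mann-Kendall computation described above is given below. The standardization with mean i(i − 1)/4 and variance i(i − 1)(2i + 5)/72 is the standard one for this test and is supplied by us, since the explicit formulas are only partly reproduced above; the input series here is synthetic and serves only to show the call pattern for U(t_i) and U'(t_i).

```python
import numpy as np

def sequential_mann_kendall(x):
    """Progressive (sequential) Mann-Kendall statistic u(t_i) for a 1-D series x."""
    x = np.asarray(x, dtype=float)
    y = x.argsort().argsort() + 1                                    # ranks y_i, ascending
    n_i = np.array([np.sum(y[:i] < y[i]) for i in range(len(y))])    # cases y_i > y_j, j < i
    t = np.cumsum(n_i)                                               # t_i = sum of n_j up to i
    i = np.arange(1, len(y) + 1)
    mean = i * (i - 1) / 4.0
    var = i * (i - 1) * (2 * i + 5) / 72.0
    return np.divide(t - mean, np.sqrt(var), out=np.zeros_like(mean), where=var > 0)

series = np.array([1.0, 1.2, 0.9, 1.5, 1.4, 1.8, 2.1, 1.9, 2.4, 2.6])   # synthetic example only
u_forward = sequential_mann_kendall(series)                  # U(t_i)
u_backward = sequential_mann_kendall(series[::-1])[::-1]     # U'(t_i), computed from the end
```

Plotting u_forward and u_backward together and inspecting where they fluctuate around or cross the zero level is what underlies the trend interpretation reported in the Results and Conclusions.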
Discussion
The descriptive statistics of the annual averages show that CH4 had higher standard deviation values than CO2, implying that, on a per-molecule basis, a proportional rise in CH4
However, CO 2 has a greater effect than CH 4 on climate change owing to its higher atmospheric concentration.
The results from the analysis of the average monthly concentrations of the atmospheric pollutants show that concentration levels generally decline through the rainy season, reaching a minimum around September, and begin to increase as the dry season sets in. Therefore, concentration levels of the atmospheric pollutants were lower during the rainy season than during the dry yeartime. This is due to higher occurrences of atmospheric instability during the rainy season. This finding is in agreement with the result of [7]. On the other hand, ozone (O3) concentration reached its peak value during the peak period of the rainy season, unlike the other pollutants.
Conclusions
The results of the analysis of concentration levels of the air pollutants showed that concentration levels were lower during the rainy season than during the dry yeartime. This is due to higher occurrences of atmospheric instability during the rainy season. On the other hand, ozone (O 3 ) concentration reached its peak value during the peak period of the rainy season unlike the other pollutants. In all likelihood, some of the ozone-depleting substances such as aerosols and atmospheric hydrogen chloride become soluble in water and are being washed off by precipitation during the rainy season, thereby leading to increased tropospheric The study also revealed a steady increase in the concentration of CO 2 within the period of investigation. This steady increase in CO 2 can be traced to the alarming increase in anthropogenic activities (such as combustion of fossil fuels, industrial emissions, gas flaring and deforestation) which appreciably increases the amount of CO 2 in the atmosphere. Methane (CH 4 ) had higher standard deviation values than carbon dioxide (CO 2 ), meaning that on a per molecule basis, a proportional rise in CH 4 is much more effective as a greenhouse gas than a similar increase in CO 2 . However, CO 2 has a greater effect than CH 4 on climate change owing to its higher atmospheric concentration. The Mann-Kendall rank statistics of the pollutants showed that the standardization variables U(t i ) and U'(t i ) have a sequential fluctuating behavior around a zero level. | 2022-04-13T15:17:58.039Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "c8f80dc21fd74c70563a3acb81b1deb85e38e104",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=116482",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "89e96338dbf6fb96239c64f88dcd88ab947d3c0f",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
259065746 | pes2o/s2orc | v3-fos-license | Case report: High-resolution, intra-operative µDoppler-imaging of spinal cord hemangioblastoma
Surgical resection of spinal cord hemangioblastomas remains a challenging endeavor: the neurosurgeon’s aim to reach total tumor resections directly endangers their aim to minimize post-operative neurological deficits. The currently available tools to guide the neurosurgeon’s intra-operative decision-making consist mostly of pre-operative imaging techniques such as MRI or MRA, which cannot cater to intra-operative changes in field of view. For a while now, spinal cord surgeons have adopted ultrasound and its submodalities such as Doppler and CEUS as intra-operative techniques, given their many benefits such as real-time feedback, mobility and ease of use. However, for highly vascularized lesions such as hemangioblastomas, which contain up to capillary-level microvasculature, having access to higher-resolution intra-operative vascular imaging could potentially be highly beneficial. µDoppler-imaging is a new imaging modality especially fit for high-resolution hemodynamic imaging. Over the last decade, µDoppler-imaging has emerged as a high-resolution, contrast-free sonography-based technique which relies on High-Frame-Rate (HFR)-ultrasound and subsequent Doppler processing. In contrast to conventional millimeter-scale (Doppler) ultrasound, the µDoppler technique has a higher sensitivity to detect slow flow in the entire field-of-view which allows for unprecedented visualization of blood flow down to sub-millimeter resolution. In contrast to CEUS, µDoppler is able to image high-resolution details continuously, without being contrast bolus-dependent. Previously, our team has demonstrated the use of this technique in the context of functional brain mapping during awake brain tumor resections and surgical resections of cerebral arteriovenous malformations (AVM). However, the application of µDoppler-imaging in the context of the spinal cord has remained restricted to a handful of mostly pre-clinical animal studies. Here we describe the first application of µDoppler-imaging in the case of a patient with two thoracic spinal hemangioblastomas. We demonstrate how µDoppler is able to identify intra-operatively and with high-resolution, hemodynamic features of the lesion. In contrast to pre-operative MRA, µDoppler could identify intralesional vascular details, in real-time during the surgical procedure. Additionally, we show highly detailed post-resection images of physiological human spinal cord anatomy. Finally, we discuss the necessary future steps to push µDoppler to reach actual clinical maturity.
Introduction
Hemangioblastomas are highly vascularized, benign tumors (1), accounting for 2%-15% of all primary tumors in the spinal cord (2, 3), making them the third most common primary spinal cord tumor after astrocytoma and ependymoma (4). Histopathologically, hemangioblastomas are thought to consist of intricate vascular networks containing microvasculature, primarily at a capillary scale (5). Hemangioblastomas can either occur sporadically or as a part of von Hippel-Lindau (VHL) disease (6, 7), a multicentric disorder caused by an autosomal dominant tumor suppressor gene mutation, leading to multifocal and recurrent hemangioblastomas (8). In the majority of cases, hemangioblastomas present as intramedullary lesions, with only sporadic reports of combined intramedullary-extramedullary or exclusively intradural, extramedullary presentations of the disease (9).
To this day, surgical removal of the lesion remains the primary choice of treatment (10), with literature consistently reporting the importance of achieving total tumor resection in terms of minimizing recurrence of disease and improving functional outcome (3,11). However, the benefit of total versus subtotal tumor resection is highly dependent on the surgical safety of the procedure and risk of iatrogenic post-operative neurological deficits (12). What is more, intra-operative identification of tumor location and radicality of resections remains challenging, especially as routine MRIs have led to an evolution of case load towards earlier detection of small tumors or even incidental findings (13). In all cases, the use of intra-operative surgical tools can be of great importance to ensure safe and complete surgical resection.
Current clinical practice for spinal cord tumor resections relies heavily on a combination of pre-operative imaging (CT/MR/MRA) combined with electrophysiological intra-operative neuromonitoring (IONM). Although literature shows that IONM can significantly improve the prevention of neurological damage during surgery (14), there is still a considerable percentage of patients who experience significant long-term neurological deterioration (12,15,16), despite use of IONM. What is more, relying on pre-operative images to guide real-time intra-operative decision-making is fallible, especially in the spinal cord, where the laminectomy, myelotomy, locoregional swelling and bleeding, as well as shifts due to the resection cavity itself can significantly change the field of view as the surgery progresses, disturbing the match with pre-operatively acquired images, despite the latest neuro-navigation and -tracking software.
Over the last decade, µDoppler-imaging has emerged as a new, high-resolution, contrast-free sonography-based technique which relies on High-Frame-Rate (HFR)-ultrasound and subsequent Doppler processing. In contrast to conventional millimeter-scale (Doppler) ultrasound (28), the µDoppler technique has a higher sensitivity to detect slow flow in the entire field-of-view which allows for unprecedented visualization of blood flow down to submillimeter resolution. The sensitivity to slow flow is attributed to the large amount of frames available to calculate the Doppler signal from and ability to separate it from the frame wide tissue motion (29)(30)(31). Previously, our team has demonstrated the potential of µDoppler-imaging and its functional counterpart called 'functional Ultrasound (fUS)' during awake brain tumor resections, where we showed highly detailed functional maps and vascular morphology of a range of low and high-grade gliomas (31). Additionally, our team has evaluated the potential of µDoppler-imaging in the context of a cerebral arteriovenous malformation (AVM) (32), where the technique was able to identify key anatomical features including draining veins, supplying arteries and microvasculature in the AVM-nidus intra-operatively.
Like many other developments in Neurosurgery, the focus of µDoppler-imaging so far has primarily been on cerebral pathology, with only a handful of animal (33)(34)(35)(36) and in-human (37, 38) studies showing the potential for spinal cord imaging, with one in-human study in particular focusing on functional images acquired within the context of Spinal Cord Stimulation (SCS) for pain treatment (39). What would make µDoppler-imaging specifically valuable for the context of spinal cord hemangioblastomas is the technique's unique potential to provide high-resolution, real-time images of the vascular network of the lesion. Literature reports recommendations on improving surgical safety and efficacy during hemangioblastoma resection by focusing on vascular details specifically (12). Compared to currently available ultrasound-based techniques such as CEUS which aim to image these vascular details (20, 21, 26), µDoppler-imaging would be able to achieve images of the same if not better resolution without the need for a contrast agent (32). This means that µDoppler-imaging is continuous in nature, whereas CEUS is contrast bolus-dependent (20, 21, 26). Similarly, compared to conventional Doppler, µDoppler-imaging is able to reach far superior resolutions (in the range of 100-500 µm, depending on the transducer frequency) (40). µDoppler-imaging might therefore be a valuable addition to provide real-time, high-resolution vascular details to guide hemodynamics-based surgical decision-making in the OR, especially when macroscopic or pre-operative identification of vasculature is not sufficient.
Here we describe the first application of µDoppler-imaging in the case of a patient with hemangioblastomas located in the thoracic spinal cord. We demonstrate how µDoppler is able to identify intra-operatively and with high-resolution, key anatomical and hemodynamic features of the lesion. In contrast to pre-operative MRA, µDoppler could identify intralesional hemodynamic details, in real-time during the surgical procedure. Additionally, we show post-resection µDoppler-images of physiological human spinal cord anatomy. Finally, we discuss the necessary future steps to push µDoppler to reach actual clinical maturity.
Case description
Patient characteristics
The patient is a female in her 60's with an extensive prior history of hypertension and recurrent spinal hemangioblastomas, for which she had three prior surgical procedures: two procedures to remove intramedullary hemangioblastomas in the lumbar region (6 years apart, including laminectomy Th12-L3) and one procedure to remove a high cervical, intramedullary hemangioblastoma ( Figure 1A). This surgery was complicated by neurological deterioration. After rehabilitation, she was able to walk independently with the aid of crutches. Four years after the last surgical procedure, the patient returned with complaints of loss of strength in the right leg and shooting pains towards the foot. Neurological examination showed complete loss of right-sided lower leg strength (MRC gastrocnemius (GC) 0, Tibialis Anterior (TA) 0), and pre-existent weakness in the left leg (overall MRC 4). Additionally, the patient reported hypesthesia and loss of sharp-dull distinction on the lateral side of the right-sided lower leg and foot.
Pre-operative imaging
Pre-operative imaging (MRI/MRA) confirmed the presence of multiple intradural hemangioblastomas. The largest (lesion 1) appeared to be both intra- and extramedullary and was located at Th10-Th11 on the right posterior side of the myelum (1.0 cm × 1.4 cm × 2.2 cm, Figures 1B-C), causing myelum compression. An additional, extramedullary lesion was found at level Th11-Th12 (lesion 2). MRA (Figure 1D) did not show hypertrophy of the radicular artery or dural fistula formation. Prominent, probably venous vessels were seen directly caudal from lesion 1, suspected to be formed after local hemodynamic changes due to compression and/or congestion (Figure 1E).
Ethical statement
The patient was treated at the Department of Neurosurgery of Erasmus MC in Rotterdam. Prior to inclusion, written informed consent was obtained in line with the National Medical-Ethical Regulations (MEC2020-0440, NL67965.078.18).
MicroDoppler data acquisition
High-frame-rate (HFR)-acquisitions were performed using our experimental research system (Vantage-256, Verasonics, United States) interfaced with an L8-18I-D linear array (GE, 7.8 MHz, 0.15 mm pitch, probe footprint of 11 by 25 mm) or a 9l-D linear array (GE, 5.3 MHz, 0.23 mm pitch, probe footprint of 14 by 53 mm). Acoustic safety measurements were performed in collaboration with our department of Medical Technology prior to obtaining medical ethical approval to perform this study. For all scans we acquired continuous angled plane-wave acquisitions (10-12 angles equally spaced between −12 and 12 degrees) with a PRF ranging from 667 to 800 Hz depending on the imaging depth and transducer. The average ensemble size (number of frames used to compute one Power Doppler Image (PDI)) was set at 200 angle-compounded frames from which the live PDIs were computed, providing a live Doppler FR ranging between 3 and 4 Hz. The PDIs as well as the raw, angle-compounded beamformed frames were stored to a fast PCIe SSD hard disk for offline processing purposes. Parallel to our HFR-acquisitions, the patient's vital signs (EKG, arterial blood pressure) were recorded using a National Instruments' CompactDAQ module (NI 9250) at 500 Hz and stored for post-processing purposes.
To make our PDIs trackable in the OR, we integrated our transducers into Brainlab neuronavigation software by attaching the conventional optical tracking geometry to the transducer casing using custom-made 3D-printed attachments. An overhead camera recorded the surgical field as the surgeon performed µDoppler-acquisitions and removed the tumor. Through integration of our custom CUBE-cart in the OR-system, our live PDIs were displayed in real-time on the OR-screens (Figure 2A).
Intra-operative imaging procedure
Our experimental image acquisitions were integrated into the conventional surgical workflow, with an acquisition session both pre- and post-resection. First, the patient was placed in prone position and head-fixated in the Mayfield. A medial incision was made at the level of T9-T11, before stripping paraspinal muscles from the spinous processes and inserting a wound distractor. A laminectomy was performed from T9-T11, revealing the dura surrounding the spinal cord. The pre-resection HFR-acquisitions were performed prior to durotomy. First, hand-held 2D-images were made along a continuous trajectory spanning the full axial and sagittal length of the exposed myelum (Figure 2B) for orientation purposes. Next, stable acquisitions of 30 s were made by placing the probe over a ROI using a modified intra-operative surgical arm (Trimano, Gettinge) with a transducer-holder (Figure 2C). In the sagittal plane, the surgical field allowed for positioning of both the L8-18I-D linear array and the 9l-D linear array. For the axial plane, the surgical field was too narrow for the larger 9l-D array, so only the L8-18I-D linear array could be used in this context. Saline was added frequently to the operating field by the OR nurse to ensure adequate acoustic coupling during imaging. After the first HFR-acquisitions, the dura was opened (Figure 2D). Both tumors were removed microscopically and under guidance of IONM, including measurements of Somatosensory Evoked Potentials (SSEPs) and Motor Evoked Potentials (MEPs). The most cranial lesion (lesion 1) proved to have both an intramedullary and an extramedullary component. The caudal lesion (lesion 2) proved to be only extramedullary. Finally, the post-resection µDoppler-acquisitions were performed, again both hand-held and using the intra-operative surgical arm. The total intra-operative acquisition time of the µDoppler-data was around 30 min.
MicroDoppler data processing
In offline processing, PDIs were computed using an adaptive SVD clutter filter (20% cut-off percentage) over each ensemble and mapped onto a 100 µm grid using zero-padding in the frequency domain. The ensemble size was kept similar to the one used in acquisition (ne = 200). Given the significant, mostly in-plane motion due to the patient's breathing, single PDIs at the end of the inhale or exhale were manually selected from each dataset to ensure presentation of the most stable images.
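For orientation, the core of such an SVD-based clutter filter and Power Doppler computation can be sketched as follows. This is a minimal illustration assuming the angle-compounded beamformed IQ data are already available as a NumPy array; the function, the fixed 20% cut-off (standing in for the adaptive criterion mentioned above) and all names are hypothetical, not the authors' processing code.

```python
import numpy as np

def power_doppler(iq_frames, cutoff_frac=0.20):
    """Compute one Power Doppler Image (PDI) from an ensemble of angle-compounded,
    beamformed IQ frames using an SVD clutter filter.

    iq_frames   : complex ndarray of shape (nz, nx, ne), ne = ensemble size (here ~200)
    cutoff_frac : fraction of singular components discarded as tissue clutter
    """
    nz, nx, ne = iq_frames.shape
    casorati = iq_frames.reshape(nz * nx, ne)            # space x time (Casorati) matrix
    u, s, vh = np.linalg.svd(casorati, full_matrices=False)
    k = int(np.ceil(cutoff_frac * ne))                   # strongest components = tissue motion
    s[:k] = 0.0                                          # keep only the blood-signal subspace
    blood = (u * s) @ vh                                 # clutter-filtered signal
    pdi = np.mean(np.abs(blood) ** 2, axis=1).reshape(nz, nx)  # power over the ensemble
    return pdi

# Note: with a compounded frame rate of ~667-800 Hz and ne = 200, each PDI integrates
# ~0.25-0.3 s of data, which matches the reported live Doppler rate of 3-4 Hz.
```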
Color Doppler Images (CDIs) were computed by taking the mean of the difference of the instantaneous phase signal for all frames in one ensemble as described by Kasai et al. (41). All initial µDoppler-data processing was performed using custom scripts in Matlab 2020b (MathWorks, Inc.).
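The Kasai-type estimator cited above can be sketched in the same spirit: a lag-one autocorrelation of the clutter-filtered ensemble whose phase gives the mean Doppler shift (and hence flow directionality). Again this is an illustrative sketch under the same assumptions, not the authors' Matlab implementation.

```python
import numpy as np

def color_doppler(blood, frame_rate):
    """Lag-one autocorrelation (Kasai) estimate of the mean Doppler frequency.

    blood      : clutter-filtered complex ndarray of shape (nz, nx, ne)
    frame_rate : compounded frame rate in Hz
    Returns the mean Doppler frequency per pixel in Hz; its sign encodes
    the flow direction along the beam axis.
    """
    r1 = np.sum(blood[..., 1:] * np.conj(blood[..., :-1]), axis=-1)  # lag-1 autocorrelation
    mean_phase_shift = np.angle(r1)                  # average phase advance per frame (rad)
    return mean_phase_shift * frame_rate / (2.0 * np.pi)
```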
Pre-resection μDoppler images
2D-µDoppler was able to identify an intricate microvascular network inside both hemangioblastoma foci (Figures 3A-C). None of these details were visible in the pre-operative MRA (Figure 1E). Zooming in on one of the vascular details in the sagittal µDoppler-image (Figure 3A), we see the submillimeter level of detail µDoppler is able to provide in real-time during the surgery. Interestingly, when comparing the sagittal µDoppler-image (Figure 3A) to its conventional greyscale B-mode counterpart (Figure 3B), this particular vessel seems to demarcate the contour of the compressed healthy myelum. In Figure 3D we see an axial image of the most cranial (yellow asterisk) hemangioblastoma, again revealing µDoppler's ability to detect microvascular details. Figure 4A shows a final pre-resection sagittal image of the spinal cord, now focusing on a larger network of more prominent vessels, seen directly caudal from lesion 1. These vessels seem to be similar to the ones seen pre-operatively in MRA (Figure 1E), where they were suspected to be formed due to compression and/or congestion. Figure 5A shows the CDI of the same plane shown in Figure 3A, demonstrating the differences in flow directionality in the two hemangioblastoma foci. Figure 5B shows the CDI of the decompressed myelum post-resection of both foci (same plane as shown in Figure 4B). As we expect based on the anatomical organization of the spinal cord, the penetrating peripheral branches from the pial plexus clearly present with different flow directionalities on the dorsal and ventral sides of the spinal cord.
Post-operative patient outcomes
Directly post-operatively, neurological examination showed similar motor scores for the right leg as were seen pre-operatively. The patient underwent an intensive rehabilitation programme and was seen for regular check-ups with MRI-scans every 6 months. One year after surgery, walking and standing had subjectively improved based on patient report, without significant change on the MRC-scale for both legs. The patient expressed satisfaction with the surgical outcomes. The one-year MRI showed a slight growth of tissue in the thoracic surgical region, which now warrants closer monitoring with more regular MRI-scans (every 3 months).
Discussion
To the best of our knowledge, this work presents the first µDoppler-images of human spinal hemangioblastomas. We show how µDoppler has the ability to detect intricate, intralesional microvasculature, which is otherwise not available pre- or intra-operatively with the currently available clinical techniques such as MRI, MRA or conventional ultrasound. Having access to a real-time, high-resolution technique which can visualize hemodynamics in particular could be valuable to support the neurosurgeon in their balancing act between removing too much or too little of the hemangioblastoma intra-operatively. The hope is that by having access to demarcating microvascular details as we show here (for example Figures 3A-C, Figure 4A), combined with µDoppler's real-time hemodynamic information such as flow directionality (Figure 5), neurosurgeons will be able to identify key anatomical features, and how these change as surgery progresses. Within the neurosurgical field, colleagues such as Siller et al. recommended using vascular details to guide resection: for example, to first coagulate and transect feeding arteries before tumor resection and occlusion of the draining veins (12). Being able to identify these vessels easily and reliably, as well as to monitor in real-time what the hemodynamic consequences of a surgical decision would be, would be a valuable addition to the neurosurgeon's toolbox.
However, in this first description of intra-operative µDoppler-imaging applied to hemangioblastoma, we have not described any immediate surgical impact on the case. In fact, the Dutch medical-ethical committee explicitly restricted the use of our experimental technique for surgical decision-making at this point of the study. Until now, the resolution we could achieve while imaging the spinal cord with µDoppler was not available intra-operatively using ultrasound, with only CEUS coming somewhat close (20, 21, 26). Therefore, this current report aims to create scientific awareness of the availability and image quality of µDoppler, hoping to inspire others working on hemangioblastoma to join in studying its surgical potential.
What is more, in line with our previous report on µDoppler-imaging in the context of cerebral AVMs (32), real-time imaging of spinal cord hemodynamics and morphology has benefits beyond improving surgical decision-making alone: increasing our understanding of neurovascular pathology. Up until now, there are only a handful of reports in the literature showing images of the human spinal cord (37)(38)(39). This means that, as we continue to acquire µDoppler-images of the spinal cord in both health and disease, a wealth of new information becomes available for study. This could for example improve our understanding of how hemangioblastomas and other spinal cord tumors manifest and grow. Outside of oncology too, fields such as neurotrauma, and spinal cord injury in particular, would benefit from understanding physiological vascular patterns in the human spinal cord (36). Hopefully, this kind of knowledge could in turn circle back to improve surgical procedures and, ultimately, patient outcomes.
To truly add to surgical decision-making in the future, we will need to take our limited 2D-images and move to real-time 3D-imaging in the OR, an effort currently being undertaken by our team and many others alike. For 3D to succeed, but also to improve 2D-image quality, we will need to find better ways to deal with the breathing motion artefacts. In this paper, we chose to avoid motion compensation altogether by selecting specific, relatively stable PDIs which we acquired using our intra-operative surgical arm. Although our approach with the surgical arm has minimized motion artefacts, the ideal scenario would be to correct or compensate for the breathing motion artefact altogether.
Motion correction would be especially essential in the context of functional mapping of the spinal cord. As discussed in the introduction, the microvascular hemodynamics measured with µDoppler-imaging form the basis of 'functional Ultrasound' (fUS) (30,42). Through the process of neurovascular coupling (NVC), hemodynamics can serve as an indirect measure of neuronal activity and therefore brain functionality (30,31,43). So far, two teams have demonstrated how fUS can be used to map brain functionality during awake brain tumor resections, where patients were able to perform simple functional tasks such as lip pouting or word repetition (29,31). Although spinal cord tumor resections are not performed awake, they are at times guided by neurophysiological signals or electrical stimulation during IONM, which can serve as functional task patterns to use for functional mapping of the spinal cord. In animals, fUS proved to be reliable in tracking spinal cord responses to patterned epidural electrical stimulations (34). The authors also demonstrated how fUS had a higher sensitivity in monitoring spinal cord response than electromyography, with fUS being able to detect spinal cord signals subthreshold to the motor response level of SCS (34). Similarly, a first application of fUS in the human spinal cord during standard-of-care implantation of an SCS paddle lead showed the technique's ability to capture functional response in the axial plane after electrical stimulation in the context of pain treatment (39). A future direction of our team will be to expand µDoppler-imaging to IONM-guided functional mapping of the spinal cord during spinal cord tumor resections. One important point of focus in this effort will be to increase our understanding of the similarities and differences between the brain and spinal cord in terms of NVC. This case report marks the first application of µDoppler-imaging in the case of a patient with two thoracic spinal hemangioblastomas. We demonstrate how µDoppler is able to identify intra-operatively and with high-resolution, hemodynamic features of the lesion. In contrast to pre-operative MRA, µDoppler could identify intralesional vascular details in real-time during the surgical procedure, without the need for a contrast-agent. Additionally, our technique was able to capture highly detailed post-resection images of physiological human spinal cord anatomy. Although immediate surgical impact could not be achieved in this single case report, we hope this demonstration will add to scientific awareness of the availability of µDoppler-imaging, as well as the quality of its images when applied to new contexts such as hemangioblastoma.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving human participants were reviewed and approved by the METC (Medisch-Ethische Toetsingscommissie), Erasmus MC, Rotterdam (MEC2020-0440, NL67965.078.18). The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
Author contributions
SS, LV, BG and BH were involved in the data-acquisition. SS and PK were involved in the data-analysis. SS and PK were involved in writing the initial manuscript. All authors were involved in finalizing the manuscript. All authors contributed to the article and approved the submitted version. | 2023-06-05T13:19:55.750Z | 2023-06-05T00:00:00.000 | {
"year": 2023,
"sha1": "e11a465cb8deeff6b23ab81cbfabc8c7a7ecc8be",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "e11a465cb8deeff6b23ab81cbfabc8c7a7ecc8be",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
239132581 | pes2o/s2orc | v3-fos-license | Frequency-Mixing Lasing Mode at European XFEL †
We demonstrate generation of X-ray Free-Electron Laser (XFEL) pulses in frequency mixing mode at the SASE3 line of the European XFEL. The majority of the SASE3 FEL segments are tuned at two frequencies ω 1 and ω 2 following an alternating pattern. Leveraging non-linearities generated through longitudinal dispersion in the system, we obtain electron bunching at the frequency difference ω FM = ω 2 − ω 1 . FEL amplification at ω FM follows in the last few radiator segments. We report on the generation of frequency mixing at photon energies between 500 eV and 1100 eV with pulse energies, depending on the length of the radiator, at the mJ level. This method allows generating low photon energies in cases where the FEL runs at high electron energy and the target photon energy cannot be reached in the main undulator, with the simple addition of a short, custom-made afterburner.
Introduction
Self-Amplified Spontaneous Emission (SASE) X-ray free-electron lasers (XFELs) generate beam energy and density modulation as well as output radiation by exploiting a narrow-bandwidth FEL instability centered around a single resonant frequency. In other words, only a narrow spectral part of the input density modulation from the electron beam shot-noise is actually amplified around the resonant frequency.
As is well-known (see for example [1]), the FEL amplification process itself includes a non-linear regime, which follows the linear amplification down the undulator line. In the linear regime, different frequencies are treated fully independently. In the non-linear regime, this is no longer the case. This means that if the FEL amplification bandwidths were large enough, two separate frequencies ω 1 and ω 2 would also give rise to mixed frequency signals at ω 2 ± ω 1 . In its standard configuration, the FEL bandwidth is very narrow and the relative spectrum is of order 2ρ at saturation, where ρ indicates the efficiency parameter, so ω 1 and ω 2 are forced to be very close to each other, and the frequency mixing signals are outside of the amplification bandwidth.
Nonetheless, a frequency mixing lasing mode can be obtained at facilities like the European XFEL where long, tunable undulators are available. In fact, if one tunes different undulator segments at two well-separated frequencies ω 1 and ω 2 , both frequencies would be separately amplified in the linear regime, further yielding frequency mixing signals in the bunching at frequencies ω 2 ± ω 1 , now within the resonance-frequency reach of a final radiator. The bunching at these two frequencies can be further optimized with the help of a downstream element adding longitudinal dispersion, which can be provided by a few detuned undulator segments. Pushing this scheme, the SASE FEL background from ω 1 and ω 2 can be kept at very low levels by keeping the first undulator segments (tuned at ω 1 and ω 2 ) in the linear regime, and relying on dispersion to obtain bunching at the mixed frequencies. Finally, as already anticipated above, the signal at the mixed frequencies can be picked up by a radiator tuned at the sum or at the difference frequency.
After a few introductory considerations in Section 2, in Section 3 we report about the generation of radiation in frequency mixing mode at the SASE3 undulator line of the European XFEL [2]. SASE3 [3] includes 21 undulator segments, each of them with a magnetic length of five meters, with tuneable K-parameter, and a period of 68 mm. In the experiments discussed here we focused on the generation of photon energies between 500 eV and 1100 eV, amplified by a few segments of SASE3. Frequency mixing is a well-known technique employed in radio- and laser-physics. It was considered for seeded FELs in [4,5], while in [6] frequency mixing of long-wavelength modulations of the electron beam induced by the laser heater and by the seed laser at FERMI was actually shown to generate FEL pulses in the Extreme ultraviolet (EUV) range. However, frequency mixing was never shown to work with SASE FELs, nor in the X-ray region. Besides constituting a novel method for the generation of XFEL radiation, as discussed in Section 4, frequency mixing may constitute a useful mode of operation, making it possible to reach target photon energies that are too low to be obtained in the main undulator, with the simple addition of a short afterburner with larger-than-baseline K parameter reach.
Theoretical Considerations about Frequency Mixing
As stated in the introduction, frequency mixing is enabled by nonlinearities in the FEL amplification process. As is well known, in the linear regime, bunching, energy modulation, and radiation at different frequencies evolve independently of each other, while in the non-linear regime this is no longer the case. However, pushing the FEL process to the nonlinear regime to obtain frequency mixing has the disadvantage of a relatively large output at the initial frequencies ω 1 and ω 2 , which spoils the electron beam. Moreover, ideally, one wants to control and keep the emission at ω 1 and ω 2 as small as possible because it constitutes usually unwanted background to the main pulse at the mixed frequency and, in particular, at the difference frequency ω 2 − ω 1 . This is achieved by keeping the FEL process at ω 1 and ω 2 in the linear regime, and mostly relying on a longitudinally dispersive region for creating non-linearities and optimizing the mixing process. Therefore, for the frequency-mixing setup at SASE3, we rely on configurations such as the one in Figure 1. The largest part of the setup is dedicated to the generation of electron energy modulation in the linear regime at two separate frequencies. This is obtained by using an alternating configuration where several undulator segments are tuned at frequency ω 1 , followed by others tuned at frequency ω 2 (highlighted in green and yellow colors in the figure). In this way, diffraction effects are kept as small as possible and moreover, while one frequency is amplified, the beam energy modulation at the other can still benefit from the passage through a longitudinally dispersive medium. The exact sequence used in the configuration depends on the frequencies to be generated.
The first part of the undulator is followed by a second, consisting of a few segments that are detuned with respect to all the frequencies of interest. They act as a longitudinally dispersive region that further bunches the beam at the difference frequency ω 2 − ω 1 (white region in Figure 1) by transforming energy modulation into density modulation. Finally, a few radiator segments are tuned to the difference frequency (orange region in Figure 1). Initially, the pre-bunched electron beam emits coherent radiation. However, if the radiator is longer than a gain length, FEL amplification takes place, resulting in an exponential increase of the frequency-mixed component of the radiation pulse.
We can describe the essence of the frequency mixing scheme outlined above for the simple case of vanishingly small SASE bandwidths around ω 1 and ω 2 as a special subcase of the Echo-Enabled Harmonic Generation (EEHG) equation for the bunching factor [7]: Here, b n,m is the bunching at a harmonic with wave number k E = nk 1 + mk 2 , with k 1,2 the wave numbers of the two EEHG lasers, K = k 2 /k 1 , A 1,2 = ∆E 1,2 /σ E , and B 1,2 = R 56 (1,2) k 1 σ E /E 0 , with R 56 (1,2) the longitudinal dispersion of the first and second chicane respectively. To describe the frequency-mixing mode, it is sufficient to consider a particularly simple case when B 1 = 0, i.e., the first chicane is turned off (the generalization to the non-zero case is straightforward, and can be used to further optimize the performance). Moreover, our case of interest actually corresponds to m = 1 and n = −1. Finally, setting B = B 2 we obtain the bunching factor at the mixed frequency [5].
where we got rid of the minus signs in the arguments and in the order of the Bessel functions since we consider the modulus of the product. This bunching is then amplified in the output radiator. A thorough mathematical description of our system should actually include finite SASE bandwidths around the initial frequencies ω 1 and ω 2 . However, such a description, as well as considerations on the statistics of the mixing process, goes beyond the scope of this paper, whose purpose is rather to report on experimental results. They will be developed in a separate, forthcoming work. Here we only limit ourselves to a few additional remarks. First, we recall that in the linear regime, the SASE process can be modelled as a Gaussian process. Therefore, the arguments of the Bessel functions in Equation (2) must follow a Rayleigh distribution because they are proportional to the field amplitude through the energy modulations A 1 and A 2 . As a result, b FM should be considered as a random variable too.
Second, we note that in the simple case of a cold beam, the exponential function in Equation (2) becomes unity. The probability density function for b FM can be easily obtained numerically by calculating the product of the Bessel functions in Equation (2). As their arguments, we use two large sets of independent random variates obtained from the Rayleigh distribution. The probability density function calculated in this way depends on the mean value (which we choose equal for both Rayleigh distributions). It is then straightforward to find the average value for b FM and to optimize it. We found that the maximum bunching amounts to about 20% and is obtained for a mean value of (K − 1)A 1,2 B ≈ 1.5, slightly below the maximum of J 1 , which is reached for (K − 1)A 1,2 B ≈ 1.8.
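Because Equation (2) is not reproduced in this text, the cold-beam optimization described in the previous paragraph can be illustrated with a small Monte Carlo sketch: two independent sets of Rayleigh-distributed arguments are drawn, the modulus of the product of first-order Bessel functions is averaged, and the common mean of the two distributions is scanned. The normalization of the arguments is taken from the text, and the script is only a schematic check, not the authors' calculation.

```python
import numpy as np
from scipy.special import j1

rng = np.random.default_rng(0)
n_samples = 200_000

def mean_bunching(mu):
    """Average |J1(a1) * J1(a2)| for independent Rayleigh-distributed arguments with
    common mean mu (cold-beam case: the exponential factor in Eq. (2) equals unity)."""
    scale = mu / np.sqrt(np.pi / 2.0)      # Rayleigh scale parameter giving mean mu
    a1 = rng.rayleigh(scale, n_samples)
    a2 = rng.rayleigh(scale, n_samples)
    return np.mean(np.abs(j1(a1) * j1(a2)))

mus = np.linspace(0.5, 3.0, 26)
b_avg = [mean_bunching(m) for m in mus]
best = int(np.argmax(b_avg))
print(f"max <b_FM> ~ {b_avg[best]:.2f} at mean argument ~ {mus[best]:.2f}")
# One expects a maximum of roughly 0.2 near a mean of 1.5, in line with the values quoted above.
```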
Finally, we note that while the SASE process in the linear regime follows Gaussian statistics, the frequency mixed signal obviously deviates from it, being the result of a non-linear operation.
Setup and Results
In the following, we report on several experimental results recently obtained at the SASE3 line of the European XFEL. Table 1 summarizes the main parameters and output radiation pulse energy for the various experiments. In all cases, the bunch charge was 250 pC. Table 1. Summary of the main parameters for four different frequency mixing experiments (Exp. 1-4) at SASE3: photon energy of the target signal E FM and of the two frequencies E 2,1 , electron energy E e , radiation pulse energy after 1 up to n radiator segments E r,1 . . . E r,n . The values correspond to the actual signal E FM , where the background due to the other colors has been subtracted from the XGM reading. The comment (no) tap* means (no) tapering applied. Note that readings below 20-30 µJ are within the measurement uncertainty, and only serve as a rough estimation of the actual pulse energy.
Since SASE3 has a period of 68 mm and consists of 21 variable-gap undulator segments, the total magnetic length amounts to 105 m [3]. After the 11th segment, a magnetic chicane for 2-color pump-probe (2CPP) experiments has been recently installed [8].
The undulator takes several hours to be configured and optimized for frequency mixing lasing, an operation that was carried out independently for the various experiments.
However, once an optimal configuration file is saved, it can be loaded later on, within a few minutes. For the sake of illustration, Figure 1 shows one of the configurations used during the fourth experiment, Exp. 4 (see Table 1). The first part of the undulator comprises segments 2-15 (SASE3 begins from segment number 2 and segment number 13 corresponds to the 2CPP chicane) in an alternating sequence slightly favouring the second color at ω 2 (equivalent to the photon energy E 2 = 1400 eV), as it corresponds to the shortest wavelength. Up to the present, it has not been possible to keep the generation of ω 1 and ω 2 before the 2CPP chicane, which would have been optimal in terms of longitudinal dispersion control. However, we could still optimize the chicane to obtain some gain in the bunching. In the case shown in Figure 1, the optimum delay amounted to 0.7 fs. Four segments (16-19) were used as dispersive elements with scrambled values of the K parameter, optimized for best output, while the last four segments were tuned around 600 eV (the exact value was 604 eV for this case) with no taper applied. The actual values in the last line of Exp. 4 in Table 1 refer to a slightly different case of an optimum delay of 0.6 fs, two segments (16-17) used as dispersive elements, and the last six segments, tapered, used as a radiator at the difference frequency.
A single X-ray Gas Monitor (XGM) device [9,10] could be used to investigate each color separately. This was achieved by suppressing the others, i.e., by detuning or opening the relevant segments. We initially optimized the second color at 1400 eV in the first part of the undulator (up to segment 16) in normal SASE mode, removing every taper, and taking care to avoid saturation. After a first chicane delay optimization, which gave about a factor-of-four increase in FEL output energy, we set the undulator segments into an alternating pattern configuration, taking care to obtain a similar output from each separate color and keeping the output pulse energy level around a few tens of microjoules, to ensure that we were still in the linear regime. Later on, the configuration was tweaked a few times to optimize the output at ω FM and to minimize the background colors at ω 1,2 . Figure 1 actually refers to an intermediate configuration. At this point, calibrating the XGM for the difference frequency gives an unphysical background value from the colors at ω 1 and ω 2 , but further closing a few segments (up to six) at the end of the undulator to the frequency difference ω 2 − ω 1 (while keeping at least two segments detuned after segment 16 to create longitudinal dispersion) yields a variation in the XGM signal that corresponds directly to the pulse energy of the mixed frequency. Once the mixed frequency signal is found, one can proceed to optimize the configuration and, subsequently, to characterize the output signal.
In Table 1, we report up to 4.5 mJ with six radiator segments tuned at 600 eV. The segments in the radiator were tapered to an empirically found optimum valid for the six radiators. The XGM value of 4 µJ with one radiator segment closed was below the measurement uncertainty. Visually, it neared the background contribution of the two frequencies ω 1,2 and, since the XGM was set for ω FM , does not have physical meaning.
The backgrounds from each of the two colors ω 1,2 were measured in the optimized configuration by detuning all other segments, and were found to be at the noise level (a few microjoules), which we ascertained separately by imparting a transverse kick to the electron trajectory prior to the entrance to SASE3. During the optimization, we were able to reduce the contribution of the two colors by about a factor of ten (from the few tens of microjoules reported above), while providing an optimal bunching level at ω FM . In this respect, tuning the longitudinal dispersion by adjusting the K value of the segments out of resonance before the final radiator easily led to important changes during the tuning process (without performing a systematic study, we found a factor-of-two improvement with four radiator segments). Note that the background level decreased substantially in time as we performed the various experiments. During the very first experiment, Exp. 1 (see Table 1), we imaged [11] the transverse FEL radiation pulse distribution with the help of a scintillator with the radiator segments detuned (left plot in Figure 2) or at resonance with the mixed frequency (right plot in Figure 2). The appearance of the frequency-mixed signal, with a slight pointing difference with respect to the background from the two initial colors at 1200 eV and 700 eV, is evident. The estimated angular divergence at 500 eV is on the order of 20 µrad. An important feature of the frequency mixing mode is that, despite the more involved setting procedure, it is easily tuneable in output photon energy. We showed this during the fourth experiment, by scanning the final photon energy from 504 eV to 604 eV in steps of 20 eV. This was done by adjusting the K parameter of the final radiator (and hence ω FM ) and that of the color with the highest photon energy, ω 2 . The scan was performed with four radiator segments tuned at resonance and without adjusting any other parameter. It took about four minutes to perform this scan manually and it could be easily automated. Results are shown in Figure 3, which shows an average output pulse energy stability within about 200 µJ. Another important characteristic to be studied is the amplification bandwidth around the nominal output frequency ω FM . This can be done by correlating the pulse energies measured by the XGM with the photon energy, that is, by scanning the K parameter of the radiator segments. The results of the scan are presented in the upper plot in Figure 4, where each point is found by averaging over 40 measurements (4 s at 10 Hz). The lower plot in the same figure shows the actual pulse energies during the scan. It should be remarked that the bandwidth provided by this scan does not correspond directly to the bandwidth of the output radiation. To illustrate this point, during one of the various experiments, Exp. 2 (see Table 1), we resolved the frequency-mixed signal in frequency with the help of a spectrometer [12]; see Figure 5. The figure corresponds to the case of three closed radiator segments. In Figure 6, upper plot, we show the color-coded spectra as a function of the K parameter; in the inset we extract the same information as in a K-parameter scan done with the pulse energies measured by the XGM (see Figure 6, lower plot). Note that the FWHM bandwidth is about 0.6%, which is larger than the intrinsic bandwidth of SASE at this photon energy, but is comparable to the typical performance of SASE3, which is usually strongly influenced by electron energy chirp.
Figure 6 (caption): Upper plot: color-coded spectra as a function of the K parameter; inset: maximum of the spectrometer signal as a function of the K parameter. Lower plot: pulse energy as a function of the K parameter. Each point is found by averaging over 50 single SASE3 pulses (5 s at 10 Hz).
Outlook and Conclusions
In this paper, we investigated frequency mixing generation at the SASE3 line of the European XFEL. We implemented it by using the first part of SASE3 to generate two frequencies ω 1 and ω 2 in an alternating K configuration, by subsequently obtaining a large bunching at ω 2 − ω 1 using a few non-resonant segments as dispersive elements, and finally by generating radiation at the target frequency in a last, short part of SASE3 used as a radiator; see Figure 1. We demonstrated, for the first time, frequency mixing in the X-ray region, between 500 eV and 1100 eV. We showed that the bunching can be amplified in the radiator, and that six SASE3 segments dedicated to amplification allow reaching 4.5 mJ at 500-600 eV; see Table 1 and Figure 7. We studied the actual amplification bandwidth by means of K-parameter scans in the radiator; see Figure 4. Finally, we demonstrated easy tuneability over a range of 100 eV (see Figure 3) and showed that the frequency mixing mode of operation only needs a single XGM in order to be established and operated. Nevertheless, in some of the experiments we also acquired spectra (see Figures 5 and 6) and the transverse profile distribution (see Figure 7) of the frequency-mixed signal.
It should be noted that together with the bunching at the difference frequency ω 2 − ω 1 one automatically generates bunching at the sum frequency ω 2 + ω 1 as well. The sum frequency generation is certainly an interesting phenomenon to consider, and it might be useful to reach shorter wavelengths than allowed by the baseline mode of operation of an FEL when a suitable radiator is available. However, in this study we limited ourselves to the difference frequency generation. This is not only of interest as a phenomenon pertaining to FEL physics, but has the practical relevance of a method to provide low photon energies for operation at high electron energies. In fact, due to constraints posed by the parallel operation of three FEL lines (SASE1 and SASE2 for hard X-rays, and SASE3 for soft X-rays), it is usually preferred to operate the accelerator at relatively high electron energies. To be specific, the European XFEL usually operates with an electron energy of 14 GeV and a charge of 250 pC. For this electron energy, the lowest photon energy in the SASE3 undulator is 660 eV. This is limited by the maximum value of the undulator parameter K ≈ 9, with a period of 6.8 cm. The generation of pulses with lower photon energy is possible, but only at lower electron energies, which poses issues in the planning of simultaneous experimental activities at SASE1, SASE2, and SASE3. Consider now the addition of a short radiator reaching lower photon energies than those achievable by the main FEL undulator at the fundamental and at a fixed electron energy. Using such a radiator, frequency mixing allows generating intense FEL pulses at lower photon energies than those permitted in standard SASE mode. This offers an interesting alternative for operating the facility at the same time for very soft and very hard X-ray radiation using a single electron energy.
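The 660 eV limit quoted here follows from the standard planar-undulator resonance condition; as a quick numerical check (taking the 14 GeV electron energy, 6.8 cm period and K ≈ 9 above as given, and using the textbook formula rather than any facility-specific correction):

```python
H_EV = 4.135667696e-15     # Planck constant in eV*s
C = 2.99792458e8           # speed of light in m/s
MEC2_GEV = 5.10998950e-4   # electron rest energy in GeV

def fundamental_photon_energy_ev(e_gev, lambda_u_m, k):
    """On-axis fundamental of a planar undulator: lambda = lambda_u/(2 gamma^2) * (1 + K^2/2)."""
    gamma = e_gev / MEC2_GEV
    lam = lambda_u_m / (2.0 * gamma**2) * (1.0 + k**2 / 2.0)
    return H_EV * C / lam

# SASE3 baseline: 14 GeV, 68 mm period, K ~ 9  ->  about 0.66 keV, i.e. the 660 eV limit.
print(fundamental_photon_energy_ev(14.0, 0.068, 9.0))
```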
An Apple-X afterburner made of four segments with 22 periods each and a period length of 9 cm will soon be installed after the main SASE3 undulator. At 14 GeV, it will allow resonance down to 440 eV in circular or linear horizontal/vertical polarization mode. Therefore, one can use the frequency mixing mode to generate bunching between 440 eV and 660 eV in the main SASE3 undulator, as the difference of allowed energies, and subsequently exploit the Apple-X afterburner to actually radiate at those photon energies. One cannot directly compare the output of our experiment and the actual output for the frequency mixing mode enabled by the Apple-X afterburner, because of the different parameters (undulator period, photon energy, electron energy, and undulator polarization). Nevertheless, theoretically, for the same parameters of the electron beam the gain length for 300 eV and a period of 9 cm is comparable with that for 700 eV and a period of 6.8 cm. This reasoning suggests that the four Apple-X afterburner segments will be roughly equivalent to two SASE3 baseline segments. For two SASE3 segments used as radiators, Table 1 indicates an output of only a few tens of microjoules, because we report the gain of a setup optimized for six radiators. However, during the same experiment, we were able to obtain up to 200 µJ with two segments, just by optimizing the longitudinal dispersion.
Moreover, this energy level can be dramatically boosted by possibly refurbishing a few SASE3 undulator segments, increasing their period to 9 cm, which is a possibility under current scrutiny at the European XFEL. In other words, frequency mixing allows generating X-ray pulses for an FEL running at high electron energy where the target photon energy is too low to be reached in the main undulator, with the simple addition of a short afterburner with extended photon energy reach. | 2021-10-20T16:17:09.842Z | 2021-09-13T00:00:00.000 | {
"year": 2021,
"sha1": "881fb37136df6b4e8b16757c6a61391f89fef570",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/11/18/8495/pdf?version=1631533358",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "dcde102239cbc741853647589c2a9aef44ba5c00",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
18719900 | pes2o/s2orc | v3-fos-license | Long range scattering for the Maxwell-Schrödinger system with large magnetic field data and small Schrödinger data
We study the theory of scattering for the Maxwell-Schrödinger system in the Coulomb gauge in space dimension 3. We prove in particular the existence of modified wave operators for that system with no size restriction on the magnetic field data in the framework of a direct method which requires smallness of the Schrödinger data, and we determine the asymptotic behaviour in time of solutions in the range of the wave operators.
Introduction
This paper is devoted to the theory of scattering and more precisely to the construction of modified wave operators for the Maxwell-Schrödinger system (MS) 3 in 3 + 1 dimensional space time. That system describes the evolution of a charged nonrelativistic quantum mechanical particle interacting with the (classical) electromagnetic field it generates. It can be written as follows : (1.1) Here u and (A, A 0 ) are respectively a complex valued function and an IR 3+1 valued function defined in space time IR 3+1 , ∇ A = ∇−iA , ∆ A = ∇ 2 A and ⊓ ⊔ = ∂ 2 t −∆ is the d'Alembertian in IR 3+1 . We shall consider that system exclusively in the Coulomb gauge ∇ · A = 0. In that gauge, one can replace the system (1.1) by a formally equivalent one in the following standard way. The second equation of (1.1) can be solved for A 0 by where P = 1l − ∇∆ −1 ∇ is the projector on divergence free vector fields, together with the Coulomb gauge condition ∇ · A = 0 which is formally preserved by the evolution. From now on we restrict our attention to the system (1.3).
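Since the displayed equations (1.1)-(1.3) did not survive the extraction of this text, the following block records, for orientation only, a commonly used Coulomb-gauge form of the Maxwell-Schrödinger system assembled from the notation defined above; signs and normalizations may differ from the authors' (1.1)-(1.3).

```latex
% Commonly used Coulomb-gauge form of the Maxwell-Schrodinger system (orientation only;
% the authors' (1.1)-(1.3) may differ in normalization):
\begin{align}
  i\,\partial_t u &= -\tfrac12\,\Delta_A u + A_0\,u, \\
  \square A &= P\,\mathrm{Im}\bigl(\bar u\,\nabla_A u\bigr), \qquad \nabla\cdot A = 0, \\
  A_0 &= (-\Delta)^{-1}|u|^2 = (4\pi|x|)^{-1} \ast |u|^2,
\end{align}
% with \nabla_A = \nabla - iA, \Delta_A = \nabla_A^2, \square = \partial_t^2 - \Delta,
% and P = 1 - \nabla\Delta^{-1}\nabla the projector on divergence-free vector fields.
```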
The (MS) 3 system is known to be locally well posed in sufficiently regular spaces [11] [12] and to have global weak solutions in the energy space [9] in various gauges including the Coulomb gauge. However that system is so far not known to be globally well posed in any space.
A large amount of work has been devoted to the theory of scattering for nonlinear equations and systems centering on the Schrödinger equation, in particular for nonlinear Schrödinger (NLS) equations, Hartree equations, Klein-Gordon-Schrödinger (KGS), Wave-Schrödinger (WS) and Maxwell-Schrödinger (MS) systems. As in the case of the linear Schrödinger equation, one must distinguish the short range case from the long range case. In the former case, ordinary wave operators are expected and in a number of cases proved to exist, describing solutions where the Schrödinger function behaves asymptotically like a solution of the free Schrödinger equation. In the latter case, ordinary wave operators do not exist and have to be replaced by modified wave operators including a suitable phase in their definition. In that respect, the (MS) 3 system (1.1) belongs to the borderline (Coulomb) long range case, because of the t −1 decay in L ∞ norm of solutions of the wave equation. Such is the case also for the Hartree equation with |x| −1 potential, for the Wave-Schrödinger system (WS) 3 in IR 3+1 and for the Klein-Gordon-Schrödinger system (KGS) 2 in IR 2+1 .
The construction of modified wave operators for the previous long range equations and systems has been tackled by two methods. The first one was initiated in [13] on the example of the NLS equation in IR 1+1 and subsequently applied to the NLS equation in IR 2+1 and IR 3+1 and to the Hartree equation [1], to the (KGS) 2 system [14] [15] [16] [17], to the (WS) 3 system [18] and to the (MS) 3 system [19] [21].
That method is rather direct, starting from the original equation or system. It will be sketched below. It is restricted to the (Coulomb) limiting long range case, and requires a smallness condition on the asymptotic state of the Schrödinger function. Early applications of the method required in addition a support condition on the Fourier transform of the Schrödinger asymptotic state and a smallness condition of the Klein-Gordon or Maxwell field in the case of the (KGS) 2 or (MS) 3 system respectively [14] [21]. The support condition was subsequently removed for the (KGS) 2 and (MS) 3 system and the method was applied to the (WS) 3 system without a support condition, at the expense of adding a correction term to the Schrödinger asymptotic function [15] [18] [19]. The smallness condition of the KG field was then removed for the (KGS) 2 system, first with and then without a support condition [16] [17]. Finally the smallness condition on the wave field was removed for the (WS) 3 system, without a support condition or a correction term to the Schrödinger asymptotic function [8].
In the present paper, we extend the results of our previous paper [8] from the (WS) 3 system to the (MS) 3 system in the Coulomb gauge (1.3). In particular we prove the existence of modified wave operators without any smallness condition on the magnetic potential A, and without a support condition or a correction term on the asymptotic Schrödinger function. In addition, in the same spirit as in [8], we treat the problem in function spaces that are as large as possible, namely with regularity as low as possible. As a consequence, we require only a much lower regularity of the asymptotic state than in previous works.
For completeness and although we shall not make use of that fact in the present paper, we mention that the same problem for the Hartree equation and for the (WS) 3 and (MS) 3 system can also be treated by a more complex method where one first applies a phase-amplitude separation to the Schrödinger function. The main interest of that method is to remove the smallness condition on the Schrödinger function, and to go beyond the Coulomb limiting case for the Hartree equation. That method has been applied in particular to the (WS) 3 system and to the (MS) 3 system in a special case [4] [5] [6].
We now sketch briefly the method of construction of the modified wave operators initiated in [13]. That construction basically consists in solving the Cauchy problem for the system (1.3) with infinite initial time, namely in constructing solutions (u, A) with prescribed asymptotic behaviour at infinity in time. We restrict our attention to time going to +∞. That asymptotic behaviour is imposed in the form of suitable approximate solutions (u a , A a ) of the system (1.3). The approximate solutions are parametrized by data (u + , A + ,Ȧ + ) which play the role of (actually would be in simpler e.g. short range cases) initial data at time zero for a simpler evolution. One then looks for exact solutions (u, A) of the system (1.3), the difference of which with the given asymptotic ones tends to zero at infinity in time in a suitable sense, more precisely, in suitable norms. The wave operator is then defined traditionally as the map Ω + : (u + , A + ,Ȧ + ) → (u, A, ∂ t A)(0). However what really matters is the solution (u, A) in the neighborhood of infinity in time, namely in some interval [T, ∞), and we shall restrict our attention to the construction of such solutions.
Continuing such solutions down to t = 0 is a somewhat different question, connected with the global Cauchy problem at finite times, which we shall not touch here, especially since the (MS) 3 system is not known to be globally well posed in any function space. The construction of solutions (u, A) with prescribed asymptotic behaviour (u a , A a ) is performed in two steps.
Step 1. One looks for (u, A) in the form (u, A) = (u a + v, A a + B) with ∇ · A a = ∇ · B = 0. The system satisfied by the new functions (v, B) can be written as where G 1 and G 2 are defined by and the remainders are defined by (1.6) It is technically useful to consider also the partly linearized system for functions (1.7) The first step of the method consists in solving the system (1.4) for (v, B), with (v, B) tending to zero at infinity in time in suitable norms, under assumptions on (u a , A a ) of a general nature, the most important of which being decay assumptions on the remainders R 1 and R 2 . That can be done as follows. One first solves the linearized system (1.7) for (v ′ , B ′ ) with given (v, B) and initial data (v ′ , B ′ )(t 0 ) = 0 for some large finite t 0 . One then takes the limit t 0 → ∞ of that solution, thereby obtaining a solution (v ′ , B ′ ) of (1.7) which tends to zero at infinity in time. That construction defines a map φ : (v, B) → (v ′ , B ′ ). One then shows by a contraction method that the map φ has a fixed point. That first step will be performed in Section 2.
Step 2. The second step of the method consists in constructing approximate asymptotic solutions (u a , A a ) satisfying the general estimates needed to perform Step 1. With the weak time decay allowed by our treatment of Step 1, one can take the simplest version of the asymptotic form used in previous works [6] [19] [21]. Thus we choose ϕ is a real phase to be chosen below and w + = F u + . We furthermore choose A a in the form A a = A 0 + A 1 where A 0 is the solution of the free wave equation ⊓ ⊔A 0 = 0 given by where ω = (−∆) 1/2 , and where (1.14) In particular A 1 is constant in time. We finally choose ϕ by imposing We shall show in Section 3 that the previous choice fulfills the conditions needed for Step 1, under suitable assumptions on the asymptotic state (u + , A + ,Ȧ + ). In order to state our results we introduce some notation. We denote by F the Fourier transform, by < ·, · > the scalar product in L 2 and by · r the norm in L r ≡ L r (IR 3 ), 1 ≤ r ≤ ∞ and we define δ(r) = 3/2 − 3/r. For any nonnegative integer k and for 1 ≤ r ≤ ∞, we denote by W k r the Sobolev spaces where α is a multiindex, so that H k = W k 2 . We shall need the weighted Sobolev spaces H k,s defined for k, s ∈ IR by H k,s = u : u; H k,s = (1 + x 2 ) s/2 (1 − ∆) k/2 u 2 < ∞ so that H k = H k,0 . For any interval I, for any Banach space X and for any q, 1 ≤ q ≤ ∞, we denote by L q (I, X) (resp. L q loc (I, X)) the space of L q integrable (resp. locally L q integrable) functions from I to X if q < ∞ and the space of measurable essentially bounded (resp. locally essentially bounded) functions from I to X if q = ∞. For any h ∈ C([1, ∞), IR + ), non increasing and tending to zero at infinity and for any interval I ⊂ [1, ∞), we define the space We can now state our result. 2 and let X(·) be defined by (1.17). Let u a be defined by (1.8) with w + = F u + and with ϕ defined by (1.16) (1.2) (1.14). Let A a = A 0 +A 1 with A 0 defined by (1.11) and A 1 by (1.13) (1.14). Let u + ∈ H 3,1 ∩H 1,3 with xw + 4 and w + 3 sufficiently small. Let ∇ 2 A + , ∇Ȧ + , ∇ 2 (x · A + ) and ∇(x ·Ȧ + ) ∈ W 1 1 with A + , x · A + ∈ L 3 andȦ + , x ·Ȧ + ∈ L 3/2 and let ∇ · A + = ∇ ·Ȧ + = 0.
Then there exists T , 1 ≤ T < ∞ and there exists a unique solution (u, A) of the system (1. for some constant C depending on (u + , A + , Ȧ + ) and for all t ≥ T . Remark 1.2. The assumptions A + , x · A + ∈ L 3 and Ȧ + , x · Ȧ + ∈ L 3/2 serve to exclude the occurrence of constant terms in A + , x · A + , Ȧ + , x · Ȧ + and of terms linear in x in A + , x · A + , but are otherwise implied by the W 1 1 assumptions on those quantities through Sobolev inequalities.
Remark 1.3. The assumptions on A + ,Ȧ + imply that ω 1/2 A + , ω −1/2Ȧ + ∈ H 1 through Sobolev inequalities, and therefore also that ∇A + ,Ȧ + ∈ L 2 . As a consequence the free wave solution A 0 defined by (1.11) belongs to L 4 (IR, W 1 4 ) by Strichartz inequalities, with ∂ t A 0 ∈ L 4 (IR, L 4 ) [3]. In particular A 0 satisfies the local in time regularity of B required in the definition of the space X(·). Further- , namely A 0 is a finite energy solution of the wave equation.
The Cauchy problem at infinite initial time
In this section we perform the first step of the construction of solutions of the system (1.3) as described in the introduction, namely we construct solutions (v, B) of the system (1.4) defined in a neighborhood of infinity in time and tending to zero at infinity under suitable regularity and decay assumptions on the asymptotic functions (u a , A a ) and on the remainders R i . As a preliminary to that study, we need to solve the Cauchy problem with finite initial time for the linearized system (1.7). That system consists of two independent equations. The second one is simply a wave equation with an inhomogeneous term and the Cauchy problem with finite or infinite initial time for it is readily solved under suitable assumptions on the inhomogeneous term, which will be fulfilled in the applications. The first one is a Schrödinger equation with time dependent magnetic and scalar potentials and with time dependent inhomogeneity, which we rewrite in a more concise form and with slightly different notation as We first give some preliminary results on the Cauchy problem with finite initial time for that equation at the level of regularity of H 2 . The following proposition is a minor variation of Proposition 3.2 in [7].
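For orientation only, the following displays sketch (i) the concise form referred to in this paragraph, namely a Schrödinger equation with a time-dependent magnetic potential A, scalar potential V, and inhomogeneity f, and (ii) the standard three-dimensional Strichartz inequalities for the free Schrödinger propagator U(t) that Lemma 2.1 below refers to. Both are stated generically; the precise coefficients and the exact formulation used in the paper may differ.

\[ i\,\partial_t v = -\tfrac{1}{2}\,\Delta_A v + V v + f, \qquad \Delta_A = (\nabla - iA)^2 . \]

A pair (q, r) is admissible when 2/q = \delta(r) = 3/2 - 3/r with 2 \le r \le 6, and for admissible pairs (q_i, r_i)

\[ \| U(\cdot)\,\varphi \|_{L^{q}(I, L^{r})} \le C\, \|\varphi\|_2, \qquad \Big\| \int_{t_0}^{t} U(t-s)\, f(s)\, ds \Big\|_{L^{q_1}(I, L^{r_1})} \le C\, \| f \|_{L^{\bar q_2}(I, L^{\bar r_2})}, \]

where bars denote Hölder-conjugate exponents.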
Lemma 2.1. The following inequalities hold.
(1) For any admissible pair (q, r) and for any u ∈ L 2 (2) Let I be an interval and let t 0 ∈ I. Then for any admissible pairs (q i , r i ), In addition to the Strichartz inequalities for the Schrödinger equation, we shall need special cases of the Strichartz inequalities for the wave equation [3] [10]. Let I be an interval, let t 0 ∈ I and let B(t 0 ) = ∂ t B(t 0 ) = 0. Then We now begin the construction of solutions of the system (1.4). For any T , t 0 with 1 ≤ T < t 0 ≤ ∞, we denote by I the interval I = [T, t 0 ] and for any t ∈ I, we denote by J the interval J = [t, t 0 ]. In all this section, we denote by h a function in C([1, ∞), IR + ) such that for some λ > 0, the function h(t) ≡ t λ h(t) is non increasing and tends to zero as t → ∞, and we denote by j, k nonnegative integers. We shall make repeated use of the following lemma.
for 1 ≤ k ≤ n, for some constants N k and for all t ∈ I.
Let ρ ≥ 0 such that nλ + ρ > µ. Then the following inequality holds for all by a direct application of Hölder's inequality in J. The same situation occurs if ρ > µ.
Part (2) is proved by separating |x| −1 into short and long distance parts, applying the Hölder inequality, and optimizing the result with respect to the point of separation (see [1]).
⊓ ⊔
We can now state the main result of this section. Proposition 2.2. Let h be defined as above with λ = 3/8 and let X(·) be defined by (1.17). Let u a , A a , R 1 and R 2 be sufficiently regular (for the following estimates to make sense) and satisfy the estimates and in particular for some constant C and for all t ≥ T .
Proof. We follow the sketch given in the introduction.
for all t ≥ T 0 . We first construct a solution (v ′ , B ′ ) of the system (1.7) in X([T, ∞)).
For that purpose, we take t 0 , T < t 0 < ∞ and we solve the system (1.7) in X(I) be the solution thereby obtained. The existence of v ′ t 0 follows from Proposition 2.1 with V = g(|u| 2 ) and f = G 1 − R 1 . We want to take the limit of (v ′ t 0 , B ′ t 0 ) as t 0 → ∞ and for that purpose we need estimates of (v ′ t 0 , B ′ t 0 ) in X(I) that are uniform in t 0 . Omitting the subscript t 0 for brevity we define where J = [t, ∞) ∩ I and we set out to estimate the various N ′ i . We also define the auxiliary quantities namely with the inner norm taken in L r (IR 3 ) and the outer norm taken in L q (J). Furthermore we shall use a shorthand notation for two important cases, namely We first estimate N ′ 0 , defined by (2.33).
by Lemma 2.3, part (1) and Lemma 2.2, by Lemma 2.3, part (1) and Lemma 2.2 again. Collecting the previous estimates yields which is of the form where (o(1); ·, · · · , ·) denotes a quantity depending on the variables indicated and tending to zero as T → ∞ when those variables are fixed. We next estimate the Strichartz norms of v ′ , namely N ′ 1 defined by (2.34). By Lemma 2.1, in addition to the contribution of G 1 − R 1 estimated above, we need to by Sobolev inequalities and Lemma 2.2, by Lemma 2.3, part (2), by Lemma 2.3, part (1) and Lemma 2.2. The term g(u a v)v ′ need not be considered because it is controlled by the previous ones.
Collecting the previous estimates yields which is of the form (2.45) We now turn to the estimates of B ′ . We first estimate B ′ in L 4 (J, L 4 ), namely we estimate N ′ 2 defined by (2.35), by the use of (1.5) (2.8). For that purpose we estimate G 2 in L 4/3 (J, L 4/3 ). The linear terms in v are estimated by The linear term in B is estimated by The quadratic terms in v 2 are estimated by The quadratic terms in Bv need not be considered because The cubic term B|v| 2 is estimated by is an admissible pair and that the middle norm is controlled by N 1 .
Collecting the previous estimates yields which is of the form We next complete the estimates of B ′ by estimating ∇B ′ and ∂ t B ′ in L 4 (J, L 4 ), namely we estimate N ′ 6 defined by (2.39), through the use of (1.5) (2.9). For that purpose we estimate ∇G 2 in L 4/3 (J, L 4/3 ). Now The estimate of ∇G 2 in L 4/3 (J, L 4/3 ) proceeds exactly as that of G 2 in the same space, with one additional gradient acting on each factor in each term, except for two facts. First because of the symmetry of the quadratic form P Im (v 1 ∇ A v 2 ), we can always ensure that no terms occur with two derivatives on v or u a . Second, the quadratic terms coming from vBu a have to be estimated explicitly because they are no longer estimated by polarisation. When hitting v, and additional gradient produces a replacement of N 0 by N 1/2 and of N 1 by N 4 in the estimates. When hitting B, it produces a replacement of N 2 by N 6 . When hitting u a or A a , it only requires higher regularity of these functions, but does not change the form of the estimates. With those remarks available, only the terms from ∇(vBu a ) and from B∇|v| 2 need new estimates.
The linear terms in v are estimated by The linear terms in B are estimated by The quadratic terms in v 2 are estimated by The quadratic terms in Bv are estimated by The cubic terms from B|v| 2 are estimated by Collecting the previous estimates yields which is of the form We now come back to the estimates of v ′ and we first estimate ∂ t v ′ in L 2 , namely we estimate N ′ 3 defined by (2.36) by using (2.3). Here however we encounter a technical difficulty due to the fact that B a priori does not satisfy the assumption ∇B ∈ L 1 loc (I, L ∞ ) needed in Proposition 2.1, part (2) in order to derive (2.3). We circumvent that difficulty by first regularizing B, introducing the associated solution v ′ which then satisfies (2.3), deriving the N ′ 3 estimate for the auxiliary solution, and removing the regularization by a limiting procedure, which preserves the estimate.
Here in order not to burden the proof with technicalities, we provide only the derivation of the estimates from (2.3) and we refer to the proof of Proposition 3.2, part (1) in [7] for the technical details. From (2.3) (2.4) with V = g(|u| 2 ) and f = G 1 − R 1 , we obtain We first estimate the terms containing v ′ , starting with i by Lemma 2.2.
We next estimate ∂ t G 1 . The estimates are similar to those performed when estimating v ′ in L 2 , with an additional time derivative acting on each factor in each term. This has the effect of requiring more regularity on (A a , u a ) when that derivative hits (A a , u a ), without changing the form the estimate, and of replacing one factor N 2 by N 6 when that derivative hits B and one factor N 0 by N 3 when that derivative hits v. Thus we obtain We finally estimate ∂ t v ′ (t 0 ) 2 and for that purpose we need pointwise (in time) estimates of R 1 and of B. Now from (2.21) it follows that while from (2.27) (2.31) and therefore We then estimate by Lemma 2.3 for the terms containing g and the definitions.
Collecting the previous estimates and in particular (2.52) (2.53) taken at t 0 ≥ t, we obtain which is of the form As a consequence, We next estimate ∆v ′ (t) 2 , namely N ′ 5 defined by (2.38). From Taking ε = 1/3 yields and therefore by (2.38) (2.57) which is of the form We finally estimate the Strichartz norms of ∇v ′ . For that purpose, by Lemma 2.1, we have to estimate the following quantity in the sum of spaces of the type L q (J, L r ) for admissible pairs (q, r) : The estimates are similar to those performed when estimating v ′ 2 and the Strichartz norms of v ′ (see the proof of (2.42) (2.44)), with an additional gradient acting on each factor in each term, thereby producing the replacement of N 0 by N 1/2 , of N ′ 0 by N ′ 1/2 , of N ′ 1/2 by N ′ 5 and of N 2 by N 6 at suitable places. More precisely, the terms containing v ′ are estimated by where we have used again Lemmas 2.2 and 2.3 in the estimates of the terms containing g.
The terms from ∇G 1 are estimated by ∇B · ∇ Aa u a + ≤ N 6 C c 4 + c a t −1 h(t) , B · ∇∇ Aa u a + ≤ C c N 2 1 + a t −1 h(t) , Collecting the previous estimates yields so that it remains only to estimate N ′ 3 and N ′ 4 . Substituting the previous estimates into (2.54) (2.62) yields which ensure the required estimate of N ′ 3 , N ′ 4 provided T is sufficiently large so that which we assume from now on. Note that the terms responsible for that large T condition are the terms ∂ t B · ∇v ′ from (2.51) and A · ∇ 2 v ′ from (2.61). No such condition was required at this stage in the simpler case of the (WS) 3 system [8].
The estimates obtained for the N ′ i are obviously uniform in t 0 . We now take the limit t 0 → ∞ of (v ′ t 0 , B ′ t 0 ), restoring the subscript t 0 for that part of the argument. Let T < t 0 < t 1 < ∞ and let (v ′ t 0 , B ′ t 0 ) and (v ′ t 1 , B ′ t 1 ) be the corresponding solutions of (1.7). From the L 2 norm conservation of the difference where K 0 is the RHS of (2.42), while from (1.7) (2.8) (2.9) (2.46) (2.49) and the initial conditions, it follows that where K 2 and K 6 are the RHS of (2.46) and (2.49) respectively. It follows from (2.67) (2.68) that there exists where we have omitted the dependence of the o(1) terms on N i and N ′ i . We know in addition that In order to ensure (2.69), we proceed as follows. We first choose N 0 and N 2 by which is possible under the smallness condition on c 3 , c 4 N 6 = C 6 c 4 (N 0 N 5 ) 1/2 + r 2 + 1 = C 6 2c 4 (N 0 (N 3 + r 1 + 1)) 1/2 + r 2 + 1 (2.73) and we impose o(1) ≤ 1 in (2.50) and (2.60) by taking T sufficiently large depending on the relevant N i . This ensures the N ′ 6 ≤ N 6 part of (2.69) together with the inequality N ′ 5 ≤ 4 (N ′ 3 + r 1 + 1) (2.74) which will ensure the N ′ 5 ≤ N 5 part of (2.69) as soon as the N ′ 3 ≤ N 3 part holds. Furthermore, under the choices and assumptions made so far, (2.74) implies is a positive increasing concave function of N ′ 3 for fixed T and N i . It follows therefrom that (2.76) will imply N ′ 3 ≤ N 3 provided we ensure that This is obtained by imposing N 3 = C 3 2 a + C 6 c 2 4 (N 0 (N 3 + r 1 + 1)) 1/2 + c 4 C 6 (r 2 + 1) +c N 2 + c 2 3 N 3 + 2c 2 N 0 + r 1 + 1 (2.78) which is possible under the smallness condition C 3 c 2 3 < 1, and by imposing that o(1) ≤ 1 in (2.76) by taking T sufficiently large depending on the N i .
It is then a simple matter to choose N 1 and N 4 in order to ensure the N ′ 1 ≤ N 1 and N ′ 4 ≤ N 4 parts of (2.69), since all the N ′ i in the RHS of (2.44) and (2.62) are now under control. It suffices to choose N 4 = C 4 a N 5 + (a + 2c 2 )(N 0 N 5 ) 1/2 + c(N 6 + N 2 ) + 2c 2 N 0 + r 1 + 1 (2.80) and to impose that o(1) ≤ 1 in (2.45) (2.63) by taking T sufficiently large depending on the N i (with the N ′ i in the o(1) terms being estimated by the N i ).
We now show that the map φ is a contraction on R for a suitable norm defined on X(I). (2.81) Here however, in contrast with the case of the (WS) 3 system where the corresponding map φ can be shown to be a contraction for the whole norm of X(I), we encounter a difficulty due to the derivative coupling in the covariant Laplacian. In fact if D is a differential operator of order m, a straightforward energy estimate of Dv ′ − 2 from (2.81) yields and requires therefore a control of v ′ + at order m + 1, so that one can hope to contract norms of v of degree at most one less than those occurring in the definition of X(I). Fortunately, because of the special algebraic properties of the equations, it turns out that the lowest two semi norms of X(I) for the differences, namely those corresponding to N 0 and N 2 , can be decoupled from the higher ones and can be contracted on the bounded sets of X(I). This follows from the fact that the symmetry of the quadratic form P Im(v 1 ∇ A v 2 ) has made it possible to avoid having a gradient acting on v − in the equation for B ′ − in (2.81). Thus we shall show that φ is a contraction for the pair of semi norms We The terms not containing v ′ + are estimated as in the proof of (2.42), namely We next estimate the terms containing v ′ + .
Collecting the previous estimates yields which is of the form The linear terms are estimated as in the proof of (2.46), namely The non linear terms are estimated in a slightly different way. The quadratic terms are estimated by The cubic terms are estimated by
Collecting the previous estimates yields
which is of the form It follows from (2.85) (2.88) that the map φ is a contraction for the pair of semi norms (N 0 , N 2 ) on the set R under the smallness condition and for T sufficiently large. Since the set R is closed for the norm defined by the pair (N 0 , N 2 ), it follows therefrom that the system (1.4) has a solution in R. This proves the existence part of Proposition 2.2. The uniqueness part follows from (2.85) (2.88) again with N ′ i− = N i− . We remark at this point that the constants C 0 , C 2 appearing in (2.89) can be taken to be the same as in (2.72) so that the two smallness conditions actually coincide. In fact those constants are determined by the linear terms in the estimates, and those terms are the same in both cases. There may occur additional, different constants coming from the non linear terms. They have been omitted in (2.84) (2.87).
It remains to prove the last statement of Proposition 2.2 and for that purpose we need to estimate the energy norm of B ′ . From (1.7) (2.10) it follows that for all t ∈ I where G 2 is defined by (1.5). We estimate the various terms of G 2 successively. The linear terms in v are estimated by The linear term in B is estimated by The quadratic terms in v 2 are estimated by The quadratic terms in Bv again need not be considered. The cubic term B|v| 2 is estimated by Collecting the previous estimates and using (2.23), we obtain which proves that the solution of (1.4) constructed previously satisfies (2.24).
⊓ ⊔
Remark 2.2. The only smallness condition on u is the condition (2.72), coming from N 0 and from its coupling with N 2 . The subsequent condition C 3 c 2 3 < 1 needed for the choice of N 3 comes in fact from exactly the same estimate as the c 2 3 contribution to N ′ 0 , so that the latter condition is actually the c 4 = 0 special case of (2.72) and is therefore weaker than (2.72). That fact is hidden by the use of overall constants C 0 and C 3 in the estimates of N ′ 0 and N ′ 3 .
Remainder estimates and completion of the proof
In this section, we first prove that the choice of asymptotic functions (u a , A a ) made in the introduction satisfies the assumptions of Proposition 2.2 for the choice of h made in Proposition 1.1, under suitable assumptions on the asymptotic state (u + , A + ,Ȧ + ). We then combine those results with Proposition 2.2 to complete the proof of Proposition 1.1.
We first supplement the definition of (u a , A a ) with some additional properties of a general character. In addition to the representation (1.13) (1.14) of A 1 , we need a representation of ∂ t A 1 . From (1.12) it follows that so that upon substitution of (1.8) we obtain On the other hand, from (1.13) We shall need the operator The asymptotic form A a for A has been chosen in order to make R 2 small. In fact R 2 can be rewritten as and A a has been chosen in such a way that □A a = P (x/t)|u a | 2 (3.7) so that R 2 = P t −1 Re u a Ju a + A a |u a | 2 .
Proof. We estimate which implies the first estimate of (3.13) by integration, which implies (3.14) by integration.
In order to prove the second estimate of (3.13), we note that the quadratic form and therefore ≤ C t −7/4 c (c 1 (1 + ℓn t) + ac) + c 2 (a + 1) from which the second estimate of (3.13) follows by integration.
⊓ ⊔ We now turn to R 1 . We first skim R 1 of some harmless terms. Expanding the covariant Laplacian and using again J, we rewrite R 1 as In the same way as for R 2 , we can show that R 1,2 satisfies the assumptions needed for Proposition 2.2 with the choice of h required for Proposition 1.1 under general assumptions on (u a , A a ) not making use of their special form.
Lemma 3.2. Let u a , A a and A 0 satisfy the estimates
⊓ ⊔
We now turn to R 1,1 . We shall need the commutation relations (3.27) The choice (1.15) of ϕ has been tailored to cancel the two long range terms in (3.27), so that We now have to prove that the previous choice of (u a , A a ) satisfies the remaining .20)).
The contribution of A 0 to A a and to R 1,2 will be taken care of by the following general estimates of solutions of the wave equation. Lemma 3.3. Let A 0 be defined by (1.11) and let k ≥ 0 be an integer. Let A + anḋ A + satisfy the conditions (3.29) Then A 0 satisfies the estimates
(3.30)
A proof can be found in [20]. As mentioned in Remark 1.2, the assumptions A + ∈ L 3 andȦ + ∈ L 3/2 serve to exclude constants in A + andȦ + and linear terms in x in A + , but are otherwise controlled by the W k 1 assumption through Sobolev inequalities.
We next derive some preliminary estimates of A 1 and A 1 .
Lemma 3.4. Let k ≥ 0 be an integer. Then the following estimates hold.
⊓ ⊔ As an immediate corollary, we obtain the following estimates of A 1 and ∂ t A 1 .
Corollary 3.1. The following estimates hold.
Proof. The result follows from (1.13) (3.2) (3.4) and from (3.33) (3.34). ⊓ ⊔ We next derive the remaining estimates of u a and of R 1,1 . The following proposition is slightly stronger than needed.
We expand the derivatives acting on exp(−iϕ)w + by the Leibnitz rule and we estimate the expressions thereby obtained by the Hölder inequality. For that purpose we need some control of ϕ. From Lemma 2.3 it follows easily that for w + ∈ H 3 , ∇g(|w + | 2 ) ∈ H 4 and in particular ∇ k g(|w + | 2 ) ∈ L ∞ for 0 ≤ k ≤ 3. Together with Lemma 3.4, this provides an estimate of ∂ j t ∇ k ϕ r for j = 0, 1, for k = 1, 2 and r = ∞ and for k = 3 and r = 6. With that information available, we apply the Hölder inequality according to the following rules : (i) all the explicit powers of x are attached to w + . In addition, whenever there appears a factor ∂ t ϕ (with no space derivative), one power x is extracted from the A 1 part of ∂ t ϕ and attached to w + (since A 1 belongs to L ∞ but a priori ∂ t ϕ does not).
(ii) The x amputated contribution of ∂ t ϕ generated by rule (i) and all the factors ∂ j t ∇ k ϕ with k = 1, 2 are estimated in L ∞ . The factors ∇ 3 ϕ are estimated in L 6 (in fact in H 1 ). Such factors occur only from the t −1 x s ∇∆ terms in the proof of (3.44).
(iii) The previous rules generate norms of the type x s ∇ k w + r for w + . Those norms are estimated by H 1 norms of the same quantities for 2 < r ≤ 6 and by H 2 norms for 6 < r ≤ ∞.
(iv) The time dependence of the various terms follows from the explicit t dependence of the operators Z of the previous list, together with the fact that ∂ j t ∇ k ϕ r generates a factor t −1 for j = 1 and a factor ℓn t for j = 0.
With the previous rules available, the proof reduces to an elementary book keeping exercise, which will be omitted. We simply remark that the dominant terms as regards w + have x 3 ∇, x 2 ∇ 2 and x∇ 3 which are exactly controlled by the assumption w + ∈ H 1,3 ∩ H 3,1 , equivalent to the assumption u + ∈ H 3,1 ∩ H 1,3 . As regards the time dependence, the dominant terms come from x s ∇ϕ in the proof of (3.43) thereby generating a factor ℓn t, and from x s ∆ exp(−iϕ) in the proof of (3.44), generating x s |∇ϕ| 2 and therefore a factor (ℓn t) 2 . Finally (3.40) is the special case j = k = 0, Z = 1, r = 3 of (3.45), while (3.41) follows from the estimate ∇u a (t) 4 ≤ xw + 4 + t −1 ( ∇w + 4 + ∇ϕ ∞ w + 4 ) t −3/4 . Since the estimates are used only for t ≥ T , one can replace T 0 by T in that expression, and the last term can be made arbitrarily small by taking T sufficiently large, so that the smallness condition of c 4 reduces to the smallness of xw + 4 . ⊓ ⊔ Remark 3.1. The regularity assumptions on u + or w + could be somewhat weakened. The strongest assumptions come from the ∆w + term in R 1,1 and from the x 3 , x∇ 2 and x 2 ∇ operators Z in the estimate of ∂ t ∇u a . On the one hand the ∆w + term in R 1,1 could be eliminated by the choice w(t) = U(1/t) * w + at the expense of generating either a more complicated and less explicit ϕ or additional terms in R 2 . On the other hand, we have obtained on L 6 estimate of ∂ t ∇u a whereas an L 4 estimate was sufficient. Only a minor weakening of the assumptions on u + could be achieved along those lines, and we shall not press that point any further. | 2014-10-01T00:00:00.000Z | 2004-07-01T00:00:00.000 | {
"year": 2004,
"sha1": "73c7043f15c9edf6e9fee05999bf2b841537959f",
"oa_license": null,
"oa_url": "http://www.ems-ph.org/journals/show_pdf.php?iss=2&issn=0034-5318&rank=7&vol=42",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "88e3f73232aee8471cba8de5c6748bae4e23a7f6",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
234495500 | pes2o/s2orc | v3-fos-license | Persistence with mirabegron or antimuscarinic treatment for overactive bladder syndrome: Findings from the PERSPECTIVE registry study
Abstract Objectives This analysis from the PERSPECTIVE (a Prospective, Non‐interventional Registry Study of Patients Initiating a Course of Drug Therapy for Overactive Bladder) study evaluated treatment persistence with mirabegron or antimuscarinics over a 12‐month period. Methods Participants were adults diagnosed with overactive bladder (OAB) by their health care provider (HCP), who were initiating mirabegron or antimuscarinic treatment. The HCP made all treatment decisions, and patients were followed for 12 months with no mandatory scheduled visits. Information requests were sent to patients at baseline and months 1, 3, 6, and 12. Patients were nonpersistent if they switched, discontinued, or added OAB medications/therapies to their initial treatment. Reasons for discontinuation and switching patterns were investigated. Results Overall, 1514 patients were included (613 mirabegron and 901 antimuscarinic initiators). Persistence rates decreased steadily over time in both groups. A low proportion of patients added or switched medication at each time point. Unadjusted Kaplan‐Meier analysis showed similar persistence rates for both groups. When the data were adjusted for patient characteristics (age, sex, and OAB treatment status), mirabegron initiators had higher persistence rates. No significant differences were noted in unadjusted median time to end of persistence. However, end of treatment persistence by any cause was longer with mirabegron (median: 9.5 vs 6.7 months for antimuscarinics). HCPs stated that the most common reasons for nonpersistence were no symptomatic improvement and side effect aversion. Conclusions Treatment persistence was longer for mirabegron compared with antimuscarinic initiators after controlling for patient characteristics. End of treatment persistence by any cause was also longer with mirabegron.
| INTRODUCTION
Overactive bladder (OAB) is a chronic symptom syndrome affecting more than one in ten North American adults, although the condition is particularly common among the elderly. [1][2][3][4] Serious ramifications on the quality of life (QoL) and daily living of affected patients have been noted, 5,6 and the treatment of OAB is estimated to cost over $100 billion in the US each year. 7 Pharmacotherapy approaches are advocated for OAB symptom management if behavioral and conservative measures do not produce adequate improvement. 8,9 Owing to their established efficacy and propensity to improve QoL, oral antimuscarinics and the β 3adrenoreceptor agonist, mirabegron, are the recommended principal pharmacologic interventions in North America for patients with OAB symptoms. [9][10][11][12] Although both types of agent are approved as first-line therapeutic agents, 9 patients typically receive antimuscarinics first in clinical practice, and, as such, patients who receive mirabegron tend to be more treatment experienced than patients treated with antimuscarinics. [13][14][15] However, the clinical utility of antimuscarinic agents is limited by the occurrence of certain adverse events (AEs), including dry mouth, constipation, and urinary retention. 16 Owing to its different mechanism of action, mirabegron is associated with a lower frequency of these AEs and is thought to have a more favorable safety profile than antimuscarinics. [16][17][18][19] AEs that have been commonly reported in mirabegron trials include hypertension, nasopharyngitis, and headache, although similar frequencies of these events were also observed in the placebo groups. [20][21][22] Clinical investigations have shown that patients who continue to persist with OAB medication typically experience significantly improved urinary symptoms and QoL. 12,23,24 However, despite the objective and patient-reported benefits associated with pharmacological agents as treatment modalities for OAB symptoms, persistence with these medications consistently decreases over time. 14,15,25 Furthermore, a retrospective pharmacy claims analysis found that oral antimuscarinics specifically exhibit relatively poor adherence and persistence compared with medications used for other common chronic conditions. 26 Patient-reported reasons for discontinuing antimuscarinic use include the medication not working as expected or because of the occurrence of side effects. 27 In fact, a UK prescription data analysis found that between 65% and 86% of patients discontinue therapy within 1 year depending on the antimuscarinic prescribed. 28 However, real-world data collected from retrospective administrative claims databases and observational studies suggest that treatment persistence with mirabegron may be greater than for antimuscarinics in patients with OAB. 15
| Statistical analyses
Statistical data were analyzed with SAS (version 9. Adding or switching medication was categorized according to the initial OAB treatment and the addition or switch to a second-line OAB medication type. This included adding/switching to mirabegron, adding/switching to antimuscarinics, discontinuation of initial treatment, or switching to onabotulinumtoxin A or sacral neuromodulation.
Rates of switching, adding on, or discontinuing were calculated as:

3 | RESULTS
| Study population
The study population data have been reported previously. 31 concomitant treatment at baseline with α 1 -adrenoreceptor antagonists and 5α-reductase inhibitors, respectively (Table S1).
| Persistence
The unadjusted persistence rates decreased steadily in both the mirabegron and antimuscarinic groups (Table 1) (Table S2).
The unadjusted Kaplan-Meier analysis showed that patients initiating mirabegron or antimuscarinics had similar rates of persistence during follow-up ( Figure 1A). However, when the data were adjusted for baseline age, sex, and prior OAB medication use, the persistence curves for initial mirabegron or antimuscarinic treatment separated by month 2 of the study, with mirabegron initiators showing higher rates of persistence ( Figure 1B).
Although low numbers of patients were included, no statistically significant differences were noted between the mirabegron and antimuscarinic initiators in terms of unadjusted median time to end of persistence for all patients and for each of the subgroup analyses ( Analysis of the end of treatment persistence by any cause data showed that clear separation was observed in favor of mirabegron (median time to end of treatment persistence: 9.5 months vs 6.7 months for antimuscarinics; Figure 2A). This observation may be due to antimuscarinic initiators switching treatment (median time: 4.1 months vs 8.5 months for mirabegron; Figure 2B) or adding a treatment (median time: 6.5 months vs 12 months for mirabegron, Figure 2C) sooner than mirabegron initiators. In contrast, mirabegron initiators who stopped all treatment during follow-up did so sooner than antimuscarinic initiators (median time: 6.1 months vs 9.1 months, respectively; Figure 2D).
Data from the adjusted Cox proportional hazard model also showed that initial treatment had no effect on time to end of persistence ( Figure 3)
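The adjusted analyses described above (Kaplan-Meier persistence curves and a Cox proportional hazards model controlling for baseline age, sex, and prior OAB treatment) can be illustrated with a minimal sketch. The sketch below uses the Python lifelines package and hypothetical column names (time_to_nonpersistence, nonpersistent, age, sex, prior_oab_treatment, initial_mirabegron); it is not the study's actual analysis code, which used SAS.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical per-patient data: follow-up time (months), non-persistence event flag,
# initial-treatment indicator, and the baseline covariates used for adjustment.
df = pd.read_csv("perspective_persistence.csv")  # hypothetical file

# Unadjusted Kaplan-Meier persistence curves by initial treatment.
kmf = KaplanMeierFitter()
for name, grp in df.groupby("initial_mirabegron"):
    kmf.fit(grp["time_to_nonpersistence"], event_observed=grp["nonpersistent"],
            label=f"mirabegron={name}")
    print(kmf.median_survival_time_)  # median time to end of persistence

# Cox proportional hazards model adjusting for age, sex, and prior OAB treatment.
cph = CoxPHFitter()
cph.fit(df[["time_to_nonpersistence", "nonpersistent", "initial_mirabegron",
            "age", "sex", "prior_oab_treatment"]],
        duration_col="time_to_nonpersistence", event_col="nonpersistent")
cph.print_summary()  # hazard ratio for initial_mirabegron gives the adjusted treatment effect
```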
| DISCUSSION
In this secondary analysis from the PERSPECTIVE registry, patients with OAB initiating a new course of mirabegron persisted with treatment longer than those initiating antimuscarinics after controlling for differences in patient characteristics between groups. End of treatment persistence by any cause was also longer with mirabegron than with antimuscarinic treatment. It is important to note, however, that no statistical analyses were conducted on these specific results. Furthermore, no additional notable differences in persistence were observed between the two groups.
In agreement with the specific persistence findings mentioned above, previous real-world analyses conducted in Canada, Japan, Spain, the US, and the UK typically showed that patients treated with mirabegron remained on treatment longer than those treated with antimuscarinics. [13][14][15][32][33][34] In these previous studies, 12-month persistence rates of between 14% and 38% were noted for mirabegron, and rates of between 3% and 25% were reported with antimuscarinics. Conversely, the opposite finding was noted in a Japanese urology clinic study, which reported 12-month persistence rates of 12.2% and 20.1% with mirabegron and solifenacin, respectively. 35 It is important to note, however, that the Japanese urology clinic study was a prospective, randomized trial, whereas the other studies mentioned above were retrospective database analyses, and these differences in study design may partially explain the contradictory findings.

This investigation showed that treatment persistence was longer with mirabegron when the Kaplan-Meier results were adjusted for baseline age, sex, and prior OAB medication use. However, no difference was noted in the unadjusted results. These findings are potentially due to the fact that mirabegron initiators were more likely to be male and treatment-experienced than their antimuscarinic counterparts. In agreement with these findings, previous persistence studies typically showed that higher proportions of males and treatment-experienced patients were noted in mirabegron compared with antimuscarinic cohorts. [13][14][15]25 In this study, the age of the patients did not appear to have an effect on the treatment that was initiated.
Furthermore, the fact that a greater proportion of mirabegron initiators were treatment experienced may partially explain why end of treatment persistence by any cause was longer in this group than in the antimuscarinic initiators group. The patients who initiated mirabegron may have therefore been more familiar with the efficacy of OAB medication and the potential adverse effects associated with treatment.
No statistically significant differences in the persistence results were observed in PERSPECTIVE. This finding was potentially due to the study design as there was no requirement for patients to attend scheduled visits during the follow-up period. The lack of statistically significant differences in persistence may explain why similar patientreported OAB symptom bother and HRQoL results were observed for both treatments in the present study. 31 In our investigation, older, female, and treatment-naïve patients were more likely to persist with treatment for a longer period of time.
The finding that older patients persisted with OAB therapy longer than younger patients has been previously reported in several persistence studies involving mirabegron and antimuscarinics. 13,15,25 However, a further study found no relationship between patient age and OAB treatment persistence, 14 and a UK-based hospital prescription analysis found that younger age was associated with improved persistence with mirabegron therapy. 36 Furthermore, previous investigations support the results of this study in so much as female patients were found to persist with OAB medication for a longer period of time than their male counterparts. 13,37 Conversely, a UK clinical practice study found no difference in persistence rates between the sexes, 14 and a Japanese real-world analysis study found that males typically persisted with mirabegron and antimuscarinic therapy longer than females. 15 In contrast to the present study, previous persistence studies have typically found that treatment-experienced patients persist with mirabegron and antimuscarinic therapy longer than treatment-naïve patients. [13][14][15]25 The discrepancies between our findings and the results of previous investigations may be partially due to the design of the studies. The previous studies were database analyses, while the present study was a clinical practice investigation. In support of this hypothesis, a retrospective, real-life study found that treatment-naïve patients persisted with mirabegron treatment longer than patients who had received previous anticholinergic therapy. 38 The switching analyses showed that a low percentage of patients 41 Furthermore, a Japanese single-center retrospective study found that the biggest reasons for discontinuing mirabegron treatment were unmet treatment expectations, the occurrence of AEs, and symptom improvement. 42 In our study, the proportion of patients who discontinued following no improvement in their symptoms was slightly higher in the mirabegron group, whereas the proportion of patients who discontinued due to side effects was higher in the antimuscarinics group. The latter finding is potentially due to the higher incidence of anticholinergic side effects that are noted with antimuscarinics compared with mirabegron. 16 Despite the promising results obtained in the present study, long-term treatment of a chronic disease such as OAB represents a challenge for patients in terms of adhering to and persisting with prescribed medication. The decline in patient persistence in both treatment groups over time emphasizes the importance of using strategies that improve patient treatment experiences. For example, patientsupport programs have been shown to improve persistence with and/or adherence to medications for the treatment of other chronic diseases. [43][44][45] Indeed, an initial study found that patients with OAB who followed a patient-support plan found that it was informative and feasible to implement and that they were satisfied with several aspects of the plan. 46 The widespread use of support plans could therefore prove beneficial in this setting to improve treatment persistence.
Owing to the varied population involved, we believe that this study more accurately reflects the real-world situation compared with a clinical trial population, where specific inclusion and exclusion criteria have to be satisfied for enrollment. In addition, this investigation was a prospective study and therefore does not suffer from the potential data deficiencies that are inherent with retrospective database investigations. However, this study does have some limitations.
As specific scheduled follow-up visits were not included as part of the study protocol, the quantity of persistence data available throughout the study varied according to the follow-up time point. Furthermore, data on the reasons for discontinuing, switching, or adding on a treatment were only available for a minority of patients. Additionally, no data were captured on whether the patients actually took the medication. Therefore, patients could have been identified as persisting with treatment when they were not taking the medication as prescribed by their HCP. Lastly, if a patient diary had been used during this study, it may have increased patient awareness about the current status of their condition and may have therefore led to increased persistence with the study medication.
In conclusion, the present study is the first observational study conducted in North America to investigate patient persistence with OAB medication following treatment with either mirabegron or antimuscarinics. Specific results from this study showed that patients who received mirabegron persisted with treatment for a longer period of time than the patients who initiated antimuscarinics, although no statistical differences were observed. Regardless of the treatment used, approximately two-thirds of patients were still persisting with their initial treatment after 12 months. These high rates of persistence could lead to symptomatic improvements in patients with OAB.
ACKNOWLEDGMENTS
The authors would like to thank the PERSPECTIVE study investigators and all patients who took part in the study. This study was funded by | 2021-05-15T06:16:57.134Z | 2021-05-14T00:00:00.000 | {
"year": 2021,
"sha1": "cb7e2b4f4fb136ce3dcf1fffcad751a23b11bd55",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/luts.12382",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "629a7f5aec7b133d3da99a7f572455cf1f8bcf20",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248715820 | pes2o/s2orc | v3-fos-license | Complex formation of potassium salt of highly fatty acid with hemagglutinin protein in influenza virus via exothermic interaction
In our previous study, we found that highly fatty acid salts, which are skin-friendly soaps, had a high ability to inactivate the influenza virus. In order to elucidate the mechanism of inactivation of the influenza virus, we investigated the interactions and complex formation of potassium tetradecanoate (C14K), a highly fatty acid salt, with a virus particle (VP) derived from avian influenza virus by using isothermal titration calorimetry (ITC) and small-angle X-ray scattering (SAXS). ITC showed that C14K interacted attractively with the hemagglutinin protein (HA) that exists in the envelope of the VP. SAXS analyses revealed that C14K formed a highly ordered complex with HA through this attractive interaction. Since HA is responsible for cell-entry events, inactivation of influenza viruses by highly fatty acid salts is likely derived from HA inhibition through this complex formation. Time-resolved SAXS measurements showed that the complex formation was completed within 40 s after mixing aqueous solutions of C14K and VP. This result strongly suggests that hand-washing with highly fatty acid salts is an effective measure to prevent infection with influenza virus without causing rough hands.
Introduction
Seasonal spread of influenza viruses poses a serious threat to human health [1,2]. In addition to their health impact, influenza pandemics impose a large economic burden [3]. Epizootic influenza viruses, such as avian influenza viruses, can be transmitted from animals to humans, resulting in significant morbidity in infected humans [4][5][6][7].
Influenza virus infection can be prevented by vaccination and can be treated post-infection with anti-influenza drugs. However, owing to antigenic changes and the development of drug resistance, these measures can become ineffective. Therefore, hygiene measures such as handwashing are critical for influenza prevention. Although handwashing with ethanol or surfactants is effective in preventing infection by influenza virus and many other pathogens [7], frequent handwashing may cause serious damage to the skin [8,9]. Skin damage increases the risks of secondary infections by staphylococci and Gram-negative bacteria [10,11]. Hence, optimal surfactants for handwashing, which should cause no or minimal skin damage, are strongly desired.
Fatty acid salts, surfactants generated from natural fats, have low cytotoxicity and do not cause skin damage [12]. In addition, in our previous reports, we have found that soaps consisting of potassium salts of fatty acids can inactivate influenza viruses more efficiently than other surfactants, such as sodium dodecyl sulfate [13][14][15]. Therefore, fatty acid salts such as potassium oleate should be suitable soaps that inactivate influenza viruses without causing skin damage. The efficient inactivation of influenza viruses may be attributable to interactions between fatty acid salts and virions [13]. However, the detailed mechanism of influenza virus inactivation by fatty acid salts has not been clarified. Thus, here we studied the interactions and molecular assembly in mixtures of influenza viruses with fatty acid salts to elucidate the mechanism of influenza virus inactivation by fatty acid salts.
Materials
Potassium tetradecanoate (C14K) as a fatty acid salt and potassium oleate (C18=1K) as a control fatty acid salt were purchased from Tokyo Chemical Industry Co., Ltd. (Tokyo, Japan) and used as obtained. Avian influenza virus A/swan/Shimane/499/83 (H5N3), provided by Dr. K. Otsuki (Tottori University, Japan) and propagated in embryonated chicken eggs, was used as the influenza virus. The H5N3 virus, in which the RNA had been inactivated by UV irradiation before use, was used as the virus particle (VP). The VP was dispersed in phosphate buffer solution (PBS) at a protein concentration of 15 μg/mL. An influenza hemagglutinin (HA) vaccine, containing the HA proteins of A/California/7/2009 (H1N1) pdm09, A/Texas/50/2012 (H3N2), and B/Massachusetts/2/2012, was purchased from Biken Co. Ltd. (Osaka, Japan). The vaccine was a split vaccine made from purified virus particles and contained more than 90 μg/mL of the HA proteins from the three viruses.
Isothermal titration calorimetry (ITC)
ITC measurements were performed at 25 • C using a VP-ITC MicroCal microcalorimeter (Northampton, MA). Aqueous surfactant solutions (1.75 × 10 − 1 mmol/L) were maintained in the ITC cell at 25 • C with stirring at 300 rpm. Aliquots of VP or HA solutions were injected into the ITC cell using a micro syringe. The volume of each injection was 5 μL.
The duration of each injection was 14 s, and there was an interval of 250 s to allow for equilibration. The heat of dilution was subtracted even though its contribution to the total heat was negligibly small. The enthalpy change (ΔH) due to the interaction was determined from the titration data on the basis of the amount of C14K or C18=1K.
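As a rough illustration of how ΔH per mole of surfactant can be obtained from such titration data, the minimal Python sketch below takes the integrated heat of each injection, subtracts a dilution blank, and normalizes by the moles of surfactant in the cell. All numerical values except the surfactant concentration are hypothetical (including the assumed cell volume), the per-mole normalization is an interpretation of "on the basis of the amount of C14K or C18=1K", and the actual analysis was performed with the instrument software.

```python
import numpy as np

# Hypothetical integrated heats (microjoules) for each 5-uL injection of VP/HA solution,
# and matching blank injections into buffer (heat of dilution).
heat_injection = np.array([-52.0, -48.5, -45.1, -40.2, -33.8])   # uJ, exothermic < 0
heat_dilution  = np.array([ -1.2,  -1.1,  -1.0,  -1.0,  -0.9])   # uJ

cell_volume_L   = 1.4e-3          # assumed ITC cell volume (~1.4 mL)
c14k_conc_mol_L = 1.75e-4         # 1.75 x 10^-1 mmol/L surfactant in the cell
n_c14k = cell_volume_L * c14k_conc_mol_L   # moles of C14K in the cell

# Enthalpy change per mole of surfactant for each injection (J/mol).
delta_h = (heat_injection - heat_dilution) * 1e-6 / n_c14k
print(delta_h)   # negative values indicate an exothermic (attractive) interaction
```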
Small-angle X-ray scattering (SAXS)
SAXS measurements were carried out at the BL40B2 and BL03XU beamlines of SPring-8, Japan. A two-dimensional photon-counting detector (Pilatus 2M, Dectris, Switzerland) was placed at a distance of 1 m from the sample position. The wavelength of the incident X-ray was adjusted to 0.10 nm. The exposure time for static SAXS measurements was kept at 300 s. For time-resolved SAXS measurements, a stopped-flow cell with quartz windows (USP-SFM-CD10, Unisoku, Japan) was used as the sample cell. Aqueous C14K and HA solutions were filled into separate syringes on the stopped-flow apparatus. The two solutions were mixed at a ratio of 1:1 by volume. The temperature in each syringe and the cell was maintained at 25°C. After stopped-flow mixing with a dead time of 4 msec, successive X-ray exposures were performed with an exposure time of 100 msec and an interval of 200 msec.
The obtained 2-dimensional SAXS images were converted to scattering intensity, I(q), vs the magnitude of scattering vector, q, defined as q = (4π/λ)sin(θ/2), where λ is the wavelength of incident X-ray (0.1 nm) and θ is the scattering angle. Fig. 1 shows change of the state of aqueous C14K mixed with VP in PBS. When phosphate-buffered saline (PBS), pH 7.4, was added to aqueous C14K solution, precipitates were immediately formed by salting out of C14K (VP(− ) in Fig. 1). The amount of C14K precipitates increases with elapsed time. By contrast, no precipitates were formed when PBS containing VPs was added to aqueous C14K (VP(+) in Fig. 1). Thus, the presence of VPs in PBS prevented salting out of C14K, suggesting a strong interaction between C14K and VPs. To study the interactions between C14K with VP, the enthalpy changes of mixing (ΔH) the these surfactants with VPs and hemagglutinin protein (HA) were measured by ITC, where HA is a membrane protein of VP responsible for recognition of cell receptors [16]. Fig. 2 (a) and (b) show isothermal titration of C14K-VP mixtures and enthalpy change of mixing (ΔH) obtained by ITC for C14K-VP, C18=1K-VP and C14K-HA mixtures as a function of volume ratio of VP (or HA) solution to surfactant solution, respectively. The binding isotherms were determined by injecting either VP or HA solution into the C14K or C18=1K solution. Mixtures of C14K or C18=1K and VP showed negative ΔH, indicating attractive interactions between potassium salts of fatty acids and VP. In our previous study, we reported that the C18=1K-VP mixtures showed the negative ΔH (exothermal change). Thus, a negative ΔH appears to be a universal consequence of mixing influenza viruses with potassium salts of fatty acids. On the contrary, our previous paper reported sodium dodecyl sulfate (SDS)-VP and sodium laureth sulfate (LES)-VP mixtures showed positive ΔH [13]. The positive ΔH indicates fusion of envelop membrane of VP [17,18]. Therefore, the interaction between fatty acid salts and VP can be considered to be different from SDS or LES and VP. The negative ΔH indicates the attractive interactions, such as electrostatic interactions or hydrogen bonding, between fatty acid salts and VP [19,20]. The attractive interaction of fatty acid salts with the VP is a factor of efficient inactivation of the influenza virus compared to other surfactants [13]. It is of great interest that which component in VP attractively interacts with C14K or C18=1K. Such attractive interactions do not occur between the fatty acid salts and the envelop membrane consisting of phospholipids. Consequently, the attractive interaction between fatty acid salts and VP should be attributed to the interaction with the spike proteins in the envelope. It has been well-known that there are two spike proteins, HA and neuraminidase (NA), that cause influenza infection. Among them, NA is inhibited by strong binding to a cationic compound having an amino group, which has the opposite charge to the anionic fatty acid salts [21][22][23]. Hence, it is considered that an attractive interaction does not occur between NA and the fatty acid salts. On the other hand, HA is known to have an attractive interaction with an anionic compound having a carboxy group [24]. Therefore, the exothermic interaction between fatty acid salts and VP is considered to act between HA and fatty acid salts. As shown in Fig. 3, the mixtures of C14K or C18=1K and HA showed intensively negative ΔH similar to mixtures of C14K of C18=1K and VPs. 
Therefore, the attractive interaction between fatty acid salts and VP was attributed to binding of fatty acid salts to HA in the VP envelope. Since the attractive interaction of fatty acid salts with VP is related to effective inactivation of the influenza virus [13], the attractive interaction between fatty acid salts and HA is considered to play an important role in the effective inactivation of the influenza virus. The amino group at the exposed N-terminus of HA is considered to interact attractively with the carboxy group [24]. Thus, inactivation of influenza viruses by potassium salts of fatty acids should be associated with HA inhibition.
Results and discussion
Binding of C14K to HA should result in the formation of a molecular assembly consisting of C14K and HA. To confirm formation of a molecular assembly in the C14K-VP mixture, we performed SAXS measurements. Fig. 3 shows how the SAXS profiles (I(q) vs. q) changes upon mixing aqueous C14K and VP. In the SAXS profile from C14K, any diffraction peaks are not observed. The broad peak observed in the SAXS profile of C14K is attributed to the form factor of C14K micelle [25]. By contrast, the SAXS profile of C14K-VP mixture shows diffraction peaks at 1.7 nm − 1 and 3.4 nm − 1 in addition to the form factor of C14K micelle. The relative q positions of the first and second order peaks are 1 : 2. Hence this diffraction pattern is attributed to the organized lamellar structure with 3.7 nm period. This ordered lamellar structure is an emergent property of C14K-VP mixture. The SAXS profile of the C14K-HA mixture also shows two diffraction peaks at the same q positions as those of the C14K-VP mixture. Consequently, the ordered lamellar structure is formed by cooperatively assembly of C14K and HA. The formation of ordered lamellar structure is also confirmed by transmission electron microscopy for C14K and HA (Fig. S1 in supporting information). Based on this result in conjunction with ITC analyses (Fig. 2), we conclude that fatty acid salts form complex with HA through electrostatic interaction. Hence, the fatty acid salts act not only as detergents but also as HA inhibitors via binding and cooperative assembly. The rate of formation of the ordered lamellar structure should correspond to the rate of HA inhibition by fatty acid salt. To confirm the growth rate of the ordered lamellar structure, we performed time-resolved SAXS by using stopped flow cell. Fig. 4 (a) and (b) show how the SAXS profile and the intensity of 1st order peak of ordered lamellar structure change with elapsed time after mixing C14K and VP, respectively. The first order peak of the ordered lamellar structure immediately appears and grows over time after mixing C14K and VP. The intensity of the first order peak steeply increases until 20 s after mixing and then the slope gradually decline. After 40 s from mixing C14K and VP, the intensity of diffraction peak is almost constant. Therefore, the complexation of C14K and VP is completed within 40 s. This behavior is in good agreement with the following single exponential function shown as a solid line in Fig. 4(b), as in the first-order reaction.
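Consistent with the variable definitions given in the next sentence and with first-order kinetics, the single-exponential function referred to here can be assumed to have the form

\[ I_t = I_\infty \left[ 1 - \exp\!\left( -t/\tau \right) \right], \]

stated as a plausible reconstruction rather than as the authors' exact expression.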
Here, I t is the scattering peak intensity at q = 1.7 nm −1 at time t (s), I ∞ is the average intensity of the peak after 50 s, and τ is the time constant.
The rate constant calculated from the time constant (τ = 10.5 ± 2.0 s) is 9.5 × 10 −2 s −1 , indicating rapid assembly of C14K and HA. Thus, C14K can rapidly inhibit influenza virus HA. To sum up, a complex of C14K and the HA existing in the envelope of the VP is formed immediately upon mixing C14K and VP, through the attractive interaction between the carboxy group and HA. Then, HA, which is an important component for binding to the cell surface, is removed from the envelope of the VP. Inevitably, the infectivity of the VP is significantly reduced. Therefore, highly fatty acid salts are effective agents for preventing influenza virus infection.
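A minimal sketch of how the time constant and rate constant quoted above can be extracted from the peak-intensity time series is given below. The data arrays are hypothetical, and scipy's non-linear least squares is assumed here rather than the authors' actual fitting software.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical time series of the first-order lamellar peak intensity at q = 1.7 nm^-1.
t = np.arange(0, 60, 0.3)                          # s, one point per SAXS frame
i_obs = 1.0 * (1 - np.exp(-t / 10.5)) + np.random.normal(0, 0.02, t.size)

def single_exponential(t, i_inf, tau):
    """First-order growth of the diffraction peak: I_t = I_inf * (1 - exp(-t/tau))."""
    return i_inf * (1 - np.exp(-t / tau))

(i_inf, tau), _ = curve_fit(single_exponential, t, i_obs, p0=(1.0, 5.0))
rate_constant = 1.0 / tau                          # ~9.5e-2 s^-1 for tau ~ 10.5 s
print(f"tau = {tau:.1f} s, k = {rate_constant:.2e} s^-1")
```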
Conclusion
Here, we found that potassium salts of fatty acids bind strongly to the HA of the influenza virus through exothermic interactions, such as electrostatic interactions. This attractive interaction between the fatty acid salt and HA results in rapid molecular assembly into an ordered lamellar structure and inhibition of HA function. The mechanism of HA inhibition by fatty acid salts is considered to be universal for enveloped viruses such as SARS-CoV-2. Because natural soaps consisting of fatty acid salts have low cytotoxicity and are not damaging to skin, handwashing with these products is an effective measure to prevent infectious diseases caused by enveloped viruses without adverse effects.
Funding
The funder (Shabondama Soap Co., Ltd.) provided support in the form of salaries for TK but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. This work was also supported by MEXT Promotion of Distinctive Joint Research Center Program (JPMXP0621467946).
Author contributions
T. K., T. S., and I. A designed research; T. K., M. S., Y. F., S. K., and I. A. performed research; T. S. contributed influenza virus samples; M. S, Y. F., S. K., and I. A. analyzed data; and T. K. and I. A. wrote the paper.
Declaration of competing interest
The anti-viral effect of soap is patented in Japan (Patent No. 5593572, An anti-viral and washing agent), Russia, Korea and China. A hand soap having potassium salt of fatty acid is sold by Shabondama Soap Co., Ltd. The funder (Shabondama Soap Co., Ltd.) provided support in the form of salaries for TK. This does not alter our adherence to Biochemistry and Biophysics Reports policies on sharing data and materials.
Data availability
No data was used for the research described in the article. | 2022-05-12T15:22:15.464Z | 2022-06-25T00:00:00.000 | {
"year": 2022,
"sha1": "d6bf7107c855a96706921b9d44a70aef9364cb35",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.bbrep.2022.101302",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "f22e51fb5eddaccca47ef35bc7173f20b660caad",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": []
} |
13489787 | pes2o/s2orc | v3-fos-license | New Records of Endophytic Paecilomyces inflatus and Bionectria ochroleuca from Chili Pepper Plants in Korea
Two new species of endophytic fungi were encountered during a diversity study of healthy tissues of chili pepper plants in Korea. The species were identified as Paecilomyces inflatus and Bionectria ochroleuca based on molecular and morphological analyses. Morphological descriptions of these endophytic isolates matched well with their molecular analysis. In the present study, detailed descriptions of internal transcribed spacer regions and morphological observations of these two fungi are presented.
Endophytes are microorganisms that reside within internal tissues of living plants without visibly harming the host plant [1]. Endophytic microorganisms have been found in all plant families [2], and represent many species in different climate regions of the world [3][4][5][6][7]. Attention to endophytes has increased in recent years because of their taxonomic diversity [8], multiple functions, including the potential for use as genetic vectors [9], and their promotion of host plant growth and fitness [10,11]. Furthermore, they are a source of secondary metabolites [7,12] and biological control agents [13]. Chili peppers, which belong to the genus Capsicum, are probably the most widely consumed spice in the world [14], and these cultivated crop plants may live in association with a variety of mycoflora.
The genus Paecilomyces was first introduced by Bainier [15], who described it as being closely related to Penicillium, but differing in the absence of green-colored colonies and in having short cylindrical phialides [16]. Paecilomyces inflatus, described by Onions and Barron [17], is the only monophialidic species of Paecilomyces that is commonly isolated from forest soil. However, most species of nectrioid fungi have been assigned to the genus Nectria, which includes about 1,000 names. Based on morphological and molecular studies, Rossman et al. [18] revised the concept of Nectriaceae and established the family Bionectriaceae, typified by Bionectria Speg. Bionectriaceous fungi are decomposers of plant debris, pathogens of plants and insects, and biological control agents [19]. Bionectria ochroleuca is characterized by pale yellow or white ascomata and two-celled, hyaline ascospores. The anamorph of B. ochroleuca is Gliocladium roseum, which is normally isolated from forest areas [20]. Interestingly, these two species were isolated from chili pepper as endophytes. In this study, we characterized P. inflatus and B. ochroleuca isolated from healthy, symptomless root tissues of chili pepper in Korea by molecular and morphological analysis.
MATERIALS AND METHODS
Isolation of endophytic fungi. Chili pepper plant (Capsicum annuum L.) tissues were collected from a field in Daejeon, which is in Chungnam Province in the central portion of the Republic of Korea, in 2009. Leaf, stem and root samples of plants were randomly excised and brought to the laboratory in separate sterile polyethylene bags, where they were processed for isolation within 5 hr of collection. Briefly, samples were washed in running tap water to remove dust and debris, dried in the air and then cut into 1 cm segments. For surface sterilization, the segments were soaked in 95% ethanol for 1 min, then in sodium hypochlorite (4% available chlorine) for 3 min, and 95% ethanol for 30 sec. The samples were subsequently washed in sterile distilled water three times and dried in a laminar air flow chamber. Next, ten segments per sample were placed horizontally on dichloran rose bengal chloramphenicol agar (DRBC; Difco, Detroit, MI, USA) and potato dextrose agar (PDA; Difco) supplemented with streptomycin sulfate to inhibit bacterial growth. Developing hyphal tips of emerged colonies were collected after incubation at 25 o C for 5, 10, and 25 days and sub-cultured on PDA for 8~10 days. Pure cultures of isolates were maintained in PDA slant tubes and 20% glycerol stock solution and deposited in the culture collection of the Chungnam National University Fungal Herbarium. In this study, molecular and morphological characteristics of two isolates, CNU081043 and CNU081055, were examined.
Genomic DNA extraction and PCR amplification.
Genomic DNA was extracted from mycelium using the method described by Deng et al. [21]. Amplification of the internal transcribed spacer (ITS) region was performed using the ITS5 and ITS4 primers, after which the PCR products were purified using a Wizard PCR prep kit (Promega, Madison, WI, USA). Purified double stranded PCR fragments were then directly sequenced with BigDye terminator cycle sequencing kits (Applied Biosystems, Forster City, CA, USA) according to the manufacturer's instructions. Gel electrophoresis and data collection were performed using an ABI prism 310 Genetic Analyzer (Applied Biosystems).
Sequence analysis.
The sequences were compared with those available in the GenBank database by BLAST search analysis. Sequences generated from materials in this study and retrieved from GenBank were initially aligned using CLUSTAL X [22], after which the alignment was refined manually using PHYDIT ver. 3.2 [23]. Neighbor-joining trees were reconstructed for ITS gene sequences with Kimura's 2-parameter distance model [24] using the MEGA 4 program [25]. Bootstrap analysis using 1,000 replications was performed to assess the relative stability of the branches.
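The distance model behind the tree can be illustrated with a short worked example. The sketch below is not the authors' pipeline (which used CLUSTAL X, PHYDIT and MEGA); it only shows, for two hypothetical, already-aligned sequences of equal length, how Kimura's 2-parameter distance is obtained from the observed proportions of transitions and transversions.

import math

PURINES = {"A", "G"}

def kimura_2p_distance(seq1, seq2):
    """Kimura (1980) 2-parameter distance between two aligned sequences.

    P = proportion of sites showing a transition (A<->G or C<->T),
    Q = proportion of sites showing a transversion (purine <-> pyrimidine),
    d = -0.5 * ln((1 - 2P - Q) * sqrt(1 - 2Q)).
    Gapped or ambiguous sites are skipped.
    """
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to the same length")
    valid = transitions = transversions = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a not in "ACGT" or b not in "ACGT":
            continue  # skip gaps / ambiguous bases
        valid += 1
        if a == b:
            continue
        if (a in PURINES) == (b in PURINES):
            transitions += 1      # A<->G or C<->T
        else:
            transversions += 1    # purine <-> pyrimidine
    P, Q = transitions / valid, transversions / valid
    return -0.5 * math.log((1 - 2 * P - Q) * math.sqrt(1 - 2 * Q))

# Hypothetical 20-bp fragments, used only to exercise the formula.
print(round(kimura_2p_distance("ACGTACGTACGTACGTACGT",
                               "ACGTACGTACGCACGTACAT"), 4))

A matrix of such pairwise distances is what a neighbor-joining program then clusters into the tree topology that is bootstrapped.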
Sequence data were deposited in GenBank and assigned accession numbers KC285890 for isolate CNU081043 and KC285891 for CNU081055.
Morphological characterization.
Morphological characteristics of isolates CNU081043 and CNU081055 were examined on corn meal agar (CMA), malt extract agar (MEA), oat meal agar (OMA), and PDA. Small discs (0.5 cm diameter) were cut from the margin of developing cultures, inoculated on three points of the Petri dish for CNU081043 and in the center of plates for CNU081055 and incubated at 20~35 o C in the dark to determine the favorable growth conditions. The mycelia, phialides, penicillus and conidiophores were observed using a BX50 microscope (Olympus, Tokyo, Japan). The conidia, phialides and conidiophores were measured using an Artcam 300MI digital camera (Artray, Tokyo, Japan). Colors were named using a mycological color chart [26]. Morphological characteristics of the isolate were then compared with previous descriptions.
RESULTS AND DISCUSSION
Taxonomy of the isolate CNU081043. Molecular analysis: To determine the phylogenetic relationship among the endophytic isolate CNU081043 from chili pepper and its related species, the ITS region was compared. BLAST searches revealed 99% sequence similarity between the endophytic fungal isolate (CNU081043) and its relevant sequences in GenBank. Isolate CNU081043 and the GenBank isolates P. inflatus (isolate H34, accession no. GU466291) and Acremonium atrogriseum (AB540569) clustered together in a group that matched the reference P. inflatus well with a high bootstrap value (100%). There was only one nucleotide difference between CNU081043 and P. inflatus isolate H34, but two or more nucleotide differences were observed among other related species from the GenBank database (Fig. 1; the present isolate is shown in bold, and evolutionary analyses were conducted using the MEGA5 program [25]). Morphological characterization: Taxonomic descriptions and microphotographs of morphological structures of the species are shown in Table 1 and Fig. 2. Colony on MEA: Slow growing, attaining a diameter of 33 (34.02)~35 mm in 14 days at 25°C. Appearing powdery, velvety and cottony when freshly isolated, becoming more floccose to funiculose and tougher from an increase in vegetative hyphae after several transfers. Vegetative hyphae hyaline, smooth-walled (Fig. 2A and 2B). Colony on PDA: Growing slowly on PDA media at 25°C. The colony length ranged from 33 (34)~35 mm after 14 days; the colony was white to pale yellow in color, with a pale yellow reverse. The optimum temperature for the growth of this fungus on PDA media was 25°C (Fig. 2C and 2D).
Colony on OMA: Slow growing at 25°C, with the diameter ranging from 33 (34)~38 mm after 14 days. Velvety to granular, greenish white to pale yellow. The fungus did not (Table 1, Fig. 2). Isolate examined: On roots of chili pepper; CNU081043. Only one Paecilomyces was isolated from this plant. This is a rarely isolated endophytic fungus, and its isolation frequency was 0.21. Distribution: Common species, found especially in forest soil. Paecilomyces species are rarely isolated as endophytes, and this is the first report of P. inflatus in Korea.
Taxonomy of the isolate CNU081055.
Phylogenetic analysis: To determine the phylogenetic relationship among the endophytic isolate CNU081055 from chili pepper and its related species, the ITS region was analyzed. The results revealed 99~100% sequence similarity between the endophytic fungal isolate (CNU081055) and its relevant sequences in GenBank. Isolate CNU081055 and B. ochroleuca GCA-605-5 (DQ279793), which was isolated from Gladiolus grandiflorus in Mexico, showed 100% sequence similarity. The ITS sequence of the present isolate also showed 99% sequence similarity (1 nucleotide difference) with B. ochroleuca isolate ATT093 (HQ607832) and other Bionectria spp. isolated from different regions of the world. Furthermore, the isolate showed similarity with its anamorphs Gliocladium roseum isolate G97012 (AJ309334) and Clonostachys rosea f. catenulata isolate NRRL:22970 (HM751081). Finally, the phylogenetic tree revealed that sequences of CNU081055 and B. ochroleuca isolate GCA-605-5 clustered together in a group in which the reference B. ochroleuca matched with a high bootstrap value (64%) (Fig. 3).
Fig. 3. Neighbor-joining phylogenetic tree of the endophytic fungus CNU081055 and its relevant species from GenBank based on internal transcribed spacer gene sequences. Numbers at the nodes indicate bootstrap values from a test of 1,000 replications. The scale bar indicates the number of nucleotide substitutions. The present isolate is shown in bold. Evolutionary analyses were conducted using the MEGA5 program [25].
Morphological characterization: Taxonomic descriptions and microphotographs of the morphological structures of the species are shown in Table 2 and Fig. 4.
Bionectria ochroleuca (Schweinitz) Schroers & Samuels 1997 (Table 2, Fig. 4)
Colony on PDA: Growth fast at 25°C, with colonies reaching 35~40 mm in diameter in 7 days. Colonies
whitish to yellowish. Surface textures plane, velutinous. Reverse white to light yellowish. Aerial mycelium strongly developed in thick, often erect hyphal strands. Surface unpigmented or with slight yellow pigmentation, appearing white because of aerial mycelium and white conidial masses, or in yellow or orange hues with yellowish white to orange-white granules because of conidial masses (Fig. 2A). Colony on OMA: Yellow pigment generally diffusing beyond the colony, pigment only visible in the agar inside the colony margins. Colony reverse on OMA yellowish white to light yellow, with orange or brownish hues occurring with time and becoming generally orange-white to light orange or carrot-red after incubation under UV. Surface mycelium optimally developed, felty to tomentose, arranged in strands, particularly toward the colony centre, or granulose because of conidial masses from solitary or aggregated conidiophores. Surface yellow or orange hues because of pigmentation of the agar or with yellowish white to orange-white granules because of conidial masses (Fig. 2C). Colony on CMA: Colony diameter reaching 40 mm after 7 days at 25°C; a fast-growing fungus. The suitable temperature for the growth of this fungus ranged from 24~27°C. The colony color was transparent white to light brownish, and granulose structures were produced on CMA plates because of conidial masses from solitary or aggregated conidiophores. Surface mycelium developed on CMA (Fig. 2D). Conidiophores, phialides, penicillus and conidia: Conidiophores dimorphic. Primary conidiophores verticillium-like, formed throughout the colony, arising from the agar surface. Stipes (20~) 75 (~250) µm long, 3.5~5.5 µm wide at the base, generally longer than the 30~120 µm high branching portion. Phialides divergent, in whorls of 3~5, or singly from lower levels, straight, each producing a small, hyaline drop of conidia. Secondary conidiophores solitary or aggregated, particularly around the colony center. Branches and phialides appressed, phialides slightly flask-shaped, with the widest point below the middle, slightly tapering in the upper part. Conidia from primary conidiophores larger, frequently less curved, (3.4~) 5.3 (~7.3) × (2.0~) 2.8 (~4.7) µm. Perithecia formed frequently in single ascospore isolates, crowded in large numbers on a well-developed stroma.
Isolate examined. On the roots of chili pepper; CNU081055. Only one Bionectria was isolated from this plant. This organism is a rarely isolated endophytic fungus and this is the first report of its occurrence in Korea. | 2016-05-12T22:15:10.714Z | 2013-03-01T00:00:00.000 | {
"year": 2013,
"sha1": "cf575101be5d37ccf20c72d46049d1bc4c1f9048",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.5941/MYCO.2013.41.1.18?needAccess=true",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "cf575101be5d37ccf20c72d46049d1bc4c1f9048",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
25734775 | pes2o/s2orc | v3-fos-license | Enalapril and Diltiazem Co-Administration and Respiratory Side Effects of Enalapril
A persistent, chronic dry cough is the most common adverse effect of angiotensin converting enzyme (ACE) inhibitors therapy. The mechanism of this respiratory adverse effect is related to the inhibition of ACE and the accumulation of bradykinin, substance P, prostanoids and other inflammatory neuropeptides in the airways. The aim of this study was to follow the relationship between 15-day administration of enalapril and the defense reflexes (cough and bronchoconstriction) of the airways in experimental animals, as well as the possibility of their pharmacological restriction with simultaneous diltiazem administration. Cough reflex was investigated by the method of mechanical irritation of laryngopharyngeal and tracheobronchial area in non-anesthetized cats. The reactivity of tracheal smooth muscles of the airways to bronchoconstrictor mediators (histamine 10 nM – 1 mM, acetylcholine 10 nM – 1 mM and KCl 1 mM – 100 mM) was evaluated by an in vitro method in guinea pigs. Enalapril 5 mg/kg/day and diltiazem 30 mg/kg/day were administered perorally for 15 days. The results showed that long-lasting administration of enalapril resulted in a significant increase of measured cough parameters and increased reactivity of tracheal smooth muscle to histamine and KCl. Simultaneous administration of enalapril together with diltiazem significantly decreased the enalapril induced cough, and decreased enalapril induced hyperreactivity of tracheal smooth muscles to KCl. The results showed a partially protective effect of diltiazem and enalapril co-administration on the respiratory adverse effects induced by enalapril therapy.
Introduction
Angiotensin-converting enzyme (ACE) inhibitors are the drugs of choice in the treatment of hypertension and congestive heart failure.ACE-inhibitors lower the blood pressure without adverse effects on lipid and glucose metabolisms.However, it has been reported that in some patients ACE-inhibitors induce a dry nonproductive cough with the incidence between 0.2-37 % (Israili and Hall 1992).Other airway reactions following ACE-inhibitor therapy such as dyspnoe and wheezing occur less frequently (Semple 1995).
The mechanism of respiratory adverse effects associated with ACE-inhibitors is related to the inhibition of angiotensin convertase, which plays a pivotal role in the metabolism of bradykinin and substance P. Kinins (such as bradykinin), normally degraded by ACE, are accumulated in the airways as a result of ACE inhibition.The result of this effect is enhanced sensitivity of the cough reflex and the reactivity of airway smooth muscles (Trifilieff et al. 1993).
Bradykinin stimulates bronchial C-fibres and induces the release of substance P via axon reflexes (Sekizawa et al. 1996).Substance P is the second neuropeptide for the proteolytic action of ACE and may be involved in the stimulation of respiratory adverse effects of ACE-inhibitors (cough reflex and bronchoconstriction).
Another possible mechanism that can be involved here is that bradykinin and substance P may stimulate phospholipase A 2 activity that results in an increased formation of arachidonic derivates, mainly prostaglandins and tromboxane A 2 (Dendorfer et al. 1999).PGF 2α and PGE 1 belong to prostanoids stimulating the cough reflex (Ho et al. 2000).
The aim of the present study was to investigate the effect of ACE-inhibitor enalapril administration on the mechanically stimulated cough and reactivity of the airway smooth muscle in experimental animals.ACEinhibitors are frequently used in clinical practice for the treatment of hypertension in combination with the group of calcium channel blockers.Ca 2+ channel blockers exert their therapeutic effects by reversibly blocking the L-type voltage-dependent Ca 2+ channels (Striessnig et al. 1998).The block of transmembrane calcium ion flux in the respiratory tract through these channels causes inhibition of bronchoconstriction and modulation of the cough reflex (Undem et al. 2002).The second phase of the study was to examine the possibility of lowering respiratory adverse effects of enalapril by means of simultaneous administration of Ca 2+ channel blocker diltiazem.
Material
Enalapril, diltiazem, histamine hydrochloride, acetylcholine were purchased from Sigma-Aldrich.All other chemicals and solvents used were purchased from commercial sources.
Mechanically induced cough by in vivo method
A method of mechanical stimulation of the laryngopharyngeal and tracheobronchial area of the airways in non-anesthetized cats of both sexes weighing 1500-2500 g was used in the experiment (Korpáš and Nosáľová 1991)
Reactivity of smooth muscles of the airways by in vitro method
The reactivity of tracheal smooth muscles was estimated in vitro after 15 days of administration of enalapril (5 mg/kg/day) and after 15 days of combined administration of enalapril (5 mg/kg/day) with diltiazem (30 mg/kg/day).
TRIK guinea pigs (250-350 g) of either sex were used in the experiment. The guinea pig tracheal strips were placed in a 20-ml organ chamber containing Krebs-Henseleit buffer of the following composition (mM): NaCl, 110.0; KCl, 4.8; CaCl2, 2.35; MgSO4, 1.20; KH2PO4, 1.20; NaHCO3, 25.0; in glass-distilled water. Organ chambers were maintained at 36.5±0.5°C and were aerated continuously with a mixture of 95 % O2 and 5 % CO2 to maintain pH 7.5±0.1. The tissue strips were initially set to 4 g of tension (30 min loading phase). After this period, the tension in each tissue segment was readjusted to a baseline of 2 g (30 min adaptation phase). During these periods the tissue was washed at 15 min intervals. The amplitudes of isometric contraction (mN) of the tracheal smooth muscle to the cumulative doses of histamine (10 nM - 1 mM), acetylcholine (10 nM - 1 mM) and KCl (1 mM - 100 mM) were used for the reactivity evaluation (Urdzik et al. 2003).
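The read-out described above is a cumulative concentration-response relationship. As a purely illustrative sketch (not the authors' analysis, which compared raw contraction amplitudes between groups), the following code fits a Hill-type (Emax) curve to hypothetical contraction amplitudes so that a maximal response and an EC50 could be summarized for one tissue; all numbers are invented.

import numpy as np
from scipy.optimize import curve_fit

def hill(log_conc, emax, log_ec50, n_h):
    """Hill (Emax) model parameterised on log10 agonist concentration."""
    return emax / (1.0 + 10.0 ** (n_h * (log_ec50 - log_conc)))

# Hypothetical cumulative histamine concentrations (M) and mean contraction
# amplitudes (mN); the values are placeholders for illustration only.
conc = np.array([1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 1e-3])
amplitude = np.array([0.4, 1.1, 3.0, 6.2, 8.1, 8.6])

params, _ = curve_fit(hill, np.log10(conc), amplitude, p0=[8.6, -6.0, 1.0])
emax, log_ec50, n_h = params
print(f"Emax = {emax:.2f} mN, EC50 = {10 ** log_ec50:.2e} M, slope = {n_h:.2f}")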
Statistical analysis
The results of the experiments that estimated the cough response were evaluated by the Wilcoxon-Wilcox statistical method. In the in vitro experiments, the reactivity of the tracheal smooth muscle was evaluated by Student's t-test for unpaired data.
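For the in vitro comparison, an unpaired two-sample test of contraction amplitudes can be sketched as follows; the amplitude values are invented and stand in for measurements from two treatment groups, so this only illustrates the kind of test named above.

import numpy as np
from scipy import stats

# Hypothetical contraction amplitudes (mN) at one agonist concentration.
enalapril_only = np.array([7.9, 8.4, 8.1, 8.8, 7.6, 8.3])
enalapril_diltiazem = np.array([6.1, 6.8, 5.9, 6.5, 7.0, 6.3])

# Student's t-test for unpaired (independent) samples.
t_stat, p_value = stats.ttest_ind(enalapril_only, enalapril_diltiazem)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")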
Results
During the 15-day peroral administration of enalapril (5 mg/kg b.w.), the sensitivity of the cough reflex was investigated by the method of mechanical stimulation of the airways in non-anesthetized cats. In comparison with control values, a statistically significant increase in the number of cough efforts (Fig. 1) was observed on days 3, 5, 8, 10, 12 and 15 after enalapril administration. The measured cough parameter was increased from both parts of the airways. The 15-day simultaneous administration of enalapril (5 mg/kg b.w.) together with diltiazem (30 mg/kg b.w.) revealed a decline in the number of cough efforts from the laryngopharyngeal and tracheobronchial parts of the airways (Fig. 2). After 15 days of drug administration, the reactivity of the tracheal smooth muscle was investigated in vitro. The 15 days of treatment with enalapril (5 mg/kg/day) resulted in a significant increase of the reactivity of tracheal smooth muscle to cumulative doses of histamine. This increase in bronchoconstrictor activity compared to the control values was significant at the concentration range of histamine 10 nM - 1 mM. However, 15-day combination treatment with enalapril and diltiazem did not cause a lowering of the contraction of tracheal smooth muscle to histamine (Fig. 3). The reactivity of tracheal smooth muscles to acetylcholine (10 nM - 1 mM) after enalapril treatment showed a significant increase of contraction amplitude even at low doses of acetylcholine. Simultaneous administration of enalapril with diltiazem did not influence the tracheal reactivity to acetylcholine (Fig. 4).
An unambiguous lowering in the reactivity of tracheal smooth muscle to other bronchoconstrictor mediator KCl (1 mM -100 mM) was observed after 15-day combined therapy with enalapril and diltiazem, in comparison to enalapril monotherapy (Fig. 5).
Discussion
Clinical trials and experimental studies dealing with ACE-inhibitor treatment have been currently aimed at management of the cough induced by administration of the above mentioned group of substances.The basic condition for the cough to be eliminated by means of the pharmacological intervention, consists of maintaining the primary pharmacological efficacy of ACE-inhibitors, thanks to which they are so widely used in clinical practice.
ACE-inhibitors and calcium channel blockers are in combination widely used in the treatment of cardiovascular diseases.This co-administration exhibits synergic hemodynamic, antiproliferative, antithrombotic and antiatherogenic effects (Ruschitzka et al. 1998).In our experimental conditions, the animals treated for 15 days with enalapril showed a statistically significant increase of the cough response to mechanical stimuli.Simultaneous administration of enalapril with diltiazem decreased the number of cough efforts in comparison to enalapril monotherapy.A significant decline was found mainly in the tracheobronchial region.
The neural pathway responsible for the cough regulation may undergo disease-related changes (plasticity), which cause that the protective aspects of the cough reflex are replaced by exaggerated and inappropriate coughing in response to stimuli that are otherwise only slightly irritating (Mazzone and Canning 2002).Increased incidence of the cough after enalapril treatment is linked with ACE-inhibition and accumulation of bradykinin, substance P, prostaglandins and other pro-inflammatory mediators in the airways (Gajdoš et al. 2000).These accumulated substances may sensitize airway afferent nerve endings, thereby lowering their chemical and mechanical threshold for activation.From the point of view of afferent nerve endings, the cough reflex is induced by stimulation of rapidly adapting airway mechanoreceptors (RARs) (Hargreaves et al. 1992), bronchopulmonary C-fibres (Fox 1996) and Aδ nociceptors (Undem et al. 2002).While all these three types of receptors are activated differently by tussigenic agents, RARs cause a cough directly, C-fibre receptors by local release of tachykinins that stimulate RARs.The reflex role of Aδ nociceptors is not known (Widdicombe 2001).Peripheral afferent nerve sensitization may lead to increased input to the nucleus tractus solitarius (nTS) in the brainstem and contribute to the cough plasticity (Mazzone and Canning 2002).
The mechanism of diltiazem action in suppression of the cough induced by enalapril administration is unknown.Modulation of the cough reflex with diltiazem could involve the peripheral and central level.The antitussive effect of diltiazem can be the result of its ability to inhibit the activity of peripheral nerve endings regulating the cough reflex.The modulation of the central transmission of the cough reflex through inhibition of calcium-dependent glutamate release in nucleus tractus solitarius (nTS) may be the second location where the calcium channel blockers might act (Korpáš and Nosáľová 1991).
Apart from the cough, bronchospasm is another typical reflex response occurring secondarily to RAR stimulation.On the other hand, increased reactivity of airway smooth muscles can enhance the cough reflex (Canning et al. 2001).Our experimental results confirmed the increase of tracheal smooth muscle activity after enalapril treatment.
The increased reactivity of the tracheal smooth muscle after enalapril treatment could be caused by a release of kinins, tachykinins and other proinflammatory mediators.During the treatment with ACE-inhibitors, bradykinin and substance P could contribute to the enhanced reactivity of airways smooth muscles directly by inducing smooth muscle contraction and indirectly by local edema.Furthermore, by stimulating phospholipase A 2 , bradykinin could augment the formation of other bronchoconstrictor mediators from the group of prostaglandins and tromboxane A 2 .Bradykinin and substance P can also release histamine from mast cells (Israili and Hall 1992).
In our experiments, 15 days' enalapril treatment increased the amplitude of contractions to cumulative doses of histamine under the in vitro conditions.This finding partially supports the results of Bucknall et al. (1988), who demonstrated increased bronchial reactivity to histamine in subjects who cough after taking an ACEinhibitor.However, simultaneous administration of enalapril with diltiazem did not decrease the reactivity of tracheal smooth muscle to histamine.The contractile response of airway smooth muscle to histamine depends upon stimulation of phospholipase C-dependent pathway, and release of calcium from intracellular stores.The classical voltage-dependent calcium channel antagonists, which inhibit calcium entrance from extracellular sources, are not able to block histamine-induced tracheal smooth muscle contraction (Hall 2000).
Another contractile agonist used in our experimental conditions for evaluation of tracheal smooth muscle reactivity in guinea pigs was acetylcholine.However, after 15 days of enalapril administration acetylcholine added to the organ bath caused a contraction of tracheal smooth muscle strips only in low concentrations and this action was not influenced by combination of enalapril with diltiazem.This result cannot be explained exactly on the basis of our experiments.Long-lasting administration of ACEinhibitors causes an accumulation of bradykinin, substance P in the airways.These neuropeptides increase the tracheal smooth muscle contraction induced by acetylcholine (Lundberg et al. 1983).Besides the contractile effect, substance P plays a role in the regulation of airway smooth muscle tone through sensory nerve inhibitory system, which modulates cholinergic contraction.According to some experiments the contractile response of tracheal smooth muscle induced by acetylcholine was inhibited by substance P (Szarek et al. 1996).
Acetylcholine induces the tracheal smooth muscle contraction through the release of calcium from intracellular stores.For this reason, diltiazem in combination with enalapril is not able to influence the contraction of tracheal smooth muscles induced by acetylcholine.
The airway smooth muscle displays a concentration-related contraction by the administration of KCl in terms of a depolarization mechanism. It has been demonstrated that Ca2+ antagonists cause relaxation of airway smooth muscles precontracted with KCl by blocking the transmembrane Ca2+ influx through voltage-dependent Ca2+ channels (Koga et al. 1989). In our experiments, cumulative administration of KCl induced a concentration-dependent increase in tracheal smooth muscle reactivity after enalapril treatment. The enalapril-induced release of bronchoconstrictor mediators might potentiate the potassium-induced contraction of tracheal smooth muscle. The combination of enalapril and diltiazem caused a significant decline in the amplitude of tracheal smooth muscle contraction induced by KCl. Moreover, the bronchodilatory effect of diltiazem could enhance its cough-suppressing activity in the cough induced by enalapril administration.
In conclusion, the present study demonstrates the protective effect of diltiazem administration on the incidence of the cough and partially on the occurrence of bronchoconstriction during enalapril treatment.The combination of ACE-inhibitors with calcium channel blockers is useful in the treatment of cardiovascular diseases, but it is also beneficial in the management of respiratory side effects of ACE-inhibitors.
Fig. 1 .
Fig. 1. Changes in the number of cough efforts (NE) from the laryngopharyngeal (LPh) and tracheobronchial (TB) areas of the airways of the non-anesthetized cats during 15 days of enalapril administration. The control represents the number of cough efforts before drug administration. Data represent mean ± S.E.M., n=12, * p<0.05, ** p<0.01.
Fig. 2 .
Fig. 2. Changes in the number of cough efforts (NE) from the laryngopharyngeal (LPh) and tracheobronchial (TB) areas of the airways of the non-anesthetized cats during 15 days of administration of enalapril with diltiazem. The control represents the number of cough efforts before drug administration. Data represent mean ± S.E.M., n=12, * p<0.05, ** p<0.01. | 2017-08-15T00:09:55.273Z | 2005-01-01T00:00:00.000 | {
"year": 2005,
"sha1": "718fb1b2dab6f634e097cec16630e12f891ab26e",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.33549/physiolres.930605",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "718fb1b2dab6f634e097cec16630e12f891ab26e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
33365371 | pes2o/s2orc | v3-fos-license | A Hybrid ICA-SVM Approach for Determining the Quality Variables at Fault in a Multivariate Process
The monitoring of a multivariate process with the use of multivariate statistical process control (MSPC) charts has received considerable attention. However, in practice, the use of MSPC charts typically encounters a difficulty. This difficulty involves determining which quality variable, or which set of quality variables, is responsible for the generation of the signal. This study proposes a hybrid scheme, which is composed of independent component analysis (ICA) and support vector machine (SVM), to determine the fault quality variables when a step-change disturbance exists in a multivariate process. The proposed hybrid ICA-SVM scheme initially applies ICA to the Hotelling T2 MSPC chart to generate independent components (ICs). The hidden information of the fault quality variables can be identified in these ICs. The ICs then serve as the input variables of the classifier SVM for performing the classification process. The performance of various process designs is investigated and compared with the typical classification method. Using the proposed approach, the fault quality variables for a multivariate process can be accurately and reliably determined.
Introduction
In recent years, considerable concern has arisen over the multivariate statistical process control MSPC charts in monitoring a multivariate process 1-6 .The MSPC chart is one of the most effective techniques to detect the occurrence of a multivariate process disturbance.An out-of-control signal implies that disturbances have been occurred in the process.When a signal is triggered by the MSPC chart, the process personnel should begin to search for the root causes of the underlying disturbance.Once the root causes have been determined, the process personnel would significantly decrease the effects of the disturbance and then bring the underlying process back in a state of statistical control.
When the root causes have been determined, the necessary remedial actions can be properly taken in order to compensate for the effects of the underlying disturbance.Also, the identification and fixing of the root causes would mainly depend on the accurate identification of the quality variables at fault.As a consequence, the identification of the quality variables at fault in a multivariate process is a very important research issue.
However, the use of the MSPC charts typically encounters a major problem in the interpretation of the signal. Although the MSPC chart's signal will indicate that the underlying process is out of control, the quality variables at fault are very difficult to determine. The degree of difficulty increases when the number of quality variables p in the multivariate process increases. Typically, there are 2^p − 1 possible sets of quality variables at fault in an out-of-control multivariate process which has p quality variables. For example, there are 31 possible sets of quality variables at fault in a multivariate process with 5 quality variables. When an MSPC signal is triggered, it is not straightforward to determine which one of the 31 possible combinations is responsible for this signal.
Runger et al. 1 introduced a decomposition method to overcome this problem.They computed an approximate chi-square statistic to determine which of the monitored quality variables invoked the MSPC signal.However, their method has some limitations in certain situations 2 .Specifically, their approach may not be able to offer an accurate identification rate AIR when a small magnitude of process disturbance exists in a multivariate process.Some classification techniques are therefore developed to overcome the drawback of their approach 2, 3 .Shao and Hsu 2 used the Artificial Neural Networks ANNs and support vector machine SVM approaches to determine the quality variables at fault in the case of process mean shifts.C. S. Cheng and H. P. Cheng 3 also studied the ANN and SVM techniques to determine the quality variables at fault in the case of process variance shifts.
Huang et al. 4 demonstrated that performance of hierarchical support vector machine technique is better than the traditional SVM.Also, Shao et al. 5 proposed decomposition schemes and developed useful statistics to estimate the quality variables at fault in the case of variance shifts that have occurred in a multivariate process.However, in their approach, the sample size needed was very large, which may be different from what is encountered in practice.
Many studies on the utilization of one-shot or one-step classifiers' approach have been conducted 1-4, 6 .However, very little is known about the hybrid scheme for determining the quality variables at fault in a manufacturing process 7, 8 .In this paper, we present the use of a hybrid mechanism, which integrates independent component analysis ICA and SVM as processing methods to improve the results in determining the quality variables at fault in an out-of-control multivariate process.The basic concept of the proposed hybrid approach is that the most useful information to determine the quality variables at fault may be embedded in the monitor statistics, for example, the Hotelling T 2 statistics in the Hotelling T 2 control chart.We could enhance the AIR if we decompose the monitor statistics and input the decomposed factors to the classifiers.
Due to its frequent use in real applications 2, 9, 10 , this study uses the Hotelling T 2 control chart to detect the process mean shifts in a multivariate process.In addition, since the ICA has been reported to have the capability of distinguishability 11-19 , this study uses the ICA as the first-step technique to extract the independent components ICs from Hotelling T 2 statistics.The hidden useful information of the quality variables at fault would be embedded in these ICs.In the second step of classification, those ICs are then used as the input variables of the classifiers.This study considers the SVM as a classifier for the reason of its great potential and superior performance in practical applications 20-27 .This study is organized as follows.Section 2 discusses the individual components of the proposed hybrid mechanism.Section 3 addresses the appropriate models for determining the quality variables at fault when the process mean shifts are introduced in a multivariate process.In this section, the various experimental settings and the simulation results are also discussed.The final section summarizes the research findings and presents our conclusions.
Methodologies
There are two components in our proposed hybrid scheme, and they include independent component analysis and the support vector machine.The following section addresses the applications and the use of these two techniques.
Independent Component Analysis
The present study employs ICA to enhance the accurate identification rate AIR of the proposed hybrid scheme.There are some ICA applications for process monitoring.Lu et al. 11 successfully combined the ICA and SVM to identify the control chart patterns.Kano et al. 12 applied the ICs, instead of the original measurements, to monitor a process.In their study, a set of devised statistical process control charts have been developed effectively for each IC. Lee et al. 13 used the utilization of kernel density estimation to define the control limits of ICs that do not satisfy Gaussian distribution.In order to monitor the batch processes which combine independent component analysis and kernel estimation, Lee et al. 14 extended their original method to multiway ICA.Xia and Howell 15 developed a spectral ICA approach to transform the process measurements from the time domain to the frequency domain and to identify major oscillations.
Let X = (x_1, x_2, ..., x_m)^T be a matrix of size m × n, m ≤ n, consisting of the observed mixture signals x_i. In the basic ICA model, the matrix X can be modeled as follows:

X = AS = Σ_{i=1}^{m} a_i s_i,   (2.1)

where a_i is the ith column of the m × m unknown mixing matrix A and s_i is the ith row of the m × n source matrix S. The vectors s_i are latent source signals that cannot be directly observed from the observed mixture signals x_i. The ICA model aims at finding an m × m demixing matrix B such that

Y = BX,   (2.2)

where y_i is the ith row of the matrix Y, i = 1, 2, ..., m. The vectors y_i must be as statistically independent as possible and are called independent components (ICs). ICs are used to estimate the latent source signals s_i. The vector b_i in (2.2) is the ith row of the demixing matrix B, i = 1, 2, ..., m. It is used to filter the observed signals X to generate the corresponding independent component y_i, that is,

y_i = b_i X.   (2.3)

The ICA modeling is formulated as an optimization problem by setting up the measure of the independence of the ICs as an objective function and using some optimization techniques for solving the demixing matrix B [28, 29]. ICs with non-Gaussian distributions imply statistical independence [28, 29], and the non-Gaussianity of the ICs can be measured by the negentropy [28]:

J(y) = H(y_gauss) − H(y),   (2.4)

where y_gauss is a Gaussian random vector having the same covariance matrix as y, and H is the entropy of a random vector y with density p(y), defined as H(y) = −∫ p(y) log p(y) dy. The negentropy is always nonnegative and is zero if and only if y has a Gaussian distribution. Since negentropy is computationally very difficult to use directly, an approximation of negentropy is proposed in [28]:

J(y) ≈ [E{G(y)} − E{G(v)}]^2,

where v is a Gaussian variable of zero mean and unit variance, y is a random variable with zero mean and unit variance, and G is a nonquadratic function, given by G(y) = log cosh(y) in this study. The FastICA algorithm proposed in [28] is adopted in this paper to solve for the demixing matrix B. Two preprocessing steps are common in ICA modeling: centering and whitening [28]. Firstly, the input matrix X is centered by subtracting the row means of the input matrix, that is, x_i ← x_i − E[x_i]. The matrix X with zero mean is then passed through the whitening matrix V to remove the second-order statistics of the input matrix, that is, Z = VX. The whitening matrix V is twice the inverse square root of the covariance matrix of the input matrix, that is, V = 2 C_X^{−1/2}, where C_X = E[xx^T] is the covariance matrix of X. The rows of the whitened input matrix Z, denoted by z, are uncorrelated and have unit variance, that is, E[zz^T] = I. In this study, it is assumed that the training and testing process datasets are centered and whitened.
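A minimal sketch of this decomposition step is given below. It assumes scikit-learn's FastICA implementation (with the log-cosh contrast, matching the choice of G above) and uses two synthetic mixed signals in place of the monitoring statistics; it illustrates the technique rather than reproducing the authors' code.

import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 500)

# Two latent, non-Gaussian sources and a 2 x 2 mixing matrix A.
s1 = np.sign(np.sin(3 * t))            # square-like wave
s2 = rng.laplace(size=t.size)          # heavy-tailed noise
S = np.c_[s1, s2]
A = np.array([[1.0, 0.5],
              [0.4, 1.0]])
X = S @ A.T                            # observed mixtures, shape (500, 2)

# FastICA with the log-cosh contrast; centering and whitening are handled
# internally before the rotation is estimated.
ica = FastICA(n_components=2, fun="logcosh", random_state=0)
ICs = ica.fit_transform(X)             # estimated independent components
print(ICs.shape, ica.mixing_.shape)    # (500, 2) (2, 2)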
Support Vector Machine
The use of the SVM algorithm can be described as follows. Let {(x_i, y_i)}, i = 1, ..., N, with x_i ∈ R^d and y_i ∈ {−1, 1}, be the training set of input vectors and labels. Here, N is the number of sample observations, d is the dimension of each observation, and y_i is the known target. The algorithm seeks the hyperplane w · x_i + q = 0, where w is the normal vector of the hyperplane and q is a bias term, to separate the data from the two classes with maximal margin width 2/‖w‖; the points lying on the margin boundary are named support vectors. In order to obtain the optimal hyperplane, the SVM solves the following optimization problem [30]:
min_{w, q} (1/2)‖w‖^2   subject to   y_i (w · x_i + q) ≥ 1,  i = 1, 2, ..., N.   (2.5)
It is difficult to solve (2.5) directly, and we need to transform the optimization problem into its dual problem by the Lagrange method. The Lagrange multipliers α must be nonnegative real coefficients. Equation (2.5) is transformed into the following constrained form [30]:
max_α  Σ_{i=1}^{N} α_i − (1/2) Σ_{i=1}^{N} Σ_{j=1}^{N} α_i α_j y_i y_j (x_i · x_j)   subject to   Σ_{i=1}^{N} α_i y_i = 0,  0 ≤ α_i ≤ C,  i = 1, 2, ..., N.   (2.6)
In (2.6), C is the penalty factor and determines the degree of penalty assigned to an error. It can be viewed as a tuning parameter that controls the trade-off between maximizing the margin and the classification error.
In general, a linear separating hyperplane cannot be found for all application data. For problems that cannot be linearly separated in the input space, the SVM uses the kernel method to transform the original input space into a high-dimensional feature space where an optimal linear separating hyperplane can be found. Common kernel functions are the linear, polynomial, radial basis function (RBF), and sigmoid kernels. In this study, we used the multiclass SVM method proposed by Hsu and Lin [31].
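The classification step can be sketched with scikit-learn's SVC, which implements the soft-margin formulation above with an RBF kernel and handles the multiclass case by pairwise (one-versus-one) decomposition, in the spirit of Hsu and Lin [31]. The feature matrix below is random placeholder data, not the study's inputs.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Placeholder features, e.g. averaged quality variables plus two ICs,
# labelled by a hypothetical fault class (3 classes here).
X = rng.normal(size=(300, 5))
y = rng.integers(0, 3, size=300)
X[y == 1, 0] += 2.0          # give the classes some artificial separation
X[y == 2, 1] -= 2.0

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=1)

clf = SVC(kernel="rbf", C=10.0, gamma=0.1)
clf.fit(X_train, y_train)
print(f"accurate identification rate: {clf.score(X_test, y_test):.3f}")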
The ICA-SVM Scheme
This study integrates ICA and SVM for determining the quality variables at fault of an out-of-control multivariate process. In the training phase, the aim of the proposed scheme is to obtain the proper parameter setting for the SVM model. Since the RBF kernel function is adopted in this study, the performance of the SVM is primarily affected by the setting of the parameters C and γ. There are no general rules for the choice of these two parameters. This study uses the grid search proposed by Hsu et al. [32] to set them. The trained SVM model with the proper parameter setting is preserved and employed in the testing phase.
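A hedged sketch of the parameter search described above is given below: a cross-validated grid search over exponentially spaced C and γ values of the kind recommended by Hsu et al. [32]. The grid bounds and the random training data are illustrative assumptions only.

import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X_train = rng.normal(size=(200, 5))     # placeholder training inputs
y_train = rng.integers(0, 7, size=200)  # e.g. 7 possible fault sets when p = 3

param_grid = {
    "C": [2.0 ** k for k in range(-5, 16, 2)],
    "gamma": [2.0 ** k for k in range(-15, 4, 2)],
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X_train, y_train)
print("best (C, gamma):", search.best_params_)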
The proposed model first collects two sets of Hotelling T 2 statistics from an out-ofcontrol process.The ICA model is used to generate the two estimated ICs from the observed Hotelling T 2 statistics.Subsequently, the proposed approach considers those two ICs and 3 averaged quality variables, 4 averaged quality variables, and 5 averaged quality variables as the inputs for SVM in the case of processes with 3 quality characteristics, 4 quality characteristics, and 5 quality characteristics, respectively.
The Simulated Example
This study employs a simulated example to demonstrate the use of our proposed approach. In our simulation, we assume that a multivariate process is initially in control, and the sample observations come from a multivariate normal distribution with known mean vector μ0 and covariance matrix Σ0. This study assumes that a disturbance has intruded into the underlying process at time t. It results in a mean vector change, which is shifted from μ0 to μ1.
This study applies the Hotelling T 2 control chart to monitor a multivariate process in the cases of 3, 4, and 5 quality characteristics. For each type of process, this study considers the following types of correlation, ρ, between any two quality variables: (1) no correlation (i.e., ρ = 0), (2) moderate correlation (i.e., ρ = 0.6), and (3) high correlation (i.e., ρ = 0.9). Now, consider a case of an out-of-control multivariate normal process with 3 quality characteristics.
Figure 2:
The corresponding Hotelling T 2 statistics for the data sets in Figure 1.
Without loss of generality, we assume that each quality characteristic for an in-control process is sampled from a normal distribution with zero mean and one standard deviation. We also assume that the out-of-control process has a mean shift of 1 standard deviation, and, thus, the out-of-control process is sampled from a normal distribution with a mean of one and one standard deviation. The sample size n is assumed to be 5.
The sample averages X̄_i, i = 1, 2, and 3, are used to calculate the Hotelling T 2 statistics. The Hotelling T 2 statistic is computed as follows:

T 2 = n (X̄ − X̿)' S^{−1} (X̄ − X̿),

where n is the sample size, X̄ is the mean vector at time t, X̿ is the grand mean vector of the quality characteristics, and S^{−1} is the inverse of the variance-covariance matrix. This study generates 100 data sets of observations, each of sample size 5, for every possible combination of fault sets. Since there are 7 possible sets of quality variables at fault in the case of p = 3, we have 700 data sets in a simulation run. Those 700 data sets are initially used to serve as the training data. This study generates another 700 data sets for the purpose of testing. Figure 1 displays the 700 data sets of X̄_1, X̄_2, and X̄_3 in the cases of ρ = 0, ρ = 0.6, and ρ = 0.9, respectively. In the first step of classification, we also use the data set of out-of-control Hotelling T 2 statistics, which is shown in Figure 2. Figure 3 displays the two ICs which are generated by using the ICA technique.
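The simulation and monitoring statistic can be sketched as follows. This is an illustrative reconstruction under the stated assumptions (in-control N(0, Σ) observations with common correlation ρ, a one-standard-deviation shift on one variable, subgroups of size n = 5), using the known in-control parameters for simplicity rather than the estimated grand mean and covariance; it is not the original simulation code.

import numpy as np

rng = np.random.default_rng(3)
p, n, rho = 3, 5, 0.6

# In-control covariance with common correlation rho; shift on variable 1.
sigma = np.full((p, p), rho) + (1 - rho) * np.eye(p)
mu_in = np.zeros(p)
mu_out = np.array([1.0, 0.0, 0.0])        # 1-sigma mean shift on the first variable

def hotelling_t2(sample, mu0, sigma0):
    """T^2 = n * (xbar - mu0)' * inv(sigma0) * (xbar - mu0) for one subgroup."""
    xbar = sample.mean(axis=0)
    diff = xbar - mu0
    return sample.shape[0] * diff @ np.linalg.inv(sigma0) @ diff

in_control = rng.multivariate_normal(mu_in, sigma, size=n)
out_of_control = rng.multivariate_normal(mu_out, sigma, size=n)
print(round(hotelling_t2(in_control, mu_in, sigma), 2),
      round(hotelling_t2(out_of_control, mu_in, sigma), 2))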
The Results
Consider the case of a multivariate process with three quality characteristics (i.e., p = 3). The typical approach directly uses four variables, X̄_1, X̄_2, X̄_3, and the Hotelling T 2 statistic, as inputs for the SVM. Different from the typical approach, the proposed approach initially decomposes the Hotelling T 2 statistics into two ICs, and then uses those two ICs as inputs for the SVM classifier. Therefore, the proposed approach employs five variables, X̄_1, X̄_2, X̄_3, and the two ICs, as the inputs for the classifier SVM. Tables 1, 2, and 3 report the accurate identification rates (AIRs) when the typical and proposed approaches are applied to the multivariate process with p = 2, p = 3, and p = 5. In Table 1, in the case of ρ = 0, we notice that the AIRs are 79.6% and 78.2%, respectively, for the typical and proposed approaches. The same AIR interpretations apply to the remaining conditions in Tables 1, 2, and 3.
Observing Table 1, one is able to conclude that the AIR for the proposed approach is almost always larger, or better, than that of the typical approach, except for the case of ρ = 0. This implies that the proposed approach has a better performance. Also, in the case of ρ = 0, the difference in performance between the two approaches is not significant. Those findings are displayed in Figure 4.
Observing Tables 2 and 3 for the cases of p = 3 and p = 5, respectively, we can be very sure that the proposed approach outperforms the typical approach. The AIR values for the proposed approach are always larger. In addition, it is apparent that the AIR values become larger when the values of ρ become larger. The values of AIR are smaller when the number of quality characteristics increases. These research findings are demonstrated in Figures 5 and 6.
Conclusion
Determination of the quality variables at fault for an out-of-control multivariate process is very important in practice.While most of the studies use the single step of classification, this study proposes a hybrid or a two-step approach, ICA-SVM, to enhance the performance of the typical approach.Accordingly, our proposed approach has two more extra inputs, two ICs, for the SVM classifier models.Again, those two ICs are obtained from running the ICA models as the first-step modeling in our proposed scheme.The two ICs are then served as inputs for the second-step modeling in our proposed scheme.The proposed ICA-SVM hybrid mechanism is able to enhance the accurate identification rate for the determination of quality variables at fault in a multivariate process.In this study, a multivariate process with 2, 3, and 5 quality variables and various correlations structures are considered for evaluating the performance between the typical one-step and proposed hybrid approaches.Experimental results strongly agreed that the proposed hybrid ICA-SVM scheme is able to produce the better accurate identification rate for the testing datasets.Observing the experimental results, we can strongly conclude that the proposed hybrid approach is able to effectively determine the quality variables for a multivariate process.
Our approach requires several steps and in total is quite complicated; therefore, we have not attempted an analytic evaluation. However, we believe that our simulation example is generically applicable for monitoring real manufacturing processes when the circumstances of the processes resemble the simulation conditions of this study. To make the proposed method more applicable, a multivariate process with 6 to 10 quality characteristics and a different set of correlations between quality characteristics will be discussed in future research.
Figure 3 :
Figure 3: The corresponding two ICs to the Hotelling T 2 statistics in Figure 2.
Figure 4 :
Figure 4:The performance between the typical and proposed approaches for p 2.
Figure 5 :
Figure 5:The performance between the typical and proposed approaches for p 3.
Figure 6 :
Figure 6:The performance between the typical and proposed approaches for p 5.
Table 1 :
The accurate identification rate % for p 2.
Table 2 :
The accurate identification rate % for p 3.
Table 3 :
The accurate identification rate % for p 5. | 2017-08-17T00:33:13.024Z | 2012-09-13T00:00:00.000 | {
"year": 2012,
"sha1": "733abd3131b5dfddb27434e85378552c96a90672",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/mpe/2012/284910.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "733abd3131b5dfddb27434e85378552c96a90672",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
} |
249213537 | pes2o/s2orc | v3-fos-license | Fungal Diversity in Two Wastewater Treatment Plants in North Italy
In urban wastewater treatment plants, bacteria lead the biological component of the depuration process, but the microbial community is also rich in fungi (mainly molds, yeasts and pseudo-yeasts), whose taxonomical diversity and relative frequency depend on several factors, e.g., quality of wastewater input, climate, seasonality, and depuration stage. By joining morphological and molecular identification, we investigated the fungal diversity in two different plants for the urban wastewater treatment in the suburbs of the two major cities in Lombardia, the core of industrial and commercial activities in Italy. This study presents a comparison of the fungal diversity across the depuration stages by applying the concepts of α-, β- and ζ-diversity. Eurotiales (mainly with Aspergillus and Penicillium), Trichosporonales (Trichosporon sensu lato), Saccharomycetales (mainly with Geotrichum) and Hypocreales (mainly with Fusarium and Trichoderma) are the most represented fungal orders and genera in all the stages and both the plants. The two plants show different trends in α-, β- and ζ-diversity, despite the fact that they all share a crash during the secondary sedimentation and turnover across the depuration stages. This study provides an insight on which taxa potentially contribute to each depuration stage and/or keep viable propagules in sludges after the collection from the external environment.
Introduction
Wastewater treatment technology has a long history; the first tests on depuration by activated sludges were attempted by Ardern and Lockett in 1914 [1].
According to Italian law (Decreto Legislativo 3 Aprile 2006, n. 152) [2], currently, wastewater depuration discriminates among urban wastewater (domestic wastewater possibly mixed with industrial wastewater and rainwash water), domestic wastewater (from domestic activities and human metabolism only) and industrial wastewater (from any productive and/or commercial activity, different from domestic activity and rainwash water).
As schematized by ISPRA (Istituto Superiore per la Protezione e la Ricerca Ambientale) [3], a typical treatment plant for urban and domestic wastewater treatment is composed of different sectors basically referred to as preliminary treatment (debris and oil removal), primary treatment (reduction of total suspended solids), secondary treatment (reduction of biodegradable organic matter and colloids by activated sludge), tertiary treatment (reduction of nutrients, mainly nitrogen and phosphorous, which have not been removed yet by microbial metabolism), and disinfection (to reduce microbes before final discharge in stream/river). The plant sectors are therefore different from each other as concerns the quantity and composition of suspended solid particles, pH, microbial competition, dissolved O 2 , C/N ratio, fluid perturbation/agitation, residence time of the water and sludges [4][5][6].
Based on the above, wastewater depuration plants can host different microbial (and fungal) communities in different environmental conditions depending on the peculiar structure of the plant itself, climate, land cover and human activities in the catchment area and number of inhabitants [7][8][9][10]. The latter variable is related to the definition of "population equivalent". A population equivalent of one person means "the organic biodegradable load having a five-day biochemical oxygen demand (BOD5) of 60 g of oxygen per day" [2,11]. The population equivalent is a basic unit to size a treatment plant and to provide it with the most suitable technology; this also affects the composition and structure of the whole microbial community.
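As a small worked example of the definition above, the population equivalent served by a plant can be estimated from the incoming daily BOD5 load; the flow and concentration figures below are invented for illustration only.

# Population equivalent (PE): 1 PE = 60 g BOD5 per day (D.Lgs. 152/2006).
BOD5_PER_PE_G_DAY = 60.0

def population_equivalent(flow_m3_per_day, bod5_mg_per_l):
    """Daily BOD5 load (g/day) divided by 60 g BOD5 per person per day."""
    load_g_per_day = flow_m3_per_day * bod5_mg_per_l  # 1 mg/L * 1 m3 = 1 g
    return load_g_per_day / BOD5_PER_PE_G_DAY

# Hypothetical inflow: 70,000 m3/day at 250 mg/L BOD5.
print(round(population_equivalent(70_000, 250)))   # about 291,667 PE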
To date, the depuration stages that significantly involve a biological activity, i.e., the activated sludges, rely on selected bacteria strains naturally mixed with autochthonous ones. Bacteria are assumed to be the most efficient and easily self-sustainable microbes that are able to crash the dissolved nitrogen in the enormous volumes of wastewater to be treated daily in urban and metropolitan areas [12,13]. On the other hand, neither funginor algae-based technologies have been applied yet beyond laboratory scales despite their great potential [14].
Fungi are a well-represented component of the wastewater microbial community that can prove to be very useful and exploitable organisms thanks to their multiple capabilities [15]. Fungi can easily adapt to hostile environments and rapidly changing conditions, for instance different types of municipal and industrial wastewaters, sites strongly polluted by hydrocarbons, acid substrates, or low level of oxygen [4,16,17]. Fungi have been studied and exploited for their production of extra-cellular enzymes (e.g., laccase and peroxidase) capable of degrade complex and potentially hazardous molecules as pesticides, hydrocarbons, dyes, and pharmaceuticals [18][19][20][21][22][23]. Some species can also accumulate and bio-concentrate heavy metals and other elements [24][25][26][27].
Even if many studies are still needed, mycoflora in wastewater treatment plants could help the denitrification process, the removal of nutrients and the reduction of suspended solids. Hyphae of filamentous fungi tend to strengthen sludge flocks, making them larger and with irregular shapes and thus improving the active sludge process [28].
The fungal community in wastewater environments is highly variable, but a core of common shared genera is reported [5,[29][30][31]. Penicillium, Candida and Geotrichum species are the most represented followed by a more variable group with Trichoderma, Trichosporon and Rhodotorula. Fungal taxa variation in treatment plants also depends on season and temperature: fungal diversity seems to differ between summer and winter season. Some taxa such as Penicillium, Trichoderma, Acremonium and Aspergillus are more represented in the warm months [32,33].
Taking all of this into consideration, the aim of the present work was to qualitatively characterize the fungal diversity at different stages of the depuration process in an area never investigated before. Two plants for the treatment of urban wastewater located in Lombardia, the most densely populated region and the core of productive and commercial activities in Italy, were chosen. The treatment plants in metropolitan and peri-metropolitan areas thus receive remarkable inputs throughout the whole year [34] and provide significant study cases for highly inhabited areas. Besides, the study area has a subcontinental climate with a sharp difference between summer and winter seasons that can influence fungal diversity as well.
Such a characterization aims therefore to provide a scenario of the variation in diversity patterns across the different environmental conditions in the depuration process, pointing out which taxa are the most represented or unexpected instead. This work on ecological diversity in Italian plants is preliminary to subsequent studies on the possible functional role of the fungal species present.
Structure of Wastewater Treatment Plants
The following treatment plants for urban wastewater were examined (the full names are not available due to security reasons): -Plant 1, managed by CAP Holding; this plant is located in the South-West sector of the Metropolitan City of Milan; it caters for a population equivalent of 320,000 people and treats an average wastewater volume of 100,000 m 3 day −1 ; -Plant 2, managed by A2A Ciclo Idrico; this is located in Eastern Lombardia; it caters for a population equivalent of 296,000 people and treats an average wastewater volume of 70,000 m 3 day −1 .
A basic scheme of the water treatment process is reported in Figure 1. work on ecological diversity in Italian plants is preliminary to subsequent studies on the possible functional role of the fungal species present.
Structure of Wastewater Treatment Plants
The following treatment plants for urban wastewater were examined (the full names are not available due to security reasons): -Plant 1, managed by CAP Holding; this plant is located in the South-West sector of the Metropolitan City of Milan; it caters for a population equivalent of 320,000 people and treats an average wastewater volume of 100,000 m 3 day −1 ; -Plant 2, managed by A2A Ciclo Idrico; this is located in Eastern Lombardia; it caters for a population equivalent of 296,000 people and treats an average wastewater volume of 70,000 m 3 day −1 .
A basic scheme of the water treatment process is reported in Figure 1. The structures of Plant 1 and Plant 2 are slightly different from each other; therefore, the two sampling transects do not perfectly overlap and only some stages (namely, the activated sludge) are properly comparable. The two plants were consequently analyzed separately; the codes corresponding to each depuration stage in the Plants are schematized in Table 1. The structures of Plant 1 and Plant 2 are slightly different from each other; therefore, the two sampling transects do not perfectly overlap and only some stages (namely, the activated sludge) are properly comparable. The two plants were consequently analyzed separately; the codes corresponding to each depuration stage in the Plants are schematized in Table 1.
Sampling and Isolation in Pure Culture
Samples of water and samples of sludge (i.e., water with a 5-8 g L⁻¹ suspension of solids) were collected between November 2018 and May 2020. Samples were manually shaken for at least 1 min per bottle in order to resuspend all the particulate matter and homogenize the propagule distribution. Serial dilution in physiological solution (0.9% NaCl) was performed axenically, using 1 mL as the basic unit, according to the scheme in Table 2.
Bulk and diluted samples were spread in triplicate onto PDA (potato dextrose agar, Biokar Diagnostics) in 15-cm-diameter Petri dishes and incubated in the dark at room temperature for 28 days. PDA was prepared according to the manufacturer's instructions (Biokar Diagnostics, 3.9%), and 150 ppm chloramphenicol (Fagron) was added before autoclave sterilization. Mycofloristic surveys were performed weekly to allow the propagules to overcome any latency period.
In every weekly survey, real-time approximate identification based on morphology (morphotype approach) was carried out by means of a stereomicroscope (Zeiss Stemi 2000-C) and an optical microscope (Zeiss Axioplan). The morphotype approach represents a first, basic step to organize the identification workload when dealing with apparently numerous taxa and little survey time, whether for fungi or other organisms [35,36].
At least two cultures per morphotype were isolated in glass tubes containing PDA (as above), corked with raw cotton and incubated under ambient light at room temperature. Pure cultures were morphologically checked to validate the morphotype.
Molecular Identification of Selected Strains
Based on the strain set obtained as above, at least one isolated morphotype per each plant was selected for further molecular identification.
DNA extraction was performed by means of a NucleoSpin Plant II kit (Macherey-Nagel) according to the manufacturer's instructions. Owing to the great variety of the mycoflora, PCR amplification concerned the ITS region only; on the other hand, the ITS region is regarded as an efficient barcode for most fungal taxa [37][38][39]. ITS1-ITS4 primers were used for filamentous fungi (including mycelia sterilia), whereas ITS5-ITS4 primers were used for yeasts and pseudo-yeasts [40]. Further details of the complete identification protocol are reported in Girometta et al. (2020) [41].
Estimation of Ecological Parameters
The wastewater flow in the treatment plants under examination is mainly unidirectional and the two plants share only one significant re-pumping line from Oxy to Denitro.
Most of the water proceeds from the discharge of the activated sludge to the final depuration stages and is discharged into the stream. This allows the data structure to be approximated as a spatial environmental gradient whose sample selection scheme is directional from a point source [42].
Based on the general concepts of α-, β-, γ- and ζ-diversity, the composition and structure of the communities in each depuration stage were investigated and compared along the depuration process, i.e., stage by stage.
The α-diversity (evenness) indices were computed as functions of p_i and S, where: p_i = fraction of individuals of species i in the overall individual population; S = overall number of species in the population (= γ-diversity). As summarized by Baselga (2010) [45], "β-diversity is the variation of species composition of assemblages", i.e., the variation between depuration stages in this context. The β-diversity partitioning was investigated based on pairwise (nearest-neighbor) presence-absence models using Jaccard's and Simpson's indices [46].
According to Hui and McGeoch (2014) [47], "ζ-diversity is the number of species shared by a given number of sites and provides a measure of turnover for each combination of i sites". Analogous to β-diversity, ζ-diversity was normalized based on Jaccard's assumptions [42,45].
As a whole, β-diversity and ζ-diversity were calculated as Jaccard's dissimilarity, Simpson's turnover and normalized ζi-diversity (the corresponding formulae are given as equations in the original layout and are not reproduced here). A basic generic example scheme of the data structure, as applied in the β-diversity and ζ-diversity formulae, considers two neighbor sites including ten and eight species, respectively.
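As an illustration of how such pairwise presence/absence indices can be computed, the following Python sketch applies the standard formulations of Jaccard's dissimilarity and Simpson's turnover commonly used in the β-diversity partitioning literature (e.g., Baselga 2010 [45]) to two hypothetical neighbor sites with ten and eight species. Because the exact formulae and the normalization adopted by the authors for ζ-diversity are not reproduced in the text, the Jaccard-style normalization used here (shared species divided by total species) is an assumption, not the authors' own expression.

```python
# Pairwise presence/absence diversity indices for two neighboring depuration stages.
# Assumes the standard formulations: Jaccard dissimilarity (b + c) / (a + b + c) and
# Simpson turnover min(b, c) / (a + min(b, c)); the Jaccard-style normalization of
# zeta-diversity (shared / total species) is an assumption, not the authors' exact formula.

def pairwise_indices(site1: set, site2: set) -> dict:
    a = len(site1 & site2)   # species shared by both sites
    b = len(site1 - site2)   # species unique to site 1
    c = len(site2 - site1)   # species unique to site 2
    return {
        "jaccard_dissimilarity": (b + c) / (a + b + c),
        "simpson_turnover": min(b, c) / (a + min(b, c)) if (a + min(b, c)) else 0.0,
        "zeta2_normalized": a / (a + b + c),  # shared fraction between the two sites
    }

# Hypothetical example: two neighbor sites with 10 and 8 species, 5 of which are shared.
site1 = {f"sp{i}" for i in range(10)}        # 10 species
site2 = {f"sp{i}" for i in range(5, 13)}     # 8 species, 5 shared with site1
print(pairwise_indices(site1, site2))
```

Read this way, the stage-by-stage ζ-diversity values quoted later in the text (e.g., 0.33 or 0.55 between consecutive stages) can be interpreted, approximately, as the fraction of taxa shared across the compared stages.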
Sampling, Isolation in Pure Culture and Identification
From the whole pool of fungal taxa sampled, 60 morphotypes from Plant 1 and 47 from Plant 2 were successfully isolated in pure culture.
The morphotype approach generally fails to discriminate most yeast species from each other, and the same happens for arthrosporigenous pseudo-yeasts. Yeasts and pseudo-yeasts must therefore be sampled more intensively than moulds.
ITS-based molecular identification of the selected strains resulted in acceptable discrimination for all of the morphological categories under examination (yeasts, sporigenous filamentous, mycelia sterilia).
All fungal taxa sampled in this study are reported in Table 3: genus identification was carried out by the morphological approach, and species identification was achieved by ITS-based molecular analysis. Taxonomy was checked on MycoBank [48]. Table 3. Sampled fungal taxa with reference to the depuration stage of provenance.
Certain fungal taxa were found in almost every stage of depuration: genera such as Acremonium, Aspergillus, Cladosporium, Fusarium, Mucor, Penicillium and Trichoderma are generally found in water and soil, and their spores are constantly present in the air [49,50]. These fungi follow the stream across the depuration stages, and their availability as environmental contaminants can explain why some taxa are present even after the oxidation and disinfection process, as they are easily sampled by chance.
The family Trichosporonaceae is also well represented by species of the genera Apiotrichum, Cutaneotrichosporon and Trichosporon, all once grouped in the latter [51]. These fungi are yeast and yeast-like organisms generally isolated from soil and the environment, and some species also from human and animal skin [52]. Species such as Apiotrichum domesticum, Apiotrichum montevideense, Cutaneotrichosporon mucoides and Trichosporon asahii are potentially pathogenic and of clinical importance [52,53], but from this study it emerges that, even if these fungi are found in different depuration stages, they are successfully eliminated by the depuration process, as they are no longer found in post-ozonation (1-End) and in the filtration input (2-End).
Diversity Patterns at Fungal Order Scale
By merging morphotypes with results from molecular identifications, the isolated strains show the diversity pattern reported in Figure 3, giving us the composition of the fungal community throughout both treatment plants. The single data were grouped at the taxonomic level of order to better compare the mycoflora present both in the different depuration stages of the two plants and during the four seasons. As expected, diversity in the wastewater at the first sampling stages (1-PSed and 2-Equalization) is more affected by external propagule sources, both from the urban areas and from agricultural systems. Eurotiales, Hypocreales, Saccharomycetales and Trichosporonales are the orders mainly sampled.
Among all isolated strains, Eurotiales is the most represented in both the treatment plants and in almost all depuration stages. In this study, Eurotiales include Aspergillus and Penicillium or Talaromyces, as well as Paecilomyces. It should be noted that the nomenclatural distinction between Penicillium and Talaromyces has been adopted, despite the fact that they are the anamorph and teleomorph of the same taxon, respectively. This decision was taken to preserve the information about the occurrence of species which are known to reproduce sexually as well.
Furthermore, Hypocreales constitutes the base of the fungal community in this study, as species belonging to this order were sampled in three out of five depuration stages in Plant 1 and in all stages in Plant 2. Fusarium and Trichoderma, which are very common in agricultural systems and soils, are the most represented genera. Trichoderma deserves a special mention, as its species play an important role in soil ecology due to their competition with, and hyperparasitism of, phytopathogens (mainly fungi and nematodes). Moreover, Trichoderma species stimulate the induction of plant defenses [54]. Here, five species were detected: T. harzianum and T. virens, T. citrinoviride and T. saturnisporum, and T. asperellum (the most common species in the present sampling) [55]. The widespread T. asperellum is particularly interesting because it was recognized as a species distinct from T. viride only in 1999; since then, however, T. asperellum has been increasingly detected in agricultural soils. This is likely to be due also to its above-mentioned application as a biocontrol agent [56].
Although a quantitative approach is outside the scope of the present work, it can be noted that Fusarium species are less represented than their major antagonists in the soil, i.e., Trichoderma species; this is important since both F. oxysporum and F. fujikuroi are severe phytopathogens [57].
Cosmospora, which typically develops an Acremonium-like morphology, is phylogenetically close to Fusarium. Here, the genus is represented by C. butyri, which is associated with lipid-rich substrates [58].
According to the recent taxonomic revision of the genus Paecilomyces, the species P. lilacinus is now named Purpureocillium lilacinum and has been moved from Eurotiales to Hypocreales [59].
Saccharomycetales includes fungi that are well known to be common in wastewater, where they degrade simple polysaccharides and fatty acids. In the present work, eight genera belonging to Saccharomycetales were detected: Candida, Dipodascus, Diutina, Galactomyces, Geotrichum, Scheffersomyces, Yarrowia and Zygoascus. Geotrichum, which is found worldwide in air, soil, water and sewage, as well as in plants and in human feces, was the most representative in terms of morphotype frequency and spatial colonization of the Petri dish, mostly displaying a pseudo-yeast morphology.
The order Trichosporonales comprised three genera phylogenetically very close to each other and belonging to Trichosporon sensu lato, i.e., Apiotrichum, Cutaneotrichosporon and Trichosporon sensu stricto. The Trichosporon s.l. morphotype proved to be very common and displayed both budding and arthrospore formation. As with Saccharomycetales, Trichosporon s.l. is also commonly represented in soil and in water; however, its trophic spectrum also includes keratinolysis and thus the degradation of hair and skin in wastewater [9,60,61].
As a whole, yeasts and pseudo-yeasts are generally over-represented in wastewater treatment plants, as they are favored by the abundance of organic matter and, compared with filamentous fungi, their growth is facilitated by the asexual mode of reproduction (buds and arthrospores). Filamentous fungi in a continuous wastewater flow are often hampered in sporulation, and mycelia can produce resistance forms such as chlamydospores [62].
Seasonal Variation
Fungal community composition at the Order scale shows a seasonal variation, with similar results between the two plants ( Figure 4). A higher number of isolates was found in summer and autumn compared to winter and spring. Orders follow this trend as well.
Eurotiales (mainly Aspergillus and Penicillium species) and Hypocreales (mainly Fusarium and Trichoderma species) are confirmed to be the most represented in the two plants across the whole year. Saccharomycetales, with Geotrichum species, and Trichosporonales are also frequent, especially in autumn. Other orders are less represented and were only sporadically isolated compared with the others.
As a whole, the wastewater environment seems to host a wider and more diversified community in summer and autumn compared with the other seasons: these results confirm what is also reported in other works [32,33].
The seasonal variation of the isolated strains is probably related to conditions of humidity and temperature, with rainy and warmer months characterized by a more diverse fungal community. This relation is also supported by the meteorological data of Lombardia: April, May, June, September, October and November are the months with the highest average precipitation (in millimeters) and with average temperatures between 15 °C and 20 °C [63].
Diversity Indices
Since a wastewater treatment plant is composed of different systems and environmental conditions, different community structures are expected in each depuration stage.
Simpson's evenness and Pielou's regularity describe how representative each taxon is within the community, based on the ratio between the taxon's individuals and the overall number of individuals.
Evenness indices by Simpson (1949) [64] and Pielou (1966) [65] are compared in Figure 5. The substantial disagreement between the two indices suggests that Pielou's regularity (a derivation from the Shannon-Weaver index) is not truly informative in this case, since the small sample size highlights the bias [43]. As the community structure evolves towards increasing diversity loss, zero inflation is a bias factor to take into account [66].
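To make the comparison between the two evenness measures concrete, the sketch below computes Simpson's evenness and Pielou's regularity from a vector of per-taxon isolate counts. The counts are invented, and the rescaling of Simpson's evenness to a 0-100 scale is an assumption made only to match the way values such as "Simpson's evenness 100" are quoted in the text.

```python
import math

def simpson_evenness(counts):
    """Simpson's evenness: inverse Simpson index (1 / sum p_i^2) divided by richness S."""
    n = sum(counts)
    p = [c / n for c in counts if c > 0]
    inverse_simpson = 1.0 / sum(pi * pi for pi in p)
    return inverse_simpson / len(p)

def pielou_regularity(counts):
    """Pielou's J: Shannon-Weaver entropy H' divided by its maximum ln(S)."""
    n = sum(counts)
    p = [c / n for c in counts if c > 0]
    shannon = -sum(pi * math.log(pi) for pi in p)
    return shannon / math.log(len(p)) if len(p) > 1 else 1.0

# Hypothetical counts of isolates per taxon in one depuration stage.
counts = [12, 7, 5, 3, 1, 1, 1]
print(f"Simpson's evenness (0-100 scale): {100 * simpson_evenness(counts):.1f}")
print(f"Pielou's regularity J: {pielou_regularity(counts):.2f}")
```

Because both measures depend on the observed richness and on small counts, very small samples with many singleton taxa can distort them, which is the small-sample and zero-inflation bias discussed above.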
Concretely, the different structure of the two plants, and of the sedimentation pools in particular, may explain the differences in taxa occurrence and repartition along the depuration stages. The final stage in Plant 1 loses diversity (Simpson's evenness 100) with only a few species represented, while Plant 2 appears to be favoured in preserving more propagules until the final stages.
As mentioned, β-diversity describes the compositional change of the community. The data discussed above suggest that β-diversity partitioning is governed by pairwise presence/absence models. Jaccard's indices and Simpson's turnover, i.e., normalizations of raw ζ-diversity [42], are reported in Figures 6 and 7 for the order and genus scales, respectively.
Based on the order scale, Jaccard's dissimilarity βcc increases in Plant 1 in a pairwise comparison of depuration stages, whereas Jaccard's turnover β−3, based on absolute species numbers, does not show a clear trend, except for the final stage. The dissimilarity trend in Plant 2 is less regular. In Plant 1, the severe constraints before the final discharge provoke a dramatic increase in turnover. Jaccard's distance βrich in Plant 1 is highest when crossing from 1-Oxy to 1-Filt. input, even though the turnover is null, as the number of isolated strains is higher compared with the previous stage; this is not the case in Plant 2, where the distance is less variable.
As expected, the genus scale is more informative than the order scale, although only four orders are represented by three or more genera. Based on the genus scale, Jaccard's dissimilarity βcc in Plant 1 increases particularly when crossing from 1-Oxy to 1-Filt. input and then increases further towards the final discharge. More interestingly, Plant 1 has a constantly low Jaccard's turnover β−3 except when crossing to the final discharge, whereas the turnover in Plant 2 is always around 0.5. Jaccard's distance βrich is similar among the stages in both plants, except for the step from 1-Oxy to 1-Filt. input in Plant 1 (which explains most of the dissimilarity observed by βcc) and the step from 2-Denitro to 2-Oxy in Plant 2 (although the communities are qualitatively different due to turnover). In Plant 2, turnover has a remarkable role in preserving the diversity.
The activated sludge therefore seems to be a critical stage that imposes environmental constraints, with particular concern to oxygenation. In many depuration plants there are two backward re-pumping lines: the first from the oxidation pool to the denitrification one (this is the case of both Plant 1 and Plant 2), and the second from the final filtration to the denitrification one (this is the case of Plant 1 only). Such a partially bi-directional flow favors community homogenization, at least between the activated sludge and the sedimentation stage.
Notwithstanding this, the step from the activated sludge to the secondary sedimentation provokes, as mentioned, a dramatic decline in microbial richness, since the supernatant is impoverished in nutrients with respect to the sunk slurry particles. As expected, and most importantly for the depuration process, the final discharge further destroys the microbial community in the water. As a whole, such a loss can be seen both as the result of a quantitative microbial reduction and of the consequent qualitative sampling bias. This is consistent with the normalized ζ-diversity at the genus scale, which clearly shows similar dynamics in Plant 1 and Plant 2. The step from the initial input to the activated sludge represents a first bottleneck, more so in Plant 2 than in Plant 1. The ζ-diversity is in fact lower when passing from 2-Equal to 2-Denitro (ζ-diversity 0.33) than from 1-PSed to 1-Denitro (ζ-diversity 0.55). This was unexpected because the wastewater in 2-Equal is very similar to 2-Denitro, whereas 1-PSed is a further intermediate stage between the initial input and the denitrification.
In both treatment plants, the output from the activated sludge (1-Oxy and 2-Oxy) encounters a bottleneck where the number of shared taxa (i.e., ζ-diversity) crashes (Plant 1 ζ-diversity 0.18; Plant 2 ζ-diversity 0.14). In Plant 1, the output from the activated sludge undergoes secondary sedimentation; the resulting supernatant (1-Filt. input) is therefore significantly depleted of particles as well as of fungal propagules. As a whole, ζ-diversity between 1-Oxy and 1-Filt. input relies on shared taxa in Eurotiales (Figure 3).
In Plant 2, the output from the activated sludge undergoes two different processes, which result in similar values of ζ-diversity. When passing from 2-Oxy to 2-End, ζ-diversity relies on shared taxa in Hypocreales and Eurotiales (i.e., true moulds), which are very common in the environment and may therefore be represented even at the exit of the depuration process.
Actually, the taxa sampled after the ozonation process should be regarded as sampled by chance, owing to the environmental availability of propagules outside the ozonation compound itself, rather than as a failure of the depuration process. It should be kept in mind that ζ-diversity is a similarity measure based on diversity rather than on population size; it therefore does not, by itself, imply any conclusion about the success of depuration.
The depuration principle, as intended by Italian law [2,6], aims at "disinfection" rather than "sterilization", meaning that microbial contamination is accepted on condition that it remains below the safety thresholds indicated by the law itself. Nevertheless, it is noteworthy that such thresholds concern bacteria only (namely the coliforms Escherichia coli, Enterococcus spp., Clostridium perfringens, and Pseudomonas aeruginosa), whereas no fungal propagules are monitored by default [67]. This is because the depuration process relies basically on bacterial rather than fungal activity. Moreover, another issue hampers the application to fungi of the same qualitative and quantitative surveys routinely applied to bacteria: filamentous fungi grow much more slowly even under optimal conditions [68]. However, as previously discussed, filamentous fungi show severely limited reproduction in the sludges and slurries of depuration plants, whereas yeasts and pseudo-yeasts are more favoured, although their populations, like the bacterial ones, are crashed by the conventional disinfection methods [6], particularly when sieving biomembranes are adopted [34,69]. This means that the water discharged from a depuration plant provides a negligible fungal inoculum into the receiving stream. Nevertheless, periodic surveys of the fungal propagules in the discharged water may suggest which disinfection method is the most efficient when designing future plants or restructuring/adjusting existing ones.
Conclusions
Wastewater treatment plants are composite systems that host different fungal communities depending on the specific conditions of each depuration stage. The diversity pattern in the input strongly affects the community in the first stages (primary sedimentation and activated sludge), but it is radically changed in the secondary sedimentation, i.e., after the activated sludge stage and the separation of spent microbial particles and residual nutrients from the supernatant. As expected, the cyclic flow between the denitrification and nitrification systems contributes to homogenizing the communities in the activated sludge despite the difference in oxidation conditions. From a qualitative-mycofloristic perspective, Eurotiales, Hypocreales and Trichosporonales, as well as Saccharomycetales, are the most represented orders in all the depuration stages, mainly including genera such as Penicillium or Talaromyces, Aspergillus, Trichoderma, Trichosporon sensu lato and several yeasts and pseudo-yeasts such as Geotrichum.
Although Plant 1 and Plant 2 show different diversity patterns, the above-mentioned taxa are basically represented in both.
The ITS region approach resulted in acceptable discrimination, based on cross-checking the output with MycoBank Molecular ID. ITS is regarded as a suitable barcode region when dealing with surveys on a wide spectrum of fungi. Further selected markers may be introduced to confirm specific identification within complex genera, such as Penicillium or Talaromyces and Trichoderma, as well as to investigate sub-specific diversity.
The wastewater fungal community is an often ignored, but equally represented, part of the microbial community. Deepening the knowledge of fungal species presence and fluctuation across depuration stages and seasons can help in better understanding their role in the depuration process and how to exploit them in synergy with the bacterial component. This work also highlights the importance of periodic sampling campaigns to monitor the fungal community, not only in the different depuration stages but also in the final water stream.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to industrial secrecy. | 2022-06-01T15:25:38.189Z | 2022-05-25T00:00:00.000 | {
"year": 2022,
"sha1": "701f7203711221f9f42333638c899534d34af801",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-2607/10/6/1096/pdf?version=1653548485",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4e897d79e11d23ac7fb23cccee478029c445d0d8",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
20885325 | pes2o/s2orc | v3-fos-license | Histone H3 Lysine 9 Methyltransferase G9a Is a Transcriptional Coactivator for Nuclear Receptors*
Methylation of Lys-9 of histone H3 has been associated with repression of transcription. G9a is a histone H3 Lys-9 methyltransferase localized in euchromatin and acts as a corepressor for specific transcription factors. Here we demonstrate that G9a also functions as a coactivator for nuclear receptors, cooperating synergistically with nuclear receptor coactivators glucocorticoid receptor interacting protein 1, coactivator-associated arginine methyltransferase 1 (CARM1), and p300 in transient transfection assays. This synergy depends strongly on the arginine-specific protein methyltransferase activity of CARM1 but does not absolutely require the enzymatic activity of G9a and is specific to CARM1 and G9a among various protein methyltransferases. Reduction of endogenous G9a diminished hormonal activation of an endogenous target gene by the androgen receptor, and G9a associated with regulatory regions of this same gene. G9a fused to Gal4 DNA binding domain can repress transcription in a lysine methyltransferase-dependent manner; however, the histone modifications associated with transcriptional activation can inhibit the methyltransferase activity of G9a. These findings suggest a link between histone arginine and lysine methylation and a mechanism for controlling whether G9a functions as a corepressor or coactivator.
Activation and repression of transcription involve the recruitment of many coregulator (coactivator or corepressor) proteins to the regulated gene promoter by sequence-specific DNA binding transcription factors (1,2). These coregulator proteins contribute to transcriptional regulation by helping to remodel chromatin conformation in the promoter of the gene and by influencing the recruitment and activation of RNA polymerase II and its associated basal transcription factors. The mechanisms by which coregulators accomplish these tasks include protein-protein interactions, ATP-dependent alterations in conformations of chromatin, and catalysis of post-translational modifications of histones and other protein components of the transcription machinery.
Post-translational modifications of the N-terminal tails of histones include acetylation, phosphorylation, ubiquitylation, and arginine and lysine methylation. Individual histone modifications or sequential or concurrent combinations of these modifications may constitute a histone code, which is then recognized by effector proteins to bring about distinct changes in chromatin structure or other aspects of transcription complex assembly and activity (3). Methylation of histones on various lysine and arginine residues has been found to play both positive and negative roles in transcriptional regulation. For example, methylation of Lys-9 of histone H3 is associated with inactive genes, whereas methylation of Lys-4 and Arg-17 of histone H3 has been generally associated with active or potentially active genes (4). Lysine residues can be modified to mono-, di-, or trimethyl states; arginine can be modified to a monomethyl, asymmetric dimethyl, or symmetric dimethyl state. It appears that different degrees of methylation may be associated with distinct chromatin regions or transcriptional states. Trimethylation of Lys-9 of histone H3 is associated with pericentromeric heterochromatin and transcriptional repression, whereas dimethylation of Lys-9 appears to occur on repressed genes in euchromatin. However, these general rules, which represent our current level of understanding, may require some refinement if various histone modifications are indeed interpreted in combinations as part of a histone code.
Nuclear receptors (NR) are ligand-activated, DNA binding transcription factors. Among the many coactivators that NRs recruit to the promoters of their target genes, one critical coactivator complex contains a member of the p160 coactivator family, which includes steroid receptor coactivator 1, GRIP1, and AIB1 (amplified in breast cancer 1). p160 coactivators bind to NRs in a ligand-dependent manner and use at least three different activation domains to recruit additional coactivators (5). The histone acetyltransferases p300 and CBP bind to AD1 of p160 coactivators, whereas the histone arginine methyltransferases CARM1 and PRMT1 bind to AD2 (6-9). In addition, several coactivators with no apparent enzymatic activity (e.g. CoCoA, Fli-I, and GAC63) bind to AD3 in the N-terminal region of p160 coactivators (10). Methylation of arginine residues 2, 17, and 26 of histone H3 by CARM1 and Arg-3 of histone H4 by PRMT1 occurs during hormone-dependent transcriptional activation by NRs (11,12). Various combinations of these coactivators can cooperate synergistically to enhance transcriptional activation of NRs in transient transfection as well as chromatin-based in vitro transcription systems. For example, p300 and CBP cooperate synergistically with CARM1, and their enzymatic histone modifications are required for transcriptional activation and occur in a requisite sequence (13,14). In contrast, histone modifications associated with repression and those associated with activation are often mutually inhibitory (15,16).
Here we test functional relationships between coregulators that make activating and repressive histone modifications. G9a is the major euchromatic histone H3 Lys-9 methyltransferase in higher eukaryotes and is responsible for mono- and dimethylation of Lys-9 of histone H3 in euchromatin (17,18). Previous studies found that G9a functions as a corepressor which can be targeted to specific genes by associating with transcriptional repressors and corepressors such as CDP/cut, Blimp-1/PRDI-BF1, and REST/NRSF (19-21). Here we show, surprisingly, that G9a functions as a coactivator for NRs, collaborating synergistically with CARM1 and other NR coactivators. We also tested the role of the enzymatic activities of G9a and CARM1 in their synergistic coactivator function, and we investigated potential regulatory mechanisms for the histone lysine methyltransferase activity of G9a. Our results suggest that promoter context and/or regulatory environment control whether G9a functions as a corepressor or a coactivator.
Cell Culture and Transfections-CV-1 and Cos-7 (25) cells were maintained in Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum at 37°C and 5% CO2. CV-1 cells (5 × 10⁴/well) were seeded into 12-well dishes 18 h before transfection with a total of 1 μg/well DNA using Targefect F-1 (Targeting Systems) (8). After transfection, cells were grown in 5% charcoal-stripped fetal bovine serum (Gemini Bioproducts) for 48 h in the absence or presence of 20 nM dihydrotestosterone for AR or 20 nM estradiol for ER. Cell extracts were assayed for luciferase activity using a luciferase assay kit (Promega) as described previously (8). Results shown are the mean ± S.D. for two transfected wells. The results are representative of at least four independent experiments. For coimmunoprecipitation assays and protein expression assays, Cos-7 cells were seeded at 1 × 10⁶ cells/10-cm dish 1 day before transfection with a total of 2.5 or 5 μg of expression vector using Targefect F-2 (Targeting Systems) according to the manufacturer's instructions.
Protein-Protein Interactions-GST fusion proteins were produced in Escherichia coli strain BL21 by standard methods using glutathione agarose (Sigma) affinity chromatography. GRIP1 fragments GRIP1.N, GRIP1.M, GRIP1.C were cloned into the vector pGEX4T-1 (Amersham Biosciences) for expression. 35S-labeled G9a was synthesized in vitro by transcription and translation using the TNT-T7 coupled reticulocyte lysate system (Promega). GST pull-down assays were performed as described previously (9). Coimmunoprecipitation assays were performed as described previously (13) using anti-FLAG antibody (M2, Sigma), anti-HA antibody (3F10, Roche Applied Science) or normal mouse or normal rat IgG for immunoprecipitations followed by either anti-HA antibody (3F10, Roche Applied Science) or by anti-FLAG antibody for immunoblotting. Further antibodies used for immunoblotting were anti-G9a (Sigma), anti-β-actin (Santa Cruz Biotechnology), and anti-PSA (DAKO Corp.).
RESULTS
We first tested whether the histone H3 Lys-9 methyltransferase G9a can enhance or inhibit transcriptional activation of transiently transfected reporter plasmids by steroid hormone receptors in CV-1 cells. We used previously established conditions that allow synergistic effects of multiple coactivators to be observed (13). Reporter gene expression mediated by hormone-activated AR and ERα (Fig. 1, A and B) was enhanced by GRIP1 and further enhanced by CARM1. G9a alone exhibited little coactivator activity, but it cooperated strongly with GRIP1; furthermore, the combination of G9a, GRIP1, and CARM1 was highly synergistic, producing an activity level up to 20-fold higher than that achieved with GRIP1 and CARM1. The synergy was entirely dependent on the steroid hormone (Fig. 1A) as well as GRIP1 (Fig. 1B). Similar results were obtained with glucocorticoid receptor (Supplemental Fig. S1) and thyroid hormone receptor 1 (data not shown). G9a cooperated synergistically with selective combinations of coactivators. In the presence of GRIP1, G9a was highly synergistic with CARM1, but not with p300; however, the addition of G9a to the combination of GRIP1, CARM1, and p300 produced a dramatic synergy (Fig. 1C). Thus, although G9a cooperated with GRIP1, CARM1, and p300, its coactivator function was more highly dependent on GRIP1 and CARM1.
To test the requirement for CARM1 enzymatic activity in its synergistic action with G9a, we used two mutants of CARM1 that lack enzymatic activity in vitro; mutation of VLD (amino acids 189 -191) to AAA in the SAM (S-adenosyl-L-methionine) binding domain and the E267Q mutation in the arginine binding pocket. Both mutants maintain the ability to bind to AD2 of GRIP1 and are expressed at wild type levels (8,13). The synergistic enhancement of AR function by GRIP1, wild type CARM1, and G9a was completely lost when either of the two CARM1 mutants was substituted for wild type CARM1 ( Fig. 2A). The activity observed with the CARM1 (E267Q) mutation was equivalent to that observed with no CARM1, whereas the CARM1 (VLD) mutant displayed a dominant negative behavior. Thus, the enzymatic activity of CARM1 is required for the coactivator synergy between G9a and CARM1.
To test whether the histone lysine methyltransferase activity of G9a is required for its synergistic cooperation with CARM1, we used mutants that lack the enzymatic activity in vitro; that is, mutation H1166K in the catalytic site of the SET domain or deletion of the entire SET domain. The synergistic coactivator function observed with GRIP1, CARM1, and wild type G9a was reduced but not eliminated when G9a (H1166K) or G9a (ΔSET) was substituted for wild type G9a (Fig. 2B). In contrast, a G9a fragment consisting of the SET domain alone and missing the ankyrin repeats (ΔANK) was inactive as a coactivator. The G9a (H1166K) and G9a (ΔSET) mutants were less efficient than wild type G9a when lower levels of the plasmids were transfected but were almost equivalent to wild type G9a when higher levels of plasmid were transfected. Although the G9a mutants are expressed at approximately wild type levels when overexpressed in Cos-7 cells (Fig. 2C), we cannot rule out the possibility that modest reductions in their expression levels may account for the lower activities observed in Fig. 2B. Thus, the methyltransferase activity of G9a is not absolutely required for, but may contribute to, the synergistic coactivator function of G9a with CARM1.
We next tested whether any part of G9a, when brought to a promoter, could activate transcription. Fig. 2D shows the results of cotransfecting fragments of G9a fused to the Gal4 DBD with a Gal4-responsive reporter gene. The fragment containing G9a residues 72-333 contains an autonomous activation domain, whereas no other isolated fragment of the protein has such an activity. Interestingly, the first 71 residues of the protein appear to have a negative effect on this autonomous activity (compare assays 2 and 3, Fig. 2D).
To further define the specificity of the synergy between CARM1 and G9a, we tested various combinations of arginine and lysine methyltransferases in the transient transfection assays. The coactivator synergy between G9a and CARM1 was not observed when mammalian arginine methyltransferase PRMT1, PRMT2, or PRMT3 or yeast RMT1 was substituted for CARM1 (Fig. 3A). Each of these arginine methyltransferases exhibited similar coactivator activity when assayed with GRIP1 (13). PRMT1 and PRMT3 methylate histone H4 on Arg-3 among other substrates (4). Similarly, the mammalian histone H4 Lys-20 methyltransferase PR-SET7 could not be substituted for G9a (Fig. 3B), although both proteins can be expressed at high levels (Fig. 3C). Thus, among various histone methyltransferases, CARM1 and G9a have specific characteristics that allow them to function with each other as synergistic coactivators.
Because GRIP1 was important for the synergy between CARM1 and G9a, we explored physical interactions between G9a and specific GRIP1 domains. In co-immunoprecipitation experiments, G9a bound strongly to GRIP1.N (5-765) but not to the middle or C-terminal regions of GRIP1 (Fig. 4A). The interaction of G9a with GRIP1.N was apparently indirect or required post-translational modification, because no binding was observed between G9a translated in vitro and bacterially produced GST-GRIP1.N (Fig. 4B). However, G9a did bind weakly to the GRIP1 C-terminal region in vitro. The region of G9a involved in its association with GRIP1.N was further mapped using N-terminal truncations (Fig. 4, C-E). Although truncated G9a proteins containing the ankyrin repeats as well as the full-length G9a protein bound to GRIP1.N equally well, the SET domain alone bound only very weakly (Figs. 4, C and D), although it was expressed in equal or greater amounts as compared with the larger proteins (Fig. 4E). In addition, point mutation of the SET domain in full-length G9a to a catalytically inactive form did not inhibit association with GRIP1.N (Fig. 4C). The binding of G9a to GRIP1 (whether direct or indirect) suggests a possible mechanism for recruitment of G9a to the promoter by NRs.
In transient transfection assays, GRIP1 mutants lacking the N-terminal AD3 domain (which associates with G9a and binds several other coactivators), the C-terminal AD2 domain (which binds CARM1), or the AD1 domain (which binds p300/CBP) all had a substantially reduced ability to support the G9a-CARM1 coactivator synergy (Supplemental Fig. S2). The mutant GRIP1 proteins are all expressed at similar levels (13,23,27). These results reinforce the key role of GRIP1 as a primary coactivator that binds directly to NRs and serves as a scaffold for recruiting p300, CARM1, G9a, and other secondary coactivators to contribute to transcriptional activation. We examined the effect of reducing endogenous G9a on androgen-dependent activation of the prostate-specific antigen (PSA) gene in LNCaP prostate cancer cells. In a typical experiment, siRNA against G9a lowered endogenous levels of G9a mRNA by about 75%, compared with cells receiving no siRNA or a control siRNA (Fig. 5A, assays 2-4). The addition of the AR agonist DHT caused strong induction of PSA mRNA levels (Fig. 5B, assays 1-2). The siRNA against G9a lowered the hormone-induced level of PSA mRNA by about 50%, whereas control siRNA had no effect (assays 3-4). The G9a and PSA mRNA levels were normalized to β-actin mRNA levels, thus demonstrating that the effects of the G9a-directed siRNA were gene-specific. The siRNA against G9a also compromised the hormonal induction of PSA protein (Fig. 5C). Thus, although many different coactivators are involved in mediating transcriptional activation by NRs, endogenous G9a is necessary for efficient induction of the endogenous PSA gene in response to hormone. Similar results were obtained with induction of the endogenous pS2 gene by estradiol in MCF7 breast cancer cells (data not shown).
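The normalization described here (target mRNA expressed relative to β-actin and then relative to a reference condition) is conceptually the comparative ΔΔCt calculation commonly used with quantitative real-time PCR. The sketch below illustrates that arithmetic with invented Ct values, since the actual Ct data and the instrument software used by the authors are not given.

```python
# Relative quantification of PSA mRNA normalized to beta-actin (comparative ddCt method).
# The Ct values below are illustrative placeholders, not data from the study.

def relative_expression(ct_target, ct_reference, ct_target_calibrator, ct_reference_calibrator):
    """Fold change = 2^-(ddCt), with dCt = Ct(target) - Ct(reference gene)."""
    d_ct_sample = ct_target - ct_reference
    d_ct_calibrator = ct_target_calibrator - ct_reference_calibrator
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2 ** (-dd_ct)

# Hypothetical Ct values: the DHT-only sample serves as the calibrator (relative level = 1.0).
dht_only = {"psa": 22.0, "actin": 17.0}
dht_plus_sirna = {"psa": 23.1, "actin": 17.1}

fold = relative_expression(dht_plus_sirna["psa"], dht_plus_sirna["actin"],
                           dht_only["psa"], dht_only["actin"])
print(f"PSA mRNA relative to DHT-only control: {fold:.2f}")  # ~0.5, i.e., roughly a 50% reduction
```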
G9a has been associated with transcriptional repression both through its ability to associate with repressive transcription factors (19-21) and through its ability to methylate H3 Lys-9 (17). By fusing various fragments of G9a to Gal4 DBD, we confirmed that the C-terminal SET domain, which contains the methyltransferase activity, contains the transcriptional repression activity of G9a (Fig. 6A). Mutations in the SET domain (ΔNHLC or H1166K) that eliminate the methyltransferase activity also eliminated transcriptional repression by full-length G9a and its C-terminal fragments, showing that the methyltransferase activity is required for repression by G9a. The SET domain mutant G9a proteins are expressed at levels comparable with the corresponding wild type proteins (Ref. 17 and Fig. 6B).
We have, thus, demonstrated that G9a can function as a coactivator or corepressor; this presumably depends on the context of transcription factors and other coregulators at the promoter. What specific factors direct G9a to function as a coactivator rather than a corepressor? Elimination of its methyltransferase activity or the presence of a poor substrate should prevent G9a from functioning as a corepressor and could, therefore, possibly allow G9a to function as a coactivator. We tested whether histone H3 modifications associated with active genes could restrict the ability of G9a to methylate Lys-9. Using histone H3 peptides (amino acids 1-21) as substrates for methylation by G9a in the presence of 3 H-labeled S-adenosyl-L-methionine, we found that acetylation of Lys-9 and phosphorylation of Ser-10 each caused complete inhibition of G9a methyltransferase activity, whereas Lys-14 acetylation had no effect (Fig. 6C). In contrast, CARM1 was able to methylate all of these peptides, except for the Ser-10-phosphorylated peptide. Therefore, a combination of Lys-9 acetylation and Ser-10 phosphorylation (marks associated with transcriptional activation) would presumably cause a dramatic restriction of G9a ability to methylate Lys-9 and, thus, could help direct G9a to function as a coactivator rather than a corepressor.
Finally, because G9a can function as a coactivator, we sought to verify that G9a could be found associated with transcriptionally active genes.
To determine whether G9a is recruited to NR-responsive genes, we performed chromatin immunoprecipitation assays. Using the androgen-responsive PSA gene, we found that G9a is associated most strongly with the upstream enhancer region but, notably, also with other regions including the transcribed portion of the gene (Fig. 7). Although the association of the androgen receptor with the enhancer region is highly dependent on hormone, the association of G9a appears largely constitutive to the regions examined. Nevertheless, these results demonstrate that G9a is found associated with both the inactive and transcriptionally activated states of the PSA gene. These results are consistent with a role for G9a in transcriptional regulation.
FIGURE 5. G9a is necessary for efficient transcriptional activation by AR. siRNA against G9a or control siRNA was transfected into LNCaP cells. After growth with or without DHT, cDNA was synthesized from total RNA and analyzed by quantitative real-time PCR. The level of G9a mRNA (A) or PSA mRNA (B) was normalized to that of β-actin; all ratios are expressed relative to the DHT-only samples (assay 2). Compared with DHT treatment alone, siRNA against G9a reduced G9a mRNA by 75% (p < 0.0001 and 95% confidence interval of ± 3%) and reduced PSA mRNA by 48% (p < 0.0001 and 95% confidence interval of ± 9%). Statistical significance was calculated using a paired 2-tailed t test on eight independent experiments. C, LNCaP cells were treated with siRNA and DHT as above, and cell extracts were analyzed by immunoblot using antibodies against the indicated proteins.
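The statistical treatment described in the Figure 5 legend (a paired two-tailed t test across eight independent experiments, with a 95% confidence interval on the mean knockdown) can be reproduced with standard tools. The sketch below uses SciPy on invented replicate values purely to illustrate the procedure; the numbers are not the study's data.

```python
import numpy as np
from scipy import stats

# Invented normalized PSA mRNA levels from eight paired experiments
# (DHT-only vs. DHT + G9a siRNA); values are placeholders, not the study's data.
dht_only = np.array([1.00, 1.05, 0.95, 1.02, 0.98, 1.01, 0.99, 1.00])
dht_sirna = np.array([0.55, 0.48, 0.52, 0.47, 0.60, 0.50, 0.45, 0.53])

t_stat, p_value = stats.ttest_rel(dht_only, dht_sirna)  # paired, two-tailed by default

diff = dht_only - dht_sirna
ci_half_width = stats.t.ppf(0.975, df=len(diff) - 1) * stats.sem(diff)
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")
print(f"mean reduction = {diff.mean():.2f} ± {ci_half_width:.2f} (95% CI)")
```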
DISCUSSION
Di- and trimethylation of Lys-9 of histone H3 is generally low in active genes and higher in inactive genes (28,29). However, a recent study suggests that the pattern of Lys-9 methylation may be more complex and have different roles in the promoters versus the transcribed regions of genes. Di- and trimethylation of histone H3 Lys-9 was found along with the methyl(Lys-9)-histone H3-binding protein HP1γ, as a common feature in the transcribed regions of active genes (30). Moreover, their presence was dependent on active elongation by RNA polymerase II. Although further studies are needed (in light of these new findings) to examine H3 Lys-9 methylation patterns in more detail, this result suggests a role for histone H3 Lys-9 methylation and HP1 proteins in the transcription of active genes. The enzyme responsible for these modifications has not yet been identified, but our results strongly suggest G9a as a prime candidate for this function.
The extensive mechanisms for modulating chromatin structure and for recruiting and activating RNA polymerase II involve many different coregulators and complexes of coregulators, each of which appears to play a specific role in transcriptional activation (1,2). We have shown that G9a, a protein thought to be central to a large number of gene repression events in euchromatin, is also involved in transcriptional activation by several members of the large family of nuclear receptors (Figs. 1 and 5 and data not shown). G9a acts in synergy with the p160 coactivator GRIP1, the protein arginine methyltransferase CARM1, and the histone acetyltransferase p300 (Figs. 1-3). The repressive activity of G9a depends on its lysine methyltransferase activity; however, G9a methyltransferase activity is inhibited by histone modifications associated with transcriptional activation (Fig. 6). We propose that this restriction of G9a methyltransferase activity could allow G9a to operate as a coactivator.
To function as a coactivator, G9a must have a mechanism for associating with the gene that will be activated and a mechanism for transmitting an activating signal to the chromatin or transcription machinery. Several characteristics of G9a suggest potential mechanisms of recruitment to genes targeted for activation. First, G9a is associated with euchromatin, placing it in the proper chromatin domain. Second, G9a can bind to the N-terminal tail of histone H3. 4 Third, G9a can associate with GRIP1 (Fig. 4). In fact, the coactivator function of G9a is highly dependent on GRIP1 (Fig. 1) and various functional domains of GRIP1. Deletion of any of the three activation domains of GRIP1 (N-terminal AD3 or C-terminal AD1 or AD2) compromised the ability of G9a to function as a coactivator (Supplemental Fig. S2). These results reflect the role of GRIP1 as an NR binding coactivator that functions as a scaffold to recruit p300/CBP (through AD1), CARM1 (through AD2), and G9a as well as other coactivators that bind to the N-terminal AD3 of GRIP1.
Our results also suggest possible mechanisms by which G9a may transmit the activating signal downstream toward the transcription machinery. The N-terminal region of G9a contains an autonomous activation activity (Fig. 2D). The centrally located ankyrin repeats, which are known to function in protein-protein interaction in other proteins, associate with GRIP1.N but could also make contact with other upstream or downstream components of the coactivator signaling pathway. Although the C-terminal SET domain was not absolutely required for the coactivator function of G9a, G9a mutants with methyltransferase-inactivating deletions or point mutations in this domain appeared to have reduced activity at lower levels of G9a expression, suggesting that G9a methyltransferase activity may contribute to coactivator function in some way (Fig. 2B).
In addition to G9a, at least two other lysine methyltransferases have been shown to work as coactivators for nuclear receptors. Riz1, which can dimethylate Lys-9 of histone H3, acts as a coactivator for the estrogen and progesterone receptors but not for several other NRs (31). NSD1, which can methylate both H3 Lys-36 and H4 Lys-20 in vitro (32), acts as a coactivator and corepressor for NRs (33). These methylation marks have been associated with transcriptional elongation and gene repression, respectively.
What factors regulate whether G9a acts as coactivator or corepressor? We propose that promoter context determines whether G9a functions as a corepressor or coactivator. G9a is recruited as a corepressor by several sequence-specific DNA binding repressor proteins. Similarly, protein-protein interactions between G9a and coactivators such as GRIP1 (Fig. 4) could be a factor that switches G9a from corepressor to coactivator. The functional switch then could be simply mediated by recruitment to either the potentially activated or repressed promoter. Alternatively, because our chromatin immunoprecipitation data indicate that G9a is present at enhancer and promoter regions in the absence or presence of gene activation, absolute recruitment may not be a functional switch. Rather, the nature of the recruiting proteins, such as histone H3 versus GRIP1, may convert G9a from corepressor to coactivator. Dimethylation of H3 Lys-9 has been associated with repression of transcription in euchromatin, and the methyltransferase activity of G9a, located in the SET domain, is required for its corepressor function (19 -21). Our results also indicate that tethering of G9a to a promoter can repress transcription and that its methyltransferase activity is necessary for this repression (Fig. 6A). Therefore, another possible contributing mechanism for switching G9a from corepressor to coactivator would be to inhibit its methyltransferase activity, at least when it is recruited to an active promoter region. Acetylation and methylation of Lys-9 are obviously mutually inhibitory, as confirmed by our results (Fig. 6C). Methylation of Lys-9 of histone H3 has previously been shown to interact with other histone modifications. Lys-9 methylation is mutually inhibitory with Lys-4 methylation (16), and these marks are inversely correlated with each other in inactive and active chromatin (28,29). In addition, Lys-9 methylation by Suv39h1 inhibits Ser-10 phosphorylation (15), and our results show that Ser-10 phosphorylation also inhibits G9a-mediated methylation (Fig. 6C). Therefore, histone H3 tails containing several histone marks associated with active transcription serve as poor substrates for G9a.
The cooperative coactivator function between CARM1 and G9a is quite specific for CARM1 in that no other PRMT tested was able to cooperate with G9a (Fig. 3). Furthermore, the methyltransferase activity of CARM1 was essential for the synergistic cooperation between CARM1 and G9a (Fig. 2A). These results suggest a specific role for H3 Arg-17 methylation (the major histone target of CARM1) in G9a coactivator function. We have examined both the interaction of G9a and the methyltransferase activity of G9a with Arg-17-methylated H3 peptides. Thus far we have been unable to show a significant difference between unmodified peptides and Arg-17-methylated peptides in these assays (data not shown). However, we cannot rule out direct effects of Arg-17 methylation that, when combined with one or more other activating marks such as Lys-9 acetylation, Ser-10 phosphorylation, or Lys-4 methylation, might have significant effects on G9a recruitment and/or methyltransferase activity. Although transcription-activating histone modifications in the promoter region may prevent Lys-9 methylation in the promoter, they apparently do not inhibit Lys-9 di- and trimethylation in the transcribed region of active genes, since levels of Lys-9 methylation were recently reported to be higher in the bodies of active versus inactive genes (30).
In summary, G9a is capable of functioning either as a coactivator or as a corepressor depending on the promoter context to which it is recruited. Our results suggest that some level of regulation of the methyltransferase activity of G9a is necessary for coactivator activity and that G9a reads and responds to posttranslational modifications of histone H3, consistent with the existence of a histone code. | 2018-04-03T06:19:58.806Z | 2006-03-31T00:00:00.000 | {
"year": 2006,
"sha1": "5610901596fd3d1f4c9e37029687a348ae714b69",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/281/13/8476.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "ebd1096ba6a325fa486da7cebbbccd487bf13f38",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
239004120 | pes2o/s2orc | v3-fos-license | Prognostic impact of lingual lymph node metastasis in patients with squamous cell carcinoma of the tongue: a retrospective study
Squamous cell carcinoma (SCC) of the tongue rarely metastasizes to the lingual lymph nodes (LLNs), which are inconstant nodes and often situated outside the areas of basic tongue tumor surgery. The current study evaluated the clinicopathological features and prognostic impact of LLN metastasis (LLNM), compared to that of cervical lymph node metastasis, in patients with tongue SCC. A total of 608 patients underwent radical surgery for tongue SCC at our department between January 2001 and December 2016. During neck dissection, we scrutinized and resected lateral LLNs, when present. Of the 128 patients with lymph node metastasis, 107 had cervical lymph node metastasis and 21 had both cervical lymph node metastasis and LLNM. Univariate analysis demonstrated that LLNM was significantly associated with the adverse features of cervical lymph node metastasis. The 5-year disease-specific survival (5y-DSS) was significantly lower in patients with LLNMs than in those without LLNMs (49.0% vs. 88.4%, P < 0.01). Moreover, Cox proportional hazards model analyses revealed that cervical lymph node metastasis at level IV or V and LLNM were independent prognostic factors for 5y-DSS. LLNM has a strong negative impact on survival in patients with tongue SCC. An advanced status of cervical lymph node metastasis may predict LLNM.
The oral cavity is the most common subsite of head and neck cancers 1 . More than 90% of malignancies in the oral cavity are squamous cell carcinomas (SCCs) 1,2 , and SCCs of the tongue and floor of the mouth account for > 50% of primary oral SCCs 3,4 . Despite progress in diagnosis and treatment, survival in patients with advanced oral SCC has not improved significantly 5,6 . Metastasis to the cervical lymph nodes is one of the most accurate prognostic factors in patients with oral SCC [1][2][3] . SCC arising in the tongue frequently metastasizes to the cervical lymph nodes compared to SCC arising at other subsites. However, tongue SCCs rarely metastasize to the lingual lymph nodes (LLNs) that interrupt the lymphatic collecting trunks draining from the tongue and floor of the mouth to the cervical lymph nodes 7 .
LLNs can be divided into two groups, median and lateral LLNs [7][8][9][10]. The median LLNs are situated in the lingual septum between the genioglossus and geniohyoid muscles 7-10, whereas lateral LLNs are situated along the course of the lingual artery on the external surface of the genioglossus or hyoglossus muscle [7][8][9][10]. Furthermore, the lateral LLNs can be divided into two groups: the parahyoid nodes, which are located along the course of the lingual artery at the cornu of the hyoid bone, and the paraglandular nodes, which are located in proximity to the sublingual gland 10. LLNs are often not detected in any imaging examination because they are frequently absent or small when present 8,10-12. Moreover, due to their anatomic locations, LLNs are often situated outside the areas of surgical resection of the primary tongue tumor and neck dissection 8,10,[13][14][15][16][17]. For instance, the median LLNs cannot be resected during partial glossectomy that does not include the lingual septum. Lateral LLNs in the sublingual space cannot be resected during partial glossectomy, which frequently does not include this space, or during discontinuous neck dissection without scrutiny. If not investigated carefully, lateral LLNs in the parahyoid area may be overlooked during any type of neck dissection. Previous studies have reported the incidence of median and lateral lingual lymph node metastasis (LLNM) to be 0.7-3.0% 8,11,13 and 1.4-14.3% 11,13,[17][18][19], respectively. However, there are insufficient data on the clinical implications and prognostic value of LLNM in patients with tongue SCC because LLNM has received little attention until recently. We hypothesized that LLNM, similar to cervical lymph node metastasis, contributes to poor prognosis in patients with tongue SCC. The present study aimed to evaluate the clinicopathological features associated with LLNM and the impact of LLNM, compared to that of cervical lymph node metastasis, on survival in patients with tongue SCC.
Results
Characteristics of patients and LLNM. Of the 608 patients who underwent treatment for tongue SCC during the study period, 128 (21.1%) had cervical lymph node metastasis or LLNM. These patients included 92 men and 36 women aged 21 to 83 (median, 60.5) years. Eighty-nine patients initially underwent both glossectomy and neck dissection; of these, 80 patients underwent continuous neck dissection with a pull-through maneuver and nine underwent discontinuous neck dissection. Thirty-nine patients initially underwent resection of the primary tumor alone, followed by neck dissection for delayed lymph node metastasis.
Of the 128 patients, 107 had cervical lymph node metastasis without LLNM and 21 had both cervical lymph node metastasis and LLNM. Therefore, LLNM was detected in 3.5% of 608 patients with tongue SCC who underwent radical surgery during the study period. Of the 21 patients with LLNM, three had median LLNM (Fig. 1), seven had lateral LLNM in the sublingual space (Fig. 2), and 11 had lateral LLNM in the parahyoid area ( Figs. 3 and 4). LLNM was confirmed during the first surgery in 14 patients, all of whom underwent resection of the primary tumor and neck dissection-12 patients underwent continuous neck dissection with a pull-through maneuver and two patients underwent discontinuous neck dissection. Among these 14 patients, LLNM was not detected in pretreatment examinations in 12 (85.7%) patients. Occult LLNM was evident in seven patients who subsequently developed cervical lymph node metastasis. Taken together, 90.5% (19/21) of the LLNMs were subclinical metastases. The median number of LLNM was one (range, 1 to 3). The median length of the long axis of the 12 subclinical LLNMs, which were proven at the first surgery, was 7.5 (range, 1 to 18) mm. Extranodal extension (ENE) of LLNMs was observed in 15 (71.4%) of the 21 patients.
Correlation between clinicopathological features and LLNM. Univariate analyses were performed to compare clinicopathological features between patients with and without LLNM (Table 1). Age, sex, cT stage, and pathological differentiation were not significantly associated with LLNM. However, LLNM was significantly associated with the number (≤ 3 vs. ≥ 4), level (I-III vs. IV-V), and ENE (negative vs. positive) of ipsilateral cervical lymph node metastases, as well as with involvement on the contralateral side (absence vs. presence). In the univariate survival analyses (Table 2), the number of ipsilateral positive nodes, level of ipsilateral positive nodes, contralateral cervical lymph node metastasis, LLNM, and postoperative treatment were significantly associated with 5y-DSS. The 5y-DSS rates differed significantly between patients with and without LLNM (49.0% vs. 88.4%, P < 0.01; Fig. 5). Age, sex, cT stage, and pathological differentiation were not significantly associated with 5y-DSS.
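As a purely illustrative aside (the study's statistics were computed in JMP14, and the counts below are invented placeholders that only preserve the reported group sizes of 21 patients with LLNM and 107 without), an association like those summarized in Table 1 can be tested with Fisher's exact test in a few lines of Python:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: LLNM status versus ">= 4 ipsilateral positive nodes".
# Row totals match the reported group sizes (21 with LLNM, 107 without),
# but the individual cell counts are placeholders, not the study's data.
table = [[12, 9],    # LLNM present: [>= 4 nodes, <= 3 nodes]
         [15, 92]]   # LLNM absent:  [>= 4 nodes, <= 3 nodes]

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, two-sided P = {p_value:.4f}")
# P < 0.05 is the significance threshold used throughout the study.
```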
Discussion
The key findings of the present study were as follows: (1) The incidence of LLNM was 3.5% in 608 patients with tongue SCC. (2) All patients with LLNM also had cervical lymph node metastasis. Statistical analysis of the correlation between LLNM and clinicopathological features suggested that pathologic adverse features of cervical lymph node metastasis may be a reliable predictor of LLNM; (3) The 5y-DSS in patients with LLNM was significantly poorer than that in patients without LLNM. Multivariate analysis revealed that LLNM and level IV or V involvement of ipsilateral positive nodes were strong prognostic factors in patients with tongue SCC. Few studies have investigated the clinicopathological factors related to LLNM in patients with tongue SCC. Jia et al. reported that among 111 patients with tongue SCC, five had LLNM and cervical lymph node status of pN2 17 . Moreover, T stage and occult cervical lymph node metastasis were associated with LLNM in patients with cT1-2N0 tongue SCC 18 , whereas T stage, tumor differentiation, perineural invasion, lymphovascular invasion, and cervical lymph node metastasis were associated with LLNM in patients with cT2-4 tongue SCC 19 . To the best of our knowledge, the present study is the first to statistically evaluate the clinicopathological features related to LLNM in all stages (cT1-4) of tongue SCC. We found that all patients with LLNM also had other cervical lymph node metastases. Univariate analysis showed that LLNM was significantly associated with the status of cervical lymph node metastasis, including ≥ 4 positive nodes, level IV or V involvement, ENE, and spread to the cervical lymph nodes on the contralateral side. These results suggest that cervical lymph node metastasis, especially advanced status, may predict LLNM in patients with tongue SCC. In contrast, there was no significant relationship between cT stage and LLNM, indicating that early T-stage tongue SCC can metastasize to the LLN.
Further, few studies have assessed the prognostic significance of LLNM in patients with tongue SCC. Yang et al. reported LLNM to be an independent prognostic factor for survival only in patients with cT1-2N0 tongue SCC, with a 5y-DSS of 51% 18 . Ando et al. reported that recurrent disease in the parahyoid area contributed to poor disease-specific survival (DSS) in 77 patients with regional failure of cT1-2 tongue SCC 14 . To the best of our knowledge, the present study is the first to assess the impact of LLNM on survival in patients at all stages (cT1-4) of tongue SCC. We evaluated the impact of LLNM on survival in patients with tongue SCC and compared it with that of cervical lymph node metastasis. The results indicated a significantly lower 5y-DSS rate in patients with LLNM than in those without LLNM (49.0% vs. 88.4%, P < 0.01). Furthermore, Cox proportional hazards analysis revealed that the number of ipsilateral positive nodes, ENE of ipsilateral positive nodes, and contralateral cervical lymph node metastasis were not associated with 5y-DSS. However, level IV or V involvement and LLNM were significantly associated with poor 5y-DSS. These results suggest that LLNM has a greater negative impact on survival in patients with tongue SCC than other representative adverse features of cervical lymph node metastasis.
In this study, most LLNMs were subclinical metastases, consistent with the results of previous studies 11,12,14,17. To completely dissect subclinical median and lateral LLNMs, glossectomy including the lingual septum and continuous neck dissection using a pull-through maneuver are required. However, this surgical procedure may not be recommended uniformly in all patients with tongue SCC due to the low incidence of LLNM (3.5% in the present study). Therefore, it may be appropriate to carefully scrutinize and resect LLNs during any type of neck dissection. To scrutinize and resect the lateral LLNs in the parahyoid area, we resected the digastric and stylohyoid muscles, which facilitated an adequate approach to this area with a clear visual field. We first scrutinized the lateral LLNs along the hypoglossal nerve on the external surface of the hyoglossus muscle. We next scrutinized the lateral LLNs by careful observation and palpation along the course of the lingual artery from the anterior surface of the external carotid artery to the hyoglossus muscle (Fig. 4). To avoid swallowing dysfunction, the posterior belly of the digastric muscle and the stylohyoid muscle were spared on one side during bilateral neck dissection; we then retracted these muscles superiorly during lateral LLN scrutiny. A few reports have also recommended inspection and dissection of the lateral LLNs in the parahyoid area during neck dissection 14,15,17. Regarding the lateral LLNs in the sublingual space, we retracted the mylohyoid muscle anteriorly and scrutinized these nodes with careful palpation. Our procedure for the lateral LLNs did not add morbidity to the neck dissection, which is consistent with the findings of previous reports 14,15. In addition, careful surveillance for LLNM should be strictly performed in patients with tongue SCC.
This study has several limitations. First, as only a few tongue SCC patients with cervical lymph node metastases and LLNM were included, the statistical power to draw firm conclusions was insufficient. Second, the survival benefits of scrutinizing LLNs during neck dissection and hemiglossectomy could not be evaluated due to the retrospective study design.

Table 3. Multivariate analyses of factors related to 5-year disease-specific survival in tongue SCC patients with lymph node metastasis, performed using the Cox proportional hazards model; for each clinicopathological factor the table reports the hazard ratio, 95% CI, and P-value. SCC, squamous cell carcinoma; CI, confidence interval; LNM, lymph node metastasis; LLNM, lingual lymph node metastasis; ENE, extranodal extension. P < 0.05 was considered statistically significant.

In conclusion, LLNMs, which rarely develop in patients with tongue SCC, are associated with poor survival outcomes. Most metastatic LLNs are subclinical and undetectable by imaging modalities before surgery. Therefore, LLNs should be carefully scrutinized and resected during neck dissection. Careful surveillance for LLNM is necessary in patients with tongue SCC. The adverse features of cervical lymph node metastases may be predictive of LLNMs. Additional large prospective studies are needed to verify the results of this study and establish the most appropriate treatment for LLNMs.
Methods
Patients. The medical records of 608 patients with tongue SCC who underwent radical surgery at the Department of Oral Maxillofacial Surgery at Tokyo Medical and Dental University Hospital between January 2001 and December 2016 were retrospectively reviewed. Patients who received previous treatments for tongue tumors were excluded. Data on patients with pathologically confirmed cervical lymph node metastasis or LLNM were analyzed. Demographic and clinical data, including age, sex, T stage, pathological differentiation of the primary tumor, mode of cervical lymph node metastasis, and treatment outcomes, were obtained from patients' medical records. Tumors were clinically staged according to the 7th edition of the American Joint Committee on Cancer TNM staging system 20 . LLNs were divided into three groups: median LLNs (located in the lingual septum between the genioglossus and geniohyoid muscles on both sides), lateral LLNs in the sublingual space (located along the genioglossus or hyoglossus muscle), and lateral LLNs in the parahyoid area (located along the course of the lingual artery or hypoglossal nerve at the cornu of the hyoid bone) [7][8][9][10] . This study complied with the Declaration of Helsinki and was approved by the Institutional Review Board of Tokyo Medical and Dental University (D2015-600). Informed consent was obtained in the form of opt-out on the in-hospital bulletin. Patients who rejected participation in this study were excluded.
Treatments. All patients underwent preoperative imaging, including computed tomography (CT), magnetic resonance imaging (MRI), ultrasonography, and 18 F-fluorodeoxyglucose positron emission tomography/ computed tomography (FDG-PET/CT). The primary tumor and cervical lymph node metastases were assessed based on physical examination and imaging findings. All primary tumors were resected with surgical margins ≥ 10 mm. We followed a wait-and-watch strategy for the management of clinically negative cervical lymph node metastases (cN0) in patients with tongue SCC. When patients with cN0 underwent excision of the primary tumor and reconstruction using a vascularized free flap, supraomohyoid neck dissection (levels I, II, and III) was performed as elective neck dissection. Patients with clinically positive cervical lymph nodes underwent radical or modified radical neck dissection at levels I-V.
During any neck dissection, we resected the digastric and stylohyoid muscles and scrutinized the lateral LLN along the course of the lingual artery and hypoglossal nerve at the cornu of the hyoid bone. When the mylohyoid muscle was spared, we anteriorly retracted this muscle and scrutinized the lateral LLN in the sublingual space by careful palpation. Moreover, extended resection was performed when clinically positive LLNM adhered to the surrounding muscles, mandible, hyoid bone, hypoglossal nerve, or lingual artery.
Patients with ≥ 4 pathological metastatic lymph nodes or ENE with adhesion to the surrounding structures underwent postoperative radiotherapy 21 of the neck or region of LLNM, with platinum-based anticancer agents administered concurrently, if possible. After radical treatment, patients were followed up every 4 weeks for 1 year, every 2 months for the next year, and every 3 months for another year, with follow-up intervals gradually increasing thereafter. Surveillance included CT or FDG-PET/CT performed 6 months after treatment and every year thereafter. The cervical lymph nodes were carefully monitored by physical examination and ultrasonography. The median follow-up period was 60 months (interquartile range, 27 to 92 months).
Statistical analyses.
Survival was analyzed using the Kaplan-Meier method and compared between groups using the log-rank test. DSS was measured from the date of surgery to the date of death from uncontrolled tongue SCC. Multivariate analyses of factors related to 5y-DSS were performed using the Cox proportional hazards model. The associations between LLNM and categorical variables were assessed using Fisher's exact test or Pearson's chi-square test. P < 0.05 was considered statistically significant. All statistical analyses were performed using JMP14 (SAS Institute Inc., Cary, NC, USA).
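As a rough sketch of how the same workflow (Kaplan-Meier estimation, a log-rank comparison, and a multivariate Cox proportional hazards model) might be reproduced outside JMP14, the Python example below uses the lifelines library; the input file and all column names (time_months, dss_event, llnm, level_iv_v, ene, contralateral_lnm) are hypothetical placeholders rather than artifacts of the original study:

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical cohort table: one row per patient with lymph node metastasis.
# Expected columns: time_months, dss_event (1 = death from tongue SCC, 0 = censored),
# and binary indicators llnm, level_iv_v, ene, contralateral_lnm.
df = pd.read_csv("tongue_scc_cohort.csv")

llnm_pos = df[df["llnm"] == 1]
llnm_neg = df[df["llnm"] == 0]

# Kaplan-Meier disease-specific survival curves, stratified by LLNM status
kmf_pos = KaplanMeierFitter().fit(llnm_pos["time_months"], llnm_pos["dss_event"],
                                  label="LLNM present")
kmf_neg = KaplanMeierFitter().fit(llnm_neg["time_months"], llnm_neg["dss_event"],
                                  label="LLNM absent")

# Log-rank test comparing the two survival curves
result = logrank_test(llnm_pos["time_months"], llnm_neg["time_months"],
                      event_observed_A=llnm_pos["dss_event"],
                      event_observed_B=llnm_neg["dss_event"])
print(f"log-rank P = {result.p_value:.4f}")

# Multivariate Cox proportional hazards model (hazard ratios, 95% CIs, P-values)
covariates = ["llnm", "level_iv_v", "ene", "contralateral_lnm"]
cph = CoxPHFitter()
cph.fit(df[["time_months", "dss_event"] + covariates],
        duration_col="time_months", event_col="dss_event")
cph.print_summary()
```

For a strictly 5-year endpoint, follow-up times could additionally be truncated (and events censored) at 60 months before fitting.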
Data availability
The datasets used and analyzed in the current study are available from the corresponding author on reasonable request. | 2021-10-17T06:17:14.139Z | 2021-10-15T00:00:00.000 | {
"year": 2021,
"sha1": "050619898c51acd15f90f5d69dff4a757e49de46",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-021-99925-2.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "14542cdd0f831d369a3742494e0a9555ae6bc83d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |