[record 16998529 · pes2o/s2orc · v3-fos-license]
Molecular and Physiological Properties Associated with Zebra Complex Disease in Potatoes and Its Relation with Candidatus Liberibacter Contents in Psyllid Vectors

Zebra complex (ZC) disease on potatoes is associated with Candidatus Liberibacter solanacearum (CLs), an α-proteobacterium that resides in the plant phloem and is transmitted by the potato psyllid Bactericera cockerelli (Šulc). The name ZC originates from the brown striping in fried chips of infected tubers, but whole plants also exhibit a variety of morphological features and symptoms whose physiological or molecular basis is not understood. We determined that, compared to healthy plants, stems of ZC plants accumulate starch and more than three-fold total protein, including gene expression regulatory factors (e.g., cyclophilin) and tuber storage proteins (e.g., patatins), indicating that ZC-affected stems are reprogrammed to exhibit tuber-like physiological properties. Furthermore, the total phenolic content in ZC potato stems was elevated two-fold, and amounts of polyphenol oxidase enzyme were also high, both serving to explain the ZC-hallmark rapid brown discoloration of air-exposed damaged tissue. Newly developed quantitative and/or conventional PCR demonstrated that the percentage of psyllids in laboratory colonies containing detectable levels of CLs, and its titer, could fluctuate over time with effects on colony prolificacy, but levels of the presumed reproduction-associated primary endosymbiont remained stable. Potato plants exposed in the laboratory to psyllid populations with relatively low CLs content survived, while exposure of plants to high-CLs psyllids rapidly culminated in a lethal collapse. In conclusion, we identified plant physiological biomarkers associated with the presence of ZC and/or CLs in vegetative potato plant tissue and determined that the titer of CLs in the psyllid population directly affects the rate of disease development in plants.

Introduction
The presence of potato psyllids and their associated disorders was first documented in the 1920s in Colorado and described as psyllid yellows disease of solanaceous plants, displaying upward cupping of leaves and dwarfing of the plants [1]. The disease was considered to be caused by a toxin or a virus, but at that time no conclusive causal agent was reported. Since then, anecdotal evidence has suggested that certain disease symptoms might in some instances be associated with the presence of phytoplasma [2]. However, evidence is increasing that the potato psyllid Bactericera cockerelli transmits an α-proteobacterium, Candidatus Liberibacter solanacearum (CLs), to several solanaceous crops, causing two disorders named after their effects in the plants: psyllid yellows (PY) and zebra chip or zebra complex (ZC) disease [3]. ZC has recently become an economically important disease in Texas [4] and is currently spreading northward in the USA [5], causing increasing monetary losses to the potato industry. A similar triangular association between psyllids, a CL species, and plants is thought to cause citrus greening [6]. In addition to being a serious disease of potatoes, ZC involves a host with a much faster generation time and a wider geographic distribution; once sufficiently understood, experimental ZC systems could therefore potentially serve as a research model for studying the molecular plant-microbe-psyllid interactions associated with citrus greening.
ZC disorder is named after its characteristic stripe discoloration pattern observed in the potato chips after frying [5], decreasing their marketable value. The present consensus is that the occurrence of ZC is closely associated with the presence of CLs in plants and in transmitting psyllids [7,8]. In this context correct identification presents technical challenges because CLs is a phloem restricted a-proteobacterium which at present is nonculturable [9,10] Currently, detection of CLs in the potato plant and psyllids is only feasible by the amplification of a conserved 16S ribosomal DNA fragment through either conventional PCR [11,12], nested PCR [13] or qPCR [14]. The bacteria accumulate to relatively high levels in the plant roots [14], perhaps as a result of the natural flow of nutrients (source to sink) in the phloem. However, the uneven distribution of CLs through the entire plant oftentimes prevents detection of the bacteria by PCR in foliar tissue [10] even when symptoms reminiscent of ZC are observed. Therefore, in the present study we implemented PCR-based methods in combination with the chip frying test to permit the selection of only those plants positive for both tests for subsequent analysis. Characteristic above ground symptoms of ZC vary from mild to severe but these can easily be confused with symptoms caused by other pathogens [11]. Some commonly observed ZC-associated symptoms include: curling up of emerging leaves, purple and yellow discoloration of the new shoots, leave scorching, stem thickening, occasional formation of aerial tubers, zigzagging of the stem, tuber and chip discoloration and early senescence of the plant [5]. In spite of all these symptoms, only the chip discoloration that is readily identified upon frying is the most reliable and routinely used method by scientists and the potato industry to determine if the disorder observed in the field is due to ZC. Therefore, our first objective was to define molecular biomarkers to better understand the underlying molecular physiological principles of ZC disease. The potato psyllid vector of ZC (and CLs) is a phloem feeding insect that requires the presence of obligate endosymbionts to acquire essential amino acids not present in the plant sap [15]. Along with these obligate or primary endosymbionts, psyllids carry facultative or secondary endosymbionts, which can establish differential symbiotic relationships. For instance, we noticed variations in psyllid prolificacy for three different psyllid populations of the same biotype maintained in the laboratory [16], which was corroborated by findings that levels of CLs in psyllids affected population growth rate and longevity [17]. These findings suggest that CLs act as secondary endosymbiont to confer different fitness characteristics to the psyllids [17]. A recent study on the diversity of endosymbionts in the potato psyllids found that the primary endosymbionts Candidatus Carsonella ruddii and Wolbachia were present in all developmental stages of the insect [18]. Considering these properties, our second objective was to develop RT-qPCR and traditional PCR-based methods to precisely monitor CLs population dynamics and primary endosymbiont levels in individuals of different ZC-transmitting psyllid colonies, and to determine if a direct relationship exists between any of these and colony prolificacy or the severity of disease in plants. 
The collective goal of the aforementioned two objectives was to obtain a better understanding of the potato-psyllid-CLs molecular interactions responsible for the onset of ZC disease. In order to begin to dissect the triangular complex we analyzed samples from the field and those grown under laboratory conditions with and without ZC using PCR to identify truly CLs-positive samples. These were then used to determine the nature of molecular host responses and physiological changes caused by infection. Also, we newly established and maintained several psyllid colonies under controlled experimental conditions for transmission studies with a focus on two psyllid populations that contain either relatively low or high titers of CLs. Using these identified ZC affected plants and CLs-transmitting psyllid colonies, the studies revealed the new findings that: i) stems of potato plants infected with CLs acquire tuber-like properties such as accumulation of starch and tuber-enriched proteins, ii) these stems have increased levels of total phenolic compounds as well as elevated quantities of polyphenol oxidase enzyme, iii) the level of CLs in psyllids and its effect on prolificacy is not related to differences in endosymbiont titers, and iv) the CLs levels in the psyllid insect vector directly correlate with the degree and rate of symptom induction in the ZC affected plants. Results Plant physiological properties associated with ZC disease Sample selection. An initial screening of field-collected samples was conducted to verify presence of CLs and potato chip brown discoloration pattern. Conventional PCR was used to identify a CLs specific 16S ribosomal DNA sequence [11], and a frying test (Fig. 1) was conducted to corroborate if the potatoes had the defect that characterizes ZC. Only samples positive for both tests were then subjected to further analysis. Abnormal starch accumulation. Barriers in the plant phloem can cause sugar levels to increase in the stems of potato plants and induce the formation of starch storage organelles known as amyloplasts. For example, a macroscopically observable characteristic that could possibly result from such sugar accumulation in ZC plants is the formation of aerial tubers, something also observed during potato infections with fungus Rhizoctonia solani on the lower part of the stem [19]. Because ZC disease is associated with the presence of CLs, and this phloem-restricted aproteobacterium can potentially form a barrier to normal photosynthate flow, we hypothesized that in ZC plants photosynthate nutrient distribution is impaired resulting in the accumulation of starch in the stem tissues. To address this, we first tested different potato varieties to determine the normal pattern of starch accumulation by the standard lugol staining test, and identified that healthy potato plants accumulate starch at varying degrees in different plant organs, except for the upper stems ( Fig. 1). Then we proceeded and compared accumulation of starch in the upper stems of healthy and ZC affected plants and correlated that with the appearance of ZC-typical dark discoloration in fried tuber chips. As hypothesized, starch was accumulating abnormally in the upper stems of ZC-affected plants while this did not occur in healthy controls. This characteristic was proven to consistently appear in all ZC diseased plants. Protein content. 
At the onset of the studies we noticed that plants diseased with ZC develop a fleshy and succulent stem instead of the characteristic hollow and brittle stem of healthy potato plants of the Atlantic and FL1867 chipping varieties. This characteristic was observed in all different types of potato samples whether grown in the field or under laboratory conditions. To evaluate whether the observable changes in stem structure related to differences in total protein content, infected tissues from plants collected in the field were subjected to standard Bradford assays. Healthy control plants were from the same immediate area where the ZC affected plants were collected. The protein quantification was performed with 0.5 g of fresh tissue and values are presented as mg/g of tissue in Table 1. The difference in protein content was significant based on a one way ANOVA and a Tukey's test, P value of 0.01. On average stems of the diseased potato plants contained 3.5 times more protein per gram of tissue when compared to healthy potato plants. Protein profile. Protein profiling is a broad molecular screening technique that can be used to identify major proteins that accumulate abundantly in diseased tissue in plants either as a result of plant (defense) responses or produced by the invasive microbe. For this study, total proteins were extracted from several healthy and ZC diseased potato plants and their protein profiles compared using standard SDS-PAGE. The results showed that ZC affected stem tissues displayed a protein profile resembling a healthy potato tuber ( Fig. 2A). Furthermore, we identified differentially expressed proteins by Coomassie brilliant blue and silver staining of samples run on 15% polyacrylamide gels. Two specific protein bands present in infected plants were excised and sent for MALDI-TOF mass spec identification (Fig. 2B). Three proteins were specifically detected only in ZC affected plants, protein band-a contained: i) a basal transcription factor BTF-3 like, and ii) a single domain cyclophilin protein, and protein bandb represented, iii) a glycoprotein-like protein. These proteins were not of pathogen origin or defense response related proteins, but rather they appeared to represent host proteins involved in general processes of transcription and translation; events that seem to be actively up-regulated in the symptomatic ZC potato stems. Cyclophilin. The presence of the peptidyl-prolyl isomerase single domain cyclophilin protein was verified with immuno (western) blotting experiments. For this purpose a cyclophilin specific antibody was used [20]. This specific antibody successfully recognized the cyclophilin protein in the potato samples (Fig. 3A). Comparative western blot analysis with several healthy and ZC affected field samples corresponding to the stem or the tubers revealed that cyclophylin was present at elevated levels in the stems of plants afflicted by ZC when compared to healthy control plants (Fig. 3). Moreover, cyclophilin was only absent from the ZC infected tubers (Fig. 3B) which seem to be in a developmental stage that resembles older tissues, where no active protein production is occurring. Aging studies on this aspect were performed on potato tubers, corroborating the notion that this particular cyclophilin plays roles in actively translating tissues since only the potatoes that were not aged expressed cyclophilin (data not shown). Patatins. 
Mass spec protein identification of proteins in the previous section also revealed the presence of a number of patatin like protein fragments in healthy and ZC plants. Class I patatins are the major storage proteins that can be detected in potato tubers and are readily observed on a Coomassie brilliant blue stained gel, migrating at around 40 kDa, [21], also class II patatins can be detected in potato tubers although they compose up to 30% of the total pool of patatin storage proteins [22]. Therefore, we tested the hypothesis of organ identity change of the ZC stem by assaying presence or absence of patatin proteins via western blot analysis on potato samples collected from the field. The results demonstrated that class I and II tuber-specific patatin antibody reactive proteins were abundantly present in healthy tuber and stems of ZC affected plants, once more indicating a change in the molecular programming of ZC afflicted stems (Fig. 4). Phenolic compounds and polyphenol oxidase. Compared to healthy plants, tissue from ZC-affected potato plants rapidly displays a brown discoloration upon protein extraction. This can be attributed to an abundance of phenolic compounds in the stem of ZC affected plants or to an increase in polyphenol oxidase activity because when using reducing agents or antioxidants in the extraction buffer, such as b-mercaptoethanol or DTT, the brown discoloration is not observed, while browning readily occurs upon extraction with TE buffer. To address this, we analyzed the samples from healthy and ZC diseased plants using a standard total phenol extraction method (Fig. 5). A selection of the most representative samples was made based on PCR, lugol staining of stems and frying test of tuber slices ( Fig. 5B-C), and these samples were then evaluated for total phenol content. A two-fold increase of total phenols was found in ZC stems when compared to healthy stems (Fig. 5A). Polyphenol oxidase (PPO) is the enzyme involved in the phenol oxidation and its content was assayed via western blot analysis. The results pointed to a clear increase of oligomeric PPO amounts in ZC-diseased plants (Fig. S1). These findings indicate that ZC-affected tissues are characterized by increasing levels of phenol substrate and PPO to result in the formation of polyphenolic compounds that cause typical ZC-associated brown discoloration. Characterization of microbiota in psyllids Establishment of psyllid colonies. The varying degrees of symptoms and syndromes (e.g. psyllid yellows) caused by psyllids on potato plants, lead us to initiate psyllid transmission studies in order to understand plant responses to psyllid feeding compared to infestation with psyllids containing different levels of CLs. Different psyllid colonies of the same biotype initially either containing no detectable CLs, or high levels of the bacteria, were established and maintained in the laboratory in cages with potato and tomato plants that were periodically replaced by new plants. The insect colonies were frequently evaluated for CLs concentration using PCR techniques, as described in subsequent sections. Monitoring of CLs in individual psyllids. Conventional PCR was first used to screen different psyllid colonies for the presence or absence of CLs. DNA extractions of single psyllids were performed with C-TAB and the PCR screening was done with CLs specific primers, ZCf/OI2c, [11], and with 28S psyllid ribosomal DNA primers, designed to be used in multiplex PCR for a DNA quality test [11]. 
We identified an insect colony (C1) with generally undetectable levels of CLs in individual psyllids, but sometimes testing positive (at relatively low titers) in a low percentage of adults and nymphs (Fig. 6). During the course of the study a more sensitive nested primer set [13] was used to routinely monitor the relatively low-CLs-titer C1 psyllid colony. A second psyllid colony (C2) tested positive for CLs in an average of ~47% of the individuals tested (adults and nymphs), whereas the third colony (C3) tested positive for 92% of its population (Fig. 6). Another important observation was that a high percentage of CLs-positive psyllids generally correlated with high titers in individual psyllids (Fig. S2), which had a negative impact on psyllid prolificacy (Fig. S3). In fact, this effect of continuously elevated CLs levels in a high percentage of C3 individuals on prolificacy led to the eventual demise of the colony. One more finding that surfaced as a result of the periodic PCR screening for CLs was that, in contrast to the relatively stable high percentage and titer of CLs in psyllids of the C3 colony (92%, SD ±7), the percentage of psyllids containing detectable levels of CLs in the laboratory colony C2 fluctuated over time (between 9% and 100%). Therefore, extra sampling was required in order to obtain a better representation of the proportion of C2 psyllids that carried CLs. This variability of CLs is illustrated by the high SD (±30) obtained for this psyllid colony (Fig. 6). Based on these analyses it was clear that frequent monitoring was pertinent and that we did not possess a colony absolutely free of CLs, because during the course of this study even the C1 colony contained at least some individuals that tested positive for CLs. Therefore, for all subsequent experiments we categorized newly established colonies as low-CLs or high-CLs based on the combination of the percentage of psyllids testing positive and the relative level in individuals (Fig. S2). Microbe population dynamics in psyllids. Based on recurrent observations it was noticed that the low-CLs C1 psyllid population showed a high rate of prolificacy and outnumbered the high-CLs C2 psyllid population (Fig. S3). In order to begin to understand the microbial population dynamics and the changes in reproduction rates of these two psyllid colonies, we investigated potential changes in population densities of other microbes such as Wolbachia spp. The underlying premise was that this secondary endosymbiont is known to alter reproductive behaviors in different insects [23] and is present in the potato psyllid [18,24]; it was therefore considered possible that CLs-mediated changes in endosymbiont density could affect psyllid reproduction. To assess such possible changes we developed a semiquantitative SYBR Green-based real-time PCR method that allowed us to assess CLs levels in individual psyllids and compare those to levels of other endosymbionts such as Candidatus Carsonella ruddii, a primary psyllid endosymbiont [18,25], and two Wolbachia strains. Primers for the 28S psyllid ribosomal DNA reference gene, the endosymbiont Carsonella and the two strains of Wolbachia present in potato psyllids were successfully designed (Table S1) and complied with the standard real-time qPCR MIQE guidelines for primer efficiency to allow reliable comparisons [26].
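The primer efficiency requirement mentioned here, and the normalization of Ct values to the 28S psyllid reference gene used throughout these comparisons, amount to two small calculations (the efficiency formula, E = 10^(-1/slope) - 1, is spelled out in the Real-time qPCR methods below). The following sketch is not from the study; it only illustrates those calculations with a hypothetical dilution series and hypothetical Ct values.

```python
# Sketch (not the study's code): primer efficiency from a standard-curve slope and
# reference-gene normalization of the kind described in the text.
# All Ct values and template amounts below are hypothetical illustrations.
import numpy as np

def primer_efficiency(log10_input, ct_values):
    """Efficiency from a standard curve: E = 10**(-1/slope) - 1 (1.0 = perfect doubling)."""
    slope, _intercept = np.polyfit(log10_input, ct_values, 1)  # Ct vs log10(template)
    return 10 ** (-1.0 / slope) - 1.0

def relative_level(ct_target, ct_reference):
    """Target abundance relative to the 28S reference gene, assuming ~100% efficiency (2^-dCt)."""
    return 2.0 ** -(ct_target - ct_reference)

# Hypothetical 10-fold serial dilution for one primer pair.
dilutions = np.log10([10.0, 1.0, 0.1, 0.01])   # ng of psyllid DNA per reaction
cts = np.array([18.1, 21.5, 24.9, 28.3])       # hypothetical Ct values
print(f"efficiency ~ {primer_efficiency(dilutions, cts):.2f}")

# Hypothetical CLs level in one psyllid, normalized to 28S rDNA.
print(f"relative CLs level ~ {relative_level(ct_target=26.0, ct_reference=20.0):.4f}")
```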
Based on the qPCR analysis on the two psyllid populations (C1 and C2) no differences were apparent for Candidatus Carsonella ruddii or in the population of Wolbachia sp. Both psyllid colonies contained the same relative amounts of these microbes (based on Tukey's test P value of 0.01, Fig. 7), while the relative amount of CLs were evidently superior and statistically different (Tukey's test P value of 0.01) in the high-CLs C2 psyllid colony (Fig. 7). As in previous experiments, for unknown reasons, considerable CLs titer variation was observed among individuals evaluated in both colonies (Fig. 7, note the vastly different scales on y-axis for CLs level in C1 and C2 on the top panel). Together these analyses demonstrated that the microbial dynamics in the low-CLs or high-CLs psyllid populations do not obviously affect the populations of the two major endosymbionts present in psyllids, Carsonella and Wolbachia. Plant-CLs-psyllid interactions Plant responses upon colonization of potatoes with psyllids. The need to understand the spectrum of ZC symptoms observed in potato plants lead us to begin psyllid transmission studies with insect populations carrying low or high titers of CLs, and examine the plant responses to varying exposure times. Towards this, the first transmission studies initially aimed at dissecting the contribution of psyllid infestation were performed in the greenhouse with a C1 psyllid colony that at the beginning did not contain detectable levels of CLs. Atlantic potato plants, known to be quite susceptible [8] were planted in caged pots and 6 weeks later 40 psyllids were released into each cage. The psyllids were exterminated after 5 and 12 weeks (treatment I and II respectively) of insect exposure. For both treatments, psyllids and plant tissues were collected right before insecticide application and conventional PCR and nested PCR were used to determine the presence of CLs in individual psyllids and in plants. Consistent with results presented in the previous section, the C1 colony was not free of CLs, and in fact the results showed that 53% of psyllids were CLspositive for treatment I at 5-w, and for treatment II 26% at 5-w and then further increased to 38% at 12-w, as shown in Table 2. However, as noted before, despite these percentages of positive C1 psyllids the titer of a-proteobacteria in each individual was still relatively low (example in Supplemental Fig. S2), for instance only the use of nested primers allowed the detection of CLs at 12-w for treatment II (Table 2). For both time-points the plants developed PY-like symptoms, such as curling up of new leaves, yellowing of the leaf blade and the appearance of some aerial tubers (Fig. 8). Lugol staining was performed on the upper portions of the stems and all plants that were infested with psyllids tested positive for starch accumulation (Table 3). However, none of the stem pieces used for DNA extraction were positive for CLs with the conventional multiplex PCR method (Fig. S4A) and even though only one plant sample showed cyclophilin accumulation (Fig. S4), patatin was abundantly expressed in the plants exposed with these C1 psyllids (data not shown). A nested PCR was also run on these stem DNA samples, confirming that two of the plant samples contained CLs for treatment I (Fig. 9A) and three plant samples were CLs-positive for treatment II (Fig. 9B), in other words, C1 psyllids were transmitting CLs to the plants but due to the low titers and uneven distribution not all plants were positive. 
Tubers were not fully developed at the time of harvest, due to heat stress conditions that were not optimal for tuber development. Interestingly, upon potato chip slicing we observed symptoms of heat necrosis only in potato plants that were not exposed to psyllids, and upon frying no typical ZC patterns were observed for any treatment. Collectively, these experiments suggested that potato plants exposed to C1 psyllids, initially harboring no CLs but containing low levels of CLs towards the end, acquired low quantities of CLs that caused some symptoms consistent with ZC but not the full range, suggesting a correlation between levels of CLs and host responses culminating in disease symptoms. Effect of CLs abundance in psyllid colonies on plant survival. To more directly examine the effect of CLs inoculum titer on disease symptom severity, a second set of experiments was conducted under controlled growth conditions with low-CLs (C1) and high-CLs (C2) psyllid colonies, including a long-term exposure to low-CLs psyllids. Six-week-old Atlantic potato plants were placed in cages and inoculated with 20 low-CLs C1 or high-CLs C2 psyllids per plant, two plants per cage, and placed on lighted shelves in the laboratory at a constant temperature of 24°C with 12 hours of light. A caged control contained plants without psyllids. In the short-exposure experiments, the psyllids were exterminated two and a half weeks later. The long-exposure experiments were conducted with the C1 colony on Atlantic potato plants (20 days old) exposed to psyllids for nine weeks, in growth chambers with 18 hours light/25°C and 6 hours dark/22°C. Insects were sampled to assess the levels of CLs during the transmission studies by conventional PCR (Table 4). The intriguing finding, even though consistent with previous experiments, was that the low-CLs C1 colony over time produced an increased number of psyllids with detectable levels of CLs, and surprisingly, during the short-exposure experiment a higher number of psyllids (83%) tested positive for CLs compared to the psyllids that remained nine weeks on potato plants (27%). In comparison, the fraction of CLs-positive psyllids in the high-CLs C2 colony was originally 100%, which decreased moderately to 69% after 2.5 weeks (Table 4), while the CLs titers in individual positive psyllids remained quite high. Therefore, one set of plants could be considered to have undergone relatively low exposure (C1), whereas the other was exposed from the onset to high levels of CLs (C2) through a combination of a high percentage and titer of CLs, accompanied by lower prolificacy than C1. As with the greenhouse experiments in the previous section, all plants infested with CLs-positive psyllids developed some degree of ZC symptoms (Fig. 10). Also consistent was that, despite the presence of some level of CLs in the low-CLs psyllids, none of the plant samples exposed to these low-CLs insects tested positive for the presence of CLs when using multiplex or qPCR, and only one did when using conventional PCR (Fig. S5). In contrast, even though the numbers of psyllids in the C2 and C1 colonies were the same, the titers of CLs were different; plants infested with the C2 psyllids therefore rapidly collapsed in a matter of weeks, consistent with the elevated CLs levels (Fig. S5). Consequently no potato tubers were recovered from these plants, and screening of the plant tissue at the end of the experiment was also impossible since the plant material quickly decomposed (Fig. 10C).
These findings provide supportive evidence for the conclusion that ZC transmission by low-CLs psyllids results in a very low overall CLs content and distribution in plants and consequently comparatively moderate symptoms are induced. However, transmission by high-CLs psyllids leads to rapid lethal disease symptoms probably due to multiple blockages along the phloem by the proteobacteria and explosive induction of host responses. Plant responses to CLs Our studies showed that the intensity of physiological responses in potato plants to Candidatus Liberibacter solanacearum (CLs) associated with ZC, correlates with the content level of bacteria in the transmitting psyllid population. A variety of syndromes can be explained by the presence of different levels of CLs in the plants and the resulting number of sites that can possibly be physically blocked in the phloem by bacteria, their exudates or biofilms, to cause interruption of the flow of photosynthates. However, our experiments also demonstrate that part of the syndrome can be attributed to molecular reprogramming events that are caused by CLs. For instance, as illustrated by the lugol staining test, an abnormal accumulation of starch occurs in the upper stems of potato plants exposed to psyllids that transmit CLs. The expression of the tuber storage protein patatin, can also be affected by changes in the photosynthate flow. It has been reported that patatins can be induced in petioles and stems when tubers and axillary buds are removed from potato plants, and also when potato leaves are incubated in high sucrose concentrations [27]. Furthermore, patatin expression seems to be regulated by STOREKEEPER, a DNA binding protein that is activated upon sucrose accumulation [28,29]. Here, we observed an elevated accumulation of patatins and possibly other proteins, that are normally present in tubers but now accrue in the stems. The overall protein content of ZC afflicted plants is also surprisingly high for such disease stressed plants. Up to 3.5 times more protein accumulates in ZC stems when compared to healthy stems, and again this could serve towards explaining the changes in tissue morphology and its contents. A protein profile comparison of ZC affected plants with healthy potato plants revealed that even though no proteins of microbial origin were identified, three host proteins were found to be enhanced in ZC affected plants. These were a peptidyl-prolyl isomerase single domain cyclophilin protein [20], a putative transcription factor BTF-3 like [30,31] and a glycoprotein-like protein, which based on amino acid sequence similarity to Arabidopsis could be a 60S ribosomal protein L14, with roles in protein translation [32]. The increase of cyclophilin was verified with western blot analyses. The three proteins share a common function in transcription and translation; processes that seem to be actively turned on in the ZC diseased potato stems, which contain high amounts of total protein. The exact role of these putative regulatory protein remains to be identified but cyclophilins are known to stabilize the cis-trans transition state and accelerate isomerization, a process that is considered important in protein folding [33] and since overall protein levels are high, this would necessitate elevated levels of cyclophilin. 
The rapid brown discoloration of sliced or ground tissue of ZC plants is likely caused by polyphenol oxidase (PPO), a copper metallo-enzyme that catalyzes the oxidation of phenolic compounds to quinones, which upon polymerization react with amino acids on cellular proteins, generating brown pigmentation in wounded tissues [34,35]. PPO is localized in plastids, i.e., the chloroplast and thylakoid lumen [36], and in potatoes it can be found in the amyloplast [37], where the starch granules concentrate. Upon tissue disruption by mechanical damage or insect feeding, the phenolic substrates that are accumulated in the vacuole are brought into contact with the PPO released from the plastids, allowing the oxidation reaction to take place. Previous studies with tubers affected by ZC did show an increase in total phenolic compounds, but no assays were performed with the potato stems [38]. Here we show that stem tissues of ZC affected plants contain more phenolics than healthy plants as well as higher levels of PPO. Increasing phenolic compound levels is an antimicrobial strategy that some plants use to restrict the growth or spread of microbes [39], suggesting that this accumulation of phenolics may reflect a host defense response.

Figure 8. Symptoms of potato plants exposed to C1 psyllids with a low-CLs content. Differences between caged plants with or without psyllids containing low amounts of CLs. A shows a caged control potato plant, non-exposed to psyllids, and B, C and D are potato plants exposed to low-CLs psyllids. Symptoms observed vary from curling up of new leaves (B), psyllid yellows-like symptoms (C) and formation of aerial tubers (red arrow, D). doi:10.1371/journal.pone.0037345.g008

Table 3. Lugol staining of potato stems.

Plant-microbe-insect interactions
We have shown that several potato psyllid colonies have varying levels of CLs content and that, for reasons still unknown (e.g., environmental cues, food supply triggers, or sensing of psyllid population densities), the titers of the α-proteobacteria and the percentage of CLs-positive individuals in colonies fluctuate over time, and this affects the prolificacy of the insect colony. Moreover, in extreme cases we occasionally noted that the percentage of individuals testing positive would switch from a few to the majority and vice versa (Fig. 6). This phenomenon has also been observed for psyllid colonies maintained under greenhouse conditions (Don Henne, personal communication). Likewise, collaborators working with colonies derived from the same stocks as those used in our study showed that levels of CLs in psyllids affect population growth rate and longevity [17]. Extrapolating these observations to what occurs in the field suggests the distinct possibility that in one year an exponential growth of low-CLs psyllids might occur but with a low incidence of ZC, as was indeed observed in 2010 [40]. In other years there may be low numbers of high-CLs psyllids but a high incidence of ZC infected plants in the potato fields. Microbial population dynamics were studied in relation to the fluctuation of CLs and the differences in proliferation traits identified for the psyllid populations with different CLs contents. Wolbachia spp. was the first suspect because of its known role in insect reproduction [23,41], but the levels of these bacteria remained the same when comparing the evaluated low-CLs and high-CLs psyllid populations. Levels of Ca. Carsonella ruddii, a primary endosymbiont [25], also remained similar for the two populations.
Therefore, other factors or other microbes must influence the fitness properties, especially since the changes in CLs content can change within a matter of days (Fig. 6). In this context we are intrigued by the discovery that Candidatus Liberibacter asiaticus (CLa) associated with citrus greening, seems to harbor two phages which become lytic when CLa is injected into periwinkle plants [42]. It is known that during such lytic stages the phage destroys its bacterial host and one can imagine that similar changes to lytic stages could be responsible for the CLs titer fluctuation in psyllids. Sequences that resemble prophage genomes are present in the published CLs genome [43], and future experiments may elucidate if active phages are present within CLs. Psyllid-mediated CLs transmission studies were conducted in order to better understand the underlying basis for the induction of different ZC responses. Our studies illustrated the existence of a clear correlation between CLs content levels in the psyllid Table 4. Assessment of CLs in individual psyllids. population and severity of plant symptoms. For example, exposure of potato plants to psyllids containing high titers of CLs for two and a half weeks was sufficient under our conditions to cause plant death. Prolonged exposure time was a contributing factor to exacerbation of symptoms when psyllids with low CLs titer were used. Under such low infection pressure detection of CLs in the plants tissues is not always possible due to the low bacterial levels in combination with uneven distribution [9,10], which affects the number of CLs-positive plants that can be detected ( Fig. 9 and Fig. S4), but still we clearly observed the consequences of the bacteria infiltrating the phloem tissue as evidenced by disease symptoms. At the physiological and molecular level this was evidenced by the observations that for the most part plants exposed to CLsharboring psyllids eventually accumulated starch in the upper stems, exhibited increased amount of protein per gram of tissue, contained elevated levels of the patatin storage protein, and phenolic content levels were higher. The severity or rate of symptoms onset culminating in the ZC syndrome is likely dependent on how fast the blockage of phloem occurs and to what extent the plants reprogram the stem identity and stem composition into a tuber-like condition. Summary Several different physiological and molecular ZC-associated host responses were identified that include abnormal starch and protein accumulation in stems and induction of specific proteins, phenolics and PPO. We also established and maintained CLstransmitting psyllid colonies, and developed SYBR green based real time qPCR procedures to determine the titer of CLs and other microbes in the potato psyllids. With these newly established properties, materials and tools; it was shown that the content levels of CLs in the psyllid populations can fluctuate over time for reasons unknown, but as demonstrated it is unlikely associated with variations on reproduction-associated endosymbionts. Im-portantly, the ultimate level of CLs at the time of inoculation appears to be key in the extent of disease symptom development in potato plants that at least partly results from physiological reprogramming events. High inoculum levels presumably allow for the rapid spread of the a-proteobaceria through the phloem causing a rapid wilt and collapse of potato plants. 
Under lower inoculum pressure plant disease incidence is lower, although if the CLs-transmitting psyllid colony persists on plants for several weeks, the effective size of the inoculum increases with time, resulting in severe symptoms and loss of potato tuber production. Our results support the conclusion that physiological reprogramming events contribute to the ZC disease syndrome in potatoes and that the severity and rate of symptom development in plants correlate with the CLs inoculum density in the transmitting psyllids.

Figure 10. Disease progression on potato plants after psyllid exposure. A. Atlantic plants exposed for 2.5 weeks to low-CLs psyllids; no drastic symptoms were observed, and plants appear normal at the end of the experiment. B. Atlantic plants exposed for 9 weeks to the low-CLs psyllid colony; disease progression is visible as leaf edges curling up, followed by strong psyllid yellows-like symptoms. C. Atlantic plants exposed 2.5 weeks to high-CLs psyllids. At the time of insecticide treatment the plants have a normal appearance, but 2 weeks later symptoms develop, the plant collapses, and it dies 3 weeks later. Plants caged without psyllids are labeled as control, and the time points shown (2.5, 5 and 9 weeks) are weeks after psyllid inoculation. doi:10.1371/journal.pone.0037345.g010

Psyllid colonies
Three psyllid colonies were obtained from different sources. The C1 colony was obtained from Dalhart, TX, generously provided by Drs. Charlie Rush and Don Henne. The C2 colony was obtained from Dr. Joe Munyaneza, Wapato, Washington, and the C3 colony was generously provided by Dr. Christian Nansen, Lubbock, TX. The psyllids are susceptible to Agrimek and Marathon insecticides, and these were used as recommended by the manufacturer whenever the psyllids needed to be exterminated (i.e., psyllid transmission studies). Psyllid colonies were maintained by periodic transfer of young tomato and potato plants into the Bug Dorm cages.

Starch detection
Samples were placed in a tube containing I2/KI solution (5 g KI, 0.5 g I2, 500 mL H2O), followed by an 80% ethanol wash. A positive reaction produced a dark blue precipitate in the tissue where starch was accumulating.

Phenotype analysis of chips
Potato tubers of different plants were rinsed, peeled and sliced. Slices of approximately 1.3 mm were cut by hand with a knife. The slices were rinsed in water and blotted dry with paper towels. A deep fryer containing peanut oil (high flash point) was used to make the chips. When the oil temperature reached 355°F the chips were fried for 45 seconds. The chips were removed and placed on paper towels to eliminate the excess oil. Chips were then photographed.

Protein extraction, quantification and immunoblot assays
For total protein quantification, ~0.5 g of tissue was extracted in 100 mM Tris-HCl pH 8, 500 mM NaCl, 50 mM EDTA and 10 mM β-mercaptoethanol. Protein samples were then quantified using the Bradford method, and a standard curve was obtained using BSA as a standard. Once the concentration of protein was calculated (mg/g of tissue), the numbers were analyzed statistically using the program SPSS 14.0. For immunoblot assays, protein samples were electrophoresed through 10%, 12.5% or 15% polyacrylamide gels using SDS-PAGE and then electrotransferred to PVDF membranes. The blots were incubated with polyclonal rabbit antibodies raised against patatin [29] or cyclophilin [20] at 1:3000, the latter obtained from Dr. T. Rorat (Institute of Plant Genetics, Poland).
Goat anti-rabbit IgG antibodies conjugated to alkaline phosphatase were used at 1:3000 dilution, and the protein bands were then visualized by the addition of 5-bromo-4-chloro-3-indolyl phosphate p-toluidine and nitrotetrazolium blue salts.

Total phenolic content
We followed an adapted Folin-Ciocalteu method [45]. Briefly, samples of 0.5 g were taken from stem tissue, ground in liquid nitrogen and mixed with 2 mL of 100% methanol, followed by a 24 h incubation at 4°C. Equal amounts of sample and Folin-Ciocalteu reagent (62.5 µL each) were mixed with 1 mL of double-distilled water, vortexed and incubated for 2 min. Then 125 µL of sodium carbonate was added, mixed and incubated at room temperature for 2 hours. A standard curve was obtained using gallic acid as reference, and samples were measured at A720 in a Beckman Coulter Spectrophotometer Model DU 530.

Detection of Wolbachia in psyllids by PCR
Psyllids were tested for the presence of Wolbachia by using the primer sets wsp-81F and wsp-691R [24]. The PCR reaction contained 1× Phusion HF buffer, 200 µM dNTPs, 0.4 µM forward and reverse primers, 2 µL of DNA and 0.02 U of Phusion Hot Start DNA Polymerase. DNA amplification was performed on an Applied Biosystems 2720 Thermocycler with the following conditions: 94°C (5 min), then 35 cycles of 94°C (30 s), 55°C (1 min), 72°C (1 min), and a final extension of 72°C (5 min).

PCR analysis
Conventional PCR and multiplex PCR were conducted as described before [11]. Modifications to Pitman's nested PCR protocol [13] were incorporated for our screenings. The primers used for the first round of PCR were ZCf/OI2c, and for the second round Lib16S01F and Lib16S01R. Cycling parameters were also modified; the annealing temperatures for the first and second rounds were 62°C and 55°C, respectively.

Real-time qPCR
Primer design was based on specific sequences for Bactericera cockerelli, Candidatus Liberibacter solanacearum, Carsonella ruddii and Wolbachia obtained from the NCBI database. After PCR confirmation and amplicon sequencing, qPCR primers were designed with Primer Express 3.0 (Life Technologies Corporation, Carlsbad, CA). A PCR efficiency test was done to select the best primer pairs (Table S1). Briefly, amplification curves were established by serial dilution of psyllid DNA samples of known concentration, and the efficiency was determined from the slope of the log-linear portion of the calibration curve. The PCR efficiency equals 10^(-1/slope) - 1; the theoretical maximum of 1 indicates that the amount of product doubles with each cycle. The real-time PCR reaction was performed with 1× Power SYBR Green mix (Life Technologies Corporation), 500 nM forward and reverse primers and 30 ng of DNA sample in a 15 µL reaction. The PCR cycling used was as recommended (95°C for 10 minutes, then 95°C for 15 seconds followed by 60°C for 1 minute, for a total of 40 cycles). Ct values were obtained and then normalized to the 28S psyllid reference gene. The values relative to the reference gene were represented in graphs, and the standard deviation of the technical repeats was determined.

Supporting Information
Figure S1 Detection of polyphenol oxidase (PPO). Western blot analysis of polyphenol oxidase was performed with an apple PPO antibody (panel I) that successfully cross-reacted with the potato PPO. The Coomassie brilliant blue loading control of the protein samples used in the western blot is shown in panel II. The size of PPO is about 60 kDa [35].
The PPO enzyme is active as a tetramer and, as reported previously, some aggregated complexes can still be detected in an SDS-PAGE western blot [46], as seen in this western blot only for the ZC samples (indicated by the arrow on the right). HS, healthy stem; ZS, ZC affected stem; and ZT, ZC affected tuber. (TIF)

Figure S2 Analysis of CLs titer changes in psyllid populations. The prolificacy of the psyllid colony is a very good indication of the density of CLs in the psyllids. The high-CLs psyllid colony (H-1 through H-3) with low prolificacy was used to initiate a new high-CLs colony on a fresh set of tomato plants. However, in the process of adapting to the fresh tissue in a new cage, the colony started to proliferate rapidly, and when evaluated by PCR the density of CLs was very low. Subsequently a simultaneous screening of both psyllid populations was performed. Three psyllids per colony were randomly picked and DNA was extracted, including a "water" DNA extraction within sets to account for any possible contamination during the extraction. Conventional PCR was conducted with primer pairs ZCf/OI2c and 28S rDNA, used in a single PCR or combined in a multiplex. Results show that the prolific colony has a reduced CLs titer (L-1 to L-3) and the initial high-CLs colony (H-1 to H-3) retained elevated amounts of CLs. Moreover, when multiplex PCR was performed, only the high-CLs colony yielded comparable results, amplifying both PCR products; the low-CLs sample did not produce a PCR product for the ZCf/OI2c primer pair, indicating that the primer ratios and conditions for multiplex PCR need to be adjusted when testing low-titer colonies. Size markers in kb. (TIF)

Figure S3 Prolificacy differences between psyllid colonies. New colonies were established on potato plants with 5 pairs of female and male psyllids; after a month, substantial differences in the population numbers were observed. A. High-CLs C3 colony, B. Low-CLs C1 colony. (TIF)

Figure S4 Molecular characterization of plants exposed to low-CLs psyllids. A. Multiplex PCR was conducted on DNA extracted from stems of plants exposed to low-CLs psyllids. Arrows indicate the position of the 1,171 bp CLs and the 881 bp β-tubulin (β) amplicons. β-tubulin is used as a marker for DNA quality control. B. Total protein was extracted from the same tissues and western blot analysis was performed for the detection of cyclophilin (arrow). Sample loading is shown by the red Ponceau S staining. Molecular markers are indicated. (TIF)

Figure S5 Conventional PCR screening of plant samples exposed to low- and high-CLs psyllids. Conventional PCR tests were performed on DNA extracted from stems of plants exposed for 2.5 weeks to low-CLs (C1) and high-CLs (C2) psyllids. The arrow indicates the position of the 1.17 kb ZCf/OI2c 16S rDNA amplicon. For the group of plants exposed to low-CLs psyllids, DNA samples A and B are from control plants that were caged without psyllids, and C through H represent plants exposed to psyllids. For plants exposed to high-CLs psyllids, A′ and B′ are DNA from control plants (caged without psyllids) and H′ and G′ are from plants exposed to high-CLs C2 psyllids. The DNA ladder is indicated in kilobases (kb). Unnecessary lanes were removed (two lanes between the marker and the ZC plant DNA used as positive control). (TIF)
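Both the Bradford protein assay and the Folin-Ciocalteu phenolics assay described in the methods above convert absorbance readings to concentrations through a linear standard curve (BSA and gallic acid, respectively). The sketch below illustrates only that conversion; all absorbance values are hypothetical, and the 0.5 g tissue / 2 mL methanol extraction ratio is the one given for the phenolics assay.

```python
# Sketch only: converting absorbance readings to concentrations via a standard curve,
# as done for the Bradford (BSA standard) and Folin-Ciocalteu (gallic acid standard) assays.
# All numbers below are hypothetical; they are not data from this study.
import numpy as np

def fit_standard_curve(concentrations, absorbances):
    """Least-squares line A = m*c + b; returns (m, b)."""
    m, b = np.polyfit(concentrations, absorbances, 1)
    return m, b

def interpolate(absorbance, m, b):
    """Concentration of an unknown sample from its absorbance."""
    return (absorbance - b) / m

# Hypothetical gallic acid standards (mg/mL) and A720 readings.
std_conc = np.array([0.0, 0.05, 0.10, 0.20, 0.40])
std_abs = np.array([0.02, 0.11, 0.20, 0.39, 0.77])
m, b = fit_standard_curve(std_conc, std_abs)

sample_abs = 0.33                      # hypothetical stem-extract reading
conc = interpolate(sample_abs, m, b)   # mg gallic acid equivalents per mL extract
mg_per_g_tissue = conc * 2.0 / 0.5     # 2 mL methanol extract prepared from 0.5 g tissue
print(f"~{mg_per_g_tissue:.2f} mg GAE per g fresh tissue")
```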
[added 2016-05-12 · created 2012-05-17 · year 2012 · CC-BY · GOLD OA · PubMedCentral · https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0037345&type=printable]

[record 266548520 · pes2o/s2orc · v3-fos-license]
Low Risk Meets High Stakes: Unraveling the Mystery of Low D-dimer Pulmonary Embolism

Pulmonary embolisms (PEs) are potentially life-threatening emergencies that carry significant morbidity and mortality. Advances in treatment options and the safety of existing procedures have effectively reduced the long-term and short-term effects of the condition. Therefore, it is important to make an early diagnosis so that treatment options can be thoroughly explored. The D-dimer is an important tool in the early diagnosis of PEs. It is especially useful in ruling out the diagnosis in patients with a low to moderate suspicion of the disease. We present a case of a 22-year-old male who presented with exertional dyspnea, congestion, and rhinorrhea for one day and was noted to have persistent hypoxia and tachycardia. The influenza test was positive, and he was started on oseltamivir. Due to persistent hypoxia, a CT pulmonary angiogram was ordered and revealed filling defects in the left lower lobe segmental vessels suggestive of PE, as well as multifocal multilobar bilateral ground-glass opacities. He was initially treated with a heparin drip and subsequently switched to Eliquis. After a significant improvement in his hypoxia, he was discharged home for outpatient follow-up, including a hypercoagulable workup. This case demonstrates that despite the usefulness of the D-dimer as a diagnostic tool for PEs, it cannot solely or fully replace the full gamut of screening tools used to determine the risk of PE. Although rare, false-negative results do occur; therefore, the tool should always be used in conjunction with other scoring systems, physician gestalt, and within the specific clinical context.

Introduction
Pulmonary embolism (PE) represents a serious medical condition characterized by the obstruction of pulmonary arteries. PEs are potentially life-threatening emergencies and carry significant morbidity and mortality, especially when misdiagnosed or left untreated [1]. However, diagnosing the condition can be challenging, often requiring a high index of suspicion and the use of multiple clinical and laboratory tools to aid in the decision for further testing and treatment. The challenge in diagnosis often arises from the fact that presenting symptoms can be highly nonspecific and inconsistent, overlapping with various other medical conditions, which makes PE susceptible to being overlooked [2]. Despite this challenge, early detection is essential for mitigating the morbidity and mortality associated with the condition. This highlights the importance of risk stratification tools. Among these, the D-dimer has historically been useful in decision-making, especially as an exclusion tool, given its high sensitivity and negative predictive value [3]. We present a case, however, in which a negative D-dimer was found in a patient who was diagnosed with PE.
Case Presentation
The patient is a 22-year-old male with no significant past medical or surgical history and no family history of thrombophilia. He presented with exertional dyspnea, congestion, and rhinorrhea of one day's duration following influenza exposure two days prior to the onset of symptoms. The initial examination revealed tachycardia, confirmed to be sinus tachycardia on EKG, and respiratory distress with decreased breath sounds bilaterally. The patient was found to be hypoxic on room air, necessitating nasal cannula oxygen supplementation. Laboratory tests upon admission showed a white blood cell count (WBC) of 12.8, with no abnormalities on the comprehensive metabolic panel (CMP). COVID-19 testing was negative. The Wells score was initially noted to be 1.5 by the emergency physician, and so the patient was initially started on oseltamivir and supportive treatment. However, due to the presence of persistent hypoxia and tachycardia, the clinical suspicion of PE was deemed high enough to warrant ruling it out. A D-dimer test ordered earlier returned negative; however, a CT pulmonary angiogram was ordered as well based on the higher suspicion, which revealed filling defects in the left lower lobe segmental vessels suggestive of PE, along with multifocal multilobar bilateral ground-glass opacities (Figure 1).

FIGURE 1: CT pulmonary angiogram demonstrating left lower lobe segmental filling defect (red arrow)

The patient was initiated on a heparin drip and continued to receive supportive care and oseltamivir. He was transitioned to Eliquis and subsequently discharged when the hypoxia resolved, with plans for outpatient follow-up and a hypercoagulable workup.

Discussion
The D-dimer assay stands as a highly sensitive test frequently employed in the evaluation of PE. This test uses monoclonal antibodies to quantify D-dimer, a byproduct of fibrinolysis, thus reflecting coagulation activity [4]. Initially, D-dimer levels rise during fibrin clot formation, gradually diminishing as clot organization and adherence commence. Notably, D-dimer has a relatively brief half-life of four to six hours but remains elevated for about seven days post-clot formation. Nevertheless, a negative D-dimer assay is generally deemed reliable, particularly in low- and moderate-risk patients [4]. Several factors could contribute to the discordance observed in this case. Subsegmental emboli, though smaller in size, can still cause significant clinical symptoms and compromise pulmonary function. However, there exists a correlation between the extent and location of VTE and D-dimer levels [5]. This correlation may potentially explain the lack of a measurably elevated D-dimer level. The clinical significance of subsegmental emboli has been debated, with the 2019 ESC guidelines acknowledging their potential clinical importance and advising tailored management decisions based on the patient's overall condition [6].
Multiple scoring systems have been proposed for risk stratification of patients to determine the need for further testing, including the Wells score for PE, the revised Geneva score, the CHOD score, and the Padua score, each with varying predictive levels [7]. As useful as the existing predictive scores are as a tool for guidance, they each have pitfalls that limit universal application and therefore need to be applied in the context of the patient's presentation. Some studies favor the Wells score over the revised Geneva score or the simplified revised Geneva score, while others find no significant difference or even advocate for clinical judgment, often termed "physician gestalt," as a superior alternative [8-11]. The findings from meta-analyses generally indicate a lack of consistent distinctions between clinical decision instruments, with some studies suggesting a slight preference for the Wells score, or no variance compared to physician gestalt [12-15]. However, it is essential to consider variations among individual clinicians in gestalt performance [16]. Recent research has allayed concerns regarding the inter-rater reliability of the Wells score, which includes a subjective criterion pertaining to the likelihood of PE as a diagnosis [17,18]. Each of these instruments must therefore be applied within the specific context of the patient's presentation, taking into account all available information and the tools at the physician's disposal.

Conclusions
While D-dimer has proven to be a sensitive marker for detecting fibrinolysis and is often used as a screening tool for PE, it is important to recognize its limitations. D-dimer levels can be influenced by various factors, including age, renal impairment, and comorbidities. False-negative D-dimer results have been reported in patients with localized clot formation, such as subsegmental pulmonary emboli, where the extent of fibrinolysis might not be sufficient to trigger a substantial D-dimer release. This phenomenon raises questions about the appropriateness of relying solely on D-dimer in cases of suspected PE, especially when clinical symptoms and imaging studies suggest otherwise. This case report highlights the multidimensional nature of diagnosing PE. A comprehensive approach, combining clinical assessment, imaging studies, and laboratory findings, is essential for an accurate diagnosis and appropriate management, especially in cases where D-dimer results appear discordant with the clinical picture.
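To illustrate how a clinical decision instrument of the kind discussed above interacts with the D-dimer, the sketch below encodes the standard two-level Wells criteria and the usual branch point between D-dimer testing and CT pulmonary angiography. It is a schematic illustration only, not part of this report and not a substitute for guideline-directed care; the patient dictionary reflects the single scored criterion (tachycardia) described in the case.

```python
# Schematic sketch of two-level Wells scoring for PE and the usual D-dimer/CTPA branch point.
# Standard published criterion weights; not a substitute for clinical judgment or guidelines.

WELLS_CRITERIA = {
    "clinical_signs_of_dvt": 3.0,
    "pe_most_likely_diagnosis": 3.0,
    "heart_rate_over_100": 1.5,
    "immobilization_or_recent_surgery": 1.5,
    "previous_dvt_or_pe": 1.5,
    "hemoptysis": 1.0,
    "active_malignancy": 1.0,
}

def wells_score(findings: dict) -> float:
    """Sum the weights of the criteria marked True."""
    return sum(w for k, w in WELLS_CRITERIA.items() if findings.get(k, False))

def suggested_workup(score: float, d_dimer_negative: bool) -> str:
    """Two-level model: score <= 4 'PE unlikely' -> D-dimer; > 4 'PE likely' -> imaging."""
    if score > 4:
        return "PE likely: proceed to CT pulmonary angiography"
    return ("PE unlikely and D-dimer negative: PE effectively excluded in most patients"
            if d_dimer_negative else "PE unlikely but D-dimer positive: imaging indicated")

# In the case described here, tachycardia was the only scored criterion (score 1.5),
# yet persistent hypoxia kept clinical suspicion high despite a negative D-dimer.
patient = {"heart_rate_over_100": True}
score = wells_score(patient)
print(score, "->", suggested_workup(score, d_dimer_negative=True))
```

The printed output for this hypothetical input is exactly the discordance the case turns on: the rule alone would have stopped the workup, and only physician gestalt led to the confirmatory imaging.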
Detecting the Unseen: Understanding the Mechanisms and Working Principles of Earthquake Sensors

The application of movement-detection sensors is crucial for understanding surface movement and tectonic activities. The development of modern sensors has been instrumental in earthquake monitoring, prediction, early warning, emergency commanding and communication, search and rescue, and life detection. There are numerous sensors currently being utilized in earthquake engineering and science. It is essential to review their mechanisms and working principles thoroughly. Hence, we have attempted to review the development and application of these sensors by classifying them based on the timeline of earthquakes, the physical or chemical mechanisms of sensors, and the location of sensor platforms. In this study, we analyzed available sensor platforms that have been widely used in recent years, with satellites and UAVs being among the most used. The findings of our study will be useful for future earthquake response and relief efforts, as well as research aimed at reducing earthquake disaster risks.

Introduction

An earthquake is one of the major natural hazards which destroys lives and properties; more than 522 major earthquakes have occurred in the 21st century, killing more than 430,000 people worldwide [1]. Earthquakes strike suddenly, and people cannot organize effective action in the short time available because seismic waves spread almost instantaneously, destroying houses and critical infrastructure [2]. In the early hours (4:17 a.m. local time) of 6 February 2023, a 7.8-magnitude earthquake struck southeastern Türkiye and parts of Syria. About nine hours later, a 7.5-magnitude earthquake, along with more than 200 aftershocks, took the lives of at least 59,000 people and injured more than 100,000 [3]. The United Nations Development Programme (UNDP) estimated that 1.5 million people in Türkiye lost their homes and nearly 500,000 houses must be rebuilt. Seismic waves are divided into three types according to their mode of propagation: P-waves, S-waves, and surface waves. A P-wave travels in the Earth's crust at a speed of 5.5~7 km/s. It makes the ground vibrate up and down and is less destructive. An S-wave propagates in the Earth's crust at a speed of 3.2~4.0 km/s [4]. Surface waves (R and L waves) are mixed waves generated by P-waves and S-waves, which meet at the surface. Their large wavelengths and strong amplitudes are the main factors that cause strong damage to infrastructure. Despite numerous efforts to develop earthquake prediction technology, it is still in its initial stage and remains a challenging area of research [2]. Therefore, it is difficult to provide imminent predictions for most earthquakes to enable prompt rescue operations.
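The difference between these two wave speeds is what single-station detectors exploit: the lag between the P- and S-wave arrivals scales with distance to the source. The short sketch below illustrates the calculation; the specific velocities (6 and 3.5 km/s) and the 8 s lag are assumed example values chosen from within the ranges quoted above.

```python
# Back-of-the-envelope epicentral distance from the S-P arrival-time lag.
# Wave speeds are assumed example values within the crustal ranges given in the text.

def epicentral_distance_km(sp_lag_s: float, vp_km_s: float = 6.0, vs_km_s: float = 3.5) -> float:
    """d = dt * Vp * Vs / (Vp - Vs), where dt is the S-wave minus P-wave arrival time."""
    return sp_lag_s * (vp_km_s * vs_km_s) / (vp_km_s - vs_km_s)

# An 8 s S-P lag with Vp = 6 km/s and Vs = 3.5 km/s places the source roughly 67 km away.
print(round(epicentral_distance_km(8.0), 1))
```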
The experiences during emergency management after a major

Representative sensors, their applications, operating periods, working principles, and deployment platforms include the following:
- Seismic exploration source and receivers: seismic prospecting. Using a hammer strike, electric spark, or explosion as the source, seismic waves are excited and their travel times observed and recorded to detect the buried depth, shape, and distribution of different elastic stratum interfaces, solving engineering and resource geophysical exploration problems at depths from several meters to hundreds of meters. Platform: underground, several meters to several kilometers.
- P-alert: earthquake early warning, long period. An earthquake P-wave sensor that, in addition to the traditional S-wave detection function, embeds rapid earthquake reporting technology and can detect P waves and identify a catastrophic earthquake within 3 s. Platform: underground.
- AlphaGUARD: earthquake early warning, long period. Seismic precursor analysis using atmospheric radon anomalies. Platform: ground.
- CG-5 Gravimeter: seismic prospecting, long period. Studies the physical phenomena of gravity changes on the Earth's surface and in the space around it. Platform: ground.
- Distributed acoustic sensing (DAS): earthquake early warning, long period. The sensing optical fiber detects external signals; by extracting and demodulating the interference signal of sound vibration at different times, quantitative measurement of the external physical quantity can be realized.
- Radio-Frequency Identification (RFID): communication support, 0-72 h for saving lives and after 72 h for other uses. RFID is a generic term for technologies that use radio waves to automatically identify people or objects.

Life-Detection Sensors

Life-detection sensors are used to collect physiological, physical, and chemical information of trapped survivors to effectively identify their location immediately after a disaster [8]. Based on their principles and the types of sensors used, life-detection technologies can be classified into acoustic life-detection techniques, optical detection techniques, radar life-detection techniques [9], and volatile organic compound (VOC) detection techniques [10,11].

Acoustic life-detection technology is used to locate trapped individuals by detecting cries for help, movements, tapping, and even small chest fluctuations during breathing [12]. Passive sensors that receive trapped people's cries for help and knocking sounds have the advantage that rescue workers can hear these sounds and locate them if they are within the detectable range. However, the practical application of this technology requires sufficient experience from operators due to the noisy environment at the earthquake site. Recent advancements in sensor technology have enabled the detection of the chest fluctuations of a trapped person during breathing by transmitting sound waves and analyzing the reflected waves [13]. This approach has become more effective in recent years because acoustic signals can penetrate metal walls and detect stationary people through breathing movements alone without being disturbed by the remains of the victim [13,14].

Optical detection technology includes visible light detection and infrared detection technology. Optical detection involves using a small camera equipped with a light source, connected by a flexible data transmission line, to penetrate an aperture in a collapsed building without moving it. One form of this technology, also known as a Snake Eye (SE) life detector [15], can determine the position and living condition of trapped individuals while avoiding secondary collapse.
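As a rough illustration of the breathing-detection principle described above, in which periodic chest motion modulates a reflected acoustic (or, as discussed below, radar) signal, the following sketch recovers a respiration rate from a synthetic displacement trace; the sample rate, motion amplitude, and noise level are assumed values, not parameters of any particular detector.

```python
# Minimal sketch: estimate a breathing rate from the slow periodic chest displacement
# seen by an acoustic or radar life detector. The synthetic trace stands in for a
# demodulated displacement signal; all parameters are illustrative assumptions.
import numpy as np

fs = 50.0                                   # sample rate of the demodulated trace (Hz)
t = np.arange(0, 60, 1 / fs)                # one minute of data
breaths_per_min = 15
chest = 0.005 * np.sin(2 * np.pi * (breaths_per_min / 60) * t)   # ~5 mm chest motion
trace = chest + 0.001 * np.random.randn(t.size)                  # measurement noise

# Dominant spectral peak within a typical respiration band (0.1-0.7 Hz).
spectrum = np.abs(np.fft.rfft(trace - trace.mean()))
freqs = np.fft.rfftfreq(trace.size, d=1 / fs)
band = (freqs >= 0.1) & (freqs <= 0.7)
estimated_rate = freqs[band][np.argmax(spectrum[band])] * 60
print(f"estimated breathing rate: {estimated_rate:.1f} breaths/min")   # ~15
```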
Infrared detection technology uses the infrared characteristics of the human body to distinguish a human body from the surrounding environment. Currently, Unmanned Aerial Vehicles (UAVs) are becoming popular for synchronously collecting video, audio, infrared, and other information at the scene of disaster areas. The collected data are further classified by operating software to analyze the images and audio in the video and determine the location and living state of personnel [16][17][18]. Radar life-detection technology is one of the most mature and widely studied life-detection technologies at present. It was used extensively in the 2008 Wenchuan earthquake in Sichuan, China, and in the 2023 Türkiye-Syria earthquake. Common radar life-detection systems are divided into Continuous Wave (CW) and Ultra-Wide Band (UWB) radar life-detection systems. A CW radar transmits a single-tone continuous wave signal and demodulates the phase change of the reflected wave to obtain the breathing and heart rate of the person [19]. This works because the phase change of the reflected waves is linearly proportional to the displacement of the chest caused by cardiopulmonary activity [20]. The UWB radar life-detection system emits pulsed microwave beams at the body; the echo pulses, modulated by biological activity, are processed by a digital signal processing system to extract the parameters of the life signal [21]. However, due to the radiation effect of electromagnetic waves on the human body and interference caused by the simultaneous use of multiple radar life detectors at the earthquake site, radar life-detection systems still have some limitations in their use. Volatile organic compound (VOC) detection technology refers to identifying characteristic compounds in the exhaled air, blood, and urine of trapped individuals by determining the type and content of VOCs in the environment [22]. Breathing is considered a unique feature that can determine if a trapped individual is still alive [23] by detecting CO2 and O2 levels. Ion Mobility Spectrometry (IMS) and electronic sensors are common VOC life-detection instruments [24], where IMS separates these volatile organic compounds according to the difference in the drift velocity of the product ions in an inert buffer gas under the influence of an electric field [25]. The electronic sensor (also known as the electronic nose) uses an array of gas sensors to simulate animal olfactory organs to recognize odors [26]. VOC life-detection technology has some limitations, such as interference from dust and other particles at the rescue site, different VOCs among different groups of people (especially the VOCs of people trapped for a long time, with a lack of water and food), and the insufficient miniaturization of equipment [27].

Seismic Monitoring Sensors

Seismic monitoring sensors are essential for measuring abnormal activity and precursor signals of earthquakes [28]. They provide invaluable data on the position, depth, magnitude, onset time of shocks, and source mechanism of earthquakes, both before and after they occur. Sensors play a crucial role in seismic monitoring and are used in various applications such as mobile gravity monitoring [29], electromagnetic wave signal detection [30], and cross-fault deformation measurement [31]. The first seismic network was established in California, USA, in 1929 using Wood-Anderson seismometers [32].
Modern seismic networks typically consist of broadband and strong-motion seismometers. Broadband seismometers have a wide recording capacity, ranging from periods of hundreds of seconds up to hundreds of hertz. The Southern California Seismic Network (SCSN) is an exemplary seismic network that has grown from 7 seismometers in 1929 to over 600 seismometers in 2021 [33]. Each station is now equipped with co-located, three-component broadband and strong-motion seismometers. Mobile gravity monitoring is an effective technique for earthquake prediction and exploration, primarily for two reasons. First, changes in gravity directly reflect crustal deformation and variations in the focal medium during earthquake incubation [34]. Second, seismic activity is intricately linked to the spatial inhomogeneity and temporal discontinuity of gravity change. Earthquake incubation and occurrence involve multiple stages, from stress accumulation to energy release. During the earthquake preparation process, stress builds up in the source region, leading to the migration of material in the crust and changes in crustal density, which in turn affect the corresponding surface gravity. One notable success story comes from China, where a forecast system was developed based on the principle of "A field, a network". This system uses mobile gravity monitoring to predict earthquakes and has been successful in detecting gravity anomalies prior to several significant earthquakes [35]. Gravity monitoring and prediction are foundational for earthquake prevention and disaster reduction efforts. This involves the use of gravity sensors mounted on both ground-based instruments and satellites. By continuously monitoring changes in gravity, researchers can detect patterns and anomalies that may indicate the potential for an earthquake. This information can then be used to inform early warning systems and evacuation plans, potentially saving lives and minimizing damage. Several countries have developed earthquake early warning systems using various techniques, including Japan, Mexico, China, and the USA [36]. Among them, the most advanced is the Japanese REIS earthquake early warning system. REIS can accurately calculate the location and magnitude of an earthquake just 5 s after receiving the seismic wave signal. Additionally, it can estimate the source mechanism of an earthquake rupture within approximately 2 min [37]. It is important to note that Japan's ability to develop such an advanced earthquake early warning system is due in large part to its dense seismic station network. In Japan, there is approximately one seismic station every 20 km, which provides the data needed to accurately calculate an earthquake's location and magnitude within seconds of receiving the seismic wave signal. The ShakeAlert earthquake early warning system in the USA is composed of six components, including the station observation system, data transmission system, data processing and alarm center, test and certification platform, information release system, and end-users. When an earthquake occurs, the system's automatic rapid reporting system takes between 3 and 5 min to relay the relevant earthquake information to the appropriate authorities and end-users. This includes location, magnitude, and estimated shaking intensity based on the seismic waves detected by the network of monitoring stations [38]. The earthquake early warning system in Mexico City (SAS) is composed of four main components.
(1) There is an earthquake detection system that employs 12 digital seismometers spaced 25 km apart within a 300 km coastal area of Guerrero. Each station is equipped with a microcomputer capable of determining the magnitude of an earthquake within 10 s. (2) There is a communication system with a very-high-frequency (VHF) central radio relay station and three ultra-high-frequency (UHF) radio relay stations that transmit seismic information to Mexico City within just 2 s. (3) The central control system, located in Mexico City approximately 320 km from the Guerrero coast, continuously receives seismic signals and automatically processes them to determine the magnitude and decide whether to issue an alarm. (4) The alarm issuance system issues warnings via commercial radio, and relevant departments are equipped with special receivers where trained personnel are responsible for receiving the warnings and coordinating disaster prevention activities [39]. A change in the magnetic field can be taken as a precursor of an earthquake because huge accumulations of crustal pressure may change the properties of the rock layer. This phenomenon affects its electrical conductivity, and the trapped gas accumulated in the formation will also produce an electric current that affects geomagnetic activity [40]. Nevertheless, treating electromagnetic signals as earthquake precursors remains controversial. The reasons are not yet clear, but they might be as follows: (1) the signal is too weak and easily mixed with background noise to distinguish, such as noise from nearby vehicles or small changes in solar activity that can be mistaken for geological disturbance signals; (2) accurate measurement equipment at a fixed position with enough statistical recordings is required to resolve reliable signals [41]. A number of researchers have used artificial noise signals for seismic wave velocity monitoring [42]. Artificial seismic noise is usually dominated by high-frequency body waves, providing high spatial resolution. In addition, the location of artificial noise sources is often fixed (e.g., industrial operations) or moves along a fixed trajectory (e.g., trains and cars), which makes it easy to track and simulate the movement of the noise sources [43]. Micro-electro-mechanical systems (MEMS) are devices or systems that combine microstructures, micro transducers, and micro-actuators with signal processing and control circuits [44]. Nowadays, these are commonly found in smartphones and laptops. These sensors are inexpensive and can be used to construct ultra-dense arrays. Additionally, MEMS sensors are known for their high accuracy, low power consumption, and robustness, which makes them ideal for use in harsh environments [45,46]. Distributed Acoustic Sensing (DAS) is another effective technique for measuring strain rate; it consists of two parts, namely, a demodulator and a sensing fiber optic cable (Figure 1). The fiber is deformed by the movement of the Earth's crust, which changes the refractive index of the cable and hence the phase of the back-scattered light [47]. The demodulator can detect seismic activity by analyzing the phase information of the coherent Rayleigh scattered light in the fiber [48]. Since 2017, DAS has emerged as a novel technology for obtaining numerous seismic sensing channels at relatively low cost. The concept of DAS was proposed in the 1990s and has since been applied in various fields; however, its applicability in earthquake seismology has only recently been considered.
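To illustrate the kind of lightweight onset detection that can run across dense MEMS arrays or the many channels of a DAS system, the sketch below implements a standard short-term-average/long-term-average (STA/LTA) trigger on a synthetic ground-motion trace; the window lengths, threshold, and synthetic data are assumptions rather than values taken from any deployed system.

```python
# STA/LTA trigger: flag samples where short-term signal energy jumps relative to the
# long-term background, a common first step in P-wave onset detection.
import numpy as np

def sta_lta_onsets(trace, fs, sta_win=0.5, lta_win=10.0, threshold=4.0):
    """Return indices where the causal STA/LTA ratio of squared amplitude first exceeds threshold."""
    energy = np.asarray(trace, dtype=float) ** 2
    nsta, nlta = int(sta_win * fs), int(lta_win * fs)
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    idx = np.arange(energy.size)
    sta_start = np.maximum(idx - nsta + 1, 0)
    lta_start = np.maximum(idx - nlta + 1, 0)
    sta = (csum[idx + 1] - csum[sta_start]) / (idx + 1 - sta_start)
    lta = (csum[idx + 1] - csum[lta_start]) / (idx + 1 - lta_start) + 1e-12
    above = sta / lta > threshold
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

# Synthetic test: background noise with a burst of stronger shaking starting at t = 30 s.
fs = 100.0
trace = np.random.randn(int(60 * fs))
trace[int(30 * fs):int(35 * fs)] += 10 * np.random.randn(int(5 * fs))
onsets = sta_lta_onsets(trace, fs)
print(onsets[:1] / fs)   # first trigger close to 30 s
```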
Post-earthquake monitoring is being carried out using audio signals to locate human targets in a hidden way [41,49], and it can be strengthened by using Wi-Fi and Long-Term Evolution (LTE) in future [50]. This sensor is small and monitors the environment in a narrow space by sensing different physical characteristics such as temperature, humidity, pressure, and vibration. The collected sensor data are first sent to the monitoring node based on ZigBee technology and then transmitted to the monitoring center together with the monitoring images.
The results of physical experiments show that using these wireless sensors, the monitoring center can display the monitoring image of the monitoring area in real time and visualize the collected sensor data [29]. Ongoing research has been applying intelligent monitoring algorithms (such as object recognition or intrusion detection) on monitoring nodes to achieve better monitoring performance [51]. Other advancements include the optimization of the mechanical design of the monitoring nodes (e.g., miniaturization or lightweighting) and the positioning algorithms for the sensor nodes. Co-seismic dislocation and optical data are the main parts of seismic monitoring via remote satellites [52], where GNSS and InSAR measure the co-seismic dislocation. Ground-based receivers using satellite signals from global navigation satellite systems (GNSS) such as the Global Positioning System (GPS) have served as primary sensors for over a decade to measure co-seismic ground deformation [53][54][55]. The combination of ground-based GPS and remote satellite information is very useful for improving estimates of earthquake deformation [56]. Synthetic Aperture Radar (SAR) is an imaging radar that uses a small antenna moving at a constant speed along the trajectory of a long array, radiating coherent signals and processing the echoes received at different locations coherently to achieve higher resolution [57]. Similarly, InSAR (Interferometric Synthetic Aperture Radar) is an advanced technique that combines synthetic aperture radar imaging technology with interferometry to measure the phase difference of two or more SAR images [58]. InSAR accurately measures the three-dimensional position of, and small changes at, any point on the Earth's surface and has been demonstrated to be a reliable measurement tool [59]. The use of optical satellite data to detect various anomalies before a strong earthquake is key to predicting seismic activity because it can identify phenomena related to thermal radiation in the initial stage of an earthquake. Therefore, satellite observations are powerful tools for monitoring earthquake preparedness areas in near real time on a large scale [60].

Earthquake Early Warning

The main purpose of an earthquake early warning is to detect earthquakes in the early stages to estimate the seismic intensity of the expected area and warn users before the seismic waves spread to the ground [61]. The occurrence of an earthquake is sudden, and it is therefore not possible to predict accurately. However, a few seconds of warning can allow people to escape the building, find proper shelter, and move to a safer place inside the building [62]. An earthquake early warning system detects non-destructive seismic waves (P-waves) emitted at the beginning of an earthquake, while destructive seismic waves (S-waves) arrive at the surface several seconds later due to their relatively slow propagation velocity (Figure 2).
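To give a sense of the lead time such a system can offer, the short sketch below converts the P/S speed difference into an approximate warning time at a given epicentral distance; the velocities and the 5 s detection-and-alert delay are assumed, illustrative values.

```python
# Rough warning time before damaging S waves reach a site: the S-wave travel time
# minus the P-wave travel time, minus an assumed detection-and-alert delay.

def warning_time_s(distance_km: float, vp: float = 6.0, vs: float = 3.5,
                   processing_delay_s: float = 5.0) -> float:
    return distance_km / vs - distance_km / vp - processing_delay_s

for d in (50, 100, 200):
    print(f"{d} km -> {warning_time_s(d):.1f} s of warning")
# 50 km -> ~1.0 s, 100 km -> ~6.9 s, 200 km -> ~18.8 s (negative values would mean no usable warning)
```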
A seismic sensor can sense the speed and acceleration signals caused by ground movement and convert them into directly recordable electrical signals [45]. Seismic sensors are widely used in energy exploration, building quality detection, and geological detection, in addition to earthquake detection [46,63]. The main observation data include temperature, pressure, and humidity [64]. Mechanical earthquake early warning technology, based on micro-electro-mechanical systems (MEMS), is the most widely used earthquake early warning technology. This technology emphasizes ultra-precision machining and small size, making it well suited for large-scale applications due to its low cost and low power consumption [65]. P-wave sensors are an essential component of earthquake early warning systems, as they can detect the first seismic waves generated by an earthquake. Factors that affect the warning time provided by an early warning system include the distance from the epicenter of the earthquake and the speed of data transmission. In regions with dense sensor networks, accurate and timely data from multiple sensors can be captured to minimize the warning time. This can provide valuable time for people to take protective measures and reduce the potential damage caused by an earthquake [62].
Electrochemical earthquake early warning technology is based on an electrolyte solution flowing relative to an electrode; the resulting change in the ion concentration gradient generates an electrical output. The electrical signal output by the cell therefore changes with the input seismic motion. Electrochemical seismic sensors perform well at low frequencies compared to other sensor types. They have little mechanical noise, a small amount of thermal noise and low power consumption, and thus a high signal-to-noise ratio and a wide dynamic range [46,66]. Magnetic fluctuations at low frequency (0.01~10 Hz) and amplitudes of 10 to 100 nanotesla (nT) occur hours or days before an earthquake. This fluctuation can be detected by a sensor composed of a Metglas-PZT-Metglas magnetoelectric (ME) composite material [62]. The composite has two components: a ferromagnetic layer that responds to a magnetic field by generating mechanical strain, and a piezoelectric layer that converts mechanical strain into voltage. This sensor is very small, light, and cheap and works at room temperature [67]. Abnormal element detection technology, such as a radon detection sensor, is unique to earthquake early warning systems [66]. The content of radon varies with temperature, pressure, and humidity. Radon has a half-life of 3.8 days, so it can be detected shortly after a basement fissure has formed [68,69]. A rise in radon concentration is a sign of the formation of new basement cracks. The cracks facilitate the flow of groundwater, allowing radon to escape [65].

Communication Support Sensors

An earthquake can destroy communication channels by collapsing mobile base stations and power lines. At the same time, traffic lines between the disaster area and the outside world can be blocked, meaning the victims in the disaster area cannot communicate with the outside world [70]. This creates a great challenge for search and rescue operations. Special network services can play a critical role in the first 72 h after an earthquake, during which conventional communication services are disrupted. These network services for earthquake relief can be classified into wireless emergency communications and wired emergency communications.

Space Satellite Communications

Communication satellites play an important role in earthquake rescue because of their large communication range and good communication performance. They can be deployed and brought online within a short period of time and offer mobility, flexibility, and strong environmental adaptability. Their communication networks cover a large area in real time and can absorb large volumes of unexpected traffic, providing information and communication services between all levels of command at the earthquake site to support emergency rescue work. Satellite communication is self-contained and has low power requirements, requiring only small generators or solar cells at the terminal for communication services [71].

Ground-Based Electromagnetic Wave Communication

Ground-based wave communication uses terrestrial electromagnetic waves to provide services for earthquake relief. This includes shortwave, digital trunking, two-way radio systems, microwave communications, and radio frequency identification devices (RFIDs).
The most obvious application is maintaining communication among rescue team personnel on the ground to coordinate their efforts.

Shortwave

Shortwave signals, with frequencies of 3 to 30 MHz, have strong penetrating abilities and can pass through mountains, buildings, and other obstacles [72]. They are commonly used not only for long-distance communication such as maritime, aviation, and overseas communications but also for emergency communications during earthquakes, floods, and other disaster events. Shortwave is instrumental during emergencies because of its simple equipment and simple point-to-point communication platform [72].

Digital clusters

Digital clusters usually consist of multiple nodes, each of which is a computer, connected to one another through a high-speed network. Such systems support dynamic networking, emergency calls, data transmission, fax and voice services, and automatic monitoring and alarm functions, and have therefore become an important part of emergency communication and command and dispatch systems [73]. A digital trunking system can meet the command and mobilization requirements of a rescue department during disaster relief because it can integrate satellite positioning and other functions.

Two-way radio system

A two-way radio has both transmitting and receiving functions and enables two-way communication, so it can be used for long-distance communication such as maritime and aviation communication. Users can send and receive communications via radio waves; walkie-talkies of this kind are mainly used when internet-based systems fail [74]. They are very important for organizing rescue teams and coordinating operations and communication support for rescuers in the event of a communication breakdown caused by an earthquake.

Microwave Communication System

A microwave communication system uses waves with wavelengths between about 1 mm and 1 m, i.e., shorter wavelengths and higher frequencies than other radio bands. Microwave radio waves are highly resistant to interference and can transmit a large amount of information in a limited frequency band. Microwave communication plays a vital role during earthquake emergencies because an earthquake can destroy wired transmission networks such as fiber optic communication networks [75]. Communication rescue vehicles and other carriers can quickly reach the disaster area and provide communication services. Satellite communication is also a kind of microwave communication, using space-based platforms to achieve microwave relay communication.

Radio Frequency Identification Technology (RFID)

RFID is a non-contact automatic identification technology that reads information about an item using radio frequency signals without the need for direct contact. RFID systems include readers and tags: a tag is a chip implanted in or attached to an item that stores information, and a reader is a device that can read the tag information via radio frequency signals. RFID has benefits over bar codes in terms of non-optical proximity communication, information density, and bidirectional communication capability [76]. In the rescue process after an earthquake, rescuers need to find buried survivors as soon as possible. Using life detectors with RFID tags, buried survivors can be found quickly. Rescuers also need to coordinate rescue operations to ensure rescue efficiency.
The proper use of RFID can strengthen the rescue service, collecting information and providing effective information support [61].

Sensor Integration Platform

The integration and effective use of search and rescue technology require dedicated platforms, because these operations do not rely solely on any single technology. Various platforms integrated with different technologies provide significant contributions during earthquake search and rescue operations. Some of the existing platforms are described below.

Earthquake Emergency Vehicle

Earthquake emergency vehicles are very important tools during search and rescue operations following earthquake disasters because they can supply vital instruments and logistics support. Different countries classify earthquake rescue equipment into different classes, such as rescue trucks, ambulances, firetrucks, mobile command centers, urban search and rescue vehicles, medical support units, etc. For example, in Japan, the "Hyper Rescue" series of vehicles is used for earthquake emergency responses. These vehicles are equipped with specialized equipment such as medical supplies, cutting and excavation tools, and communication systems. They can also serve as mobile command and information collection centers [77]. In the USA, the Federal Emergency Management Agency (FEMA) uses a variety of earthquake emergency vehicles, including large trucks and trailers that carry generators, communications equipment, and other supplies. The Los Angeles Fire Department operates specialized urban search and rescue (USAR) vehicles, which include cranes, bulldozers, and other heavy machinery [78]. In China, specialized earthquake rescue vehicles are used to transport rescue personnel and equipment to disaster areas. These vehicles are equipped with a variety of specialized tools and equipment such as stretchers, oxygen supplies, and search cameras. In Italy, the National Fire Corps operates a fleet of specialized vehicles for earthquake responses, including bulldozers, excavators, and cranes. These vehicles are used to clear rubble and debris and to search for survivors trapped under collapsed buildings. The China Earthquake Administration has categorized emergency equipment into eight categories, which include detection, search and rescue, medical, communication, assessment and information, logistics, and rescue vehicles. These categories can be further subdivided based on their specific application scenarios and functions. To meet local needs, certain emergency vehicles are being modified to serve specialized functions such as forward command vehicles and telemedicine consultation vehicles. Rescue vehicles equipped with life detectors, toxic and harmful gas detectors, communication equipment and basic medical equipment, including cardiopulmonary resuscitation machines and stretchers, have been developed to meet the needs of earthquake disaster site rescue (Figure 3). They can also supply power to other search and rescue equipment. Similarly, the forward command vehicle serves as an important channel for the collection and sharing of front-line information, supporting disaster assessment, effective decision making, and on-site resource scheduling. It mainly provides satellite communication, field voice communication, network communication, field network systems, shortwave radio, and other communication functions.
The telemedicine consultation vehicle is used as field medical and health equipment in the earthquake-stricken area where medical resources cannot reach the disaster area. This acts as an online platform and provides support through virtual consultation to the injured people in the hard-hit area.

Unmanned Vehicle

Unmanned vehicles are very useful in areas where an earthquake has damaged critical infrastructure or where toxic gas or polluted air makes the environment unsafe and unhealthy [79]. These vehicles can reduce the threat of unknown environments to rescuers and emergency workers. Common unmanned devices include drones and mobile robots [19], but drones are the most used in disaster areas [79]. An unmanned aerial vehicle (UAV), or drone, can monitor a large area in a short period of time. Therefore, drones have become an increasingly popular tool for use in earthquake response and recovery efforts, including search and rescue operations, damage assessment, mapping, monitoring, infrastructure inspection, delivering aid, etc. [80]. They were widely used to provide long-term light in dark evenings and communication networks at search and rescue sites during the 2023 Türkiye MS 7.8 earthquake.
A UAV can also create 3D maps of its surroundings using lasers, which is very useful in mountain environments. Medical robots are very popular nowadays due to their increasing accuracy. They can be divided into urban search and rescue robots, evacuation robots, and on-site diagnosis robots based on their functions. An urban search and rescue (USAR) robot, represented by the serpentine robot, conducts a preliminary exploration of the disaster site and identifies human survivors by examining video (with audio) [81]. This instrument transmits the location of survivors to a centralized cloud server. It also monitors the air quality in the selected area to determine whether it is safe for rescuers to enter [18]. An evacuation robot can be utilized to remove survivors from debris. An on-site diagnosis robot can judge the severity of a survivor's injuries according to the skin condition of the buried person [82]. However, earthquake rescue robots are not widely used due to their high cost, and they can only be brought to a disaster area by an emergency communication agency after the disaster. Therefore, it is difficult to use mobile robots dedicated to disaster reduction in the initial stage of a disaster.

Base Station

A base station receives and sends signals and forwards them to other terminal equipment. A mobile communication base station mainly comprises a communication tower, an antenna feeder system (antenna and feeder), a communication room, main equipment, and supporting facilities and equipment (grounding system, power supply system, lightning protection facilities, transmission equipment, transmission lines, air conditioning, alignment frame, lighting and monitoring facilities, and fire prevention facilities). The equipment mainly consists of rack-mounted, desktop, and self-supporting units. The main structural room is generally built with a reinforced concrete frame structure, brick, and color steel plate. The communication tower can be divided into the tower room (independent tower), outside tower, and roof tower. A base station can be fixed or mobile. Fixed base stations are stationary and usually located in a specific location, such as a roadside cellular base station. A mobile base station, such as a vehicle-mounted base station, is very easy to deploy in an emergency situation due to its light weight and portability. The reliable operation of a communication network is important for the effective implementation of earthquake relief, and it is also a prerequisite for rescue teams to start a rescue smoothly. Several large earthquakes have destroyed communication channels around the globe in recent history. For example, the 2004 Indian Ocean earthquake and tsunami, with a magnitude of 9.1, caused damage to communication infrastructure in several countries, including India, Indonesia, and Thailand. The devastating 2010 Haiti earthquake, with a magnitude of 7.0, caused extensive damage to the country's communication infrastructure, including the destruction of the National Palace and many government buildings that housed important communication equipment. Communication was severely disrupted for weeks following the earthquake [83]. The Tohoku earthquake and tsunami (2011), which occurred off the coast of Japan, was one of the most powerful earthquakes ever recorded in Japan [7]. It caused widespread damage to communication infrastructure, including undersea cables and satellite systems.
This led to major disruptions in internet and phone services. Similarly, according to a report by the Nepal Telecommunications Authority (NTA), a total of 1299 base transceiver stations (BTSs) were damaged or destroyed in the 2015 Gorkha earthquake [81]. Drones were deployed to survey damage and identify places where people were trapped in the Mw 7.1 Mexico City earthquake. The drones were equipped with thermal cameras to detect body heat under rubble and helped locate survivors in time to save their lives. Similarly, drones captured high-resolution images to assess the extent of the damage caused by the 2015 Gorkha earthquake in Nepal [5]. These images were also used for further geomorphic analysis. A 6.2-magnitude earthquake hit central Italy in 2016, and drones were quickly deployed to survey the area and identify victims under rubble. The drones were equipped with sensors to detect signs of life with the help of thermal imaging technology. After a series of earthquakes hit Puerto Rico in 2020, drones were used to survey damaged buildings and infrastructure [84]. The drones provided high-resolution images of damaged structures and buildings that were very useful in making important decisions for the local government [85]. In China, emergency rescuers used a mobile public network base station carried by the aerial emergency communication platform of a drone to provide continuous mobile communication signals. The multi-rotor drones provided instant calls, internet access, and other services 24 h a day during the Jiuzhaigou valley earthquake in China on 8 August 2017. In addition, mobile base stations are very effective in situations where all kinds of communication channels are destroyed by a disaster. For example, on 21 July 2021, an extraordinarily heavy rainstorm hit Zhengzhou city, China, destroying the electric communication equipment in many places and resulting in the interruption of all cell phone and network signals for the affected people. The emergency rescuers could not get information from the affected area. Therefore, the Ministry of Emergency Management urgently dispatched a Wing Loong ("winged dragon") drone to fly long distances across the region.

Satellite

Satellites have many advantages during disastrous situations because they can provide a systematic and synergistic framework to facilitate scientific understanding of the Earth and thus facilitate disaster prediction and post-disaster support (Figure 4) [86]. Satellite data can help analyze the situation in a disaster area and provide rescuers with an important basis for decision making. They can further support earthquake rescue operations in the areas of detection, early warning, rescue navigation, communication, and the prediction of secondary hazards [87,88]. For example, satellite imagery shared by the NASA Earth Observatory showed the damage to Turkish cities after the destructive earthquakes on 6 February 2023.
Navigation and monitoring satellites can perform detection and early warning functions before earthquakes. The recent advancement of real-time high-rate GPS [52] means this technology can directly estimate permanent displacement of the Earth's surface. This information can be combined with ground seismic sensors to make more accurate early warnings of earthquakes [89]. Infrared satellites can predict earthquakes by detecting anomalous ground warming prior to an earthquake [90]. Satellite information can be used by different agencies at different levels. For example, satellites enable governments and rescuers to know about earthquakes in advance, send rescue teams, and evacuate residents [87,91,92]. For the public, satellites also allow people to know about earthquakes in advance so that they can take protective measures nearby. Satellites provide historical and real-time images of the disaster area before and after the earthquake. Satellites also provide a unique synergistic view of the spatial scale and time evolution of the disaster area. Earth-orbiting satellites complement traditional in situ measurements and ground-based sensor networks such as those for seismology, volcanology, geomorphology, and hydrology [93]. Satellite communication has the advantages of a large communication range and good communication performance and is not affected by terrestrial disasters, which makes it important in earthquake rescue [93]. Moreover, satellites can provide spatio-temporal information on a disaster area to help assess the possibility of subsequent disasters [94]. This information can be used to avoid casualties and economic losses from subsequent disasters [95]. This review shows that there are benefits and limitations to earthquake sensors for decision making based on their model and applications (Table 3). Base stations, UAVs, and satellite sensors have different capacities and limitations because of their model design.
For example, base station sensors have a very high cost compared to UAV sensors, but they provide very good communication systems and long-term data for early warning systems. Meanwhile, UAV sensors are very good at detecting objects in harsh situations, but they face many technical challenges. Satellite sensors are very useful for large-scale disaster scenarios, but they are very expensive.

Conclusions

The application of earthquake sensors is very important for timely search and rescue operations in disaster scenarios. Earthquake sensors have been evolving in recent years and have been applied in different time and space scenarios. The development of frontier technologies such as the Internet of Things and Artificial Intelligence has provided unique opportunities in emergency situations. The current development status of seismic monitoring and rescue sensors is manifested in different aspects. Constant innovation in sensor technologies such as MEMS, DAS, UAVs, satellites, and nanotechnology can enable more effective detection and recording of seismic activity. Moreover, the use of big data and AI could help to achieve real-time sharing of earthquake locations and damage information with the public. This will enhance the capability of earthquake responders to provide early warning and rescue operations. Moreover, new developments in sensor networks could establish stable communication networks to achieve information transmission and real-time responses. The development of earthquake-related sensors is an ongoing process of innovation and expansion, with the continuous strengthening of scientific and technological progress, providing more efficient, intelligent, and comprehensive protection for earthquake rescue and relief. However, there is a strong need to strengthen the capabilities of earthquake sensors for timely prediction and effective disaster management. The integration of new innovative technology in earthquake prediction should be in place to provide comprehensive information sharing for effective disaster management in the future.
Childhood maltreatment and the medical morbidity in bipolar disorder: a case–control study

Background: Childhood maltreatment (abuse and neglect) can have long-term deleterious consequences, including increased risk for medical and psychiatric illnesses, such as bipolar disorder, in adulthood. Emerging evidence suggests that a history of childhood maltreatment is linked to the comorbidity between medical illnesses and mood disorders. However, existing studies on bipolar disorder have not yet explored the specific influence of child neglect and have not included comparisons with individuals without mood disorders (controls). This study aimed to extend the existing literature by examining the differential influence of child abuse and child neglect on medical morbidity in a sample of bipolar cases and controls.

Methods: The study included 72 participants with bipolar disorder and 354 psychiatrically healthy controls (the average age of both groups was 48 years), who completed the Childhood Trauma Questionnaire and were interviewed regarding various medical disorders.

Results: A history of any type of childhood maltreatment was significantly associated with a diagnosis of any medical illness (adjusted OR = 6.28, 95% confidence interval 1.70–23.12, p = 0.006) and an increased number of medical illnesses (adjusted OR = 3.77, 95% confidence interval 1.34–10.57, p = 0.012) among adults with bipolar disorder. Exposure to child abuse was more strongly associated with medical disorders than child neglect. No association between childhood maltreatment and medical morbidity was detected among controls.

Conclusions: Individuals with bipolar disorder who reported experiencing maltreatment during childhood, especially abuse, were at increased risk of suffering from medical illnesses and warrant greater clinical attention.

Background

Bipolar disorder is associated with substantial morbidity and mortality (Kupfer 2005); for instance, people with bipolar disorder die up to 14 years younger than the general population (Chang et al. 2011). Premature mortality among individuals with this illness cannot be explained by suicide alone (Hoang et al. 2011). Of the factors thought to contribute to the medical morbidity in bipolar disorder (and other serious mental illnesses), those which have received the most attention include side effects of psychotropic medications, unhealthy lifestyle choices and issues with health care provision for this group. Various side effects are associated with psychotropic medications, such as mood stabilisers and antipsychotics, which are commonly used to treat bipolar disorder; the most pertinent in this context include weight gain and insulin resistance (Newcomer 2007). Such side effects are risk factors for diabetes and cardiovascular disease (Correll et al. 2015) and thus may explain the high rates of these illnesses in people with bipolar disorder. Smoking, an unhealthy diet and physical inactivity are lifestyle choices and habits which are prevalent in people with bipolar disorder (Scott and Happell 2011) but are also known risk factors for physical illnesses, such as diabetes and coronary heart disease; thus, these lifestyle choices may explain the comorbidity between these illnesses and bipolar disorder (De Hert et al. 2011). A small but growing body of research suggests that people with serious mental illnesses are less likely to receive standard levels of care (De Hert et al. 2011). For instance, low rates of surgical interventions for coronary heart disease (e.g.
stenting) and screening for metabolic abnormalities associated with diabetes are recorded for people with mental illnesses including bipolar disorder (De Hert et al. 2011). This is despite the fact that such illnesses are highly prevalent in this population. A factor which has received less attention in this context is the influence of childhood adversity. Preliminary research suggests that the experience of childhood maltreatment may contribute to the medical morbidity observed in bipolar disorder (Post et al. 2013), but these findings await replication and extension. Childhood maltreatment encompasses both abuse (e.g. sexual, emotional and physical abuse) and neglect (lack of provision for the individual's needs by their caregiver, including food, shelter and support) (Norman et al. 2012). Childhood maltreatment can be considered a plausible risk factor for the comorbidity between medical illnesses and bipolar disorder based on two lines of evidence. First, childhood maltreatment is associated with lasting changes or abnormalities in a number of biological systems or processes detected in adulthood (Gonzalez 2013; Danese and Lewis 2016). For instance, increased inflammatory cytokines are exhibited by maltreated individuals both as children (Slopen et al. 2013) and adults (Baumeister et al. 2016). Elevated inflammation has also been implicated in bipolar disorder (Leboyer et al. 2012) and a series of medical illnesses, such as diabetes, arthritis and certain cancers (Couzin-Frankel 2010), and thus could explain the comorbidity between the two disorder groups. Moreover, there is evidence that people with mood disorders and a history of childhood maltreatment exhibit particularly pronounced elevation in inflammation levels (Danese et al. 2008, 2011). For example, maltreated individuals with depression have significantly increased inflammation levels compared to those with depression only, a history of childhood maltreatment only and those without either (controls) (Danese et al. 2011). Secondly, childhood maltreatment is associated with an increased risk of medical illnesses and bipolar disorder (Fisher and Hosang 2010; Palmier-Claus et al. 2016) in adulthood. The results from several studies have gone further and shown that childhood maltreatment is linked to the co-occurrence of medical illnesses and mood disorders (including bipolar disorder) (Lu et al. 2008; McIntyre et al. 2012). To date only one study has examined this relationship in bipolar disorder specifically and found that childhood adversity is significantly related to the diagnosis of medical illnesses in adulthood, including diabetes, cardiovascular disease and asthma (Post et al. 2013). The limited available research in this area has not explored the specific role of child neglect but has focused on broadly defined childhood maltreatment (McIntyre et al. 2012), or childhood adversity which includes child abuse, parental psychopathology and violence in the home (Lu et al. 2008; Post et al. 2013). Exposure to child neglect has been related to a number of medical illnesses, such as cardiovascular disease, diabetes and osteoarthritis in adulthood (Norman et al. 2012) and therefore is a crucial construct to consider in this context. Furthermore, previous studies examining the medical morbidity in bipolar disorder have not included comparisons with control groups (Post et al.
2013), thus it remains unclear whether the relationship between child adversity (including childhood maltreatment) and medical illnesses is specific to or greater among people with bipolar disorder relative to the general population. To address the methodological gaps in the literature, the current study aimed to investigate the association between a history of child maltreatment and the diagnosis of medical illnesses in adulthood among people with bipolar disorder compared to unaffected controls (those without a personal or family history of a psychiatric illness). The differential influence of child abuse and child neglect on the diagnosis of medical illnesses was also examined in this context. It is hypothesised that both child abuse and neglect will be more significantly associated with medical illnesses in the bipolar disorder group compared to controls. Participants A total of 426 participants were included in this study, 354 (58% females, N = 205) of which were psychiatrically healthy controls and 72 (78% females, N = 56) were diagnosed with bipolar disorder (see Table 1). The participants with bipolar disorder were aged between 29 and 72 years, with a mean of 48.4 years (SD = 9.43). Participants with bipolar disorder were enrolled in the BADGE (gene-environment interplay in bipolar affective disorder) study (see Hosang et al. 2012) and were recruited by re-contacting bipolar cases from the Bipolar affective disorder case-control study (BaCCs) (see Gaysina et al. 2009; Hosang et al. 2017). Participant recruitment for BaCCs was mainly via psychiatric outpatient clinics with the rest enlisted through media advertisement and self-help groups in the UK. All participants with bipolar disorder met DSM-IV criteria for bipolar I or bipolar II disorder ascertained via the schedules for clinical assessment in neuropsychiatry interview (see "Measures" section), and were Caucasian to control for population stratification since they were originally recruited from a genetic association study (see Gaysina et al. 2009). Participants were excluded if their bipolar episodes only occurred in relation to substance misuse or a physical disorder or if they had a personal or family history of schizophrenia. Participants with bipolar disorder were not experiencing a mood episode at either of their assessments for the BaCCs and BADGE studies. (Table 1: Comparison of reports of each medical condition and history of childhood maltreatment between bipolar disorder cases and unaffected controls. Significant p values are italicised. N, number of participants; SD, standard deviation; %, percentage; P, probability due to chance. a These figures are not the sum of the derived variables as some participants report experiencing more than one type of maltreatment. b Childhood maltreatment was considered present if any type of child abuse or neglect was rated as moderate or severe. c Child abuse was considered present if any form of child abuse was rated as moderate or severe. d Child neglect was considered present if physical or emotional neglect was rated as moderate or severe.) The controls were a sub-sample of a case-control genetic association study on unipolar depression that provided information on their experience of maltreatment during childhood (see Fisher et al. 2013). The controls were aged between 24 and 68 years with a mean of 47.7 years (SD = 9.15). They were recruited through UK general medical practices and excluded if they had a personal or family history (among first degree relatives) of any psychiatric disorder.
Given that participants were drawn from genetic association studies they were Caucasian to control for population stratification. All participants were aged 18 years or over and provided written informed consent after the nature of the study and procedures were fully explained. All studies received ethical approval from either King's College Hospital or the joint South London and Maudsley and Institute of Psychiatry Research Ethics Committees. All procedures contributing to this work were conducted in accordance with the Declaration of Helsinki 1975 (revised in 2008) and the ethical standards of the national and institutional committees on human experimentation. Bipolar disorder diagnosis The Schedules for Clinical Assessment in Neuropsychiatry (SCAN), Version 2.1 interview (Wing et al. 1990) was used to ascertain a lifetime DSM-IV diagnosis of bipolar disorder. The presence and severity of the psychopathology items were rated for the worst depressive and manic episodes, separately. History of childhood maltreatment All participants completed the 28-item Childhood Trauma Questionnaire (CTQ) (Bernstein et al. 2003), which was used to record the experience of five types of childhood maltreatment (i.e. physical abuse, sexual abuse, emotional abuse, physical neglect and emotional neglect). A total of 5 items were used to measure each type of maltreatment, which were rated on a 5-point Likert scale ranging from 1 (never true) to 5 (very often true). The cut-offs for moderate to severe levels of each type of maltreatment were employed in this study in accordance with the manual (Bernstein et al. 2003). The five types of childhood maltreatment rated as moderate or severe were categorised into abuse (i.e. sexual, emotional and/or physical abuse) and neglect (emotional and/or physical neglect). Good psychometric properties have been reported for this instrument; for instance, there is high concordance between CTQ scores and therapists' ratings of childhood maltreatment (Bernstein et al. 2003). Moreover, good test-retest reliability has been found using this instrument in a sample of people with bipolar disorder (Shannon et al. 2016). Medical history All participants completed a self-report questionnaire to determine the lifetime presence of various medical illnesses (Farmer et al. 2008; Forty et al. 2014). Participants were asked whether they had been formally diagnosed with any of the following illnesses: heart problems (i.e. stroke, angina and heart attack), asthma, diabetes (I and II), arthritis (i.e. osteoarthritis, rheumatoid arthritis and other types of arthritis), hypertension, epilepsy or convulsions, osteoporosis, multiple sclerosis, emphysema or chronic bronchitis, or post herpetic neuralgia. Trained research assistants administered the questionnaire to all participants, which involved confirming that a formal diagnosis of the illness was provided by a medical professional (e.g. General Practitioner or medical consultant). Good concordance between the self-report of medical illnesses using this interview and practitioner ratings has been found (Farmer et al. 2008). Analyses Group differences were tested using Chi-square (χ2) tests, one-way ANOVAs and independent sample t tests. The Fisher's exact test was conducted if a χ2 test could not be used (e.g. expected values were less than 5). Case-control differences concerned with the association between childhood maltreatment and medical illnesses were examined using two approaches.
First, using logistic regression models when at least one medical disorder was examined and second, ordinal logistic regression models when the number of medical illnesses was the focus (none, 1 and 2 or more illnesses). Gender and age were entered as covariates, along with child maltreatment, bipolar disorder status, as well as the interaction between childhood maltreatment and bipolar disorder status. Three parallel models (for each approach) were undertaken to investigate the effect of any type of childhood maltreatment, child abuse and child neglect. Given the relatively small bipolar disorder sample size and uneven distribution of some variables, we estimated the variance in the regression models with non-parametric bootstrap with replacement (1000 replications) to obtain empirical standard error estimates without making distributional assumptions. All statistical tests were performed in STATA version 14.0; the conventional level of significance, p < 0.05, was used in this study. Results There was no significant age difference between the controls and participants with bipolar disorder (t(393) = 0.60, p = NS), but there was a significantly higher proportion of females in the bipolar group relative to the controls (χ 2 (1) = 9.34, p = 0.002). The percentage of participants reporting each medical illness and different types of childhood maltreatment are presented in Table 1. The most commonly reported medical illnesses in the sample were arthritis, asthma and hypertension. The relatively low number of participants that recorded being diagnosed with each medical illness prevented the examination of associations between specific disorders and childhood maltreatment. Thus the remaining analyses focused on either the diagnosis of at least one or the number of (none, 1 and 2 or more) medical illnesses. There were no gender differences in the diagnosis of at least one (χ 2 (1) = 0.08, p = NS) or the number of medical disorders (χ 2 (1) = 0.29, p = NS). Those individuals that reported receiving a diagnosis of at least one medical disorder were significantly older than those without a diagnosis (t(393) = 4.05, p < 0.001). A similar pattern was observed when the number of medical illnesses were examined (F(2, 392) = 9.92, p < 0.001): participants that reported 1 (mean age = 49.73 years, SD = 8.14) or at least 2 (mean age = 52.54 years, SD = 9.54) medical disorders were significantly older than the individuals that recorded none (mean age = 46.53 years, SD = 9.19) according to a Tukey post hoc test (p = 0.009, p < 0.001, respectively). Significantly more participants with bipolar disorder reported being diagnosed with at least one medical illness compared to controls (χ 2 (1) = 26.61, p < 0.001) and they also reported to have significantly more medical illnesses relative to controls (χ 2 (2) = 49.88, p < 0.001). The rates of all types of childhood maltreatment were significantly greater among the bipolar group compared to the controls (see Table 1). A moderate correlation between child abuse and neglect was detected in the entire sample (Pearson's r(426) = 0.33, p < 0.001). Logistic regression models were conducted to explore the interaction between bipolar disorder status and the history of childhood maltreatment on the diagnosis of at least one medical illness, and ordinal logistic regression models were undertaken to examine the number of medical disorders (gender and age were included as covariates in the analyses), the results of which are presented in Table 2. 
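To make the modelling strategy just described concrete, the sketch below illustrates a binary logistic regression with a maltreatment-by-bipolar-status interaction and a simple non-parametric bootstrap for one odds ratio. It is a minimal sketch in Python with statsmodels rather than the STATA 14.0 routines used in the study, and the column names (any_illness, maltreatment, bipolar, age, female) are hypothetical placeholders, not the study's actual variable names; the ordinal models for the number of illnesses would instead use an ordered-logit routine.

```python
# Minimal sketch, assuming a data frame with hypothetical 0/1 columns
# 'any_illness', 'maltreatment', 'bipolar', 'female' and a numeric 'age'.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

FORMULA = "any_illness ~ maltreatment * bipolar + age + female"

def fit_interaction_model(df: pd.DataFrame):
    """Binary logistic regression with a maltreatment x bipolar-status interaction."""
    fit = smf.logit(FORMULA, data=df).fit(disp=False)
    odds_ratios = np.exp(fit.params)  # exponentiated coefficients are odds ratios
    return odds_ratios, fit

def bootstrap_or(df: pd.DataFrame, term: str = "maltreatment:bipolar",
                 n_boot: int = 1000, seed: int = 0):
    """Empirical 95% CI for one odds ratio via resampling rows with replacement."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_boot):
        sample = df.sample(n=len(df), replace=True,
                           random_state=int(rng.integers(1_000_000_000)))
        try:
            fit = smf.logit(FORMULA, data=sample).fit(disp=False)
            estimates.append(np.exp(fit.params[term]))
        except Exception:
            continue  # skip replicates where the model fails to converge
    return np.percentile(estimates, [2.5, 97.5])
```

Stratifying the same kind of model within the bipolar and control groups separately, as reported later for Table 3, amounts to re-fitting the formula without the interaction term on each subgroup.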
Bipolar disorder status significantly interacted with both the exposure to any type of childhood maltreatment and child abuse on the diagnosis of at least one and the increased number of medical illnesses (see Table 2). The results remained significant for child abuse even when the effects of child neglect were controlled for (at least one medical illness: adjusted OR = 5.90, 95% confidence intervals (CI) 1.31-26.62, p = 0.021; number of medical disorders: adjusted OR = 4.85, 95% CI 1.30-18.06, p = 0.019). Although exposure to child neglect was associated with higher odds of having a medical illness with bipolar disorder, the results failed to reach conventional levels of significance (at least one medical illness: adjusted OR = 4.32, 95% CI 0.96-19.47, p = 0.057; number of medical disorders: adjusted OR = 3.30, 95% CI 0.89-12.22, p = 0.075). Further examination of the interactions showed that exposure to any type of childhood maltreatment, child abuse and child neglect were significantly associated with higher odds of having at least one and a greater number of medical illnesses in the bipolar group but not for the controls (see Table 3). The percentage of bipolar cases and controls with medical illnesses by each type of childhood maltreatment is visually presented in Fig. 1. (Table 2: Main and interaction effects of bipolar disorder diagnosis and childhood maltreatment on the diagnoses of medical illnesses. Significant p values are italicised. OR, odds ratio derived from binary logistic regression for 'at least one medical illness' and from ordinal logistic regression for 'number of medical illnesses'; P, probability due to chance. a Adjusted for the effects of gender and age. b Childhood maltreatment was considered present if any type of child abuse or neglect was rated as moderate or severe. c Child abuse was considered present if any form of child abuse was rated as moderate or severe. d Child neglect was considered present if physical or emotional neglect was rated as moderate or severe.) Given that there was a restricted number of participants that reported experiencing each type of child abuse (i.e. sexual, physical and emotional) and neglect (i.e. emotional and physical), analyses examining their individual and interactional effects were not possible. Discussion This study found a significant relationship between childhood maltreatment and medical illnesses in adulthood among individuals with bipolar disorder but not in unaffected controls. When the analyses were stratified by the type of childhood maltreatment the results were strongest for child abuse rather than child neglect. This is the first study to explore child neglect in this context using both controls and participants with bipolar disorder. Our findings are consistent with previous studies that have focused on mood disorders (Lu et al. 2008; McIntyre et al. 2012) and bipolar disorder specifically (Post et al. 2013). For instance, broadly defined childhood adversity, which includes child abuse (verbal, physical and sexual) and parental psychiatric diagnosis, but not child neglect, was found to be significantly associated with the overall number of medical illnesses in a sample of over 900 people with bipolar disorder (Post et al. 2013). Childhood adversity was also found to be significantly associated with specific medical illnesses in this group including arthritis, asthma, hyper- and hypotension (Post et al. 2013).
However, the results from the present investigation add to this literature by showing that the relationship between childhood maltreatment and medical illnesses is especially pertinent to bipolar disorder compared to controls. Although the relationship between childhood maltreatment and medical illnesses in adulthood has been established in the general population, this relationship may be particularly relevant to bipolar disorder for two reasons. First, high rates of childhood maltreatment have been found among people with bipolar disorder (Fisher and Hosang 2010; Palmier-Claus et al. 2016). Childhood maltreatment has also been associated with a worse clinical course among people with bipolar disorder, such as earlier age of onset and more mood episodes (Agnew-Blais and Danese 2016). Such clinical course characteristics have also been linked to the medical morbidity in bipolar disorder (Magalhães et al. 2012). Bringing together these lines of research, it is possible that childhood maltreatment may lead to an unfavourable clinical course in bipolar disorder that in turn contributes to the high medical burden observed in this illness. Although the exact mechanism that underpins this relationship is unclear, it has been postulated that it may reflect shared biological vulnerabilities, such as disruption in inflammation and oxidative systems (Magalhães et al. 2012). Alternatively, the more severe clinical course associated with childhood maltreatment (Agnew-Blais and Danese 2016) is likely to increase the need for medication treatment. The side effects of mood stabilisers and antipsychotics, including weight gain and insulin resistance (Newcomer 2007), are linked to various medical conditions, such as diabetes (Newcomer 2007), potentially explaining the link between childhood maltreatment and physical illnesses in bipolar disorder, but may have also attenuated the results here. (Table 3: Relationship between childhood maltreatment and medical illnesses, presented separately for participants with bipolar disorder and controls. Significant p values are italicised. OR, odds ratio derived from binary logistic regression for 'at least one medical illness' and from ordinal logistic regression for 'number of medical illnesses'; P, probability due to chance. a Adjusted for the effects of gender and age. b Childhood maltreatment was considered present if any type of child abuse or neglect was rated as moderate or severe. c Child abuse was considered present if any form of child abuse was rated as moderate or severe. d Child neglect was considered present if physical or emotional neglect was rated as moderate or severe.) In the current study all of the participants with bipolar disorder were on long-term medication regimens for their psychiatric illness; this confounding effect is therefore unlikely here, but more research is needed to clarify this issue. Second, the biological sequelae of childhood maltreatment, such as increased inflammation levels (Baumeister et al. 2016), are also evident in bipolar disorder (Leboyer et al. 2012) and various medical illnesses, particularly autoimmune diseases, such as arthritis and type I diabetes (Couzin-Frankel 2010). Research indicates that elevated inflammation is particularly pronounced among maltreated individuals with mood disorders even when compared to those who have a history of child abuse and neglect (Danese et al. 2008, 2011), thus potentially increasing the risk of such medical illnesses.
For example, the results from one study found that inflammation levels were significantly higher among those with a history of childhood maltreatment and depression relative to those with depression only, childhood maltreatment only and controls (without either) (Danese et al. 2011). The biological consequences of childhood maltreatment in bipolar disorder warrants further research attention to better understand the possible mechanisms that underlie its high medical burden. The results from the current investigation provide a novel contribution to the field by helping to show differential relationships between child abuse, child neglect and medical illnesses in bipolar disorder. Previous studies examined these adversities under one overarching construct of childhood maltreatment or did not explore the impact of child neglect separately (Lu et al. 2008;McIntyre et al. 2012;Post et al. 2013). The results of the present study suggest that the effect of child abuse on medical illnesses is not only significant but may also be stronger than that of child neglect in bipolar disorder. This is consistent with the results of previous studies which have shown that child abuse is associated with a list of medical disorders whereas neglect is linked to only a limited number of illnesses Norman et al. 2012). It is possible that unhealthy lifestyles may explain the stronger association between the experience of child abuse and medical disorders compared to child neglect. For example, smoking is a major risk factor for a series of medical disorders (Ezzati et al. 2002), and has been significantly associated with child abuse but not child neglect (Norman et al. 2012). The exact mechanisms behind the link between child abuse and the medical burden in bipolar disorder is unclear and warrants further investigation. The limited sample size of the bipolar group may have impacted on the study's power and is likely to have contributed to the non-significant interaction effect of child neglect and bipolar disorder status on the diagnosis of medical illnesses. Future research focusing on the biological, psychological and behavioural correlates of child abuse using larger samples would be especially informative. With replication, the findings of this study are clinically valuable since they can be used to identify a subgroup of people with bipolar disorder (those with a history of childhood maltreatment) who are at risk of poor health (Post et al. 2013) and worse clinical course (Agnew-Blais and Danese 2016). These results underscore the need for routine assessment of childhood maltreatment history in clinical practice, which would assist with the early recognition of an 'at risk' group who would benefit most from targeted prevention and intervention efforts. Family therapy or psychoeducation focused on improving the social support provided to people with bipolar disorder maybe particularly beneficial. This suggestion is based on research that shows that social support influences the risk of relapse in bipolar disorder and mediates the effect of childhood maltreatment on physical health in adulthood (Herrenkohl et al. 2016). There are a number of strengths of this study including the use of a well-characterised bipolar disorder sample and screened controls that completed validated instruments. But several limitations of the current study should be considered when interpreting the findings. 
First, the limited sample size of the bipolar group may reduce the power to detect significant effects and the ability to generalise our results. Participants with bipolar disorder in this study were recruited from across the UK through psychiatric outpatient clinics and self-help groups, so are not entirely biased or unrepresentative. Future studies should use a larger case-control sample to confirm the associations observed in this investigation. Second, childhood maltreatment and medical illnesses were assessed using self-report which has been associated with various problems (e.g. reporting accuracy) (Reuben et al. 2016). Retrospective self-report questionnaires used to assess childhood maltreatment are commonly used in both epidemiological and psychiatric studies (Norman et al. 2012; Agnew-Blais and Danese 2016). Moreover, childhood maltreatment data yielded from self-report show high concordance with case notes (Fisher et al. 2011) and therapists' ratings (Bernstein et al. 2003). Substantial agreement between the self-report medical interview used here and the health practitioner reports of the diagnoses of medical disorders has been reported (Farmer et al. 2008). Nonetheless, it would be useful for future studies to replicate the findings of the present study using practitioner reports of medical illnesses, and prospective objective assessment of childhood maltreatment. Finally, the incidence of several medical illnesses, particularly heart problems was relatively low especially compared to other studies, this precluded the examination of the specific association between childhood maltreatment and particular medical illnesses. This may have been the result of the age of the sample (median age 49 years, range 24-72 years), with the majority of participants outside of the median age of onset (58-64 years) for various heart problems, including coronary heart disease and stroke (Terry et al. 2004). But the prevalence of arthritis, hypertension and asthma in the current study is comparable to those reported in previous investigations (McIntyre et al. 2006;Perron et al. 2009). Future studies should explore the influence of childhood maltreatment on the medical morbidity in bipolar disorder using an older sample. To summarise, this is one of a limited number of studies that has examined the relationship between childhood maltreatment and the medical morbidity in bipolar disorder. This study extends previous work by exploring the differential relationship between child abuse and neglect and in this context using a sample of controls and individuals with bipolar disorder. The results of this study showed that childhood maltreatment is significantly associated with medical ill health among people with bipolar disorder but not controls. On further examination of the data, child abuse showed the strongest association with medical illnesses compared to child neglect. With more research these findings can be used to identify individuals who would benefit most from prevention and intervention efforts. Abbreviations SCAN: schedules for clinical assessment in neuropsychiatry interview; CTQ: Childhood Trauma Questionnaire; BADGE: gene-environment interplay in bipolar affective disorder; BaCCs: bipolar affective disorder case-control study. Authors' contributions GMH conducted the analyses for the manuscript, interpreted the results and drafted the manuscript. BM, PM and AF worked on the conception and design of the study and critically revising the manuscript in preparation for submission. 
HF, SC-W and UR were involved in the data analysis, interpretation of the findings and drafting the manuscript. All authors read and approved the final manuscript.
2017-10-17T03:37:22.160Z
2017-09-07T00:00:00.000
{ "year": 2017, "sha1": "87dac0cb986ba7b9d174fc40ef01ff03493ba3e2", "oa_license": "CCBY", "oa_url": "https://journalbipolardisorders.springeropen.com/track/pdf/10.1186/s40345-017-0099-z", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "87dac0cb986ba7b9d174fc40ef01ff03493ba3e2", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
247490201
pes2o/s2orc
v3-fos-license
Analysis of Occurrence Characteristics and Influencing Factors of Out-of-Hospital Induced Stress Injury in Patients with Community-Acquired Pneumonia in Respiratory Intensive Care Unit Background: We aimed to evaluate the characteristics of a stress injury in community-acquired pneumonia (CAP) patients in the respiratory intensive care unit (RICU) and analyze the risk factors, to provide evidence for clinical prevention and treatment. Methods: This retrospective study was conducted in RICU at Qilu Hospital of Shandong University, China. We selected 85 patients with traumatic CAP who were brought in from January 2019 to December 2020 as the case group and 167 patients without traumatic CAP hospitalized in the same period as the control group. Multivariate binary Logistic regression analysis was used to explore the influencing factors. Results: The incidence rate of a stress injury in 252 patients was 33.73%. The most affected region found in these patients was the sacrococcygeal region (24.26%). Most of the patients were presented in stage one (49.50%). Factors associated with a stress-induced injury in RICU, CAP patients were CURB-65 combined with cerebrovascular disease, fever combined with heart disease and albumin was found as an independent risk factor. Conclusion: Attempts to improve stress injury in CAP patients through setting measurable process of care standards are to be encouraged. An approach including the patient’s clothes and bedding should be changed frequently, nutrition should be managed and the skin of the patient should be kept clean and dry. The occurrence of stress can further be reduced by the use of protective tools and the timely participation of the family members in patient management. Introduction Stress-induced injury is a common and serious condition because it not only hurt patient but also create problems for medical staff. In clinical practice family members, physicians, and nursing staff should not only pay attention to the skin of the patient's body surface, but also pay great attention to the mucosal stress injury caused by the improper use of medical devices (1). On admission to the respiratory intensive care unit (RICU) efforts have been made to establish guidelines to reduce patient's problems by prescribing analgesics and sedative medicines. Stressinduced injury is an important public health problem that affects several organs i.e, decline patients' circulatory and respiratory vital functions, severe inflammation, nutritional imbalance. Collectively all these reasons have increased the percentage of stress-induced injury patients from 4%-49% (2). The tolerance of subcutaneous soft tissues to pressure and shearing force is affected by sensation, humidity, nutrition, microenvironment, perfusion, soft tissue condition, age, friction, shearing force, activity ability, movement ability, and other complications (3). However, during the process of nursing pressure injury and preventing pressure injury, besides paying attention to frequent turnover, nutritional support is more important with the proper applications of vasoactive and sedatives drugs . At the same time, the bed unit where the patient is located should be clean, free of debris, and dry. Whether the incidence of stress injury can reach the expected ideal standard is a very important index for evaluating the level of hospital care. 
Understanding and mastering the occurrence characteristics of a stress injury in RICU patients and the related high and medium risk factors, as well as taking reasonable, accurate, and targeted measures to prevent stress injury, can reduce the incidence of a stress injury in patients by 20%-30% (4). Therefore, it is of great significance to understand and master the high and medium risk factors related to stress injury in patients with RICU, to reduce the occurrence of stress injury and effectively prevent stress injury. Therefore, for nursing managers, it is very important to correctly, promptly, and reasonably identify the high-risk environmental factors causing stress-induced injury and to reasonably predict the risk of stress-induced injury, so that appropriate and targeted preventive measures can be taken to achieve the standard of nursing control and reduce the suffering and burden of patients. We used a case-control study to explore the incidence of out-of-hospital stress injuries in patients with community-acquired pneumonia in the respiratory intensive care unit, their occurrence characteristics and influencing factors, to provide a theoretical basis for clinical prevention and treatment. General Information This retrospective investigation was performed on patients admitted to Qilu Hospital of Shandong University (Jinan, Shandong, China) in the RICU from January 2019 to December 2020. All patients agreed to informed consent in this study. This study was conducted with the approval of the Ethics Research Committee of Qilu Hospital of Shandong University. Patients included in this study should have met the following criteria: 1) All patients had community-acquired pneumonia confirmed by chest CT; 2) Those patients who had complete data. Exclusion criteria were concomitant skin diseases such as burns, pyrosis, and systemic lupus erythematosus. Data collection method The studied sample consisted of 252 consecutive patients. All the patients were allocated into two groups. Community-acquired pneumonia patients with stress injury (85 cases) were included in the 'case group' while patients without stress injury were in the 'control group' (167 cases). On admission, detailed information (including the location and stage of any stress injury) of all patients was collected through the nursing and medical electronic system. The stress-induced injury was divided into 4 stages (stages 1-4) as well as deep tissue injury and non-staging. The contents of clinical data collection included ① general data. ② Clinical data: admission evaluation form (including BMI and fever). ③ Accompanying diseases: combined heart disease and cerebrovascular disease. ④ Laboratory examinations: Blood samples (hemoglobin, albumin, pre-albumin, etc.) were taken on an empty stomach on the following morning of admission. ⑤ CURB-65 classification (disorder of consciousness, urea nitrogen, respiratory frequency, blood pressure, and age). Statistical method SPSS 17.0 software (Chicago, IL, USA) was used for data entry. Independent sample t-test, rank-sum test, χ2 test, and multivariate binary Logistic regression analysis were used. Patient general data A total of 252 patients with RICU CAP were enrolled. The average age in the control group was (64.81 ± 17.15) yr and the average age of the case group was (71.04 ± 12.98) yr, with 90 males (53.9%) and 77 females (46.1%) in the control group.
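For illustration, the group comparisons listed in the statistical method above (independent-sample t-test, rank-sum test and χ2 test between the case and control groups) could be sketched as follows. The analyses in this study were carried out in SPSS 17.0; the Python sketch below only shows the same kinds of tests, and the column names it uses (stress_injury, age, albumin, heart_disease) are hypothetical placeholders.

```python
# Illustrative sketch only, assuming a data frame with a 0/1 'stress_injury' flag
# and hypothetical columns 'age', 'albumin' and 'heart_disease'.
import pandas as pd
from scipy import stats

def compare_groups(df: pd.DataFrame) -> dict:
    case = df[df["stress_injury"] == 1]
    ctrl = df[df["stress_injury"] == 0]

    # Independent-sample t-test for an approximately normal continuous variable.
    _, p_age = stats.ttest_ind(case["age"], ctrl["age"])

    # Non-parametric rank-sum (Mann-Whitney U) test for skewed laboratory values.
    _, p_albumin = stats.mannwhitneyu(case["albumin"], ctrl["albumin"],
                                      alternative="two-sided")

    # Chi-square test on a 2x2 contingency table for a categorical factor.
    table = pd.crosstab(df["stress_injury"], df["heart_disease"])
    chi2, p_heart, dof, _ = stats.chi2_contingency(table)

    return {"age_p": p_age, "albumin_p": p_albumin, "heart_disease_p": p_heart}
```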
Among the 252 patients with CAP, 85 cases (33.73%) were diagnosed with stress-induced injury with a total of 202 stress-induced injuries. Out of 85 cases, 31 cases (36.47%) were diagnosed with injury at one point while 54 cases (63.53%) with injuries at more than two points ( Table 1). Independent sample t test, rank sum test, χ 2 test and multivariate binary Logistic regression analysis were used. Pre-albumin, albumin and CURB65 were tested using non-parametric rank sum test. The results of the comparison between the control group and the case group are shown in Table 2. The χ 2 test was performed on BMI classification, heart disease, cerebrovascular disease, and fever, and the comparison was made between the control group and the case group, as shown in Table 3. With the occurrence of stress-induced injury as the dependent variable (0= none, 1= yes), the variables (P<0.05) in Tables 2 and 3 were included in the multi-factor binary Logistic regression analysis. The diagnosis by multi-collinearity showed that there was no colinearity, and all factors were included in the analysis. The variables finally entering the equation as shown in Table 4. Discussion In this study, the incidence of stress-induced injury in RICU CAP patients was 33.73% with a total of 202 stress-induced injuries in 85 patients brought to the hospital. Our results showed that the two most affected regions were sacrococcygeal (24.26%) and heel region (12.87%) which is consistent with the previous results (5). One of the reason could be that most patients with severe CAP were bedridden for a long time. Buttocks were vulnerable to pressure and shear forces during lateral decubitus position, however, nurses and family members showed uneven execution force for turning over; which results injury in sacrococcygeal and buttocks regions. Therefore, nursing staff should actively strengthen the skin in sacrococcygeal and hip areas with high incidence of pressure injury in patients with CAP by closely observing the skin condition, keeping the skin clean and dry, and using appropriate local decompression tools. Most of the CAP patients (49.505) enrolled in our study were in stage 1 stress injury (Table 2) which is consistent with the previous study (6). Among all the stages of stress-induced injury, stage 1 was the most sensitive to nursing intervention, and therefore, ideal curative effects could be achieved by timely relieving stress and minimizing risk factors. Thus, training of nursing staff on relevant knowledge play an important role. Their ability to identify stage 1 stressinduced injury should be improved, and the correct intervention methods should be mastered in order to reduce the incidence of stress-induced injury. This study showed that for every 1-point increase in the CURB-65 score in patients with RICU CAP, the risk of stress-related injury increased by 35.8%. In the domestic and foreign guidelines for diagnosis and treatment of CAP, the CURB-65 score is often used to initially predict and evaluate the severity of pneumonia in patients (7). CURB-65 for patients includes patients' consciousness, urea nitrogen, respiratory frequency, blood pres-sure, and age. The higher the CURB-65 score, the more serious will be the CAP patients. Similarly high CURB-65 score affects the lifestyle of the patients and that's why they have a low ability to take care of themselves. 
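Since the CURB-65 score drives one of the main findings above, a small sketch of the commonly used scoring rule may help. The thresholds below follow the widely cited definition (one point each for confusion, urea > 7 mmol/L, respiratory rate ≥ 30/min, systolic blood pressure < 90 mmHg or diastolic ≤ 60 mmHg, and age ≥ 65 years) and may differ in detail from the exact operationalisation used by the authors.

```python
# Hedged sketch of CURB-65 scoring (0-5); not necessarily the authors' exact rule.
def curb65(confusion: bool, urea_mmol_l: float, resp_rate: float,
           systolic_bp: float, diastolic_bp: float, age: float) -> int:
    score = 0
    score += int(confusion)                               # new-onset confusion
    score += int(urea_mmol_l > 7.0)                       # blood urea > 7 mmol/L
    score += int(resp_rate >= 30)                         # respiratory rate >= 30/min
    score += int(systolic_bp < 90 or diastolic_bp <= 60)  # low blood pressure
    score += int(age >= 65)                               # age >= 65 years
    return score

# Example: an 80-year-old with confusion, urea 9 mmol/L, RR 32 and BP 85/55 scores 5.
assert curb65(True, 9.0, 32, 85, 55, 80) == 5
```

Higher scores indicate more severe pneumonia, which is consistent with the reported 35.8% increase in stress-injury risk per additional CURB-65 point.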
The results of our study revealed that the risk of a stress injury in patients with RICU CAP combined with heart disease is 1.976 times that of the average CAP patient; therefore, the skin care of CAP patients combined with heart disease should be improved. Our data showed that severe CAP patients combined with heart disease were prone to heart failure, respiratory fatigue, and massive sweating. The pain perception of the patients was distracted, and attention was focused on dyspnea. The pain caused by pressure might be masked by symptoms such as dyspnea. The patients could not feel the pain stimulation caused by excessive pressure, thus affecting skin pain perception and causing the local skin of the patient's body to be in a state of high pressure for a long time, thereby increasing the risk of stress-induced injury. The survey finding showed that the risk of a stress injury in patients with RICU CAP combined with cerebrovascular disease was 2.944 times that of common CAP patients. In most patients with CAP combined with cerebrovascular disease, consciousness disorders and imbalance in sensory function were found. The patients would not automatically change the lying position or could not correctly describe to the nursing staff in assisting to change the lying position, which affected the skin sensation and perception. The local skin of patients is under high pressure for a long time, which increases the risk of stress injury. Therefore, cluster nursing should be applied to patients with CAP combined with cerebrovascular diseases and consciousness disorder or decreased self-care ability. Improving the grasp of pressure ulcer-related knowledge among patients and caregivers can further reduce the incidence and severity of pressure ulcers in patients (8). The results of the survey showed that the incidence of a stress injury in patients with CAP and fever was 2.827 times that of patients with CAP without fever. The tolerance of soft tissues to pressure and shear forces may be affected by the microenvironment, nutritional status, perfusion status, concomitant disease, skin and soft tissue conditions (9). Fever is a common symptom in CAP, and great attention should be paid to the care of the skin and the bed unit of the patients with fever. The skin is stimulated due to fever and sweating, and the protection function of the skin is imbalanced due to pH change. Once the tissue is under high pressure for a long time, the skin of the body will suffer from an ischemic and hypoxic environment. In addition, the energy supplied by nutrition from the body is far from enough to maintain the high consumption state of the body due to fever, which will increase the risk of stress injury. This study also showed that the incidence of stress-related injury increases by 9.30% when the albumin level decreases in patients with CAP. Patients with CAP combined with hypoproteinemia need high nutritional support. As these patients suffer from consciousness disorder or dysphagia for various reasons, some patients cannot receive enteral/parenteral nutrition due to refusal of nasogastric feeding, poor vascular condition, or family economic difficulties, which ultimately results in hypoproteinemia. Most patients in intensive care units with dysphagia and disorders of consciousness (10) are in a stress response state because the body consumption rate is high due to the onset of illness.
If nutrition supplement is not timely or adequate, the body is in an imbalanced nutritional state, which will affect the skin elasticity and the self-recovery ability of the tissue, and thus the risk of stress injury is greatly increased. Human albumin improves local blood circulation by increasing circulating blood volume, while maintaining plasma osmotic pressure, increasing protein content, and promoting fresh granulation growth. Therefore, the healing of the sore surface is promoted and the pain of patients is reduced (11). Nutritional intake can effectively reduce the occurrence of stress injury, improve nutritional status and reduce the cost of nutritional support (12). In this study, we did not find a correlation between our patient's hemoglobin and the occurrence of stress injury. Anemia will reduce the oxygen content in the blood, and the hypoxia of tissues will be more serious under the condition of pressure ischemia. As a result, the tolerance to pressure will be reduced and the risk of stressinduced injury will be increased (13). However, no consistent result has been obtained in this study, which may be related to the insufficient sample size. Conclusion Risk factors for stress-related injury in patients with RICU CAP were CURB-65, concomitant cerebrovascular disease, fever, combined with heart disease and albumin. In all the various stages of stress injury, stage 1 stress injury was found to be the most sensitive stage. Further measures such as eliminating/decreasing pressure and other risk factors, could be helpful to restore the stress-related injury to the normal level. Therefore, training on the prevention of stress injury for nursing staff should be strengthened to improve their identification ability of stage 1 stressrelated injury, and to master the correct and targeted intervention methods to avoid the occurrence and further deterioration of -related injury. Ethical considerations Ethical issues (Including plagiarism, informed consent, misconduct, data fabrication and/or falsification, double publication and/or submission, redundancy, etc.) have been completely observed by the authors. tion and control of Acinetobacter baumannii infection in patients with AECOPD in RICU.
2022-03-17T15:24:28.546Z
2022-03-01T00:00:00.000
{ "year": 2022, "sha1": "4450b8558b0bd6ec9b378dd9a3ac2d619a2860ef", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.18502/ijph.v51i3.8932", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "50a5acfe28dca3965b410b973d1be372a67beabb", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
244102902
pes2o/s2orc
v3-fos-license
Interior epsilon-regularity theory for the solutions of the magneto-micropolar equations with a perturbation term We develop here a particular version of the partial regularity theory for the Magneto-Micropolar equations (MMP) where a perturbation term is added. These equations are used in some special cases, such as in the study of the evolution of liquid cristals or polymers, where the classical Navier-Stokes equations are not an accurate enough model. The incompressible Magneto-Micropolar system is composed of three coupled equations: the first one is based in the Navier-Stokes system, the second one considers mainly the magnetic field while the last equation introduces the microrotation field representing the angular velocity of the rotation of the fluid particles. External forces are considered and a specific perturbation term is added as it is quite useful in some applications. Introduction Micropolar equations were introduced in 1966 by Eringen [6] and were first studied mathematically in 1997 by Galdi & Rionero [8]. Some very recent results concerning the regularity of the solution to this system were obtained in [9,17] (see also the references there in). In this article we will consider a slightly more general framework by introducing a magnetic field, some external forces and a perturbation term. We will develop here the ǫ-regularity criterion which was not, to the best of our knowledge, treated before for this type of problem. The incompressible 3D-Magneto-Micropolar (MMP) system studied in this article is of the following form div( U ) = div( F ) = div( B) = div( G) = div( a) = 0, U (0, x) = U 0 (x), B(0, x) = B 0 (x), ω(0, x) = ω 0 (x), x ∈ R 3 and div( U 0 ) = div( B 0 ) = 0. Here U denotes the fluid velocity field, B is the magnetic field, ω is the field of microrotation representing the angular velocity of the rotation of the fluid particles and p is the scalar pressure. The quantities F and G represent external forces (assumed divergence free) and they are given as well as the initial data U 0 , B 0 and ω 0 . The perturbation a which appears in the first equation above in the term div( U ⊗ a + a ⊗ U ) is a given divergence free vector field and the presence of this particular type of perturbation is mainly inspired by quantitative studies for the rate of possible blow-up for the Navier-Stokes equations (see in particular the article [2]), see also the book [16,Section 12.6] for other interesting applications of this type of perturbation. As pointed out in the Remark 1.1 below, the assumptions over a will have some impact in the general set of hypotheses needed in order to perform our computations. Now, in order to simplify the computations we introduce the Elsasser formulation, which was initially used for the Magnetohydrodynamics equations (MHD) see [7]: indeed, by a suitable change of variables we will obtain a more symmetric problem and if we define u = U + B, b = U − B, f = F + G and g = F − G, then for all x ∈ R 3 we can write , ω(0, x) = ω 0 (x) and div( u 0 ) = div( b 0 ) = 0. (1.1) It is worth to remark here that as long as we want to perform a generic study for the functions u and b, this previous system presents a simpler framework and thus, for the rest of the article we will focus ourselves in this formulation. 
We remark also that since div( u) = div( b) = 0, then we can deduce from (1.1) that the pressure p satisfies the equation 2∆p = −div(( b · ∇) u) − div(( u · ∇) b) − div(div(( u + b) ⊗ a + a ⊗ ( u + b))), (1.2) and we see from this expression that the pressure p is only determined by the couple ( u, b) (recall that a is given) and we will see how to exploit this relationship later on. We are interested here in studying some properties of (local) weak solutions of the system (1.1) and in order to fix the notation we consider now Ω a bounded subset of ]0, +∞[×R 3 of the form Ω =]a, b[×B(x 0 , r), with 0 < a < b < +∞, x 0 ∈ R 3 and 0 < r < +∞. (1.3) and we will say that ( u, b, ω) ∈ L ∞ t L 2 x ∩ L 2 Based in the classical Navier-Stokes problem, we can study at least two main regularity theories for the MMP equations: the local regularity theory (also known as the Serrin criterion, see [18], [21]) and the ǫ-regularity criterion (also known as the partial regularity theory, based in the seminal work of Caffarelli, Kohn and Nirenberg [4], see also [10] and [11,12,13]). As said previously, in this article we want to develop a particular version of the ǫ-regularity criterion for the system (1.1) and we need to impose some assumptions over the functions u, b and ω as well as some hypothesis over pressure p and from now on we will always assume that we have the following controls f , g ∈ L 10 7 t,x (Ω), (1.4) where Ω is a subset of R × R 3 of the form (1.3). Remark 1.1 The conditions over u, b, ω and f , g are rather classical in the setting of equations arising from fluid dynamics. Note that the connection between the perturbation term a and the pressure p is explicitly given in the relationship (1.2) above, thus if we assume the local integrability condition a ∈ L 6 t,x (Ω) (which appears naturally in some recent results, see [2]), then following our computations we need to impose the condition p ∈ L 3 2 t,x (Ω) ∩ L 5 2 t L 1 x (Ω) for the pressure. Observe that conditions of the form L q t L 1 x (Ω) for the pressure were also considered in the setting of the Navier-Stokes equations, see [22]. Finally note that the (L ∞ t,x ) loc information is usually asked in regularity theory, but in this work we only assume it for the variable ω (not for u nor for b) and this will be crucial to study the term ∇div( ω) which appears in the micropolar equation (1.1). See also Remark 4.2 below, where alternative and more general assumptions are given for the variable ω. Remark 1. 2 We do not claim here any optimality on the space L 6 t,x (Ω) related to the perturbation term and we believe that it is perhaps possible to consider a slightly more general perturbation term by asking a ∈ L m t,x (Ω) for m ≥ 5, however, as far as we can see, this will introduce some quite difficult technical problems and will probably induce some extra hypotheses over the pressure. On the other hand, if we assume some additional information (say a ∈ L 2 tḢ 1 x (Ω)), then we can relax the hypotheses on the pressure and work only with p ∈ L 3 2 t,x (Ω). Once this local framework is clear, we can now introduce a special class of weak solutions: Definition 1.1 (Suitable solutions) Let ( u, b, ω, p) be a weak solution over Ω for the perturbed magnetomicropolar equations (1.1) which satisfies the local hypotheses (1.4) stated above. 
We say that ( u, b, ω, p) is a suitable solution if the distribution µ given by the expression µ = −∂ t (| u| 2 + | b| 2 + | ω| 2 ) + ∆(| u| 2 + | b| 2 + | ω| 2 ) − 2(| ∇ ⊗ u| 2 + | ∇ ⊗ b| 2 + | ∇ ⊗ ω| 2 ) −div (| b| 2 + 2p) u + (| u| 2 + 2p) b + 1 2 (| u| 2 + | b| 2 ) ω + 2 ∇div( ω) · ω − 2| ω| 2 is a non-negative locally finite measure on Ω. Remark 1.3 It is worth noting here that the local hypotheses stated in (1.4) guarantee that each one of the terms in the previous expression is meaningful. The main purpose of this article is to prove the following theorem which gives a gain of regularity in space and time variables for suitable solutions. for some indexes τ a , τ b > 5 2−α with 0 < α < 1 12 . There exists a positive constant ǫ * which depends only on τ a and τ b such that, if for some (t 0 , x 0 ) ∈ Ω, we have lim sup r→0 1 r ]t 0 −r 2 ,t 0 +r 2 [×B(x 0 ,r) | ∇ ⊗ u| 2 + | ∇ ⊗ b| 2 + | ∇ ⊗ ω| 2 dxds < ǫ * , then ( u, b, ω) is Hölder regular (in the time and space variables) of exponent α in a neighborhood of (t 0 , x 0 ) for some small α in the interval 0 < α < 1 12 . Some remarks are in order here. • Following standard procedures it is possible to construct Leray-type weak solutions for the problem (1.1). However we are only interested here to study the local behavior (for some points of the subset Ω) of the solutions of such equations. • The hypothesis over the pressure p (i.e. p ∈ L 3 2 t,x (Ω)) is useful to give a sense to the quantities div(p u) and div(p b) that are present in the definition of the measure µ given in (1.5). It is worth noting that in the setting of the classical Navier-Stokes equation this hypothesis can be removed and a generic pressure p ∈ D ′ can be considered. See [5] for the details. • Some additional hypothesis over the external forces f and g are stated in Morrey spaces. We will see in the computations below that this functional framework is particularly well suited to the study of the regularity for this type of equations. The plan of the article is the following: in Section 2 we recall some notation and useful facts about our framework. In Section 3 we establish a first gain of regularity under some particular hypotheses stated in terms of Morrey spaces. The rest of the article (Sections 4, 5 and 6) is devoted to the proof of these hypotheses. Notation and functional spaces < +∞ with the usual modifications when p = +∞ or q = +∞. We also define the space L p ([0, +∞[,Ḣ s (R 3 )) with 1 ≤ p ≤ +∞ and s > 0 as the set of distributions such that f L p is the usual homogeneous Sobolev space. See the books [15] and [16] for details about these functional spaces. We recall now the notions of parabolic Hölder and Morrey spaces and for this we need first to consider the homogeneous space (R × R 3 , d, µ) where d is the parabolic (quasi)distance given by d (t, x), (s, y) = |t − s| 1 2 + |x − y| and where µ is the usual Lebesgue measure dµ = dtdx. Associated to this distance, we define homogeneous (parabolic) Hölder spacesĊ α (R × R 3 , R 3 ) where 0 < α < 1 by the usual condition: (2.1) and this formula studies Hölder regularity in both time and space variables. Now, for 1 < p ≤ q < +∞, parabolic Morrey spaces M p,q t,x are defined as the set of measurable functions ϕ : These spaces are generalization of usual Lebesgue spaces, note in particular that we have M p,p t,x = L p t,x . See [1] for more details on these spaces. 
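For reference, the parabolic Hölder and Morrey definitions alluded to above can be written out explicitly. The displays below use the standard normalisation, chosen to be compatible with the parabolic distance d and with the stated identity M^{p,p}_{t,x} = L^p_{t,x}; the paper's own displays (2.1)-(2.2) may differ slightly in presentation, and the product (Hölder) inequality at the end is the standard statement used repeatedly in the estimates that follow.

```latex
% Parabolic Hoelder seminorm and parabolic Morrey norm (standard normalisation).
\[
  \|f\|_{\dot{\mathcal C}^{\alpha}}
  =\sup_{(t,x)\neq(s,y)}
   \frac{|f(t,x)-f(s,y)|}{\big(|t-s|^{\frac12}+|x-y|\big)^{\alpha}},
  \qquad 0<\alpha<1,
\]
\[
  \|\varphi\|_{\mathcal M^{p,q}_{t,x}}
  =\sup_{x_0\in\mathbb R^{3},\,t_0\in\mathbb R,\,r>0}
   \left(\frac{1}{r^{5\left(1-\frac{p}{q}\right)}}
   \iint_{Q_r(t_0,x_0)}|\varphi(t,x)|^{p}\,dt\,dx\right)^{\frac1p},
  \qquad 1<p\le q<+\infty,
\]
% where Q_r(t_0,x_0) = ]t_0-r^2,t_0+r^2[ x B(x_0,r) is the parabolic ball; with this
% normalisation M^{p,p}_{t,x} = L^p_{t,x}. The Hoelder inequality in Morrey spaces reads
\[
  \|f\,g\|_{\mathcal M^{p_0,q_0}_{t,x}}
  \le \|f\|_{\mathcal M^{p_1,q_1}_{t,x}}\,\|g\|_{\mathcal M^{p_2,q_2}_{t,x}},
  \qquad
  \frac{1}{p_0}=\frac{1}{p_1}+\frac{1}{p_2},\quad
  \frac{1}{q_0}=\frac{1}{q_1}+\frac{1}{q_2}.
\]
```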
We refer the readers to the book [16] for a general theory concerning the Morrey spaces and Hölder continuity applied to the analysis of PDEs from fluid mechanics. Here are some useful fact concerning Morrey spaces: 3) More generally, let 1 ≤ p 0 ≤ q 0 < +∞, 1 ≤ p 1 ≤ q 1 < +∞ and 1 ≤ p 2 ≤ q 2 < +∞. If 1 and g ∈ M p 2 ,q 2 t,x , we have the following Hölder inequality in Morrey spaces f · g M p 0 ,q 0 A parabolic gain of regularity: the first step The proof of Theorem 1.1 is essentially based in the following regularity result for parabolic equations which is stated here in the framework of (parabolic) Morrey spaces: where σ is a smooth function on R 3 \ {0}, homogeneous of exponent 1 1 and σ(D) is the Fourier multiplier operator of symbol σ (acting component-wise). See [16,Proposition 13.4] for a proof of this result, see also [14]. We will apply this proposition to our system (1.1) but, as we only assume the controls (1.4) over a subset Ω of the form (1.3), we need to localize our framework and for this we first fix the point (t 0 , x 0 ) ∈ Ω considered in the hypotheses of Theorem 1.1 and then for a small enough radius 0 < r < 1, we consider the parabolic ball such that Q 5r (t 0 , x 0 ) ⊂ Ω (these parabolic balls will be denoted by Q r for simplicity). Note here that since by (1.3) we have Ω =]a, b[×B(x 0 , r) with 0 < a < b < +∞ and x 0 ∈ R 3 , then the condition Q 5r (t 0 , x 0 ) ⊂ Ω guarantees the fact that t 0 − r 2 > 0 and thus the time interval ]t 0 − r 2 , t 0 + r 2 [ does not contain the origin: this condition is important in order to obtain a system of the form (3.1) for which the initial data is such that v(0, x) = 0. Now, we construct an auxiliary non-negative function φ : we define the localizing function η by (remark that we have supp η ⊂ Q R ) and we define the vector U = η( u + b + ω). As we can observe, we have the identity U = u + b + ω over a small neighborhood of the point (t 0 , x 0 ) and the support of the variable U is contained in the parabolic ball Q R (t 0 , x 0 ) ⊂ Q r (t 0 , x 0 ) ⊂ Ω. Moreover, this localization forces the property U (0, ·) = 0, we can thus consider the following problem: (3.4) where the vector B is given by the scalar function β is given by 6) and the tensor B is given by Indeed, in order to verify that we have the equation (3.4) with the terms (3.5), (3.6) and (3.7) above, we compute ∂ t U and we have We use now the identity to obtain the expression which is the first step to obtain an equation of the form (3.4). We need now to organize the expression above in a suitable manner and for this we need to rewrite three particular terms, indeed, since we have the identities η( ∇p) = ∇(ηp)−( ∇η)p, ηdiv( a ⊗ u) = div(η( a ⊗ u))−( a⊗ u)· ∇η and η ∇div( ω) = ∇(ηdiv( ω))−( ∇η)div( ω), we obtain We recall now that, from the expression (1.2) and using the fact that div( u) = div( b) = 0, we have the following identity for the pressure p = div which is (3.4) as announced with the terms B, β and B given in (3.5), (3.6) and (3.7), respectively. Once we have deduce the equation (3.4), in order to obtain the conclusion of the Theorem 1.1 it is enough by Lemma 3.1 to verify that we have In the next proposition we will prove that under some extra hypothesis over the quantities u, b, ω (that will be proven in the next sections) the terms B, β and B belong to the suitable Morrey spaces mentioned above. where R is fixed by the condition (3.3) above. Let ( u, b, ω, p) be a suitable solution for the equations MMP (1.1) over Ω. 
Assume that we have the following points: with p 0 ≤ p < +∞ and q 0 < q 1 ≤ q < +∞. Note that the conclusion of this proposition is exactly the input needed to apply Proposition 3.1 from which we will obtain the wished gain of regularity. Proof of the Proposition 3.2 In order to prove this proposition, and for the time being, let us take for granted the assumptions 1) -6) above and let us prove that the quantities B, β and B belong to the announced Morrey spaces. For the first term of (3.10), since we have x for τ 0 > 5 and since we have the support property supp (∂ t η − ∆η) ⊂ Q R , it follows by Lemma 2.2 (as we have 1 ≤ p 0 ≤ 6 5 < 3 and q 0 < 3 < τ 0 ) that where we used the information available in the point 1) of the Proposition 3.2. For the term (4) of (3.10), due to the symmetry of the information available in the point 5) of the Proposition 3.2, it is enough to study the following term for 1 ≤ i, j ≤ 3 and due to the support properties of the function η, we obtain where we applied Lemma 2.2 with p 0 ≤ p and q 0 ≤ q. For the terms (5), (8) and (6) of (3.10) can be treated in the same manner, indeed, by the assumption 2) of Proposition 3.2 we have where we used Lemma 2.2 with p 0 ≤ 6 5 < 2 and q 0 < τ 1 (see Remark 3.1). By essentially the same arguments, using the point 6) of Proposition 3.2 (and since p 0 ≤ 6 5 < 10 7 and τ a , For the term (7) of (3.10), as we have the same information over u and b we only need to study (for where we used the Hölder inequality in Morrey spaces, Lemma 2.2 (with 2 < 6 and τ 1 < 6 by Remark 3.1), the point 4) of Proposition 3.2 and the fact that M 6,6 t,x = L 6 t,x . For the term (9) of (3.10) we easily deduce η ω M p 0 ,q 0 < +∞ (by Lemma 2.2 since p 0 < 3 and q 0 < τ 0 ). Due to the symmetry of the information available for the terms u, b and ω and following the same ideas displayed in (3.11), we have η(( u + b) · ∇) ω M p 0 ,q 0 t,x < +∞. Finally, since by the point 3) of Proposition 3.2 we have 1 Q R 1 ∇∧ u and 1 Q R 1 ∇∧ b ∈ M 2,τ 1 t,x , and since p 0 < 2 and q 0 < τ 1 , by Lemma 2.2 we obtain η ∇ ∧ ) < +∞. We thus have: • For β. By the expression (3.6), we have, for 1 < p 0 ≤ 6 5 and q 1 = 5 1−α with 0 < α < 1 12 , Since by the point 3) of Proposition 3.2 we have 1 Q R 2 div( ω) ∈ M t,x and since p 0 ≤ 6 5 and q 1 < 12 5 (see Remark 3.1), then, by Lemma 2.2 we have for the first term fo the right-hand side above: For the second term of the right-hand side of (3.12), we use the point 5) of Proposition 3.2 and due to the symmetry of the information available, it is enough to study, for 1 ≤ i, j ≤ 3 the term η , and we write where we applied Lemma 2.2 with p 0 < p and q 1 < q. • For B. By (3.7) we need to study the quantity B M p 0 ,q 1 , for the sake of simplicity we only study η a ⊗ u M p 0 ,q 1 t,x as the other terms can be treated in the same manner. We thus have where we used the Hölder inequalities in Morrey spaces with 1 Since by the point 4) of Proposition 3.2 the index δ ≫ 1 can be chosen big enough such that δ ′ < 6, thus we have by Lemma 1 12 , and thus the proof of Proposition 3.2 is finished. Local Energy Inequality and Useful estimates In order to obtain some of the assumptions stated in Proposition 3.2, we will exploit the information given by the local energy estimate that can be deduced from the structure of the equation (1.1). We know from the work of Scheffer [19,20] that the use of a special test function is particularly helpful to obtain good estimates. 
We will use the following function: where ω ∈ C ∞ 0 (R × R 3 ) is non-negative function supported on the parabolic ball Q 1 (0, 0) and is equal to 1 on Q 1 2 (0, 0) (see formula (3.2)), θ is a non-negative smooth function such that θ = 1 on ] − ∞, 1[ and θ = 0 on ]2, +∞[ and g t (·) is the usual heat kernel. Then, we have the following points 1) the function φ is a bounded non-negative function, and its support is contained in the parabolic ball Q ρ , and for all (s, See the book [16, Section 13.9] for a proof of this lemma. See also the Appendix B of [10]. Now, with the help of this function we have the local energy inequality: , ω, p) be a weak solution of the MMP equation (1.1) over a subset Ω of the form (1.3) and assume that φ is the function given in (4.1). Then the local energy inequality for the MMP equation is given by Proof. In order to deduce the local energy inequality announced, we multiply the three first equations of the system (1.1) by φ u, φ b and φ ω respectively and we integrate in the space variable to obtain Recalling that we have the generic identity are valid for any (smooth) divergence free vector field c, we obtain after some integration by parts and after an integration in the time variable: since u, b and a are divergence free vector fields, we easily see that the quantity for the last line above we will use the identity for divergence free vector fields, and using the bilinear structure of the terms, we have and we finally obtain and this ends the proof of Proposition 4.1. Once we have obtained this inequality, we will make use of the properties of the test function φ given in Lemma 4.1 in order to obtain suitable controls that will be used in the next section. Indeed, by introducing some scaled quantities it would be possible to exploit the previous inequality (4.2) to deduce by an inductive argument some stability of this scaled quantities in terms of Morrey spaces. In this sense we have the following definition. we consider the following scaled functions: Now we define the following invariant quantities with respect of the previous scaling: Remark 4.1 From the definition above we easily deduce the identities (rA r ) and similar identities for the variables b and ω. As announced, we will use these quantities to deduce two main estimates, which are stated in Proposition 4.2 and Proposition 4.3. In the next lemma we prove some useful relationships between some of the previous terms given above. Proof. We only detail the proof of the first estimate as the two others follow the same arguments. Thus, by the expression of λ r given in the Definition 4.1 and Hölder's inequality, we have the estimate λ . Now, using an interpolation inequality we have the control L 6 (Br ) and applying the Hölder inequality with respect to the time variable, we obtain u x norm of u, we use the classical Gagliardo-Nirenberg inequality (see [3]) to obtain u L 2 x (Qr) and using Young's inequalities we have Noting that u L ∞ t L 2 x (Qr) = r A first estimate We give now the first general inequality that bounds all the terms given in the Definition 4.1. Proposition 4.2 (First Estimate) Under the hypotheses of Theorem 1.1, for 0 < r < ρ 2 < 1, we have t,x (Ω) Remark 4.2 Note that the hypothesis ω ∈ L ∞ t,x (Ω) is crucial at this step. It can be relaxed assuming for is the exponent of the expected Hölder regularity. Proof. 
It is worth noting here that the structure of this estimate follows closely the one of the local energy inequality given in (4.2) and in order to deduce this control, we will start estimating the terms of the right-hand side of (4.2). • Indeed, by the point 4) of Lemma 4.1 and using the quantities introduced in Definition 4.1 we have, for the first term of the right-hand side of (4.2): • For the second term of the right-hand side of (4.2) we have: and we will study the two previous terms separately. For the first term of the right-hand side above we introduce the quantity (| u| 2 ) ρ as the average and since b is divergence free, for any test function ψ compactly supported within B(x, ρ), we have Then, since the test function φ is supported in the parabolic ball Q ρ (by Lemma 4.1) and using Hölder's inequality, it follows that where we used the fact that ∇φ L ∞ ≤ C r 2 (by the point 3) of Lemma 4.1). Thus, by the Poincaré inequality and using the Hölder inequality (in space and time variable), we obtain ρ . Using the second inequality of the Lemma 4.2 we obtain 6) and this control ends the study of the first term of the right-hand side of (4.8). For the second term of (4.8), we simply write (using the properties of the function φ given in Lemma 4.1 as well as the quantities given in Definition 4.1 and Lemma 4.2): With estimates (4.6) and (4.7), coming back to (4.4) we finally obtain • The third term of (4.2) can be treated in a completely symmetric manner and we have the estimate: • For the fourth term of (4.2) we have and due to the symmetry of the information available it is enough to study one of the terms above. We thus write, by the properties of the function φ given in Lemma 4.1: x (Qρ) and the Definition 4.1. Thus, with the second term involving ( ∇ ∧ ω) · (φ b) we finally obtain the estimate: • For the term related with f , g in (4.2) we have by the properties of the function φ given in Lemma 4.1: . Recalling the control u x (Qρ) ) and since we have the identities and ρ 1 2 Gρ t,x (Qρ) , we obtain: • For the sixth term of (4.2) we have, by the properties of the function φ given in Lemma 4.1, by the Hölder inequalities and by the Definition 4.1 from which we obtain 2 : • For the seventh term of (4.2) we need to study the following quantity [( a · ∇) u] · (φ u) dxds and we write, by the Hölder inequalities: Performing the same computations for the remaining terms of (4.10) we have t,x (Ω) . • The eighth term of (4.2) is · a (s, x)dxds and again, it is enough to study the following generic term which contains the term u · ∇ φ u · a and we have t,x (Ω) , where we used the properties of the function φ given in Lemma 4.1, the Definition 4.1 and the Lemma 4.2. Thus, considering the remaining terms we can write • For the ninth term of (4.2) we have to consider the quantity has the same structure of the first term of the right-hand side of (4.4) and thus, by the same arguments we obtain • The last term of (4.2) is given by the expression ] · (φ ω)dxds and we remark that it is of the same structure of the term (4.9), so we obtain Once we have estimated all these terms, in order to obtain (4.3) it is enough to gather them: doing so we obtain an uniform estimate with respect to the radius r and to end the proof we remark that the left-hand side of the energy inequality is controlled (using the quantities given in Definition 4.1) by the left-hand side of (4.3). 
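The scaled quantities of Definition 4.1, in which the first estimate above is expressed, are displayed only in garbled form in this copy. As a hedged reference point, scale-invariant quantities of this kind are usually normalised as follows in Caffarelli-Kohn-Nirenberg-type arguments; the block below is written for the velocity u only, while the paper uses analogous quantities for b, ω and the pressure, and its exact normalisations (in particular for the pressure term) may differ.

```latex
% Typical CKN-type scaled quantities on Q_r = ]t_0-r^2, t_0+r^2[ x B(x_0,r);
% a hedged reconstruction, not the paper's verbatim Definition 4.1.
\[
  A_r = \sup_{t_0 - r^2 < t < t_0 + r^2} \frac{1}{r} \int_{B(x_0,r)} |\vec u(t,x)|^2\, dx,
  \qquad
  \alpha_r = \frac{1}{r} \iint_{Q_r} |\vec\nabla \otimes \vec u|^2\, dx\, dt,
  \qquad
  \lambda_r = \frac{1}{r^2} \iint_{Q_r} |\vec u|^3\, dx\, dt .
\]
% Each quantity is invariant under the natural parabolic scaling
% u_r(t,x) = r u(r^2 t, r x); note that alpha_r is exactly the quantity whose
% smallness is assumed in the hypothesis of Theorem 1.1.
```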
A second estimate The control obtained in the previous section is crucial but it is not enough to our purposes as we need to obtain a deeper control over the pressure. For this Lemma 4.3 For some 0 < σ < 1 2 and for a parabolic ball Q σ of the form (3.2), we have the following estimate on the pressure Remark 4.3 For the time being we assume the controls of the right-hand side of the previous estimate. We will see later on, by a suitable change of variables, how to recover the information over the balls Q r ⊂ Ω. Proof. First, we introduce a smooth function η : R 3 −→ [0, 1] supported by the ball B 1 such that η = 1 on the ball B 3 5 and η = 0 outside the ball B 4 5 . By a straightforward calculation we have the identity . (4.11) • For the first term of (4.11) above, we use the expression of the pressure given in (1.2) which allows us ⊗ ( u + b))) and, due to the fact that div( u) = div( b) = div( a) = 0, we obtain the expression from which one gets . (4.12) In order to study the term (a.1) above, we introduce the quantity is the average of b j over the ball of radius 1 (recall the definition (4.5)) and since u is divergence free we have the identity Noting now that we also have the identity t,x (Qσ) . The first term of the right-hand side above is easy to control, indeed denoting by R i = ∂ i √ −∆ the usual Riesz transforms on R 3 , by the boundedness of these operators in Lebesgue spaces and using the support properties of the auxiliary function η, we have (recalling that where we used Hölder and Poincaré inequalities in the last line. Now taking the L 3 2 -norm in the time variable of the previous inequality we obtain (4.14) The second and the third term of the right-hand side of (4.13) are treated in a similar manner., so we will only consider one of them. Since ∂ i η vanishes on B , with the integral representation of the operator ∂ i (−∆) we have for the second term of (4.13) the inequalities (taking into account only the space variable): where we used the same ideas as previously. Taking the L 3 2 -norm in the time variable, we obtain . For the last term of (4.13), we recall that the convolution kernel associated to the operator 1 (−∆) is C |x| , and thus following the same ideas we have the inequality Thus, gathering the estimates (4.14), (4.16) and (4.17) and coming back to (4.13) we finally obtain We study now the term (a.2) of (4.12). Due to the symmetry of the quantity η∂ i ∂ j ((u i + b i )a j + a i (u j + b j )) it is enough to treat one term of the form η∂ i ∂ j (u i a j ) for which we use as before the identity . (4.19) For the first term of the right-hand side above, introducing the Riesz transforms and using the support properties of the localizing function η we have: , now taking the L where in the last estimate we used the local inclusion between Lebesgue spaces. Now, just as before (when studying (4.13)), the second and the third term of (4.19) can be treated in a similar manner and we will just study the second term and we have, following the same ideas displayed in (4.15): and with an integration in the time variable applying the Hölder inequalities it comes For the last term of (4.19) we proceed in a similar manner noting that the convolution kernel associated to the operator 1 (−∆) is C |x| and due to the support properties of the localizing function η we can write ≤ Cσ 2 u i a j L 1 (Bσ) from which we easily deduce the estimate Thus, gathering the estimates (4.20), (4.21) and (4.22) and coming back to the inequality (4.19) we obtain: x (Q 1 ) . 
Now, considering the terms of the form η∂ With the previous estimates for the terms (a.1) and (a.2) given in (4.18) and (4.23), respectively, and coming back to the expression (4.12) we obtain x (Q 1 ) . (4.24) • We can now study the term (b) of (4.11) and we have (proceeding just like in (4.15) with the kernel of the operator 1 (−∆) and the support properties of η): (∆η)p and taking the L . (4.25) • The last term of (4.11) can be easily treated by following the same ideas displayed previously and we obtain . (4.26) To end the proof of the Lemma, it is enough to use the estimates (4.24), (4.25) and (4.26) in (4.11) to obtain the wished inequality. Now, using a scaling argument and the control given in the last lemma, we have the following proposition. Proof. Set σ = r ρ and consider the following functions thus, by Lemma 4.3 and using the rescaled function above we obtain . Now, recalling that, by the Definition 4.1 (see also Remark 4.1) we have the notation r and we obtain (as ρ − 11 6 ≤ ρ −2 since 0 < ρ < 1) which is the desired estimate. Inductive Argument Once we have obtained the estimates (4.3) and (4.27) it is possible to perform an inductive argument in order to obtain a (local, parabolic) Morrey information over the variables u, b and ω. then there exists a parabolic neighborhood Q R 1 of (t 0 , x 0 ) with 0 < R 1 < 4R such that Note that the conclusion of this proposition is exaclty the first hypothesis of the Proposition 3.2. Proof. Recalling that from the global hypothesis of Theorem 1.1 we have a local control over the set Ω, thus as we want to obtain a local information and since we assumed Q R (t 0 , x 0 ) ⊂ Ω and by the definition of Morrey spaces, we only need to prove that there exists a radius R 1 small enough such that for all 0 < r < R 1 and for all (t, x) ∈ Q R 1 (t 0 , x 0 ) we have the following control Qr | u| 3 + | b| 3 + | ω| 3 dyds ≤ Cr In order to obtain this estimate, we will implement an inductive argument using the averaged quantities introduced in the Definition 4.1. Indeed, using the Lemma 4.2, we can write Then in order to obtain the control (5.3) for all small 0 < r < R 1 , and all point (t, x) ∈ Q R 1 , it is enough to show the estimate: Let us introduce the following quantities: Note that the introduction of the quantity W r in the first term above is reminiscent from the estimate (4.3) obtained previously. Thus to prove (5.3) we only need to show that there exists 0 < κ < 1 and some 0 < R 1 < R such that for all n ∈ N and (t, x) ∈ Q R 1 , we have and the idea is to use an inductive argument that ensures that we have these estimates above for all radius of the following type κ n R 1 > 0. Remark that due to the definition of the quantity A r given in (5.4), we will also obtain some information over the gradients of u, b and ω (see Corollary 5.1 below). In order to simplify the arguments, we shall need to introduce the following quantities B r = (α r + β r + γ r + W r ), P r = 1 for some τ c > 0 such that 2 + 5 τ 0 − 5 τc > 0. Our starting point is the estimate (4.3) obtained previously: t,x (Ω) . Multiplying both sides of the inequality (5.7) by A r + B r + C r + α r + β r + γ r + W r + r 2 H r = A r + H r . 
Now we will study each term of the right-hand side above multiplied by • For the term (1) above we have, using the definition of the quantity A ρ given in (5.4): • For the term (2) of (5.7), by the definition of A ρ and B ρ given in (5.4) and (5.6) respectively, we can write • For the term (3) of (5.7), using the expressions of A ρ and P ρ given in (5.4) and (5.6) respectively, we have • The term (4) of (5.7) can be treated in the same manner as the term (2) and we obtain • The term (5) of (5.7) can be treated in the same manner as the term (3) and we obtain • By the definition of A ρ and B ρ given in (5.4) and (5.6) respectively, the term (6) of (5.7) can be rewritten as follows • The term (7) of (5.7) is estimate using the definition of D ρ given in (5.6): • For the term (8) of (5.7) we use the definition of B ρ given in (5.6) to obtain: Remark 5.1 Note that, following Remark 4.2, if we assume ω ∈ L p t L q x (Ω) with 10 τ 0 − 1 − 2 p − 3 q > 0 (which is possible since 11 2 > τ 0 > 5 1−α ), then the previous bound is ρ • Since we have (α ρ , thus for the term (9) of (5.7) we write from which we deduce: • The term (10) of (5.7) is treated as follows: recalling that γ ρ ≤ B ρ by (5.6) and since we have C ρ by (5.4), then we can write • The last term of (5.7) is easy to estimate as we have (α ρ + β ρ ) Once we have all these estimates for the right-hand side of (5.7) we finally obtain the following control Now, we study the estimate for the pressure (4.27) which is given by the control and in the same spirit as before, we will introduce the quantity P r = 1 r 3 2 (1− 5 τ 0 ) P r given in (5.6) in the left-hand side above. To this end, we will first rise the inequality above to the power 3 2 and then we will multiply both sides by t,x (Qρ) + r ρ P ρ . We remark now that we have (by the definition of A ρ given in (5.4)): all these estimates we have t,x (Qρ) + P ρ . (5.9) Now we fix 0 < κ < 1 such that r = κρ. Then, we define a new expression that will help us to set up the inductive argument We will see how to obtain from (5.8) and (5.9) a recursive equation in terms of Θ r from which we will deduce (5.5). Indeed, we have the following lemma. , for all 0 < r < ρ 2 and for all ρ small enough we have the inequality where ǫ is a small constant that depends on the information available on the forces f , g and the perturbation a. Proof. We will use the estimates (5.8) and (5.9) obtained previously. Indeed, introducing the quantity κ = r ρ we easily obtain: . (5.11) We will now study each one of the previous terms. • The first term above can be easily treated as we obviously have A ρ ≤ Θ ρ , thus we write • For the term (2) of (5.11) we write, by the Young inequalities • For the term (3) of (5.11), we obtain by the Young inequalities (and noting that we have κ • The term (4) of (5.11) is treated as follows: • For the term (5) of (5.11) we simple write: • The term (6) of (5.11) needs no particular treatment. • For the last term of (5.11), using the fact that κ Gathering all these estimates we observe that from (5.11) we can write Θ r ≤ C κ We claim now that we have, for the term (5.12) above the following control Indeed, we recall that κ = r ρ < 1 is a fixed small parameter and that 0 < ρ < 1 is also a small parameter. Moreover we recall that due to the hypothesis (5.1), we have lim sup ρ→0 B ρ ≤ ǫ where ǫ > 0 is also very small. Then all the terms of the form κ a , κ a ρ b with a, b > 0 and κ −c B ρ or κ −c B 1 2 ρ with c > 0 can be made very small. 
Note that the size of the perturbation term, reflected in the quantity κ 10 τ 0 ρ t,x (Ω) can be easily absorbed as ρ can be very small (we have 5 τ 0 − 5 6 > 0 as 5 1−α < τ 0 < 11 2 ). We remark that since ρ is small, we have that the term κ By essentially the same arguments if ρ > 0 is small enough, we have the following control for (5.13): t,x (Qρ) < ǫ, where ǫ > 0 can be made small (remark that the quantity ω L ∞ t,x (Ω) can easily be absorbed for ρ small enough as we have 10 τ 0 − 1 > 0 since τ 0 < 11 2 . Note that the condition ω ∈ L p t L q x (Ω) with 10 τ 0 − 1 − 2 p − 3 q > 0 stated in Remark 4.2 will give a similar result. See also Remark 5.1 for this particular point). With these last observations, then from the inequality (5.12)-(5.13), we obtain Θ r ≤ 1 2 Θ ρ + ǫ which is the conclusion of the Lemma 5.1. With this lemma at hand, we continue the proof of the Proposition 5.1. Indeed, for any radius ρ such that 0 < ρ < R < 1 (and since we have Q R (t 0 , x 0 ) ⊂ Ω) by the set of hypotheses (1.4) we have the bounds x (Ω) < +∞ (and the same estimates for b and ω) and p t,x (Ω) < +∞. Then, by the Definition 4.1, we have the uniform bounds sup 0<ρ<R ρA ρ , ρα ρ , ρB ρ , ρβ ρ , ρC ρ , ργ ρ , ρW ρ , ρ 2 H ρ , ρ 2 P ρ < +∞ from which we can deduce by the definition of the quantities A ρ (t 0 , x 0 ), H ρ (t 0 , x 0 ) and P ρ (t 0 , x 0 ) given in (5.4) and (5.6), the uniform bounds Note now, that there exists a 0 < κ < 1 2 and a fixed 0 < ρ 0 < R small such that, by (5.14), the quantities A ρ 0 , H ρ 0 and P ρ 0 are bounded: indeed, recall that we have τ 0 > 5 1−α > 5 (where 0 < α < 1 12 ) and this implies that all the powers of ρ in the expression above are positive. As a consequence of this fact, by (5.10) the quantity Θ ρ 0 is itself bounded. Remark also that, if ρ 0 is small enough, then the inequality (5.12) holds true and we can write Θ κρ 0 (t 0 , x 0 ) ≤ 1 2 Θ ρ 0 (t 0 , x 0 ) + ǫ. We can iterate this process and we obtain for all n > 1, and therefore there exists N ≥ 1 such that for all n ≥ N we have Θ κ n ρ 0 (t 0 , x 0 ) ≤ 4ǫ from which we obtain (using the definition of Θ ρ given in (5.10)) that This information is centered at the point (t 0 , x 0 ), in order to treat the uncentered bound, we can let 1 2 κ N ρ 0 to be the radius R 1 we want to find, thus for all points (t, Having obtained these bounds, by the definition of Θ R 1 , we thus get Θ R 1 (t, x) ≤ C. Applying Lemma 5.1 and iterating once more, we find that the same will be true for κR 1 and then, for all κ n R 1 , n ∈ N. Since by definition we have A κ n R 1 (t, x) ≤ Θ κ n R 1 (t, x) we have finally obtained the estimate A κ n R 1 (t, x) ≤ C and the inequality (5.5) is proven which implies Proposition 5.1. Remark 5.2 From the Corollary 5.1, we can easily deduce that We have proven the points 1), 2) of the hypotheses of Proposition 3.2 (recall that the point 6) is given for free, due to the hypotheses on the external forces) and we still need to prove the points 3), 4) and 5). In order to achieve this task, we will need different arguments that are displayed in the next section. More estimates Let 0 < a < 5 be a parameter, we define the parabolic Riesz potential L a of a locally integrable function Then, we have the following property Lemma 6.1 (Adams-Hedberg inequality) If 0 < a < 5 q , 1 < p ≤ q < +∞ and f ∈ M p,q t,x , then for λ = 1 − aq 5 we have the following boundedness property in Morrey spaces: See a proof of this fact in the book [16, Corollary 5.1]. 
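The formula (6.1) for the parabolic Riesz potential and the conclusion of Lemma 6.1 are garbled above. The following is a hedged reconstruction of the standard definition and of the Adams-Hedberg bound as it is usually stated in parabolic Morrey spaces (cf. [16, Corollary 5.1]); the exponents match the conditions quoted in the text.

```latex
% Hedged reconstruction; not verbatim from the paper.
% Parabolic Riesz potential of order 0 < a < 5 (homogeneous dimension 5):
\[
  \mathcal L_a f(t,x)
  = \iint_{\mathbb R \times \mathbb R^3}
    \frac{f(s,y)}{\big( |t-s|^{\frac12} + |x-y| \big)^{5-a}} \, dy\, ds .
\]
% Adams--Hedberg inequality: if 0 < a < 5/q, 1 < p <= q < +infty and f belongs to
% M^{p,q}_{t,x}, then with lambda = 1 - a q / 5,
\[
  \big\| \mathcal L_a f \big\|_{\mathcal M^{\frac{p}{\lambda}, \frac{q}{\lambda}}_{t,x}}
  \le C \, \| f \|_{\mathcal M^{p,q}_{t,x}} .
\]
```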
We will use this result in the next result to obtain the hypothesis 4) of the Proposition 3.2. With the help of these localizing functions we will study the evolution of the variable V =φ( u + b + ω) and we obtain the system   where, following the same computations of (3.8) we have Now we will perform some computations over the termφ ∇p that contains the pressure. Indeed, as we have the identity p =ψp over QR, then over the smaller ball Q R 2 (recalling thatψ = 1 over Q R 2 by (6.2) since We recall now that by (1.2) we have 2∆p = − 3 i,j=1 ) and thus, the first term of the right-hand side of the previous formula can be written in the following manner: and introducing the functionψ inside the derivatives we obtain Now for the first terms of each line above we use the identities (recall thatφψ =φ): and with this lengthy and tedious formulation for the first term of (6.5), we come back to the term N given in (6.4) to obtain (20) +φ( ∇ ∧ ω) (21) +φ( f + g) With this expression of N , we obtain that the solution of the equation ( for some σ close to τ 0 such that τ 0 < σ. Proof. Fortunately many of the terms above share a similar structure as we have essentially the same information over the variables u, b and ω. Recall that we have proven so far the estimates (5.2) and (5.15). • For V 1 , recalling that e (t−s)∆ N 1 = g t−s * N 1 where g t is the usual 3D heat kernel, we have Thus, by the decay properties of the heat kernel as well as the properties of the test functionφ (see (6.2)), we have Now, recalling the definition of the Riesz potential given in (6.1) and since Q R 2 ⊂ QR we obtain the pointwise estimate|1 x) and taking Morrey M 3,σ t,x norm we obtain . Now, for some 2 < q < 5 2 we set λ = 1 − 2q 5 and we define 3 = a λ and σ < 10 < q λ (remark that a ≤ q). Thus, by Lemma 2.2 and by Lemma 6.1 we can write: where in the last estimate we applied again Lemma 2.2 (noting that a ≤ 3 and q < τ 0 ) and we used the estimates over u, b and ω available in (5.2). Remark that the second term of the right-hand side of (6.9) can be treated in the same manner as the term V 1 so we will only study the first term: by the properties of the heat kernel and by the definition of the Riesz potential L 1 (see (6.1)), we obtain Taking the Morrey M 3,σ t,x norm we obtain . Now, for some 4 ≤ q < 5 we define λ = 1 − q 5 , noting that 3 ≤ 3 λ and σ < 10 < q λ , by Lemma 6.1, we can write • For the term V 3 we have from which we deduce . (6.10) As we have completely symmetric information on u and b it is enough the study one of these terms and we will treat the first one. We set now 5 3−α < q < 5 2 and λ = 1 − 2q 5 . Since 3 ≤ 6 5λ and τ 0 < σ < q λ , applying Lemma 2.2 and Lemma 6.1 we have Recall that we have 5 1−α < τ 0 < σ < 10 and by the Hölder inequality in Morrey spaces (see Lemma 2.1) we obtain Note that the condition 5 1−α < τ 0 < σ < 10 and the relationship 1 q = 2 τ 0 + 1 5 are compatible with the fact that 5 3−α < q < 5 2 . Applying exactly the same ideas in the second term of (6.10) we obtain • The term V 4 is the most technical one. 
Indeed, we write and taking the M 3,σ t,x -norm we have Now, we study the second term of the right-hand side above, which is easy to handle as we have r < r and we can write , and sinceφ is a regular function and is a Calderón-Zydmund operator, by the Calderón commutator theorem (see the book [15]), we have that the operator φ , t,x and we can write (using the support properties ofψ given in (6.2) and the information given in (5.2)): where in the last line we used Hölder inequalities in Morrey spaces and we applied Lemma 2.2. The first term of the right-hand side of (6.11) requires some extra computations: indeed, as we are interested to obtain information over the parabolic ball Q r (t,x) we can write for some 0 < r < r: Recalling that 0 < r < r =R −R 2 2 , by the support properties of the test functionφ (see (6.2)), the integral above is meaningful if |x − y| > r and thus we can write , with this estimate at hand and using the definition of Morrey spaces, we can write where in the last inequality we used the fact that 1 q = 2 τ 0 + 1 5 , which implies r − 3 2 r ) . Thus we finally obtain We have proven that all the term in (6.11) are bounded and we can conclude that • For the quantity V 5 , based in the expression (6.8) we write where we used the decaying properties of the heat kernel (recall that R i = ∂ i √ −∆ are the Riesz transforms). Now taking the Morrey M 3,σ t,x norm and by Lemma 2.2 (with ν = 4τ 0 +5 5τ 0 , p = 3, q = τ 0 such that p ν > 3 and q ν > σ which is compatible with the condition τ 0 < σ) we have Then by Lemma 6.1 with λ = 1 − τ 0 /2 5 (recall 5 1−α < τ 0 < 10 so that ν > 2λ) and by the boundedness of Riesz transforms in Morrey spaces we obtain: • The quantities V 6 and V 7 based in the corresponding terms of (6.8) can be treated in a very similar fashion since their inner structure is essentially the same. We thus only treat here the term V 6 and following the same ideas we have where in the last estimate we used the space inclusion L ν t L ∞ x ⊂ M ν, 5ν 2 t,x . Let us focus now in the L ∞ norm above (i.e. without considering the time variable). Remark that due to the support properties of the auxiliary functionψ given in (6.2) we have supp(∂ i ∂ jψ ) = Q R 1 \ QR and recall by (6.2) we have suppφ = QR whereR <R < R 1 , thus by the properties of the kernel of the operator ∇ (−∆) we can write (6.14) and the previous expression is nothing but the convolution between the function (∂ i ∂ jψ )(u i b j ) and a L ∞ -function, thus we have and taking the L ν -norm in the time variable we obtain where we used the fact that 1 < ν < 3 2 < τ 0 2 and we applied Hölder's inequality. Gathering together all these estimates we obtain The terms V 9 , · · · , V 18 are studied in the following lemma. Lemma 6.3 1) The quantities V 9 and V 14 based in the corresponding terms of (6.8) can be treated in the same way as the term V 4 . 2) The terms V 10 and V 15 are controlled as V 5 . 3) The terms V 11 , V 12 , V 16 and V 17 are controlled as V 6 . 4) The terms V 13 and V 18 are controlled as V 8 . Proof. Following the estimates given previously for the terms V 4 , V 5 , V 6 and V 8 , all the terms V 9 , · · · , V 18 can be controlled by the quantities t,x (Ω) < +∞ since 5 1−α < τ 0 < 6, which is possible if 0 < α < 1 12 . • The quantity V 19 based in (6.8) can be treated in the same way as the term V 8 . 
Indeed, by the same arguments displayed to deduce (6.13), we can write (recall that 1 < ν < 3 2 ): x and if we study the L ∞ -norm in the space variable of this term, by the same ideas used in (6.14)-(6.15) we obtain φ ∇ Thus, taking the L ν -norm in the time variable we have < +∞. • The study of the quantity V 20 follows almost the same lines as the terms V 8 and V 9 . However instead of (6.14) we have and thus we can write: • For the term V 21 based in (6.8) can be treated in the same manner as V 2 and we easily obtain • The study of the quantity V 22 is easy to handle, indeed, we have and taking the Morrey M 3,σ t,x norm we obtain , then if we set 11 5 < q < 5 2 and λ = 1 − 2q 5 we thus have 3 ≤ 10 7λ and σ < 10 < q λ . Now by Lemma 2.2 and 10 7 ,q t,x but since q < 5 2 < 5 2−α < τ a , τ b , by Lemma 2.2 we obtain thus, gathering all the estimates above we have 1 Q R 2 V 22 M 3,σ t,x < +∞. • For the quantity V 23 of (6.8) we first note that the quantityφdiv(( u + b) ⊗ a + a ⊗ ( u + b)) can be decomposed asφ∂ i (u j a k ) with 1 ≤ i, j, k ≤ 3 (and other similar terms with b j instead of u j ) and thus we have: and by the same arguments as in the previous lines we obtain For the first term of the right-hand side above we set p = 2, q = 6τ 0 6+τ 0 and λ = 30−τ 0 30+5τ 0 . Note that p λ ≥ 3 and q λ ≥ σ (if σ > τ 0 > 5 is close enough to τ 0 ) and thus, by the Lemma 2.2 and by Lemma 6.1, we have For the second term of the right-hand side of (6.16), we fix p, q = 2 and λ = 1 5 and we have p λ ≥ 3 and q λ ≥ σ. Thus, by the same arguments as above we can write t,x (Ω) < +∞. Applying these estimates to all the terms of the formφ∂ i (u j a k ) andφ∂ i (b j a k ) we finally obtain that • For the last term V 24 given by the corresponding quantity in (6.8), we have , (6.17) and we will study each of the previous term separately. Indeed, for the term (a) above, proceeding in a similar fashion as in (6.9), we have (for 1 ≤ i ≤ 3): g t−s (x − y)[φ∂ i div( ω)](s, y)dyds ≤ C1 Q R 2 L 1 (|1 QR div( ω)|)(t, x) + L 2 (|1 QR div( ω)|)(t, x) . Due to the symmetric information available for the variables u, b and ω it is easy to see that the term (c) of (6.17) can be treated as the term V 3 while the term (d) of (6.17) can be studied as V 2 . With all these remarks we finally obtain that 1 Q R 2 V 24 M 3,σ t,x < +∞. With all these estimates Lemma 6.2 is now proven. Remark 6.1 Note that by iteration the value of δ can be made big enough. We have obtained the hypotheses 1), 2), 4) of the Proposition 3.2 and with these results at hand we will now study the hypothesis 5). Proof. Recall that from (1.2) we have the expression p = 3 i,j=1 ∂ i ∂ j 2(−∆) (u i b j + (u i + b i )a j + a i (u j + b j )), which corresponds with the terms that we want to study and consequently we only need to prove that we have 1 Q R 2 p ∈ M p,q t,x . Thus introducing suitable localizing functionsφ andψ as in (6.2) and following the computations made in (6.5), (6.6) and (6.7) we havē ] (6.18) where we used the space inclusion L 11 5 t L ∞ x ⊂ M . Following the same ideas displayed in formulas (6.13)-(6.15), due to the support properties of the auxiliary functions we obtain L ∞ t L 2 x (Ω) u 6 11 L 2 t L 6 x (Ω) . The terms of the form φ (−∆) (∂ i ∂ jψ )(u i a j ) are treated in exactly the same fashion as we have a ≤ C a L 6 t,x (Ω) . • The term of the form (iii) can be studied in exactly the same manner as the terms of the form (ii). • The term (v) can be treated in the same manner as the previous point. 
Remark 6.2 The condition p ∈ L 5 2 t L 1 x (Ω) is needed here in order to treat these two previous terms. If we have some additional information over the perturbation term ( e.g. a ∈ L 2 tḢ 1 x (Ω)) then these terms can be controlled by the information p ∈ L We have thus proven that all the terms of (6.19) belong to the Morrey space M We have now all the hypotheses of the Proposition 3.2, and thus Theorem 1.1 follows.
2021-11-15T02:15:50.432Z
2021-11-12T00:00:00.000
{ "year": 2021, "sha1": "895c975e93bef0313a5bbf091608472f13be5d79", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "895c975e93bef0313a5bbf091608472f13be5d79", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Mathematics" ] }
233706618
pes2o/s2orc
v3-fos-license
The Relationship between Time Management and Academic Burnout with the Mediating Role of Test Anxiety and Self-efficacy Beliefs among University Students Background: Academic burnout is one of the most important problems throughout all levels of the education system. Objectives: The present study aimed to investigate the relationship between time management and academic burnout with the mediating role of test anxiety and self-efficacy beliefs among university students in 2019. Methods: The study was a descriptive correlation performed by path analysis. The statistical population included all students of Islamic Azad University of Ahvaz and 222 of which were selected as the sample of the study using convenience sampling. The research instruments included the Academic Burnout Questionnaire, the Time Management Questionnaire, the test anxiety inventory, and the General Self-Efficacy Scale. The proposed model was evaluated using path analysis with AMOS software. Results: A direct and positive relationship was observed between time management and self-efficacy beliefs (β = 0.345, P = 0.0001) and between test anxiety and academic burnout (β = 0.515, P = 0.0001). The relationship between time management and test anxiety (β = -0.586, P=0.001) and between self-efficacy beliefs and academic burnout (β = -0.305, P = 0.0001) was negative. The relationship between time management and academic burnout was not significant (β = -0.051, P = 0.425). The results indicated that test anxiety and self-efficacy beliefs had a mediating role in the relationship between time management and academic burnout (β = -3.964, P = 0.001). Conclusions: According to research results, the proposed model had good fitness and is considered an important step in identifying the effective factors in students’ academic burnout. Background Academic burnout is a significant problem in the educational system at all levels of education, which weakens academic performance and wastes expenses and human resources. Burnout can be considered a type of disorder occurring in an individual for being exposed to stressful environments for long periods, and its symptoms appear in physical, psychological, emotional, and mental dimensions (1). Burnout is caused by hard and un motivating work, and its symptoms appear in different forms. The symptoms also vary from person to person (2). Academic burnout among learners is identified by fatigue due to academic demands and requirements, feeling pessimistic about merits, and low self-efficacy, which can be discussed as a chronic reaction of students who have been involved with academic requirements from the start. This is caused by the difference between the students' abilities and ex-pectations of academic success of themselves compared to others (3). Students suffering from academic burnout usually experience a lack of willingness to attend classes continuously, lack of participation in-class activities, apathy towards the lessons, consecutive absences, and feeling meaningless and incompetent in learning lessons (4). Studies in the field of health, specifically academic burnout, have shown that time management is important when people face unsuitable situations. In general, academic programs are one of the life affairs with tasks and objectives that students often face difficulties in assigning time to. Academic performance also depends on students' abilities in time management and performing tasks correctly. Support from family and friends anticipate high academic performance (5). 
Time management is a personal discipline: when it is practised, almost anything can be accomplished. Good time management directs time toward purposeful activities, whereas poor time management lets it drain into idle ones. Hence, good time management increases academic performance and reduces academic burnout among students (4, 6, 7). Time management skills can be taught and learned, and the inability to manage time is one of the reasons learners fail to complete their homework, which in turn can lead to academic failure and reduced motivation to continue studying; this strategy was therefore selected as a way of reducing test anxiety among students (8).

Mediating factors play a role in the relationship between time management and academic burnout among students and should be investigated in order to reduce academic burnout. One of these mediating factors is test anxiety. Test anxiety is a common phenomenon among students and is considered a problem in the educational system (9, 10). It is a situational anxiety observed in all socioeconomic classes, is closely related to learners' academic performance in educational centers, and affects 10 to 20 percent of pupils and students during their education (11). Different studies have examined the relationship between test anxiety and academic burnout (12) and between time management and test anxiety (13).

Besides test anxiety, another variable that plays a mediating role in the relationship between time management and academic burnout in students is self-efficacy beliefs: the person trusts his/her ability to control feelings, emotions, and behavior and can influence the outcomes of events (14). According to Buckworth (15), self-efficacy plays a major role in people's motivation and behavior, such that people who strongly believe in their abilities try harder and persist in doing their homework, whereas people who doubt their abilities give up. Self-efficacy therefore acts as a driving force (16). Different studies have addressed the role of self-efficacy beliefs in reducing academic burnout (17-19).

Objectives
The present study sought to investigate the relationship between time management and academic burnout with the mediating role of test anxiety and self-efficacy beliefs among university students.

Methods
The study was a descriptive correlational study performed by path analysis. The statistical population included all students of Islamic Azad University of Ahvaz in 2019, of whom 222 were selected as the study sample using convenience sampling. In order to collect the required data, 250 questionnaires based on the research variables were administered; 222 questionnaires were analyzed after the elimination of incomplete ones. Willingness to participate in the research, confidentiality of information, and observance of participants' rights were the ethical considerations of the research. The proposed model was evaluated using path analysis with AMOS software.
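Since the path model itself was estimated in AMOS, which is closed-source, the following sketch is included only as an illustration of the analysis logic described above: two parallel mediators (test anxiety and self-efficacy beliefs) between time management and academic burnout, with a percentile-bootstrap test of the indirect effect. The data file and column names are hypothetical, and this is not the authors' analysis script.

```python
import numpy as np
import pandas as pd

# Hypothetical column names for the questionnaire scores described in the Methods section.
COLS = ["time_management", "test_anxiety", "self_efficacy", "academic_burnout"]

def standardized(df):
    """z-score the observed variables so path coefficients are comparable to betas."""
    return (df - df.mean()) / df.std(ddof=1)

def ols_beta(X, y):
    """Ordinary least squares slopes (with intercept); returns coefficients without the intercept."""
    X1 = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coef[1:]

def indirect_effect(df):
    """Total indirect effect of time management on burnout through both mediators."""
    z = standardized(df)
    a1 = ols_beta(z[["time_management"]].values, z["test_anxiety"].values)[0]
    a2 = ols_beta(z[["time_management"]].values, z["self_efficacy"].values)[0]
    # b-paths: burnout regressed on both mediators and the predictor together.
    b = ols_beta(z[["test_anxiety", "self_efficacy", "time_management"]].values,
                 z["academic_burnout"].values)
    return a1 * b[0] + a2 * b[1]

def bootstrap_ci(df, n_boot=5000, seed=0):
    """Percentile bootstrap CI for the indirect effect, in the spirit of the bootstrap test reported."""
    rng = np.random.default_rng(seed)
    n = len(df)
    draws = np.array([indirect_effect(df.iloc[rng.integers(0, n, n)]) for _ in range(n_boot)])
    return np.percentile(draws, [2.5, 97.5])

if __name__ == "__main__":
    data = pd.read_csv("student_scores.csv")[COLS]  # hypothetical data file
    print("indirect effect:", indirect_effect(data))
    print("95% bootstrap CI:", bootstrap_ci(data))
```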
Academic Burnout Questionnaire
The Academic Burnout Questionnaire was designed by Bresó et al. (20). It consists of 15 items rated on a 5-point Likert scale (strongly disagree to strongly agree). Academic burnout (5 items), academic apathy (4 items), and academic inefficacy (6 items) are the measurable components of the questionnaire. Items 3, 6, 8, 9, 12, and 15 are scored in reverse. The validity of the questionnaire was assessed by factor analysis, with the Comparative Fit Index, Incremental Fit Index, and Root-Mean-Square Error reported as good, and its reliability was reported to be 0.88 by Cronbach's alpha (21). In the present study, Cronbach's alpha coefficient for the questionnaire was 0.87.

Time Management Questionnaire
The Time Management Questionnaire was designed by Trueman and Hartley (22) to measure time management. This instrument consists of 14 items scored on a 5-point Likert scale (always, often, sometimes, rarely, never). The questionnaire was translated by Savari (23). Its reliability, calculated by Cronbach's alpha and the split-half method, was 0.88 and 0.63, respectively. The validity of the scale was also tested by confirmatory factor analysis (CFA); all items except items 8, 11, and 13 had acceptable factor loadings (23). In the current study, the reliability of the questionnaire was 0.83 using Cronbach's alpha coefficient.

The General Self-efficacy Scale
This scale was developed by Sherer et al. (26). It includes 17 items measuring three aspects of behavior: the desire to initiate a behavior, continuing to strive to complete the behavior, and resistance in the face of obstacles. The scale is scored on a 5-option Likert scale from 1 to 5. Questions 1, 3, 8, 9, 13, and 15 are scored as 5 (completely agree), 4 (agree), 3 (neither agree nor disagree), 2 (disagree), and 1 (completely disagree), while the other items are scored in reverse. The minimum score is 17 and the maximum is 85; higher scores indicate a greater sense of self-efficacy. Dougherty et al. (27) reported a Cronbach's alpha coefficient of 0.84 for the scale. In the current study, the reliability of the scale was 0.82 using Cronbach's alpha coefficient.
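All of the instruments above report Cronbach's alpha and several contain reverse-scored items; the short sketch below illustrates both computations. The item file and column names are hypothetical, and the published coefficients were of course obtained by the authors from their own data.

```python
import pandas as pd

def reverse_score(items: pd.DataFrame, columns, max_point=5, min_point=1):
    """Reverse-score selected Likert items, e.g. 5 -> 1 and 1 -> 5 on a 5-point scale."""
    out = items.copy()
    out[columns] = (max_point + min_point) - out[columns]
    return out

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

if __name__ == "__main__":
    # Hypothetical responses to the 15-item burnout questionnaire (columns item1..item15).
    responses = pd.read_csv("burnout_items.csv")
    reversed_items = [f"item{i}" for i in (3, 6, 8, 9, 12, 15)]  # reverse-scored items per the text
    scored = reverse_score(responses, reversed_items)
    print("Cronbach's alpha:", round(cronbach_alpha(scored), 2))
```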
The relationship between test anxiety and academic burnout was positive and significant (β = 0.515, P = 0.0001). The bootstrap method was utilized to determine the significance of the mediating-based relations. The indirect path from time management to academic burnout through the mediating role of test anxiety and self-efficacy beliefs was significant (β = -3.946, P = 0.001) ( Table 4). Discussion The present study aimed to investigate the relationship between time management and academic burnout with the mediating role of test anxiety and self-efficacy beliefs among university students. In general, results showed that all direct relationships between the variables were significant except for the relationship between time management and academic burnout. The indirect relationships became significant by the mediating role of test anxiety and self-efficacy beliefs in academic burnout. According to research results, the proposed model has good fitness and is considered a significant step in identifying the influential factors in students' academic burnout. It can also be used as an appropriate model in designing academic burnout prevention programs. The first finding of this study showed that there was no direct and significant role between time management and academic burnout. This finding is inconsistent with the findings of studies carried out by Erdemir and Tomar (7), Charkhabi et al. (17), Butcher (6), and Ghadampour et al. (4). It can be stated that in the aforementioned studies, the relationship between time management and academic burnout was significant according to the correlation coefficient and regression tests. However, path analysis was used in the present study. The relationship between time management and academic burnout was also significant according to Pearson test. However, in the present model, the effect of time management on academic burnout was indirect and through the mediating variables. In other words, time management also affects academic burnout in this study but indirectly. Hence, it can be stated that this finding is somehow consistent with the findings of previous studies. It should also be noted that the statistical population of these studies has been quite different. In general, students should find out how to learn their lessons in a relatively specific period. They should also increase their insight into making the best use of time to have a more accurate estimation of the time required for their tasks. Mastering time is an essential step in learning lessons. Time is a strategic source to advance objectives and making dreams come true. An investigation of the behavior of successful and influential people shows that time has an irreplaceable role to them to the extent that they focus on time even before beginning a task. Moreover, they manage their time by eliminating useless and irrelevant activities. The second finding showed that there was a negative association between time management and test anxiety. That is, by improving students' time management skills, test anxiety can be expected to reduce in them. Hence, their academic achievements and performance will improve. This finding is in accordance with that of Ebrahimi et al. (13) and Poudel et al. (28). As an explanation for this finding, it can be stated that students can use lesson planning to manage their time and reduce tension and anxiety due to homework overload. Accordingly, it can be stated that time management reduces tension and anxiety. 
Consequently, the cognitive reactions to tension increase by time management. Time management includes individual perceptions and different attitudes toward time. It can be stated that people's different attitudes to time are derived from their personality traits. That is, some people need more time to finish their tasks, and some need shorter periods. If students have a good understanding of themselves, they can manage their tasks better and be prepared for their homework and exams to reduce test anxiety (13). The third research finding showed a direct and significant relationship between time management and selfefficacy beliefs. This finding is in accordance with that of Poudel et al. (28). As an explanation, it can be stated that, in general, students who have control over their homework schedules get higher marks. Hence, doing homework on time creates the concept of positive self in students. Time management strategies increase students perceived self-efficacy against the threatening experiences of hard lessons and test anxiety. As a result, the anxiety will decrease, and the students' social performance and selfefficacy will improve (28). The fourth research finding showed that there is a direct and significant relationship between test anxiety and academic burnout. This finding is consistent with the findings of studies carried out by Faramarzi and Khafri (12) and Ebrahimi et al. (13). As an explanation, it can be expected that test anxiety is a type of undesirable emotional reaction to school and class assessments. This emotional condition is usually accompanied by worry, nervous system arousal, and confusion. At the time of test anxiety crisis, that is, situations accompanied by imminent danger or disintegration, the student will feel helpless and unable to find any reason for his/her emotional condition. These anxieties are almost always accompanied by physical symptoms such as paleness, shivering, rapid heartbeat, respiratory problems, etc., and the individual is unable to actualize their potential abilities (13). These negative symptoms caused by test anxiety in learners and students are accompanied by academic burnout symptoms such as mental and emotional fatigue, psychological pressure such as lack of required resources to do tasks and homework, mental fatigue, time restrictions, role overload, inability to constantly attending the classes, not participating in-class activities, being uninterested in courses, feeling unable to learn the lessons. These symptoms gradually pave the way for academic burnout in students with test anxiety (12). The fifth research finding showed that there is an indirect and significant relationship between self-efficacy beliefs and academic burnout. That is, by the increase of selfefficacy beliefs in students, their academic burnout is expected to reduce. This finding is in accordance with that of Felaza et al. (18), Charkhabi et al. (17), Yu et al. (21), and Lee et al. (29). To explain this finding, it can be stated that students with high self-efficacy have higher levels of en-ergy and show more self-devotion, and are less probable to experience academic burnout (21). Self-efficacy beliefs are indirectly related to deindividuation and emotional fatigue and directly related to diminished personal success. Findings also showed that people with higher self-efficacy points experience less burnout (29). Results also showed that people with high self-efficacy face problems instead of running away and have a higher commitment to achieving their goals. 
These people attribute failure to not trying, which is compensable. Hence, they feel less burnout and academic stress. The sixth research finding showed that test anxiety and self-efficacy beliefs played a parallel mediating role in the relationship between time management and academic burnout. In the first hypothesis, it was shown that there was no significant relationship between time management and academic burnout. However, the present study showed that a reduction in time management skills is related to an increase in test anxiety and self-efficacy beliefs, and it can lead to academic burnout through them. As an explanation, it can be stated that naturally, various factors affect students' progress and academic performance. Some of them improve academic performance, and some others weaken it. Academic burnout is one of the factors that negatively affect students' progress and academic performance. Students' burnout addresses feeling fatigued and uninterested in learning lessons and/or feeling pessimistic and unworthy as a student (17). The present study had some limitations. Some of the limitations of this study included the fact that the study was carried out among the students of the Islamic Azad University of Ahwaz, and the attention should be turned in generalizing the results of this study to other students in other universities of Iran. Moreover, there are influential variables such as gender and age in academic burnout that have not been controlled in the present study and are recommended to be controlled in future studies. Since the present study was carried out on students, it is recommended that it should be carried out on other populations such as the students in other levels of education. Some of the practical recommendations include the fact that the university experts and officials in Iran note that universities should be programmed such that students can make better use of their positive personality traits and behaviors and take steps in progressing by increasing their selfefficacy and get away from academic burnout that prevents them from improving and progressing academically. Footnotes Authors' Contribution: Zahra Kordzanganeh: Study concept and design, acquisition of data, analysis, and interpretation of data, and statistical analysis. Saeed Bakhtiarpour: Administrative, technical, and material support, study supervision. Fariba Hafezi and Zahra Dashtbozorgi: Critical revision of the manuscript for important intellectual content. Conflict of Interests: No conflict of interest to declare. Ethical Approval: The study was approved by the Ethics Committee of Islamic Azad University-Ahvaz Branch (Code: IR.IAU.AHVAZ.REC.1399.025). Funding/Support: This study did not receive any funding. Informed Consent: Questionnaires were filled with the participants' satisfaction, and written informed consent was obtained from the participants in this study.
2021-05-05T00:09:10.764Z
2021-03-16T00:00:00.000
{ "year": 2021, "sha1": "048b5db6c1e4bcb0452309caea0350cfef29fda8", "oa_license": "CCBYNC", "oa_url": "https://jme.kowsarpub.com/cdn/dl/97d8b9d4-b437-11eb-b129-f37f54213809", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "39d9aef3d7c7a6c8778011614acce54cc32c8dda", "s2fieldsofstudy": [ "Education", "Psychology" ], "extfieldsofstudy": [ "Psychology" ] }
158882567
pes2o/s2orc
v3-fos-license
Some Aspects of Actual CBI and Inflation in the Countries of Southeast Europe

Received February 02, 2018; revised March 09, 2018; accepted May 20, 2018; available online June 15, 2018

Although the countries of Southeast Europe are connected in many ways, there are many differences among them with respect to the development of the market economy, and especially in the way monetary policy is conducted and price stability is achieved. The subject of this article's research is actual central bank independence and its impact on monetary stability in the specific environment of the Southeast European countries. We applied the turnover rate of central bank governors (TOR) as a research method, because a shorter average duration of a central bank governor's mandate can be an obstacle to conducting monetary policy in the long run: in such a case the central bank would be less interested in attaining its primary goal of keeping monetary stability. The main hypothesis of this study is that actual central bank independence had a significant influence on monetary stability, regardless of the different ways the central banks of the observed countries implement monetary policy. We used statistical methods to test the hypotheses and then gave an adequate explanation of the research results. Our research shows that in the period 2000-2016, despite their differences, the actual independence of the respective central banks strengthened while the inflation rate in the countries of Southeast Europe decreased, but the connection between the two was weak. However, we established that the Southeast European countries ultimately attained a relatively high degree of actual central bank independence as well as a lower inflation rate in 2016, while the negative correlation between the two became very strong. The observed countries may owe their monetary stability largely to the higher degree of actual independence of their respective central banks, but at the same time independence by itself is not enough to keep inflation at the desired rate, such as the one required by the Maastricht criteria. In modern circumstances a lower inflation rate can also depend on other factors, such as political lobbies, mutual adjustment of fiscal and monetary policies, imperfections of the labour market, the national culture of inflation, etc.
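The abstract above summarises the empirical strategy: actual central bank independence is proxied by the governor turnover rate (TOR) and then related to inflation. A minimal sketch of that computation, with purely illustrative placeholder figures rather than the paper's data, might look as follows.

```python
import numpy as np

# Illustrative placeholders: number of governor changes and span of years observed per country,
# plus average annual inflation over the same period (values are NOT taken from the paper).
countries = {
    # name: (governor_changes, years_observed, avg_inflation_pct)
    "Country A": (5, 17, 3.1),
    "Country B": (3, 17, 2.2),
    "Country C": (6, 17, 4.8),
    "Country D": (2, 17, 1.9),
}

def turnover_rate(changes: int, years: int) -> float:
    """TOR = governor changes per year; a higher TOR is read as lower actual independence."""
    return changes / years

tor = np.array([turnover_rate(c, y) for c, y, _ in countries.values()])
inflation = np.array([infl for _, _, infl in countries.values()])

# Pearson correlation between TOR and inflation; a positive sign would mean that more frequent
# governor turnover (less actual independence) goes together with higher inflation.
corr = np.corrcoef(tor, inflation)[0, 1]
print("TOR values:", np.round(tor, 3))
print(f"Correlation(TOR, inflation) = {corr:.2f}")
```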
INTRODUCTION

Only a few decades ago, most central banks in the world were under strong political influence from their governments and had little or no independence in conducting monetary policy measures. The growth of central bank independence has substantially changed the definition and understanding of who creates monetary policy, as well as the responsibility for the goals achieved in its conduct. Strong support for central bank independence came from the results achieved by the German central bank (Deutsche Bundesbank), which was considered the most independent central bank in the world until it transferred its independence to the European Central Bank. Breaking the link between a country's government and the authority to conduct monetary policy meant that this authority was transferred to an independent and competent institution that would not depend on political pressures. The government thus lost its discretionary ability to trick other economic participants through its actions; on the contrary, it was put into a 'more equal position' with other economic participants, while monetary authority was given exclusively to the central bank, whose credibility became very important for the behaviour of all other economic participants.

Considering the undisputed importance of establishing the degree of central bank independence, the ways and methods of measuring it became very important so that measurement could be more credible and calculation more precise. Measuring central bank independence with legal indicators can show substantially different results compared with measuring so-called actual central bank independence, because of the flaws of the legal approach. Actual central bank independence has therefore become the subject of research by more and more authors. It is difficult to measure certain factors that can be important for establishing central bank independence, such as the personality of the Governor or of other bank leaders, factors arising from tradition, etc. At the same time, legislative provisions can be applied differently in practice even when they are expressed in the same way in the legislation regulating the work of the central bank. Hence, often due to various types of pressure, the mandate of a central bank Governor can end before the full term, so that a Governor more convenient for conducting the kind of monetary policy that suits certain political, interest or other groups can be appointed.

Although the countries of Southeast Europe share many similarities and are additionally connected by a common regional area, there are obvious differences in their political and economic development, their transition results, etc., while in the context of our research we especially point out differences in the conduct of monetary policy. The time that has passed since the beginning of transition, as well as the degree of overall liberalization of these economies, created the basis for a sound study of the influence of actual central bank independence on monetary stability in the countries of Southeast Europe, and also the possibility to observe the mutual correlation of the researched phenomena and, consequently, to draw adequate conclusions.
THEORETICAL CONSIDERATIONS AND RESEARCH ON CENTRAL BANK INDEPENDENCE AND INFLATION

A broad consensus among economists, politicians and the public has led monetary policy to be directed towards maintaining monetary stability as its long-term goal. Such a position freed the central bank from the obligation to pursue other macroeconomic goals, which remained the responsibility of governments. Tinbergen (1954) is considered the modern creator of the idea that central banks should be independent, because he argued that if there are several conflicting economic goals, each of them should be addressed by an independent specialized institution. However, the very idea of central bank independence rests on the time inconsistency of monetary policy explained by Kydland and Prescott (1977). These authors view the implementation of economic policy on the basis of discretion negatively and support implementation on the basis of set rules ("rules against discretion"). Conducting monetary policy by rules can efficiently solve the problem of inflation, and economic participants would deem such a policy credible unless there are inflation surprises. Therefore, solving the problem of the time inconsistency of monetary policy is nowadays at the very core of central bank operations. Buchanan and Wagner (1977), for their part, clearly pointed out that only an independent central bank can resist political pressure that would otherwise result in an inflationary bias. Bade and Parkin (1982) studied 12 OECD countries and calculated the level of independence of their central banks for the period 1951-1975, establishing that central bank independence has a negative influence on the inflation rate. While researching the elements of central bank independence, Rogoff (1985) argued that the Governor should take a conservative approach to monetary policy, i.e. that greater weight should be given to price stability than to other goals of economic policy, such as increasing employment or economic growth. Several authors, such as Alesina (1988), Burdekin and Willet (1991), Cukierman (1992), Cukierman, Webb and Neyapti (1991), and Eijffinger and Schaling (1993), have pointed out that central bank independence is institutionally essential for achieving monetary stability. Alesina (1988) also used the Bade-Parkin index in his research and established a negative correlation between central bank independence and inflation. Furthermore, Alesina (1989) additionally concluded that an independent central bank can reduce fluctuations in monetary policy during election cycles. Neumann's (1991) research showed clear reasons for entrusting the central bank with independently implementing monetary policy, as well as certain elements that could be used to create different models for measuring central bank independence. Grilli, Masciandaro and Tabellini (1991) constructed the GMT index for measuring the political and economic independence of central banks; applying it, they established a negative correlation between central bank independence and inflation in developed Western countries. Lohmann (1992), on the other hand, researched the optimal design of the institution of central banking.
While researching the influence of the central bank on monetary stability, Cukierman and others (1992), and later Cukierman again (1994), established that legal central bank independence is more suitable for measurement in developed countries, where it is connected with lower inflation, while the turnover rate of governors (TOR) is more suitable for researching central bank independence in developing countries, where it, too, is connected with the inflation rate. Furthermore, Alesina (1993) established a negative correlation between the inflation rate and central bank independence for a group of developed Western countries, using and upgrading the model of Bade and Parkin. Capie, Mills and Wood (1993) concluded that central bank independence is a necessary but not sufficient condition for maintaining price stability. Fischer (1994), and again Debelle and Fischer (1995), attributed greater importance to economic rather than political central bank independence, while Walsh (1995) modelled the central bank as the government's agent trying to maximize an objective function.

A unique historical period, the transition of socialist economies into modern market economies, created new opportunities for researching certain social and especially economic phenomena together with their specific characteristics. These circumstances differed greatly from the conditions in which monetary policy was conducted by the central banks of developed countries. Radzyner and Reisinger (1997) researched central banking in the Czech Republic, Hungary, Poland, Slovakia and Slovenia; as a negative finding, they established that there was still some direct crediting of governments by the central bank. Loungani and Sheets (1997) established that central bank independence under transition conditions negatively influences the inflation rate, which was also established by Cukierman (1998), who pointed out the necessity of an adequate level of economic liberalization. Lybek (1999) researched the connection between central bank independence and certain macroeconomic indicators in former Soviet countries, including elements of central bank accountability in the model designed for measuring independence. His model is more rigorous than Cukierman's or the GMT model, so as to be less influenced by subjective elements when applied. Lybek established that countries with a higher level of de jure independence and accountability of their central banks had a lower annual inflation rate in the period 1995-1997. In the context of our research it is important to point out that Lybek also studied the influence of actual central bank independence on the inflation rate, but due to the short research period he was not able to establish a significant correlation. Maliszewski (2000) researched the independence of central banks in 20 transition countries, using the GMT model as his basis. His model differed from the original in that he assumed a central bank has a higher level of political independence if the Governor can be dismissed only for non-political reasons, and that crediting of the government by the central bank is less harmful if all direct credit is securitized. Maliszewski concluded that there is a significant influence of central bank independence on the inflation rate in the observed
countries, but found no equally important correlation with the economic independence of the central banks. Ilieva et al. (2001) constructed an index of central bank independence that included legislative and behavioural aspects of independence and established that central bank independence is higher in transition countries that are in the process of EU accession than in other transition countries. Cukierman et al. (2002) researched the influence of central bank independence on certain macroeconomic variables in 26 transition countries, using two of Cukierman's measures of central bank independence: LVAW, composed of 16 indicators, and its modified version LVAU. The authors established that the following elements have the most important influence on the level of central bank independence: the allocation of authority in conducting monetary policy, the procedure for resolving disputes between the government and the central bank, and the importance attributed to price stability in central bank legislation. Freytag (2003) researched legal central bank independence in some transition countries, created a monetary commitment index, and established a very high level of independence. Dvorski (2004) used an index of the frequency of changes of central bank Governors and researched the legislative regulations governing the work of the central banks of Southeast Europe, taking the Maastricht Criteria requirements into account. She concluded that the Maastricht Criteria have mostly been implemented in legislation, but that in practice the central banks are not completely free from political influence. Piplica (2012) researched actual central bank independence and inflation in Croatia and the transition countries that are EU members and found a relatively high level of actual central bank independence and a low inflation rate for the period 1998-2010; he also concluded that a higher level of actual central bank independence is no longer correlated with lower inflation once monetary stability has already been achieved. Furthermore, Piplica (2015) transformed and upgraded the GMT model and researched the influence of central bank independence on monetary stability in transition countries that are EU members in the early and later phases of the transition process, concluding that in the early phase and over the whole transition period there is a significant negative influence of legal central bank independence on inflation, but that this is not so obvious in the later phase of transition. Bogoev and Petrevski (2015) analysed the political and economic arguments for establishing independent central banks and critically evaluated the different elements used to quantify legal and actual central bank independence; they also reviewed the evolution of central bank independence in the transition countries of Central and Eastern Europe. Angelovska-Bezhoska (2017) focused on the legal regulations defining the independence of the National Bank of the Republic of Macedonia, based on the index of Cukierman et al.
(1992) and the index of Jacome and Vasquez (2005), which are important for maintaining price stability. The author established that the independence of the National Bank of the Republic of Macedonia has grown, while improvement is still needed in the way monetary policy is articulated and in the process of appointing the non-executive members of the bank's council. Of course, there are also authors whose research on central bank independence has reached the opposite conclusions. For example, de Haan and Sierman (1994) and Cargill (1995) established that there is no significant correlation between central bank independence and the inflation rate. Eijffinger and Van Kuelen (1995) used the models of Alesina, of Bade and Parkin, of Eijffinger and Schaling, and of Grilli, Masciandaro and Tabellini, and also concluded that there is no significant influence of central bank independence on inflation. Cargill (2016) considers central bank independence to be a myth, stating: "The conventional wisdom so widely accepted in the academic literature is based on a confused perception of independence that fails to distinguish between legal (de jure) and actual (de facto) independence." Many other authors have researched central bank independence and its influence on monetary stability, but we have highlighted only those important to the context of this research.

APPLIED MODEL AND MEASUREMENT OF THE ACTUAL CENTRAL BANK INDEPENDENCE

Research has often shown that actual central bank independence differs from legal independence, with a lower level of actual independence being observed. Actual central bank independence has become the subject of research by a growing number of authors, because some questions cannot be answered by applying measures of legal central bank independence. The prevailing theoretical view of a large number of authors is that monetary policy is neutral in the long run and that governments must not try to gain from discretionary monetary policy measures in the short run, because this can produce unwanted effects. That is why a shorter mandate of the central bank Governor can be an obstacle to the credible conduct of monetary policy over the long run.

It is not possible to measure actual central bank independence precisely, because it is very difficult to measure all the elements that actually affect it. One way of measuring actual central bank independence is the index of the frequency of changes of central bank Governors, and we apply it in our research. Central bank independence is also often measured by a questionnaire-based index, in which monetary experts from the observed countries complete the questionnaire. We are of the opinion that such questionnaires are subject to the subjective perceptions of the respondents and are hence less credible; regardless of the fact that the respondents are monetary experts, they can introduce a certain subjectivity, expressed in positive or negative perceptions of some measured elements, that can result in a distorted picture of reality.
The rate of frequency of replacement of the central bank Governor is by its nature a very simple indicator, based on the fact that the Governor is the most important person in conducting monetary policy; if the Governor is replaced often, it is a sign of political (or other) influence restraining central bank independence. Moreover, this index is not as subject to subjectivity as a questionnaire-based index and is hence more appropriate for measuring actual central bank independence. For all these reasons, and some other circumstances, it is presumed that completing the Governor's term of mandate provides assurance of stability in the conduct of monetary policy whose primary goal is achieving and maintaining price stability. The turnover rate of the central bank Governor (TOR) reflects the average term of mandate and is obtained as the ratio of the number of Governor changes in a period of time to the length of the observed period:

TOR = (number of changes of the central bank Governor in the observed period) / (length of the observed period in years).

It is obvious that a lower result means a higher level of actual central bank independence. Considering the length of a Governor's term of mandate, which mostly lasts 4-5 years (sometimes longer), as well as the duration of election cycles in democratic countries, Cukierman et al. deem it undesirable for the turnover rate of Governors to exceed 0.20 or 0.25. Of course, a low TOR does not automatically mean a high level of central bank independence, and this way of measurement should not be taken as exact. We cannot exclude the possibility that the Governor submits to government pressure in order to prolong his mandate as head of the central bank; indeed, there have been situations in which Governors were for certain reasons connected to governments, such as in Romania or the Czech Republic, where they later became members of those governments.

INFLUENCE OF THE ACTUAL CENTRAL BANK INDEPENDENCE ON INFLATION IN THE COUNTRIES OF SOUTHEAST EUROPE

The research covers the countries of Southeast Europe: Albania, Croatia, Romania, Bulgaria, Serbia, Bosnia and Herzegovina, Montenegro, Macedonia, Kosovo and Greece, which, although connected geographically, historically, economically, culturally, nationally and in many other ways, at the same time show significant differences in the formation of their states, democratic traditions, structures of the national economies, (non)membership of the EU, conduct of monetary policy, etc. Therefore, each of the central banks conducts monetary policy in a somewhat different political and economic environment, which can influence its success in achieving the goals of monetary policy.
The monetary policies of the Southeast European countries differ in many ways, but the primary goal of all the central banks has been achieving and maintaining monetary stability. The National Bank of Romania, for example, has conducted direct inflation targeting since 2005, with inflation goals established as yearly changes in the consumer price index within precisely defined percentage bands. Similarly, Serbia has conducted inflation targeting since 2009, after gradually introducing this monetary strategy from 2006; the National Bank of Serbia conducts an independent monetary policy with a floating exchange rate of its national currency. Inflation targeting is also conducted in Albania in order to moderate and anchor inflation. The national currencies of Bosnia and Herzegovina and Bulgaria, on the other hand, operate under currency board arrangements. The Croatian National Bank keeps the exchange rate of its national currency stable against the euro as a so-called nominal anchor of monetary policy in order to stabilize inflation expectations; exchange rate stability is maintained by foreign exchange interventions (against the euro), and in 2009 the IMF classified Croatia as a managed floating exchange rate regime. The National Bank of the Republic of Macedonia has used a nominal exchange rate anchor since 1995, first against the Deutsche Mark and since 2002 against the euro. Greece is the only observed country of Southeast Europe that is a member of the euro area, and accordingly it transferred its responsibility for monetary policy to the ECB in 2001. Montenegro and Kosovo, for their part, have carried out unilateral euroisation (so-called dollarization) of their monetary systems. Such a rigid regime, in which a country uses another country's currency instead of its own, appeared in Montenegro first as unofficial euroisation, then as partly official euroisation, until Montenegro finally became an officially euroised economy by introducing first the Deutsche Mark and later the euro as the only legal means of payment. Similarly, Kosovo simply adopted the Deutsche Mark and later the euro as a means of payment, without any agreement with the central bank of Germany (Deutsche Bundesbank) or, later, the ECB. It is the sum of the mentioned similarities and differences that has created a unique environment and an opportunity to explore the influence of central bank independence on inflation in circumstances different from those in, for example, developed Western countries or elsewhere.
In our research we set the thesis that in the observed period the respective central banks of the countries of Southeast Europe achieved significant actual independence as measured by TOR. In calculating actual central bank independence by the index of the frequency of change of central bank Governors, we counted only the Governors themselves, not the persons who substituted for them in certain periods. If a Governor was re-elected to head the central bank, we treated the whole period of his service as one mandate; but if another Governor was elected in the meantime, we deemed that a change in the implementation of monetary policy could occur and entered the change into the calculation. Most of the central banks had TOR values below or around the limits implied by election cycles in the observed countries, which means they exhibited a high or higher level of central bank independence. It is also evident that in several of the observed countries Governors were often replaced before completing their mandates. Furthermore, the inflation rate in all the countries of Southeast Europe was at a low level and within the Maastricht Criteria.

Influence of the Actual Central Bank Independence on Inflation in the Countries of Southeast Europe

Considering all the circumstances of the research, we observed TOR values from 2000 onwards, when the cumulative liberalization index (CLI) was at a higher level in all the observed countries, i.e. when a certain time had elapsed since the very beginning of transition and it became possible to compute a meaningful average length of Governors' terms of mandate. We set the thesis that there was a significant influence of actual central bank independence on monetary stability, regardless of the different monetary policies implemented by the central banks of the countries of Southeast Europe.

The central bank independence of the countries of Southeast Europe, expressed as TOR values, is set in relation to inflation expressed as the index of depreciation of the actual value of money, where π denotes the inflation rate. We observe inflation for the years in which the TOR values of central bank independence were observed; the inflation rate π is expressed by the consumer price index at the end of the year. Our research covers the period 2000-2016, as is clearly visible from the following graph.

In our research we used a total of 161 observations. The average rate of change of central bank Governors in the period 2000-2016 was 0.28, meaning that in the observed countries Governors are often replaced before completing their terms of mandate. However, over this period the TOR value fell continuously and actual central bank independence grew. At the same time, the inflation rate was on average slightly higher than that required by the Maastricht Criteria, but by the end of the research period it was within the required limits.
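As a minimal illustration of the TOR calculation and the counting rules described above, the following sketch (Python) shows the arithmetic behind a turnover rate; the governor-change counts used here are hypothetical and do not reproduce the study's data.

```python
# Minimal sketch of the governor turnover rate (TOR).
# TOR = number of changes of the central bank Governor / length of the observed period in years.
# The counts below are hypothetical and only illustrate the arithmetic.

def turnover_rate(governor_changes: int, years: int) -> float:
    """Average number of Governor changes per year over the observed period."""
    return governor_changes / years

# Hypothetical example: 5 Governor changes over the 17-year window 2000-2016.
tor = turnover_rate(governor_changes=5, years=17)
print(f"TOR = {tor:.2f}")  # about 0.29 changes per year

# A TOR above roughly 0.20-0.25 suggests Governors are, on average, replaced
# before a typical 4-5 year term of office expires.
print("frequent early replacement" if tor > 0.25 else "mandates mostly completed")
```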
The regression line is positive (Y = 0.2585 + 0.6233 X), and the graph shows visually that there is a very weak correlation between the frequency of turnover of central bank Governors in the observed countries (TOR) and inflation measured by the index of depreciation of the actual value of money. It is also evident that in the period 2000-2016, for which this research was done, inflation rates were relatively low regardless of the frequency of changes of the central bank Governors of the countries of Southeast Europe.

It is interesting that Romania shows a high level of actual independence of its central bank, yet in some years still had high inflation rates. Its TOR value for 2000 was 0.10, but the inflation rate was 40.71%; in 2001 the TOR value fell to 0.09, but inflation was still a high 30.19%; in 2002 the TOR value was 0.08 and the inflation rate was 17.5%. In the later years of the research the Romanian inflation rate is within the limits of the Maastricht Criteria, even showing a deflationary character. At the same time, it is interesting that Greece had an inflation rate above the Maastricht Criteria for a sequence of years, although it uses the euro as its means of payment; in 2010 it amounted to 5.16%, but in the later years of our research it was low, even deflationary. The situation is similar in Kosovo and Montenegro (in 2012 the inflation rate was 5.12%).

Our research comprises a total of 161 cases for the period between 2000 and 2016, for which data were available. The standard error of the estimate is 0.058, the determination index of 0.044 is low, and the correlation is positive, amounting to 0.211; such a correlation can be considered almost insignificant. Since a lower TOR value means a higher level of actual central bank independence, it is obvious that actual CBI and inflation have a weak negative correlation. The regression results are shown in the following table.

On the other hand, the actual central bank independence of the countries of Southeast Europe in the later years of our research had a common influence on the monetary stability of these countries. Although in Serbia, Kosovo and Albania the TOR values are still significantly higher than 0.25, i.e. they show that Governors are frequently changed before completing their mandates, price stability has been achieved and the inflation rate has been below 2% in recent years. The last year of our research shows a very strong mutual connection between the frequency of changing the Governors of the central banks of the observed countries (TOR) and inflation measured by the depreciation of the actual value of money index, i.e. actual CBI strongly influences the lowering of prices, which is visible from the graph below.
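The weak relationship reported above can be illustrated with a small ordinary least squares computation. The sketch below (Python) uses invented data points, and it assumes the depreciation-of-real-money index follows the common Cukierman-style definition D = π/(1 + π); the paper does not spell this formula out, so the definition should be treated as an assumption.

```python
# Illustrative OLS of the depreciation-of-real-money index on TOR.
# Data points are invented; D = pi / (1 + pi) is an assumed (Cukierman-style) definition.
import numpy as np

tor = np.array([0.10, 0.20, 0.25, 0.33, 0.50])        # hypothetical TOR values
inflation = np.array([0.02, 0.03, 0.05, 0.08, 0.12])  # hypothetical end-of-year inflation rates (pi)

depreciation = inflation / (1.0 + inflation)           # assumed index of depreciation of real money

slope, intercept = np.polyfit(tor, depreciation, deg=1)  # least-squares fit D = a + b * TOR
r = np.corrcoef(tor, depreciation)[0, 1]                 # Pearson correlation
r_squared = r ** 2                                       # coefficient of determination

print(f"D = {intercept:.4f} + {slope:.4f} * TOR")
print(f"correlation r = {r:.3f}, R^2 = {r_squared:.3f}")
```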
Despite the small number of observed cases, Graph No. 3 clearly shows that in 2016 there was a very strong positive connection between the frequency of turnover of the Governors of the central banks of the observed countries (TOR) and inflation measured by the index of depreciation of the actual value of money. The linear regression equation is positive (Y = 0.1883 + 8.8875 X), as is the correlation index of 0.83, while the determination index of 0.69 is, of course, high, meaning that there was at the same time a strong negative correlation between actual CBI and inflation in 2016. It is therefore obvious that the influence of actual CBI changed substantially at the end of the observed period compared with the research period as a whole.

FACTORS THAT ENDANGER ACTUAL CENTRAL BANK INDEPENDENCE IN SOUTHEAST EUROPE AND THEIR REFLECTION ON MAINTAINING MONETARY STABILITY

All economic participants in the observed countries have accepted the view that monetary stability is the long-term goal of monetary policy and that inflation has numerous harmful consequences for their economies. Achieving a high (or higher) level of central bank independence and a lower inflation rate at the same time creates the task of maintaining those values at the same or a better level. However, the countries of Southeast Europe have incorporated into their regulations, in different ways, the requirements that the leading persons of their respective central banks should fulfil, which is not always in accordance with theoretical considerations regarding central bank independence (Neumann, 1991).

Various factors endanger the preservation of central bank independence, as well as monetary stability, while political pressure (from the government) is continuous and can hardly ever be stopped. The political interest of ruling parties is always to beautify reality in the eyes of the public in order to gain the voters' trust for a new mandate. They therefore often try to use monetary policy to obtain economic goals that were not realized by other measures of economic policy (especially fiscal measures). On the other hand, the insufficiently restructured economies of the observed countries still contain economic entities that relied on continuous depreciation of the domestic currency against foreign currencies; stability of prices and exchange rates does not suit them, as they could lose their preferential position. Furthermore, there are continuous pressures from workers' unions for the irrational rescue of poorly performing enterprises, which puts strong pressure on the government for deficit financing in order to keep social peace. The underdeveloped labour market and the necessary restructuring of the economy have resulted in a surplus of labour and a higher unemployment rate in the observed countries. That, again, has created social pressure on governments to increase employment, economic growth, and the like, making these issues more important than achieving monetary stability.
A particular danger to preserving monetary stability is a lack of coordination between monetary and fiscal policy measures. The inefficient fiscal policy conducted in part of the observed countries resulted in higher budget deficits than desirable, shortfalls on non-budget accounts, insolvency, etc. At the same time, significant fiscal evasion and a still very strong grey economy made it impossible to finance budget expenditures, which further creates pressure for such expenditures to be financed by new money emission. Coordination of the monetary and fiscal authorities should be reached at the moment macroeconomic goals are established in the process of parliamentary approval. Of course, unpredictable situations should always be kept in mind, such as large natural or social catastrophes, which can create disturbances in the economy and affect the activity of the central bank and the monetary stability of a country.

On the other hand, the monetary sovereignty of several countries of Southeast Europe has been transferred to the ECB, and the euro has been introduced as a means of payment, which has strengthened monetary stability from abroad; it also means that certain internal (or external) factors cannot endanger price stability. A strong factor supporting monetary stability in the countries of Southeast Europe is the financial sector, since it is a significant source of funds: financial institutions obtain short-term funds while granting long-term ones. The banking system of the countries of Southeast Europe is largely owned by foreign banking groups, which prefer monetary stability and an independent institution whose primary goal is maintaining price stability. However, it is not rare for financial institutions to encourage an irrational strengthening of domestic consumption beyond the means of economic participants. The central banks of the observed countries therefore have to evaluate properly the future expectations of economic participants, as well as foreign influences, in order to conduct an adequate strategy of monetary policy measures.

A very specific situation developed in the countries that used the exchange rate as a nominal anchor: for this (and other) reasons many domestic prices were indexed in foreign currency, first in Deutsche Mark and later in euro, which supported the monetary stability of these countries. Furthermore, a large part of companies' and citizens' deposits was denominated in foreign currency, and loans were granted in foreign currency, all of which stabilized the financial sector. Some of the observed countries applied a monetary policy in which the central bank targeted inflation in order to strengthen its credibility with the domestic public. Thus, all economic participants can adjust their activities to inflation expectations without fear of being deceived by discretionary measures of monetary policy.

The strengthening of central bank independence in the observed countries was enhanced by a higher level of transparency in conducting monetary policy measures, as well as by legislative regulations establishing that the mandate of central bank leaders is conditioned on achieving the set goals, i.e.
preserving monetary stability, while they can be dismissed only for non-political reasons. At the same time, in all the Southeast European countries a national culture of inflation aversion has grown over the observed period. A large number of citizens of these countries work in developed Western countries, where the public is averse to inflation, and such citizens represent a force that has pressured the political structures of their home countries to adopt the values existing in the countries where they work. The strengthening of democratic standards has at the same time strengthened the preconditions for increasing the level of actual central bank independence, as it cannot exist in non-democratic systems. Some of the observed countries are full members of the EU and are obliged to fulfil the Maastricht Criteria, while the others are in the process of EU accession and will accordingly have to implement the same standards.

CONCLUSION

The research on the influence of actual central bank independence on monetary stability was conducted in the countries of Southeast Europe: Albania, Croatia, Romania, Bulgaria, Serbia, Bosnia and Herzegovina, Montenegro, Macedonia, Kosovo and Greece, which are significantly connected, but at the same time differ in the formation of their state communities, democratic traditions, structures of the national economies, (non)membership of the EU, conduct of monetary policy, etc. Although each of the respective central banks conducts monetary policy in a somewhat different political and economic environment, which can influence its success in achieving the goals of monetary policy, the primary goal of all the central banks was achieving and maintaining monetary stability.

In our research we used the rate of frequency of turnover of the central bank Governor, considering the Governor to be the most important person in conducting monetary policy measures, so that frequent changes of the Governor indicate political (or other) influence limiting the independence of the central bank. The research has shown a continuous strengthening of actual central bank independence and a lowering of the inflation rate in the observed countries in the period 2000-2016. However, over the whole observed period the influence of actual central bank independence on the lowering of inflation is weak. At the same time, we established that by the end of the observed period the countries of Southeast Europe had reached a relatively high or higher level of independence of their respective central banks, as well as a lower inflation rate in 2016, and the influence of actual central bank independence on lowering the inflation rate had become very strong.
There are many factors that can damage the preservation of a high level of central bank independence and monetary stability in the countries of Southeast Europe, such as inconsistency between monetary and fiscal policies, continuous pressure for depreciation of the national currency against foreign currencies, growth of the expenditure side of the state budget and deficit financing of its needs, an underdeveloped labour market, etc. However, recent legislation has made an important step towards greater accountability of the central bank in conducting monetary policy measures and greater transparency of its work towards all economic participants, which has a positive influence on maintaining monetary stability. Membership of, or accession to, the EU has also provided significant support to the monetary stability of the observed countries.

Table 1. Term of Mandate of the Governors of the Central Banks in the Countries of Southeast Europe
Table 2. Actual CBI and Inflation 2016
Table 5. Regression Results TOR and Inflation 2016
2019-05-20T13:05:22.905Z
2018-06-01T00:00:00.000
{ "year": 2018, "sha1": "fce7407e03ebf4968349afcde24d9072bc805bf2", "oa_license": "CCBY", "oa_url": "https://doi.org/10.14254/1800-5845/2018.14-2.3", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "2d1d867cf5c790166ac821d5c5490a5c4ca0684b", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Economics" ] }
257584137
pes2o/s2orc
v3-fos-license
A novel prognostic model based on ferritin and nomogram-revised risk index could better stratify patients with extranodal natural killer/T-cell lymphoma

Extranodal natural killer (NK)/T-cell lymphoma (ENKTCL) is an aggressive lymphoma with marked heterogeneity, resulting in a distinct prognosis even in patients with the same disease stage. The nomogram-revised risk index (NRI) has been proposed to stratify patients with ENKTCL. Numerous reports have revealed the prognostic role of serum ferritin in various cancers.

INTRODUCTION

Extranodal natural killer (NK)/T-cell lymphoma (ENKTCL) is an aggressive malignancy associated with Epstein-Barr virus infection. In the past decade, upfront radiation therapy, immunotherapy, and non-anthracycline-based chemotherapy were found to largely improve the prognosis of ENKTCL. 1,2 However, considerable challenges and unmet clinical needs persist in patients with advanced and relapsed/refractory ENKTCL. Currently, patient management involves heterogeneous strategies, primarily depending on the Ann Arbor staging system. 3 Several prognostic models, including the International Prognostic Index (IPI), 4 Korea Prognostic Index (KPI), 5 and prognostic index of natural killer lymphoma (PINK), 6 have been validated in patients with ENKTCL. However, these models have failed to predict prognosis consistently. Moreover, some of these models were developed in the pre-asparaginase era, indicating that they might lack relevance in the era of asparaginase-based therapy. In 2020, the nomogram-revised risk index (NRI) was reported to predict the prognosis of all patients with ENKTCL, especially early-stage patients. 7 Xiong et al. classified ENKTCL into TSIM, HEA, and MB subtypes according to multi-omics analysis, and this molecular subtyping system could predict outcomes as well as guide the selection of appropriate drug therapy. 8 Furthermore, a novel single-nucleotide polymorphism prognostic evaluation system was established in 2022, which could be employed as an effective tool to predict the prognosis of patients with ENKTCL and determine which patient population would benefit from chemotherapy. 9 In the present study, we examined and verified the NRI to generate additional evidence regarding its prognostic value and general applicability. Furthermore, we found that several clinical indicators, such as ferritin, might significantly impact the prognosis of patients with ENKTCL. Therefore, we constructed a new prognostic index by combining NRI and clinical indicators to further predict the prognosis of patients within the same NRI stratification.

Patients and treatments

In total, 236 patients diagnosed with ENKTCL from January 2010 to December 2021 were included in our analysis as the training cohort. All patients had received initial treatment with asparaginase-based chemotherapy, with or without radiotherapy, according to the stage and disease location. For patients with early-stage disease, sandwich protocols (radiotherapy after an initial three to four cycles of asparaginase-based regimens, followed by an additional two to three cycles of chemotherapy as consolidation) were employed. For patients with advanced-stage disease, six cycles of asparaginase-based chemotherapy were administered, with or without local radiotherapy as consolidation.
The chemotherapy regimens used in this cohort were mainly GELOX/P-GEMOX (gemcitabine, oxaliplatin, and L-asparaginase/pegaspargase) or EPOCHL (etoposide, doxorubicin, vincristine, cyclophosphamide, prednisone, and L-asparaginase). After three to four cycles of induction chemotherapy, patients with stage IE/IIE disease received radiotherapy, with a total dose of 50-56 Gy to the primary tumor lesion. 18F-FDG positron emission tomography/computed tomography and magnetic resonance imaging (MRI) scans were performed to assess efficacy every two cycles of chemotherapy. After completing all treatments, patients were followed up every 3 months for the first 2 years, then every 6 months for the next 3 years, and yearly thereafter, by clinical examination and MRI. Moreover, we obtained an independent external cohort (n = 90 patients) from Xuzhou Medical University and South Medical University to validate the role of this novel prognostic model.

Statistical analysis

The primary endpoint was progression-free survival (PFS), defined as the time from initiating treatment to disease progression, relapse, death from any cause, or last follow-up. Overall survival (OS) was defined as the time from initiating treatment to death from any cause or last follow-up. All data were processed with R software and the respective software packages (http://www.r-project.org/).

Among the 236 patients in the training cohort, the male-female ratio was 2.37:1 and the median age was 46 years. Extensive primary tumor invasion (PTI) was noted in 59.7% of patients. The majority of patients were aged ≤60 years (84.7%), had stage I/II disease (66.1%), and presented normal lactate dehydrogenase (LDH) levels (74.2%). A minority of patients exhibited lymph node involvement (33.1%). More than 50% of patients had normal levels of albumin (50.4%), hemoglobin (61.9%), and D-dimer (D-D) (65.3%). The percentage of patients with NRI scores of 0, 1, 2, 3, 4, and 5 points was 17.8, 11.9, 29.7, 26.3, 11.9, and 2.5%, respectively. With a median follow-up time of 34 months (1-119), the predicted 5-year PFS and OS rates were 56% and 61%, respectively (Figure 1A). Patients with early-stage disease had significantly better 5-year PFS and OS rates than those with advanced-stage disease (66.6% vs. 42.9% for PFS, and 80% vs. 47.5% for OS, p < 0.0001) (Figure 1B,C). Patients with different NRI scores presented significantly different PFS and OS (p < 0.0001) (Figure 1D,E). As shown in Table 1, the basic characteristics of the external validation cohort (90 patients) did not differ from those of the training cohort.

NRI-based prognostic model

Using univariate Cox analysis, the following factors were found to be significant (p < 0.05): Ann Arbor stage, LDH concentration, PTI, lymph nodes, ferritin concentration, NRI, albumin concentration, D-D concentration, and hemoglobin concentration (Figure 2A). Given that NRI already includes stage, LDH concentration, age, PTI, and other factors, and that lymph node involvement and B symptoms exhibit a clear relationship with stage, we finally selected NRI, ferritin concentration, hemoglobin concentration, albumin concentration, and D-D concentration to construct a new prognostic model. Furthermore, we used ROC curves for further interpretation of these clinical features and found that their AUCs approximated 0.6, among which the AUC of NRI was 0.66 (Figure 2B). Considering patient selection and regional differences, this result is consistent with previously reported NRI test results.
However, the AUC of the new prognostic model increased to 0.707 (Figure 2C), indicating that this new model was of greater value for prognosis prediction than the NRI model. Moreover, the concentrations of ferritin, albumin, hemoglobin, and D-D are easy to obtain, reducing the limitations of this model in clinical application.

Prognostic value of ferritin in NK/T-cell lymphoma

We established a ferritin value of 380 μg/L as the best cutoff value in our analysis. For easy memorization and application in clinical practice, we selected 400 μg/L as the cutoff value (Figure 3A). Based on the survival analysis results, the ferritin concentration played a significant role in predicting prognosis using either 400 or 380 μg/L as the cutoff value (Figure 3B and Figure S1).

Construction of a novel, easily applicable NRI-based prognostic index of ENKTCL

We used the nomogram function to establish the weighting of each component in the novel model described above. Among the examined indicators, albumin concentration accounted for the least weight, whereas NRI and D-D concentration carried substantial weight (Figure S2). Based on the results of the univariate Cox regression analysis of the NRI score in combination with the clinical data, we propose an index that is convenient for clinical use. The final index was established using the ROC curve and the nomogram, and the weighting of every single component is listed in Table 2. Patients were assigned to one of three risk groups according to the sum of these parameters (low, 0-2; medium, 3-5; high, ≥6). To further evaluate the predictive performance of this model, we plotted calibration curves using 3- and 5-year PFS data, respectively (Figure 3C,D). Finally, we scored each patient using the new model and plotted a new ROC curve (Figure 4A). To render the new prognostic model more convenient for clinical application and easier to memorize, we stratified patients into three risk levels (low, medium, and high), and the fitted survival curves were also meaningful (Figure 4B). Simultaneously, our results were verified in a separate external validation cohort of 90 patients (Figure 4E and Figure S3). To verify the independence of the novel risk score, we conducted a multivariate Cox analysis (Figure 4C,D), which revealed that the new prognostic model was independent of the Ann Arbor stage and the NRI model.

The novel NRI-based prognostic index could better stratify patients with ENKTCL

As shown in Figure 5A,B, this novel NRI-based prognostic model could further subdivide the early- and advanced-stage patients (p = 0.0014 and 0.02, respectively). Although NRI significantly improved prognostication with respect to discrimination and the effectiveness of clinical decision-making, by combining it with clinical features such as ferritin concentration we could achieve a more detailed stratification of patients' prognosis on top of NRI. In patients with an NRI of zero points, that is, low risk, we could effectively distinguish the risk level further using our new prognostic model (Figure 5C). In patients with an NRI of 1, that is, intermediate-low risk, the survival analysis was imprecise owing to the small number of patients. Therefore, we combined patients with NRI of 1 and 2, regarded as the NRI medium-risk group, to fit the survival data and found that the model remained applicable (Figure 5D).
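As a minimal illustration of how the point-based index described above could be applied in practice, the sketch below (Python) dichotomizes the clinical indicators and maps the summed points onto the low (0-2), medium (3-5) and high (≥6) risk groups. The per-component weights here are placeholders rather than the values from Table 2 of the paper, and the ferritin cutoff of 400 μg/L follows the rounded cutoff discussed in the text.

```python
# Illustrative NRI-based risk scoring; the component weights are placeholders,
# not the values reported in Table 2 of the paper.
FERRITIN_CUTOFF = 400.0  # ug/L; best cutoff ~380 ug/L, rounded to 400 for clinical convenience

WEIGHTS = {
    "nri": 2,         # points contributed per NRI level (illustrative placeholder)
    "ferritin": 1,    # ferritin >= cutoff
    "hemoglobin": 1,  # low hemoglobin
    "albumin": 1,     # low albumin
    "d_dimer": 1,     # elevated D-dimer
}

def risk_points(nri_score, ferritin, low_hemoglobin, low_albumin, high_d_dimer):
    """Sum illustrative points across NRI and the four clinical indicators."""
    points = WEIGHTS["nri"] * nri_score
    points += WEIGHTS["ferritin"] if ferritin >= FERRITIN_CUTOFF else 0
    points += WEIGHTS["hemoglobin"] if low_hemoglobin else 0
    points += WEIGHTS["albumin"] if low_albumin else 0
    points += WEIGHTS["d_dimer"] if high_d_dimer else 0
    return points

def risk_group(points):
    """Map summed points to the three risk groups used in the model."""
    if points <= 2:
        return "low"
    if points <= 5:
        return "medium"
    return "high"

# Hypothetical patient: NRI = 2, ferritin 520 ug/L, anemia, normal albumin, elevated D-dimer.
pts = risk_points(nri_score=2, ferritin=520, low_hemoglobin=True, low_albumin=False, high_d_dimer=True)
print(pts, risk_group(pts))  # 7 -> "high"
```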
For patients with high-risk NRI scores (3 and 4), this novel prognostic model could further stratify patients into different prognostic groups (p = 0.0046, Figure 5E). For patients with an NRI of 5, survival fitting analysis was not performed, given the limited number of cases. In the external validation cohort, we only performed survival fitting of the new prognostic model for patients with an NRI of 0-2, owing to the relatively small sample size, and the model remained applicable (Figure 5F). Overall, our new prognostic model can further refine the risk stratification of patients with ENKTCL based on NRI.

DISCUSSION

In patients diagnosed with ENKTCL, the current treatment era involves the application of precision medicine or individualized treatment; thus, clinical stratification is particularly important for subsequent treatment selection. As a recently reported index, the prognostic value of NRI has been confirmed. Here, we attempted to establish a new prognostic model to subdivide patients with the same NRI and select appropriate treatment regimens with additional individuality. The new model combines the NRI score with multiple clinical indicators.

The serum ferritin concentration is one clinical factor included in this model. Ferritin, an iron-binding protein, 10 plays a central role in several metabolic pathways. It is well established that elevated serum ferritin can be linked to malignancy and poor outcomes, including potentially devastating diseases such as hemophagocytic syndrome (HLH). 11 Elevated ferritin levels can reflect an increase in iron reserves throughout the body, but paradoxically, these iron reserves are sequestered and unavailable for hematopoietic use, a process that leads to inflammatory anemia. 12 This relative iron deficiency during inflammation and malignancy is considered a defense mechanism that limits the use of serum iron by pathogens and tumors. 13,14 Ferritin has been shown to significantly reduce the proliferation and number of granulocyte-macrophage, erythroid, and multipotential progenitor cells, inducing myelosuppressive responses in patients. 15 Early in vitro studies have revealed that ferritin regulates the immune response by inhibiting lymphocyte function, which depends on interleukin (IL)-10 production, given that monoclonal antibodies against IL-10 can attenuate the inhibitory function of ferritin. 16,17 The mouse T-cell immunoglobulin-domain and mucin-domain 2 (TIM-2), a member of the T-cell immunoglobulin and mucin-domain (TIM) gene family, was the first cell-surface receptor of ferritin to be cloned. This receptor has a negative regulatory effect on the immune response, potentially indicating that high serum ferritin inhibits the immune response. 18,19 Ferritin may be an attractive target for cancer therapy, given that its downregulation can disrupt the supportive tumor microenvironment, reduce immunosuppression, and enhance sensitivity to chemotherapy. Our analysis also confirmed that the ferritin concentration is closely related to the prognosis of ENKTCL.

The hemoglobin concentration is another clinical indicator incorporated into the novel model. Previously, we examined the relationship between ENKTCL and hemoglobin concentration and found that it can be used to improve the prognostic role of IPI. 20 A low hemoglobin concentration is known to significantly impact tumor treatment outcomes, 21 potentially resulting in hypoxia and hypotonia in tissues, thereby impairing immune cell function. 22
Radiotherapy is critical to achieving a curative effect in early-stage ENKTCL, and hypoxia can also cause radioresistance, resulting in a poor prognosis in patients with low hemoglobin concentrations. 23 The third clinical indicator is D-D. Elevated D-D levels have been associated with increased tumor burden in solid malignancies, including advanced tumors, regional lymph node involvement, and distant metastases. 24,25 In hematologic tumors, we have demonstrated that high pretreatment D-D levels are associated with severe adverse clinical features and poor survival in ENKTCL and are an independent predictor of adverse prognosis. 26 Therefore, we incorporated D-D into the new prognostic model. The albumin concentration is the final factor included in the novel model. Albumin reflects the nutritional status of the body and has been associated with the prognosis of several cancers. 27 Studies have shown that malnutrition and inflammation can inhibit albumin synthesis. 28,29 Furthermore, asparaginase, the main drug used in chemotherapy for ENKTCL, has been shown to cause liver damage and ultimately reduce albumin production. Thus, low albumin levels in patients with cancer can be attributed to various factors. Although the underlying mechanisms remain controversial, the role of serum albumin as a predictor of cancer survival remains indisputable. 30

We established a new prognostic model by combining the above four clinical features with NRI. To facilitate clinical application, we then selected the best cutoff value for the ferritin concentration and the respective scores for all factors. We found that the new prognostic score is generally better than NRI alone. Finally, within the same NRI stratification, we observed that our new prognostic model could further separate patients with ENKTCL into low- and high-risk groups, despite the limited data. These distinctions are of considerable importance for guiding clinical treatment. For patients with low NRI scores who are also low risk in our new prognostic model, radiotherapy alone could be selected as the treatment of choice. In patients with high-risk factors, commonly used chemotherapy regimens, such as P-GEMOX, have failed to achieve long-term survival. In recent years, programmed death 1 (PD-1) inhibitors have shown marked efficacy in relapsed/refractory NK/T-cell lymphoma, 31 and a growing number of studies are evaluating the efficacy and safety of first-line PD-1 inhibitors combined with chemotherapy. 32 Moreover, maintenance therapy with PD-1 inhibitors could be considered in the context of clinical trials for high-risk patients. Furthermore, as determined in our new prognostic model, anemia or a decreased albumin level is considered a risk factor; thus, more active supportive care may overcome these negative effects and improve patient tolerance of intensive treatments. However, expanding the sample size by including a large number of qualified patients is essential to further verify our prognostic model. Moreover, our data pertain to China, and additional data are needed to verify whether our new prognostic model can be applied in Western countries. In addition, we need to further enrich patients' gene sequencing data to determine the relationship between characteristic genes and the factors of the new prognostic model. Subsequently, the new ENKTCL prognostic model could be optimized at the gene level, which would help promote the individualization of treatment.
In summary, our study verified the role of NRI in ENKTCL and combined it with concentrations of ferritin, hemoglobin, D-D, and albumin to construct a new prognostic model for further stratifying patients with the same NRI risk score. Our new model may improve the effectiveness of clinical treatment decision-making and provide a new concept for the prognostic evaluation of ENKTCL.

ETHICS APPROVAL AND CONSENT TO PARTICIPATE

This study was approved by the Ethics Committee of Beijing Tongren Hospital (approval certificate no. TRECKY2020-022). Given that all personal patient information was de-identified and anonymized, informed consent from patients was waived.
2023-03-18T06:17:44.836Z
2023-03-16T00:00:00.000
{ "year": 2023, "sha1": "faab9973edbbffd1623200e9c2f0a2e8c4d1ee48", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/cam4.5820", "oa_status": "GOLD", "pdf_src": "Wiley", "pdf_hash": "e249b4f3361bdd5a8e4115d699f0a0bdde3b5163", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
235579607
pes2o/s2orc
v3-fos-license
Sensing Technologies, Roles and Technology Adoption Strategies for Digital Transformation of Grape Harvesting in SME Wineries

The article improves understanding of leveraging new technology for DT (digital transformation) of the grape harvest in SME wineries. It provides evidence on the technologies used and workplace types deployed in grape harvesting, as well as strategic paths in deploying new technology, thereby contributing to the literature on networked sensing and seizing capabilities in the wine industry. The research approach is explorative and qualitative, drawing on 31 interviews with wine industry 4.0 experts and managers, mostly owners of SMEs: wineries, wine software and wine machinery enterprises. The resulting findings serve as a roadmap for digital transformation of the grape harvest process in SME wineries, explaining the technologies and work roles necessary for DWT (digital workplace transformation), as well as strategic paths for deploying novel grape harvest technology. Previous research on the wine industry 4.0 has focused on BMI (business model innovation), while this research expands the focus to include a wider concept of technology adoption strategy as well as DWT. The research identifies two types of factors impacting the strategic deployment of grape harvest technology: pull factors, also termed servitization factors, as well as push factors, also termed digital transformation factors.

Introduction

The study at hand provides an evidence-based sensing and technological forecasting roadmap for the wine industry by deploying open innovation between the different actors involved. Having in mind that no previous research has dealt with the changes in work roles and new technology adoption strategies for the wine industry 4.0, this article closes this research gap. Relevant practice-oriented implications for networked, open innovation of grape harvesting as well as theoretical contributions to the emerging field of open innovation in SMEs are delivered. The paths to the digital transformation of firms are numerous, as are the theoretical approaches and practical tools available to navigate this change. One of the more notable theoretical approaches to digital transformation of firms is that of dynamic capabilities, which are essential for digital transformation: (1) digital sensing capabilities (filtering and evaluating digital opportunities), (2) digital seizing capabilities (prototyping and defining business model (BM) value proposition) and (3) digital transforming capabilities (governing and aligning assets in accordance with the innovation ecosystem) [1][2][3]. Sensing and seizing have also been identified as key activities related to open innovation of local innovation ecosystems, which precedes the transformation of the businesses [1]. Having this theoretical framework in mind, this study deals with the collaborative sensing and seizing activity of relevant actors in a low-tech wine production ecosystem regarding future change of the grape harvesting logistics in wine production SMEs. Logistical ecosystems and the potential use of digital platforms have been identified as one of the most promising future avenues for digital ecosystem research [4]. The wine industry is an agricultural industry and is therefore considered a low-tech and highly networked industry, where the implementation of new digital technologies for productivity growth usually lags behind other industries [5][6][7][8][9].
However, digital transformation of all processes is identified as an inevitable process of transition [10]. In order to research the collaborative efforts of SMEs towards transformation of grape harvesting logistics in the wine production ecosystem, two research questions have been created: RQ1: What is the current state of the grape harvesting process among networked SMEs in a low-tech wine industry, regarding technologies used as well as work roles involved? RQ2: What are possible digital transformation pathways of networked SMEs on the example of grape harvesting for wine production? Open innovation is an innovation path which focuses on external sources and inbound paths of innovation towards the organization [11]. Open innovation is relevant for all the aforementioned research questions, because the higher the number of actors and the more diverse they are, the more they can benefit from tearing down knowledge barriers through collaboration [12][13][14]. There are, however, limitations to firms benefiting from open innovation which relate to firms capacity to absorb innovations as well as strategic focus of external cooperation [15]. Therefore, the research questions are based in open innovation as a research approach by considering both the demand pull of SME wineries as customers of wine hardware and software producers, as well as DT push, related to the new technologies being developed and offered inside industry 4.0. They are two basic strategic paths of technology adoption. The workplace seems to be one of the central elements of open innovation, where compassion is important not only for fighting uncivil behavior, but more importantly for supporting organizational culture with open innovation at its core [16]. In order to understand the nature and scope of DT of grape harvesting, existing technologies and work roles need to be identified so as to understand the scope of changes for skilling and reskilling the workforce inside open innovation. Developing the right digital skills at the regional level appears to be the key to successful DT efforts [13]. Having this research framework in mind, the present research provides evidence regarding opportunities (sensing) for changing technologies and work positions (roles) for grape harvesting. It also maps major factors influencing organizational change strategies (seizing) around grape harvest innovation. In the discussion, it provides an outlook on the possibilities for governing and aligning assets inside wine SME network. This is especially important having in mind that presently no institutional arrangements exist for common, networked digital transformation of the researched SMEs, although they are located in several neighboring wine regions in Germany. The regions include the Mosel-Saar-Ruwer, Rheingau, Nahe, Rheinhessen, Palatinate and Hessische Bergstrasse. It is important to notice that, digital transformation does not rely only on digital transformation capabilities, but is preceded by digital sensing and digital seizing. The digital transformation path of organizations also largely depends on the level of digitalization of the industries in which the company operates, and of the innovation ecosystems it takes part in. Organizations belonging to traditional industries, can be classified as traditional or pre-digital as opposed to born-digital organizations which developed from high-tech startups [17,18]. 
The pre-digital organizations are in clear need of catching up in terms of new technology, but there is a research gap on how these processes happen in a networked industry setting, where born-digital organizations offer their services to traditional pre-digital organizations while other pre-digital actors are already well underway with their digitalization strategies. The wine industry is therefore a suitable pre-digital industry for observing these phenomena. Having in mind the ever-increasing digitization of all societal processes from analog to digital, digitalization is an inevitable transformative force shaping the way people interact, communicate, model their business and generate revenue [19][20][21][22]. In recent years, digital transformation in SMEs has been spurred by the industrial revolution 4.0 with its abundance of new technological opportunities [23,24]. Digital transformation should be a process of strategic value for SMEs, taking one of three basic trajectories: (1) customer value proposition, (2) operating model, (3) simultaneous transforming of customer value proposition and operating model [25]. Innovating the operating model is usually driven by technology push, while innovating the customer value proposition is usually driven by demand pull, thereby forming the two most important innovation trajectories [26]. An important aspect of innovating the operating model is the question of future workforce needs. New ICT technologies are blurring market boundaries and consequently disrupt the roles of different actors, while some actors are even deemed unnecessary-co-creation with customers, co-opetition with competitors [22]. The consequences of digital transformation on work have both positive (less routine work, more flexibility in place and time) as well as negative aspects (24 h online burnout, insecure and underpaid freelance status, de-professionalization and substitution of certain jobs such as journalists, para-legals, educators and sommeliers) [27]. Therefore, the digital workplace is an under-researched field with ample opportunities for new value creation, by disrupting the existing workplaces and creating new, digital ones [28]. Innovating in a strategic way is important for optimally deploying available technologies and radically transforming both overall sustainability as well as economic performance [29]. In this sense, new technologies need to be defined through business models, as key levers for understanding and effectively communicating competitive strategies [30]. SMEs in commercial settings seem to prioritize technologies which can contribute to overall SME results in a quick, tangible fashion, in order to manage the risk associated with innovation adoption [31]. While undergoing digital transformation, companies shouldn't lose sight of their core objectives, which follow from the profit logic and are based on a clearly identifiable and profitable target market [32,33]. Firstly, the existing knowledge on the changing nature of work in relation to the changing technological landscape inside digital transformation is presented. This review of existing knowledge covers technology adoption strategies in SMEs as well as the specificities of networked innovation in SMEs. Following this, the qualitative research method deployed in this study is presented and discussed in detail, as well as the geographical distribution and positions of interviewees.
The results section starts by the technologies deployed as well as work roles involved, along with the most interesting verbatim citations for both categories. Then, a unified framework on opportunities for digital transformation of grape harvest process is being presented. The second part of the results deals with pull strategies and push strategies of technology adoption, firstly by presenting the underlying verbatim citations, and then by presenting a unified theoretical framework of wine SMEs grape harvest technology adoption strategies. The discussion deals with the contribution of the findings to the human, technological and organizational literature on redefining the future of work as well as digital transformation of SMEs. The contributions are than discussed regarding the SME network aspects of the present research methodology. Results summarize both the theoretical contributions of the research as well as practical implications for furthering DT in the wine industry and creating the wine industry 4.0. Work in the Age of Digital Transformation Managers need to be aware of the different strategies for workforce training and associated costs (through rate of forgetting, technology depreciation and advancement) when considering the technology upgrade decision [34]. Having in mind the complexity involved in such investments, the phenomenon has been termed digital workplace transformation (DWT) in the literature. DWT includes several important dimensions which should be considered: physical space, culture, social system and technology [35]. At the level of the individual workers inside DWT, support needs to be provided in realigning and managing their non-work identities with their work identity, as well as balancing between regular and dynamic routines [36,37]. Therefore, the use of digital technologies in the workplace should be designed to promote mindfulness, empower workforce through participation and alter leadership culture in order to reduce technostress and promote compliance [38][39][40]. Furthermore, workers should feel and effectively be enabled to be autonomous, competent and connected in order to support their performance and well-being [41,42]. Some authors classify DWT as a non-technological field of innovation, but nevertheless acknowledge its crucial importance for digitalization and acceleration of technological developments as well as industry-level competitiveness [40]. Creating digital workplace is not about emails and social media, nor is it about integrating digital technologies-it is about transforming personal, team and organizational performance [43][44][45]. This process of change includes also the process of deinstitutionalizing the entrenched workplace practices by deliberately delegitimizing and abandoning them [46]. A modern workplace should get rid of rigid rules and instead empower employee participation and networking through value-based guidelines-this provides the basis for an increasing workforce maturity, and consequently business innovation and growth [3,25]. However, this process is not straightforward nor is it without perils. Crafting a digital workplace in pre-digital organizations presents a disruptive process for what was previously approached as long-term information system planning [17]. In addition, the labor practices of new app-based platforms have sparked litigations around whether work provided through a platform constitutes employee status or not [47]. 
It has become a common place for all organizations to outsource activities relating to IT and software development to external companies, thereby creating different work positions with different skillsets sought for in non-IT companies (more technical skills) and IT companies (more business and project management skills) [48]. Technology Adoption Strategies in SMEs Technology is a construct that goes beyond engineering and manufacturing only, to include the whole process of transforming production inputs (labor, capital, material, information) into production outputs (products, services) [29,49]. Technology adoption strategies are directly connected to the issues of business model transformation, as well as the interplay between path dependence, strategic flexibility and a number of business modules involved [24,50,51]. The business-level perspective of technology adoption inside industry 4.0 recognizes that the redesign of operating processes is an important element of this transitioning process which can take different pathways, from being dominated by demand pull (high servitization level), to being dominated by technology push (high DT level) [24,26]. This is why the present research orders the adoptions strategy factors into these two major groups-servitization challenges and digital transformation opportunities. The previous literature has recognized the need for industrial BMs to transition to solutionsbased BMs [52], which is of particular relevance for industrial SMEs in the wine industry. This integrated, solution-based BM balances between the front-end push and back-end pull for delivering value to the customers [53]. Having in mind that a large proportion of SMEs in the researched wine industry are family owned, this factor is very important for understanding the adoption of new technology. Previous research has confirmed that the approach to technology adoption in SMEs depends also on the digital leadership style of the SME owners as well as on the impact a family has in the company [54]. Family influence is proven to negatively influence the pace of technology adoption in SMEs, especially if they are minority, rather than majority owners [55]. However, it has been proven that family influence has an impact only on the later identification of discontinuous change, while the implementation, once initiated, is being conducted more quickly and with more stamina [56]. SME wineries seem to be reluctant to adopt sustainability innovations which bring only environmental and social benefits, with no tangible economic or commercial benefits [57,58]. Networked Innovation in SMEs The knowledge-based interdependence of SMEs is often termed coopetition (consisting both of cooperation as well as of competition) and motivates entrepreneurs to participate in innovation processes by boosting their network reputation and increasing cooperation with suppliers and consumers [59]. There are three major types of relationship coordination mechanisms inside SME networks: (1) market, (2) hierarchy, (3) social relations, which points to the fact that agents inside networks exchange knowledge even if no market or hierarchy is present, which is also called open innovation [60]. Having this in mind, many organizations are deliberately building open structures and systems which remain in a dynamic, spontaneous and multi-directional relation with the environment [61]. 
However, SME networks also need suitable governance models, in order to discourage participants from exiting or defecting and to manage the knowledge-based interdependence of firms in a common innovation process [62]. Therefore, researching innovation on the network configuration level is an important strategic instrument for increasing innovation performance and competitive advantage in an open innovation approach [63]. SMEs have different strategies when interacting with the SME network both regarding network adaptation as well as external resource dependence [64]. However, it should be noted that for each set of network characteristics, a certain combination of organizational characteristics (goal complementarity, resource complementarity, fairness trust, reliability trust, and network position or embeddedness) correlates with superior performance [65,66]. This research does not deal with SME network-level phenomena directly. However, it takes an explorative, networked methodological approach, thereby providing relevant implications for different actors in a wine industry SME network, ranging from producers of experimental and commercial machinery and software to SME wineries. The results point to the complexity of the researched phenomena, thereby calling for a networked approach to DT and DWT in grape harvesting.

Methodology

Semi-structured telephone interviews have been deployed as the primary data collection method. Thirty-one interviews with SME winery CEOs, quality managers, R&D officers, owners and a professor have been conducted in total. All of the companies involved fulfill the official requirements for an SME, as defined by the EU: fewer than 250 employees [67]. Other financial indicators have not been taken into detailed consideration. Another specificity of the wine industry is the existence of cooperatives, which are a coordinated network of small grape producers with one big winery dealing with wine making and selling. Some of the respondents were also cooperatives. Sampling has focused on selecting interviewees that were either involved in the grape harvest process (twenty SME wineries), or were providers of commercial technology for grape harvest (five software companies and three harvester and/or cellar technology producers). One interview partner is both a winery owner and is running a wine software company, one runs experimental wine software development at a university, and one is a professor of robotics and geoinformatics in the wine industry. Twenty-nine of the interview partners were located in the state of Rheinland-Pfalz (RLP) and two in the bordering region of Hessen, with links to the wine industry in RLP. This approach provided a network perspective across the state-level value chain. The sampling of data sources was expanded iteratively, allowing the emerging theory and the saturation of our knowledge of subject areas and practices to guide data collection. The data have been analyzed through MaxQDA by engaging in open coding in the first step, and then developing second order themes in connection with the aggregate theoretical concepts. The verbatim citations are presented in Tables 1-3, while the whole theoretical construct with underlying second order codes and their connection to first order codes is presented in Figures 1 and 2. Additional tables presenting the first-order codes of push and pull strategies, along with their detailed descriptions and the underlying motivations, have been presented in Appendices A and B.
The two separate questionnaires (one for wine producers and another one for software and hardware producers) used for conducting the semi-structured interviews are presented in the Appendix C, at the end of the article. Grape berry/must assessment Inf. 28 "Take a look at Bordeaux, they go and bite the kernels to check whether they taste bitter, woody or green. Sensory tests are essential! I think the consciousness of physiological maturity has receives more attention in other countries (than in Germany)." Multi-year field data collection in a database Inf. 24 "We work in an Informix database . . . and the historical values of our company go back to the mid 80's . . . We are not in the cloud, the database is located in clients' servers . . . reverse tracking is important . . . this data is being backtracked by vintners themselves . . . our software makes that possible . . . the vintner can trace every wine to its creation, every processing step that he made, every substance that he had added, he can document, even above the wine law requirements. This infrastructure is available and is also used, at least partially." Fieldwork logistics and visualisation of processes Inf. 12 "A tremendous relief of the working day is useable information, no matter where one is located. I notice this now, that I can access my whole cellar book from my phone: as if I am standing on the tank and say "how is the tank doing"?" "If I optimize the interface and identify on the tablet that "he is there" or "this is going on there", and have that on the PC or on the screen, I have less stress. This is because I can than identify certain risks better. I have less tension and get a better picture; this is very important . . . We are three managers, and there is some degree of exchange between us, but we still need to know what's going on and plan accordingly. The important part of a day is that certain data and facts are being updated quickly." Inf. 29 " . . . our dream would be to visualize all our vineyards. It allows to visualize both the locations of my customers (B2B), my suppliers, as well as vineyards. A further dream would be to have must weight, acidity and rot-affected areas, so harvesting can be directed precisely." " . . . we had a lot of winery successions (regarding wineries as customers of software producers) and the people are just better educated, have different vision of running a winery, and this is an absolute plus point. The market is growing for these technologies and when I project this into the future, from monitoring of vegetation processes in the field to sales, everything will be one digitalized track that monitors all these processes." Fieldwork/cellar assistance Inf. 6 "I have to take a look at the spot how ripe the grapes are. We have Excel sheets where we write down what we want to do and this is than verified every day to check for changes." Inf. 6 "All possible (communication) options are present, from Email, WhatsApp, Phone and personal contact, depending on the situation. If it concerns everybody, then it is posted to WhatsApp group and when it is about instructions on the fieldworkers, then it is one on one." Inf. 21 "It is very important for us to see the progress of the work. During the last harvest, a voluminous harvest, it was very important to us to see how well did we progress and how much surface from which grape variety have we processed and how is still left to be done. 
Also, regarding how much we are allowed to harvest: do we have to leave it as it is or how are we going to divide it? These are the things that one otherwise does more through gut instinct and rough estimates, and here it is pretty precise . . . It is about dividing the workforce and estimating how long do we still need with how much workforce." Gaining competitive advantage Inf. 7 "We are committed to innovation and plan accordingly. We have dealt with it intensively, we also have a conversation tomorrow, the grape selection plant, optical sorting. The cost pressure drives this decision. I think people cost us too much money. 15 people do a lot of work and I think this people management is a huge problem, also because I cannot get any German workers. So that means I have to do the work, but without workers. This will be a solution that will be faster, but I don't think it will be better." Inf. 24 "The more ambitious they (the wineries) are, and the higher the quality they produce, the more they ask for such quality-optimizing options: to select as soon as possible, what will I get when and whom do I assign the order. The Pino Gris-I don't need 14.5% as in the 2018 harvest. I would like 13.5% alcohol, so it is easily digestible, with higher acidity, etc. These are the elements that are interesting for quality and are of interest for many users, because there is an added value behind this that is reflected in the quality and thus in the revenues." Table 3. Verbatim codes for company opportunities, fueling the push, DT strategies of wine SMEs. Advances in geoinformatics Inf. 23 "This foreknowledge capability, in which field do I have which oechsle degrees [measuring the sugar content in the grapes] or anthocyanin, that you can get with one hand pass. This is so advanced that there are this Eurorobots who tackle this. The research center X was also a partner on this project. But in Germany they are not allowed to drive through the field-in Spain yes, because they have different legislation. I see this from the perspective that our goal should than be harvester, that could provide different information-most weight, etc. This data should be delivered in order to support this smart spinning systems. Harvesters can already do a phenotype reading." Inf. 29 " . . . can we not attach a kind of scanner (on the tractor)? We have so many passes through the vineyard for crop protection, leaf trimming, etc. If a simple and affordable system had scanned the leaves to assess if they look dry, are they dark green or yellowish, you could detect the grape color. These would be simple sensory systems that could inform the application if there is a dry or wet zone. This would be helpful things, especially for harvesting later." Advances in technology convergence, connectivity, usability Inf. 28 "We often have the requirement for the process data to be sent digitally from the press to an external location and thereby do a proactive maintenance, because for example, a valve could break. In addition, to the oenological side, this is very interesting and useful story where digitalization could be applied. This is remote control, so that we as a manufacturer can make remote maintenance and the press is often ready for use much faster than if the serviceman had come." Advances in machine learning Inf. 23 "there are currently some companies in Germany which deal with precision viticulture. There is the Fraunhofer, there is the Geobox, there are several places that can do this, at least for precision fertilization. 
Some rely on satellite data, some measure with drones or with NDVI and others with sensors in the vineyard. They all have their algorithms . . . And they then also network the devices for fertilizer application, also zone-dependent fertilizing"

Technologies Deployed

Different types of technologies are being used in the grape harvest process in the state of Rheinland-Palatinate (RLP). They can all be grouped into three value-creation activities, according to the work task: (1) grape berry/must assessment, (2) multi-year field data collection in a digital database, (3) fieldwork logistics and visualization of processes, as presented in Table 1 below and Figure 1 later in the text. Grape berry and must assessment takes place at various stages along the wine-making process and is of critical importance for getting accurate data about the state of the grape. This in turn is very important later, for product quality, as it enables conducting crucial activities (plant protection, watering, fertilizer use, harvesting) in the field at precisely the right point in time and with the right amount. However, acceptance of new routines for grape assessment has traditionally been rather low in Germany. Multi-year field data collection in a digital database is a collection of data on all field parameters (weather, grape ripening, diseases, treatment, harvest, etc.) as well as later processing in the cellar. This could prove to be a very powerful basis for deploying new technologies like Big Data and Artificial Intelligence to support automated or semi-automated decision-making support systems for grape ripening, harvesting and further processing. Current databases build on SQL or Informix technology, with some new players in the market successfully offering cloud-based databases that facilitate mobile app usage. Some players possess databases that date back forty years, which could be of use for prediction algorithms and big data analytics. Fieldwork logistics and visualization of processes is one of the areas characterized by a big transformation in recent years, with applications across different industries. The advantages of new technologies for logistics are evident to some of the vintners, such as Informant No. 16. For example, visualization of data also serves to relieve managerial stress, as observed by Informant 12. However, in the RLP wine industry commercial visualization capabilities are still limited, as only a few offerings exist, which are predominantly tailor-made solutions. These companies are still looking for ways to expand in terms of scale and scope. The potential of these technological advances is visible through several successful examples of technology transfer from other agricultural fields to big cooperative wineries. The potential of fieldwork data collection and use is still underutilized because of gaps in data collection and analytics for different purposes. Informant 29 also points to remaining potential in terms of back tracing for new blockchain technologies.
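As an illustration of the multi-year field database idea described above, the sketch below uses Python's built-in sqlite3 module. The table layout, column names and sample values are hypothetical and are meant only to show how per-block records could be accumulated and queried across vintages; they do not describe how any of the interviewed companies' systems (for example the Informix-based one) are actually built.

```python
import sqlite3

# Hypothetical schema and sample values, for illustration only.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE field_record (
        block        TEXT,     -- vineyard block identifier
        vintage      INTEGER,  -- harvest year
        variety      TEXT,
        must_weight  REAL,     -- degrees Oechsle
        acidity_g_l  REAL,
        harvest_kg   REAL
    )
""")
con.executemany(
    "INSERT INTO field_record VALUES (?, ?, ?, ?, ?, ?)",
    [
        ("Block-1", 2019, "Riesling", 88.0, 8.9, 4200),
        ("Block-1", 2020, "Riesling", 92.5, 8.1, 3900),
        ("Block-1", 2021, "Riesling", 84.0, 9.4, 4450),
        ("Block-2", 2021, "Pinot Gris", 90.0, 7.6, 5100),
    ],
)

# Multi-year view per block: the kind of query a harvest-planning or
# decision-support tool could build on.
for row in con.execute("""
        SELECT block, variety,
               ROUND(AVG(must_weight), 1) AS avg_oechsle,
               ROUND(AVG(acidity_g_l), 1) AS avg_acidity
        FROM field_record
        GROUP BY block, variety
        ORDER BY block
        """):
    print(row)
```

Accumulating such records over many vintages is what would later allow prediction algorithms or big data analytics to be layered on top, as discussed above.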
Roles Involved

Numerous actors with different roles are involved in the grape harvest process, either as individuals or as groups. Three major types of roles are: (1) inhouse professional personnel, (2) freelance professionals, (3) helper and/or hobby workforce. Each group differs in terms of approach, seniority and level of involvement as well as in dedication. For these categories, no verbatim citations have been included, as these aspects have been the object of post-hoc analysis; no direct reference to these roles was made during the interviews, only an indirect one. The three categories are represented in Figure 1 below. The major differences are between entrepreneurial (family) wineries and cooperatives. The family companies' workforce core is made up of family members and salaried professional staff (usually cellar masters), while foreign and domestic helpers are added during the harvest. Cooperatives are marked by the existence of a team of professional staff working on the grape and wine processing and selling, while the grapes are grown by farmers that vary in farmed surface area as well as in their level of professionalism. Some cooperative vintners live off of wine and some are part-time or even hobby vintners. For the sake of quality, cooperatives employ quality managers who coordinate between management and farmers in order to ensure that the grape quality matches the wine production plans for each product category.

Servitization Needs, Acting as Pull Factors

The market adoption of innovation and the underlying servitization needs of wine SMEs are major themes for technology companies trying to develop and market innovative solutions on the market. Major challenges which pull the new technology adoption in the harvesting process are the key levers that the wineries are trying to take advantage of: (1) management assistance, (2) fieldwork/cellar assistance, and (3) gaining competitive advantage. The most important challenge pushing innovation in both management and fieldwork/cellar assistance is the lack of (qualified) workforce. The three categories and the underlying verbatim citations are presented below in Table 2, as well as the categories themselves in Figure 2 later in the text. Wine estate management needs to reduce the stress level through routine-oriented tasks, better traceability and an overview of the production process for faster decision-making, as Informant No. 16 contends. Furthermore, a new generation of wine estate managers and entrepreneurial vintners is adopting new technology, changing the way things are done, as noticed by Informant No. 30. Fieldwork and cellar assistance are mostly concerned with possibilities of better assessment of weather and grapes, as well as efficient logistics and coordination of effort between workers. Informant 6 describes how they build their field record database using only Excel sheets. The same informant has also described the process of communication during the harvest, using the same tools as for private communication. In contrast, Informant 21 describes the change when using specialized software for tracking the work in the field. Adopting new technology is also connected to the lever of gaining competitive advantage, through lowering production costs but also refreshing the winery's image and adding value to the customer offer. As Informant 7 observes, the new technology is both cheaper and more reliable than the alternative human workforce which would be engaged in processes like grape sorting, therefore having a huge impact as a cost-cutting measure. On the other hand, Informant 24 states that regarding field machinery and processing equipment, fine-tuning and quality-optimizing options are interesting in the higher-quality segment.
New Technologies Acting as Push Factors Technological advances that are perceived as adding the most value and hence motivating for enhancement of capabilities by adopting new technological processes are: (1) advances in geoinformatics and robotics, (2) advances in technology convergence, connectivity, usability, and (3) advances in machine learning. The three categories and the underlying verbatim citations are presented in the Table 3 below, while categories themselves are also presented in Figure 2 later on in text. Advances in geoinformatics and robotics, the core of precision agriculture, are changing the way things are done in agriculture: from precision harvest mapping to pay-permeter harvesting services or remote yield assessment. However, Informant 23 notes that although there are many useful technologies, some developments are being slowed down by legal framework in Germany. The informant 29 points to the need for affordable, multiplatform, flexible hardware that can extend functionalities of software in the wine industry. Advances in technology convergence, connectivity and usability mainly relate to technologies like drive-over scale, cheap mobile sensors, remote machine maintenance. As observed by Informant 28, remote maintenance is one of the major servitization advances, adding considerable value to the users of wine machinery. Advances in machine learning also seem to be very present and relevant topics in the viticulture, with no mainstream, commercially successful applications of AI or Big Data present, but some important R&D processes are under way, as presented by Inf. 23. Discussion Regarding the results presented in the Figure 1, previous systematic research of the literature on digital transformation has identified (1) technologies and (2) actors, as two relevant aggregate themes or dimensions. This dichotomy-based approach has previously been deployed by Nadkarni and Prügl [68]. Further relevant literature goes beyond these human and technological aspects, to include also organizational aspects as relevant for redefining the future of work [69,70]. The present research deals with human/work related aspects of DT in Figure 1, while organizational aspects are dealt with in Figure 2, by distinguishing between push and pull factors of SME digital transformation. Previous research has identified a multitude of drivers of digital transformation in SMEs: process engineering, new technologies and digital business development digital leadership and culture, the cloud and data as well as customer centricity and digital marketing [51]. However, present research distinguishes in Figure 2 between pull factors and push factors, as two distinct types of factors influencing the digital transformation strategy of grape harvesting in SMEs. Management assistance, fieldwork/cellar assistance and gaining competitive advantage have been identified as the most relevant pull factors for DT, while advances in geoinformatics, advances in technology convergence, connectivity and usability as well as advances in machine learning have been identified as the most relevant push factors driving the DT of wine SMEs. Previous research on wine industry 4.0 has acknowledged the importance of BMI (Business Model Innovation) [24], while this research contributes to this research stream by exploring technology adoption strategies and DWT, thereby expanding the range of researched phenomena related to a strategic DT. 
The findings on the importance of winery business succession add to the discussion of the impact of family status on new technology adoption in SMEs by expanding the understanding of the timing of change in family-owned businesses. The present study results demonstrate that the generational succession is the time of the greatest change and new technology adoption in a family-owned SME. These findings therefore confirm previous findings that family-influenced SMEs are slower at identifying a discontinuous change, and faster when it comes to implementation once a discontinuous change has been identified [56]. The findings also contradict the identified a priori reluctance of SME wineries to adopt sustainability innovations if no tangible economic benefits can be identified [57,58], but point to the need to identify the generational cycle stage of family SME wineries. In this sense, future research should take into consideration the generational cycle stage when considering new technology implementation: discontinuous change appears to be lower-than-average at the end of a generational cycle, and higher-than-average at the beginning of a generational cycle, in the years after succession. The present research explores work and technology as well as the organizational aspects of regional, networked innovation and transformation processes on the example of the wine industry in the German state of RLP. Similarly, a regional and networked approach to innovation has previously been applied to the biotech industry [71,72]. However, the present research does not examine network-related phenomena, such as the governance structure, external context or advantages/disadvantages of being part of the network, as there is no formalized network between the researched SMEs. The research deploys a sample of compatible SMEs that deal with grape harvest innovation, to provide insights into important aspects of grape harvest transformation related to DWT and business transformation. In this sense, future research should carefully consider the possibilities of building wine industry 4.0 networks for digital transformation of work processes as well as whole organizations and the industry. The questions of governance structures, external context and the benefits vs. drawbacks for SMEs of being part of the network should be addressed by future research on wine 4.0 networks. The results presented are of relevance for managers as they provide an empirically based roster of work roles. This roster is suitable for further separate research of each role involved as well as cooperation arrangements inside/outside teams. In addition, a detailed specification of technologies used in the grape harvest process, in relation to the work roles involved, has been presented. The results can help managers in identifying training and retraining needs for digital workforce transformation by providing a detailed ontology of roles involved in the grape harvest process. In addition, wine technology companies should be aware of generational successions and create different strategies for transforming family wineries with a stable family ownership and ones in the years after a succession. In this sense, the results provide the basis for digitalization efforts of both workplaces as well as work routines inside a digital workplace transformation in wine industry SMEs. The results can be of relevance for other agricultural SMEs dealing with complex harvest logistics operations.
Future research needs to expand this explorative research by conducting quantitative research on work roles, cognitive aptitude and team organization in the wine industry. It also needs to delineate guidelines and major elements for future professionals in the wine industry on how to be successful in the emerging digital wine industry paradigm. The major limitation of the study is its explorative nature. The models created are for exploratory purposes and therefore lack numeric relationship specification, which are important for theoretical purposes and could be achieved by quantitative studies and structural equation modelling. The creation of the codebook has undergone a rigorous process in an attempt to establish reliability, however biases still might exist regarding both data-driven first-order codes as well as second order themes, and to a lesser extent aggregate theoretical dimension. Further limitation is related to the interviewee selection. Interview partners have been recruited through a winery register, by contacting wineries undergoing or interested in digital transformation, as well as their partner companies in this process. The article does not deal with digital transformation capabilities, but only with its antecedents, namely digital sensing capabilities and digital seizing capabilities, thereby opening possibilities for future research on digital transforming capabilities in wine industry 4.0. Conclusions The findings of this study enhance the understanding of a still under-researched area of leveraging novel technologies by redesigning jobs and redefining business strategy, for the example of SMEs involved in the wine industry 4.0. The research further contributes to the literature on open innovation and redefining professional identity, by defining existing work roles beyond their professional boundaries: skilled permanent workforce, skilled temporary workforce and amateur temporary workforce. The framework therefore provides ample space for dismantling knowledge boundaries for open innovation, by placing the traditional and future jobs into these three broad categories. Contrary to the findings in the previous literature [48], this research demonstrates the importance of digital tools for advancing managerial and business capabilities in non-IT, traditional SMEs. Managerial assistance tools are found to be important both in wineries dealing with grape harvest for providing grapes for low-cost wines, as well as wineries wanting to get a hold of fine-tuning mechanisms in grape harvest for achieving top quality wines. The article identifies opportunities and challenges for strategic deployment of new grape harvest technology. It examines both pull-oriented servitization challenges as well as push-oriented, digital transformation opportunities. The results also explore the dynamics of the digital transformation by providing a detailed overview of work roles and technologies used for digital transformation of grape harvest process. Both of these areas contribute to better understanding of the strategic deployment of new technology for the wine industry 4.0. The results also point to the decisive role of work-related (work positions, work processes) and organizational (strategy, business model) aspects in the digital transformation of the wine industry. 
The article provides implications on the level of digital sensing capabilities as it presents the multitude of opportunities for DT of the grape harvesting process regarding grape berry/must assessment at different stages along the process, multi-year field data collection in a digital database as well as fieldwork logistics and visualization of processes. Contributions toward sensing the opportunities in the field of DWT are provided by defining three types of transdisciplinary work roles in grape harvesting: inhouse professional role, freelance professional role and helper/hobby workforce. These three types of workforce differ in terms of level of involvement in the wine SME as well as professional expertise needed for conducting tasks. The results also provide implications on the level of seizing capabilities. Firstly, two types of forces impacting strategic adoption of grape harvesting technology are presented: pull-oriented servitization strategies and push-oriented digital transformation strategies. Servitization aspects of a technology adoption relate to management assistance, fieldwork and cellar assistance, and gaining a more favorable competitive position or creating competitive advantage. On the other hand, digital transformation aspects involve advances in geoinformatics and robotics, advances in technology convergence, connectivity and usability, as well as advances in machine learning. The role of a technology adoption strategy on an organizational level is to balance between these two important aspects. The interviews have confirmed the critical importance of the grape harvest process for both SME wineries searching for cost-oriented competitive advantages as well as for SME wineries looking for quality-oriented improvements through more precise management of wine taste profiles. First-Order Codes Description of the Challenge Motivation for Overcoming the Challenge Management assistance Less stress: creation of routine-oriented work tasks The work of a wine manager/entrepreneur is highly stressful and often includes long hours Any tool promoting work task routinization can help in reducing stress-levels induced by the unstructured nature of the production process. Traceability of products and transparent production process Traceability is being more and more demanded by certification bodies, but also consumers and new digital technologies can help these efforts Better quality products and more direct risk management, better management options in a crisis situation of having to trace back production steps after a recall Better time management through better overview of activities The work of a wine manager/entrepreneur is highly unpredictable and therefore stressful. Any tool promoting real-time data tracking can help retain control over production process, while reducing stress-levels induced by a lack of data. Precise crop yield estimates, measurement, analysis Using harvesters often reduces the capability to apply fertilizer and plant protection in the most optimal way, as well as to harvest the best grapes, which can be overcome by precise digital field records. Better planning capabilities-building for reducing unnecessary work steps, optimize existing ones in scale and scope. Generational change-new managerial routines New generation of vintners is more open to digital technologies and even demands them or even build them themselves. This is especially pronounced after company take-over. Fulfilling the law requirements with as little effort as possible.
Lack of available staff Reliable supply of skilled and unskilled workforce is hard to find. Reducing the need for large workforce in the production process. Higher quality products Higher quality product means the opportunity for higher prices. Gaining competitive advantage over competition. Precise logistics/coordination of work in the field Better time-management of the workforce as well as grape processing to in order to lose as least quality due to unforeseen events as possible, for example unwanted fermentation in the sun. Reducing waste in the production process and thereby making savings. Clearly delineating risks and addressing them properly. Gaining competitive advantage Competitive pressure on lowering the production cost The wine industry is very competitive and economies of scale are very important. Providing the lowest price possible in certain price ranges. Possibilities for benchmarking productivity Digital tracking of activities and productivity can enhance industry benchmarking, Identifying the possibilities for further optimization of processes. Refreshing the winery's image Deploying the newest or the most exotic technology can enhance the company image inside the industry itself. Presenting the winery as future-oriented and innovative. Value added and ease of use The new technology introduced needs to be highly practical and usable as vintners are no hackers or digital natives. The vintner needs to see clear value added from new processes and he has to clearly understand the way it can be deployed. First-Order Codes Description of the Opportunity Motivation for Seizing the Opportunity Remote grape quality and yield assessment Getting the data from the field with no need to be present all the time. Reducing field visits during ripening period and better resource planning. Advances in technology convergence, connectivity, usability Scale integrated into loading bin harvester or drive-over scale Integrated scales can help with getting the data on the quantity of grapes for processing. Better planning of the grape processing for lower costs and higher quality. Cheap, mobile sensors (spectrometer)+ GPS+ tractor/robot New affordable sensors are being developed for different kinds of devices and for different uses. Enhancing capabilities of existing hardware with low additional investments needed. Enhanced usability of interface and multilingual support Interfaces between different hardware and software components need to be optimized as well as usability for a diverse workforce. New devices need to be compatible with old ones and design for use by an international workforce. Cost-effective inline measuring devices Affordable solutions need to be developed in order to enhance grape and wine processing even further. Higher quality wine on a relatively tight budget. Digital transfer of production data to wine traders The digitalization of production data enables automatic transfer of data to wine traders, enabling the customers to profit from better and more reliable data in an otherwise complex industry. Providing production data in a modern and accessible way with no extra cost of additional certification. Remote maintenance of machines There is a possibility to conduct remote maintenance for some high-end grape processing facilities. Time and effort saving, better coordination with technical support. 
Advances in machine learning Artificial Intelligence (AI)assisted harvester (automatic turn off function for bad areas) New harvesters are being launched on the market, which can automatically recognize bad grapes and not harvest them. Considerable quality improvement closer to hand harvest, with no extra effort needed by the harvester driver. Big data analytics Putting to use an abundance of historical digital data in some historic companies in order to make better decision in relation to weather, ripening and harvest timing. Harness the power of experience currently buried in decades of unused historical data, to enhance the vintner decision-making as well as capabilities of machinery. Appendix C Table A3. Questionnaire with open-ended questions used to conduct semi-structured interviews with SMEs on the left and software and hardware producers on the right. Questions for Wineries Questions for Wine Software and Hardware Producers 1. Please describe the harvest planning process in your company in detail (which actors are involved, which routines have been developed, which technologies are being used, how long does the whole process lst, which key competences and capabilities are needed?). 1. What are the latest Industry 4.0 technologies that could be used for grape ripeness measurement, harvest planning and harvesting itself? Which technologies have already been implemented, which are coming soon and which have already been used in other areas of agriculture? 2. How does the digitalization of data transfer between grape growing grape and grape must processing look like in your company? 2. Which key competencies and skills are required or will be required in the future? How well is the (university) education adapted to these changes? To what extent is (university) education pursuing or promoting these changes? 3. To what extent is the data transfer between grape production and grape processing already digitalized? 4. What motivates the optimization of interfaces between different IT systems in your company? 4. Which data is already digitalized (from a technical point of view), at what speed can they be delivered? (e.g., geo-positioning-via GPS or otherwise, harvest volume: estimated and actual, grape variety, quality parameters such as must weight/acidity, botrytis content, type of decay, etc.) 5. What is the structure of your employees when it comes to the digitization perspective or motivation for digitization? (Are there differences in the acceptance of digitalization? If so, which ones and why?) 5. Which data could be digitized (from a technical point of view) and at what speed could it be delivered? (e.g., geo-positioningvia GPS or otherwise, harvest volume: estimated and actual, grape variety, quality parameters such as must weight/acidity, botrytis content, type of decay, etc.) 6. If you use harvesters: how does the planning and consultation work with regard to harvester take place? (Do you harvest according to local availability or are quality aspects in the foreground?) 6. Which Industry 4.0 technologies could be used in terms of planning and consultation with the harvester? 7. How do you deal with purchased goods (grapes, must or wine)? 7. What could the latest technologies do when it comes to product traceability systems? (e.g., because of product safety and faster collection of defective series, traceability of sales of products back to raw material receipt) 8. What are the needs regarding tools/systems for product traceability? 
(e.g., because of security and quick collection of faulty series-from sales of the products back to raw material receipt) 8. How dynamic have changes and innovations in harvest planning been over the past 10 years? (What has changed? To what extent?) 9. How dynamic have the changes and innovations of harvest planning been over the past 10 years? (What has changed? To what extent?) 9. What is the outlook for the changes and innovations in the area of harvest planning in the next 10 years? (What will change? To what extent?) 10. Have measures to improve the harvest planning process and/or grape logistics already been planned? (If yes, which?) 11. What are the priorities for innovation in your company? (Please give examples for each applicable category: increase in efficiency-less waste of resources, increase in effectiveness-achieve goals with greater success, increase quality-produce products with higher/more stable quality) 12. How do you deal with innovations? (More carefully, step by step, or rather as a paradigm shift and one-off, radical change)
Synthesis and Characterization of Polysulfone (PSU)/Philippine Halloysite (PH-HAL) Nanostructured Membrane via Electrospinning

Membrane technology is widely used in many separation processes because of its multi-disciplinary characteristics. One of the techniques used in the fabrication of membranes is electrospinning, which can create nanofibers from a very wide range of polymeric materials. In this study, electrospun nanostructured fibrous composite membranes of polysulfone (PSU), commercial halloysite (COM-HAL), and Philippine halloysite (PH-HAL) were synthesized. The concentrations of COM-HAL and PH-HAL were both varied at 0.5%, 1%, and 2%. The FTIR results showed changes in the intensity of the PSU IR spectra, which confirmed the presence of COM-HAL and PH-HAL in the synthesized membranes. The SEM revealed that nanofibers can be successfully produced by the addition of LiCl salt to PSU with varying HAL concentrations. It was also observed that the addition of HAL at varying concentrations had no significant effect on wettability due to the strong hydrophobic character of the PSU membrane. Moreover, analysis of the mechanical properties showed that the tensile strength of the membranes was weakened by the addition of HAL due to its weak interaction with PSU.

Introduction

Membrane technology is an emerging technology which can be used in many separation processes. Numerous membrane processes have emerged whose applications are based on different separation principles and mechanisms. Various techniques and fabrication methods have been explored to synthesize membranes, and one of those techniques is the electrospinning process. Electrospinning is a versatile process that can create nanofibers from a very wide range of polymeric materials. The practicality of electrospinning has been greatly improved by recent advances in mass-production scalability, leading to higher production rates and lower-cost materials. Moreover, the addition of salt to the polymer solution improves the nanofiber formation of the membrane [1]. One of the widely used polymer materials for electrospinning is polysulfone (PSU). Since it exhibits excellent thermal, mechanical, and chemical stability at low cost, PSU is widely used in separation processes such as water and wastewater treatment and in the chemical, metallurgical, and bioseparation areas. Previous research studied the influence of the addition of nanoclays such as halloysite (HAL) to PSU nanocomposites to serve as excellent supports for nanoparticles due to their unique geometry and properties. However, most of the work done on the addition of clay to PSU nanocomposite membranes has used dispersion methods. This study focused on the synthesis and characterization of nanostructured PSU/HAL membranes via the electrospinning process.
Specifically, this study aims to: (1) determine the effect of LiCl on the surface morphology of the nanostructured membrane; and (2) compare the following electrospun membranes: pure PSU, PSU/LiCl, PSU/LiCl with commercial halloysite (PSU/LiCl/COM-HAL), and PSU/LiCl with Philippine halloysite (PSU/LiCl/PH-HAL) using different characterization techniques; namely, (a) the effect of HAL addition on the chemical composition of the membrane by Fourier Transform Infrared (FTIR) analysis, (b) surface morphology through Scanning Electron Microscopy (SEM), (c) wettability of the surface with the use of a contact angle goniometer, and (d) mechanical properties using a Universal Testing Machine. Considering the low cost and availability of PSU and HAL, this study would serve as a guide on the proper and efficient way of fabricating PSU incorporated with HAL. This undertaking may also serve as a gateway for further improvements and for the use of PSU/LiCl/HAL nanofiber membranes produced via electrospinning in various areas of application.

Methodology

PSU, with an average molecular weight of 35,000, and COM-HAL were both obtained from Sigma-Aldrich. Lithium chloride (LiCl) salt and anhydrous dimethylacetamide 99.8% (DMAc) were obtained from the Mapúa University Chemical Supplies. PH-HAL was obtained from the Department of Science and Technology of the Philippines. The electrospinning procedure was partly based on the methodology of Chang and Lin (2009) [1]. PSU mixtures were prepared at different compositions; specifically, pure PSU, PSU/LiCl, PSU/LiCl with varying concentrations of COM-HAL (0.5%, 1% and 2%), and PSU/LiCl with varying concentrations of PH-HAL (0.5%, 1% and 2%). 40 g of each polymer solution was prepared and stirred at 60 °C for 6 h. Initially, the solvent was prepared in a media bottle and preheated by submerging it in a water bath at 60 °C for 15 min prior to dissolving the solutes. The flow rate of the solution and the voltage were set to 1.0 mL h-1 and 25 kV, respectively. The tip-to-collector distance was set to 15 cm. The chemical components and functional groups of the membranes were detected with a PerkinElmer Spectrum 100 FT-IR Spectrometer. The electrospun membranes were subjected to SEC Mini-SEM SNE-3200M micrograph analysis at 10 kV accelerating voltage for the comparison of surface morphologies. Moreover, 25 fiber diameters were measured using ImageJ ver. 1.50i software to calculate the average fiber diameter. A contact angle goniometer was used to determine the wettability of the membranes, and the mechanical properties of the membranes were determined using an Instron 3225 Single Column UTM.

Effect of HAL on the chemical composition of the membranes

Based on the results of the FTIR analysis (Figure 1.a), the IR spectra of PSU and PSU/COM-HAL at different concentrations of COM-HAL were observed. For pure PSU, the band peaks of the different functional groups present were identified at the following wavenumbers: 1013 cm-1 and 1103 cm-1 (aromatic C-H in-plane bend); 1148 cm-1 and 1171 cm-1 (O=S=O symmetrical stretch); 1239 cm-1 (C-O-C symmetrical stretch); 1292 cm-1 and 1322 cm-1 (O=S=O symmetrical stretch); 1488 cm-1 and 1586 cm-1 (C=C-C aromatic ring stretch); 2966 cm-1 (methyl C-H asymmetrical/symmetrical stretch). The band peaks in the IR spectra of PSU were all evident, which confirmed the presence of PSU in the membrane. Moreover, the HAL IR spectra were also observed, in accordance with the FTIR analysis by Bordeepong et al. (2011) [2].
The similarity between the obtained IR spectra for PSU/COM-HAL (Figure 1.a) and the expected IR spectrum for PSU/HAL (Figure 1.b) confirmed the interaction between PSU and HAL.

Effect of LiCl and HAL on the membrane surface morphology

Figure 2.b illustrates the SEM image of the electrospun PSU with the addition of LiCl; as observed, smooth, uniform fibers without beads were produced as compared with pure PSU (Figure 2.a). The obtained results confirmed that the addition of salt improved the conductivity of the polymer solution, which helps fiber formation significantly [1], but it also resulted in a decrease in the average fiber diameter from 697.37 ± 193.20 nm for pure PSU to 372.72 ± 130.50 nm for PSU/LiCl (Figure 3). With regard to the addition of HAL to PSU/LiCl, curvy fibers with slight spindle-like beads can be observed at 0.5% COM-HAL (Figure 2.c), with an average fiber diameter of 521.78 ± 166.14 nm. Further increases in COM-HAL concentration (1% and 2%) showed no significant changes in the surface morphologies (Figures 2.d and 2.e), although the average fiber diameter increased slightly (Figure 3). For the addition of PH-HAL to PSU/LiCl, 0.5% and 1% concentrations resulted in the formation of spindle-like bead and bead-on-string morphologies (Figures 2.f and 2.g), with an average fiber diameter of 403.38 ± 99.000 nm. As shown in Figure 3, an increasing trend can also be observed in the average fiber diameter as the concentration of PH-HAL increases. In summary, the SEM micrographs revealed that the average fiber diameter decreased significantly with the addition of LiCl to the PSU membrane, whereas the incorporation of COM-HAL and PH-HAL into the membrane increased the average fiber diameter.

Effect of HAL on the wettability of the membrane

The contact angle of water on pure PSU was approximately 142 ± 2.94°, which is in good agreement with the strongly hydrophobic nature of PSU. Based on the contact angle measurements of water on the electrospun membranes, the addition of LiCl and HAL had no significant effect on the wettability of the membranes (Figure 4). The statistical analysis that was conducted, in which the p-value was greater than the significance level, also showed that the hydrophobicity of the composite membranes was not affected by the incorporation of HAL. Although HAL is hydrophilic in nature, the increasing HAL content caused the surface of the nanofiber membranes to become rougher, which led to a decrease in the interfacial tension and surface energy [3]. Thus, the contact angle measurements lie within the range of each other.

Effect of HAL on the mechanical properties of the membranes

As shown in Figure 5.a, increasing the concentration of PH-HAL decreases the tensile strength of the composite membrane. This is due to the weak interaction between PSU and HAL. The same phenomenon was observed in the study of Peng et al. (2009) [4], who stated that the loss of tensile strength upon addition of nanoclay to nanocomposites is due to decreasing uniaxial orientation. A similar effect can also be observed with the addition of COM-HAL. Although there is a sudden increase in the tensile strength at 1% COM-HAL, the statistical analysis resulted in a p-value greater than the significance level, suggesting that the addition of COM-HAL and PH-HAL had no significant effect on the tensile strength of the membranes.
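The study reports only that the p-values from this statistical analysis exceeded the significance level, without naming the test used. Purely as an illustration of how such a comparison could be run, the sketch below assumes a one-way ANOVA at a significance level of 0.05 on hypothetical contact-angle replicates; the numerical values are placeholders, not data from the study.

```python
# Minimal sketch of the kind of significance test described above, assuming a
# one-way ANOVA at alpha = 0.05; the replicate values below are illustrative
# placeholders, not the study's raw data (only means and SDs are reported).
from scipy import stats

# Hypothetical contact-angle replicates (degrees) per membrane composition.
contact_angles = {
    "PSU":             [141.0, 144.5, 140.5],
    "PSU/LiCl":        [139.8, 143.0, 141.7],
    "PSU/LiCl/1% HAL": [138.9, 142.4, 140.1],
}

f_stat, p_value = stats.f_oneway(*contact_angles.values())
alpha = 0.05
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
if p_value > alpha:
    # p greater than the significance level: no detectable effect of the additive.
    print("No significant difference in wettability between compositions.")
else:
    print("At least one composition differs significantly.")
```

The same kind of test could be applied to the tensile-strength data discussed above; a p-value above the chosen threshold is what the authors report as "no significant effect".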
Moreover, it can be seen from the graph (Figure 5.b) that the addition of PH-HAL increased the maximum strain of the membrane compared to the effect of adding COM-HAL. For the modulus (Figure 5.c), the incorporation of LiCl had no effect, but the values decrease with the addition of both COM-HAL and PH-HAL (0.5% and 2%). The sudden increase observed at 1% concentration was due to poor dispersion of the clay in the composite films, resulting in a high modulus. Considering the maximum tensile extension (Figure 5.d), the values increase with the concentration of COM-HAL, but no specific trend can be seen with the addition of PH-HAL. In general, a decrease in the mechanical properties can be observed due to the effect of increasing HAL content, which weakened the membrane, thus decreasing the tensile strength [5]. However, upon statistical treatment of the data gathered, the p-values were found to be greater than the significance level, which suggests that the addition of COM-HAL and PH-HAL at the concentrations used had no significant effect on the PSU membrane.

Conclusion

The researchers were able to synthesize nanofibrous PSU membranes containing COM-HAL and PH-HAL by the electrospinning technique. It was observed from the SEM micrographs that the addition of LiCl salt produced bead-free nanofibers with reduced average fiber diameters. The FTIR analysis verified the hydrogen-bonding interaction of PSU and HAL through changing band peak intensities from 3600 cm-1 to 3900 cm-1. The addition of HAL to the membrane had no significant effect on the wettability of PSU; the composite membranes produced evidently remained hydrophobic even with the incorporation of hydrophilic HAL. The addition of COM-HAL and PH-HAL to the membrane also decreased its mechanical strength. This is due to the exfoliation of the clay, which optimized the number of available reinforcing elements, thus increasing the matrix rigidity while often decreasing its fracture toughness [5]. However, the statistical analysis suggested that there was no significant difference in the values.
Rapid and accurate viral diagnosis

In recent years, there has been increased recognition of the importance of viral infections. In addition, new antiviral agents have become available. These factors have led to a marked increase in utilization of viral diagnostic services. In this review, both conventional and rapid methods for viral diagnosis are presented, with emphasis on recent advances. The antiviral agents currently available and the major drugs under investigation are also briefly discussed. It is hoped that this review will serve as a useful adjunct for the management of patients with virus infections.

Despite the prevalence of viral infections, viral diagnostic laboratories have traditionally existed only as part of either regional health departments or university research laboratories. Conventional viral diagnostic methods have been considered time-consuming, expensive and inaccessible to the practising physician (Herrmann, 1974; Herrmann and Herrmann, 1976; Hsiung, 1977). Thus an accurate viral diagnosis was infrequently attempted. However, in recent years the importance of viral infections has been increasingly recognized, particularly as a cause of morbidity and mortality in the immunosuppressed patient (Muller et al., 1972; Ho, 1977; Shields et al., 1985), of both severe and subtle disease in the neonate (Stagno et al., 1975; Whitley et al., 1980b), and as a cause of venereal disease (Ng et al., 1970; Jordon et al., 1973; Handsfield et al., 1985). The epidemic of acquired immunodeficiency syndrome (AIDS) has focused the world's attention on viruses as potentially serious pathogens (Barre-Sinoussi et al., 1983; Popovic et al., 1984; Shaw et al., 1984). In addition, viruses may be etiologically linked to cancer (Rawls et al., 1969; Henle et al., 1969; Hanto et al., 1981; Andiman et al., 1983; Durst et al., 1983; Wong-Staal, 1983). Most importantly, promising new antiviral agents are becoming available. Therapy, if it is to be effective, must be instituted early in the course of the disease; thus, there has been increasing interest in viral diagnosis and particularly in the development of more rapid diagnostic procedures for viral infections (Gardner, 1977; Yolken, 1980; Richman et al., 1984a,b). The awareness on the part of the medical community and the public of the significance of herpes infections in particular has led to the establishment of viral diagnostic laboratories in an increasing number of community hospitals and a tremendous increase in utilization of viral diagnostic services previously available only in regional laboratories or university hospitals. There has also been a burgeoning of commercial laboratories offering viral diagnostic tests to those hospitals or practitioners without such services readily available. The number of commercial companies and products available to aid in viral diagnosis has also greatly increased. With the advent of antiviral therapy, it is no longer acceptable for lack of accurate viral diagnosis to hinder or delay the treatment of patients. Thus, physicians are beginning to demand laboratory diagnosis of their patients' illnesses in order to provide specific and proper treatment. To accomplish this, viral diagnostic facilities are becoming more accessible and, additionally, health practitioners must be more knowledgeable regarding the procedures used for viral diagnosis.
The purpose of this review, therefore, is to discuss both new and standard methods for virus recognition and identification, with special reference to rapid diagnosis and the advances made in the last few years. The selection of antiviral agents currently used is also briefly discussed. It is hoped that this review will serve as a useful adjunct for the management of patients with virus infections.

NEW AND STANDARD METHODS FOR VIRUS RECOGNITION AND IDENTIFICATION

Conventional methods of viral diagnosis consist of virus isolation and serology; light and electron microscopy are performed in certain situations. Although there is tremendous interest in the development of rapid diagnostic techniques, conventional diagnostic methods remain the most widely used and are essential in confirming the usefulness of newer techniques. However, it must be emphasized that only standard virus isolation and electron microscopy allow for success in recognition of unexpected or 'new' viruses.

SPECIMEN COLLECTION

The critical first step in making a successful viral diagnosis is obtaining the proper specimens. This includes the choice of specimens, and proper collection and transport. If these initial steps are not appropriately undertaken, the subsequent time and effort spent in attempting virus isolation will be wasted. The choice of specimens depends upon the clinical syndrome and the viruses suspected. Since one syndrome can be associated with many viruses, a set of specimens is often recommended. In general, specimens for virus isolation should be collected early in illness, as many viruses are excreted for only a few days. However, certain viruses, such as cytomegalovirus (CMV), enteroviruses and adenoviruses, can be excreted for prolonged periods. Table 1 contains the commonly encountered clinical syndromes, the associated viruses and the appropriate specimens to be obtained (acute and convalescent sera are to be collected in each case). For throat swabs, a vigorous swab of the posterior pharynx and of any visible lesions should be obtained. Stool specimens are preferred over rectal swabs because the larger sample size results in a greater yield of virus isolates. First-voided morning urines are best, and two or three specimens are optimal for CMV isolation. Aspiration of nasopharyngeal mucus has been found to be superior to nasal swabs or nasal washes for isolation of respiratory syncytial virus (RSV) (Bromberg et al., 1984), and bronchoalveolar lavage specimens have been found superior to bronchial washings for CMV (Stover et al., 1984; Martin and Smith, 1986).

SPECIMEN TRANSPORT

Since viruses are obligate intracellular organisms, they require living cells in which to replicate. As a result, a significant decrease in virus infectivity titer will occur if clinical specimens are allowed to stand for any period at room temperature. For best results, direct inoculation of cell cultures at the bedside should be done. Generally this is not feasible; therefore, swabs and tissues should be placed in viral transport media containing a balanced salt solution and a protein stabilizer, gelatin or calf serum. A variety of collection and transport devices are now available commercially and have been the subject of several recent studies (Johnson et al., 1984; Warford et al., 1984). Urine, stools, spinal fluids and other body fluids should be placed in sterile containers. Prompt transport to the laboratory is imperative.
If a delay is necessary, specimens can be held at 4°C until inoculation into cell culture. If a long delay is necessary, specimens should be frozen at -70°C. For transport to a distant laboratory, specimens can be shipped by rapid delivery service on wet ice; frozen samples can be shipped on dry ice. If swabs dry out or specimens are left at room temperature for any period, virus infectivity will deteriorate markedly. Serum is usually obtained for serodiagnosis and is helpful if no virus is isolated or to confirm an unusual isolate. Whole blood or leukocytes can also be useful in virus isolation, e.g. for CMV or Epstein-Barr virus.

VIRUS ISOLATION

Once specimens arrive in the laboratory, they must be inoculated promptly into sensitive test systems. Since viruses require living cells in which to replicate, the inoculation of cell cultures or laboratory animals is necessary. Unfortunately, no single culture system will support the growth of all viruses. Thus a variety of cell cultures are routinely used in a diagnostic laboratory. In certain circumstances, embryonated eggs or small animals may be utilized. It is apparent that laboratory personnel must know the clinical syndrome and/or the viruses suspected in order to choose the appropriate system. If an insensitive system is utilized, it is unlikely that any virus will be isolated even though virus is present in the specimen.

Cell Cultures

It was the discovery that poliovirus could replicate in nonneural tissue culture (Enders et al., 1949) that revolutionized diagnostic virology. Currently, cell cultures are the mainstay of most viral diagnostic laboratories and, for many labs, they are the only system employed. Embryonated eggs, small animals and serology are reserved for the larger reference laboratories. There has been a proliferation in the types of cell culture available. In general, three main types of cell culture are used: primary cell cultures, diploid cell strains and continuous cell lines. Primary cell cultures are made directly from animal or human tissues and can be subpassaged only a few times. Diploid cell strains are generally derived from human embryonic tissues, particularly embryonic lung, and can be subcultured for about 50 passages. Continuous cell lines are usually derived from human or animal tumors and can be propagated indefinitely. A wide variety of primary cell cultures (e.g. monkey kidney, rabbit kidney), human diploid fibroblast (HDF) cell strains (e.g. WI-38, MRC-5) and continuous cell lines (e.g. HeLa, HEp-2) are now available commercially. The choice of cell culture types employed in any laboratory is dependent upon the viruses sought, the patient population and the economic constraints. A fairly broad spectrum of viruses can be cultured if one set of the following cell cultures is used: primary monkey kidney (MK), HDF and HEp-2 cells. The use of several cell types increases the chances of recovering a variety of virus types from clinical specimens.

Recognition of Virus-Induced Cellular Changes

After inoculation into cell culture, the presence of a virus may be detected in several ways. Most commonly, virus-induced changes such as rounded refractile cells or grape-like clusters are noted (Fig. 1, C-H). These changes are called cytopathic effect (CPE) and vary depending on the causative virus. The formation of syncytia is characteristic of certain viruses, such as respiratory syncytial virus and measles, as well as parainfluenza types 2 and 3 when inoculated in continuous cell lines (Fig. 1, J).
Some viruses produce no visible change; therefore, indirect tests for their presence are necessary. For influenza and parainfluenza viruses, the hemadsorption test is utilized, whereby a dilute solution of guinea-pig red blood cells is added to the infected monolayer of cells, allowed to adsorb at 4°C, then washed off. If influenza or parainfluenza is present, the red cells will adhere to the infected cell monolayer (Fig. 1, I). For rubella virus, the interference test has traditionally been used. By this method, cell cultures infected with suspected rubella virus are superinfected with an echovirus, a virus that readily produces CPE. In the presence of rubella, however, the expected virus-induced CPE does not develop but is interfered with. The speed of appearance and progression of CPE can also be helpful in distinguishing viruses; however, this is also dependent upon the concentration of virus in the inoculum and the sensitivity of the particular cell culture used. Preliminary identification of a virus isolate can be made based upon the type of cell culture the virus is growing in and the character of the virus-induced cellular changes. For example, cytomegalovirus induces CPE only in human fibroblast cells, whereas herpes simplex virus (HSV) induces CPE in both human fibroblast and rabbit kidney (RK) cells (Fig. 2). Final identification usually requires a neutralization test using type-specific antiserum; however, in many laboratories more rapid methods of identification are now being applied, such as immunofluorescence (IF) with monoclonal antibodies (see Section 2.6).

THE USE OF MINI-LABS AND REFERENCE LABORATORIES

Several reports have demonstrated the feasibility of using mini or satellite laboratories, whose services are tailored to both the facilities of the laboratory and the needs of the patient population they serve (Herrmann and Herrmann, 1977; Peterson et al., 1980; Landry and Hsiung, 1981). For example, the cost of virus isolation can be significantly reduced by the use of microtiter plates containing different types of cell cultures. Virus isolation using the latter system compares favorably with standard techniques (Herrmann and Herrmann, 1977). Peterson et al. (1980) reported the advantage of using satellite laboratories. Time in reporting results was reduced when primary virus isolation was performed in a local, small, hospital-based laboratory compared with sending specimens to a state-wide reference laboratory. The number of virus isolations by the satellite laboratory was slightly greater than from the reference laboratory, and the cost was comparable to that of routine bacteriological specimens. In laboratories where the nature of the patient population is such that HSV is most frequently encountered, the most sensitive cell systems appear to be nonprimate cells, i.e. rabbit kidney or guinea-pig embryo (GPE) cells (Landry et al., 1982c; Hsiung et al., 1984). Since other human viruses generally do not grow well in nonprimate cells, presumptive identification can be made according to cell susceptibility and characteristic CPE. As shown in Table 2, HSV induces a characteristic CPE in both human diploid fibroblast and RK cells. Cytomegalovirus and varicella-zoster virus (VZV) only replicate in human fibroblasts.
Enteroviruses grow best in primary MK cells, whereas adenoviruses grow best in continuous cell lines such as HEp-2. Advantage has been taken of selective cell-culture systems for presumptive identification of enteroviruses (Hsiung, 1961, 1962; Landry et al., 1982b) and more recently for typing of HSV types 1 and 2 (Nordlund et al., 1977). Because HSV-2 produces plaques in both GPE and chick embryo (CE) cell cultures whereas HSV-1 induces plaques in GPE cells but not in CE cells, the two virus types can be easily identified when these two cell systems are used. However, IF with type-specific monoclonal antibodies is now available, is more rapid and is as accurate as selective cell systems (Balkovic and Hsiung, 1985). Those viruses not identifiable by cell susceptibility or characteristic CPE can be referred to larger reference laboratories for final identification. Although the number of community hospitals with virology laboratories is increasing yearly, the majority still do not have in-house viral cultures available. Therefore, if an accurate viral diagnosis is to be made, specimens must be sent out to reference laboratories or commercial laboratories for virus isolation. With the ready availability of overnight rapid delivery services, specimens can now be processed promptly with results comparable to in-house processing (Ray and Minnich, 1982). In addition, with a new specimen transport device using human fibroblast cell cultures, virus may replicate during transport (Warford et al., 1984).

RECENT ADVANCES IN VIRUS ISOLATION

With the increased utilization of virus isolation comes a demand for improved isolation rates and more rapid results. Therefore, common diagnostic procedures are being re-evaluated in an attempt to optimize the collection and transport of specimens, specimen processing, the conditions of culture incubation and the selection of the most sensitive cell culture system for each virus.

Specimen Processing

Different processing methods have been examined to determine the optimum for detection of enterovirus viremia (Prather et al., 1984), and the factors influencing recovery of varicella-zoster virus (VZV) have also been studied (Levin et al., 1984). A number of studies have determined the importance of centrifugation of specimens onto the monolayer. Improved isolation rates and more rapid results for HSV and CMV have been reported (Gleaves et al., 1984, 1985a; Salmon et al., 1986). Fractionation of semen with inoculation of the pellet fraction into culture has been associated with elimination of monolayer toxicity and enhanced CMV detection in AIDS patients (Howell et al., 1986).

Cell Culture Selection and Conditions of Incubation

A continued search for better culture systems for each virus remains an important task of the diagnostic virologist. Recent reports have indicated that a mink lung cell line is highly sensitive to infection with HSV (Fayram et al., 1985; Smith et al., 1985). In another study, MRC-5 cells were found to be more sensitive than WI-38 for CMV isolation (Gregory and Menegus, 1983a). Incubation temperature has long been recognized as important in the isolation of respiratory viruses, for which 33°C is optimal. Recently, it has been reported that incubation at 36°C for isolation of cytomegalovirus (CMV) results in doubled isolation rates and more rapid onset of CMV CPE, by an average of 4 days (Gregory and Menegus, 1983b).
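The selective cell-culture reasoning described above (summarized in Table 2, which is not reproduced here) is essentially a pattern-matching exercise. The sketch below is a minimal, illustrative encoding of only the susceptibility statements made in the text; it is not the laboratory's actual identification key, and real work-ups also weigh CPE character, speed and confirmatory tests.

```python
# Minimal sketch of the presumptive-identification logic described above:
# match an observed pattern of cell-culture susceptibility against the
# patterns stated in the text. The entries cover only the examples mentioned
# and are illustrative, not an exhaustive identification key.
SUSCEPTIBILITY = {
    "HSV":         {"HDF", "RK"},   # CPE in human fibroblasts and rabbit kidney
    "CMV":         {"HDF"},         # human fibroblasts only
    "VZV":         {"HDF"},         # human fibroblasts only
    "Enterovirus": {"MK"},          # grows best in primary monkey kidney
    "Adenovirus":  {"HEp-2"},       # continuous human cell lines
}

def presumptive_id(observed_cpe: set[str]) -> list[str]:
    """Return viruses whose stated susceptibility pattern matches the observed CPE."""
    return [virus for virus, cells in SUSCEPTIBILITY.items() if cells == observed_cpe]

# Example: CPE seen in both HDF and RK cells points to HSV.
print(presumptive_id({"HDF", "RK"}))   # ['HSV']
# CPE restricted to HDF narrows the call to CMV or VZV; further testing is needed.
print(presumptive_id({"HDF"}))         # ['CMV', 'VZV']
```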
Cultivation of Fastidious Viruses

Perhaps the most important development in virus isolation has been the cultivation of several viruses previously considered not amenable to isolation in cell culture. Hepatitis A virus (HAV) has now been isolated directly from fecal extracts in several cell culture types (Provost et al., 1981; Daemer et al., 1981; Siegl et al., 1984). Since the virus is not cytopathic, immunologic assays such as radioimmunoassay (RIA) or IF are necessary to detect its presence. Human rotavirus could be cultivated in cell cultures when trypsin was added to the media and has now been successfully isolated and propagated in several different cell cultures (Graham and Estes, 1980; Naguib et al., 1984); IF was the most reliable method for detection and identification of rotavirus in culture. Several enteric adenoviruses, first detected by electron microscopy and considered fastidious, have now been isolated and propagated in several cell systems, with 293 cells considered the most sensitive (Brown et al., 1984). The ability to isolate these viruses in cell culture greatly facilitates their study, allows antigen production and paves the way for the development of vaccines. However, for diagnostic use, other methodologies, such as serology for detection of HAV IgM antibody and ELISA for detection of rotavirus antigen, remain the methods of choice. The initial isolation of the human immunodeficiency virus (HIV) was a discovery central to the identification of the causative agent of AIDS and to the development of simpler screening tests for this virus (Barre-Sinoussi et al., 1983; Gallo et al., 1984; Levy et al., 1984). The mainstay for diagnosis of human immunodeficiency virus (HIV) is detection of viral antibody by ELISA, with positive serum samples retested with supplementary tests such as the Western blot, IF and radioimmunoprecipitation (Schupbach et al., 1986). However, detection of viral antibody alone does not determine whether the individual is currently infected with the virus. In addition, antibody may not develop for six to twelve months after infection or may become undetectable late in the course of AIDS. The isolation of virus from the infected individual can serve this purpose. However, it remains an elaborate, labor-intensive and lengthy process that is currently performed primarily in specialized research centers. The procedures for isolation have recently been reviewed elsewhere (Schupbach et al., 1986; Griffith, 1987) (Fig. 3). Briefly, human mononuclear cells are separated from the peripheral blood of normal donors and suspended in growth media containing a mitogen such as phytohemagglutinin and a T-cell growth factor such as interleukin 2. Several days after lymphocyte cultures have been initiated, patients' specimens are inoculated and are then observed for 3 to 4 weeks for viral cytopathic effect (Fig. 4), and the supernatants are assayed weekly for the products of viral replication such as reverse transcriptase or viral antigen. Freshly prepared stimulated lymphocyte cultures are added once a week. Although continuous cell lines are available that support the growth of HIV, these lines are not as sensitive as primary lymphocyte cultures for isolation of virus from patients' specimens. Continuous cell lines have the advantage, however, of showing little CPE and producing large amounts of virus, and thus are essential for the production of viral antigens for diagnostic tests.
Although isolation of HIV is currently too tedious and expensive for routine diagnostic use, with anticipated methodologic improvements it will certainly play a larger role in the future.

ADVANCES IN VIRUS IDENTIFICATION

After preliminary identification of virus isolates by CPE in cell culture, final identification has required labor-intensive neutralization, hemagglutination inhibition or complement fixation tests. Despite the time invested, the results of these tests are not always clear-cut. In addition, with the increasing importance of viral diagnosis in patient care, more rapid specific identification is needed, as mistakes or delays in identification can adversely affect treatment and patient management.

RNA Genome Analysis

In recent years, a number of the tools of molecular geneticists have been used for the identification and fingerprinting of RNA and DNA viruses. Oligonucleotide mapping and polyacrylamide gel electrophoresis of viral proteins have been used to determine genetic epidemiologic relationships between polioviruses and have been useful in determining the relations between cases of paralytic polio and vaccine strains of polio (Minor, 1980; Nottay et al., 1981). Oligonucleotide mapping has also been used to study the evolution of influenza A virus strains in nature (Nakajima et al., 1978; Nakajima et al., 1980; Young and Palese, 1979). Electropherotyping of human rotavirus strains has been used to identify strains involved in disease outbreaks within hospital settings and in different parts of the world (Albert et al., 1982; Chiba et al., 1984; Rodger et al., 1981; Rodriguez et al., 1983; Spencer et al., 1983). This technique has been useful in confirming the difference of rotavirus strains isolated in China from previously recognized rotavirus strains (Hung et al., 1984).

Restriction Endonuclease Analysis

In recent years, restriction enzyme analysis has been used to identify and classify DNA viruses of the herpes-, adeno- and papovavirus groups. By this technique, viral DNA is incubated with a specific endonuclease, resulting in cleavage of all susceptible DNA sequences. The fragments are then separated by gel electrophoresis, and a characteristic 'fingerprint' for that virus is obtained (Summers, 1980). The application of restriction endonuclease analysis has been particularly useful in the study of HSV. HSV-1 and HSV-2 can be readily distinguished by this technique, and it is considered the gold standard for typing isolates (Mayo et al., 1985b). In addition, strain-specific differences are evident, allowing further subclassification of isolates within an HSV type (Fig. 5). As a result, restriction endonuclease analysis has proved useful in the typing of HSV-1 and HSV-2 isolates on a large scale (Lonsdale, 1978), in tracing nosocomial outbreaks of HSV (Linneman et al., 1978), and in dispelling concern that a clustering of cases of herpes encephalitis was due to circulation of a single neurovirulent strain of virus (Landry et al., 1983). The same methodology has now been applied to tracing sources of CMV infection (Wilfert et al., 1982; Yow et al., 1982; Handsfield et al., 1985), as well as to studying the molecular epidemiology of VZV (Martin et al., 1982) and adenoviruses (Kemp et al., 1983; Wadell et al., 1985). Restriction enzyme analysis has also been shown to be more reliable and specific than neutralization and hemagglutination tests for the identification of adenoviruses (Fife et al., 1985a,b; Hammond et al., 1985).
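As a rough illustration of the fingerprinting principle described above, the sketch below cuts a toy sequence at every occurrence of a single recognition site and reports the resulting fragment lengths. The sequence and the choice of EcoRI are illustrative assumptions only; real fingerprints are generated from whole viral genomes with one or more enzymes and read from gels.

```python
# Toy illustration of restriction fragment "fingerprinting": find every
# occurrence of a recognition sequence, cut there, and report the fragment
# lengths that would be separated by gel electrophoresis. The sequence and
# the single enzyme (EcoRI, recognition site GAATTC) are illustrative only.
def fragment_lengths(sequence: str, site: str = "GAATTC", cut_offset: int = 1) -> list[int]:
    cuts, start = [], 0
    while (pos := sequence.find(site, start)) != -1:
        cuts.append(pos + cut_offset)   # EcoRI cuts between G and AATTC
        start = pos + 1
    begins = [0] + cuts
    ends = cuts + [len(sequence)]
    return [e - b for b, e in zip(begins, ends)]

toy_genome = "ATGGAATTCCGTTAGCGAATTCAAATGC"
print(fragment_lengths(toy_genome))   # [4, 13, 11] -> the "fingerprint" for this toy sequence
```

Comparing such fragment-length patterns between isolates is what allows strains to be matched or distinguished.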
Thus, restriction endonuclease fingerprinting provides a useful additional method for virus identification.

LIGHT MICROSCOPY

Direct smears from skin lesions have long been useful in the rapid diagnosis of HSV, VZV and poxvirus infections. HSV and VZV both induce multinucleated cells and characteristic intranuclear inclusions (Cowdry type A), whereas poxvirus infections induce typical cytoplasmic inclusions (Guarnieri bodies) in infected cells. Where no viral culture facilities are available, Pap smears have also been used to detect the presence of HSV infection of the cervix. Characteristic CMV-induced intranuclear inclusions in both Pap smears and infected tissues have been used as markers for diagnosis of CMV infections. For certain virus infections, the cellular changes themselves in the affected organs are sufficiently characteristic to permit a presumptive diagnosis. Examples are the spongiform degeneration in the brains of patients with Creutzfeldt-Jakob disease (Gibbs and Gajdusek, 1969) and the balloon degeneration of liver cells seen in viral hepatitis (Ishak, 1976). The recent commercial availability of high-quality immunologic staining reagents and nonradioactively labelled viral probes for in situ hybridization has allowed a more specific and sensitive diagnosis of viral infections to be made using tissue sections in a routine pathology laboratory (see Section 3).

ELECTRON MICROSCOPY AND IMMUNE ELECTRON MICROSCOPY

The electron microscope (EM) has been used in the diagnosis of viral diseases for several decades. Only by this method can a virus be directly visualized. Virus size and shape can be easily identified (Figs 6 and 7). However, different viruses with the same morphology cannot be distinguished by routine examination (e.g. smallpox and vaccinia, or HSV and VZV). The EM techniques most commonly used include the negative staining of virus particles with the electron-dense salts of phosphotungstic acid (Figs 6 and 7, top row) or preparation of ultrathin sections of cells or tissues suspected of harboring virus (Figs 6 and 7, middle row). Clinical specimens or virus-infected culture fluid can be examined directly using the negative staining technique, thus providing a rapid diagnosis of virus infection (Hsiung et al., 1979). However, difficulties are encountered when the number of virus particles in the sample examined is low. A number of techniques have been developed to enhance virus visualization, including the pseudoreplica technique (Smith and Melnick, 1962), agar gel diffusion (Anderson and Doane, 1972) and ultracentrifugation (Smith and Gehle, 1969). Although thin sectioning of tissues usually requires 3 or more days of specimen preparation for EM, a more reliable diagnosis may result since the fine structure of the virus particles and cells is more likely to be preserved. This may be especially important in cases where very few virus particles may be present or in determining the location of the virus particles. The recognition of a human papovavirus in the brain cells of a patient with progressive multifocal leukoencephalopathy (ZuRhein and Chou, 1965) and the identification of Epstein-Barr virus in cultured lymphoblastic cells derived from a Burkitt's lymphoma patient (Epstein et al., 1964) would have been missed had this EM technique not been used.

In the 1970s, the application of EM techniques uncovered a number of new viruses which could not be isolated in culture.
These included hepatitis B virus (Dane et al., 1970) and enteric adenoviruses in the stools of children with gastroenteritis (Flewett et al., 1975); with the use of immune electron microscopy (IEM), hepatitis A (Feinstone et al., 1973), rotavirus (Flewett et al., 1973) and the Norwalk agent (Kapikian et al., 1972) were first visualized in stool contents. IEM, which involves the mixing of the patient's specimen with immune serum, resulting in aggregation of viral particles that renders them readily visible, has also been useful in the rapid diagnosis of respiratory viruses in clinical specimens (Doane et al., 1967; Joncas et al., 1969). Recent innovations have included the development of solid-phase IEM by the use of protein A, which was found to be 30 times more sensitive than EM and 10 times more sensitive than ELISA for the detection of rotavirus in stools (Svensson et al., 1983). Another modification is the use of the solid-phase IEM double-antibody technique, by which formvar carbon-coated grids are treated with diluted antibody, resulting in an approximately 30-fold increase in virus particles. Viewing of the virus is facilitated by the addition of a second 'decorator' antibody. This has been used with success in the detection of papovaviruses (Giraldo et al., 1982). However, despite the many contributions of EM and IEM to virus diagnosis, the method is still too expensive and cumbersome for routine application in the average diagnostic laboratory.

RAPID VIRAL DIAGNOSIS

The rapidity with which the isolation of a virus can be accomplished is variable and depends upon the virus type, the amount of virus in the clinical specimen and the sensitivity of the culture system utilized. Certain viruses, such as HSV, can often be isolated within 24 hr of inoculation into cell culture, whereas other viruses require 7 or more days for isolation and some have not been amenable to culture by routinely employed methods. The delay encountered in the diagnosis of many common virus diseases has been a source of frustration to both physicians and laboratory personnel. With the advent of antiviral chemotherapy, this dilemma has become more acute. In order to have a beneficial effect on the outcome of an illness, therapy must be instituted early. This has led to tremendous interest in the development of so-called 'rapid viral diagnostic methods'. The formation of both European and Pan American groups for rapid viral diagnosis, with regular symposia to keep members abreast of recent advances in the field, is a direct result of this interest (McIntosh et al., 1978, 1980; Richman et al., 1984a,b). Ideally, rapid diagnostic methods should be capable of yielding results within a few hours of a patient's admission to the hospital, with testing performed directly on clinical material. However, test results obtained within 1-2 days of admission would render viral diagnostic methods comparable to those routinely used in microbiology laboratories. Such 'rapid' techniques would include those used to identify viral antigens or nucleic acid directly in clinical specimens, or after amplification of virus in cell cultures before cellular changes occur or in cases where no changes occur. Many of the techniques to be discussed in this section have an immunologic basis, i.e. they depend upon the specific reaction between antigen and antibody. The reaction must be labeled with a marker to render it detectable. The marker can be a fluorescent dye, a radioisotope or an enzyme such as peroxidase.
An important development leading to the increased utilization of immunologic detection techniques has been the availability of high-quality commercial reagents, including monoclonal antibodies. Another significant and very recent change in the field of rapid viral diagnosis has been the introduction of nucleic acid hybridization technology into the field of clinical virology. Recent advances that have allowed the application of these techniques include: first, molecular cloning, resulting in the production of well-characterized and specific reagents for use as probes; second, the recognition of the ability of nucleic acid to bind to nitrocellulose, which allows screening of large numbers of samples; and, third, the development of non-radioactive biotinylated probes suitable for use in clinical laboratories. Hybridization techniques in clinical diagnosis remain experimental at this time; however, owing to the tremendous interest that exists in this area and the proliferation of studies published in the last few years, an overview will be presented. The immunologic and hybridization techniques will be reviewed in terms of their application to both direct detection of viral antigens or genomes in clinical specimens and detection of virus infection after amplification in cell culture. In general, for viruses that replicate well in cell culture, direct detection methods are less sensitive, though more rapid, than virus isolation. However, application of these methods to infected cell cultures can significantly shorten the time to reporting positive results and, in addition, confirm the identification of the virus. It must be emphasized, however, that all of the techniques discussed in this section are directed at specific viruses that are 'suspected'. They are not 'open-minded'; only virus isolation and EM will lead to the discovery of 'unsuspected' or 'new' viral agents.

Immunofluorescence

Immunofluorescence (IF) techniques, which include the direct fluorescent antibody (FA) procedure and the indirect fluorescent antibody (IFA) procedure, have long been used in the diagnosis of viral diseases. First introduced in 1941, IF was developed specifically to detect antigens in animal tissues (Coons et al., 1941). By this technique, specific antibody is tagged with a fluorescent dye and allowed to react with the antigen and, after a short incubation, the site of the antigen-antibody reaction can be visualized using a microscope with a u.v. light source. Direct IF is used to detect antigen by utilizing a specific antibody which is conjugated directly with a fluorescent dye. It is quicker, simpler and exhibits less nonspecific staining than the indirect method. The indirect method utilizes specific antibody that is not conjugated but is allowed to react with the test antigen. Then, conjugated antibody is added which is directed against the animal species from which the primary antibody is made. This test can be used to detect antigen or antibody, and has the advantage of requiring only a single conjugate for the detection of many antigen-antibody reactions, provided that all antisera are made in a single species. Although the indirect test is slightly more sensitive, it also gives more nonspecific results. Many difficulties have been encountered since the introduction of IF techniques, but in recent years many of the problems have been overcome. For example, an adequate number of infected respiratory epithelial cells is essential for respiratory specimens.
It is necessary to see labeled intracellular antigen in a distribution (intranuclear or intracytoplasmic) and in the cell type expected for the particular virus. Also, experience is required in distinguishing the nonspecific fluorescence seen with bacteria, fungi and mucus commonly present in respiratory specimens. Proper specimen collection and sample preparation are important in minimizing these problems. For skin lesions, there is little problem in the vesicular stage, but once lesions have become crusted, nonspecific fluorescence becomes a problem. In brain biopsies, nonspecific fluorescence is not usually problematic, but in autopsy specimens, if bacterial overgrowth has occurred, experience is again required in distinguishing nonspecific fluorescence (Gardner, 1977). Owing to difficulties in obtaining specific, sensitive antisera, it has been difficult to reproduce results outside of the research setting, until now. The availability of quality reagents and the demand for rapid diagnosis have contributed to this change. IF was first applied to the direct detection of virus in clinical specimens with the identification of influenza A in nasal smears (Liu, 1956). Subsequently, rabies virus was detected in mouse brains utilizing this technique, which quickly became the method of choice for rapid diagnosis of rabies virus infection (Goldwasser and Kissling, 1958). In addition, IF has been used to detect HSV in skin lesions (Biegeleisen et al., 1959). More recently, IF has been applied to the detection of a number of viruses in clinical specimens, including HSV (Schmidt et al., 1980, 1983), VZV (Schmidt et al., 1980), RSV, and parainfluenza (Wong et al., 1982; Waner et al., 1985), with varied results. It was also by IF that the delta hepatitis virus (HDV) was detected in liver cell nuclei and in serum of hepatitis B virus (HBV) carriers (Rizzetto et al., 1977). The application of IF using monoclonal antibodies to the direct detection of influenza (Shalit et al., 1985) and CMV (Martin and Smith, 1986) in clinical specimens has produced promising results. Perhaps RSV has generated the greatest enthusiasm, due to the difficulties encountered with culture and the benefits of rapid diagnosis with the availability of ribavirin treatment (Lauer, 1982). Numerous investigators have found IF examination of nasopharyngeal aspirates using either polyclonal or monoclonal antibodies more sensitive than culture (Cheeseman et al., 1986; Freymuth et al., 1986; Swenson and Kaplan, 1986). The advantages of immunofluorescent procedures performed directly on clinical specimens include speed, simplicity, low cost, and the ability to make a diagnosis in convalescence in some viral infections where virus is rendered non-infectious by the presence of antibody but is still visible by fluorescence. The ability to make a diagnosis when specimens have been delayed in their arrival in the laboratory is a great advantage. However, IF is highly dependent on proper collection of specimens. Even under study conditions, a significant percentage of specimens are unacceptable due to inadequate numbers of epithelial cells, which makes the specimen untestable. IF techniques were also first used years ago for the rapid detection and identification of viruses after amplification in cell cultures. Examples include the rapid detection and identification of measles (Cohen et al., 1955), VZV (Weller and Coons, 1954) and poliovirus (Kalter et al., 1959), and subsequently rubella (Schmidt et al., 1966).
For this application, there have been a number of exciting and potentially useful innovations within the last two or three years. One group has used centrifugation of specimens onto monolayers in shell vials, followed by application of IF at 36 hr (Gleaves et al., 1984) and then 16 hr post inoculation (Gleaves et al., 1985a), for the rapid detection of CMV in urine. All CMV isolates were detected by IF at 36 or 16 hr, respectively, whereas an average of 9 days was required for detection of CMV CPE using standard virus isolation without centrifugation or IF staining. When BAL and blood specimens are tested for CMV by this technique, some false-negative results are obtained (Paya et al., 1987). It is also important to inoculate two or, for blood samples, three shell vials per specimen for optimal results (Paya et al., 1988). The same methodology was applied to the early detection of HSV with excellent results (Gleaves et al., 1985b). Centrifugation was shown to be important in early detection. However, when this same methodology was applied to rapid detection of influenza virus using monoclonal antibodies, only 56% of influenza isolates were detected at 24 hr post inoculation by IF, compared with an average of 4 days for conventional isolation (Espy et al., 1986). Another study compared short-term (24 hr) tissue culture followed by IF with standard virus isolation and found complete agreement between the two methods. However, when the same reagents were applied directly to clinical specimens, both false-negative and false-positive results were obtained (Nerurkar et al., 1984a).

Immunoperoxidase

Immunoperoxidase (IP) techniques follow the same principles as IF techniques; however, the conjugate is an enzyme, most often horseradish peroxidase. The enzyme is coupled to specific antibody in the direct method, and to an anti-animal-species globulin in the indirect test. The presence of the enzyme conjugate bound to the virus-infected cells is detected by adding a substrate, diaminobenzidine or aminoethylcarbazole, then oxidizing it in the presence of hydrogen peroxide, resulting in a reddish-brown color which is permanent. The test has the same potential applications as IF, and it has a number of advantages over IF: the reaction can be detected with the naked eye or with a light microscope, which is important for laboratories with limited budgets; many of the products are electron dense and thus can be visualized with the electron microscope; most preparations are permanent; the reagents are more readily standardized and are more stable; there are fewer nonspecific reactions; and IP has been more successful than IF on processed tissue. However, this procedure was first described in the early 1970s (Avrameas and Ternynck, 1971), and experience with it is much less extensive than with IF. A major problem has been the endogenous peroxidase present in leukocytes in clinical specimens, especially from the respiratory tract, which leads to nonspecific staining. Techniques have been developed to remove the endogenous peroxidase (Straus, 1971; Weir et al., 1977), but they can also result in removal of unstable virus antigen, and if there is only a small amount of virus present, a false-negative result can be obtained.
The application of IP techniques to clinical material includes the identification of rabies (Atanasiu, 1975), HSV in a variety of clinical specimens (Morisset et al., 1974; Schmidt et al., 1983) including brain tissue (Benjamin and Ray, 1975), measles in the brains of patients with SSPE (Brown and Thormar, 1976), and hepatitis B in fixed liver sections (Burns, 1975). It has also been compared to IF for the detection of influenza A and respiratory syncytial virus (RSV) in respiratory specimens (Gardner et al., 1978). The two techniques were in excellent agreement, but removal of endogenous peroxidase was a significant problem in specimens containing RSV, where removal of peroxidase resulted in loss of RSV antigen. Recent modifications that have resulted in more sensitive assays include the peroxidase-antiperoxidase (PAP) (Sternberger and Joseph, 1979) and avidin-biotin complex (ABC) techniques (Hsu et al., 1981). IP methods were used early on to detect viral antigen in cell culture to obtain a more rapid diagnosis (Benjamin and Ray, 1974), and it is for this purpose that they have received much wider application recently. IP has been used to identify rubella isolates in cell culture (Schmidt et al., 1981). More importantly, commercial kits for HSV cultivation and identification have been developed using Vero cell culture, followed in 48 hr by staining with the PAP technique. Although these kits provide a valuable introduction to virus isolation for those laboratories without virology expertise (see Fig. 8), numerous studies have not found them to be as sensitive as standard virus isolation. The sensitivity of the kits has ranged from 73 to 79% when compared with standard tissue culture (Fayram et al., 1983; Hayden et al., 1983; Rubin and Rogers, 1984; Sewell et al., 1984). However, the problem may well lie in the kits' use of Vero cells, which are fairly insensitive to HSV infection when compared with more widely used HDF or primary RK cells. When other workers used HDF cell culture followed by IP staining at 24 hr, all HSV isolates that were eventually detected by standard culture were detected at 24 hr by IP staining (Miller and Howell, 1983). An additional study demonstrated that it is possible to significantly shorten the time involved in maintaining and observing cell cultures by application of the PAP technique for early detection of HSV in HDF cell culture. Over 16,000 specimens were processed for HSV; essentially all cultures positive for HSV were detected by 72 hr (two-thirds by 24 hr) by PAP staining, resulting in significant savings in time and materials (Mayo et al., 1985a). The combination of centrifugation of specimens onto cell monolayers followed by overnight incubation and IP staining was found to be more sensitive as well as more rapid than standard cell culture for diagnosis of HSV (Salmon et al., 1986). Thus this technique has much potential in rapid viral diagnosis, especially for laboratories without a fluorescence microscope.

Enzyme-Linked Immunosorbent Assay

In 1971, Engvall and Perlmann introduced the enzyme-linked immunosorbent assay (ELISA) for the quantitation of rabbit IgG (Engvall and Perlmann, 1971), a technique as sensitive as the radioimmunoassay (RIA), but with many advantages over the RIA. ELISA is similar to RIA except that an enzyme is used as the immunoglobulin marker instead of a radioactive isotope. When substrate is added to the enzyme-labelled immunoglobulin, a visible color reaction occurs which can be read visually or quantitated using a spectrophotometer.
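As described further below, the amount of antigen in a test sample is determined by comparing the color (absorbance) obtained to known standards. A minimal sketch of that interpolation is given here, assuming a simple piecewise-linear standard curve; real assays often use four-parameter logistic fits, and every number in the example is invented for illustration.

```python
# Minimal sketch of quantitation against a standard curve: absorbance of known
# standards is measured, and an unknown's absorbance is interpolated back to a
# concentration. A piecewise-linear fit is assumed for illustration; all values
# below are made up and do not come from any particular assay.
import numpy as np

std_conc = np.array([0.0, 1.0, 2.5, 5.0, 10.0, 20.0])      # antigen standards (ng/mL)
std_od   = np.array([0.05, 0.12, 0.26, 0.49, 0.90, 1.60])  # measured absorbance (OD)

def antigen_concentration(sample_od: float) -> float:
    """Interpolate a sample's absorbance onto the standard curve
    (sandwich format: higher OD means more antigen)."""
    return float(np.interp(sample_od, std_od, std_conc))

print(antigen_concentration(0.70))   # roughly 7.6 ng/mL on this toy curve
```

For the competitive format described below, the relationship is inverted (more antigen in the sample gives less bound enzyme and less color), so the standard curve decreases rather than increases with concentration.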
The ELISA can be used either for detection of antigen or antibody and has several variations modelled after the RIA. For detection of antigen, either the antibody sandwich or the competitive assay can be used. In the antibody sandwich method, specific antibody to the antigen to be detected is used to coat the surface of a solid phase support (such as polystyrene beads, microtiter plates, test tubes, etc.). Then the test sample (e.g. stool, body fluid) is added and allowed to react. For the direct or single antibody sandwich test, enzyme conjugated to specific antibody is then added and allowed to react. For the indirect or double-antibody sandwich test, unlabelled specific antibody is first added, then enzyme conjugated antiglobulin is added. As a final step, the amount of enzyme bound is detected by the addition of a substrate. The intensity of the subsequent color reaction is proportional to the amount of antigen in the test sample. In the competitive assay, specific antibody is adsorbed to the solid phase and the test specimen is added as above, in addition to a known amount of labeled antigen. The unlabeled antigen in the test specimen competes with the labeled antigen for antibody binding sites. Then substrate is added. The bound enzyme, and resultant color change, is less if antigen is contained in the material. The amount of antigen in the test sample is determined quantitatively by comparing the color obtained to known standards. The two enzymes most widely used in ELISA are horseradish peroxidase (Avrameas and Ternynck, 1971) and alkaline phosphatase (Engvall and Perlmann, 1971), but a number of others have also been used, each with advantages and disadvantages (Hosli et al., 1978;Watanabe et al., 1979). The problems in ELISA are similar to those in other immunologic tests. The purity, the sensitivity and specificity of the reagents must be carefully controlled. Nonspecific binding is a problem that can be diminished by careful washing, addition of species-specific serum to the reaction mixture, and the use of high quality specific reagents. The introduction of monoclonal antibodies should also reduce this problem. In addition, the optimal conditions for the assay vary depending on the antigen, enzyme, substrate etc., and must be carefully monitored. Because of these variables, a number of control specimens with known amounts of antigen should always be included in every test. Since its introduction, the ELISA has been used for the detection of a variety of antigens, antibodies and other biologic substances (Yolken, 1980). It has been widely applied to viral antibody detection with great success, most notably hepatitis B virus, for which it has supplanted the RIA, and human immunodeficiency virus (HIV). The ELISA has also been used for the detection of viral antigens of viruses which are difficult to propagate in culture, such as group A coxsackieviruses (Yolken and Torsch, 1980), human coronaviruses (Macnaughton et al., 1983), enteric adenoviruses (Anderson et al., 1983), Norwalk agent (Gary et al., 1985) and hepatitis A (Mathieson et al., 1977;Coulepis et al., 1985). ELISA, for detection of these viruses, remains a research tool, since there has not been sufficient demand for these tests in clinical laboratories. To date, ELISA has been especially useful in the diagnosis of rotavirus infections (Yolken et al., 1977).
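As an aside on the quantitative read-out described above, the short sketch below shows how an unknown antigen concentration can be interpolated from a standard curve of known antigens run in the same assay. Simple linear interpolation between standards is used purely for illustration (curve-fitting approaches such as four-parameter logistic fits are more usual), and every concentration and absorbance value shown is hypothetical rather than taken from any cited study.

```python
import numpy as np

# Hypothetical standard curve for an antibody-sandwich ELISA:
# known antigen concentrations (ng/mL) and the absorbances they produced.
std_conc = np.array([0.0, 1.0, 5.0, 25.0, 125.0, 625.0])   # ng/mL
std_abs  = np.array([0.05, 0.11, 0.32, 0.90, 1.60, 1.95])  # A450

def antigen_concentration(sample_abs: float) -> float:
    """Interpolate a sample absorbance against the standard curve.

    Absorbance rises monotonically with concentration in a sandwich ELISA,
    and np.interp requires increasing x-values, so we interpolate
    absorbance -> concentration directly.
    """
    if sample_abs < std_abs[0] or sample_abs > std_abs[-1]:
        raise ValueError("absorbance outside the range of the standards")
    return float(np.interp(sample_abs, std_abs, std_conc))

# A specimen read at A450 = 0.75 falls between the 5 and 25 ng/mL standards.
print(f"estimated antigen: {antigen_concentration(0.75):.1f} ng/mL")
```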
ELISA kits for rotavirus antigen detection have been available commercially for a number of years now and have been found comparable to EM (Cheung et al., 1982;Rubenstein and Miller, 1982). Recent modifications have resulted in an even more sensitive rotavirus ELISA kit (Doern et al., 1986). However, group B rotaviruses recently detected in China (Hung et al., 1984) are not detected by the current commercial ELISA kits. Hepatitis B virus has not yet been propagated in cell culture which limits laboratory methods to serologic detection of HBV antibodies and antigens and more recently, hybridization for detection of viral DNA. Tests for at least six serologic markers for HBV are available commercially. Determining the pattern of these markers in the individual patient will help to establish the stage of the disease, the infectivity, immune status and prognosis of the patient. The application and interpretation of these tests has been reviewed in detail elsewhere (Chernesky et al., 1984). The ELISA has also been applied to detection of delta virus antigen and antibody in serum (Crivelli et al., 1981;Shattock and Morgan, 1983;Buti et al., 1986), which should result in less need for diagnostic liver biopsy in these patients. ELISA has also been used to detect a number of routinely cultured viruses in clinical specimens such as RSV (Hornsleth et al., 1982;Mclntosh et al., 1982;Freymuth et al., 1986;Swenson and Kaplan, 1986), influenza A, adenovirus (Harmon and Pawlick, 1982), HSV in lesion swabs (Morgan and Smith, 1984;Nerurkar et al., 1984b;Warford et al., 1984) and HSV in cerebrospinal fluid of patients with encephalitis (Coleman et al., 1983). When used for direct detection of HSV in clinical specimens, ELISA was not sufficiently sensitive when compared to cell culture results (Sewell and Horn, 1985). However, when applied to HSV infected cell lysates, results were significantly improved (Morgan and Smith, 1984). ELISA could prove useful for the rapid and early identification of HSV when large numbers of cultures are processed. The most recent innovation has been an HSV ELISA spin amplification technique, in which samples are centrifuged onto monolayers and incubated for 2 days. The cell cultures are then lysed and assayed by ELISA for HSV antigen. This test was found to be highly sensitive and specific (Michalski et al., 1986). A significant recent application has been the development of ELISAs to detect the core protein (p24) of the AIDS virus, HIV (Higgins et al., 1986;McDougal et al., 1985). Although current techniques for the isolation of HIV are more sensitive than antigen detection, they are highly specialized and beyond the capabilities of a routine viral diagnostic laboratory. The ELISA has been used to detect HIV core antigen in serum and cerebrospinal fluid Allain et al., 1986). The presence of HIV antigen in blood has been found as early as two weeks after infection , whereas development of HIV antibodies may require six months or more. Antigenemia, with a decline in HIV core antibodies, has also been found to precede the onset of AIDS Paul et al., 1987). Direct detection of viral antigen also is useful in following patients on antiviral therapy, where a decline in core antigen in serum has been demonstrated in patients receiving azidothymidine (AZT) (Chaisson et al., 1986). The availability of ELISA for detection of HIV antigen, therefore, could provide a useful additional diagnostic test for AIDS virus infections. 
The advantages of ELISA include low cost, less specialized equipment, stability of reagents, avoidance of use of hazardous radioisotopes, wide applicability, and the ability to automate the test or read it visually. Its greatest potential is for the testing of large numbers of specimens for the same virus. Radioimmunoassay RIA was developed in 1960 and first applied to the detection of insulin levels in plasma (Yalow and Berson, 1960). Since that time RIA has been utilized to detect a wide variety of biologic substances in clinical chemistry laboratories. It combines the high sensitivity of radioisotope labelling with the specificity and broad applicability of the antigen-antibody reaction. In addition, large numbers of specimens are readily tested. The sensitivity and specificity also depend upon the quality of the reagents before and after labeling and adherence to rigid test procedures. Both a direct and indirect assay can be used, as in IF, IP and ELISA. RIA has been utilized in the detection of hepatitis B antibody since 1971 (Lander et al., 1971) and hepatitis B antigen since 1972 (Ling and Overby, 1972). However, in many laboratories, it has now been replaced by ELISA. Besides hepatitis B, RIA has been used to detect viral antigens in infected cells, generally in cell culture (Hayashi et al., 1972, 1973;Joseph et al., 1976;Laush et al., 1974), but also in clinical specimens (Forghani et al., 1974, 1978;Halonen et al., 1980), and to detect viral antibody. The localization of antigen within cells is not possible by this method. RIA has less nonspecific reactivity than the enzymatic methods and its sensitivity could be useful in detecting small amounts of antigen in clinical specimens. However, it has the disadvantages of the deterioration of radioactive isotopes, requiring new reagents and standardization every few months, the hazards associated with the use of radioisotopes, and the expensive equipment required, which limits its use to large centers. Owing to increasing concerns about the potential hazards to personnel, the disposal problems associated with radioactive isotopes, and the availability of alternatives of equal sensitivity, utilization of RIA can be expected to decrease. Latex agglutination In the past few years, the use of the simple latex agglutination test for the detection of rotavirus has been reported (Cevenini et al., 1983;Haikala et al., 1983). The sensitivity and specificity compare favorably with ELISA (Hughes et al., 1984;Sambourg et al., 1985;Doern et al., 1986). By this technique, latex beads are sensitized to a specific antigen by incubation with immune serum or specific IgG. In the case of rotavirus, the test is performed by mixing clarified stool suspensions with the sensitized latex beads, then, after a short incubation, examining macroscopically for clumping (agglutination) of the latex beads. Clumping should occur if the rotavirus antigen is present in the stool. The test is not sensitive for detection of small amounts of antigen, but during rotavirus gastroenteritis large quantities of antigen are usually excreted. This test has several potential advantages: it can be performed by unskilled personnel, it is rapid, relatively cheap and may prove useful for screening in doctors' offices or developing countries. Latex agglutination has also been applied recently to detection of HSV in clinical specimens but it was not found to be sensitive.
However, it was very sensitive and specific for positive identification of HSV after the appearance of viral CPE in cell culture (Ignotofsky et al., 1985). VIRAL GENOME DETECTION Nucleic acid hybridization techniques have only recently been introduced into the field of clinical virology and to date they have been applied to studies of viral pathogenesis and to rapid viral diagnosis using clinical specimens (Landry and Fong, 1985). The principle of hybridization is simple. In its natural state, the DNA molecule is made up of two strands with each base specifically linked by hydrogen bonds to a complementary base on the other strand. The bonds between the bases can be broken by heating, or treatment with alkali, so that the two strands of DNA are dissociated from each other (denatured). However, under proper conditions, the dissociated strands will reassociate with complementary partners. Under test conditions, a labeled single-stranded nucleic acid probe containing the specific sequences being sought is mixed with denatured (dissociated) sample DNA or RNA. If complementary nucleic acid sequences are present in the sample, labeled probe will reanneal with these sequences forming double-stranded 'hybrids' which now contain label. The labeled hybrids can be detected by a variety of methods and quantitated. Current techniques largely involve the hybridization of labeled probe to nucleic acid immobilized on a solid support, such as nitrocellulose. The technique most widely used in research, including studies of viral pathogenesis, has been the Southern blot. By this method, purified DNA samples are first cleaved with restriction endonucleases, the fragments separated by gel electrophoresis and then the DNA is transferred out of the gel and onto a nitrocellulose filter by the method of E. M. Southern (Southern, 1975). The nitrocellulose is then immersed in a hybridization solution containing labeled probe. After adequate time has elapsed for reannealing to occur, the nitrocellulose filter is removed from the solution and subjected to a series of washes, which can vary in stringency, to remove untreated probe and unstable hybrids. The binding of the labeled probe is confined to distinct bands, corresponding to nucleic acid fragments separated by electrophoresis; therefore it is possible to identify even weak signals as specific. For detection of viral nucleic acid in clinical specimens, the most widely used technique to date has been the spot or dot-blot. By this method, nucleic acid or cell suspensions are spotted directly onto nitrocellulose filters, in a grid pattern, with or without suction filtration. The obvious advantages are .the speed and simplicity (avoiding restriction enzyme analysis, gel electrophoresis and DNA transfer) and it does not require the laborious extraction and purification of DNA that is necessary for the Southern blot. In addition, large numbers of specimens can be processed simultaneously. However, since visually, only a spot is identified, it is of utmost importance to guard against non-specific results. False positive results in spot hybridizations have been reported due to reactions of residual bacterial plasmid vector sequences in the probe with patients' samples (Diegutis et al., 1986). Careful attention to stringency of conditions, probe specificity, and positive and negative controls is essential. Spot hybridization has been used to detect a number of viruses in clinical specimens. 
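Because the stringency of hybridization and washing determines whether unstable or mismatched hybrids survive, conditions are usually chosen relative to the melting temperature (Tm) of the probe-target duplex. The sketch below applies a commonly quoted empirical approximation for long DNA-DNA duplexes; the coefficient applied to formamide varies somewhat between sources (roughly 0.6-0.7 per percent), and the probe length, GC content and buffer conditions shown are hypothetical illustrations, not values from any cited study.

```python
import math

def duplex_tm(length_bp: int, gc_percent: float,
              na_molar: float = 0.9, formamide_percent: float = 0.0) -> float:
    """Approximate melting temperature (deg C) of a long DNA-DNA duplex.

    Empirical formula often used for hybridization probes:
        Tm ~ 81.5 + 16.6*log10([Na+]) + 0.41*(%GC) - 675/length - k*(%formamide)
    The formamide coefficient k is taken here as 0.65; published values
    range roughly from 0.6 to 0.7.
    """
    return (81.5 + 16.6 * math.log10(na_molar) + 0.41 * gc_percent
            - 675.0 / length_bp - 0.65 * formamide_percent)

# Hypothetical 500-bp probe, 50% GC, hybridized in 0.9 M Na+ with 50% formamide.
tm = duplex_tm(length_bp=500, gc_percent=50.0, formamide_percent=50.0)
print(f"estimated Tm: {tm:.1f} C")
# Hybridization is commonly carried out roughly 20-25 C below Tm; washing
# closer to Tm removes less stable (mismatched) hybrids, i.e. raises stringency.
print(f"illustrative hybridization temperature: ~{tm - 25:.0f} C")
```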
When applied to detection of less readily isolated viruses, such as VZV, spot hybridization had a greater sensitivity than culture. For CMV, the time to detection was greatly shortened, but 10³-10⁵ tissue culture infectious doses (50%) (TCID50) per ml were necessary for a positive result (Chou and Merigan, 1982). In another study spot hybridization was found to be more sensitive than culture for detection of CMV in buffy coats (Spector et al., 1984). When applied to viruses not routinely isolated, such as rotavirus (Flores et al., 1983), enteric adenoviruses (Stalhandske et al., 1983, 1985;Takiff et al., 1985), parvoviruses (Clewley, 1985;Anderson et al., 1985), papovaviruses (Gibson et al., 1985;Wickenden et al., 1985) and Epstein-Barr virus (Andiman et al., 1983), spot hybridization could prove useful. Detection of HBV-DNA in serum by spot hybridization correlates with active virus replication (Carloni et al., 1987). HBV-DNA has been detected in the absence of other serologic markers for HBV infection (Brechot et al., 1985) and thus provides a new diagnostic tool that may be useful in prognosis and therapy (Bonino et al., 1981;Hadziyannis et al., 1983;Bonino, 1986). However, this technique was not found to be sensitive for the direct detection of viruses readily isolated in culture, such as enteroviruses (Hyypia et al., 1984) and HSV (Redfield et al., 1983). A recently reported modification is the 'sandwich hybridization', which is based on the use of two separate nucleic acid fragments, one of which is attached to the filter and the other is labeled. The nucleic acid sequences of both fragments are complementary to that of the nucleic acid sought in the sample, but the two reagents have no sequences in common and therefore do not hybridize to each other. Thus a positive sample attaches to the reagent bound to the filter and then results in a three-component DNA 'sandwich' by mediating the attachment of the labeled probe to the filter. Since the sample is kept in solution throughout the process, as opposed to being spotted onto the filter, components contained in crude samples, such as lipids, mucopolysaccharides, proteins etc., which can non-specifically bind nucleic acids, are not fixed to the filter. This allows the processing of crude samples and the assay of either RNA or DNA, but the sandwich method has not been as sensitive as spot hybridization (Ranki et al., 1983;Virtanen et al., 1984). In addition, 32P or 125I were used as labels in most reports to date, which is a disadvantage for a clinical laboratory. Biotinylated probes have now been used for spot hybridization (Hyypia, 1985) and in situ hybridization, in which intact cells, such as paraffin embedded tissues, frozen tissues or touch preps, are examined for viral genomes (Brigati et al., 1983;Forghani et al., 1985;Beckmann et al., 1985). When used for detection of CMV in lung tissue, in situ hybridization was found to be similar in sensitivity to culture and IF with monoclonal antibody and more sensitive than routine histology (Myerson et al., 1984a;Myerson et al., 1984b) (Fig. 9). In situ hybridization has proven useful in the detection of human papillomavirus (HPV) in genital tract tissues. HPV has not yet been propagated in cell culture, but over 40 types have been identified by restriction enzyme analysis and hybridization studies.
Certain types, such as types 6 and 11, are commonly associated with genital warts but are rarely associated with cervical cancer, whereas genital infection with other types, such as types 16 and 18, is considered high risk for progression to malignancy (Campion et al., 1986;Crum et al., 1984). One recent report on detection of HPV infection in clinical specimens found in situ hybridization with radiolabeled probe inferior to Southern blot and spot hybridization (Caussy et al., 1988). Yet others have found in situ techniques with biotinylated probes highly sensitive (Beckmann et al., 1985). Biotinylated DNA probes are now available commercially to distinguish infection with types 6 and 11 from infection with 'high risk' type 16. This should have an impact on management of patients with cervical dysplasia. In situ hybridization has the advantage that histology can be evaluated at the same time, it gives information about the localization of sequences within a tissue and what cell type is infected, and it can be more sensitive if only a few sequences are present but are concentrated in one area. However, procedures are labor intensive and sampling can be a problem.
Fig. 9. Detection of cytomegalovirus (CMV) infected cells in lung tissue using in situ hybridization with a biotinylated CMV DNA probe. In situ hybridization was performed using a biotinylated CMV DNA probe (Myerson et al., 1984a) and formalin fixed, paraffin embedded lung tissue from a bone marrow transplant patient with pneumonia. CMV infected cells were rendered readily visible by dark nuclear and cytoplasmic staining. (Photograph courtesy of Dr D. Myerson.)
In addition to direct detection of viruses in clinical specimens, recent studies have also applied spot hybridization with radioactive probes to the detection of HSV (Stalhandske and Petterson, 1982) and enteroviruses (Rotbart et al., 1984) in cell culture lysates. An infectivity titer for enteroviruses of 10⁶-10⁷ TCID50 per ml in the lysate was necessary for positive results. When in situ hybridization with a biotinylated cloned DNA probe was compared with avidin-biotin IP staining for detection of HSV infected cells in two different cell systems, IP staining was found to be more sensitive (Landry et al., 1986). Significantly, when a highly sensitive cell system was used, CPE alone was comparable in rapidity and sensitivity to viral antigen or DNA detection methods applied in a less sensitive cell system. ANTIVIRAL AGENTS As knowledge of the biology and biochemistry of viral functions increases, the potential for the discovery of new specific antiviral agents increases accordingly. The current need for accurate, reliable diagnosis of viral infections is to a great extent the result of the discovery and availability of new antiviral agents. Although it is beyond the scope of this review to present a comprehensive report of antiviral chemotherapy, several of the currently available antiviral agents and some of the most promising new antivirals will be discussed. Amantadine The precise mechanism of action of this compound is not clear although early events of virus penetration and uncoating are almost certainly involved. In vitro, several viruses are sensitive to the antiviral activity of amantadine, a cyclic primary amine, but influenza type A is particularly sensitive. Inhibition of influenza A virus replication occurs with 25 µg/ml or less.
One study using a plaque reduction assay reported that most clinical isolates were sensitive to 0.4 µg/ml or less (LaMontagne and Galasso, 1978). Early animal studies demonstrated the effectiveness of amantadine in protection of animals from influenza A virus infection. Doses of 0.6-40 mg/kg protected mice against subsequent influenza A challenge. Protection was observed when the drug was started as late as 72 hr after infection, but no protection was afforded when it was administered after 72 hr (Davies et al., 1964). Amantadine is considered effective for both prophylactic and therapeutic use in humans against all strains of influenza A viruses. Studies have demonstrated that amantadine was approximately 70% effective in preventing influenza and was also effective in treating the disease (LaMontagne and Galasso, 1978). Signs and symptoms of disease disappeared more rapidly in patients receiving drug when compared with a placebo group. There was also a decrease in duration and quantity of virus shedding in the treatment group. Side-effects, primarily central nervous system symptoms, occurred in 2-5% of patients. More recent studies again have demonstrated the effectiveness of amantadine prophylaxis of influenza A (Pettersson et al., 1980;Younkin et al., 1983) and it is recommended particularly for unvaccinated persons at high risk. Iododeoxyuridine 5-Iodo-2'-deoxyuridine (IDU) is incorporated into viral DNA in place of thymidine, resulting in essentially nonfunctional viral DNA. The nucleotide of IDU may also interfere with various enzyme systems involved in viral DNA synthesis. This mechanism of action is similar to that of other halogenated deoxypyrimidine nucleosides such as bromodeoxyuridine and fluorodeoxyuridine (DeClercq and Torrence, 1978). Concentrations of IDU which inhibit replication of vaccinia virus by 95% (2.8 µM) have no effect on noninfected cells (Prusoff and Goz, 1975). The antiherpetic effect of IDU in vivo was demonstrated in rabbits soon after the discovery of the effects in cell culture (Kaufman, 1962). Controlled studies in humans followed quickly and confirmed that IDU was effective in treating herpes keratoconjunctivitis (Burns, 1963;Laibson and Leopold, 1964). Toxicity or allergic reactions may occur with prolonged use of IDU and alternative therapy may therefore be necessary (McGill et al., 1974;Amon et al., 1975). IDU-resistant HSV strains can occur experimentally (Underwood et al., 1965) and such resistant mutants have been isolated from patients (Hirano et al., 1979). IDU was the first effective antiherpetic drug approved for human use; however, it is too toxic for systemic administration and is not effective topically on skin or mucous membranes. Trifluorothymidine 5-Trifluoromethyl-2'-deoxyuridine (TFT) exerts the highest antiviral activity of any of the fluorinated pyrimidines (Heidelberger, 1975). Its mechanism of action (Kalman, 1975) is similar but not identical to that of other pyrimidine nucleoside analogs (see above). TFT specifically inhibits herpesvirus replication in vitro (Umeda and Heidelberger, 1969) and has been shown to be effective in treatment of herpes simplex virus and vaccinia virus keratitis in rabbits (Kaufman and Heidelberger, 1964). In clinical trials of TFT treatment of herpes keratitis, it has been shown to be at least as effective as IDU or adenine arabinoside (ara-A) and its use has been associated with fewer side-effects. One trial has shown TFT to be more effective than IDU (Pavan-Langston and Foster, 1977).
Another trial compared TFT to ara-A in the treatment of herpetic ameboid ulcers and found that healing of TFT-treated ulcers was slightly more rapid than that of ara-A-treated ulcers . However, TFT is also too toxic for systemic administration and, like IDU, its use is limited to eye infections. Adenine Arabinoside The primary mechanism of action of adenine arabinoside (9-B-D-arabinofuranosyladenine, ara-A or vidarabine) is inhibition of DNA synthesis by inhibition of virus DNA polymerase and incorporation into viral DNA. Both cellular and viral DNA inhibition occurs but inhibition of cellular DNA synthesis is less marked (Muller et al., 1977). In cell cultures, vidarabine exhibits a broad range of antiviral activity against DNA viruses including HSV 1 and 2, VZV, human CMV as well as other animal herpesviruses and poxviruses (Shannon, 1975). Topical vidarabine therapy is effective in treating HSV keratitis (see above), but more important is its use in treatment of systemic diseases. An early study demonstrated the efficacy of treatment of HSV encephalitis in mice (Sloan et al., 1968) and a similar more recent study found decreased titers of HSV in the brain and prolonged survival of vidarabine-treated mice (Griffith et al., 1975). Topical treatment of mice inoculated cutaneously with HSV reduced mortality and decreased establishment of latency in sensory ganglia of vidarabine-treated mice if treatment was begun soon after infection (Klein and Freidman-Kien, 1977). Vidarabine had only a minimal effect on CMV in a murine model (Overall et al., 1976) and resulted in decreased urinary excretion in a human study, but no clinical improvement was apparent (Ch'ien et aL, 1974). Treatment of VZV infections in man with vidarabine has demonstrated some antiviral effect (Walden et al., 1977). Some of the most encouraging results utilizing vidarabine have come from the study of HSV encephalitis victims. In 1977, the results of a collaborative encephalitis study demonstrated the efficacy of the drug. Mortality due to biopsy-proven HSV encephalitis was 70% whereas treatment with vidarabine reduced it to 28% (Whitley et al., 1977). A follow-up study has confirmed the original observations and established that age and level of consciousness at the start of therapy are two important factors that influence outcome (Whitley et al., 1981). A beneficial effect of vidarabine treatment on neonatal HSV infection has been reported. It was also suggested that very early institution of therapy might improve outcome of the disease (Whitley et aL, 1980a), but increasing the dose of drug did not further decrease morbidity or mortality . Thus, vidarabine was the first drug approved for systemic use in serious herpesvirus infections. However, it is not absorbed well after topical administration. Acyclovir Acyclovir (ACV), also known as acycloguanosine or 9-(2-hydroxyethoxy-methyl)guanine, is phosphorylated in herpesvirus-infected cells by a virus-coded enzyme, thymidine kinase (TK). The resulting ACV monophosphate is further phosphorylated by cellular kinases to ACV triphosphate. ACV triphosphate is a competitive inhibitor of viral DNA polymerase and may further inhibit viral DNA synthesis by being incorporated into the DNA thereby causing termination of the DNA chain (Elion et al., 1977). In vitro, ACV inhibits HSV 1 and 2, varicella-zoster and Epstein-Barr viruses. 
Human CMV has been reported to be sensitive to high levels of ACV in vitro but clinical isolates are usually resistant at levels of drug attainable in patients (Crumpacker et al., 1979). Animal HSV experiments using rabbits (Pavan-Langston et al., 1978), mice (Mayo et al., 1979), hairless mice (Klein et al., 1979) and guinea-pigs (Landry et al., 1982a) demonstrated the effectiveness and low toxicity of ACV. Human trials followed rapidly. One study demonstrated effectiveness of topical ACV administration in ocular disease . Another uncontrolled study of patients with neoplastic disease or bone marrow transplants noted improvement in cutaneous or systemic HSV or VZV infections (Selby et al., 1979). A randomized, double-blind study in bone marrow transplant recipients demonstrated the effectiveness of intravenously administered ACV in preventing the appearance of culture positive HSV lesions. ACV did not cure latent infection as evidenced by appearance of HSV lesions after the cessation of therapy (Saral et al., 1981). A preliminary report comparing vidarabine with ACV for treatment of neonatal HSV infections suggests that ACV is at least as effective as vidarabine for treatment of these severe infections . Importantly, topical treatment of human primary genital HSV lesions with a 5% ACV ointment shortened the mean duration of virus shedding and also the time to complete crusting of lesions (Corey et al., 1982). In addition, short term, oral therapy of both primary and recurrent genital HSV infections significantly reduced virus shedding and time to healing of lesions (Nilsen et al., 1982;Bryson et al., 1983). Long-term, oral therapy prevents recurrences of genital lesions in most ACV-treated patients as long as therapy is maintained. However, when treatments are discontinued, the recurrence rates are similar to placebo-treated groups . In addition, acyclovir has been reported to be more effective than vidarabine in the treatment of HSV encephalitis (Whitley et al., 1986). Ribavirin Ribavirin (virazole) is a purine analog resembling guanosine with a wide range of activity against both RNA and DNA viruses. The drug interferes with the synthesis of guanosine monophosphate, with resultant inhibition of both RNA and DNA synthesis. Influenza viruses are among the most sensitive to inhibition (Sidwell et al., 1979). Ribavirin has been shown to inhibit RSV replication in vitro (Hruska et al., 1980) and in an animal model (Hruska et al., 1982). Several double-blind studies have shown that aerosol administration of ribavirin to infected infants resulted in more rapid improvement in overall severity of illness and increased disappearance of RSV from respiratory secretions. There was no evidence of intolerance or toxicity in the treated babies Taber et al., 1983). This drug has been approved for aerosol treatment of infants and young children with severe lower respiratory infections due to RSV. The trisodium salt of phosphonoformate (PFA) inhibits herpesvirus DNA polymerase at levels of drug which do not appreciably affect cellular polymerase. In cell culture, 100mi PFA inhibits herpesvirus replication by 59-96% depending on the virus (Helgstrand et al., 1978;Reno et al., 1978;Larsson and Oberg, 1981). This mechanism of action is the same as that of phosphonoacetate (PAA) but PFA is preferred because of the dermal toxicity associated with topical PAA application (Harris and Boyd, 1977;Alenius and Oberg, 1978). 
Recent in vitro studies have demonstrated greater activity against HSV-1 and HSV-2 when PFA was used in combination with 5-methoxymethyldeoxyuridine than when either drug was used alone (Ayisi et al., 1985). In animal models, PFA is effective in treating cutaneous herpes in guinea-pigs (Alenius and Oberg, 1978), herpes keratitis in rabbits (Alenius et al., 1980), and genital herpes in guinea-pigs (Alenius and Nordlinder, 1979). In the latter genital herpes model in guinea-pigs, treatment was effective only if begun within 24 hr after infection. A more recent investigation has found that PFA treatment can also be effective in the treatment of guinea-pig genital herpes when begun near the time of appearance of symptoms (Lucia et al., 1983). A double-blind controlled study on cutaneous labial herpes in humans has similarly demonstrated a beneficial effect of PFA treatment on duration of HSV-induced lesions (Wallin et al., 1980). There have been some concerns, however, about long-term deposition of the drug in bone. Bromovinyldeoxyuridine Bromovinyldeoxyuridine (BVDU) is a nucleoside analog which is preferentially incorporated into viral DNA. HSV TK is involved in this preferential incorporation because TK mutants of HSV-1 are resistant to the effects of BVDU. Although active against both HSV-1 and HSV-2 in vitro, BVDU inhibits HSV-2 at a concentration that is 100 times greater than that necessary to inhibit HSV-1 (DeClercq et al., 1980b). The preferential inhibition of HSV-1 may be due to the different rates at which the virus-associated kinases catalyze the second step of BVDU phosphorylation from the mono- to the diphosphate (Fyfe, 1982). BVDU has been found to be nontoxic and effective in topical treatment of experimental herpes keratitis in rabbits (Maudgal et al., 1980), orofacial herpes in mice (Park et al., 1982) and cutaneous herpes in guinea-pigs (Freeman et al., 1985). Oral administration has been used in humans to treat herpes zoster (DeClercq et al., 1980a). No drug-induced toxicity was found in the patients studied while progression of lesion formation was arrested within 24 hr after the start of therapy. Topical treatment of ocular HSV and VZV infections has been shown to be very effective (Maudgal et al., 1984). Animal studies have shown the fluorinated pyrimidine analogs FIAU and FMAU to be more active than ACV in treatment of HSV encephalitis in mice (Schinazi et al., 1983). In rabbits, topical application of FIAC and FMAU was effective in the treatment of eye infections (Trousdale et al., 1981, 1983). A guinea-pig model of genital HSV infection compared FIAC, FIAU, FMAU, ACV and PFA and found that the three fluoropyrimidines were all more effective than either ACV or PFA for treatment of primary genital HSV-2 infections. FMAU was the most effective of all the drugs tested. In humans, FIAC was reported to be therapeutically superior to ara-A for treatment of VZV and HSV infections in immunosuppressed patients. Dihydroxypropoxymethylguanine The compound 9-(1,3-dihydroxy-2-propoxymethyl)guanine (DHPG) is also known as BIOLF-62, 2'NDG and BW759. This acyclic nucleoside is structurally related to ACV and has a similar mode of action against the herpes group of viruses in vitro (Ashton et al., 1982;Cheng et al., 1983;Martin et al., 1983). In vivo, mouse models have shown DHPG to be very effective, more so than ACV, for the treatment of encephalitis and vaginitis due to HSV-2.
DHPG is also effective against HSV-2 in a guinea-pig model of primary and recrudescent genital herpes (Fraser-Smith et al., 1983). When compared with ACV, however, DHPG is more toxic, but it has increased activity against both Epstein-Barr virus and CMV. This increased activity against CMV makes DHPG unique, although there are variable reports as to the degree of such activity (Cheng et al., 1983;Smith et al., 1982;Freitas et al., 1985;Shanley et al., 1985). DHPG appears to be effective in controlling CMV-associated retinitis and colitis as long as treatment is continued (Masur et al., 1986). Azidothymidine Azidothymidine (3'-azido-3'-deoxythymidine or AZT) is a nucleoside analog which competitively inhibits the reverse transcriptase of HIV in cell culture and also inhibits infectivity and cytopathic effect in vitro. Concentrations which effectively block in vitro infectivity and CPE of HIV do not affect in vitro immune functions of normal human T-cells (Mitsuya et al., 1985). In clinical trials with AIDS and ARC (AIDS-related complex) patients, there were 19 deaths among the 137 patients receiving placebo and one death among the 145 patients receiving AZT. There also appeared to be fewer opportunistic infections in the AZT group (Fischl et al., 1987). Additional trials are underway. In another study, AZT treatment was associated with a significant decrease in HIV core antigen in the serum of AZT treated patients compared with untreated controls (Chaisson et al., 1986). As a result, AZT has been made available on an investigational basis to AIDS patients who have had Pneumocystis carinii pneumonia and who satisfy certain other criteria. AZT also has excellent penetration across the blood-brain barrier, which hopefully will benefit patients with HIV-associated neurologic disease. Unfortunately, bone marrow toxicity can be a significant problem. DRUG SENSITIVITY TESTING Drug sensitivity testing of clinical isolates is an important function of microbiology laboratories and is essential for the administration of appropriate and effective drugs. Antiviral susceptibility testing will also be necessary and is within the capability of the virus laboratory, but performance standards need to be established. Two methods are commonly used in the laboratory for testing drug sensitivity of a given virus. One of the methods is to determine the virus yield in liquid culture medium. Basically this is done by adding varying concentrations of drug to the culture medium of virus-infected cells and assaying aliquots of the medium for the yield of virus. The resulting reduction of virus yield can be plotted against virus yield without drug. The second and perhaps the simplest method of antiviral assay which can be performed by a routine laboratory is a plaque reduction assay. Plaque formation in the absence of the test drug is compared to plaque formation in the presence of the drug at different concentrations (Fig. 10). It should be noted that different results are obtained when different cell culture systems are used for the plaque reduction assay. As illustrated in Fig. 10, a 0.25 µM concentration of ACV is necessary to inhibit 80% of HSV-2 induced plaque formation when CE cells are used for the assay, whereas 4 µM of the same drug is needed to inhibit the same amount of virus when GPE cells are used. Thus, the importance of selection of the cell culture system used for drug sensitivity tests is apparent.
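To make the plaque reduction read-out concrete, the sketch below expresses plaque counts at each drug concentration as a percentage of the no-drug control and interpolates the concentration giving 80% inhibition, the end-point quoted above for Fig. 10. The counts, concentrations and isolate are hypothetical, and simple linear interpolation between tested concentrations is used purely for illustration.

```python
import numpy as np

# Hypothetical plaque counts for one HSV isolate titrated against acyclovir.
drug_conc = np.array([0.0, 0.05, 0.1, 0.25, 0.5, 1.0, 4.0])  # uM
plaques   = np.array([212, 190, 150, 80, 35, 12, 2])          # plaques per well

def inhibitory_concentration(conc, counts, inhibition=80.0):
    """Concentration giving the requested % plaque reduction vs. the no-drug control."""
    control = counts[0]                          # plaques without drug
    reduction = 100.0 * (1.0 - counts / control)
    # Reduction rises with concentration, so interpolate reduction -> concentration.
    return float(np.interp(inhibition, reduction, conc))

ec80 = inhibitory_concentration(drug_conc, plaques)
print(f"80% plaque-reduction concentration: {ec80:.2f} uM")
```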
Rapid techniques such as nucleic acid hybridization screening (Gadler et al., 1984) and automated CPE inhibition assays (Moran et al., 1985) are now being applied to drug sensitivity testing and can significantly facilitate the ease with which large numbers of antiviral agents can be tested for effectiveness against virus isolates. THE IMPORTANCE OF ACCURATE VIRAL DIAGNOSIS To the practising physician, in the absence of specific treatment, there seems to be little to be gained from diagnosing viral diseases. However, for the following reasons, an accurate viral diagnosis can benefit both the individual patient and the public at large. PATIENT MANAGEMENT Although no treatment is available for the majority of viral illnesses, obtaining an accurate diagnosis still has important implications for patient management. When the exact etiology of an illness is known, unnecessary and often uncomfortable diagnostic procedures, as well as unwarranted antibiotics, can be avoided, and in addition, the physician can more effectively manage any problems that may arise. PROGNOSIS In addition to aiding the management of the acute illness, an accurate viral diagnosis allows for prognostication. The expected course of the illness can be described. This would be particularly important in congenital infections such as rubella and CMV. In genital herpes simplex infection, the patient and contacts should be advised about risk of recurrency, especially in relation to pregnancy, infections of newborns, as well as the increased risk of cervical cancer. In genital HPV infections, detection of low risk or high risk HPV types would be critical in determining potential for progression to cervical cancer. PROPHYLACTIC AND THERAPEUTIC INTERVENTION In certain situations, prophylactic intervention is critical. Pregnant women with a history of genital herpes, infection with herpes below the waist or a sexual contact with genital herpes should be monitored frequently with cervicovaginal cultures for HSV the last 4-8 weeks of pregnancy (Visintine et al., 1978). If HSV is isolated with the week prior to delivery, caesarean section should be performed within 4 hr of the rupture of the membranes to prevent infection of the fetus. Knowledge of the immune status to CMV of kidney transplant recipients and donors is critical for a successful outcome. Seronegative recipients receiving kidneys from seropositive donors have a significant risk of contracting CMV infection and of rejecting the kidney (Lopez et al., 1974;Ho et al., 1975). Passive immunization with immunoglobulin is available for certain serious infections, such as hepatitis contacts, immunosuppressed children exposed to VZV, and is combined with vaccination in persons exposed to rabies. Amantadine, as discussed above, can prevent or lessen the severity of infection with influenza A and has been useful in protecting unvaccinated, high risk populations. As described in the preceding section, specific antiviral therapy is now also possible for serious herpes infections such as herpes simplex encephalitis, neonatal infection with HSV, and VZV infections in the compromised host with acyclovir or adenine arabinoside. HSV keratitis can be treated with topical IDU, vidarabine, ACV, or TFT. Acyclovir has also proved of benefit in treatment of genital herpes infections. Ribavirin therapy is effective in treatment of lower respiratory RSV infection in young children. In addition, newer and more promising drugs are being developed. 
CONTROL OF NOSOCOMIAL INFECTIONS Nosocomial viral infections, an important cause of morbidity and mortality in hospitalized patients, can be best prevented when an accurate viral diagnosis is obtained and the medical staff are educated as to the proper precautions to prevent spread of the disease. In-hospital transmission of numerous virus infections has been documented. These include influenza (Blumenfeld et al., 1959), respiratory syncytial (Hall et al., 1975), parainfluenza (Mufson et al., 1973), enteroviruses (Gear and Measroch, 1973), rotaviruses (Ryder et al., 1977), varicella-zoster (Meyers et al., 1979), herpes simplex (Linneman et al., 1978), hepatitis viruses (Matthew et al., 1973;Postic et al., 1978), rubella (Carne et al., 1973), and adenoviruses (Barret al., 1958). The newborn infant and the compromised host suffer the most serious consequences. When the offending agent is identified, proper precautions can be instituted. PUBLIC HEALTH MEASURES The importance of viral diagnosis in public health has long been recognized, as illustrated by the control of hepatitis, arbovirus and rabies infections. It has been the major impetus behind effective vaccination programs and allows for the continued evaluation of the efficacy of current vaccines. Continued surveillance is particularly important in determining the antigenic composition of influenza vaccines. ADVANCEMENT OF MEDICAL SCIENCE Since 1970, we have witnessed the discovery of rotaviruses (Flewett et al., 1973), Norwalk agent (Kapikian et al., 1972), JC and BK papovaviruses (Padgett et al., 1971;Gardner et al., 1971), delta agent (Rizzetto et al., 1977), and the recognition that non-A, non-B hepatitis viruses account for the majority of transfusion associated hepatitis (Hoofnagle et al., 1977). The most dramatic discovery however, has been that of HIV as the etiologic agent of AIDS (Barre-Sinoussi et al., 1983;Gallo et al., 1984;Levy et al., 1984). Viruses have been implicated in many well known diseases, such as Paget's, polymyositis, chronic neurologic syndromes, autoimmune diseases, diabetes, and cardiomyopathy. Although perhaps not of immediate benefit to the patient, enlarging our knowledge and understanding of the pathogenesis and spectrum of virus-induced diseases will lead to improvement in medical care in the future. PHYSICIAN EDUCATION A final and very important reason for obtaining an accurate viral diagnosis is the education of physicians. Because of the lack of therapy, it has not been important for physicians to be well versed on the specifics of many viral diseases. It has been adequate to diagnose a 'viral syndrome'. When specific diagnoses are obtained, the physician is stimulated to learn more. As we approach an age of chemotherapy, the increased clinical acumen of the physician in diagnosing viral disease will be decidedly more important. CONCLUDING REMARKS Since the discovery of tissue culture over 40 years ago, many changes have occurred in the field of diagnostic virology. Interest in different virus groups has fluctuated tremendously (Hsiung, 1980), there have been significant technological advances and many 'new' viruses have been discovered (Hsiung, 1984) of which HIV and other human retroviruses are the most striking example. Nothing, however, will have a greater impact on diagnostic virology than the availability of effective chemotherapy. 
Until recently, virus laboratories have existed either as part of health departments or university research laboratories and their services have not been readily available to community hospitals or practising physicians. However, over the next decade, with the expected progress in antiviral therapy, significant changes can be anticipated. Since minimal amounts of virus may be present in clinical samples, transporting them to a reference laboratory can result in loss of infectious virus and even negative findings. With facilities close by, time to virus isolation and numbers of isolations can be optimized. If significant numbers of specimens are processed, cost will be favorably affected. In addition, communication between the laboratory and physician will be facilitated. Several recent reports have demonstrated the feasibility of establishing satellite or mini laboratories (Herrmann and Herrmann, 1977;Peterson et al., 1980) or laboratories operated on a small scale (Landry and Hsiung, 1981) whose services are tailored to the needs of the patient populations they serve. High-quality commercial reagents are now becoming available for many rapid diagnostic methods. Continued progress in this area can be anticipated in the near future as the need increases. As we become more optimistic about our ability to intervene in the course of viral diseases a greater need to obtain an accurate viral diagnosis is evident.
2018-04-03T04:49:52.340Z
1989-12-31T00:00:00.000
{ "year": 1989, "sha1": "e236ffb90306287155716d3cb4497ba41a4d63b0", "oa_license": null, "oa_url": "https://doi.org/10.1016/0163-7258(89)90098-3", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "46ea6a4ccd9600ca67abc1967a4073470b98d541", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Medicine" ] }
20391545
pes2o/s2orc
v3-fos-license
Pharmacoeconomic analysis of antifungal therapy for primary treatment of invasive candidiasis caused by Candida albicans and non-albicans Candida species Background Cost-effectiveness studies of echinocandins for the treatment of invasive candidiasis, including candidemia, are rare in Asia. No study has determined whether echinocandins are cost-effective for both Candida albicans and non-albicans Candida species. There have been no economic evaluations that compare non-echinocandins with the three available echinocandins. This study was aimed to assess the cost-effectiveness of individual echinocandins, namely caspofungin, micafungin, and anidulafungin, versus non-echinocandins for C. albicans and non-albicans Candida species, respectively. Methods A decision tree model was constructed to assess the cost-effectiveness of echinocandins and non-echinocandins for invasive candidiasis. The probability of treatment success, mortality rate, and adverse drug events were extracted from published clinical trials. The cost variables (i.e., drug acquisition) were based on Taiwan’s healthcare system from the perspective of a medical payer. One-way sensitivity analyses and probability sensitivity analyses were conducted. Results For treating invasive candidiasis (all species), as compared to fluconazole, micafungin and caspofungin are dominated (less effective, more expensive), whereas anidulafungin is cost-effective (more effective, more expensive), costing US$3666.09 for each life-year gained, which was below the implicit threshold of the incremental cost-effectiveness ratio in Taiwan. For C. albicans, echinocandins are cost-saving as compared to non-echinocandins. For non-albicans Candida species, echinocandins are cost-effective as compared to non-echinocandins, costing US$652 for each life-year gained. The results were robust over a wide range of sensitivity analyses and were most sensitive to the clinical efficacy of antifungal treatment. Conclusions Echinocandins, especially anidulafungin, appear to be cost-effective for invasive candidiasis caused by C. albicans and non-albicans Candida species in Taiwan. Electronic supplementary material The online version of this article (doi:10.1186/s12879-017-2573-8) contains supplementary material, which is available to authorized users. Background Invasive candidiasis (IC), including candidemia, is associated with considerable morbidity and mortality. Managing IC is costly, with an additional healthcare expenditure of nearly US$300 million annually [1]. Our previous study showed that healthcare-associated infection due to Candida albicans was associated with a mean additional hospital stay of 18.4 ± 28.5 days and an extra cost of up to US$6584 ± 11,467 when amphotericin B deoxycholate (d-AmB) and fluconazole were the only two parenteral antifungal agents [2]. Current international guidelines [3][4][5][6] suggest the use of echinocandins (caspofungin, micafungin, and anidulafungin) for the primary treatment of IC because of their cidal activity, rarity of resistance, safety profile, and better clinical outcomes compared with those of fluconazole and d-AmB [7,8]. However, echinocandins have higher drug acquisition and administration costs. Studies from Spain, the United Kingdom, and Australia have shown that treatment with anidulafungin is cost-effective as compared to that with fluconazole [9][10][11]. However, cost-effectiveness studies of echinocandins for treating IC are rare in Asia. 
Also, published economic evaluations [9,10] compare fluconazole with anidulafungin only. In general, echinocandins are similar with respect to their broad spectrum of activity and in vitro activity against C. albicans and non-albicans Candida spp., but each has its own unique features and drug acquisition cost. There is a lack of a thorough analysis comparing the economic advantages and disadvantages of the three available echinocandins. A change in the epidemiology of IC has been witnessed in recent decades, with a progressive shift from a predominance of C. albicans toward a predominance of non-albicans Candida spp. (including C. glabrata and C. krusei, which are less susceptible or resistant to fluconazole) [12]. In addition, the distribution of Candida species varies by geographic and healthcare factors [13]. However, no study has determined whether echinocandins are cost-effective for both C. albicans and non-albicans Candida spp. as compared to fluconazole. In this study, we assess the cost-effectiveness of individual echinocandins versus fluconazole in terms of either reduced hospital stay or better clinical outcomes. Subgroup analyses were conducted for C. albicans and non-albicans Candida spp., respectively. Methods This was a pharmacoeconomic study, which utilized secondary data reported in published studies, so ethics approval was waived. Perspective This study was carried out based on Taiwan's National Health Insurance from a single-payer perspective, and included only direct medical costs (drug acquisition, hospitalization costs, and treatment of major adverse effects such as renal toxicity). Model specifications and assumptions The applied decision-analytic tree was based on the anidulafungin cost-effectiveness model [11], which represents the treatment pathway for patients receiving different types of antifungal treatment (Fig. 1). This model was described by Auzinger et al. in detail [11]. The anidulafungin cost-effectiveness model [11] was developed from the perspective of the United Kingdom to examine the costs and outcomes of antifungal treatment for IC based on the European Society for Clinical Microbiology and Infectious Diseases guidelines, which are consistent with the clinical practice for managing IC in Taiwan. Based on Reboli et al.'s study [8], the anidulafungin cost-effectiveness model [11] assumes that the average weight for patients receiving liposomal amphotericin B is 76.4 kg ± 25.5 kg. Based on drug labeling in Taiwan, the loading and maintenance doses of micafungin were both set to 100 mg for candidemia [14]. The dosing regimens for the treatments of interest in this study were assumed as follows: fluconazole, 400 mg once daily; micafungin, 100 mg once daily; caspofungin, loading dose 70 mg and maintenance dose 50 mg once daily; anidulafungin, loading dose 200 mg and maintenance dose 100 mg once daily. Model 1 was adapted from the anidulafungin model [11] and represents a patient initially treated with fluconazole or an echinocandin (i.e., caspofungin, micafungin, or anidulafungin). If the treatment was successful, the patient continued the intravenous (IV) antifungal treatment for 14 days. If fluconazole had failed, the patient was switched to one of the echinocandins (one-third of patients were treated with anidulafungin, caspofungin, and micafungin, respectively [11]). For those treated with anidulafungin initially, liposomal amphotericin B was the rescue agent after failure.
Those who had experienced clinical failure and were switched to another type of treatment, which was assumed to clear the infection immediately, received an additional 14 days of second-line treatment and were followed for 6 weeks or until death. Patients who died within 6 weeks of treatment were classified as either "did not die during therapy" (they had completed the treatment but died) or "died during therapy" (they had not completed the treatment before death; died during treatment). Model 2, whose structure was similar to that of model 1, was designed to capture the downstream economic consequences of using echinocandins or non-echinocandins as primary therapy for C. albicans or non-albicans Candida spp. If the treatment had failed, liposomal amphotericin B was used as the alternative. The users of liposomal amphotericin B were observed for 6 weeks or until death (Additional file 1: Figure S1). Table 1 shows the model parameters: treatment efficacy (i.e., success rate, mortality), the percentage of patients who die during therapy, life expectancy, the length of IV treatment for patients who achieve treatment success and then survive, and drug adverse events (i.e., nephrotoxicity), which were primarily derived from the literature; costs (i.e., drug acquisition costs, length of hospital stay (LOS) in the intensive care unit, and other types of hospital stay), which were based on Taiwan's healthcare system [15]; and parameters related to LOS, which were based on expert opinion on clinical practice in Taiwan [16,17]. A four-member expert panel comprising clinicians and researchers within Taiwan's healthcare system with significant experience in infectious diseases provided consensus opinions for data (i.e., LOS) not available from the literature. Cost-effectiveness analysis The incremental cost-effectiveness ratio (ICER) was calculated as the ratio of the difference in medical and drug acquisition costs to the difference in life-years (LY) gained and is expressed in US dollars per LY gained (US$/LY). Notably, the LY gained is the difference, or incremental value, in LY between the two treatment groups. The LY for each treatment group was an expected value aggregated from two components in the decision model (Fig. 1): (1) the time to death for patients who died within 6 weeks of follow-up, and (2) the life expectancy of patients who survived the 6 weeks of follow-up. First, the time to death (expressed in LY) for patients who died was obtained directly from Reboli et al.'s study [8]. The life expectancy of the surviving patients was estimated in the following steps: (a) the average age of patients with IC was assumed to be approximately 58 years [8]; (b) the remaining life expectancy of a 58-year-old person (without IC) in the United Kingdom was assumed to be 25.29 years (from the Office for National Statistics); (c) this life expectancy was then adjusted using the reported relative risk of death of 0.51 for sepsis survivors [18]; and (d) based on the life table of the general population (without IC) and this relative risk for sepsis, the expected life expectancy of a sepsis survivor was estimated to be 12.9 years. This value was further discounted at an annual rate of 5% over the 40-year follow-up horizon, yielding 9.12 years as the remaining life-years for patients who survived the 6 weeks of follow-up. There is no defined willingness-to-pay threshold for health interventions in Taiwan.
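The life-expectancy adjustment in steps (a)-(d) can be reproduced with a short calculation, shown below as an illustrative sketch: the 25.29-year remaining life expectancy is scaled by the 0.51 adjustment for sepsis survivors and the resulting years are discounted at 5% per year over the 40-year horizon. The exact discounting convention of the published model is not stated, so this sketch recovers approximately 9.3 discounted life-years rather than exactly the 9.12 used in the model.

```python
# Reproduction of the life-expectancy steps (a)-(d) described above, under
# stated assumptions; the published model's discounting convention is not
# fully specified, so the result is approximate.

UNDISCOUNTED_LE = 25.29   # remaining life expectancy of a 58-year-old (years)
RR_SEPSIS = 0.51          # relative adjustment applied for sepsis survivors
DISCOUNT_RATE = 0.05      # annual discount rate
HORIZON_YEARS = 40        # follow-up horizon used in the model

adjusted_le = UNDISCOUNTED_LE * RR_SEPSIS          # ~12.9 years
print(f"adjusted life expectancy: {adjusted_le:.1f} years")

# Discount each year lived as an annuity over the horizon.
discounted = 0.0
remaining = adjusted_le
for year in range(1, HORIZON_YEARS + 1):
    lived = min(1.0, max(0.0, remaining))
    discounted += lived / (1.0 + DISCOUNT_RATE) ** year
    remaining -= lived
print(f"discounted life-years: {discounted:.2f}")   # ~9.3, close to the 9.12 used in the model
```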
There is no defined willingness-to-pay threshold for health interventions in Taiwan. Therefore, following [19], a treatment was considered cost-effective if the cost of one LY gained was less than three times the per capita national gross domestic product (GDP). Taiwan's per capita GDP was US$22,355 in 2015 [20], so the implicit cost-effectiveness threshold was calculated to be US$67,065 per LY gained. All costs are expressed in 2015 US dollars.

Table 1 Model parameters
Probability of success rate (%): Echinocandins 26 [Pfizer, data on file]; Non-echinocandins 50 a [7]
Percentage that die during therapy (%): 23.26 [25]
Life expectancy, mean (years): 9.12 [9]
Length of IV treatment for patients with treatment success and then survival (days): 14 [6,18]
Major drug adverse events:
  Nephrotoxicity probability for amphotericin B (%): 33.7 [26]
  Relative risk of nephrotoxicity of fluconazole compared with that of amphotericin B: 0.22 (95% CI, 0.15-0.32) [22]
  Relative risk of nephrotoxicity of echinocandins compared with that of amphotericin B: 0.31 (95% CI, 0.17-0.57) [22]
  Additional LOS for nephrotoxicity, mean (days): 7 (95% CI, 5.–…)
Loading dose cost:
  Anidulafungin 200 mg/day: 164.48 [28]
  Micafungin 100 mg/day: 108.85 [28]
  Caspofungin 70 mg/day: 517.70 [28]
  Liposomal amphotericin B 3 mg/kg for a patient of 76.4 kg (± 25.5 kg): 962.61 [28]
  Echinocandins: 263.67 c [28]
  Non-echinocandins: 43.76 d [28]
Maintenance dose cost:
  Fluconazole 400 mg/day: 43.76 [28]
  Anidulafungin 100 mg/day: 82.24 [28]
  Micafungin 100 mg/day: 108.85 [28]
  Caspofungin 50 mg/day: 258.85 [28]
  Liposomal amphotericin B 3 mg/kg for a patient of 76.4 kg (± 25.5 kg): 962.61 [28]
  Echinocandins: 149.97 c [28]
  Non-echinocandins: 43.76 d [28]
ICU cost per day: 203.39 [28]
Other hospital cost per day: 43.27 [28]
Abbreviations: LOS length of hospital stay, ICU intensive care unit
a Transformation of data from the literature [7,8]. Of note, because there appears to be no significant difference in treatment success rates among the three echinocandins, we used the anidulafungin treatment success data for C. albicans and non-albicans, respectively, from Reboli et al.'s study [8] to represent "echinocandins" for C. albicans and non-albicans. For example, Reboli et al.'s study [8] reported a success rate of anidulafungin treatment for C. albicans of 0.81, and Andes et al.'s study [7] showed that the odds ratio for echinocandin treatment success as compared to that for non-echinocandins (including polyenes [amphotericin B and liposomal amphotericin B] and triazoles [fluconazole and voriconazole]) for C. albicans is 3.7, so the success rate of non-echinocandins was estimated to be 0.22. Similarly, Reboli et al.'s study [8] reported a success rate of anidulafungin treatment for non-albicans species (including C. glabrata, C. parapsilosis, C. tropicalis, and other species) of 0.71, and Andes et al.'s study [7] showed that the odds ratio for echinocandin treatment success as compared to that of non-echinocandins for non-albicans is 1, so the success rate of non-echinocandins for non-albicans was estimated to be 0.71. This transformation was also applied to estimate mortality rates for echinocandins and non-echinocandins for C. albicans and non-albicans, respectively
b According to clinical practice in Taiwan [16,17], experts assumed an average of 30 days for total length of hospital stay (LOS), of which 7 days are for stay in the intensive care unit (ICU) and 23 days are for other hospital stay
c Average drug cost of the echinocandins, including anidulafungin, micafungin, and caspofungin
d The cost of non-echinocandins refers to the drug cost of fluconazole

Sensitivity analysis A one-way sensitivity analysis was carried out for efficacy and cost data in the models to determine the impact of uncertainty on model outcomes. A probabilistic sensitivity analysis based on 10,000 Monte Carlo simulations was also performed to assess the simultaneous effect of uncertainty on model results. The gamma, beta, and triangular distributions were used for the price, costs, transition probabilities, and other parameters, while the outcome variables were assumed to be normally distributed [21]. A cost-effectiveness acceptability curve was plotted using the probability of the treatment being cost-effective at a threshold value of willingness-to-pay per LY gained in Taiwan. TreeAge Pro 2016, R1.2 (TreeAge Software, Inc., MA, USA) was used for these economic analyses. Base case analysis For treating IC, including candidemia, due to any Candida species, our analysis estimated that, as compared to fluconazole, micafungin and caspofungin are less effective but more expensive (dominated), whereas anidulafungin is more effective and more expensive (cost-effective), costing US$3666.09 for each LY gained under the assumption that the length of IV treatment for success and survival is 14 days (Table 2). Anidulafungin remains cost-effective, costing US$8015.39 for each LY gained, under the assumption that the length of IV treatment for success and survival is 30 days (Additional file 1: Table S1). Furthermore, as compared to anidulafungin, micafungin and caspofungin are both dominated (Additional file 1: Table S2). Table 2 also indicates that for C. albicans-infected patients, echinocandins are more effective and less expensive as compared to non-echinocandins, implying that the former are likely to be cost-saving. For non-albicans Candida spp., echinocandins are more effective but more expensive (cost-effective) as compared to non-echinocandins, costing US$652 for each LY gained. We also utilized treatment efficacy data at different time points from Reboli et al.'s study [8] to examine the robustness of our results. We found that anidulafungin is more effective and more expensive as compared to fluconazole, costing US$6310.01 for each LY gained (when the input is the success rate at 6 weeks of follow-up) and US$3492.01 for each LY gained (when the input is the success rate at the end of IV treatment) (Additional file 1: Table S3). Sensitivity analyses A tornado diagram showed that the ICER value is most sensitive to the "mortality rate for fluconazole" (Fig. 2). In the cost-effectiveness analysis, the most influential variable for C. albicans-infected patients is the "mortality rate for non-echinocandins" and that for non-albicans Candida-infected patients is the "success rate for non-echinocandins" (Additional file 1: Figure S2). Of note, when the success rate for non-echinocandins for non-albicans Candida infection is less than 0.606, using echinocandins is cost-saving. Therefore, the ICER results are sensitive to the efficacy parameters of treatment (i.e., success rate or mortality rate associated with treatment).
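The probabilistic sensitivity analysis described above can be reproduced outside TreeAge with a short Monte Carlo loop. The sketch below draws success probabilities from beta distributions and costs from gamma distributions, then counts how often the incremental result falls under the willingness-to-pay threshold; the distribution parameters and life-year payoffs are illustrative assumptions, not the fitted values used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sims = 10_000
wtp = 67_065  # three times Taiwan's 2015 per capita GDP, US$ per LY gained

# Illustrative parameter distributions (placeholders, not the fitted values):
p_success_echino = rng.beta(81, 19, n_sims)                   # mean ~0.81
p_success_flu = rng.beta(60, 40, n_sims)                      # mean ~0.60
cost_echino = rng.gamma(shape=100, scale=120, size=n_sims)    # mean ~US$12,000
cost_flu = rng.gamma(shape=100, scale=90, size=n_sims)        # mean ~US$9,000
ly_if_success, ly_if_failure = 9.12, 4.0                      # simplified payoffs

ly_echino = p_success_echino * ly_if_success + (1 - p_success_echino) * ly_if_failure
ly_flu = p_success_flu * ly_if_success + (1 - p_success_flu) * ly_if_failure

inc_cost = cost_echino - cost_flu
inc_ly = ly_echino - ly_flu

# Net monetary benefit avoids dividing by near-zero incremental effects.
nmb = wtp * inc_ly - inc_cost
print(f"Probability cost-effective at WTP US${wtp:,}: {np.mean(nmb > 0):.1%}")
```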
The cost-effectiveness acceptability curve showed that anidulafungin, as compared to fluconazole, has an 82% probability of being cost-effective at a threshold of three times the per capita GDP of Taiwan (US$67,065; Fig. 3). Echinocandins for non-albicans Candida-infected patients has an 89% probability of being cost-effective at a threshold of three times the per capita GDP of Taiwan, as compared to non-echinocandins. Because efficacy data of non-echinocandins were converted based on Andes et al.'s study [7], we further conducted one-way sensitivity analysis for the treatment success rate for non-echinocandins for C. albicans infection to ensure the robustness of our results (Additional file 1: Figure S3). We found that the direction of costeffectiveness results changed when different treatment success values for non-echinocandins were assumed. Specifically, when the treatment success rate for nonechinocandins was assumed to be 0.22, the ICER was estimated to be -US$4796 (our base case analysis), implying cost saving when using echinocandins instead of non-echinocandins. When the treatment success rate for non-echinocandins was assumed to be the same as that for echinocandins (0.81), the ICER value was US$1029. Since $1029 is below Taiwan's cost-effectiveness threshold, using echinocandins under this assumption is still considered to be cost-effective and acceptable. When the treatment success rate for non-echinocandins is 0.706, the ICER value is US$0, indicating no difference between echinocandins and non-echinocandins; i.e., the costs of these two treatments are the same. Even if the average cost of individual echinocandins was used as the drug acquisition cost for echinocandins, our sensitivity analyses showed that cost-effectiveness results were not sensitive to drug acquisition cost for echinocandins. Therefore, the direction of ICER is likely to stay the same regardless of drug acquisition cost of individual echinocandins. Discussion To the best of our knowledge, this is the first study to comprehensively assess the cost-effectiveness of echinocandins versus non-echinocandins such as fluconazole for different species (C. albicans vs. non-albicans Candida spp.) of IC in Taiwan. Our results indicate that among echinocandins, only anidulafungin is cost-effective as compared to fluconazole. For C. albicans-infected patients, the use of echinocandins is likely to be cost-saving as compared to the use of non-echinocandins. For nonalbicans Candida-infected patients, there is an 82% chance of the outcome favoring echinocandins. Three cost-effectiveness studies from other countries compared anidulafungin with fluconazole for IC, providing findings that are consistent with our study. Neoh et al.'s study based on an Australian hospital perspective and Reboli et al.'s trial data [8] indicated that, as compared to fluconazole, anidulafungin was associated with an ICER of AU$25,740 per LY gained, which was under the Australian ICER threshold, suggesting that anidulafungin is a cost-effective agent [9]. Our additional analyses, which applied Reboli et al.'s trial data [8], showed consistent results (Additional file 1: Table S3) with those in Neoh et al.'s study [9]. Grau et al.'s study from Spain showed that anidulafungin was cost-saving over fluconazole, with a higher clinical success (74% vs. 57%) at a lower total medical cost (€40,047 vs. 
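A cost-effectiveness acceptability curve like the one reported above is simply the probability of a positive net monetary benefit evaluated over a grid of willingness-to-pay values. A minimal sketch, reusing simulated incremental cost and life-year pairs such as those produced by the probabilistic sensitivity analysis, could look as follows; matplotlib and the simulated arrays are assumptions for illustration only.

```python
import numpy as np
import matplotlib.pyplot as plt

def ceac(inc_cost, inc_ly, thresholds):
    """Probability of being cost-effective at each willingness-to-pay value."""
    return [np.mean(wtp * inc_ly - inc_cost > 0) for wtp in thresholds]

# inc_cost and inc_ly would come from the PSA draws; placeholders are used here.
rng = np.random.default_rng(1)
inc_cost = rng.normal(3_000, 1_500, 10_000)   # incremental cost, US$
inc_ly = rng.normal(0.8, 0.4, 10_000)         # incremental life-years

thresholds = np.linspace(0, 150_000, 151)
plt.plot(thresholds, ceac(inc_cost, inc_ly, thresholds))
plt.axvline(67_065, linestyle="--")           # three times per capita GDP threshold
plt.xlabel("Willingness to pay (US$ per LY gained)")
plt.ylabel("Probability cost-effective")
plt.show()
```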
€41,350) and that the clinical efficacy of antifungal treatment was the most influential factor in the cost-effectiveness analysis [10], which is consistent with the results of the sensitivity analyses in the present study (Fig. 2). Auzinger et al.'s study, from the perspective of the United Kingdom National Health Service and Personal and Social Services, showed that anidulafungin was cost-effective as compared to fluconazole (ICER: £813 per LY gained) and cost-saving versus caspofungin and micafungin [11]. However, none of these studies analyzed the cost-effectiveness of antifungal treatments for specific species (i.e., C. albicans). Fluconazole has been commonly used for systemic Candida infections; however, selected Candida spp. are intrinsically resistant to or prone to develop resistance to fluconazole. In addition, fluconazole is inactive against Candida biofilm formation [6,7]. Both may contribute to treatment failure. In contrast, echinocandins have very low resistance rates and are active against Candida biofilm. Echinocandins are associated with higher success rates as compared to those for fluconazole [7,8]. The present study shows that, as compared to non-echinocandins, echinocandins are likely to be cost-effective for both C. albicans and non-albicans Candida species. Among the three available echinocandins, anidulafungin is cost-saving as compared to caspofungin and micafungin because of its higher rate of survival combined with a higher probability of treatment success and lower total costs. Anidulafungin has shown better efficacy (i.e., treatment success) versus those of other echinocandins in a mixed treatment comparison [22]. Also, it does not require dose adjustments, which are required for caspofungin (according to hepatic function). Because anidulafungin is metabolized by slow chemical, rather than enzymatic, degradation, there is no need for dose titration in patients with renal or hepatic impairment. As compared to fluconazole, the use of anidulafungin costs US$8015 per LY gained and has an 89% probability of being cost-effective at a threshold of three times per capita GDP of Taiwan (US$67,065). Therefore, anidulafungin is a treatment option that allows better control of antifungal budgets and leads to better healthcare outcomes (i.e., LY gained) at lower total costs. The advantage of the present study is that it takes into account the downstream economic consequences of failed first-line antifungal treatment and considerable adverse drug effects (i.e., nephrotoxicity). The findings of this study might be extrapolated to other countries with similar healthcare systems (i.e., universal healthcare insurance coverage). In addition, the efficacy data were based on randomized controlled trials [7,8,22]. The various sensitivity analyses indicate fair robustness of the conclusions of this study. The overall conclusion remained the same in an additional analysis that changed the assumption of the length of IV treatment for patients with treatment success and then survival (i.e., 14 days or 30 days). Also, subgroup analyses for C. albicans and non-albicans Candida spp. show consistent favoring cost-effectiveness results for outcome of echinocandins. However, some potential limitations of this study need to be addressed. First, our decision-analytic tree that was based on the anidulafungin cost-effective model [11] might only present a simplified model of daily clinical practice. 
For example, our model and sensitivity analysis did not take into consideration the heterogeneity of the patient population. In particular, current guidelines suggest echinocandins for moderately to severely ill patients (intensive care unit vs. general ward) and for neutropenic patients (vs. non-neutropenic), because pooled individual patient data showed better effectiveness compared to non-echinocandins. Also, since our efficacy data were based on Reboli et al.'s trial [8], which included predominantly non-neutropenic patients with IC, our economic results may not be generalizable to the population of neutropenic patients with IC. Thus, the current data might underestimate the cost-effectiveness of echinocandins, particularly for the aforementioned high-risk patients. Furthermore, the cost estimates, which only included costs incurred during hospitalization, may be underestimates (e.g., they lack the long-term economic consequences of treatment or disease). However, because the final results of interest are presented as incremental costs between two treatment groups (i.e., a difference in cost estimates between two groups), the long-term economic consequences of treatment or disease might largely offset in the comparison between groups. Second, our model estimates based on clinical trials might be different from what occurs in practice. Future studies that incorporate actual use of medical resources, including antifungal consumption, additional interventions for treatment failure or drug-associated adverse reactions, and treatments effective against drug-resistant microbes, should provide more valuable information and better reflect actual practice. Third, although expert opinions are often used when there are no other sources of data available (i.e., LOS [23,24]) and are commonly seen in pharmacoeconomic studies [9][10][11], this approach might bias the study results. Thus, we conducted sensitivity analyses and found that the cost-effectiveness results were robust to different LOS values. Fourth, with regard to the cost-effectiveness analyses specific to individual species (i.e., C. albicans and non-albicans Candida spp.), the efficacy data (i.e., success and mortality rates) of "non-echinocandins" (Table 1) were obtained from Andes et al.'s study [7], in which non-echinocandins included polyenes (i.e., amphotericin B and liposomal amphotericin B) and triazoles (i.e., fluconazole and voriconazole). In contrast, the efficacy data for echinocandins were primarily based on Reboli et al.'s study [8], which only assessed the efficacy of anidulafungin for C. albicans and non-albicans Candida spp., respectively. This was done because very few published studies have reported the efficacy of echinocandins for individual species. However, since the efficacies of individual echinocandins appear to be similar [22], the data from anidulafungin might be representative of echinocandins. Moreover, all efficacy data (i.e., treatment success) and model assumptions (i.e., average weight of patients receiving liposomal amphotericin B) were from other countries, which might not be applicable to an Asian population (e.g., Taiwan). We found data from Asian countries, but the data varied by country (Additional file 1: Tables S4 and S5) and were different from those in international studies (e.g., Mills et al. [22], Reboli et al. [8]). Hence, an effectiveness study of antifungal treatments in an Asian population is needed to enable future cost-effectiveness research specific to Asia.
Fifth, the parameters of treatment efficacy (i.e., treatment success, mortality rate) might differ depending on the length of the evaluation period. The cost-effectiveness model applied here used a 5-day period to define treatment success and a 6-week period to measure mortality associated with treatment. However, these efficacy data (i.e., survival) might be different if a longer evaluation period is chosen. Also, the treatment success and mortality data in the present study were based on a meta-analysis [22], which pooled data based on different evaluation periods. Hence, detailed efficacy data associated with a specific evaluation period are needed. Finally, this economic evaluation was conducted from the perspective of a medical payer, and thus only direct medical costs were included. Further study that considers all economic consequences of disease and treatment (e.g., indirect costs such as productivity losses) is anticipated to give a broader view from a societal perspective. Conclusion In summary, echinocandins are the dominant pharmacoeconomic alternative to fluconazole from Taiwan's healthcare system perspective for treating invasive candidiasis. The clinical efficacy of antifungal therapy (i.e., mortality and treatment success rate) is the most influential determinant of the cost-effectiveness results. Among the echinocandins, anidulafungin appears to be the dominant option because of its higher efficacy at a lower total cost in the treatment of invasive candidiasis.
The use of a Novel Bioactive Glass in Air Polishing for Subgingival Root Debridement Aims: To determine the abrasiveness of using a novel bioactive glass (BioMin™ F) in air polishing for subgingival root debridement by measuring dentine loss, and to compare this value to that of the reference powders. Furthermore, to confirm the tubular occlusion effect of air polishing with the bioactive glass using Scanning Electron Microscopy techniques. Material and Methods: Ivory derived from an elephant's tusk was used as the study sample. A ball milled BioMin™ F powder (D90 = 87.9 μm) was used as the test powder; this choice was based on a previously performed pilot study [1]. This powder was compared to two reference powders, sodium bicarbonate and glycine. Each powder group consisted of six samples of ivory. Dentine loss was measured in μm using white light profilometry. Scanning electron microscopy was performed on all the tested powders, to evaluate particle shape, and on the study samples, to assess the effect of the air abrasive/polishing procedure on dentinal tubules. Results: The depth of dentine removed (mean ± standard deviation) in the test group, air polishing with the bioactive glass, was 11.0 ± 1.05 μm; in control group 1, air polishing with sodium bicarbonate, it was 44.1 ± 0.77 μm; and in control group 2, air polishing with glycine, it was 28.1 ± 1.87 μm. The differences between the three groups were statistically significant. SEM images showed a partial tubular occlusion effect in the test group, and this was absent in both control groups. The novel bioactive glass, BioMin™ F, with ball milled particles 90% of which were smaller than 87.9 μm, was significantly more conservative than sodium bicarbonate powder and glycine powder. There was evidence of partial tubular occlusion following bioactive glass air polishing; however, no tubular occlusion was evident in the samples treated with either sodium bicarbonate or glycine air polishing. Introduction Periodontal diseases are strongly associated with the presence of bacterial biofilms on root surfaces [2]. Control and removal of bacterial biofilm from all dental surfaces is essential in the treatment and prevention of these diseases [3,4]. It is necessary for periodontal patients to receive frequently performed subgingival debridement in pockets >3 mm probing depth in order to maintain periodontal health, since a pre-treatment composition of subgingival microflora can be re-established after several months [5]. The traditional modalities for plaque and calculus removal involve the use of hand instruments or ultrasonic devices or a combination of both. These are uncomfortable, technically demanding, and clinically time consuming. They may also lead to severe, substantial, and irreversible root damage [6], and to gingival recession over time if applied repeatedly [7,8]. For treatments that need to be repeated, time efficiency, high patient acceptance, and minimal tissue damage are essential requirements [4]. The use of other treatment modalities which are effective in removing plaque with minimal abrasion to root surfaces is preferable [9]. Subgingival air polishing (AP) has been suggested as a simplified alternative approach for root debridement [10]. AP has been demonstrated to be a valid, highly efficient, and convenient treatment approach to subgingival debridement [10,11].
It is preferable to conventional treatment with respect to patient comfort, safety, and time efficiency and, therefore, may offer better patient compliance and economic benefits [4,[10][11][12][13][14]. Bioactive glasses are biocompatible, non-toxic, non-inflammatory, non-immunogenic bioactive agents having the ability to interact directly with living tissues and form chemical bonds. Once the bioactive glass dissolves, it forms a hydroxyapatite- or fluorapatite-like phase which is chemically similar to the natural tooth mineral. AP with hydroxyapatite has been demonstrated to be effective in removing plaque, tartar (calculus), and stains from enamel and cementum surfaces [15]. The treated enamel and cementum surfaces were covered with a layer rich in hydroxyapatite that was not removed by a water spray [15]. This high saturation of superficial enamel and cementum layers with calcium and phosphate supports remineralization of tooth hard tissues and may also reduce dentine permeability by occluding dentinal tubules and thus reduce dentine/root hypersensitivity [15,16]. The primary aim of this study was to determine the abrasiveness of using a novel bioactive glass, BioMin™ F, in air polishing for subgingival root debridement by measuring dentine loss and comparing this value to that of the reference powders. A secondary aim was to confirm the tubular occlusion effect of air polishing with the bioactive glass using Scanning Electron Microscopy (SEM) techniques. Material and Methods A flat-surface pristine ivory dentine (elephant's tusk) was used as the study sample. The elephant tusk had been previously seized by UK airport customs (illegal smuggling of ivory) and subsequently given to Queen Mary University of London for research use. The tusk was cut manually with a hacksaw in order to obtain a 15 mm thick section of flat-surface ivory dentine. This was further divided into 18 (10×10 mm) squares. The samples were then mounted in a resin using Claro Cit (Struers ApS, Denmark), which is a cold mounting acrylic resin (Figure 1). The plastic disc was first painted with a thin coat of Vaseline and then the material was used according to the manufacturer's instructions. After the resin had cured, the sample was polished to an optical finish using a Kemet 3000 LVAC (Kemet International Ltd, Maidstone, Kent, UK) polishing machine with several polishing discs in this order: 360 grit, 400 grit, 500 grit, 800 grit, 1000 grit, and lastly 4000 grit. Powder Preparation BioMin™ F bioactive glass was obtained from CDL Ltd, Stoke, UK, in the form of glass frit (a water-quenched granular glass). The glass was milled first with a Gyro Mill (Glen Creston, London, UK), then ball milled and sieved to give BioMin™ F powder with a D90 of 87.9 µm; this was the test powder, which was compared to the reference powders, sodium bicarbonate powder (Medivance Ltd, London, UK) and glycine powder (Clinpro™ Glycine Prophy Powder, 3M™). Particle size analysis was undertaken for all three powders using a Malvern MASTERSIZER 3000 (Malvern Panalytical, Malvern, UK). SEM images were also taken for each powder. Experimental Method An Aqua Care Air Abrasion & Polishing System from Velopex International, Medivance Instruments, Ltd was used in the experiment. The procedure was performed according to the manufacturer's recommendations: a distance of 4 mm, feed rate 1, and an air pressure of 80 psi (551.5 kPa).
A handpiece with a 0.8 mm tip was used together with disposable plastic tips, both obtained from Velopex International, Medivance Instruments, Ltd. Plastic tips were changed after each application in order to standardize the experiment. Each sample was air abraded at a 90° angle to the surface for 5 and 10 seconds with the test powders. The amount of powder present in the powder chamber was checked and always filled to the same level before each application to ensure reproducible and standardized conditions. After the experiment, the substance loss/cutting depth of each sample was evaluated using white light profilometry. A Proscan 2000 by Scantron Industrial Products Ltd was used to scan each sample individually. SEM images of each sample were taken to confirm any tubular occlusion effect on the dentine surface. The samples were coated with a layer of silver (Agar Scientific Ltd, UK) to prepare them for SEM. Statistical analysis was based on the comparison between the three treatment groups. Differences in cut depth or dentine loss were tested by the use of the Independent Samples t-test. A P-value <0.05 was considered statistically significant. Data handling and statistical testing were performed with the use of Microsoft Excel software. White Light Profilometry Analysis The individual cut depth values, in µm, of the test and control samples, following 5 seconds of air polishing, are shown in Table 1 & Figure 2. The mean ± standard deviation of the cut depth of the test group was 11.0 ± 1.05 µm; this was significantly less than the mean ± standard deviation cut depth of control group 1 (sodium bicarbonate), which was 44.1 ± 0.77 µm, and of control group 2 (glycine), which was 28.1 ± 1.87 µm. Furthermore, the difference in the cut depth values between the two control groups was statistically significant. Air polishing with the novel bioactive glass, BioMin™ F, resulted in significantly less cut depth and dentine loss compared to air polishing with sodium bicarbonate or glycine for the same duration (P <0.05). Thus, the null hypothesis, which stated that there was no significant difference in the cut depths between the three powders, was rejected. The small standard errors of the mean in all groups, 0.43 for the test group, 0.31 for control group 1, and 0.76 for control group 2, indicate the reliability of the means and that these means closely reflect the true means. The 95% confidence interval values of all groups are shown in Table 2 and indicate that we are 95% confident that the true mean of each group lies within this range. Scanning Electron Microscopy Images Evaluating Dentinal Tubules The surface characteristics of ivory samples of each group were assessed under the scanning electron microscope at a magnification of ×10,000 at two different points of the experiment: a) before air polishing, and b) after 5 seconds of air polishing application with the test or control powders. Extra images at a reduced magnification of ×1000 were further used to view each ivory sample at the same points of the experiment. The rationale behind this was to complement the result seen at the higher magnification and to allow for observation of any tubule occlusion across a wider landscape of view. Also, at this reduced magnification, the dentine tubules appear smaller, thereby increasing the effective field of view, which enabled better observation of the spread of the surface deposition.
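The group comparison reported above (independent-samples t-tests on six cut-depth measurements per group, with standard errors and 95% confidence intervals) can be reproduced with standard statistical tooling rather than a spreadsheet. The sketch below uses scipy and synthesizes six values per group around the reported means and standard deviations purely for illustration; the real analysis would use the individual cut depths listed in Table 1.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder samples (n = 6 per group) drawn around the reported mean ± SD values.
groups = {
    "BioMin F":           rng.normal(11.0, 1.05, 6),
    "sodium bicarbonate": rng.normal(44.1, 0.77, 6),
    "glycine":            rng.normal(28.1, 1.87, 6),
}

for name, vals in groups.items():
    mean, sem = vals.mean(), stats.sem(vals)
    lo, hi = stats.t.interval(0.95, df=len(vals) - 1, loc=mean, scale=sem)
    print(f"{name:>18}: mean {mean:5.1f} um, SEM {sem:.2f}, 95% CI ({lo:.1f}, {hi:.1f})")

t, p = stats.ttest_ind(groups["BioMin F"], groups["sodium bicarbonate"], equal_var=False)
print(f"BioMin F vs sodium bicarbonate: t = {t:.1f}, p = {p:.2e}")
```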
All observations were assessed with the naked eye of the author without any adjunctive aids. For ease of data analysis, the results are presented individually for each group before they are compared with each other. Figures 3 & 4 represent the surface of ivory dentine before the air polishing procedure, i.e., the normal state of ivory dentine. These figures show a clear surface with zero tubule occlusion, featuring open dentine tubules. Figure 5 was taken following 5 seconds of air polishing with sodium bicarbonate; the dentine surface showed no sign of tubule occlusion and appeared almost similar to the previous image (Figure 6). A similar finding was observed following 5 seconds of glycine air polishing (Figure 7). At a lower magnification, ×1000, the post-operative images, Figures 6 & 8, show the development of cracks or microfractures connecting the dentinal tubules. These were not present, or were only minimally present, in the pre-operative image, Figure 4. Figure 9 shows the untreated ivory dentine surface with clear, well-defined, and open dentine tubule margins (Figure 10). SEM analysis of the dentine surface treated with the novel bioactive glass air polishing showed surface structural changes most probably caused by apatite deposition (Figure 11). A narrowing of the dentinal tubules and some tubular occlusion were observed, together with scattered deposits on the surface, indicating the formation of an apatite rich smear layer. At the lower magnification, an observation similar to that made in the control group was also evident in the test group. Cracks and microfractures between the dentinal tubules developed following the application of air polishing (Figure 12); these were not present in the pre-operative image and were probably due to shrinkage effects within the SEM following water loss (Figure 10). Discussion The rationale for using pristine ivory dentine (elephant's tusk) in this study may be justified by reference to several studies which demonstrated that its calcium and phosphate ratios are comparable to those of other animal models [17] as well as the human tooth [18]. The impact of solid particles on the treated surface is the basic event leading to substance removal using air polishing with a water slurry cutting element [19]. This abrasive process is affected by many factors, such as the properties of the applied powder [1] and the time of exposure; parameters of the air polishing device itself (pressure and feed rate), such as water pressure and powder emission rate, are also influential [19][20][21][22]. To the best of our knowledge, there are no previously published studies on the abrasive effect of bioactive glass air polishing, used for debridement, on root or dentine surfaces. Also, there are no published investigations that directly compare the cut depth following bioactive glass air polishing with that following either sodium bicarbonate or glycine air polishing used for tooth surface debridement. Two previous studies investigated the effect of bioactive glass air abrasion, used for cavity preparation and caries removal, on the dentine surface [23,24]. The air abrasion unit, bioactive glass powder characteristics, and experimental settings such as application time, distance, and pressure were, however, completely different from our study. Therefore, a direct comparison between the present study results and previous results is not valid.
The tubular occlusion effect, due to the deposition of apatite minerals in the dentinal tubules with the formation of a surface smear layer, was observed in our study following bioactive glass air polishing, in agreement with other studies that examined the tested surface following bioactive glass application. Litkowski et al. [25] evaluated the dentine surface after treatment with bioactive glass compounds and reported an increase in tubular occlusion compared with non-bioactive glass containing controls [25]. Furthermore, Wang et al. [26] demonstrated that dentine remineralization and complete occlusion of dentinal tubules were evident after bioactive glass treatment [26]. In addition, Sauro et al. [16] concluded that air polishing with bioactive glass powder reduced dentine permeability and created a dentine surface resistant to citric acid attack, thereby indicating that the procedure was suitable for the treatment of dentinal hypersensitivity [16]. This was further confirmed in in vivo studies, where patients reported decreased dental sensitivity immediately following bioactive glass air polishing and for up to 10 days following the procedure. On the other hand, patients reported increased sensitivity immediately following air polishing with sodium bicarbonate and for up to 10 days following the procedure [27]. Also, the bioactive glass appeared to offer more effective tooth whitening when compared to sodium bicarbonate, and patients reported greater procedural comfort with the bioactive glass than with sodium bicarbonate [27]. Studies have also demonstrated that the use of bioactive glass was more beneficial to the tooth surface compared to sodium bicarbonate. In vitro studies confirmed the formation of an apatite-rich smear layer on enamel, dentine and cementum surfaces following bioactive glass application, thus supporting regeneration and remineralization of dental tissues [26,[28][29][30][31]. Taha et al. reported the remineralization of enamel white spot lesions following air polishing with a fluoride containing bioactive glass [30]. Another group confirmed that pre-conditioning enamel white spot lesion surfaces using bioactive glass air abrasion enhanced the subsequent remineralization therapy [32]. Coupled to this remineralising effect, bioactive glasses have been shown to have an antibacterial effect on oral flora [33,34]. The microfractures and cracks that developed within the ivory samples following air polishing with the tested powders in the present study can be attributed to the dry nature of the tusk, as the tusk was stored in dry air. The mean cut depth following 5 seconds of glycine air polishing in the present study was 28.1 ± 1.87 µm, which is comparable to the abrasion data obtained by Buhler et al. [21], who reported a defect depth of 27.56 ± 3.01 µm following a 5 second application of glycine powder, but at a 45° angle [35]. The results from the present study are also comparable to those of Herr et al., who reported a defect depth of 31 ± 28 µm following 5 seconds of glycine air polishing at a 45° angle [36]. Several studies have compared the effect of glycine air polishing to sodium bicarbonate air polishing. There is general agreement that sodium bicarbonate is more abrasive on dental tissues than glycine. The resultant defect depth was significantly greater following sodium bicarbonate air polishing compared to glycine air polishing for the same duration [35,37,38].
Therefore, the present study is comparable to previous investigations. The mean defect depth following 5 seconds of sodium bicarbonate air polishing was 44.1 ± 0.77 µm, which agrees with previous reports [39], and, similarly, the difference in the resultant defect depths following glycine and sodium bicarbonate air polishing was statistically significant. The softer effect of glycine air polishing was also evident on the gingival tissues. Glycine air polishing has been reported to cause less gingival erosion compared to sodium bicarbonate air polishing and hand instrumentation [40]. This can partially explain the increased patient comfort reported following glycine air polishing compared to the other procedures [37]. The present study showed that air polishing with the smallest ball milled BioMinF particles, D90 of 87.9 µm, was significantly less abrasive than the reference powders, sodium bicarbonate and glycine. However, the efficacy of this novel bioactive glass powder in removing plaque from dental surfaces has yet to be confirmed. An In vivo study will be required in order to confirm this de-plaquing effect and determine the clinical effectiveness of using a BioMin™ F powder in an air polishing procedure.
Multi-Period Portfolio Optimization with Investor Views under Regime Switching We propose a novel multi-period trading model that allows portfolio managers to perform optimal portfolio allocation while incorporating their interpretable investment views. This model’s significant advantage is its intuitive and reactive design that incorporates the latest asset return regimes to quantitatively solve managers’ question: how certain should one be that a given investment view is occurring? First, we describe a framework for multi-period portfolio allocation formulated as a convex optimization problem that trades off expected return, risk and transaction costs. Using a framework borrowed from model predictive control introduced by Boyd et al., we employ optimization to plan a sequence of trades using forecasts of future quantities, only the first set being executed. Multi-period trading lends itself to dynamic readjustment of the portfolio when gaining new information. Second, we use the Black-Litterman model to combine investment views specified in a simple linear combination based format with the market portfolio. A data-driven method to adjust the confidence in the manager’s views by comparing them to dynamically updated regime-switching forecasts is proposed. Our contribution is to incorporate both multi-period trading and interpretable investment views into one framework and offer a novel method of using regime-switching to determine each view’s confidence. This method replaces portfolio managers’ need to provide estimated confidence levels for their views, substituting them with a dynamic quantitative approach. The framework is reactive, tractable and tested on 15 years of daily historical data. In a numerical example, this method’s benefits are found to deliver higher excess returns for the same degree of risk in both the case when an investment view proves to be correct, but, more notably, also the case when a view proves to be incorrect. To facilitate ease of use and future research, we also developed an open-source software library that replicates our results. Introduction Since Markowitz formulated portfolio selection as an optimization problem trading off risk and return over sixty years ago, mean-variance optimization has occupied a central role in constructing portfolios in both academic literature, and industry (Markowitz 1952). The reasons for its success are diverse. The model was the first to quantify the benefits of diversification towards reducing portfolio risk. Further, it simplified the portfolio selection problem by introducing the concept of an efficient frontier. On this delimitating line or frontier, we can find the portfolio with the highest return for a given level of risk. Despite its vast success, the model has its drawbacks. To arrive at a mean-variance portfolio, an optimization problem is solved for one fixed period: hours, days, months, and years. However, an investor's end goal is broader than what could be achieved by a single mean-variance portfolio. The investor cares about maximizing their wealth over their entire investment period, which could last until a significant event or purchase, their lifetime or many generations (sovereign wealth funds). Superimposing one static set of returns and risk completely ignores the time-varying properties of asset prices over a long period of time. 
To address this drawback, we propose a reactive multi-period portfolio optimization framework that allows the direct incorporation of investor views and quantitatively generated degrees of confidence in each view on behalf of the investor. Multi-period optimization (MPO) is a promising research area that allows us to optimize portfolio holdings for the immediately adjacent time period simultaneously and multiple periods beyond it. Considering only one period at a time, single-period meanvariance optimization is a sub-optimal nearsighted strategy. The objective for the current period, unlike the real world, is oblivious to unavoidable future constraints and, at a minimum, unaware of reasonable expectations for further future periods. Suppose our long-term return forecast encourages us to build a large position in one asset; however, our short term return forecast is negative. In this case, an optimal solution might be to buy over periods of negative returns to prepare for the long term expectation, a solution not easily incorporated in a single period setting. Similarly, we can incorporate known macroeconomic events such as the US election directly into upcoming future periods. Suppose a reduction in portfolio holdings is desirable to prepare for the event. In that case, it can be expressed in MPO as a forecasted increase or a hard constraint of the underlying securities' risk in a future period. Reducing the portfolio size over multiple periods would likely achieve this with lower market impact as opposed to one period. Using MPO, the investor also gains the ability to incorporate time-varying return predictions into one model, e.g., mean reversion or alpha decay (Boyd et al. 2017). This subset of examples serves to showcase the vast potential that MPO has to improve on the existing single period models. Starting with Samuelson (1969) and Merton (1969)'s work, the literature on multiperiod optimization (MPO) has focused on dynamic programming, which appropriately incorporates updated information for each period in the sequence of trades (Gârleanu and Pedersen 2013). Unfortunately, applying dynamic programming to the problem of trade selection is impractical for non-trivial cases due to the 'curse of dimensionality' (Powell 2007). Most studies focusing on dynamic programming only include simple objectives and constraints and a minimal number of assets. Various approximations to the dynamic programming problem are employed to achieve tractability, such as approximate dynamic programming or simpler formulations that generalize SPO into MPO (Boyd et al. 2014). The method we will be leveraging in this article was recently introduced by Boyd et al. (2017) and consists of a relaxation from dynamic programming's consideration of the entire time horizon. Successfully used in many industrial applications, model predictive control (MPC) incorporates new information into the optimization problem. At each time step, a multi-period optimization problem using information known at time T is solved for H periods ahead. Despite obtaining optimal actions for multiple time periods, only the first period's actions are implemented, and the optimization problem is solved anew with updated information gained at time T + 1. We apply this receding horizon procedure to the MPO setting and simplify the full horizon dynamic programming problem while maintaining fast reaction times to changing financial markets. Applications to finance include portfolio optimization (see Herzog et al. 2007;Nystrup et al. 
2019), optimal trade execution (Anis and Kwon 2020) and index tracking (Primbs and Sung 2008). Using the same MPO framework as (Boyd et al. 2017), Nystrup et al. (2019) leverage multi-period forecasts in order to minimize the chances of the portfolio falling below a certain level relative to its previous peak, i.e., achieve a lower maximum drawdown. Boyd et al. (2017) demonstrate that this MPO method remains computationally tractable since it leverages convex programming throughout, can incorporate many costs and constraints and improves the risk-return frontier over SPO for daily equity trading in an ex-post example. From an optimality perspective, it is possible to produce a bound on the optimal performance for the dynamic trading of a portfolio of assets over a finite time horizon (Boyd et al. 2014). This performance bound can be used to judge the performance of any sub-optimal policy. While there is no theoretical guarantee of the performance of the method we are using, Boyd et al. show through Monte Carlo simulations that its results are typically close to the optimal performance bound. Although this optimization method is designed to look into the future, there is no set optimal horizon H to use. This article analyzes the results of portfolio allocation performance across one, two and five-period horizons. We leverage the Black-Litterman (BL) model to generate more stable risk and return estimates for the optimization problem while avoiding common pitfalls found in direct and risk factor model-based estimation. Even in MPO, the mean-variance portfolio remains the core of portfolio optimization. However, when attempting to use the original Markowitz mean-variance optimization model for portfolio allocation decisions, the resulting portfolios are often uninvestable. Green and Hollifield (1992) documents the tendency of mean-variance portfolios to be skewed by having large positions in only a small subset of assets, thus going against the very concept behind their inception, diversification. Similarly, the model would almost always result in large short positions in many assets when allowing short positions. These troublesome results stem from two well-documented problems. First, portfolio managers tend to be extremely knowledgeable only on a specific set of assets, while a standard optimization model requires them to produce both return and risk estimates across all assets. We know that estimation errors can cause mean-variance optimized portfolios to perform poorly (see Michaud 1989;DeMiguel et al. 2009). Second, as a compounding effect, mean-variance portfolios are extremely sensitive to the return assumptions used. When any constraints are introduced to the optimization problem, a surprisingly small change to the return estimate of even one asset shifts half the portfolio's allocation of assets while leaving the portfolio's return and variance unchanged (Best and Grauer 1991). Since estimation error is a leading cause of unexpected results from mean-variance optimization, significantly reducing the parameters to estimate also improves the meanvariance results. As mitigation, Fama and French (1992) introduced size and book-to-market equity factors that, when combined, capture the cross-sectional variation in stock returns. 
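This sensitivity is easy to demonstrate with the unconstrained closed-form solution, in which mean-variance weights are proportional to the inverse covariance matrix times the expected returns. The toy covariance and return figures below are illustrative assumptions only; bumping a single expected return by 50 basis points reshuffles a large share of the allocation while every other input is untouched.

```python
import numpy as np

Sigma = np.array([[0.040, 0.030, 0.025],
                  [0.030, 0.035, 0.028],
                  [0.025, 0.028, 0.032]])        # highly correlated assets (toy values)
mu = np.array([0.060, 0.055, 0.050])             # expected returns

def mv_weights(mu, Sigma):
    """Unconstrained mean-variance weights, normalized to sum to one."""
    raw = np.linalg.solve(Sigma, mu)
    return raw / raw.sum()

base = mv_weights(mu, Sigma)
bumped = mv_weights(mu + np.array([0.005, 0.0, 0.0]), Sigma)   # +50 bps on asset 1

print("base weights:  ", np.round(base, 3))
print("bumped weights:", np.round(bumped, 3))
```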
Leveraging explainable factors as drivers of returns enables financial practitioners to reduce the need to estimate n(n − 1)/2 parameters for a full risk covariance matrix to only m(m − 1)/2, where n is the number of assets and m is the number of explanatory factors. Although extremely popular in academia and industry, implementing factor models in practice is not trivial. The presence of correlated factors can cause unit factor portfolios that are unintuitive to even experienced practitioners. Further, using trailing averages of factor returns implies a follower momentum strategy at the factor level without strong empirical justifications (Carvalho 2016). Different mathematical techniques can also be employed in order to reframe the problem of portfolio selection. Credibility theory has been used to expand from traditional MVO to a fuzzy multiobjective model that also includes liquidity constraints beyond risk and return measures (Garcia et al. 2020). Similarly, uncertainty theory can be used to introduce new sources of background risk (income shortages, health-related expenses) that affect individual investors' risk preferences into the portfolio selection problem (Huang and Yang 2020). Models using different choices of risk measures (semi-variance) and objectives (entropy, price-to-earnings ratio, satisfaction functions and environmental, social and governance (ESG) scores) have also been shown to be good alternatives to traditional MVO (see Chen and Xu 2019;Mansour et al. 2019;. As a widely-known approach to solving the Markowitz model's problems, Black and Litterman (1992) developed their namesake model that combined the mean-variance optimization framework with Sharpe's capital asset pricing model (CAPM) and applied it to global assets. The model starts with a baseline of global equilibrium returns defined as the asset returns that would stabilize the global supply-demand of risk assets. In practice, these returns are equivalent to portfolio holdings that are market capitalization-weighted, proportionally more allocated to better-capitalized countries (Sharpe 1964). Layered on top of the baseline equilibrium returns, the model allows an investor to incorporate their own return views for the areas where they have expertise while leaving the remaining assets to be allocated according to equilibrium returns. This approach addresses both inadequacies that exist in the standard mean-variance optimization. Managers are empowered to focus only on their subset of views while the layered approach anchors the final result to the well-diversified market capitalization-weighted portfolio. The enrichment of baseline returns with dynamic investor views is the reasoning behind using the BL model at our framework's core. Although conceptually simple, the Black Litterman (BL) model is imperfect. BL is static and effectively single-period. Once the target weights are obtained, portfolio managers are expected to actively track the results of their view and reoptimize upon any changes in their view or confidence levels. Using data-driven methods to infer dynamic confidence levels in an investment view eliminates the need to choose an entry/exit point and enables its expansion to multiple periods without further investor input. Further, BL is relatively complex to understand even for a quantitative researcher, as evidenced by the number of papers dedicated to presenting it in more straightforward ways (see He and Litterman 2002;Idzorek 2004;Walters 2007). 
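For reference, the sketch below computes the standard Black-Litterman posterior expected returns for a toy three-asset market, blending the equilibrium returns implied by market-capitalization weights with a single absolute view. The numerical inputs, the scaling parameter tau, and the diagonal view-uncertainty matrix Omega are illustrative assumptions only.

```python
import numpy as np

# Toy three-asset market (all values are illustrative assumptions).
w_mkt = np.array([0.5, 0.3, 0.2])                # market-capitalization weights
Sigma = np.array([[0.040, 0.012, 0.008],
                  [0.012, 0.030, 0.010],
                  [0.008, 0.010, 0.020]])        # annualized covariance
delta, tau = 2.5, 0.05                           # risk aversion and prior scaling

pi = delta * Sigma @ w_mkt                       # equilibrium (implied) excess returns

# One view: asset 1 will return 3% in excess of cash, with some uncertainty.
P = np.array([[1.0, 0.0, 0.0]])
Q = np.array([0.03])
Omega = np.array([[0.001]])                      # view uncertainty (diagonal)

A = np.linalg.inv(tau * Sigma) + P.T @ np.linalg.inv(Omega) @ P
b = np.linalg.inv(tau * Sigma) @ pi + P.T @ np.linalg.inv(Omega) @ Q
mu_bl = np.linalg.solve(A, b)                    # Black-Litterman posterior returns

print("equilibrium returns:", np.round(pi, 4))
print("posterior returns:  ", np.round(mu_bl, 4))
```

In our setting, the view matrices P and Q come from the portfolio manager, while the confidence encoded in Omega is exactly what the regime-switching comparison introduced below replaces.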
We introduce a regime-switching component that makes the model reactive to market regime changes and, in turn, reduces the work needed to interact with the model. A growing set of literature shows that we can exploit shorter-term trends in both returns and volatility, similar to what we propose in this article. Hidden Markov models (HMM) have been successfully used in speech recognition (Jelinek 1997), natural language modelling (Manning and Schutze 1999) and the analysis of biological sequences such as proteins and DNA (Krogh et al. 1994). Ang and Timmermann (2012) explored their predictive power on financial variables and discovered that they could be used across various financial markets and macro variables. HMMs can describe the financial market's tendency to abruptly change its behaviour and the propensity for financial variables to maintain their behaviour over more extended periods. Within the field of finance, their application is referred to as regime-switching. Incorporating their predictions within mean-variance optimization has been found to improve portfolio performance in multiple ways. Nystrup et al. (2017) found that regime based asset allocation improves portfolio return and risk metrics over rebalancing using static weights. Costa and Kwon (2019) used regime-switching to build factor models to assist with the difficult problem of estimating covariances and demonstrated higher ex-post return for the same level of risk compared to a nominal factor model. Despite numerous examples of using regime-switching in finance, there is a dearth of literature on the benefits of tactically improving the BL returns and risk through regimeswitching predictions. The only directly connected article uses a two-state regime-switching model as the return estimates provided to the Black-Litterman model. It finds that regimeswitching returns outperform directly estimated returns (Fischer and Seidl 2013). Our approach is different; the BL equilibrium returns are kept intact as a base while our goal is to improve the investor views. This article extracts predicted returns from the most straightforward HMM consisting of only two states and uses them to compute dynamic confidence values in investment views. The reason for choosing a two-state model as opposed to more is two-fold. First, Nystrup et al. (2019) find no benefit from increasing the number of states above two out of sample when using long-term daily data. Second, a simpler model is less likely to overfit the training data and is more likely to be embraced in practice due to its increased interpretability. The lack of interpretability is a significant barrier to model adoption by investors. By leveraging regime-switching, we propose that it is possible to compute a practical dynamic confidence level by comparing the investor view to the regime-switching predicted view return. This dynamic comparison serves to remove the BL model's dependency from correctly chosen confidence levels or entry and exit points. Our essential assumptions regarding the trading frequency should be noted, given our use of both market equilibrium returns and regime-switching models based on daily return data. Market equilibrium returns are based on supply and demand reaching a stable balance. At higher frequencies (tick, second, minute), we expect this stable balance to be more fleeting, making market microstructure and short term effects much more critical. 
Conversely, it is more likely to observe stable supply-demand equilibria to base trading decisions on at a lower frequency. Further, our regime-switching training was performed over multiple years, with only two states (bull and bear), making the predictive power at a higher frequency (intraday or daily) lower. That said, reacting to a regime change faster is better than reacting to a regime change slower. Therefore, the interval we chose to strike a balance between these two factors was a weekly trading frequency and was applied in most simulations performed. Introducing a more dynamic allocation as proposed (based on direct market data) can lead to potential problems that we mitigate against, namely, return instability and overtrading. Since regime-switching models using higher frequency data are faster to update their confidence in each regime, this can lead to fleeting regimes and unnecessarily high portfolio reallocation. To prevent this, similar to Nystrup et al. (2015), we incorporate a minimum probability threshold to overcome before allowing the regime to change. The threshold reduces regime jitter at the expense of a slightly slower reaction to regime changes. We know that transaction costs can be high when trades are made frequently (Kolm et al. 2014) and that SPO can be augmented to efficiently include many types of costs and constraints in the portfolio selection (Lobo et al. 2007). Therefore, we incorporate transaction costs into the portfolio optimization. These two changes, regime-switching thresholds and transaction cost optimization, serve to mitigate the potential adverse effects we mentioned above. Market microstructure issues such as liquidating large positions or information leakage to other participants are only partially addressed by using a temporary impact cost component. The first to introduce the concept of optimal intraday execution of portfolio transactions, Almgren and Chriss (2001) developed a simple linear temporary impact cost model and introduced efficient frontiers trading off the minimum expected cost versus a given level of uncertainty. However, since its introduction, the field of optimal execution has advanced significantly through the introduction of permanent impact costs, dynamically adaptive strategies and stochastic volatility, among others (see Lorenz and Almgren 2011;Almgren 2012). Given our lower frequency of trading (daily or longer) and liquid developed country based ETFs, we do not focus further on this topic but note that it would be an exciting area of future research. Overall, the proposed model improves traditional single-period mean-variance portfolios by incorporating multi-period trading and interpretable investment views into one easy-to-use framework. We leverage a multi-period portfolio optimization model introduced by Boyd et al. (2017) that approximates the entire trading range optimization by repeatedly optimizing smaller, more tractable consecutive sub-ranges. Although trading decisions are made for multiple periods in advance, only the next period trading decisions are executed. The model requires accurate risk and return estimates for the portfolio optimization, which we obtain from the BL model as the combination of the market portfolio and investment views. To mitigate against BL's static nature and provide dynamism to the estimates, we propose a novel method of using regime-switching to determine each investment view's confidence. 
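A minimal sketch of the regime machinery just described, assuming the third-party hmmlearn package (any two-state Gaussian HMM implementation would do): fit a two-state model to daily returns, then apply the minimum-probability threshold so the active regime only switches once the other state's smoothed probability clears that threshold.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def fit_two_state_hmm(returns, seed=0):
    """Fit a two-state Gaussian HMM to a 1-D array of daily returns."""
    model = GaussianHMM(n_components=2, covariance_type="full",
                        n_iter=200, random_state=seed)
    model.fit(returns.reshape(-1, 1))
    return model

def thresholded_regimes(prob_state1, threshold=0.8):
    """Only switch regimes once the other state's probability clears the threshold."""
    regimes, current = [], int(prob_state1[0] > 0.5)
    for p in prob_state1:
        if current == 0 and p > threshold:
            current = 1
        elif current == 1 and (1 - p) > threshold:
            current = 0
        regimes.append(current)
    return np.array(regimes)

# Synthetic daily returns with a calm and a turbulent stretch (placeholder data).
rng = np.random.default_rng(0)
returns = np.concatenate([rng.normal(5e-4, 0.008, 500),
                          rng.normal(-1e-3, 0.025, 250)])
hmm = fit_two_state_hmm(returns)
probs = hmm.predict_proba(returns.reshape(-1, 1))[:, 1]
regimes = thresholded_regimes(probs, threshold=0.8)
print("fraction of days assigned to state 1:", regimes.mean().round(3))
```

Which numerical state corresponds to the bull regime is arbitrary after fitting and would be read off the estimated state means, as would the mapping from regime agreement to a view confidence.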
We do not address the actual generation of investment views; instead, we focus on trading them effectively once provided. Outline The article is structured as follows: Section 2 introduces the multi-period optimization model based on receding horizon MPC. Section 3 presents the computation of risk and returns estimates needed to instantiate the portfolio optimization model. It starts by showing how the Black Litterman model is used to incorporate investor views in Section 3.1, then introduces the dynamic investor view confidence levels obtained through regimeswitching in Section 3.3. It concludes with pseudo-code that details all the parts needed for the complete algorithm and simulation. Section 4 showcases our empirical results using this framework. Finally, Section 5 concludes. Contribution The main contributions to the existing literature are two-fold. First, we are the first to develop a multi-period optimization model to solve a portfolio allocation problem based on return and risk estimates from the Black Litterman (BL) model. As opposed to traditional dynamic programming, this method enables the optimization to be dynamic across time and, as such, allows new information to be incorporated as soon as it's realized. This multi-period relaxation method aptly named receding horizon, is borrowed from model predictive control and was first introduced by Boyd et al. (2017). Given the increase in trading frequency over static models, convex transaction costs are also considered in the optimization objective. Second, we introduce a novel data-driven method to infer dynamically updated confidence levels for investment views. The confidence is obtained by computing the view's regime expected return (based on each underlying asset's current regime) and comparing it to the investor inputted expected return. The more the two forecasts are in agreement, the higher the confidence obtained and vice versa. Overall, the result is a framework that is reactive, numerically tractable and easy to use by a portfolio manager looking to trade researched investment views optimally. We have also developed an open-source software library that implements all of the methods in the paper and can be used to replicate our results easily: https://github.com/roprisor/alphamodel. Multi-Period Optimization Multi-period optimization (MPO) has shown great promise as a flexible solution for constructing optimal portfolios over multiple separate but connected time periods. The traditional mean-variance is designed for only one time period and, therefore, more fit for stationary risk and return assumptions. In practice, financial asset prices exhibit nonstationary behaviour, which is better incorporated in a multi-period optimization model. Academic literature has focused on dynamic programming, a method that has proven impractical for non-trivial cases due to the 'curse of dimensionality' (Powell 2007). Recently, Boyd et al. (2017) developed a model that generalizes from single-period optimization (SPO) to MPO. Its advantages include tractability and flexibility while still achieving nearoptimal results. Through convex optimization for all objectives and constraints, the model can remain tractable despite introducing multiple periods and many constraints for each period. While there is no theoretical guarantee of the performance of the method we are using, Boyd et al. (2014) show through Monte Carlo simulations that its results are typically close to the optimal performance bound. 
This model, presented below, is leveraged as the base of the framework developed in this paper. At each time period $T$, a multi-period optimization problem using the information (risk, return, and transaction cost estimates) known at time $T$ is solved for $H$ periods ahead, where $H \ll T$. Although optimal actions are obtained for multiple future time periods, only the first period's actions are implemented, and the optimization problem is solved anew with the updated information gained at time $T + 1$ (see Figure 1a,b). This receding horizon procedure simplifies the full-horizon dynamic programming problem while maintaining fast reaction times to changing financial markets.

A natural question that arises when considering the horizon of each multi-period optimization problem is how many periods ahead the optimization should consider, i.e., what should $H$ be? Consider the limiting values of $H$. At the minimum, when $H = 1$, we are performing sequential single-period optimization. At the theoretical maximum, when $H = T$, we are effectively considering the entire trading range at once. Although possible, optimization across the entire trading range is impractical unless we have accurate return and risk forecasts that far into the future. A practical value for $H$ therefore depends heavily on the forecast horizon of our return and risk estimates. We compare multiple values of $H$ to validate its impact in the empirical results section.

Multi-Period Optimization Model

Consider a portfolio of $n$ assets, plus a cash account, over a finite time horizon split into discrete time periods labeled $t = 1, \dots, T$. The time period in the model can be of arbitrary length; throughout this paper we consider each period to be a trading day. Let $h_t \in \mathbb{R}^{n+1}$ denote the portfolio (or vector of holdings) at the beginning of time period $t$, where $(h_t)_i$ is the dollar value of asset $i$. $(h_t)_i < 0$ implies a short position in asset $i$; when $(h_t)_i \geq 0$ for $i = 1, \dots, n$, we call the portfolio long-only. Since asset $n + 1$ represents the cash account, $(h_t)_{n+1} = 0$ implies that at time $t$ the portfolio is fully invested, i.e., we hold zero cash and all holdings are invested in non-cash assets. The total value of the portfolio, in dollars, at time $t$ is $v_t = \mathbf{1}^T h_t$.

Another way to describe the portfolio is through fractions of the total dollar value, or weights. Given a portfolio with holdings $h_t$, the weights (or weight vector) $w_t \in \mathbb{R}^{n+1}$ are defined as $w_t = h_t / v_t$. By definition the portfolio weights sum to one, $\mathbf{1}^T w_t = 1$, and are unitless (dollar holdings divided by the portfolio's total dollar value). As in the holdings representation, the last weight $(w_t)_{n+1}$ is the fraction of the total portfolio value held in cash.

Let $u_t \in \mathbb{R}^n$ be the dollar value of our trades in period $t$. We assume that all trading happens at the beginning of each time period. $(u_t)_i > 0$ implies that we bought $(u_t)_i$ dollars of asset $i$, and $(u_t)_i < 0$ implies the opposite, for $i = 1, \dots, n$. Normalizing the dollar trades $u_t$ by the total portfolio value, we obtain the normalized trades $z_t = u_t / v_t$.

SPO only considers the most recent trade decision $z_t$ while ignoring any future periods in the current optimization. Effectively, SPO is the specific case of MPO in which the forward horizon includes only one period.
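As a minimal numerical illustration of this bookkeeping, the snippet below computes the portfolio value, weights, and normalized trades for a hypothetical three-asset portfolio plus cash; all dollar amounts are invented.

```python
import numpy as np

# Hypothetical holdings: three assets plus a cash account (last entry), in dollars.
h_t = np.array([40_000.0, 25_000.0, -5_000.0, 10_000.0])   # short position in asset 3

v_t = h_t.sum()            # total portfolio value, v_t = 1^T h_t
w_t = h_t / v_t            # weights, which sum to one by construction

# Dollar trades for the non-cash assets at the start of period t;
# the cash account absorbs the net of these trades.
u_t = np.array([5_000.0, -5_000.0, 0.0])
z_t = u_t / v_t            # normalized trades used by the optimization
```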
In MPO, we obtain the current trade vector $z_t$ by solving an optimization problem over a planning horizon that extends $H$ periods into the future, as illustrated in Figure 1a:

$$\begin{aligned}
\underset{z_t, \ldots, z_{t+H-1}}{\text{maximize}} \quad & \sum_{\tau=t}^{t+H-1} \Big( \hat{r}_\tau^T (w_\tau + z_\tau) - \gamma^{\text{risk}}_\tau\, \hat{\psi}_\tau(w_\tau + z_\tau) - \gamma^{\text{trade}}_\tau\, \hat{\phi}^{\text{trade}}_\tau(z_\tau) \Big) \\
\text{subject to} \quad & \mathbf{1}^T z_\tau = 0, \qquad w_{\tau+1} = w_\tau + z_\tau, \qquad \tau = t, \ldots, t+H-1,
\end{aligned} \qquad (1)$$

where $z_t, z_{t+1}, \dots, z_{t+H-1}$ and $w_{t+1}, \dots, w_{t+H-1}$ are variables, $\gamma^{\text{risk}}_\tau$ and $\gamma^{\text{trade}}_\tau$ are positive parameters used to scale the respective costs, and the hat ($\hat{\cdot}$) denotes an estimate rather than a known or realized quantity. These parameters are sometimes called hyper-parameters, analogous to the identically named parameters obtained when fitting statistical models to data. The hyper-parameters can significantly affect the performance of the MPO method and should be chosen carefully through backtesting. As noted earlier, although only the trades for period $t$ are executed for each optimization looking $H$ periods into the future, the model has both the ability to incorporate newly discovered information and to consider the optimal allocation of trading across all $H$ future periods. When $H = 1$, we are solving an SPO problem.

Degrees of Freedom

The optimization model purposefully leaves open multiple degrees of freedom that must be filled in before it can be instantiated (Boyd et al. 2017). The performance analysis of the originally proposed model is done ex-post (using realized future data that could not be known when optimizing), with no future projections provided. To build a complete trading model, we fill in the missing components as follows:
• return and risk estimates: the returns $\hat{r}$ and risk $\hat{\psi}$ are replaced by $\hat{r}^{BL}$ and $\hat{\psi}^{BL}$, generated from the Black Litterman model;
• transaction cost estimates: $\hat{\phi}^{\text{trade}}$ remains the 3/2 transaction cost model.
Since returns are represented as a vector $\hat{r} \in \mathbb{R}^{n+1}$, replacing the vector with the Black Litterman estimates $\hat{r}^{BL}$ is straightforward.

Risk

Assuming the returns $r_t$ are randomly distributed with covariance matrix $\Sigma_t \in \mathbb{R}^{(n+1)\times(n+1)}$, the variance of the portfolio return $R^p_t$ is given by $\mathrm{var}(R^p_t) = (w_t + z_t)^T \Sigma_t\, (w_t + z_t)$. We obtain the traditional quadratic risk measure from the Black Litterman model covariance for period $t$, $\hat{\psi}^{BL}_t(x) = x^T \hat{\Sigma}^{BL}_t x$. We note that $\hat{\Sigma}^{BL}_t$ is an estimate of the return covariance based on sample data and model assumptions; the exact distribution of the process generating real asset price returns can never be known.

Transaction Cost

Trading in financial markets generally incurs a transaction cost, denoted $\phi^{\text{trade}}_t$. The model assumes that the transaction cost function $\phi^{\text{trade}}_t$ is separable, i.e., it breaks down into a sum of transaction costs over the individual assets. This assumption ignores the cointegration of asset prices at the high-frequency level, which is reasonable given that the period considered in our paper is one day. $(\phi^{\text{trade}}_t)_i$, a function from $\mathbb{R}$ into $\mathbb{R}$, is the transaction cost function for asset $i$ in period $t$. Similar to Boyd et al. (2017), the transaction cost function chosen is

$$x \mapsto a|x| + b\,\sigma\, \frac{|x|^{3/2}}{\sqrt{V}} + c\,x,$$

where $a$, $b$, $\sigma$, $V$, and $c$ are numbers and $x$ is a dollar trade amount (Grinold and Kahn 2000). $a$ represents the asset's half-spread (half the bid-ask spread) at the beginning of the time period when trading occurs. This term is expressed relative to the asset price and is therefore unitless. If desired, $a$ can be increased by an amount representing broker fees, expressed as a function of the dollar value traded. The second term represents the temporary impact cost of our trading. $b$ is a positive constant with unit 1/dollars. $V$ represents the total dollar value of the asset traded in the market in the current time period.
The number σ reflects the asset price's standard deviation over the most recent periods, expressed in dollar units. As mentioned by Boyd et al. (2017), a common rule of thumb is that trading one day's volume is expected to move the price roughly by one day's volatility. This would lead to a value of b around one. Given that c is linear in dollars traded x, we can use the third term to express differences between buying and selling an asset. If the cx term is ignored (c = 0), the cost is the same regardless of the trade direction. However, when c < 0, selling is more expensive than buying, which could reflect a market with difficulty borrowing stock to short sell or otherwise increased selling pressure (more sellers than buyers). Return and Risk Estimates with Investor Views Successful multi-period portfolio optimization relies on having a set of accurate risk and return estimates to produce trading decisions that perform well out of sample (see Green and Hollifield 1992;Michaud 1989;DeMiguel et al. 2009). In this paper, we leverage the Black Litterman (BL) model for risk and return estimates. BL aims to reduce estimation error by combining the market portfolio with a set of investor views. The market portfolio is a portfolio based on a condition that must be satisfied (all assets change hands; each seller finds a buyer). The investor views are expressed as portfolios that the investor provides a target return and confidence level for. One of the most considerable drawbacks of the BL model is its static nature. It perfectly fits the category of single-period models that are not designed to adapt to changing market conditions optimally. Incorporating static by nature BL model risk and returns into multiperiod optimization where they will be used repeatedly, as proposed in Section 2, requires at least one component to be dynamic. Since the market portfolio is fixed (only changes with market capitalization) and equilibrium returns mostly reflect parameter changes, the key to making the risk and returns estimates dynamic is the investment views. Our proposal replaces the static confidence levels in each investor view with dynamic values generated by a regime-switching model. Regime switching has shown great value when applied to the financial markets (see Ang and Timmermann 2012;Fischer and Seidl 2013;Nystrup et al. 2017;Costa and Kwon 2019). The model's underlying idea leverages the observation that asset prices exhibit timevarying behaviour, such as their tendency to exhibit trends in their statistical properties (means, standard deviations). Multiple return distributions called states fit to financial data are used as both explanatory variables of their past properties and predictors of their future properties. This model fits perfectly with the concept of investment views, which, once researched, are investors' expectations of financial trends that will persist for some time in the future. To sum up, multi-period portfolio optimization requires a set of risk and return estimates that we obtain from the Black Litterman model. This model incorporates investment views that are expected by the investor to exhibit trends. As such, in order to capture the confidence level (likelihood) that the expected trend is already underway or has ended, we use a regime-switching model. This combination allows us to construct our set of risk/return estimates, a needed building block for dynamic multi-period portfolio optimization, that quantitatively follow the trend of the expected investor views. 
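Before turning to the estimates themselves, the sketch below illustrates how the receding-horizon problem in Equation (1), together with the 3/2 transaction cost model of Section 2, can be assembled with an off-the-shelf convex solver. It is a simplified stand-in for the cvxportfolio-based implementation used in our experiments; the half-spread, volatility, volume, and portfolio-value inputs are placeholders, and the linear cost term $c$ is omitted.

```python
import cvxpy as cp
import numpy as np

def solve_mpo(w_t, r_hat, Sigma_hat, H=2, gamma_risk=5.0, gamma_trade=1.0,
              a=0.0005, b=1.0, sigma=0.01, V=1e8, v_t=1e6):
    """Receding-horizon sketch of Equation (1) for n assets plus cash.

    w_t       : current weights, shape (n+1,), cash last; must sum to 1.
    r_hat     : estimated period returns, shape (n+1,), held constant over the horizon.
    Sigma_hat : estimated return covariance, shape (n+1, n+1), PSD.
    a, b, sigma, V, v_t : placeholder transaction cost inputs (half-spread, impact
        constant, daily volatility, daily dollar volume, portfolio dollar value).
    Returns only the first period's normalized trade vector z_t.
    """
    n1 = len(w_t)
    w_prev = w_t
    objective = 0
    constraints = []
    z_vars = []
    for _ in range(H):
        z = cp.Variable(n1)
        w_post = w_prev + z
        # 3/2 cost in weight terms: a|z| + b * sigma * sqrt(v_t / V) * |z|^{3/2}
        trade_cost = cp.sum(a * cp.abs(z)
                            + b * sigma * np.sqrt(v_t / V) * cp.power(cp.abs(z), 1.5))
        objective += (r_hat @ w_post
                      - gamma_risk * cp.quad_form(w_post, Sigma_hat)
                      - gamma_trade * trade_cost)
        constraints.append(cp.sum(z) == 0)   # trades are self-financing
        z_vars.append(z)
        w_prev = w_post
    cp.Problem(cp.Maximize(objective), constraints).solve()
    return z_vars[0].value                   # execute only the first period's trades
```

Returning only the first period's trades mirrors the receding horizon procedure: the remaining planned trades are discarded and the problem is re-solved next period with updated estimates.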
Black Litterman Model

The original Markowitz optimization model has at least two significant drawbacks. First, it tends to build large position weights in only a small subset of assets, effectively negating its original goal of diversifying across multiple assets (Green and Hollifield 1992). Second, it is notoriously sensitive to minute changes in the return assumptions used: a small change to the return estimate of only one asset can shift half of the portfolio allocation (Best and Grauer 1991). To counteract these well-documented problems, Black and Litterman (1992) developed a model that combines Markowitz mean-variance optimization with Sharpe's CAPM through a Bayesian approach. Their model starts with a completely neutral view of the asset means (the prior), the only reasonable definition of which, they argue, is the set of expected returns that would clear the market if all investors held identical views (cleared implying that all assets are traded, each buyer finds a seller). Given this equilibrium state, the model allows investors to specify linear combinations of investment views that are overlayed on top (the posterior). This overlay causes a subtle allocation shift away from equilibrium, depending on the strength of the investor's conviction. Since each view's confidence can be specified together with the overall willingness to diverge from equilibrium, the Black-Litterman model counteracts both problems with Markowitz optimization: the resulting weights are well diversified, and small changes in the investor's views of the asset means have only localized effects, leaving most of the portfolio intact.

From a mathematical modelling perspective, assume there are $n$ investable assets in our universe. Their returns $r$ are driven by normal distributions with mean $\mu$ and covariance matrix $\Sigma$: $r \sim N(\mu, \Sigma)$. At equilibrium, all investors as a whole hold the market portfolio $w_{eq}$. The equilibrium risk premiums are therefore set such that, if all investors hold the same view, the demand for the assets equals the available supply (Black 1989). Assuming an average world risk tolerance represented by the risk aversion parameter $\delta$, the equilibrium risk premiums are given by:

$$\Pi = \delta\, \Sigma\, w_{eq}.$$

From a Bayesian perspective, the prior consists of the expected returns $\mu$ being normally distributed and centered at the equilibrium values (mean of $\Pi$):

$$\mu = \Pi + \epsilon^{(e)},$$

where $\epsilon^{(e)}$ is a normally distributed random vector with zero mean and covariance matrix $\tau\Sigma$, $\tau$ being a scalar representing the uncertainty of the CAPM prior.

To overlay investment theses on top of the CAPM prior, the investor also needs to define a set of views. The views are expressed such that the expected return of a view portfolio $p$ has a normal distribution with mean $q$ and standard deviation $\omega$. Let $K$ be the total number of views, $P$ a $K \times N$ matrix whose rows are the view portfolio weights, and $Q$ a $K$-vector of the expected returns of these portfolios. Given the above, the investor views can be expressed as:

$$P \mu = Q + \epsilon^{(v)}, \qquad (3)$$

where $\epsilon^{(v)}$ is an unobservable normally distributed random vector with zero mean and diagonal covariance matrix $\Omega$. We now have all the pieces required to combine the CAPM equilibrium returns with the investor views in a Bayesian framework.
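For concreteness, the equilibrium prior can be computed in a couple of lines. The covariance matrix, market capitalizations, and the value of $\tau$ below are invented for illustration, with $\delta = 2.5$ as used later in the paper.

```python
import numpy as np

delta = 2.5                                   # world average risk aversion
tau = 0.05                                    # illustrative uncertainty in the CAPM prior

Sigma = np.array([[0.040, 0.012, 0.010],      # illustrative 3-asset return covariance
                  [0.012, 0.030, 0.008],
                  [0.010, 0.008, 0.025]])
mcap = np.array([5.0, 2.0, 3.0])              # market capitalizations, arbitrary units
w_eq = mcap / mcap.sum()                      # equilibrium (market) weights

Pi = delta * Sigma @ w_eq                     # equilibrium risk premiums, prior mean for mu
prior_cov = tau * Sigma                       # covariance of the prior on mu
```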
The result is a set of posterior expected returns distributed as $N(\hat{\mu}, \hat{\Sigma})$, where the mean $\hat{\mu}$ is given by:

$$\hat{\mu} = \Pi + \tau \Sigma P^T \left( \tau P \Sigma P^T + \Omega \right)^{-1} (Q - P\Pi) \qquad (4)$$

and the covariance $\hat{\Sigma}$ by:

$$\hat{\Sigma} = (1 + \tau)\,\Sigma - \tau^2\, \Sigma P^T \left( \tau P \Sigma P^T + \Omega \right)^{-1} P \Sigma. \qquad (5)$$

For a detailed derivation of Equations (4) and (5), the reader is encouraged to refer to Meucci (2008)'s working paper, which provides a detailed analysis of the original formulation and its re-casting into the more computationally stable posterior representations listed above.

As raised by Idzorek (2004), a problem with the original Black-Litterman model is that, although its formulation was meant to allow the incorporation of investor views, the process for doing so was complicated by the need to define uncertainty covariance values for each view. This requirement was an unnecessary barrier preventing non-quantitative investors from adopting the framework more widely. To resolve this, Idzorek proposed specifying the investor's confidence in each view as a percentage, 0-100%, where the confidence measures the change in weight of the posterior from the prior estimate (0% confidence) to the conditional estimate (100% confidence). In this methodology, a coefficient of uncertainty $\alpha$ in the interval $[0, \infty)$ is used to construct the $\Omega$ uncertainty covariance from the view portfolio covariance:

$$\Omega_{kk} = \alpha_k \; p_k \Sigma p_k^T, \qquad k = 1, \dots, K. \qquad (6)$$

Walters (2007) obtains a closed-form solution for Idzorek's confidence formulation, which greatly simplifies the process of obtaining $\Omega$:

$$\alpha_k = \tau \left( \frac{1}{c_k} - 1 \right), \qquad (7)$$

where $c_k \in (0, 1]$ is the stated confidence in view $k$. This combined method allows investors without a quantitative model driving their investment theses to adapt their views to the Black-Litterman framework easily. The requirement is thus shifted from providing exact uncertainty covariance values to only providing a confidence value in the interval $(0, 1]$; in turn, Equations (6) and (7) transform these confidence values into model-driven uncertainty values.

Regime Switching Model

Our goal is to generate risk and return estimates that can be used to re-optimize repeatedly as time advances, period by period, across the entire trading horizon. However, the Black Litterman (BL) model itself is static and, as such, does not lend itself to the dynamic incorporation of new information. To mitigate this problem, we leverage a regime-switching model that is well suited to predicting asset return trends and use it to generate dynamic confidence levels for each investor view.

Once provided to the BL model, a view is considered both established and static until removed or manually edited by the investor. Considering the non-stationary nature of financial markets, the exact time a view is discovered might not be the best entry point. The investment thesis might be either too early or too late, which the model does not protect against. Further, any single view is not guaranteed to achieve consistent results over time, potentially resulting in over-allocation to poorly performing views and under-allocation to strongly performing views over their lifetimes.

Financial market trends can change abruptly. Consequently, return means, volatilities, and correlations also shift according to the economic, political or behavioural trends that underlie asset valuation. Once established, changes tend to persist over extended periods, leading to observations such as the clustering of volatility first noted by Mandelbrot (1963). Looking at the time series of returns for SPY, the most popular ETF tracking the S&P 500 Index (Figure 2a), we can visually observe these shifts. Following periods of more subdued volatility, abrupt spikes materialize and cluster together. The U.S.
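A compact sketch combining the posterior expressions (4)-(5) with the confidence-to-uncertainty mapping (6)-(7) follows. Inputs use the notation above; the function is illustrative rather than a reference implementation.

```python
import numpy as np

def black_litterman_posterior(Pi, Sigma, tau, P, Q, confidences):
    """Posterior mean and covariance (Equations (4)-(5)), with view uncertainty
    Omega built from per-view confidences in (0, 1] (Equations (6)-(7))."""
    c = np.asarray(confidences, dtype=float)
    alpha = tau * (1.0 / c - 1.0)                        # Eq. (7): full confidence -> alpha = 0
    view_var = np.einsum('ij,jk,ik->i', P, Sigma, P)     # p_k Sigma p_k^T for each view
    Omega = np.diag(alpha * view_var)                    # Eq. (6), diagonal by assumption
    A = np.linalg.inv(tau * P @ Sigma @ P.T + Omega)
    mu_post = Pi + tau * Sigma @ P.T @ A @ (Q - P @ Pi)                      # Eq. (4)
    Sigma_post = (1 + tau) * Sigma - tau**2 * Sigma @ P.T @ A @ P @ Sigma    # Eq. (5)
    return mu_post, Sigma_post
```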
market underwent a very long stretch of a calm bull market from 2009 to 2020. This period sharply contrasts with the global financial crisis in 2008 and the onset of the severe COVID-19 pandemic in 2020. Looking closer at the return densities for 2018-2020 (Figure 2b), the difference between their shapes is apparent. None of the three years is close to what Markowitz's original theory assumes, a Bell curve: 2018 and 2019 have pronounced left tails, while 2020 is an outlier with wide left and right tails. Similar regime shifts, some periodic (expansions followed by recessions) and some unique (global events such as the recent COVID-19 pandemic), are found across a wide array of financial markets and macro variables (Ang and Timmermann 2012).

Regime switching attempts to exploit these clustering effects of financial time series to generate alpha and improve portfolio risk-reward metrics. Regime based asset allocation has been shown to improve portfolio metrics over rebalancing using static weights (Nystrup et al. 2015). In machine learning, the area focused on inferring a set of labels out of data (e.g., hidden regimes) is referred to as unsupervised learning. Due to the sequential nature of financial data, a natural choice to model regime transitions is a first-order Markov chain.

Markov Chains

A first-order Markov chain is a stochastic process describing a sequence of possible states $s_1, \dots, s_N$ in which the probability of the next state $s_n$ depends entirely on the previous state $s_{n-1}$. More formally:

$$p(s_n \mid s_1, s_2, \dots, s_{n-1}) = p(s_n \mid s_{n-1}), \qquad \forall n = 2, \dots, N. \qquad (8)$$

This memorylessness property allows us to loosen the i.i.d. assumption of more traditional non-sequential models. The model remains computationally tractable while incorporating past information into future probabilities of sequential data (Bishop 2006).

Hidden Markov Models

Hidden Markov models have been applied in fields ranging from speech recognition (Jelinek 1997) and natural language modelling (Manning and Schutze 1999) to the analysis of biological sequences such as proteins and DNA (Krogh et al. 1994). Markov chains are useful for modelling the shifting conditions of financial markets because each underlying state can be attached to a different probability distribution for the returns. For example, for a two-state chain, one state can represent an upwards trending (bull) market and the other a downwards trending (bear) market. Similarly, for a three-state chain, the third state could represent a sideways (calm) market. We would expect each state's probability distribution to reflect the market environment relevant at the time: a bull market would be represented by a Gaussian distribution with positive mean and low variance, while a bear market would have a negative mean and high variance,

$$r_n \mid \text{bull} \sim N(\mu_{\text{bull}}, \sigma_{\text{bull}}^2), \qquad r_n \mid \text{bear} \sim N(\mu_{\text{bear}}, \sigma_{\text{bear}}^2),$$

where $\mu_{\text{bull}} > 0 > \mu_{\text{bear}}$ and $\sigma_{\text{bull}} < \sigma_{\text{bear}}$. Such a combination of multiple unobservable (latent) states connected through a Markov chain is called a hidden Markov model.

To build a hidden Markov model from our notation for a Markov chain, assuming we have observations $x_1, x_2, \dots, x_N$, we introduce corresponding latent variables $z_1, z_2, \dots, z_N$, one for each observation. We further assume that it is the latent variables that form a Markov chain, such that $z_{n+1}$ and $z_{n-1}$ are independent given $z_n$ (memorylessness property, Equation (8)). The latent variables $z_n$ which we just introduced are designated to represent which state the observation pertains to.
Each state has its own emission probability distribution, the most basic case being a Gaussian distribution. Mathematically, the latent states are represented as discrete multinomial variables in a 1-of-$K$ coding scheme. Because the underlying states depend on each other through a Markov chain and each latent variable is $K$-dimensional, the transition probabilities between states $p(z_n \mid z_{n-1})$ correspond to a table of numbers we denote as $A$, the transition matrix, with entries $A_{jk} = p(z_{nk} = 1 \mid z_{n-1,j} = 1)$. In our case, since we utilize a two-state model ($K = 2$), the conditional distribution of the current latent variable $z_n$ is given by:

$$p(z_n \mid z_{n-1}, A) = \prod_{k=1}^{K} \prod_{j=1}^{K} A_{jk}^{\, z_{n-1,j}\, z_{nk}}.$$

Since the initial latent variable $z_1$ has no previous latent variable to condition on, it has a marginal distribution $p(z_1)$ represented by a vector of probabilities $\pi$, with each element $\pi_k$ denoting the initial probability that the underlying state of the first observation is state $k$. Specifying the probabilistic model is completed by defining the conditional distributions of the observed variables given the latent variables, $p(x_n \mid z_n, \phi)$, where $\phi$ is the set of parameters governing these distributions. Thus, the joint probability distribution over both observed and latent variables is given by:

$$p(X, Z \mid \theta) = p(z_1 \mid \pi) \left[ \prod_{n=2}^{N} p(z_n \mid z_{n-1}, A) \right] \prod_{m=1}^{N} p(x_m \mid z_m, \phi), \qquad (9)$$

where $X$ is the set of all observations $x_n$, $Z$ the set of all latent variables $z_n$, and $\theta = \{\pi, A, \phi\}$ denotes the set of parameters governing the model.

Estimation

The parameters of hidden Markov models are typically estimated by maximizing the joint probability distribution in Equation (9) with respect to $\theta$, also known as the maximum-likelihood method. The most popular methods of maximizing the joint probability distribution are direct numerical maximization and the Baum-Welch algorithm, a special case of the expectation maximization (EM) algorithm (see Baum et al. 1970; Dempster et al. 1977). In this article, we utilize the Baum-Welch algorithm to extract the model parameters and latent variable probabilities.

The EM algorithm starts with an initial estimate of the model's parameters, denoted $\theta^{\text{old}}$. The core proposition decouples the numerical maximization into two steps: expectation (E step) and maximization (M step). In the E step, we use the old parameter values $\theta^{\text{old}}$ to find the posterior distribution of the latent variables given the observations $X$ and $\theta^{\text{old}}$. We then use this posterior to evaluate the expectation of the logarithm of the complete-data likelihood as a function of $\theta$, $Q(\theta, \theta^{\text{old}}) = \sum_{Z} p(Z \mid X, \theta^{\text{old}}) \ln p(X, Z \mid \theta)$. In the M step, we maximize $Q(\theta, \theta^{\text{old}})$ with respect to the parameters $\theta = \{\pi, A, \phi\}$ while holding constant the posterior distribution of the latent variables computed in the E step. The Baum-Welch algorithm has several variants, one of the more popular being the alpha-beta (forward-backward) algorithm. Bishop (2006) has an excellent exposition of both algorithms employed, EM and alpha-beta, which we will not reproduce here.

Dynamic View Confidence through Regime Switching

We can now focus our attention on using the newly obtained regime-switching forecasts of asset returns. As discussed at the beginning of Sections 3 and 3.2, one of the drawbacks of the BL model remains the static nature of its view confidence inputs. Since our multi-period optimization method requires frequent re-solving, the risk and return estimates need to be updated with the newly available information. We therefore propose converting our regime-switching based forecasts into dynamic confidence levels for investor views.
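The Baum-Welch estimation described above is available off the shelf. The sketch below fits a two-state Gaussian HMM to a single return series with hmmlearn (the package used in our experiments) and extracts the quantities needed later: the regime means and the smoothed state probabilities. The synthetic return series is a placeholder for real data.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Placeholder daily return series; in the experiments this is a 1700-day window per ETF.
rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, size=1700).reshape(-1, 1)

model = GaussianHMM(n_components=2, covariance_type="full", n_iter=200, random_state=0)
model.fit(returns)                                   # Baum-Welch (EM) estimation

means = model.means_.ravel()                         # per-state return means
stds = np.sqrt(np.array([c.ravel()[0] for c in model.covars_]))
bull_state = int(np.argmax(means))                   # label the higher-mean state as "bull"

# Smoothed state probabilities for the most recent observation and the
# expected return under the current regime mix.
last_probs = model.predict_proba(returns)[-1]
mu_predicted = last_probs @ means
```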
The impact of an incorrect confidence forecast can be a significant drag on performance in the very competitive world of asset management. For example, in Figure 3, we track two opposite views considered ex-post: (long EWG, short SPY) versus (short EWG, long SPY). The delta between these two views peaks at over 125% at the end of 2007. This implies that there would have been a 1.25% performance gap between the views for every 1% of portfolio allocation. To make matters more interesting, the winning view runs counter to the generally accepted market equilibrium returns: Germany outperformed the United States by a surprising 50% over the 2005-2008 period. As always when dealing with predictions, the problem is that not all investors would be able to correctly choose the right view all the time, never mind the right degree of confidence, without some kind of systematic process.

Regime switching can be used to completely remove the need to define a subjective degree of confidence in the investor views provided to the BL model. From Equation (3), we know that the expected return of the investor's view portfolio $p_i$ is expressed as $(P\mu)_i$, with mean $q_i$. However, given that we have already estimated the underlying regimes driving each asset's return, we can also compute it directly from these estimates as

$$(P \mu^{rs})_i = p_i^T \mu^{rs},$$

where $rs$ denotes that the quantity is obtained from the most likely underlying regimes for the current sample of returns. Since the investor has already specified an expectation of $q_i$ for the mean view return, there are multiple methods we can employ to systematically compute the confidence that should be assigned to it.

As a first method, we can focus on how the quantity of interest $q_i$ relates to the prediction for portfolio $p_i$. Specifically, a possible perspective on confidence is to consider it the likelihood that the view return will be $q_i$ or better (Figure 4a):

$$\text{confidence}_i = \mathbb{P}\!\left( p_i^T r \geq q_i \right) = 1 - F_{rs}(q_i),$$

where $F_{rs}$ is the cumulative distribution function of the view return under the regime-implied distribution.

Similarly, in a more simplistic fashion, we also consider the confidence in a given view to be akin to a neural network node that is activated when our estimated $P_i \mu^{rs}$ is greater than the investor's input $q_i$ (Figure 4b). One of the most popular activation functions is the sigmoid function, largely owing to its step-like behaviour and ease of differentiation at all points (Bishop 2006). To adapt it to our purpose, we incorporate two adjustments to the base sigmoid. First, we add a slope parameter $\eta$, which plays a role similar to the BL model confidence in the investor views: the steeper the activation slope, the faster the change in confidence level, so a view that comes into play is reflected in the estimated returns faster. Second, a scaling parameter $\phi$ is introduced, defined as $10^{-om(q_i)}$, where $om(x)$ is the order of magnitude of $x$. Its purpose is to counteract the unwanted effects of comparing tiny magnitude returns and to expand the range of confidence values obtained:

$$\text{confidence}_i = \frac{1}{1 + e^{-\eta\, \phi\, \left( P_i \mu^{rs} - q_i \right)}}.$$

The slope parameter value could be set through backtesting so that shifts in realized view returns are closely matched by shifts in view confidence; however, great care needs to be taken to avoid over-fitting. For this paper's purposes, we used a higher base value of 4 for the slope parameter ($\eta = 4$), both to avoid using views that might not be active and to switch quickly to active views once identified. A value of $10^{4}$ was computed for the scaling parameter, since daily returns are on the order of magnitude of basis points ($10^{-4}$). No further fitting was performed that could introduce bias in the results.
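Both confidence constructions can be written in a few lines; the sketch below follows the definitions above, with the order-of-magnitude scaling implemented as $\phi = 10^{-om(q_i)}$. The inputs are per-period returns, and the CDF method uses the regime-implied mean and standard deviation of the view portfolio.

```python
import numpy as np
from scipy.stats import norm

def view_confidence_cdf(view_regime_mean, view_regime_std, q):
    """Probability that the view return is q or better under the regime-implied
    normal distribution (Figure 4a). Hovers near 0.5 when the mean is tiny
    relative to the standard deviation, as observed in Section 4.2.3."""
    return 1.0 - norm.cdf(q, loc=view_regime_mean, scale=view_regime_std)

def view_confidence_sigmoid(view_regime_mean, q, eta=4.0):
    """Sigmoid-activated confidence (Figure 4b): compares the regime-implied view
    return P_i mu_rs against the investor's stated q_i."""
    om = np.floor(np.log10(abs(q))) if q != 0 else 0.0   # order of magnitude of q
    phi = 10.0 ** (-om)                                  # e.g., q ~ 1e-4 gives phi = 1e4
    return 1.0 / (1.0 + np.exp(-eta * phi * (view_regime_mean - q)))
```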
Multi-Period Portfolio Optimization with Investor Views

To instantiate the multi-period optimization model described in Equation (1), we outlined the need for accurate and reactive risk and return estimates. Our proposal depends on three key components to achieve this: first, regime-switching forecasts for the underlying asset returns; second, a method to construct confidence levels for investment views from those forecasts; and third, risk and return estimates obtained from the Black Litterman (BL) model, which integrates the investment views with the newly constructed confidence levels. The Algorithm 1 pseudo-code details the entire simulation we propose for performing multi-period portfolio optimization while incorporating dynamic risk and return estimates from the BL model. For live trading, given a specific set of optimization hyper-parameters chosen carefully through backtesting, only the iteration of the loop for the most recent period $T$ should be performed and, as such, only the trades $z_T$ applied.

A natural question that arises when considering the structure of the algorithm is: what is the value of optimizing multiple periods ahead when $\hat{r}^{BL}$ is constant? We expect the answer to lie in the interplay between shifting weights to follow the trend of the return forecasts and the dynamism of the regime-switching component. More specifically, when considering only one period at a time, each optimization will effectively try to adjust the portfolio weights as much as possible in the current period. This could be an optimal strategy if we knew there were no new information to incorporate shortly afterwards. However, incorporating new information is exactly what we aim to achieve through the regime-switching and dynamic confidence component: every time we re-optimize, we incorporate all newly available asset return information and refit the regime-switching model, which in turn generates a more up-to-date set of returns to use. Increasing the number of trading periods beyond one gives the portfolio optimization the chance to trade off fast portfolio shifts against transaction costs that can be spread out across time. Whenever part of the trading is best left for a later period, new information will have arrived by then and will be incorporated in updated return and risk forecasts. The performance loss or benefit from incorporating multiple trading periods is addressed empirically in Section 4.3.

[Algorithm 1 pseudo-code: at each period, update the Black Litterman return and risk estimates and apply the trades $z_t$ only.]

Empirical Results

The proposed multi-period portfolio optimization framework can be applied across a wide range of asset universes, with an infinite number of combinations of investor views overlayed on the market portfolio. From a practical perspective, to show the promise of the framework, we focus on a numerical example comprising two opposing views. Especially in finance, forecasting is a challenging endeavour, as shown by the increasing share of assets shifting to passive investment vehicles (index tracking). It is impossible to know with certainty ahead of time whether an investor's view will be realized. A demanding trading scenario for our framework is therefore to provide it with both a 'correct' and an 'incorrect' view and observe how it updates its confidence in each view with each new piece of information it is provided.
While an investor would equally participate in both the upside of the 'correct' view and the downside of the 'incorrect' view, a successful quantitative model should be able to discover when the 'incorrect' view underperforms and, as such, reduce the downside exposure while leaving the upside in the 'correct' view case. In this section, we first present our computational setup used for the simulations. Second, we present the regime-switching and dynamic confidence results leading to the risk and return estimates. Finally, we instantiate the multi-period portfolio optimization model and validate its performance relative to its static counterpart, the Black Litterman model, and repeated single-period optimization. To achieve robust results out of sample, it is critical to validate each component separately before combining them into one cohesive model. Therefore, each component is tested separately and on disjoint time ranges to avoid look-ahead bias and over-fitting. All dynamic confidence simulations are performed on a weekly rebalance schedule as discussed in Section 3. Computational Setup All simulations were performed with daily adjusted close price, and volume data retrieved from the Quotemedia (2020) data source hosted on Quandl. An ideal analysis should encompass both a large enough test set and apply the model to a very recent period. Therefore, the entire range for the data set was chosen to be from 1 January 1997 to 31 August 2020. To minimize the chance of overfitting to the data, the regime-switching training and testing were performed from 1 January 1997 to 31 December 2005. The time range from 1 January 2005 to 31 August 2020 was dedicated to the Black Litterman and the multi-period portfolio optimization. Historical country level market capitalization data was obtained from the 'Stock Market Capitalization to GDP' and 'Gross Domestic Product' tables published by the St. Louis Fed (Fed 2020). The code was written in Python 3.7 and hosted online on 'github.com' under the project alphamodel. The regime switching's underlying HMM model was trained using the hmmlearn open-source package. Simultaneously, the multi-period optimization was performed with the cvxportfolio open-source package built by Boyd et al. (2017) where we implemented a new multi-period optimization policy that can handle forward horizons in a way that matched our code. In terms of simulation hardware, the experiments were run on different hardware depending on how many simulations were required. One-off simulations were run on a MacBook Pro 2016 laptop while regime-switching training period testing and efficient frontier experiments were run on Amazon Web Services (AWS) compute-intensive optimized hardware, specifically 'c4.8xlarge'. The MacBook Pro specifications include a quad-core 2.6 GHz Intel i7 CPU with 16 G of RAM running MacOS, while 'c4.8xlarge' AWS servers include an 18-core 2.9 GHz Intel Xeon E5-2666 v3 Processor with 60 GB of RAM running Unix OS. Return and Risk Estimates Successful multi-period portfolio optimization relies on accurate risk and return estimates to produce trading decisions. The Black Litterman (BL) model was chosen for this purpose in our framework as elaborated in Section 3.1. The universe selected lends itself to global equity allocation, similar to the original BL experimental setup (Black 1989). We construct portfolios using the nine oldest country ETFs listed in the US, all denominated in US dollars, as listed in Table 1. 
In this global context, we follow the results of the representative ETFs for Germany (EWG) and the United States (SPY) relative to each other ( Figure 5). As mentioned in Section 3.3 and illustrated in Figure 3, the opposing view portfolios are: (long Germany, short US) and (short Germany, long US). By definition, if one of the two view portfolios outperforms, the other will underperform. Focusing on two opposing views is both a realistic and difficult scenario since no investor would be able to know with certainty which is correct ex-ante. Suppose our framework is able to discover correct confidence values in each view. In that case, that will serve as a definite improvement to portfolio managers since they would be able to use the model to objectively and automatically reduce exposure to their underperforming views. For the Black Litterman model to be instantiated, we require the prior weights and return assumptions used in its Bayesian approach. To find the equilibrium weights (priors), we require historical country level market capitalization data. The sources used for this were the 'Stock Market Capitalization to GDP' and 'Gross Domestic Product' tables published by the St. Louis Fed (Fed 2020). The market capitalization values seen in Table 1 resulted from multiplying the above 3 .The equivalent equilibrium weights defined as the weight of a portfolio holding all securities proportional to their market capitalization are also shown. The first question we ask ourselves before embarking on the journey to improve the Black Litterman returns is: does it make sense to use Black Litterman equilibrium returns as base returns instead of using a factor model? Therefore, we computed the efficient frontiers for factor model approaches (implemented through the eponymous 5 factor model for developed markets proposed by Fama and French 2014) and BL model equilibrium returns (no views). Simulations were performed from 2005 to 2020 with a weekly rebalance. γ risk ranged between 0.001 and 100 while γ trade ranged between 1 and 5. The asset returns used to compute return expectations and covariance values were applied an exponential decay with a half-life of 20 trading days (1 calendar month) to improve the result's dynamicity. Figure 6 shows that we should be agnostic between the two approaches since no clear winner consistently outperforms across all risk and trading hyper-parameters. While the Fama-French model outperforms at the high end of the risk levels, the BL equilibrium returns outperform at the low end, showing that they are neither detrimental nor better overall. Thus, using BL equilibrium returns as base returns for our model appears to be a very reasonable choice, all the more since we can easily incorporate views which are expected to generate further outperformance. This result validates both the vast literature on the BL model and their choice for return and risk estimates. The BL model is meant to combine this set of prior allocations consisting of equilibrium views for the entire market (supply equals demand effectively clearing the market) with a set of conditional views that are provided by investors. A required calculation when combining any two quantities is the weighting used for each quantity in order to achieve the end result. The BL model uses the confidence parameter for this exact purpose. The more confident the investor is in a given view (effectively, the lower the uncertainty in the view), the more the combined portfolio should be skewed towards it. 
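As described above, the country market capitalizations are obtained by multiplying the market-cap-to-GDP ratios by GDP, and the equilibrium weights follow by normalization; a minimal sketch with invented figures is shown below.

```python
import numpy as np

# Illustrative values only; the paper uses the St. Louis Fed tables for each country.
cap_to_gdp = np.array([1.5, 0.55, 1.2])     # stock market capitalization to GDP ratio
gdp = np.array([21.0, 3.9, 5.1])            # gross domestic product, trillions of dollars

market_cap = cap_to_gdp * gdp               # country stock market capitalization
w_eq = market_cap / market_cap.sum()        # equilibrium weights for the BL prior
```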
In financial markets, as in most fields of human endeavour, confidence is a doubleedged sword. If the investor is confident in a view that ends up exhibiting the expected results, then portfolio performance is improved. The flip side also needs to be considered, however. If the investor is confident in a view that does not perform as expected, then portfolio performance suffers. In Figure 7 we plot the effect on the portfolio weights from incorporating with 70% confidence both the 'correct' view (Germany outperforms the US by 4% annualized) and the 'incorrect' view (the US outperforms Germany by 4% annualized). As expected, in the 'correct' pro-Germany case, all of the capital allocated to the US gets reallocated to Germany. In the 'incorrect' case, the opposite happens. It is important to note that the BL model tends to maintain the weights of the securities that are not part of the investor's views. This is not true of the original Markowitz framework, which would have suffered large swings in most assets due to a view (return) change, a downside initially brought up in Section 3.1. As expected, the effect of the confidence in each view follows the path mentioned above. For Figure 8 we performed portfolio optimization only once at the beginning (1 January 2005), and the weights of the portfolio were entirely left to drift according to the return realized through price movement for the remainder of the period. As introduced by Sharpe (1994), the Sharpe ratio measures the excess return per unit of risk, commonly referred to as the risk-adjusted return. All things equal, a higher Sharpe ratio makes a portfolio or strategy more desirable. We can observe that introducing the 'correct' pro-Germany view has an increasingly positive effect on Sharpe's portfolio as confidence increases. This view's effect peaks at a 0.23 Sharpe ratio when the confidence increases past 60%. The opposite effect can be seen as confidence in the 'incorrect' pro-US view increases. In this scenario, the Sharpe ratio starts decreasing abruptly between 20-30% confidence from 0.10 to 0.08 and continues to decrease all the way until 100% confidence. These results suggest that investigating further how to both set and update the confidence in a given view is a ripe area of improvement for the Black Litterman model. For this purpose, we look towards regime-switching models as a natural aid. Regime Switching Since regime-switching is a model designed to track trends, we expect it can be used successfully to track the outperformance and underperformance of investor views. The model itself has two key parameters that we would need to arrive at, however. First, we need to determine how many days we should use to train the model and, second, what regime probability value we should use as a threshold to allow a regime change to occur. A regime change would shift our expectations of asset return and standard deviation from bull regime values to bear regime values and vice versa. To avoid over-fitting and look-ahead bias, the regime-switching training was performed from 1 January 1997 to 31 December 2005 (separate from Section 4.2.1). We define for each ETF a training window of a set size, fit a two-state HMM model on the observations within it and use the model parameters (mean, covariance) together with the last state's probabilities to predict properties of the forward return. The package used for HMM fitting is the hmmlearn open-source package. To quantify successful predictions, we use the information coefficient. 
Frequently used in the financial literature on portfolio management, it is a handy metric that quantifies a signal's predictive power. Equation (13) shows how to compute the coefficient by comparing the direction of the ex-ante predicted signal $\mu_{\text{predicted}}$ with the ex-post realized values, as per Grinold and Kahn (2000). This direction comparison is called a win rate and is the key metric used to compute the information coefficient. In our case, we compare the mean predicted for the regime the stock is currently most likely to be in with the realized mean of the next five trading days. Comparing to a shorter time frame, or to a different metric such as a compound return, does not make sense since the HMM model is tailored to identify the stock's regime rather than the next day's return. Conversely, comparing the regime mean with the average return over too long a period would also not make sense: regimes, although clustered, change abruptly and cannot be expected to remain static too far into the future. We empirically observed the predicted return forecast's performance decaying at horizons longer than a few weeks, further justifying our choice.

$$\mu_{\text{predicted}} = E(\mu) = \mu_{\text{bull}} \cdot p(z_{n-1} = \text{bull}) + \mu_{\text{bear}} \cdot p(z_{n-1} = \text{bear}) \qquad (12)$$

$$\text{IC} = 2 \cdot \text{WinRate} - 1, \qquad (13)$$

where the win rate is the fraction of periods in which the sign of $\mu_{\text{predicted}}$ matches the sign of the realized forward mean return.

In order to determine the appropriate length of training data, backtesting was performed on each ETF in which the training window was increased from 50 trading days to 1750 trading days, in increments of 50. The information coefficient shows a stable increase only once more than 1000 days (4 trading years) of training data are incorporated into the HMM model. It reached a peak of 0.195, corresponding to a win rate of 59.75%, at 1700 days (6.75 trading years), as shown in Figure 9. One-standard-deviation bars were also plotted to show the variability of the result within the universe cross-section. This heuristic search was limited to a maximum of 1750 days in our analysis; although the information coefficient showed signs of peaking at 1700 days, it is still possible that an extended training set could lead to better results. We use a training period of 1700 days for the remainder of the article.

One potential area of improvement is the reduction of variability in the prediction. When considering the return mean in expectation, any change in the probability of the latent variable indicating regime one versus regime two changes the expected return mean over the next period; the probability linearly skews the expectation of return between the two regimes. This behaviour can introduce unwanted turnover in the portfolio, since changes in returns lead to increased or reduced positions, thus increasing trading costs. One method that can mitigate this effect for slight changes in probabilities is to increase the threshold the latent variable must exceed before being allowed to jump to a new regime. Figure 10a shows, on a 1700 trading day training set, that this significantly reduces the rate at which we observe changes in the predicted mean return. However, going beyond a threshold of 0.985 also sharply reduces the model's accuracy, by forcing it to remain static even when a genuine regime change has occurred. Interestingly, actual regime changes (from low mean to high mean and vice versa, regardless of the actual value of the return mean) are much less frequent (see Figure 10b).
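Concretely, the information coefficient in Equation (13) reduces to a sign comparison; the helper below computes it from the regime-implied predictions and the realized five-day forward means. A win rate of 59.75% reproduces the reported peak value of 0.195.

```python
import numpy as np

def information_coefficient(mu_predicted, realized_forward_means):
    """Directional information coefficient as in Equation (13).

    mu_predicted           : regime-implied expected returns (Equation (12)), one per date.
    realized_forward_means : realized mean return over the following five trading days.
    """
    wins = np.sign(mu_predicted) == np.sign(realized_forward_means)
    win_rate = np.mean(wins)
    return 2.0 * win_rate - 1.0
```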
For the same training set, the number of regime changes starts at only 1.99% (regime change days/trading days total) when using a threshold of 0.7 and remains relatively stable up to a threshold of 0.95. From a threshold of 0.95 to 0.985, the number of regime changes drops by half while the information coefficient remains relatively unchanged. This implies that a number around 0.975 is a reasonable choice; this is the threshold we will use throughout the article's remainder. Dynamic View Confidence through Regime Switching Having shown the regime-switching model's predictive power, all that remains to obtain dynamic return and risk estimates is to use its predictions to construct confidence values as defined in Section 3.3. Given the view performance shown in Figure 3, our ideal expectations for the confidence prediction for the long EWG and short SPY (EWG > SPY) from both methods would roughly be low confidence in 2005, increasing confidence from 2006 to mid-2008 and back to low confidence for the remaining period. The short EWG and long SPY (EWG < SPY) view would intuitively be assigned a similar but opposite confidence level. In practice, we observe two unexpected phenomena ( Figure 11). First, the confidence levels provided by the cumulative distribution function (CDF) method exhibit too little volatility, thus having a minimal potential of affecting the actual allocation one way or another. It is worth noting that the direction changes are correct and match our expectations based on ex-post information. Upon further investigation, this phenomenon is due to the high degree of daily volatility relative to the view's low daily mean. Using the CDF of a distribution with a low mean and high standard deviation would imply that a high absolute value view return would be needed. A small view return would inevitably result in a CDF value close to 0.5 (50/50), which is indeed what we observed. Second, the sigmoid method produces confidence levels that match the expected results very closely, surprisingly also including short periods of volatility when the confidence in the view craters, presumably due to fundamental regime shifts in the market. Using the sigmoid confidence construction method, we can produce a regime based confidence level in an investor view that appears to have reasonable predictive value. However, just having a predictive confidence level for the investor views is not enough to guarantee outperformance relative to the equilibrium portfolio. We will need to consider the difficult problem of deciding how closely we should aim for the portfolio to follow the resulting BL posterior returns. Following them too closely could lead to significant transaction costs, thus invalidating all of the potential benefits that we might achieve. Since we are leveraging these return and risk estimates in our multi-period portfolio optimization model that considers transactions costs, we can adjust the γ trade hyperparameter with appropriate backtesting. Multi-Period Portfolio Optimization Armed with a set of risk and return estimates that are dynamically adjusted for regime shifts in the investor views, the open question that remains to be answered is: does regime-switching based dynamic confidence outperform static confidence? In order to perform a reasonable comparison, we will be tracking the set of two opposing views analyzed in Section 3.3: long EWG and short SPY (EWG > SPY) paired with short EWG and long SPY (EWG < SPY)), both with static confidence of 70% assigned. 
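The regime-change threshold described above can be implemented as a simple post-processing step on the smoothed state probabilities; a minimal sketch, assuming a two-state model and the 0.975 threshold adopted here, is shown below.

```python
import numpy as np

def thresholded_regimes(prob_bull, threshold=0.975, start_regime="bull"):
    """Only switch regimes once the opposite regime's probability exceeds `threshold`.

    prob_bull : smoothed P(state = bull) per period, e.g., from GaussianHMM.predict_proba.
    Returns a list of regime labels, reducing jitter at the cost of slower switches.
    """
    regimes = []
    current = start_regime
    for p in np.asarray(prob_bull):
        if current == "bull" and (1.0 - p) > threshold:
            current = "bear"
        elif current == "bear" and p > threshold:
            current = "bull"
        regimes.append(current)
    return regimes
```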
Given the static confidence value, there is only a need for one trading period at the view's initiation and subsequently at the portfolio manager's termination of the view. The period used for the optimization test was chosen to include the analysis from Section 4.2.1 but also continue until the present day in order to analyze the long term effects to portfolio performance due to leaving an older view as an input to the BL model with dynamic confidence. Specifically, the test period ranges from 1 January 2005 to 31 August 2020. Precisely as in Section 4.2.3, the regime-switching training was also performed with a training window of a set size of 1700 trading days. The δ parameter for the Black Litterman model was set the same as He and Litterman (2002), a value of 2.5 representing the world average risk aversion, while for τ a value of 4 was selected. Given the use of dynamic confidence levels, we expect the provided views to be active only when being realized, thus providing confidence that the investment views should be weighed much more in computing the return estimates than equilibrium returns. This translates to a high value for τ. Providing a low value for τ implies high confidence in the equilibrium returns. This leans the model more towards them, which would defeat our empirical tests' purpose. We want to test using different types of confidence levels for investment views, not equilibrium returns. As Meucci (2008) mentions, in practice, calibration would be performed to select an appropriate τ value that satisfies the manager's mandate as to how closely the equilibrium return benchmark should be followed, as τ → 0 the estimated returns approach the equilibrium returns. One of the critical problems with the static BL approach is immediately apparent in Figure 12 4 where we can observe that an investor view needs not only to be correct but both the entry and exits points need to be timed right as well. Specifically, the (long EWG, short SPY) view was indeed a correct view from 2005 to 2008; however, its performance was choppy and generally trending negative for the next 12 years of data. The opposite is true for the (short EWG, long SPY) view where performance was initially lagging until 2009; however, it picked up significantly over the remaining period. The performance metrics of each view are tracked in Table 2 5 with the best performance belonging to the equilibrium portfolio (containing a large allocation to SPY) closely followed by the (short EWG, long SPY) view. The initially correct but long term incorrect (long EWG, short SPY) view underperforms the other two views in all categories by a significant amount. Figure 12 shows that using the dynamic confidence model proposed can correctly assign appropriate values for view confidence depending on the underlying asset regimes. When the underlying regimes corroborate a view's prediction, the confidence in the view will increase, while in the opposite case, the confidence will decrease. However, we also observe sudden shifts in the view confidence likely driven by shorter-term shifts in the asset regimes. The reduction in unnecessary regime changes would be a ripe area of further research. For example, Nystrup et al. (2020) find that incorporating higher frequency data such as intraday rolling means and volatility can improve regime-switching models' predictive abilities while also providing a parameter that directly penalizes frequent jumps. 
Although it is useful to review the view confidence results relative to view returns over the entire 15-year period, it is worthwhile to investigate what exact shifts in asset weights the changes in confidence cause in our portfolio. To this end, Figure 13 shows the asset weight changes over a shorter period (2005-2009) as a result of confidence changes in the (long EWG, short SPY) view. Starting in 2006, the view shows excellent performance, gaining almost 40% from the start to January 2007. Given our high activation slope, the confidence in the view only triggers at the beginning of 2007 but quickly reaches 100%. Over this period, the portfolio weights remain close to the equilibrium weights, with the largest allocation being in SPY. Once the view is activated, the portfolio quickly rebalances to reduce US exposure to zero while building up to 85% exposure to Germany (EWG) from January 2007 to late 2008. We observe two short periods of view underperformance during this stretch that are correctly detected by the regime-switching model and hence result in an allocation back to SPY from EWG. In 2008, EWG experienced a correction much sharper than SPY, which the regime-switching model appears to have been slow to pick up, the turning point in confidence appearing only after more than half of the correction. As mentioned earlier, a more reactive regime-switching model using intraday features similar to Nystrup et al. (2020) could be the answer to further improving regime detection and avoiding slow regime changes such as the one in 2008.

Incorporating the dynamic confidence levels introduced in Section 3.3 through the posterior returns $\hat{r}^{BL}_\tau$ and the posterior covariance inside the risk function $\hat{\psi}^{BL}_\tau$, we simulate the performance of repeatedly solving the multi-period optimization problem from Equation (1). The hyper-parameters $\gamma^{\text{risk}}_t$ and $\gamma^{\text{trade}}_t$ are each sampled from a designated list ranging from 0.0001 to 100. One, two and five periods into the future are considered in each multi-period optimization, i.e., $H \in \{1, 2, 5\}$.

In Figure 14 we observe the efficient frontiers generated by running multiple simulations of the multi-period optimization model (1) on out-of-sample data, as elaborated in Algorithm 1. We use the sigmoid method to generate dynamic confidence across three forward horizons. Although the underlying data used to generate the regime predictions, and hence the view confidence, is of daily granularity, the regime predictions are expectations of the realized asset mean return until its regime shifts, not predictions for the next day. As elaborated in Section 4.2.2, our focus for performance validation was the prediction's realization five days ahead; as such, the portfolios were rebalanced at a weekly frequency. Across the tested horizon values, increasing the horizon shows no significant performance improvements for the 'incorrect' view.

Figure 15 is the analog of the previous figure, but this time considering the 'correct' view (the opposite). This figure highlights an interesting result that shows the performance difference we were expecting when considering multiple future periods. Namely, when returns are weighted more heavily and risk relatively less, a longer horizon is found to improve the efficient frontier, with a lookahead of five periods outperforming both one and two periods.
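Each frontier point in Figures 14 and 15 summarizes one hyper-parameter configuration by an excess return and excess risk pair; a helper of the following kind can aggregate one backtest's weekly returns (the exact excess-return definition used for the figures is an assumption here).

```python
import numpy as np

def frontier_point(portfolio_returns, benchmark_returns, periods_per_year=52):
    """Annualized excess return and excess risk for one backtest configuration,
    computed from per-period (weekly) simple returns."""
    excess = np.asarray(portfolio_returns) - np.asarray(benchmark_returns)
    ann_excess_return = excess.mean() * periods_per_year
    ann_excess_risk = excess.std(ddof=1) * np.sqrt(periods_per_year)
    return ann_excess_return, ann_excess_risk
```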
Given that we are projecting forward the same constant return and risk estimates, the outperformance can only be caused by better trading off the speed of portfolio shifts against the costs of doing so. More specifically, considering multiple periods ahead tends to prevent sudden portfolio shifts when the return forecast does not justify the costs associated with doing so. Figure 14 shows that, across 15 years of trading, the downsides of this approach (slower position shifting) are negligible, while Figure 15 shows that the upside is tangible. These results show promise that incorporating better forward return forecasts and trading costs could lead to even better performance. Overall, we found that using return and risk estimates that incorporate dynamic confidence values outperforms the static confidence portfolio for both the 'correct' and the 'incorrect' view. We find that our model's efficient frontier during out-of-sample simulation is higher. Thus, our framework provides higher excess returns for the same excess risk, regardless of the view it is provided. Conclusions This paper developed a novel multi-period trading model that allows portfolio managers to perform optimal dynamic asset allocation while easily incorporating their investment views in the market portfolio. This framework's significant advantage is its intuitive design that provides a new quantitative tool for portfolio managers. It incorporates the latest asset return regimes obtained from Hidden Markov Models (HMMs) to quantitatively answer the question: how certain should one be that a given investment view is being realized in the current market? The main contributions to the existing literature are two-fold. First, we are the first to develop an optimization model based on return and risk estimates from the Black-Litterman (BL) model in order to solve a portfolio allocation problem across a multi-period horizon. The BL model combines simple investment views with the market portfolio to arrive at its risk and return estimates (Black and Litterman 1992). As a result, placing the BL return and risk estimates in a multi-period framework allows for the introduction of dynamicity to the problem of optimally trading already provided investment views. The chosen multi-period horizon does not encompass the entire trading range, instead spanning a shorter period. This enables the optimization to remain tractable and dynamic across time, since new information about the investment views is incorporated as soon as it is realized. The multi-period relaxation method employed, aptly named receding horizon, is borrowed from model predictive control and was introduced to portfolio construction by Boyd et al. (2017). Given the increase in trading frequency over static models, convex transaction costs are also considered in the optimization objective. Second, we introduce a novel data-driven method to infer dynamically updated confidence levels for an investor's views through the use of regime-switching. Using a sigmoid activation function to obtain the confidence levels from the underlying regimes was shown to perform better in a numerical example. This method removes portfolio managers' need to provide estimated confidence levels for their views, replacing them with a dynamic quantitative approach generated directly from the latest available asset returns. The confidence in each view is obtained by computing the view's expected regime return (based on each underlying asset's current regime) and comparing it to the investor-inputted expected return.
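For reference, the return and risk estimates fed into the optimizer come from the standard Black-Litterman posterior, with the (here dynamic) view confidence entering through the view-uncertainty matrix Omega: a high-confidence view corresponds to a small diagonal entry in Omega. The sketch below assumes the textbook He-Litterman formulation; the function name and toy dimensions are illustrative, not the paper's code.

```python
import numpy as np

def bl_posterior(pi, sigma, P, q, omega, tau=4.0):
    """pi: equilibrium returns, sigma: asset covariance, P: view pick matrix,
    q: view expected returns, omega: diagonal view uncertainty, tau: BL scalar."""
    ts_inv = np.linalg.inv(tau * sigma)
    om_inv = np.linalg.inv(omega)
    precision = ts_inv + P.T @ om_inv @ P
    mu_post = np.linalg.solve(precision, ts_inv @ pi + P.T @ om_inv @ q)
    cov_post = sigma + np.linalg.inv(precision)   # posterior return covariance
    return mu_post, cov_post

# One relative view (long EWG, short SPY, expected spread +2%) at high confidence:
# P = [[1.0, -1.0]], q = [0.02], omega = [[1e-4]].
```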
The more the two forecasts agree, the higher the confidence obtained, and vice versa. The size of the confidence effect can be configured, since the model has toggles that can increase or decrease how strongly the confidence reacts to the regime-predicted returns (activating faster or slower). It is worth noting that we do not address the actual generation of investment views, since the optimization problem is focused on trading provided views effectively. To empirically test the dynamic advantage of our framework, we isolated two challenging, exactly opposite views, whose outcomes we know with the benefit of hindsight. To prove useful, the model needs to identify when views are being realized (and allocate capital to them) and, more importantly, when views are not being realized in the market (and divest capital from them). We showed that its confidence in the 'correct' view increased as the view was realized, and its confidence in the 'incorrect' view decreased when the expected returns failed to materialize. Our proposed dynamic confidence level-based asset allocation model, despite its increased trading costs, outperformed realistic BL scenarios with static confidence levels from a trading perspective. Our framework produced higher expected returns for the same portfolio risk level in the tested challenging numerical examples. Further, optimizing for multiple periods ahead (two or five) showed increased performance over a single-period model when returns are emphasized more than risk. In conclusion, we have shown our framework to be intuitive and tractable, and to improve performance (risk-adjusted return) over static Black-Litterman allocations through its ability to adjust to market conditions and asset regimes dynamically. This dynamicity allows it to allocate risk budget away from underperforming investment views and into outperforming ones shortly after the latest price information is realized. Further investigation into the implications of trying to satisfy multiple investment views simultaneously, using higher-frequency data for regime predictions, sensitivity analysis with respect to trading frequency, and incorporating multiple return forecasts and optimal execution sequences into the portfolio optimization problem is left for further research. The framework has been made available as an open-source library to facilitate future investigations and its use as an investment tool. Acknowledgments: The authors would like to thank Xia Li for her refreshing and practical perspective that helped tailor our abstract to its target audience, Alexander Remorov for his insightful feedback that led to more ways to enhance our empirical tests, and Giorgio Costa for his timely assistance with advice on how to generate clear mathematical charts. Conflicts of Interest: The authors declare no conflict of interest. Abbreviations The following abbreviations are used in this manuscript:
2020-12-24T09:04:36.700Z
2020-12-23T00:00:00.000
{ "year": 2020, "sha1": "37ffc099825c579c56bb35cfde3e7b38c8a3dba3", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1911-8074/14/1/3/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "8d76d80db34a94c18f64d01285c4015b5630dd16", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Computer Science" ] }
214705642
pes2o/s2orc
v3-fos-license
Comparison of three measurement models of soil nitrate-nitrogen based on ion-selective electrodes The ion-selective electrode (ISE) is a quick and low-cost method of soil nitrate-nitrogen (N) detection. The measurement models of soil nitrate-N based on ISEs include the linear regression model, the multiple linear regression model and the BP neural network model, among others. The three models were analyzed theoretically, measurement experiments on validation samples and on soil nitrate-N concentrations were carried out in this study, and the measurement accuracies of the three models were compared. The results showed that, in the measurement experiments on validation samples and soil nitrate-N concentrations, the BP neural network model had the highest accuracy among the three models (the average relative errors between the results of the BP neural network model and the reference values were 5.07% and 8.81%, respectively), the multiple linear regression model had the second highest accuracy (the average relative errors between the results of the multiple linear regression model and the reference values were 7.70% and 10.51%, respectively), and the linear regression model could not exclude the interference of chloride ions, so it had the lowest accuracy among the three models (the average relative errors between the results of the linear regression model and the reference values were 11.16% and 12.28%, respectively). The BP neural network model can effectively restrain the interference of chloride ions and has high accuracy for the measurement of soil nitrate-N concentration, so the BP neural network model can be used to measure soil nitrate-N concentration accurately. Introduction In recent years, the Chinese population has been increasing dramatically, but the arable land has been shrinking year after year. Food security is facing a serious challenge. Under such a situation, the Chinese government attaches great importance to this issue. Improving land productivity is a necessary way to improve the overall grain production capacity, and it is also an important measure to guarantee food security [1]. Among the many methods of increasing agricultural yield, increasing fertilizer application is one of the most effective measures. Nitrogen (N) is an essential nutrient element for crops. A shortage of N affects not only the yield of crops but also their quality [2]. However, the excessive application of N fertilizer will increase the content of nitrate-N and nitrite-N in plants [3], and some of the N will remain in the soil and groundwater in the form of nitrate, which causes environmental pollution [4]. The content of nitrate-N in soil can reflect the N supply capacity of the soil, so the detection of nitrate-N is of great significance for economical and rational fertilization that increases crop yield while reducing the waste of resources and the environmental pollution caused by excessive fertilization [5]. A simple, rapid and low-cost method is beneficial to the wider adoption of soil nitrate-N measurement technology [6]. The ion-selective electrode (ISE) method offers simple operation, fast response, low cost, and little pollution [7][8][9], which provides technical support for the detection of soil nitrate-N content. There are many kinds of soil nitrate-N measurement models based on ISEs. The choice of measurement model is one of the important factors that affect nitrate-N measurement precision.
The traditional measurement model of soil nitrate-N is a linear regression model [10], which can be used to characterize the relationship between the reading of the nitrate ISE and the concentration of nitrate ions. Because of the interference of chloride ions on the nitrate ISE [11,12], there are also multi-parameter measurement models that involve the concentrations of both chloride ions and nitrate ions. One of them is the multiple linear regression model, which reflects the linear relationships between ISE readings and ion concentrations. The Back Propagation (BP) neural network has nonlinear mapping capability [13], which can map the relationship between the nitrate ion concentration and the readings of the nitrate ISE and the chloride ISE. This study analyzed the linear regression model, the multiple linear regression model and the BP neural network model, and used them to measure soil nitrate-N concentrations. The measurement accuracies of the three models were compared in order to find the soil nitrate-N measurement model with the highest accuracy. 2 Theory background 2.1 Linear regression model The ISE is a kind of electrochemical sensor that produces a specific potential response to target ions in solution [4]. The relationship between the electrode potential and the target ion follows the Nernst equation, E = E0 + (RT/zF)·ln α (1), where E is the potential difference of the ISE, mV; E0 is the standard potential of the ISE, mV; R is the gas constant (J/(K·mol)), with the ideal value 8.314 J/(K·mol); T is the absolute temperature, K; F is the Faraday constant (C/mol), with the value 96485 C/mol; z is the electric charge number of the measured ion; and α is the activity (concentration) of the measured ion. At constant temperature, Equation (1) can be written in the simplified linear form E = K + S·lg C (2), where K is the intercept of the electrode, mV; S is the slope of the electrode, mV/decade; and C is the concentration of the measured ion, mol/L. Using the least square method to estimate K and S, the linear regression model between the electrode potential and the logarithm of the nitrate ion concentration can be obtained. Multiple linear regression model The ISE responds not only to the specific ion but also to other interfering ions. The influence of interfering ions on the electrode can be described by the selectivity coefficient [14,15]. Nikolsky modified the Nernst equation and put forward a semi-empirical formula accounting for the influence of interfering ions. Eisenman et al. carried out the strict derivation and confirmation of this formula, called the Nikolsky-Eisenman equation, described as Equation (3): E = E0 + (RT/(zA·F))·ln[αA + K(A,B)·αB^(zA/zB) + K(A,C)·αC^(zA/zC)] (3), where αA, αB and αC are the ion activities of the ion A to be measured and the interfering ions B and C, respectively; K(A,B) and K(A,C) are the selectivity coefficients of B and C, respectively; and zA, zB and zC are the electric charge numbers of A, B and C, respectively. Chloride is the main interfering ion in soil nitrate-N measurement and the influences of other interfering ions are not so obvious, so they can be ignored. Therefore, Equation (3) can be simplified to a binary model described as Equation (4): E = K + S·lg(C_NO3 + Kpot·C_Cl) (4), where C_NO3 is the concentration of nitrate ion, mol/L; C_Cl is the concentration of chloride ion; and Kpot is the selectivity coefficient of chloride ion. Multiple regression analysis is a typical multivariate calibration method for multicomponent simultaneous determination. The relationship between the change of the electrode potential and the concentration of coexisting ions is a non-deterministic correlation.
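A minimal sketch in Python of how the single-electrode calibration of Equation (2) could be fitted by least squares and then inverted to estimate an unknown concentration; the concentrations and potentials below are made-up placeholder numbers, not the paper's modeling samples.

```python
import numpy as np

conc = np.array([1e-4, 1e-3, 1e-2, 1e-1])          # nitrate concentrations, mol/L
potential = np.array([212.0, 156.0, 101.0, 47.0])   # nitrate-ISE readings, mV

# Fit E = K + S * lg(C): polyfit returns [slope, intercept].
S, K = np.polyfit(np.log10(conc), potential, deg=1)
print(f"E = {K:.1f} + {S:.1f} * lg(C)")

# Inverting the calibration to estimate an unknown sample's concentration:
E_sample = 130.0
print("estimated C =", 10 ** ((E_sample - K) / S), "mol/L")
```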
Let the logarithms of the concentrations of the measured ion NO3− and the interfering ion Cl− be independent variables, and the nitrate ISE potential be the dependent variable. Assuming a linear relationship between the dependent variable and the independent variables gives Equation (5): E1 = a0 + a1·lg C_NO3 + a2·lg C_Cl (5). The following is the proof of the feasibility of using Equation (5) to estimate the coexisting ion concentrations instead of Equation (4): expand Equations (4) and (5) according to a second-order Taylor expansion. Under conditions satisfying practical precision in engineering, both formulae can be expanded into the form Ax^2 + By^2 + Cx + Dy + E. In addition, in the experiments of Mei [16] and the experimental validation in this study, the results of the multiple linear regression model satisfied the precision requirements. Therefore, Equation (5) can substitute for Equation (4). The multiple linear regression model can therefore be expressed in terms of both electrode potentials, where E1 is the detection potential of the nitrate ISE, mV, and E2 is the detection potential of the chloride ISE, mV. n groups of experimental data (X1i, X2i, E1i, E2i), i = 1, ..., n, are used to fit the model by the least square method, where X1i is the i-th value of the logarithm of nitrate ion concentration; X̄1 is the average value of the logarithm of nitrate ion concentration; X2i is the i-th value of the logarithm of chloride ion concentration; and X̄2 is the average value of the logarithm of chloride ion concentration. 2.3 BP neural network model The BP neural network, which is a multi-layer, feed-forward artificial neural network, can perform non-linear mapping of an input space onto another output space without detailed information about the system [17,18]. It is composed of an input layer, a hidden layer(s) and an output layer, as described in Figure 1. The BP neural network should be trained with training samples before use. The training of the BP neural network consists of two phases, forward propagation of the signal and backward propagation of the error [19]. In the forward phase, the signal enters through the input layer and is sent to the hidden layer; after being processed there, it is transmitted to the output layer and processed again. The weights and thresholds of each layer are calculated by iteration. In the backward propagation, the weights and thresholds of each layer are revised from the output layer backwards to the first one based on the change of the total error, thus making the error smaller. These two phases alternate repeatedly until the total error meets the requirement. After training, the weights and thresholds of each layer are determined so that data can be measured by the BP neural network [20]. It has been proved theoretically that a three-layer BP neural network is able to approximate arbitrary functions [21] as long as the hidden layer has sufficient nodes, so a three-layer BP neural network was adopted as the measurement model. Figure 1 shows the three-layer BP neural network structure. It can be seen from Equation (3) that the relationship between the response potential of the nitrate ISE and the concentrations of nitrate and chloride ions is nonlinear. The BP neural network has strong nonlinear mapping ability, so the trained BP neural network can reflect the relationship between the nitrate ISE response potential and the concentrations of nitrate and chloride ions. Because the standard BP neural network converges slowly and easily falls into local minima, it is necessary to improve the convergence speed and accuracy of the model.
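The 2-4-1 network described above can be sketched with an off-the-shelf regressor. scikit-learn's MLPRegressor stands in here for the authors' improved BP implementation, the training pairs are illustrative placeholders, and the target is taken as lg(C) for numerical convenience (the paper's network outputs the concentration C directly).

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# X: [nitrate-ISE potential U1 (mV), chloride-ISE potential U2 (mV)]
X = np.array([[212., 95.], [156., 90.], [101., 88.], [47., 60.],
              [205., 130.], [150., 128.], [98., 120.], [45., 95.]])
y = np.log10([1e-4, 1e-3, 1e-2, 1e-1, 1e-4, 1e-3, 1e-2, 1e-1])  # lg(C)

scaler = StandardScaler().fit(X)
net = MLPRegressor(hidden_layer_sizes=(4,),      # one hidden layer of 4 neurons
                   activation="logistic", solver="lbfgs",
                   max_iter=5000, random_state=0)
net.fit(scaler.transform(X), y)

U_new = scaler.transform([[120., 100.]])         # a new pair of ISE readings
print("predicted nitrate concentration:", 10 ** net.predict(U_new)[0], "mol/L")
```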
Improved methods include an adaptive learning rate, adaptive adjustment of the error signal, adjustment of the activation-function parameter, an additional momentum term, and normalization [22][23][24][25][26]. In this study, the improved BP neural network used a 2-4-1 structure, namely a three-layer network with two input neurons, four hidden neurons and one output neuron. The inputs of the BP neural network model were the potential difference U1 (mV) of the nitrate ISE and the potential difference U2 (mV) of the chloride ISE, and the output was the concentration C (mol/L) of nitrate ion. The structure of this model is described in Figure 2. Methods Standard solutions were prepared with deionized water, sodium nitrate and sodium chloride reagents. The preparations of the modeling samples and training samples are shown in Table 1. The modeling samples of the linear regression model were the zeroth group of samples in Table 1. The modeling samples of the multiple linear regression model were the samples of groups 1-3 in Table 1. The training samples of the BP neural network model were the samples of groups 1-6 in Table 1. The concentrations of nitrate and chloride ions in the validation sample solutions are shown in Table 2. The experiment was conducted at 25 °C. ISEs should be activated before use. The activated nitrate ISE was connected to the ion meter (Figure 3). The electrode potentials of the zeroth group of modeling samples in Table 1 were detected by the ion meter successively and the linear regression model was established. The activated nitrate ISE, chloride ISE and reference electrode were connected to the ion meter (Figure 4). The electrode potentials of the first to third groups of modeling samples in Table 1 were detected by the ion meter successively, and the multiple linear regression model was established. With the nitrate ISE, chloride ISE and reference electrode connected to the ion meter in the same way (Figure 4), the electrode potentials of the first to sixth groups of modeling samples in Table 1 were detected successively and the BP neural network model was trained. A magnetic stirrer was used to stir the solutions during the measuring process. After measuring each solution, the electrodes were cleaned with deionized water and dried with filter paper. The electrode potentials in the mixed solutions of samples 1-15 were then detected successively, and the three models were verified and compared. The 50 soil samples were collected from Tongzhou in Beijing, Baiwang Mountain in Beijing, Olympic Forest Park in Beijing, Yichang City in Hubei Province, and Anyang City in Henan Province. These soil samples were collected from September to November 2016. The soil samples were dried in an oven at 105 °C for 8 hours. After drying, the soil samples were crushed with a stick and sieved through a 1 mm screen mesh. After sieving, the soil samples were divided into two parts and labeled respectively. One part of the samples was sent to the Beijing Center for Physical and Chemical Analysis (BCPCA) in China and analyzed by the spectrophotometric method, and the other part was measured by ISEs in the laboratory. Fifty 30 g dry soil samples were weighed and added into fifty 250 mL Erlenmeyer flasks. 150 mL of deionized water was added to each flask. The flasks were capped and shaken for 20 min on a horizontal oscillator.
The mixture in each Erlenmeyer flask was poured into a filter device and the leachate was collected in an Erlenmeyer flask under the filter device. The electrode potentials in the leaching solutions of the soil samples were detected by the ion meter. The concentrations of nitrate ions in the leaching solutions were calculated by the linear regression model, the multiple linear regression model and the BP neural network model, respectively, and they were compared with the values detected by the spectrophotometric method at BCPCA. The nitrate ion concentrations calculated by the three models were converted to soil nitrate-N concentrations according to Equation (9): R_NO3-N = 14×10^3 × 5 × C (9), where R_NO3-N is the mass ratio of nitrate-N to soil, mg/kg; 14×10^3 is the mass of one mole of N, mg/mol; 5 is the water-to-soil ratio, L/kg; and C is the nitrate ion concentration in the leaching solution, mol/L. Results and discussion According to the detection results of the modeling samples, the linear regression model and the multiple linear regression model were established. According to the detection results of the training samples, the BP neural network model was trained. The detection results of the validation samples were analyzed and the accuracies of the three models were compared. The linear regression model was obtained by linear fitting of the detection results of the zeroth group of sodium nitrate solutions. The multiple linear regression model was obtained by linear fitting of the detection results of the first to third groups of mixed solutions. The BP neural network model was trained with the detection results of the first to sixth groups of mixed solutions. The electrode potential values of samples 1-15 of the validation sample solutions were plugged into the linear regression model (model 1), the multiple linear regression model (model 2) and the BP neural network model (model 3), and the nitrate ion concentrations were obtained. The relative errors between these calculated values of the three models and the reference values were analyzed and compared. The results are shown in Table 3. As seen from Table 3, the maximum relative error of nitrate ion concentration calculated by the linear regression model (model 1) reached 15.96%; the maximum relative error of nitrate ion concentration calculated by the multiple linear regression model (model 2) was 12.83%; while the maximum relative error of nitrate ion concentration calculated by the BP neural network model (model 3) was 8.47%. In addition, the average relative errors of nitrate ion concentration calculated by the three models were 11.16%, 7.70% and 5.07%, respectively. To sum up, in the comparison of the three models, the BP neural network model effectively reduced the interference of chloride ion and had the highest accuracy, the accuracy of the multiple linear regression model came second, and the linear regression model had the lowest accuracy. The electrode potentials of the leaching solutions of the soil samples were plugged into the three models, and the nitrate ion concentrations were obtained. Soil nitrate-N concentrations were then calculated by Equation (9). The average relative errors of the BP neural network model, the multiple linear regression model and the linear regression model were 8.81%, 10.51% and 12.28%, respectively. From Figure 3 to Figure 5, it can be seen that the results of the three models had good linear correlations with the reference values, and the three coefficients of determination were all greater than 0.96.
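The Equation (9) conversion and the relative-error comparison used above amount to a couple of lines of Python; the sample numbers below are placeholders, not measured values from the study.

```python
def nitrate_N_mg_per_kg(c_mol_per_L):
    """Equation (9): nitrate-ion concentration in the leachate (mol/L) times the
    molar mass of N (14x10^3 mg/mol) times the 5 L/kg water-to-soil ratio."""
    return c_mol_per_L * 14e3 * 5

def relative_error_percent(measured, reference):
    return abs(measured - reference) / reference * 100.0

c_from_model = 2.4e-4            # mol/L, output of one of the three models
reference = 17.5                 # mg/kg, spectrophotometric reference value
soil_n = nitrate_N_mg_per_kg(c_from_model)
print(soil_n, "mg/kg;", relative_error_percent(soil_n, reference), "% error")
```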
Therefore, the BP neural network model was the most accurate of the three models for measuring soil nitrate-N concentration, the accuracy of the multiple linear regression model was second, and the linear regression model had the lowest accuracy. The BP neural network model can effectively restrain the interference of chloride ions on the nitrate ISE, and it can be used to measure soil nitrate-N concentrations accurately. Conclusions Measurement experiments of nitrate-N were carried out using a linear regression model, a multiple linear regression model and an improved BP neural network model, and the measurement accuracies of the three models were compared and analyzed. Results showed that the average relative errors for the validation sample solutions measured by the three models were 11.16%, 7.70% and 5.07%, respectively; the average relative errors for the measurement of soil samples were 12.28%, 10.51% and 8.81%, respectively; and the coefficients of determination between the values calculated by the three models and the reference values measured by the spectrophotometric method were all greater than 0.96. The accuracy of the BP neural network model was the highest among the three models, the multiple linear regression model had the second highest accuracy, and the accuracy of the linear regression model was the lowest. The BP neural network model can effectively restrain the interference of coexisting chloride ions and can be used for accurate measurement of soil nitrate-N.
2020-03-12T10:36:35.107Z
2020-03-02T00:00:00.000
{ "year": 2020, "sha1": "d5ec89c35bf82d59653bb6de7b48c037ca781586", "oa_license": "CCBY", "oa_url": "https://ijabe.org/index.php/ijabe/article/download/3599/pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "763a94ac94b5a439eb73114b8f3ef5eebba34c32", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Mathematics" ] }
103396312
pes2o/s2orc
v3-fos-license
Surface-Enhanced Raman Scattering-Active Substrate Prepared with New Plasmon-Activated Water Conventionally, reactions in aqueous solutions are prepared using deionized (DI) water, the properties of which are related to inert “bulk water” comprising a tetrahedral hydrogen-bonded network. In this work, we demonstrate the distinct benefits of using in situ plasmon-activated water (PAW) with reduced hydrogen bonds instead of DI water in electrochemical reactions, which generally are governed by diffusion and kinetic controls. Compared with DI water-based systems, the diffusion coefficient and the electron-transfer rate constant of K3Fe(CN)6 in PAW in situ can be increased by ca. 35 and 15%, respectively. These advantages are responsible for the improved performance of surface-enhanced Raman scattering (SERS). On the basis of PAW in situ, the twofold-higher SERS intensity of rhodamine 6G and the corresponding low relative standard deviation of 5%, comparable to and even better than those based on complicated processes shown in the literature, are encouraging. ■ INTRODUCTION Water is the most commonly used environmentally friendly solvent for chemical reactions in solution. Compared with other solvents, water is able to form a flexible, dynamic hydrogen-bonded network, in which hydrogen bonds (HBs) are dynamic in picoseconds, 1 which makes investigating its local structure challenging. 2−4 Thus, although most properties of liquid water are determined by its HBs, 5 all of water's commonly recognized properties are related to inert "bulk water" composed of tetrahedral hydrogen-bonded networks. Water as a solvent is conventionally considered a passive spectator in chemical reactions. Actually, liquid water has emerged as a promising active reactant using its characteristic donor–bridge–acceptor property for proton transfer and electron donating. 6−8 Moreover, liquid water is conventionally considered an independent reactant. However, as shown in the literature regarding the hydrogen evolution reaction (HER), the interaction energy of H3O+−OH− is 46.9 kJ mol−1, but it increases approximately 2.5 times when H3O+ associates with an additional four water molecules by HBs. 9 Meanwhile, gas-phase water is capable of catalyzing many chemical reactions 10−12 through its ability to form HBs with other molecules, because more free water molecules are available in the gas phase, compared with liquid-phase water with a more perfect tetrahedral symmetry. These facts inspired us to create active liquid plasmon-activated water (PAW) with reduced HBs from deionized (DI) water at room temperature using hot electron transfer (HET) on resonantly illuminated gold nanoparticles (AuNPs). 13 The created PAW can be innovatively employed as an environmentally friendly etching agent (vapor from hot electron-activated liquid water), 14 in more efficient HERs 15 and in increasing the efficiency and safety of hemodialysis. 16 These innovative applications of PAW instead of conventional DI water in green chemistry, energy, and medicine open new aspects of the effects of liquid PAW on various water-related chemical reactions. In surface science, compared with AgNPs, relatively stable AuNPs with well-defined localized surface plasmon resonance bands in the visible and near-infrared regions are applicable to surface-enhanced Raman scattering (SERS) 17,18 and HET for catalyzing chemical reactions. 19,20
To further improve the plasmonic efficiency of HET, novel hot dog-structured Au nanorod (NR)@Cu2O NPs were designed and synthesized. 21 In this unique structure, the head-exposed Au NR provides a plasmonically enhanced local electric field at the surface of the photocatalyst to improve the transfer of electrons. A similar strategy was proposed for enhancing the plasmonic efficiency of HET based on Au-decorated ZnO corn silk. 22 The formation of a Schottky barrier at the Au/ZnO interface can retard electron−hole recombination, thus increasing the photoelectrochemical efficiency. With significant improvements in nanotechnology, the SERS enhancement is correspondingly increased, as expected, making the detection of single molecules possible. 23,24 However, the reproducibility of SERS signals is also an important parameter of concern for reliable application. Unfortunately, high SERS enhancement generally corresponds to low reproducibility and to poor stability of SERS signals, which makes its application unreliable. 25,26 Core@shell and array structures are two popularly employed strategies to improve SERS enhancement while maintaining both reproducibility and stability, but the fabrication procedures are correspondingly complicated. 27,28 Using femtosecond transient absorption spectroscopy with a visible pump and an infrared probe to observe the generation of injected electrons, plasmon-induced electron transfer from 10 nm Au nanodots to a TiO2 nanocrystalline film was directly observed. It was revealed that the reaction time was within 240 fs and the yield was about 40%. 29 On the other hand, reaction and diffusion rates in the water cage are of interest. The much higher intramolecular kinetic isotope effects (KIEs) compared with the intermolecular KIEs of the same chemical reaction, R−H + •OH → R• + H2O, indicate a high degree of mobility of the two reaction partners inside the solvent cage. 30 As shown in our previous study, 31 the activity of created PAW decays after a few days. Thus, we developed a multifunctional excited AuNP-decorated artificial kidney with efficient hemodialysis and therapeutic potential using PAW created in situ. 32 In this work, we demonstrate the innovative advantages of the in situ preparation of PAW for electrochemical reactions, namely a higher diffusion coefficient and electron-transfer rate, the two factors that generally govern such reactions. The effects of the in situ preparation of PAW on the correspondingly increased SERS signal and on improved signal reproducibility are also demonstrated in reference to traditional DI water. ■ RESULTS AND DISCUSSION Electrochemical Reactions in PAW of Roughened Au Substrates Prepared Using PAW. As shown in the literature regarding SERS studies, 33,34 controllable and reproducible surface roughness is readily produced by electrochemical oxidation−reduction cycle (ORC) treatments in aqueous solutions containing chloride electrolytes. Figure 1a shows an image of the in situ production of PAW. Figure 1b demonstrates typical triangular voltammetric curves in the fifth scan for the anodic dissolution of Au and the cathodic redeposition of Au on substrates in solutions of 0.1 M KCl based on PAW with reduced HBs and on DI water with stronger HBs. 13
In these plots, the terms "DIW" and "PAW", respectively, represent ORC procedures performed in DI water and in prepared PAW without additional resonant illumination of green light-emitting diodes (LEDs) during the experiments. The terms "DIW in situ" and "PAW in situ" represent ORC procedures, respectively, performed in DI water and in prepared PAW under resonant irradiation of green LEDs during the experiments. Basically, the dissolution of Au and the redeposition of AuNPs onto substrates were easier in the PAW-based electrolytes, as indicated by the enhanced currents, than in the DI water-based electrolytes. In the ORC treatment for roughening the Au substrate, AuNPs were deposited on the Au substrate. Therefore, water with stronger HBs at the AuNPs on the Au substrate under resonant illumination could be transformed into PAW with reduced HBs. 13 Compared with the "DIW" sample, the cathodic redeposition currents at ca. 0.33 V versus Ag/AgCl increased by 2.7 and 4.3%, respectively, for the "DIW in situ" and "PAW" samples. These increases were more significant for the "PAW in situ" sample, which increased by 32.5%. With up to 20 scans, as shown in Figure 1c, these increases in cathodic redeposition currents at ca. 0.33 V versus Ag/AgCl were more significant. Compared with the "DIW" sample, the cathodic redeposition currents increased by 9.5 and 10.0%, respectively, for the "DIW in situ" and "PAW" samples. The difference between the increased currents of the "DIW in situ" and "PAW" samples became slight with increasing scans. This means that the original DI water with stronger HBs at the AuNPs on the Au substrate under resonant illumination was indeed transformed into in situ PAW with reduced HBs. Similarly, these increases were more distinguishable for the "PAW in situ" sample, which increased by 41.6%. This great increase of 41.6%, compared with those of 9.5 and 10.0%, indicated that the original PAW with reduced HBs at the AuNPs on the Au substrate under resonant illumination could be more easily transformed into in situ PAW with greater numbers of weaker HBs available. This also suggests that a synergistic effect on further reducing the HBs of water indeed occurred for PAW with intrinsically reduced HBs on an AuNP-deposited Au electrode under resonant illumination. It is recognized that the obtained currents depend strongly on kinetic and diffusion control in electrochemical reactions. These results suggested that the employed KCl electrolytes could diffuse more efficiently in ORCs, and higher electron-transfer rates could occur at the Au electrode, in PAW-based solutions, especially in in situ-based PAW solutions, which contributed to the higher currents obtained at constant applied potentials. The different effects of using PAW instead of conventional DI water on the corresponding ORC treatments for roughening Au substrates could result from the respectively different kinetics- and diffusion-controlled reactions, which were shown in their significantly different cyclic voltammograms (CVs). These interesting phenomena are discussed below. After preparing roughened Au substrates as described in Figure 1c, the same substrates were further examined to obtain their specific surface areas, as shown in Figure 1d regarding the corresponding CVs at 50 mV s−1 of 50 mM K3Fe(CN)6 in DI water-based solutions. An additional blank sample of a mechanically polished Au electrode without further ORC treatment was also used for reference.
It can be observed that, as expected, both the anodic and cathodic peak currents based on roughened Au substrates were higher than those of the blank flat Au electrode. Compared with the cathodic peak current of the blank Au electrode, those values increased by ca. 7.6, 10.9, 10.9, and 11.5%, respectively, for the "DIW", "DIW in situ", "PAW", and "PAW in situ" samples. Similarly, the increased currents for the "DIW in situ" and "PAW" samples based on waters with reduced HBs were at the same level, which was higher than that of the "DIW" sample based on water with stronger HBs. This increase was slightly enhanced for the "PAW in situ" sample based on water with greater numbers of reduced HBs. The specific surface areas of the roughened Au electrodes can be calculated according to the Randles−Sevcik equation, 35 Ip = 269,000·n^(3/2)·A·C·D^(1/2)·υ^(1/2) (1), where Ip is the peak current, n is the number of electrons transferred (n = 1 for this system), A is the specific surface area of the electrode, D is the diffusion coefficient, C is the concentration of the electrolyte, and υ is the scan rate. In experiments using the same electrolyte, concentration, and DI water at the same scan rate, the specific surface area is proportional to the recorded current. Compared with the blank sample with a surface area of 0.238 cm2, the calculated specific surface areas were 0.256, 0.264, 0.264, and 0.265 cm2 for the same geometric area. Figure 2 shows CVs at different scan rates using the model probe molecule K3Fe(CN)6 in different waters, with and without resonant illumination in the experiments, on roughened Au substrates ("DIW" samples prepared using DI water without resonant illumination). According to eq 1, the peak current is proportional to the square root of the diffusion coefficient. The calculated diffusion coefficient in PAW with resonant illumination at 50 mV s−1 was 1.54 × 10−6 cm2 s−1 [with a relative standard deviation (RSD) of 6.1%]. The calculated diffusion coefficient in DI water without resonant illumination was 1.14 × 10−6 cm2 s−1 (with an RSD of 8.6%). The other two sets of reproducible experiments are demonstrated in Figure S1. This was a ca. 35% higher diffusion coefficient in PAW with resonant illumination at 50 mV s−1 compared with that in DI water without resonant illumination at 50 mV s−1. Moreover, this increase in the diffusion coefficient (1.39 × 10−6 cm2 s−1, RSD = 8.3%) was ca. 22% in magnitude for the experiment in PAW with resonant illumination on the roughened Au substrate (the "PAW in situ" sample) prepared using PAW with resonant illumination (calculated from Figure S2). This suggests that PAW in situ had a function of enhancing the diffusion ability of species in water. The reason might be attributed to the reduced size of hydrated Fe(CN)6 3−/4− in the water. In water with an intact, strong HB network, hydration is associated with large water clusters. Contrarily, breaking the HB structure can reduce the sizes of water clusters, resulting in higher mobility of Fe(CN)6 3−/4−−H2O. Similarly, the diffusion coefficients increased by ca. 36, 50, 40, and 46% in PAW with resonant illumination compared with the experiments in the DI water system without resonant illumination at 100, 200, 400, and 600 mV s−1, respectively. Moreover, these diffusion coefficients increased by ca.
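A small numerical sketch of how eq 1 is used in both directions above (electrode area from a known diffusion coefficient, or diffusion coefficient from a known area). The unit conventions (Ip in A, A in cm2, C in mol/cm3, D in cm2/s, υ in V/s) and the example numbers are assumptions for illustration, not values taken from the paper.

```python
def area_from_peak_current(i_p, n, C, D, v):
    """Randles-Sevcik (eq 1) solved for the electrode area A."""
    return i_p / (269000.0 * n**1.5 * C * D**0.5 * v**0.5)

def diffusion_from_peak_current(i_p, n, A, C, v):
    """Randles-Sevcik (eq 1) solved for the diffusion coefficient D."""
    return (i_p / (269000.0 * n**1.5 * A * C * v**0.5)) ** 2

# 50 mM K3Fe(CN)6 = 5e-5 mol/cm^3, one-electron couple, 50 mV/s scan rate,
# hypothetical peak current of 0.2 mA and electrode area of 0.26 cm^2.
print(diffusion_from_peak_current(i_p=2.0e-4, n=1, A=0.26, C=5e-5, v=0.05))
```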
30, 43, 32, and 36% in PAW with resonant illumination on the roughened Au substrate ("PAW in situ" samples) prepared using PAW with resonant illumination (calculated from Figure S2), compared with those in the DI water system without resonant illumination, at 100, 200, 400, and 600 mV s−1, respectively. In addition, as the scan rate (υ) increased from 50 to 600 mV s−1, the redox peak currents of Fe(CN)6 3−/4− simultaneously increased. The low-conductivity solution, which cannot respond instantly at high υ because of the slow electron-transfer rate in the absence of supporting electrolytes, resulted in more positive and more negative shifts in Epa and Epc, respectively, as υ increased. Furthermore, when the anodic-to-cathodic peak separation was >0.2 V and υ was >200 mV s−1, the peak potentials were proportional to the natural logarithm of υ (Figure 2c,d). Two linear equations (with most R2 values >0.999) were obtained for the anodic and cathodic peak potentials. The electron-transfer kinetics were then analyzed according to eqs 2 and 3, 36 in which E0′ is the formal potential (i.e., the average of Epa and Epc), α is the electron-transfer coefficient, n is the number of electrons transferred, T is the absolute temperature, R is the gas constant, F is Faraday's constant, ka and kc are the slopes of the anodic and cathodic peak potentials versus ln υ, respectively, and ks is the apparent heterogeneous electron-transfer rate constant. The n value of Fe(CN)6 3−/4− was 1. Therefore, the ks constant was calculated to be 0.1083 s−1 (with an RSD of 3.2%) for PAW with resonant illumination in the experiments, which was higher (by ca. 15% in magnitude) than the value of 0.0945 s−1 (with an RSD of 0.3%) for DI water without resonant illumination in the experiments. Moreover, this increase in the electron-transfer rate constant was ca. 16% in magnitude for the experiment in PAW with resonant illumination on the roughened Au substrate (the "PAW in situ" sample) prepared using PAW with resonant illumination (0.1097 s−1 with an RSD of 9.7%), calculated from Figure S2. This indicates that interactions of Fe(CN)6 3−/4− with its surrounding water molecules could influence the ability of electron transfer. Fe(CN)6 3−/4− being embedded within large water clusters hindered its electron transfer. Moreover, we performed a similar experiment in DI water as shown in Figure 2a, but the CV experiment of K3Fe(CN)6 was performed in a completely dark atmosphere, not under the normal condition of indoor lighting from fluorescent lamps, as shown in Figure S3. The calculated diffusion coefficient and electron-transfer rate constant were 1.06 × 10−6 cm2 s−1 and 0.0880 s−1, respectively, which were ca. 7 and 7% lower in magnitude compared with the corresponding values in experiments performed in DI water under normal indoor lights. These results suggest that the available indoor lighting of fluorescent lamps, with full visible wavelengths, is also effective in creating in situ PAW from DI water on the roughened Au substrate. Certainly, this effectiveness was less than that with the resonant illumination of green LEDs. Performances of SERS Signals on SERS-Active Au Substrates Prepared Using PAW and DI Water. Figure 3 shows the Raman spectra of rhodamine 6G (R6G) adsorbed onto roughened Au substrates (similar to the "PAW in situ" sample shown in Figure 1c) prepared in 0.1 M KCl using PAW with resonant illumination in ORC treatments with different numbers of scans. They are characteristic Raman spectra of R6G. 37−39 The band at ca.
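The extraction of α (and, from it, ks) from the two fitted slopes is not spelled out in the equations reproduced here, so the short sketch below assumes the Laviron relations usually applied in this situation: an anodic slope of RT/((1−α)nF) and a cathodic slope of −RT/(αnF) for Ep versus ln υ. Treat it as a guess at the intended calculation; the slope values are made up.

```python
R, F, T, n = 8.314, 96485.0, 298.15, 1   # gas constant, Faraday constant, K, electrons

k_a, k_c = 0.047, -0.057   # illustrative fitted slopes of E_pa and E_pc vs ln(v), in V

alpha = k_a / (k_a + abs(k_c))              # from the ratio of the two slopes
alpha_check = R * T / (abs(k_c) * n * F)    # from the cathodic slope alone
print(round(alpha, 2), round(alpha_check, 2))   # both ~0.45 for these toy slopes
```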
1184 cm−1 was assigned to the C−H in-plane bend mode; bands at ca. 1313 and 1576 cm−1 were assigned to N−H in-plane bend modes; and bands at ca. 1361, 1510, and 1649 cm−1 were assigned to C−C stretching modes. Our previous study 40 suggested that 25 cycles was the optimum number to obtain the strongest SERS effect in roughening the Au substrate using ORC treatment in a DI water-based solution without resonant illumination. Increasing the number of ORC cycles can increase the specific surface area available. However, subsequent Au deposition would fill up the porous surface of the Au already deposited on the substrate if the number of cycles exceeds the optimum value. This results in a lower SERS effect when the number of cycles is increased beyond 25. In this work, the ORC treatment was performed in PAW-based solutions with resonant illumination, in which electrolytes could diffuse and electrons could transfer to the electrodes more efficiently. Thus, as shown in Figure 3, 20 cycles of the common ORC treatment were sufficient to obtain the strongest SERS effect. The SERS intensity obtained was comparable to that with 25 cycles, which is generally used in SERS studies. The SERS effect was judged from the SERS intensity of the strongest R6G band (at ca. 1361 cm−1). This suggests that the treatment time for obtaining the strongest SERS effect can be significantly reduced, by ca. 20%, in the commonly employed ORC method using in situ PAW instead of conventional DI water. In addition to enhanced SERS intensity, the reproducibility of the SERS intensities is also a matter of concern for reliable application. As shown in Figure 3, the extremely low RSD of 4% for 20 scans, compared with the somewhat higher RSD of 14% for 25 scans, indicated that the optimal SERS-active Au substrate could be prepared using the ORC treatment for 20 scans in PAW under resonant illumination. This developed SERS-active Au substrate with excellent reproducibility, obtained using the easy one-step electrochemical method, was comparable to and even better than those of SERS-active metal arrays based on complicated procedures shown in the literature. 41−44 The effect of the number of cycles used in the ORC treatment on the corresponding SERS performance of the roughened Au substrate (similar to the "PAW" sample shown in Figure 1c) was also examined using PAW without resonant illumination in the ORC treatment, as shown in Figure S4. Similarly, 20 scans used in the ORC treatment were the optimal number of cycles from the viewpoints of the SERS effect and its reproducibility. Figure 4 shows the SERS spectra of probed R6G adsorbed on SERS-active substrates produced employing different kinds of water, with and without resonant illumination, in ORC treatments for 20 scans. The corresponding RSDs based on five measurements are also shown in these plots. The four prepared SERS-active Au substrates were similar to those shown in Figure 1c. Interestingly, the average relative SERS intensities of R6G observed on the Au substrates produced in DI water-based solutions with resonant illumination (the "DIW in situ" sample, Figure 4b) and in the PAW-based solution without resonant illumination (the "PAW" sample, Figure 4c) both increased, compared with that obtained on the Au substrate produced in the DI water-based solution without resonant illumination (the "DIW" sample, Figure 4a). The former and latter increased by ca. 67 and 52%, respectively.
Moreover, this increase could be enhanced to ca. 2.1-fold (ca. 3-fold for another probe molecule, deposited polypyrrole; see Figure S5) for the Au substrate produced in the PAW-based solution with resonant illumination (the "PAW in situ" sample, Figure 4d). Detailed calculations showed that, compared with the DIW system, the SERS intensities at 1184 (C−H in-plane bend mode), 1313 (N−H in-plane bend mode), 1361 (C−C stretching mode), and 1510 cm−1 (C−C stretching mode) were enhanced 2.7-, 1.8-, 2.1-, and 2.1-fold, respectively, for the PAW in situ system. In obtaining the relative intensities, the normalized Raman intensity was used. This was calculated from the ratio of the strongest signal intensity of R6G adsorbed on the SERS-active substrate produced in the PAW-based solution to that of R6G adsorbed on the SERS-active substrate produced in the DI water-based solution without resonant illumination. Thus, it is not necessary to apply a correction to the normal Raman scattering intensity to account for differences in sampling geometry and scattering phenomena, as shown in the literature. 45 Moreover, the RSD of the SERS intensity of R6G adsorbed on the ORC treatment-prepared "DIW" sample, prepared in DI water with strong HBs, was acceptable at ca. 16%, as shown in Figure 4a. Encouragingly, this RSD was significantly reduced to 7% for the "DIW in situ" and "PAW" samples prepared in PAW-containing water with reduced HBs, as shown in Figure 4b,c. In particular, this reduction in RSD (to 5%; RSD 6% for the other probe molecule, deposited polypyrrole, see Figure S5) was more significant for the "PAW in situ" sample prepared in water with greater numbers of reduced HBs, as shown in Figure 4d. Examining the results shown in Figures 1, 2, and 4, it is found that a higher diffusion coefficient for electrolytes in solution and a higher electron-transfer rate constant are responsible for the higher current recorded in the ORC treatment for obtaining a SERS-active Au substrate. Correspondingly, these higher diffusion coefficients and electron-transfer rate constants are responsible for the higher SERS activity and the better signal reproducibility. The electrochemical ORC procedure in a DI water-based system is a good method to prepare a SERS-active substrate with acceptable signal reproducibility, with an RSD of ca. 16%. This RSD can be further reduced to 5% using an in situ PAW system (with a higher diffusion coefficient for electrolytes in solution and a higher electron-transfer rate constant) in the ORC procedure. In addition, the good reproducibility of the SERS signal of probed R6G obtained on SERS-active substrates produced in PAW-based solutions was further investigated using SERS mapping. Point-by-point maps recorded from an area of 20 × 20 μm2 with 2 μm steps for SERS-active Au substrates produced in DI water- and PAW-based solutions with or without resonant illumination are demonstrated in Figure 5. In an individual plot, the variation in the color of the blocks depends on the highest and lowest SERS intensities of R6G over the area for each sample. Thus, blocks with different colors represent different intensities of the SERS signals of R6G. The maps were obtained using the band area of the baseline-corrected band at ca. 1361 cm−1 (the strongest SERS band of R6G). A more uniform color of the blocks simply indicates more uniform measured SERS intensities over the area.
As shown in Figure 5a, the SERS map on the Au substrate (the "DIW" sample) prepared in DI water-based solutions without resonant illumination demonstrated obviously large spatial variations in SERS intensity. Encouragingly, the intensity maps on roughened Au substrates were more uniform when the substrates were prepared in DI water-based solutions with resonant illumination (the "DIW in situ" sample, Figure 5b) and in PAW-based solutions without resonant illumination (the "PAW" sample, Figure 5c). Most importantly, the intensity map on the roughened Au substrate exhibited a very uniform color for the substrate prepared in full-time PAW-based solutions with resonant illumination (the "PAW in situ" sample, Figure 5d). These phenomena were consistent with the observations shown in Figure 4. This effect can be ascribed to the easier reduction of Au on the Au substrate in PAW-containing solutions with reduced HBs, in which electrolytes diffuse and electrons transfer at electrodes more efficiently. Thus, metal NPs with hot spots can be more uniformly deposited onto substrates. This available strategy for obtaining uniform SERS-active substrates is quite easy and convenient. Figure 6 demonstrates typical microstructures of AuNP-deposited substrates prepared in 0.1 M chloride-containing aqueous solutions after electrochemical ORC treatments. Compared with the "DIW" sample prepared in DI water with strong HBs, shown in Figure 6a, it can clearly be observed that more even surfaces with closely packed NPs were obtained for the "DIW in situ" and "PAW" samples prepared in PAW-containing water with reduced HBs, as shown in Figure 6b,c. A similar phenomenon was also observed for the "PAW in situ" sample prepared in water with greater numbers of reduced HBs, as shown in Figure 6d. Because molecules located between two metallic NPs display the greatest SERS enhancement, 46,47 this kind of microstructure with closely packed NPs can provide more, and more evenly distributed, chances for probe molecules to be adsorbed between two metallic NPs. Therefore, more, and more evenly distributed, hot spots should be observed on these PAW-based SERS-active Au substrates. Surface morphologies of the roughened Au substrates prepared in the different 0.1 M KCl-based waters were also examined by atomic force microscopy (AFM). Figure S6 shows 2D surface images of the roughened Au substrates. As shown in all four of these images, structural features of the redeposited AuNPs demonstrated dimensions of ca. 100 nm, which is suitable for SERS studies. 48,49 Calculated average values of the mean roughness (Ra) of roughened Au substrates prepared using PAW and DI water, with and without resonant illumination, were 92.3 (RSD 15.8%), 66.9 (RSD 15.6%), 66.6 (RSD 9.9%), and 45.2 (RSD 5.0%) nm for the "DIW", "DIW in situ", "PAW", and "PAW in situ" samples, respectively. In the calculations, we obtained five Ra values, which were calculated automatically by a program attached to the instrument, based on five randomly selected lines on the same sample (see Figure S7 of the "DIW" sample for an explanation). The five random lines were roughly equally spaced across the entire image and were selected in a similar way for the different samples to the greatest extent possible. The average Ra of an individual sample was determined from the three middle Ra values, after removing the largest and the smallest ones.
The higher value of Ra recorded for the DI water-based system without resonant illumination during preparation suggests that this AuNP-deposited substrate was rougher because of the uneven sizes and low density of the AuNPs deposited on the substrate. Interestingly, the Ra values were markedly reduced, by ca. 28% in magnitude, for the "DIW in situ" and "PAW" samples prepared in PAW-containing systems with reduced HBs because of more even surfaces with closely packed NPs. Moreover, this reduction was enhanced to ca. 51% in magnitude for the Au substrate produced in the PAW-based solution with resonant illumination (the "PAW in situ" sample). This phenomenon indicates that an even AuNP-deposited surface can be created by employing PAW-containing water instead of general DI water in the ORC treatment, especially when using the PAW-based solution with resonant illumination during preparation. This promises that the SERS technique can be reliably applied in analytical chemistry, as discussed above for Figures 4 and 5. These encouraging results all suggest that excellent, reproducible SERS signals from easy one-step electrochemical methods are practicable using the in situ PAW developed in this work. ■ CONCLUSIONS We successfully utilized the created in situ PAW with significantly reduced HBs to perform electrochemical reactions more efficiently. Compared with conventional DI water, the diffusion coefficient and electron-transfer rate constant on an Au substrate in this in situ PAW were increased by ca. 35 and 15% in magnitude, respectively. These marked increases were responsible for more efficient chemical reactions, which generally are governed by diffusion and kinetic controls, in this newly developed water-based solvent. To prepare the SERS-active Au substrate using the simple ORC treatment in this in situ PAW instead of DI water, the preparation time can be significantly reduced. Moreover, the SERS effect and signal reproducibility were both improved using the in situ PAW system. In particular, the extremely low RSD of 4% of the SERS signals based on this easy one-step electrochemical method was comparable to and even better than those of SERS-active metal arrays based on complicated procedures shown in the literature. These findings, that in situ PAW can intrinsically improve chemical reactions much as some organic solvents do, are the first to be reported in the literature. They promise applicability in environmentally friendly, water-based fields for investigating innovative aspects of the effects of liquid water with reduced HBs. ■ EXPERIMENTAL SECTION Preparations of PAW ex Situ and PAW in Situ. PAW (ex situ) was prepared using a previously described method. 13 PAW in situ was prepared in a DI water-filled glass cell containing a roughened Au substrate with AuNPs. In the experiments, the glass cell with DI water was illuminated with green LEDs to create PAW in situ at the AuNPs (see details in the Supporting Information). Preparation of the SERS-Active Au Substrate. The Au electrode was cycled in a deoxygenated 0.1 M KCl aqueous solution (40 mL, based on PAW or DI water) from −0.28 to +1.22 V versus Ag/AgCl at 500 mV s−1, with and without resonant illumination (green LEDs), for 20 scans. The durations at the cathodic and anodic vertices were 10 and 5 s, respectively. Finally, the potential was held at the cathodic vertex before the roughened Au electrode was taken from the solution and thoroughly rinsed with DI water (see details in the Supporting Information).
Creation, Christians and Environmental Stewardship

This article is based on a theoretical discussion between religion and environmentalism. The text aims to present a debate between the principles of Christianity and the theoretical discussions that are fundamental to today's environmentalist vision. It leads to a theological and culturalist argument engaging the general concepts of the environmental movement, particularly in Western culture. The author takes up the theological debate, with biblical texts of the Old and New Testament as a source, in order to present Biblical principles of respect for nature. Dialoguing with the concepts of anthropocentrism and biocentrism used in environmental movements, this paper seeks to support Christian principles of stewardship as a theocentric environmental proposal for the relationship between humans and nature.

The intensity of environmental degradation is so obvious that it may seem irrelevant to ask questions like "Should we be concerned?" or even "Why should we be concerned?" But responses are still surprisingly naïve! While most Christians will be alarmed about environmental degradation, they will still question the need for any direct action as Christians. "This will take away from our priority of preaching the Gospel," some will react. They would argue that environmental issues are only for government and specialized non-government agencies, certainly not for the church. And those who get involved will only justify their action as something good and creditable, or even urgent, but not because of any direct connection to the Gospel.

A strong attack on the biblical doctrine of creation was issued by Lynn White Jr., and this could be a good starting point to help challenge our complacence (White Jr 1967). White argued that the teaching that "it is God's will that man exploit nature for his proper ends" has largely contributed to our present predicament. The Genesis passages commanding Adam and Eve to "rule" and have "dominion" are shown to have led to an arrogant exploitation of nature. These texts have received much scholarly attention recently, and renewed attempts have been made to understand their meaning within their right context. But the burden rests heavily on us to correct any such impressions that the Bible has actually commanded us to do whatever we want with creation.

Lynn White Jr. added further: Especially in its Western form, Christianity is the most anthropocentric religion the world has seen. Christianity, in absolute contrast to ancient paganism and Asia's religions (except, perhaps, Zoroastrianism), not only established a dualism of man and nature, but also insisted that it is God's will that man exploit nature for his proper end (White Jr 1967).

Lynn White's small but seminal article, delivered as a lecture in 1966, has been more often quoted than any other environmental challenge and demands our attention. It raises two questions. The first question is: does the Bible authorize exploitation of the created order? And second, is Christianity an anthropocentric religion?

There are various ways in which we can respond to these challenges, and the scope of our treatment can be even wider. But considering the fact that we are dedicating these articles to Brian Wintle, a dedicated biblical scholar, I will restrict my treatment to a biblical exposition that will help us develop a responsible Christian attitude to get involved.

WHAT IS THE BASIS OF OUR INVOLVEMENT?
Let us begin on a more positive note and consider one of the main reasons for our involvement in environmental action. These are our opening words as we often repeat the Apostles' Creed: "I believe in God the Father, maker of heaven and earth." In doing this we affirm our faith in a Creator God. This means our environmental action is something we cannot help but demonstrate, being God's created beings and living within the wider created order. Such a positive start will help us negate the attitude that many Christians still hold, that the world is evil and too much involvement in the world (or with creation) will make us "worldly."

A careful look at the Bible will reveal that ecological and environmental concerns are very much central to its message. The Word of God starts with the glorious account of God's creation. God promised the best of created things to the people he made to be his own. The prophets looked forward to a renewed creation. Jesus displayed a very positive attitude to all that was around him. Paul spoke about creation groaning for redemption, just as much as human beings are groaning.

BEGINNING WITH CREATION

One of the first things to do is to recover a positive attitude towards creation and challenge the notion that the world and creation are evil. We must start with the powerful truth that there is an ongoing relationship between God and his creation. In saying God is Creator, we are affirming that it is God who is Lord, and that it is God who is the initiator and the sustainer and who therefore continues to graciously relate to a creation of which we are only a part. The Bible claims that it is through creation that even God may be known: "The heavens proclaim his righteousness and all the peoples see his glory" (Psalm 97:6). Several other portions of the Scriptures (for instance Psalm 19:1f.) bear testimony to God's glorious manifestation through creation.

The Old Testament scholar Walter Brueggemann graphically depicts the systemic beauty of harmony and obedience between the Creator and creation as a process of communication. He calls it "speaking and listening." God creates by speaking, and therefore the responsibility of creation is to listen and answer. Communication between partners is built on speaking and listening. Creation is an intimate and valuable partner with its creator, not just an object constructed or put together for pleasure (Brueggemann 1977, p. 6).
In becoming a partner, God does not lose his distance from creation. He is both transcendent and immanent. This bond between the Creator and the creation is aptly explained by Brueggemann in terms of "closeness" and "distance." While closeness signifies a constant care between Creator and creation, distance underlines the individuality, identity, and respect that one shows to the other. And this applies in both directions, from creation to Creator and from Creator to creation. Each has its place of honour and purpose, and each is related to the other through this inextricable bond. This kind of relationship avoids any confusion caused by pantheism or dualism. Pantheism states that God is everywhere and in everything. Some environmentalists sing the praises of Hinduism, claiming that it evokes a sense of respect for creation which is otherwise lacking in the Christian religion. But monistic Hinduism, pantheistic in essence, confuses the Creator with creation, making even humans identical with God. Added to this confusion is the teaching of maya, or illusion. Creation is only illusion, even if it is seen to have an identity in God. Dualistic Hinduism, on the other hand, distances God from creation to the extent that there is no ongoing relationship. There is, in fact, an opposition between God and creation.

God alone, who is Lord and the source of everything, is responsible for all that is created and must not be confused with his creation. This teaching comes through the concept of creatio ex nihilo, creation out of nothing, which is a dominant note in the biblical account of God's creative work. This doctrine refutes any pantheism that confuses the Creator with creation, or a dualism that claims a confrontation between God and evil. Further, God called everything "good" and therefore there is no opposition between God and creation. Any implication of a conflict is because of Satan and sin and the constant battle of sinful human beings to independently assume charge.

WE ARE CREATED IN THE IMAGE OF GOD

The biblical concepts of "image of God" and "dominion" have been topics of endless debate within discussions of environmental exploitation. Briefly, to be made in the image of God implies that humans have been created in order to responsibly represent God in creation, and in this sense exercise "dominion," not "domination," over creation. Humans are the climax of creation, we often assert, implying that we are most special to God and all else is secondary. Critics show that the concept of the image of God is included in the idea of dominion and that both stem from the anthropocentric approach to creation which has led to exploitation and abuse of nature.
The meaning of the term "image of God" has been variously interpreted. Commentators have seen it in many faceted dimensions: creativity, individuality, freedom, etc. Whatever it means, one thing is clear: God and human beings have a link that is different from the link between God and the rest of creation. Humanity is entrusted with a special task. "By virtue of being created, it bears a responsibility; human dignity and responsibility are inseparable," says Claus Westermann (Genesis 1987). Although "humanity exercises sovereignty over the rest of creation," we are reminded that "there is no suggestion of exploitation." Just like the king, whose rule responsibly serves the well-being of his subjects, so humans are to responsibly care for creation. Possessing God's image and exercising dominion, rather than being seen in authoritative or hierarchical terms, needs to indicate godly attitudes and gracious action towards nature. Too much is made of the special status of humans over and above the rest of nature by Christians, and hence it is hardly surprising that the ecological disaster has been seen to be linked with the biblical doctrine of creation. The image of God in humanity needs to be seen in terms of responsibility as well as privilege. Humans are given the privilege of possessing a rational, moral, and spiritual dimension that enables them to act creatively and responsibly towards the whole of creation. Being made in God's image, we are to protect the environment in accountability towards God, and with a responsibility towards our fellow creatures and the rest of creation.

CREATION AND THE FALL

While we speak of the glory of God's creation of human beings and our relationship, we cannot bypass the biblical fact of the fall. Sin and the fall clearly served to revert, in part, to the chaos from which creation came about. Creation is continually being pulled back into chaos by human sinful actions. Environmental complications and ecological disasters are to be expected with human beings fallen from God's originally intended purposes. But the fall has not obliterated the image of God in us. Hence, when we recognize that God is the God of order and harmony, we, being God's image, endeavor to bring order into the present chaos. A proper assessment of the meaning of the image of God in us should help us move into this kind of involvement in our world today.

God's image must reflect something of God in us. God wants us "to keep" and to "rule over." We need to carefully accept this combination: God's love as well as God's authority must be demonstrated through human beings over all other creatures. On the one side there is caring love and responsibility, but on the other is creative power. This power is not an unquestioned autocratic rule over creation but a productive force that empowers other fellow creatures to live, create, recreate, regenerate, and fulfill their purposes here on earth.

There are two insights that help tone down any overemphasis on the image of God and the special status given to us. First, there is a suggestion of what this rule is to be in the reminder that we are to rule in the same way as the sun and moon "rule" over the day and night (Gen. 1:16). It is not harsh or destructive but purposeful. Human beings made in the image of God are called to represent God's righteous rule on earth. God is to be manifest in us not only in reverence for human life but in similar reverence for the non-human creation.
Second, the New Testament reference to the image portrays Jesus Christ as the perfection of love, and this must be underlined even more. If God's image was perfect in Jesus Christ, then this image is worthy of emulation. Jesus came to heal and not to harm. He came to carry out God's desires, not to satisfy his own cravings. When we consider such aspects of the image of God, the concept becomes a powerful tool in bringing environmental care through the Christian in our world today.

But we should also consider that when the reality of the image of God is placed in the context of human sin, fall, and destruction, there are bound to be manifestations of the human tendency to usurp and exploit authority. Sin is rebellion against God. It is a craving for autonomy rather than life in obedience to God. Hence, as Paul says of the law in Romans 7, "I find this law at work. When I want to do good, evil is right there within me" (Rom. 7:21). When God commands us "to guard and keep" creation, sinful humans would rebel and want to do the opposite. Creation, therefore, which was originally to be the source of blessing, has turned out to be a curse, all because human beings chose to rebel against God.

Sin brings disharmony within the relationships God intended for creation. Far too many discussions on the environmental crisis make no reference whatsoever to the biblical account of sin and the fall of humanity. Without any reference to this fact, the crisis becomes inexplicable and therefore the attack on the doctrine of creation becomes justifiable. Creation's perfection is marred by human imperfection.

The fact that it is Eve who is first enticed should not be taken to imply any blame on women. That will miss the point. What started with Eve spread to Adam and then to all creation. The universality of sin is the underlying factor in this account. And the consequences are just as universal as the fall. The divine relationship between man and woman is now affected. Man will exploit woman. The exploitation is to extend to the entire world, and creation itself suffers and groans. The very fact that creation is also influenced by our fallen-ness shows the intricate interlinking. It is not only in our created-ness but also in fallen-ness that we identify with nature.

Discord within relationships has now entered in because of sin. At the heart of sin is rebellion. And this is clearly at the heart of all broken relationships. And when relationships are broken, there is an exploitation of the stronger over the weaker. The ecological crisis is characterised by this kind of exploitation, whether it be humans over creation, or within the wider created order itself.

UNDERSTANDING DOMINION IN CONTEXT

It is now necessary for us to delve a little deeper into the wider context within which the word "dominion" is used, and not just those initial chapters in Genesis. Looking at the word by itself, there is reason to accept the criticism. Interestingly, while God gave the commands to 'be fruitful' and 'multiply' to other creatures, to man and woman was given an even greater responsibility; Adam and Eve were given the responsibility to 'subdue' and 'rule' and have dominion over all creation.
The problem for critics like White, obviously, is with the words used. The Hebrew words kabas and radah are said to be much harsher than the English translations. Kabas means 'to tread down', 'to bring into bondage', or even 'to rape', while radah means 'to trample' or 'to press', and therefore to rule or dominate. The Hebrew words, like most of our Asian languages, have a rich array of meanings and need not necessarily be taken literally. As we look closer at the implications, we will get nearer to the fuller understanding as was intended in the command.

Let us consider some of the wider context for dominion:

a) God sanctioned dominion in love: Very often Israel is reminded of God's love. Ezekiel 34 depicts the prophet reminding the kings of Israel that God is shepherd. In contrast, they "ruled them harshly and brutally." The word radah, "rule," is here placed alongside the concept of a caring shepherd, not the harsh and brutal leaders they are familiar with. We can confidently conclude that "dominion" or "rule" did not imply a cruel, heartless domination, but the loving and caring relationship of the shepherd to his sheep.

b) God sanctioned dominion within a commonality: The Hebrew 'adam', taken from the word 'adamah' meaning ground, must speak for itself. There is a commonality that exists from the start and continues right through to the end. Adam is made from the "dust of the ground" (Gen. 2:7). There is an integral link with the earth as well as with the environment around. This is the reason why human sin had its toll even on the environment. Ecology implies total interconnectedness of creation, and this connectedness is not strange to the biblical teaching. There is no blue blood that divides royalty from the common folk. Rightly, in the English language, we are referred to as "earthlings". Dominion, seen within this context of commonality, takes on a healthy perspective. It is a responsibility for others with common rights.

c) God commanded dominion with responsibility: Dominion did not permit an irresponsible exploitation. Though God gives great authority to men and women, there is the constant reminder that "this sovereign authority does not include the killing or slaughtering of animals." Similarly, when God gave dominion to man over nature (Gen. 1:26), it was not a mandate for total annihilation. There are many other such commands (References). Proper and responsible care over creation was expected. Responsibility alongside God's creativity transforms authority into positive and productive expressions. Rather than destruction, there is the desire to bring something good even from the worst. God entrusts his property to men and women, resources that have limits but are blessed with the potential to multiply phenomenally. The earth contained everything human beings needed. Therefore, according to the will of God as creator, both human and animal sustenance "was to be the products of the earth alone."

d) A dominion in the interest of others: The word 'mashal', another word that means 'ruling over', is used to denote the authority of the sun to govern over the day, and the moon to govern over the night (Gen. 1:16). This, interestingly, is equivalent to the authority of man to govern or to rule over his wife (Gen. 3:16). Taken in its right perspective, it did not mean harsh and domineering rule with only selfish interests. The sun and the moon had purposes for which they were created, the purpose of service to the rest of creation, and it is for the fulfilling of these purposes that any power was vested in them.
Similarly, man's rule or dominion over woman is not to destroy her or consume her totally for his benefit. Woman has her individuality. In the same way, men and women are not to destroy or totally annihilate living creatures on earth just for their selfish satisfaction. Ultimate dominion belongs to Yahweh alone. One reminder that comes forcefully to our present world is that any rule or authority, be it political, religious, or even domestic, carries privileges as well as responsibilities. When privileges are separated from responsibility, exploitation is inevitable.

e) A dominion in servanthood: While we look at the commands given to Adam and Eve at creation, it is necessary also to consider the commands subsequently given. Man and woman in the garden are instructed to 'till' and 'keep' it. These are words that beautifully temper the harshness of the other words. The Hebrew words for them are seen in Gen. 2:15. First, there is 'abad, which means 'to till'. The noun is 'servant' or 'slave'. Serving or service, even servanthood, must definitely have been implied. Humankind is to be available to serve creation, and in so doing serve the Lord God.

f) A dominion with stewardship: An even more powerful word is the Hebrew shamar, which means 'to keep'. The noun form is 'steward' or 'trustee' (10), implying watchful care and preserving of the earth (11). These aspects are being heavily underlined today as the ecological cause is assuming alarming proportions. It is a shift in emphasis from users to keepers, from consumers to conservers. The concept of stewardship will be developed much further, but here we remind ourselves that we are called to serve, keep, and preserve creation, which God has entrusted to us as trustees, or stewards.

g) Dominion with respect: Any call to respect creation is immediately confused with calls to worship creation as in pantheistic practices. This is the plea of some environmentalists today. Criticizing biblical doctrines, they eulogize the teachings of Hinduism or Buddhism, pointing to the deep respect these religions evoke towards creation. The biblical doctrine of creation, they claim, has ignored this attitude. Although this could be disputed, a corrective is needed by the Christian. Does not the Bible teach respect for creation? If God described creation as "good," there must be some inherent worth that makes it warrant much more than we have shown to it. Creation has a purpose for which it exists, and it is in the fulfilling of these purposes that its existence can be fulfilled. Respect for creation will need to be seen as respect for the purposes of each aspect of God's world. Dominion does not call for domination but for all that we see in the wider context we have just considered.

ARE CHRISTIANS ANTHROPOCENTRIC?

We now move to the second question raised. White accuses the "Western" Christian doctrine of being anthropocentric, i.e. centered around human beings, and claims that it is the command to have dominion over creation that has led to human exploitation of nature. Science and technology have emerged from a need to have even greater control, and this has not helped. Better relationships will need to be fostered, ones that will show respect for creation as in other religions. Our task for a biblical theology is clear; we will need to get back to the Genesis texts to explore the meaning and significance of these issues.
Anthropocentrism places humanity at the center. Everything in the universe is seen in terms of human values and human interests. The view was developed strongly in the post-Enlightenment period, with confidence that humans can totally conquer nature for their survival and the betterment of their own kind. Anthropocentrism, we will have to admit, has become a predominant part of the modern materialistic way of life. The affluent lifestyles we are all gradually adopting within our growing economy, industrialization, and technological progress have led us subtly to accept such views. What is achievable by humans seems to be limitless, and all this with no miraculous interventions from God.

Our attention has been drawn to the deep-rooted anthropocentrism in the Western perspective even by Western writers themselves. Here is a rather elaborate quote from R. A. Young: The anthropocentric predicament is somewhat paradoxical on two accounts. First, concern for personal well-being and survival has raised ecological awareness to the level that many now question the anthropocentric basis for modern society. The motivating factor for change (self-preservation) and the source of the problem (self-preservation) therefore only accentuates self-centredness, and the root of the problem does not go away. Second, humanistic society still approaches environmental problems from an anthropocentric perspective despite knowing that this attitude is ultimately self-destructive. To preserve wilderness areas for recreational purposes, to convert to compact fluorescents for economical purposes, or to save the rain forest because of the pharmaceutical products it can yield is to act out of anthropocentric interests. There has been much environmental activity recently, but most of it is, in one way or another, still anthropocentric. Anthropocentrism seems to be so entrenched in society that there is an ingrained resistance against accepting the observation that humanity's priority on self is self-destructive (Young 1994, p. 117).

If anthropocentrism is problematic, the alternative that is recommended by various environmental movements is biocentrism. Biocentrism teaches that everything in life, nature, or creation has equal value and must be respected for what it is. Traditional societies tend to be biocentric peoples, who relate in very practical, everyday terms to the environment around them. The Earth's ecosystem is to be valued for its own sake and not for human benefit. Some ecologists issue strong reminders for us to accept that nature has value in itself. Biocentrism calls us to respect everything in our biosphere without any accent on human commercial calculations.

With Christianity attacked for its anthropocentricity, environmental groups are turning to biocentrism. Biocentrism is the emerging ecological worldview and is advocated as the only hope to save humanity. It is the product of the rising ecological awareness in society, the influence of Eastern religions and philosophies, quantum physics, and a resurgence of primitive paganism and native cultural insights. All this seems fashionable to follow within a pop culture that has emerged.
Young comments: Environmentalists tend to embrace this new paradigm, for it coincides not only with what the science of ecology is teaching but also with the pop philosophy of Eastern mysticism. Biocentrism's focus on the web of life precludes human ascendancy. No one organism can claim supremacy over anything else, for all are needed to support the ecosystem. As a result, humans are simply part of the complex whole, no higher or lower than any other part of nature. And people are listening with open ears. This sounds like the ideal corrective for harsh anthropocentrism (Young 1994, p. 125).

While biocentrism provides the needed alternative to anthropocentrism, it conflicts with the biblically justifiable solution for the Christian. It is certainly a valid corrective for the arrogance that we have been accused of, but these insights need to be placed alongside our commitment to God as Creator and the one who continues to sustain this creation. Therefore, if we are to stay biblically anchored, theocentrism is the viewpoint we must consider.

We could turn to Paul for a definition of theocentricity as submission to the Creator God: '…in him we live and move and have our being' (Acts 17:28). Transposing this to the entirety of God's creation, we affirm that everything finds existence, meaning, and purpose in its relationship to our Creator and Redeemer God. Our being stands or falls in relationship to this God. But with the ecological crisis and the reminders that have come, we need to clarify the focus of our theocentricity.

We can identify two varying approaches to theocentrism. One form would teach that everything exists for the sake of God and to serve his purposes. The Bible would justify this, except that some would take it to the extent of saying that therefore God will rectify the damage in the new creation, and that we do not need to do anything. But there is another kind of theocentrism that fits more appropriately into our eco-conscious world today. While accepting that God ought to be the centre of all that we are and do, we must not ignore the fact that God wants us to do something ourselves too. God created everything, but made each one to fulfil distinct purposes. These purposes refer back to the one overarching purpose that keeps it theocentric, but maintains the distinctive place of each for its own sake. These roles should take into account even the biocentric accent that is needed in some measure. Everything in God's created order has a distinctive place, keeping the ecological balance so essential to environmental harmony. There are chains and cycles that function within creation, and these take into account the role each individual part has to play.
Theocentrism in any form must underline that our relationships within creation revolve around a transcendent centre. Pure biocentrism tends to deify nature, while pure anthropocentrism will divinize humans. A relationship with nature by itself will either idolize or romanticize our dealings and not fulfill the ultimate God-ordained purposes that are intended. It is when we relate to a Creator God that all else will take its rightful place. Paul Santmire suggests: To avoid setting the human creature over against nature on the one hand (the tendency of anthropocentrism), and to avoid submerging the human creature and humanity's cries for justice on the other hand (the tendency of cosmocentrism), I am suggesting that we see both humanity and nature as being grounded, unified, and authenticated in the Transcendent, in God. This is the theocentric framework (Santmire 1985, p. 49).

The Bible gives a distinct place to God as Creator. Claiming ours to be a biblical theology, our starting point must be the Bible and the forceful teaching that the transcendent God is Creator. It is this God who continues to motivate and energise us to become involved in restoring creation, towards becoming all that God has intended it to be. We have the role of being stewards in this magnificent created order, recognizing that God is above all and in all that we experience.

BEING STEWARDS

Stewardship is an acceptable way to describe our position or place in relation to our role and responsibilities towards creation. John Hall, in an excellent book entitled "The Steward," stresses the "stewardship" metaphor "because it encapsulates the two sides of human relatedness, the relation to …"

In the Old Testament a steward is a man who is 'over a house' (Gen. 43:19; 44:4; Is. 22:15, etc.). In the New Testament there are two words translated steward: epitropos (Mt. 20:8; Gal. 4:2), i.e. one to whose care or honour one has been entrusted, a curator or a guardian, and this could appropriately describe our role in the world. Another word is oikonomos (Lk. 16:2-3; 1 Cor. 4:1-2; Tit. 1:7; 1 Pet. 4:10), i.e. a manager, a superintendent. Taken from the words oikos ('house') and nemo ('to dispense' or 'to manage'), there is reference to the relationship within the home, and an ownership with which this responsibility must be performed. However, the words are used to describe the function of delegated responsibility, as in the powerful parables of the labourers and the unjust steward.

RESPONSIBLE STEWARDSHIP FOR TODAY

Responsible stewardship, acting in God's love, will result in practical outworking that will help develop right attitudes for living today. First, we Christians who are called to care for creation will see the need for recognition of the harmony, unity, purity, and integrity in creation. A respect for creation will elicit a respect for the rights of creation. Our care for creation will show in our love to protect, conserve, and bring healing to a wounded world. Ecology, we have seen, implies interrelatedness, and this will show in our own feeling of hurt for a creation that has been hurt.

Second, we are called to preserve and conserve creation's resources. Preserving could imply abstaining from use, whereas conserving calls for responsible use. Conserving calls for protecting in the present for future use. We may need to develop the responsibility to preserve some endangered species by protecting them, and to conserve a forest by not only using it carefully for our present needs but protecting it for responsible use for generations in the future.
Third, responsible stewardship calls for demonstration in responsible lifestyles. Greed and materialism have caused havoc and disparity, which continues unabated with human exploitation. We are called to a life of sharing in the world's community rather than accumulating for ourselves. While this may start interpersonally, it must be realized internationally. In fact, when nations start living integrally, their people automatically develop more responsible attitudes. Some of the major ethical violations are those that have emerged through large-scale international illegal operations.

Fourth, responsible stewardship calls for an acceptance of the rights and privileges of all of God's community and creation. We must see the importance of according rights to nature as well as to other humans. One other aspect that has emerged in recent times is the need for us to demonstrate a responsibility towards future generations. The ecological crisis has brought people to recognize the need to protect the rights of future generations. The rate at which resources are depleting in our world at present is alarming. The question is asked: how much longer will these resources last? Whatever we do must therefore ensure the fundamental rights of those in the future to have sufficient resources.

Finally, we have a responsibility towards God to honour him for the way in which he has honoured us with responsibility over all of creation. All that we have said above will fall into its right perspective when we see God as the one who invests integrity, dignity, and responsibility within humans. And in essence, our relationship to God will show in a responsible relationship to the world.
Grid-forming control for inverter-based resources in power systems: A review on its operation, system stability, and prospective

The increasing integration of inverter-based resources (IBR) in the power system has a significant, multi-faceted impact on power system operation and stability. Various control approaches are proposed for IBRs, broadly categorized into grid-following (GFL) and grid-forming (GFM) control strategies. While GFL has been in operation for some time, the relatively new GFMs are rarely deployed in IBRs. This article aims to provide an understanding of the working principles of these two control strategies and to distinguish between them. A survey of the recent GFM control approaches is also delivered here, expanding the existing classification. It also explores the role of GFM control and its types in power system dynamics and stability, such as voltage and frequency stability. Practical insight into these stabilities is provided using case studies, making this review article unique in its comprehensive approach. The GFMs' real-world demonstrations and their applications in several IBRs, such as wind farms and photovoltaic power generation stations, which are lacking elsewhere, are also analyzed. Finally, the research gaps are identified, and the prospects of GFM are presented based on system needs, informed by GFM real-world projects. This work is a potential road map for large-scale GFM deployment in the decarbonized, IBR-based bulk power system.

INTRODUCTION

Renewable power generation (RPG) induction into power systems is evidently booming. For example, the global annual increase in renewable capacity was a record-breaking 6% in 2021, reaching 295 GW, and is expected to increase by 8% in 2022, touching a 320 GW peak [1]. Besides, the business case for RPG is more favourable than ever before, with the reduction of PV module prices by 80% and wind turbines by 30-40% [2]. This rapid growth directly results from policies that reduce anthropogenic greenhouse gas (GHG) emissions. Though the average annual emissions are higher in 2010-2019 than in 2000-2009, the growth rate of the former decade is lower (1.3%/year) than that of the latter (2.1%/year) [3]. Furthermore, by 2030, the United States alone plans to reduce GHG emissions by 50% [4].

Contrariwise, as most of these RPG sources are intermittent, inverter-based resources (IBRs), and non-synchronous, they pose multiple challenges to the grid, such as power quality [5,6], i.e. voltage and frequency fluctuations [7]. Therefore, the RPGs
in this manuscript refer to those that are IBR-based. The predominant cause of frequency-related problems is linked to the lack of inertia [8-10] and damping [11-13] in these IBR-based systems, whereas the voltage stability issues are mainly linked to the absence of reactive power reserves from IBRs [14,15]. These power quality issues were well resolved in conventional-source (CS), synchronous machine (SM) based power systems [10]. However, the rapid replacement of these CS by IBRs requires attention to understanding the system dynamics and proposing different control strategies. In this context, control approaches such as grid-following (GFL) and grid-forming (GFM) for IBR grid interfacing are reported and discussed here.

Two primary converter topologies used in current power systems are the voltage source converter (VSC) and the current source converter (CSC), employing transistor and thyristor technologies, respectively. The VSC is typically favoured over the CSC due to the losses associated with series switches and the lower efficiency of the CSC [16]. The essential function of the VSC lies in converting DC power into AC active power for integration into the grid, and vice versa, depending on the direction of power flow. Furthermore, there are two sub-classes of VSC operational control strategies: one is the commonly known current-controlled source (CCS), behaving as a current source and identified as GFL [17,18], and the other is the controlled voltage source (CVS), behaving as a voltage source and recently called GFM [19,20].

State-of-the-art and scope of this paper

Under different conditions, GFM and GFL controls suffer from stability issues. Refs. [21,22] investigate the small-signal stability issues of VSC, based on linear-theory modelling that fails to perform in large-signal stability studies even when a stable equilibrium point exists. In contrast, [23-25] are based on large-signal stability modelling, i.e. nonlinear-theory modelling. Here, ref. [23] uses the equal area criterion (EAC) with nonlinear characteristics in mind. However, a system with negative damping is not suited to the EAC method. This shortcoming is handled in [25] using phase portraits. Besides, ref. [17] studies the VSC control strategy based on energy-function modelling, which yields the distribution of system damping, including negative and positive damping. Moreover, a qualitative analysis in [26] explores the positive influence of GFM in addressing the frequency stability challenge in low-inertia scenarios, taking into account the existing constraints of both DC and AC converters. The study suggests that additional research is necessary to facilitate a smooth operational transition between different control approaches, such as transitioning from GFL to GFM. The impact of inertia on power system stability in the presence of a high level of IBR integration is discussed in [7,9]. With few exceptions, most of the above articles focus on the dynamics of GFL in power systems. Furthermore, no comprehensive study is available that demonstrates each GFM model's performance in the context of different power system stabilities. Therefore, this review paper thoroughly explains GFM and its diverse benefits for different system stabilities and emerging complexities.
Another investigation that ought to be of keen importance is the application and demonstration of these inverters. References like [27,28] provide a theory-based approach to the performance analysis of GFM in high-voltage direct-current (HVDC) systems and to frequency stability issues in the Irish power system. On the other hand, the GFM-based control strategies installed worldwide are reviewed, with examples, in [29]. These articles hint at the possible strengths and weaknesses of GFM inverters in terms of system-need provisions like black-start capability [30] and system restoration, which are conventionally provided by CS [31,32]. Therefore, the relevant application-based review is presented here, accompanied by a comparative discussion of GFM and GFL's ability to operate according to their capabilities and the power system's needs.

Recently, efforts have been put forward in the literature, including review articles [9,29,33-35], research articles [26,36,37], standards [38,39], and grid codes [39-41], to understand the concepts of GFM and GFL. Yet, no agreement or clear definition has been given for GFM by any authority. Most of the concerned bodies define it as per their requirements or in the context in which they use it [39,42,43]. In contrast, the academic and industrial communities are deliberating to put forward a formal definition [20,34,42]. For system planners, operators, and equipment manufacturers, it is still an open question what requirements and capabilities constitute the new inverters [40]. This article presents an effort to establish the needs of systems and assess the abilities and shortcomings of GFM.

Comparison with the available review articles

The available review articles cover most of the developments regarding GFM up to the time of their publication. Under high penetration of RPGs, grid flexibility concerning inertia is studied in [9]. There, the discussion only surrounds synthetic inertia emulation, the estimation of inertia, and its coexistence with CS-based inertia. A survey of pilot projects is carried out in [29], where different types of GFM control approaches demonstrated worldwide are analyzed. On the other hand, articles [34] and [44] provide the various available control approaches of GFM. The classification in [34] is based on the subsystem functions that are later combined to make one complete control, which performs multiple tasks like frequency and voltage control. In contrast, reference [43] classifies the GFM approaches based on their main role, i.e. droop control, SM inertial emulation, etc. Some of these articles overlook many control structures of GFM that are proposed in the literature, while other papers do not consider their application in different IBRs, especially wind turbine generators (WTG), photovoltaics (PV), and battery energy storage systems (BESS). Others do not provide case studies demonstrating the GFM's ability to control and stabilize the system in uncertain situations. To address the above shortcomings, this article covers many aspects of GFM, like its applications, demonstrations in the real world, stability and dynamic analysis, and various up-to-date control structures, as listed in the comparison with other review articles in Table 1.
Contribution and research questions addressed here

Herein, the GFM and GFL control strategies are comprehensively reviewed to understand their working principles and their distinctions. GFM, its control strategies proposed to date, and their role in power system stability and dynamics are also investigated. This investigation is supplemented by the GFM application in various IBRs. To summarize, this paper explores the research questions below by investigating more than 200 papers, reports, and books, of which around 140 relevant ones are reported here.

1. The current understanding of GFM and GFL by academia and industry is presented through a discussion of their distinct working principles.
2. State-of-the-art proposed control approaches of the GFM inverters are surveyed. Highlights of their comparison under different system conditions and characteristics are tabulated.
3. The role of GFM control in power system dynamics and stability is explored in detail and is supported by case studies. These include phase angle, voltage, frequency, and converter-driven stabilities.
4. Applications of GFMs in various IBRs such as WTG, PV, BESS, and HVDC systems are reviewed in detail, supplemented by a survey of demonstrations of GFM testing at the medium voltage (MV) level.
5. Insight into the system needs and GFM-IBR capabilities is provided. Besides, the deployment plan of GFM-IBRs into a bulk power system is presented. Finally, identified future research questions and directions are put forward.

The rest of the paper is organized as follows. Section 2 discusses the GFL and GFM working principles and various GFM control models. Section 3 investigates the GFM performance in power system stability and dynamics, whereas Section 4 analyses the real-world demonstrations of GFM and its applications in various IBRs. Section 5 sheds light on the system's needs with high penetration of IBRs. Section 6 presents the research gaps, future work, and deployment schedule for GFM. The paper concludes with Section 7, which summarizes the whole article.

GFM AND GFL: THEIR DIFFERENCES, DUALITIES AND OPERATIONAL PRINCIPLES

Non-synchronous inverter-based resources (IBRs) are displacing conventional synchronous-based power sources in the power system at a noticeable pace [45]. This connection to the grid through converters is the main reason IBRs are not the sole energy source of power systems [44]. Hence, there is an ongoing search for novel control methods. This crucial statement is elaborated on in the remainder of the paper. On the other hand, CS technologies and related theories are quite mature and readily available [10], and the system in the presence of these sources is reasonably stable [45]. Subject to physical constraints, SMs are controllable in dynamic and steady states. Here, the performance is largely predictable regardless of controls like excitation and governor behaviour, due to the dominance of mechanical and electrical characteristics over the fast transients [10,44].
On the contrary, the IBR-based power system is a new phenomenon, and the related power system dynamics and stability are hot research issues. The reason is that the control strategies of GFL-IBRs and CS differ in their response time and response principle to a disturbance. The potential remedy in such a context is a GFM that controls the voltage and frequency through grid ancillary services similar to CSs, which are discussed ahead. The basic diagrams of GFM and GFL are shown in Figure 1, their differences in Table 2 [16,45], and their capabilities in terms of their duality in Table 3 [33]. In short, their primary objective is dispatching active (P) and reactive (Q) power to the grid via the GFL and GFM IBR controls. The distinction arises during transients, i.e. during and immediately after a disturbance. This distinction is further elaborated below.

Grid following inverter

The present IBRs are based on GFL, which injects current into the grid by reading its voltage and frequency to provide the scheduled active and reactive power, with the assumption that the instantaneous AC voltage is formed in the grid by the dominant sources, i.e. SMs [16]. With no clear requirements and incentives from the market, the trend will stay the same, resulting in a further increase in GFL-IBRs [46]; however, this has to change for the aforementioned reasons.

During the transient period, the GFL-IBR keeps the active and reactive current components constant. Thus, it appears as a constant current source. Phase-locked loop (PLL)-type fast-acting synchronizing components are used to determine the grid voltage angle at the point of common coupling (PCC). This angle is used to "follow" the grid voltage by tightly controlling the current's active and reactive components. If this "following," i.e. the tracking of the grid voltage, fails, the stable output of the GFL-IBR is compromised [46]. Besides, stability in a weak grid, caused by high grid impedance, will be negatively impacted by the PLL [47]. For this reason, the currently commercially available inverters do not participate in grid ancillary services, with some exceptions [48].

While ensuring the inverters' safe operation, the main objective of the PLL-based inverter control is the provision of active and reactive power as per pre-defined values. To achieve this objective, the reference current command must be generated from the pre-defined active and reactive powers P_ref and Q_ref and the terminal voltage, whose magnitude and angle are |V_PCC| and ∠θ_PCC. Furthermore, the reference current must be matched by the inverter's actual current fed to the network. For this, the output voltage |E_IBR|∠θ_IBR is adjusted through an inner current control loop. Here, the impedance of the output filter, which also includes the inverter transformer, is R_f + jωL_f, with the filter inductance represented by L_f.

This "follow" concept is based on the assumption of a stiff system, owing to enough SMs in the system that form the grid voltage and frequency and keep them stable. However, this assumption may not hold in the near future, as the IBR penetration level may surpass that of SMs in the grid. For this reason, more advanced control strategies like GFM could establish the grid frequency and voltage and thus open the gateway for the 100% IBR-based power system [46].
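The reference-current equations themselves did not survive extraction in this copy of the text, so the minimal sketch below only illustrates the standard relation implied by the surrounding description (complex power S = V·I*, with the set points P_ref and Q_ref and the measured PCC voltage); the per-unit numbers are illustrative assumptions, not values from the paper.

```python
import cmath

def gfl_reference_current(p_ref: float, q_ref: float, v_pcc: complex) -> complex:
    """Reference current phasor for pre-defined P/Q at the measured PCC voltage."""
    s_ref = complex(p_ref, q_ref)        # complex power set point (per unit)
    return (s_ref / v_pcc).conjugate()   # from S = V * conj(I)  =>  I = conj(S / V)

# Illustrative set points: 0.8 pu active, 0.2 pu reactive power at 1.0 pu terminal voltage
v_pcc = cmath.rect(1.0, 0.0)
i_ref = gfl_reference_current(0.8, 0.2, v_pcc)
print(abs(i_ref), cmath.phase(i_ref))    # current magnitude and angle to be tracked
```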
Grid forming inverter

The GFM concept, initially used for islanded and microgrid (MG) operation [20,42], has the potential to sustain stability and operate with resilience in large interconnected power systems. The GFM-IBR keeps the voltage constant at its output, i.e. the internal voltage phasor is maintained during the transient time frame, while the magnitude and frequency are set locally at each inverter. This feature makes the GFM-IBR "form" the voltage and frequency of the grid and thus enables it to synchronize to an external grid or operate in islanded mode. Furthermore, these features allow the GFM-IBRs to dispatch extra active and reactive power by instantly responding to an external phase-angle deviation during the transient time when necessary. In short, the GFM-IBRs can support the grid in challenging circumstances and ensure grid stability. However, loss of synchronization may still happen in certain adverse situations. Contrary to the GFL, the GFM-IBRs appear as a voltage source to the grid in the transient time frame, provided that the limits of the resulting currents are not breached and the energy capacity is available. This idea of supporting the grid during frequency deviations and power imbalances through the introduction of the virtual synchronous machine (VSM) concept dates back to 2008 [49], whereas the first appearance of the "grid-forming" term was in 2001 [50]. However, to date, there is no clear definition from the relevant bodies [29]. As this is a highly discussed issue now, there are some mentions of "grid-forming," for example in the recent IEEE standards [39].

The control concept of GFM can be categorized into two types. The first involves the gradual adjustment of the inverter-based resource's (IBR) voltage in response to grid voltage and frequency, with concurrent control of the current within specified limits. The second type entails modifying the IBR's active and reactive power based on the grid voltage and phase angle derived from the PLL, while simultaneously regulating the current. The GFM can potentially contribute to system strength by operating at low short-circuit MVA, a high voltage-to-current deviation ratio, and a high rate of change of frequency (RoCoF). This is possible by increasing the hardware ratings, improving control methods, and participating in frequency regulation.

Active power regulation depends on the energy availability behind the inverters. Thus, frequency control can be provided with a great deal of consideration of the energy source. On the other hand, reactive power/voltage support is handled solely by the inverter. The GFM can be defined based on the objectives, controls, and tasks mentioned in Tables 2 and 3 [47].

Control strategies of GFM

GFM inverters take different forms in terms of their control methodologies, which in the literature are classified into three major groups, as presented in Table 4 [9,34,43]. These groups are
droop control [22,[51][52][53][54][55], synchronous machine-based control [36,[56][57][58][59][60][61][62] and other controls (like virtual oscillator-based) [63][64][65][66][67].This classification is mainly based on the linkage of active power to the frequency and reactive power to the magnitude of voltage.The droop-based controls are subdivided into droop control based on angle [52] and frequency [51], synchronous power control (SPC) [54], power synchronization control (PSC) [22], enhanced direct power control (EDPC) [55] and extend direct voltage control (DVC) [68].A 98% GFM-IBRs-based system case study is provided in Supporting Information, demonstrating that the DVC can withstand a three-phase fault and sudden load changes similar to the SGs [69].Most of these controllers can damp the oscillation and improve steady-state system operation as explained by the frequency (w) relation to the droop (R), i.e.Δw = RΔP L ∕DR + K m [70], where R, ΔP L , D, K m is the droop constant, load change, damping term, and inertial term, respectively.However, it lacks inertia (H) capability.This shortcoming leads to a higher rate of change of frequency (RoCoF), i.e. Δw∕t = ΔP L f b ∕2H eq S b2 , that can trigger a blackout.Here, f b , H eq and S b2 are system base frequency, equivalent inertia constant, and base apparent power.The other controls are then further divided into virtual oscillator (VOC) [63], Robust H 2 /H ∞ [64], DC-link capacitorbased virtual synchronous control (ViSync) [66], and frequency shaping [67].These control strategies have their own merits.However, most of them are nonlinear and composed of complex structures, making it hard for real implementation.To overcome these limitations, i.e. remove complexity and provide both droop and inertia emulation, the VSM control is better for providing voltage and frequency supports.Besides, they No communication Dispatchable Frequency a Voltage related issues are resolved through integral and derivative terms.b The sign (?) here means that the author could not find a clear answer to this question and hence is an open research problem.c The output impedance of VSG is influenced by the parameters of proportional coefficients in high frequency whereas due to the cascaded control loop structure makes the controller tuning challenging in low frequency for WTG.d In power modes of low frequency oscillation can be observed.e Here the transient response and overall damping is improved whereas in GDC the frequency response is improved.f PLL use need critical attention during transient stability performance due to high chance of erroneous measurements. 
have features like tunable virtual inertia, overcurrent protection, self-synchronization, dispatchability etc.In synchronization, the dispatchable VOC may outperform in multiple VSM-based IBR scenarios [71].VSMs are further subdivided into virtual synchronous machine (VISMA) [57], synchronverter [62], swing equation emulation (VSG) [36,58], augmented VSG [56] (that has further subtypes called configurable natural droop (CND) [59] and generalized droop controller (GDC) [61]), and matching control whereby its electronic realization of SM (eSM) and control design are realized in [60].Furthermore, the overall control structure of GFM is divided into an outer control loop and an inner control loop [34].The internal control loop is mainly used for calculating the modulation signal for PWM or responsible for synchronization [43] of the controller terminal voltage with the grid at PCC.The outer control loop that provides input to the inner control loops mainly generates the angle, frequency, and voltage amplitude signal.The outer control loop of the control approaches in Table 4 can be subdivided into a power synchronization loop and voltage profile regulation.The first one has an angle loop that calculates the angle and a frequency loop that determines the frequency of the inner voltage virtual source loop.The second profile management loop of voltage is responsible for its regulation that has a specific subsystem in the control strategies of Table 4 [34]. Discussion on CS, GFM, and GFL's operational principals CS is regarded as a voltage source with a strong appearance of voltage to the grid.These machines have a voltage of steady magnitude and relatively small series impedance as they have internal electro-motive force or voltage due to electromagnetic induction [10,46].On the other hand, a DC side voltage is defined by a large capacitor configured in most of the IBRs, from which the AC side voltage is formed through chopping or modulations by semiconductors.With the creation of this AC voltage entirely through inverter modulators and control loops this voltage is constrained by the DC voltage and power availability and the semiconductor's current ratings.The CS can provide seven times more than its rated current for a short period, i.e. 
1-100 ms [44], whereas the GFM VSC can typically offer an overcurrent of only around 20% above its rated current [72]. The IBR has a multiple-loop control structure with power control at the highest level, which dispatches the power according to the instructions or to the maximum power point tracking (MPPT). In the GFM-IBR, by contrast, the power control uses the measured real and reactive power to droop-control the frequency and voltage. For a basic understanding, the power control either establishes a voltage source (in the case of GFM) or a current source (in the case of GFL), as shown in Figure 2 [73]. From Figure 2, it is evident that the internal AC voltage |E_IBR|∠δ_IBR and the PCC voltage |V_PCC|∠θ_t are separated by a mostly inductive impedance R_f + jωL_f. The flowing current is then given by (3):

I = (E_IBR − V_PCC)/(R_f + jωL_f). (3)

This current (I) is ultimately viewed as flowing because of the established E_IBR in the case of GFM, or, in the case of GFL, it is constant and follows a reference current through manipulation of E_IBR. For further details, see [50]. In terms of power, if the formulation is with respect to current, the converter acts as a GFL, i.e. P = Re(V_PCC I*_PCC) = V_PCC I_PCC cos(φ). Whereas in the case where the formulation is based on voltage, i.e. P = Re(V_PCC((V_IBR − V_PCC)/(jX_c))*) = (V_PCC E_IBR/X_c) sin(δ), the converter acts as a GFM [16]. Here X_c and δ are the impedance and the angle difference between the voltages of the IBR and the PCC.

A duality of GFM and GFL is proposed for understanding purposes in [34]. These dualities include synchronization control, grid interfacing and swing characteristics, extreme operation, and interaction. Table 3 summarizes the duality between the GFM and GFL for the aspects above. Besides, the main difference between the GFL and GFM can be pointed out in their response to grid events, as can be seen in the phasor diagram of Figure 3. In Figure 3(a), the current phasor I_g remained constant in both magnitude and phase, resulting in variation of the voltages (V_PCC and the inverter terminal voltage V_c) in the GFL case. Meanwhile, in the GFM case, the internal voltage of the inverter (E_IBR) remained constant while the rest of the parameters moved, including the current phasor. This trait makes the GFM attractive to system operators [74].

Additionally, the small-signal behaviour can also distinguish their reaction under weak and stiff grid conditions. Moreover, as per the grid code requirements [20,75], voltage and frequency regulation can be achieved by both controllers at the PCC through supplementary outer loops that modify the set points of active and reactive power. However, these control strategies operate in real situations wherein the physical voltage and current limitations should be considered [20,42]. Furthermore, the synchronization method of the two converters to the grid is also a main difference between them [34], which is further elaborated later.
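The two power formulations can be checked against each other numerically. The per-unit values in the Python sketch below are illustrative only and do not come from any cited study, and the small mismatch between the two results reflects the resistance R_f that the lossless angle formula neglects.

import numpy as np

# Illustrative per-unit values (not taken from any cited study)
V_pcc = 1.00 + 0j                                  # PCC voltage phasor, reference
E_ibr = 1.02 * np.exp(1j * np.deg2rad(5.0))        # internal IBR voltage phasor
Z_f   = 0.01 + 0.10j                               # filter impedance R_f + j*w*L_f

# Current flowing between the internal voltage and the PCC, Eq. (3)
I = (E_ibr - V_pcc) / Z_f

# GFL view: power follows from the (controlled) current injected at the PCC
P_gfl = (V_pcc * np.conj(I)).real

# GFM view: power follows from the voltage magnitudes and the angle difference
X_c   = Z_f.imag
delta = np.angle(E_ibr) - np.angle(V_pcc)
P_gfm = abs(V_pcc) * abs(E_ibr) / X_c * np.sin(delta)

print(f"P from current formulation (GFL view): {P_gfl:.3f} pu")
print(f"P from angle formulation   (GFM view): {P_gfm:.3f} pu")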
POWER SYSTEM DYNAMICS AND STABILITIES WITH GFM AND GFL-BASED IBRS

As stated before, the power system's global evolution towards renewable power sources mainly uses electronic inverters for interfacing with the grid. Traditionally, CS services provided the necessary stability to a power system through their synchronous capability. The displacement of CS by IBRs places the burden of stability and of the other responsibilities (mentioned in Table 5 [45]) on the IBRs. In the case of a 100% IBR-based system, the IBRs become the primary sources of stability. A CS inherently provides certain services, which are delivered through synchronous torque and inertia. Thus, it is essential to enable IBRs to offer such services [76], which leads to extra costs [77].

The dynamic characteristics of inverter-based resources (IBRs) differ significantly from those of synchronous generators (CS). In traditional power systems, the dynamics of the CS are primarily governed by its rotor, and a significant disturbance leading to instability manifests as a sustained increase in the rotor angle. Conversely, in a new power system relying on IBRs, the dynamics are predominantly governed by power-electronics-based control processes. According to specified guidelines, there are specific objectives for Voltage Source Converter (VSC) control that must be met to maintain stability; otherwise, the system is deemed to be losing stability.

The outer control loop regulates the real and reactive power (P/Q) in accordance with the reference, while the inner control loop manages the current (i) to align with the reference value. In this context, nonlinearities play a crucial role in causing instability during large disturbances [78].

The inverters in service today are generally GFL and have innate features that are essentially different from those of synchronous sources. The GFM, developed more recently, can instead be designed to fulfil the system needs listed in Figure 4, including angle, frequency, and voltage stability, similarly to synchronous sources. The performance capabilities of the different GFM control strategies are compared in Table 4. With significant limitations, voltage, frequency, and other stability-related services can also be carried out by the GFL. However, black start is hard for a GFL to execute, as it requires a reference voltage signal to follow. In contrast, the GFM, similar (though not identical) to a CS, can provide black start support to the grid [30,77,79]. It is worth noticing that both GFM and GFL control approaches face multiple physical equipment bounds in the form of energy limits, voltage, and current.

The power system stability challenge is as old as the system itself [80]. It emerges in new forms as the power system evolves over time. Similarly, high IBR penetration also brings new stability phenomena. The impact of IBRs on frequency, voltage, angular, and converter-driven stability is discussed next [6,77].
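Before moving to the individual stability classes, the cascaded structure just described, an outer loop that regulates P/Q and an inner loop that tracks a current reference, can be illustrated with a minimal Python sketch. The gains, time constants, and first-order stand-ins for the filter and the power response are placeholders chosen only to make the loop hierarchy visible; they are not tuned values from any referenced controller.

# Minimal sketch of a cascaded control structure: an outer loop tracks a power
# reference and hands a current reference to an inner loop that tracks current.
# All gains and the first-order "plants" are arbitrary illustrative values.

dt = 1e-4                     # simulation step [s]
kp_p, ki_p = 0.5, 20.0        # outer (power) PI gains, placeholders
kp_i, ki_i = 2.0, 200.0       # inner (current) PI gains, placeholders

p_ref = 0.8                   # active power set-point [pu]
p, i_meas = 0.0, 0.0          # measured power and current [pu]
int_p, int_i = 0.0, 0.0       # PI integrator states

for step in range(20000):     # 2 s of simulated time
    # Outer loop: power error -> current reference
    e_p = p_ref - p
    int_p += e_p * dt
    i_ref = kp_p * e_p + ki_p * int_p

    # Inner loop: current error -> voltage command (no limiter in this sketch)
    e_i = i_ref - i_meas
    int_i += e_i * dt
    v_cmd = kp_i * e_i + ki_i * int_i

    # Crude first-order stand-ins for the filter and the power response
    i_meas += (v_cmd - i_meas) / 0.01 * dt
    p      += (i_meas - p)     / 0.05 * dt

print(f"steady-state power ~ {p:.3f} pu (reference {p_ref} pu)")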
Frequency stability

The inertia of conventional synchronous generators determines the initial RoCoF during a frequency event (by a frequency event we mean the sudden disconnection or connection of generation or load). Next, the governors of the generators kick in to arrest the frequency drop, followed by the automatic generation control (AGC), which restores the frequency to its nominal value (50 or 60 Hz) [76]. In this sequence, the activation of a protective scheme is avoided, which could otherwise lead to generation or load shedding, or to a blackout in extreme cases. Frequency dynamics lie in the range of seconds to several minutes. This frequency stability is deeply linked to inertia; therefore, stability can be compromised as inertia drops in an IBR-based system [77]. The criteria for IBRs to enter into service with regard to frequency and voltage are summarized in Table 7.

The frequency dynamics after a disturbance are characterized by (1) the RoCoF, (2) the frequency nadir, and (3) the steady-state deviation [81]. To resolve these issues in power systems with high levels of IBRs, state-of-the-art control technology is needed, as discussed in detail in the coming subsection. An oscillation problem may arise due to sharp or aggressive control responses. Therefore, system operators should continuously reassess the frequency-response needs in the context of regulation reserves and performance. GFM-IBR performance is demonstrated in the Supporting Information, showing that they are similar to the CS in handling frequency stability [69].

Another issue that requires attention is the size of the contingency that can affect frequency stability, as the size of a synchronous generator is much larger than that of an individual IBR. New common-mode failures may occur, affecting many IBRs simultaneously because of their high share. In a wide-area interconnected power system, low-voltage propagation over a wider area may induce voltage-driven frequency dips during a fault. Synchronous area splitting may also occur and can be a concern in the future grid, causing frequency stability issues [77].

Inertial response

To address the imbalance in a low-inertia power network, quick current-injection methods can be employed, with careful consideration of suitable ramp-rate limits. Various sources exhibit distinct ramp rates; for instance, a BESS may have a faster ramp rate than a WTG. When dealing with lower ramp rates, a greater number of sources need to engage in frequency regulation to collectively meet the demand [45].

More solutions, like must-run synchronous generator reserves, can be used [81]. However, these solutions can be costly and sometimes technically challenging, as in the case of gas turbines. Another option is to supplement the inertia of the power system through synchronous condensers (SynCons), flywheels, and GFM-based IBRs, which can also cope with the system-split issue. GFM-based IBRs require the inverter's overcurrent capability and an energy buffer to effectively provide inertia to the system [77]. Besides, it was concluded in [26] that in terms of the frequency stability metrics, namely nadir and RoCoF, GFMs perform better than the all-SM systems used as a baseline, owing to their fast response capability compared to the slower dynamics of SM turbines. Here, a matching control approach [60] was applied, as it considers DC quantities in the angle dynamics and hence shows efficacy in mitigating the saturation of the DC source.
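The three post-disturbance characteristics just listed (RoCoF, nadir, and steady-state deviation) can be reproduced with a minimal swing-equation sketch in Python. The inertia constant, damping, droop, governor time constant, and load step below are illustrative assumptions rather than parameters of any system cited in this review.

import numpy as np

# Illustrative system parameters (assumed, not from any cited study)
f_b  = 50.0      # base frequency [Hz]
H_eq = 4.0       # equivalent inertia constant [s]
D    = 1.0       # load damping [pu power / pu frequency]
R    = 0.05      # governor droop [pu frequency / pu power]
Tg   = 0.5       # governor time constant [s]
dP_L = 0.1       # sudden load increase [pu]

dt, T = 1e-3, 20.0
n = int(T / dt)
df = 0.0          # frequency deviation [pu]
pg = 0.0          # primary (droop) response [pu]
traj = np.zeros(n)

for k in range(n):
    # swing equation: 2H d(df)/dt = pg - dP_L - D*df
    ddf = (pg - dP_L - D * df) / (2.0 * H_eq)
    # first-order governor delivering -df/R
    pg += (-df / R - pg) / Tg * dt
    df += ddf * dt
    traj[k] = df

rocof_hz = dP_L * f_b / (2.0 * H_eq)     # magnitude of the initial RoCoF
nadir_hz = traj.min() * f_b              # frequency nadir (deviation)
ss_hz    = traj[-1] * f_b                # quasi steady-state deviation

print(f"initial |RoCoF| ~ {rocof_hz:.3f} Hz/s")
print(f"nadir           ~ {f_b + nadir_hz:.3f} Hz")
print(f"steady state    ~ {f_b + ss_hz:.3f} Hz")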
The frequency-reserve response time might not be met when RoCoF values are high, resulting in under-frequency load shedding. SMs can withstand larger values of RoCoF than IBRs because their design makes them tolerant to bolted faults [6,10,45]. Ref. [16] claims that inertia has an impact on both the RoCoF and the damping, which then results in an increase of the natural frequency. Besides, it has been observed that inertia mainly affects the RoCoF, while the damping of the system helps to reduce the steady-state oscillation [70]. According to our research, inertia has only a small impact on oscillation damping; rather, it primarily influences the RoCoF [76] and, consequently, the nadir. For better understanding, the initial RoCoF can be determined as dω_e/dt = ΔP_L f_b/(2H_eq S_b2), while the frequency nadir is the deviation of ω_e reached at the peak time T_peak. Here ω_e is the electrical frequency, and the peak time related to the frequency nadir is represented by T_peak. The RoCoF relays in distributed generators require an active power imbalance of 8.7% to 15% for islanding detection (with a time range of 60 to 200 ms and a relay setting of 0.1 Hz/s to 1.2 Hz/s) [82]. The requirements for primary frequency response set by the IEEE P2800 Standard [39], along with existing services related to inertia [8], are given in Table 6. It is an open question in the UK how much inertia the GFM converter (in the context of the HVDC network) should provide [83], and it could be in the range of 2-25 MW/MVA [40].

Damping and synchronization of GFM-based systems

The basis of power system stability is the synchronization among the generation sources, which plays a vital role in preventing blackouts and undesired outages [13]. However, achieving synchronization is becoming increasingly challenging with the widespread adoption of non-synchronous generation sources (NSG). With this, the traditional stability preservation methods and the approaches based on the characteristics of synchronous sources, which rely on certain assumptions, may no longer be applicable. This shift arises from the kind of inertia and damping provided by the IBRs. Synchronization mainly affects the stability and dynamics related to the frequency. This frequency has to remain consistent during steady-state conditions, as it is a key coupling variable across the power system network.

Not only the electrical engineering community but also physicists and applied mathematicians are attracted by this issue. In some of the literature, the power network is assumed to be a nonlinear oscillator [84,85], whereby, to make the swing equation resemble the desired oscillator, many factors of critical importance are either neglected or simplified. Furthermore, refs. [86,87] claim closed-loop solutions, of which ref. [86] presents provisional conditions and ref. [87] assumes homogeneity of the coupling damping factors. Again, these assumptions conflict with the fact that the power system is heterogeneous, preventing an inclusive investigation of the IBR-dominated power system. The concern of frequency synchronization is addressed through three contributions in ref. [13]: the consideration of (1) a heterogeneous coupling factor, (2) a parametric characterization of power network synchronization, and (3) the testing of a 100% IBR-based system with sufficient damping elements.
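The oscillator view of the network mentioned above can be made concrete with a toy model: a few swing-equation nodes coupled through sine terms, with deliberately heterogeneous inertia and damping. The network size, coupling strength, injections, and damping values in the Python sketch below are arbitrary illustrative choices, not data from refs. [13,84-87].

import numpy as np

# Toy synchronization model: N swing-equation nodes coupled through sin() terms.
# All parameters (inertia M, damping D, coupling K, injections P) are assumed
# and intentionally heterogeneous.
rng = np.random.default_rng(0)
N = 4
M = np.array([4.0, 3.0, 0.5, 0.5])       # inertia (two machine-like, two IBR-like)
D = np.array([1.0, 1.2, 5.0, 6.0])       # heterogeneous damping
P = np.array([0.5, 0.3, -0.4, -0.4])     # net injections (sum = 0)
K = 2.0 * (np.ones((N, N)) - np.eye(N))  # uniform all-to-all coupling

theta = rng.uniform(-0.3, 0.3, N)        # initial angles [rad]
omega = np.zeros(N)                      # initial speed deviations [rad/s]
dt, T = 1e-3, 30.0

for _ in range(int(T / dt)):
    # element (i, j) of the difference matrix is theta[j] - theta[i]
    coupling = (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    domega = (P - D * omega + coupling) / M
    theta += omega * dt
    omega += domega * dt

spread = theta.max() - theta.min()
print(f"speed deviations after {T:.0f} s: {np.round(omega, 4)} rad/s")
print(f"angle spread: {spread:.3f} rad (small, steady spread => synchronized)")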
Regarding the differences between GFL and GFM, their synchronization to the grid is a key process, among others. As shown in Figure 1, GFMs do not necessarily require a dedicated unit for synchronism, whereas a GFL requires a unit such as a PLL. To inject the proper amount of active and reactive power, this dedicated unit in the GFL reads the angle of the grid voltage and determines the phase shift of the converter current. There is a chance that the PLL fails to lock onto the grid frequency after a disturbance in a system with a low short-circuit ratio (SCR) [45]. For understanding, a stiff grid is considered to be one with SCR > 3 as per the IEEE Standard [88]. The SCR is defined as the ratio between the AC system short-circuit power (S_ac) and the VSC station (source) rated power (P_N,dc), and is determined as

SCR = S_ac / P_N,dc.

Conversely, the GFM working as a voltage source behind an impedance makes it a potential candidate for weak grids. A detailed review of this issue is provided in the Supporting Information on GFM performance [69]. Besides, GFMs use the synchronism principle of SMs and can therefore self-synchronize [22,62].

Voltage stability

The maximum limit for transferring power over long distances is often constrained by the transient stability of the initial swing, which is crucial for maintaining synchrony in traditional systems. This poses a complex challenge for grid operators and planners. Introducing IBRs as replacements for SGs can help address this challenge by ensuring acceptable angular and voltage stability within the grid. However, incorporating a significant proportion of IBRs also introduces new concerns related to voltage issues. This can be attributed to factors such as fluctuations in the number of high-gain controllers online, variations in the responses of IBRs, and potential interactions with other dynamic devices in the system [77]. Improper responses of IBRs during fault ride-through (FRT) situations after a fault can lead to low- or high-voltage collapse in bulk power systems (BPS), especially when IBRs are concentrated in remote areas distant from load centres. Additionally, the high bandwidth and dynamic characteristics of IBR voltage control may introduce uncertainties and novel interactions with other reactive devices. Consequently, similar to SGs, there is a need to impose limitations on the control bandwidth of IBRs, as exemplified by the 5 Hz restrictions in Great Britain.

Besides, supporting voltage stability can be achieved through the converters' reactive power capability, for example by injecting reactive power during extreme voltage dips. Under extreme conditions, staying connected to the grid, i.e. the FRT capability, is now becoming a grid code requirement for electronic power converters. Furthermore, forecasting the output of wind and solar power can be challenging due to their intermittency, and this issue can be mitigated with battery energy storage [81].

According to requirements, the steady-state voltage of any phase throughout the feeder should be within a specific range, like ANSI C84.1 range A. This range is determined by the characteristics of the load in the microgrid (MG). According to ANSI C84.1, a maximum voltage imbalance of up to 3% is recommended [38]:

Imbalance (%) = max_i |V_i − V_av| / V_av × 100 ≤ 3%,

here V_av is the average voltage of any phase in the steady state. The voltage imbalance factor (VIF), on the other hand, is recommended to be less than 2% according to IEC 61000-3-x:

VIF (%) = |V_2| / |V_1| × 100,

where |V_1| and |V_2| are the positive and negative sequence voltages, respectively. Besides, the entering-into-service criteria for frequency and voltage are given in Table 7 [39].
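Both imbalance measures introduced above can be evaluated directly from a set of three phase voltages, as in the short Python sketch below; the sample voltages are made-up numbers used only to exercise the formulas.

import numpy as np

# Example phase voltages (made-up values, in volts, with a mild imbalance)
Va = 230.0 * np.exp(1j * np.deg2rad(0.0))
Vb = 225.0 * np.exp(1j * np.deg2rad(-122.0))
Vc = 233.0 * np.exp(1j * np.deg2rad(119.0))
V = np.array([Va, Vb, Vc])

# ANSI C84.1-style imbalance: max deviation of a phase magnitude from the average
mags = np.abs(V)
V_av = mags.mean()
imbalance_ansi = np.max(np.abs(mags - V_av)) / V_av * 100.0

# Sequence-component imbalance (VIF): |V2| / |V1| via the Fortescue transform
a = np.exp(1j * 2.0 * np.pi / 3.0)
V1 = (Va + a * Vb + a**2 * Vc) / 3.0   # positive sequence
V2 = (Va + a**2 * Vb + a * Vc) / 3.0   # negative sequence
vif = abs(V2) / abs(V1) * 100.0

print(f"ANSI-style imbalance: {imbalance_ansi:.2f} %  (recommended <= 3 %)")
print(f"VIF = |V2|/|V1|:      {vif:.2f} %  (recommended <  2 %)")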
Voltage imbalance can happen during normal steady-state operation due to load imbalance; it should therefore be prevented to protect three-phase loads such as induction motors from damage. According to some studies [45], if the GFM does not provide negative-sequence voltage regulation, a severe voltage imbalance can occur; a clear requirement for this capability therefore has to be stated, otherwise the GFM may not provide this regulation by default. This voltage regulation from the GFM requires sufficient negative-sequence current capacity for effective operation. On the other hand, it may aggravate the DC-capacitor power ripple and thus call for deploying a larger capacitor. A common requirement among various technical reports and grid codes is that converters should behave as a voltage source behind an impedance in order to provide frequency and voltage support during grid disturbances [83], such as voltage dips or swells and phase jumps. A three-phase fault case study is given in the Supporting Information, where the voltage recovers swiftly in a GFM-dominated system after fault clearing [69].

Phase angle stability

A reassessment may be necessary for higher penetration of IBRs into the system, as it may have a different impact on angular stability. The dynamics of this stability are linked to electromechanical oscillations and occur in the range of milliseconds to seconds. Two aspects, large-signal angle (transient) stability and small-signal oscillations, can be ascribed to the high presence of IBRs in the grid and are explored below.

Large signal angle or transient stability

During grid faults, synchronism has been a long-standing proxy for maintaining the large-signal rotor angle stability of the synchronous generator. This stability concept is based on the accelerating or decelerating magnetic fields of the stator and the rotor angular displacement. The dynamics of fault clearing are dominated by the accelerating energy of the synchronous generator, which accrues from the mismatch between electrical and mechanical power caused by the reduced electrical output during the fault. Some research articles have studied transient stability using an active power control approach while overlooking the effects of reactive power control. For example, transient instability arising from voltage sags causing current saturation was investigated in [89] and [90]. In addition, the Lyapunov function was used to evaluate the transient stability of the low-pass-filter-embedded droop control in [91]. Furthermore, different grid faults were applied to investigate the transient stability of the PSC-based VSC. There is a cross-coupling between the active and reactive control loops [92]; therefore, it is important to consider this for realistic studies. An attempt was made in [93] to investigate the deteriorating impact of reactive power control on the VSG's transient stability using a qualitative analysis approach based on the power-angle (p-δ) curve. However, the fundamental challenge is the identification of the control parameters, like droop gains and virtual inertia, and of the impact of reactive power control on transient stability, given the inherent complexity of its nonlinear dynamics. Therefore, a large-signal model is used in [22] for a systematic review of the transient stability dynamics of four GFM control strategies, i.e. VSG, droop control with and without a low-pass filter, and power-synchronization control (PSC). Subject to the equilibrium points, the basic droop control and PSC retain stable operation. However, due to the lack of damping in the
responses to inertial transients, the VSG and the droop control with LPF could not maintain stability [94].

Another important aspect is the current limitation issue in transient stability. A strategy for limiting the current references in the inner control loop based on a current saturation algorithm (CSA) is reported in [95]. A popular technique for limiting the current while maintaining the voltage-source nature of the GFM VSC, based on a virtual impedance (VI) approach, is presented in [96]. According to ref. [97], the CSA-based VSC is effective in managing current limitations but faces difficulty in synchronization after fault clearance. In contrast, the VI-based VSC excels in synchronization after fault clearance but encounters overcurrent issues in the initial 25 ms after a fault. A hybrid model combining CSA-based and VI-based VSCs is proposed to address these challenges and enhance overall system performance. Additionally, the current-limiting strategies discussed earlier do not tackle the issue of transitioning out of the current saturation mode upon fault clearance. To address this concern, a VSG-based VSC is explored in [98] to investigate transient stability. The proposed approach incorporates an enhanced current-limiting strategy and a hybrid synchronization control that integrates both PLL and power-frequency (p-f) synchronization control characteristics. This strategy effectively restores the system from the current saturation mode by selectively activating or deactivating the current-limiting reference loop.

Small-signal oscillations

The ability of the power system to maintain synchronism after facing small disturbances, like small changes in generation or load, is known as small-signal or small-disturbance rotor angle stability [6]. Beyond angular stability, another notable aspect is the damping of small-signal oscillations, which also needs attention in IBR-dominated systems. Practical experience to date has shown that IBR-based networks exhibit oscillations of up to 15 Hz, which is higher than the 4 Hz oscillations observed in CS-based networks that rely on electromechanical processes. There could be four reasons for these small-signal oscillations. (1) The displacement of CS can lead to power system degradation and oscillations. These oscillations can be damped by modifying the control systems of GFL-IBRs and GFM-IBRs and by equipping the dynamic reactive power sources with power oscillation dampers. (2) Electromechanical oscillations and new modes may appear with the addition of SynCons. The system damping can be improved by adding flywheels to SynCons. (3) GFL-IBRs can engender sustained low-frequency oscillations in weak systems, such as those found in Australian systems. (4) Oscillations can occur between devices due to GFM controls with machine-like behaviour resembling electromechanical instabilities [77]. Oscillations of types (3) and (4) will be discussed further in the next section, as they fall into the converter-driven stability category.
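The two current-limiting ideas reviewed above for transient stability can be contrasted in a few lines of Python: a current saturation algorithm simply clips the magnitude of the current reference, while a virtual impedance lowers the internal voltage reference in proportion to the overcurrent so that the converter keeps behaving as a voltage source. The limit value, the virtual resistance and reactance, and the fault current below are illustrative assumptions, not parameters from refs. [95-98].

import numpy as np

I_MAX = 1.2       # converter current limit [pu], illustrative

def csa_limit(i_ref: complex) -> complex:
    """Current saturation algorithm: clip the reference magnitude, keep its angle."""
    mag = abs(i_ref)
    return i_ref if mag <= I_MAX else i_ref * (I_MAX / mag)

def virtual_impedance(v_ref: complex, i_meas: complex,
                      r_v: float = 0.2, x_v: float = 0.4) -> complex:
    """Virtual impedance: reduce the internal voltage reference by (r_v + j x_v)
    times the overcurrent, so the GFM stays a voltage source but naturally
    pushes the current back towards its limit."""
    excess = max(abs(i_meas) - I_MAX, 0.0)
    if excess == 0.0:
        return v_ref
    direction = i_meas / abs(i_meas)
    return v_ref - (r_v + 1j * x_v) * excess * direction

# Example during a fault: the unlimited current reference is far above the limit
i_fault_ref = 3.0 * np.exp(1j * np.deg2rad(-80.0))   # made-up fault current [pu]
print(f"CSA-limited current magnitude: {abs(csa_limit(i_fault_ref)):.2f} pu")

v_ref = 1.0 + 0j
v_adj = virtual_impedance(v_ref, i_fault_ref)
print(f"VI-adjusted voltage reference: {abs(v_adj):.2f} pu (down from {abs(v_ref):.2f} pu)")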
Converter-driven stability

This newly added type in the classification of power system stabilities is converter-driven stability [6]. Stability issues of this kind are primarily linked to IBRs and differ from the dynamic behaviour of conventional synchronous generators because of the leading role of the VSC in the system [99]. The IBR may induce oscillations due to cross-coupling between electromechanical and electromagnetic transients, which can be exacerbated by the fast response capabilities of the control loops and algorithms that operate over a wide timescale [100]. The instability in such cases is further divided into two classes: slow-interaction (<10 Hz) and fast-interaction (100 Hz to several kHz) converter-driven stability.

The fast-interaction dynamics occur between the power system's fast-response components, such as the SM stator dynamics or the transmission network, and power-electronic-based systems such as GFM, GFL, FACTS, and HVDC. For example, the GFM or GFL inner current loops interact with the system's passive components, resulting in high-frequency oscillations ranging from hundreds of hertz to many kilohertz. On the other hand, the dynamics linked to slow interaction arise between the power system's slow-response devices, such as some generator controllers and the SM electromechanical dynamics, and the power converters. Although their primary causes differ, slow-interaction stability can be similar to voltage stability, particularly regarding the maximum power transfer between the system and the converter. For instance, the instability may be rooted in a weak system. Since 2014, sustained oscillations have been observed in real events in China's Xinjiang region, caused by the interaction between a weak AC grid and direct-drive permanent magnet generator (PMG) WTGs. It is worth noting that, to the authors' understanding, the testing of GFM and GFL for this type of stability has not been conducted and remains an open question for the research community.

GFM'S REAL-WORLD DEMONSTRATIONS AND APPLICATIONS IN IBRS

At present, GFM is mainly applied to MGs and transmission systems that have low rotational inertia and fault current. According to [44], the instantaneous NSG penetration level has reached 60% to 80% in many small power systems. Here, the instantaneous NSG penetration is defined as the power-converter-based generation divided by the demand plus export. As an example, an 89% instantaneous penetration level of PV and battery has been observed in St. Eustatius (see Table 8). Furthermore, the UK, Hawaii, Germany, and Australia are some examples of power systems with a high share of IBRs. These power systems are moving towards incentivizing and reforming grid services to enable the IBRs to participate in them. With this increase of IBRs in the BPS, the appearance of GFM there is inevitable. Pilot projects of GFM-IBRs are already providing in-depth knowledge and experience in Australia and Great Britain. These projects can serve as a learning platform for other power systems to follow this trend [46].
A simulation-based study was carried out on the all-island Irish transmission system to investigate the minimum requirements for frequency stability when the system uses 100% VSG-based IBRs [27]. An islanded AC microgrid is used in [101] to test a proposed bidirectional GFM converter that has fault tolerance and is applied through a centralized control architecture. Multiple projects have been initiated on the ground, some of which are reported in Table 8 with details of their capacity and the type of source used with GFM technology [29,46].

Most of the pilot projects listed in Table 8 are operated at the medium-voltage (MV) level of grid connection, with a few exceptions connected at the high-voltage (HV) level. A plausible reason for grid connection at this voltage level could be that the energy sources interfaced through the GFM technology, such as WTGs, PVs, and BESSs, are typically designed for application at the LV and MV levels. Besides, the immaturity of GFM technology and the uncertainties regarding its performance make demonstrations at the MV level a good compromise between testing the effectiveness of the service and the cost of project installation and operation for the demonstrators. Furthermore, the services and targets offered by the demonstrators, such as black start, fuel consumption minimization, and islanded MG operation, are better matched to the needs of distribution networks than to those of transmission systems. In the case of MGs, the GFM features will be particularly handy in extreme weather conditions for providing an uninterrupted power supply to users [102].

The primary stakeholders in this context are entities engaged in the grid generation and power management sectors. Notably, the project demonstrators listed in Table 8 predominantly consist of major power converter manufacturers such as ABB, SMA, Siemens, and GE, with a minor portion represented by transmission system operators (TSOs). This highlights a competitive landscape during the initial experimental phase, potentially yielding positive outcomes if the technology becomes integrated into the economic framework of the grid. Despite this, the significance of legislation remains constant and can play a pivotal role in incorporating this technology into national development plans.

These demonstrations show that GFM has great potential to replace the CS and can even make a 100% power supply from IBRs possible. However, it can be concluded that further research is needed before moving into the implementation phase, as elaborated in the coming section. Next, research on GFM applications in PV and WTG systems is discussed, whereas a detailed review of GFM for HVDC can be found in the Supporting Information [69].

GFM for photovoltaic (PV) systems

IBRs can operate either with other GFL-IBRs or in parallel with GFM-IBRs. Disparate energy sources like BESSs or PVs can be connected to the grid through these inverters.
When operating through GFL inverters, PVs can provide services to the grid, such as injecting reactive power, supporting steady-state voltage, dynamic voltage support, FRT, and primary frequency control (PFC) [41,103,104]. However, these services are not very effective through GFL for different reasons. Conversely, GFM-based IBRs have promising potential for allowing an increased level of integration into the grid, as they can establish the frequency and voltage of the grid [99,105]; their services, when deployed in PV systems, are summarized in Table 9 [40,69,[106][107][108]. It is claimed that the GFM can outperform the GFL and SMs in short-term stability [109] and frequency stability dynamics [26].

In the literature, many articles assume energy storage like a battery or an ideal source [26,109,110]; this assumption does not represent reality, since the primary sources of RPG, like PV and WTG, are intermittent by nature and this should not be ignored. In the case of PV, ancillary services can be provided in two ways: (1) by operating below the MPPT (i.e. curtailed operation), or (2) through PV and energy storage hybridization [103,[111][112][113][114]. Both approaches have their own merits and demerits. For example, a curtailed operation may result in unavailability during night time, but it is relatively simple. In contrast, PV and energy storage hybridization may be expensive, and there is a risk of under-usage. There is a growing consensus regarding the adoption of GFM for PV use in the BPS [111][112][113][114]. (Footnotes to Table 9: (a) current-limitation schemes are generally applied to protect the inverters; (b) the current-limiting scheme should be used in conjunction with FRT; (c) SFC requires power reserves from seconds to minutes, which are not available in the case of IBRs; (d) energy storage will boost the GFM abilities of most IBRs; (e) wind farms connected through HVDC can be energized, hence enabling black start; (f) generally, energy storage is required for HVDC to support the grid.)

During curtailed operation, monitoring the MPP is a challenge as it varies with time. Some approaches [112,113] have attempted to monitor this MPP based on estimation under deloaded conditions for GFM-based PVs. These studies, however, lack testing of the controllers under multiple common disturbances, such as irradiance changes, load changes, and network faults. Besides, the accuracy of the estimation, the uncertainty of parameters, and performance degradation during sudden irradiance changes are also unresolved issues in the literature [114]. For example, irradiance changes of up to 150-200 W/m² per second have been recorded due to the prompt movement of clouds. The estimation and current-limiting issues are addressed in [106] through a model-free method and a new scheme for current limitation using a modified current reference.
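The head-room created by curtailed (below-MPP) operation can be quantified with a short calculation, sketched below in Python. The plant rating, irradiance values, curtailment fraction, and the linear power-irradiance relation are simplifying assumptions made only for illustration.

# Head-room from curtailed (below-MPP) PV operation, illustrative numbers only.
p_rated      = 100.0   # inverter/plant rating [MW]
irradiance_0 = 800.0   # current irradiance [W/m^2]
irradiance_n = 1000.0  # irradiance at which the plant reaches its rating [W/m^2]
curtailment  = 0.10    # operate 10 % below the available maximum power point

# Simplification: available MPP power scales linearly with irradiance
p_mpp      = p_rated * irradiance_0 / irradiance_n
p_dispatch = (1.0 - curtailment) * p_mpp
headroom   = p_mpp - p_dispatch
print(f"available MPP power : {p_mpp:.1f} MW")
print(f"dispatched power    : {p_dispatch:.1f} MW")
print(f"frequency-response head-room: {headroom:.1f} MW")

# A cloud-driven irradiance drop (the text cites changes of up to 150-200 W/m^2
# per second) erodes the head-room within a second unless storage backs it up.
drop = 200.0   # W/m^2 lost in one second
p_mpp_after = p_rated * (irradiance_0 - drop) / irradiance_n
print(f"MPP after a 1 s cloud transient: {p_mpp_after:.1f} MW "
      f"(head-room left: {max(p_mpp_after - p_dispatch, 0.0):.1f} MW)")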
Another factor to consider is the presumption of a constant DC-link voltage, a condition achievable in practice through the use of sizable capacitors or battery energy storage systems (BESS). However, this approach adds significant costs, particularly in the BPS. This assumption overlooks the intricacies of the DC-link capacitor, the DC source, and the DC-DC converter control, all of which play a crucial role in determining how the DC source responds to abrupt changes in load [115]. Neglecting the limitations of the DC source can impede the inverter's effectiveness, resulting in discrepancies between input and output power and a subsequent decline in the DC voltage [116]. To sustain the voltage at v_ref (the reference value), the drop in DC voltage should not be more than v_ref/1.1, where 1.1 is the modulation index. GFM-based PV systems without the support of energy storage have been investigated in [117] and [106]. However, lower-voltage ripples are produced while tracking the frequency when a VSG-based GFM is used for PVs [117]. Besides, the DC-link dynamics are considered in [118] and [106], wherein the DC-link stability is assured. To summarize, PV equipped with GFM can potentially replace SGs; however, further research is required to establish their integration level and their combination with GFL and SGs.

GFM for wind turbine generators (WTG)

Among the different RPG types, WTGs make a major contribution, and their per-unit size and rated power have also grown recently [107]. However, wind power's variability and uncertain nature challenge the balancing of the power system [15]. To overcome these issues, there is an urgent need for control strategies that guarantee the stability of power systems under a high influence of wind power [68,119]. The traditional WTG is mainly based on GFL, which requires a strong grid to provide a fixed frequency and voltage, as mentioned in the PV section. GFL-based WTGs provide no support for active power during contingencies, since they operate as constant current sources with the turbine's kinetic energy practically decoupled [20]. Despite this limitation, there are a few approaches using GFL-based current sources that enable WTGs to participate in the frequency regulation of power systems. This provision is carried out through (1) maintaining a power reserve, (2) providing controllable power generation units, and (3) simulating virtual inertia [120].

There are a few shortcomings related to these GFL approaches. In the first method, there is no direct involvement in the frequency response; rather, the active power is adjusted in response to the system frequency. The second approach involves using diesel generators, which have a slower response compared to the IBR, while BESS is expensive to install in bulk. While virtual inertia is useful for resisting very fast frequency changes, it is only available for very short intervals and cannot support the frequency in the long term.

To overcome the above issues, a GFM-based inverter is required for WTGs, whose characteristics are listed in Table 9.
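The virtual-inertia option in point (3) above is bounded by the kinetic energy stored in the turbine rotor, which a back-of-the-envelope calculation makes clear. The rotor inertia, speed limits, and power rating in the Python sketch are assumed values, not data for any turbine discussed in the cited studies.

# Kinetic energy available from a WTG rotor for short-term frequency support.
# All machine data are illustrative assumptions, not from any cited turbine.
J       = 8.0e6    # rotor + drivetrain inertia [kg*m^2]
w0      = 1.6      # initial rotor speed [rad/s, low-speed shaft]
w_min   = 1.3      # minimum speed allowed before aerodynamic recovery [rad/s]
p_rated = 5.0e6    # turbine rating [W]

dE        = 0.5 * J * (w0**2 - w_min**2)   # extractable kinetic energy [J]
p_support = 0.1 * p_rated                  # assumed extra power injected [W]
t_support = dE / p_support                 # duration the boost can be held [s]

print(f"extractable kinetic energy: {dE / 1e6:.2f} MJ")
print(f"a 0.1 pu power boost can be sustained for ~{t_support:.1f} s")

This order of magnitude (a few seconds) is consistent with the statement above that virtual inertia from the turbine is only available for very short intervals.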
A study of an ideal doubly-fed induction generator (DFIG) WTG and BESS-based hybrid standalone system utilizing GFMs is conducted in [121]. A performance improvement in terms of inertial response and active power tracking through a synchronverter-based GFM for the PMSG's grid-side converter under variable wind speed conditions is claimed in [122]. GFM-based Type 3 [123] and Type 4 [124] WTGs have been investigated for their potential use with HVDC systems. The GFM proposed for the grid-side converters can restrain the fault current in weak-grid situations; also, the GFM suggested for Type 4, i.e. the VSG, has better impedance characteristics than conventional DFIG methods. These studies ignore the characteristics of the primary wind energy and the dynamics related to electromechanical transients, as they are designed for large-scale wind farms. A decentralized GFM control strategy for an MG with high penetration of DFIG-based wind power is investigated in [120]. This strategy does not rely on a PLL and uses DFIGs and BESSs as GFM-based voltage sources. Compared to other articles, ref. [120] considers the use of GFM control for wind power with high penetration in an autonomous grid, providing continuous voltage and frequency support. In addition to analyzing the rotor speed dynamics and the electromechanical transient dynamics, a stability analysis of the full-order small-signal system is also provided.

Currently, the use of GFM is a relatively new topic, and there are few literature reviews available solely on WTGs using GFM converters. A comprehensive assessment limited to GFM-based Type 4 WTG-PMSG is conducted in [107]. It categorizes GFM for WTGs based on DC-link voltage regulation strategies, which differs from the categorization used in [43], where GFM control methodologies are mainly based on constant DC-link voltage assumptions. Most of the strategies in Table 4 are classified as grid-side GFM (G-GFM), machine-side GFM (M-GFM), and external energy storage GFM (E-GFM). The comparative study shows that during faulty conditions, the multi-loop and single-loop M-GFM perform better because the machine-side converter controls the DC-link voltage, which is decoupled from grid disturbances [107]. Another review article investigates GFM control strategies for Type 3 and Type 4 WTGs, examining various DC-link control and energy reserve schemes. It is found that the control scheme with the DC-link voltage regulated on the machine side performs better for Type 4 WTGs, whereas DC-voltage control based on a PLL is favoured for Type 3 WTGs due to its zero steady-state error and speedy dynamics [125]. To conclude, the GFM-based WTG is favourable regarding ancillary support like frequency control, whereas it is recommended to provide additional constant power sources, such as BESS and synchronous machine reserves, for reliable operation. The GFM-based BESS is reviewed next.
GFM for BESS

BESS is a low-hanging fruit for deploying GFM capability [46]. For example, 100 GW of energy storage is planned to be added to the system by the US Energy Storage Association [126]. However, as mentioned in the previous sections, technical and economic concerns exist with electrochemical batteries, which form a major part of such projects [126,127]. Despite these concerns, BESS is not only a dispatchable source but also one of the best candidates for grid ancillary services such as active voltage and frequency support, black start capability for standalone systems, and other support like coping with voltage sags, harmonics, and surges. A more economically viable option is the use of electric vehicle batteries for grid services [76]. In [128], the vehicle-to-grid concept is used to provide good harmonic rejection and voltage support using a coordinated virtual-based control scheme for three-phase four-leg inverters. These sources can respond quickly to events such as frequency excursions and have a high energy density. While the provision of inertia emulation has not yet been reported in the industry, BESS has the potential to participate in this service to a degree in the future [129]. Nonetheless, they play a role in transitioning from grid-connected to islanded modes and vice versa using algorithm-based controllers. One of these controllers is the GFM methodology, which can potentially provide most of the services lacking in GFL-based control methodologies.

Several articles have been published on the integration of BESSs and their role in the operation of power systems [108,[130][131][132]. The pros and cons of AC-DC inverters, topologies, and the performance of battery technologies related to BESS integration into the distribution system at the MV level are discussed in [132]. However, it does not cover the participation of BESSs in ancillary services, their operation, integration standards, and interoperability. The provision of behind-the-meter (BTM) and ancillary services is discussed in detail in [108,130], where the opportunities, obstacles, requirements, policies, and techniques are highlighted. However, the discussion in [130] is limited to a very narrow scope of BTM, where the control mechanism is absent. On the other hand, ref. [133] provides a comprehensive review of BESSs, including grid-interfacing control strategies, common variations in BESS architecture, standards, and requirements for grid connection. Besides, practical applications of BESSs and their coordination with PVs are also discussed. Furthermore, refs. [51,134] explore an islanded converter-based AC microgrid using precise small-signal mathematical modelling. Modelling and stability analysis based on an independent MG droop control are presented in [51], whereby low-frequency oscillations are generated by the droop controller without a PLL. However, these articles consider ideal BESSs in a system without RPG.
According to [111], inverters used for BESS are divided into four categories, GFL, GFM, grid-supporting, and grid-feeding, based on the interconnection to the grid and the services they can provide. Meanwhile, inverter topologies are classified into 2-level and multilevel topologies [133]. As this article focuses on GFM, readers can refer to [111] for further details. With the ability to maintain the AC voltage and frequency at the main terminal AC bus and to allow a bidirectional power flow, industry and system operators favour GFM for BESSs. In short, the duo of GFM and BESS acts like a synchronous generator operating in a conventional power system. While both 2-level and multilevel inverter topologies can be used for GFMs, the multilevel topology is preferred over the 2-level one. In brief, energy storage like BESS will be essential for the large-scale deployment of IBRs, as it will assist other sources in performing different grid operations.

With the summary in Table 9 highlighting the various services provided to the grid by these different IBRs, the application section ends here. However, the use of GFM in HVDC is also reviewed in the Supporting Information [69].

SYSTEM NEEDS WITH HIGH-LEVEL IBR INTEGRATION

There are eight identified system needs that fulfil the primary objectives of the system in all credible conditions. These eight needs, as reported in Table 10, are divided into two groups: (1) stability and power quality, and (2) security and service quality [46,135]. While energy and capacity are the primary factors in investment decisions, there is a recognized shift towards the other needs, particularly with high shares of IBRs and RPGs in the future [31,32]. The six additional system requirements are subdivided into various categories, as illustrated in Figure 4. Presently, there is a lack of precise definitions for these subcategories of services and their corresponding needs. This ambiguity arises from the intricate interconnection and overlap between the two main groups. Although the specific types and subtypes of requirements may differ from system to system, they should collectively span the entire spectrum, being applicable in all plausible scenarios with minimal interdependence whenever feasible.

It is important to note that a system need differs from a service that an IBR can provide. For example, a GFM-based IBR can emulate inertia and thus offer this service to the grid during frequency events. While inertial energy is not a fundamental system need, it is a feature of SMs that plays a vital role in regulating the grid frequency. By using special controls, IBRs can emulate inertia, thus competing with and even replacing the inertial energy provided by the rotating mass of SMs [31,46].

Another need of a power system is the black start capability; it is required after a power system shutdown, which leads to a loss of electric power. Blackouts can directly impact daily life, causing food spoilage, loss of life-support systems in hospitals, etc.
Restoring the system requires identifying a cranking path to establish the voltage and frequency using the first source, which in IBR-dominated systems is mainly an energy storage system. This ability requires the source to provide the inrush current for transformers, line charging currents, and the starting currents for induction motors. A GFM IBR can be used for this black start capability, and not all sources need to possess it. For the reader's reference, a case study of the black start and grid restoration capability of GFMs is presented in [30] to show the efficacy of these inverters and their potential to perform like a CS.

DEPLOYMENT AND FUTURE PROSPECTS OF GFM

GFM technology has to pass through multiple research, modelling, testing, and implementation stages to reach maturity and be widely accepted, as shown in Figure 5 [46]. To increase the interfacing of generation and storage with the grid through inverters, speedy development, research, and field trials are required, especially for GFM [111,136]. In the medium term, priorities for GFM will change so that it can materially contribute to improving the performance of certain grids; where cheaper technologies cannot improve performance, preferences are already changing. Early development will help in building consensus and in standardizing GFM performance for grid operation improvement. Experience is required to scale GFM to a BPS. The multi-year activities are conceptualized in Figure 5, demonstrating the trends and key elements related to stability and to grid integration associated with GFM.

To move towards GFM, guides are stipulated in the chart in Figure 5. A 9-step guideline is provided for potential GFM deployment, which may lead to the evolution of the technology and its concepts. The three oval-shaped guidelines, labelled (A) to (C), represent the links between the manufacturers of IBR equipment and the owners and project developers [46]. Scaling and other aspects of GFM technology are discussed in the following subsections.

From MGs to BPS

A longer timeline (∼10-30 years) is required for GFM to replace synchronous machines. This is a mammoth task that can only be accomplished when a robust standards environment defines GFM functionality and an extensive research base is established for their control, protection, and interoperability. The maturation process of GFM will continue for many years as operational experience and expertise are gained. GFM has shown promising demonstrations in various MG-level settings over the past 20 years, for example the CERTS MG testbed [137]. Besides, islanded MGs with high IBR penetration, such as that in Kauai, Hawaii, have seen GFM inverters as an emerging solution. By demonstrating its reliability in various contexts, GFM provides the confidence and foundational knowledge necessary to introduce it into larger electric grids.

Marketization of ancillary services from GFM

The non-uniformity of market structures is particularly evident in the regulation reserves that are faster than frequency containment reserves, which IBRs mainly offer. This can be attributed partly to the fact that no power system without CSs currently operates with significant loads. With the development of these new services, the existing structure of ancillary services requires revision in preparation for marketization. For example, AEMO, like other grid operators, is taking significant steps towards introducing new services.
Environment for technical standards for GFM

The distinct behaviour of GFM, such as its voltage-source characteristics, calls for tailored standards and grid codes [138]. GFMs are primarily used for voltage regulation instead of current regulation, while the current standards [75] focus on limiting reactive power, current harmonics, and anti-islanding functions at the distribution level. Such an effort at harmonic rejection is achieved in [128]. The harmonic rejection capability of GFMs also needs a detailed critical review and is a potential topic for future work. During islanding conditions, GFMs are expected to provide an uninterrupted power supply. Grid authorities should focus on revising and modernizing grid codes, such as the standards governing unintentional islanding functions.

Accurate models and simulation tools for GFM and high-level IBR testing

Existing state-of-the-art power system analysis tools are predominantly tailored for CS-dominated power systems. However, the growing integration of IBRs and their associated impacts challenge the validity of the assumption in these tools that the synchronous speed remains near nominal values during and after transients. Consequently, there is a pressing need to prioritize research focused on refining models and advancing simulation tools to accurately capture these dynamics. Additionally, predicting adverse performance requires simulating inverters, such as GFM, as implemented in real-world scenarios.

In summary, an IBR-dominated grid necessitates substantial curtailment, suitable configurations to accommodate high IBR integration, and improved supply-demand alignment across various timescales. It is essential to advance compatible technologies, as IBRs come in numerous types, replacing conventional sources that are well understood [99] and coordinated [139].

CONCLUSION

This paper critically reviews the GFM and GFL control approaches for IBRs and their integration into a power system, focusing on the latter. These two inverter technologies are compared, considering their control structures, operations, and applications. Due to the unavailability of a universally agreed-upon definition for these two control methodologies, an understanding is derived from the existing literature while considering the context of their applications. Besides, the role of GFM in various aspects of power system stability is investigated, particularly frequency, voltage, angle, and converter-driven stability. The key process of synchronization of IBRs with the grid through both types of control approaches is also discussed. Furthermore, the current pilot projects utilizing GFM are listed, emphasizing their productivity in providing grid ancillary support. System needs and GFM-IBR capabilities are also identified, and the applications of GFM in WTG, PV, BESS, and HVDC are critically investigated. Finally, the paper highlights the GFM prospects and the challenges faced by its deployment in the BPS, and identifies the research gaps.

Future research should prioritize the modelling of IBRs and their interface with the grid through GFM. Additionally, comprehensive testing of GFM against the various system stabilities is necessary prior to its widespread deployment. Overall, this research work aims to contribute to the reliable operation of independent standalone systems and BPSs, enabling high penetration of NSG with the assistance of GFM.

FIGURE 1 Type of inverters for grid connectivity: (a) GFL, (b) GFM.
FIGURE 2 GFM and GFL control parameters in IBRs when connected to the grid.
FIGURE 5 Possible roadmap for the deployment of GFM.
TABLE 1 Comparison of this paper to other review papers (Y means Yes, N means No). (Footnotes a-i note the aspects each reviewed article lacks, such as GFM operations, applications in PV, WTG, BESS, and HVDC, converter-driven and frequency stability, the role of each GFM type in the major stabilities, system needs, and deployment/future work; this article extends the set of control strategies compared to the others.)
TABLE 2 Differences between GFM and GFL. (Z_g and Z_c are the grid and inverter impedances, respectively; Y_g and Y_c are the admittances of the grid and of the AC filter with the inner current loop.)
TABLE 4 Comparison among different control methodologies for GFM and their role in power system stabilities.
TABLE 6 Requirement from the IEEE P2800 standard for PFC.
TABLE 7 Criteria for IBRs to enter into service.
TABLE 8 Demonstrations of GFM through various pilot projects.
TABLE 10 IBR potential for meeting the system needs.
SYNTHESIS OF DIESEL-LIKE HYDROCARBON FROM JATROPHA OIL THROUGH CATALYTIC PYROLYSIS

Due to economic, social and ecological reasons, several studies have been conducted in order to obtain alternative fuel sources. In this respect, fermentation, transesterification and pyrolysis of biomass have been proposed as alternative solutions. Among these different approaches, pyrolysis seems to be a simple and efficient method for fuel production. Pyrolysis assisted by solid catalysts has also been reported, and it was recognized that the product selectivity is strongly affected by the presence and the nature of heterogeneous catalysts. The catalytic pyrolysis of straight Jatropha curcas oil (SJO) over nanocrystalline NiO/Al2O3 at 475 °C was studied. The NiO/Al2O3 catalyst was used in the pyrolysis for the purpose of selective cracking of triglycerides. Nanocrystalline NiO/Al2O3 was prepared by a simple heating method with a polymer solution as growth inhibitor. The liquid product (bio-oil) was analyzed by GC-FID and FTIR, showing the formation of carboxylic acids, paraffins, olefins, and ketones. The measured physical properties of the bio-oil are comparable to those specified for diesel oil.

Introduction

Nowadays, there are many ways to generate green energy. Based on the renewability of the resources, energy can be classified into two major categories, namely renewable energy (such as solar power, wind, water, geothermal, biomass, etc.) and non-renewable energy (most resources classified in this category are fossil fuels, such as petroleum, coal, and natural gas) (Olasula et al., 2009). Fossil fuels are highly commercialized on a global scale, especially petroleum, for which demand grows steadily over time. In contrast, the production rate of fossil fuels such as petroleum has been declining in recent years. One of the efforts to overcome this issue is the cracking of vegetable oil using a heterogeneous catalyst. In this respect, fermentation, transesterification, and pyrolysis of biomass and of industrial and domestic wastes have been proposed as alternative solutions for the increasing energy demand and environmental awareness. Among these different approaches, pyrolysis seems to be a simple and efficient method for fuel production. The pyrolysis of different triglycerides was used for fuel supply in several countries during the First and Second World Wars. These hydrocarbons were used as raw materials for gasoline and diesel-like fuel production in a cracking system similar to the petroleum process now used. Since then, several studies on vegetable oil pyrolysis as an alternative method to obtain chemicals and fuels have been reported in the literature. Pyrolysis assisted by solid catalysts has also been reported, and it was recognized that the product selectivity is strongly affected by the presence and the nature of heterogeneous catalysts (Maher and Bressler, 2007). In recent years, the utilization of an alumina catalyst was able to produce the same range of hydrocarbons as can be found in gasoline, diesel and kerosene (Wijanarko et al., 2006). Moreover, adding B2O3 to zeolite for the decarboxylation of POME (palm oil methyl ester) produced a hydrocarbon product whose carbon-number range was equivalent to gasoline (Setiadi and Mailisia, 2006). However, a catalyst material in a conventional form is less effective. One of the reasons is the low catalyst activity, which is unable to crack the reactants into conventional fuel fractions.
Moreover, the activities of catalysts are limited by the formation of coke, which covers the pores of the catalyst. Nowadays, nanomaterials have become a popular option to strengthen the characteristics of catalysts. For instance, Li et al. (2008) showed that the utilization of NiO as a nanocatalyst for the pyrolysis of biomass improved its activity and selectivity compared to micro-sized NiO. As a support, Al2O3 can increase the performance of the catalyst due to its catalytic characteristics in a cracking process (Maher and Bressler, 2007).

Methodology

This study was conducted in two steps, namely the preparation and characterization of nanocrystalline NiO/Al2O3, and the synthesis of diesel-like hydrocarbons over the NiO/Al2O3 catalyst in a catalytic pyrolysis reactor.

Preparation of Nanocrystalline NiO/Al2O3

Nanocrystalline nickel oxide on an alumina support was prepared by the simple heating method adopted from Abdullah and Khairurrijal (2009). These catalysts were designated as 5 wt% NiO/alumina. For each catalyst, the required amounts of the precursor salts, i.e., Al(NO3)3·9H2O and Ni(NO3)2·6H2O, were dissolved in deionized water and added to the support dropwise with constant stirring, followed by drying in an oven at 120 °C overnight. A polymer solution, as a continuous medium, was used to avoid agglomeration of the catalyst particles; the polymer should therefore remain until the end of the process. Finally, the solution was heated in a furnace to vaporize the PEG and to produce the final nanocrystalline NiO/Al2O3. A flow chart of the preparation of nanocrystalline NiO/Al2O3 is shown in Figure 1.

Catalytic Pyrolysis Experiment

The catalytic pyrolysis experiments were conducted using a self-designed 50 mL batch reactor system. Figure 2 shows a schematic diagram of the apparatus. A known amount of NiO/Al2O3 catalyst was charged into the reactor. Cracking reactions were carried out at 475 °C (atmospheric pressure), with the internal temperature measured by a thermocouple. The reactor was purged with nitrogen during the experiment (90 min) to remove any oxygen that might have been dissolved and present in the straight Jatropha curcas oil (SJO). The gaseous products leaving the reactor were condensed to recover the liquid product (bio-oil).

Characterizations

The surface area, pore volume and average pore radius of the catalysts were determined by the BET (Brunauer-Emmett-Teller) method. Identification of the crystalline phase and its distribution in the catalyst was performed using X-ray diffraction (XRD). Analysis of the pyrolysis products (bio-oil) was performed using FT-IR (Fourier transform infrared) spectroscopy and GC-FID (gas chromatography with flame ionization detection). Physical properties of the bio-oil (the product of catalytic pyrolysis), e.g. density and viscosity, were measured as well.

Results and Discussions

Catalysts Characterization

The BET specific surface area and pore size of the samples are listed in Table 1. It can be seen that the BET surface area of the catalysts decreases with increasing temperature. Increasing temperature led to more rapid PEG vaporization and to sintering of the catalyst during the reaction. These results correspond with those reported by Abdullah et al. (2008) and Garcia et al. (2001). The result for Sample 3 differed from the other ones due to the differences in the heating profile, as shown in Figure 3. For Sample 3 the heating time was not kept constant, and this caused the sintering on the catalyst to be weaker than for the other samples. The XRD characterization results are shown in Figure 4.
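Crystallite sizes of the kind reported below are commonly estimated from the broadening of XRD peaks such as those in Figure 4 using the Scherrer equation. The Python sketch assumes Cu Kα radiation and a shape factor K of about 0.9, and the peak position and width are placeholders; none of these values are stated in the paper.

import math

def scherrer_size(fwhm_deg: float, two_theta_deg: float,
                  wavelength_nm: float = 0.15406, K: float = 0.9) -> float:
    """Estimate the crystallite size (nm) from XRD peak broadening.

    fwhm_deg      : full width at half maximum of the peak [degrees 2-theta]
    two_theta_deg : peak position [degrees 2-theta]
    wavelength_nm : X-ray wavelength (default: Cu K-alpha, assumed)
    K             : Scherrer shape factor (assumed 0.9)
    """
    beta  = math.radians(fwhm_deg)              # FWHM in radians
    theta = math.radians(two_theta_deg / 2.0)   # Bragg angle
    return K * wavelength_nm / (beta * math.cos(theta))

# Placeholder peak (not taken from the paper): a NiO reflection near 43 deg 2-theta
print(f"crystallite size ~ {scherrer_size(fwhm_deg=0.06, two_theta_deg=43.3):.0f} nm")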
Based on the XRD diffractograms, it can be seen that NiO and Al2O3 crystals were formed. Table 2 shows the crystallite sizes of the catalysts, where increasing temperature and time resulted in larger crystal sizes. A higher temperature accelerated vaporization of the PEG, which was used as the growth inhibitor, while a longer heating time gave the crystals more opportunity to grow; sintering therefore proceeds faster, forming larger crystals. However, Sample 3 behaved differently, which was caused by its heating profile differing from those of Samples 1 and 2, as shown in Figure 3. Biooil Characterization The visual appearances of the pyrolytic products (biooil) are yellow, tawny, and blackish brown, as seen in Figure 5. Pyrolysis of SJO was performed with the three catalysts having the smaller crystal sizes, namely Samples 1, 2, and 3. Tables 3 and 4 show the results of the density and viscosity measurements. The density and viscosity measurements confirmed that cracking occurred, and the values fell within the range of diesel fuel; this was evidenced by the decrease in these two physical properties compared with those of SJO. The use of catalysts led to a greater reduction in the density and viscosity of the biooil than pyrolysis without a catalyst. The crystallite size of the catalysts affected the physical properties of the biooil products: decreasing crystal size tended to reduce the density and viscosity of the biooil. This relates to the enhancement of material properties as the size becomes smaller. Surface area is also one of the factors behind the decreasing density; a higher surface area makes the jatropha oil molecules more reactive and gives a stronger cracking effect. In order to identify the biooil, a GC-FID analysis was carried out. Among the classes of compounds formed (Table 5), diesel-like hydrocarbons were identified. The GC-FID results show that the cracking process was selective towards C12-C18, the diesel fraction, with the highest percentage being 54.16%. Figure 6 shows the FTIR spectra obtained for SJO, biooil and diesel oil. Each spectrum was normalized by the intensity of the absorption band centered at 2930 cm−1 (the strongest band). Characteristic vibrational modes are observed at 3080 cm−1 (CH stretching, olefinic), 2850-2990 cm−1 (CH stretching, aliphatic), 1710 cm−1 (C=O stretching), and 1642 cm−1 (C=C stretching, olefinic). The products of SJO cracking present some absorption features that are not observed in the other two oils, e.g. the absorption at 1700-1720 cm−1, which is characteristic of ketones and is also observed for SJO and the biooil. The absorptions at 1285 and 1240 cm−1 indicate the presence of carboxylic acids. It is worth mentioning that no vibrational feature characteristic of aromatic compounds was observed in the FTIR spectra, which is in good agreement with the GC-FID analysis. Conclusions In this research, NiO/Al2O3 catalysts were prepared by the simple heating method with a polymer solution as a growth inhibitor. Characterization of the catalysts showed that the smallest catalyst crystal size is 153 nm, obtained at 700 °C when the heating temperature was not kept constant. Catalytic pyrolysis of SJO over NiO/Al2O3 was carried out in a batch reactor. The experimental conditions were an average reaction temperature of 475 °C and a residence time of 90 minutes. These conditions led to a dominant diesel-fraction (C12-C18) yield of 54.16%. The measured physical properties of the biooil are comparable to those specified for diesel oil, i.e.
the densities of the biooil were within the range of the diesel fuel specifications, but the viscosities of the biooil were still below the criteria of the diesel fuel specifications.
2019-04-06T13:12:58.830Z
2018-10-02T00:00:00.000
{ "year": 2018, "sha1": "58eed419e1fc389072784f44a57adc6791ac6ab2", "oa_license": "CCBYNC", "oa_url": "http://aptekim.id/jtki/index.php/JTKI/article/download/26/25", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "b9854ae599ecc099a8f226f1be8252bb6ebb611a", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Materials Science" ] }
233036424
pes2o/s2orc
v3-fos-license
A Hybrid Artificial Intelligence Model for Aeneolamia varia (Hemiptera: Cercopidae) Populations in Sugarcane Crops Abstract Sugarcane spittlebugs are considered important pests in sugarcane crops ranging from the southeastern United States to northern Argentina. To evaluate the effects of climate variables on adult populations of Aeneolamia varia (Fabricius) (Hemiptera: Cercopidae), a 3-yr monitoring study was carried out in sugarcane fields at week-long intervals during the rainy season (May to November 2005–2007). The resulting data were analyzed using the univariate Forest-Genetic method. The best predictive model explained 75.8% of the variability in the physiological damage threshold. It predicted that the main climatic factors influencing the adult population would be, in order of importance, evaporation; evapotranspiration by 0.5; evapotranspiration; cloudiness at 2:00 p.m.; average sunshine; and relative humidity at 8:00 a.m. The optimization of the predictive model established that the lower and upper limits of the climatic variables produced a threshold in the population development rate of 184 to 267 adult insects under the agroecological conditions of the study area. These results provide a new perspective on decision-making in the preventive management of A. varia adults in sugarcane crops. Sugarcane (Saccharum officinarum L.) is an important crop in tropical areas, as it has a huge potential to produce sugar, ethanol, biodegradable products, energy, and food for animal production (Heinrichs et al. 2017). However, crop yields can be threatened by biotic factors, such as pests and diseases. In Latin American countries, yield losses have been caused by three spittlebug genera, Mahanarva Distant, Prosapia Fennah, and Aeneolamia Fennah (Hemiptera: Cercopidae), which feed on most of the commercial varieties of sugarcane in Brazil, Venezuela, Mexico, and Colombia (Cuarán et al. 2012). Sugarcane spittlebugs, including Aeneolamia varia (Fabricius), are considered important pests in American sugarcane farming because they are widely distributed from the southeastern United States to northern Argentina (Peck 2001, Cuarán et al. 2012). Despite the economic impact of spittlebugs on sugarcane crops, there is still no efficient method of controlling them in the Neotropical region. In addition to chemical control, some biological control strategies, including parasitoid wasps (Cotesia flavipes Cameron (Hymenoptera: Braconidae)), entomopathogenic nematodes (Steinernema spp. and Heterorhabditis bacteriophora (Poinar) (Nematoda: Heterorhabditidae)), and fungi (Metarhizium anisopliae (Metchnikoff) Sorokin (Hypocreales: Clavicipitaceae) and Beauveria bassiana (Bals.-Criv.) Vuill. (Hypocreales: Clavicipitaceae)), have produced varying levels of success (Rosero-Guerrero et al. 2012, Kassab et al. 2015). Aeneolamia varia has been reported as the most important insect pest limiting sugar production in the central-west region of Venezuela. However, advances in the development of an efficient management program have been limited due to insufficient knowledge of the effects of climate on population ecology in a sugarcane agroecosystem (Figueredo et al. 2013). Additionally, this cercopid species has developed a crucial mechanism to overcome climate variations through the production of diapause eggs, which are able to synchronize their life cycle with the rainy season (Fontes et al. 1995, Castro et al. 2005).
Thus, considering the ability of cercopid species to adapt to their environment, several studies predicting population fluctuations have been carried out. García-García et al. (2006) developed a deductive risk model for Aeneolamia postica (Walker) (Hemiptera: Cercopidae), which showed that high temperature and precipitation were the most important factors triggering high densities of A. postica in fields previously infested with nymphs and with poor weed control. These findings were corroborated by a study using a generalized linear model showing that the number of spittlebug adults increased by 1.92, 1.48, 3-4.5, and 0.051 due to the presence of nymphs, previous infestations, weed coverage, and temperature, respectively (Álvarez et al. 2017). Climate change has prompted the agricultural sector to optimize pest management. So-called 'smart agriculture', which is based on the use of experimental data for the implementation of intelligent algorithms through data mining methods, enables the development of statistical tools to address some agricultural problems in a dynamic environment (Issad et al. 2019). More recently, alternative modeling strategies based on artificial intelligence (AI) systems, such as hybrid models, have been proposed. These models are based on the integration of various AI algorithms, and work to explore the full potential of each of them (Chen et al. 2008). Thus, the univariate Forest-Genetic method results from the combination of the Random Forest algorithm and genetic algorithms (GAs) as an alternative and complementary methodology to optimize the predictive modeling of phenomena from experimental designs (Villa-Murillo et al. 2016). Hybrid models have been developed and applied effectively in various areas of human development, such as education, industry, health, information technology, transportation, economics, state security and microbiology (Deeb and Jimenez 2003, Azizi 2017, Zheng et al. 2017, Panch et al. 2018), although thus far they still have limited applications in studies focusing on the population ecology of insect pests. In this study, optimal climate parameters producing greater effects on the population dynamics of A. varia adults were identified, quantified and modeled using the univariate Forest-Genetic method. This information provides new perspectives to help design an Integrated Pest Management Program for A. varia in sugarcane fields. Study Area Aeneolamia varia sampling was conducted in a 0.48 ha sugarcane plot at the Experimental Station Yaritagua, Peña municipality, Yaracuy state, Venezuela (10°02′N, 69°07′W, at 308 m asl). This agroecological zone is characterized as a tropical dry forest climate, according to Holdridge (1967), with a unimodal rainfall pattern from May to October. Sampling Sugarcane fields planted with the cultivar CR87-339 were surveyed. Samples of A. varia were collected from 6-mo-old plants over three crop cycles during the rainy period, lasting from 11 May to 2 November 2005 (plant-cane); 11 May to 2 November 2006 (second-year ratoon); and 10 May to 1 November 2007 (3-yr ratoon). The spittlebug population was monitored through 30 permanent equidistant stations established in the field. Adults were captured with yellow traps installed 1.20 m above the ground between two sowing threads (Salazar et al. 1983). Each yellow trap consisted of a yellow plate (23 cm in diameter) covered with a transparent plastic bag (thickness 0.10 mm) and impregnated with diluted glue (1:1; glue:gasoline) on both sides.
During each cycle, the adult population was monitored weekly for a total of 26 evaluations. Relative density was calculated every week for the total number of adults (TotalAd) captured on both sides of each yellow trap between 08:00 a.m. and 12:00 p.m. at each of the permanent monitoring stations. The data were used to estimate population dynamics. Insect control measures were not applied during the sampling periods so as not to affect population dynamics. Statistical Analysis Population data and climatic variables from the three study periods were subjected to the univariate Forest-Genetic method (Villa-Murillo et al. 2016), in which optimum predictive modeling is performed in three phases: data normalization, modeling (identification of an objective function) and optimization of the parameter levels for the established model. During the normalization phase, a preliminary preparation of the data set was made to reduce variability. Modeling yielded the predictive model (objective function) for the set of normalized data and their respective variables by means of the Random Forest algorithm. Finally, during the optimization phase, optimum parameter levels were determined for the response variable according to the objective function. All analyses were programmed and performed using the R language (R version 3.5.2; R Core Team 2018) and its auxiliary libraries randomForest, rpart, rpart.plot, ggplot2, and dplyr. Results and Discussion The results of the univariate Forest-Genetic method were as follows. Normalization Phase Following Villa-Murillo et al. (2016), data normalization was performed using the function below. Modeling Phase The climate variables with the greatest effect on the developmental rate of adult spittlebug populations were identified by establishing an objective function through the Random Forest algorithm (PRED-RF). After the model was adjusted, with a root mean square error (RMSE) of 0.001985308, it accounted for 75.8% of the variability among 1,000 trees. Those values are considered quite acceptable not only for predictive purposes, but also for estimation and optimization during the subsequent phase. The importance of the climatic variables in modeling the adult spittlebug population was established using the mean square error (MSE) (Fig. 1). Thus, evaporation (Evapor), evapotranspiration by 0.5 (ET0.5), and evapotranspiration (ET) were shown to have the greatest effect on the occurrence and population increase of A. varia, followed in order of importance by Cloud2pm, MeanInsol, RelHum8am, MeanRain, DegDayWD, WkRain and MeanTemp, respectively. Based on these results, the predictive variables with the greatest influence on the initial model shown in equation 1 were selected as the main climatic factors that positively affect the population development rate of A. varia. Consequently, the resulting model was established as follows. Optimization Phase The model optimization process was initiated after the elements of the selected Genetic Algorithm (GA) were defined as follows. GA Elements Initial population: a random sample of 100 chromosomes was generated according to the structure defined in the established model (equation 2); their responses were estimated and expressed according to the initial scale of the study. This constituted the initial population of the GA.
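As an illustration of the modeling phase, the sketch below mirrors the Random Forest fit and variable-importance ranking in Python with scikit-learn; the authors actually used R's randomForest package, and the CSV file name, the min-max normalization (the paper's normalization function is not reproduced here), and the Python-safe column name ET05 for ET0.5 are assumptions introduced only for this example.

```python
# Illustrative sketch only: the study used R's randomForest; this mirrors the modeling
# phase with scikit-learn. File name and column layout are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

data = pd.read_csv("spittlebug_weekly.csv")  # weekly TotalAd counts plus climate variables (assumed file)
predictors = ["Evapor", "ET05", "ET", "Cloud2pm", "MeanInsol",
              "RelHum8am", "MeanRain", "DegDayWD", "WkRain", "MeanTemp"]
X, y = data[predictors], data["TotalAd"]

# Simple min-max normalization of the response, standing in for the (unspecified) normalization phase
y_norm = (y - y.min()) / (y.max() - y.min())

rf = RandomForestRegressor(n_estimators=1000, oob_score=True, random_state=0)  # 1,000 trees as in the text
rf.fit(X, y_norm)

rmse = mean_squared_error(y_norm, rf.predict(X)) ** 0.5
print("RMSE:", rmse, "OOB R^2:", rf.oob_score_)

# Variable importance, later used to weight the genetic operators
for name, imp in sorted(zip(predictors, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```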
The modeling phase was based on the Random Forest algorithm under the scheme of classification and regression trees (CART); thus, each new observation (chromosome) was adjusted to the limits of the corresponding terminal node, which set the limits for the GA optimization process. Following the scheme of the univariate Forest-Genetic method, the fitness function was established by equation 3, defined as an interpolation function between nodes, where E_fi, E_mi, and E_si correspond to the values of the evaporation gene (Evapor) of the i-th father, i-th mother and i-th son, respectively, since Evapor was the variable with the greatest weight, as shown in Table 1 and calculated by equation 4, which expresses the importance of the k-th gene in our predictive model. Mutational Rate According to the univariate Forest-Genetic method, the mutational rate was established at 2% (Villa-Murillo et al. 2016). Crossing Criterion The formation of the following generations was based on a simple one-point weighted crossing, in which the weighting corresponded to the allocation of weights among the climatic variables by means of the importance values calculated in Table 1. According to the Forest-Genetic method, this is done to increase the probability of crossing observations (chromosomes) in relation to the most important climate variables. Optimization criterion: in this phase, the GA aims to identify those climate parameters yielding the optimal response variable values according to the predetermined quality characteristic. Thus, considering the physiological damage threshold (PDT) of 104 adults accumulated by Taa and estimated by Figueredo et al. (2003), the optimization criterion in the first quartile of our predictive model (184-267) for the TotalAd variable was established. The lower limit (184 adults) indicated the minimum number of spittlebugs required to induce visual symptoms of foliar damage to the sugarcane, while the upper limit (267 adults) indicated the population level required to cause economic damage to the crop. This is known as the Economic Damage Threshold (EDT). Linares (2002) stated that the lower limit indicates the control threshold (CT), that is, the optimum time to perform a management measure to prevent the insect population from reaching the EDT. The algorithm consisted of applying the corresponding crosses and mutations, starting with generation 0 (G0) as the initial population; then the responses were estimated using the fitness function (equation 3), and a new generation was created by combining the parents' and sons' chromosomes with their corresponding response values. Finally, the optimization criterion was applied to the selection of the 'fittest' individuals, here defined as those belonging to the predetermined threshold, resulting in the first generation (G1). This process continued until the algorithm reached convergence, that is, when individuals tended to be homogeneous in relation to the climate variable values that fell within the corresponding threshold level. Figure 2 shows this convergence using box plots, where the reduction in box size for each generation and the constant median value from generation 8 (G8) onward reflect the convergence of the algorithm, leaving out the most heterogeneous individuals (shown as outliers). Given the nature of the climate variables, the algorithm established a set of solutions rather than a single vector in the response.
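A minimal sketch of the optimization loop described above is given next, again as an assumption-laden illustration rather than the authors' code: the bounds, population size, number of generations, importance weights, and the surrogate predict() stub are placeholders (the authors used the fitted Random Forest as the fitness surrogate and importance-weighted one-point crossover with a 2% mutation rate, keeping individuals whose predicted TotalAd falls in the 184 to 267 threshold band).

```python
# Sketch of the GA optimization phase under stated assumptions; predict() is a stub
# standing in for the fitted Random Forest surrogate.
import numpy as np

rng = np.random.default_rng(0)
n_vars, pop_size, n_gen, mut_rate = 6, 100, 30, 0.02
low, high = np.zeros(n_vars), np.ones(n_vars)                 # normalized climate-variable bounds (assumed)
importance = np.array([0.30, 0.25, 0.20, 0.10, 0.08, 0.07])   # e.g. taken from rf.feature_importances_

def predict(pop):
    """Stub for the Random Forest surrogate: predicted TotalAd per chromosome."""
    return 150 + 200 * pop @ importance                       # placeholder response surface

pop = rng.uniform(low, high, size=(pop_size, n_vars))
for gen in range(n_gen):
    parents = pop[rng.integers(0, len(pop), size=(pop_size, 2))]
    cut = rng.choice(n_vars, p=importance / importance.sum(), size=pop_size)  # importance-weighted cut point
    mask = np.arange(n_vars) < cut[:, None]
    children = np.where(mask, parents[:, 0], parents[:, 1])                   # one-point weighted crossover
    mutate = rng.random(children.shape) < mut_rate                            # 2% mutation rate
    children[mutate] = rng.uniform(low[0], high[0], size=mutate.sum())
    pool = np.vstack([pop, children])
    y_hat = predict(pool)
    keep = (y_hat >= 184) & (y_hat <= 267)                    # PDT-to-EDT threshold band
    pop = pool[keep] if keep.sum() >= 2 else pool              # fall back if too few survive
print("Surviving solutions:", len(pop))
```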
In each climate variable, an action interval was established for a response interval in the biological variable TotalAd. Thus, for the predictive model of the A. varia population in sugarcane obtained by Random Forest and the optimized model obtained with the GA for the PDT, the environmental variables influencing TotalAd were defined as shown in Table 2. Previous studies have demonstrated the effects of climatic parameters, such as rainfall, temperature, and relative humidity, on fluctuations in the A. varia population (Castro et al. 2002, García-García et al. 2006). Additionally, studies carried out in Brazil and Mexico reported that evapotranspiration and potential evaporation influenced the adult populations of Deois flavopicta (Stal) (Hemiptera: Cercopidae) and Aeneolamia spp., respectively (Melo et al. 1984, Álvarez et al. 2017, García-González et al. 2017), which is consistent with the findings in the present study. On the other hand, a significant correlation between rainfall and the number of A. varia and D. flavopicta nymphs has been demonstrated (Melo et al. 1984, Figueredo et al. 2012); however, no correlation has been suggested between precipitation and the number of A. varia adults in sugarcane crops. Thus, apart from the abiotic factors mentioned above, the presence of nymphs could account for the increase in adults in the field (Castro et al. 2005). In sugarcane crops approximately 6 mo old, the abundant foliar area increases humidity, which, along with the development of secondary roots at the soil surface level, affects the development and abundance of spittlebug nymphs. The climatic variables showing the greatest effect on the population development of A. varia adults in sugarcane crops were evaporation by 0.5, evapotranspiration, cloudiness at 2:00 p.m., mean insolation, relative humidity at 8:00 a.m. and mean rainfall (Castro et al. 2002). In Mexico, García-García et al. (2006) pointed out that temperatures in the range of 26 to 32°C are a determining factor in the development of Aeneolamia spp. nymphs and adults, which coincides with the upper limit of PrecS (129.92 mm) and the MeanTemp interval (25.67-28.74°C) established for the agroecological zone in the present study. The population development thresholds for A. varia adults generated by the optimized model provide a key element for predicting the effect of climate variables on the population dynamics of A. varia in sugarcane fields, thus offering decision-making tools to apply timely management measures to avoid a potential population increase. According to Graf et al. (1992), predictive models serve as valuable tools to help us understand pests as an element of the agroecosystem and assess the status of a given pest from a holistic point of view. Moreover, Vasconez et al. (2020) stated that the use of technology enables data acquisition and analysis in agricultural environments, which can help optimize current practices relating to pathogen and disease detection and management. According to Issad et al. (2019), combining the strengths of different methods confers greater robustness on the results obtained through data mining applied to smart agriculture. Thus, the Forest-Genetic method has proven to be an effective tool for sets of complex and high-dimensional data, which require flexible and powerful tools for effective statistical analysis (Chen and Ishwaran 2012).
Conclusions The univariate Forest-Genetic method has been shown to be an alternative tool to improve parameter design through the phases of normalization, modeling, and optimization. It efficiently combines the advantages offered by the Random Forest algorithm in pattern recognition and integrates its measures of importance into the GA's genetic operators (Villa-Murillo et al. 2016). This reduces the variation in products and processes when selecting control factor levels, thus providing the best performance and the least sensitivity to noise factors. The univariate Forest-Genetic method was used for the first time to model pest damage threshold estimation, and it proved to be an adequate tool for predicting the interaction between the insect (A. varia) and the environment (climate), which allows us to introduce a new perspective on the agroecological management of this insect pest at different geographical scales. Since agronomic management is staggered in sugarcane cultivation, outbreaks of the various life stages of A. varia are commonly observed in the field. Thus, this model allows us to predict the climatic conditions that will lead to higher population levels. Serving as an early warning, it enables us to identify those conditions most favorable to insect populations, and to adopt management tactics to reduce such populations within sugarcane crops. Author Contributions L.F. organized and performed the field work and contributed to manuscript writing. A.V.-M. performed the analytical mathematical work and contributed to manuscript writing. C.V. and Y.C. contributed to the analytical mathematical work and to manuscript writing.
2021-04-07T06:16:53.793Z
2021-03-01T00:00:00.000
{ "year": 2021, "sha1": "20d7f10d0bf3290c927d86f96a0dc4b5ff05c0d3", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/jinsectscience/article-pdf/21/2/11/37000437/ieab017.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f83b71ca07d96808531f56909d43243c535fdecd", "s2fieldsofstudy": [ "Environmental Science", "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
35367708
pes2o/s2orc
v3-fos-license
Long-term Multi-granularity Deep Framework for Driver Drowsiness Detection For real-world driver drowsiness detection from videos, the variation of head pose is so large that existing methods based on the global face are not capable of extracting effective features for states such as looking aside and lowering the head. Temporal dependencies with variable length are also rarely considered by previous approaches, e.g., for yawning and speaking. In this paper, we propose a Long-term Multi-granularity Deep Framework to detect driver drowsiness in driving videos containing frontal faces. The framework includes two key components: (1) the Multi-granularity Convolutional Neural Network (MCNN), a novel network that applies a group of parallel CNN extractors to well-aligned facial patches of different granularities and extracts facial representations effectively under large variation of head pose; furthermore, it can flexibly fuse both detailed appearance clues of the main parts and local-to-global spatial constraints; (2) a deep Long Short Term Memory network applied to the facial representations to explore long-term relationships with variable length over sequential frames, which makes it possible to distinguish states with temporal dependencies, such as blinking and closing the eyes. Our approach achieves 90.05% accuracy and about 37 fps speed on the evaluation set of the public NTHU-DDD dataset, which is the state-of-the-art result for driver drowsiness detection. Moreover, we build a new dataset named FI-DDD, which has higher precision of drowsy locations in the temporal dimension. Dapeng Chen, Xi'an Jiaotong University, Email: dapengchenxjtu@foxmail.com I. INTRODUCTION It is reported that about 1.24 million people die on roads every year, and driver drowsiness accounts for 6% of these deaths [1]. Driver drowsiness indicates that a driver lacks sleep, which can be detected through the variation of physiological signals [2], the vehicle trajectory [3], [4] and facial expressions [5].
However, the first two approaches hardly satisfy the requirements of convenience and timeliness. Drowsiness can also be reflected by facial expressions, such as nodding, yawning and closing the eyes. We therefore aim to develop a drowsiness detection method based on video. A video-based method can give warning prompts and receive the driver's feedback in time, which is of great value in practice. Video-based drowsiness detection is still full of challenges, mainly stemming from changes in illumination conditions, head pose variation, and temporal dependencies. In particular, the large variation of head pose causes serious deformation of the facial shape, which makes it difficult to extract effective spatial representations. Conventionally, an approach based on aligned facial points [5] is a better way to represent drowsy features; however, ignoring temporal relationships means it cannot distinguish blinking from closing the eyes. A spatio-temporal descriptor [6] was proposed to collect spatial and temporal features, but it is not good at distinguishing states with long-term dependencies, such as yawning and speaking. Besides, these hand-crafted descriptors are not powerful enough to describe large variation of head pose and to classify confusing states; e.g., looking aside and lowering the head lead to large pose variation, while yawning and laughing are similar but belong to different states. Recently, deep learning methods have been widely used to learn facial spatial representations automatically from the global face [7], [8], [9]. Nevertheless, the global face without proper alignment provides weak representations under large pose variation. Moreover, it is not flexible enough to fuse the configurations of local regions and to concentrate representations on the most important parts, such as the eyes, nose and mouth, on which the majority of drowsy information focuses. It is another challenge to distinguish easy-to-confuse states, such as blinking and closing the eyes. A 3D-CNN with fixed time windows [7] tried to describe spatial and temporal features, but it does not have enough capability to model long-term relationships with variable time length. We propose a Long-term Multi-granularity Deep Framework (LMDF) to detect driver drowsiness from well-aligned facial patches. Our method applies alignment technology to obtain well-aligned facial patches over frames, and these patches are mainly located in the informative regions that supply critical drowsy information. A group of parallel convolutional paths is applied to the patches, and the outputs of these layers are fused by a fully connected layer to generate spatial representations; this network is named the Multi-granularity Convolutional Neural Network (MCNN). MCNN is able to fuse the appearance of those well-aligned patches and capture local-to-global constraints. To explore temporal dynamical characteristics, a deep Long Short Term Memory (LSTM) network is applied to the spatial representations over sequential frames, which can distinguish states with temporal relationships, such as yawning and laughing, or blinking and closing the eyes. The proposed method can thus not only extract effective facial representations from single-frame images, but also mine temporal clues from videos. The contributions of our approach are mainly in three aspects: (1) We propose a Long-term Multi-granularity Deep Framework to learn facial spatial features and their long-term temporal dependencies.
(2) We propose MCNN to learn facial representations from the most important parts, which makes the detector robust to large pose variation. (3) We build a Forward Instant Driver Drowsiness Detection (FI-DDD) dataset with higher precision of drowsy locations in the temporal dimension, which is a good test bed for evaluating practical systems that are required to detect drowsiness in time. II. RELATED WORK Driver drowsiness detection is becoming a hot topic in Advanced Driver Assistance Systems (ADAS). Many traditional methods have been applied to this problem. The change of pupil diameter was utilized by Shirakata et al. [10] to detect imperceptible drowsiness, which is effective, but it is not convenient for a driver to wear the equipment. Nakamura et al. [5] utilized face alignment to estimate the degree of drowsiness via k-NN, which cannot achieve online performance. Spatial-temporal features for driver drowsiness detection were proposed by Mahdi et al. [6]; based on the Hough transform, they cannot work well in a practical driving environment. Besides, the representations of those methods are hand-crafted, which may not be flexible enough to adapt to the complex situations faced in driving, while our method automatically learns facial representations, which is more effective for the practical task. Deep learning approaches such as CNNs have achieved success in representing information in images [11], [12], [13], and many researchers have also applied CNNs to driver drowsiness detection. Park et al. [14] combined the results of three existing networks by SVM to predict the categories of videos, which cannot detect driver drowsiness online. A 3D-CNN was applied to extract spatial and temporal information by Yu et al. [7], but the method can only capture features within a fixed temporal window. The above two methods utilize the global face image, which cannot flexibly configure the patches containing the majority of drowsy information. Moreover, they can hardly capture dependencies with variable temporal length. Due to the good performance of LSTMs on sequential data [15], [16], [17], more and more researchers propose combinations of CNNs and LSTMs to learn spatial and temporal representations of sequential frames. It is interesting that Liang M. et al. [18] came up with convolutional layers with intra-layer recurrent connections to integrate context information for object recognition. Jeff D. et al. [19] provided a method which extracts visual features from images by CNN and learns the long-term dependencies in sequential data by LSTMs. In particular, the approach of Jiang W. et al. [20] processes images with a CNN and models sequential labels by LSTMs concurrently, and then combines the two representations via projection layers. However, none of the above methods applies a multi-granularity scheme to concentrate representations on the important parts and flexibly fuse the configurations of different regions. Recently, multi-granularity methods have achieved several excellent results in other applications of computer vision. Qing Li et al. [21] proposed a temporal multi-granularity approach to action recognition. Their method achieved state-of-the-art performance on action benchmarks, but cannot capture detailed appearance clues and local-to-global spatial information. Dong C. et al. [22] applied multi-scale patches based on face alignment to face recognition. Dequan W. et al.
[23] utilized multi-granularity regions, detected by a three-granularity convolutional neural network, to generate a multi-granularity descriptor for fine-grained categorization, but this method cannot process sequential frames. Different from the above, our method can capture spatial multi-granularity information and long-term temporal dependencies. In particular, our MCNN can learn representations of the most significant regions from well-aligned multi-granularity patches, and the proposed method achieves state-of-the-art accuracy on the NTHU-DDD dataset for driver drowsiness detection. III. OUR APPROACH The proposed method utilizes the Multi-granularity Convolutional Neural Network (MCNN) to learn facial representations from single-frame images. The representations, extracted from well-aligned facial patches, contain both detailed appearance information of the main parts and local-to-global constraints. Furthermore, our approach takes advantage of a deep Long Short Term Memory (LSTM) network to explore the dynamical characteristics of the facial representations over sequential frames. The detailed structure of our Long-term Multi-granularity Deep Framework combining MCNN and LSTMs is shown in Fig. 1 (Fig. 1: the long-term multi-granularity deep framework for driver drowsiness detection; the first stage consists of well-aligned multi-granularity patches covering local regions, main parts and the global face; in the second stage, parallel convolutional layers process the patches and a fully connected layer fuses local and global clues into a representation, and these two stages constitute the Multi-granularity CNN (MCNN); a recurrent neural network with multiple LSTM blocks mining temporal clues, together with a fully connected layer, forms the third stage). A. Well-aligned Multi-granularity Patches It is well known that drowsy information is concentrated on several main facial parts such as the eyes, nose and mouth. Alignment provides an excellent way to extract well-aligned features over frames, which effectively represent facial drowsy states. Besides, the global patch provides rough information to estimate the state of the driver's head and full face, which assists the decision on the driver's drowsy state when the locations of the parts are not precise. Our method takes advantage of local regions and the global face at the same time. We utilize face alignment technology to locate the facial shape points. Given an image I^t containing a face in the t-th frame, we detect the landmark points of the facial shape S^t by regressing local binary features, as proposed by Ren et al. [24]. From those points, it is convenient to obtain the locations of the main parts and important local regions. According to the center points and specific sizes of all regions, we crop those patches from the original image and resize them to the same size; these are the well-aligned multi-granularity patches used as the input layers of the convolutional neural network. Those patches, including local regions, main parts, and the global face, are produced by three different mappings. As shown in Fig. 2, a mapping Φ^M_p selects the center points of the eyes, nose, and mouth from the facial shape S^t and crops patches of those parts from the input image I^t with given sizes s_p. The mapping also converts the patches to a unified size s_u. Thus the single-granularity patches of the main parts, I^t_p, are generated.
The operations of the mappings Φ^M_l and Φ^M_g are similar to the mapping Φ^M_p, while the differences lie in the locations and sizes of the regions. The mapping Φ^M_l selects the corners of the eyes and mouth and the sides of the nose as the regions of interest with size s_l and outputs the local patches I^t_l. A global facial region with size s_g is chosen by the mapping Φ^M_g, which finally produces a global facial patch I^t_g. Formally, processing the input image I^t with the three mappings, we obtain a set of well-aligned patches I^t_c = {I^t_{l,:}, I^t_{p,:}, I^t_{g,:}} consisting of the main parts, local regions and global face, where I^t_{i,:}, i ∈ {l, p, g}, represents all elements of the patch set I^t_i. Compared with the original image, the patch set I^t_c, including both detailed appearance clues of the parts and rough information of the full face, has more capacity to describe the facial states. Meanwhile, the relations between local and global regions are implied, which is the basis for mining useful features. Therefore, we take the set of patches I^t_c as the input layer of the CNN to learn effective representations. B. Learning Facial Representations Our approach learns representations with a convolutional neural network rather than hand-crafted features, owing to its good performance in learning spatial features. We apply several convolutional layers to process each patch in the set I^t_c independently. To fuse the information of all patches, a fully connected layer is arranged after all convolutional operations, which generates N-dimensional descriptors combining local and global clues. Every patch first needs to be processed by the convolutional operations. For a patch I^t_{c,k}, the k-th element of the patch set I^t_c with length L, three convolutional layers are utilized to capture the spatial features. The first one consists of a convolution and rectified linear unit (ReLU) activation followed by a max-pooling operation, which projects a normalized 3-channel image to a higher-dimensional representation. Only convolution and ReLU activation are used in the second layer to further enlarge the dimension of the representation. The structure of the third convolutional layer is similar to the first layer but with different parameters, to decrease the dimension. A representation x^t_k of the patch I^t_{c,k} is generated by a mapping Φ^C consisting of those convolutional layers with parameters θ^C_k, i.e., x^t_k = Φ^C(I^t_{c,k}; θ^C_k), where θ^C_k is the k-th element of the convolutional parameter set θ^C. A fully connected layer is utilized to combine the representations extracted by the mapping Φ^C from the set of patches. Before the combining operation, we concatenate those representations into a long vector x^t_c = [x^t_1; x^t_2; ...; x^t_L]. With a specific weight matrix W^C_f and bias vector b^C_f, the combined N-dimensional representation x^t is given by the fully connected layer as x^t = max(W^C_f x^t_c + b^C_f, 0), in which 0 is a zero vector. The descriptor x^t contains not only the detailed appearance information implied in every part, but also the constrained relations between local regions and the global face. The effectiveness of the descriptor can be improved by appropriate objective functions and proper training methods. Driver drowsiness detection is a binary classification problem, so the state of an input frame is simply drowsy or not. We label drowsiness with 1 as the positive sample and the normal state with 0 as the negative sample. A label c is expressed as a one-hot vector y_c; for example, the vector [0, 1] denotes the positive label.
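The parallel-branch structure described above can be illustrated with a minimal PyTorch sketch; this is not the authors' released code, and the channel counts are assumptions chosen only so that each 64×64 patch maps to a 16×16×4 tensor (a 1024-dimensional vector) and the fused representation has N = 256 dimensions, consistent with the numbers stated later in the experiments.

```python
# Illustrative MCNN sketch (assumed layer widths): 15 aligned 64x64x3 patches, one small
# conv branch per patch, fused by a fully connected layer into a 256-d representation.
import torch
import torch.nn as nn

class Branch(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),                   # widen features
            nn.Conv2d(32, 4, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16, 4 channels
        )
    def forward(self, x):                       # x: (B, 3, 64, 64)
        return self.net(x).flatten(1)           # -> (B, 1024)

class MCNN(nn.Module):
    def __init__(self, n_patches=15, dim=256):
        super().__init__()
        self.branches = nn.ModuleList([Branch() for _ in range(n_patches)])
        self.fuse = nn.Linear(n_patches * 1024, dim)   # combines local and global clues
        self.head = nn.Linear(dim, 2)                  # drowsy vs. normal logits
    def forward(self, patches):                 # patches: (B, n_patches, 3, 64, 64)
        feats = [b(patches[:, i]) for i, b in enumerate(self.branches)]
        x = torch.relu(self.fuse(torch.cat(feats, dim=1)))   # 256-d frame representation
        return x, self.head(x)

model = MCNN()
rep, logits = model(torch.randn(2, 15, 3, 64, 64))
print(rep.shape, logits.shape)                  # torch.Size([2, 256]) torch.Size([2, 2])
```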
To train the parameters of the convolutional neural network, we project the representation x^t into the probabilities of each category c ∈ {0, 1} by another fully connected layer with weights W^C_p and a bias vector b^C_p, and the probability vector p(c|x^t, W^C_p, b^C_p) is normalized via a softmax layer. The cross entropy, which indicates the correctness of the classification, is selected as the objective function, and we utilize the Adam optimizer to train the whole convolutional neural network. The visual representations can then be generated by the convolutional layers and the first fully connected layer. C. Exploring Dynamical Characteristics The representation x^t is extracted from a single frame, while whether a driver is drowsy or not is judged over a certain period. We apply LSTMs to model the temporal dynamical characteristics of the spatial representations for driver drowsiness detection. An LSTM block consists of an input gate, a forget gate, an output gate and a memory cell. Because of the three gates, the LSTM block can learn long-term dependencies in sequential data and its parameters are easier to train. The memory cell can store long-term information in its vector, which can be rewritten or otherwise updated for the next time step. Besides, the number of hidden units should be chosen according to the dimension of the input representation x^t. We employ multi-layer LSTMs to mine the temporal features for driver drowsiness. A mapping Φ^R containing three LSTM layers with parameters θ^R is utilized to explore the temporal clues of the representation x^t generated by the MCNN extractor, and it outputs the hidden states h^t_3 of the third layer as a representation containing temporal dependencies, i.e., h^t_3 = Φ^R(x^t; θ^R), where θ^R is the parameter set of these LSTM blocks at the last step. A fully connected layer with weights W^R and a bias vector b^R is used to project the output of the mapping Φ^R into a two-dimensional vector that is then decoded by a softmax operation into the probabilities p(c|h^t_3, W^R, b^R) of the two categories. To solve for the parameters, we take advantage of the Adam optimizer to train the LSTMs with a cross-entropy objective function. The label y^t of the current frame is predicted as the class with the maximum probability; similarly, the labels y of the sequential data can be obtained. IV. EXPERIMENTS A dataset named National TsingHua University Drowsy Driver Detection (NTHU-DDD) was provided for the ACCV 2016 workshop challenge on driver drowsiness detection, on which we compare our approach with others. To make the sequential labels closer to practical driving environments, we relabel the video set with an instant-detection principle. A new dataset is generated from the relabeled video set, and it is called Forward Instant Driver Drowsiness Detection (FI-DDD); on it we learn parameters and analyze the performance of several subnetworks. The performance of our entire approach is evaluated on the original NTHU-DDD dataset, for which we train a set of parameters to achieve long-term memory performance. Finally, an accuracy of 90.05% is obtained by our Long-term Multi-granularity Deep Framework (LMDF) on the evaluation set of the NTHU-DDD dataset, and the proposed method achieves about 37 fps on a Tesla M40 GPU. A. Dataset NTHU-DDD Dataset: The NTHU dataset includes five scenarios, listed as glasses, no glasses, glasses at night, no glasses at night and sunglasses.
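Before the datasets are described in more detail, the temporal stage of Section III.C can likewise be sketched in PyTorch; again this is an illustration under stated assumptions (hidden size 256 and three LSTM layers as in the text, with an arbitrary batch size and sequence length), not the authors' implementation.

```python
# Sketch of the temporal stage: a three-layer LSTM over per-frame 256-d MCNN features,
# followed by a fully connected layer and softmax giving per-frame drowsy/normal probabilities.
import torch
import torch.nn as nn

class TemporalHead(nn.Module):
    def __init__(self, dim=256, layers=3):
        super().__init__()
        self.lstm = nn.LSTM(dim, dim, num_layers=layers, batch_first=True)
        self.fc = nn.Linear(dim, 2)
    def forward(self, x):                 # x: (B, T, 256) MCNN features over T frames
        h, _ = self.lstm(x)               # hidden states of the last LSTM layer
        return torch.softmax(self.fc(h), dim=-1)   # (B, T, 2) per-frame probabilities

head = TemporalHead()
probs = head(torch.randn(4, 60, 256))     # e.g. a 60-frame memory window
print(probs.shape)                        # torch.Size([4, 60, 2])
```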
The training set involves 18 volunteers, consisting of 10 men and 8 women, who act as drivers with four different states in every scenario, while the evaluation set has four volunteers, including two men and two women. Non-sleepy videos contain only the normal state, while sleepy videos combine normal and drowsy states. Besides, the blinking-with-nodding and yawning videos only record drowsy eyes and mouth, respectively. The NTHU-DDD dataset offers four annotation files recording the states of drowsiness, eyes, head and mouth for every video. Table I gives the labels of drowsiness and the three main parts. It is worth emphasizing that the labels in the NTHU-DDD dataset are long-term memory labels, which means that the states of a frame may depend on the frames in the previous several seconds. FI-DDD Dataset: A problem arises from the long-term memory labels in NTHU-DDD, which is that a driver would still receive warning prompts even if he had already returned from the drowsy state to the normal state for a few seconds. At the same time, those labels are unable to locate the drowsy states with high precision in the temporal dimension. To solve these problems, we relabel those videos with an instant principle, which means the latency is limited to within 0.5 second, namely 15 frames for 30 FPS videos. The typical states, such as closing the eyes, yawning and lowering the head, are still considered as evidence for judging whether a frame is drowsy. The videos are cut into several clips which contain only the drowsy or the normal states alternately, according to our labels. To describe the transitional states between normal and drowsy, we reserve ten normal frames at the head and the tail of every clip containing drowsiness. We name the relabeled dataset Forward Instant Driver Drowsiness Detection (FI-DDD), which includes 14 drivers in the training set and 4 in the test set. The training set of FI-DDD in daytime has 157 clips and the test set has 92 clips, while in night scenarios the training set has 126 clips and the test set has 75 clips, with about 530 frames on average. Static image set: To train the parameters of the CNN and analyze the effects of several factors, we build a static image set by sampling a large number of frames from the FI-DDD dataset. The samples in the image set are labeled with drowsiness or normality, and the labels can almost always indicate the true states of the corresponding images, even though a small number of images are matched with wrong labels due to the lack of temporal dependence. The static image set has 7498 images in daytime, of which the training set includes 5239 images and the test set has 2259 images. It has 2653 images in the night scenario, of which the training set includes 1750 images and the test set has 903 images. B. Implementation Details Face Alignment: We apply face alignment technology to locate the facial shape points for all videos. Face detection and tracking are combined to increase the detection rate and provide more accurate face positions in the videos. The face alignment algorithm is based on those face positions. The face detector is from OpenCV and the face tracking approach is the one proposed by Danelljan et al. [25]. We implement the method of Ren et al. [24], retrain the model, and preprocess all videos to obtain the 51 landmark points for every frame. Frames with no face are recognized as empty and filled with zero coordinates for the landmark points. Multi-granularity: We obtain the multi-granularity patches considering two factors: different positions and different sizes.
We choose 15 positions from the facial shape points, which are divided into three granularities: 1 global face with size s_g = (160 × 160), 4 main parts with size s_p = (64 × 64) and 10 local regions with size s_l = (32 × 32). The specific locations of all patches are shown in Fig. 2. Before being sent to the CNN, those patches are resized to size s_u = (64 × 64), normalized to [-0.5, 0.5], and converted to 3 channels to ensure that our framework can process RGB data. Dataset Usage: A static image set, required for training the CNN parameters, is sampled from the videos of FI-DDD with a specific frame interval. The results of the CNN are directly related to the multi-granularity patches and the CNN parameters; we thus analyze the effects of those factors on the static image set. All experiments for analyzing the effects of the LSTM parameters are carried out on the FI-DDD dataset. To compare with the previous methods, we evaluate the proposed method on the evaluation set of the NTHU-DDD dataset. C. Experimental Analysis To further explain the effects of alignment, multi-granularity and the CNN extractor, several groups of experiments are conducted on the static image set. We also provide experiments on the FI-DDD dataset to verify the effectiveness of LSTMs for detecting drowsiness in videos. 1) The Importance of Alignment: It is essential to carry out experiments to explain the significance of alignment and the effects of locating precision. Non-alignment vs. with alignment: We provide two non-alignment methods to sample the multi-granularity patches within the facial bounding box: Uniform Sampling (US) and Specific Sampling (SS). The corresponding sizes for our Aligned Sampling (AS) method and the two non-alignment ones are the same. Fig. 3 (left) shows the comparison of AS, US and SS. AS, which considers alignment, achieves the best accuracy of 87.4% on the test set of the static image set, which is 4.9% higher than the SS method and 6.2% higher than the US one. In conclusion, alignment of facial patches, which provides aligned representations, is an effective way to improve the accuracy of driver drowsiness detection. Effects of alignment precision: We evaluate the effects of the alignment precision and study the influence quantitatively by adding random noise with Gaussian distribution N(0, σ) to the well-aligned facial points. Fig. 3 (right) shows the results on the test set of the static image set, from which we observe that the accuracy decreases with increasing standard deviation of the noise and even drops below 80% if σ ≥ 10 px. Since the accuracy remains above 83% with σ less than 5 px, we conclude that the proposed MCNN is robust to corrupted locations if σ ≤ 5 px. 2) The Effects of Multi-granularity Patches: Multi-granularity patches consist of local regions, main parts and the global face. It is important to conduct experiments to explain the contribution of those granularities to driver drowsiness detection. We apply a fully connected layer and a softmax operation to classify the representations produced by the MCNN extractor, and analyze the effects of the multi-granularity patches through the classification results. Learning curve on different granularities: We take four different granularities, listed as local regions, main parts, global face and the combination of the above, into account to analyze the effects of multi-granularity facial patches.
Fig. 4 illustrates the comparison of those granularities, from which we see that the convergence speed of the method with the global face granularity is the slowest, that of the local regions is the fastest, and the multi-granularity method achieves good performance in both convergence speed and accuracy. Aligned points can achieve higher precision on those local regions with abundant boundary texture, which results in better-aligned representations that are easier to classify. Nevertheless, multi-granularity patches containing more aligned information are more effective for driver drowsiness detection. Effects of positions and sizes: We change the positions and sizes of the facial patches respectively. As shown in Fig. 5 (left), the facial main parts, including the eyes, nose and mouth, obtain the best accuracy of 83.6% compared with the other single-granularity methods. Obviously, the combination of the three granularities achieves the best accuracy of 87.4%. We conclude that the most effective representation is extracted from the three main facial parts, while the fusion of local and global clues is an excellent way to obtain better facial representations. We then set the sizes of the patches to be the same and vary the sizes to study the difference between single-size and multi-granularity methods, keeping the locations of the patches unchanged. Fig. 5 (right) shows that regions with different sizes achieve 2.3% higher accuracy than single-size patches. This phenomenon results from the fact that different physiological parts have different sizes; e.g., the global face is larger than a single eye. The above analysis shows that the multi-granularity method is an effective way to represent facial features. 3) The Parameter Selection of the MCNN Extractor: The structural parameters of the convolutional layers are listed in Table II. A patch of size 64×64 processed by those convolutional layers is projected to a tensor of size 16 × 16 × 4, and a representation of the patch is generated by reshaping the tensor into a 1024-dimensional vector, which is the input of a fully connected layer. A fully connected layer is applied to combine the multi-granularity clues and generate the MCNN representations. The number of its hidden units N, namely the dimension of the representation, affects the combination of those patches. Changing the number of hidden units N, we explore the relation between the dimension of the MCNN representations and the classification accuracy with well-aligned multi-granularity facial patches. The comparison of different dimensions is shown in Fig. 6, which indicates that the dimension has almost no influence on the convergence speed, but 256-dimensional representations achieve the highest accuracy. Therefore it is reasonable for us to choose the number of hidden units as 256. 4) The Significance of LSTMs: We first apply MCNN alone to detect driver drowsiness in videos, but it has no capacity to capture temporal clues. MCNN+LSTMs is considered to deal with this drawback. It is necessary to compare the situation with LSTMs [26] and that without LSTMs to understand the effects of the LSTMs. All experiments in this part are carried out on the FI-DDD dataset in daytime scenarios. Parameters setting: The representations given by the MCNN extractor are 256-dimensional, and the number of hidden units in each LSTM block is also 256. The forget gate is enabled and the maximum memory step is set to 60 frames.
We randomly select a batch with 1000 samples to train the LSTM parameters with a learning rate of 3e-4. The fully connected layer projects the states of the last LSTM block to a 2-dimensional vector, which is decoded into the probability of drowsiness by a softmax operation. MCNN-Only vs MCNN+LSTMs: The experiments are carried out on four different granularities to study the effects of the multi-granularity scheme and the LSTMs. Fig. 7 shows the accuracy of MCNN only and MCNN+LSTMs for detecting drowsiness in videos on the test set under different granularities. The MCNN-only method obtains 72.7% accuracy, while the accuracy achieved by MCNN+LSTMs is 15.6% higher than that of MCNN only. The reason is that the LSTMs are able to mine the clues in the temporal dimension, which is significant for recognizing many ambiguous states, such as closing eyes and blinking. Comparing the accuracies of different granularities, we find that the well-aligned multi-granularity facial patches still achieve the best performance. The accuracy of the main parts ranks second, which means the granularity of main parts plays the most important role in improving the effectiveness compared to the other two granularities. D. Comparisons with The Previous Methods We evaluate the whole method on the evaluation set and compare it with the previous methods [7], [14], [9] evaluated on the same dataset. Due to the long-term memory characteristics of the NTHU-DDD dataset, the maximum memory length is set to 120 frames and the other parameters remain the same as in the above experiments. Especially for night scenarios, we retrain a model with the night data of NTHU-DDD to detect driver drowsiness on near-infrared videos. Accuracy: Table III presents the comparison of our method and the previous work [7], [14], [9]; the proposed method achieves 90.05% accuracy, which is the state-of-the-art result for driver drowsiness detection. Speed: We measure the time consumption of all modules of our proposed method. From Table IV, the CNN costs the most time and the approach achieves about 3 fps on a CPU platform, while on a GPU platform, the proposed method can achieve 37 fps and satisfy real-time performance requirements. V. CONCLUSION We propose an effective and efficient Long-term Multi-granularity Deep Framework to detect driver drowsiness in videos. Proper alignment of facial patches ensures the effectiveness of the representations under large pose variation, and multi-granularity patches efficiently concentrate on the most significant regions. The Multi-granularity Convolutional Neural Network (MCNN) can effectively learn both detailed appearance information of the main parts and local-to-global spatial constraints. The deep Long Short Term Memory (LSTM) network works well for learning the temporal dependencies of the spatial representations for driver drowsiness detection. Moreover, we build a dataset named Forward Instant Driver Drowsiness Detection with higher precision of drowsy locations in the temporal dimension. The dataset performs well for training model parameters and analyzing the effects of several factors. Finally, we evaluate our method on the evaluation set of the NTHU-DDD dataset and achieve 90.05% accuracy and about 37 fps, which is the state-of-the-art result for driver drowsiness detection.
Determination of Johnson–Cook Material and Failure Model Constants for High-Tensile-Strength Tendon Steel in Post-Tensioned Concrete Members

The estimation of damage in steel tendons is important for evaluating the remaining capacity of existing tensioned members. This research focuses on calculating Johnson–Cook (JC) model and damage parameters of a high-strength steel material through quasi-static and dynamic uniaxial tests. Finite element analysis is used to replicate the experimental procedure, and the accuracy of the numerical results is verified through digital image correlation analysis. The investigation shows that the JC model can accurately reproduce deformation and stress concentration under different strain rates and triaxiality conditions and can therefore be used for the fracture analysis of prestressed concrete members.

Introduction

A clear understanding of the damage and deformation caused in high-strength steel tendons by plastic deformation, strain-rate, and stress concentration effects is of critical importance for the accurate evaluation of existing tensioned concrete members. During regular operation of tensioned members, tendons should remain elastic or well within their serviceability state. Despite that, in recent history, corrosion-induced damage to tensioned members of bridge girders has been reported by several researchers [1][2][3][4][5][6]. Corrosion-induced failure of tendons is not thoroughly covered in typical design practices for prestressed members and may cause the collapse of the bridges in which they are most commonly installed. In current practice, manual inspection along the length of tendons where corrosion is suspected to have occurred is necessary. Corrosion of tendons in these applications occurs primarily for two reasons. On the one hand, cracks form in the surrounding concrete, or the prestressed member is in close proximity to chloride-contaminated water. On the other hand, tendons rupture at inadequately grouted positions in the sheathing owing to the ingress of water from the anchoring points. Once corrosion has occurred in a tendon, depending on its severity, it can cause a significant reduction in the cross-sectional area of the member [7] and thus induce an unfavorable stress concentration condition. Furthermore, in cases of severe corrosion, the remaining cross-sectional area might be insufficient to bear the imposed service dead loads and might yield or fracture (Figure 1). In the latter case, depending on the condition of the surrounding tendons, complete failure might occur if, during fracture and dynamic loading, the redistribution capacity of the remaining tendons is exceeded. This is why investigating the fracture performance of high-strength tendons under high strain rates and stress concentration conditions is important for properly evaluating the remaining capacity of a tensioned member. Once the state of corrosion in a tensioned member is verified, risk assessment analysis can be performed, and the remaining capacity can be simulated with the aid of finite element (FE) tools.
One of the most common tools used by researchers [8][9][10][11] to evaluate the performance and characteristics of metals under coupled stress and high strain-rate conditions is the Johnson-Cook (JC) material model with damage [12] and it is currently one of the most widely incorporated models in commercial FE software packages, due to its ability to predict material behavior with accuracy and speed, and because it couples a flow stress model with strain rates, elevated temperatures as well as stress concentration conditions. K. Xu et al. [13] performed an experimental investigation of seven high-strength steels in an effort to propose a modification to the traditional JC constitutive model. In their research, BH300, HSLA350, 440 W, HSS590, TRIP590, DP600 and DP800 steel materials were used in uniaxial tensile tests with strain rates ranging from 0.005 s −1 to 1000 s −1 , at normal environmental temperatures, as the heat-related material softening was not the primary objective of the research. Results were used to calibrate material parameters of the traditional JC constitutive model as well as for the evaluation of resulting differences between their proposed model, the traditional JC model, and the experimental data. From their research, it was found that from strain ranges of 2-15% and a tensile strength of 450-850 MPa, the proposed model had an average error of 2%, which is acceptable. K. Vedantam et al. [14] investigated the mechanical response of two types of steel, Mild and DP590, in tension, at room temperature, using quasi-static and split Hopkinson bar techniques at strain rates ranging from 0.001 s −1 to 1800 s −1 , and the resulting data were used to calculate the JC model parameters. It was found that for increasing strain rates, fracture strain as well as ultimate stress values increased in a similar manner approaching ultimate stress values of 1000 MPa. Finally, detailed JC material parameters were presented. From the performed literature investigation, it was clearly identified that to accurately model the fracture behavior of high-strength steels, proper material definition and model calibration is required. The data necessary for the aforementioned FE modeling need to be obtained through expensive and time-consuming experimental effort under both high strain rates and high-stress concentration conditions to accurately consider both damage initiation and progression parameters. For high-tensile-strength tendon material commonly used in Japanese infrastructure in general, readily available model constants are not available. In this work the overall behavior, including plastic deformation and the fracture characteristics of medium-carbon high-strength steel used in tensioned members, has been studied through extensive experimental analysis under quasi-static and medium strain-rate loading conditions as well as stress concentration through the implementation of the tensile testing of notched specimens. Failure parameters and material constants for the JC model under room temperature have been calculated through the analysis of experimental data. Damage growth parameters are also introduced and proposed for accurate modeling of necking and fracture of tensile specimens under uniaxial loading conditions. 
The constants have been evaluated through numerical modeling of dog-bone-type tensile specimens under uniaxial loading conditions at strain rates similar to the experimental configuration, and digital image correlation (DIC) was implemented to verify the strain propagation under the different strain-rate and stress concentration conditions.

Materials and Experimental Procedure

In this research, the SBPR 930/1080 Type B No. 1 medium-carbon high-strength steel was investigated; its chemical composition is presented in Table 1 (in wt%). The material used for manufacturing the specimens was supplied by a local company according to JIS G 3109 [15], in 450 mm × 32 mm cylindrical pieces from which the dog-bone-type specimens were manufactured on a manual lathe. Geometrical details of the cylindrical specimens used throughout this research, for both quasi-static and dynamic tensile tests and with smooth as well as notched gauge lengths, can be seen in Figure 2. For the tensile tests, an MTS 244.11 servo-hydraulic actuator (Figure 3) was used; its acceleration, velocity, displacement, and excitation frequency characteristics are presented in Figure 4. Despite the dynamic character of the experimental procedure, the experimental parameters were well within the capability envelope of the actuator. The actuator was mounted on a loading frame using ball joints at both the fixed and the extendable part of the piston to allow increased mobility and flexibility under various testing conditions and requirements. For this research, both the specimens and the actuator were positioned and fixed in a vertical orientation to ensure an in-line application of the pulling force (Figure 5). Furthermore, to ensure a constant pulling rate, the loading end of the specimens was constructed so as to allow an initial retraction of the actuator's piston without exerting force on the specimen; once the required velocity is achieved, contact between the top of the specimen and the mounting fixture (plate) at the movable end of the actuator is initiated, transferring the resulting load onto the specimen body. As can be seen in Table 2, three specimens were tested for each required strain rate in the dynamic loading cases, and the resulting force-displacement data were converted into true stress-strain data using the standard equations for uniaxial tensile tests. To further aid the calibration of the initial part of the flow stress-strain curves, 2 mm strain gauges were attached with adhesive at the middle of the respective gauge lengths, and strain data were captured up to 20,000 µm, before adhesive failure resulted in detachment of the gauges. From the flow stress-strain data of the smooth (no notch) specimens (Figure 6a), it can be seen that, for the same strain rate, similar stress-strain curves were obtained. At the quasi-static strain rate (0.001 s−1), this material showed the most ductile behavior, fracturing on average at 0.127 strain. In the uniaxial dynamic tensile tests (Figure 6b), the material showed more brittle behavior at the 0.5 s−1 strain rate, fracturing on average at 0.092 strain. In comparison with the 0.5 s−1 case, specimens tested at 1 s−1 showed an increase in ductility, fracturing on average at 0.099 strain, as well as the highest ultimate stress values overall. Finally, at the 2 s−1 strain rate, the specimens presented the most brittle behavior of all cases, while not showing any significant increase in ultimate stress compared with the slowest dynamic case.
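The "standard equations" referred to above for converting the measured force-displacement data into stress-strain curves are, up to the onset of necking, the usual uniaxial relations. A minimal sketch, assuming the raw data are available as force and elongation arrays and that the nominal specimen dimensions are known, is:

```python
import numpy as np

def stress_strain_curves(force_N, elongation_mm, gauge_length_mm, diameter_mm):
    """Convert force-elongation data from a uniaxial tensile test into
    engineering and true stress-strain curves (valid up to necking)."""
    area0 = np.pi * (diameter_mm / 2.0) ** 2         # initial cross-section, mm^2
    eng_strain = elongation_mm / gauge_length_mm
    eng_stress = force_N / area0                      # MPa (N/mm^2)
    true_strain = np.log(1.0 + eng_strain)
    true_stress = eng_stress * (1.0 + eng_strain)     # assumes constant volume
    return eng_stress, eng_strain, true_stress, true_strain
```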
In Figure 7, engineering stress-strain data (computed using the initial cross-sectional area) and converted true stress-strain data (computed using the actual, time-varying cross-sectional area) are shown for the quasi-static testing of the notched specimens listed in Table 2. Similarly to the variable strain-rate data, when the notch size and radius are changed, good agreement is observed between similar cases. The smooth specimens were the most ductile, and the fracture strain progressively decreased as the minimum specimen radius and the notch radius decreased. A notable reduction in both ultimate stress and failure strain was observed for the C7_0.001R0.4 and C8_0.001R0.4 specimens, resulting in considerably more brittle fracture behavior in comparison with the smooth cases.

Johnson-Cook Model

The Johnson-Cook model can accurately analyze and predict the stress-strain behavior of ductile materials, and its applicability and accuracy have been studied thoroughly in the literature for steel and aluminum alloys under combined conditions of large deformation, high strain rate, and elevated temperature, with a focus on metal forming and impact performance [16][17][18][19]. The JC stress model is expressed in Equation (1):

σ = (A + B·εp^n) · (1 + C·ln ε̇*) · (1 − T*^m)   (1)
in which σ represents the von Mises (equivalent) stress, A is the yield stress of the tested material under reference conditions (strain rate and temperature), B is the strain hardening constant, n is the strain hardening coefficient, C is the strain-rate coefficient governing post-yield strengthening of the material, ε̇* is the dimensionless strain rate, and T* is the homologous temperature; the last two are defined in Equations (2) and (3):

ε̇* = ε̇p / ε̇0   (2)

T* = (T − Tref) / (Tm − Tref)   (3)

where ε̇p is the accumulated (equivalent) plastic strain rate and ε̇0 is the reference strain rate, which in this work was taken as 0.001 s−1; Tm is the melting temperature of the material and Tref is the reference temperature. For the scope of this research, the performance of the high-strength medium-carbon steel was investigated under quasi-static and medium dynamic strain rates as well as varying stress concentration conditions, but the temperature factor was not considered, since the primary failure factor of tensioned members is usually corrosion, as mentioned in the previous sections.

Determination of Material Constants A, B, n

At the reference strain rate (ε̇ = ε̇0) and reference temperature (T = Tref), the second and third parentheses in Equation (1) reduce to unity, since the effects of strain-rate strengthening and thermal softening are neglected. Taking the natural logarithm of the remaining terms gives Equation (4):

ln(σ − A) = ln B + n·ln εp   (4)

Using the averaged true stress-strain data from cases C1_0.001_NR, C2_0.001_NR and C3_0.001_NR and plotting ln(σ − A) against ln εp according to Equation (4), a linear regression model was used to fit the data points, as can be seen in Figure 8. The A parameter was calculated under reference strain-rate conditions using the 0.2% offset method. For the linear fit presented in Figure 8, an R² of more than 97.5% was achieved, indicating good accuracy of the regression model; the material constants B and n were then obtained from the slope and intercept of the fitted line and are listed in Table 3.

Determination of Material Constant C

For the purpose of this work, and while not considering thermal softening effects, Equation (1) can be rewritten as Equation (5):

σ / (A + B·εp^n) = 1 + C·ln ε̇*   (5)

To obtain the C parameter, stress-strain data at four strain rates (0.001 s−1, 0.5 s−1, 1 s−1 and 2 s−1) were used to plot Figure 9, with the A, B and n constants calculated in the previous section substituted into Equation (5). First-order linear fitting was then performed with the vertical-axis intercept fixed at 1, since Equation (5) has the form y = a + bx (a sensitivity analysis of the C-parameter calculation is presented in Appendix B). As in Figure 8, the C parameter was calculated from the slope of the linear regression fit and is also listed in Table 3.
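The two regression steps just described (Equation (4) for B and n at the reference strain rate, Equation (5) for C across strain rates) amount to ordinary least-squares fits. A minimal sketch of both fits is given below; the input arrays stand for the averaged plastic-strain/true-stress data of the quasi-static cases and the hardening data at each tested strain rate, and A is assumed to have been obtained beforehand from the 0.2% offset method, as stated above.

```python
import numpy as np

def fit_B_n(eps_p, sigma, A):
    """Equation (4): ln(sigma - A) = ln(B) + n*ln(eps_p) at the reference rate."""
    x, y = np.log(eps_p), np.log(sigma - A)
    n, lnB = np.polyfit(x, y, 1)              # slope = n, intercept = ln(B)
    return np.exp(lnB), n

def fit_C(datasets, A, B, n, eps0=1e-3):
    """Equation (5): sigma/(A + B*eps_p^n) = 1 + C*ln(rate/eps0).
    `datasets` maps strain rate -> (eps_p, sigma) arrays.  Because the
    intercept is fixed at 1, C is the least-squares slope of the shifted data."""
    xs, ys = [], []
    for rate, (eps_p, sigma) in datasets.items():
        ratio = sigma / (A + B * eps_p ** n)
        xs.append(np.full_like(ratio, np.log(rate / eps0)))
        ys.append(ratio - 1.0)
    x, y = np.concatenate(xs), np.concatenate(ys)
    return float(x @ y / (x @ x))             # slope with intercept forced to 1
```

The only non-obvious step is forcing the intercept of the second fit to 1, so that C alone is identified from the slope, mirroring the procedure used for Figure 9.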
The material constants calculated from the aforementioned constitutive equations for the JC model are summarized in Table 3.

Johnson-Cook Damage Model Parameters

Substituting the material constants from Table 3 into Equation (1) yields the JC relationship between stress, strain, and strain rate for this material, given (in MPa) in Equation (6). When comparing the experimental data with the analytical prediction of Equation (6), good accuracy is observed up to the onset of damage and necking of the tensile specimen (Figure 10). To accurately simulate damage in the material model with regard to the JC damage parameters, it is important to define the point at which damage is calculated; in this study, after careful consideration (Appendix A), the authors decided to use the damage initiation point of Figure 10. The JC damage model relates the fracture strain to the stress triaxiality ratio, the strain rate, and the temperature [11,20], and it is expressed in Equation (7):

εf = [D1 + D2·exp(D3·σm/σeq)] · [1 + D4·ln ε̇*] · [1 + D5·T*]   (7)

D1 to D5 represent the damage constants of the JC model, σm is the mean (hydrostatic) stress, and σeq is the equivalent (von Mises) stress. As damage occurs in an element governed by the JC damage model, it accumulates according to a damage law that can be represented by Equation (8) [21]:

D = Σ (Δε / εf)   (8)

When damage occurs during high levels of deformation, the material strength is reduced [11], and the resulting relation for the stress during this damage evolution step is presented in Equation (9):

σD = (1 − D)·σ   (9)
where Δε is the equivalent plastic strain increment and εf is the equivalent strain to fracture under the given stress, strain-rate, and temperature conditions. In Equation (9), σD is the resulting stress after damage in an element has been initiated, and D is a damage parameter bounded by 0 ≤ D ≤ 1. In Equation (7), σm/σeq can also be defined as the stress triaxiality ratio η* [22,23]; together with the equivalent stress, it can be obtained from the undamaged material while considering plastic deformation up to the onset of necking. According to the work of Bridgman [24], stress triaxiality values can be estimated from uniaxial tests of round specimens using the analytical model presented in Equation (10):

η* = 1/3 + ln(1 + a/(2R))   (10)

In this model, η* is the stress triaxiality value, R is the radius of the notch to which the specimen is manufactured, and a is the radius of the minimum cross-section. Triaxialities calculated according to Bridgman's model for the different notched specimens are given in Table 2. Neglecting the effects of strain rate and temperature, Equation (7) simplifies so that the fracture strain is expressed only in terms of the D1 to D3 damage parameters and the stress triaxiality ratio. Plotting fracture strain against stress triaxiality ratio (Figure 11) using the experimental tension data [25] at the 0.001 s−1 strain rate for the smooth and notched specimens of Figure 7b and Table 2, and fitting a curve of the form y = A + B·exp(R0·x), the D1 to D3 damage parameters can be calculated from the coefficients of the fitted exponential, in a manner similar in principle to the derivation of Equation (4). The strain-rate-dependent parameter D4 was then calculated by rewriting Equation (7), with the previously calculated D1 to D3 parameters, as Equation (11):

εf / (D1 + D2·exp(D3·η*)) = 1 + D4·ln ε̇*   (11)

In detail, plotting the left-hand side of Equation (11) against ln ε̇* (Figure 12) and fitting a linear regression line intercepting the vertical axis at 1.0, the final JC damage parameter D4 was calculated from the slope of the resulting equation.

Figure 11. Fracture strain and stress triaxiality relationship from uniaxial tensile test data.

The calculated JC damage model parameters are summarized in Table 4 and can be used in FE software to simulate the yield and fracture of high-strength tendons in prestressed concrete applications.
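The damage-parameter calibration described above can likewise be scripted: Bridgman's relation (Equation (10)) supplies the triaxiality of each notched geometry, an exponential fit to the fracture-strain/triaxiality pairs gives D1-D3 (Equation (7) with the rate and temperature terms dropped), and a constrained linear fit over the strain-rate data gives D4 (Equation (11)). The sketch below assumes the fracture strains, notch geometries, and strain rates have already been collected into arrays; the initial guesses are placeholders, not values from this work.

```python
import numpy as np
from scipy.optimize import curve_fit

def bridgman_triaxiality(a_mm, R_mm):
    """Equation (10): triaxiality at the minimum section of a notched round bar."""
    return 1.0 / 3.0 + np.log(1.0 + a_mm / (2.0 * R_mm))

def fit_D1_D2_D3(triaxiality, fracture_strain):
    """Fit eps_f = D1 + D2*exp(D3*eta) to the quasi-static data (Figure 11)."""
    model = lambda eta, D1, D2, D3: D1 + D2 * np.exp(D3 * eta)
    (D1, D2, D3), _ = curve_fit(model, triaxiality, fracture_strain,
                                p0=(0.05, 1.0, -1.5), maxfev=10000)
    return D1, D2, D3

def fit_D4(rates, fracture_strain, D1, D2, D3, eta=1.0/3.0, eps0=1e-3):
    """Equation (11): eps_f/(D1 + D2*exp(D3*eta)) = 1 + D4*ln(rate/eps0),
    fitted with the intercept fixed at 1 (smooth specimens, eta = 1/3)."""
    x = np.log(np.asarray(rates) / eps0)
    y = np.asarray(fracture_strain) / (D1 + D2 * np.exp(D3 * eta)) - 1.0
    return float(x @ y / (x @ x))
```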
Numerical Simulation

The purpose of the numerical modeling was to verify the reproducibility of the experimental results in the commercially available finite element software Abaqus Explicit [26], as part of a broader research effort aimed at modeling the dynamic fracture behavior of prestressed concrete members. To accurately reproduce the dynamic fracture effects of high-tensile-strength tendons, non-linear dynamic analysis was used throughout the simulation of the uniaxial tensile tests. Full-scale, three-dimensional models were created, accurately reproducing the geometrical properties of the manufactured specimens shown in Figure 2. In both the single-element and the full-scale analyses, geometric nonlinearity was taken into account using the "NLgeom" option available in Abaqus Explicit.

Numerical Simulation of a Single Finite Element

Initially, to verify the accuracy of the analytical model in FE simulations, a single 8-noded cubic C3D8R element measuring 1 mm³ was modeled, and suitable boundary conditions were applied to simulate biaxial symmetry in the two axes perpendicular to the loading direction. To maintain a stress triaxiality ratio η* of 0.333 throughout the tensile test, the bottom four nodes were restrained in the direction of the applied force, and the four nodes on each of the two faces perpendicular to the loading axis had their movement restrained in the respective orthogonal axes, as can be seen in Figure 13a. The top four nodes were free to move in the loading direction, and a velocity-based loading condition was applied to them. Similarly to the experimental procedure, loading speeds of 0.1-200 mm/s were applied to the top four nodes simultaneously. To reduce inertia-related effects at the beginning of the simulation, the velocity amplitude was ramped up smoothly over the first 1/10th of the overall step duration and then kept constant until the completion of each test (Figure 13b). The duration of the tensile phase of the simulation was adjusted each time according to the required strain rate, allowing sufficient simulation time up to complete damage of the tested element. In Abaqus, several ductile material models are available that can capture the deformation of steel materials, but in this work the JC flow stress model and the corresponding JC damage model were used with the material parameters calculated in the previous sections. In addition to the data presented in Tables 3 and 4, according to the manufacturer's specifications, Young's modulus E = 210 GPa and Poisson's ratio ν = 0.28 were used, and the density was set to ρ = 7.85E−09 tonnes/mm³.
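As a conceptual cross-check on the single-element run, the same constitutive response can be traced for one material point in a few lines: at each increment the plastic strain grows at the prescribed rate, the flow stress follows Equation (1) with the temperature term omitted, the damage-initiation measure accumulates per Equation (8), and the stress is degraded per Equation (9). This is only a sketch: the constants are passed in by the caller, and the linear post-initiation softening ramp (`soften_span`) is an assumed stand-in for the tabulated D-ε curve of Figure 15b, not the Abaqus implementation.

```python
import numpy as np

def uniaxial_jc_response(rate, duration, A, B, n, C, D, eps0=1e-3,
                         eta=1.0/3.0, soften_span=0.02, steps=2000):
    """Trace one material point in uniaxial tension at a constant plastic
    strain rate; returns an array of (plastic strain, damaged stress) pairs.
    D = (D1, D2, D3, D4); temperature effects are neglected throughout."""
    D1, D2, D3, D4 = D
    ln_rate = np.log(rate / eps0)
    eps_f = (D1 + D2 * np.exp(D3 * eta)) * (1.0 + D4 * ln_rate)   # Equation (7)
    d_eps = rate * duration / steps
    eps, omega = 0.0, 0.0               # plastic strain, initiation measure
    hist = []
    for _ in range(steps):
        eps += d_eps
        omega += d_eps / eps_f                                    # Equation (8)
        sigma = (A + B * eps ** n) * (1.0 + C * ln_rate)          # Equation (1)
        if omega < 1.0:
            damage = 0.0                 # no degradation before initiation
        else:                            # assumed linear softening ramp
            damage = min(1.0, max(0.0, (eps - eps_f) / soften_span))
        hist.append((eps, (1.0 - damage) * sigma))                # Equation (9)
        if damage >= 1.0:
            break                        # point at which the element would fail
    return np.array(hist)
```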
In Figure 14, a comparison between the experimental true stress-strain curves for the different strain rates of Table 2 and the results obtained from the single finite element is presented. Overall, good accuracy was achieved between the experimental and numerical results, with the exception of the 1 s−1 strain rate, for which the failure strain was similar but the ultimate stress differed by approximately 8%. It is believed that this is due to the non-monotonic behavior of the failure strain and ultimate stress observed during the experimental procedure (Figure 6b). To accurately model the material softening behavior past the damage initiation point [27], represented in Figure 15a (in which σy0 is the yield stress, εpl,0 is the equivalent plastic strain at the damage initiation point, and εpl,f is the equivalent plastic strain at failure, when the scalar damage parameter D = 1 in Equation (8)), damage and strain are correlated. For modeling the post-damage-initiation softening up to element failure, the post-peak stress σD is calculated from the difference between the experimental data and the JC analytical prediction (Figure 10). From the relevant stress values σD and Equation (9), sets of D-ε pairs were calculated; their correlation is presented in Figure 15b, which shows the relation between the damage parameter and the equivalent plastic strain used to model the softening behavior of the material in this work under reference conditions.

Numerical Simulation of Full-Scale Tensile Specimens

To simulate the ductile failure of the dog-bone-shaped tensile specimens, a three-dimensional model was constructed, replicating in detail the geometrical properties of the manufactured tensile specimens.
After an initial mesh sensitivity analysis (Appendix C), the maximum size of the C3D8R elements was chosen as 3 mm × 2 mm × 2 mm (coarse mesh) and the minimum size along the working length as 1 mm × 1 mm × 1 mm (fine mesh), resulting in a total of 25,344 elements (Figure 16). For the material modeling, the parameters stated in Section 3 were used, and boundary conditions were applied in accordance with the experimental setup. As in the experimental procedure, quasi-static 0.001 s−1 and 0.5 s−1 strain-rate uniaxial tensile simulations were performed; the results can be seen in Figure 17. As in the experiments, load and displacement were monitored throughout the simulated uniaxial tensile tests, and the obtained load-displacement data were transformed into engineering stress-strain curves and, subsequently, into true stress-strain curves. Overall, good accuracy is observed between the experimental and numerical results for both cases. In particular, for the quasi-static strain-rate case, apart from some initial discrepancy in the post-yielding stress capacity (the numerical results overestimate the experimental case by 2.6%), the stress-strain curve closely follows the experimental results, and a 2.4% difference in ultimate stress is observed. Furthermore, although the fracture strain of the numerical analysis is similar to the experimental one, the numerical simulation retains a higher stress state at larger strains before decreasing sharply and leading to element failure. For the 0.5 s−1 strain-rate case, the numerical model underestimates the experimental results by 2.7% after yielding, but beyond that point the numerical stress-strain curve closely follows the experimental one; finally, a 3.6% difference is observed in the failure strain between the numerical and experimental results. These observations are summarized in Table 5.

Numerical Model Verifications for Smooth Specimens

To further validate the accuracy of the JC flow stress model and the JC damage parameters calculated in the previous sections, DIC analysis was attempted for specimens subjected to uniaxial tension at different strain rates. The DIC analysis was performed with the commercially available software "GOM Correlate", a DIC evaluation program that is widely used for material research and component testing and follows a parametric approach that ensures reliable measurement of the required strains. The user defines initial parameters for the strain surface components in the area of interest, and the software creates square-shaped facets across the whole range of supplied image data. Based on a high-contrast stochastic pattern that the user applies to the area of interest where strain is to be measured, the software identifies these facets according to the quality of the pattern. An additional parameter that the user must adjust is the center-to-center distance between adjacent facets.
This setting directly influences, and is correlated with, the point density within the area of interest; higher spatial resolution can be obtained by decreasing the distance between adjacent facets [28][29][30]. In this work, a full-frame CMOS camera was used to capture 1920 × 1080 pixel image series and videos for the DIC analysis. The captured stochastic pattern (Figure 18) was processed using a facet size of 14 and a facet distance of 9 to evaluate the corresponding strain fields. To compare the experimental results with the numerical modeling, strain was recorded along the axis of the cylindrical specimens and compared with the strain at the central nodes of the FE model.
Due to camera limitations in video frame-rate capture, as well as shortcomings of the applied pattern, DIC analysis was attempted for all experimental cases but was only successful for the quasi-static and 0.5 s−1 cases. For the higher strain rates, 120 fps video recording was attempted in order to capture a sufficient number of images, but only a resolution of up to 1280 × 720 pixels was then available; the lowered resolution, coupled with the brittleness of the coloring used to create the stochastic speckle pattern, severely limited the amount of usable data for the DIC analysis. The resulting strain profiles can be seen in Figure 19 for (a) the quasi-static tensile case and (b) the dynamic case with a strain rate of 0.5 s−1. In both Figure 19a,b, the horizontal-axis data for the finite element modeling (FEM) case have been shifted by an amount suitable to each case in order to align the portion of the gauge length in which necking and, correspondingly, fracture occurred. To aid visualization, true strain and normalized gauge length are used for the data points. For the quasi-static case, due to degradation of the stochastic pattern, strain data were captured only up to approximately 0.2 strain.

Figure 18. Stochastic speckle pattern used for DIC analysis at (a) the beginning of the tensile test and (b) one frame before rupture.

When comparing the FEM results with the DIC for the 0.001 s−1 strain rate, the high-strain region around the neck extends over 22.6% of the normalized length in the simulation, as opposed to 26.5% in the DIC measurement, i.e., the tested specimen forms a necking region longer by 3.9%. For the 0.5 s−1 strain rate, even better accuracy is observed, with the FEM results underestimating the length over which necking occurs by only 1.6%. Overall, good agreement is observed between the DIC and FEM results in both cases, further supporting the suitability of the proposed JC model and damage parameters for the SBPR 930/1080 Type B No. 1 high-strength tendon material. In Figure 20, the strain maps from the DIC analysis and the FE simulations are overlapped, visualizing the results presented in Figure 19 for the smooth specimens at the last captured frame before rupture. In each figure, the FE strain contours have been scaled to the DIC values, and the overlapping FEM image has been repositioned to align the necking region with the DIC image. As a result, in the 0.001 s−1 case, the strain values near the center of the necking region exceed the visualization limits of 0.0-0.2 strain, and the regions with strains above these limits are shown in grey.

Conclusions and Recommendations

In this work, numerous tensile tests at room temperature and strain rates of 0.001 s−1-0.5 s−1 were performed in order to calculate the Johnson-Cook model and damage parameters for the SBPR 930/1080 Type B No. 1 tendon material, aimed at the fracture analysis of post-tensioned concrete members. Overall, the results obtained after calculating the JC parameters showed good agreement with the experimental data. To verify the agreement between the numerical and experimental tensile data, commercial FEM software was used: the experimental tensile tests were replicated in detail in order to verify the performance of the damage model against experimental observations, experimental stress-strain data, and DIC analysis. The JC model is found to predict the experimental data closely and with less effort than other analytical models, but, on the other hand, properly calibrating the related parameters requires extensive data from several experimental cases. It was found that, besides small prediction differences between the FE simulations and the experimental results, good accuracy was achieved in predicting the effects of strain concentration and geometrical deformation (necking). Based on these outcomes, the followed procedure can be applied to closely predict the performance of the tested material for the fracture analysis of post-tensioned concrete members.
As for recommendations: in this work, all testing was performed at room temperature, so damage-related parameters associated with material softening at elevated temperatures could not be identified; further testing could therefore be performed to identify these parameters, and dynamic testing could be carried out at even higher strain rates to gain a broader picture of strain-rate-related hardening effects. Furthermore, by utilizing the JC model and damage parameters presented in this work, researchers can simulate and estimate the remaining strength of a damaged or corroded PC tendon. By measuring or estimating the remaining cross-section of a tendon, it is possible to estimate the stress-concentration state surrounding the damaged or corroded area and to construct relevant fragility curves based on the anticipated loading conditions.

Author Contributions: I.G.: conceptualization; methodology; investigation; data analysis and curation; validation; writing-original manuscript. N.C.: supervision, reviewing and editing. All authors have read and agreed to the published version of the manuscript.
Conflicts of Interest: The authors declare no conflict of interest.

Appendix A

Calculations and parameter estimations in Sections 3 and 4 were performed considering the damage initiation point of Figure 10 and the subsequent material softening described in Section 4.1 and Figure 15b. Following that procedure, the FEM simulations up to the point of fracture closely match the experimental data both for the single element (Figure 14) and for the full-model simulations (Figure 17). When the fracture point is instead used as the basis for calculating the JC damage parameters, the correlation between fracture strain and stress triaxiality obtained from the uniaxial tensile test data is shown in Figure A1. The curve, although still exponential in shape, differs distinctly as a result of the different D1-D3 parameters. Similarly, when the newly calculated parameters are inserted into Equation (11), the slope of the linear fit and the corresponding data points yield a different D4 parameter, as can be seen in Figure A2. The JC damage model parameters calculated on the basis of the fracture point are summarized in Table A1; similarly to Section 3.3, they can be used in FE software to simulate the yield and fracture of high-strength tendons, but with an evident overestimation of the corresponding softening behavior. With the parameters of Table A1, a comparison similar to Figure 14, between the experimental true stress-strain curves at the different strain rates of Table 2 and the results obtained from a single finite element, is shown in Figure A3. It is evident that, owing to the use of the fracture strain instead of the damage initiation value, larger discrepancies are observed between the experimental and FE simulation results, especially for the 0.001 s−1 strain rate.
Figure A3. Comparison between experimental (Exp, continuous lines) and FE simulation (Sim, dotted lines) results for 0.001 s−1-2 s−1 strain rates using the JC damage parameters of Table A1.

When performing numerical simulations of full-scale tensile specimens similar to Section 4.2, it can be seen that with the damage parameters of Table A1 both the ultimate stress and the rupture strain are overestimated in comparison with the experimental results; the JC damage parameters of Section 3.3 are therefore recommended for use in FE software applications (Figure A4).

Figure A4. True stress-strain plot comparison for 0.001 s−1 and 0.5 s−1 strain rates between the average experimental results and numerical simulations using the JC damage parameters of Table A1.

Appendix B

Regarding the calculation procedure for the C parameter followed in Figure 9 and Equation (5): in order to reduce the influence of the quasi-static data (ln ε̇* = −6.90), 1/5th of the data was removed and the remaining data were plotted again in Figure A5. As can be seen from the comparison of Figures 9 and A5, the amount of data used has a minimal influence on the calculated C parameter, since the trend of the data remains similar.
Appendix C

To ensure that the results and findings of this work are applicable to other FE model simulations, a mesh sensitivity analysis was performed for the full-scale 3D model. According to the Abaqus Analysis User's Manual [27], the stress-strain relationship used to define the material behavior can no longer represent that behavior after the onset of material damage (the damage initiation point in Figure 10). If the finite element model were to continue following the behavior defined by the stress-strain relationship, a strong mesh dependency would occur owing to strain localization. To overcome this issue, Abaqus uses a different approach (a damage evolution law) to model the material softening behavior past the damage initiation point. Specifically, the fracture energy proposal of Hillerborg et al. [31] is adopted, which decouples the mesh dependency from the material behavior once damage is initiated. In their proposal, the fracture energy is defined according to Equation (A1):

Gf = ∫ σy dūpl   (A1)

where ūpl is the equivalent plastic displacement, so that Gf represents the work per unit area of the crack that has formed. Prior to the damage initiation point, ūpl is taken as zero; after that point, its increment is calculated according to Equation (A2):

dūpl = L·dεpl   (A2)

where L is the characteristic element length, which for the 3D elements used in this work is calculated as the ratio of the element volume to the area of its largest face (L = Vol./L.Area). To illustrate the mesh independence obtained with the JC model and damage parameters (Tables 3 and 4), Figure A7 compares the average experimental results for the 0.001 s−1 strain rate from Table 2 with FE simulations using average mesh sizes of 1 × 1 × 1 (L = 1) and 2 × 2 × 2 (L = 2) mm in the gauge-length region of the full-scale FE model. It can be seen from Figure A7 that the FE analysis results closely match regardless of the mesh size; this result was expected and further validates the element damage evolution law followed by Abaqus. In Figure A8, strain maps similar to Figure 20 are shown for the DIC and for the 1 × 1 × 1 and 2 × 2 × 2 mm average mesh sizes, aligned around the necking region for the smooth specimens at the last captured frame before rupture. It can be seen that, regardless of the mesh size, similar results are obtained, although, owing to the smaller number of elements along the gauge length, small strain discrepancies are observed.
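The mesh regularization behind this insensitivity can be illustrated in a few lines: the characteristic length of each brick element converts the shared failure displacement into a mesh-dependent softening strain span (Equation (A2)), so the energy dissipated per unit crack area stays roughly constant when the mesh is refined. The element dimensions below are the two gauge-region sizes used above; the failure displacement and initiation stress are placeholders, not values from this work.

```python
def characteristic_length(dx, dy, dz):
    """L = element volume / area of its largest face (3D brick, Appendix C)."""
    volume = dx * dy * dz
    largest_face = max(dx * dy, dy * dz, dx * dz)
    return volume / largest_face

# Displacement-based softening: one failure displacement u_f is shared by all
# meshes, and Equation (A2) turns it into a mesh-dependent strain span, so the
# dissipated energy per unit crack area is (roughly) mesh independent.
u_f = 0.02          # placeholder equivalent plastic displacement at failure, mm
sigma_y0 = 1000.0   # placeholder stress at damage initiation, MPa

for size in (1.0, 2.0):                     # gauge-region mesh sizes, mm
    L = characteristic_length(size, size, size)
    strain_span = u_f / L                   # softening span in strain terms
    G_f = 0.5 * sigma_y0 * u_f              # area under a linear softening curve
    print(f"mesh {size} mm: L = {L:.2f} mm, "
          f"softening strain span = {strain_span:.3f}, G_f ~ {G_f:.1f} N/mm")
```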
Figure A7. True stress-strain plot comparison for the 0.001 s−1 strain rate for average experimental and numerical simulation results using average mesh sizes of 1 × 1 × 1 and 2 × 2 × 2 mm.

Figure A8. Comparison between DIC and FE simulation strain map results for smooth specimens at the last captured frame before rupture at the 0.001 s−1 strain rate for 1 × 1 × 1 and 2 × 2 × 2 mm average mesh sizes.
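Returning to Appendix B, the re-fitting of the strain-rate sensitivity parameter C after trimming the quasi-static data can be reproduced with a short Python sketch under stated assumptions: the normalized stresses below are hypothetical placeholders (the real values underlie Figure 9), the reference strain rate is taken as 1 s−1 so that the quasi-static point sits at ln ε̇* = −6.90, and the fit form follows the Johnson-Cook rate term σ/σ₀ = 1 + C ln ε̇* referenced in Equation (5).

```python
import numpy as np

# Hypothetical normalized stresses; real values come from the tests behind Figure 9.
rates = np.array([0.001, 0.5, 1.0, 2.0])        # strain rates, 1/s
ratio = np.array([0.955, 0.996, 1.000, 1.004])  # sigma / sigma(reference rate)

ln_rate = np.log(rates / 1.0)  # ln(eps_dot*); quasi-static point at -6.90

def fit_C(x, y):
    """Least-squares slope of (ratio - 1) = C * ln(eps_dot*), the JC rate term."""
    return float(np.linalg.lstsq(x[:, None], y - 1.0, rcond=None)[0][0])

C_full = fit_C(ln_rate, ratio)

# Appendix B variant: drop the portion of the data dominated by the
# quasi-static point (ln(eps_dot*) = -6.90) and refit the remainder.
keep = ln_rate > -6.0
C_trim = fit_C(ln_rate[keep], ratio[keep])

print(f"C (all data)     = {C_full:.4f}")
print(f"C (trimmed data) = {C_trim:.4f}")  # trend should remain similar, per Figure A5
```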
2022-08-05T15:13:40.587Z
2022-08-02T00:00:00.000
{ "year": 2022, "sha1": "9efb1316d9b1b2628e3b0fb7dc3c543b7b89f922", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3417/12/15/7774/pdf?version=1659528697", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "344cc24027c6629c1ddd24699766e89d4a2cc6b4", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [] }
256055713
pes2o/s2orc
v3-fos-license
Zirconium Phosphate Assisted Phosphoric Acid Co-Catalyzed Hydrolysis of Lignocellulose for Enhanced Extraction of Nanocellulose The high mechanical strength, large specific surface area, favorable biocompatibility, and degradability of nanocellulose (CNCs) make it a potential alternative to petroleum-based materials. However, the traditional preparation of CNCs requires a large amount of strong acid, which poses a serious challenge to equipment maintenance, waste liquid recycling, and economics. In this study, a solid and easily recoverable zirconium phosphate (ZrP) was used to assist the phosphoric acid co-catalyzed hydrolysis of lignocellulose for extracting CNCs. Due to the presence of acidic phosphate groups, ZrP has strong active centers with a high catalytic activity. With the assistance of ZrP, the amount of phosphoric acid used in the reaction is significantly reduced, improving the equipment's durability and the economic efficiency. The process conditions investigated for their effect on the yield of CNCs were the phosphoric acid concentration, reaction temperature, and reaction time. The Box-Behnken design (BBD) method from the response surface methodology (RSM) was applied to investigate and optimize the preparation conditions. The optimized pre-treatment conditions were a 49.27% phosphoric acid concentration, a 65.38 °C reaction temperature, and a 5 h reaction time, with a maximal cellulose yield (48.33%). The obtained CNCs show a granular shape with a length of 40~50 nm and a diameter of 20~30 nm, while their high zeta potential (−24.5 mV) gives the CNCs a stable dispersion in aqueous media. Moreover, the CNCs have a high crystallinity of 78.70% within the crystal type of cellulose I. As such, this study may open the way to a green method for the efficient preparation of CNCs, which is of great significance for their practical production. Introduction The abuse of petroleum-based materials has not only resulted in a rapid decline in their reserves but has also led to a number of serious ecological problems [1-3]. Growing societal concern for the earth's ecology, sustainability concepts, and rigorous government regulations have together heightened worries about our dependence on non-renewable petroleum-based materials and stimulated the exploration of novel and environmentally friendly materials and processes [4]. Recently, nanocellulose (CNCs) has become a potential alternative to petroleum-based materials by virtue of its high strength, high specific surface area, excellent biocompatibility, degradability, and renewable properties [5-7]. CNCs can be extracted from various plant resources; commonly used sources include microcrystalline cellulose (MCC), wood pulp, cotton, hemp, bacterial cellulose, and crop waste, and CNCs prepared from different raw materials and by different methods usually differ considerably in morphology, crystallinity, and dimensions [8-10]. Pennisetum Sinese Roxb (PSR) is a high-yielding and high-quality grass suitable for growth and artificial cultivation in tropical, subtropical, and temperate zones [11]. Owing to its capacity for rapid biomass accumulation, PSR is an ideal cellulosic source for producing biomaterials and biofuels, as well as a significant source of high-quality fibers as a substitute for wood and synthetic fibers or fillers. Cellulose accounts for about 25-30% of the chemical composition of PSR. This high-quality and abundant cellulose content makes it ideal for the preparation of CNCs.
The conventional chemical preparation of CNCs is to use a strong acid to hydrolyze the β-1,4 glycosidic bonds between cellulose molecules, which are a kind of acetal bond that is easily hydrolyzed in the presence of strong acids [12,13]. The hydrogen ions ionized from the strong acid are first transferred into the interior of the cellulose to disrupt and degrade the amorphous regions in the cellulose molecule, and then penetrate into the partially defective crystalline regions to degrade them. Finally, the crystalline regions of the cellulose are retained to obtain CNCs [14]. However, this preparation method produces a large amount of waste liquid that is difficult to recycle and seriously corrodes production equipment, and its disposal causes serious pollution to the environment. Compared with inorganic acids, solid acids have the advantages of being reusable, less corrosive to equipment, and less polluting to the environment; replacing inorganic acids with solid acids is therefore a compelling research direction in green chemistry [15-17]. As a solid acid, zirconium phosphate (ZrP) has a strong catalytic activity due to the presence of phosphate groups within its structure, which increases the number of acidic sites of the catalyst accessible to the reactants. The acidic phosphate groups form powerful active centers and release a large number of hydrogen protons, which can greatly reduce the amount of phosphoric acid used in the acid hydrolysis process and reduce the generation of waste solution. In addition, ZrP is characterized by a large thermal stability and mechanical strength and is practically insoluble in water, which means it can be easily recycled and reused [18]. To our knowledge, ZrP-assisted phosphoric acid co-catalyzed hydrolysis for the preparation of CNCs from PSR has not been investigated before. Response surface methodology (RSM) uses multiple quadratic regression equations to analyze the relationship between the factors and the experimental results in order to obtain better process parameters. This method is widely used in manufacturing, agriculture, medicine, and the chemical industry because of its high accuracy and low number of required tests [19]. In this study, ZrP-assisted phosphoric acid co-catalyzed hydrolysis of PSR to prepare CNCs was evaluated in terms of the CNCs yield to estimate the feasibility of the process for CNCs extraction. The optimization of the treatment conditions (phosphoric acid concentration, reaction temperature, and reaction time) was achieved statistically by response surface methodology. Changes in the morphology and characteristics of the cellulose imparted by the treatment were characterized using a fiber analyzer, transmission electron microscopy (TEM), Fourier transform infrared spectroscopy (FT-IR), X-ray diffraction (XRD), zeta potentiometry, and thermal analyses. This study blazes a trail for utilizing the low-cost and readily available PSR for CNCs production through an optimized ZrP-assisted phosphoric acid co-catalyzed hydrolysis process. Materials PSR was obtained from the China National Engineering Research Center of JUNCAO Technology (Fuzhou, China); phosphoric acid (AR) and sodium hydroxide (AR) were purchased from Sinopharm Chemical Reagent Co., Ltd. (Shanghai, China); zirconium phosphate (AR) was purchased from Xiamen Xindakang Inorganic Material Co., Ltd. (Xiamen, China).
Extraction of Cellulose The treatment of raw PSR with sodium hydroxide to prepare cellulose was performed under the following conditions: 30 g of dried PSR was heated at 165 °C in 180 mL of 18 wt.% NaOH for 2 h to remove most of the lignin. After that, the insoluble residue was filtered and washed with deionized water, and the remaining lignin was removed by treatment with a 7% NaOCl solution; the obtained cellulose was then repeatedly filtered and washed with deionized water until the pH of the filtrate was neutral. Finally, the obtained PSR-cellulose was dried at 50 °C for 24 h. Production of CNCs A total of 2 g of cellulose and various amounts of ZrP (0.1, 0.5, 1.0, and 1.5 g) were placed in a 100 mL phosphoric acid solution, and the mixture was then ultrasonicated at a frequency of 40 kHz and an ultrasonic power of 250 W. After that, the ZrP was separated. The CNCs were purified by centrifugation at 10,000 rpm for 10 min, and the upper suspension was collected to obtain the CNCs colloid, which was freeze-dried to obtain CNCs powder. Optimization of Preparation of CNCs Conditions The Box-Behnken design (BBD) of response surface methodology was applied to design a suite of experiments for the optimization of the effective parameters in the preparation of CNCs by ZrP-assisted phosphoric acid co-catalytic hydrolysis of PSR. A quadratic model with 3 factors and 17 experiments was employed, including 5 replications to estimate the error. The design variables of the 3 factors were the phosphoric acid concentration (A), reaction temperature (B), and reaction time (C), while the response variable was the CNCs yield. As illustrated in Table 1, each factor had three levels, low, mid, and high, denominated as −1, 0, and 1, respectively. The yield of CNCs was calculated by Equation (1), where m1 is the dry weight of the total CNCs and the weighing bottle (g), m2 is the weight of the weighing bottle (g), m is the dry weight of the starting material (g), and V is the total volume of the CNCs colloid (mL) collected in Section 2.3. Fiber Morphology Analysis A fiber analyzer (Morfi Compact, Techpap Co., Ltd., Lyon, France) was applied to explore the effect of the dissociation process on the PSR morphology. A total of 0.02 g of dried PSR fiber and dissociated PSR fiber was diluted with deionized water to a 0.02 g/L suspension and characterized with the fiber analyzer at room temperature. Morphological Characterization by TEM Aqueous CNCs and PSR fiber suspensions of 0.1% (w/v) concentration were sonicated for 20 min, and a droplet was placed on a carbon-coated copper grid. After drying, the sample was negatively stained with 2% phosphotungstic acid dye for 60 s and left to dry at room temperature. A transmission electron microscope (TEM) (Hitachi-H7650, Hitachi, Ltd., Tokyo, Japan) operated at 100 kV was then used to examine the morphology of each sample. Fourier Transform Infrared Spectroscopy (FTIR) The CNCs and PSR-cellulose were dried, ground, pelletized using KBr, and scanned with an FTIR spectrophotometer (Thermo Electron Instruments Co., Ltd., Madison, WI, USA). Each spectrum was acquired by averaging 32 scans per sample in the mid-infrared range (500-4000 cm−1) at a 4 cm−1 spectral resolution. X-ray Diffraction (XRD) An X-ray powder diffractometer (X'Pert Pro MPD, Philips-FEI, Amsterdam, The Netherlands) was used to obtain the XRD spectra. The Cu-Kα scattering radiation was detected at a scanning rate of 0.1°/s in the range of 2θ = 6~90° at 50 kV and 300 mA.
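Given a diffractogram acquired as above, the Segal crystallinity index defined in Equation (2) of the next subsection can be computed directly from the scan. The sketch below is a minimal illustration: the two-peak pattern is synthetic placeholder data, and a real scan would be loaded from the diffractometer export.

```python
import numpy as np

# Synthetic diffractogram: a (002) peak near 22.2 deg and a smaller peak,
# on a constant background; stands in for a real 2-theta scan (6-90 deg).
two_theta = np.arange(6.0, 90.0, 0.02)
pattern = (1200.0 * np.exp(-0.5 * ((two_theta - 22.2) / 1.2) ** 2)
           + 260.0 * np.exp(-0.5 * ((two_theta - 15.5) / 1.5) ** 2)
           + 300.0)

def segal_crystallinity(two_theta, intensity):
    """CrI (%) = (I002 - Iam) / I002 * 100, as in Equation (2) of the text:
    I002 is the maximum near 2-theta = 22.2 deg, Iam the minimum near 18 deg."""
    i_002 = intensity[np.abs(two_theta - 22.2) < 1.0].max()
    i_am = intensity[np.abs(two_theta - 18.0) < 0.5].min()
    return (i_002 - i_am) / i_002 * 100.0

print(f"CrI = {segal_crystallinity(two_theta, pattern):.2f} %")
```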
The degree of crystallinity (CrI, %) was calculated by Equation (2) [20]:

$CrI = \frac{I_{002} - I_{am}}{I_{002}} \times 100$  (2)

where I002 is the maximum intensity of the (002) diffraction at a 2θ value of about 22.2°, while Iam is the diffraction intensity at a 2θ value of around 18°. Zeta Potential Analysis A zeta potentiometry instrument (SZP-06, BTG Co., Ltd., Almholt, Switzerland) was used to determine the surface charge of PSR and CNCs. The zeta potential values were used to assess the dispersion stability of the CNCs in aqueous media. A total of 0.02 g of sample was diluted with deionized water to a 1 g/L suspension, which was fully sonicated and characterized at room temperature. Thermal Gravimetric Analysis (TGA) The thermogravimetric stability of the CNCs and PSR-cellulose was analyzed using a thermal analyzer (STA449F3 thermal analyzer, NETZSCH Co., Ltd., Munich, Germany). The TGA analysis was performed in a 150 mL/min N2 flow with a heating and cooling rate of 10 °C/min within 25-600 °C. Chemical Composition of PSR The PSR chemical composition varies according to the growth time, and the nitrate-ethanol method was used to determine the main chemical composition of PSR. The main components of PSR at different production stages are shown in Table 2. As seen in Table 2, the PSR with a growth cycle of 8 weeks had the lowest ash content and a fiber content similar to that at ten weeks, so the PSR with a growth cycle of 8 weeks was selected for extracting CNCs. Response Surface Analysis The levels of each factor for the 17 experiments designed according to the RSM design are shown in Table 3, and the corresponding CNCs yields for each run are presented in parallel. The yields of CNCs prepared from PSR fibers varied between 40.00 and 49.00% under the different conditions. Table 3. Experimental designs and results. The analysis of variance (ANOVA) method was adopted to evaluate the significance level and accuracy of the fitted model (Table 4). If the value of "Prob > F" is less than 0.05 or 0.0001, the effect of the model term is significant or highly significant, respectively. Another condition indicating that the model fit is significant is an F value greater than or equal to six [21]. The F value of the quadratic polynomial model is 67.60, and its "Prob > F" is less than 0.0001, meaning that the model is highly significant. The value of "Prob > F" for the lack-of-fit term is 0.8039, which is much larger than 0.1000, indicating that the lack of fit of the model is not significant and suggesting that the quadratic polynomial model describes the CNCs preparation data well. Moreover, the coefficient of determination R2 and the adjusted R2 of the model reflect the degree of model fit. The R2 and Adj R2 of the quadratic polynomial model are 0.9886 and 0.9740, which indicates that the correlation between the predicted and measured values of the model reaches 98.86% and that the model can explain 97.40% of the variation in the response value, respectively [22]. These results reveal that the experiments designed with the quadratic polynomial model have small errors and that the model can analyze and predict the preparation of CNCs accurately. The relationship between the CNCs yield (%) and the independent variables is given by the regression Equation (3).
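Because the fitted coefficients of Equation (3) are specific to the data in Table 3, the sketch below only illustrates how such a three-factor quadratic BBD model can be fit by least squares: the coded design matches a standard 17-run Box-Behnken layout (12 edge points plus 5 center replicates), but the yield values are hypothetical placeholders.

```python
import numpy as np

# Coded levels (-1, 0, 1) of A (acid conc.), B (temperature), C (time)
# in a standard 3-factor Box-Behnken design; y holds placeholder yields.
X = np.array([[-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
              [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
              [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
              [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]], float)
y = np.array([42.1, 44.0, 41.5, 43.2, 43.0, 44.8, 42.6, 44.1,
              41.9, 42.4, 43.3, 42.8, 48.5, 48.9, 48.2, 48.7, 48.6])

A, B, C = X.T
# Design matrix: intercept, linear, interaction, and quadratic terms
M = np.column_stack([np.ones(len(y)), A, B, C, A*B, A*C, B*C, A**2, B**2, C**2])
beta, *_ = np.linalg.lstsq(M, y, rcond=None)

names = ["b0", "A", "B", "C", "AB", "AC", "BC", "A^2", "B^2", "C^2"]
for n, b in zip(names, beta):
    print(f"{n:>4s}: {b:+.3f}")  # negative quadratic terms indicate an interior maximum
```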
Model Parameters Responses The values of F and Prob > F in the ANOVA (Table 4) indicated that the linear terms (A, B, and C) and the quadratic terms (A2, B2, and C2) have a significant effect on the CNCs yield, whereas the interaction terms (AB, AC, and BC) have less significant effects on the response values. This indicates that the phosphoric acid concentration, reaction temperature, and reaction time all had significant effects on the yield of CNCs, while the effects of the interactions between the factors were less significant. The three factors influenced the response values in the following order: phosphoric acid concentration > reaction time > reaction temperature. Surface Plots, Optimization and Model Verification The constructed 3D response surface plots and the corresponding contour plots are shown in Figure 1. These plots depict the synergistic effects of two factors on the yield of CNCs while the remaining factor is kept constant at its medium level. The effect of a two-factor interaction on the CNCs yield can be judged from the shape of the contour plots, with elliptical contours indicating significant interactions between variables and circular contours indicating insignificant interactions [23]. As shown in Figure 1, the three independent variables, phosphoric acid concentration, reaction temperature, and reaction time, all had significant effects on the yield of the CNCs; however, only the interaction of the reaction temperature and the reaction time was significant (Figure 1f). When the other variables were kept at a medium level, the yield of the CNCs increased gradually with the phosphoric acid concentration up to 50% and then decreased. This can be attributed to the fact that phosphoric acid promotes the release of hydrogen protons from zirconium phosphate, and the hydrogen protons in turn act on the breakage of the β-1,4 glycosidic bonds within the cellulose molecule, decreasing the degree of cellulose polymerization and hydrolyzing the amorphous regions to obtain CNCs. On the other hand, excess phosphoric acid results in the excessive hydrolysis of the fibers to glucose, leading to a decrease in the CNCs yield. Meanwhile, the effects of the reaction temperature and the reaction time on the yield showed trends similar to that of the phosphoric acid concentration: the yield of the CNCs reached its maximum at a reaction temperature of 65 °C and a reaction time of 5 h, respectively, and an excessively high reaction temperature or an overly long reaction time can lead to the excessive hydrolysis of the CNCs, resulting in lower yields. Furthermore, when the reaction temperature and time were both at high levels, the CNCs yields remained low regardless of the phosphoric acid concentration, demonstrating the importance of the reaction temperature × reaction time interaction for maximizing the CNCs yields. The optimum conditions for the CNCs preparation from PSR with the highest yield were identified using Design-Expert.
The predicted optimal reaction conditions were an acid concentration of 49.27%, a temperature of 65.38 °C, and a time of 5 h, with a predicted yield of 48.33%, which was coherent with the experimental results. CNCs were prepared from PSR with a yield of 50.00% using a phosphoric acid concentration of 49% at 65 °C for 5 h. This is in close agreement with the model's results, i.e., within the 95% confidence interval, thus validating the sufficiency and accuracy of the model [24]. Morphology Analysis The fiber analyzer was used to investigate the changes in the morphology and microstructure of PSR-cellulose and CNCs. As depicted in Figure 2a,b, both PSR-cellulose and CNCs can be observed with a relatively large L/W ratio and a similar rod-like shape. Figure 2d,e show the length distributions of PSR-cellulose and CNCs calculated by the fiber analyzer, respectively. The length of PSR-cellulose was mainly distributed between 200 and 3524 µm, while the length of the CNCs was mainly distributed between 200 and 1031 µm; 45% of the PSR fibers are longer than 1000 µm, which decreases to 5% after hydrolysis. On the other hand, the main distribution range of the CNCs width was between 20 and 60 µm, which is more concentrated than for PSR-cellulose (Figure 2f,g). This indicates that phosphoric acid hydrolysis leads to the decomposition of the cellulose structure and the breakage of PSR-cellulose chains, and that the CNCs were obtained with the assistance of ZrP. To further observe the morphology of the CNCs, TEM was conducted (Figure 2c). In the TEM views, the CNCs obtained by ZrP-assisted phosphoric acid co-catalyzed hydrolysis of PSR present a granular form. The average size of the CNCs was measured: the diameter of the CNCs produced from PSR was approximately 20~30 nm, while the average length was 40~50 nm.
The TEM observations indicate that the cellulose obtained from the ZrP-assisted phosphoric acid co-catalyzed hydrolysis of PSR-cellulose is of nanoscale. FTIR Analysis The FTIR spectra of PSR-cellulose and CNCs presented similar features in Figure 3, and the characteristic peaks of the CNCs were almost unshifted after hydrolysis, indicating that the chemical structure of the CNCs was not destroyed or changed and that the basic backbone structure of PSR-cellulose was maintained [25]. At high wavenumber, a broad peak caused by the aliphatic and phenolic O-H stretching vibrations of cellulose is positioned at 3450 cm−1, while another absorption peak formed by the deformation and stretching vibrations of the C-H and OH-groups of the glucose unit is found at 2900 cm−1 [26]. Further, the peaks near 1060 cm−1 and 895 cm−1 both arise from the backbone of the cellulose chain and can be attributed to the C-O stretching of secondary alcohols and ether groups and to β-1,4-glycoside-linked O-H stretching, respectively [27]. The absorbed water of PSR-cellulose and CNCs is observed at 1640 cm−1 [28]. Crystal Structure The XRD spectra of the PSR-cellulose and CNCs are depicted in Figure 4. All samples exhibit four diffraction peaks near 2θ = 15.5°, 17.5°, 23°, and 35°, corresponding to the (1-10), (110), (200), and (400) diffraction planes of the cellulose lattice, respectively, suggesting that the crystalline type of the CNCs is not altered during processing and that the CNCs retain the cellulose I crystal form [29,30]. Crystalline cellulose exists in four isomeric forms, I-IV, of which cellulose I is the most common crystalline cellulose of natural origin [31]. The CrI values for the PSR-cellulose and CNCs were 68.12 and 78.70%, respectively. As the alkali and bleaching treatment steps resulted in the removal of amorphous hemicellulose and lignin, the crystallinity of PSR-cellulose was already high (68.12%). After the ZrP-assisted phosphoric acid co-catalyzed hydrolysis of PSR, the crystallinity of the CNCs rose to 78.70%, indicating that the amorphous region and parts of the defective crystalline regions in PSR-cellulose were removed by the phosphoric acid hydrolysis catalyzed by zirconium phosphate.
During this process, hydrogen ions enter the amorphous region of the cellulose and accelerate the hydrolytic splitting of the glycosidic bonds; the amorphous region of the cellulose and the surface of the crystalline region are partially destroyed, thus making the CNCs crystallinity higher than that of natural cellulose. On the other hand, some cellulose single crystals undergo rearrangement, which further increases the CNCs crystallinity. Zeta Potentiometric Analysis The PSR and the CNCs obtained under different preparation conditions were measured with the zeta potentiostat, and the results are shown in Table 5. The zeta potential represents the effective charge of the charged particles dispersed in a liquid-phase medium; a higher absolute value of the potential means a stronger mutual repulsion between particles and a better stability of the dispersed system. Cellulose fibers are negatively charged in aqueous media owing to the glyoxylate groups, polar hydroxyl groups, etc., in the cellulose structure. Compared to PSR, the absolute zeta potential values of the CNCs were significantly higher, indicating that the CNCs have a good dispersion in aqueous media. This is mainly because the lateral repulsion generated by PO4 3− during the ZrP-assisted catalytic phosphoric acid hydrolysis resulted in a stronger repulsion between the particles and increased zeta potential values. Thermal Analysis The TG and DTG profiles of PSR-cellulose and CNCs are shown in Figure 5. As depicted in Figure 5a, the weight loss below 120 °C for the two samples can be attributed to water evaporation [32]; the main thermal decomposition of the molecular structure occurs from 310 °C to 380 °C, and carbonation occurs when the temperature rises above 400 °C. The calculated onset decomposition temperatures of PSR-cellulose and CNCs are 321.8 °C and 313.5 °C, and the maximum decomposition temperatures are 351.0 °C and 330.4 °C, respectively (Figure 5b). It has been reported that the thermal stability of CNCs is enhanced thanks to the removal of the amorphous region from the cellulose and the rearrangement of the crystal sequence in the crystalline region [33]; however, the thermal analysis results showed that the onset decomposition temperature of the CNCs was lower than that of PSR-cellulose. This may be due to the fact that, although most of the lignin and hemicellulose were removed after the alkali bleaching treatment of PSR-cellulose, there was a strong linkage between some of the lignin and the cellulose, forming a lignin-cellulose complex and thus improving the thermal stability of the PSR-cellulose. These perspectives can be confirmed by the residual mass of the samples at the end of the thermal decomposition (Table 6). Hu et al. [34] concluded that the alkali bleaching treatment removed most of the lignin and reduced the strength of the lignin-cellulose complex, which made the lignin less thermally stable and decreased the residual amount after thermal decomposition. At 500 °C, the cellulose had been completely decomposed; the residual mass of PSR-cellulose was 20%, and the residual mass of the CNCs obtained by ZrP co-catalyzed hydrolysis was 19%. Compared to PSR-cellulose, the CNCs are less thermally stable and have a reduced residual mass due to the absence of the lignin-cellulose complex in nanocellulose. On the other hand, the thermal stability of the CNCs is reduced owing to the introduction of phosphate groups into the crystalline region of the cellulose during the preparation of the CNCs.
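As a small illustration of how the DTG curve and the temperatures quoted above can be extracted from a TG trace, the Python sketch below differentiates a synthetic mass-loss curve; the curve shape is a placeholder, and real data would come from the thermal analyzer export.

```python
import numpy as np

# Synthetic TG trace: ~5% moisture loss below 120 C and a main decomposition
# step centered near 350 C, standing in for an exported instrument data file.
T = np.linspace(25.0, 600.0, 1200)                      # temperature, deg C
mass = (100.0
        - 5.0 / (1.0 + np.exp(-(T - 80.0) / 10.0))      # moisture evaporation
        - 75.0 / (1.0 + np.exp(-(T - 350.0) / 12.0)))   # main decomposition

dtg = np.gradient(mass, T)            # DTG curve: d(mass)/dT, % per deg C
t_max = T[np.argmin(dtg)]             # temperature of the fastest mass loss

residual_500 = np.interp(500.0, T, mass)   # residual mass at 500 C (cf. Table 6)

print(f"Maximum decomposition temperature: {t_max:.1f} C")
print(f"Residual mass at 500 C: {residual_500:.1f} %")
```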
Figure 6 shows the DSC profiles of PSR-cellulose and CNCs. Both samples have two distinct heat-uptake intervals in the DSC curves. The initial heat-uptake process occurred before 120 °C, representing the moisture loss due to evaporation, which corresponds to the TGA analysis. The evaporation process in PSR occurs over a wider temperature interval, mainly because of the presence of the lignin-cellulose complex, which enhances the sorption of water by the PSR and makes it more favorable for water retention, whereas the CNCs exert only a single adsorption force on the water, so the water loss occurs within a narrow temperature interval. The second heat-absorption process is the thermal decomposition of cellulose. During the CNCs preparation process, the hydrolysis of the glycosidic bonds breaks the long molecular chains of cellulose, resulting in a lower onset decomposition temperature for the CNCs than for PSR-cellulose. The enthalpy values resulting from the evaporation of water and the thermal decomposition processes in PSR-cellulose are higher than those of the CNCs (Table 7), which can be attributed to the strengthening of hydrogen bonds by the lignin-cellulose complex present in the PSR-cellulose. Conclusions This study explored the feasibility of ZrP-assisted phosphoric acid co-catalyzed hydrolysis of PSR for the enhanced extraction of CNCs.
The treatment conditions (phosphoric acid concentration, reaction temperature, and reaction time) were optimized using a quadratic model from RSM-BBD. The CNCs yield increased with increasing phosphoric acid concentration, reaction temperature, and reaction time when the three independent variables were at low levels; after each of the three conditions reached an inflection point, the yield decreased as they increased further. The optimum conditions were found to be 49% phosphoric acid, 65 °C, and a 5 h reaction time, resulting in a CNCs yield of 50.00%, closely coherent with the predicted yield of 48.33%. The characterization analyses confirmed that ZrP can effectively assist the phosphoric acid hydrolysis of PSR for the preparation of CNCs with a smaller crystallite size, higher crystallinity, and stable dispersion in an aqueous medium. This work achieves a significant reduction in phosphoric acid consumption, improved process safety, low corrosiveness, high economic efficiency, and a high yield of the obtained CNCs. These findings may contribute to the development of practical processes for the high-volume extraction of CNCs from PSR and potentially other biomasses.
2023-01-22T06:16:10.178Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "28a0876679320c46b6515b5b24e0118c22f8b6c3", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4360/15/2/447/pdf?version=1673689608", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a802817d4d9af06fefff6066ed544c320e3f65fb", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
237299557
pes2o/s2orc
v3-fos-license
Point-of-care ultrasound (POCUS) practices in the helicopter emergency medical services in Europe: results of an online survey Background The extent to which point-of-care ultrasound (POCUS) is used in different European helicopter EMS (HEMS) is unknown. We aimed to study the availability, perception, and future aspects of POCUS in the European HEMS using an online survey. Method A survey about the use of POCUS in HEMS was conducted by a multinational steering expert committee and was carried out from November 30, 2020 to December 30, 2020 via an online web portal. Invitations for participation were sent via email to the medical directors of the European HEMS organizations, including two reminder notes. Results During the study period, 69 participants from 25 countries and 41 different HEMS providers took part in the survey; 96% (n = 66) completed the survey. POCUS was available in 75% (56% always when needed and 19% occasionally) of the responding HEMS organizations, and 17% were planning to establish POCUS in the near future. Responders who provided POCUS used it in approximately 15% of their patients. Participants thought that POCUS is important in both trauma and non-trauma patients (73%, n = 46). The extended focused assessment with sonography for trauma (eFAST) protocol (77%) was the most commonly used protocol. A POCUS credentialing process including documented examinations was requested in less than one third of the HEMS organizations. Conclusions The majority of the HEMS organizations in Europe are able to provide different POCUS protocols in their services. The most used POCUS protocols were eFAST, FATE, and RUSH. Despite the enthusiasm for POCUS, comprehensive training and clear credentialing processes are not available in about two thirds of the European HEMS organizations. Due to several limitations of this survey, further studies are needed to evaluate POCUS in HEMS. Supplementary Information The online version contains supplementary material available at 10.1186/s13049-021-00933-y. ultrasound machine [4]. These advantages make POCUS useful in many acute point-of-care settings, including prehospital resuscitation, emergency departments, intensive care units, and operating theatres [1,5]. POCUS performed in the pre-hospital setting and in mass casualty incidents may affect clinical decisions, notifications, transport modes, and hospital destination [4]. Pre-hospital POCUS was established two decades ago in various pre-hospital emergency medical services (EMS) in Europe, Australia, and North America [6,7]. It was available in 9% of the French EMS units [8] and in 4.1% in the USA and Canada [9]. Furthermore, 21% of EMS services in the USA and Canada considered implementing it [9]. Pre-hospital POCUS is still not widely used, possibly due to limited availability and a lack of strong evidence of its clinical value [10-12]. The extent to which POCUS is used in different European HEMS is unknown, and there exist no data on the POCUS protocols applied, the training and credentialing methods, or the opinions of health care providers in the HEMS on its value. We aimed to study the availability, perception, and future aspects of POCUS in the European HEMS using an online survey that takes less than ten minutes to answer. Methods A multinational steering expert committee of 12 experts from 7 countries developed the questionnaire about the use of POCUS in HEMS.
A prerequisite for the questionnaire was the ability to answer all possible questions within 10 min while covering the availability, perception, and future aspects of POCUS in the European HEMS. After agreement among the experts about the basic areas to be addressed in the questionnaire, the first draft was written by two of the authors (PH-C and FMA-Z); it was then sent to the other international experts for their input and modified accordingly. The first and second drafts of the questionnaire were edited via email, while the third draft was edited online after sharing it. After approval from all experts, the survey was made available online. This implies that we depended mainly on surface validity for validation, while content validity depended mainly on the experts' experience in this area, including one international expert with more than 32 years of experience in POCUS training and research, including educational and qualitative research (FMA-Z). We did not pilot the questionnaire for linguistic clarity because it was reviewed by 12 experts from 7 countries with different languages, which assured that the questionnaire was clear. The ten-minute survey consisted of 24 questions regarding demographics; the availability and the present and future use of POCUS in HEMS; the importance of POCUS in different conditions; the POCUS protocols used; and whether any credentialing processes for POCUS were required for medical providers (Additional file 1: Table S1). The questionnaire was developed to determine the POCUS availability, the protocols used, and the prerequisites for its use by the medical staff. The survey was provided online via the web portal SurveyMonkey®. To ensure that every participant could only answer the survey once, the IP address was recorded, whereas all data were analyzed anonymously. The invitation link and the QR code for the survey were sent via email to the medical directors of 45 European HEMS organizations and Search and Rescue (SAR) bases in 28 countries across Europe with known HEMS use, and second and third reminder notes were sent to non-respondents. The survey was available online from November 30 to December 30, 2020, and it was possible to answer it with any mobile device (smartphone, tablet) or PC. Descriptive analysis was done using the analysis tools provided by SurveyMonkey® and the statistics software GraphPad Prism 9.0 (GraphPad Software, San Diego, CA, USA). Data were presented as median (range) and mean (SD) for ordinal and continuous data, and as number (%) for categorical data. If data were missing, valid percentages were calculated from the available data. The study is in line with the current European general data protection regulation (GDPR). General data During the study period, 69 participants from 25 countries (89% of the 28 invited countries) and 41 different HEMS organizations (85% of the 45 invited HEMS organizations) took part in the survey. The survey was completed by 96% (n = 66 of 69) of the participants. Most of the participants (95.5%, n = 65 of 69) were male, between 41 and 50 years old, and held a leading position within their HEMS organization (71%, n = 49 of 69). Almost all HEMS programs (97.5%, n = 40 of 41) were physician staffed, in which the physician was joined by a paramedic in 65% (n = 26 of 40) or a flight nurse in 20% (n = 8 of 40) (Table 1).
An Infirmier Siamu (Infirmier: French term for a nurse; Siamu: abbreviation of the French term "Soins Intensifs et Aide Medicale Urgente", intensive care and urgent medical aid), a nurse who combines clinical intensive care medicine and preclinical emergency medicine as well as HEMS-TC competency, was part of the medical team in 7.5% (n = 3 of 40), and a paramedic or flight nurse in 2.5% (n = 1 of 40) (missing data in 5%, n = 2). The non-physician-staffed HEMS was a paramedic-only service. POCUS and HEMS organizations Unrestricted availability of POCUS was given in 56% (n = 23 of 41) of the HEMS organizations (standardized equipment at all related HEMS bases); POCUS was occasionally possible in 19.5% (n = 8 of 41) and not possible in 24.5% (n = 10 of 41) (Table 1). The time since POCUS had been established in the different HEMS organizations ranged from less than one year up to 20 years. Of the HEMS organizations not yet providing POCUS, 70% (n = 7 of 10) stated that they plan to integrate it in the future within a median (range) time of 2 (1-4) years. Responders from the HEMS providers in which POCUS was available estimated that POCUS had been used in a median (range) of 15% (0.8-37.5) of treated patients (Table 2). Regarding the credentialing process for using POCUS in the different HEMS organizations providing POCUS, only 35% (n = 11 of 31) have an established credentialing process. Where a credentialing process was established, a POCUS course led by an expert was required in 9 HEMS, and additional didactic teaching averaging 6.5 h and hands-on training averaging 5.5 h were required in four HEMS. In two of these four HEMS organizations, documented POCUS cases were needed before using POCUS in HEMS. In two HEMS organizations, in-house didactic teaching and hands-on training were required. Generally, comprehensive training and credentialing activities are scarce in the European HEMS organizations. Table 3 summarizes the results regarding the importance of POCUS in general, in different areas, and in different patient conditions. Most participants think that POCUS is important in both trauma and non-trauma patients (73%, n = 46 of 63), whereas 19% (n = 12 of 63) think that POCUS is more important in trauma patients and 8% (n = 5 of 63) think that it is more important in non-trauma patients. Standard examination protocols are used by the majority of participants (63%, n = 38 of 60), whereas 32% (n = 19 of 60) do not use such protocols and 5% (n = 3 of 60) were not sure. The (e)FAST protocol is the most used protocol (77%). The findings of POCUS were recorded in a reliable way (video clip or electronic database) by less than 30% and were mainly put down in writing in the mission protocol (Table 4). POCUS devices The most commonly used portable ultrasonography devices were the GE Healthcare V-scan in 40% (n = 21), the FUJIFILM Sonosite iViz in 36% (n = 19), and the Philips Healthcare Lumify and Butterfly Network iQ in 6% (n = 3) each. Some HEMS organizations use POCUS devices from more than one manufacturer. Most of the participants (71%, n = 39) were pleased with the devices used. Discussion Our study indicates that more than two-thirds of the European HEMS organizations provide POCUS in their helicopters and that a considerable number are planning to establish it soon. HEMS providers appreciate the increased need for POCUS integration in pre-hospital care. To our knowledge, this is the first survey regarding the pre-hospital use of POCUS in HEMS organizations across Europe.
Data suggest that POCUS is feasible and useful in HEMS. Nevertheless, the evidence that it improves direct patient outcomes is weak, which calls for properly designed prospective studies [10,11,13-18]. There are different POCUS protocols that can be used in the pre-hospital setting; these include the extended FAST ((e)FAST) to search for intraperitoneal fluid, pericardial fluid, haemothorax, and pneumothorax [19,20], Rapid Ultrasound for Shock (RUSH) to define the cause of shock, and Focused Assessment Transthoracic Echocardiography (FATE) or Focused Echocardiography in Emergency Life support (FEEL) to quickly evaluate cardiac function [21-25]. Our results show that (e)FAST is the most used protocol in HEMS. Independent of the protocol used, whether (e)FAST, RUSH, FATE, FEEL, or another, we think that it is important to carry out POCUS in patients in critical condition or shock to find or exclude free fluid in the abdomen, the thorax, or the pericardium, to detect or exclude pneumothorax, to find causes of shock, and to exclude or confirm reversible causes of cardiac arrest. In this context, POCUS is a physiological study, an on-the-spot clinical decision tool, an extension of the clinical examination, and a unique, expanding, safe, and repeatable tool [1,2]. With advancements in technology and training, the use of POCUS has extended to further indications such as the diagnosis of eye injuries and bone fractures [26,27]. POCUS training should be tailored towards the specific needs of the HEMS staff. The operators should be familiar with their own ultrasound machines and should be particularly knowledgeable about the sonographic artefacts that can mislead them [1,28]. On the other hand, operators who are familiar with their ultrasound machines are able to make use of the record function of modern machines to record images or loops of the findings. As shown in Table 4, only a minority of the participants in this survey made use of the record function of their ultrasound machines: more than one quarter do not record the findings at all, and more than 50% outline the findings in the mission protocol. Only 12% of the participants do both, recording as video and in the mission protocol (data not shown in Table 4). There is much potential for further improvement regarding this issue, which is very important for medicolegal issues, for credentialing, for closing the learning loop by reviewing the video clips, and for using the clips in training and research so as to refine and advance the use of POCUS. The participants thought that POCUS examinations of the chest, abdomen, and heart are very important and that POCUS for vascular access is important, while POCUS for airway management and regional anesthesia is less important (see Table 3). It is of interest to note that the POCUS skills needed for airway management and interventions are more advanced. Currently, less than one-third of the participating HEMS organizations seem to have a credentialing process for using POCUS; the other two-thirds assumed that the HEMS crews can perform POCUS. Training must be standardized to maximize the benefit of POCUS, and European HEMS organizations should agree on a common POCUS curriculum with an accepted standard that suits their needs. Competency is a key factor in successful clinical applications [1,29]. Using a Delphi methodology, Micheller et al. defined a total of five modalities (cardiac, thoracic, FAST, aorta, and procedural), with 32 measured competencies and 72 subcompetencies [30].
Consecutive quality assurance and governance are probably more challenging, as POCUS findings are interpreted in a dynamic clinical context. The availability and operator acceptance of the POCUS equipment seem to be less of a challenge, at least in Europe. Besides the more frequent use of POCUS compared with North America, the survey underlines that HEMS in Europe is mainly physician staffed, which can explain the frequent use of POCUS [9,29]. Some participants stated that POCUS is used in more than 30% of their patients, indicating proper training in a wide range of applications. Limitations The presented study has some limitations which we would like to highlight. First, it was a voluntary online survey, which carries the risk of selection bias towards participants who encourage the use of POCUS; this may overestimate the value of POCUS. Second, the respondents were heterogeneous, from different levels, with unequal numbers from different organizations. The majority were leaders in their HEMS organizations, with the risk of reporting results that they prefer and that may differ from the views of those who use POCUS. We decided to analyse as many answers as possible because some HEMS providers do not follow uniform POCUS approaches; not all helicopters are equally equipped (e.g., in the general availability of an ultrasound machine or the type of ultrasound machine), even if they are operated by the same HEMS provider. Furthermore, some points of the questionnaire concerned the personal opinions of the participants, which are not identical. Third, we did not receive a response from all invited HEMS organizations, and we are unable to make sure that all HEMS in Europe have been reached due to constant changes in the European HEMS scenery; this carries the risk of selection bias. The survey was open for a limited period of 30 days, possibly explaining the small sample size. Fourth, female responders were few, with the majority being male. Fifth, no information regarding the time required to carry out POCUS, or whether there were any time-limiting rules when carrying out POCUS, was included in the survey. Sixth, we have to acknowledge that the current study is not a hypothesis-testing study trying to answer a specific research question but rather aimed at collecting general data on the current status of POCUS use in Europe, which will help us to define more hypothesis-generating questions in the future. Accordingly, specific details on each application (like the use of local anesthesia) are missing. Finally, some of the participating countries and HEMS organizations were overrepresented. This was taken into consideration when reporting the availability of POCUS in the organizations but could have skewed the opinion data.
2021-08-26T13:46:11.235Z
2021-08-26T00:00:00.000
{ "year": 2021, "sha1": "fb853a01257e048f56402b66dc800ebf73c4be42", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "fb853a01257e048f56402b66dc800ebf73c4be42", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
259252329
pes2o/s2orc
v3-fos-license
How to Efficiently Adapt Large Segmentation Model (SAM) to Medical Images The emerging large-scale segmentation model, Segment Anything (SAM), exhibits impressive capabilities in zero-shot segmentation for natural images. However, when applied to medical images, SAM suffers from a noticeable performance drop. To make SAM a real "foundation model" for the computer vision community, it is critical to find an efficient way to customize SAM for medical image datasets. In this work, we propose to freeze the SAM encoder and finetune a lightweight task-specific prediction head, as most of the weights in SAM are contributed by the encoder. In addition, SAM is a promptable model, while prompts are not necessarily available in all application cases, and precise prompts for multi-class segmentation are also time-consuming to provide. Therefore, we explore three types of prompt-free prediction heads in this work, including ViT, CNN, and linear layers. For the ViT head, we remove the prompt tokens in the mask decoder of SAM, which we name AutoSAM. After this modification, AutoSAM can also generate masks for different classes with one single inference. To evaluate the label-efficiency of our finetuning method, we compare the results of these three prediction heads on a public medical image segmentation dataset with limited labeled data. Experiments demonstrate that finetuning SAM significantly improves its performance on medical image datasets, even with just one labeled volume. Moreover, the AutoSAM and CNN prediction heads also achieve better segmentation accuracy than training from scratch and self-supervised learning approaches when there is a shortage of annotations. Introduction The success of the Generative Pre-trained Transformer (GPT) [3,20,24] series of models demonstrates that, if trained on large-scale data, the performance of large language models on zero-shot and few-shot tasks in unseen domains is comparable with the state of the art. Inspired by GPT, Segment Anything (SAM) [16] introduces a "foundation model" for the image segmentation task. They collect 11 million images and design a semi-automatic data engine to yield on average ∼100 masks per image, thus 1 billion masks in total. Figure 1. T-SNE plot of embeddings encoded by SAM's image encoder from four datasets: Synapse [1], ACDC [2], ADE20K [30], and COCO [17]. As shown, there is an apparent domain shift from natural images to medical images in the latent space, which may explain why SAM fails to perform well on unseen medical image datasets. SAM then trains a large promptable model with a Vision Transformer [8] (ViT) backbone on this SAM-1B dataset. After being evaluated on various zero-shot tasks over 23 datasets, SAM demonstrates promising generalization to most natural images. However, as SAM draws attention in the medical image domain, it has been observed that SAM does not generalize well to medical images in zero-shot settings [12,18]. The challenges of transferring a model trained with natural images to medical images can be attributed to two main factors: 1) Large difference in appearance: natural images and medical images exhibit significant differences in terms of color, brightness, and contrast, and medical images often have distinct characteristics due to the imaging modalities used, such as CT scans, MRI, or ultrasound; 2) Blurred boundaries of target objects: medical images frequently present blurred boundaries between different tissues and organs.
Trained medical experts possess the necessary understanding of anatomical structures and can identify subtle boundaries that may not be apparent to models trained solely on natural images. Considering the difficulty of collecting a medical segmentation dataset of a size comparable to SAM-1B, it is critical to explore whether the knowledge in the pre-trained SAM can be exploited for medical image segmentation. Furthermore, prompt-based segmentation might not be well suited to real-world application scenarios for the following reasons: 1) Providing prompts for multiple classes is time-consuming. Most public medical image segmentation challenges require segmenting multiple classes simultaneously, and inputting accurate prompts for each class can become cumbersome, especially when organs and tissues are small and adjacent to each other; 2) The segmentation performance is heavily dependent on the prompt quality. Crafting precise prompts requires expert domain-specific knowledge, which is not available in all circumstances. With these limitations in mind, this paper proposes a straightforward way to finetune SAM on medical image datasets: freezing the weights of the SAM encoder and adding a prediction head on top of it for training. The reason for freezing the weights is that SAM is a large model and most of its weights are contributed by the encoder. Finetuning both the encoder and the decoder is not only less accessible to all developers due to the high hardware requirements, but it also results in worse segmentation performance according to our experimental results. On the other hand, to improve SAM's feasibility for clinical applications, we replace the mask decoder in SAM with a prediction head that requires no prompts for either training or inference. Three different types of prediction heads are evaluated in this paper: the Vision Transformer (ViT), the Convolutional Neural Network (CNN), and the linear layer. The ViT prediction head is adapted from the SAM mask decoder and is named AutoSAM; it is composed of lightweight cross-attention modules and transposed convolutional layers. We remove the prompt tokens and replicate the image embedding as well as the other auxiliary embeddings so that the decoder can generate masks for different classes at the same time. In order to showcase the label-efficiency of our method, we conduct experiments in a few-shot learning setting, where the model is finetuned using only 1 or 5 labeled MRI scans. The results obtained on a publicly available medical image segmentation dataset highlight the significant improvement achieved by customizing the pre-trained SAM compared with zero-shot prompt-driven SAM. Moreover, our approach outperforms both training from scratch and state-of-the-art self-supervised learning methods by a substantial margin, highlighting the potential of SAM's application to medical domains. Related Works Large Vision Models After the emergence of large language models (LLMs), some works have been devoted to introducing images into LLMs to accomplish multi-modality tasks. For example, CLIP [21] and ALIGN [14] utilize contrastive learning to align web images and their captions in the embedding space. They find that this simple pre-training task generalizes well to other zero-shot downstream tasks, like object classification and action recognition in videos. Also, DALL-E [22] achieves great generalization with a large-scale autoregressive transformer for zero-shot text-to-image generation.
However, these large-scale vision models fail to cover the full range of computer vision tasks, such as image segmentation. The difficulty of obtaining labeled masks is the key obstacle to building a large image segmentation model. SAM (Segment Anything) [16] is the first work to develop a promptable segmentation model and pre-train it on a broad dataset collected by the authors themselves. Given suitable prompts, SAM is capable of generating promising masks for target objects without task-specific training. On the other hand, DINOv2 [19] scales the pre-training of a ViT model in terms of data and model size in order to produce all-purpose visual features, with which finetuning on downstream tasks becomes much easier. Customizing Large Models for Medical Images This family of works mainly focuses on finetuning SAM for specific segmentation datasets, as SAM shows significant performance degradation on medical images. MedSAM [18] simply finetunes the SAM decoder with prompts generated from label masks on over 30 medical image datasets, and the results show improvement over zero-shot predictions generated with prompts. Kaidong Zhang et al. [28] apply a low-rank-based finetuning strategy to the SAM encoder and train it together with the SAM decoder to customize SAM to abdominal segmentation tasks. Junde Wu et al. [25] freeze the weights of the SAM model and add trainable adaptation modules in SAM to reduce the cost of re-training. Background Firstly, we give a brief introduction to the SAM model as background knowledge. There are three major components in SAM: the image encoder, the prompt encoder, and the mask decoder. The image encoder has the same architecture as the Vision Transformer (ViT) [8] and is pre-trained with MAE [10] on the authors' own SAM-1B dataset. They provide weights for image encoders at three different scales, ViT-h, ViT-l, and ViT-b, as options for trading off between real-time performance and accuracy. The image encoder takes input images of any size and reshapes them to 1024*1024. The images are then converted to sequential patch embeddings with patch size 16*16 and embedding size 256. After several transformer blocks with window attention and residual propagation, the output of the image encoder has dimensions of (64x64, 256). Figure 2. Comparison of the SAM inference process and our SAM finetuning process. We freeze the weights in the SAM encoder and add various prediction heads to generate segmentation masks without prompts, including Vision Transformer (ViT), CNN, and linear layers. Also, our model can generate masks for different target objects. The prompt encoder supports both sparse prompts (points, boxes, text) and dense prompts (masks). Sparse prompts are projected into prompt tokens and concatenated with the image embedding, while dense prompts are embedded using convolutions and summed element-wise with the image embedding. The mask decoder first applies a two-way attention module to the output tokens, prompt tokens, and image embedding. The image embedding is then upsampled by two transposed convolutional layers, and the prediction is made by a point-wise product between the upscaled image embedding and the output token. More details of the mask decoder are discussed in the following section. Prediction Head To adapt SAM to a certain medical image dataset in an efficient way, we keep the weights in the SAM encoder frozen and append an additional task-specific prediction head for finetuning. Also, we design the prediction head to be non-promptable, so its only input is the image embedding from the SAM encoder.
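As a concrete illustration of the frozen-encoder setup described above, the following is a minimal sketch. It assumes the publicly released segment_anything package; the checkpoint filename, registry key, and dummy input are illustrative, and exact attribute names may differ between package versions.

```python
# Minimal sketch: freeze the SAM image encoder and extract embeddings for a
# downstream prediction head. Assumes the public segment_anything package;
# the checkpoint filename and registry key are illustrative.
import torch
from segment_anything import sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
encoder = sam.image_encoder

# Freeze all encoder weights; only the task-specific head will be trained.
for p in encoder.parameters():
    p.requires_grad = False
encoder.eval()

# The encoder expects 1024x1024 RGB input and yields a (B, 256, 64, 64) embedding.
with torch.no_grad():
    dummy = torch.randn(1, 3, 1024, 1024)   # stand-in for a preprocessed slice
    image_embedding = encoder(dummy)        # shape: (1, 256, 64, 64)
print(image_embedding.shape)
```

Because the encoder is frozen, its embeddings can even be precomputed once per slice and cached, so that only the lightweight head is evaluated during training.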
We explore the three most common architecture types: ViT, CNN, and linear layers. Vision Transformer We notice that the original mask decoder in SAM has a ViT backbone, so we can make a light modification to it such that the prediction head is not only non-promptable but is also able to utilize the weights in the SAM mask decoder. As illustrated in Fig. 2, in the SAM decoder, aside from the prompt tokens and image embedding, there are also trainable output tokens, including mask tokens for generating masks and an IoU token for predicting the confidence of the mask. Furthermore, the mask tokens comprise a foreground mask token and a background mask token. The output tokens are concatenated with the prompt tokens, which we name auxiliary embeddings. In the two-way attention module, each layer performs both self-attention and cross-attention. The cross-attention includes attention from the tokens (as queries) to the image embedding and from the image embedding to the tokens (as keys and values). After that, the image embedding is upscaled by two transposed conv layers, and the foreground mask token is selected to perform a point-wise product with the upscaled embedding to get the mask. In comparison, AutoSAM deletes the prompt tokens in the auxiliary embeddings so that it is no longer a promptable model. The other modification is the duplication of the auxiliary embeddings and image embedding by the number of classes to generate masks for multiple classes. The computation for each pair can be conducted in parallel, so the overhead associated with generating the extra masks is negligible. An alternative way to generate multiple masks in one inference is to simply add more foreground mask tokens to the output tokens. However, we choose the first strategy because, intuitively, one group of auxiliary embeddings represents one object to be segmented in SAM. AutoSAM thus generates masks for each class independently. Convolutional Neural Network This type of prediction head is representative of the decoders in many popular segmentation models for medical images, such as UNet [23], UNet++ [31], TransUNet [5], and Swin-UNetr [9]. We first reshape the image embedding to a feature map of size (256, 64, 64). Following the structure of UNet, the CNN head has k stages (k >= 2), and each stage consists of a conv layer with stride 1 and a transposed conv layer with stride 2 for upscaling. Different values of k are tried in the experiments, and when k > 2, the transposed conv layer is replaced with a conv layer in k − 2 stage(s), so that the output feature maps are always upscaled by 4x. Finally, a point-wise conv layer with kernel size 1 is applied to produce prediction masks for each class. Linear Layer A simple classification head is commonly used to evaluate the generalization of feature representations learned in a pre-training task [7,10,11,19]. In this work, we also apply a linear head to test whether high-level semantic information is extracted by the SAM encoder. As with the CNN head, we reshape the image embedding into a 2D feature map and then directly apply two transposed conv layers. After that, we use two conv layers with kernel size 1 in place of an MLP to get the classification for each pixel. For preprocessing, we normalize each volume so that all pixels in a volume have zero mean and unit variance. We then convert the pixel values to RGB format and store each slice within the volume as a PNG file, since SAM is trained on RGB images and we aim to keep the input format consistent.
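Before moving to the experiments, a minimal sketch of the CNN prediction head described above is given below. The channel widths, activation choices, and which of the k stages perform the upscaling are assumptions not specified in the text; only the overall structure (k stages, exactly two stride-2 transposed convs for a 4x upscale, and a final 1x1 classifier) follows the description.

```python
# A minimal sketch of the CNN-style prediction head described above: k stages of
# 3x3 conv + upsampling, with exactly two stride-2 transposed convs so the 64x64
# SAM embedding is upscaled 4x, then a 1x1 conv producing one logit map per class.
import torch
import torch.nn as nn

class CNNHead(nn.Module):
    def __init__(self, num_classes: int, k: int = 4, in_ch: int = 256):
        super().__init__()
        assert k >= 2
        layers = []
        ch = in_ch
        for stage in range(k):
            layers.append(nn.Conv2d(ch, ch // 2, kernel_size=3, padding=1))
            layers.append(nn.ReLU(inplace=True))
            if stage < 2:
                # the first two stages upscale (stride-2 transposed conv); which
                # stages upscale is an assumption
                layers.append(nn.ConvTranspose2d(ch // 2, ch // 2, kernel_size=2, stride=2))
            else:
                # the remaining k-2 stages keep the resolution (plain conv)
                layers.append(nn.Conv2d(ch // 2, ch // 2, kernel_size=3, padding=1))
            layers.append(nn.ReLU(inplace=True))
            ch = ch // 2
        self.body = nn.Sequential(*layers)
        self.classifier = nn.Conv2d(ch, num_classes, kernel_size=1)

    def forward(self, image_embedding: torch.Tensor) -> torch.Tensor:
        # image_embedding: (B, 256, 64, 64) from the frozen SAM encoder
        return self.classifier(self.body(image_embedding))

head = CNNHead(num_classes=4, k=4)
logits = head(torch.randn(2, 256, 64, 64))   # -> (2, 4, 256, 256)
```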
Note that although the MRI scans are given as 3D volumes, the segmentation is conducted on 2D images. We compute the dice score as well as the average symmetric surface distance (ASSD) for each volume in the test set, then regenerate the splits and repeat the experiments. The average score and the standard deviation over the four runs are reported. Training Recipe The training is implemented with the deep learning package PyTorch. The GPU device used is an NVIDIA Tesla V100 with 16 GB memory, which is more accessible than an A100; in comparison, SAM distributes training across 256 A100 GPUs. During training, we randomly apply data augmentations to the input images, including Gaussian noise, brightness modification, elastic deformation, and rotation. Table 1. Comparison of different methods trained with different numbers of labeled volumes on the ACDC dataset. The three classes to be segmented are "RV" (right ventricle), "Myo" (myocardium), and "LV" (left ventricle). "unsup" means no fine-tuning stage, with the mask generated based on given box prompts. The training loss is a combination of Cross-Entropy loss and dice loss. The optimizer used for the updates is Adam [15]. The learning rate is set to 0.0005 with (β1, β2) = (0.5, 0.999). The maximum batch size on a single GPU is 4 for all three prediction heads. The default number of training epochs is 120, because we observe convergence of the losses on the validation set after that number of epochs. Baselines To validate the effectiveness of our proposed method, we conduct experiments with several baseline approaches under the same setting for comparison. The first is training a UNet from scratch, the most common way to obtain an automatic segmentation model for a specific dataset. Secondly, we also try a self-supervised learning method, SimCLR [7], which is widely used for label-efficient segmentation in the medical image domain [4,13,27]. This SimCLR baseline consists of two stages, pre-training and finetuning. In the pre-training stage, we use all data in the training set without any annotation information. We obtain two random views from the input images and project them into a feature space with the encoder of a UNet. A contrastive loss is then applied to maximize the agreement between the embeddings of the two views. During finetuning, the encoder of the UNet is initialized with the pretrained weights and all parameters in the model are trained on labeled data. Lastly, we try the original SAM without any finetuning to demonstrate the necessity of customizing SAM to a specific dataset. Regarding the prompt, we use box-style prompts, and the box coordinates are calculated based on the ground truth masks. Label-efficient Adaptation When finetuning a model on a new dataset, to reduce the cost of annotating, it is desirable that the finetuning achieves promising results with only a limited number of annotated images. Therefore, in Table 1, we provide only 1 or 5 labeled volumes to evaluate the data-efficiency of our methods. Here are the key observations drawn from Table 1. i) Firstly, the AutoSAM and CNN heads show the best segmentation accuracy compared with all other methods in both settings. Especially when provided with only 1 labeled volume, the average dice score of AutoSAM is 39.32, almost twice that of UNet and SimCLR. This provides compelling evidence that the features learned by the SAM encoder are general enough to be transferred to medical images.
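For concreteness, the training configuration in the recipe above can be sketched as follows. The soft-Dice formulation, the equal weighting of the two loss terms, and the 1x1-conv stand-in for the prediction head are assumptions; the learning rate, Adam betas, and batch size are the values stated above.

```python
# A minimal sketch of the training configuration described above: Cross-Entropy +
# soft-Dice loss and Adam with the stated hyperparameters. The Dice formulation,
# the equal loss weighting, and the 1x1-conv stand-in head are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def soft_dice_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # logits: (B, C, H, W); target: (B, H, W) integer class labels
    num_classes = logits.shape[1]
    probs = torch.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    dims = (0, 2, 3)
    intersection = (probs * one_hot).sum(dims)
    cardinality = probs.sum(dims) + one_hot.sum(dims)
    return 1.0 - ((2.0 * intersection + eps) / (cardinality + eps)).mean()

head = nn.Conv2d(256, 4, kernel_size=1)          # stand-in for the prediction head
ce_loss = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(head.parameters(), lr=5e-4, betas=(0.5, 0.999))

def train_step(image_embedding: torch.Tensor, target: torch.Tensor) -> float:
    optimizer.zero_grad()
    logits = head(image_embedding)                # (B, num_classes, 64, 64)
    # match the label resolution before computing the loss (assumption)
    logits = F.interpolate(logits, size=target.shape[-2:], mode="bilinear", align_corners=False)
    loss = ce_loss(logits, target) + soft_dice_loss(logits, target)
    loss.backward()
    optimizer.step()
    return loss.item()

loss_value = train_step(torch.randn(4, 256, 64, 64), torch.randint(0, 4, (4, 256, 256)))
```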
In terms of statistical significance, it is hard to tell whether AutoSAM or the CNN head has a higher dice score, which also implies that the strength of SAM is mainly a consequence of the representative features extracted by the image encoder rather than the mask decoder. Additionally, we observe that AutoSAM has a lower ASSD compared to the CNN head. This difference can potentially be credited to the training of the SAM decoder, which is designed to generate the mask of an object concentrated near the prompt's location. In comparison, the CNN head has no information loaded from the SAM decoder, leading to higher ASSD values. ii) Secondly, SAM shows worse segmentation performance compared with AutoSAM and the CNN head even when they are trained with only 1 volume, which strongly supports that finetuning SAM is an efficient way to address its performance drop on a medical image dataset. However, it is also noticeable that SAM has a much lower ASSD than the other methods. This observation can be attributed to the fact that SAM benefits from the localization information embedded in the provided box prompt. This localization information forces the predicted mask to lie around the box area. On the other hand, the dice score of LV is always 0 for SAM. From Fig. 4, we can see that Myo is a thin circle surrounded by the other two classes and its boundary is also blurred. Since the box of Myo is close to the box of RV, Myo is actually mistaken as part of RV, and consequently the whole LV area is predicted as Myo. iii) As shown in Table 1, the linear prediction head has substantially worse performance than the other two prediction heads. In particular, when the number of labeled volumes is increased from 1 to 5, the linear head fails to gain much improvement in segmentation accuracy. We believe this outcome is due to its extremely lightweight architecture. When the visual features produced by the SAM encoder do not carry rich semantic information for medical images, such a simple prediction head results in weak model capability and may suffer from underfitting. Ablation Study The first ablation study we conduct concerns how the depth of the CNN prediction head influences the finetuning results. In Table 2, the dice score increases as the depth increases up to depth = 4. As discussed above, a linear prediction head might suffer from underfitting; when the depth is less than 4, a larger prediction head leads to better model capability. Nevertheless, when the depth exceeds 4, the benefits gained from increasing the number of parameters in the prediction head begin to diminish. At this point, the quality of the image embedding or the prediction head architecture becomes the more crucial factor in determining the performance. We also evaluate the performance of AutoSAM and Encoder + CNN with the different encoder sizes provided by SAM, namely vit-b, vit-l, and vit-h. Table 3 shows that a larger model size generally leads to better finetuning results on the downstream task, but AutoSAM is less sensitive to the encoder architecture than Encoder + CNN. When using the vit-h backbone, the CNN head has a significantly higher dice score than AutoSAM, though it still has a higher ASSD. Table 3 can also serve as a reference regarding the trade-off between efficiency and performance, as vit-h results in a longer finetuning time and higher inference latency compared with vit-b. Lastly, we plot the results of using more labeled data for finetuning in Fig. 5.
We find that AutoSAM only has advantages over UNet (with no additional information) and SimCLR (pretrained knowledge on the same dataset) when the number of labeled volumes is less than 10. This is because SAM is pretrained on a large-scale image dataset and the image encoder is capable of extracting semantic information, which can benefit downstream segmentation tasks. However, since SAM has never been exposed to medical images, this semantic information can be biased and specific to natural images. It seems that, with enough annotated data, the knowledge obtained from natural images has a negative impact when adapting the prediction head specifically to the medical image domain. Therefore, to establish a real "foundation model" for all image modalities, a large-scale medical image dataset will be needed for pretraining SAM in the future. Conclusion Despite the success of SAM on natural images, how to efficiently adapt SAM to out-of-distribution medical image datasets remains an open question. Different from existing works, this paper provides a new perspective on this problem: freezing the weights of the SAM image encoder and appending a lightweight task-specific prediction head. To promote widespread application, we modify SAM to be non-promptable and able to generate multi-class masks. We explore three types of prediction heads, ViT (called AutoSAM), CNN, and linear layers, of which AutoSAM and the CNN head show promising results in a few-shot learning setting. The fact that a model finetuned with only one labeled volume performs better than box-prompted SAM demonstrates the necessity of customizing SAM for a new dataset. With a limited number of annotated volumes, our methods are superior to the training-from-scratch and self-supervised learning baselines. Future works Please note that this project is still ongoing, and there are several future directions we plan to explore. Firstly, we intend to evaluate our method on more medical image datasets to verify the generalization of our findings. We will try medical image datasets with different modalities and different target objects. Furthermore, we recognize that there is still room for improvement when finetuning SAM using a limited number of volumes, such as one or five, compared to utilizing all available labeled data for training. We will try prediction heads with more complex architectures, such as DeepLabV3 [6], PSANet [29], and UPerNET [26]. Lastly, we aim to include more baselines in our evaluations, which will involve comparing with other segmentation models and different self-supervised learning approaches.
2023-06-27T01:01:16.538Z
2023-06-23T00:00:00.000
{ "year": 2023, "sha1": "fc754cb6bc93045e9052e1476b76e149c6b1cac2", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "fc754cb6bc93045e9052e1476b76e149c6b1cac2", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
212835051
pes2o/s2orc
v3-fos-license
ANTI-CANCER ENZYME (L-ASPARAGINASE) PRODUCTION, PURIFICATION AND CHARACTERIZATION FROM A SOIL ISOLATE OF PSEUDOMONAS SP 1. Department of Biotechnology, R.V. College of Engineering, Bengaluru, India. 2. Azyme Biosciences Private Limited, 1188/20, 3rd Floor, 26th Main, 9th Block, Jaya Nagar, Bengaluru 560069. Manuscript History: Received: 10 October 2019; Final Accepted: 12 November 2019; Published: December 2019. Abstract: ...of the enzyme. L-asparaginases from bacterial sources, although known for their therapeutic anti-tumor importance, have been reported to lead to unfavorable allergic reactions and hypersensitivity in the long run. However, sequential therapy with serologically unrelated asparaginases can be performed to avoid the immunological complications described above [3]. This has prompted the search for a diversity of asparaginases from different sources, including fungi, basidiomycetes, and even small mammals such as hedgehogs. Functional enzyme preparation for industrial/pharmaceutical applications is constrained by the fact that the microbial sources used carry certain interfering proteins [2]. Balasubramanian et al. performed an HPLC assay of enzyme activity for L-asparaginase and determined its molecular weight as a characterization study of the purified enzyme; SDS-PAGE results revealed a molecular weight of about 94 kDa. Screening the culture for L-asparaginase: The colonies formed in the pour plate method were isolated and cultured on M9 medium agar plates using the streak plate method. Nitrate reduction: Negative. Analytical Studies: Enzymatic assay of L-asparaginase (nesslerization method): The enzyme assay was performed to determine the organism with the maximum enzyme production according to Wriston and Yellin (1973). The M9 medium (KH2PO4 3 g/l, Na2HPO4 6 g/l, NaCl 5 g/l, NH4Cl 2 g/l, MgSO4 0.1 g/l, asparagine 8 g/l, phenol red 0.005%, agar 20 g/l, pH 7.3, distilled water 1 l) was prepared, and each organism was inoculated separately and incubated for 24 hours. A UV-Visible spectrophotometer was used to perform the enzyme assay at 37°C. The principle was based on determining the enzyme activity from the ammonia liberated by L-asparaginase catalysis, which was detected with Nessler's reagent and to which the measured signal is directly proportional. The reaction mixture typically comprised 0.05 M Tris-HCl buffer and 0.01 M L-asparagine. It was incubated at 37°C and a pH of 8.6 for a duration of 10 minutes. Then, 0.5 ml of 15% trichloroacetic acid solution was added to stop the reaction. Finally, an ammonium sulphate reference was used as a standard for the quantitative determination of the released ammonia. Optimization: Optimizing the incubation time, pH, temperature, nitrogen and carbon sources, and their concentrations for enzyme production. The organism was inoculated into M9 broth (25 ml) and incubated for 24 h and 48 h, at pH 5, 6, 7, 8, 9 and 10, and at various temperatures of 27°C, 30°C, 35°C, 40°C and 45°C. The enzyme assay was performed using the nesslerization method. The conditions giving the maximum enzyme activity were used for further optimization.
This was followed by adding a 1% concentration of different nitrogen sources (ammonium nitrate, peptone, tryptone, ammonium sulfate, gelatin, sodium nitrate) to each of 6 test samples, and the M9 broth (25 ml) was inoculated and incubated. The enzyme assay was performed by the nesslerization method to find the most suitable nitrogen source. This was followed by similar steps with concentrations ranging between 0.5% and 3% of the nitrogen source producing the highest enzyme activity. Additionally, a 1% concentration of different carbon sources (glucose, sucrose, mannitol, lactose, starch, cellulose) was added to each of 6 test samples, and the M9 broth (25 ml) was inoculated and incubated. The enzyme assay was performed by the nesslerization method to find the most suitable carbon source for maximum enzyme production. This was followed by similar steps with concentrations ranging from 0.5% to 3% of the carbon source with the highest enzyme activity, and the optimal concentration was then determined and used for further optimization studies. Strain Development for Overproduction of Enzyme: Effect of UV-induced stress on enzyme production: The inoculated LB agar plates were exposed to UV-B rays for 5, 10, 15, 20 and 25 min, respectively. The plates were incubated for 24 hours at 37°C at a distance of 55 cm. After the incubation, the enzyme assay was performed by the nesslerization method. Effect of height-wise induced UV stress on enzyme production: The inoculated LB agar plates were exposed to UV-B rays for 5, 10, 15, 20 and 25 min, respectively. The plates were incubated for 24 hours at 37°C at a distance of 20 cm. The incubation was followed by the enzyme assay. Effect of X-rays on enzyme production: The inoculated agar plates were exposed for grace periods of 30, 60 and 90 minutes, respectively. The plates were incubated for 24 hours at 37°C. After the incubation, the enzyme assay was performed by the nesslerization method. Fig 6: Effect of X-radiation on the enzyme activity of the microbial sample, and its trend in the graph shown above, when exposed for grace times of 30, 60 and 90 minutes. Purification of Enzyme: Salt Dialysis: The amount of salt to be added was calculated according to the formula, and salt purification was performed. Amount of ammonium sulphate (g) = (44 x volume of supernatant)/100. The membrane was then activated using 2% sodium bicarbonate (w/w). This was followed by diffusion of the sample for partial purification. Ion Exchange Chromatography: 250 µl of Tris-HCl was added to each of 7 test tubes. No NaCl was added to test tube 1; 250 µl of NaCl was added to test tube 2, and NaCl was added to the remaining test tubes with an increase of 250 µl each. Each tube was made up to 5 ml by adding autoclaved water. Elution 1 was added to equilibrate the DEAE cellulose in the ion exchange column; it was completely removed and then the sample was added to the ion exchange column. This was followed by the addition of elutions 2, 3, 4, 5, 6 and 7, and the samples were collected. Estimation of protein: The method proposed by Lowry et al. (1951) was used for the determination of the amount of protein. Gel Filtration Chromatography: The Sephadex G-75 gel was poured into the gel filtration column, followed by the sample and 0.1 M phosphate buffer at pH 7. The samples were eluted into 25 Eppendorf tubes. The absorbance was measured at 280 nm. Concentrations of 0.2%, 0.4%, 0.6%, 0.8% and 1.0% (w/w) were used for characterization. The enzyme assay was performed by the nesslerization method for each of the steps above, and the enzyme was characterized.
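As a worked example of the salting-out formula quoted above (the supernatant volume here is purely illustrative and not taken from the study): for a hypothetical 250 ml of supernatant, the amount of ammonium sulphate to add would be (44 x 250)/100 = 110 g, while 100 ml of supernatant would require 44 g.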
The enzyme producing colony was selected for further studies (Chennai industrial area). The microbial strain was characterized by morphological and biochemical analysis. Microscope observation of the colony disclosed that the strain with rod-shaped colonies was gram-negative. The microbial strain was able to utilize citrate, liquify gelatin and hydrolyze casein and lipid. The strain also showed positive catalase and oxidase tests. Through these results, we identified the microorganism to belong to the Pseudomonas genus. The bacterial strain chosen elucidated a maximum growth at 24h for the batch fermentation process and the maximum production of the enzyme was found to be at 24h as well (fig 3) .The enzymatic activity-906.78U/ml was observed at pH 9 which was noted to be the optimum pH for maximum enzyme production (fig 3). The enzyme activity increased to 958.03U/ml at 40 o C ( fig 3) .Similar results were recorded at (40 0 C) in studies done by [15] on pseudomonas aeruginosa. These conditions were used for further optimization studies. The strain development was performed using UV-B rays and X rays ( fig 5,6). The enzyme activity increased when exposed to UV-B, hence we infer that exposure to UV-B created a mutation in the microbial strain thereby increasing the enzyme activity. Highest activity was found for the plate which was exposed to 15 minutes(enzyme activity-878.23 units/ml). When exposed to different time intervals with a distance of 10cm, plate which was kept for 10 minutes showed maximum enzyme activity(921.65 Units/ml)( fig 5). However, X rays did not show any significant consequence on the enzyme activity and a decline in enzyme activity can be seen in the grace time 60 and 90 min (fig 6). Modified M9 media, when used with different carbon and nitrogen sources optimized with different concentrations showed maximum enzyme activity for sucrose ( fig 4) and ammonium sulfate (fig 4) respectively. The trend of these carbon and nitrogen sources is evident in the (fig 4.a, fig 4.b) and it is clear the maximum production of the enzyme is at a concentration of 1.5%(w/w) for each one of these sources. This was in line with the studies done by [22] for Pseudomonas sp. Ammonium sulfate precipitation, salt dialysis consecutively ion exchange chromatography and gel filtration chromatography was done for the purification of the enzyme. The purified enzyme was further used for characterization studies. The enzyme activity decreased from crude to gel filtration gradually this was in line with studies done by Ashraf et.al., 2003Vidhya Moorthy et.al., 2010.The percentage yield of protein decreased up to 19.8% in the gel filtration and the fold purified up to 1.854 in gel filtration. The protein profiling to determine the molecular weight of the enzyme was done using SDS-PAGE. A protein band of a molecular weight of approximately 55 kDa (Fig 7) was obtained. The study done by [9]Bacillus sp 760 had a molecular weight of 55 kDa. Table 2 represents the protein purification trend from crude to gel filtration. The purified enzyme was then characterized with optimum pH, temperature, incubation time and substrate asparagine concentration. The enzymatic activity of the purified enzyme gradually increased up to 40 minutes (644U/ml) and then decreased on further increase in the incubation time (fig 8). The enzyme showed maximum activity at a pH 9-472.51 U/ml (fig 8). A similar trend was found for enzyme activity at 40 0 C and decreased on further increase of temperatures (fig 8). 
The obtained results illustrated a gradual enzyme activity increase with increasing substrate concentration up to 1.20% substrate concentration (maximum enzyme activity for 1.20% concentration was 580.61 U/ml) as illustrate in fig 8. Conclusion:- The current study disclosed L-Asparaginase production from soil isolates of Pseudomonas sp. The purification was achieved through ion exchange chromatography and surplus salt was removed using dialysis. The enzyme was then characterized which showed the optimum pH found to be at 9.The enzyme had its maximum activity at a temperature of 40 0 C and an incubation time of about 40 minutes .6 different carbon and nitrogen sources were used for production among which sucrose and ammonium sulphate showed maximum enzyme activity. Exposure to UV-B rays had increased the enzyme activity. Highest activity was found for the plate which was exposed to 15 minutes (enzyme activity-878.23 units/ml). For height wise effect, when exposed to different time with a distance of 10cm, kept for 10 minutes showed maximum enzyme activity (Enzyme activity 921.65 Units/ml). The X rays did not show any considerable effect on the enzyme. The high amount of catalytic activity along with remarkable stability over a comprehensive pH and temperature makes it a considerably good anti-cancerous agent. To sum up, L-Asparaginase from different microbial sources shows such properties which make it a potent enzyme for both pharmaceutical and industrial application. Future studies need to be performed in order to reduce the cost of enzyme production. This could be done by increasing the yield of L-asparaginase through optimization of the production process or by strain improvement. Although bacterial L-Asparaginase causes certain unfavorable reactions [14]. The purification and characterization studies on the enzyme would definitely open unexplored prospects in the health care industry [9] Conflict of Interest: There is no conflict of interest between any of the research personnel. Author's Contribution: All the Authors have made significant contributions towards the research manuscript. Funding (Clearly mention grant number, if any): There was no funding received from any of the authorities for the research project. Data Availability: The data has been taken from the lab work, also by referring to related research papers. Ethics Statement: There was no harm done to any animals and it involved none of the transgenic procedures which would be ethically wrong.
2020-01-23T09:07:42.197Z
2019-12-31T00:00:00.000
{ "year": 2019, "sha1": "2d295bd12222bb49d6ad223ad24d634404432d63", "oa_license": "CCBY", "oa_url": "http://www.journalijar.com/uploads/617_IJAR-29819.pdf", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "8515c95829c6b643654cd84dbc656a4cccf548fe", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Chemistry" ] }
232364222
pes2o/s2orc
v3-fos-license
Drug-drug interactions in subjects enrolled in SWOG trials of oral chemotherapy Background Patients with cancer are at increased risk of drug-drug interactions (DDI), which can increase treatment toxicity or decrease efficacy. It is especially important to thoroughly screen DDI in oncology clinical trial subjects to ensure trial subject safety and data accuracy. This study determined the prevalence of potential DDI involving oral anti-cancer trial agents in subjects enrolled in two SWOG clinical trials. Methods Completed SWOG clinical trials of commercially available agents with possible DDI that had complete concomitant medication information available at enrollment were included. Screening for DDI was conducted through three methods: protocol-guided screening, Lexicomp® screening, and pharmacist determination of clinical relevance. Descriptive statistics were calculated. Results SWOG trials S0711 (dasatinib, n = 83) and S0528 (everolimus/lapatinib, n = 84) were included. Subjects received an average of 6.6 medications (standard deviation = 4.9, range 0–29) at enrollment. Based on the clinical trial protocols, at enrollment 18.6% (31/167) of subjects had a DDI and 12.0% (20/167) had a DDI that violated a protocol exclusion criterion. According to Lexicomp®, 28.7% of subjects (48/167) had a DDI classified as moderate or worse, whereas pharmacist review indicated that 7.2% of subjects (12/167) had a clinically relevant interaction. The majority of clinically relevant DDI identified were due to the coadministration of acid suppression therapies with dasatinib (83.3%, 10/12). Conclusions The high DDI prevalence in subjects enrolled on SWOG clinical trials, including a high prevalence that violate trial exclusion criteria, support the need for improved processes for DDI screening to ensure trial subject safety and trial data accuracy. Supplementary Information The online version contains supplementary material available at 10.1186/s12885-021-08050-w. Background Drug-drug interactions (DDI) can cause treatment to be unsafe for patients by increasing drug toxicity or decreasing treatment efficacy [1]. Patients with cancer have particularly high risk of DDI due to their increasing age, numerous comorbidities and high rates of polypharmacy [2]. An estimated 16-41% of patients receiving cancer treatment have a potential DDI [3][4][5][6][7], which increase risk of severe toxicity nearly threefold [8]. DDI can be detected using high performing DDI screening tools [9] and effectively managed by incorporating clinical pharmacists or pharmacologists on the healthcare team [4,5]. DDI can affect drug levels by altering drug absorption, distribution, metabolism or excretion or can affect drug response through mechanistic synergy or antagonism [1]. Cancer treatment is shifting from primarily infusionbased treatment towards oral agents [10]. In addition to the typical concerns with metabolic DDI, oral agents have additional DDI concerns relating to their need to be absorbed from the gastrointestinal tract. Intestinal absorption of oral agents can be affected by changes in gastrointestinal pH and activity of uptake transports. Concomitant administration of gastric acid suppression such as proton pump inhibitors (PPI) or histamine H 2 antagonists (H2RA) with tyrosine kinase inhibitors can reduce drug absorption decreasing AUC as much as 60% [11], which decreases systemic exposure and treatment efficacy [12,13]. 
Additionally, these oral agents are often given daily over an extended period of time increasing the risk of DDI. DDI management is particularly critical for subjects enrolled in oncology clinical trials, within which the benefits and harms of trial agents are determined. Current processes to detect DDI during trial eligibility screening are inadequate and lack standardization across sites, even within the National Cancer Institute's National Clinical Trials Network (NCTN) system [14]. Few studies have examined the prevalence of DDI in oncology clinical trial subjects [15,16]. In our prior work, nearly 25% of subjects enrolled on an NCTN clinical trial at the University of Michigan Rogel Cancer Center were found to have at least one major or contraindicated DDI [16]. This high prevalence suggests many DDI are not being detected and managed during eligibility assessment screening, which raises concerns about trial subject safety and data accuracy. Based on the high prevalence of DDI in subjects at their time of enrollment on NCTN trials at a single institution, the objective of this study was to determine the prevalence of DDIs involving trial agents in subjects at enrollment in multi-center SWOG clinical trials. A secondary objective was to determine the prevalence of DDI caused by the addition of medications in subjects while on SWOG clinical trials. Data collection/selection All closed SWOG clinical trials with available data were evaluated for inclusion. SWOG clinical trials of commercially available agents that collected comprehensive concomitant medication information at the time of enrollment were eligible for inclusion. Trials were excluded if the trial agent did not have any possible DDI. Complete medication lists at enrollment and medication changes during the trial for each subject were collected from the existing trial record. Concomitant medications that were noted to be administered for two or fewer doses were not included in the total number of medications a subject was taking or evaluated during DDI screening. Protocol-guided screening Detailed methods for protocol guided screening have been previously described [16]. Briefly, clinical trial protocols were reviewed for all language discussing concomitant medications with DDI concerns that should be considered exclusion criteria, medications to avoid, or medications to use with caution. Medication lists were compared to this protocol information to determine whether each subject had a DDI according to protocolguided screening for the trial on which they were enrolled. Lexicomp® guided screening Medication lists were screened for major or contraindicated DDI involving the trial agent using Lexicomp® Drug Interactions. Lexicomp® was selected based on its strong performance when screening for DDI with oral chemotherapy [9]. DDI clinical relevance determination DDI identified by protocol or Lexicomp® guided screening were manually reviewed by a pharmacist and student pharmacists for clinical relevance. Clinical relevance was defined as a DDI that would warrant a drug change or discontinuation to ensure subject safety and drug efficacy. This process is similar to the process we used in previous studies to allow for cross-study comparison [16]. Statistical analysis The prevalence of DDI by protocol-guided screening, Lexicomp® guided screening, and clinically relevant DDI were calculated for each SWOG trial and combined across trials. The mean, median, and range of medications per subject was also calculated. 
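As an illustration of the summary statistics just described, a minimal sketch follows. The study itself performed its analysis in R; this Python/pandas sketch uses made-up records and assumed column names and only mirrors the described calculations (prevalence per trial and combined, and mean/median/range of concomitant medications per subject).

```python
# Illustrative sketch (Python/pandas; the study used R) of the described summary
# statistics: DDI prevalence per trial and combined, and mean/median/range of
# concomitant medications per subject. Records and column names are invented.
import pandas as pd

subjects = pd.DataFrame({
    "trial":        ["S0711", "S0711", "S0528", "S0528"],
    "n_meds":       [3, 12, 7, 0],
    "protocol_ddi": [True, False, True, False],
})

# Prevalence of protocol-identified DDI, per trial and combined (percent)
per_trial_prevalence = subjects.groupby("trial")["protocol_ddi"].mean() * 100
combined_prevalence = subjects["protocol_ddi"].mean() * 100

# Summary of concomitant medications per subject
meds = subjects["n_meds"]
meds_summary = {"mean": meds.mean(), "median": meds.median(),
                "range": (meds.min(), meds.max())}
print(per_trial_prevalence, combined_prevalence, meds_summary)
```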
Statistical analysis was performed using R software. The primary analysis did not count any DDI involving antacids, as these can be avoided by properly separating timing of administration. A secondary analysis that includes antacids as DDI was also conducted since administration timing information was not available, therefore, these potential DDI cannot be excluded. The following were considered antacids: aluminum hydroxide, magnesium hydroxide, magnesium carbonate, and calcium carbonate dosed as needed. Protocol characteristics and subjects Two SWOG trials of commercially available agents that had potential DDI and concomitant medication lists were identified. SWOG 0711 (S0711) and 0528 (S0528) were pharmacokinetic trials of dasatinib and everolimus/ lapatinib, respectively. S0711 started October 2008 and closed June 2014, and S0528 started September 2006 and closed August 2009. Medications lists were collected when each trial was conducted as a step within protocol procedures and were available for retrospective review for all subjects enrolled on S0711 (n = 83) and S0528 (n = 84). At enrollment subjects were receiving 0-29 concomitant medications (mean: 6.6, standard deviation: 4.9). Medication additions during the trial occurred in 40.7% (68/167) of subjects, with a mean of 1.9 (standard deviation = 4.8, range 0-23) medications added per subject. DDI detected by protocol-guided screening Protocol-specified concomitant medications that would warrant subject exclusion, or medications that should be avoided or used with caution are shown in Table 1. At the time of enrollment to either of the two trials, 18.6% (31/167, Fig. 1a) of subjects had at least one DDI based on protocol-guidance, the majority of which violated exclusion criteria (12.0% of subjects, 20/167). In the secondary analysis including DDI with antacids, 24.6% (41/ 167) of subjects had at least one DDI and 17.4% (29/ 167) of subjects had a DDI that violated exclusion criteria. During the trial, 9.6% of subjects (16/167) had a medication added that was considered a DDI based on protocol guidance. A total of 8.4% (14/167) of subjects had a medication added that violated exclusion criteria. In the S0711 trial, 18.1% (15/83) of subjects had at least one DDI at enrollment based on the trial protocol, and 12.0% (10/83) of subjects had a DDI that was a violation of an exclusion criterion ( Table 2). Most of these exclusion criteria violations were due to the combination of dasatinib with a PPI (80%, 8/10, Online Resource 1) and the rest were due to an H2RA (20%, 2/10). Including antacids as DDI, 22.9% (19/83) of subjects had an exclusion criterion violation at enrollment. A medication that violated an exclusion criterion was added during the trial in 13.3% (11/83) of subjects, all of which were PPI. In the S0528 trial, 20.2% (17/84) of subjects had at least one potential DDI at enrollment based on protocol-guided screening ( Table 2). The majority of DDI violated protocol exclusion criteria (11.9%, 10/84); these were nearly evenly split between the combination of lapatinib with PPIs (60.0%, 6/10) and H2RAs (50.0%, 5/10). No subjects were taking antacids at baseline, so the results of the secondary analysis were the same as the primary analysis. Three subjects had a protocol identified DDI added while on trial (3.6%, 3/84) and each of these DDI violated an exclusion criterion. DDI detected by Lexicomp® At baseline, 28.7% of subjects (48/167, Fig. 1b) had at least one major or contraindicated DDI detected by Lex-icomp®. 
The majority of these interactions were detected in S0711 and were due to the combination of dasatinib with acid suppression therapies and/or acetaminophen (acid suppression only: 13/46, acetaminophen only: 20/ 46, both: 11/46, other: 2/46). During the trials 10.2% (17/167) of subjects had a medication added that caused a DDI, all of which were S0711 subjects. Clinically relevant DDI In the primary analysis, 7.2% of subjects (12/167) had at least one DDI at enrollment that was considered to be clinically relevant, and this increased to 12.6% (21/167) when including antacids in the secondary analysis. The majority of these clinically relevant interactions (83.3%, 10/12) were between dasatinib and PPIs or H2RAs. The clinically relevant interactions with lapatinib/everolimus were with verapamil (n = 1) and fluconazole (n = 1). The interaction of dasatinib with acid suppression therapy was considered clinically relevant, so 6.6% (11/167) of subjects, all on S0711, had a drug added during the trial that led to a clinically relevant DDI. Discussion Patients with cancer have a high prevalence of DDI [3][4][5][6] that can decrease patient safety and increase toxicity [1]. Oncology clinical trial subjects also have high prevalence of DDI due to the lack of standardized screening procedures [14]. In our previous work, approximately 25% of subjects enrolled on an NCTN trial at a singlesite had DDI at enrollment [16]. This follow-up analysis of subjects enrolled on SWOG trials across sites confirmed a high prevalence of DDI, though the exact estimate depends on whether the determination is based on the protocol (19-25%), is limited to protocol exclusion criteria (12%), or is based on Lexicomp® (29%) or clinical judgement (7%). This study also found inadequacies in DDI screening for drugs added while a subject is on a clinical trial. The prevalence of at least one major or contraindicated DDI at enrollment detected by Lexicomp® (29%) is similar to the prevalence detected in subjects enrolling on NCTN trials at UM Rogel Cancer Center (24.2%) [16] and within the ranges previously reported in patients with cancer (16-41%) [3][4][5][6]. Direct comparison of these rates should be done cautiously as the prevalence of DDI is largely determined by the interaction potential of the agents used in the trials included in the analysis. This analysis included two trials of agents with numerous DDI, whereas our prior analysis included subjects enrolled in 35 trials with a variety of trial agents and DDI potential. Nevertheless, these findings suggest that the ineffectiveness of DDI screening for oncology clinical trial enrollment is not limited to a single or subset of institutions but is a systemic issue across sites. Based on manual pharmacist review, 7% of subjects had a DDI that was considered clinically relevant, further supporting the conclusion that improved DDI screening is necessary to prevent harm in clinical trial subjects and ensure accuracy of trial data. The vast majority of DDI detected in these trial subjects were DDI that prevent drug absorption [17]. Absorption DDI are common for oral medications, which are being used more often in cancer treatment due to their improved convenience over parenteral administration [18]. Dasatinib absorption decreases with increasing pH [19], consequently, acid suppression therapy (e.g., PPIs, H2RAs, and antacids) decreases absorption of dasatinib leading to an AUC decrease of between 43 and [11]. 
Reduced drug absorption leads to lower systemic concentrations that could cause dasatinib treatment efficacy to be decreased, as has been shown for erlotinib and pazopanib [12,13,20]. This DDI is particularly concerning given that the primary objective of S0711 was to investigate the pharmacokinetics of dasatinib. This is just one of many possible scenarios where ineffective DDI screening can meaningfully affect the accuracy of the data collected within a clinical trial. The high DDI prevalence in oncology trial subjects is likely due to the lack of standard DDI screening procedures during trial enrollment eligibility assessment [21]. In our prior survey of SWOG sites, most sites reported that DDI screening relies primarily on DDI guidance within the trial protocol and approximately half of sites indicated that DDI screening is only conducted for DDI that are explicit trial exclusion criterion [14]. Despite sites self-reported reliance on protocols and particular attention to exclusion criteria, in this analysis 12% of subjects had DDI at enrollment that warranted trial exclusion and 7% had a medication added during the trial that warranted trial exclusion. This is perhaps the strongest evidence of the inadequacy of the current systems for DDI screening within oncology clinical trials. One solution that has been proposed is to have pharmacistled comprehensive DDI screening for all oncology clinical trial subjects [21], however, only 17% surveyed SWOG sites reported pharmacists currently conduct DDI screening. Most sites rely on clinical research coordinators (56%) and study nurses (45%) who may have insufficient knowledge and training on DDI [22,23], and pharmacist-led DDI screening likely is not given the lack of pharmacists at up to 6% of sites that enroll subjects on SWOG clinical trials [14]. An alternative approach that we have advocated is to deploy a DDI screening tool designed specifically to assist clinical trial staff with screening for DDI for trial subjects [24]. Standardizing DDI screening procedures for clinical trial subjects, either through pharmacist involvement [25] or developing a point of care tool, could dramatically improve the effectiveness and efficiency of DDI screening, yielding significant benefits to trial subjects, staff, and investigators. A limitation of this study is that only two trials were included. Few SWOG protocols require collection of concomitant medication information, limiting the availability of the data necessary to more comprehensively investigate the prevalence of DDI in SWOG trial subjects. Additionally, the medication lists could not be further verified from what was recorded as part of the original study protocol. The medication lists were likely collected by multiple individuals across trial sites, and it is possible inaccuracies exist. The concomitant medication data did not specify administration times, so it is unknown whether antacid interactions should have been included. Antacids accounted for 50% of the DDI that were classified as exclusion criteria or clinically relevant, consequently, our estimated DDI prevalence is somewhat sensitive to whether it can be assumed that timing of antacid administration was appropriate. Additionally, some subjects had multiple strengths or routes of administration of the same medication. These were treated as individual DDI occurrences since the route and dose of medications can impact the likelihood of a DDI. 
Finally, we used our standard approach of manual review by a pharmacist to determine DDI clinical relevance; however, slightly different prevalence estimates would likely have been obtained if clinical relevance was determined by a different pharmacist. Conclusions DDI in clinical trial subjects have the potential to adversely affect subject safety and compromise trial data accuracy. Our results confirm a high prevalence of DDI in subjects enrolled on SWOG clinical trials, further supporting the need for improvements to DDI screening procedures, particularly for trials of drugs that have high DDI potential including oral anti-cancer agents. Additional file 1. Drug-drug Interactions Identified at Baseline and Added After Enrollment. Table containing all drug-drug interactions identified at baseline and after enrollment by screening through protocol guidance, Lexicomp®, and by pharmacist review for clinical relevance.
2021-03-27T05:15:50.714Z
2021-03-26T00:00:00.000
{ "year": 2021, "sha1": "878af1fb352fa8d4c9765d938390cec1b5bd50da", "oa_license": "CCBY", "oa_url": "https://bmccancer.biomedcentral.com/track/pdf/10.1186/s12885-021-08050-w", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "75586eb742b10532e97e219de588821efca13ae9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
132436075
pes2o/s2orc
v3-fos-license
Palynomorphs Dispersal of Plantago Type , in Elbasani Town – Albania Paleopalynological data reported in this research, are received from underground layers that belongs to the geological period of New Holocene, in five different places belonging to the city of Elbasan. This research gives us certain paleopalynological input for the palynomorphs spreading to Plantago Type during New Holocene period. A significant number of environmental studies are done in Elbasan city during last thirty years. The goal where our paper is based, deals with the presentation of relationship amongst profoundness and distribution for the Plantago Type palynomorphs at various time intervals. The samples were received 25 cm depth from the surface to the 4 m of deepness, through a dry drilling sonde. Palynological input about this Type there were from any similar palynological research before, as from domestic and outside authors. Monitoring, numbering and microscopic photographing of palynomorphs is completed with Motic microscope BA310. Sample treatment also microscopic examinations were performed at “La Sapienza”, University Rome. See from our perspective, some essential data were found, showing exactly the relationship amongst profoundness and quantitative presence of pollen to Plantago Type. Introduction Paleopalynology is the discipline that deals with the study of microscopic fossils made of organic materials resistant.This science was initially known as "Pollen Analysis" with the purpose the study of pollen grains and spores including fossils of the Quaternary period to paleoflora rebuild (Von Post, 1916).Comparison of herbal spores and pollen present in those primitive, allows us judging the performance of primitive and specialized features of outer wall of the grains (Pacini & Franchi, 1978;Pacini & Hesse, 2005). The presented material gives palynological data of New Holocene deposits in Elbasani town.Palynology is focused on the study of finding the factors of vegetation change and human impact on the surrounding environment (Moore & Webb, 1978). Plant microfossils of this type are not analyzed earlier in the Elbasani Region as well as no kind of studies by foreign or native researcher on palynomorphs of this plant in underground layers belonging the geological period of New Holocene in Elbasani town.Palynological studies in our country in recent years have built and created an organized and collaborative group already in various disciplines (Jance & Kapidani, 2011). Through these kinds of researches are constructed a palynomorphs diagrams, reconstructed flora, vegetation, plant landscape and gives data on the natural history, ecological and climatic origin of the region under study (Forest et al., 1999;Davis, 1999). The research in underground layers belonging the two thousand-year geological period ensure important information and tries to shed light on possible changes to the vegetation in this area, as well as the factors that have contributed to these changes over the years (Shalla, 1983;Muhameti et al., 1984). Monitoring, numbering and microscopic photographing of palynomorphs is completed with Motic microscope BA310, with overstatement 1500x.Information on the manner of taking samples as well as final preparation mode in laboratory to the palynomorphs ready for preview of this plant is presented below in this paper.(Kapidani, 1996;Kapidani & Jançe, 2004). 
Paleopalynological data help in the discovery of traces of history on the use and cultivation of plants, feeding mode and the origin of agriculture (Bryant & Holloway, 1996).The pollen quantifiable presence of Plantago Type in underground layers reveals the manner of distribution over the years about Plantago Type. Material and Methods Physical-chemical composition of the spores allows palynomorphs well saved and easily extracted from soil sediments.Basically all the extraction ways, join in principle methods of physical and chemical processing of 1 cm 3 sediment (Faegri & Iversen, 1989).The relief in all stationing where drilling is done has been flat, sub horizontal, small-angle slope on their way to the southern part of the city.Samples were taken every 25 cm depth from the surface to the depth reaching 4 m.A total of 105 samples are taken and analyzed. The method of processing with hydrofluoric acid To prepare the palynomorphs for microscopic study of a sample, the material initially is treated only with HCl and then with concentrated HF (Wood et al., 1996;Green 2001).This method consists in the processing of 1 cm 3 sediment with 10 ml HCl 37%, leaving together for a time of 15-20 minutes. Then the material is mixed with 6 ml of 40% HF, leaving together for a period of 24 hours.Material rinsed with distilled water and is centrifuged for six minutes with 3,500 rotations per minute.Once obtained neutral environment, the precipitate is mixed with glycerol. To avoid any difficulty in the microscope preparation, the glycerin is mixed with magnesium oxide MgO in a 5:1 ratio and it is ready to be observed in optical microscope (Moore & Webb, 1978;Davis, 1999). Evaluation of processing methods. For chemical treatment of samples, there are many processing methods.We practiced all the possible methods.For our conditions more appropriate methods for the chemical treatment of the samples results that of processing with hydrofluoric acid (2.1).It is worth noting that, for the closure of preparations the gelatin method was used (Kisser, 1935). Analysis Results and Arguments If we look at table 1, we will find the quantitative data for the pollen presence of Plantago type, representative to the family of Plantaginaceae distributed by their presence in a defined deepness as well as the total spores' quantity for Plantago Type.The minimum number (36 palynomorphs) is provided exactly in 400 cm of deepness meantime the greater presence of palynomorphs of Plantago Type that is 98, is battling close to surface exactly in the 25 cm of deepness. Figure 1. The palynomorphs distribution of Plantago Type by deepness If observe figure 1 we see distinctly a significant increment to the palynomorphs of Plantago Type, part of Plantaginaceae family (Photos 1) from the profoundness to the superficies.Observing the quantitative information, which we are given in the above table the palynomorphs aggregate number of Plantago Type, we find a fairly significant presence of 1133 spores. 
By a thorough monitoring of the Table 1, gives us the right to say that: The presence of Plantago Type palynomorphs are observed at all deepness.As well if we observe carefully the Figure 1, it is clearly noted that: The Plantago Type palynomorphs have a significant augmentation of the attendance from the profoundness to the terrestrial area.The representatives of this plant usually grown in compressed and stable ground, so repeatedly this plant occurs along the paths, through rocks or in crevices of pavements.This plant, also rarely occurs along the edge of the driveway. The leading cause for this presence of Plantago Type palynomorphs maybe should be connected with the mandatory requirement to this plant for human beings.We found enough material to contemporary literature that proves the use as herbal medicines of Plantago varieties, very long time ago.Plantago varieties as a medicinal plant and daily use find a spread as: disinfectant, against the swellings, treacle, anti-allergic, diuretic, soothing pain, tanner, contractile and cough suppressant.(Samuelsen, 2000). In the traditional beliefs thought that this herb is capable for the treating and healing of the poisonous snake nip.Also among this plant has found a use as nutrition of ancient humanity.Decoct boiled or salt-glaze derived from this plant also find use for the treatment of various problems coming from the streets of the respiratory system. Upward curve or the addition of Plantago Type palynomorphs towards the ground surface can be explained on the grounds that the palynomorphs of the surface can be well maintained than those exclusively deep, not avoid the impact of weather and environmental elements. The quantitative data through spores and pollens variety of Plantago type shows the direction of evolution of this family mentioned in the study. The palynomorphs amount of Plantago Type in subterranean strata reveals the spreading way throughout a long time about Plantago Type, part of Plantaginaceae Family. Table 1 . The presence of palynomorphs by deepness
Early Detection of Microvascular Changes in Patients with Diabetes Mellitus without and with Diabetic Retinopathy: Comparison between Different Swept-Source OCT-A Instruments Optical coherence tomography angiography (OCT-A) has recently improved the ability to detect subclinical and early clinically visible microvascular changes occurring in patients with diabetes mellitus (DM). The aim of the present study is to evaluate and compare early quantitative changes of macular perfusion parameters in patients with DM without DR and with mild nonproliferative DR (NPDR) evaluated by two different swept-source (SS) OCT-A instruments using two scan protocols (3 × 3 mm and 6 × 6 mm). One hundred eleven subjects/eyes were prospectively evaluated: 18 healthy controls (control group), 73 eyes with DM but no DR (no-DR group), and 20 eyes with mild NPDR (DR group). All quantitative analyses were performed using ImageJ and included vessel and perfusion density, area and circularity index of the FAZ, and vascular complexity parameters. The agreement between methods was assessed according to the method of Bland-Altman. A significant decrease in the majority of the considered parameters was found in the DR group versus the controls with both instruments. The results of Bland-Altman analysis showed the presence of a systemic bias between the two instruments with PLEX Elite providing higher values for the majority of the tested parameters when considering 6 × 6 mm angiocubes and a less definite difference in 3 × 3 mm angiocubes. In conclusion, this study documents early microvascular changes occurring in the macular region of patients at initial stages of DR, confirmed with both SS OCT-A instruments. The fact that early microvascular alterations could not be detected with one instrument does not necessarily mean that these alterations are not actually present, but this could be an intrinsic limitation of the device itself. Further, larger longitudinal studies are needed to better understand microvascular damage at very early stages of diabetic retinal disease and to define the strengths and weaknesses of different OCT-A devices. Several studies were performed using different OCT-A devices, and this could explain some discrepancies in the available results. In fact, even if all OCT-A devices rely on the common principle that erythrocytes could be used as a motion contrast to differentiate vessels from static tissues [22], they use different algorithms for image acquisition and processing and different methods for layer segmentation [23][24][25][26][27]. Recently, Corvi et al. evaluated the reproducibility of quantitative parameters using seven different OCT-A devices in healthy subjects and concluded that the measurements obtained were too different to allow reliable comparisons [28]. The aim of this study is to evaluate and compare early quantitative changes of the macular perfusion parameters in patients with DM without DR and with mild nonproliferative DR (NPDR) by two different swept-source (SS) OCT-A instruments and using two scan protocols (3 × 3 mm and 6 × 6 mm). Patients and Study Design. In this prospective crosssectional comparative case-control study, we consecutively enrolled 111 eyes of 111 subjects, consisting of 18 healthy control eyes (control group), 73 eyes with DM without clinical signs of DR (no-DR group), and 20 eyes with mild NPDR (DR group). The right eye was considered for the analysis, unless a better quality in the left eye images was present. 
All patients with DM were referred from the Diabetes Unit to the Medical Retina Service, University Hospital "Maggiore della Carità," Novara, Italy, for evaluation. Normal controls were recruited among subjects referring to our clinic for a routine annual examination or for preliminary exams for cataract surgery (the eye that was not planned for surgery was chosen for the study). Inclusion criteria for the study were as follows: patients over 18 years of age with a diagnosis of type 1 or type 2 DM according to the updated diagnostic criteria by the American Diabetes Association [29] and confirmed by an expert diabetologist (G.A., M.C.P., and A.N.); no signs of DR or signs of mild NPDR on slit-lamp fundus examination with 90D lens (Volk Optical Inc., Mentor, OH, USA) performed by an expert ophthalmologist (S.V.) according to the International Clinical Diabetic Retinopathy Disease Severity Scale [30]; and subjects with normal glucose test for the control group. Exclusion criteria were as follows: any retinal disease other than mild NPDR (including diabetic macular edema); any previous intraocular treatment (such as intravitreal injections of anti-VEGF/steroids or retinal laser); cataract surgery within 6 months in the study eye; refractive error of greater than +/−4D; glaucoma or history of ocular hypertension (IOP > 21 mmHg); neurodegenerative diseases (e.g., multiple sclerosis, Alzheimer's disease, and Parkinson's disease); uncontrolled systemic blood pressure (BP ≥ 120/80 mmHg) [31]; and poor quality of OCT and/or OCT-A images due to significant media opacity or poor patient cooperation. Anamnestic data were collected for each patient, including type of DM, value of glycated haemoglobin (HbA1c), use of antidiabetic agents (insulin and/or oral hypoglicaemic drugs), use of other drugs for concomitant pathologic conditions (e.g., systemic hypertension, cardiovascular diseases, and rheumatic diseases), and previous ocular or other surgery. Each patient underwent a complete ophthalmologic examination including best-corrected visual acuity (BCVA) determination using the standard Early Treatment Diabetic Retinopathy Study (ETDRS) protocol at 4 meters distance, IOP measurement, slit-lamp dilated fundus examination with 90D lens, and acquisition of color fundus photography of the posterior pole. On the same day, SS-OCT and SS-OCT-A images were acquired with two different instruments. The study adhered to the tenets of the Declaration of Helsinki and was approved by the institutional Ethics Committee (CE123 2017); each patient approved to participate in the study and signed a written informed consent. Swept-Source Optical Coherence Tomography and Optical Coherence Tomography Angiography. On the same day, each patient underwent OCT and OCT-A with two different SS instruments, after pupil dilation. The same scanning protocol was used for image acquisition. The devices were prototype PLEX Elite 9000 (Carl Zeiss Meditec Inc., Dublin, California, USA) and DRI OCT-A Triton Plus (Topcon Medical Systems Europe, Milano, Italy). Zeiss PLEX Elite uses a 1,060 nm wavelength, with a scanning speed of 100,000 A-scans/second, and image processing is obtained through the so-called OCT-microangiography complex algorithm (OMAG) [23,24]. Topcon DRI-OCT uses a 1,050 nm wavelength, with a scanning speed of 100,000 A-scans/second and image processing relying on a motion contrast measure named OCT-A Ratio Analysis (OCTARA) [26]. 
The acquisition protocol performed included the following scans: a linear 12 mm high-definition B-scan centered on the fovea at 0°, OCT-A maps covering the central 3 × 3 mm and 6 × 6 mm macular area. All OCT-A images were carefully reviewed to check automatic segmentations of the superficial capillary plexus (SCP) and deep capillary plexus (DCP), and manual corrections were applied, when necessary, in order to ensure a correct segmentation. For PLEX Elite device, the projections' removal tool was applied for evaluation of DCP. Poor quality images and/or with artifacts were excluded from the analysis. Quantitative Evaluation of OCT-A Images. Both 3 × 3 mm and 6 × 6 mm OCT-A maps were used for quantitative analysis. All images were saved and analyzed in anonymous and masked fashion. The following quantitative parameters were evaluated: area and circularity index (CI) of the FAZ; perfusion density (PD) and vessel density (VD); and branch analysis including the number of branches (NoB) and total branch length (tBL). All these parameters were evaluated on both SCP and DCP using ImageJ software, version 1.51 (http://imagej.nih.gov/ij/; provided in the public domain by the National Institutes of Health, Bethesda, MD, USA). For DRI-Triton Plus OCT-A, the SCP slab was segmented with an inner boundary at the inner limiting membrane (ILM) +2.6 μm and an outer boundary at the inner plexiform layer (IPL)/inner nuclear layer (INL) +15.6 μm, while the DCP slab was segmented between IPL/INL +15.6 μm and IPL/INL +70.2 μm. For PLEX Elite OCT-A, the SCP slab was segmented between ILM and IPL, while the DCP slab extended from the IPL to the retinal pigment epithelium (RPE fit) −100 μm. ImageJ Analysis. All DRI-Triton Plus OCT-A images were exported and analyzed with their original resolution of 320 × 320 pixels (9.4 μm lateral resolution for 3 × 3 mm images and 18.7 μm lateral resolution for 6 × 6 mm images). PLEX Elite OCT-A images were exported with their original resolution of 300 × 300 pixels for 3 × 3 mm angiocubes (10 μm lateral resolution) and 500 × 500 pixels for 6 × 6 mm angiocubes (12 μm lateral resolution) and analyzed after a process of cropping in order to match the DRI-Triton Plus's smaller field of view (images were cropped down by about 10%). All images were then opened in ImageJ analysis software. The FAZ profile was manually outlined using the freehand selection tool on images of SCP and DCP using a previously published method [32], and the software automatically calculated FAZ perimeter and area. FAZ CI was then measured using the following equation: FAZ CI = 4π × area / perimeter 2 . CI is the expression of the regularity of a shape: the more its value is closer to 1, the more the shape is similar to a perfect circle [31]. Images were then converted into 8-bit files, and the Otsu method of thresholding was applied before automatic measurements were performed, as previously reported [33]. Otsu's method of thresholding uses a bimodal distribution and determines the optimum threshold by minimizing intraclass variance and maximizing interclass variance [34]. PD on SCP and DCP (PDS and PDD) was calculated on binarized images as the ratio between all the perfused area in pixels and the total area of the image in pixels. VD on SCP and DCP (VDS and VDD) was calculated after skeletonization of the binarized image; it is a measure of the statistical length of moving the blood column, as previously described [35]. 
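A minimal Python/scikit-image sketch of the ImageJ pipeline described above (Otsu binarization for perfusion density, skeletonization for vessel density, and the circularity formula) is shown below. The function and variable names are illustrative and not taken from the authors' own scripts.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize

def perfusion_metrics(angio_slab):
    """angio_slab: 2-D numpy array of an exported OCT-A en-face image."""
    binary = angio_slab > threshold_otsu(angio_slab)  # Otsu binarization
    pd = binary.mean()                                # perfused pixels / total pixels
    vd = skeletonize(binary).mean()                   # 1-pixel-wide vessel map / total pixels
    return pd, vd

def faz_circularity(area, perimeter):
    """FAZ CI = 4*pi*area / perimeter**2 (equals 1.0 for a perfect circle)."""
    return 4.0 * np.pi * area / perimeter ** 2
```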
The process of skeletonization reduces all vessel diameters to 1 pixel; therefore, VD has the advantage of not being influenced by vessel dimension (Figures 1 and 2). The Analyze Skeleton function of ImageJ was then applied to skeletonized images. This plugin tags all pixels in a skeleton image, counts all their junctions, triple and quadruple points, and branches and then measures the average and maximum lengths [36,37]. When activating this function, a results table called "Branch information" is created; from this table, we considered only two parameters: tBL (total sum of the single branches' length in the area) and NoB (number of branches in the area), as previously described in the peripapillary region of patients with DM [17].

Statistical Analysis. The clinical and demographic variables were compared among the three subject groups using one-way ANOVA. The means of populations were estimated as least square means, which are the best linear estimates for the marginal means in the ANOVA design. In case of an overall statistically significant difference among subject groups, pairwise comparisons among the three groups were done using Scheffé's test. The ANOVA analyses were performed using Statistica software version 6.0 (StatSoft, Inc., Tulsa, OK, USA), using a two-sided type I error rate of p ≤ 0.002, after Bonferroni's correction for multiple comparisons. The agreement between methods was assessed according to the method of Bland-Altman [38]. The mean of the differences (bias), the 95% limits of agreement (LAs), and the 95% confidence intervals for the bias and the LAs were calculated. The distribution of the differences was compared with a Kolmogorov-Smirnov test to check for normality, as a prerequisite for the Bland-Altman method applicability.

Results
Of 111 examined subjects/eyes, 73 had no DR (mean age: 51 ± 20.4 years), 20 had mild DR (mean age: 63 ± 14.5 years), and 18 were healthy controls (mean age: 50 ± 21.1 years). There was no significant difference in the mean age among the three groups (one-way ANOVA, p = 0.06). Of 93 patients with DM, 38 had type 1 DM and 55 had type 2 DM. Mean duration of DM was 12.7 ± 10.7 years in the DM with no DR group and 18.3 ± 11.4 years in the DR group (p = 0.049). Mean value of HbA1c was 7.1 ± 1.1 in the DM with no DR group and 7.6 ± 1.2 in the DR group (p = 0.055). Mean BCVA was 85 ± 0.0 ETDRS letters in the control group, 84.8 ± 1.2 in the DM with no DR group, and 84.3 ± 1.6 in the DR group (p = 0.15). Table 1 shows the mean values of the significant parameters evaluated on 6 × 6 mm angiocubes in different groups. The following parameters were significantly decreased in the DR group versus controls with both instruments: CI and tBL in the SCP and VD and NoB in the DCP. The FAZ area in the DCP was significantly greater with both instruments in the DR group versus the controls. The following parameters were significantly decreased in the DR group versus controls only with PLEX Elite OCT-A: PD and VD in the SCP and PD, FAZ CI, NoB, and tBL in the DCP. The following parameters were significantly different in the no-DR group versus controls: a decrease in PD and tBL in the DCP and an increase in FAZ area in the DCP detected only with PLEX Elite and a decrease in FAZ CI in the SCP detected only with DRI-Triton Plus. Table 2 shows the mean values of significant parameters evaluated on 3 × 3 mm angiocubes in different groups.
The following parameters were significantly decreased in the DR group versus controls: PD, VD, CI, and tBL in the DCP with both instruments; NoB in the DCP with only PLEX Elite; and CI in the SCP with only DRI-Triton Plus. FAZ area in the DCP was significantly greater only with PLEX Elite. FAZ CI in the DCP was significantly reduced only with PLEX Elite in no-DR group versus controls. Table 3 summarizes the results of Bland-Altman analysis for PD, VD, FAZ, NoB, and tBL, showing comparison between the two OCT-A instruments. A systemic bias exists between the two instruments with PLEX Elite providing higher values for all tested parameters, except for FAZ CI in the SCP, when considering 6 × 6 mm angiocubes. However, when evaluating the 3 × 3 mm angiocube, the difference between the two instruments is less clear, with PLEX Elite providing higher values only for PD and FAZ area. As representative examples, Figure 3 shows the Bland-Altman plot for VD in 3 × 3 mm angiocube scans evaluated at the DCP. The width of the LA's interval is quite narrow, amounting to only 31.5% of the mean value, thus indicating a good agreement between the two instruments. Figure 4 shows the Bland-Altman plot for the FAZ area in 6 × 6 mm angiocube scans evaluated at the DCP. The width of the LAs' interval is wide, amounting to 197.8% of the mean value, thus indicating a poor agreement between the two instruments. Discussion In the present study, a quantitative evaluation of microvascular changes occurring in the macula in patients with DM with and without clinical signs of DR was performed, using two different SS OCT-A devices and two different angiocube scan sizes. A significant alteration of specific OCT-A parameters was confirmed with both instruments in patients with initial signs of DR when compared to healthy controls. OCT-A is a method recently introduced in clinical practice that allows for a detailed characterization of retinal microvasculature through the segmentation of individual retinal vascular layers. Recently, Gildea published a review focusing on the diagnostic value of OCT-A in the evaluation of a number of microvascular parameters in patients with diabetes and highlighting the usefulness of this technique in the identification and localization of microaneurysms; visualization of preretinal neovascularization and areas of capillary nonperfusion; detection of FAZ enlargement; and remodeling and quantification of vascular perfusion and branching complexity [39]. However, different OCT-A devices and segmentation methods that have been used as well as different regions of interest have been analyzed in these studies, making it difficult to draw final conclusions, especially when considering quantitative vascular perfusion parameters such as VD and PD [39]. In particular, the majority of available data are obtained with spectral domain OCT-A devices and just few studies were performed with SS-OCT-A. Swept-source OCT-A devices use a longer wavelength (1050 nm), thus having a better ability to penetrate deeper into the tissues than spectral domain devices that use a shorter wavelength. While many studies reported high intra-and interoperator reproducibility in the evaluation of different OCT-A parameters, both in normal and pathologic eyes, using the same scan type and the same device (in particular, FAZ area evaluation at the SCP and perfusion parameters) [40][41][42][43][44][45][46][47], concerns remain on the results interchangeability when using different scan sizes and devices. Rabiolo et al. 
recently published a study performed with PLEX Elite, comparing FAZ area and VD measurements in different angiocube scan sizes (3 × 3, 6 × 6, and 12 × 12 mm) after cropping original images to obtain the same size. The authors concluded that FAZ area is a robust parameter even if measured on different angiocubes, while VD depends on image size [47]. Different studies performed with OCT-A focused on FAZ measurement as a marker of microvascular damage, documenting that patients with DM had larger FAZ areas versus healthy controls [10,[12][13][14][15]48]. Different methods for quantitative evaluation of FAZ circularity in DM have been recently proposed [49,50]. In the present study, CI turned out to be an early parameter showing FAZ changes both in the SCP and DCP. Indeed, a clear decreasing trend was documented from controls to no-DR and DR groups, meaning that FAZ regularity was gradually lost as retinal microvascular damage, induced by DM, progressed. Moreover, the present study documents a significant decrease in VD and PD in patients with initial signs of DR versus healthy controls. This difference was detected with both instruments. These data are in agreement with previously published studies reporting a significant decrease in VD in the macular region in patients with DR compared to healthy controls [51,52]. In the present study, both angiocube scans (3 × 3 mm and 6 × 6 mm) detected a significant difference in VD and PD evaluated in DCP, while significant differences in the SCP were found only in 6 × 6 mm scans, in particular, using PLEX Elite OCT-A. Hirano et al. evaluated PD and VD on different scan sizes (3 × 3, 6 × 6, and 12 × 12 mm) using PLEX Elite [53]. The results are partially in agreement with our data, reporting a significant decrease in both PD and VD on all scan sizes between healthy controls and eyes with DR, but no significant differences between healthy and diabetic eyes without DR. However, different from what we found, these differences were described both in the DCP and SCP even on 3 × 3 mm images [53]. We would like to point out that two aspects should be considered when discussing the findings reported in the present study. First of all, the two devices use different segmentation methods and have different resolutions. In particular, lateral resolution of the two instruments is similar for the 3 × 3 mm images, while the lateral resolution of PLEX Elite's 6 × 6 mm images is significantly higher compared to that of 6 × 6 mm images acquired with the DRI-Triton Plus device. (Table 3 footnotes: *Comparisons were always performed considering the difference between method B (PLEX Elite) and method A (DRI-Triton); thus, a positive bias means PLEX Elite mean values are greater than those of DRI-Triton. **The LA interval width was calculated as the ratio between the amplitude of the interval (difference between upper LA and lower LA) and the mean value of the considered parameter, expressed in percentage. PD: perfusion density; SCP: superficial capillary plexus; DCP: deep capillary plexus; VD: vessel density; FAZ: foveal avascular zone; CI: circularity index; NoB: number of branches; tBL: total branches length; LA: limits of agreement.) This could explain why PLEX Elite was able to detect significant changes not only in 3 × 3 mm but also in 6 × 6 mm images.
Another important consideration that should be made is that our analysis of 6 × 6 mm images (obtained with PLEX Elite) allowed detecting changes occurring not only in the DCP but also in the SCP. Recent studies performed with OCT-A suggest that changes induced by DM first occur in the DCP and then involve the SCP with disease progression [54][55][56]. This may be due to a higher density of smaller vessels (more susceptible to hypoxic damage) in the DCP compared to the SCP [57,58]. Based on our results, we could confirm that lesions induced by diabetes were firstly detectable at the DCP and secondly at the SCP. As the decrease in macular perfusion parameters at SCP level was detected, only on 6 × 6 mm angiocubes and not on 3 × 3 mm angiocubes, we may hypothesize that lesions at the SCP start from a more peripheral macular area and then involve into the inner perifoveal region (more central area). This would need to be confirmed with further studies. Lastly, in this study, we found a significant reduction in NoB and tBL in patients with DR compared to healthy controls in the macular region. To the best of our knowledge, this is the first study to perform this kind of automatic evaluation of vessel complexity in the macular region. Previously, the same method was used to investigate the peripapillary region of patients with diabetes, finding a significant reduction also in patients with DM without clinical signs of DR when compared to healthy controls [17]. It is hypothesized that NoB and tBL reduction could be a consequence of loss of small branching vessels resulting in reduced branching complexity of retinal vasculature [17,38]. Previously published studies on OCT-A used a different method, called fractal dimension (FD), to analyze the complexity of retinal microvasculature in the macular region [35,51,55,[59][60][61][62]. FD was significantly altered in patients with DM when compared to healthy subjects and seemed to be associated with increasing severity of DR [35,51,55,[59][60][61][62]. Therefore, these studies support the hypothesis that the complexity of microvascular network progressively decreases with increasing severity of DR [35,51,55,[59][60][61][62]. We performed a Bland-Altman analysis to assess the agreement between the two OCT-A devices used in the present study. We found that the agreement between the two instruments was extremely variable depending on the parameter taken into account. Indeed, LA intervals ranged from acceptable values of ≤50% for some parameters (such as PD and VD) to very high values for some other parameters. In particular, LA intervals > 100% were detected for FAZ area and were probably due to the fact that this was the only parameter evaluated in a noncompletely automatic way (FAZ profile was manually outlined using ImageJ). In addition, the two instruments use different segmentation boundaries to delineate SCP and DCP. The major limitations of this study include the small sample size of patients with multiple comparisons and the lack of homogeneity in the number of different study groups. However, we decided to use Bonferroni's correction for multiple comparisons in order to reduce the risk of having false-positive results, strengthening the validity of our results. In addition, the power of the study is given by the size of the smallest group (control group); thus, the difference in the group numbers should not influence the final results. 
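As a concrete illustration of the agreement analysis described in the Statistical Analysis section and discussed above, the following is a minimal sketch of the Bland-Altman computation (bias, 95% limits of agreement, and the LA-interval width expressed as a percentage of the mean). Array names are illustrative, not the authors' own.

```python
import numpy as np

def bland_altman(triton, plex):
    """Paired measurements of one parameter from the two devices."""
    a, b = np.asarray(triton, float), np.asarray(plex, float)
    diff = b - a                                  # method B (PLEX Elite) minus method A (DRI-Triton)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)    # 95% limits of agreement
    la_width_pct = 100.0 * (loa[1] - loa[0]) / ((a + b) / 2).mean()
    return bias, loa, la_width_pct
```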
In conclusion, this study documents early microvascular changes occurring in the macular region of patients at the initial stages of DR. These changes were confirmed with both SS OCT-A instruments. Based on these results, we would suggest to perform 3 × 3 mm macular angiocube scans when using DRI-Triton Plus OCT-A, due to its higher resolution. On the other hand, PLEX Elite 6 × 6 mm angiocube scans seem to detect earlier vascular perfusion changes. Therefore, we should be careful in the evaluation of OCT-A results obtained with different devices: the fact that early microvascular alterations could not be seen does not necessarily mean that these alterations are not actually present, but this could be an intrinsic limitation of the device itself. Further, larger longitudinal studies are needed to better understand the exact extent of microvascular damage in very early stages of diabetic retinal disease and to precisely define the strengths and weaknesses of different OCT-A devices and different scan protocols. Data Availability The data used to support the findings of this study are included within the article. Disclosure This study was partially presented at the 29th EASDec Annual Meeting 2019, 16th-18th of May, Amsterdam, Netherlands.
Obtaining the Specific Heat of Hadronic Matter from CERN/RHIC Experiments The specific heat of hot hadronic matter is related to particle production yields from experiments done at CERN/RHIC. The mass fluctuation of excited hadrons plays an important role. Connections of the specific heat, the mean hadronic mass excited and its fluctuation with properties of the baryon and electric chemical potentials (value, slope and curvature) are also developed. A possible divergence of the specific heat as 1/(T_0-T)^2 is discussed. Some connections with net charge fluctuations are noted.

Introduction
The statistical model of very high energy collisions can account for particle production yields from very high energy collisions [1][2][3][4][5]. The same model also contains information regarding the thermodynamic properties of this system of particles. One important thermodynamic property is the specific heat. In this paper, expressions for the specific heat will first be developed. The importance of a study of the specific heat stems from the fact that sudden changes in the specific heat have been used as signals for phase transitions. A classic example of this statement is the lambda transition in liquid helium. The name lambda transition reflects the lambda shape of the specific heat, with a very sharp rise followed by a sudden decrease. In the liquid-gas phase transition of nuclear matter at moderate excitation energy or temperature, a very similar sharp peak in the specific heat was found in a theoretical model developed in ref [6,7]. This is associated with the increase in the surface energy of the system as the original nucleus breaks into smaller and smaller clusters with increasing temperature. For the situation discussed here, a rapid rise in the specific heat is associated with large fluctuations in the mass spectrum of the excited particles. Event-by-event studies [8] have also been stressed along with temperature fluctuations [9]. Large values of the specific heat are associated with large energy fluctuations. The compressibility is associated with density fluctuations [10]. Fluctuations associated with net electric charge and baryon charge have also been of recent interest [11,12], as well as p_t fluctuations [13]. An overview of fluctuations and correlations can be found in [14,15]. Connections of some of the quantities that appear in this paper with baryon and electric charge fluctuations will be mentioned. The organization of this paper is as follows. First, results of the statistical model are given for particle production yields and for thermodynamic quantities in situations where constraints associated with conservation laws are important, such as in heavy ion collisions. The specific heat is then connected to properties of the particle production yields and conserved charges such as baryon number B and electric charge Z. Limiting cases of the specific heat are discussed which show a connection of the specific heat to the mass fluctuation in the spectrum of excited particles. Connections of the specific heat with the behavior of the chemical potential (its value, slope and curvature) are also developed. The distribution of particles obtained from the detailed analysis of fitting the statistical model to hadronic multiplicity data in Pb-Pb collisions at 30A, 40A and 80A GeV [3] is then used to study the behavior of the specific heat. A parameterization of the behavior of the chemical potential with T from this analysis may indicate a sharp increase in the specific heat.
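The statement above that large values of the specific heat correspond to large energy fluctuations follows from the standard canonical-ensemble relation, recalled here for reference (a textbook identity, with Boltzmann's constant set to 1, and not quoted from this paper):

```latex
C_V \;=\; \left(\frac{\partial \langle E\rangle}{\partial T}\right)_V
      \;=\; \frac{\langle E^2\rangle - \langle E\rangle^2}{T^2}.
```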
The statistical model of heavy ion collisions assumes that hadron multiplicities are the result of an established thermal and chemical equilibrium [16] in some interaction volume V at some temperature T . The interaction volume is the freeze out volume which is the largest volume over which equilibrium is maintained in the evolution of the fireball. The statistical equilibrium is developed from the underlying collisions between particles from the strong force. Here initial reaction rates are assumed fast enough compared to an expansion rate so that a quasi-equilibrium can be achieved. As the system expands, reaction rates quickly drop because of rapidly decreasing densities and equilibrium is broken at some point in the evolution. The simplest assumption is that all particles freeze out at the same volume V and temperature T . The particle multiplicity distributions is model are then The j b and j q are the baryon number and charge of particle j which has spin degeneracy j g and mass j m . The B  and Q  are the baryon and charge chemical potentials. The strangeness chemical potential S  will be set equal to 0 and the strangeness suppression factor S  will be set equal to 1. The main focus will be on the baryon and electric charge conservation in systems with large B and Z . The energy of particle j is given by The arguments of the Bessel K functions in eq(2) are the same as in eq(1). The energy equation has the particles rest mass within it. The sum over k in the above equations gives the degeneracy corrections, with the 1  k term the non-degenerate or Maxwell Boltzmann limit. For non-relativistic particles in the non-degenerate limit, the The thermal wavelength  of particle j is given by . The energy in this limit is simply T . For zero mass particles:  . Features of the chemical potentials B  and Q  . The two chemical potentials are determined by the constraint conditions on total baryon number B and total charge Q or Z . Namely: Moreover, the derivative of these chemical potentials with respect to T can be obtained from the conditions T B   / = 0 and T Z   / = 0. Also, use will later be made of 0 As an example, consider a system with the multiplicity of all particles given by the non-degenerate non-relativistic limit of eq(3). Then, Here, the various quantities that appear in eq(6) are defined by the following equations: where C = bb C qq C -2 bq C . If we neglect antibaryons production and take j b = 1 for all baryons (no composites) then bb C = B . Mesons don't contribute to either bb C or bq C . Antiparticles enhance bb C , qq C and contribute to bq C with the same sign as the associated particle. It is important to note that bb C and qq C depend on 2 j b and 2 j q . They are measures of baryon number and charge fluctuations and depend on the fundamental baryon number and electric charge. In a Q-g phase these coefficients will be different since the unit of charge is 1/3 rather than 1 as used in ref [11,12]. Here, the focus will be properties of the hadron phase and the rise of the specific heat as it approaches a possible transition temperature 0 T . In a future paper a discussion of the specific heat of the Q-g phase will be given. It can be calculated in the simple approximation of ideal gases of gluons and quarks using , where S is the entropy. 
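The display equations referred to above as eq (1) and eq (2) did not survive text extraction. As a hedged reconstruction consistent with the surrounding description (a sum over k giving the degeneracy corrections, Bessel K_2 functions, and chemical potentials μ_B and μ_Q coupling to the baryon number b_j and charge q_j of species j), the k = 1 (Maxwell-Boltzmann) term of the conventional grand-canonical multiplicity in the statistical hadronization model reads, with strangeness terms dropped as stated in the text:

```latex
\langle N_j\rangle \;\simeq\; \frac{g_j V}{2\pi^2}\; T\, m_j^{2}\,
K_2\!\left(\frac{m_j}{T}\right)
\exp\!\left(\frac{b_j\,\mu_B + q_j\,\mu_Q}{T}\right),
\qquad
C_V \;=\; T\left(\frac{\partial S}{\partial T}\right)_V ,
```

where the second relation is the standard thermodynamic identity that the final sentence above appears to invoke for the quark-gluon phase ("where S is the entropy"). This is a reconstruction of the standard form, not necessarily the authors' exact eq (1).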
For the case of massless pions and all other mesons and baryons taken in the non-relativistic limit, the results of eq(8) become while the Z is the total conserved charge and also contains the contribution of the charged pions. However, massless pions do not affect Q M . For nonrelativistic pions, the contribution of pions appears in Q M and the extra  Z term in eq(9) is no longer present. Expressions for the specific heat of hadronic matter. The specific heat of hadronic matter in the non-degenerate and the non-relativistic limit for all particles is given by The first term on the right hand side of the last equation is just the ideal gas specific heat of each non-relativistic particle, with both mesons and baryons contributing. The second term involves the mass spectrum of all particles produced. The curly bracket or third term in the first equality has three contributions and involves the three coefficients bb C , qq C and bq C . The second and the third term arise from the possibility that the particle The second term will be cancelled by the third term for a system which has and connects V C to information about the behavior of the baryon and electric charge chemical potentials. When pions are taken in the zero mass and non-degenerate limit, the V C is somewhat modified and now reads The first two sums over j on the right hand side of this equation exclude the pion contribution in their evaluation. The pion contribution is now contained in the following terms in that equation. The third term is the direct contribution of the pion as if it were independent of the charge conservation law and the curly bracket term arises from the chemical potentials and associated constraints. When these constraints are neglect, the curly bracket term is zero. The independent pion contribution can also be calculated using the results of eq(1) and eq (2). The exact expression for the specific heat per particle of an unconstrained meson or boson including statistical corrections reads: . If statistical corrections are neglected this limit would be 12, with the zeta functions giving the corrections from the sums over k in eq (12). The non-degenerate and large T m / limit of eq (12) is This is the characteristic dependence of the first 2 terms in eq(10). Simplified model 1; only conserved baryon charge To gain some further insight into properties of V C , a simplified situation of one conserved charge will be considered. Namely, in this subsection only baryon number conservation will be imposed on the system. Then all charge and neutral states of the same baryon will have equal yields. Mesons and baryons will also completely decouple and the specific heat V C will be a sum of independent contributions from mesons given by eq (12) The sums over j are over both baryons and anti-baryons. Anti-baryons are usually a small fraction of the total baryon number. If we allow only baryons with j b =1, then bb C = B , and eq(19) can be rewritten in a simpler form where the mean mass and its fluctuation are determined by f can be used to obtain the mean baryonic mass that is excited by rearranging the baryon constraint condition to read Again, if anti-baryon production is neglected, the coefficient bb C B / = 1. The condition 2  2 / T B  =0 can be used to obtain an expression for the mean square fluctuation in the masses that are excited. This condition and the case for all j b =1 gives Using this last result, the Role of anti-baryons The presence of anti-baryons will modify some of the results given in sect [2.4]. 
For collision energies  100A GeV anti-baryons make up a few percent of B . From ref (3), the anti-proton, proton ratio is ~2% for the 80AGeV Pb Pb  collision. This ratio rises to ~5% for the 158A GeV collision. For a B ~300 MeV and T~150, exp( B  2  / ) T = exp(-4.)~2%, which determines the anti-particle/ particle ratio in the absence of an electric chemical potential. The anti-particle, particle ratio will increase at much higher energy because The last equality arises from . The specific heat is also very large, the rhs of eq(20,21) reduces to eq(19). The specific heat now involves both the curvature and slope of the chemical potential and the value of the chemical potential itself. In the limit 2 / 1   y x the B V C , of eq(20) becomes the unconstrained limit: Thus, the values of B  , its derivative and curvature also contain the information necessary to evaluate various quantities of interest regarding the mass excitation. Role of electric charge conservation A hybrid case consists of no anti-baryon production (all j b = 1), and having both baryon and electric charge conservation. This model reflects the fact that anti-baryon production is suppressed compared to  charged particle production. The production of  charged pion pairs is easier than a baryon-antibaryon pair. This hybrid model is also useful as a way of seeing how the two constraints act together in the expression for the specific heat and mass spectrum of produced hadrons. To keep final results as simple as possible within this hybrid case, the assumptions (25) for the mean square fluctuation for baryons and the mean square fluctuation for mesons is The specific heat can then be obtained from the above equations when they are substituted into eq(23). The result relates V C to properties of B  and . and eq(23) gives for  and T given in ref [3] for Pb Pb  collisions at 30, 40, 80 and 158A GeV and for Au Au  11.6A GeV collisions, and the above parameterization of the behavior of B  with / are shown in Table1. The last three columns are obtained from results in sect[2.5] that include antibaryons. The previous three columns are without anti-baryons obtained from expressions in sect [2.4]. The curvature and slope functions are obtained from this parameterization . The anti-baryon case also used this parameterization to evaluate the chemical potential, while the case with just baryons used the chemical potential of ref (3). The error bars in T and B  are not given and generate large error bars in the results for the mean mass, mass fluctuation and specific heat, especially at the higher temperatures. These errors are typically %. 20  Two sets of numbers for each energy appear in the table since ref [3] has two main analysis of the data, called A and B. The results presented in the table show that the specific heat per particle for baryons are very different from the ideal gas contribution of 1. (12), each of these mesons contributes to V C as follows: Summary and Conclusions The properties of the specific heat of hadronic matter produced in very high energy nucleus-nucleus collisions such as those at RHIC and CERN are studied in this paper. The grand canonical statistical model is used to develop expressions for the particle multiplicity distribution and energy caloric equation of state which is then used to obtain the specific heat. 
The constraints associated with baryon number and electric charge conservation are included to obtain an expression for the specific heat which contains the particle yields, the mass spectrum of produced particles, the baryon number B and overall charge Z values, and three coefficients C_bb, C_qq and C_bq. These coefficients come from the correlations introduced by requiring overall baryon number and electric charge conservation in the spectrum of produced particles. The coefficients also depend on the fundamental unit of charge being different for the case of 1/3 (Q-g phase) compared to 1 (hadron phase). C_bb and C_qq are also measures of baryon number and charge fluctuations [11,12]. The specific heat is not simply a sum of independent contributions arising from each type of particle. Rather, C_V has additional terms which significantly alter its value from this independent particle result. Resonance excitations allow for the possibility that individual particle yields change with T and redistribute the conserved charge and baryon number on other particles. The behavior of C_V is studied in some limiting cases to see how various quantities such as baryonic charge conservation, the production of antibaryons and electric charge conservation affect it. The mass fluctuation in excited resonances is shown to play an important role. Using properties of the constraint equations, the specific heat and mass spectrum of excited hadrons are related to properties of the baryon and electric charge chemical potentials μ_B and μ_Q. In particular, the specific heat is related to the curvature of the chemical potentials, their slope and their value. The mean hadronic mass that is produced in a heavy ion collision involves the chemical potentials and their slopes, and the mass fluctuation involves these quantities and the curvature of the chemical potentials. A recent parameterization of the baryonic chemical potential with T is shown to lead to a very rapid increase in the baryonic component of the specific heat. If this expression correctly describes the behavior of μ_B near a limiting temperature T_0, then C_V would diverge as 1/(T_0 - T)^2. Moreover, the exponent 2 is independent of the functional form used near T_0. The presence of anti-baryons plays a key role in the temperature behavior of C_V and in this independence property.
Cryogen Free Cryostat Advanced neutron facilities operating at temperatures below 1 K rely mainly on helium and dilution refrigerator inserts used with an orange cryostat (OC) or a similar system. The recent global helium supply problem has increased the cost of liquid helium and raised concerns about the affordability of such cryostats. The design and results presented here concern a cryogen-free top-loading cryostat. The dilution refrigerator insert used in this work provides a sample environment for neutron scattering experiments over a temperature range from 36 mK to 293 K. The refrigerator insert is operated in continuous regime, and its cool-down time is similar to that of an orange cryostat insert. The performance criteria adopted are the base temperature, the cool-down rate and the cooling power, as given in the specification of a standard dilution refrigerator. The system offers operating parameters similar to those of an orange cryostat but without the complications of cryogens. The first scientific results obtained with the system in low-temperature neutron scattering experiments are also discussed.

INTRODUCTION
Neutron scattering experiments at temperatures below 1 K are usually served by helium and dilution refrigerator inserts placed in a top-loading orange cryostat. Demand for this sample environment has grown over the last decade and now exceeds eighty low-temperature experiments per year. Cryogenic equipment requires significant resources and brings a number of problems, including health and safety issues as well as the considerable cost of the required cryogens. Cryocooler technology offers systems in which cryogen consumption is reduced or eliminated entirely. Such systems also have the advantages of a smaller footprint, simplicity and improved safety for the users. A successful representative of the cryocooler family is the pulse tube refrigerator. Its distinctive feature is the absence of cold moving parts, which reduces the vibrations and noise generated by the cooler. The reliability of the cold head is also increased, because no high-precision seals are required and it can be operated without service inspections. The first scientific results and operational experience reported here were obtained in neutron beam experiments at low temperature, with the sample environment provided by a dilution refrigerator insert of 25 mm diameter housed in a cryogen-free top-loading cryostat. The cryostat provides a sample environment over the temperature range from 2 K to 293 K with a high cooling power of 0.19 W at 2.0 K. With the dilution refrigerator insert loaded in the cryostat, neutron diffraction measurements can be performed at a temperature level of 40 mK. The system was developed in a collaborative project between Oxford and ISIS.

Design of the system. The design is based on the idea of a top-loading cryogen-free system with a helium condensation loop, which was successfully analysed in the first prototype.
The major change between the first prototype and the second prototype used in the experiments presented here is the replacement of the top-loading cryostat and the inclusion of a pulse tube refrigerator that provides higher efficiency at the high flow rates favoured in beam-line operation. Neutron scattering experiments use modern dilution refrigerators based on designs that include sintered-silver heat exchangers; a smaller number of scattering experiments also require low temperature combined with high cooling power, for which a powerful cryogen-free dilution unit built around a PTR-type cryocooler is used. For this purpose the dilution insert is used with the cryostat VTI, and the required cooling power is produced by the VTI heat exchanger. The cool-down procedure consists of three main phases. In the first phase the insert is pre-cooled from room temperature down to 2.0 K using helium exchange gas in the refrigerator vacuum jacket. After the VTI has warmed up to 4 K, the exchange gas is pumped out for approximately one hour. Once a good vacuum is achieved in the vacuum jacket, the VTI is switched off and the dilution refrigerator begins condensing the helium mixture, which is then circulated in an automated regime.

Operation of the system
The base temperature of the refrigerator cooling system is shown in figure 1, which presents the temperature behaviour at the VTI heat exchanger and the dependence of the mixing chamber temperature on time. The refrigerator is operated in a fully automated manner. The full cool-down of the dilution insert starts after it is loaded into the VTI, as is typical for a conventional cryogen-based top-loading cryostat. An initial overnight run brings the system to stable performance. Figure 2 shows the heat exchanger and mixing chamber temperatures as a function of time. The stability is explained by the absence of the time-dependent changes of liquid level in the nitrogen and helium vessels that occur in conventional cryostat operation. In the central part of this run, the drop in temperature is caused by switching the neutron beam off; after some time the neutron beam returns and the mixing chamber temperature goes back to its previous value. The heating power of the neutron beam is thus estimated to be of the order of a few microwatts.

Experimental Outcomes
A low-temperature sample environment for a neutron scattering experiment was created with the cryostat. Figure 3 shows the diffraction pattern obtained from a powder sample of Sr2Ir0.5Rh0.5O4. This insulating compound is expected to show magnetic order at 0.7 K. Interest in 5d oxides has increased because of the phases of matter that appear when the spin-orbit coupling becomes comparable to the energy scale of U, and a competition between magnetic interactions and spin-orbit coupling is observed. Laboratory results provide good-quality neutron powder diffraction capable of revealing the magnetic ordering, with the ordered phase appearing at 0.7 K. A dilution refrigerator is therefore required to cool the sample to sufficiently low temperature. The high-quality polycrystalline sample is sealed within a thin-walled vanadium can to which helium (He) gas is added as the heat-exchange medium.
This method provides an effective way of ensuring the necessary temperature uniformity across the vanadium can and the sample. The counting statistics obtained in this work give good coverage and angular resolution. The base temperature was very stable, remaining within 0.2 mK during the whole measurement. An internal development programme proposes progressively replacing all conventional cryogenic systems with cryogen-free systems based mainly on the pulse tube refrigerator (PTR). The present work presents the scientific results and operational experience obtained in beam-line experiments at low temperature in this sample environment. The refrigerator is housed in the recently developed top-loading cryogen-free cryostat. The system also demonstrates long-term stability at a base temperature of 37 mK, and high-resolution data were obtained in the experiment. The operating parameters are broadly similar to those of a standard orange cryostat, but without its complications. A cryostat is a device that allows a cryogenic environment to be maintained for as long as required. Different types of cryostat are available, each with its own advantages and disadvantages relative to the others. For decades the orange cryostat has been the workhorse of neutron scattering facilities. It is based on a top-loading helium bath cryostat with a nitrogen-cooled infrared radiation shield and reaches a temperature of 1.5 K. The infrared shield is made of highly conductive copper covered with aluminium foil to reduce its emissivity.

Conclusion
In neutron scattering instruments, the compactness and reliability of cryogen-free technology allow coolers to occupy new niches as instrument components. One widespread application of CCRs is the cooling of cryogenic beryllium filters. A block of beryllium transmits only low-energy neutrons and scatters neutrons of higher energy. The transmission of neutrons at wavelengths beyond the cut-off is limited mainly by phonon scattering; cooling the filter below 100 K suppresses the scattering from thermally excited phonons. The advantages of cryogen-free systems include simplicity of operation, reduced or completely eliminated liquid-cryogen top-ups, safer operation, a significant reduction in the resources required, high thermodynamic efficiency, smaller system size and simpler operation, and greater environmental friendliness and convenience compared with other cryogenic approaches. Some disadvantages are the limited cooling power of a CCR, an increased demand for cooling water and electricity, and possible noise, vibrations and magnetic-field disturbance; these difficulties are radically improved by the absence of cold moving parts in the PTR.
Low power hybrid PG_Filter-AGC analog baseband for wireless receivers : A low power hybrid PG _ Filter-AGC analog baseband is presented, including a programmable fi lter ( PG _ fi lter ) and an auto gain control core ( AGC _ core ). It adopts the digital-plus-analog mixed gain control methodology, resulting in an e ff ective power reduction and a decibel gain error improvement. To further reduce the power of the AGC _ core , a low power Variable Gain Ampli fi er (VGA) adopting sub-threshold design methodology is presented. Furthermore, a self-adaptive threshold voltage compensation (SATC) scheme is proposed to guarantee the good anti-process variation performance for sub-threshold design methodology. The hybrid analog baseband has been fabricated under SMIC 0.18 µm CMOS process, with a die size of 0.45 mm 2 , where the AGC _ core occupies an area of 0.28 mm 2 . The test results demonstrate a total power of 4.1 mW, where the AGC _ core consumes a power of 0.39 mW. A consecutive gain dynamic range of 80 dB, with a decibel gain error small than ² 0.39 dB, is achieved and the cuto ff frequency ranges from 0.5 MHz ∼ 30 MHz. Introduction The analog baseband composed of intermediate frequency (IF) filter and Auto Gain Control (AGC) circuit is an indispensable circuit block for wireless receiver. The AGC plays an important role in stabilizing the output power of the receiver and relaxes the required dynamic range of cascaded analog to digital converter (ADC). The IF filter selects desired signals from interferences and anti-aliases unwanted noise and blockers. Both of these two circuits are crucial for excluding unwanted effects, such as the interferences, aliased noise, and gain variation [1]. Power consumption is increasingly becoming the dominant factors and inevitable urgent requirements for wireless receivers, especially when taking into account of the explosive growth of mobile, portable applications [2,3]. This is also quite an issue for the analog baseband. However, most studies, which tend to require huge power, do not pay sufficient attention to these issues [3,4]. In this work, a low power hybrid PG_filter-AGC analog baseband is proposed. By adopting a hybrid architecture, a digital-plus-analog mixed gain control methodology is achieved. Then, the required gain dynamic range of AGC_core is relaxed. As a result, a less gain of the AGC_core is required, and the power of the analog baseband is reduced from the system level design. Moreover, a less gain dynamic range also results in an improved decibel gain error of AGC_core. For the AGC_core, the necessary decibel linear gain control characteristic of VGA is crucial for stabilizing the settling time of the AGC loop. For full-CMOS implementation of VGA, typical approaches includes the digital or analog design strategy. The digital solutions usually adopts switchable resistor network [5], while the analog solutions include the Taylor series approximation [6] or the pseudoexponential approximation [7]. However, these solutions suffer from high-power and large decibel gain error, especially when a large gain dynamic range is required. These defects greatly restrict their application prospect when facing the urgent low power requirement. With the progress of the semiconductor technology and design strategy, the sub-threshold design methodology, which demonstrates a well exponential I-V curvature, has been verified as a realistic low-power solution. 
Lee [8] has verified the reliability of the sub-threshold low noise amplifier (LNA) and the feasibility of the sub-threshold voltage controlled oscillator (VCO) [9]. A sub-threshold voltage reference is proposed by Du [10] with satisfactory performance. The sub-threshold design methodology also provides a new low power solution for VGA. In this work, a low power VGA adopting sub-threshold design methodology is proposed. Furthermore, a self-adaptive threshold compensation (SATC) methodology is introduced to guarantee a good anti-process variation performance for the sub-threshold design methodology. The hybrid analog baseband has been fabricated in SMIC 0.18 µm CMOS process. Test results demonstrate a dynamic gain rang of 80 dB with a decibel gain error small than AE0:39 dB, while the cut-off frequency ranges from 0.5 MHz∼ 30 MHz. The total power of the hybrid analog baseband is 4.1 mW, where the AGC_core consumes a power of 0.39 mW. System architecture The proposed analog baseband shown in Fig. 1 adopts a hybrid architecture, where the PG_ filter, whose gain is digitally programmable, is subsumed into the AGC gain control loop and acts as the gain stage of the VGA. As a result, by cascading the PG_ filter and a VGA with an analog consecutive controllable gain, a digitalplus-analog mixed gain control strategy is achieved. Then, even with a small gain dynamic range of VGA, a large gain dynamic rang of the analog baseband is achieved, and the decibel gain error is only determined by the VGA, as shown in Fig. 2. In this work, the gain dynamic range of the VGA is 23 dB (including a 3 dB redundant gain for gain variation suppression). Combined with the PG_ filter, the gain step of which is set to be 20 dB, a gain dynamic range of 80 dB of the analog baseband is achieved. With the above architecture design methodology, a less gain dynamic range of VGA is required, which means less gain stages of VGA. Consequently, the power and hardware cost of the VGA is greatly reduced. Moreover, a smaller dynamic gain range of the VGA also means a smaller decibel gain error. With the hybrid digital-plus-analog mixed gain control strategy, the reduction of the gain dynamic range of VGA also helps reducing the decibel gain error of the analog baseband, as the decibel gain error is determined by the VGA, which is also shown in Fig. 2. Detailed circuit design of hybrid analog baseband A. The VGA adopting sub-threshold design methodology a) Architecture design Fig. 3(a) shows the configuration of the proposed VGA. The sub-threshold exponential current generator (Iexp_gen) and the SATC block form the sub-threshold decibel gain control circuit to realize decibel gain for VGA. There are two gain stages for the VGA and each gain stage is comprised of two sub-amplifiers. The gain characteristic of the sub-amplifier is shown in the upper-left side of Fig. 3(a). When the exponential I-V function between V cg and I exp holds, the gain of the VGA demonstrates a good decibel gain characteristic. Then, how to achieve the exponential I-V function between V cg and I exp is crucial. Mexp, the core transistor of Iexp_gen is biased in the sub-threshold region. According to the well-established exponential I-V curvature of sub-threshold MOS transistor [8, 9, 10], the function between V cg and I exp can be expressed as: where: S and V th is the aspect ratio and threshold voltage, V ds is the drain and source voltage difference. V T ¼ kT=q (26 mV@27°C). 
Detailed circuit design of the hybrid analog baseband

A. The VGA adopting the sub-threshold design methodology

a) Architecture design

Fig. 3(a) shows the configuration of the proposed VGA. The sub-threshold exponential current generator (Iexp_gen) and the SATC block form the sub-threshold decibel gain control circuit that realises the decibel gain of the VGA. The VGA has two gain stages, and each gain stage is comprised of two sub-amplifiers. The gain characteristic of the sub-amplifier is shown in the upper-left part of Fig. 3(a). When the exponential I-V function between V_cg and I_exp holds, the gain of the VGA exhibits a good decibel characteristic; how to achieve this exponential I-V function is therefore the crucial question. Mexp, the core transistor of Iexp_gen, is biased in the sub-threshold region. According to the well-established exponential I-V characteristic of a sub-threshold MOS transistor [8, 9, 10], the function between V_cg and I_exp can be expressed as

I_exp = I_s0 · S_Mexp · exp[(V_cg − V_th − V_off)/(n·V_T)] · (1 − exp(−V_ds/V_T)),   (1)

where S and V_th are the aspect ratio and threshold voltage, V_ds is the drain-source voltage difference, V_T = kT/q (26 mV at 27 °C), n is defined as the differential of the gate voltage V_G with respect to the cut-off voltage V_p and ranges from 1 to 2, and V_off is the gate-source voltage measured at I_ds = 0; for the process adopted in this work, V_off = 130 mV. µ_0, ε_si, Φ_s and N_ch are parameters defined by the process. When V_ds > 4V_T, the last term in Eq. (1) can be neglected. I_s0·S_Mexp is a constant once the design parameters are set; denoting it by λ, we obtain

I_exp = λ · exp[(V_cg − V_th − V_off)/(n·V_T)].   (3)

Then the exponential I-V function between V_cg and I_exp holds, and the decibel gain tuning characteristic of the VGA is guaranteed.

The exponential I-V characteristic of the sub-threshold MOS transistor is crucial for the sub-threshold design methodology. However, as shown in Eq. (3), I_exp depends directly on V_th, whose process variation may be 100 mV or even larger. Thus, two main problems arise in the sub-threshold design methodology: 1) the MOS transistor must be biased in the sub-threshold region reliably to guarantee the exponential I-V characteristic; 2) the process variation of V_th is of great concern, as it may cause a huge variation of I_exp or even drive the MOS transistor out of the sub-threshold region. Either problem can degrade the performance, or even destroy the function, of the exponential I-V relation between V_cg and I_exp. However, to the authors' knowledge, few studies report solutions to these problems.

To address these questions, the SATC block shown in Fig. 3(a) is introduced. With the help of the SATC block, as proved in the following text, Mexp is biased in the sub-threshold region reliably. Moreover, the process variation of V_th is simultaneously cancelled, which guarantees a stable exponential I-V relation between V_cg and I_exp. Finally, a stable decibel gain of the VGA is guaranteed even under a large process variation.

b) SATC scheme and circuit design

The schematic of the V-I converter is shown in Fig. 3(b). The voltage difference across the gate and source of Mexp, V_cg, can be deduced as in Eq. (4), where V_dth, generated by the Vth-detector, is the crucial parameter of the SATC scheme, as proved by the following formulas. The schematic of the Vth-detector is shown in Fig. 3(c). Transistors m1-m4 are all biased in the sub-threshold region, and transistor m3 works as a pseudo-diode. Combined with amplifier A_0, the three feedback loops shown in Fig. 3(c) are formed, whose total gain forces V_a ≈ V_b; this is the basis of the following deduction. It should be noted that the introduction of the pseudo-diode helps to improve the total loop gain of the Vth-detector, which makes V_a and V_b track each other more accurately, as proved in Appendix A. The detected threshold voltage is then given by Eq. (5).

When V_cg is applied to Mexp, I_exp can be expressed as in Eq. (6). Denoting the first term in the exponential function by α and the coefficient of V_ctrl by β, we obtain Eq. (7). As long as the design parameters are set, α, β and λ are all constants. Eq. (7) then shows that I_exp is determined only by the control voltage V_ctrl, while V_th is cancelled by V_dth. Moreover, when all design parameters are reasonably set to satisfy Eq. (8), V_cg = V_gs,Mexp < V_th always holds and Mexp is always biased in the sub-threshold region reliably, even with a large process variation. In addition, the variation of I_exp due to V_th drift is compensated, as can be seen from Eqs. (3) and (7). Based on this analysis, the problems of the sub-threshold design methodology mentioned above are successfully solved, which guarantees the good performance and robustness of the proposed sub-threshold decibel gain control circuit.
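The cancellation can be mimicked with a small numerical sketch of Eqs. (1), (3) and (7) (ours; the constants λ, n, V_off and the assumed V_th spread are illustrative placeholders, not extracted device parameters):

```python
# Numerical sketch: the sub-threshold current is exponential in the control
# voltage, and adding the detected threshold V_dth back into V_cg cancels
# V_th in the exponent, as the SATC scheme intends. Values are illustrative.
import math

V_T = 0.026          # thermal voltage kT/q at ~27 C [V]
n = 1.4              # sub-threshold slope factor (between 1 and 2)
V_off = 0.130        # gate-source voltage at I_ds = 0 for this process [V]
lam = 1e-7           # lam = I_s0 * S_Mexp, fixed by design parameters [A]

def i_exp(v_cg: float, v_th: float) -> float:
    """Eq. (3): sub-threshold current, last term of Eq. (1) neglected."""
    return lam * math.exp((v_cg - v_th - V_off) / (n * V_T))

def v_cg_with_satc(v_ctrl: float, v_dth: float) -> float:
    """Schematic form of Eq. (4): the V-I converter adds V_dth to V_ctrl."""
    return v_ctrl + v_dth

for v_th in (0.35, 0.45, 0.55):          # ~+/-100 mV process spread
    v_dth = v_th                          # ideal Vth-detector: V_dth = V_th
    i = i_exp(v_cg_with_satc(0.10, v_dth), v_th)
    print(f"V_th = {v_th:.2f} V -> I_exp = {i:.3e} A")
```

For every assumed V_th the printed current is identical, which is exactly the cancellation, cf. Eq. (7), that the SATC scheme is designed to enforce.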
c) Gain stage design

To realise a gain stage with the characteristic shown in the upper-left part of Fig. 3(a), the simplest and most direct way is to adopt two cascaded sub-amplifiers with resistive loads, as shown in Fig. 4. However, under the bias of a changing exponential current for gain tuning, the quiescent output voltage of each sub-amplifier is V_1 (or V_op) = R·I_exp. As a result, tuning the gain through I_exp causes a large deviation of the quiescent operating point of the sub-amplifier. This is dangerous, as it may drive the sub-amplifier into the linear or off region. Consequently, the allowable operating range is restricted and the gain dynamic range of the VGA is narrowed, as shown on the right side of Fig. 4.

To stabilise the quiescent operating point of the sub-amplifier, a current-network-loaded sub-amplifier is proposed, as shown in Fig. 3(d). It is composed of a diode-connected MOS transistor and a current source biased by I_exp. The quiescent output operating point of each sub-amplifier can then be re-written as in Eq. (9). Thus, the quiescent output operating point of each sub-amplifier remains stable, even under a large variation of the exponential gain-tuning current I_exp. According to Fig. 3(d), with reasonable parameter settings, the gain of each gain stage can be derived as in Eq. (10). With proper design parameter settings, the constant "1" in the square brackets can be neglected (this approximation may introduce some decibel gain error; nevertheless, the hybrid architecture of the analog baseband reduces the gain tuning range of the VGA, so the decibel gain error is also reduced to a negligible level, as mentioned above), which yields Eq. (11).

Eq. (11) indicates a good decibel gain characteristic of the VGA. Moreover, as the V_th variation of I_exp has been compensated by the proposed SATC scheme, the gain of each VGA gain stage is likewise determined only by the control voltage V_ctrl. Finally, a stable gain of the VGA is achieved despite process variation.

B. Ctrl_loop circuit design

The loop control circuit Ctrl_loop is shown in Fig. 5, where the Clamp circuit sets the highest voltage level of V_ctrl to guarantee that Eq. (8) always holds. The Peak_detector senses the highest voltage level of the output of the analog baseband, while the target output amplitude is set by V_amp (the input of the Error amplifier). The Comparator logic generates a 4-bit digital control code G_prg, which is then sent to the PG_filter to set its digital gain. Moreover, the Error amplifier, Clamp circuit and Comparator logic are all biased in the sub-threshold region to save power without sacrificing performance, while the Peak_detector is biased in the saturation region for a higher operating speed.

C. Circuit design of the PG_filter

For the PG_filter, the Sallen-Key biquad filter synthesis method is discarded because of its sensitivity to process variation and parasitic effects. In this study, the Tow-Thomas biquad filter synthesis method is adopted for its relatively high stability and low sensitivity to parasitic effects and process variation. Fig. 6 shows the schematic of the 4th-order Butterworth (chosen for its relaxed requirements on the core operational amplifier) low-pass PG_filter. It consists of two cascaded 2nd-order Tow-Thomas biquads with the standard 2nd-order low-pass transfer function, from which the detailed parameters of the Tow-Thomas biquad — pass-band gain, 3-dB cut-off frequency and Q factor — follow. The methodology of setting the design parameters is of great importance for realising independent cut-off frequency programming and digital gain control without changing the Q factor. The following strategy is therefore adopted: the tunable resistor R_1, under the control of G_prg, is responsible for the digital gain control; adjustment of the product of C_1 and C_2, under the control of BW_prg, is responsible for the cut-off frequency manipulation, while C_1/C_2 is kept constant and R_2, R_3 and R_4 remain unchanged to keep a stable Q factor.
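The programming strategy can be checked at the transfer-function level with a short sketch (ours; the section Q values are the textbook 4th-order Butterworth pair, and all values are placeholders rather than the fabricated component values):

```python
# Sketch of the PG_filter response: a 4th-order Butterworth low-pass from two
# cascaded Tow-Thomas biquads. The pass-band gain (set by R1 in the real
# circuit) scales only K; the cut-off (set by the C1*C2 product) scales only
# w0; the Q values are never touched.
import numpy as np

Q_PAIR = (0.5412, 1.3066)   # standard 4th-order Butterworth section Qs

def biquad(s, w0, Q, K=1.0):
    """Tow-Thomas low-pass section: K*w0^2 / (s^2 + (w0/Q)*s + w0^2)."""
    return K * w0**2 / (s**2 + (w0 / Q) * s + w0**2)

def pg_filter_db(f, fc_hz, gain_db):
    s = 2j * np.pi * f
    w0 = 2 * np.pi * fc_hz               # programmed via the C1*C2 product
    K = 10 ** (gain_db / 20.0)           # programmed via R1 (one section)
    h = biquad(s, w0, Q_PAIR[0], K) * biquad(s, w0, Q_PAIR[1])
    return 20 * np.log10(np.abs(h))

f = np.array([0.1e6, 0.5e6, 30e6, 60e6])
for fc in (0.5e6, 30e6):                  # the programmable 0.5-30 MHz range
    print(f"fc = {fc/1e6:4.1f} MHz:",
          [f"{g:8.2f} dB" for g in pg_filter_db(f, fc, gain_db=20.0)])
```

At f = fc the printed gain sits 3 dB below the programmed pass-band value, and moving fc leaves the pass-band gain and the Butterworth shape (hence Q) unchanged — the independence the parameter-setting strategy is after.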
Measurement results

The analog baseband was fabricated in SMIC 0.18 µm CMOS technology. Since the VGA determines the performance of the analog baseband, an independent AGC_core was taped out and tested individually. Its photo is shown in Fig. 7(a); it occupies an area of 0.28 mm², including the Ctrl_loop but without test pads. Two test modes are adopted for the AGC_core, an open loop test and a closed loop test. 1) The open loop test is dedicated to the performance of the VGA and is realised by forcing a programmable voltage source on V_ctrl. The performance of the proposed VGA is summarised in Table I, which shows much lower power compared with previous works (Table I power row, in mW: 6.5, 0.55, 3.6, 11, 0.39**; **the power listed for this work is in fact the power of the AGC_core including the power of the Ctrl_loop). The test results in Fig. 8 show that the gain dynamic range is 23 dB while the decibel gain error is less than ±0.39 dB. These results demonstrate the correct function and good performance of the proposed VGA. 2) The closed loop test result is shown in Fig. 9, which demonstrates the good performance of the proposed AGC_core.

Fig. 7(b) shows the photo of the hybrid analog baseband, occupying a die area of 0.45 mm² without test pads. The performance summary is given in Table II. Due to the hybrid architecture, the gain dynamic range of the hybrid analog baseband reaches 80 dB, while the AGC_core provides only a dynamic range of 23 dB. The total power is 4.1 mW. Compared with former reports, the proposed analog baseband is power and cost efficient. The cut-off frequency of the analog baseband is shown in Fig. 10(a), which demonstrates a programmable bandwidth ranging from 0.5 MHz to 30 MHz. The overall gain characteristic of the full analog baseband with respect to input frequency is shown in Fig. 10(b). Fig. 11 gives the measured closed loop gain and dB error of the full hybrid analog baseband with respect to V_ctrl, showing that the gain increases monotonically as the input signal strength decreases. Moreover, the dB error test result (smaller than ±0.39 dB) also demonstrates that, as depicted in Fig. 2, the hybrid digital-plus-analog mixed gain control strategy of this work helps to improve the decibel gain error of the full hybrid analog baseband.
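As a side note on methodology, a decibel gain error figure of this kind is commonly quoted as the deviation from the best-fit dB-linear law; a minimal sketch with synthetic data (ours, not the measurements):

```python
# How a +/-x dB gain error figure is typically extracted: fit the measured
# gain-vs-V_ctrl curve with a dB-linear line and report the worst deviation.
# The "measured" samples below are synthetic placeholders.
import numpy as np

v_ctrl = np.linspace(0.0, 0.5, 11)                 # control voltage sweep [V]
ideal_slope = 46.0                                 # dB per volt (~23 dB range)
rng = np.random.default_rng(0)
gain_db = ideal_slope * v_ctrl + rng.normal(0, 0.15, v_ctrl.size)

slope, offset = np.polyfit(v_ctrl, gain_db, 1)     # best dB-linear fit
error_db = gain_db - (slope * v_ctrl + offset)
print(f"fitted slope {slope:.1f} dB/V, "
      f"gain error {error_db.min():+.2f} / {error_db.max():+.2f} dB")
```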
Conclusion

A low power hybrid analog baseband is proposed in this study. By adopting a digital-plus-analog mixed gain control methodology, the gain dynamic range of the VGA is relaxed, and the power of the analog baseband is thus optimised at the system level. Moreover, with a reduced gain tuning range of the VGA, the decibel gain error of the full analog baseband is also improved. For the AGC_core, the sub-threshold design methodology is adopted for power reduction, while the process variation of the sub-threshold design methodology is compensated by the SATC scheme. Moreover, a pseudo-diode is introduced to enhance the loop gain and thereby improve the performance of the SATC scheme, while a current-network-loaded amplifier is proposed for the gain stage of the VGA to stabilise its quiescent operating point. The test results demonstrate a total power dissipation of 4.1 mW, of which the AGC_core dissipates 0.39 mW. The total gain dynamic range of the full analog baseband is 80 dB with a ±0.39 dB decibel gain error.
Lifting D-Instanton Zero Modes by Recombination and Background Fluxes

We study the conditions under which D-brane instantons in Type II orientifold compactifications generate a non-perturbative superpotential. If the instanton is non-invariant under the orientifold action, it carries four instead of the two Goldstone fermions required for superpotential contributions. Unless these are lifted, the instanton can at best generate higher fermionic F-terms of Beasley-Witten type. We analyse two strategies to lift the additional zero modes. First we discuss the process of instantonic brane recombination in Type IIA orientifolds. We show that in some cases charge invariance of the measure enforces the presence of further zero modes which, unlike the Goldstinos, cannot be absorbed. In other cases, the instanton exhibits reparameterisation zero modes after recombination and a superpotential is generated if these are lifted by suitable closed or open string couplings. In the second part of the paper we address lifting the extra Goldstinos of D3-brane instantons by supersymmetric three-form background fluxes in Type IIB orientifolds. This requires non-trivial gauge flux on the instanton. Only if the part of the fermionic action linear in the gauge flux survives the orientifold projection can the extra Goldstinos be lifted.

In the case of Type IIA orientifolds with intersecting D6-branes, the relevant non-perturbative objects are Euclidean D2-brane instantons, short E2-instantons, wrapping special Lagrangian three-cycles of the internal Calabi-Yau space [1,3]. An analysis of the zero mode structure of such instantons can be performed with the help of boundary CFT methods as originally applied to the D3−D(−1) system in [19,20]. This has shown that under suitable circumstances the E2-instanton can generate couplings in the effective four-dimensional superpotential which are forbidden perturbatively as a consequence of global U(1) selection rules. The relevant instanton effect is genuinely stringy in that it cannot be understood in terms of four-dimensional gauge instantons.

The imprints of this phenomenon in various corners of the string landscape are manifold. Of particular phenomenological interest has been the generation of Majorana mass terms for right-handed neutrinos [1,3,8,12,14]. Besides allowing for such terms in the first place, instanton effects admit a natural engineering of the intermediate mass scale required for these Majorana terms in the context of the see-saw mechanism. Other applications include the generation of hierarchically small µ-terms [1,3] or a modification of the family structure of Yukawa couplings [5]. In [15], the generation of perturbatively forbidden 10 10 5_H couplings in SU(5) GUT models based on intersecting branes is discussed. Globally defined examples of an instanton-induced lifting of unwanted chiral exotics are presented in [7,15]. The benefits of instanton effects for realising metastability and supersymmetry breaking in explicit setups are explored in [4,9,16]. Using the CFT description for the computation of E2-instanton generated superpotential couplings proposed in [1], the non-perturbative Majorana mass matrix for right-handed neutrinos was determined in detail for a local GUT-like toroidal brane setup in [8]. An extensive search for realisations of this effect within the class [21] of global semi-realistic Gepner model orientifolds has been performed in [12], followed by further phenomenological studies in [14].
The main obstacle for finding appealing global string vacua exhibiting a non-perturbative superpotential of the described type are the severe restrictions on the zero mode structure of the instanton, which will be reviewed in detail in section 2 of this article. At least in the absence of other mechanisms to lift the fermionic zero modes associated with deformations of the cycle, the instanton has to be rigid. Unfortunately, for toroidal orbifolds, a popular playground for Type IIA model building, the only known examples of such cycles are the ones on the Z_2 × Z'_2 orbifold analysed in [22-24] and used in the local setup of [8].

A second complication, which is the central topic of this paper, occurs for E2-instantons on non-invariant cycles, called U(1) instantons in the following. It is given by the appearance of four Goldstino modes θ_α, τ_α̇, with α, α̇ = 1, 2, instead of the two Goldstinos θ_α required for the generation of a superpotential [10-12]. If the instanton lies on top of an appropriate orientifold plane, the two extra modes τ_α̇ are projected out and the instanton can induce a superpotential term. Given its significance for the topography of the landscape of string vacua, it is obviously quite important to investigate whether this is actually the only configuration of D-brane instantons which induces quantum corrections to the superpotential. The key point is to decide whether there exists a way to lift the two extra Goldstinos τ_α̇ other than by projecting them out. Generally speaking, this requires contact terms in the instanton moduli action involving the modes τ_α̇, such that they can be soaked up in the path integral without giving rise to higher derivative or higher fermionic terms in the non-perturbative couplings.

We investigate two different strategies to achieve this. In section 3, we analyse couplings of the τ_α̇ modes to massless states in the E2−E2′ sector, which likewise have to be absorbed. As a consequence of the D-term constraints for the bosonic zero modes, the lifting of these modes requires the presence of a non-vanishing Fayet-Iliopoulos term. The latter arises after slightly deforming the background such that the Ξ−Ξ′ pair of instantonic branes recombines. We describe in detail the zero mode structure of the U(1) instantons and how it changes under the process of condensation of the bosonic modes. We find that for chiral Ξ−Ξ′ recombination, due to charge conservation, the recombined object always contains extra fermionic zero modes which cannot be absorbed by pulling down either closed string or matter fields. However, for non-chiral Ξ−Ξ′ recombination one obtains an O(1) instanton with deformations. In the Type I dual model it corresponds to an E1-instanton which wraps a holomorphic curve moving in a family, as discussed by Beasley and Witten in [25]. We show that such instantons can generate, in addition to the results of [25], multi-fermion couplings also for matter field superpotentials and under certain circumstances can also contribute to the superpotential. Independently of the issue of instanton recombination, in the absence of E2−E2′ modes the measure of rigid U(1) instantons is just right to generate possibly open string dependent multi-fermion F-terms which correct the metric on the complex structure moduli space. This is the subject of section 4. An alternative mechanism to eliminate the τ_α̇ modes, speculated upon already in the literature [10,12,16], consists in turning on supersymmetric background fluxes.
The hope would be that in their presence the instanton does not feel the full N = 2 supersymmetry algebra preserved locally away from the orientifold, but only the N = 1 subalgebra preserved by the fluxes. This should then result in only two as opposed to four Goldstinos. The lifting of reparametrisation zero modes of M5-brane or Type IIB D3-brane instantons has been studied in detail [26-31] (see also [32-34]). The analysis consists in determining the bilinear couplings of the fermionic zero modes to the background fluxes responsible for their lifting. In section 5 we recall, building upon the expressions for the fermion bilinears derived in [28,29], that in Type IIB orientifolds a lifting of the τ_α̇ modes of E3-instantons is not possible as long as one sticks to supersymmetric three-form flux. As we then show, this generically continues to hold even for E3-instantons with gauge flux which are mirror symmetric to Type IIA U(1) instantons at general angles. A possible exception are compactifications with divisors allowing for anti-invariant two-cycles. We illustrate this point in a local example and finally summarise our findings in section 6.

Instanton generated F-terms

We are interested in N = 1 supersymmetric Type II orientifold compactifications to four dimensions. While what we have to say in the sequel applies, mutatis mutandis, equally well to Type IIA and Type IIB constructions, we focus here for definiteness on the first case. We will therefore be working in the context of intersecting D6-brane models (see [35-40] for reviews). The relevant spacetime instantons are given by E2-branes wrapping special Lagrangian three-cycles Ξ in the Calabi-Yau, so that they are point-like in four-dimensional spacetime. Part of the following two subsections 2.1 and 2.2 reviews some of the findings of [1,3], while in 2.3 we discuss higher fermionic F-terms.

E2-instanton zero modes

There are two kinds of instanton zero modes according to their charge under the gauge groups on the D6-branes. The uncharged zero modes arise from the E2-E2 sector. They always comprise the four universal bosonic Goldstone zero modes x_µ due to the breakdown of four-dimensional Poincaré invariance. Generically, for instantons away from the orientifold fixed plane, these come with four fermionic zero modes θ_α and τ_α̇ [10-12]. This reflects the fact that the instanton breaks half of the eight supercharges preserved by the Calabi-Yau manifold away from the orientifold fixed plane. Due to its localisation in the four external dimensions, an instanton breaks one half of the N = 1 supersymmetry preserved by the orientifold and one half of its orthogonal complement inside the N = 2 supersymmetry algebra preserved by the Calabi-Yau. As displayed in table 1, the θ_α are the Goldstinos associated with the breakdown of the first N = 1 supersymmetry, while the τ_α̇ are associated with the orthogonal N = 1′ algebra.¹ The internal part of their vertex operators is essentially given by the spectral flow operator of the worldsheet N = (2,2) superconformal theory, see eqs. (69) and (61) in appendix B.

Table 1: Universal fermionic zero modes θ_α, τ_α̇ (respectively τ_α, θ_α̇) of an (anti-)instanton, associated with the breaking of the N = 1 SUSY algebra preserved by the orientifold and of its orthogonal complement N = 1′.

Besides, there are b_1(Ξ) complex bosonic zero modes c_I, I = 1, ..., b_1(Ξ), related to the deformations and Wilson lines of the E2-instanton.
Away from the orientifold plane, each of these is accompanied by one chiral and one anti-chiral Weyl spinor, χ^α_I and χ^α̇_I. Furthermore, zero modes arise at non-trivial intersections of the instanton E2 with its image E2′; they will be discussed in detail in section 3.1.

In addition to these uncharged zero modes, fermionic zero modes can arise from intersections of the instanton Ξ with D6-branes Π_a. If the instanton is parallel to Π_a, there are also massless bosonic modes in this sector. The detailed quantisation of these charged zero modes, both for chiral and non-chiral intersections, is described in [8]. Let us focus for brevity on chiral intersections. An important point made in [8] is that states in the E2−D6 sector are odd under the GSO projection, contrary to the GSO-even states in the D6−D6 brane sector. In particular, a positive intersection I_Ξa > 0 of the instanton and a D6-brane, wrapping the respective cycles Ξ and Π_a, hosts a single chiral fermion (i.e. with worldsheet charge Q_ws = −1/2) in the bifundamental representation (−1_E, □_a). The strict chirality of the charged fermions is essential for the existence of holomorphic couplings between these modes and open string states in the moduli action and will also play a key role in the present analysis. For a generic instanton cycle Ξ away from the orientifold, this gives rise to the charged zero mode spectrum summarised in table 2, whose columns list the representations, worldsheet charges Q_ws, multiplicities and charges of the modes [1,3]. As a result, the instanton carries zero modes charged under the gauge group U(1)_a.

Generation of superpotentials

The instanton measure contains all these zero modes. Thus, in order to contribute to the holomorphic superpotential, whose measure is d⁴x d²θ, the instanton has to meet several constraints. Most importantly, the presence of the anti-chiral Goldstinos τ_α̇ for generic instantons not invariant under the orientifold projection² prevents the generation of superpotential terms other than those corresponding to gauge instantons [10-12]. The latter case is special in that the instanton wraps the same three-cycle as one of the D6-branes [6]. In this situation, the τ_α̇ play the role of Lagrange multipliers for the bosonic ADHM constraints and can consistently be integrated out [20]. For instantons not parallel to any of the D6-branes, these couplings in the moduli action do not exist, since there are no massless bosons in the E2−D6 sector.

The most straightforward way to eliminate the τ_α̇ is to project them out under the orientifold action [10-12]. Concretely, if one chooses Ξ = Ξ′, the universal zero modes x_µ, θ_α, τ_α̇ are subject to the orientifold action Ωσ in the way detailed in appendix A. Depending on the orientifold action one obtains an SO(N) or USp(N) gauge group. In the latter case the zero modes x_µ, θ_α are anti-symmetrised and the modes τ_α̇ are symmetrised, while for the SO(N) instanton x_µ, θ_α are symmetrised and the τ_α̇ are anti-symmetrised. It follows that single E2-instantons with orthogonal gauge group (called O(1) instantons in the sequel) can give rise to F-terms in the effective action, since the universal part of their zero mode measure is of the form d⁴x d²θ. In order for this F-term to be of the usual superpotential form, no further uncharged fermionic zero modes may be present. This situation corresponds to an instanton wrapping a rigid cycle Ξ with b_1(Ξ) = 0; the sketch below summarises these criteria.
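Purely as a mnemonic for the criteria just reviewed (our own toy bookkeeping, not a computation from the paper):

```python
# Toy restatement of when an instanton has the d^4x d^2theta measure needed
# for a superpotential: the O(1) projection must remove the tau Goldstinos,
# and no unlifted deformation modulini may remain (rigid cycle, b_1(Xi) = 0,
# unless some additional mechanism lifts the extra modes).

def can_generate_superpotential(orientifold_type: str, b1: int,
                                extra_modes_lifted: bool = False) -> bool:
    if orientifold_type != "O(1)":
        return False    # tau Goldstinos survive for U(1)/USp instantons
    if b1 > 0 and not extra_modes_lifted:
        return False    # reparametrisation modulini spoil d^4x d^2theta
    return True

assert can_generate_superpotential("O(1)", b1=0)
assert not can_generate_superpotential("U(1)", b1=0)
assert not can_generate_superpotential("O(1)", b1=1)
assert can_generate_superpotential("O(1)", b1=1, extra_modes_lifted=True)
```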
Alternatively, the additional fermionic modes have to be absorbed by some interaction in the instanton moduli action such that they can be integrated out without generating higher derivative terms. Known examples of such interactions involving the closed string sector are the quartic coupling to the curvature on the instanton moduli space [41,42], provided the latter is non-trivial, or the coupling to suitable background fluxes (see section 5). In section 3.4 we will describe another way to lift a pair of reparametrisation modes, through couplings to the open string sector.

Finally, also the charged zero modes appear in the measure and have to be soaked up. For an Ωσ invariant instanton, i.e. Ξ′ = Ξ, the charged zero modes and their representations are displayed in Table 3. A careful analysis of their g_s scaling in [1,8] revealed that for superpotential couplings this has to happen via suitable disk (as opposed to higher genus) amplitudes involving precisely two λ modes and in addition suitable matter fields — provided these amplitudes induce a Yukawa-type contact term in the instanton moduli action. As a result, E2-instantons induce superpotential terms of the familiar form involving suitable products of open string fields Φ_{a_i, b_i} [1,3]. For details of the rules of their computation see [1].

Generation of higher fermionic F-terms

Our discussion has hitherto focussed on O(1) instantons which are either rigid or whose fermionic reparametrisation modes have paired up appropriately such that they give rise to genuine superpotential terms. Alternatively, there are situations where these additional zero modes induce so-called higher fermionic F-term couplings in the effective action. In the dual Type I/heterotic model this effect was first described in [25].³ There it arises for E1/worldsheet instantons moving in a family. On the Type IIA side, this corresponds to non-rigid O(1) instantons such that the chiral reparametrisation modulini χ^α_I, I = 1, ..., b_1(Ξ), are anti-symmetrised and therefore projected out under the orientifold action. We will sometimes refer to them as instantons with deformations of the first kind.⁴ The resulting uncharged part of the measure takes the form (4). Beasley and Witten found that such instantons can generate higher fermionic couplings for the closed string moduli fields [25]. In superspace notation, these are encapsulated in interactions of the form (5) for the simplest case that the instanton moves in a one-dimensional moduli space. Note that supersymmetry requires a holomorphic dependence of w_ij(Φ) on the superfields Φ.

Consider first the case of an E2-instanton with b_1(Ξ) = 1 and no further charged zero modes in the E2−D6 sector. Denoting by T = T + θ^α t_α the N = 1 chiral superfield associated with the Kähler moduli, we can absorb the instanton modulini by pulling down from the moduli action two copies of the schematic anti-holomorphic coupling χ_α̇ t^α̇. In general the open-closed amplitude ⟨χ_α̇ t^α̇⟩ does not violate any obvious selection rule of the N = (2,2) worldsheet theory and is therefore expected to induce the above coupling.⁵ Similarly, the two θ-modes can be soaked up by the holomorphic coupling θ^α u_α involving the fermionic partners of the complex structure moduli encoded in the superfield U = U + θ^α u_α. This results in a four-fermion interaction of the schematic form e^{−S_E2} u^α u_α t_α̇ t^α̇. Note that the coupling of the complex structure and Kähler modulini only to the universal and reparametrisation zero modes, respectively, is a consequence of the U(1) worldsheet charges of the associated vertex operators. The derivative superpartner of the above four-fermi term arises upon integrating out two copies of the term which follows from evaluating the amplitude ⟨θ^α χ_α̇ T⟩, as demonstrated in appendix B. All this can be summarised in superspace notation by (7), where U(Ξ) is associated with the specific combination of complex structure moduli appearing in the classical instanton action and the holomorphic function f_{i,j} depends in general on the Kähler and open string moduli of the D6-branes ∆_i.

³ For another example in the context of heterotic M-theory see [43].
⁴ This is to be contrasted with the case that the chiral deformation fermions survive the projection. As described in [13], such a situation can generate corrections to the gauge kinetic function.
⁵ In particular, the total U(1) worldsheet charge is conserved. Still there might be situations, such as factorisable three-cycles on (T²)³, where some of the individual U(1) charges are violated by this coupling. For a generic background, though, the couplings need not vanish, as we demonstrate for the example of a non-factorisable T⁶ in appendix B.

In the presence of a suitable number of charged λ zero modes there exist, in addition to these closed string couplings, terms which generate higher fermionic couplings also for the matter fields. Consider again for simplicity the case b_1(Ξ) = 1. If the Chan-Paton factors and worldsheet selection rules only allow the λ modes to couple holomorphically to the chiral open string superfields, as for the generation of a superpotential, the instanton induces an interaction as in (7), but with e^{−U(Ξ)} replaced by e^{−U(Ξ)} times a suitable product of open string matter fields. For suitable configurations, the action can also pick up derivative terms directly involving the open string fields. For this to happen, the instanton moduli action has to contain couplings of the form⁶ in which the fermionic matter field ψ_α̇ (of worldsheet charge 1/2) lives at the intersection D6_a−D6_b and lies in the anti-chiral superfield Φ = φ + τψ, see figure 1. Integrating out two copies of this interaction term brings down the fermion bilinear ψψ. In addition, the two θ_α modes again pull down a bilinear of chiral fermions u_α or, in the presence of more λ modes, ψ^α_ab, as in the case of superpotential contributions. This again induces a four-fermi coupling. Alternatively, we can absorb one pair of θ^α χ_α̇ in a coupling of the form shown in figure 1.

Figure 1: Absorption of θ- and χ-modes leading to F-terms. The superscripts denote the ghost picture.

After bringing φ^{−1} into the zero ghost picture, this clearly generates a derivative coupling of the form θσ^µ χ λ_a ∂_µ φ λ_b. Integrating out two copies of this term yields the derivative superpartner to the above four-fermi term.

Instanton recombination

As just reviewed, for the case of E2-instantons in Type IIA orientifolds we know that single instantons wrapping rigid special Lagrangian three-cycles invariant under the orientifold projection and carrying O(1) gauge group have the right zero mode structure d⁴x d²θ to contribute to the superpotential. Under mirror symmetry to the Type I string these objects are mapped to E1-instantons wrapping isolated curves on the mirror Calabi-Yau. The contribution of such objects to the superpotential has been discussed in a couple of papers [25,44].
For D6-branes it is known that under certain circumstances a pair of D6−D6′ branes can recombine [45] into a new sLag D6-brane which obviously wraps an Ωσ invariant three-cycle.⁷ If a similar story also applies to pairs of E2−E2′ instantonic branes, the recombined objects would be candidates for new O(1) instantons contributing to the superpotential. For example, if one starts with an E2-instanton wrapping a factorisable cycle on a toroidal orbifold, the cycle wrapped by the recombined instanton would no longer be factorisable; still one could hope to determine the instanton contribution by an appropriate deformation of the original instanton moduli action. In the mirror dual situation, the resulting objects are E5-instantons equipped with a vector bundle W defined via the non-trivial extension (10) of the two line bundles L and L*. In this section we investigate whether the naive expectation that such recombined O(1) instantons exist is actually correct.

⁷ For brane recombination in the context of D6-brane model building see e.g. [46-48].

Zero mode structure on U(1) instantons

Consider a U(1) instanton wrapping a general rigid cycle Ξ ≠ Ξ′. From the E2−E2 and E2′−E2′ sectors we now have the corresponding uncharged zero mode measure. As described in the previous section, if such an instanton also intersects the D6-branes present in the model, this yields the fermionic zero modes listed in table 2. From there, the overall U(1)_E charge of these matter zero modes can be read off, with the result given in (12); in the last line of (12) we have used the tadpole cancellation condition.⁸ This shows that in a globally consistent model the total U(1)_E charge of all matter zero modes is proportional to the chiral intersection between the instanton and the orientifold plane. For an Ωσ invariant instanton this last quantity vanishes, whereas for a generic U(1) instanton it does not. If Ξ ∘ Π_O6 ≠ 0, there must be additional charged zero modes in order for the zero mode measure to be U(1)_E invariant.

Indeed, there are also zero modes from the E2−E2′ intersection. This is the open string sector which is invariant under Ωσ and gets symmetrised or anti-symmetrised (see appendix A). Taking into account that the sign of the orientifold projection changes from Dp−Dp to D(p−4)−D(p−4) sectors, for a single U(1) instanton we get the zero modes shown in table 4.

Table 4: Charged zero modes at an E2−E2′ intersection.

⁸ Notice that Π_O6 denotes the total homological charge of all orientifold fixed planes present in the background. In what follows we will always refer to the effective orientifold projection which arises after taking into account the contribution from all different sectors, which may individually be of different types.

For concreteness we consider from now on the two simplest non-trivial cases.

Case I

The first case has intersection numbers corresponding to a projection as would arise e.g. on T⁶/Z₂ in the presence of a single O⁻-plane. We get two additional bosonic zero modes m and m̄ and two additional fermionic ones µ_α̇. Comparing with (12), we find that indeed the total U(1)_E charge of the zero modes vanishes: the charge of the two µ_α̇ zero modes precisely cancels against the sum over all matter field zero modes. This analysis reveals that in a globally consistent model it is not possible to wrap an E2-instanton on a cycle Ξ ≠ Ξ′ without picking up additional charged zero modes λ_i.
Their U(1)_E charge is guaranteed to cancel the U(1)_E charge of the E−E′ modes, such that the resulting zero mode measure (14) is U(1)_E invariant.⁹

Case II

The second case has intersection numbers for which we get no extra bosonic zero modes and only the two fermionic ones µ_α. Unlike the previous case, this is due to a projection as would arise e.g. in the presence of a single O⁺-plane. In such a situation it is not possible to cancel the tadpoles in a supersymmetric way. Nonetheless, we can perform a similar zero mode analysis. Again the condition (12) tells us that there are extra fermionic matter zero modes, whose U(1)_E charge equals Q_E = −4, and the resulting zero mode measure follows accordingly.

Recombination of chiral E2−E2′ instantons

The question we would like to address now is whether one can absorb the zero modes of the U(1) instantons in such a way that contributions to the superpotential W are generated. The expectation that this might be the case arises from the analogous situation for intersecting D6-branes, where a slight deformation of the complex structure moduli induces a non-vanishing Fayet-Iliopoulos term on the D6 worldvolume, leading to condensation of the tachyonic charged matter fields [45]. This brane recombination process preserves the topological charge of the intersecting D6−D6′ branes and therefore yields a supersymmetric brane wrapping a three-cycle which is invariant under Ωσ.

Consider first case I from the last section. Here we have the bosonic zero modes m and m̄, which appear in a D-term potential of the form (17). The complex structure dependent Fayet-Iliopoulos parameter ξ is proportional to the angle (modulo 2) between the cycle Ξ and its image Ξ′ and vanishes for supersymmetric configurations. Starting from a supersymmetric situation with ξ = 0, one can always deform the complex structure to obtain ξ < 0 or ξ > 0, at least for small ξ. Since the geometry of the internal cycle is independent of whether it is wrapped by a D6-brane or an E2-instanton, we argue that the FI term is forced upon us even in the absence of four-dimensional dynamical gauge fields associated with the abelian gauge group on the instanton. The D-term constraint resulting from (17) has to be implemented by a delta function in the instanton measure. It is useful to parametrise the complex boson m by its modulus and a phase α. Note that the D-flatness condition as such does not constrain the phase α. The latter can be absorbed by fixing the gauge with respect to the U(1)_E symmetry under which the instanton measure (14) is invariant. In the resulting bosonic part of the instanton measure, as ξ becomes positive, the bosonic m modes become tachyonic, signalling an instability towards condensation of the tachyon such that the D-term constraint is satisfied. In the upstairs geometry this corresponds to the recombination of the cycle Ξ ∪ Ξ′ (recall that upstairs Ξ and Ξ′ are not identified) into the unique sLag three-cycle in the homology class [Ξ] + [Ξ′] with ξ_new = 0 [49]. Note that this cycle is rigid if Ξ (and Ξ′) is rigid [49], i.e. the instanton wrapping it exhibits no uncharged zero modes apart from the universal ones.

Now we have to determine what happens to the fermionic zero modes once the bosonic ones condense. As our analysis of the relevant amplitudes in appendix B shows, the instanton moduli action contains a term pairing the τ and µ modes with a coefficient linear in the bosonic mode, which means that after m gets a VEV the τ and µ modes pair up.
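Schematically — in our notation, with numerical factors and index contractions suppressed — this absorption is the usual Grassmann integration:

```latex
% Schematic Grassmann integration over the paired zero modes: only the term
% saturating all four fermionic integrations survives, leaving a power of m.
\int d^2\tau \, d^2\mu \;\; e^{\, c\, m\, \tau\mu}
 \;=\; \int d^2\tau \, d^2\mu \;\; \tfrac{1}{2}\,\bigl( c\, m\, \tau\mu \bigr)^2
 \;\propto\; c^2\, m^2
```

since only the term of the expansion that saturates all four fermionic integrations survives.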
After bringing down two copies of this term and integrating out the fermionic zero modes, the τ modes drop out of the measure. This is encouraging, as everything seems to point towards a superpotential contribution. It only remains to absorb the matter zero modes λ_a, λ_b which were forced upon us by U(1)_E invariance of the zero mode measure; recall that the sum of all their charges was computed above. It is clear that pairs of such zero modes with opposite U(1)_E charge can generate the usual matter field couplings, but there will always be a surplus of four zero modes of type λ_b. As shown in figure 2,¹⁰ due to the U(1)_E charge the only way to absorb these extra λ zero modes is via couplings of the type (24), always involving the field m. In (24) the upper index indicates the worldsheet charge Q_ws. Since all the fields except m are chiral (in the sense of the N = 2 worldsheet supersymmetry) and m itself is anti-chiral, the chiral ring structure tells us that all couplings of type (24) vanish: when we apply the picture changing operator to m^{−1} we do not pick up the right pole structure for a non-zero amplitude [50]. On the other hand, with no additional matter field φ in (24), the amplitude vanishes right away due to violation of the U(1) worldsheet charge. Therefore we conclude that, in contrast to naive expectations, the recombined E2′−E2 instanton cannot contribute to the superpotential. There always remain four charged fermionic zero modes which cannot be absorbed in a chiral manner.

For case II there are no bosonic zero modes from the E2′−E2 intersection and therefore no brane recombination. One only has the fermionic zero modes µ_α. In this case we can write down a four-fermion coupling, where again upper indices denote the U(1)_ws charges. Two such couplings can absorb the eight appearing zero modes θ, µ, λ^i_a, (λ′_a)^i, so that one is left only with a measure in which the total U(1)_E charge of all the matter zero modes λ_a and λ_b vanishes. There is no way to absorb the remaining τ modes using open string operators: clearly no superpotential terms are generated, as the required couplings are either not allowed by Lorentz invariance or, being non-holomorphic, vanish as a consequence of U(1) worldsheet charge violation. By contrast, it is possible to absorb the τ-modes through couplings to anti-chiral fermions in the closed string sector of the form τ_α̇ χ^α̇, which will be discussed in section 4. Clearly, the induced interactions are non-holomorphic and thus non-supersymmetric. This is, however, no surprise since, as we recall, the very presence of the effective O⁺-plane leading to this kind of orientifold projection does not admit supersymmetric tadpole cancellation.

We conclude that, in contrast to expectations based on spacetime-filling brane recombination processes, instanton recombination does not lead to new O(1) instantons which can contribute to the superpotential. The reason is that, due to U(1)_E charge conservation and the tadpole cancellation conditions, there arises a net number of charged fermionic matter zero modes which cannot be absorbed by chiral couplings. For the Type IIB dual orientifold models this observation implies that magnetised E5−E5′ recombination, i.e. instantons carrying extensions (10) of line bundles, does not generate superpotential contributions either. The only known contributions in this case come from E1-instantons wrapping holomorphic curves on the mirror Calabi-Yau manifold.
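The charge bookkeeping driving this conclusion can be made concrete in a toy model (our own; the cycle vectors and multiplicities below are invented, and the tadpole condition is imposed by construction, so the point is the arithmetic of (12), not a test of it):

```python
# Toy check of the homological identity behind Eq. (12): with the tadpole
# condition sum_a N_a (Pi_a + Pi_a') = 4 Pi_O6, the net U(1)_E charge of the
# matter zero modes, proportional to sum_a N_a (Xi.Pi_a + Xi.Pi_a'), equals
# 4 (Xi . Pi_O6) and is non-zero for a generic U(1) instanton, so it must be
# compensated by the E2-E2' modes. All cycle vectors here are made up.
import numpy as np

J = np.block([[np.zeros((2, 2)), np.eye(2)], [-np.eye(2), np.zeros((2, 2))]])
dot = lambda a, b: int(a @ J @ b)        # antisymmetric intersection form

Xi = np.array([1, 0, 0, 2])              # instanton cycle (toy)
branes = [  # (multiplicity N_a, cycle Pi_a, image cycle Pi_a')
    (2, np.array([1, 1, 0, 0]), np.array([1, -1, 0, 0])),
    (1, np.array([0, 2, 4, 0]), np.array([0, -2, 4, 0])),
]
Pi_O6 = sum(N * (P + Pp) for N, P, Pp in branes) / 4   # enforce the tadpole

lhs = sum(N * (dot(Xi, P) + dot(Xi, Pp)) for N, P, Pp in branes)
print(lhs, "=", 4 * dot(Xi, Pi_O6))      # equal by construction, and nonzero
```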
Recombination of non-chiral E2−E2′ instantons

The deeper reason why chiral E2−E2′ intersecting instantons as in case I do not lead, after brane recombination, to O(1) instantons seems to be that this E2−E2′ system carries charge along the "directions" of the orientifold O6-plane. In the Type IIB dual situation this means that the magnetised E5−E5′ system carries E5-brane charge. Consequently, it may be more promising to start with a magnetised E3−E3′ system which after brane recombination only carries E1-charge. Such a system necessarily has E3 ∘ E3′ = 0 and can only support vector-like zero modes on the intersection. This immediately implies that no U(1)_E charged matter zero modes are necessary to ensure U(1)_E invariance of the zero mode measure. The simplest non-trivial case involves one vector-like pair of zero modes. Therefore, for an O6⁻-plane we have the zero modes shown in figure 5, and the moduli action contains a coupling (30) through which the τ_α̇ modes absorb one linear combination of the fermionic zero modes. In addition, the single real bosonic D-term constraint¹¹ fixes m m̄ = n n̄ (32), where the lower index denotes the U(1)_ws charge while the upper one refers to U(1)_E.

¹¹ One might expect that, similar to the ADHM construction of gauge instantons, one has three D-term constraints. But from the U(1)_E and U(1)_ws charges in Table 5 it is clear that one can build only the neutral combination in eq. (31).

For initially rigid instantons, i.e. in the absence of E2-reparametrisation moduli, there exist no F-term constraints which would prevent a non-vanishing VEV m m̄ = n n̄ ≠ 0 corresponding to brane recombination. As in the analogous process for chiral intersections, recombination breaks the U(1)_E. The associated gauge degree of freedom can be used to fix the bosonic modes completely, as opposed to merely imposing (32). Integrating out the τ modes together with the linear combination µ̃ = ρ − ν of fermionic zero modes appearing in (30) brings down a factor of m². After recombination, one is left with a measure in which, again, the lower index denotes the U(1)_ws charge in the canonical ghost picture and µ = ρ + ν stands for the remaining linear combination of fermionic zero modes. In addition, there can of course be charged zero modes λ_a, λ_b. Ignoring the additional factor of m² for the moment, this zero mode structure is precisely that of an O(1) instanton with one deformation, b_1(Ξ) = 1, of the first type (see the discussion around (4)). From our discussion in section 2.3 we expect this configuration to generate higher fermionic F-terms of Beasley-Witten type.

Extrapolating from the CFT of the E2−E2′ sector before recombination, the relevant couplings after recombination are inherited from (35), where the fermionic matter field ψ (of worldsheet charge 1/2) lives at the intersection D6_a−D6_b and lies in the anti-chiral superfield Φ = φ + τψ. Note that the above coupling does not violate any of the general N = 2 SCFT selection rules, so that even without a direct computation we expect it to be present for sufficiently generic backgrounds. Integrating out two copies of this interaction term brings down the fermion bilinear ψψ characteristic of the higher fermionic terms described in [25], as well as a further factor quadratic in m. The bosonic measure can then be brought into standard form by a simple change of variables m̂ = m³, and we are left with the standard form of the measure. Together with the chiral fermion bilinear pulled down by the two θ_α modes, this results in the four-fermi terms discussed in section 2.3.
Its bosonic derivative superpartner involves absorbing one pair of θµ in a coupling of the form (37) (after recombination). With φ^{−1} and m^{−1} in the zero ghost picture,¹² this generates a derivative coupling for the boson φ. Bringing down two copies of this term indeed yields the derivative superpartner to the above four-fermi term, again in agreement with [25].

Contribution to superpotential

It has been observed for worldsheet instantons in the heterotic string that instantons moving in a family not only generate higher fermionic F-terms, but can also contribute to the superpotential [41]. Recall that such instantons are dual to E2-instantons with deformations of the first kind and with a zero mode structure as in (4) for each deformation. As we just saw, recombination of a non-chiral E2−E2′ pair yields precisely such objects. For superpotential contributions to exist it must be possible to absorb the fermionic zero modes without generating higher fermionic or derivative terms as in (35) or (37). A way to do this for matter field superpotential contributions is shown in figure 3. There µ denotes the fermionic reparameterisation mode, independently of whether the instanton is the result of recombination or not. In the first case, we should actually replace µ by m^{−1}µ as before. If this five-point function has a contact term, and if the remaining integral over the bosonic instanton moduli space does not vanish, then a contribution to the superpotential can be generated. We stress again that from a general N = 2 SCFT point of view, no obvious selection rules forbid such an interaction term. Having said this, one can easily convince oneself that for factorisable three-cycles on toroidal orbifolds the amplitude vanishes due to violation of the U(1) worldsheet charge, which has to be conserved for each of the three tori separately. This, however, need not be so for more general setups. By contrast, it is clear that these disc amplitudes vanish for E2-deformations of the second kind as defined in [13]. Recall from section 2.3 that these give rise to chiral instead of anti-chiral deformation modulini.

To summarise, non-chiral E2−E2′ recombination results in an object with at least two bosonic and two fermionic zero modes from a surviving deformation of the first kind of the recombined instanton. These objects can generate higher fermion couplings and under certain circumstances can also contribute to the superpotential.

F-term correction to complex structure moduli space

Having analysed the consequences of zero modes in the E2−E2′ sector in addition to the four Goldstinos of a U(1) instanton, in this section we are interested in the induced couplings if the uncharged measure merely takes the form (38) in the first place. Consider therefore a rigid U(1) instanton whose geometric intersection numbers with its image and the orientifold plane vanish. This is easily realised, e.g. for cycles parallel to, but not on top of, the orientifold plane in some subspace. The uncharged zero mode measure (38) is to be supplemented by additional charged zero modes λ if present. Since there are no zero modes in the E2−E2′ sector which would be sensitive to the orientifold action, we might expect this type of instanton to be describable in terms of half-BPS instantons of the underlying N = 2 supersymmetry preserved by the internal Calabi-Yau before orientifolding. The correction to the complex structure moduli space metric by E2-instantons in Type IIA Calabi-Yau compactification has been discussed recently in [51].¹³
Following this logic, we would anticipate the generation of E2-corrections to the complex structure Kähler potential by the U(1) instanton described by (38). However, while the chiral Goldstino modes θ are indeed associated with the breakdown of the N = 1 subalgebra of this N = 2 symmetry which is preserved by the orientifold, their anti-chiral partners τ correspond to the orthogonal N = 1 subalgebra. The above measure (38) therefore does not cover the full N = 1 superspace as required for the generation of a Kähler potential; rather, the integral is only over half of the N = 1 superspace. While this calls for the generation of an F-term as opposed to a D-term, the additional fermionic zero modes τ will result in higher fermionic couplings of the Beasley-Witten type discussed in detail in section 2.3. An important difference to the F-terms discussed previously is that now only the complex structure moduli receive derivative corrections.

Denote by w and a the scalar and axionic parts of the scalar component U = w − ia of a complex structure superfield. Then evaluation of the amplitudes ⟨θ w τ⟩ and ⟨θ a τ⟩ gives rise to the corresponding terms in the moduli action. For the details of this computation in the context of toroidal orbifolds see appendix B. The absence of analogous terms for the Kähler moduli is a consequence of U(1) worldsheet charge conservation. Integrating out two copies thereof indeed generates a derivative coupling of the form e^{−S_E2} ∂U ∂U. Together with their fermionic partners, the derivative F-terms can be summarised in superspace, where the complex conjugate part is due to the anti-instanton contribution. Note the difference to eq. (7), which describes the higher fermionic terms for E2-instantons with deformation modes. In the presence of charged zero modes λ, these F-term corrections for the complex structure moduli involve appropriate powers of charged open string fields required to soak up the λ modes, i.e. the F-terms are dressed by the corresponding products of open string fields.

Flux-induced lifting of zero modes

The additional two zero modes τ_α̇, which, if present, prevent the generation of a superpotential by the instanton, are a consequence of the underlying N = 2 supersymmetry preserved in the bulk of the Calabi-Yau away from the orientifold plane, in the way described in section 2. It has therefore been speculated in the literature [10,12,16] that these Goldstinos might be lifted in the presence of suitable background fluxes. An intuitive reason why this could be the case is that under appropriate circumstances the instanton is expected to feel only the N = 1 supersymmetry preserved by the flux in the bulk. In such situations the τ modes are not protected as the Goldstinos of the orthogonal N = 1 supersymmetry, and it might be possible that indeed only the two θ_α modes remain massless in the universal zero mode sector.

While our previous presentation has focused on D-brane instantons in Type IIA orientifolds, the natural arena to study the effects of background fluxes is the framework of Type IIB compactifications, where we can take advantage of the by now quite mature understanding of a fully consistent incorporation of supersymmetric three-form flux (for references see e.g. [53,54]). The lifting of fermionic zero modes by supersymmetric three-form flux has been analysed in special cases in [28-31] in the context of E3-instantons wrapping a holomorphic divisor of the internal (conformal) Calabi-Yau. The most general such situation also involves the presence of supersymmetric gauge flux on the worldvolume E3-brane.
This corresponds on the Type IIA side to E2-instantons at general angles with the O6-plane and is the configuration we are primarily interested in. To the best of our knowledge, the possible consequences of such gauge fluxes for the zero mode structure have not been analysed explicitly so far. Before addressing the more general case, we first review the situation of vanishing gauge flux.

Zero mode lifting for unmagnetised E3-instantons

In the spirit of [55], we consider Type IIB orientifold compactifications with an N = 1 supersymmetric combination G = F − τH of RR and NS flux, F = dC_2 and H = dB, such that the complexified dilaton τ = C_0 + ie^{−φ} is constant. The internal manifold is therefore conformally Calabi-Yau with constant warp factor. In order to preserve supersymmetry, the flux has to be of (2,1) type¹⁴ and satisfy the primitivity condition J ∧ G = 0 in terms of the Kähler form J. We consider an E3-brane wrapping a holomorphic divisor Γ. Since our interest here focuses on the lifting of τ-modes, we assume that Γ is not invariant under the holomorphic involution σ defining the orientifold action Ω(−1)^{F_L}σ, so that the τ-modes are not projected out. For the simple setup of unmagnetised divisors, we can then simply identify the instanton with its orientifold image and focus on the instanton action before orientifolding without further ado.

The part of the E3-brane worldvolume action describing the coupling of such three-form flux to the (uncharged) zero modes ω¹⁵ is given by (42) [29,56]. The combination G_{m̃ñp} appearing there is defined as G_{m̃ñp} = e^{−φ} H_{m̃ñp} + i F′_{m̃ñp} γ_5 in terms of F′_{m̃ñp} = F_{m̃ñp} − C_0 H_{m̃ñp} and the four-dimensional matrix γ_5. The indices m̃, ñ are along the four-cycle Γ and p is transverse to it. While the above action was derived in [29,56] entirely with the help of supergravity methods, one could in principle determine it by analysing the CFT coupling of the closed string fields to the boundary; see [57-60] for the relevant techniques. The Euclidean action (42) uses a particular gauge fixing condition to eliminate the unphysical degrees of freedom due to κ-symmetry (cf. eq. 4.9 of [29]). As a result, the spinor ω is a sixteen-component Weyl spinor, since we consider a Euclidean action. Locally, we can choose complex coordinates a, b = 1, 2 along Γ and z, z̄ for the transverse direction. It is convenient to use the standard definition of the Clifford vacuum |Ω⟩,

Γ_z |Ω⟩ = 0,   Γ_a |Ω⟩ = 0,   (43)

and to decompose the spinor ω into its external and internal parts. The latter can be grouped according to its chirality along the normal bundle of the divisor as in (44). In this language we can immediately identify the universal fermionic zero modes with four-dimensional polarisation θ_α and τ_α̇ as given in (45). The fact that they are the "universal" zero modes follows from their correspondence with the cohomology group H^{(0,0)}(Γ). The remaining components in (44) are associated with the reparametrisation modulini and Wilson line fermions of the four-cycle, counted by H^{(0,2)}(Γ) and H^{(0,1)}(Γ), respectively [29,61,62].

Starting from the above action, i.e. in the absence of gauge flux, [29] computed the remaining zero modes in the presence of primitive (2,1) three-form flux. In particular, their analysis shows that the four universal zero modes (45) are not lifted in such a situation. In fact, one can easily convince oneself that the zero mode ω^{(2)}_0 does not couple to primitive (2,1) flux, see e.g. (46).
The last equation follows from the identity [29]

⋆G |Ω⟩ = i G |Ω⟩   (47)

together with the primitivity of G, eq. (48). Likewise, potential (0,3) components of the G-flux can be shown not to couple to the universal modes. This type of flux is allowed by the equations of motion and is supersymmetric once the non-perturbative superpotential is taken into account in the analysis of the gravitino variation [31,63].

Zero mode lifting for magnetised E3-instantons

We are now ready to address our main question, the inclusion of non-trivial gauge flux on the instanton. The worldvolume action of the E3-instanton contains, in addition to (42), two pieces linear and quadratic in the gauge invariant combination F = F_gauge − B of the worldvolume gauge field and the Neveu-Schwarz two-form. Since we are considering an orientifold, we have to add the contribution of the E3-instanton together with its image under Ω(−1)^{F_L}σ. As described in [61], this amounts to considering the instanton wrapping the divisor Γ̃ = Γ + σΓ and expanding the worldvolume fields, according to their parity under σ, into their components along the invariant and anti-invariant cohomology on Γ̃. Since F is anti-invariant under Ω, the linear terms in the action survive only for the components of F along elements of H^{(1,1)}_−(Γ̃). Before orientifolding, the relevant part of the quadratic term is the sum of two terms given in [28].¹⁶

¹⁶ Note that for simplicity we are using here the gauge of [28], eq. (29). This κ-symmetry fixing is different from the one in which (42) is written and corresponds essentially to the one of [64,65]. As emphasised in [29,66], the gauge fixing condition and the orientifold projection have to be compatible for branes invariant under the orientifold. Since we are interested in the more general situation of non-invariant branes or instantons, it suffices for our purposes to work in the gauge of [28].

In four Euclidean dimensions, solutions to the field equations and the Bianchi identity can be taken to satisfy the self-duality constraint F = ⋆F. Together with √(det g) · (1/4) F² = (1/2) F ∧ ⋆F, we find that the relevant couplings combine into a single expression. By the same reasoning as above, this interaction does not induce any mass terms for the universal zero modes provided we stick to supersymmetric (2,1) (or even (0,3)) flux.

Let us now discuss whether the term linear in F saves the day; in the upstairs geometry it is given by (51) [28]. Again the self-duality of the gauge flux,

(1/2) ε_{ẽ_i ẽ_j ẽ_k ẽ_l} F^{ẽ_i ẽ_j} = F_{ẽ_k ẽ_l},

is used; a tilde denotes indices parallel to the worldvolume, whereas s, t are general internal indices. While the index structure of the Γ-matrices is still of type (2,1) due to the contraction with the hermitian metric, the above action may in principle induce non-vanishing couplings involving the universal modes. After all, the vanishing of such couplings in the absence of gauge flux also rested upon the primitivity of G_3, which is not necessarily satisfied by the combination of F and G_3 contracted with the Γ in (51). As we stressed, these couplings, being linear in F, only survive the orientifold action in the presence of anti-invariant two-cycles on the divisor Γ̃. We will illustrate this issue in more detail in the next subsection. On the other hand, in the absence of such cycles, as e.g. for the T⁶/Z₂ example studied in [67], the τ-modes remain massless even after taking into account the backreaction of the three-form flux on the instanton moduli action.
While this may seem counter-intuitive because they are no longer protected as Goldstinos in the presence of three-form flux, this is just an example of the familiar fact even though all symmetries broken by the instanton result in associated zero modes, the converse need not be true. A simple example with linear gauge fields In the presence of suitable three-form flux and for non-vanishing gauge flux F , the linear term in F leads to a coupling of the zero mode ω (2) 0 proportional to As stated above, this does not vanish directly due the primitivity condition for G-flux and the hermitian Yang-Mills equation for the gauge flux F . Under the orientifold projection the flux components F ∈ H + 1,1 ( Γ) are mapped to −F and the associated terms in (52) vanish trivially. But for the components F ∈ H − 1,1 ( Γ) there is a chance that the zero modes ω (0) 2 become massive. More precisely, the action (51) leads to a coupling between ω (2) 0 and the mode φ ab Γ ab |Ω . Integrating out the two types of zero modes lifts both the extra universal modes and the deformation modes φ ab Γ ab |Ω . For this mechanism to work, the deformations associated with φ ab Γ ab |Ω have to be unobstructed, of course. On the other hand, the topological index N + − N − counting the difference between zero modes of positive and negative chirality with respect to the normal bundle of the divisor (see discussion around eq. 44) remains unchanged. This is reassuring as by turning on suitable background B-field in addition to the gauge flux we may continuously set the quantity F appearing in the coupling (51) to zero, which should not change any topological quantities. Let us illustrate this in a simple local example on a toroidal orientifold. We compactify Type IIB on T 6 with metric and mod out by the orientifold projection Ωσ(−1) F L with σ : z 2 → −z 2 . Ignoring the resulting tadpole cancellation conditions for a second, we now turn on Ωσ(−1) F L invariant G 3 -form flux. Consider an E3-brane in this background on the divisor Γ given by the first two T 2 s times a point on the third one. For vanishing Wilson line along the first T 2 , Γ is invariant under the orientifold projection and the instanton is of type O(1). Since we are interested in lifting the τ -modes we assume the presence of a Wilson line rendering the instanton non-invariant. On this E3-brane we turn on constant gauge flux of type with Γ = Γ + σΓ as before. This flux is invariant under the orientifold projection and satisfies the HYM equation. Consistently, the brane couples to the likewise invariant two-form (C 2 ) 1,2 . Then the coupling of the zero mode ω 0 on the instanton is proportional to which can be non-vanishing. Indeed the flux component G 123 is invariant under the orientifold projection. This simple example shows that, ignoring tadpole constraints, it is possible that the ω 0 modes decouple for non-vanishing G 3 form flux. However, when it comes to satisfying the tadpole constraints, we have to introduce both further D7-branes to cancel the O7-plane tadpole and an O3plane to cancel the tadpole induced by the G 3 -form. The easiest way to get the O3-plane is to also mod out the model by the Z 2 action z 1,3 → −z 1,3 , essentially turning the configuration into the fluxed K3× T 2 Z 2 model studied in [68]. However, in this case the E3 is not invariant under this Z 2 , but mapped to an E3 brane with opposite gauge flux −F 12 . Therefore, the coupling of the ω (2) 0 modes again trivially vanishes. 
We leave it for future work to study more general concrete global models of such a configuration in detail and to verify if the τ -modes can actually be lifted. Conclusions This paper has investigated in detail under which circumstances D-brane instantons can contribute to the superpotential in Type II orientifolds. A key role is played by the two universal zero modes τ which are a remnant of the local N = 2 supersymmetry felt by instantons not invariant under the orientifold action. Their presence obstructs the generation of a superpotential. If these modes are not lifted and in the absence of additional zero modes between the instanton and its orientifold image, the instanton generates higher-fermionic F-term corrections which in general depend also on open string operators. Previously, such terms had been considered in the context of heterotic worldsheet instantons moving in a family [25]. Our main interest has been in possible mechanisms to lift the τ modes such that superpotential contributions are possible. Clearly, this question is of significance for an analysis of the quantum corrected moduli space of string vacua as well as for determining the effective interactions in the vacuum. We first focused on an effect which, for E2-instantons in Type IIA orientifolds, is describable as recombination of the instanton with its orientifold image. Equivalently, we asked whether in the Type IIB/Type I dual picture E5-instantons carrying non-trivial extension bundles generate superpotential couplings. If so, this would have important consequences also for the heterotic string. We found that while the τ -modes are indeed absent in such situations, there arise generically additional charged zero modes which cannot be lifted, thus obstructing a contribution to the effective action. By contrast, for the special case that the instanton and its orientifold image preserve a common N = 1 supersymmetry, no such zero modes arise and the recombined object can generate a superpotential provided its reparametrisation moduli can be lifted. For general Calabi-Yau manifolds, we identified appropriate open-string dependent couplings in the instanton moduli action. Their presence hinges upon the details of the underlying N = (2, 2) superconformal worldsheet theory. These couplings generalise known examples of the lifting of instanton reparameterisation modulini through curvature couplings or background fluxes. Concerning this latter point, we tried to substantiate the well-motivated speculation [10,12,16] that closed string background fluxes might also lift the universal τ modes, restricting ourselves to the familiar framework of Type IIB orientifolds with supersymmetric three-form flux. In agreement with the results in particular of [29], in the absence of gauge flux on the E3-instanton no such lifting occurs. We showed, building on the instanton action derived in [28], that once worldvolume fluxes are turned, a lifting might be possible, but only in situations where the divisor wrapped by the instanton contains non-trivial two-cycles anti-invariant under the orientifold action. As it stands we have to leave it open whether this effect can actually be realised in explicit models and, if so, whether it enables the instanton to contribute to the superpotential. As one of the most imminent open questions it therefore remains to study a concrete global example in the spirit of the setup discussed in the last section. 
Also, it would be desirable to gain comparable understanding of the effects of Type IIA fluxes on the E2-instanton zero modes. A Orientifold projection of instanton zero modes In this appendix we describe explicitly the orientifold action Ωσ on the zero modes of an E2-instanton wrapping the cycle Ξ. If Ξ is not invariant under the orientifold action one includes, in the upstairs picture, the orientifold image E2 ′ wrapping the image cycle Ξ ′ . The orientifold action identifies the E2 − E2 modes with the E2 ′ − E2 ′ modes and E2 − E2 ′ -modes with E2 ′ − E2-modes. The E2 − E2 ′ modes arising at invariant intersections on top of the orientifold plane are symmetrised/anti-symmetrised as will be described momentarily. The same applies to the E2 − E2 sector if the E2 wraps a cycle invariant under the orientifold action, Ξ = Ξ ′ . The orientifold action on the bosonic and fermionic instanton zero modes in the invariant sector can be deduced from the action on spacetime-filling D6branes wrapping the same internal cycle Ξ (and possibly its image) as follows: (1) The orientifold action on the internal oscillator part of the vertex operators agrees in the D6 and E2 case. The only difference in the E2 case is that the external 4D space is orthogonal to the E2-brane and thus counts as transverse when applying the usual rules for representing Ωσ. This entails the inclusion of an additional minus sign for bosonic excitations in the external 4D space and the inclusion of a factor e iπ(s 0 +s 1 ) for all fermionic zero modes. Here e iπ(s 0 +s 1 ) acts on the (anti-)chiral 4D spin fields S α (Sα) as e iπ(s 0 +s 1 ) S α = −1 ( e iπ(s 0 +s 1 ) Sα = 1). The + and -cases for the projection relevant for D6-branes are referred to as orthogonal (SO) and symplectic (SP) projections, respectively, because for invariant D6-branes they yield gauge bosons in the adjoint of the respective gauge groups. In the latter case, invariant cycles have to be wrapped by an even number of D6-branes. It is straightforward to apply these rules to the zero modes for two different cases: (i) the universal zero modes for Π Ξ = Π Ξ ′ and (ii) the modes in the E2 − E2 ′ sector arising on top of the orientifold for Π Ξ = Π Ξ ′ . In case the instanton wraps a cycle Π Ξ = Π Ξ ′ the orientifold action on the universal zero modes x µ and θ α , τα leads to where for x µ and θ α the minus sign due to the excitation gets cancelled by the minus sign due to rule (1). Thus for a single instanton subject to the projection γ E2 = γ T E2 , only x µ and θ α survive. Modes in the E2 − E2 ′ -sector arising at intersection on top of the orientifold get (anti-)symmetrised as follows: If for D6-branes wrapping the same cycle the invariant states get anti-symmetrised, then This results in the intersection numbers displayed in table 4. In particular, for a single instanton, the zero mode µ α gets projected out and only m, m, µα survive corresponding to case I in section 3.1 If for D6-branes the invariant states are symmetrised, everything just changes sign. B Details of the CFT computations In this appendix we demonstrate the computation of the amplitude m τ µ as well as of some of the couplings of the fermionic zero modes of the instanton to the closed string background relevant for the F-term corrections investigated in section 2.3 and 4. For simplicity we focus on the case of an instanton wrapping a factorizable cycle of a toroidal orbifold. More details of the CFT computation in this context can be found in [8]. 
While for other backgrounds the presence of the couplings in question has to be checked in concrete computations, all couplings which do not violate any of the general selection rules of the N = 2 SCFT on the worldsheet are generically present. Let us start with the open string coupling m τ µ used in eq. 21. The relevant vertex operators take the form 17 17 Here we assume the most symmetric configuration in which all intersection angles θ i E2E2 ′ > 0 and 3 i=1 θ i E2E2 ′ = 2. Now we turn to interactions between fermionic zero modes of the instanton and closed string background fields. We start with the coupling between the reparametrization moduliniχα surviving the orientifold action and the anti-chiral Kähler modulinitα. Their vertex operator in type IIA takes the form V (− 1 2 ) χ I (z) = Ωχχ Iα e −ϕ/2(z) Sα(z) e −i/2H I (z) Note that on a factorizable torus T 6 = T 2 × T 2 × T 2 only the diagonal modulī t JJ survive. We see that the couplings χ t respect the total U(1) worldsheet charge. However, only the amplitudes <χ ItJK > for I = J = K = I preserve the internal U(1)-charge in each T 2 separately and lead to a couplinḡ While on factorizable tori (T 2 ) 3 and orbifolds thereof no such couplings exist, on general Calabi-Yau threefolds there is no reason for them to vanish. On the other hand, one can convince oneself that the anti-chiral complex structure modulini with vertex operators =ū IJα e −ϕ/2(z) Sα(z) e −i/2H I (z) 3 i =I e i/2 H i (z) e −φ(z) e iH J (z) (z) e ikX(z,z) (66) do not couple toχ due to non conversation of the total U(1) world sheet charge. This is therefore a universal result. The corresponding bosonic superpartner terms to (65) arise from amplitudes of the form < θ (+1/2)T (−1,−1)χ(−1/2) > , where the superscripts denote ghost picture of the respective vertex operator. Note that with the choice displayed in (67) we ensure the total ghost charge constraint. The vertex operator of the Kähler moduli takes the form =T IJ e −ϕ(z) e −iH I (z) e −φ(z) e −iH J (z) e ikX(z,z) while the one for the θ-mode in (+ 1 2 )-ghost picture is given by i =I e i/2H i (z) e ϕ/2(z) . The correlators are easily evaluated and lead to couplings proportional to θ σ µχI ∂ µT JK . As above, by non-conservation of U(1)-charge there is no coupling to the bosonic complex structure field U. On the other hand the amplitude < θu IJ > is nonvanishing. Here the vertex operator of u IJ is the complex conjugated of (66) Now, one can easily check that U(1) world sheet charge is conserved only in case I = J and the resulting coupling takes the form For the couplings relevant in section 4 we also need the corresponding bosonic partner arise from amplitudes involving a IJ andω IJ . For brevity we only display the computation of the amplitude < θ (1/2)ω(−1,−1) τ (−1/2) >, where the vertex operator forω is V (−1,−1) ω IJ (z) =ω IJ e −ϕ(z) e iH I (z) e −φ(z) e −iH J (z) e ikX(z,z) , while the vertex operator for the θ andτ in the respective ghost picture are given by (68) and (61). Again U(1) world sheet charge requires I = J and a computation analogous to the one leading to the amplitude < θTχ > gives the coupling On the other hand due to the U(1) world-sheet charge there the amplitudes <τt > as well as < θTτ > vanish.
Integrated analysis of multi-omics data for identification of prognostic genes for pancreatic cancer We aim to develop core modules related to pancreatic cancer (PC) to predict the prognosis of PC patients and explore their tumor microenvironment. We integrated gene mutation, methylation, mRNA expression and pancreatic cancer-related gene data for 175 pancreatic cancer samples from the TCGA database and other public databases, and then identified two co-expression modules associated with pancreatic cancer through weighted correlation network analysis (WGCNA). By integrating the genes selected from the first 10 genes of the two co-expression modules, a model risk score was established, and patients were divided into high-risk and low-risk subgroups. Kaplan-Meier survival analysis was used to assess differences in survival between the subgroups and to evaluate the prognostic model. The selected core genes could divide early pancreatic cancer into two subgroups; the prognosis of these two groups was compared, and differentially expressed genes were screened. GO and KEGG enrichment analyses were used to predict the function of the differentially expressed genes. The differential expression levels and immune cell infiltration levels of these selected core genes were further analyzed. Results Our findings showed that nine core genes (MST1R, TMPRSS4, PTK6, KLF5, CGN, ABHD17C, MUC1, CAPN8, B3GNT3) were prognostic biomarkers of pancreatic cancer. These 9 genes could divide early pancreatic cancer into two subgroups; the two subgroups had significant differences in prognosis and differed mainly in functions such as digestion and extracellular cell adhesion. Further analysis revealed that these 9 genes were expressed at high levels in pancreatic cancer tissues. In addition, we compared pancreatic cancer cells and pancreatic epithelial cells by quantitative real-time PCR (qRT-PCR), which suggested that MST1R, PTK6, ABHD17C and CGN were expressed at higher levels in PC cells. CIBERSORT analysis indicated that the expression of these genes was closely correlated with naive B cells, CD8+ T cells and M0 macrophages, suggesting that these genes may play a carcinogenic role in preserving an immune-dominant status in the tumor microenvironment. Conclusions Our research identified 9 key genes which may enhance our understanding of the molecular mechanisms associated with pancreatic cancer. Background Pancreatic cancer (PC) is the most invasive malignant tumor of the digestive system [1]. Its incidence is low, but because of early metastasis to local lymph nodes and distant organs, the 5-year survival rate of patients with PC is very low. The only treatment that may cure PC is surgical resection. However, about 80% of tumors are unresectable at the time of diagnosis [1][2][3]. For patients with advanced PC, chemotherapy is the treatment of choice, but chemotherapy has a wide range of side effects [2,3]. Therefore, early detection of PC is essential in order to provide patients with optimal treatment. Based on the analysis of single-omics data, researchers have found many factors related to PC from various aspects [4,5]. Because disease arises from a complex regulatory system, its occurrence usually involves genetic mutations, epigenetic changes, and abnormal gene expression. Therefore, it is both meaningful and important to identify prognostic biomarkers of PC through the integrated analysis of multi-omics data.
TCGA data download Pancreatic cancer mutation data was downloaded using the R package TCGAbiolinks [6] (https://bioconductor.org/packages/release/bioc/html/TCGAbiolinks.html). Screen the cancer type as PAAD from http:// rebrowse.org/, download the SNP6 Copy Number segment data of PC samples and the methylation chip data of PC samples (platform was illumina 450K chip). Both mRNA and miRNA expression pro les and sample clinical data were downloaded from the TCGA o cial website (https://portal.gdc.cancer.gov/). Finally, the data of 175 patients who included 5 data sets (mutation, CNV, methylation, mRNA, miRNA) were analyzed. The clinical data of these 175 patients with PC were shown in Table 1. Mutation (SNV) analysis MutSigCV was used to analyze high-frequency mutation genes in tumors. By screening genes with higher mutation frequencies, more mutations, and mutations that occur more frequently in conservative sites. The analysis was performed by the corresponding MutSigCV module in the online analysis tool GenePattern [7] (https://cloud.genepattern.org/gp/pages/index.jsf) developed by the Broad Institute. Various types of mutations can occur in cancer, including six basic mutations: C>A, C>G, C> T, T> A, T> C, T>G. In order to study different types of mutations, the researchers proposed the concept of mutational signatures [8]. First, based on six basic types of mutations, and then consider 1 base upstream and 1 base downstream of the mutation site. There were four cases of A, T, C, and G, so a total of 96 mutation types (4 * 6 *4) may be obtained. The frequency of these 96 mutation types was different in different cancers. Non-negative matrix factorization was performed on the frequency of 96 mutation types in each sample to obtain mutation signatures for PC. Here we use the R package maftools [9] (https://bioconductor.org/packages/release/bioc/html/maftools.html),somatic signatures [10] (https://bioconductor.org/packages/release/bioc/html/Somatic Signatures.html) for mutation signature analysis of tumor samples. Then unsupervised hierarchical clustering was performed on the samples based on the contribution of each label, and the clinical characteristics of the sample subgroups with different mutation characteristic labels were observed. Difference analysis The limma [11] package in R was used for differential methylation site analysis, and gene annotation was performed based on the position information of the site. Similarly, the limma package was used to analyze the mRNA and miRNA expression pro le data of cancer samples and control samples to screen for differential mRNA and miRNA. Identify candidate gene sets The differentially expressed genes were searched together with pancreatic cancer-related genes searched in the GENE database, the OMIM database, and the KEGG database, and genes that appeared at least once were selected as the object of investigation. Then, the genes annotated in the differential methylation site analysis and the markedly mutated genes obtained by the MutSigCV analysis were added to further expand the scope of the investigation and nally obtain a candidate gene set. The expression pro le data of candidate gene sets in TCGA in PC samples were selected for analysis of weighted co-expression networks in the next step. Weight co-expression network analysis and cluster analysis WGCNA [12] is a systems biology method using gene expression data to construct a scale-free network. 
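As an illustration of the scale-free network idea behind WGCNA's soft-threshold power selection (the study itself uses the R package WGCNA, not the code below), a rough Python/numpy sketch might look as follows; the expression matrix and the function name scale_free_fit are hypothetical placeholders.

```python
import numpy as np

def scale_free_fit(expr, beta, n_bins=10):
    """Rough sketch of WGCNA-style soft-threshold adjacency and scale-free topology fit.
    expr: (n_genes, n_samples) expression matrix."""
    adj = np.abs(np.corrcoef(expr)) ** beta        # unsigned soft-threshold adjacency
    np.fill_diagonal(adj, 0.0)
    k = adj.sum(axis=1)                            # whole-network connectivity per gene
    hist, edges = np.histogram(k, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    keep = hist > 0
    x, y = np.log10(centers[keep]), np.log10(hist[keep] / hist.sum())
    return np.corrcoef(x, y)[0, 1] ** 2            # R^2 of the log-log fit; close to 1 means scale-free

# Pick the smallest power giving an adequate fit (the study settled on beta = 5)
expr = np.random.rand(300, 175)                    # placeholder candidate-gene expression matrix
for beta in range(1, 11):
    print(beta, round(scale_free_fit(expr, beta), 2))
```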
Using the WGCNA package of R, a weighted co-expression network is constructed on the expression pro le data of the candidate gene set obtained in the previous step. Screen for modules related to PC, construct an interaction subnet for the co-expressed gene sets within the module, and screen candidate core genes for PC based on the network node degree. All clustering heatmaps were completed using the R package pheatmap. The clustering method selected is ward.D. Survival analysis According to the clinical data downloaded and compiled by TCGA, Kaplan-Meier analysis was performed on candidate core genes of PC to nd genes that had a signi cant effect on patient prognosis (P<0.05). Function and path enrichment analysis GO and KEGG enrichment analysis was performed on the gene set of interest using R package cluster pro ler (https://bioconductor.org/packages/release/bioc/html/cluster Pro ler.html) [13]. Immune correlation CIBERSORT analysis tool was applied for estimate the abundance of immune cells in 175 tumor samples, and the correlation between nine core genes and immune in ltration level. Quantitative real-time PCR Expression levels of MST1R, PTK6, ABHD17C and CGN were detected by ABI 7500 Real-time PCR System. Relative expression levels were normalized to β-actin which is internal control. Statistical analysis We used R software (3.5.3) for statistical analysis. The T test was used to compare subgroups. We carried out a Chi-square test to describe clinical parameters and compare patient characteristics. Pearson's correlation was applied to analyze the correlation between genes and the level of immune in ltration. K-M survival curves were used to analyze OS between different groups. Enrichment analysis was accomplished by using the hypergeometric test. Sample and clinical data The study included omics data and clinical information of 175 patients in total, including mutation results and CNV results of 175 samples, methylation expression pro le data including 175 tumor samples and 10 control samples, and mRNA expression pro le data including 175 of the tumor samples and 4 control samples, the miRNA expression pro le data included 175 tumor samples and 4 control samples. The analysis process of our study is shown in Fig. 1 The limma package was used to screen the methylation chip data of 175 cancer samples and 10 control samples for differential methylation sites. When the signi cance threshold was 0.05 and |logFC|>0.3, a total of 1106 differential methylations were screened. The methylation level of 351 methylation sites decreased, and the methylation level of 755 methylation sites increased. Analysis of mRNA differentially expressed genes and mutations Differential gene analysis was performed on the mRNA expression pro les of 175 cancer samples and 4 control samples. When the signi cance threshold was 0.05 and | logFC|>1, a total of 246 differentially expressed genes were screened. The effect of each mutation site on genes was different. We counted the effects of mutations on genes in all samples of each gene. These effects could be divided into the following categories, including frame_shift_del, frame_shift_ins, in_frame_del, missense_mutation, nonsense_mutation, nonstop_mutation, splice_site, translation_start_site. 
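To illustrate how per-gene counts of these mutation effect categories can be tabulated (a hedged sketch, not the authors' pipeline; TCGA MAF files typically expose the gene symbol and consequence type in Hugo_Symbol and Variant_Classification columns), pandas can be used as follows with placeholder records.

```python
import pandas as pd

# Hypothetical MAF-like table with placeholder rows
maf = pd.DataFrame({
    "Hugo_Symbol": ["KRAS", "KRAS", "TP53", "SMAD4"],
    "Variant_Classification": ["Missense_Mutation", "Missense_Mutation",
                               "Nonsense_Mutation", "Frame_Shift_Del"],
})

# Per-gene counts of each mutation effect category
effect_counts = (maf.groupby(["Hugo_Symbol", "Variant_Classification"])
                    .size()
                    .unstack(fill_value=0))
print(effect_counts)
```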
statistical analysis by t test found that the distribution of mutation types was signi cantly different in frame_shift_del (P=1.56e-12), in_frame_ins (P=3.95e-11), nonstop_mutation (P=6.22e-05) and splice_site (P =0.005), which indicated that SNV has a certain effect on gene expression. Further enrichment analysis was performed on the differential genes and the signi cant mutation gene set analyzed by MutSigCV. The hypergeometric distribution test showed that the differential genes were signi cantly enriched into the signi cant mutation gene set analyzed by MutSigCV and p value was 0.037. Candidate mRNA gene set and construction of weighted co-expression networks for candidate gene sets The differentially expressed genes were searched in the pancreatic cancer-related genes searched in the GENE database, the OMIM database, and the KEGG database. The relationship between the genes and differential genes in the database was shown in the Fig. 2a, of which 216 gene were identi ed in the differential analysis, and these genes might be related to the onset of PC. In order to further expand the scope of the investigation, the 501 genes annotated in the differential methylation site analysis and the 991 signi cant mutant genes obtained by MutSigCV analysis were added to nally obtain a candidate gene set with a total of 3284 genes. The PC sample expression data of 3284 genes in TCGA were selected for the next analysis of weighted co-expression networks. The WGCNA software package of R was used to construct a weighted co-expression network for candidate gene sets. To ensure that the network was scale-free, we choosed the optimal β=5 (Fig. 2b). We used average-linkage hierarchical clustering method to cluster genes, and obtained 13 modules in total. In order to determine the correlation between the genetic module and the disease, calculate the Pearson correlation coe cient of each module and the sample characteristics (cancer or normal) (the higher the module was more important) and the signi cance P value of the corresponding correlation. The results of the correlation coe cient between gene module characteristics and phenotypes were shown in Table 1. Subsequently, the gene signi cance (GS) value of each gene module was calculated (Fig. 2c, d). The larger the GS value, the more relevant the module was to the disease. Through correlation analysis and GS value calculation, two modules related to PC were nally selected. These two modules were turquoise and black, respectively. Based on the expression relationship of the genes in the two co-expression modules analyzed above, we construct a co-expression network for each module separately. In the end, we selected the top10 gene as the key gene in the co-expression network, that was, the key gene related to PC. Survival analysis In order to determine whether the 20 key genes associated with PC obtained in the previous analysis were signi cantly correlated with prognosis. We performed Kaplan-Meier survival analysis based on the clinical data of these 175 PC samples in TCGA and the expression of these genes in the samples. When the signi cance threshold was set to 0.05, a total of 9 module core genes, including MST1R(P=0. Further multi-factor Cox regression analysis was performed on these 9 genes, and a regression model Score=-0.0877*MST1R-0.0325*TMPRSS4+0.0693*PTK6 +0.1893*KLF5-0.0634*CGN-0.1143*ABHD17C+0.1589*MUC1-0.2122*CAPN8+ 0.3480*B3GNT3. 
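For illustration only, the reported Cox coefficients can be turned into a per-sample risk score as in the following Python sketch; the expression values are placeholders, and the maxstat cutoff of 2.110527 quoted below is taken from the study.

```python
import pandas as pd

# Coefficients from the regression model reported above
coefs = {"MST1R": -0.0877, "TMPRSS4": -0.0325, "PTK6": 0.0693, "KLF5": 0.1893,
         "CGN": -0.0634, "ABHD17C": -0.1143, "MUC1": 0.1589, "CAPN8": -0.2122,
         "B3GNT3": 0.3480}

# Hypothetical expression matrix: rows = samples, columns = the nine genes
expr = pd.DataFrame([[5.1, 3.2, 4.0, 2.8, 3.5, 2.1, 6.3, 1.2, 4.4],
                     [2.0, 1.1, 1.5, 0.9, 2.2, 3.0, 2.8, 2.5, 1.0]],
                    columns=list(coefs), index=["sample_1", "sample_2"])

risk_score = (expr * pd.Series(coefs)).sum(axis=1)    # linear predictor per sample
cutoff = 2.110527                                      # maxstat threshold reported in the study
group = (risk_score > cutoff).map({True: "high", False: "low"})
print(pd.DataFrame({"score": risk_score, "group": group}))
```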
According to the R package maxstat, then determined the classi cation's best score threshold point was 2.110527 and divided the sample into high score and low score groups according to 2.110527 (Fig. 4). Then Kaplan-Meier analysis was performed based on the survival time and status of the samples, and the analysis results showed that there were signi cant differences in the prognosis of the samples with high and low groups (P=0.00035). Driver genes mediate molecular typing of early pancreatic cancer We selected 150 early PC samples (Stage I, Stage II) from TCGA, and used the expression of the 9 prognostic-related genes (MST1R, TMPRSS4, PTK6, KLF5, CGN, ABHD17C, MUC1, CAPN8, B3GNT3) in the early cancer samples analyzed in the previous step to perform unsupervised clustering analysis (Fig. 5a). As shown in the Fig. 5b, all PC samples could be divided into two categories, and the two types of samples had signi cantly different prognosis. In order to further analyze the differences of these key genes in different subgroups, the average expression value of each gene in each type of sample was used as the expression value of the gene in that category. After t-test analysis, it was found that these 9 genes related to prognosis genes were differentially expressed in two subclasses (Fig. 6). Functional differences in early pancreatic cancer subgroups The limma package in R was used for differential gene analysis of the two types of PC samples obtained in the previous step. When the signi cance threshold was 0.05 and |logFC|>1, a total of 563 differentially expressed mRNAs were screened (Fig. 7). Perform functional and pathway enrichment analysis of these genes (Fig. 8). These differential gene results show that these genes mainly were related to various digestion and absorption and cell adhesion functions. The KEGG pathway shown that these genes were related to pancreatic secretory function. It was inferred that the process of PC was related to various digestive disorders and changes in extracellular cell connections. The analysis of these genes combined with clinical characteristics and the proportion of tumor in ltrating leukocytes (TILs) These genes of expression level in PC tissues were higher in those of paracancerous tissues (P <0.05) (Fig. 9a). These genes showed high expression levels in stage III/IV of PC, but it is not statistically signi cant (Fig. 9b). It may be because of the sample size was not large enough to pick up a statistical difference. In order to further con rm the correlation between the expression of these nine genes and the immune microenvironment, CIBERSORT algorithm was used to analyze the correlation of 22 kinds of TILs proportion with these 9 genes expression. Among them, B-cell naive, CD8 + T cells, and Macrophages M0 cells were correlated with these 9 genes expression (Fig. 10). These results support that the expression levels of these nine genes affect the immune activity of tumor microenvironment. Later, according to the drug-target interactions recorded in the drug-bank database, MUC1, PTK6 were the target genes of 2 drugs (Potassium nitrate, Vandetanib). MST1R, PTK6, ABHD17C and CGN is highly expressed in PC cells We next investigated the expression level of four of these genes in cancer cell lines and pancreatic epithelial HPDE lines, MST1R, PTK6, ABHD17C and CGN were expressed relatively high in BxPC-3, Panc-1 and PATU-8988 (Fig. 11, Table 3). potential prognostic biomarkers of PC and selected to construct a prognostic model [15]. 
In this study, we integrated multi-omics data of 175 cases of PC from TCGA database to generate the feature matrix of genes, which include genomic mutation, methylation and pancreatic cancer-related genes. These four kinds of data comprehensively provide 3284 genes. By WGCNA analysis, we identi ed two PC related modules. Eventually, by comparing the selected genes in two modules with prognostic values, 9 genes (MST1R, TMPRSS4, PTK6, KLF5, CGN, ABHD17C, MUC1, CAPN8 and B3GNT3) were identi ed as hub genes. Studies have found that a 15 gene set was associated with the prognosis of PC, and the high expression of the CAPN8 gene in this gene set was associated with a poor prognosis [16]. MST1R kinase was overexpressed in more than 80% of PC, and its effect on epithelial cells and macrophages which can accelerate the progression of PC [17]. However, some studies have found that although RON (also known as MST1R) was involved in the progress of PC in experimental models and may constitute a therapeutic target, its expression was not related to the prognosis of patients with PC who have undergone surgical resection [18,19]. The prognostic effect of MST1R on early PC has not been found in the literature. PTK6 promotes tumor migration and invasion in PC cells that rely on the ERK signaling pathway [20], and no prognostic effect of PTK6 on early PC has been reported in the literature. Studies have analyzed the expression pro le data of PC in GEO and found that TMPRSS4 can be combined with other genes as candidate markers for the prognosis of PC [16]. ABHD17C and CGN have not been reported to be related to the prognosis of PC. MUC1 was associated with tumor invasion and metastasis, and was highly expressed in PC, and its expression was related to overall prognosis [21]. KLF5 expression was increased in pancreatic ductal adenocarcinoma, which can promote mouse cell proliferation, acinar-ductal metaplasia, pancreatic epithelial neoplasia and tumor growth in mice [22]. Other studies reported patients with high expression of KLF5 had shorter overall and tumor-free survival time, and KLF5 promoted cell cycle progression of PC cells [23]. The high expression of glucosyltransferases B3GNT3 plays an important role in the self-renewal of PC stem cells [24]. The absence of B3GNT3 leads to increased proliferation and invasion of PC cells [25], and it was related to prognosis [26]. This indicates that the unreported genes (including MST1R, PTK6, ABHD17C and CGN) can be used as potential driver genes for PC. Using qPCR, we elucidated its expression in cell lines. Four hub genes were expressed relatively high in PC cell lines. At the same time, we analyzed the potential links between these two modules and 68 differentially expressed miRNAs and found that four differential miRNAs were associated with these two modules. Among them, hsa-miR-375 is related to KLF5. Afterwards, we constructed a multi-factor Cox regression analysis of these 9 genes and divided the sample into high score and low score groups. Our study found that there were differences between high and low score group in the prognosis. We further con rmed these 9 genes were differentially expressed in two subclasses. Enrichment analyses demonstrated that various digestion, absorption and cell adhesion functions were important pathways. In particular, KEGG pathway demonstrated that these differential gene were closely associated with pancreatic secretory function. 
This may suggest a potential connection between the progression of PC and various digestive disorders and changes in extracellular cell adhesion. In addition, we confirmed the expression of these genes in PC tissues, and CIBERSORT analysis revealed that these 9 genes were negatively correlated with the immune infiltration levels of B cells and CD8+ T cells. High infiltration of T cells and B cells is related to prolonged overall survival in many types of tumors, including PC [27]. Our study also found that MUC1 and PTK6 are target genes of 2 drugs (Potassium nitrate, Vandetanib). Vandetanib is a multi-target oral small-molecule inhibitor that suppresses tumor cells by blocking intracellular signaling through VEGF, EGFR and rearranged-during-transfection (RET) tyrosine kinases, thereby inhibiting tumor growth and angiogenesis [28]. There are some limitations in the current research. First, the number of normal samples is too small. Second, these genetic screenings are based on public data sets; in order to further confirm that these genes are related to prognosis, validation in clinical cohorts is needed. In conclusion, we constructed a co-expression network using WGCNA and obtained 9 core genes, which can provide a theoretical basis for studying early PC. In future research, we need to use molecular biology methods to verify our findings. Moreover, functional studies of the key genes are needed.
Multi-agent system collision model to predict the transmission of seasonal influenza in Tokyo from 2014–2015 to 2018–2019 seasons The objective of this study was to apply the multi-agent system (MAS) collision model to predict seasonal influenza epidemic in Tokyo for 5 seasons (2014–2015 to 2018–2019 seasons). The MAS collision model assumes each individual as a particle inside a square domain. The particles move within the domain and disease transmission occurs in a certain probability when an infected particle collides a susceptible particle. The probability was determined based on the basic reproduction number calculated using the actual data. The simulation started with 1 infected particle and 999 susceptible particles to correspond to the onset of an influenza epidemic. We performed the simulation for 150 days and the calculation was repeated 500 times for each season. To improve the accuracy of the prediction, we selected simulations which have similar incidence number to the actual data in specific weeks. Analysis including all simulations corresponded good to the actual data in 2014–2015 and 2015–2016 seasons. However, the model failed to predict the sharp peak incidence after the New Year Holidays in 2016–2017, 2017–2018, and 2018–2019 seasons. A model which included simulations selected by the week of peak incidence predicted the week and number of peak incidence better than a model including all simulations in all seasons. The reproduction number was also similar to the actual data in this model. In conclusion, the MAS collision model predicted the epidemic curve with good accuracy by selecting the simulations using the actual data without changing the initial parameters such as the basic reproduction number and infection time. Introduction Seasonal influenza epidemics result in nearly 3 to 5 million cases of severe illness a year and have a great importance in public healthcare [1]. Influenza also causes cardiovascular disorders as well as other complications [2]. Therefore, controlling and preventing the epidemic of influenza is an important issue [3]. Mathematical models, such as truncated model and the SIR model [4,5], have been introduced to predict the transmission of infectious diseases [6]. The strength of these models is the simplicity of calculation due to the deterministic nature and the results would be identical for fixed initial values. However, disease transmission is a sum of many small individual effects, and random events cannot be ignored [7]. Therefore, stochastic models might predict disease transmission better than deterministic models. A multi-agent system (MAS) model is a stochastic method to predict various phenomena. MAS approach has been applied for hepatitis C virus infection modelling [8], pre-hospital emergency management [9], real-time scheduling for out-patient clinics [10], tumor growth [11], and immune responses [12]. Stochastic spatial models have also been applied to epidemic forecasting [13,14]. Most of these studies apply a mathematical method to estimate the interaction between different compartments. In contrast, by representing each individual as a particle, collision of particles would correspond to interaction between individuals. A previous study introduced a kinetic model of mobile susceptible and infective individuals in a two-dimensional domain [15]. They applied this model to predict the epidemic curve of measles. However, this was an in vitro study which compared to the SIR model. 
We referred to this model as a MAS collision model and sought that this model could be applied for prediction of actual seasonal influenza epidemic. Recent studies using MAS models attempt to increase the precision by including multiple parameters in order to simulate the daily schedule of each individual [16,17]. However, the calculation cost increase tremendously when sophisticated models are used, and a supercomputer would be necessary to perform these methods. Conversely, a simple model just focusing on collision might be able to predict the influenza epidemic using a commercially available computer. Our hypothesis was that a simplified model focusing only on collision of people with calibration using data of the first 4 weeks after onset could predict the epidemic curve of seasonal influenza. Therefore, the purpose of this study was to apply the MAS collision model to predict seasonal influenza epidemic in Tokyo for 5 seasons. Data Weekly sentinel influenza surveillance in Tokyo is performed in 419 clinics or hospitals. Weekly data of new cases per site are available at the Tokyo Metropolitan Infectious Disease Surveillance Center website (http://idsc.tokyo-eiken.go.jp/diseases/flu/flu/, Supplemental Table 1). A case is reported 1) if a patient has all four clinical symptoms (highgrade fever, malaise, cough, and sore throat of sudden onset) or 2) if a patient has some symptoms and is tested positive for influenza via a rapid antigen detection by immunochromatography using a nasopharyngeal swab sample [18]. The influenza season starts in the 36 th week and ends in the 35 th week of the next year. We collected the data of 5 seasons: from 2014-2015 to 2018-2019 seasons. The influenza epidemic threshold was defined as weekly onset of >1 patient per site. We started the prediction model at the onset of the corresponding season. The basic reproduction number (R 0 ) for each season was determined as the mean reproduction number of 5 weeks including the onset of the epidemic: for example, if the influenza epidemic started at week N, we calculated the mean reproduction number from week NÀ2 to Nþ2. The calculation method of the reproduction number is described below. Hardware and software Model experiments including the MAS collision model and the SIR model were performed on a computer with 16 GB CPU memory, an Intel Core i7-7700 3.60 GHz CPU (Intel, Santa Clara, CA), using Python 3.7. Basic principle We assumed each individual as a particle inside a square domain (0 x 1, 0 y 1) with a radius of 0.0075. The initial interparticle spacing was approximately 0.03 in this simulation. Susceptible, infectious, and removed individuals were drawn as green, red, and purple particles, respectively ( Figure 1). The initial position of each particle was selected randomly. To correspond to a heterogenous population, the initial particle velocity was randomly determined based on a normal distribution with a standard deviation of 0.1 times the mean speed. The particles move within the domain and are elastically reflected off the walls. When two particles collide, the velocities change such that both energy and momentum is conserved. We determined the total number of particles as 1,000 to correspond to the number of covered people per clinic or hospital in Tokyo. The estimation was performed using the following data. 
First, the number of clinics and hospitals in Tokyo was 13,429 and 647, respectively, in 2018, which is available at the Bureau of Social Welfare and Public Health website (http://www.fukushihoken.metro.tokyo.lg.jp/kiban/chosa_to kei_iryosisetsu/heisei30nen.html). Next, the population of Tokyo in December 2018 was 13,859,764. This leads to a mean population coverage of 985 people per clinic or hospital. Particle velocity and number of collisions Before performing the main experiment, we investigated the relationship between the mean particle velocity and the number of collisions. The number of particles and the size of the domain was the same as the main experiment. A total of 12 frames were performed in each simulation and the total number of collisions was recorded. We performed this experiment in 23 different particle velocities ranging from 0.56 to 1.00 with a step of 0.02, and 50 simulations were performed for each velocity. Main experiment The initial number of susceptible and infectious individulals was 999 and 1, respectively, because epidemic onset was defined as >1 patient per clinic or hospital and each site covers a population of approximately 1,000. Based on the preliminary experiment, we adopted 0.98 as the mean initial particle velocity. This makes approximately 250 collisions per frame. We defined 6 frames to correspond to a single day. This results in 3 contacts per particle per day (250  6Â2/1,000). Note that because 1 collision account for 1 contact with each particle, the total contacts would be the twice of the collision number. We performed the simulation for 900 frames (900/6 ¼ 150 days). In Japan, the activity of people decreases during the New Year Holidays. The number of new influenza patients reduces during the holidays each year. We performed simulations using the SIR model described below with various reproduction numbers during the holidays (data not shown). We determined that reduction of reproduction number to 75% during the 51 st and 52 nd week would be feasible. This accounts for reduction in particle velocity to 70% based on the preliminary experiment ( Figure 2). We did not perform velocity reduction in 2015-2016 season because the epidemic started in the 1 st week of 2016. The infectious period (1/γ) was estimated as 5 days [6,19]. Therefore, an infectious individual turns removed in 5 days (30 frames). When an infectious individual collides a susceptible individual, the susceptible individual turns infectious with a predefined probability. The probability was calculated from the R 0 determined from the actual influenza surveillance data using the following method. An infectious individual transmits to a total of R 0 susceptible individuals during the infectious period (1/γ). The infectious individual contacts 1 ∕ γÂ3 times during the infectious period. Therefore, the probability of an infectious individual to transmit to a susceptible individual would be R 0 Âγ ∕ 3 per contact. To correspond to this probability, a random number between 0 to 1 were generated when an infectious individual collides a susceptible individual, and the susceptible individual would turn infectious when the number was < R 0 Âγ ∕ 3. To explore the closeness of the estimation of the model to the actual incidence number, we performed 500 simulations for each season. We assumed each individual as a particle. The particle color represents the status: green, red, and purple particles correspond to susceptible, infectious, and removed individuals, respectively. 
Initially, one particle is infectious while the remaining particles are susceptible (A). At week 7, 49 particles are infectious while 188 particles are removed (B). Finally, at week 21, no particles are no longer infectious, and 353 particles are removed (C). A total of 647 particles remained susceptible. Comparison with the SIR model The SIR model is a compartment model to describe the transmission of infectious disease [5]. All individuals are classified as one of the 3 compartments: susceptible (S), infected (I), and removed (R). The total number of individuals (N ¼ S(t)þI(t)þR(t)) is fixed to 1,000. The model is described by the following ordinary differential equations. β is the transmission rate, calculated by multiplying R 0 and γ. The β value was calculated from the R 0 value, and we used a fixed γ value of 0.2 as described above. The initial values were determined as follows: We performed the simulation for 150 days. Reproduction number using weekly incidence The actual data for influenza incidence was reported weekly. We performed a following numerical analysis to estimate the reproduction number using the adjacent weekly incidence data. First, we started with 1 infected individual at day 0. When the reproduction number is 1 and the infectious period is 5 days, the individual will transmit to 0.2 individuals in days 1-5. Next, incident individuals at day 1 will transmit to 0.04 (¼ 0.2 ∕ 5) individuals in days 2-6. Furthermore, incident individuals at day 2 (0.24 ¼ 0.2 þ 0.04) will transmit to 0.048 (¼ 0.24 ∕ 5) individuals in days 3-7. Daily incidence of influenza patients could be obtained by repeating this method. Using this data, we calculated the ratio of weekly incidence which is defined as (weekly incidence at week Nþ1) ∕ (weekly incidence at week N). Because peak in daily incidence number is observed every 5 days during the first few weeks, the ratio will not stabilize until approximately week 7 (data not shown). Therefore, we calculated the daily incidence to 70 days and used the data of week 10 for analysis. We performed this analysis with a reproduction number between 0.30 and 2.50 with a step of 0.01 (Supplemental Table 2). We used this table to estimate the reproduction number when only weekly data is available. Reproduction number using daily incidence The reproduction number R(t) can be estimated by the ratio of the number of new infections generated at time t, N(t), to the total incident individuals at time t, given by P t s¼1 Nðt À sÞwðsÞ, where w(s) is the weighting factor of the infectivity [20]. In practice, transmissibility can change over time, and the generation time distribution is difficult to measure. Given that the infectious period was set to 5 days, we estimated that w(s) ¼ 0.2 during s ¼ 1 to 5, otherwise w(s) ¼ 0. In the real world, the viral shedding and the transmission potential is highest just after the onset and declines thereafter [19,21]. We assumed the transmission potential as the same during the infectious period to simplify the model in this study. We further calculated the reproduction number over sliding weekly windows. 2.5. Statistical analysis 2.5.1. The MAS collision model: preliminary experiment A regression analysis was performed to assess the relationship between the mean particle velocity and the number of collisions. We used a quadratic regression analysis rather than a linear regression analysis because a quadratic analysis fitted better than linear analysis in low and high velocities. 
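A minimal Python sketch of the two reference computations described above (the SIR integration, which the study reports solving with scipy's odeint using N = 1,000, I0 = 1, γ = 0.2, β = R0·γ and a 1/100-day step over 150 days, and the sliding reproduction number with a flat infectivity weight w(s) = 0.2 for s = 1 to 5) might look as follows; the R0 value and the function names are placeholders rather than the authors' code.

```python
import numpy as np
from scipy.integrate import odeint

N, I0, R0_init, gamma = 1000, 1, 1.3, 0.2      # R0_init is a placeholder value
beta = R0_init * gamma                          # transmission rate, beta = R0 * gamma

def sir(y, t, beta, gamma, N):
    S, I, R = y
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I
    dR = gamma * I
    return dS, dI, dR

t = np.arange(0, 150, 0.01)                     # 150 days, 1/100-day steps as in the study
S, I, R = odeint(sir, (N - I0, I0, 0), t, args=(beta, gamma, N)).T

def reproduction_number(daily_incidence, w=(0.2,) * 5):
    """R(t) = N(t) / sum_s N(t-s) w(s), with a flat 5-day infectivity profile."""
    Rt = []
    for t_idx in range(len(w), len(daily_incidence)):
        denom = sum(daily_incidence[t_idx - s] * w[s - 1] for s in range(1, len(w) + 1))
        Rt.append(daily_incidence[t_idx] / denom if denom > 0 else np.nan)
    return Rt
```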
The MAS collision model: main experiment Weekly new patients were recorded in each simulation. The means and 95% confidence intervals were calculated for each season. Because this model is a stochastic model, the total number of infected patients varied from approximately <10 to >500 patients. In order to improve the prediction, we applied the following filter as a checkpoint at specific weeks to select the simulations for prediction analysis: when the weekly incidence in week i was N(i), simulations with weekly incidence between N(i)×0.6 and N(i)×1.4 were eligible. We analyzed the simulated data using the following 4 models by applying the filter: Model 1, include all data with no exclusion; Model 2, apply the filter at week 2; Model 3, apply the filter at weeks 2 and 4; Model 4, apply the filter at the weeks with the peak number of incidence before and after the New Year Holidays. Only a single peak was found in the 2015-2016 season; therefore, we applied the filter at the week with peak incidence and 4 weeks later. We recorded the calculation time for each simulation. The SIR model The ordinary differential equations were solved using the odeint function in the scipy.integrate module. We assigned the initial state (numbers of susceptible, infected, and removed individuals), the time interval for calculation, and the basic parameters (β and γ) in the function. The time interval was 1/100 day. The function gives the state of each compartment (susceptible, infected, and removed individuals) by numerically solving the equations. We recorded the daily status of each compartment. Comparison of models We calculated the mean absolute error (MAE), root mean squared error (RMSE), and mean absolute percentage error (MAPE) to compare the accuracy of the models. The calculations were performed as follows: MAE = (1/n) Σ|P_t − A_t|, RMSE = √[(1/n) Σ(P_t − A_t)²], and MAPE = (100/n) Σ|(A_t − P_t)/A_t|, where P_t is the predicted value and A_t is the actual value. Sensitivity analysis We performed a qualitative sensitivity analysis in Model 1 to assess the robustness of the calculation [22]. The week and number of maximal weekly incidence of influenza patients were calculated using the first and last 250 simulations. We compared the results with the data using all simulations. (Table note: numbers are reported as mean ± standard deviation or N; MAS, multi-agent system.) Actual epidemic data The week with the maximal number of weekly incidence ranged from the 4th to the 10th week after onset (Table 1). The maximal number of weekly incidence ranged from 32.9 to 64.2 (Table 1). The SIR model The calculation time ranged from 0.112 to 0.808 s (Table 2). The week of the maximal number of weekly incidence ranged from 11 to 19 weeks, and the number of maximal weekly incidence ranged from 16.5 to 46.7 (Table 1). The increase in the weekly incidence was lower, and the week of peak incidence was approximately 8 weeks later, than in the actual data. The number of maximal weekly incidence was close to the actual data when the peak was before the New Year Holidays (2014-2015 season) and when a reduction in incidence was not observed (2015-2016 season). However, when the incidence dramatically increased after the New Year Holidays (2016-2017, 2017-2018, and 2018-2019 seasons), the maximal weekly incidence was around 2 to 3 times higher than the number predicted using the SIR model. The errors between the SIR model and the actual data were larger than the errors between the MAS collision models and the actual data (Table 3).
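The three accuracy metrics can be computed in a few lines of numpy; this is an illustrative sketch using the standard definitions rather than the authors' script, and the prediction and observation vectors are placeholders.

```python
import numpy as np

def mae(pred, actual):
    return np.mean(np.abs(np.asarray(pred) - np.asarray(actual)))

def rmse(pred, actual):
    return np.sqrt(np.mean((np.asarray(pred) - np.asarray(actual)) ** 2))

def mape(pred, actual):
    pred, actual = np.asarray(pred, float), np.asarray(actual, float)
    return 100.0 * np.mean(np.abs((actual - pred) / actual))

# Placeholder weekly predicted vs observed incidence per sentinel site
pred, obs = [10, 20, 35], [12, 18, 40]
print(mae(pred, obs), rmse(pred, obs), mape(pred, obs))
```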
The MAS collision model: preliminary experiment Quadratic regression analysis was performed, and a strong positive relationship was observed (R 2 ¼ 0.99) between the mean particle velocity and the number of collisions ( Figure 2). We assumed that a mean velocity of 0.98 would result in approximately 3,000 collisions per 12 frames including the lower limit of the 95% confidence interval. We also estimated the mean particle velocity to correspond to reduction in collision count to 75% (3,000  0.75 ¼ 2,250) during the New Year holidays. Velocity reduction to 70% (0.686) would result in a total collision count of 2,255. Hence, we reduced the particle velocity to 70% during the holidays. The MAS collision model: main experiment The mean calculation time was approximately 30 min per one simulation ( Table 2). The analysis including all simulations (Model 1) corresponded good to the actual data in 2014-2015 and 2015-2016 seasons ( Figure 3, Table 3). However, the model failed to predict the high peak incidence after the New Year Holidays in 2016-2017, 2017-2018, and 2018-2019 seasons. In the latter 3 seasons, the peak weekly incidence was higher (Supplementary Figure A) and the week of peak incidence was later (Supplementary Figure B) than expected. The reproduction number after the holidays was higher than the R 0 , which resulted in a steep curve (Figure 4). Sensitivity analysis showed that the week and number of maximal weekly incidence did not differ between the first and the latter half of the simulations (Supplemental Table 3). Simulations in models 2 and 3 were selected based on the weekly incidence of week 2, and weeks 2 & 4, respectively. The selected simulations ranged 20-29%, and 6-9% of the total simulations, respectively ( Table 3). The prediction error was slightly better in 2014-2015, 2016-2017, 2017-2018 seasons, but it was still difficult to predict the high peak after the New Year Holidays (Figure 3, Supplementary Figure B). The reproduction number after the New Year Holidays was lower than the actual value in the latter 3 seasons (Figure 4). Model 4 included simulations selected by the week of peak incidence. Therefore, the simulated incidence curve was close to the actual curve in all seasons (Figure 3). The reproduction number was also similar to the actual data ( Figure 4). However, the proportion of selected models were smaller than models 2 and 3, ranging from 3% to 9% (Table 3). Discussion The present study applied the MAS collision model to predict the influenza epidemic and tested the method in 5 influenza seasons. A model including all simulated cases (Model 1) predicted the peak number of incidence and week of peak incidence well in 2014-2015 and 2015-2016 seasons, but it underestimated the peak number of weekly incidence in the remaining 2016-2017, 2017-2018, and 2018-2019 seasons. In these three seasons, a small peak was observed before the New Year Holidays, but a second peak after the holidays showed a much higher number of incidence than the first peak. The initial model could not thoroughly predict the second peak, but models which selected the appropriate simulations worked better than the initial model. The proposed method might be close to the pairwise model [23,24], but this model is different in that pairs are generated visually by the collision of particles. 
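As a sketch of the checkpoint filter that defines Models 2 to 4 (weekly incidence at each checkpoint week within ±40% of the observed value), the selection step could be written as follows; the data values and the function name are placeholders, not the authors' implementation.

```python
def passes_checkpoints(sim_weekly, actual_weekly, checkpoint_weeks, tol=0.4):
    """Keep a simulation if its weekly incidence at every checkpoint week lies
    within +/-40% of the observed incidence (the filter defining Models 2-4)."""
    for week in checkpoint_weeks:
        target = actual_weekly[week]
        if not (target * (1 - tol) <= sim_weekly[week] <= target * (1 + tol)):
            return False
    return True

# Example: Model 3 applies the filter at weeks 2 and 4 (weeks counted from onset)
actual = {2: 3.4, 4: 8.9}                          # placeholder observed values
simulations = [{2: 3.0, 4: 9.5}, {2: 1.0, 4: 2.0}]
selected = [s for s in simulations if passes_checkpoints(s, actual, checkpoint_weeks=[2, 4])]
print(len(selected))
```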
The MAS collision model (especially Model 4) might be criticized for merely presenting the best-fitting result, because we filtered the results using the actual incidence at the 4th week, which was the week of peak incidence in the first two seasons. However, the peak arrived much later (ranging from 7 to 10 weeks) during the last three seasons, and MAS collision Model 4 was still able to predict the week and intensity of the peak in these seasons. We referred to the model proposed in this study as a MAS collision model, but there are a number of studies that used MAS models to predict the transmission of influenza. A simple model focusing only on the number of collisions might not predict the actual curve with good precision. Therefore, a number of studies attempted to improve precision by including additional parameters to simulate the daily schedule of each individual: compartments such as home, supermarket, school, and workplace; temperature; self-awareness; day of the week; age; and railway line [16,17]. Sophisticated models including the aforementioned parameters would help to perform a mean-field approximation using stochastic models. However, unknown parameters might affect the epidemic curve, and the results might change even when the initial conditions are the same. The mean value of the predicted curve might therefore not precisely forecast the actual epidemic each year. Selecting appropriate simulations after performing numerous simulations with a simplified model is another approach to increasing precision. The MAS collision model in the present study focuses only on the transmission process; hence its calculation time is shorter than that of the more sophisticated models. Because of the simplicity of the model, the daily contact number was set to 3 contacts per day. This is smaller than the daily contact number in the real world, which ranges from 7 to 18 contacts per day, as shown in the POLYMOD study [25]. The main concept of the model we propose is to focus on collisions while considering the stochastic nature of each collision. We did not incorporate time schedules or compartments, in order to keep the model simple; in other words, this model is a mean-field approximation of the real world. Because the model itself is quite simple, it could easily be extended to a compartment model representing each district of Tokyo without increasing the calculation cost too much. Although the contact number per day was low, we adjusted the probability of transmission from an infectious individual to a susceptible individual to R₀ × γ / 3. This approach might not be robust when the number of individuals included in the model is small, but increasing the number of individuals makes the calculation results robust. Another strength of the proposed model is that the initial particle velocity was set to be normally distributed.

(Figure 3. Actual and predicted data of weekly influenza cases in different seasons. Blue and orange lines correspond to the actual incidence data and the prediction using the SIR model. Four different models were used for the MAS collision model analysis: Model 1 (green), including all simulations; Model 2 (red), simulations selected by weekly incidence at week 2; Model 3 (purple), simulations selected by weekly incidence at weeks 2 and 4; Model 4 (brown), simulations selected by weekly incidence at the week of peak incidence. Shaded areas represent 95% confidence intervals. CI, confidence interval; MAS, multi-agent system.)
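The sketch below illustrates two of the ingredients just described, the normally distributed initial velocities and the adjusted per-contact transmission probability; the numerical values are illustrative assumptions, not those used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters, not the values fitted in the study.
R0, gamma, contacts_per_day = 1.8, 1.0 / 3.0, 3

# Per-contact transmission probability chosen so that, over an infectious
# period of 1/gamma days, contacts_per_day * p / gamma = R0.
p_transmit = R0 * gamma / contacts_per_day

# Normally distributed initial velocities (mean 0.98 as in the preliminary
# experiment; the spread is an assumption), truncated at zero.
velocity = rng.normal(loc=0.98, scale=0.25, size=1000).clip(min=0.0)

# A collision between an infectious and a susceptible particle then
# transmits with a single Bernoulli draw:
transmitted = rng.random() < p_transmit
```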
In the real world, the epidemic curve would change substantially when a "super spreader" is infected. An infected high-velocity particle would collide with more particles than a particle of intermediate velocity. The mean epidemic curve of this model might be close to the SIR model, but the results include simulations in which a highly active infected particle causes a surge in the epidemic curve. Moreover, the velocity of particles does not need to be normally distributed. If the population is young on average, a Poisson distribution, with more high-velocity particles than a normal distribution, could be adopted instead. With this method, the activity of the population can be translated into a velocity distribution in a straightforward way.

The peak estimated using the SIR method in the present study was approximately 8 weeks later than in the actual data. This occurred because the R₀ derived from the data around the epidemic onset was smaller than the actual reproduction number, especially at the beginning of the epidemic. Mercer et al. [26] showed that reproduction numbers are commonly overestimated early in a disease outbreak due to imported cases and outbreaks arising in subpopulations. In the MAS collision model, while some simulations ended with only a few infected individuals, others showed a steep incidence curve. This reflects the rise in the transmission rate due to outbreaks in subpopulations. The difference between the SIR model and the MAS collision model is that the individuals in each compartment change continuously in the SIR model, whereas the change is discontinuous in the MAS collision model because a particle cannot be divided into smaller parts. Therefore, a single infected particle might transmit to multiple particles in a short time and form a cluster, which does not arise in the SIR model [24]. The precision of the incidence curve from the SIR model could be improved by correcting the reproduction number after the disease outbreak. The strength of the MAS collision model is that prediction can be performed using the reproduction number around the beginning of the season. A previous study proposed a real-time prediction model of influenza outbreaks by calibrating the parameters used in the SIR model (β and γ) every week [6]. Although the precision of the weekly incidence curve improves with the calibration method, data on the peak incidence are necessary to increase precision, similar to Model 4 in the present study. Therefore, predicting the key parameters from a small amount of data might be difficult. In this context, a multi-step prediction method that uses previous annual epidemic data could predict the epidemic more accurately [27]. A previous study using this method showed that implementing multiple single-output predictions in a six-layer long short-term memory structure achieved good accuracy in predicting influenza incidence 2-13 weeks ahead. A study by Yang et al. [28] attempted to forecast influenza epidemics in Hong Kong using a Kalman filter in conjunction with the SIR model. In that model, when a new observation arrives, the system (including all model variables and parameters) is updated by the filter algorithm. Although we acknowledge that this filtering is a sophisticated method, the main parameters used to calculate the SIR model are constantly renewed. The principal finding of the present study, by contrast, is that an incidence curve with good correlation with the actual data could be selected without changing the R₀ value. This is explained by the stochastic nature of the MAS collision model.
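For readers who wish to trace reproduction-number curves like those in Figure 4 from weekly incidence, one simple textbook approximation is the exponential-growth relation R ≈ exp(r·Tg); this is an illustrative sketch, and not necessarily the estimator used in the study, with an assumed generation time.

```python
import numpy as np

def effective_r(weekly_incidence, generation_time_days=3.0):
    """Rough effective reproduction number from weekly incidence via the
    exponential-growth relation R = exp(r * Tg).

    The generation time is an illustrative assumption; this is not
    necessarily the estimator used in the study.
    """
    incidence = np.asarray(weekly_incidence, dtype=float)
    r = np.log(incidence[1:] / incidence[:-1]) / 7.0  # daily growth rate
    return np.exp(r * generation_time_days)

# e.g., effective_r([5, 12, 30, 55, 40]) returns one R per week-to-week step.
```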
Numerous factors influence the transmission of influenza, including the vaccination rate [29], age [30], and temperature [1]. Ascertainment might also differ between age groups [30]. These factors were not included in the MAS collision model. The influenza epidemic curve is influenced by these factors, and including them might further increase precision. McGowan et al. [31] showed the superiority of statistical models with stochasticity over SIR models and concluded that SIR models should include major environmental determinants for predicting peaks. Additionally, ensemble forecasts including various prediction models perform better than any single model, and the SIR model can also be included in such an ensemble. One challenge in the weekly incidence data of Tokyo is how to account for the decrease in detected patients during the New Year Holidays. First, the number of patients who visit hospitals and clinics decreases because many of them are closed during the holidays. Second, the population of Tokyo decreases by around 50-60% during the holidays because many residents return to their hometowns (https://www.blogwatcher.co.jp/case/report_newyear_2017/). If the reproduction number maintained the same value, the weekly incidence in the first week of the new year should increase dramatically. The actual data show that the incidence curve shifts to the right without a steep increase after the holidays. Therefore, we determined that transmission declines during the holidays, but further investigation is needed to confirm the actual reduction in the reproduction number. Real-time surveillance data might serve to answer this question [32,33].

We acknowledge the following limitations in this study. First, the proposed model was verified in only 5 influenza seasons in Tokyo. Further study is needed to validate the MAS collision method in different epidemic seasons and in different cities or countries. Second, there are multiple methods to estimate the R₀ value [3,34], and the results might change when different methods are used. Third, we did not consider the vaccination rate in Japan. The vaccination rate gradually increased from 14.9% to 28.0% between 2014 and 2018 (https://www.mhlw.go.jp/shingi/2008/06/dl/s0618-9a.pdf). This increase might have reduced the transmission of influenza, and the vaccine effectiveness against influenza differs between seasons [35]. Fourth, the model we proposed was based on the SIR model, but precision might improve with other models such as the SEIR model. The latent period of influenza is about 2 days, and transmission occurs 1-2 days before onset [21].

(Figure 4. Effective reproduction numbers using actual and predicted data in different seasons. Blue and orange lines correspond to the actual incidence data and the prediction using the SIR model. Different models were used for the MAS collision model analysis: Model 1 (green), including all simulations; Model 2 (red), simulations selected by weekly incidence at week 2; Model 3 (purple), simulations selected by weekly incidence at weeks 2 and 4; Model 4 (brown), simulations selected by weekly incidence at the week of peak incidence. Shaded areas represent 95% confidence intervals. MAS, multi-agent system.)

The infectious period starts just after exposure to influenza. Therefore, the SIR model adopted in this study should fit influenza virus transmission well.
Finally, the calculation time was 30 min per simulation, which could be reduced by advances in CPU performance or improvements in the program.

Conclusions

We applied the MAS collision model to predict seasonal influenza epidemics over 5 years in Tokyo. The model predicted the epidemic curve with good accuracy by selecting simulations using the actual data, without changing initial parameters such as the basic reproduction number and the infection time.

Declarations

Author contribution statement
Nobuo Tomizawa: Conceived and designed the experiments; performed the experiments; analyzed and interpreted the data; contributed reagents, materials, analysis tools or data; wrote the paper. Kanako K. Kumamaru: Analyzed and interpreted the data; wrote the paper. Koh Okamoto: Conceived and designed the experiments; analyzed and interpreted the data; wrote the paper. Shigeki Aoki: Conceived and designed the experiments; wrote the paper.

Funding statement
This work was supported by Research Funds of the Ministry of Health, Labour and Welfare (20IA1012).

Data availability statement
Data included in article/supplementary material/referenced in article.

Declaration of interests statement
The authors declare no conflict of interest.

Additional information
Supplementary content related to this article has been published online at https://doi.org/10.1016/j.heliyon.2020.e03459.
Climate of Hate: Similar Correlates of Far Right Electoral Support and Right-Wing Hate Crimes in Germany

Since 2015, far right parties drawing heavily on radical anti-refugee rhetoric have gained electoral support in Germany while the number of political hate crimes targeting refugees has risen. Both phenomena – far right electoral support and the prevalence of right-wing hate crimes – have theoretically and empirically been linked with socio-structural and contextual variables. However, systematic empirical research on these links is scattered and scarce at best. We combine official statistics on political hate crimes targeting refugees in Germany and electoral support for the far right party "Alternative für Deutschland" (AfD) in the 2017 German national elections with socio-structural variables (proportion of foreigners and unemployment rate) and survey data collected in a representative survey (N = 1,506) in 2016. We aggregate and combine data for all German municipalities except Berlin, which constituted the level of analysis for the current study. In path analyses, we find the socio-structural variables to be unrelated to each other but significantly correlated with both criterion variables in a systematic fashion: proportion of foreigners was negatively, while unemployment rate was positively, linked with far right electoral support. Right-wing crime was linked positively with unemployment rate across Germany and positively with proportion of foreigners only in East Germany, while proportion of foreigners was unrelated to right-wing crime in West Germany. When survey measures were included in the model, they were linked with socio-structural variables in the predicted fashion – intergroup contact correlated positively with proportion of foreigners, collective deprivation correlated positively with unemployment rates, and both predicted extreme right-wing attitudes. However, their contribution to the explained variance in the outcome variables above and beyond the socio-structural variables was negligible. We argue that both far right electoral support and right-wing hate crime can be conceptualized as behavioral forms of political extremism shaped by socio-structural and contextual factors, and we discuss implications for preventing political extremism.

INTRODUCTION

Much has been written about the recent wave of success for far right, right-wing populist, and extreme right-wing parties, figures, and movements globally, but especially in the Western world. There seems to be agreement that we are witnessing what some scholars have called a "revolt against liberal democracy" (Eatwell and Goodwin, 2018), a "cultural backlash" (Norris and Inglehart, 2019), or – in more technical and less alarming words – growing support for the far right (Golder, 2016). These developments seem to coincide temporally with an increase in hate crime in countries across the world (see, e.g., Osce Hate Crime Reporting, 2019) and a number of right-wing terrorist attacks that have gained media attention around the world: attacks in Oslo and Utøya, 2011, Charleston, 2015, and Christchurch, 2019, among others, have been directly linked with extreme right-wing ideology. While there is by now a rich literature on far right electoral support, its potential links with right-wing crimes are not well understood. In public discourse and in discussions amongst practitioners, there seems to be an implicit assumption that both phenomena – far right electoral support and the prevalence of right-wing hate crimes – are related (e.g., Chu, 2018).
However, systematic empirical research on such potential links is scattered and scarce at best. In the current paper, we argue that the two are not independent phenomena but have similar correlates and are also correlated with each other. We combine official statistics on reported right-wing hate crimes targeting refugees in Germany and far right electoral support in the 2017 German national elections and investigate links with socio-structural variables (proportion of foreigners and unemployment rate) on the one hand and with psychological variables measured in a representative survey (perceived threat, intergroup contact, and extreme right-wing attitudes) on the other hand. We show that both phenomena co-occur geographically, and we do so in the German context, where right-wing hate crime has recently peaked while the far right has seen increasing electoral support in the aftermath of the large refugee in-migration since 2015.

The German Context

In recent years, wars and other conflicts, including personal persecution, primarily in the Middle East and Africa, have forced millions of individuals to leave their original places of residence. While most refugees temporarily settle in neighboring countries like Lebanon or Turkey, many have also migrated to Western Europe. Almost 1 million refugees sought asylum in Germany in the year 2015 alone (German Federal Office for Migration and Refugees, 2019). Germany was not only the European country in which most refugees applied for asylum (Eurostat, 2017). It has also been at the center of attention for a number of events during this period, most prominently for its "welcome culture". As one example, German chancellor Angela Merkel famously announced that several thousand refugees would be allowed to cross the border from Hungary to Austria and into Germany in September 2015 (e.g., Hall and Lichfield, 2015). Her public press announcement "Wir schaffen das!" (We can do this) became historic. While solidarity and the willingness to help have generally been high (e.g., Akrap, 2015), Germany also experienced a wave of hostile and violent resistance against refugees (e.g., Benček and Strasheim, 2016). The number of political hate crimes targeting refugees and their homes in Germany rose dramatically and peaked in 2016 with a total of more than 3,000 incidents according to official sources, with an unknown number of additional incidents remaining unreported (Federal Government, 2017; ProAsyl, 2017). Such incidents range from propaganda crimes like libel, incitement of the masses, and harassment to violent hate crimes like assault, arson, and right-wing extremist terrorist attacks. During the same period, far right parties drawing on radical anti-refugee rhetoric gained electoral support in Germany – most notably the AfD (Alternative für Deutschland, Alternative for Germany). We begin by tracing the rise of this most prominent current far right party in Germany, the AfD, and summarize previous research on, and historical links between, socio-structural variables such as unemployment and far right electoral support. We then briefly review two prominent social-psychological factors – perceived threat and intergroup contact – that can help in understanding both outcomes of interest for the current work. As we shall see, theoretically as well as empirically, far right electoral support and right-wing hate crimes have similar socio-structural as well as psychological correlates that can produce a dangerous climate of hate.
Rise of the AfD

The AfD was founded in 2013. During the first few years after its formation, the party strongly opposed European integration, the European currency (the Euro), and especially European assistance programs during the European sovereign-debt crisis (Arzheimer, 2015), such as the European Stability Mechanism and the European Financial Stability Facility. The party's "Euro-skeptic" positions broadened during the so-called "European migration crisis" from 2015 onward. Since then, a shift in AfD policy has been reliably documented: the party's positions changed drastically from an economic critique of the European Union toward a far right ideological, nationalist, and anti-immigration course (e.g., Franz et al., 2018). During this period of change, AfD personnel and party leadership also changed. This shift seems to have facilitated electoral success for the AfD. Since 2015, the AfD has gained more and more electoral support and was able to consolidate as a relevant political force, with a breakthrough in 2016: in the Eastern German states of Saxony-Anhalt (24.2%) and Mecklenburg-Vorpommern (20.8%), the AfD had its biggest electoral successes. However, the AfD also had a substantial impact in the 2016 Western German state elections in Rhineland-Palatinate (12.6%), Berlin (14.2%), and Baden-Wurttemberg (15.1%). This trend culminated in a striking 12.6% in the German general elections in September 2017 (21.8% in the five states of the former German Democratic Republic), establishing the AfD as the third most powerful party and opposition leader in the German Parliament, the Bundestag. With the AfD in the Bundestag, the party's positions and views have become entrenched in large parts of German society, seemingly independent of financial and societal status (Zick et al., 2016; Bergmann et al., 2017; Franz et al., 2018; Schröder, 2018), and support for AfD positions in German society has increased. Various polls currently have the AfD (13-14%) close to the Social Democrats (12%), making the AfD the third most successful party in Germany as of August 2019 (Wahlrecht.de, 2019). Before discussing psychological factors that should be relevant for the AfD's electoral success as well as for right-wing hate crime, we shall now review socio-structural variables that have historically and empirically been linked with far right electoral support.

Unemployment and Far Right Electoral Support

The link between unemployment and far right electoral support is well established in the literature and has been demonstrated empirically numerous times. In one of the earliest studies, Pratt (1948) analyzed the 1932 German Reichstag elections and found that unemployed citizens tended to vote for extreme political parties, such as the Nazi Party (NSDAP) but also the Communist Party (KPD). Falter and Zintl (1988) further corroborated these findings, arguing that high unemployment rates facilitated the electoral and overall success of the Nazis in Germany in the 1930s. O'Loughlin et al. (1994), however, showed that these findings seem to hold only for the 1932 and 1933 German general elections but not for the 1930 election. The authors stress the importance of "regional and local contexts of the voting decisions" (O'Loughlin et al., 1994, p. 373). In more recent studies, the general assertion that unemployment and far right electoral support are correlated still holds true (e.g., Norris, 2005; Mudde, 2007; Rydgren, 2007). In his work on far right electoral support, Rydgren (2009) investigated the broader concept of social isolation.
Among other factors such as a lack of social relations, weak family structures, and less personal involvement in civil society, he lists unemployment as a key variable for supporting far right parties (Rydgren, 2009). On the basis of a geographically weighted regression (GWR) analysis focusing on the electoral results of the German neo-Nazi party NPD (Nationaldemokratische Partei Deutschlands), Teney (2012) found a connection between far right electoral outcomes and local unemployment rates while also emphasizing the importance of socio-spatial as well as regional variations among municipalities (Teney, 2012). However, some studies have shown that high levels of unemployment do not necessarily have a determining impact on far right voting. Based on election studies and far right voting outcomes in seven European countries, as well as supranational surveys, for example, Arzheimer and Carter (2006) found that higher unemployment rates are linked with less far right voting. Similarly, Oesch (2008) emphasizes that unemployment is not a major factor influencing far right voting, but rather matters of identity, claiming that "questions of identity are more important than economic questions" (Oesch, 2008, p. 370). It seems, then, that while unemployment has been linked with far right electoral support numerous times, it does not tell the full story. In social-psychological theorizing there are two prominent constructs that should be of interest for the current research, namely intergroup contact and perceived threat.

Intergroup Contact

According to intergroup contact theory, first developed by Allport (1954), opportunities for random encounters or cross-group friendships (e.g., Pettigrew, 1998) reduce negative attitudes and prejudice toward outgroups. Allport originally assumed that certain "optimal conditions" (i.e., equal status, perception of common goals, institutional support, perception of common humanity) would facilitate the positive effects of intergroup contact (Allport, 1954). In a meta-analysis of more than 500 studies, Pettigrew and Tropp (2006) found empirical support for the theory and showed that optimal contact conditions may yield a greater reduction in negative attitudes but might not always be necessary to reduce prejudice. Several adaptations and extensions of Allport's original theory have been suggested (e.g., Wright et al., 1997; Pettigrew, 2009), but there seems to be general agreement that intergroup contact opportunities tend to decrease hostile and negative attitudes toward outgroups in general, even in the absence of optimal conditions (but see Barlow et al., 2012). The proportion of foreigners in a given spatial unit (e.g., a municipality) can be considered the most straightforward socio-structural indicator of contact opportunities and varies considerably across Germany (e.g., Wagner et al., 2003, 2006, 2008). For example, the rate of foreigners is still as much as four times lower in Eastern than in Western German federal states (Statistisches Bundesamt, 2018). Such preconditions provide only relatively few opportunities for intergroup contact for Germans in East as compared to West Germany.
Contact theory has consequently been widely used as a social-psychological theoretical framework to explain higher levels of prejudice (e.g., Decker et al., 2016; Zick et al., 2019) as well as higher rates of xenophobic attacks and hate crimes against foreigners (e.g., Benček and Strasheim, 2016) in East versus West Germany (see also Wagner et al., 2003; Andresen et al., 2018). Furthermore, intergroup contact with refugees specifically is more prevalent in West than in East Germany, as revealed in recent large-scale surveys (Ahrens, 2017). Based on intergroup contact theory, one would therefore predict negative links between the proportion of foreigners and prejudice as well as far right electoral support (Teney, 2012) and political hate crime (Benček and Strasheim, 2016). It seems noteworthy that at least this second prediction is somewhat contradictory to the intuition that, for hate crimes to occur, the target outgroup needs to be present. However, according to intergroup contact theory, hate crimes should be most frequent in areas with low rates of outgroup individuals, that is, for example, foreigners and refugees. In a nutshell, we thus assumed a negative relationship between the number of refugees in a particular German municipality and the number of hate crimes against refugees. Another theoretical approach that is relevant for the present study concerns the threat ostensibly posed by the outgroup and its members, and collective deprivation.

Perceived Threat and Collective Deprivation

A major psychological driver of antagonistic intergroup attitudes is the perception that the outgroup threatens the ingroup's status or culture (e.g., Semyonov et al., 2004; but see Wagner et al., 2006). Perceived threat has consequently also been used to explain differences in prejudice levels between East and West Germans (Wagner et al., 2003; Asbrock et al., 2014; also see Semyonov et al., 2004; Wagner et al., 2008). Interestingly, threat can be closely related to the socio-structural factor discussed above: higher unemployment rates in Eastern versus Western German federal states may contribute to differences in perceived threat or status. In other words, competition over jobs and economic opportunities might translate into higher perceived threat from outgroups in general. This perception may, in turn, be linked with prejudice and other negative attitudes toward members of ethnic outgroups. Perceived threat does not necessarily coincide with realistic threat. In response to the recent so-called "refugee crisis", concerns about immigration increased twice as much in East as compared to West Germany (Sola, 2018; also see Jacobsen et al., 2017). A concept that might therefore be psychologically even more relevant in this context is collective or fraternal deprivation (Runciman, 1966; also see Major, 1994; Pettigrew and Meertens, 1995). In his original conceptualization, Runciman (1966) distinguished fraternal from egoistic deprivation and argued that it is linked with "lateral solidarity" or ingroup identification for social groups that are relatively deprived in some objective way – such as areas with higher unemployment. Even more importantly, collective deprivation also "uniquely generates agitation for or against structural change" (Taylor, 2002, p. 15). It should therefore be particularly relevant when it comes to voting for a party that insistently opposes structural and societal change – such as the AfD.
The Current Study

The aim of the current study was to investigate socio-structural and psychological correlates of far right electoral support and hate crimes in Germany. Due to its administrative organization into 401 municipalities (294 "Landkreise/Kreise" and 107 "kreisfreie Städte"), Germany lends itself to analyses combining data from different sources that are available at this level. We therefore combine socio-structural data that are made available on a regular basis (i.e., unemployment rates and proportion of foreigners per municipality) with other data from official sources (i.e., election results and reported hate crimes targeting refugees and their homes). We also included survey data on intergroup contact, fraternal deprivation, and extreme right-wing attitudes that were collected as part of a representative telephone survey and that could be located at the municipality level. As a first set of hypotheses, we predicted substantial links between socio-structural factors and both outcome variables. More specifically, based on previous research and theorizing, we expected (Hypothesis 1a) unemployment rate to be positively linked with far right electoral support (e.g., Pratt, 1948; Falter and Zintl, 1988; Jackman and Volpert, 1996; Rydgren, 2009) and (Hypothesis 1b) proportion of foreigners to be negatively linked with right-wing hate crime (e.g., Wagner et al., 2003; Bustikova, 2014; Benček and Strasheim, 2016; Andresen et al., 2018). As the evidence for a link between proportion of foreigners and far right electoral support is mixed (Arzheimer and Carter, 2006; Lubbers et al., 2006; Golder, 2016), we made no predictions regarding this link or the link between unemployment rate and right-wing crime. However, we hypothesized that they should be in the same direction, that is, unemployment rate may be positively linked with right-wing crime and proportion of foreigners negatively with AfD electoral success (Hypothesis 1c). In a second set of hypotheses, we predicted specific links of socio-structural factors with the survey data. As such "cross-level" links are not commonly theorized or researched, we based our hypotheses on a general reading of the literature on intergroup contact (Allport, 1954; Pettigrew, 1998; Pettigrew and Tropp, 2006) and deprivation theory (Runciman, 1966; Major, 1994; Pettigrew and Meertens, 1995) and predicted psychological perceptions to be linked with the corresponding socio-structural parameters. More specifically, we expected proportion of foreigners to be linked with intergroup contact (Hypothesis 2a) and unemployment rate with perceptions of fraternal deprivation (Hypothesis 2b). Both should in turn be correlated with extreme right-wing attitudes (Hypothesis 2c), which should be predictive of both outcome variables (Hypothesis 2d). We made no predictions regarding intercorrelations of the socio-structural predictors or of the outcomes. However, regarding far right electoral support and right-wing crime, we hypothesized that they might be correlated because they are facilitated by similar socio-structural as well as psychological factors.
MATERIALS AND METHODS

In a first step, we combined data from three independent and official sources: socio-structural data for the year 2016 provided by the German office for statistics, 2017 election results available through the Federal Election Commissioner, and 6,354 reports of crimes targeting refugees and their homes filed as "politically motivated crime, right-wing" by the police between 2015 and 2017 that were collected in an overview. In a second step, we added survey data from a representative sample drawn in 2016 into the data set (see Footnote 3). We shall now briefly describe each of our data sources in turn and how they were combined before analyzing their interrelations more systematically.

Socio-Structural Data
Official numbers of residents per municipality were available through the German office for statistics, along with other information on absolute numbers regarding legal status (unemployed persons and foreigners). We used these numbers to generate unemployment rates for June 2016 (ranging from 1.2 to 14.7%) and proportions of foreigners per municipality for 2016 (ranging from 1.96 to 33.91%).

Election Results
Results for the 2017 national elections are available from the Federal Election Commissioner, along with the total valid votes per municipality. We computed electoral success for the AfD as one dependent variable by dividing valid AfD votes by total valid votes per municipality (ranging from 4.94 to 35.46%).

Right-Wing Hate Crimes
Our second dependent variable was the number of right-wing attacks and crimes targeting refugees and their homes reported to the police within municipalities in 2017. We compiled an overview of 2,211 such incidents based on police statistics. The numbers of attacks targeting refugees and their homes are published by the federal Government in so-called Antworten der Bundesregierung (official replies by the Federal Government to requests by parliamentarians and parties). These are special reports filed by the Government answering inquiries officially requested by parliamentary parties or MPs and covering various political issues. The statistics on hate crimes targeting refugees and their homes were published quarterly, and in a final version, by the Government in response to members of the parliamentary party Die Linke (The Left). The crimes reported ranged in severity from right-wing graffiti and dissemination of propaganda to defamation and harassment all the way to assault, bomb attacks, and homicide. The crimes had been categorized by the Government as "politically motivated crime, right-wing", linking them directly to the "refugee subject matter" (Federal Government, 2018). Records included a running number, date, location, and federal state, as well as the most severe reported offense. One sample line reads "268, 19.10.2017, Erftstadt, NW (Federal state of North Rhine-Westphalia), Schwere Brandstiftung §306a StGB (severe case of arson)" (Federal Government, 2018, p. 18). Two independent coders placed each reported crime within the respective municipality based on where it had been recorded. The index of right-wing crimes reported in 2017 ranged from 0 (e.g., in Bottrop) up to 58 in Chemnitz. As municipalities vary considerably in their numbers of residents, and in order to yield ranges for this index similar to those of the other indices, we used the number of right-wing crimes targeting refugees and their homes reported per 10,000 inhabitants for the analyses reported below.
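A minimal sketch of how such municipality-level indices can be derived is shown below; the table and all column names and values are hypothetical stand-ins for the official data, and the use of residents as the denominator throughout is purely for illustration.

```python
import pandas as pd

# Hypothetical municipality-level table combining the official sources;
# all names and values are illustrative, not the original data.
df = pd.DataFrame({
    "municipality": ["A", "B"],
    "residents": [250_000, 80_000],
    "unemployed": [12_500, 2_400],
    "foreigners": [30_000, 3_200],
    "afd_votes": [18_000, 9_500],
    "valid_votes": [140_000, 48_000],
    "rw_crimes_2017": [12, 3],
})

df["unemployment_rate"] = 100 * df["unemployed"] / df["residents"]   # per 100
df["pct_foreigners"] = 100 * df["foreigners"] / df["residents"]      # per 100
df["afd_share"] = 100 * df["afd_votes"] / df["valid_votes"]          # per 100
df["crimes_per_10k"] = 10_000 * df["rw_crimes_2017"] / df["residents"]
```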
(Footnote 3: […] to include Berlin in our analyses. We note, however, that the pattern of results did not change substantially when analyses were re-run including Berlin as a whole.)

Survey Data
A representative sample of 2,008 German participants was surveyed in standardized telephone interviews conducted by a professional survey institute between June and August 2016. The survey covered measures of cross-group friendships, perceived economic threat, and attitudes toward various political issues, including the extreme right-wing attitudes that we focus on in this paper and describe below in more detail (see Zick et al., 2016). To ascertain representativeness of the sample, telephone numbers were randomly generated, and the last-birthday method was used to randomly select participants within households. As 25% of the participants were contacted via mobile phone numbers and we needed to assign data to the municipality (Kreis) level by city prefix, we only used data from the N = 1,506 participants who were contacted by landline. The two sub-samples differed somewhat in terms of age and gender, with younger, M = 46.10, SD = 16.93, t(1896) = 8.75, p < 0.001, and more male participants (55%), χ²(N = 1,917, df = 1) = 18.68, p < 0.001, in the mobile-only sub-sample than in the landline-only sub-sample. Crucially, however, the samples did not differ in terms of level of education or the measures that were of interest for our analyses. The resulting landline-only sample was on average M = 53.98 years old (SD = 16.93), with slightly more female than male participants (54.4%). Level of education was slightly skewed, with 53.6% of the sample holding a university or technical degree, 28% reporting some secondary school-leaving certificate, and 12.8% reporting no degree at all. Monthly household net income was 20.2% "less than 2,000 EUR", 19.9% "more than 2,000 but less than 3,000 EUR", 13.6% "more than 3,000 but less than 4,000 EUR", and 19.8% "4,000 EUR and more".

Perceived Collective Deprivation
Perceived collective deprivation was measured with the item "How would you judge the economic situation of Germans compared with foreigners living here?", with answers ranging from 1 "very good" to 5 "very bad".

Intergroup Contact
Intergroup contact was measured with the item "How many of your friends or close acquaintances have a migration background?", with answers ranging from 1 "none" to 4 "very many".

Extreme Right-Wing Attitudes
Extreme right-wing attitudes were measured using seven items on a scale ranging from 1 "completely false" to 5 "completely true". Items included statements such as "I can understand that some citizens resist forcefully against homes for asylum seekers" and "No one can expect me to live next to a home for asylum seekers" (Cronbach's alpha = 0.83; see the Supplementary Material for the full scale).

As the analyses reported in the current study were based on secondary analyses of official sources and on data previously collected in a survey, ethics approval was not required as per applicable institutional and national guidelines and regulations. For the survey data used in our analyses, informed consent of the participants was implied through survey completion. Participation in the survey was completely voluntary and anonymous, and participants were free to withdraw from the survey at any time without incurring any penalties.
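For reference, the internal consistency reported for the seven-item attitude scale above (Cronbach's alpha = 0.83) can be computed as follows; this is a generic sketch run on a made-up response matrix, not the original survey data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (n_respondents, n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Toy example: 200 respondents answering seven 1-5 items.
rng = np.random.default_rng(0)
toy_responses = rng.integers(1, 6, size=(200, 7))
alpha = cronbach_alpha(toy_responses)
```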
Preliminary Analyses
Means, standard deviations, and zero-order correlations are shown in Table 1 for all socio-structural variables at the municipality level and in Table 2 for all psychological variables at both the individual (lower triangle) and municipality level (upper triangle). As can be seen, for the latter, means, standard deviations, and correlational patterns did not differ significantly between the individual and municipality levels. We used psychological measures aggregated at the municipality level for the current analyses in order to link them with the socio-structural variables. On the one hand, it should be kept in mind that the measures varied in range due to their different sources. On the other hand, it seems noteworthy that most of them were systematically linked despite their different sources, even after correcting for skew or using non-parametric test procedures (see Footnote 4). Extreme right-wing attitudes, for example, were significantly and negatively correlated with proportion of foreigners, r(346) = −0.13, p = 0.02, but positively with far right electoral support, r(345) = 0.19, p < 0.001, and with right-wing crimes reported in the respective municipalities in the subsequent year, r(346) = 0.12, p = 0.03. While these correlations are small in magnitude, they were all significant and in the expected direction. Recall that attitudes were measured in an independent telephone survey. The socio-structural variables were also correlated with our dependent variables, as can be seen from Table 1. Local unemployment rates in June 2016, for example, while unrelated to local proportions of foreigners in the same year, were correlated with both local far right electoral support, r(397) = 0.25, p < 0.001, and right-wing crimes reported in the respective municipalities 1 year later, r(398) = 0.40, p < 0.001. As one final correlational result, the two dependent variables, far right electoral support and right-wing crimes reported, were linked substantially and positively, r(399) = 0.50, p < 0.001.

Spatial distributions of the socio-structural and outcome variables are illustrated in Figure 1. Figure 1 not only illustrates socio-structural variations across municipalities – unemployment rates tend to be highest in the East and the Ruhr area and lowest in the South of Germany, and proportions of foreigners fall within the lowest category in almost all Eastern German municipalities. There is also a striking East-West difference regarding both outcome variables: far right electoral support was highest in the Eastern and Southern municipalities, and the proportion of right-wing crime seems to be higher in these areas, too. These differences also show empirically, with significantly higher unemployment […], respectively, all ps < 0.001.

(Table 1 note: (1) through (3) are per 100, and (4) per 10,000 inhabitants. ***p < 0.001.)

As East-West differences regarding attitudes and behavior toward minority groups in Germany have been demonstrated before (Wagner et al., 2003; Benček and Strasheim, 2016; cf. Czymara and Schmidt-Catran, 2016) and may distort the results of the following analyses, we decided to account for East-West differences where appropriate and feasible.

Testing the Proposed Model
The correlational analyses reported above lend some initial support to our proposed model.
However, they leave open the issue of shared variance in predicting the outcome variables, as well as the question of how much predictive value – if any at all – is added above and beyond the harder socio-structural variables by softer variables such as extreme right-wing attitudes measured in a telephone survey. In a first set of analyses, we consequently addressed the issue of shared variance. Crucially, by assuming random error, conventional statistical models do not account for shared variance due to geographical proximity, that is, spatial auto-correlation. We addressed this issue in a second set of analyses using GWR analyses. In a third and final set of analyses, we accounted for and explored the differences between East and West Germany reported above.

Controlling for Potential Overlap on the Predictor and Criterion Sides
First, we performed path analyses to account for potential overlap on both the predictor and criterion sides and to test how much predictive value our psychological survey measures would add. Path analyses were performed using AMOS 24.0 and maximum likelihood estimation. The model was composed in such a way that the socio-structural variables, unemployment rate and proportion of foreigners, correlated with far right electoral support and right-wing crime (Figure 2). All four paths remained significant and almost unchanged when compared to the zero-order correlations: proportion of foreigners per municipality was negatively linked with both right-wing crimes reported, β = −0.31, and, even more strongly, with far right electoral support 1 year later, β = −0.42. Unemployment rate, on the other hand, was linked with both AfD electoral success, β = 0.26, and, even more strongly, with right-wing crimes reported 1 year later, β = 0.34 (all ps < 0.001). Taken together, the two socio-structural predictors explained 20% and 24% of the variance in right-wing crimes reported and AfD electoral support, respectively. To correct for skew in the data, we also performed bootstrapping analyses using 5,000 bootstrap resamples and bias-corrected 95% confidence intervals (CIs). As none of the resulting CIs included zero, these analyses further supported the model.

(Figure 2. Path model of socio-structural correlates of far right electoral support and political hate crimes in Germany. Standardized path coefficients for the overall analyses; West and East Germany separately in brackets (West/East); error terms are not displayed for the sake of clarity. **p < 0.01, *p < 0.05, †p < 0.10.)

(Figure 3. Path model of socio-structural and psychological correlates of right-wing electoral support and political hate crimes in Germany. Standardized path coefficients for the overall analyses; West and East Germany separately in brackets (West/East); error terms are not displayed for the sake of clarity. **p < 0.01, *p < 0.05, †p < 0.10.)
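Returning to the bootstrapping step described above, the following is a simplified illustration of the resampling logic; we use a plain percentile bootstrap on a correlation as a stand-in for the bias-corrected intervals computed in AMOS, and all variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci(x, y, n_boot=5000, alpha=0.05):
    """Percentile bootstrap CI for the correlation of two vectors."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n = len(x)
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample municipalities with replacement
        estimates[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    return (np.percentile(estimates, 100 * alpha / 2),
            np.percentile(estimates, 100 * (1 - alpha / 2)))

# e.g., bootstrap_ci(unemployment_rate, afd_share) returns (lower, upper);
# an interval excluding zero indicates a reliable link.
```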
We then introduced the three psychological variables – collective deprivation, contact, and extreme right-wing attitudes – into the model (Figure 3). A first saturated model included paths between all constructs and, as expected, fitted the data perfectly. We then computed a second model in which a total of eight paths that we had not predicted were set to zero. Specifically, these were paths from socio-structural variables to psychologically incongruent constructs (i.e., paths from proportion of foreigners to collective deprivation and from unemployment rate to contact) and paths that we predicted to be zero because we assumed the psychological contribution to be mediated through extreme right-wing attitudes (i.e., paths from socio-structural variables to extreme right-wing attitudes and paths from contact and collective deprivation to both outcome variables, respectively). As this second model was nested within the previous one, we compared the two and concluded that the second explained the data equally well, Δχ²(df = 8) = 12.21, p = 0.14. It seems noteworthy that, as predicted, the deleted paths were generally statistically non-significant (ps > 0.10), except for the path from unemployment rate to contact (β = 0.12, p = 0.02). The resulting model is displayed in Figure 3. It had a good fit with the data, χ²(N = 400, df = 8) = 15.57, p = 0.05, CFI = 0.98, RMSEA = 0.05, PCLOSE = 0.47 (MacCallum et al., 1996; Hu and Bentler, 1999). The original pattern remained almost unchanged when the psychological variables were included in the full model. Extreme right-wing attitudes, which had been significantly correlated with both outcome variables, retained only a weak but reliable link with far right electoral support, β = 0.13, p = 0.01, and were not significantly linked with right-wing crime reported, β = 0.07, p = 0.16. Explained variance in both dependent variables remained unchanged (see Footnote 6).

Accounting for Shared Variance Due to Geographical Proximity
As briefly mentioned above, the analyses reported thus far neglect spatial auto-correlation, that is, shared variance due to mere geographical proximity. Such shared variance may be due to common exposure of the observed variables to unobserved confounders and can create problems for conventional statistical models that assume random error (see Fotheringham et al., 2002; Teney, 2012). We performed a series of GWR analyses using spatial error models based on maximum likelihood estimation and queen contiguity weights in GeoDA (Version 1.12; Anselin et al., 2006; GeoDa, 2019) to address the issue of spatial auto-correlation. As can be seen from Table 3, the results of these analyses replicated the results reported above in showing that unemployment was linked positively with both far right electoral support and right-wing crime, while proportion of foreigners was linked negatively with both outcomes. Regarding the survey measures, the GWR analyses confirmed that they had weak but significant links with far right electoral success above and beyond the socio-structural variables – the respective model was superior to that without survey measures according to the Akaike Information Criterion (AIC; see Footnote 7). The survey measures did not contribute significantly to the explained variance in right-wing crime as the outcome variable, however; in fact, that model became significantly worse according to the associated AIC. It seems noteworthy that the Lambda coefficients were significant across all analyses: there was strong support for spatially correlated errors. Accordingly, the GWR models explained substantially more variance in both far right electoral support and right-wing crime.
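An analogous spatial error model can be sketched in Python, assuming the PySAL stack (geopandas, libpysal, spreg) as a stand-in for GeoDA; the shapefile and column names are hypothetical.

```python
import geopandas as gpd
from libpysal.weights import Queen
from spreg import ML_Error

# Hypothetical municipality shapefile carrying the variables defined above.
gdf = gpd.read_file("municipalities.shp")

w = Queen.from_dataframe(gdf)  # queen contiguity weights, as in GeoDA
w.transform = "r"              # row-standardized weights

y = gdf[["afd_share"]].values
X = gdf[["unemployment_rate", "pct_foreigners"]].values

# Maximum-likelihood spatial error model; the estimated lambda parameter
# corresponds to the Lambda coefficients reported above.
model = ML_Error(y, X, w=w, name_y="afd_share",
                 name_x=["unemployment_rate", "pct_foreigners"])
print(model.summary)
```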
(Footnote 6: An argument could be made that collective deprivation is not causally prior to extreme right-wing attitudes but is in fact part of, or even an outcome of, extreme right-wing attitudes. We therefore tested alternative models including (a) a reverse path from extreme right-wing attitudes to collective deprivation and (b) a bi-directional path between both constructs. Both models fitted the data descriptively better than the original model that we had hypothesized. It therefore seems worthwhile to note again that we are not testing any causal mechanisms based on the correlational data reported.)

(Footnote 7: As a rule of thumb, an AIC difference of <2 indicates no meaningful discrepancy between models; a difference between 4 and 7 indicates considerable evidence that the model with the lower AIC is better; and a difference of >10 indicates substantial support for the model with the lower AIC (Burnham and Anderson, 2002).)

Exploring East-West Differences
As we had observed considerable differences for all variables of interest, we controlled for and explored the differences between East and West Germany in a third and final set of analyses. While accounting for spatial auto-correlation in GWR analyses and for East-West differences through dummy-coding, we also included interaction terms with both socio-structural variables. When predicting far right electoral support, we found a significant interaction of proportion of foreigners and the East-West dummy variable, z = 7.56, p < 0.001. Similarly, when predicting right-wing crime, we found interactions with the East-West dummy variable for both unemployment rate, z = 4.37, p < 0.001, and proportion of foreigners, z = 4.20, p < 0.001. Following up on these results, we analyzed the data for municipalities in East and West Germany separately. For far right electoral support, both socio-structural factors retained significant links with the outcome: unemployment rate retained links of similar magnitude in the West, b = 0.36, z = 5.01, and in the East, b = 0.44, z = 2.65, ps < 0.01. Proportion of foreigners, however, was linked much more strongly with the outcome in the East, b = −1.19, z = 7.92, p < 0.001, than in the West, b = −0.10, z = 2.76, p = 0.01. For right-wing crime, when analyzing East and West German municipalities separately, proportion of foreigners was no longer significantly linked with the outcome in West Germany, z < 1, and unemployment rate retained a positive but much weaker link, b = 1.02, z = 1.70, p = 0.09. In East Germany, however, both socio-structural factors were still linked with the outcome: unemployment rate, b = 8.20, z = 2.64, as well as proportion of foreigners, b = 8.48, z = 2.77, ps < 0.01, were correlated significantly and positively with right-wing crime. Bootstrap analyses with 5,000 resamples and bias-corrected 95% CIs generally replicated these results with some exceptions: CIs for the paths from socio-structural factors to far right electoral support did not include zero, with the exception of the path from unemployment rate to far right electoral support. For right-wing crime, only the paths in East Germany were reliable according to these analyses and did not include zero, while both paths in West Germany did. In a sense, then, this pattern of results was similar to, but more pronounced than, that of the GWR analyses.
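A sketch of how such an interaction specification could be coded follows; we use an ordinary least squares formula in statsmodels on synthetic stand-in data and leave out the spatial error term for brevity, so this is an illustration rather than the exact model reported above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400  # roughly the number of municipalities analyzed

# Synthetic stand-in data; 'east' is a 0/1 dummy for East German municipalities.
df = pd.DataFrame({
    "unemployment_rate": rng.uniform(1.2, 14.7, n),
    "pct_foreigners": rng.uniform(2.0, 34.0, n),
    "east": rng.integers(0, 2, n),
})
df["afd_share"] = (8 + 0.8 * df["unemployment_rate"]
                   - 0.2 * df["pct_foreigners"]
                   + 5 * df["east"] + rng.normal(0, 2, n))

# '*' expands to main effects plus the interaction terms; a significant
# interaction indicates that a link differs between East and West.
model = smf.ols("afd_share ~ unemployment_rate * east + pct_foreigners * east",
                data=df).fit()
print(model.summary())
```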
In doing so, we tried to approximate what could be referred to as a "climate of hate" -a bundle of objective as well as more subjective or psychological variables all of which contribute to a social norm or a perception of such a norm that facilitates hostile behaviors toward outgroups. One central notion of the current work is, consequently, that more conventional but potentially exclusionary behaviors such as far right voting on the one hand and more extreme behaviors such as right-wing hate crimes targeting refugees on the other hand should co-occur because they are facilitated by similar factors. Correlates of Behavioral Outcomes and Local Variation While unrelated with each other, both socio-structural factors were linked with both outcome variables in a systematic fashion: First, overall, the local proportion of foreigners was negatively correlated with relative number of hate crimes, and more strongly so with far right electoral support in municipalities. Second, unemployment rate was positively linked with far right electoral support, and more strongly so with relative number of hate crimes reported. When examining these links for East and West German municipalities separately, the pattern remained similar for correlates of far right electoral support. However, the pattern changed drastically for right-wing crime: While unemployment rate was still positively linked with right-wing crime across Germany (but substantially weaker so in Western municipalities), proportion of foreigners was no longer linked with the same outcome in West German municipalities. Contrary to what we had expected, in East German municipalities, this relation even reversed and proportion of foreigners was significantly and positively linked with right-wing crime reported. This pattern of results can be interpreted as an artificial overall negative correlation between proportion of foreigners and reported rightwing crime that can be attributed to the mean differences in measures between East and West Germany. With regard to the two behavioral outcomes, this local variation in their correlates may provide preliminary evidence for related but contrary underlying motives: While one seems to be driven by contactlogic (more contact opportunities, less far right electoral support; e.g., Allport, 1954), the other seems to be more in line with group threat-logic (more foreigners, more right-wing crime; e.g., Teney, 2012). Methodologically, however, this result emphasizes the importance of considering local variation of the phenomena that are being investigated. Attitudinal variables measured in a representative survey were also linked with socio-structural variables in the predicted direction, that is, intergroup contact correlated positively with local proportion of foreigners, thus replicating a host of previous research (Wagner et al., 2003(Wagner et al., , 2008Semyonov et al., 2004). Furthermore, collective deprivation correlated positively with unemployment rates, and both, intergroup contact and collective deprivation predicted extreme right-wing attitudes. However, these predictors' contribution to the explained variance in outcome variables above and beyond socio-structural variables was neglectable. We will address the limitations of the present research in more detail further below after discussing potential implications that can be drawn. 
Identifying Areas at Risk for Right-Wing Extremism Through Contextual Indicators
In order to reduce or prevent political extremism, a first step can be to identify and monitor areas that are at particularly high risk of violent hate crimes. So far, monitoring instruments for such hate crimes have been scarce or non-existent (Benček and Strasheim, 2016; but see ProAsyl, 2017). We argue that high-risk areas can still be identified based on contextual factors that are reliable correlates of actual violent extremism, and that such correlates can be found in the neighboring research field on far right electoral support. For example, some previous research has predicted incidents of violent right-wing attacks from analyses of public discourse (Koopmans and Olzak, 2004) or from social media data (Müller and Schwarz, 2018). Scholars in the social sciences also seem to agree that there is merit in collecting data on political attitudes, including attitudes toward outgroups, in order to identify risks or at least shed light on the psychological processes that may turn prejudice into violence (e.g., Wagner et al., 2003). All in all, the notion that contextual factors can be useful in identifying high-risk areas is not new, and two factors – unemployment rate and proportion of foreigners in a given local context – have been particularly well studied (Semyonov et al., 2004; Wagner et al., 2008). In his review of research on far right electoral success, Golder (2016) systematized these contextual factors into "economic grievances" and "cultural grievances". In discussing the global rise of the far right, some have argued that it is more about culture than economics (e.g., Norris and Inglehart, 2019), others that it is more about economics than culture (Judis, 2018). Still others have argued that such a binary distinction between economic and cultural grievances is "far too simplistic and glosses over the way in which concerns about culture and economics can, and often do, interact" (Eatwell and Goodwin, 2018, p. xxiv). We argue that much can be learned from investigating far right electoral support when studying right-wing hate crime, and that both fields can benefit from each other. Based on the current study, we conclude that violent right-wing hate crime is particularly likely in areas with high unemployment rates (as is far right electoral support) and a high proportion of foreigners (contrary to far right electoral support), but that this latter correlate may vary locally. This finding is somewhat contradictory to intergroup contact theory (Pettigrew, 1998; Wagner et al., 2006) but well in line with the group-threat hypothesis and the intuition that, in order for hate crimes to occur, the target outgroup needs to be present. Diversity, while increasing community resilience against far right agitation through contact opportunities (Allport, 1954; Pettigrew and Tropp, 2006), may ironically increase the risk of right-wing crime in the same area. Finally, far right electoral support was so strongly correlated with the relative number of right-wing hate crimes that it might be considered an additional indicator for areas at high risk of right-wing extremism. In other words, our results seem to support the notion that far right electoral support is not only an indicator of, but actually part of, the social climate of hate that facilitates right-wing violence.
Limitations and Future Research There are limitations of the current study, some of them due to its overall cross-sectional design or the nature of the data we use. First, we report correlational data that do not allow for causal inferences. While the basic premise of the current work does not necessarily hinge on causal relationships between the constructs but is merely to show that two behavioral outcomes are linked with the same socio-structural correlates and co-occur systematically in certain areas, the variables we used as predictors were all measured 1 year prior to the variables we used as criteria. We would therefore argue that the analyses reported are at least to some extent suggestive of the predictive value of socio-structural and survey data for future outcomes. The distinction between causality and correlation, however, is crucial especially for policy makers and practitioners, and future research using longitudinal designs might tackle the issue of causality more convincingly. Second, from a methodological point of view, the compatibility of our measures, especially at the interface of socio-structural and survey data, may be open to criticism. More specifically, one could argue that perceived competition on the labor market would be more compatible with local unemployment than perceived collective deprivation. Also, the use of single-item measures is problematic. The weak explanatory performance of attitudinal variables, in other words, may then be due to issues of validity and reliability. However, empirically, we think the measures we used tap into the respective constructs - they do in fact correlate with socio-structural variables in the expected fashion (i.e., proportion of foreigners correlates with contact and unemployment correlates with perceptions of deprivation). While more elaborate data on the psychological level including longer scales would certainly be desirable, such data were not available for the analyses presented in this contribution. Our analysis may thus serve as a proof of principle and hopefully inspire future research to link socio-structural data with survey data and attitudinal as well as actual behavioral outcomes. Such research could also take context into account more systematically by studying specific other European countries experiencing an increase in far right electoral support, such as Hungary (Palonen, 2009), Italy (Verbeek and Zaslove, 2016), or The Netherlands (Otjes and Louwerse, 2013), or by comparing far right electoral support and the prevalence of hate crime in countries across Europe (e.g., Lubbers et al., 2002). Furthermore, future research might benefit from qualitative or mixed-methods approaches, ideally in a longitudinal design, to examine regional specifics and developments in those municipalities and areas most affected by a climate of hate. It seems fruitful to investigate qualitatively those contextual factors, social constellations, and regional specifics that facilitate a climate of hate in identified risk areas, in order to draw conclusions on how to strengthen community resilience against extreme right-wing behavior. Methodologically, some studies already follow this approach, taking closer looks at intergroup relations on a small-scale level (Bynner, 2017) or at the "normalization" of anti-refugee sentiments in the everyday life of a medium-sized town (Kurtenbach, 2018).
Conclusion The results of the current study can be placed within the wider research fields on far right electoral support (e.g., Golder, 2016;Eatwell and Goodwin, 2018) and the prevalence of hate crime in countries across the world. Understanding both phenomena as partly connected may have important implications for future research, both basic and applied, as well as for politics and practice. Practitioners and policy-makers may find them useful in developing effective strategies to prevent or at least reduce right-wing extremism by identifying high risk areas. Diverse communities should be more resilient against far right agitation, whereas areas with little heterogeneity and high unemployment rates are susceptible to a general climate of hate. A decentralized housing policy for newcomers like refugees may thus decrease far right support but also increase the risk of right-wing crime. Our final conclusion relates to the added value of survey data in identifying high risk areas. We believe that attitudinal data and surveys will continue to contribute invaluable insights into the processes of prejudice, discrimination, and radicalization. However, our analysis and its results might also serve as a cautionary note: Measures collected in a representative survey were generally linked with socio-structural indicators in the predicted pattern. Self-reported extreme right-wing attitudes were even correlated with actual voting behavior in municipalities 1 year later. While this is good news for attitude research in general and social scientists in particular, the bad news is that the incremental predictive value of these survey data above and beyond socio-structural indicators was negligible. DATA AVAILABILITY STATEMENT The datasets generated for this study are available on request to the corresponding author. ETHICS STATEMENT Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
2019-10-18T14:28:22.970Z
2019-10-18T00:00:00.000
{ "year": 2019, "sha1": "6003d6e7a4c27c13bebb96e8f1596ca866c93736", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2019.02328/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6003d6e7a4c27c13bebb96e8f1596ca866c93736", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
216257666
pes2o/s2orc
v3-fos-license
Analysis of Approaches for Modeling the Low Frequency Emission of LED Lamps : Light emitting diode (LED) lamps are now an established lighting technology, which is becoming prevalent in all load sectors. However, LED lamps are non-linear electrical loads, and their impact on distribution system voltage quality must be evaluated. This paper provides a detailed analysis of time domain and frequency domain approaches for developing and evaluating models suitable for use in large scale steady-state harmonic power flow analysis of the low frequency (LF) emission of LED lamps. The considered approaches are illustrated using four general categories of LED lamps, which have been shown to cover the vast majority of LED lamps currently available on the market. The aim is an in-depth assessment of the ability of commonly applied models to represent the specific design characteristics of different categories of LED lamps. The accuracy of the models is quantitatively evaluated by means of laboratory tests, numerical simulations, and statistical analyses. This provides an example, for each LED lamp category, of comprehensive information about the overall accuracy that can be achieved in the general framework of large scale LF harmonic penetration studies, particularly in the assessment of voltage quality in low voltage networks and their future evolution. Introduction Light emitting diode (LED) lamps are now an established technology and can be utilized in a wide range of applications, from replacing incandescent lamps in residential buildings to the illumination of commercial offices, retail spaces, or industrial premises, as well as street and public area lighting. This wide range of applications, coupled with the well-known advantages in terms of efficiency, regulation of light output, lifetime, and good light quality, have all contributed to the growing market share of LED lamps, which are now prevalent in the residential, commercial, and industrial load sectors. Based on these factors, it is likely that LED lamps will become the ubiquitous lighting technology of the near future. Therefore, it is important to understand the impact of LED lamps on electricity supply networks. As LED lamps are non-linear electrical loads, their wideband spectrum (i.e., from DC to 150 kHz) current emissions will impact distortion levels in distribution networks. Accordingly, there is a need to develop accurate models of LED lamps, as an important component of the residential, commercial and industrial load sectors, and power electronic devices in general, as part of ongoing efforts in large scale (e.g., probabilistic) modeling for harmonic penetration studies to assess supply system voltage quality. A vital aspect of ongoing research in this area is the ability to model and simulate the low frequency (LF) current emissions (from DC to 2.5 kHz) of an enormous number of individual devices. Models of LED lamp high frequency (HF) emissions beyond the LF range are not considered in this paper but details of LED lamp HF distortion characteristics are available in [1,2]. Most of the approaches for developing LF harmonic current emission models for large scale steady-state harmonic penetration studies can be divided into two broad categories: time domain and frequency domain. The objective of the time-domain modeling approach is to reproduce the (instantaneous) time domain current waveform of the modeled device, from which further processing is required to extract the LF spectral components. 
The objective of the frequency-domain modeling approach is to provide the spectral components directly for a given input voltage supply condition. Time domain models (TDM) are typically based on a representation of the electrical components of the device; when including the control circuits, the modeling approach may be considered as 'white box' modeling, and extensive knowledge of the device is required. Different TDMs for harmonic power flow analyses are available in literature, with a review available in [3] (see References [65][66][67][68][69][70][71][72][73][74] of [3]) and other examples in [4][5][6][7][8][9][10]. The main advantage of this approach is that the model can be directly applied for the analysis of different supply conditions, i.e., different supply voltage magnitudes and the presence of background voltage distortion, as well as parametric sensitivity analysis. The main disadvantage is that knowledge of the circuit topology is required, while additional knowledge of the control circuits may also be required. With such level of detail it is possible to develop generic models based solely on the required functionality of the power and control circuits, e.g., in [9], or identify specific parameter values to represent a physical device, e.g., [4][5][6][7][8]. However, TDMs usually require a long development time and significant computational resources. Furthermore, specialized software (often not directly compatible with commercial power flow software) is needed, and TDMs are difficult to generalize when modeling a large population of devices, as required for large scale harmonic studies, thus limiting their use. It is possible to overcome some of these disadvantages, e.g., by using an equivalent circuit model to simplify the device representation, e.g., compact fluorescent lamps (CFL) [7] and LED lamps [10], to reduce computation time and the number of model parameters, but limitations still exist. Conversely, harmonic modeling in the frequency domain can generally be considered either a 'white box' (e.g., Harmonic State Space models [11]) or a 'black box' modeling approach, when knowledge of the circuit topology is not necessary (e.g., Norton-based models). Different frequency-domain models (FDM) for harmonic power flow analyses are available in literature, with a review available in [3] (specifically References [21] and of [3]) and, more recently, in [12][13][14][15]. Among all of the frequency-domain modeling approaches, Norton-based models are frequently used due to their simplicity and are considered in this paper. Norton-based model can be classified, in order of complexity and accuracy, as: (i) constant harmonic current source models (CCM); (ii) decoupled Norton models (DNM); (iii) coupled Norton models (CNM) [16,17]; and (iv) fully coupled [18,19] or tensor coupled models (T2) [20]. CCMs, which are the most common method used in industry and in commercial software, are not able to reproduce the interactions of the equipment with non-ideal system conditions (i.e., pre-existing background voltage distortion, or common variations in supply voltage magnitude). The last three methods are all based on the use of admittance Frequency Coupling Matrices (FCM) and are suited for linear time invariant systems (DNM) and for linear time variant systems (CNM and T2). 
Harmonic cross-coupling between voltages and currents of different harmonic orders and the dependency of the harmonic current phasors on the phase angle of the supply harmonic voltage phasors can be modeled by CNM and T2, respectively. Norton-based models have been used to model several power system components and devices [3]. Due to their computational efficiency, they are preferred for large-scale probabilistic penetration studies [11] and have been used to study the impact of CFLs [21][22][23] and LED lamps [24] on distortion levels in distribution networks. From this discussion, it is evident that both model development approaches have certain favorable attributes and have been widely utilized in previous research for modeling the LF current emissions of electrical devices. The development of a TDM can, in theory, start without laboratory measurements, as a generic topology can be readily developed to satisfy a design specification. However, the FDM process must begin from a processed time domain waveform, which can be obtained either from measurement or a TDM. When using measurements as the input, a programmable power source, capable of providing the required voltage supply conditions, is necessary; when using the TDM as the input, its accuracy must be warranted (e.g., experimentally validated). This paper begins from a thorough critique of the rationale of the development and evaluation process of TDMs and FDMs and considers connections between the two processes. From this critique, the paper then provides a detailed analysis of the time-domain and frequency-domain modeling approaches with the objective of developing and evaluating models of the LF emissions of LED lamps suitable for use in large scale harmonic power flow analysis to assess harmonic distortion in distribution networks. This extends the preliminary research on TDMs [25] and FDMs [26] of LED lamps and fills a gap in existing literature by providing a complete set of models of the four different types of LED lamps suitable for use in harmonic penetration studies. In the analysis, the performance of the models is quantified using experimental data, numerical simulations, and statistical evaluation, providing an in-depth analysis of the ability of commonly applied model approaches to represent the LF emissions of different types of LED lamps. TDMs are utilized to introduce the variation in the circuit topologies present in different types of LED lamps, with different circuit models defined for each type of LED lamp. The TDMs are validated using experimental data from laboratory tests for different supply voltage conditions. In the context of TDMs, a specific contribution of this paper is the proposal of novel models for two of the four types of LEDs, which are presented here for the first time to the best of the authors knowledge. For the purpose of this analysis, the TDMs are used to derive the four Norton-based FDMs, as they allow for rapid development in lieu of extensive laboratory tests. The accuracy of the FDMs is assessed by Monte Carlo (MC) simulation versus TDM results. Particular attention is given to the FDMs as these are directly applicable for large scale penetration studies, and there is still relatively little information on FDMs of LED lamps. Currently, to the best of the authors knowledge, only two papers consider the widespread impact of CFL or LED lamps in detail [21,24]. 
The frequency domain analysis demonstrates the impact of the circuit topology on the sensitivity of the device to the background voltage magnitude and phase. The results indicate that, for certain types of LED lamps, the parameter values of the FDM is dependent on the specific background voltage distortion, and the model performance is also influenced by the background voltage distortion present in the supply voltage. This novel contribution to the FDM area provides comprehensive information about the overall accuracy of the FDM when representing different LED lamps, serving as a guide on the impact of model selection on the assessment of voltage distortion in low voltage (LV) networks. All TDM parameter values are included in Appendix A for use by the community; from these models, all FDM parameters, which are difficult to communicate in compact form, can be derived. However, FDM models are available from the authors upon request. The rest of the paper is structured as follows: a rationale of the development and evaluation of TDMs and FDMs is discussed in Section 2; the TDM approach is analyzed in Section 3, the FDM approach is analyzed in Section 4; conclusions are provided in Section 5. Rationale of the Development and Evaluation of Time and Frequency Domain Models TDMs and FDMs can be developed, and their performance assessed, following different processes. Figure 1 highlights the general processes and input data requirements of the time-domain and frequency-domain modeling methodologies. The time-domain modeling methodology implemented in this paper is denoted by the blue path, with the frequency-domain modeling methodology shown by the orange path. Use of the physical device, i.e., the LED lamp to be modeled, is marked by the black path. In the approach implemented in this paper, the lamp under test serves as the starting point for the TDM, which in turn serves as the input for the subsequent development of the FDM. The alternative path for FDM development, marked in grey, indicates that modeling in the frequency domain can also start directly from the physical device. This path has the inherent advantage that any inaccuracies present in the TDM do not propagate to the FDM, but it requires a fully controllable power source for laboratory testing and a huge number of test points. The whole process can be divided into two main stages: model development and model performance evaluation. The model development process involves defining a model structure and obtaining the parameters of the lamp under test. It should be noted that, although presented for LED lamps, the comprehensive analysis of modeling processes in Figure 1 is generally applicable for modeling the LF emissions of any electrical device. Clearly, the required steps are significantly different for the development of TDMs and of FDMs. However, the model development process is specific to the properties to be emulated correctly and the specified range of operating conditions. In this paper, attention is devoted to the LF harmonic content of line current waveforms of common LED lamps, subjected to supply voltage deviations from rated sinusoidal conditions. For performance evaluation, the specification of the test points is given in terms of the supply voltage distortion and requires a formal definition of the magnitude and phase of the frequency components of the voltage waveforms used in the evaluation process. 
These are marked as separate processes in Figure 1, as different evaluation test points are implemented in this paper for TDMs and FDMs to illustrate different possible approaches for model performance evaluation. However, the same test points could be used for TDM and FDM cases to directly compare the accuracy of the different modeling approaches. In the remainder of this section, the parts common to both time domain and frequency modeling methodologies are introduced. These are: the LED lamp set, which serves as an input to the whole process; the characteristic voltage waveforms for test point definitions, which are an input to the model evaluation process; and the model evaluation metrics. Specific details of the development and the evaluation stages utilized for the time-domain and frequency-domain modeling methodologies are found in the subsequent sections, with relevant subsection numbers marked in Figure 1. LED Lamp Set Recent work on power quality issues caused by LED lamps, e.g., [1,27], has revealed the diversity in the LF current emissions of LED lamps. These variations are a consequence of the utilization of different LED driver circuits, and previous research has shown that, for the purpose of classification of the LF emissions of the line current waveform, LED lamps can be divided into four main categories [28]. The categories are based on the circuitry utilized to convert the AC supply voltage to the DC current required by the LED chain and are defined as follows: • Type A: consists of a full-wave rectifier with bulk smoothing capacitor and DC-DC switch-mode converter; • Type B: consists of a simple capacitor divider formed across a full-wave rectifier circuit; • Type C: consists of a full-wave rectifier loaded by a constant current regulator (CCR); • Type D: includes a switch-mode driver circuit with active power factor correction (aPFC), which can be either a single-stage or a double-stage converter. One LED lamp from each type was selected for the model development process. The line current waveforms of the four LED lamps considered in this research, which are typical for each category, are shown in Figure 2. Table 1 provides the main electrical data obtained from measurements of the lamps considered with rated sinusoidal AC voltage waveform. A comprehensive description and classification of the LED lamp driver circuits is available in [29,30]. Table 1. Main electrical data of the considered LED lamps at rated sinusoidal voltage. PF 1 = fundamental power factor; THD = total harmonic distortion; THC = total harmonic current. Characteristic Voltage Waveforms for Test Points Definition The definition of test points for the development and performance evaluation of both modeling approaches is based on three characteristic voltage waveforms, selected as representative base case conditions of typical voltage distortion in Low Voltage (LV) networks [31]. The three different voltage waveforms considered are shown in Figure 3. The sinusoidal voltage waveform is considered as an ideal supply, which is particularly important for the development of TDMs. The flat top (FT) and peak top (PT) voltage waveforms are selected as representative of the typical voltages present in low-voltage networks, the total harmonic distortion (THD) values are 3.0% and 3.6%, respectively. Model Performance Evaluation Different performance evaluation procedures are implemented for time-domain and frequency-domain modeling approaches, with full descriptions included in Sections 3.2 and 4.2, respectively. 
However, the performance evaluation metrics are identical for both modeling approaches and focus on the deviations of the LF current components from the reference values. The magnitude errors are quantified using the relative percentage error
ε_I,h = (|I_h,est| − |I_h,ref|) / |I_h,ref| · 100%,  (1)
and the phase errors are quantified using the absolute error
Δθ_h = |θ_h,est − θ_h,ref|,  (2)
for h = 1, 3, . . . , H, where I_h,est is the estimated current value of order h, I_h,ref is the reference current value of order h, and θ_h,est and θ_h,ref are the corresponding phase angles. As indicated in Figure 1, the measurement data is used as I_h,ref in the TDM process, while the TDM serves as I_h,ref during the FDM analysis. In this paper, odd harmonics up to and including the 15th order are considered, i.e., H = 15. In addition to the assessment of individual harmonic components, THD and total harmonic current (THC) indicators are also evaluated:
THD = √(Σ_{h=2..H} I_h²) / I_1 and THC = √(Σ_{h=2..H} I_h²).
The THD and THC provide aggregate information about the overall harmonic content in the line current drawn by the LED lamp and are especially valuable when evaluating the model performance across different LED lamps and categories. For TDMs, the THD and THC errors are calculated in absolute terms; for FDMs, the THD and THC errors are calculated in relative terms for quicker comparison between the multiple model forms. All evaluation metrics are presented using boxplots, in order to summarize the significant statistical indices from numerical values obtained using the evaluation test points (described in Sections 3.2 and 4.2, respectively) in a concise manner. For each box, the central mark is the median, the edges of the box are the 25th and 75th percentiles, and the whiskers extend to the most extreme data points (±2.7σ and 99.3% coverage if the data are normally distributed), not considering outliers. Time Domain Model Development When developing a model to emulate a real-world process, e.g., start-up, steady state, regulation, failure, etc., there is a wide range of influencing parameters, e.g., various supply conditions, and a very detailed model may be required. Therefore, it is important to clearly define the specific response to be emulated, and the particular conditions for which the model is valid, in order to simplify the model development process. Typical techniques employed to simplify the model are idealization, linearization, and averaging, and simplified models should include the key components, either directly or as equivalents, which allow for the desired response to be obtained. The general procedure for developing a TDM consists of two main stages: 1. In the first stage, the key components of the topology and functionality of the circuit should be identified. This can be achieved by reverse engineering of the lamp or using a priori knowledge of the relationship between current waveform shape and specific circuitry; 2. In the second stage, values for the key components should be obtained. This can be performed either by reverse engineering of the lamp or by a parameter estimation technique. The Type A LED model is developed using a priori knowledge of the relationship between the measured line current waveform shape and commonly utilized driver topologies. The component values of the circuit are obtained by a parameter estimation technique, with further details in [32,33]. As the circuits employed in Type B and Type C LED lamps consist of only a few components, the circuit structure was identified by reverse engineering. For the Type B LED lamp, Step 2 was performed directly using the physical components, while the parameters were estimated for the Type C LED lamp.
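Before moving on to the time domain model of the remaining lamp type, the following minimal Python sketch illustrates how the evaluation metrics of Equations (1) and (2) and the THD/THC indicators defined above can be computed from an estimated and a reference spectrum. The phasor values, the chosen degree-based wrapping of phase differences, and the helper-function names are illustrative assumptions for this sketch and are not measured data or code from the paper.

```python
import numpy as np

def magnitude_error_pct(I_est, I_ref):
    """Relative percentage error of the harmonic magnitudes, cf. Eq. (1)."""
    return (np.abs(I_est) - np.abs(I_ref)) / np.abs(I_ref) * 100.0

def phase_error_deg(I_est, I_ref):
    """Absolute phase-angle error in degrees, cf. Eq. (2); differences are
    wrapped to [-180, 180) degrees before taking the absolute value."""
    dphi = np.angle(I_est, deg=True) - np.angle(I_ref, deg=True)
    return np.abs((dphi + 180.0) % 360.0 - 180.0)

def thd_thc(I):
    """THD and THC of a phasor spectrum I, where I[0] is the fundamental
    and I[1:] are the harmonic components."""
    thc = np.sqrt(np.sum(np.abs(I[1:]) ** 2))
    return thc / np.abs(I[0]), thc

# Hypothetical phasors for the odd orders h = 1, 3, ..., 15 (illustration only).
h = np.arange(1, 16, 2)
I_ref = 1e-3 * np.array([120, 80, 50, 30, 20, 12, 8, 5]) * np.exp(-1j * np.deg2rad(10.0 * h))
I_est = I_ref * 1.03 * np.exp(1j * np.deg2rad(2.0))

print(magnitude_error_pct(I_est, I_ref))   # ~3 % for every order
print(phase_error_deg(I_est, I_ref))       # ~2 degrees for every order
print(thd_thc(I_ref))                      # aggregate THD and THC of the spectrum
```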
Reverse engineering was also applied to develop the Type D LED lamp model in Step 1, followed by tuning the control circuit parameters to match the simulated input line current with the available measurements. Unlike the Type A LED lamp, simplification of the control algorithms of Type D LED lamp is difficult to achieve and a generic equivalent circuit model does not exist. The TDMs were implemented in MATLAB/SIMULINK, with the exception of the Type D LED lamp model, which was developed in PLECS. The circuit schematics and component values used in this paper are available in Appendix A. All experimental data from laboratory tests were obtained with negligible source impedance; further details of the measurement chain are available in [28]. In consonance with this, the voltage source equivalent internal impedance was neglected during the simulation of the TDMs. Time Domain Model Evaluation The TDMs were evaluated at a number of test points by comparing the magnitude and phase angle of the current harmonics obtained using the TDM against the values extracted from measurement data under the same supply conditions. The measurement data is taken as I h,re f in Equation (1) and (2), with the TDM output taken as I h,est . To obtain the harmonic components from the time domain line current waveforms, the waveforms are processed by Discrete Fourier Transform (DFT) using a 200 ms rectangular window, in accordance with [34]. In this paper, the TDM test design considers a set of five magnitudes of the voltage fundamental V 1,pu = {0.90, 0.95, 1.00, 1.05, 1.10} for each of the voltage waveforms in Section 2.2. For each magnitude of the voltage fundamental, the harmonics shown in Figure 3 are scaled proportionally, thus maintaining a fixed THD level. These 15 test points are considered sufficient for evaluating the steady-state performance of the TDM developed for each type of LED lamp defined in Section 2.1. Additional test points could be included depending on the level of certainty required. For the presentation of the results, there is no discrimination by voltage magnitude nor waveform shape. This is possible as there is little correlation between the supply condition and the model performance. However, this does have an effect on the FDMs, which is discussed further in Section 4. Type A The assumed circuit model structure and the values obtained from the parameter estimation technique are included in Appendix A.1. Appendix A.1 also includes the model validation at the development test point, i.e., at rated ideal sinusoidal voltage. A comparison of the range of harmonic current magnitude and phase angles obtained from the developed model and the measured data across all considered test points is shown in Figure 4, where the excellent performance of the model is observed. This is expected as the assumed equivalent circuit model is an established approach to modeling such electronic loads. Generally, the magnitude and phase angle errors increase with harmonic order, i.e., in inverse proportion to the magnitude of the current harmonic. Phase angle dispersion, which also increases with harmonic order, is caused by the sensitivity of the load to changes in the supply voltage waveform. However, as demonstrated by the low value of the phase errors, the TDM is able to accurately reproduce this behavior. Type B The assumed circuit model and its parameterization are included in Appendix A.2. Due to the simple nature of the circuit topology, all components are explicitly represented in the model. 
Appendix A.2 also includes the model validation at the development test point, i.e., at rated ideal sinusoidal voltage. A comparison of the range of harmonic current magnitude and phase angles obtained from the developed model and the measured data across all considered test points is shown in Figure 5. Again, the accuracy of the developed TDM is very high, with magnitude errors comparable to those observed for the Type A LED lamp (c.f. Figure 4). However, unlike the Type A LED lamp, the value of the magnitude error is not simply correlated to the harmonic order. Although larger phase angle errors are observed than in the case of the Type A LED lamp, the median values are all below 20°, with lower values reported for the prominent harmonic tones. Type C The assumed circuit model and parameter values are included in Appendix A.3 and is presented for the first time in this paper. Appendix A.3 also includes the model validation at the development test point, i.e., at rated ideal sinusoidal voltage. The comparison of the range of harmonic current magnitude and phase angles obtained from the developed model and the measured data across all considered test points is shown in Figure 6. For this type of LED, the magnitude deviation between the developed TDM and the measured data is greater than Type A and B. However, considering the deviations in the magnitude, the median values are still generally small for components with larger magnitude, with larger deviations observed for harmonics with lower absolute values. The median values are lower than 10% for all harmonic orders. The deviations in the magnitude can be attributed to the simplification of the CCR functionality in the developed TDM, and the largest errors occur for the condition in which the harmonic current magnitude is smallest. In the model, the ability of the CCR to limit the current was idealized as a constant value, without considering finite response of control and error in regulation related to flow of current. The impact of this is evident in Figure A6 in Appendix A.3. On the other hand, reproduction of the harmonic phase angles by the Type C LED lamp TDM is very accurate and is insensitive to the harmonic order, unlike the Type A and B LED lamp TDMs (c.f. Figures 4d and 5d). Type D Details of the circuit model and its parameterization are included in Appendix A.4 and is presented for the first time in this paper. This includes full details of the power and control circuits and their parameters. Appendix A.4 also includes the model validation at the development test point, i.e., at rated ideal sinusoidal voltage. A comparison of the range of harmonic current magnitude and phase angles across all considered test points is shown in Figure 7. As can be expected, the largest errors of the four LED lamp types are observed for this model. This can be attributed to the value of the LF current emissions, which, relative to fundamental component, are much smaller in comparison to all other LED lamp types (between 0-10% compared to approximately 0-90% for LED Type A, 0-40% for LED Type B, and 0-15% for LED Type C). Furthermore, the line current waveform of Type D LED lamps is sensitive to small errors in modeling the physical switched-mode driver, including, for instance, parasitic couplings, power components nonlinearity, and real signal transfer via control loop, which are difficult to derive and were not incorporated in the presented model. 
As the LF current emission magnitudes are small, these are extremely sensitive to variations in the power and control circuit, as well as the supply conditions. Although the relative deviations may be larger, the absolute deviation is very small, due to the low magnitude values, and not as significant as for the other LED lamp types. Total Distortion Indices The total distortion indices, THD and THC, obtained from measurement and simulations and their deviations from the measured reference data, are presented in Figure 8. Again, this includes numeric values obtained for all test points. These results confirm the overall accuracy of the proposed TDMs and also serve to highlight the difference in the characteristics of the four general categories of LED lamps. For Type A LED lamps, the high values of THD and THC are both accurately reproduced by the TDM estimate, as it was previously shown that this model returns the lowest overall errors. Similar performance is observed for the Type B LED lamp. Even LED Type C and D lamps return reasonable errors for THD and THC in terms of absolute values. This indicates that the large errors observed are not coincident and occur in different supply voltage conditions, i.e., reducing the impact of individual larger errors on the overall assessment. The larger errors of the THC of the Type D LED lamp can be attributed to the small magnitude of the LF emissions, which are generally overestimated by the TDM, but good accuracy of the THD is ensured by the dominant effect of the fundamental component, which is well represented by the model. Frequency Domain Modeling Any power system component can be represented by a voltage-controlled current source [3,11]:
I = f(V),
where I and V are vectors of harmonic phasors of the emitted current and the applied voltage, and the function f is a complex vector function. If f is a non-linear function, a widely used and powerful technique is to linearize f around an operating base reference condition (e.g., the three voltage waveforms described in Section 2.2) [12]. Harmonic cross coupling and phase dependency can be elegantly modeled by estimating the direct and negative FCM Y+ and Y−:
I = I_b + Y+ ΔV + Y− ΔV*,  (10)
where ΔV = V − V_b is the deviation of the applied voltage phasors from the base reference condition, ΔV* is its complex conjugate, and I_b is the base current, i.e., the emission measured at the base reference conditions. The elements of the direct FCM Y+, which accounts for direct and cross coupling between harmonic orders, and of the negative FCM Y−, which accounts for the background voltage "phase dependency", can be obtained either by numerical or laboratory tests, e.g., as described in [35]. The different FDMs used in this paper can be summarized starting from Equation (10). Constant Harmonic Current Source Model Constant harmonic current source models (CCM) are the simplest, and most commonly used, representation of a non-linear load. The load is modeled by a vector of constant current sources, which is assumed independent of the background voltage distortion:
I = I_b.
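Before introducing the remaining, more refined model forms, the following Python sketch illustrates how a frequency-coupling-matrix model of the form of Equation (10) can be evaluated for a perturbed supply spectrum. The matrices, base spectra, and perturbations are hypothetical placeholders rather than parameters of the tested lamps; the decoupled and coupled Norton variants discussed next correspond to keeping only the diagonal of Y+ or dropping Y−, respectively, while the CCM keeps only the base current I_b.

```python
import numpy as np

def fcm_current(I_b, Y_plus, Y_minus, V, V_b):
    """Evaluate the linearized FCM model of Eq. (10):
    I = I_b + Y+ (V - V_b) + Y- conj(V - V_b),
    where all vectors hold complex phasors of the considered harmonic orders."""
    dV = V - V_b
    return I_b + Y_plus @ dV + Y_minus @ np.conj(dV)

# Hypothetical three-harmonic example (orders 1, 3, 5) -- not measured data.
rng = np.random.default_rng(0)
I_b = np.array([0.100, 0.060, 0.030]) * np.exp(1j * np.deg2rad([-5.0, 160.0, -40.0]))
V_b = np.array([230.0, 3.0, 4.0]) * np.exp(1j * np.deg2rad([0.0, 180.0, 0.0]))
Y_p = 1e-3 * (rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))
Y_m = 2e-4 * (rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))

# Perturbed supply spectrum: shift the phase of the background harmonics.
V = V_b * np.exp(1j * np.deg2rad([0.0, 10.0, -20.0]))

I_ccm  = I_b                                                       # constant current source
I_dnm  = fcm_current(I_b, np.diag(np.diag(Y_p)), 0 * Y_m, V, V_b)  # decoupled Norton
I_cnm  = fcm_current(I_b, Y_p, 0 * Y_m, V, V_b)                    # coupled Norton
I_full = fcm_current(I_b, Y_p, Y_m, V, V_b)                        # full model with Y+ and Y-
```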
Decoupled Norton Models Decoupled Norton models (DNM) model only the interaction between applied harmonic voltages and emitted harmonic currents of the same order. In other words, the off-diagonal elements of Y+ are set to zero:
ΔI_h = Y+_h,h ΔV_h, with Y− = 0.
Coupled Norton Models Coupled Norton models (CNM) are able to take into account the cross coupling between different harmonic voltage and current orders but neglect the "phase dependency":
ΔI = Y+ ΔV, with Y− = 0.
Tensor Representation The tensor representation model (T2) is equivalent to the general model, as in Equation (10), but the direct and negative FCMs are represented by concise real-valued matrices in which the elements are rank-2 tensors. Representing ΔI and ΔV in Cartesian form, it is possible to write
[ΔI_h^Re, ΔI_h^Im]ᵀ = Σ_{k=1..K} T2_h,k [ΔV_k^Re, ΔV_k^Im]ᵀ, h = 1, . . . , H,
where each T2_h,k is a real 2×2 matrix and H and K are the highest considered current and voltage harmonic orders, respectively. The matrix elements of T2, i.e., T2_h,k, can be determined using Fourier Descriptors [12,21,36,37]. The Fourier Descriptor is the discrete Fourier transform of a sequence of complex numbers, y(m_p), represented by M_p evenly spaced vectors, and is described by
Y(n) = (1/M_p) Σ_{m_p=0..M_p−1} y(m_p) e^(−j 2π n m_p / M_p).
Frequency Domain Model Development The development of an FDM requires a huge number of tests in order to linearize the behavior of the lamps around a base operational point. In general, one test in the absence of perturbations is necessary to calculate the base current spectra I_b (see Equation (10)). Then, N_1 test values of the fundamental magnitude deviations from nominal are required to evaluate the first column of the direct matrix Y+, and, for each background voltage harmonic considered (up to the Kth odd harmonic order, where index k is used for voltage harmonics in order to distinguish it from the current harmonics index h), N_2 harmonic magnitudes, each characterized by N_3 phase angles, have to be analyzed. The total number of tests is given by
N_tests = 1 + N_1 + ((K − 1)/2) N_2 N_3,  (18)
and is usually very large (from a few hundred to more than one thousand). Depending on the kind of analysis to be performed, and on other practical issues, the tests can be performed either experimentally or numerically (i.e., starting from detailed TDMs). For example, a detailed emission assessment of a specific device requires experimental testing, while a statistical distortion assessment of several kinds of devices can be performed using FDMs evaluated numerically starting from the more straightforward parameterization of TDM parameters, e.g., [25]. In this paper, the FDMs of the four LED lamps were obtained using the TDMs described and validated in the previous section. As the aim of this analysis was to compare the performance of the different FDMs for different LED driver circuits, the TDMs are assumed to have acceptable accuracy, and their use allows for quicker development of FDMs than the measurement-based approach. To analyze the impact of the supply voltage waveform on the performance of the FDM, the FCMs were obtained by perturbing and linearizing the TDMs around the two operating points constituted by the FT and PT voltage waveforms. For the sake of brevity, only harmonic orders up to the 15th were evaluated, and only the three dominant components (k = 3, 5, and 7 for FT and k = 5, 7, and 11 for PT) were considered in the modified distortion voltage waveforms. For the modified FT and PT voltage waveforms, simulations without perturbations were performed in order to evaluate the two base current spectra I_b. For the harmonic perturbation, only one amplitude at a time was considered: for each of the dominant harmonic components, shown in Figure 3, the modified FT and PT, respectively, were amplified by a factor equal to 10%.
The same 10% amplification factor was used for the components not present in the modified spectra (e.g., 3rd harmonic for PT), by first assuming their base amplitudes equal the limits suggested by standard EN 50160 [38], reported in Table 2. The number of phase angles, i.e., M p in Equation (12), selected was 24. Therefore, the total number of tests for each of the two different operating points was equal to 169 (N 1 = 0, N 2 = 1, K = 15, and N 3 = 24 in Equation (18)). Frequency Domain Model Assessment FDM assessment was conducted by means of MC simulations. For each of the two operation points constituted by the modified FT and PT voltage waveforms, 100 MC trials were run. The perturbation added to the base spectrum was generated assuming a uniform distribution between 0 and 10% for harmonic magnitudes and a uniform distribution between 0 and 2π for phase angles. As per the FDM derivation process, for the harmonic components not present in the modified FT and PT base cases, the random magnitude perturbation U~[0, 0.1pu] was applied to the magnitudes of the limits reported in Table 2; for the MC simulations, the phase angle was randomly assigned from U~[0, 2π]. Results From Sections 4.3.1-4.3.4, the results of the assessment are reported for each lamp category, comparing the performance of the four FDMs previously presented (CCM, DNM, CNM, and T2) with the TDM in terms of magnitude and phase angle and their errors. In addition to the magnitude and phase errors evaluated with Equations (1) and (2), the FDMs are also assessed analyzing the Y + and Y − matrices. In real terms, Y + is able to take into account the linear direct and cross-coupling between voltage and current harmonic phasors, while Y − takes into account the dependency of the current harmonics to the phase angle of the voltage harmonics. In the simple case of a linear system, the Y + matrix is diagonal (no cross-coupling), while the Y − matrix is nil. The following considerations apply: Finally, the comparison of the total distortion indicators, THD and THC, is shown in Section 4.3.5. Type A The performance of the different FDMs of the Type A LED lamp is shown for flat-top and peak-top supply voltage conditions in Figures 9 and 10, respectively. In Figure 9a,b for flat-top (Figure 10a,b for peak-top), magnitudes and phase angles obtained by the four FDMs are compared to the results obtained by TDM using boxplots, with the exception of CCM, which is invariant with respect to the background harmonic voltage variations. In Figure 9c,d for flat-top (Figure 10c,d for peak-top), the corresponding relative and absolute errors are shown. The magnitude of the admittance matrices are shown in Figure 11 and are useful to help understand the performance of the different models. It is possible to observe: • the CNM and the T2 model perform noticeably better than the CCM and the DNM for both flat-top and peak-top voltage waveforms. This can be explained by the presence of non-negligible off-diagonal elements in both Y + matrices (see Figure 11a,b); • the T2 model performs significantly better than the CNM due to the non-negligible magnitudes of the elements of the Y − matrices (see Figure 11c,d), although the magnitudes are about six times lower than the corresponding values of Y + for both FT and PT; • looking at Figure 11a,b, it is evident that in the case of PT supply condition the magnitudes of the elements of Y + are smaller by a factor 3 compared to the FT supply condition; and • the same considerations apply for phase angles. 
It should be noted that the phase angles returned by the FDMs are with respect to the cosine of the voltage waveform, rather than the sinusoid used in the TDM, so the angles presented here (and for the FDMs of the other LED lamp types) cannot be directly compared with those in Section 3, but allow for a comparison between FDMs. Type B The performance of the different FDMs of the Type B LED lamp is shown for flat-top and peak-top supply conditions in Figures 12 and 13, respectively. It is possible to observe:
• Y+ approaches a diagonal matrix, indicating that the most pronounced coupling exists between same-order harmonics (e.g., between the 11th and 11th) and between them and their nearest neighbors (e.g., between the 11th and the 9th and 13th), as can be seen in Figure 14a,b;
• moreover, the Y+ matrices are practically identical for both voltage waveforms (and also identical to that measured under ideal sinusoidal conditions [26]), which indicates that only one FCM is required to analyze both scenarios, as already evidenced in [12].
Type C The performance of the different FDMs of the Type C LED lamp is shown for flat-top and peak-top supply conditions in Figures 15 and 16, respectively. It is possible to observe:
• the 'patterns' of both Y+ and Y− are almost identical for both voltage waveforms, even if they have very small magnitudes compared with the other lamps;
• for the peak-top voltage waveform, all methods, except T2, show higher errors with respect to the FT supply condition, due to the magnitudes of the elements being slightly greater (see Figures 15c,d and 16c,d);
• off-diagonal elements two harmonic orders away from the diagonal of Y+ are, in both cases, of the same order of magnitude as the diagonal elements, as evidenced by the large difference in performance between CCM and DNM versus CNM and T2; and
• the performance of the T2 model is significantly better than that of the other methods due to the order of magnitude of the elements of Y−, which is the same as that of Y+.
Type D The performance of the different FDMs of the Type D LED lamp is shown for flat-top and peak-top supply conditions in Figures 18 and 19, respectively. It is possible to observe:
• all FDMs, with the exception of the CCM, exhibit similar accuracy. This can be explained by analyzing Y+ in Figure 20, which is a diagonal matrix, indicating that the LED lamp behaves as a linear load and only direct coupling between same-order harmonics exists;
• the Y+ matrices are practically identical for both voltage waveforms;
• the aforementioned linear behavior can be modeled, for each harmonic, by a simple Norton equivalent constituted by a parallel RC circuit in parallel with a constant current source (made explicit in the expression below);
• Y− demonstrates that the sensitivity to the phase angle of the lower order harmonics is more pronounced; however, the values are generally two orders of magnitude lower than Y+; and
• for the PT voltage waveform, the magnitudes of the Y− matrix are negligible for the 11th order harmonic current, which is reflected in the results in Figures 18 and 19, respectively, where the errors of the 11th harmonic are noticeably lower for the PT voltage waveform than the FT voltage waveform.
The presented results confirm the overall improvement in the model performance with regard to the increase in model complexity. The performance of the simple CCM always results in the greatest error, while the smallest reported errors are returned by the T2 model.
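The per-harmonic Norton interpretation noted for the Type D lamp can be written out explicitly. In the sketch below, R_h and C_h are illustrative equivalent parameters (not values reported for the tested lamp) and ω₁ is the fundamental angular frequency; the RC admittance plays the role of the diagonal element Y+_h,h of the coupling matrix.

```latex
% Per-harmonic Norton equivalent suggested by the diagonal Y+ of the Type D lamp:
% a constant current source in parallel with an RC admittance at each order h.
\[
  I_h \;=\; I_{b,h} \;+\;
  \underbrace{\Big(\tfrac{1}{R_h} + j\,h\,\omega_1 C_h\Big)}_{Y^{+}_{h,h}}\,\Delta V_h ,
  \qquad h = 1,3,\dots,H .
\]
```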
Comparing the performance of the two Norton models, it is evident that they are sensitive to the type of LED lamp modeled and the presence of background voltage distortion; the decoupled model is generally able to perform as well as the coupled model (with respect to the total harmonic indices). Conclusions This paper provided a detailed analysis of two different modeling approaches for representing the LF current emissions of LED lamps: one in the time domain and one in the frequency domain. The considered approaches were illustrated using four general categories of LED lamps, which cover the vast majority of LED lamps currently available on the market. The aim was an in-depth assessment of the ability of commonly applied models to represent the LF current emissions of different categories of LED lamps. The performance of the models were quantitatively evaluated using experimental data, numerical simulations, and statistical analyses, thus providing comprehensive information about the overall accuracy that can be achieved in the general framework of harmonic penetration studies. The main outcomes of the paper are: • The TDMs, which in this paper were validated using experimental measurements, demonstrate that it is possible to achieve excellent levels of accuracy for certain types of LED lamps, i.e., Type A and B, in which control is either not present or can be emulated using an equivalent circuit form. For LED lamps that require specific representation of the control logic, i.e., Type C and Type D, new models were presented, for the first time, in this paper. For the Type C LED lamp example, the accuracy of the TDM is lower than Type A and Type B, but the median values are still very low with respect to measurements ( < 10%). Higher magnitude errors are observed for the Type D LED lamp model but, in absolute terms, their values are very low. Areas of possible further improvement, e.g., by fine tuning the settings of the control algorithms, were discussed. • The FDMs, which in this paper were derived from and compared against the TDMs, clearly show that the simulation error is significantly influenced by both the LED lamp type and the background voltage distortion. As expected, the overall errors reduce when increasing the model complexity; the magnitude errors obtained with the most complex model (i.e., the tensor based model T2) are always below 10%, and generally considerably lower (with median values of around a few percent or less), while the phase errors are always less than 5°, highlighting the value of including the phase dependency in the model formulation. • The presented models and the quantitative results about their accuracy allow probabilistic harmonic penetration studies, such as the assessment of voltage distortion in LV networks and their future evolution, to be approached with the knowledge of the accuracy levels that can be obtained using different types of FDM. • TDM parameter values were also reported in Appendix A for use by the community. From the presented TDMs, the FDMs analyzed in this paper can be obtained. Alternatively, the authors are happy to provide the parameters of the FDMs upon request. The typical Type B LED driver circuit model consists of only a few passive components, with the full circuit model shown in Figure A3. The feature of this circuit is the combination of two capacitors, C in and C dc , which form a capacitive divider across a DBR to reduce the supply voltage magnitude. 
Unlike the Type A LED lamp, the voltage ripple in the DC bus propagates to the LED string, and the LED string must be explicitly modeled with respect to the current-voltage characteristic of the physical device. The LED string is modeled as an ideal diode, with a constant DC voltage source representing the forward voltage V_f of the LED string and an intrinsic resistance R_f connected in series. Due to the simple nature of the circuit, all component parameters are explicitly represented in the model and their values are marked directly on Figure A3. However, the discharging resistor R_d has no influence on the LF current emissions and is included only for the sake of completeness. The performance of the model by means of its response to the rated ideal sinusoidal supply condition is documented in Figure A4. Appendix A.3. Type C LED Lamp The Type C LED lamp driver circuit uses a DBR followed by a simple active DC-DC converter, in the form of a constant current regulator (CCR), to limit the output current through the series LED chain. The CCR is normally realized as an integrated circuit and is able to provide a constant current to the LED string over a wide voltage range. In this circuit model, presented in Figure A5, the CCR behind the DBR is emulated using a controlled current source (CCS). The control logic and values are given in Figure A5b. Due to boundary changes in the LED string operating point when the current changes from zero to the limit set by the CCR, and since the DC current propagates directly to the AC side, the LED string has to be modeled with respect to a real V-I curve of the LED string. In this case, the LED string is modeled as part of the CCS control by means of a calculated resistance, using an exponential-based approximation. In this model, the forward voltage V_F, the thermal potential including the nonlinearity factor V_T, and the on-state resistance R_F, representing the overall LED chain parameters, are considered. Starting from the calculated voltage over the LED string, when the LED current limit is reached, its value is limited to I_C, emulating the idealized action of the CCR. The model performance is given in Figure A6. The Type D LED lamp driver circuit topology is similar to the Type A LED lamp driver, employing a passive full-bridge rectifier and at least one DC-DC converter stage. However, the input DC bus capacitor size is reduced, close to zero, and the energy storage is moved to the output side of the DC-DC converter. This means that the DC-DC converter's effect on the input line current waveform will be much higher and, in order to preserve its response, including the functionality of the control algorithm, a more detailed model of the DC-DC converter is required. The Type D LED lamp driver circuit, including control logic and parametrization, was derived from the physical device by means of reverse engineering. This circuit consists of an offline single-stage flyback converter. The implemented and emulated PWM control is a peak current controller based on a fixed switching frequency f_s with current slope compensation and constant output regulation without secondary feedback (utilizing an auxiliary winding to sense the output voltage). The f_s deduced from measurement is 60 kHz. The driver is designed for 0.7 A (20-56 V) output and for universal supply from 100-265 V at 50/60 Hz. A circuit model of the driver, including the component parameter values, for simulation purposes is documented in Figure A7.
The driver model consists of an input EMI filter model (C 4 , L 1 ), followed by an idealized DBR with small DC bus capacitor (C 2 ) and consequent model of the flyback switching converter with CL output stage network (C 5 , L 2 +L 3 ). The LED string is modeled by means of equivalent resistance (R 2 ), possible due to the relatively small ripple in the output. Values of the control circuit parameters and the transformer are given in Tables A1-A3. The performance of the model by means of its response to the rated ideal sinusoidal supply condition is documented in Figure A8.
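As a practical complement to the circuit models above, the following Python sketch shows one way of extracting the harmonic phasors of a simulated (or measured) line current using the 200 ms rectangular-window DFT referenced in Section 3.2. The sampling rate, the synthetic test waveform, and the function names are illustrative assumptions for this sketch and are not part of the measurement chain used in the paper.

```python
import numpy as np

F0 = 50.0        # fundamental frequency in Hz (50 Hz supply assumed)
FS = 10_000.0    # sampling rate in Hz (illustrative choice)
T_WIN = 0.2      # 200 ms rectangular window = 10 cycles at 50 Hz

def harmonic_phasors(i_samples, orders):
    """Extract complex phasors of the requested harmonic orders from one
    200 ms window of line-current samples using a rectangular-window DFT."""
    n = len(i_samples)
    spectrum = np.fft.rfft(i_samples) / n       # scale DFT by window length
    freq_res = FS / n                           # 5 Hz bins for a 200 ms window
    bins = np.round(np.asarray(orders) * F0 / freq_res).astype(int)
    return 2.0 * spectrum[bins]                 # one-sided amplitude phasors

# Synthetic current with a 3rd and a 5th harmonic (hypothetical values).
t = np.arange(0.0, T_WIN, 1.0 / FS)
i = (0.10 * np.sin(2 * np.pi * F0 * t)
     + 0.04 * np.sin(2 * np.pi * 3 * F0 * t + np.deg2rad(30.0))
     + 0.02 * np.sin(2 * np.pi * 5 * F0 * t - np.deg2rad(60.0)))

I_h = harmonic_phasors(i, orders=[1, 3, 5, 7])
print(np.abs(I_h))   # approx. [0.10, 0.04, 0.02, 0.00]
```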
2020-04-02T09:12:17.988Z
2020-03-31T00:00:00.000
{ "year": 2020, "sha1": "1ec8028101fb7eed8f5915b7985d1ab03ef36781", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1073/13/7/1571/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "554daa79dc5d61a4f6a6ff36146a35ae604ef6c8", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
244117435
pes2o/s2orc
v3-fos-license
Birkhoff's Completeness Theorem for Multi-Sorted Algebras Formalized in Agda This document provides a formal proof of Birkhoff's completeness theorem for multi-sorted algebras which states that any equational entailment valid in all models is also provable in the equational theory. More precisely, if a certain equation is valid in all models that validate a fixed set of equations, then this equation is derivable from that set using the proof rules for a congruence. The proof has been formalized in Agda version 2.6.2 with the Agda Standard Library version 1.7 and this document reproduces the commented Agda code. Given a set of sort symbols, a signature over is an indexed endo-container, which has three components: a) Per sort ∶ , a set of operator symbols. (In the container terminology, these are called shapes for index , and in the interaction tree terminology, commands for state .) b) Per operator ∶ , a set , the arity of operator . The arity is the index set for the arguments of the operator, which are then given by a function over . (In the other terminologies, these are the positions or responses, resp.) c) Per argument index ∶ , a sort ∶ which denotes the sort of the th argument of operator . (In the interaction tree terminology, this is the next state.) Closed terms of a multi-sorted algebra (aka first-order terms) are then concrete interaction trees, i.e., elements of the indexed -type pertaining to the container. Note that all the "set"s we mentioned above come with a size, see next point. 2. Universe-polymorphic: As we are working in a predicative and constructive meta-theory, we have to be aware of the size (i.e., inaccessible cardinality) of the sets. Our formalization is universe-polymorphic to ensure good generality, resting on the universe-polymorphic Agda Standard Library. In particular, there is no such thing as "all models"; rather we can only quantify over models of a certain maximum size. The completeness theorem consequently does not require validity of an entailment in all models, but only in all models of a certain size, which is given by the size of the generic model, i.e., the term model. The size of the term model in turn is determined by the size of the signature of the multi-sorted algebra. 3. Open terms (with free variables) are obtained as the free monad over the container. Concretely, we make a new container that has additional nullary operator symbols, which stand for the variables. Terms are intrinsically typed, i.e., the set of terms is actually a family of sets indexed by a sort and a context of sorted variables in scope. No lists: We have no finiteness restrictions whatsover, neither the number of operators need to be finite, nor the number of arguments of an operator, nor the set of variables that are in scope of a term. (Note however, since terms are finite trees, they can actually mention only a finite number of variables from the possibly infinite supply.) Preliminaries We import library content for indexed containers, standard types, and setoids. which contains pairs consisting of an operator and its collection of arguments. The least fixed point of (X ↦ C X) is the indexed W-type given by C, and it contains closed first-order terms of the multi-sorted algebra C. We need to interpret indexed containers on Setoids. This definition is missing from the standard library v1.7. It equips the sets ( C X s) with an equivalence relation induced by the one of the family . 
The definition of _ can be stated for heterogeneous index containers where we distinguish input and output sorts and . Multi-sorted algebras A multi-sorted algebra is an indexed container. Closed terms (initial model) are given by the W type for a container, renamed to here (for least fixed-point). It is convenient to name the concept of signature, i.e. (Sort, Ops) We assume a fixed signature. Models A model is given by an interpretation (Den ) for each sort plus an interpretation (den ) for each operator . A model is also frequently known as an Algebra for a signature; but as that terminology is too overloaded, it is avoided here. The setoid model requires operators to respect equality. The Func record packs a function (apply) with a proof (cong) that the function maps equals to equals. Terms To obtain terms with free variables, we add additional nullary operators, each representing a variable. These are covered in the standard library FreeMonad module, albeit with the restriction that the operator and variable sets have the same size. Terms with free variables in Var. module _ (Var ∶ Cxt) where We keep the same sorts, but add a nullary operator for each variable. Ops + ∶ Container Sort Sort ℓ o ℓ a Ops + = Ops ⋆C Var Terms with variables are then given by the W-type for the extended container. Tm ∶ Pred Sort _ Tm = W Ops + We define nice constructors for variables and operator application via pattern synonyms. Note that the in constructor var' is a function from the empty set, so it should be uniquely determined. However, Agda's equality is more intensional and will not identify all functions from the empty set. Since we do not make use of the axiom of function extensionality, we sometimes have to consult the extensional equality of the function setoid. Letter ranges over terms, and ts over argument vectors. Parallel substitutions A substitution from Δ to Γ holds a term in Γ for each variable in Δ. Application of a substitution. Letter ranges over substitutions. Interpretation of terms in a model Given an algebra of set-size ℓ and equality-size ℓ , we define the interpretation of ansorted term as element of ( ) according to an environment that maps each variable of sort ′ to an element of ( ′ ). An environment for Γ maps each variable ∶ Γ( ) to an element of ( ). Equality of environments is defined pointwise. Interpretation of terms is iteration on the W-type. The standard library offers 'iter' (on sets), but we need this to be a Func (on setoids). apply ≃ ( t ′ ) .apply This notion is an equivalence relation. Substitution lemma Evaluation of a substitution gives an environment. Equations An equation is a pair ≐ ′ of terms of the same sort in the same context. Sets of equations are presented as collection E : I → Eq for some index set I : Set ℓ . An entailment/consequence ⊃ ≐ ′ is valid if ≐ ′ holds in all models that satify equations . Derivations Equalitional logic allows us to prove entailments via the inference rules for the judgment ⊢ Γ ⊳ ≡ ′ . This could be coined as equational theory over a given set of equations . Relation ⊢ Γ ⊳ _ ≡ _ is the least congruence over the equations . Soundness of the inference rules We assume a model that validates all equations in . In any model that satisfies the equations , derived equality is actual equality. Birkhoff's completeness theorem Birkhoff proved that any equation ≐ ′ is derivable from when it is valid in all models satisfying . His proof (for single-sorted algebras) is a blue print for many more completeness proofs. 
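The substitution and interpretation machinery just described can likewise be illustrated concretely. The sketch below (again Python rather than the paper's Agda, ignoring setoids and universe levels, and with all names being illustrative assumptions) shows parallel substitution on open terms and evaluation of an open term in a model under an environment; the closing comment restates the substitution lemma informally.

```python
# Continuation of the earlier sketch (assumed names, not the paper's Agda code).
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str
    sort: str

@dataclass(frozen=True)
class OpenTerm:
    head: object          # either a Var or an operator name (str)
    args: tuple = ()

def subst(t: OpenTerm, sigma: dict) -> OpenTerm:
    """Parallel substitution: replace each variable by the term sigma assigns to it."""
    if isinstance(t.head, Var):
        return sigma.get(t.head, t)
    return OpenTerm(t.head, tuple(subst(a, sigma) for a in t.args))

def interpret(t: OpenTerm, den: dict, env: dict):
    """Fold a term into a model: den maps operator names to functions,
    env maps variables to model elements (iteration on the term tree)."""
    if isinstance(t.head, Var):
        return env[t.head]
    return den[t.head](*(interpret(a, den, env) for a in t.args))

# Standard model of the nat/bool example signature.
den = {"zero": lambda: 0, "suc": lambda n: n + 1, "iszero": lambda n: n == 0}
x = Var("x", "nat")
t = OpenTerm("iszero", (OpenTerm("suc", (OpenTerm(x),)),))
assert interpret(t, den, {x: 3}) is False

# Substitution lemma, informally: interpreting subst(t, sigma) under env equals
# interpreting t under the environment obtained by interpreting sigma under env.
```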
These completeness proofs all proceed by constructing a universal model, also known as the term model. In our case, it consists of terms quotiented by derivable equality ⊢ Γ ⊳ _ ≡ _. It then suffices to prove that this model satisfies all equations in E.

Universal model

The term model satisfies all the equations it started out with.

Related work: one earlier formalization uses neither W-types nor user-defined inductive types; these restrictions also prompt its authors to code terms as lists of stack machine instructions rather than trees. Lynge and Spitters [2019] formalize multi-sorted algebras in HoTT, also restricting to finitary operators. Using HoTT they can define quotients as types, obsoleting setoids. They prove three isomorphism theorems concerning sub- and quotient algebras. Universal algebras and varieties are not formalized.
Polymorphisms analysis of the Plasmodium ovale tryptophan-rich antigen gene (potra) from imported malaria cases in Henan Province Background Plasmodium ovale has two different subspecies: P. ovale curtisi and P. ovale wallikeri, which may be distinguished by the gene potra encoding P. ovale tryptophan-rich antigen. The sequence and size of potra gene was variable between the two P. ovale spp., and more fragment sizes were found compared to previous studies. Further information about the diversity of potra genes in these two P. ovale spp. will be needed. Methods A total of 110 dried blood samples were collected from the clinical patients infected with P. ovale, who all returned from Africa in Henan Province in 2011–2016. The fragments of potra were amplified by nested PCR. The sizes and species of potra gene were analysed after sequencing, and the difference between the isolates were analysed with the alignment of the amino acid sequences. The phylogenetic tree was constructed by neighbour-joining to determine the genetic relationship among all the isolates. The distribution of the isolates was analysed based on the origin country. Results Totally 67 samples infected with P. o. wallikeri, which included 8 genotypes of potra, while 43 samples infected with P. o. curtisi including 3 genotypes of potra. Combination with the previous studies, P. o. wallikeri had six sizes, 227, 245, 263, 281, 299 and 335 bp, and P. o. curtisi had four sizes, 299, 317, 335 and 353 bp, the fragment sizes of 299 and 335 bp were the overlaps between the two species. Six amino acid as one unit was firstly used to analyse the amino acid sequence of potra. Amino acid sequence alignment revealed that potra of P. o. wallikeri differed in two amino acid units, MANPIN and AITPIN, while potra of P. o. curtisi differed in amino acid units TINPIN and TITPIS. Combination with the previous studies, there were ten subtypes of potra exiting for P. o. wallikeri and four subtypes for P. o. curtisi. The phylogenetic tree showed that 11 isolates were divided into two clusters, P. o. wallikeri which was then divided into five sub-clusters, and P. o. curtisi which also formed two sub-clusters with their respective reference sequences. The genetic relationship of the P. ovale spp. mainly based on the number of the dominant amino acid repeats, the number of MANPIN, AITPIN, TINPIN or TITPIS. The genotype of the 245 bp size for P. o. wallikeri and that of the 299 and 317 bp size for P. o. curtisi were commonly exiting in Africa. Conclusion This study further proved that more fragment sizes were found, P. o. wallikeri had six sizes, P. o. curtisi had four sizes. There were ten subtypes of potra exiting for P. o. wallikeri and four subtypes for P. o. curtisi. The genetic polymorphisms of potra provided complementary information for the gene tracing of P. ovale spp. in the malaria elimination era. Background Plasmodium ovale was first described in 1922 [1], as the fourth malaria parasite of humans [2,3]. Generally, a P. ovale infection is of low parasitaemia, and the morphology of the parasite is similar to Plasmodium vivax. Also, it frequently presents as a mixed infection with the other Plasmodium species [4][5][6][7][8][9]. As a result, P. ovale attracted less attention compared to other species, and its prevalence has apparently been underestimated. It has long been considered predominantly found in Africa and some islands of Western Pacific [10,11], with confirmed cases occasionally found in other endemic regions [12,13]. Currently, P. 
ovale spp. may be classified into two different subspecies by molecular genotyping: P. o. curtisi (classic type) and P. o. wallikeri (variant type) [14]. The nuclear genome sequences further confirmed that the two species were genetically different, but morphologically indistinct [15], and their duration of latency were seemly different [16]. Both species were considered to exist sympatrically in Africa and Asia, and even both parasites were infected simultaneously [17][18][19][20][21]. Because of the generally low parasitaemia of P. ovale infections, sensitive molecular methods to detect and identify the two subspecies must be used in future investigations, with polymorphic markers as a method to discriminate the different strains. Many protocols showed that the SS rRNA genes [17,22,23] were suitable for identification but not for genotyping. The recent study showed that the gene encoding P. ovale tryptophan-rich antigen (potra) could be used to distinguish the two P. ovale subspecies [14]. The sequence and size of the tryptophan-rich antigen gene was variable among the P. ovale subspecies (poctra and powtra) [14]. A nested PCR detection assay was exploited to discriminate the species by the size of the amplified fragments (299 or 317 bp for poctra; 245 bp for powtra), where the conserved sequences were chosen as primers for these two genes [19]. Additionally, a semi-nested PCR protocol was developed by Tanomsing et al. [24] with which the two P. ovale subspecies could be discriminated efficiently, and more fragment sizes were found comparing with previous studies, the 299 bp fragment was overlapping between the two subspecies. This would invalidate amplified fragment size difference, as a means of distinguishing between P. o. curtisi and P. o. wallikeri. The amplified fragment size variations resulted from differences in the number of repeated units, which suggested that a broader range of size variants might occur. In this study, more variations of potra gene were observed. Sample collection and DNA extraction Dried blood spots on filter paper (Whatman 3M) were collected from patients returned from Africa with P. ovale infection before treatment. All the patients were diagnosed by nested PCR and microscopy. All the dried blood spots were labelled with a unique identification number, air-dried and individually placed in plastic bags with desiccant and stored at − 20 °C until laboratory analysis. DNA was extracted from the dried blood spots using a QIAamp DNA mini kit (Qiagen, Germany). Nested PCR amplification and DNA sequencing The fragments of potra was amplified with nested PCR using the primers as described previously [14,19]. The amplified products were identified by agarose gel electrophoresis. Bidirectional sequencing was performed for the secondary potra PCR products using the secondary primers by Sangon Biotech Co Ltd (Shanghai, China). Sequencing alignments and analysis All the genes sequences were analysed with multiple sequence alignment using the Clustal X software. HM594180-HM594183 [19], KF018430-KF018433 [24] and KX417700-KX417704 [25] from the GenBank would be as the reference sequences of P. ovale spp. Phylogenetic trees were constructed using the Molecular Evolutionary Genetics Analysis (MEGA) 6.06. Amplification of the potra gene of Plasmodium ovale spp. A total of 110 dried blood samples, from patients returned from Africa to Henan Province with P. ovale infection, were collected. 
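The nested PCR assay identifies the subspecies from the size of the secondary amplicon, but several sizes are shared between the two subspecies. A purely illustrative Python helper (not part of the study's pipeline) captures that decision logic using the size lists reported in this and the cited studies; ambiguous or previously unreported sizes are exactly the cases where the bidirectional sequencing described above remains necessary.

```python
# Illustrative only: classify a potra amplicon by its size (bp), using the
# sizes reported in the text; 299 and 335 bp overlap between the subspecies.
POW_SIZES = {227, 245, 263, 281, 299, 335}   # P. ovale wallikeri
POC_SIZES = {299, 317, 335, 353}             # P. ovale curtisi

def classify_by_size(size_bp: int) -> str:
    in_pow, in_poc = size_bp in POW_SIZES, size_bp in POC_SIZES
    if in_pow and in_poc:
        return "ambiguous (299/335 bp overlap): confirm by sequencing"
    if in_pow:
        return "P. o. wallikeri"
    if in_poc:
        return "P. o. curtisi"
    return "size not previously reported: sequence the amplicon"

print(classify_by_size(245))   # P. o. wallikeri
print(classify_by_size(299))   # ambiguous -> sequencing needed
```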
The amplified nested PCR products of the potra gene of 110 samples were blasted in the GenBank. The blast data showed that 67 samples infected with P. o. wallikeri and 43 samples infected with P. o. curtisi. More fragment sizes of the potra gene from this study were found comparing with the previous reports. P. o. wallikeri had five different sizes including 227, 245, 263, 281 and 299 bp, while P. o. curtisi had three polymorphisms of potra provided complementary information for the gene tracing of P. ovale spp. in the malaria elimination era. Keywords: Plasmodium ovale curtisi, Plasmodium ovale wallikeri, Plasmodium ovale tryptophan-rich antigen (potra), Amino acid unit, Subtype sizes including 299, 317 and 335 bp. Also, the amplified fragment size differed as a result of differences in 18 bases of units, with the overlap of 299 bp between the two species (Fig. 1) Table 1). Genotypes of potra of Plasmodium ovale spp. There were 8 genotypes of the potra gene for the 67 isolates infected with P. o. wallikeri and three genotypes of potra for the 43 isolates infected with P. o. curtisi. The sizes of 277, 245 and 263 bp for P. o. wallikeri all had two different subtypes. The sequences of the 11 genotypes of potra gene were deposited in GenBank under accession number MG588144-MG588154. For P. o. wallikeri, the genotype of MG588146 was the same with that of the reference sequences HM594180 and HM594181, but the reference sequences KF018430 and KF018431 were different with any of MG588144-MG588151. For P. o. curtisi, the genotype of MG588152 was the same with that of the reference sequences HM594182 and KF018433, and the genotype of MG588153 was the same with that of the reference sequence HM594183. Combination with the previous studies, there were ten genotypes of potra exited for P. o. wallikeri, and four genotypes for P. o. curtisi. The number of isolates for each genotype was shown in the Table 2. Alignment of the translated amino acid sequence of potra fragments Interestingly, the translated amino acid sequence of potra fragments were composed with multiple amino acid units, and six amino acids was considered as a unit. Table 2. As the same with the genotypes of potra, there were 10 subtypes exiting for P. o. wallikeri and 4 subtypes for P. o. curtisi. The detail information is shown in Fig. 2 and Table 2. Phylogenetic relationship among potra subtype families Neighbour-joining was used to cluster the potra gene sequences. The 11 genotypes were classified into two clusters, eight genotypes infected with P. o. wallikeri were Tanomsing et al. [24] 299 bp (n = The sequences 10-317 and 11-335 had a closer genetic relationship, which formed another sub-cluster with reference sequences, having the same two repeats of TITPIS (Fig. 3). (Table 3). Discussion Malaria elimination is a long-term goal to be achieved worldwide. As one species of human Plasmodium, the identification of P. ovale is more widespread than formerly known. Plasmodium ovale, like P. vivax, has hypnozoites that cause relapses [26,27], and it consists of two different subspecies: P. ovale curtisi and P. ovale wallikeri [14]. Therefore, the differentiation of the two Fig. 3 Genetic relationship of potra among isolates of Plasmodium ovale subspecies. Black triangles represented the isolates in this study. The reference sequences of P. ovale spp., HM594180-HM594183, KF018430-KF018433 and KX417700-KX417704, were obtained from the NCBI database P. 
ovale species, especially with respect of molecular phylogeny will need to be better understood. In 2011, Oguike et al. [19] published the discrimination of the two P. ovale subspecies by the size of the amplified fragments of the potra gene (299 or 317 bp for P. o. curtisi; 245 bp for P. o. wallikeri), using nested PCR. Although this technique was specific for P. ovale spp., the sizes of the amplified fragment varied with the number of repeat units, which reduced the discrimination between species: for P. o. curtisi, 299, 317 and 353 bp, and for P. o. wallikeri, 245, 299 and 335 bp [24]. Tanomsing et al. [24] suggested the number of potra size variations might be more than those evaluated and this speculation was confirmed in this study. Using the same primes and method, more fragment sizes were identified in this study, while some sizes overlapped between the two subspecies: for P. o. curtisi having sizes 299, 317 and 335 bp, and for P. o. wallikeri having sizes 227, 245, 263, 281 and 299 bp. The results of the three studies [19,24] were also combined, as shown in Table 1. Four different sizes for P. o. curtisi and six sizes for P. o. wallikeri have been reported, and that the fragment sizes of 299 and 335 bp were overlaps between the two species. As more samples were analysed, it was likely that the number of potra size variants would be more than expected. Conceivably, more size variants may be identified in future studies. Potra gene was used to discriminate the two P. ovale species, because the tryptophan-rich antigen was encoded by a repeat pattern of variable length 3-amino acid [14]. Sutherland et al. [14] and Oguike et al. [19] had also proposed that the potra gene of P. o. curtisi (poctra) could be identified by the pattern of the six amino acids of the repeat region, TITPIS, while the potra gene of P. o. wallikeri (powtra) were different in two non-synonymous positions. By alignment of the amino acid sequence of potra fragments, 11 genotypes of potra were found from the 110 isolates in this study. Combination with the previous studies of Oguike et al. [19] and Tanomsing et al. [24], showed ten subtypes of potra gene for P. o. wallikeri and four subtypes for P. o. curtisi, and 14 genotypes of potra gene were under analysis. This study showed that the amino acid sequence of potra fragments were composed with multiple amino acid units, six amino acids were as one unit. There was different dominant amino acid repeat of potra for the two P. ovale species, which could be used to discriminate the subtype of P. ovale spp. The repeat of six amino acids as one unit to analyse the difference of the genotypes between the two P. ovale species was first reported, which could make the results more clear and simple. The sizes of 277, 245 and 263 bp for P. o. wallikeri all had two different subtypes, and this phenomenon did not find in the P. o. curtisi. The sizes of the reference sequences KX417700-KX417704 [25] from Genbank were short for discriminating the differences of dominant amino acid repeats. In our study, the genetic relationship of the P. ovale spp. was analysed by the neighbour-joining tree mainly based on the number of the dominant amino acid repeats including MANPIN, AITPIN, TINPIN or TITPIS. The distribution of the two P. ovale spp. was different. The isolates of P. o. wallikeri was more than that of P. o. curtisi, and the genotype of the 245 bp size was the predominant type for P. o. wallikeri in most Africa countries, but the other genotypes were less. For P. o. 
curtisi, the genotypes of the 299 and 317 bp size were commonly in Africa and the genotype of 335 bp size was less. Molecular epidemiological studies on genetic diversity of Plasmodium vivax have been based mainly on single copy polymorphic genes which code for parasite surface antigens such as circumsporozoite protein (csp), merozoite surface protein-1 (msp-1) and merozoite surface protein 3 alpha (msp 3α) [28]. Pvcsp comprises of central domain of tandem repeated sequences flanked by two non-repeated conserved sequences [29][30][31][32]. Two types of repeat elements, either VK210 or VK247 types were detected in clinical isolates of P. vivax and thus pvcsp serves as a useful tool for genotyping [33,34]. Potra has the similar characteristics with pvcsp, which also could be used for parasite genotyping. Conclusions Considering the change of malaria epidemiology and the approaching of malaria elimination, P. ovale spp. deserves more attention. Molecular techniques are a good tool for detecting and identifying the two P. ovale subspecies and their relative distribution. Authors' contributions RMZ was responsible for the molecular genetic analysis and data interpretation and drafted the manuscript. YL participated in sample detection and data analysis. HWZ and BLX conceived the study and revised the manuscript. FH revised the manuscript. SUL participated in sample collection and sample detection. YLZ, YD and DLL provided the administrative coordination. CYY and DQ participated in the data collection and analysed the data. All authors read and approved the final manuscript.
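The unit-based reading of the repeat region described above lends itself to a short script: split the translated repeat region into consecutive six-residue blocks and tally the diagnostic units. The unit names (MANPIN and AITPIN for P. o. wallikeri; TINPIN and TITPIS for P. o. curtisi) come from the text; the splitting routine itself is a hypothetical illustration and assumes the translated sequence is in frame with the repeat region.

```python
from collections import Counter

# Diagnostic six-amino-acid units named in the text.
POW_UNITS = {"MANPIN", "AITPIN"}   # P. o. wallikeri
POC_UNITS = {"TINPIN", "TITPIS"}   # P. o. curtisi

def repeat_profile(aa_sequence: str) -> Counter:
    """Count consecutive six-residue units (assumes the sequence is in frame)."""
    units = [aa_sequence[i:i + 6] for i in range(0, len(aa_sequence) - 5, 6)]
    return Counter(units)

def suggest_subspecies(aa_sequence: str) -> str:
    profile = repeat_profile(aa_sequence)
    pow_hits = sum(profile[u] for u in POW_UNITS)
    poc_hits = sum(profile[u] for u in POC_UNITS)
    if pow_hits > poc_hits:
        return "dominant wallikeri-type units"
    if poc_hits > pow_hits:
        return "dominant curtisi-type units"
    return "inconclusive"

# Toy example (not a real potra sequence): three TITPIS repeats.
print(suggest_subspecies("TITPIS" * 3))   # dominant curtisi-type units
```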
Biochemical composition and antioxidant activity of three extra virgin olive oils from the Irpinia Province, Southern Italy Abstract Extra virgin olive oil (EVOO), appraised for its healthy properties, represents an important element for the economy of several countries of the Mediterranean area, including Italy. Our study aimed to evaluate some biochemical characteristics (polyphenols and volatile compounds) as well as the antioxidant activity of three EVOOs obtained from the varieties Ravece, Ogliarola, and Ruvea antica, grown in the same field of an Irpinian village, Montella, in the Campania region, Southern Italy. Extra virgin olive oil Ruvea antica contained the greatest amount of total polyphenols and showed the highest antioxidant activity. Principal component analysis of the aromatic profiles indicated that the three EVOOs could be easily discriminated according to the cultivar. 1‐Hexanol, 2‐hexen‐1‐ol, 3‐pentanone, representing the most abundant volatiles of the EVOO Ruvea antica, and 2‐hexenal, which resulted the main component in EVOOs Ogliarola and Ravece, could be considered as markers to discriminate these three EVOOs, according to the ReliefF feature selection algorithm. . However, olives of the same variety, cultivated under different environmental conditions or in diverse geographical areas, can produce EVOOs with different organoleptic characteristics and healthy properties (Angerosa, Basti, Vito, & Lanza, 1999). Concurrently, fruits from different cultivars grown under the same environmental conditions could produce oils with different biochemical characteristics (Gorzynik-Debicka et al., 2018). In the composition of EVOOs, volatile organic compounds (VOCs) and polyphenols are of great importance. Volatile organic compounds are strongly related to oil aroma perceived during the assay of the product (Salas, Harwood, & Martinez-Force, 2013). They are produced at the beginning of the malaxation, during cell structure rupture, due to enzymatic reactions in the presence of oxygen. C6 aldehydes, C6 alcohols, and their corresponding esters, together with smaller amounts of C5 carbonyl compounds, are the main constituents of VOCs (60%-80%). Specifically, hexan-1-ol, hexanal, E-2-hexenal, and 3-methylbutan-1-ol generally dominate the VOCs pattern of the most common EVOOs from Mediterranean regions . volatile organic compounds profile can depend on cultivar and on degree of maturation (Angerosa et al., 1999). The Mediterranean diet is the golden standard for healthy nutrition. It is characterized mainly by a high intake of fruit, vegetables, and cereals, which are rich in phytochemicals . Among these compounds, polyphenols stimulated particular attention, due to their versatility of action, being able to protect against oxidative stress and to inhibit the proliferation of cancer cells (Del Rio, Costa, Lean, & Crozier, 2010). The beneficial effects of the Mediterranean diet are also attributed to the EVOO (Visioli & Bernardini, 2011), which, even if more expensive than olive oil, is richer in polyphenols, vitamins, phytosterols, etc., concurring to reduce the risk of cardiovascular events (Estruch et al., 2013), so that US Food and Drug Administration compared it to a real drug. Extra virgin olive oil is rich in polyphenols ranging between 50 and 1,000 mg gallic acid equivalents (GAE)/kg of product (Gorzynik-Debicka et al., 2018). 
Oleuropein, quercetin, and hydroxytyrosol, some of the main polyphenols present in EVOO, have antioxidant activity and ascertained effects in protecting against the coronary artery disease (Manna et al., 2002) or cancer (Owen et al., 2000). The aim of our work was to determine the biochemical composition of three EVOOs obtained from traditional varieties of olives cultivated in the same field of Montella, a little village of the Irpinia region, Southern Italy, harvested in the same period and processed by cold pressure. Three varieties, Ogliarola, Ravece, and Ruvea antica, in particular, attracted our attention. These are typical varieties of the Mediterranean area, diffused in Campania. Tree of Ogliarola has a medium foliage, with elliptical-lanceolate leaves. It produces a low number of flowers. Its fruits are black and elliptical, with a weight of 2-4 g. The endocarp has a weight of 0.3-0.45 g. Ravece tree has a high foliage density. Leaves are elliptical-lanceolate. Fruits are elongated, purple, and have a weight of 4-6 g; the endocarp is heavy (weight >0.45 g). Ruvea antica tree has medium foliage. Leaves are elliptical-lanceolate and longer more than 7 cm. Its fruits are purple, elliptical, and show a weight of 2-4 g. The endocarp has a weight of 0.3-0.45 g (Di Vaio & Nocerino, 2012). The biochemical characterization of resulting EVOOs involved the total antioxidant activity and the polyphenol content. The polyphenolic profile and VOCs were also evaluated. Statistical analysis allowed us to correlate some of the biochemical characteristics of the EVOOs; in particular, the antioxidant activity was correlated with total polyphenols and the singular components, identified in the oil by UPLC. Principal component analysis (PCA) of the aromatic profiles (obtained by Gas Chromatography/Mass Spectrometry) was carried out to discriminate oil samples according to cultivar. Moreover, a feature selection algorithm was used to identify and select putative volatile markers responsible for EVOO varieties discrimination. Ultrapure water from a Milli-Q system (Millipore) with a resistivity at 25°C of 18 MΩ * cm was used throughout the analyses. Helium (Rivoira) at a purity of 99.999% was the GC carrier gas. The SPME glass vials and the fibers were from Supelco; the capillary GC-MS column HP-Innowax (30 m × 0.25 mm × 0.5 μm) was purchased from Agilent J&W (Agilent Technologies Inc.). | Plant material The EVOOs used in this study were produced in the same year by cold pressing of three different varieties (Ruvea antica, Ogliarola, and Ravece) grown in the same field located in the Montella village, in the Irpinia Province, Campania region, Southern Italy. Prof. Vincenzo De Feo identified the varieties. Voucher specimens of the three varieties were stored in the herbarium of the Department of Pharmacy, University of Salerno. | Polyphenol analysis and free radical scavenging capacity To isolate the phenolic fraction of the three EVOOs, 1.5 g of sample was mixed with 1.5 ml of hexane and charged onto cartridges SPE C 18 . Polyphenols were eluted through 3 ml of methanol 100% and recovered; this step was repeated other two times. The three residues were collected, grouped, dried, and re-suspended with 1 ml of methanol. The samples were filtered (mesh = 0.20 μm). The method of Singleton and Rossi (Singleton & Rossi, 1965) was used to evaluate the content of total polyphenols present in the three EVOO samples. 
Quantification was determined by using gallic acid as standard and reading the absorbance at 760 nm through a Cary UV/Vis spectrophotometer (Varian). Results were expressed as μg gallic acid equivalent (GAE)/g of EVOO ± standard deviation (SD). The scavenging activity was expressed as effectiveness (%) of the sample to inhibit DPPH radical activity during a 60-min incubation. Polyphenol profile was determined through UPLC (ultra highperformance liquid chromatography) by using an ACQUITY Ultra Performance system linked to a PDA 2996 photodiode array detector (Waters), setting the UV detection wavelength at 280 nm, following the method of Fratianni and coworkers . Quantification of known components was performed by comparing the peak areas on the chromatograms of samples with those obtained from standard solutions. | Analysis of VOCs profiles The optimization of SPME parameters was achieved by examining samples of a commercial EVOO bought at a local supermarket. SPME GC-MS volatile analysis was accomplished according to Romero and coworkers (Romero, Garcıa-Gonzalez, Aparicio-Ruiz, & Morales, 2015), but using the DVB/CAR/PDMS (50/30 μm) fiber. For the sample preparation, 2 g of each sample was put into a 20-mL headspace vial with screw cap (Supelco) and 4-methyl-2-pentanol to a final concentration of 1.5 mg/g was added as an internal standard to guarantee the analytical reproducibility. Subsequently, vials, closed with a Teflon (PTFE) septum and an aluminum cap (Chromacol) and stirred, were put in the instrument dry block heater and held at 40°C for 10 min. After the equilibration time, the extraction and injection processes were automatically carried out using an autosampler MPS 2 (Gerstel). Volatiles were analyzed by gas chromatography-quadrupole mass spectrometry (GC-qMS), introducing the SPME fiber into the injector port of the gas chromatographer, model GC 7890A, Agilent hyphenated with a mass spectrometer 5975C. Once desorbed, metabolites were directly transferred to the capillary column HP-Innowax for the analysis. The oven temperature program was initially set at 40°C for 3 min, increased to 200°C at 30°C/min, and then ramped to 240°C at 30°C/min, holding for 1 min. Volatiles were investigated according to the instrumental parameters as reported in the literature (Cozzolino, Martignetti, et al., 2016;Cozzolino, Pace, et al., 2016). Each sample was analyzed in duplicate in a randomized sequence where blanks were also run. Volatile metabolites recorded in the headspace of the extra virgin olive oils under study were identified by three diverse methods, as previously reported (Cozzolino, Martignetti, et al., 2016;Cozzolino, Pace, et al., 2016). The areas of the identified volatiles were determined from the total ion current (TIC), and the semiquantitative data of each metabolite (Relative Peak Area, RPA%) were considered in relation to the area of the peak of 4-methyl-2pentanol, used as internal standard. | Statistical analysis Data were expressed as the mean ± standard deviation (SD) of triplicate measurements, and antioxidant activity was correlated with polyphenols. As concerns VOCs, analysis of variance (ANOVA) was used to compare results and significance was accepted at p < .05. Principal component analysis (PCA) was then used to relate the obtained values and as an explorative tool for the preliminary visualization of the separation of the different EVOO samples, according to their VOCs profiles. 
Last, the ReliefF (Kononenko, Simec, & Robnik-Sikonja, 1997) feature selection algorithm was used to identify potential markers, among VOCs, responsible for EVOO discrimination. | Total polyphenol content and antioxidant activity The analysis of total polyphenols (TPF, Table TA B L E 1 Total polyphenols (expressed as μg GAE/g of EVOO ± SD) and antioxidant activity (evaluated through the DPPH and expressed as percentage ± SD) of the three polyphenolic extracts from Ogliarola, Ravece, and Ruvea antica EVOOs | Polyphenol profile The amount (expressed as μg GAE/g of EVOO) of polyphenols identified through UPLC analysis is shown in representing an abundant polyphenol in Ogliarola and Ruvea antica, was found at concentrations much lower in EVOO Ravece (9.30 μg GAE/g), the 5.93% of the total polyphenols. This molecule is an ester of hydroxytyrosol; it gives rise from the mevalonic acid pathway (Omar, 2010 and Ruvea antica (9.77 and 5.09 μg GAE/g, respectively), but not in Ogliarola. The polyphenols identified in the three EVOOs are well known highly bioavailable molecules. The presence of high amounts of oleuropein, whose absorption in the body is about 55%-60% (Omar, 2010), is very significant, given the numerous and key effects of such metabolite including antioxidant, anti-inflammatory, anticancer, antiatherogenic activities, and cardioprotective, antihyschaemic and hypolipidemic properties (Visioli & Galli, 2002). Concomitantly, the high content of quercetin contributes to improve the biological value of the three EVOOs. The amount of quercetin and its derivative spiraeoside in the EVOOs Ogliarola and Ruvea antica represented the 51.74% and 35.04%, respectively, of the polyphenols. A so high amount of these compounds is certainly essential: Like other flavonoids, they can affect the cellular function, by mediating gene expression and signal transduction rather than through a direct antioxidant effect (Nemeth et al., 2003). Dietary quercetin and other flavonoids are absorbed by a little percentage (5%-10%) in the small intestine; the residue of these molecules moves to the colon, where they are metabolized by the gut microbiota, influencing its composition. These molecules exert potential prebiotic effect, protecting from intestinal dysbiosis and all alterations interesting microbiota, and finally, they can concur to significantly influence host biochemistry and host susceptibility to diseases (Nazzaro, Fratianni, d'Acierno, & Coppola, 2013;Tamura et al., 2017). Considering the almost complete linearity between the total polyphenol content and the antioxidant activity (corr = 99.9, Figure 1 . The analysis was performed taking into consideration the most abundant molecules present in the three extracts, which resulted, by the UPLC analysis, quercetin, oleuropein, spiraeoside, formononetin, naringenin, and luteolin. The results are shown in Figure 2. Naringenin, present in all three polyphenolic extracts, at amounts ranging between 7.90 and 21.05 μg GAE/g EVOO, did not seem to affect the antioxidant activity in marked way until 10.31 μg GAE/g. Its effect seemed stronger upper such threshold, so that, at twice amounts, a doubling of the antioxidant activity was observed (Figure 2a). Oleuropein appeared to exhibit a linear behavior, with an antioxidant activity growing concurrently to its amounts (corr = 88.75, Figure 2b). 
This molecule seemed to be the main responsible for the antioxidant activity exhibited by the three polyphenol extracts, although it did not represent the most abundant molecule. Therefore, the noticeable antioxidant activity of oleuropein is reported, mainly as a scavenger of chain-propagating lipid peroxyl radicals within the membranes (Saija et al., 1998 which decreased from 34% to 20.7% (Figure 2c). A similar behavior could be also attributed to luteolin (Figure 2d), which amounts in the three extracts ranged between zero and 9.77 μg GAE/g. Like quercetin, it apparently exerted an antioxidant activity (34%) until a specific threshold (that could be ascribable to 5.09 μg GAE/g); after which, increasing its content until 9.77 μg GAE/g, the antioxidant activity decreased from 34% to 16%. Formononetin (corr = −37.98) showed a variable trend so that increasing its amount until a certain percentage (7.6%) the antioxidant activity decreased, increasing again as the molecule's content increased (Figure 2e). Our results corroborated the hypothesis that a certain bioactive compound can modify its properties in the presence of other compounds. In the case of spiraeoside, for instance, it is possible that its influence on the antioxidant activity can be negligible (Figure 2f), although its presence in the extracts of EVOOs Ruvea antica and Ogliarola was practically the same. | Volatile compounds analysis The analysis of EVOO volatile compounds was performed through the SPME sampling followed by GC-MS (Torri, Sinelli, & Limbo, 2010). SPME, as an alternative technique for fractionation of volatiles from interfering non-volatile matrix compounds, is a pre-concentration technology, which integrates sample extraction, concentration, and sample introduction into a single solvent-free step, preventing the production of artifacts compared with conventional solvent extraction procedures (Pawliszyn, 2012). A total of 49 VOCs were identified, which belonged to hydrocarbons (3), aldehydes (11), alcohols (12), ketones (5), esters (5), carboxylic acids (6) conditions. SPME GC-MS semiquantitative data, calculated as the percent ratio of the respective peak area relative to the peak area of 4-methyl-2-pentanol, used as internal standard, were subject to a one-way ANOVA, in order to investigate the effect of cultivar on the identified VOCs. Table 3 . This compound provides the typical "green note" of olive oil and has been reported to be negatively correlated with the maturity and degree of oxidation of virgin olive oils (Pouliarekou et al., 2011). Alcohols were the most abundant volatiles present in the EVOO of Ruvea antica, representing the 58.2% of the total VOCs. The principal alcohols were (E) -hexen-1-ol (37.4%) and 1-hexanol (15%), both deriving from the LOX pathway and showing a characteristic odor described as green, grassy, leafy. These compounds, on the other hand, were present only at lower concentration in Ogliarola (1.2% and 1.8%, respectively) and in Ravece EVOOs (4.2% and 14.1%, respectively). The variety could strongly affect the abundance of volatile compounds, which in turn have revealed to be extremely valuable as varietal markers (Kalua et al., 2005). For this reason, the volatile profiles of the three EVOOs were subjected to multivariate statistical analysis with the aim to build models able to explain the variations of the metabolic content dependently from genotype, and to identify putative volatile markers useful for cultivar discrimination. 
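Because the multivariate step is only summarized here, a hedged scikit-learn sketch of a PCA on the semiquantitative VOC table (relative peak areas against the internal standard) may be useful. The small matrix below is a placeholder loosely echoing the values quoted in the text, not the measured 49-compound profiles, and the ReliefF step (available in third-party packages such as skrebate) is omitted.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Rows: duplicate GC-MS runs of the three cultivars; columns: VOC relative peak areas (RPA%).
# Placeholder numbers, e.g. columns 2-hexenal, (E)-2-hexen-1-ol, 1-hexanol, 3-pentanone.
labels = ["Ogliarola", "Ogliarola", "Ravece", "Ravece", "Ruvea antica", "Ruvea antica"]
X = np.array([
    [56.0, 1.2, 1.8, 0.9],
    [54.5, 1.1, 2.0, 1.0],
    [40.2, 4.2, 14.1, 1.5],
    [41.0, 4.0, 13.8, 1.4],
    [10.5, 37.4, 15.0, 6.1],
    [11.0, 36.9, 15.3, 6.0],
])

# Autoscale the variables, then project on the first two principal components.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
for name, (pc1, pc2) in zip(labels, scores):
    print(f"{name:>13s}  PC1={pc1:6.2f}  PC2={pc2:6.2f}")
```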
| CONCLUSION

Data obtained in this research clearly confirm the influence of genetic and environmental factors in determining the organoleptic properties of olive oil and permitted a distinction of the three EVOOs studied on the basis of their volatile constituents.

ACKNOWLEDGMENTS

This work was funded within the Project "SALVE," PSR 2007-2013, mis. 214, action f2, by the Campania Regional Council, Italy.

CONFLICT OF INTEREST

Authors declare that they do not have any conflict of interests.

ETHICAL APPROVAL

Human and animal testing was unnecessary for this study.
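For completeness, the two routine calculations behind the polyphenol results (µg GAE/g read off a gallic acid calibration at 760 nm, and DPPH scavenging as the percentage drop in absorbance over 60 min), together with the Pearson correlation reported as corr = 99.9, can be sketched as follows. All numeric values are invented placeholders, not measurements from this study.

```python
import numpy as np

def gae_per_gram(abs_sample, std_conc_ug_ml, std_abs, extract_volume_ml, sample_mass_g):
    """Total polyphenols as ug gallic acid equivalents per g of EVOO,
    from a linear gallic-acid calibration (absorbance at 760 nm)."""
    slope, intercept = np.polyfit(std_conc_ug_ml, std_abs, 1)
    conc_ug_ml = (abs_sample - intercept) / slope
    return conc_ug_ml * extract_volume_ml / sample_mass_g

def dpph_inhibition(abs_control, abs_sample):
    """DPPH radical scavenging activity (%) after the 60-min incubation."""
    return 100.0 * (abs_control - abs_sample) / abs_control

# Placeholder calibration and sample readings, for illustration only.
std_conc = [10, 25, 50, 100]          # ug/mL gallic acid
std_abs = [0.11, 0.26, 0.52, 1.01]    # measured A760 of the standards
print(gae_per_gram(0.40, std_conc, std_abs, extract_volume_ml=1.0, sample_mass_g=1.5))
print(dpph_inhibition(abs_control=0.80, abs_sample=0.45))   # ~43.8 %

# Pearson correlation between total polyphenols and DPPH activity across extracts,
# expressed on a 0-100 scale as in the text (placeholder values again).
tpf = np.array([310.0, 420.0, 515.0])
dpph = np.array([21.0, 28.5, 34.0])
print(round(100 * np.corrcoef(tpf, dpph)[0, 1], 1))
```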
Breathomics to monitor interstitial lung disease associated with systemic sclerosis

To the Editor: Systemic sclerosis (SSc) is an autoimmune disease of unknown origin characterised by an inflammatory process associated with vascular damage and collagen deposition. Interstitial lung disease (ILD) is highly prevalent in SSc (SSc-ILD), is known to be the leading cause of death among these patients, and its treatment requires aggressive multimodal therapy [1]. In this context, there is a major clinical need to identify significant SSc-ILD at the earliest stage, especially for patients at risk of developing a progressive form of the disease. Nowadays, few biomarkers can classify patients at risk of developing SSc-ILD; most of them are blood-based and detected in the last clinical stage of the disease. Previously, we have demonstrated that SSc patients exhibit a specific signature of volatile organic compounds (VOCs) compared to healthy subjects [2]. In this prospective study, we aimed to identify the potential of VOCs to predict the ILD phenotype.
The study presented was conducted on a cohort composed of 42 patients, 21 patients suffering from SSc and 21 suffering from SSc-ILD.These patients were prospectively recruited both in University Hospital of Liège (CHU), Belgium, and Maastricht University Medical Center (MUMC+), the Netherlands, during a period of 6 months starting in July 2021 and ending in September 2021.SSc was diagnosed according to 2013 American College of Rheumatology/European League Against Rheumatism guideline [3].SSc-ILD was defined by a specific interstitial lung involvement confirmed through a multidisciplinary discussion as recommended by American Thoracic Society/European Respiratory Society guidelines [4].The protocol was locally approved by ethical committees (Belgian number B707201422832, reference Liege 2014/302; Dutch number NL57351.068.17,reference Maastricht 172021).All subjects gave written informed consent before participating to the study.All breath samples were systematically collected in the same room at the two medical facilities to minimise the effect of variation in background air.As established in our standard operating procedures (SOPs), breath sampling was conducted before any pulmonary function test and patients were not required to fast [5].The exhaled breath samples were collected in inert 5-L Tedlar bags.The content of the sampling bag was subsequently concentrated under standardised conditions into Tenax GR/Carbopack B TD tubes (Markes International Ltd).Following the collection process, the tubes were hermetically sealed using specific caps for preservation before being analysed.The exhaled air was finally analysed by thermal desorption comprehensive two-dimensional gas chromatography high-resolution time-of-flight mass spectrometry (TD-GC×GC-HRMS) (Leco Corporation) at the OBiAChem laboratory in Belgium [2].Statistical analyses were performed using RStudio (2022.12.0) and MetaboAnalyst online 5.0 [6].For more detailed technical information, see previous research conducted [2,5]. A total of 42 patients were recruited from two expert centres.The patients' characteristics are presented in figure 1a. In our study, we compared the exhaled breath composition between SSc and SSc-ILD patients using TD-GC×GC-HRMS.This technique allowed us to detect ∼800 features.We developed a statistical model based on partial least squares-discriminant analysis.This model was then subsequently employed to select nine significant markers based on their variable importance score (figure 1b and c).This model achieved an area under the curve (AUC) of 0.82, accuracy of 85%, sensitivity of 77% and specificity of 100% (figure 1d) for identifying the ILD phenotype.Furthermore, the achieved metrics were similar to the 1d). To evaluate robustness, we tested potential confounding factors such as smoking habits, treatments and gender, which were included in the metadata (figure 1a).We did not identify any interference in the predictive ability of VOCs by potential confounders.A correlation was observed between the functional respiratory parameters (i.e.D LCO and forced vital capacity % predicted value) and the VOCs.A positive correlation was observed between D LCO and the probability of classification of the VOC-based model. We have identified a breath-based model able to discriminate SSc-ILD with a high sensitivity, confirming its potential in patient management.Four markers are in line with our previous study, reaffirming the potential of VOCs in disease classification (i.e. 
two terpineol isomers, menthone I and menthone II) with their potential metabolic pathways discussed in our earlier work [2].A key focus of this research lies in the discovery of nine VOCs present in the exhaled breath of patients that exhibit the capability to discriminate between SSc and SSc-ILD.These nine markers demonstrated significant classification performance in comparison to conventional lung physiological markers and functional parameters [7].Furthermore, we validated methodological SOPs to conduct breath-based multicentric studies, a determinant step toward validating our classification across several clinical centres.Multicentric breath studies are a major improvement for this emerging monitoring strategy. Moreover, this finding contributes to an enhanced understanding of the disease and the associated metabolic pathways.For instance, 1,4-pentadiene, a hydrocarbon, emerges as a potential biomarker of several lung pathologies.We previously demonstrated that chemically and biologically induced inflammation in lung epithelial cells can lead to increased hydrocarbons levels due to inflammation-associated oxidative stress [8]. Another compound, 1-propanol, has been proposed as a potential marker for lung cancer, detected in the breath of cancer patients and in the headspace of cancer cells [9].The presence of this alcohol might stem from the cytochrome P450 enzymes that hydroxylate lipid peroxidation biomarkers, generating alcohols.Notably, this last marker has also been observed in the exhaled breath of asthmatic patients and has discriminatory capabilities, along with other VOCs, in distinguishing between neutrophilic and eosinophilic asthma [5]. The constant exposure of humans to exogenous compounds through various sources such as diet and environment can lead to the direct secretion of these volatile compounds in breath.Additionally, volatile downstream products stemming from these compounds could potentially serve as medical probes [10].Limonene (D-limonene), another terpene regarded as an exogenous marker, was found to be elevated in the breath of patients with liver cirrhosis [11].Following entry into the bloodstream, limonene is metabolised by the P450 enzymes CYP2C9 and CYP2C19.This represents the second instance in this study where cytochrome P450 enzymes play a role.Conversely, carvone and chlorobenzene have yet to be associated with disease markers based on current knowledge.Like limonene, these compounds could be considered as a probe that assesses metabolism performances.It is worth noting that an increased amount of altered extracellular matrix components destroys alveolar architecture and disrupts gas exchange equilibrium [12].Therefore, elevated volatile concentrations could be also attributed to the thickening of alveolar walls and subsequent impairment of gas exchange, influencing concentrations.The accurate and sensitive statistical model presented in this study showed the potential of VOCs in exhaled breath to identify SSc-ILD patients in a SSc cohort.In addition, our study is corroborating the potential of four terpenes to discriminate SSc patients.Exhaled breath could help clinicians to rapidly provide targeted treatment to patients suffering from ILD.Nevertheless, prospective multicentric studies to further validate the potential of exhaled breath analysis for the management of SSc-ILD patients would be needed.Future studies would include SSc-ILD at early stages to evaluate longitudinal changes of VOCs compared to disease progression and treatment response. 
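The discriminant analysis summarized above (PLS-DA on roughly 800 breath features, evaluated by ROC AUC) can be approximated with scikit-learn's PLSRegression used as a two-class discriminant. The study itself used RStudio and MetaboAnalyst, so the snippet below is only a schematic of the type of analysis, run here on random stand-in data rather than GC×GC-HRMS features.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# 42 subjects x 800 features as random stand-ins for the breath data;
# y = 0 for SSc, 1 for SSc-ILD (21 of each).
X = rng.normal(size=(42, 800))
y = np.repeat([0, 1], 21)

# PLS-DA: regress the class label on the feature matrix, then score the
# cross-validated continuous predictions with an ROC AUC.
pls = PLSRegression(n_components=2)
scores = cross_val_predict(pls, X, y, cv=7).ravel()
print("cross-validated AUC:", round(roc_auc_score(y, scores), 2))
```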
FIGURE 1 a FIGURE 1 a) Patients' characteristics.Data are presented as mean±SD unless otherwise stated.p-values were obtained through Wilcoxon-Mann-Whitney rank-sum test for continuous variables and Chi-squared test for noncontinuous variables.b) Boxplot of the nine metabolites identified in exhaled breath used to discriminate systemic sclerosis (SSc) patients (red) from SSc patients with interstitial lung disease (ILD) (green).The lower middle, and upper lines of the box represent the 25th, 50th and 75th percentiles.The upper and lower whiskers extend to 1.5 times the interquartile range.The numbering refers to the identification of the metabolites reported in c.From these nine compounds, we specifically identified eight compounds that were exhaled more in the breath of SSc patients compared to SSc-ILD.c) Table of the potential markers discriminating the two classes of patients.Mass spectral information (library match, probability and characteristic ion), partial least squares-discriminant analysis (PLS-DA) variable importance (VIP) score and p-value obtained after Wilcoxon rank-sum test are reported.The mass accuracy was calculated on the specified ion.Those specific compounds belong to the alkadiene (V1), terpenoid (V2, V3 and V5-V8) and alcohol (V4) chemical families.Conversely, we observed a reduction of chlorobenzene in SSc patients' breath compared to SSc-ILD.d) Classification performances of the most influential metabolites identified in exhaled breath for SSc diagnosis compared to SSc-ILD diagnosis using a receiver operating characteristic (ROC) curve analysis based on PLS-DA algorithm (orange).Classification performances of diffusing capacity of the lung for carbon monoxide (D LCO ) using ROC curve analysis based on univariate D LCO analysis and a threshold value set at 61 (blue).BMI: body mass index; FEV 1 : forced expiratory volume in 1 s; FVC: forced vital capacity; 1 t R : first-dimension retention time; 2 t R : second-dimension retention time; CAS: Chemical Abstract Service; m: mass; z: charge; AUC: area under the curve; VOC: volatile organic compound.# : Metabolomics Standards Initiative (MSI) level 2; ¶ : MSI level 3; § : MSI level 1. 
https://doi.org/10.1183/23120541.00175-2024

Affiliations: 1 Molecular System, Organic & Biological Analytical Chemistry Group, University of Liège, Liege, Belgium. 2 Department of Internal Medicine, Division of Clinical and Experimental Immunology, Maastricht University Medical Center, Maastricht, The Netherlands. 3 Respiratory Medicine, CHU Liège, Liege, Belgium.
School Performance Measurement Based on Business Architecture School performance measurement is the process of collecting, processing, analyzing and interpreting data about the quality of work carried out by school members in carrying out their main tasks and roles. Measuring the performance of an organization will encourage the achievement of objectives in the organization. A performance measurement system must be built so that the information obtained is as much and as accurate as possible. Business architecture is a formal representation and tools as well as information for business professionals in assessing, changing and designing a business. Business modeling will show the relationship of organizational behavior with the information needed, and the relationships that occur within the organizational structure, so that business architecture is the main thing that must be completely defined before continuing on to the next stage. To encourage schools to achieve goals and design business strategies that are in line with the objectives, this research will propose a system for measuring school performance based on business architecture. INTRODUCTION School performance measurement is the process of collecting, processing, analyzing and interpreting data about the quality of work carried out by school members (especially service providers) in carrying out their main tasks and roles. To conduct a school performance measurement a number of indicators are needed. Indicators are measures to determine the performance of a person, program or institution as a whole (IEES, 1986: 40). Thus, school performance indicators are a measure to determine the performance of a school institution (Ikhfan, 2016). School performance is the achievement of schools resulting from the process/school behavior. In conclusion, the essence of school performance is the success achieved by schools as measured by indicators to improve the learning process in achieving learning objectives, namely learning outcomes (Muttaqin, 2010: 2). In a performance measurement there are several aspects that can be measured. According to Nurkolis (2003), performance can be measured by effectiveness, quality, productivity, efficiency, innovation, quality of life and work morale. Meanwhile, according to Fenwick (2008) using three aspects, namely economy, efficiency and effectiveness. Economics is for the comparison of costs and quality of resources. Efficiency as a comparison of resources used. While the effectiveness of knowing the extent to which objectives are achieved according to targets. The description above shows that there are several aspects that can be used in measuring a performance. According to Haryoto (2008) in the process of measuring a performance must be returned to the goals and reasons for the formation of the organization itself. In a dynamic, more open and competitive environment, it has implications for change and progress very rapidly and quickly. The education sector is one sector that receives the impact of change and is required to always adapt to change (responsive adaptative). Responding to changes that occur, schools, as an important part of the education sector are required to remain consistent and concerned about maintaining the quality of all activities carried out. One strategy to remain committed in maintaining quality is to measure performance that has been achieved in a systematic and measurable and accountable manner (Haris, 2016: 10). Measuring organizational performance is very important. 
Measurement of organizational performance according to Bastian (2001: 330) will encourage the achievement of organizational goals and will provide feedback for continuous improvement efforts. Therefore the performance measurement system must be built in such a way that information about performance can be obtained as much and as accurately as possible (Haris, 2016: 11). Business architecture is a formal representation and tools and information for business professionals in measuring, changing and designing a business (SOA, 2010). Architectural conceptual modeling in enterprise architecture is influenced by 3 (three) main layers, namely the business layer, application layer and technology layer (Jonkers, et al, 2004). Business layer describe 3 (three) aspects, namely structure, behavior and information. These three aspects are very important in business modeling, business modeling shows the relationship of organizational behavior with the information needed, and the relationships that occur within the organizational structure, so that business architecture is the main thing that must be defined in full before proceeding to the next stages. According to Ralph Whittle and Conrad Myrick (2004), business architecture in an enterprise can be connected with all the components that exist in the development of enterprise architecture. Based on previous studies, researchers only measure school performance based on SNP alone, none of which are based on business architecture. This is what drives the author to build a framework for measuring school performance based on business architecture. Related Research In a study entitled "Analisis Kinerja Dengan Pendekatan Balanced Scorecard Di SMAN 3 Yogyakarta" compiled by Emi Susanti stated that the use of the Balanced Scorecard method in school measurements showed good overall results from various aspects. In the financial aspects of measurement using the concept of value for money shows good performance with the results of the acquisition is very economical, effective and quite efficient. Whereas in other aspects such as customer aspects, internal business processes and learning and growth showed good performance results. In a study entitled "Penilaian Standar Pengelolaan Dalam Sistem Informasi Supervisi, Monitoring Dan Evaluasi Pada Sekolah Standar Nasional (SSN) Tingkat SLTP" by Aditya Ramadhan explained the measurement automation of an information system used to measure the achievement of good school management processes according to the criteria of management of Standar Sekolah Nasional (SSN) in education authorities of Gresik disctrict. In this study, it was concluded that 95.4% of the test items were successful and valid, so the application could be run according to the standar proses SSN. Whereas in the study entitled "Otomasi Penilaian Standar Isi Dalam Sistem Informasi Supervisi, Monitoring Dan Evaluasi Pada Sekolah Standar Nasional (SSN) Tingkat SLTP" by Alexander Malik Hidayatullah explained the researchers were trying to create an information system to facilitate the implementation of Supervision, Monitoring and Evaluation of the Content Standards with the hope that the existing standard values can be quickly identified. It was concluded that 89% of the test items were successful and valid. So the application can be run according to SSN instruments. Study Area The object of this research is a school. Every school has a business process in it. Every business process is covered according to established standards. 
Procedure The following are the steps used to measure performance based on the existing business processes. Identification of Business Architecture Business architecture is a business concept or design containing the business processes that play an important role in system design. The business processes that occur in schools are described in process models; several school business processes based on the SNP are illustrated by these process models. School Business Process Identification The existing business processes are then grouped by time scale: daily, weekly, monthly, per semester and yearly. These business processes must be carried out according to certain rules. The first rule for scoring business process performance is as follows:
1. If the business process achieves 76-100% of the set target, its performance score is 4.
2. If the business process achieves 51-75% of the set target, its performance score is 3.
3. If the business process achieves 26-50% of the set target, its performance score is 2.
4. If the business process achieves less than 26% of the set target, its performance score is 1.
A second, different rule is also proposed:
1. If the business process is 100% successful, i.e. it reaches the set target, its performance score is 4.
2. If the business process does not reach the set target (less than 100%), its performance score is 0.
The achievement standards for the business processes are aligned with the standards in the SNP. The original paper tabulates the business processes by time scale and by the aspect measured; only a fragment of the semester business process (PBS) table survives in the extracted text, listing items PBS9-PBS14 (among them: subject syllabus information and the number of subject syllabuses communicated, reporting the assessment results at the end of the semester, coordination of evaluation, reporting the assessment results per subject, and, under the Standar Pendidikan dan Tenaga Kependidikan, physical and spiritual health and the ability to plan learning in accordance with the principles of learning). Data Analysis After the business processes have been scored against the SNP aspects, the results are aggregated according to the time scale of each business process. The daily school performance formula uses D, the daily school performance measurement function; PBD, the daily business process score; and ADn, the number of aspects measured daily. Analogous formulas are given for weekly, monthly, semester and yearly school performance, and a ranking table maps the resulting values to school performance grades. [The displayed formulas and the ranking table did not survive text extraction; a minimal implementation sketch follows below, and the Discussion states that 0.71 < Y < 0.85 corresponds to rank B (good) and Y < 0.51 to rank D (poor).]
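Because the displayed formulas are missing from the extracted text, the following minimal sketch shows one way the scoring and aggregation could be implemented. It assumes that each period's performance is the mean of its business process scores normalized by the maximum score of 4, and that each higher level (weekly, monthly, semester, yearly) is the plain mean of the level below it; these normalizations, the function names and the example achievement percentages are assumptions for illustration, not taken from the paper.

```python
# Illustrative sketch of the proposed school performance measurement.
# Assumption: a period score is the mean of its business process scores
# divided by the maximum score of 4, and higher-level scores (weekly,
# monthly, semester, yearly) are plain means of the lower-level scores.

def score_rule_1(achievement_pct: float) -> int:
    """First rule: partial credit based on the percentage of the target achieved."""
    if achievement_pct >= 76:
        return 4
    if achievement_pct >= 51:
        return 3
    if achievement_pct >= 26:
        return 2
    return 1

def score_rule_2(achievement_pct: float) -> int:
    """Second rule: all-or-nothing credit."""
    return 4 if achievement_pct >= 100 else 0

def period_performance(process_scores: list[int]) -> float:
    """Normalized performance for one period (daily, weekly, ...)."""
    return sum(process_scores) / (4 * len(process_scores))

def aggregate(lower_level_values: list[float]) -> float:
    """Weekly from daily values, monthly from weekly values, and so on."""
    return sum(lower_level_values) / len(lower_level_values)

# Hypothetical daily example: three business processes achieving 80%, 60% and 100%.
achievements = [80, 60, 100]
daily_rule_1 = period_performance([score_rule_1(a) for a in achievements])
daily_rule_2 = period_performance([score_rule_2(a) for a in achievements])
print(daily_rule_1, daily_rule_2)  # rule 1 gives a higher value than rule 2
```

As in the paper's worked example, the first rule tolerates partially completed business processes, so it always yields a value at least as high as the second rule for the same inputs.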
Result I Result I is obtained by applying the first rule. Suppose a school has the daily business process scores shown in the original worked example [the daily, weekly and monthly score tables are given in the original as tables and did not survive text extraction]. The school's daily school performance is calculated from these scores, followed by the monthly business process scores. Suppose further that the school has carried out its business processes for one semester with Mn = 6, giving the monthly school performance results, and for one year with Sn = 2, giving the semester school performance results S1 = 0.79 and S2 = 0.76, from which the annual performance is then calculated. Result II Result II applies the second rule to the same case, giving the following values [as above, the daily, weekly and monthly score tables did not survive extraction]. The business processes for one working week with Dn = 6 yield the daily school performance results, followed by the monthly and semester business process scores. For one year with Sn = 2, the semester school performance results are S1 = 0.32 and S2 = 0.29, from which the annual performance is then calculated. Discussion The school performance measurement using the first rule gives a daily performance value D = 0.75, a weekly value W = 0.83, a monthly value M = 0.82, a semester value S = 0.79 and an annual value Y = 0.77, so the school falls in the range 0.71 < Y < 0.85, which corresponds to rank B, or good. The measurement using the second rule gives a daily value D = 0.33, a weekly value W = 0.44, a monthly value M = 0.38, a semester value S = 0.32 and an annual value Y = 0.29, so the school falls in the range Y < 0.51, which corresponds to rank D, or poor. The two rules therefore give different results. This is because the first rule still gives partial credit to business processes that have not been completed perfectly, whereas under the second rule a business process is only counted if it has been completed in full. CONCLUSION From the results of measuring school performance with the proposed method, several conclusions can be drawn:
1. School performance measurement based on business architecture can interpret data about the quality of the work performed by school members within a given period.
2. Measuring school performance based on business architecture with the two proposed rules gives different results. Measurement with the first rule yields Y = 0.77, indicating that the school has a rating of B, or good, while measurement with the second rule yields Y = 0.29, indicating a rating of D, or poor. A school performance assessment based on business architecture therefore depends on how the existing business processes are scored.
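The full ranking table is not recoverable from the extracted text; the sketch below encodes only the two bands that the Discussion states explicitly and leaves the remaining bands unspecified. The function name and the fallback label are illustrative.

```python
# Partial ranking function: only the two bands stated in the Discussion are known.
def rank(y: float) -> str:
    if 0.71 < y < 0.85:
        return "B (good)"
    if y < 0.51:
        return "D (poor)"
    return "band not given in the extracted text"

print(rank(0.77))  # first rule  -> B (good)
print(rank(0.29))  # second rule -> D (poor)
```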
2021-05-22T00:03:44.430Z
2020-04-30T00:00:00.000
{ "year": 2020, "sha1": "e5cccd3d3cde8f1adb3cd20b5998322d2d9630cb", "oa_license": "CCBYNCSA", "oa_url": "http://sunankalijaga.org/prosiding/index.php/icse/article/download/548/522", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "d1615297b53c6ebbabff187db6647467170ab2af", "s2fieldsofstudy": [ "Business", "Education", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
148574118
pes2o/s2orc
v3-fos-license
Evaluation of single-view contrast-enhanced mammography as novel reading strategy: a non-inferiority feasibility study Background Guidelines recommend screening of high-risk women using breast magnetic resonance imaging (MRI). Contrast-enhanced mammography (CEM) has matured, providing excellent diagnostic accuracy. To lower total radiation dose, evaluation of single-view (1 V) CEM exams might be considered instead of double-view (2 V) readings as an alternative reading strategy in women who cannot undergo MRI. Methods This retrospective non-inferiority feasibility study evaluates whether the use of 1 V results in an acceptable sensitivity for detecting breast cancer (non-inferiority margin, − 10%). CEM images from May 2013 to December 2017 were included. 1 V readings were performed by consensus opinion of three radiologists, followed by 2 V readings being performed after 6 weeks. Cases were considered “malignant” if the final BI-RADS score was ≥ 4, enabling calculation of sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). Histopathological results or follow-up served as a gold standard. Results A total of 368 cases were evaluated. Mean follow-up for benign or negative cases was 20.9 months. Sensitivity decreased by 9.6% from 92.9 to 83.3% when only 1 V was used for evaluation (p < 0.001). The lower limit of the 90% confidence interval around the difference in sensitivity between 1 V and 2 V readings was − 15% and lies below the predefined non-inferiority margin of − 10%. Hence, non-inferiority of 1 V to 2 V reading cannot be concluded. AUC for 1 V was significantly lower, 0.861 versus 0.899 for 2 V (p = 0.0174). Conclusion Non-inferiority of 1 V evaluations as an alternative reading strategy to standard 2 V evaluations could not be concluded. 1 V evaluations had lower diagnostic performance compared with 2 V evaluations. Key Points • To lower radiation exposure used in contrast-enhanced mammography, we studied a hypothetical alternative strategy: single-view readings (1 V) versus (standard) double-view readings (2 V). • Based on our predefined margin of − 10%, non-inferiority of 1 V could not be concluded. • 1 V evaluation is not recommended as an alternative reading strategy to lower CEM-related radiation exposure. Introduction Breast cancer is a leading cause of cancer-related deaths in women worldwide every year. Some women have genetic mutations, making them more susceptible to develop breast cancer in their life. These include, for example, BRCA-1, BRCA-2, TP53, PALB2, CDH1, STK11, and PTEN gene mutations. Other reasons for having a > 20% lifetime risk of developing breast cancer include prior chest (mantle) radiation and specific syndromes, such as Li Fraumeni or Cowden syndrome. Based on studies that showed an improved cancer detection rate in these women when breast MRI is used as adjunct screening modality [1][2][3], current international guidelines recommend annual screening of these women with breast MRI [4][5][6]. The use of breast MRI as a screening tool has some limitations. It is a relatively expensive modality and a widespread use as a screening tool is challenging due to the limited availability of sufficient scan slots. Breast MRI has an excellent sensitivity, but its specificity is moderate (resulting in falsepositive findings requiring additional follow-up exams or biopsies) [7]. In addition, studies have shown that gadolinium (Gd) of contrast agents accumulates in the body [8]. 
Although no negative long-term side-effects have been reported, this phenomenon might result in a discouragement of using Gd-based contrast agents for (repeated) screening purposes. Finally, a number of women will not be able to undergo breast MRI because of claustrophobia, previous adverse reactions to the contrast agent used, or the presence of metal objects within their bodies. Therefore, an alternative imaging modality might be appealing for these groups of women, using for example contrast-enhanced mammography (CEM, synonyms: CESM, contrast-enhanced spectral mammography or CEDM, contrast-enhanced dual-energy mammography). The underlying principle of CEM is comparable to that of breast MRI: growing tumors need to sprout newly formed blood vessels to meet their increasing demand for nutrients in a process called angiogenesis [9]. These new vessels are formed rapidly and are "leaky" to contrast agents like the ones used in CEM or breast MRI [10]. These contrast agents can extravasate into the tumor interstitium, causing enhancement on CEM or MRI exams. Multiple studies have evaluated the diagnostic performance of CEM compared with breast MRI, showing that sensitivity is at least equal to breast MRI [11][12][13]. CEM is increasingly considered as a potential screening modality, especially in women with high or intermediate breast cancer risk or dense breasts [14,15]. However, disadvantages of CEM over breast MRI, especially when it would be considered in screening of patients, are not only the use of iodinated contrast agents but also its increased radiation dose (for example, when compared with full-field digital mammography or FFDM) [16] and the lack of CEM-guided biopsy capabilities. To compensate for the increased radiation dose in screening patients at high risk for developing breast cancer using CEM, we propose an alternative strategy: single-view CEM. In this retrospective study, we evaluated the diagnostic performance of single-view CEM (1 V) versus standard double-view CEM (2 V) to find out if this approach has the potential to serve as an alternative strategy. This retrospective study was designed as a non-inferiority study to evaluate whether use of 1 V results in an acceptable sensitivity for detecting breast cancer in our study population, while maintaining similar specificity when compared with 2 V. However, we also consider this to be a feasibility study, as our primary analyses were not conducted on the assumed target population of women at high risk of developing breast cancer, but on our institute's available CEM database. Materials and methods For this study, we retrospectively analyzed all CEM exams performed at our institute between May 2013 and December 2017. Indications for CEM included recalls after a positive screening mammography, suspicious findings during physical examination or detected on imaging performed elsewhere, unknown primary tumors, and inconclusive findings at FFDM or alternative to breast MRI. All images were anonymized with an allocated study code. Due to the study design used, the necessity to acquire informed consent was waived by our ethical committee (decision number METC 15-4-008). Image protocol and gold standard All CEM exams were performed on a single CEM unit (Senographe Essential with Senobright upgrade, GE Healthcare) using a non-ionic, monomeric, low-osmolar contrast agent at a dose of 1.5 ml/kg of body weight (iopromide, Ultravist 300, Bayer Healthcare).
Iodinated contrast was administered intravenously with a flow rate of 3 ml/s, 2 min prior to image acquisition. The breasts were imaged in (at least) mediolateral oblique and craniocaudal views. For all solid lesions and (micro)calcifications, histopathological results served as the gold standard. In cases of negative results or suspected cysts, a minimum follow-up of 12 months was used to exclude any false-negative findings. Image analysis The CEM images were evaluated using a double-reading strategy, which is the applied strategy for our nationwide screening program. In short, two certified screening radiologists (with 9 and 7 years of screening expertise and both having 5 years of CEM experience) provided a BI-RADS classification for the 1 V images first (i.e., mediolateral oblique view for both breasts, as this view covers the largest part of the breast). For this study, we considered a BI-RADS classification of ≥ 4 to be "suspicious for breast cancer," which in a screening setting would require a recall. BI-RADS classifications ≤ 3 were considered "not suspicious for breast cancer" and would not have been recalled in a screening setting. For the primary analysis, a consensus opinion was used and in case of discrepancies between both readers, a third screening-certified reader (6 years of screening expertise and 5 years of CEM experience) was consulted for the final decision. To minimize recall bias, all cases were re-evaluated in a similar fashion after 6 weeks, but this time, the complete CEM exam was available (2 V). The images were evaluated in a different, randomized order in these two sessions. During the evaluations, all radiologists were blinded to the primary CEM indication, their score in the other reading session and the final diagnosis. Statistical analysis The study was designed as a non-inferiority study to evaluate whether the use of single-view CEM exams does not result in an unacceptably lower sensitivity for detecting breast cancer in our study population, while maintaining similar specificity when compared with double-view CEM exams. The prespecified non-inferiority margin was determined at 10%. This margin was chosen because a sensitivity decrease by more than 10% was considered unacceptable. Assuming a sensitivity in the reference group (i.e., 2 V group) for the detection of malignant cases of 90%, 112 pairs with malignant cases are required to be 80% sure that a one-sided 95% confidence interval (CI, equivalent to a two-sided 90% CI) will exclude a difference in favor of the 2 V group of more than 10%. Prior to this study, we estimated the prevalence of malignant cases to be 30% [17], resulting in a required total sample size of 373 cases (source: OpenEpi, www.openepi.com). As explained before, BI-RADS 1-3 were considered benign and BI-RADS 4-5 malignant. Using these cutoff values, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated. In addition, the area under the ROC curve (AUC) was calculated for the different reading sessions. The absolute differences in sensitivity and specificity and the one-sided 95% CI (equivalent to a two-sided 90% CI) of the difference were calculated using Tango's score CI for a difference of paired proportions [18]. The corresponding one-sided p values were derived using McNemar's test for paired proportions. The paired areas under the curve (AUC) of the receiver operating characteristic (ROC) curves for 1 V and 2 V exams were compared using an algorithm developed by DeLong et al [19].
STATA (version 13.1, StataCorp LLC) and R (version 2.15.1, The R Foundation for Statistical Computing) were used for the statistical analyses. One-sided p values of 5% were considered to indicate statistical significance. Results The mean age of the women included in our study population was 59.7 years (range 50-77 years). In the study period, 368 patients instead of the required 373 patients were included, but the prevalence of malignancies turned out to be higher than expected (34.2%). Consequently, the number of malignant cases was 126, which is higher than the planned sample size of 112 cases for the evaluation of the sensitivity difference. Of the 126 malignant diagnoses, 48 consisted of invasive cancer of no special type (NST, or invasive ductal carcinoma; 13%), followed by 42 cases of ductal carcinoma in situ (DCIS, 11.4%) and 30 cases of invasive lobular carcinoma (ILC, 8.1%). The remaining 6 cases were invasive breast cancers not otherwise specified (1.7%). Of the benign diagnoses, most were cysts (n = 90), followed by fibroadenoma (n = 19) and intramammary lymph nodes (n = 10), with the remaining benign diagnosis being not otherwise specified (n = 35). A total of 88 cases were negative. The mean follow-up period for benign or negative cases was 20.9 months, range 12.7-55.3 months. The results for the 1 V and the 2 V evaluation are presented in Table 1. For the 1 V readings, there were 48 discrepancies between the first two readers, while the number of discrepancies for the 2 V readings was 64. The combined assessment of the radiologists based on 1 V CEM images resulted in 105 true-positive cases (TP), 38 false-positive cases (FP), 204 truenegative cases (TN), and 21 false-negative cases (FN). For the 2 Vevaluation, these numbers were the following: 117 TPs, 50 FPs, 192 TNs, and 9 FNs, respectively. An overview of the histopathological results of the FN cases is presented in Table 2. An example of an invasive lobular carcinoma that was overlooked on the 1 V (MLO) exam and detected on 2 V readings is presented in Fig. 1. Our results show that sensitivity decreases when only 1 V is used for the evaluation of the CEM images. This decrease is statistically significant (p < 0.001). The lower limit of the 90% CI of the difference in sensitivity between 1 V and 2 V readings is − 15% and lies below − 10%, but the entire 90% CI (ranging from − 15 to − 5.3%) does not exclude the non-inferiority margin. More specifically, non-inferiority could be observed for the first reader, but could not be concluded for the second or the combined readings. It can also be observed that there is a tradeoff between sensitivity and specificity: the significant decrease of sensitivity is accompanied by a significant increase in specificity. These trends are observed for the evaluations by R1 and R2 as well. To evaluate whether overall diagnostic performance decreases when using the 1 V strategy instead of the standard 2 V evaluation, AUCs were compared. The ROC curves for the consensus results (R1 + R2 + R3 based on 1 V and 2 V readings) are presented in Fig. 2. The AUC for 1 V was 0.861 versus 0.899 for 2 V (one-sided p = 0.0174), indicating a significantly worse overall diagnostic performance for the 1 V readings when compared with 2 V. These results, however, were based on the analyses performed on our institute's CEM database, which is not the assumed target population of women at high risk for developing breast cancer (see also the BStudy limitations^section). 
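The headline accuracy figures can be recomputed directly from the counts reported above. The sketch below does exactly that; note that the paired confidence interval (Tango) and the McNemar test reported in the paper additionally require the case-by-case pairing of 1 V and 2 V reads, which is not available here, so only point estimates are shown. Function and variable names are illustrative.

```python
# Point estimates of sensitivity and specificity from the reported counts.
def sens_spec(tp: int, fp: int, tn: int, fn: int) -> tuple[float, float]:
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Consensus reads (R1 + R2 + R3) as reported in the Results.
sens_1v, spec_1v = sens_spec(tp=105, fp=38, tn=204, fn=21)  # single view
sens_2v, spec_2v = sens_spec(tp=117, fp=50, tn=192, fn=9)   # double view

print(f"1 V: sensitivity {sens_1v:.1%}, specificity {spec_1v:.1%}")  # 83.3%, 84.3%
print(f"2 V: sensitivity {sens_2v:.1%}, specificity {spec_2v:.1%}")  # 92.9%, 79.3%
# About -9.5% on the unrounded values; the abstract quotes 9.6% from the rounded sensitivities.
print(f"sensitivity difference (1 V - 2 V): {sens_1v - sens_2v:+.1%}")
```

The trade-off discussed in the paper is visible directly in these numbers: the single-view reading loses sensitivity but gains specificity.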
Discussion In this study, the sensitivity of 1 V evaluations was significantly lower compared with (standard) 2 V evaluations. The lower CI limit of the difference in sensitivity between 1 V and 2 V lies below the predefined non-inferiority margin of − 10% and, consequently, the results did not allow for the conclusion that 1 V evaluation is non-inferior to 2 V evaluation of CEM exams. We observed a trade-off between sensitivity and specificity: the significant decrease of sensitivity was accompanied by a significant increase in specificity. These trends were also observed for the evaluations by R1 and R2. Using 1 V readings instead of 2 V readings leads to a substantial and significant decrease of sensitivity and overall diagnostic performance. The majority of FN diagnoses were caused by DCIS and ILC (Table 2). Based on our observations, we would not recommend 1 V evaluations as an alternative reading strategy to lower CEM-related radiation exposure. Since previous studies have demonstrated an improved cancer detection rate in women at high risk for developing breast cancer who were annually screened with breast MRI, many guidelines have recommended its use for this indication. In our national guidelines, annual breast MRI screening is recommended for most women with a known genetic predisposition for developing breast cancer, such as BRCA-1, BRCA-2, TP53, PALB2, CDH1, STK11, and PTEN gene mutations. Other women eligible for this kind of screening are those with prior chest (mantle) radiation or with Li Fraumeni or Cowden syndrome. Screening is initiated at the age of 25 or 30 (depending on the gene mutation) and continues to the age of 60 [20]. Consequently, all these women undergo approximately 30-35 screening breast MRI exams in their lives. Dynamic, gadolinium (Gd)-enhanced T1w images are the backbone of a breast MRI protocol. McDonald et al observed an accumulation of Gd within the brain, even in patients with normal renal function and without any intracranial abnormalities [21,22]. Although confirmed by other studies, there is no evidence at present to show any adverse effects [8], but several international bodies have recommended a more cautious use of these agents until long-term effects can be ruled out. CEM might be considered in the future as an alternative screening modality for this group of women, for example for those who have claustrophobia, refuse the repeated administration of Gd-based contrast agents, or who prefer CEM over breast MRI.
[Table 1 (caption, displaced in the extracted text): Comparison of sensitivity and specificity between double-view (2 V) and single-view (1 V) contrast-enhanced mammography exams. Results are presented for the combination of all three readers (R1 + R2 + R3) and for the first (R1) and second (R2) reader independently, with columns for the double (2 V) view and single (1 V) view results as % (n). The table body is not recoverable from the extracted text.]
In CEM, an iodine-based contrast agent is administered intravenously 2 min prior to image acquisition. By using a dual-energy technique, the radiologist can read a low-energy image (which is like a conventional full-field digital mammogram) and a recombined image, in which areas of enhancement can be appreciated (Fig. 3) [23]. Previous studies have shown that CEM is consistently superior to FFDM, with comparable diagnostic performance to breast MRI in terms of both cancer detection and the evaluation of disease extent [11][12][13][24]. Jochelson et al were the first to evaluate the potential of CEM as a screening tool for high-risk patients in a study containing 307 cases [14].
In the first screening round, three cancers (two invasive and one DCIS) were detected. Breast MRI detected all three, whereas CEM detected only the two invasive breast cancers. None of the cancers were visible on the low-energy images. After the next screening round (after 2 years), five additional screen-detected cancers were observed. The PPV turned out to be comparable between CEM and breast MRI: 15% and 14%, respectively. Hence, the authors concluded that CEM might be a suitable alternative for screening these women when they had a contra-indication for breast MRI or who have limited access to it. However, an important limitation of using CEM as a screening tool over MRI is the current lack of (commercially available) CEMguided stereotactic capabilities. This is expected to change soon, as prototypes are currently being evaluated for clinical applications. Nevertheless, they are not available at this point, which further supports the fact that at this point, screening of high-risk women using CEM can only be recommended when breast MRI is contra-indicated. More recently, Sorin et al studied the diagnostic accuracy of CEM compared with FFDM in women with dense breasts and intermediate breast cancer risk (i.e., positive personal or family history). In this study of 611 cases, the sensitivity increased to 90.5% when using CEM (for FFDM sensitivity was 52.4%), but specificity dropped from 90.5% (FFDM) to 76.1% (CEM) [15]. These preliminary studies confirm that CEM has a potential as a screening tool in women with high or intermediate breast cancer risk, or even as supplemental imaging tool in women with dense breasts. However, an important disadvantage of performing CEM as screening tool is its increased radiation dose. Jeukens et al showed on a single commercially available unit that the radiation dose increased with 81% when CEM was used instead of full-field digital mammography (mean radiation dose of mammography being 1.55 mGy per exposure, compared with 2.80 mGy per CEM exposure) [16]. Hence, a complete FFDM (i.e., two breasts, two views) would result in an annual dose of 6.2 mGy. Considering the lifetime-attributable risk numbers and a 30-year screening period (according to our current national guidelines), the lifetime risk of radiationinduced breast cancer incidence is estimated to be 0.23% and its mortality 0.06% [25]. If CEM would be used as a screening tool, the annual dose would become 11.2 mGy, resulting in a lifetime risk of breast cancer incidence and mortality of 0.41% and 0.1%, respectively, during the 30-year screening period. In theory, the use of single-view CEM could provide an interesting alternative, especially if sensitivity would not decrease significantly. The ideal study design to test this hypothesis would be a randomized controlled clinical trial, dividing women in a CEM-and MRI group for screening. However, the breast cancer incidence in this population is very low, requiring a larger number of study participants and a sufficiently long follow-up period to draw any final conclusions. Therefore, we opted to perform a retrospective study designed as a non-inferiority study first to evaluate whether the use of 1 V did not result in an unacceptable worse sensitivity for detecting breast cancer in our study population, while maintaining similar specificity when compared with 2 V. 
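To make the dose comparison above concrete, here is a back-of-the-envelope sketch using the per-exposure doses quoted from Jeukens et al (1.55 mGy for FFDM, 2.80 mGy for CEM) and a standard four-exposure exam (two breasts, two views). The single-view figure and the 30-year totals are simple extrapolations added for illustration; they are not reported in the paper.

```python
# Cumulative breast dose over a 30-year annual screening programme,
# using the per-exposure doses quoted in the text (Jeukens et al).
DOSE_FFDM_PER_EXPOSURE = 1.55  # mGy
DOSE_CEM_PER_EXPOSURE = 2.80   # mGy
YEARS = 30

def annual_dose(dose_per_exposure: float, views_per_breast: int) -> float:
    exposures = 2 * views_per_breast  # two breasts
    return dose_per_exposure * exposures

ffdm_2v = annual_dose(DOSE_FFDM_PER_EXPOSURE, 2)  # 6.2 mGy/year, as in the text
cem_2v = annual_dose(DOSE_CEM_PER_EXPOSURE, 2)    # 11.2 mGy/year, as in the text
cem_1v = annual_dose(DOSE_CEM_PER_EXPOSURE, 1)    # 5.6 mGy/year (hypothetical single view)

for label, dose in [("FFDM 2V", ffdm_2v), ("CEM 2V", cem_2v), ("CEM 1V", cem_1v)]:
    print(f"{label}: {dose:.1f} mGy/year, {dose * YEARS:.0f} mGy over {YEARS} years")
```

Halving the number of views would roughly halve the annual CEM dose, which is what motivated testing the single-view reading strategy in this study.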
With respect to the consensus evaluation, the lower limit of the 90% CI around the difference in sensitivity between 1 V and 2 V evaluation is − 15% and lies below the non-inferiority margin of − 10%. However, the entire 90% CI ranging from − 15 to − 5.3% does not exclude the non-inferiority margin. Therefore, we cannot conclude that 1 V is non-inferior to 2 V with respect to sensitivity, but neither can it be concluded that 1 V evaluations are inferior to 2 V evaluations (at a predefined non-inferiority margin of − 10%). With this chosen non-inferiority margin, formally the results are inconclusive with respect to non-inferiority [26]. However, based on the substantial and statistically significant decrease in sensitivity and overall diagnostic performance, we would not recommend 1 V CEM as an alternative reading strategy. The most important causes for FN findings, and thus a decrease in sensitivity, were ILC and DCIS. Due to some selection bias caused by the study design, the prevalence of both ILC and DCIS in malignant cases was higher than would be expected (23.8% and 33.3%, respectively). Hypothetically, the sensitivity of both strategies would improve if these entities were less frequently observed in this population. Nevertheless, ILC shows no to subtle enhancement on 2 V CEM exams [27], while Houben et al recently showed that the use of 2 V CEM does not significantly increase its diagnostic performance for suspicious breast calcifications [28]. In summary, these diagnoses remain challenging even in 2 V CEM, and it is not plausible that the detection of these lesions will be better on 1 V CEM. Another important disadvantage of CEM is the use of iodinated contrast agents, which can result in hypersensitivity reactions (or even anaphylactic shock) or can cause contrastinduced nephropathy. In a recent study of 839 patients, the incidence of mild or moderate hypersensitivity reactions during a CEM was 0.6%, without any severe reactions resulting in hospital admission or worse [29]. Although contrastinduced nephropathy might occur as a result of the administration of iodinated contrast agents, the incidence was recently estimated to be 2.6-2.7% in high-risk patients (i.e., with a glomerular filtration rate of 30-50 ml/min/1.73 m 2 ) [30]. The expected incidence in our current population is expected to be lower, as it is not a high-risk group [29]. Considering these facts that CEM requires iodinated contrast agents and must be performed in 2 V, we agree with Jochelson et al that at present, CEM might be considered an alternative to breast MRI for screening high-risk patients, not a replacement. We support their proposal to perform larger prospective trials on this topic, but our results show that the CEM exam used in these studies should consist of a standard, two-view CEM exams of both breasts. Study limitations Our study had several limitations. First, the cohort consisted of women undergoing CEM for an abnormality already suspected using a different modality, introducing some selection bias. However, the readers were blinded for the CEM indication and final diagnosis when reading the exams. Second, we used a blinded double-reading strategy for the analyses of this study, as it is similar to our national screening program. Other reading strategies, such as unblinded double reading, might have resulted in different observations. 
Another potential reading strategy, using a single radiologist aided by computer-aided detection (CAD) systems, was not feasible, since there are currently no approved CAD systems available for CEM. Third, the population that we used is not a high-risk population that would be considered for intensified screening (i.e., lifetime risk > 20%). These are more often young(er) women, with more often dense breasts, who can additionally express different breast cancer subtypes [31,32]. CEM might be a suitable alternative screening method, since Lord et al showed a sensitivity of mammography and breast MRI combined of 94%, with a specificity varying between 77 and 96% [33]. Although these findings are in line with our observed diagnostic performance of 2 V CEM exams, it remains unclear how the results of our study population (i.e., a non-screening population not of high risk) would be applicable to the target population of (screening) high-risk patients. The use of CEM for screening high-risk patients needs to be studied further, but then using 2 V CEM exams (not 1 V). Finally, the follow-up period for benign diagnoses should preferably be more than 2 years for all lesions studied, while our current mean follow-up period is 21 months. Nevertheless, in a study using a similar population, Lalji et al showed that the chance of having overlooked a breast cancer when CEM was deemed "negative" is negligible [17].
Fig. 3 (caption) Typical example of a contrast-enhanced mammography exam, showing the low-energy images in the top row and the recombined (contrast-enhanced) images on the bottom row. In this case, an irregular, ill-defined mass is visible in the outer lower quadrant of the right breast (arrow), showing rim enhancement after contrast administration. Biopsy revealed an invasive carcinoma of no special type (NST).
Conclusion Non-inferiority of 1 V evaluations as an alternative reading strategy to standard 2 V evaluations could not be concluded. 1 V evaluations had lower diagnostic performance compared with 2 V evaluations. Funding The authors state that this work has not received any funding. Compliance with ethical standards Guarantor The scientific guarantor of this publication is M.B.I. Lobbes. Conflict of interest The authors of this manuscript declare no relationships with any companies, whose products or services may be related to the subject matter of the article. Statistics and biometry One of the authors has significant statistical expertise. Informed consent Written informed consent was waived by the Institutional Review Board. Ethical approval Institutional Review Board approval was not required because of the retrospective study design that was used in this article. Methodology • Retrospective non-inferiority study performed at one institution Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
2019-05-10T13:54:49.512Z
2019-05-09T00:00:00.000
{ "year": 2019, "sha1": "16486fe069244c7fb26c962a90c22fedd4419212", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00330-019-06215-7.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "16486fe069244c7fb26c962a90c22fedd4419212", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
14631944
pes2o/s2orc
v3-fos-license
On the structure of categorical abstract elementary classes with amalgamation For $K$ an abstract elementary class with amalgamation and no maximal models, we show that categoricity in a high-enough cardinal implies structural properties such as the uniqueness of limit models and the existence of good frames. This improves several classical results of Shelah. $\mathbf{Theorem}$ Let $\mu \ge \text{LS} (K)$. If $K$ is categorical in a $\lambda \ge \beth_{\left(2^{\mu}\right)^+}$, then: 1) Whenever $M_0, M_1, M_2 \in K_\mu$ are such that $M_1$ and $M_2$ are limit over $M_0$, we have $M_1 \cong_{M_0} M_2$. 2) If $\mu>\text{LS} (K)$, the model of size $\lambda$ is $\mu$-saturated. 3) If $\mu \ge \beth_{(2^{\text{LS} (K)})^+}$ and $\lambda \ge \beth_{\left(2^{\mu^+}\right)^+}$, then there exists a type-full good $\mu$-frame with underlying class the saturated models in $K_\mu$. Our main tool is the symmetry property of splitting (previously isolated by the first author). The key lemma deduces symmetry from failure of the order property. Introduction The guiding conjecture for the classification of abstract elementary classes (AECs) is Shelah's categoricity conjecture. Most progress towards this conjecture has been made under the assumption that the categoricity cardinal is a successor, e.g. [She99,GV06a,Bon14] 1 . In this paper, we assume the amalgamation property and no maximal models and deduce new structural results without having to assume that the categoricity cardinal is a successor, or even "high-enough" cofinality. Consider an AEC K with amalgamation and no maximal models which is categorical in a cardinal λ > LS(K). Then K is stable in every cardinal below λ [She99, Claim 1.7.(b)]; so if cf(λ) > LS(K), then the model of size λ is cf(λ)-saturated 2 . In particular, if λ is regular then the model of size λ is saturated. Baldwin [Bal09, Problem D.1.(2)] has asked if this generalizes to any "sufficiently large" cardinal λ, so let us discuss what happens if we have no control over the cofinality of λ. One strategy to show saturation of the model of size λ is to show that K is stable in λ. However an example of Hart and Shelah [HS90] yields (for an arbitrary k < ω) a sentence ψ k ∈ L ω 1 ,ω categorical in ℵ 0 , ℵ 1 , . . . , ℵ k , but not stable in ℵ k , [BK09] or see [Bal09,Corollary 26.5.4]. Therefore it is not in general true that categoricity in λ implies stability in λ. On the other hand, it is true if we assume a locality property for Galois types: tameness. This is due to the second author and combines the stability transfer in [Vasa] and the Shelah-Villaveces theorem [SV99]. Fact 1.1. If K is a LS(K)-tame AEC with amalgamation, has no maximal models, and is categorical in a λ > LS(K), then K is stable in every cardinal. In particular, the model of size λ is saturated. Proof. By the Shelah-Villaveces theorem (see Fact 4.4), K is LS(K)superstable (see Definition 2.3). Let µ ≥ LS(K). By [Vasb, Proposition 10.10], K is µ-superstable, so in particular stable in µ. 1 Recently, the second author has proved a categoricity transfer theorem without assuming that the categoricity cardinal is a successor, but assuming that the class is universal [Vasc] (other partial results not assuming categoricity in a successor cardinal are in [Vasb] and [She09]). 2 In the sense of Galois types, i.e. M is λ-saturated if every Galois type over a model of size less than λ contained in M is realized in M . 
This is called "λ-Galoissaturated" by some authors, but here we always drop the "Galois" (and similarly for other concepts such as stability). In this paper, we do not assume tameness: we show that we can instead take λ sufficiently big (this is (1) of the theorem in the abstract, see Corollary 4.7 for the proof): Theorem 1.2. Let K be an AEC with amalgamation and no maximal models and let µ > LS(K). If K is categorical in a λ ≥ (2 µ ) + , then the model of size λ is µ-saturated. Note that we only obtain µ-saturation, not full saturation (the slogan is that categoricity cardinals above (2 µ ) + behave as if they had cofinality at least µ). However if λ = λ this gives full saturation (this is used to obtain a downward categoricity transfer, see Corollary 4.9). Moreover µ-saturation is enough for many applications, as many of the results of [She99] only assume categoricity in a λ with cf(λ) > µ (for a fixed µ ≥ LS(K)). For example, we show how to obtain weak tameness (i.e. tameness over saturated models) from categoricity in a big-enough cardinal (this is Theorem 4.14). We can then build a local notion of independence: a good µ-frame (this is (3) of the theorem in the abstract, see Corollary 5.4 for a proof): Then there exists a type-full good µ-frame with underlying class the saturated models in K µ . This improves on [Vasa,Theorem 7.4], which assumed categoricity in a successor (and a higher Hanf number bound). This also (partially) answers [She99, Remark 4.9.(1)] which asked whether there is a parallel to forking in categorical AECs with amalgamation. The key to the proof of Theorem 1.2 is a close study of the symmetry property for splitting, identified by the first author in [Vana]. There it was shown (assuming superstability in µ) that symmetry of µ-splitting is equivalent to the continuity of reduced towers of size µ, which itself implies uniqueness of limit models in µ. It was also shown that symmetry of µ-splitting follows from categoricity in µ + . In [VV], we improved this by only requiring categoricity in a λ of cofinality bigger than µ: Fact 1.4 (Corollary 5.2 in [VV]). Let K be an AEC with amalgamation and no maximal models. Let µ ≥ LS(K). Assume that K is categorical in a cardinal λ with cf(λ) > µ. Then K is µ-superstable and has µsymmetry. In particular [VV, Theorem 0.1], it has uniqueness of limit models in µ: for any M 0 , M 1 , M 2 ∈ K µ , if M 1 and M 2 are limit over Here we replace the cofinality assumption on the categoricity cardinal with the assumption that the categoricity cardinal is big enough (this also proves (2) of the theorem in the abstract, see Theorem 4.5): Theorem 1.5. Let K be an AEC with amalgamation and no maximal models. Let µ ≥ LS(K). If K is categorical in a λ ≥ (2 µ ) + , then K is µ-superstable and has µ-symmetry. In particular, K has uniqueness of limit models in µ. Remark 1.6. This gives a proof (assuming amalgamation and a highenough categoricity cardinal) of the (in)famous [SV99,Theorem 3.3.7], where a gap was identified in the first author's Ph.D. thesis. The gap was fixed assuming categoricity in µ + in [Van06,Van13] (see also the exposition in [GVV]). In [BG,Corollary 6.10], this was improved to categoricity in an arbitrary λ > µ provided that µ is big-enough and the class satisfies strong locality assumptions (full tameness and shortness and the extension property for coheir). In [Vasa,Theorem 7.11], only tameness was required but the categoricity had to be in a λ with cf(λ) > µ. 
Still assuming tameness, this is shown for categoricity in any λ ≥ The proof of Theorem 1.5 is technical but conceptually not hard: we show that a failure of µ-symmetry would give the order property, which in turn would imply instability below the categoricity cardinal. The idea of using the order property to prove symmetry of an independence relation is due to Shelah, [She75, Theorem 6.10(ii)] or see [She90,Theorem III.4.13]. In [BGKV], an abstract generalization of Shelah's proof to any independence notion satisfying extension and uniqueness was given. Here we adapt the proof of [BGKV] to splitting. This uses the extension property of splitting for models of different sizes from [VV]. In general, we obtain that an AEC with amalgamation categorical in a high-enough cardinal has many structural properties that were previously only known for AECs categorical in a cardinal of high-enough cofinality, or even just in a successor. This improves several classical results from Shelah's milestone study of categorical AECs with amalgamation [She99]. This paper was written while the second author was working on a Ph.D. thesis under the direction of Rami Grossberg at Carnegie Mellon University and he would like to thank Professor Grossberg for his guidance and assistance in his research in general and in this work specifically. Background Throughout this paper, we assume: For convenience, we fix a big-enough monster model C and work inside C. This is possible since by Remark 2.4, we will have the joint embedding property in addition to the amalgamation property for models of the relevant cardinalities. Many of the pre-requisite definitions and notations used in this paper can be found in [GVV]. Here we recall the more specialized concepts that we use explicitly. We begin by recalling the definition of nonsplitting, a notion of independence from [She99, Definition 3.2]. Definition 2.2. A type p ∈ ga-S(N) does not µ-split over M if and only if for any N 1 , N 2 ∈ K µ such that M ≤ N ℓ ≤ N for ℓ = 1, 2, and any f : The definition of superstability below is already implicit in [SV99] and has since then been studied in several papers, e.g. [Van06, GVV, Vasb, BV, GV, VV]. We will use the formulation from [Vasb, Definition 10.1]: (1) µ ≥ LS(K). (2) K µ is nonempty, has joint embedding, and no maximal models. (3) K is stable in µ 3 , and: (4) µ-splitting in K satisfies the following locality (sometimes called continuity) and "no long splitting chains" properties: For any limit ordinal α < µ + , for every sequence M i | i < α of models of cardinality µ with M i+1 universal over M i and for every p ∈ ga-S( i<α M i ), we have that: Remark 2.4. By our global hypothesis of amalgamation (Hypothesis 2.1), if K is µ-superstable, then K ≥µ has joint embedding. The main tool of this paper is the concept of symmetry over limit models which was identified in [Vana]: Definition 2.6. An abstract elementary class exhibits symmetry for µsplitting (or µ-symmetry for short) if whenever models M, M 0 , N ∈ K µ and elements a and b satisfy the conditions 1-4 below, then there exists Figure 1. (1) M is universal over M 0 and M 0 is a limit model over N. (3) ga-tp(a/M 0 ) is non-algebraic and does not µ-split over N. We recall a few results of the first author showing the importance of the symmetry property: Fact 2.7 (Theorem 5 in [Vana]). 
If K is µ-superstable and has µsymmetry, then for any M 0 , M 1 , M 2 ∈ K µ , if M 1 and M 2 are limit models over M 0 , then For λ > LS(K), we will write K λ-sat for the class of λ-saturated models in K ≥λ (we order it with the strong substructure relation induced by K). By [Vanb] superstability and symmetry together imply that the union of certain chains of saturated models is saturated. This has an easier formulation in [VV, Theorem 6.6]: Fact 2.8. Assume K is µ-superstable, µ + -superstable, and has µ +symmetry. Then K µ + -sat is an AEC with LS(K µ + -sat ) = µ + . It will be convenient to use the following independence notion. A minor variation (where "limit over" is replaced by "universal over") appears in [Vasa, Definition 3.8]. Remark 2.11. Assuming µ-superstability, the relation "p does not µfork over M" is very close to defining an independence notion with the properties of forking in a first-order superstable theory (i.e. a good µframe, see Section 5). In fact using tameness (or just, as we will show in Section 5, weak tameness) it can be used to do precisely that, see [Vasa]. µ-Forking has the following properties: ( ga-S(M) be such that p explicitly does not µ-fork over (M 0 , M). If K is superstable in every χ ∈ [µ, λ], then there exists q ∈ ga-S(N) extending p and explicitly not µ-forking over (M 0 , M). Moreover q is algebraic if and only if p is. Proof of (3). By induction on N . Let a realize p. If N = M this is given by [VV,Proposition 4.4]. If M < N , build N i ∈ K M +|i| : i ≤ N increasing continuous such that N 0 = M, N i+1 is limit over N i , and ga-tp(a/N i ) explicitly does not µ-fork over (M 0 , M). This is possible by the induction hypothesis and the continuity property ). It is easy to check that q is as desired. Symmetry from no order property In this section we show (assuming enough instances of superstability) that the negation of symmetry implies the order property, and hence contradicts stability. This is similar to [BGKV, Theorem 5.14], but due to the intricate definition of the symmetry property for splitting, some technical details have to be handled. We first give an equivalent definition of symmetry. Recall that in [VV,Definition 3.3], we gave three variations on the symmetry property: (1) The uniform µ-symmetry, which is essentially Definition 2.6 (and in fact is formally equivalent to it). (2) The non-uniform µ-symmetry, which weakens the conclusion of uniform µ-symmetry by "changing" the model N that ga-tp(a/M b ) does not µ-split over. (3) The weak non-uniform µ-symmetry which strengthens the hypotheses of non-uniform µ-symmetry by requiring that ga-tp(b/M) does not µ-fork, instead of µ-split over M 0 . There is a fourth possible variation, the weak uniform µ-symmetry property, which strengthens the hypotheses of uniform µ-symmetry similarly to the weak non-uniform µ-symmetry, but leaves the conclusion unchanged. For clarity we have underlined the differences between the weak and non-weak definitions. We start by showing that assuming µ-superstability this distinction is inessential, i.e. the two properties are equivalent. We will use the following characterization of symmetry: Fact 3.2 (Theorem 5 in [Vana]). Assume that K is µ-superstable. The following are equivalent: (1) K has µ-symmetry. (2) Reduced towers of size µ are continuous 4 . Lemma 3.3. Assume that K is µ-superstable. The weak uniform µsymmetry is equivalent to uniform µ-symmetry which is equivalent to µ-symmetry. Proof. 
That uniform µ-symmetry is equivalent to µ-symmetry is easy (it appears as [VV, Proposition 3.5]). Clearly, uniform implies weak uniform. Now assuming weak uniform symmetry, the proof of (1) ⇒ (2) of Fact 3.2 still goes through. The point is that whenever we consider ga-tp(b/M) in the proof, M = i<δ M i for some increasing continuous M i : i < δ with M i+1 universal over M i for all i < δ, and we simply use that by superstability ga-tp(b/M) does not µ-split over M i for some i < δ. However we also have that ga-tp(b/M) explicitly does not µ-fork over (M i , M i+1 ). Therefore reduced towers are continuous, and hence by Fact 3.2 K has µ-symmetry. We say that K has the α-order property of length λ if some M ∈ K has it. We say that K has the α-order property if it has the α-order property of length λ for all cardinals λ. We will use two important facts: the first says that it is enough to look at length up to the Hanf number. The second that the order property implies instability. Fact 3.6 (Claim 4.5.3 in [She99]). Let α be a cardinal. If K has the α-order property of length λ for all λ < h(α + LS(K)), then K has the α-order property. Fact 3.7. If K has the α-order property and µ ≥ LS(K) is such that µ = µ α , then K is not stable in µ. The following lemma appears in some more abstract form in [BGKV,Lemma 5.6]. The lemma says that if we assume that p does not µ-fork over M, then in the definition of non-splitting (Definition 2.2) we can replace the N ℓ by arbitrary sequences in N of length at most µ. In the proof of Lemma 3.9, this will be used for sequences of length one. Lemma 3.8. Let µ ≥ LS(K). Let M ∈ K µ and N ∈ K ≥µ be such that M ≤ N. Assume that K is stable in µ. If p ∈ ga-S(N) does not µ-fork over M (Definition 2.10), a realizes p, andb 1 The next lemma shows that failure of symmetry implies the order property. The proof is similar to that of [BGKV,Theorem 5.14], the difference is that we use Lemma 3.8 and the equivalence between symmetry and weak uniform symmetry (Lemma 3.3). Proof. By Lemma 3.3, K does not have weak uniform µ-symmetry. We first pick witnesses to that fact. Pick limit models N, M 0 , M ∈ K µ such that M is limit over M 0 and M 0 is limit over N. Pick b such that ga-tp(b/M) does not µ-fork over M 0 , a ∈ |M|, and ga-tp(a/M 0 ) explicitly does not µ-fork over (N, M 0 ), and there does not exist M b ∈ K µ containing b and limit over M 0 so that ga-tp(a/M b ) explicitly does not µ-fork over (N, M 0 ). We will show that C has the µ-order property of length λ. (5) N ′ α is limit over N α and N α+1 is limit over N ′ α . (6) ga-tp(a α /N α ) explicitly does not µ-fork over (N, M 0 ) and ga-tp(b α /N ′ α ) does not µ-fork over M 0 . This is possible. Let N 0 be any model in K µ containing M and a and limit over M. At α limits, let N α := β<α N β . Now assume inductively that N β has been defined for β ≤ α, and a β , b β , N ′ β have been defined for β < α. By extension for splitting, find q ∈ ga-S(N α ) that explicitly does not µ-fork over (N, M 0 ) and extends ga-tp(a/M 0 ). Let a α realize q and pick N ′ α limit over N α containing a α . Now by extension again, find q ′ ∈ ga-S(N ′ α ) that does not µ-fork over M 0 and extends ga-tp(b/M). Let b α realize q ′ and pick N α+1 limit over N ′ α containing b α . This is enough. We show that for α, β < λ: For (1), observe that b ∈ |N 0 | ⊆ |N α | and ga-tp(a α /N α ) explicitly does not µ-fork over (N, M 0 ). 
Therefore by monotonicity N α witnesses that there exists N b ∈ K µ containing b and limit over M 0 so that ga-tp(a α /M b ) explicitly does not µ-fork over (N, M 0 ). By failure of symmetry and invariance, we must have that ga-tp(a α b/M 0 ) = ga-tp(ab/M 0 ). Proof. If K is unstable in 2 µ , then we can set λ := (2 µ ) + and get a vacuously true statement; so assume that K is stable in 2 µ . By Fact 3.7, K does not have the µ-order property. By Fact 3.6, there exists λ < h(µ) such that K does not have the µ-order property of length λ. By Lemma 3.9, it is as desired. Remark 3.11. How can one obtain many instances of superstability as in the hypothesis of Theorem 3.10? One way is categoricity, see the next section. Another way is to start with one instance of superstability and transfer it up using tameness. Indeed by [Vasb, Proposition 10.10], if K is µ-superstable and µ-tame, then it is superstable in every µ ′ ≥ µ. Thus Theorem 3.10 generalizes [VV,Theorem 6.4] which obtained µsymmetry from µ-superstability and µ-tameness. Throughout this section, we assume (in addition to amalgamation) that K has no maximal models. This is not a big deal because we can always take a tail of the AEC to obtain it: Fact 4.2 (Proposition 10.13 in [Vasb]). If K is an AEC with amalgamation categorical in a λ ≥ H 1 , then there exists χ < H 1 such that K ≥χ has no maximal models. Hypothesis 4.3. K is an AEC with amalgamation and no maximal models. The following powerful fact has its roots in [SV99, Theorem 2.2.1], where it was proven assuming the generalized continuum hypothesis instead of amalgamation. This is the main tool to obtain superstability from categoricity. Its proof relies on Ehrenfeucht-Mostowski models, which is why we assumed no maximal models. Combining this fact with Theorem 3.10, we obtain symmetry from categoricity in a high-enough cardinal: Theorem 4.5. Let µ ≥ LS(K). Assume that K is categorical in a λ ≥ h(µ). Then K is µ-superstable and has µ-symmetry. Proof. By Corollary 4.7, the model of size λ is µ-saturated. By Fact 4.8, every model in K ≥µ is µ-saturated. In particular, every model of size µ is saturated. By uniqueness of saturated models, K is categorical in µ. Remark 4.10. For Corollary 4.9, Fact 4.2 shows that the no maximal models hypothesis is not necessary. We can improve on Corollary 4.9 using the more powerful downward transfer of [She99]. A key concept in the proof is the following variation on tameness (an important locality for Galois types isolated by Grossberg and the first author in [GV06b]). We use the notation in [Bal09,Definition 11.6]. Definition 4.12. Let χ, µ be cardinals with LS(K) ≤ χ ≤ µ. K is (χ, µ)-weakly tame if for any saturated M ∈ K µ , any p, q ∈ ga-S(M), The importance of weak tameness is that it is known to follow from categoricity in a suitable cardinal: this appears as [She99, Main Claim II.2.3] and a simplified improved argument is in [Bal09,Theorem 11.15]. Proof. By Corollary 4.7, the model of size λ is µ + -saturated. Now apply Fact 4.13. Proof. By Theorem 4.14, there exists χ < H 1 such that K is (χ, H 2 )weakly tame. By Corollary 4.7, the model of size λ is χ-saturated. Now apply Fact 4.15. Remark 4.17. We can replace H 2 above by any collection cardinal, see [Bal09,Definition 14.5] and the proof of Theorem 14.9 there. Good frames and weak tameness In [She09, Definition II.2.1] 6 , Shelah introduces good frames, a local notion of independence for AECs. 
This is the central concept of his book and has seen many other applications, such as a proof of Shelah's categoricity conjecture for universal classes [Vasc]. A good µ-frame is a triple s = (K µ , ⌣ , ga-S bs ) where: (1) K is a nonempty AEC which has µ-amalgamation, µ-joint embedding, no maximal models, and is stable in µ. (3) ⌣ is an (abstract) independence relation on types of length one over models in K λ satisfying the basic properties of first-order forking in a superstable theory: invariance, monotonicity, extension, uniqueness, transitivity, local character, and symmetry (we will not give their exact meaning here). As in [She09, Definition II.6.35], we say that a good µ-frame s is type-full if for each M ∈ K µ , ga-S bs (M) consists of all the nonalgebraic types over M. We focus on type-full good frames in this paper. Given a type-full good µ-frame s = (K µ , ⌣ , ga-S bs ) and M 0 ≤ M both in K µ , we say that a nonalgebraic type p ∈ ga-S(M) does not s-fork over M 0 if it does not fork over M 0 according to the abstract independence relation ⌣ of s. We say that a good µ-frame s is on K µ if its underlying class is K µ . It was pointed out in [Vasa] (and further improvements in [Vasb, Section 10] or [VV, Theorem 6.12]) that tameness can be combined with superstability to build a good frame. At a meeting in the winter of 2015 in San Antonio, the first author asked whether weak tameness could be used instead. This is not a generalization for the sake of generalization because weak tameness (but not tameness) is known to follow from categoricity. As it turns out, the methods of [VV] can be used to answer in the affirmative: Theorem 5.1. Let λ > µ ≥ LS(K). Assume that K is superstable in every χ ∈ [µ, λ] and has λ-symmetry. If K is (µ, λ)-weakly tame, then there exists a type-full good λ-frame with underlying class the saturated models in K λ . Proof. First observe that limit models in K λ are unique (by Fact 2.7), hence saturated. By Fact 2.9, K has χ-symmetry for every χ ∈ [µ, λ]. By Fact 2.8, for every χ ∈ [µ, λ), K χ + -sat , the class of χ + -saturated models in K ≥χ + is an AEC with LS(K χ + -sat ) = χ + . Therefore (see [VV,Lemma 6.7]) K λ-sat is an AEC with LS(K λ-sat ) = λ. By the λsuperstability assumption, K λ-sat λ is nonempty, has amalgamation, no maximal models, and joint embedding. It is also stable in λ. We want to define a type-full good λ-frame s on K λ-sat λ . We define forking in the sense of s (s-forking) as follows: For M ≤ N saturated of size λ, a nonalgebraic p ∈ ga-S(N) does not s-fork over M if and only if it does not µ-fork over M. Now most of the axioms of good frames are verified in Section 4 of [Vasa], the only properties that remain to be checked are extension, uniqueness, and symmetry. Extension is by Fact 2.12.(3), and uniqueness is by uniqueness in µ (Fact 2.12.(1)) and the weak tameness assumption. As for symmetry, we know that λ-symmetry holds, hence we obtain the result by Section 3 of [VV]. Of course we can now combine this construction with our previous results: Corollary 5.3. Let λ > µ ≥ LS(K). Assume that K is superstable in every χ ∈ [µ, h(λ)). If K is (µ, λ)-weakly tame, then there exists a type-full good λ-frame with underlying class the saturated models in K λ . We obtain that a good frame can be built from categoricity in a high-enough cardinal (of arbitrary cofinality). Corollary 5.4. Let µ ≥ H 1 . Assume that K has no maximal models and is categorical in a λ > µ. If the model of size λ is µ + -saturated (e.g. 
if cf(λ) > µ or by Corollary 4.7 if λ ≥ h(µ + )), then there exists a type-full good µ-frame with underlying class the saturated models in K µ .
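For readers keeping track of the cardinal bounds h(µ), H₁ and H₂ used in the statements above, the conventions standard in this literature (assumed here rather than quoted from the paper) are the following:

```latex
% Standard conventions for the Hanf-number bounds used above (assumed, not quoted):
\[
  h(\mu) := \beth_{(2^{\mu})^{+}}, \qquad
  H_{1} := h(\mathrm{LS}(K)), \qquad
  H_{2} := h(H_{1}).
\]
% With these conventions, Theorem 4.5 reads: categoricity in some
% \lambda \ge \beth_{(2^{\mu})^{+}} yields \mu-superstability and \mu-symmetry.
```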
Assessing exposure, uptake and toxicity of silver and cerium dioxide nanoparticles from contaminated environments The aim of this project was to compare cerium oxide and silver particles of different sizes for their potential for uptake by aquatic species, human exposure via ingestion of contaminated food sources and to assess their resultant toxicity. The results demonstrate the potential for uptake of nano and larger particles by fish via the gastrointestinal tract, and by human intestinal epithelial cells, therefore suggesting that ingestion is a viable route of uptake into different organism types. A consistency was also shown in the sensitivity of aquatic, fish cell and human cell models to Ag and CeO2 particles of different sizes; with the observed sensitivity sequence from highest to lowest as: nano-Ag > micro Ag > nano CeO2 = micro CeO2. Such consistency suggests that further studies might allow extrapolation of results between different models and species. Background Nanotechnology includes the production of nanoparticles (NPs), defined as particles with three dimensions of less than 100 nm [1]. Due to their small size, NPs exhibit greater specific surface areas and surface energies, quantum related effects and generally increased surface reactivity than those of the corresponding conventional (larger) forms, leading to vastly different properties. For these reasons NPs are being increasingly employed in a variety of consumer products, including paints, cosmetics, medicines, food and suntan lotions. A number of applications also release NPs into the environment via intentional routes. For example, zerovalent iron NPs are already in use for the remediation of polluted environments [2]. Zerovalent iron NPs, however, have been shown to remove oxygen from and alter pH ground-waters, important deleterious effects resulting in unanticipated environmental impacts. It is vital that as the nanotechnology industry expands rapidly, it does so in a sustainable and ethical manner, addressing the potential impacts on human and environmental health, alongside the development of new materials and applications. This study focuses on Ag and CeO 2 nano and micro particles. Silver NPs were developed in order to improve human health due to their anti-microbial activity for use in wound dressings and medical equipment, but they are also now being used in clothing, food processing work surfaces and even health remedies accessible via the internet. However, it is known that Ag is highly toxic to fish and other aquatic organisms [3]. CeO 2 has been developed as a fuel additive to improve the efficiency of combustion. A number of toxicology studies suggest that CeO 2 NP induce relatively low levels of toxicity in vitro [4][5][6][7]. Both silver and CeO 2 NP are likely to be released into waste waters and the atmosphere and thus be distributed widely in the aquatic environment. The aim of this project was therefore to conduct pilot studies using CeO 2 and silver particles of different sizes, focusing on the potential for NP to be taken up by aquatic species, human exposure via ingestion of contaminated food sources and the resultant toxic impact to the exposed organisms and cells. Particles and characterisation Ag particles of nominal sizes 35 nm (nano Ag) and 0.6-1.6 μm (bulk Ag) diameter were purchased from Nanostructured and Amorphous Materials (USA) and dispersed without use of surfactants, capping agents or other dispersants. 
CeO 2 of nominal sizes <25 nm (nano-CeO 2 ) and <5 μm (bulk CeO 2 ) were purchased from Sigma. These sizes were provided by the suppliers, but were investigated further by TEM, STEM ( Figure 1), SEM, AFM and DLS. Other characterisation techniques included specific surface area (BET), charge (zeta potential), composition and surface chemistry (XPS, ICP-MS and UV visible spectroscopy), crystal structure (TEM and XRD) and dissolution (UF-ICP-MS). Characterisation was conducted of the pristine particles, as well as of the particles dispersed in all of the media used in the experiments described below in order to allow characteristics to be related to any observed uptake and toxicity. Aquatic species Daphnia magna neonates were exposed for 96 h to EPA water containing 0-10 μg/ml of particles. Particles were prepared by sonicating for 30 minutes. Endpoints assessed included lethality and shedding of the carapace. Controls were treated with EPA water without the addition of particles. Carp (Cyprius carpio) were kept in oxygenated, dechlorinated tap water at 10°C. For each treatment group, 8 fish were maintained in 60 l of water, 50% of which was replaced with the appropriate doses of nanoparticles every 48 h. Nano and bulk Ag particles were added to the tank water after sonication in double distilled H2O (15 minutes) at 0.01 and 0.1 μg/ml. Control fish were exposed to dechlorinated tap water without the addition of particles. After 21 d, C. carpio were sacrificed, and various organs were removed and processed for ICP-OES analysis to determine tissue levels of the exposed NP. In vitro human and trout cell models The C3A human hepatocyte cell line was cultured in M2279 medium supplemented with 10% foetal calf serum (FCS), 2 mM L-glutamine, 100 Units/ml penicillin, 0.1 mg/ml streptomycin, 1 mM sodium pyruvate and 1% non-essential amino acids at 37°C and 5% CO 2 . The Caco-2 human intestinal epithelial cell line was maintained in MEM medium supplemented with 10% FCS, 2 mM L-glutamine and 0.5 mg/ml gentamycin at 37°C and 5% CO 2 . For uptake and transport assays, cells were plated into 12-well-Transwell inserts at 500,000 cells per well and cultured for 2 weeks until a differentiated monolayer was formed. Composite Z-stack image of C3A human hepatocyte cell line treated with silver nanoparticles Figure 1 Composite Z-stack image of C3A human hepatocyte cell line treated with silver nanoparticles. Cells were treated for 2 h with Ag NP at a concentration of 31.25 μg/ cm 2 . Red represents the F-actin cytoskeleton, blue the nuclei and green the particles. The faint grey line drawn from the particle in the center of the main frame indicates its position in the two sections on the sides and confirms its location within the cell. For all in vitro experiments, the particles were dispersed by sonication at 1 mg/ml in culture medium with additives (described above) for 15 minutes, and then diluted to the concentrations to be used in the studies. To assess cytotoxicity, hepatocyte cells were plated at 100,000 cells per well in a 96-well-plate, incubated overnight and treated with 0-1000 μg/ml (0-625 μg/cm 2 ) for 24 h. The supernatants were then analysed to assess lactate dehydrogenase (LDH) release from cells as described in [8]. Negative controls were treated with media only, and the detergent Triton X-100 was used as a positive control (100% cell death). 
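The LDH readout described above is usually expressed as percent cytotoxicity by normalising each treated well between the medium-only (negative) and Triton X-100 (100 % lysis, positive) controls. A minimal sketch of that normalisation follows; the absorbance values and particle labels are invented for illustration and are not data from this study.

```python
import numpy as np

def percent_cytotoxicity(sample_od, negative_od, positive_od):
    """Normalise LDH absorbance between medium-only and Triton X-100 controls.

    0 %   -> same release as untreated cells (negative control)
    100 % -> complete lysis (Triton X-100 positive control)
    """
    return 100.0 * (sample_od - negative_od) / (positive_od - negative_od)

# Hypothetical absorbance values (not measurements from the study).
negative = np.mean([0.21, 0.23, 0.22])   # medium-only wells
positive = np.mean([1.95, 2.01, 1.98])   # Triton X-100 wells

treatments = {
    "nano-Ag 125 ug/ml": [1.10, 1.05, 1.12],
    "bulk-Ag 125 ug/ml": [0.55, 0.60, 0.58],
    "nano-CeO2 125 ug/ml": [0.24, 0.25, 0.23],
}

for name, wells in treatments.items():
    tox = percent_cytotoxicity(np.mean(wells), negative, positive)
    print(f"{name}: {tox:5.1f} % cytotoxicity")
```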
To assess particle uptake C3A cells plated on glass coverslips and Caco-2 cells plated on Transwell membranes were incubated with media only (negative control) or with particle suspensions at 3.125 and 31.25 μg/cm 2 for 2 h (C3A) or 24 h (Caco-2). Cells were stained for actin using Phalloidin-FITC and for DNA using DAPI. Characterisation The sizes measured varied according the technique used as expected, but confirm the significant size difference between the bulk and nanoparticle forms. Solubility was below 1% for all samples. Daphnia magna In a 96 h acute D. magna exposure study, nano Ag caused more mortality than bulk Ag, while CeO 2 of both sizes did not induce any significant mortality (Table 1). Cyprius carpio Ag was detected in liver, intestine, gills and gall bladder after treatment with both sizes of particles. There was a trend towards higher uptake of the nano Ag than the micro sized particles. However, this was not statistically significant. In vitro cytotoxicity Treatment with CeO 2 , at concentrations of up to 1000 μg/ cm 2 , did not cause LDH release from either the C3A cell line or primary trout hepatocytes. Nano-Ag was more toxic than the bulk Ag, and primary trout hepatocytes were less susceptible to toxic effects compared with the human C3A cell line (Table 1). Uptake into C3A and Caco-2 cells Both particle types at all sizes were taken up into both C3A hepatocytes (2 h) and Caco-2 intestinal epithelial cells (24 h exposure; Figure 1). Discussion A number of conclusions can be drawn from this study. Firstly, the results clearly show that silver particles are more toxic than CeO 2 particles in a variety of model species and cell types. For example, compared to CeO 2 , Ag particles caused a higher mortality in the aquatic invertebrate D. magna and are more cytotoxic to both trout primary hepatocytes and human hepatocyte cell lines in vitro. In addition to this, the silver NP were more toxic than the larger silver particles in the same aquatic invertebrate and in vitro cell models. The carp studies demonstrated that the fish ingested Ag and accumulated it within the liver. These data suggested that Ag accumulation might be greater following exposure to the nano-Ag than the bulk Ag. Observations of the fish in the exposure tanks indicated that much of the uptake of the NPs into the fish may have occurred as a consequence of the fish eating agglomerated NP material, rather than uptake via the water through the gills. The uptake data suggest that either the nano-Ag could be more efficient at crossing the intestinal barrier than the bulk Ag, or that the dissolution of the nano-Ag, either in the surrounding water or the contents of the gastrointestinal tract, is greater than the bulk Ag, allowing greater uptake of free ions. Coupled with the observation that the nano-Ag is more toxic to the trout hepatocytes than the bulk Ag or the CeO 2 particles, this suggests that the Ag NP pose a greater risk than the other particle types tested. The Caco-2 cell model also demonstrated the potential for human intestinal epithelial cells to take up particles from the apical surface and to transport them into the cell. Both particle types and sizes were taken up, demonstrating that particles within the diet have the potential to enter the body following ingestion. Further studies of basolateral http://www.ehjournal.net/content/8/S1/S2
Evaluation of myocardial involvement in patients with connective tissue disorders: a multi-parametric cardiovascular magnetic resonance study Background Severe arrhythmias or heart failure may be surrogates of myocardial involvement in patients with connective tissue disorders (CTD). However, most patients present with unspecific symptoms, normal ECG, and preserved left ventricular ejection fraction (LV-EF). Therefore, timely diagnosis by an accurate technique is crucial. Late gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) has proven value for the detection of focal processes, but due to the often diffuse character of fibrosis/inflammation in CTD patients, CMR mapping techniques might be of incremental value for the assessment of myocardial involvement. Purpose of this study was to evaluate a multi-parametric CMR protocol as a screening tool for myocardial involvement in CTD patients. Methods Forty CTD patients were prospectively enrolled and underwent CMR, twenty healthy volunteers served as control group. Results Mean LV-EF was 62 %; LGE prevalence was low (18 %). CTD patients had higher native T1 (1008 vs. 962 ms, p = 0.001), lower post contrast T1 (494 vs. 526 ms, p = 0.008), expanded extracellular volume (ECV) (28 vs. 25 %, p = 0.001), and higher T2 values (53 vs. 49 ms, p < 0.001) compared to controls. Among patients with values higher than the 95 % percentile of healthy controls, native T1 and T2 values seem to be the most promising discriminators. Conclusion CTD patients showed higher T1, ECV, and T2 values compared to controls, with most significant differences for native T1 and T2, which seem to be independent of the presence of LGE. Our data suggest that CMR mapping techniques are of incremental value in the detection of myocardial involvement in CTD patients. Electronic supplementary material The online version of this article (doi:10.1186/s12968-016-0288-4) contains supplementary material, which is available to authorized users. Background Connective tissue disorders (CTD) are a heterogeneous form of rheumatic disorders comprising systemic lupus erythematosus (SLE), systemic sclerosis (SSc), Sjögren's syndrome, inflammatory muscle diseases and overlap syndrome [1]. There is a high variety in the prevalence of CTD, which may occur at all ages, but show a higher prevalence in young adults [1]. SLE is one of the most common autoimmune disorders in the western world with a prevalence ranging from 15 to 50 per 100,000 persons [1]. Cardiovascular complications may manifest as inflammation of valves, myocardium, pericardium resulting in myocardial dysfunction, and heart failure [2]. The prevalence of SSc is estimated about 26 per 100,000 persons [1]. SSc is characterized by structural and functional abnormalities of small blood vessels, fibrosis of the skin and internal organs, activation of the immune system and autoimmunity [1]. Myocardial involvement often remains subclinical, however autopsy studies reveal diffuse myocardial fibrosis in up to 80 % of cases [3][4][5][6], and sudden cardiac death occurs in up to 21 % of SSc patients [5]. Therefore, timely detection of myocardial involvement in stages, which might be potentially reversible by an adequate treatment regimen, is of high clinical interest in patients with CTD. Cardiovascular magnetic resonance (CMR) offers beside functional assessment excellent tissue characterization without the need of radiation. 
Recent data suggest that a CMR approach, including late gadolinium enhancement (LGE) for the detection of focal fibrosis, and T1 mapping sequences for the detection of diffuse fibrosis, might be useful in the detection of myocardial involvement in patients with SLE and SSc [2,3]. However, for the assessment of inflammation, these groups used standard T2weighted images, which are known for severe limitations (e.g. proneness for artifacts) [7]. In the meantime, new T2 mapping sequences were developed, overcoming most of the standard T2-weighting limitations [8]. Consequently, aim of our study was to evaluate a comprehensive CMR protocol, including LGE and quantitative T1 and T2 mapping techniques for the assessment of both fibrosis and inflammation, as a screening tool for potential myocardial involvement in patients presenting with CTD. Patient population Forty patients presenting at our institution between October 2013 and March 2016 were consecutively enrolled if they fulfilled the following criteria: 1) connective tissue disorder; and 2) no history of CAD, myocardial infarction and/or prior revascularization; and 3) successfully underwent CMR imaging. Exclusion criteria were contraindications for CMR (e.g. pregnancy, pacemaker/ ICD, glomerular filtration rate <30 ml/min., previous adverse reactions to gadolinium, cochlea implant). Healthy volunteers (n = 20) with no history of cardiac disease and free of symptoms served as control group. Prior to CMR, all participants provided a blood sample for measurement of hematocrit. The ethics committee of the University of Tuebingen approved the study and all patients gave written informed consent. CMR protocol ECG-gated CMR was performed in breath-hold using a 1.5 T Magnetom Aera (Siemens Healthcare, Erlangen, Germany) in line with current recommendations [9]. Both cine and LGE short axis images were prescribed every 10 mm (slice thickness 6 mm) from base to apex. In-plane resolution was typically 1.2 × 1.8 mm. Cine was performed using a steady-state free-precession (SSFP) sequence. LGE images were acquired on average 5-10 min after contrast using a segmented inversion recovery gradient echo (IR-GRE)-sequence constantly adjusting inversion time to null normal myocardium [10,11]. The contrast dose (Gadopentetate-Dimeglumine) was 0.15 mmol/kg. A modified look-locker inversion recovery prototype sequence (MOLLI) was used for T1 mapping and performed in a single midventricular short-axis (SAX) slice at mid-diastole, prior to and 20 min after administration of contrast, in line with current recommendations [12,13]. Short axis T2 mapping was performed in a matching midventricular SAX before administration of contrast agent using an ECG-triggered T2-prepared single-shot bSSFP prototype sequence with multiple T2 preparation times [8]. More detailed information on T1 and T2 mapping sequences is provided in the Additional file 1. CMR analysis Cine and LGE images were evaluated by experienced observers (S.G., H.M.) as described elsewhere [14]. In brief, endocardial and epicardial borders were outlined on the short-axis cine images. Volumes, mass and ejectionfraction were derived by summation of epicardial and endocardial contours. Extent of LGE was assessed using QMass software (Medis, Leiden, The Netherlands), and the results were expressed as percentage of myocardial mass. The distribution of LGE was characterized as epicardial, intramural, transmural, or subendocardial [14]. 
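The quantitative map analysis described in the next paragraphs reduces, per pixel or per segment, to two numerical steps: fitting a relaxation model to the signal measured at the different preparation times, and combining pre-/post-contrast T1 with the hematocrit into ECV. The sketch below uses the widely used forms of both calculations (the study cites its own references [15, 16] for the exact equations and uses inline map generation with QMap) together with invented numbers, not patient data.

```python
import numpy as np
from scipy.optimize import curve_fit

def t2_model(te_prep, s0, t2):
    """Two-parameter mono-exponential T2 decay (no offset term)."""
    return s0 * np.exp(-te_prep / t2)

def ecv(t1_myo_native, t1_myo_post, t1_blood_native, t1_blood_post, hct):
    """Extracellular volume fraction from pre/post-contrast T1 (ms) and hematocrit:
    ECV = (1 - Hct) * (delta R1 of myocardium) / (delta R1 of blood), with R1 = 1/T1.
    This is the commonly used formulation; the paper cites [16] for its exact equation."""
    d_r1_myo = 1.0 / t1_myo_post - 1.0 / t1_myo_native
    d_r1_blood = 1.0 / t1_blood_post - 1.0 / t1_blood_native
    return (1.0 - hct) * d_r1_myo / d_r1_blood

# Hypothetical T2-preparation times (ms) and signals for a single myocardial pixel.
te_prep = np.array([0.0, 25.0, 55.0])
signal = np.array([310.0, 190.0, 105.0])
(_, t2_fit), _ = curve_fit(t2_model, te_prep, signal, p0=(signal[0], 50.0))
print(f"fitted T2   : {t2_fit:.1f} ms")

# Hypothetical T1 values (ms) and hematocrit for the ECV calculation.
print(f"computed ECV: {100 * ecv(1008, 494, 1580, 350, 0.41):.1f} %")
```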
Color-coded T1, ECV, and T2 maps were generated based on inline-generated, motion corrected raw images using QMap software 1.0 (Medis, Leiden, the Netherlands) in a single matching midventricular SAX. Motion-corrected T1 maps were examined for quality in three modalities: 1) raw T1 images 2) T1 maps 3) R 2 maps. Endo-and epicardial contours were manually drawn by two experienced observers (S.G., A.M.), and then divided into 6 segments using the anterior right ventricular insertion point as reference. Care was taken to avoid partial volume effects at the endocardial and epicardial borders for T1, ECV and T2 maps. Global T1, ECV, and T2 values were calculated: T1 values were determined by fitting an exponential model to the measured data [15]. Prior to CMR, the hematocrit was determined in all subjects, allowing with native and post contrast T1 measurements of the myocardium and blood pool the calculation of extracellular volume (ECV), using a previously described equation [16]. T2 results were obtained by fitting a 2-parameter intensityweighted exponential model (no offset term) [17]. Variables and definitions All variables were collected directly from patients, and/ or medical records except CMR parameters, which were evaluated as described above. Most variables are selfexplanatory; all others are defined below. Underlying connective tissue disorders had to fulfill the diagnostic criteria of the American College of Rheumatology or the European League Against Rheumatism, respectively. Due to the variety of CTD (n = 5), these were clustered into three subgroups: 1) SSc 2) SLE 3) "Others": overlap syndrome, Sjögren's syndrome, and polymyositis. For SSc and SLE, subgroup analyses were performed. Due to the low number of patients and the heterogeneity of CTD in the "others" group (3 different CTD in 10 patients) no further subgroup analysis was performed. Statistical analysis Absolute numbers and percentages were computed to describe the patient population. All continuous variables were tested for normality using the Kolmogorov-Smirnov test. Normally distributed continuous variables were expressed as means (with standard deviation) and skewed variables were presented as medians (with quartiles). Comparisons between groups were made using the Mann-Whitney U test or the Fisher's exact test, as appropriate. P-values (two-tailed) of <0.05 were considered significant. All statistical analyses were performed using SPSS, version 22.0 (IBM Corp., Armonk, NY, USA). Patient characteristics In total n = 60 subjects were included in the final analysis, see Table 1: n = 40 patients with CTD, n = 20 healthy individuals served as control group. At inclusion, CTD patients were 54 ± 17 years of age, predominantly female (87 %), and did not differ significantly from the control group, p = 0.10 for age and p = 0.27 for gender, respectively. Most patients suffered from SSc (n = 17) or SLE (n = 13). Others (n = 10) had overlap syndrome (n = 6), Sjögren's syndrome (n = 3), and polymyositis (n = 1). Nonspecific dyspnea and angina were the most frequently reported symptoms in the overall patient population (33 and 23 %, respectively). ECG abnormalities were detected in n = 8 (20 %) of the patients. 
In detail, n = 3 showed left bundle branch block (in all of them CAD could be excluded by coronary angiography), n = 2 had atrial fibrillation (one patient had coronary angiography and showed no CAD), n = 2 had ventricular extrasystoles (in one of the patients CAD was ruled out by coronary angiography), n = 1 patient showed a right bundle branch block. The majority (60 %) of our overall CTD population was on steroids during the time of CMR. Details are displayed in Table 1. General CMR results CMR findings can be viewed in Table 2. The mean LV-EF was 62 %, and did not differ to our control group (p = 0.41). Furthermore, functional CMR parameters (LV size, mass, etc.) were not significantly different between CTD patients and controls. LGE was present in 7 (18 %) of the CTD patients, most commonly occurring in a non-ischemic pattern (epicardial and/or intramural) [14]. LGE was not present in any of the controls. Looking at the SSc and SLE subgroups (Tables 3 and 4) revealed that mean LV-EF was also preserved, and the prevalence of LGE tended to be low (12 % SSc, 23 % in SLE). T1 and ECV results We found higher native T1 values in the CTD patient population: 1008 (990-1042) ms vs. 962 (947-987) ms in controls, p = 0.001; Table 2, Fig. 1a. Post contrast T1 values were decreased in comparison to controls: 494 (477-522) ms vs. 526 (508-553) ms, p = 0.008, Table 2 Tables 3 and 4. Furthermore, post contrast T1 values were lower in SSc patients: 494 (474-525) vs. SLE patients: 507 (479-539). However, this difference was not statistically significant (p = 0.43). Compared to healthy controls, patients with SSc demonstrated: 1) significantly higher median native T1 values and ECV values (both p < 0.001), 2) significantly decreased post contrast values (p = 0.02). Patients with SLE showed increased median T1 native values in comparison to healthy controls (p = 0.03). However, although increased, ECV values did not differ significantly to controls (p = 0.24). Furthermore, SLE patients demonstrated lower post contrast values than controls without reaching significance (p = 0.16). Figure 2 displays a LGE- Defining the 95 % percentile of our control group as a threshold for definite abnormal values, we found values above 1033 ms for native T1, below 451 ms for post contrast T1, above 32 % for ECV, and above 54 ms for T2 to be abnormal, see In the SLE subgroup, values higher than the 95 % percentile of controls were found in n = 3 patients (23 %), with one patient showing both increased T2 and native T1 values beyond the 95 % percentile of normal, and the two other patients isolated increased ECV or T2 beyond the 95 % percentile of normal, respectively. Only one of these patients was reported LGE-positive, also see Fig. 5. Discussion To the best of our knowledge, this is the first study evaluating cardiac involvement in patients with CTD and preserved left ventricular ejection fraction by a comprehensive CMR approach, including LGE CMR, as well as T1 and T2 mapping techniques. The findings are as follows: 1) Patients with CTD show increased native T1, ECV, T2 and decreased post contrast T1 values compared to controls. 2) Subgroup analysis of SSc and SLE patients revealed that native T1 and T2 values seem to be higher in patients with SSc compared to patients with SLE. However, both parameters can separate between SSc/SLE patients and controls. 3) Abnormal values beyond the 95 % percentile of healthy controls might help to detect myocardial involvement in patients with CTD even in the absence of LGE. 
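Point 3) above rests on flagging values beyond the 95 % percentile of the control group (the thresholds reported earlier: >1033 ms native T1, <451 ms post-contrast T1, >32 % ECV, >54 ms T2). Deriving such a cut-off is elementary; the sketch below uses simulated control values, not the study's raw data, and shows only the upper-tail case (for post-contrast T1 the abnormal direction is reversed, i.e. below the lower percentile).

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated control-group measurements (illustrative only; n = 20 as in the study).
controls_native_t1 = rng.normal(962, 30, 20)   # ms
controls_t2 = rng.normal(49, 2.5, 20)          # ms

# Upper-tail thresholds: values above the 95 % percentile of controls count as abnormal.
thr_t1 = np.percentile(controls_native_t1, 95)
thr_t2 = np.percentile(controls_t2, 95)

# Flag hypothetical patient measurements against the control-derived cut-offs.
patients_t1 = np.array([1008.0, 1042.0, 990.0])
patients_t2 = np.array([53.0, 56.0, 51.0])
print(f"native T1 threshold {thr_t1:.0f} ms -> abnormal: {(patients_t1 > thr_t1).tolist()}")
print(f"T2 threshold        {thr_t2:.1f} ms -> abnormal: {(patients_t2 > thr_t2).tolist()}")
```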
Patient characteristics and general CMR results Most patients were middle-aged and female, in line with previous reports [3]. The majority of patients was nonor oligosymptomatic, and had normal ECG, underlining that the diagnosis of cardiac involvement is a challenge in CTD, Table 1. The mean LV-EF in our cohort was preserved (62 %), cardiac dimensions did not differ from controls, Table 2. LGE was present in 18 % of the CTD patients, occurring in a non-ischemic pattern in accordance with other studies [2,3,[20][21][22]. T1 and ECV results We found higher native T1 values and increased ECV in our CTD population in comparison to controls, Table 2, Fig. 1a + c. Furthermore, post contrast T1 values were decreased in comparison to controls, Table 2, and Fig. 1b. Since these differences are independent of the presence of LGE, they may allow early detection of subclinical myocardial alterations in patients with CTD, as reported in other inflammatory cardiomyopathies [23,24]. In the SSc subgroup, differences for native T1 and ECV were even larger than in the overall CTD population, suggesting a high rate of diffuse myocardial involvement detected by T1 mapping, supporting data from Ntusi et al. [3], who found also elevated native T1 and ECV in SSc patients. At first sight, the mapping data in this study seem to conflict with the low prevalence of LGE (12 %). However, LGE has its strengths in detecting focal processes (e.g. infarcted myocardium vs. remote myocardium), whereas in diffuse processes this technique is of limited value. Conversely, mapping techniques, which provide absolute quantitative values, rather than just visual or semi-quantitative interpretation of the images, perform well in the assessment of diffuse myocardial processes [7]. Therefore, the T1 and ECV findings in this study might be the surrogate for the high rate of diffuse fibrosis (44-100 %) observed by endomyocardial biopsy or autopsy in SSc patients [25,26], and might be a useful tool not only for detection of myocardial involvement, but also for evaluation of an adequate response to immunosuppressive agents during the clinical course of the disease. In the SLE subgroup, we observed lower T1 and ECV differences to controls than in the SSc subgroup. Consequently, although showing increased ECV and decreased post contrast T1 values compared to controls, the difference was significant only for native T1 values, p = 0.03. This might have at least two reasons: 1) In contrast to SLE patients, autopsy studies from SSc patients revealed a high rate of diffuse fibrosis, which might be the surrogate for higher native T1 and ECV values in SSc patients [25,26]. 2) Our finding that native T1 seems to separate best between SLE patients and healthy controls, is supported by a recent study [2], which identified native T1 a) as the best parameter to separate between SLE patients and controls, and b) as an independent predictor of the underlying SLE diagnosis. However, in the study by Puntmann [2] also post contrast T1 values and ECV differed significantly to the control group. They included 33 asymptomatic SLE patients, with an activity index (SLEDAI) of 0, and observed a high LGE prevalence of 61 % (n = 20), which is in contrast to our study (SLEDAI 16, prevalence of LGE 23 %). 
Another explanation for these differences might be the time duration from SLE diagnosis to CMR imaging: In the study from Puntmann et al., the average time from SLE diagnosis to imaging was 7.4 years whereas in our study almost 40 % had their CMR within the first year of SLE diagnosis. Therefore, it might be argued that the grade of diffuse fibrosis, as well as the presence of focal fibrosis detected by LGE, might increase in later stages of the disease. Since both studies found that native T1 is the most sensitive parameter to separate between SLE patients and controls, native T1 may play an important role in: a) initial diagnosis of myocardial involvement and b) the monitoring in SLE patients. Our findings add knowledge to the potential role of T1 mapping in patients with different CTD, since this technique seems to provide more detailed tissue characterization than LGE alone. This might have clinical implications for the assessment of disease activity, and monitoring of the response to immunosuppressive medication in CTD patients. Moreover, since T1 and ECV values in patients with ECG abnormalities did not differ to the values of patients with normal ECG, the presence of ECG abnormalities alone may be of limited diagnostic value for detecting myocardial involvement in CTD patients. T2 results In contrast to T1 mapping, myocardial T2 values correlate closely with free tissue water content [27,28], predisposing them for the assessment of active myocardial inflammation in systemic disorders such as CTD. Newer T2 mapping sequences provide objective and robust data [8,29], and will most likely replace previously described T2-weighted sequences [7]. As expected in systemic inflammatory disorders such as CTD, median myocardial T2 values were significantly higher than in controls, suggesting myocardial involvement due to systemic inflammation, Table 2, Fig. 1d. Of note, T2 performs even better than native T1 to separate controls from CTD patients (p < 0.001, p = 0.001, respectively). This difference remains significant by dividing the CTD population in a LGE-positive and a LGE-negative group, underlining the additional value of T2 mapping in comparison to the performance of LGE CMR alone. For the SSc subgroup, we found only studies in the literature that used T2-weighted images for the assessment of inflammation instead of newer T2 mapping techniques [3,30]. We filled this gap and found higher T2 values both than controls (p < 0.001), and patients with SLE (p = 0.001), suggesting a high grade of myocardial inflammation, possibly representing active disease, in SSc patients. The occurrence of both myocardial inflammation and diffuse fibrosis is a well-known finding in these patients [3]. Thus, a comprehensive CMR approach including LGE, T1 and T2 mapping seems a reasonable approach to evaluate both chronic and active stages of the disease in SSc patients. Our data are also supported by a recent study [31], which reported elevated T2 values in SLE patients compared to controls. However, their T2 values were higher in SLE patients and controls as compared to the values in this study, which might have the following reasons: 1) Different patient populations: our patients were younger; 2) different grades of inflammation due to different immunosuppressive treatment regimen: 77 % of our patients were on steroids vs. only 17 % in the latter study. 3) Differences in the T2 mapping sequence and map analysis software. 
Therefore, as long as there are no consistent mapping sequences, each institution should create its individualized normal values [12]. Of note, T2 values of our control group were in line with the results of other groups [32]. Since increased T2 values are supposed to represent potentially reversible processes [31], T2 mapping might play an important role as a quantitative biomarker, which might serve as surrogate for response or failure of immunosuppressive agents. As shown above for T1 values, T2 values in patients with normal vs. abnormal ECG did not differ significantly, underlining the need for further detailed tissue characterization for the detection of myocardial involvement in CTD. Values above the 95 % percentile of normal Despite highly significant differences in T1 and T2 values between the CTD population and controls, there is still some overlap in values, hampering the diagnosis of myocardial involvement in the individual CTD patient, also see Fig. 1. Therefore, we used the 95 % percentile of our control group as a threshold for definite abnormal values in patients with CTD. The majority of abnormal values were reported for T2 (n = 15), and native T1 (n = 10), suggesting to be the most promising parameters for potential detection of myocardial involvement. Of note, 87 % of these patients with elevated T2 values, and 90 % of the patients with elevated T1 native values, were LGE-negative, see Fig. 4. In the SSc and SLE subgroups we found comparable results, with native T1 and T2 as most frequent parameters above the 95 % percentile of normal, and a high rate of LGE-negative patients, see Fig. 5. These findings underline the additional benefit of the newer mapping techniques compared to LGE imaging alone. Clinical implications In this study, we could demonstrate that mapping sequences in addition to LGE-CMR might be useful for the detection of myocardial involvement in patients with CTD. Patients with CTD show higher T1, ECV, and T2 values compared to healthy controls. These findings are independent of the presence of LGE. Furthermore, subgroup analysis in SSc and SLE patients revealed that native T1 mapping and T2 mapping are the best parameters to separate between normal subjects and patients. This could be confirmed among patients with values higher than the 95 % percentile of controls, suggesting a combination of both fibrosis and inflammation in CTD patients. Despite potential life-threatening complications by myocardial involvement of CTD, many patients will present with nonspecific symptoms, normal ECG, and preserved LV-EF. Thus, a comprehensive CMR approach may be of future clinical importance not only for detection of myocardial involvement but also for response to treatment. Nevertheless, larger randomized trials are warranted to investigate the diagnostic and prognostic value of abnormal mapping findings, before these sequences can be implemented in the clinical routine. Limitations Several potential limitations need to be addressed. Due to the single center setting, potential center-specific bias cannot be excluded. However, since most mapping sequences are vendor and center specific, there is still a lack of established normal values and thresholds, so preferably centers should establish their own normal values and thresholds upon healthy controls, as suggested by current recommendations [12]. The overall CTD group, and in particular the SSc and SLE subgroups are small, but comparable in size to most of the studies in the current literature dealing with CTD. 
Furthermore, despite the relatively small numbers of patients, significant differences in the mapping parameters were measured compared to controls. Measuring global myocardial T1 or T2 values in a single mid-ventricular slice might overlook focal processes. However, this approach is common practice [33,34], less subjective and might be even better comparable to follow-up exams. Moreover, for comparing different CMR techniques (native T1, post contrast T1, ECV, T2), it is fundamental that measurements are made in matching locations. Endomyocardial biopsy was not routinely performed. However, it is well known that EMB has several limitations, e.g. invasiveness, sampling error, lowering its diagnostic benefit. Furthermore, in oligosymptomatic patients with preserved LV-EF, this would be a rather unethical approach, and not in line with current guidelines [35]. Comparing mapping results to cardiac biomarkers would have been of interest, however this was not intention of our study, and should be investigated by further studies. Conclusions We found increased values for native T1, ECV, T2, and decreased values for post contrast T1 in our CTD population with preserved LV-EF compared to controls, independent of the presence of LGE. Native T1, and T2 as the best discriminators to controls seem to have incremental value in the detection of myocardial involvement compared to LGE CMR alone, with the largest differences observed in patients with SSc. A potential benefit of the newer mapping techniques might be an early
Rosen-Zener model in cold molecule formation The Rosen-Zener model for association of atoms in a Bose-Einstein condensate is studied. Using a nonlinear Volterra integral equation, we obtain an analytic formula for final probability of the transition to the molecular state for weak interaction limit. Considering the strong coupling limit of high field intensities, we show that the system reveals two different time-evolution pictures depending on the detuning of the frequency of the associating field. For both limit cases we derive highly accurate formulas for the molecular state probability valid for the whole range of variation of time. Using these formulas, we show that at large detuning regime the molecule formation process occurs almost non-oscillatory in time and a Rosen-Zener pulse is not able to associate more than one third of atoms at any time point. The system returns to its initial all-atomic state at the end of the process and the maximal transition probability is achieved when the field intensity reaches its peak. In contrast, at small detuning the evolution of the system displays large-amplitude oscillations between atomic and molecular populations. We find that the shape of the oscillations in the first approximation is defined by the field detuning only. Finally, a hidden singularity of the Rosen-Zener model due to the specific time-variation of the field amplitude at the beginning of the interaction is indicated. It is this singularity that stands for many of the qualitative and quantitative properties of the model. The singularity may be viewed as an effective resonance-touching. basic physical processes and to the development of efficient experimental tools for precise control of cold atom motion, there is a pronounced need to explore models other than the mentioned two. For the non-crossing models the next after the basic constant-amplitude Rabi one comes the Rosen-Zener model [15] of finite pulse duration, when the detuning is supposed constant while the field amplitude varies in time according to the hyperbolic secant low. Though in the limits of the model considered here the cold molecule formation processes via photoassociation or a Feshbach resonance are mathematically treated in equivalent manner, this field configuration is directly relevant to the photoassociation only. This is because in the case of a magnetic resonance the coupling term (i.e., the pulse duration, if optical terminology is used) can not be adjusted -it corresponds to some given hyperfine coupling. On contrary, in photoassociation the pulse duration can not be infinite (this would mean infinite energy). Hence, finite pulse duration should necessarily be discussed if experimental realization is assumed. To this end, the accumulated knowledge from the linear theory suggests that one should be careful with the optical pulse inclusion and shutdown scenarios -the particular form of the time-variation of the field amplitude plays a substantial role. A well discussed shape of such a time-variable pulse in the linear theory is the Rosen-Zener hyperbolic-secant model. This is a motivation for exploring the Rosen-Zener field-configuration for the photoassociation. One should note, however, that this model is applied, though indirectly, to the Feshbach resonance as well. This is achieved by applying a transformation of the independent variable (time) that changes the governing equations to a constant-amplitude form (see below). 
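The transformation announced here (written as dz/dt = sech t in the next section) has a simple closed form; the identities below are standard properties of the Gudermannian function and are stated for orientation rather than quoted from the paper.

```latex
% Closed form of the substitution dz = sech(t) dt used to pass to the
% constant-amplitude form (standard Gudermannian identities):
\[
  z(t) = \int_{0}^{t}\operatorname{sech} t'\,dt'
       = 2\arctan\!\bigl(\tanh(t/2)\bigr) = \operatorname{gd}(t),
  \qquad z(\pm\infty) = \pm\frac{\pi}{2},
\]
\[
  \frac{dt}{dz} = \cosh t = \frac{1}{\cos z},
  \qquad
  \int_{-\infty}^{\infty} U_{0}\operatorname{sech} t\,dt = \pi U_{0}
  \quad\text{(the Rosen--Zener pulse area)}.
\]
% The whole interaction is thus compressed onto a z-interval of finite length \pi,
% with dt/dz diverging at the endpoints z = \pm\pi/2.
```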
Changing to the constant-amplitude form turns the model into a variable-detuning field-configuration. Yet, strictly speaking, the model remains non-crossing. In the meantime, this constant-amplitude form reveals a prominent property of the model, namely, a hidden singularity due to the speed of the field inclusion at −∞ = t . It is this singularity that makes a major difference of this model from the Rabi one, which does not reveal the different evolution scenarios inherent for the Rosen-Zener model as discussed below. The mentioned singularity effectively acts as a resonance-touching. Finally, it should be noted that the constant-amplitude form of the model makes it relevant to several recent experiments. Thus, the model is equally useful for the magneto-association via Feshbach resonances. In the present paper we explore both the weak and strong coupling regimes for the Rosen-Zener field-configuration comparing the results with those for the linear Rosen-Zener model [15] and the nonlinear Rabi problem [8]. In the quasi-resonance approximation, the semiclassical equations describing twomode one-color photo-or magneto-association of an atomic Bose-Einstein condensate have the form of the following system of coupled nonlinear equations for the probability amplitudes of the atomic and molecular states a 1 and a 2 [16][17][18] where t is the time, ) (t U is the Rabi frequency, and ) (t δ is the detuning modulation function. These equations are often faced in different field theories with a Hamiltonian containing a term of the form , for instance, in controlling the interaction strength by a Feshbach resonance [5,18], in second harmonic generation in nonlinear optics [19], etc. System (1) preserves the total number of particles that we normalize to unity: 1 const 2 We will consider a condensate initially being in pure atomic state: All the parameters involved in (1) are supposed dimensionless. We study the following field configuration known as the Rosen-Zener model [15]: In the analysis below the following linear analog of system (1) is used: with the same functions . Here, to ensure coincidence of the solutions to systems (1) and (3) , which leads to . Note that the solution of system (3) for 1 = L I is written as is the Gauss hypergeometric function [20]. Hence, accurate to a phase factor, the solution for system (3) satisfying the normalization The final (at +∞ → t ) probability of transition to the second level is given by the following nice formula by Rosen and Zener [15]: This formula states the well-known π -theorem [13] according to which the system returns to , and reaches the highest transition probability possible for the given fixed detuning at ). Note that the system is completely inverted at exact resonance only. The numerical solutions to nonlinear and linear systems are compared in Fig.1. As it is seen, the nonlinear behavior displays considerable deviations from the linear case. First, at exact resonance the dependence of the final transition probability on the Rabi frequency in the nonlinear case is monotonic. Second, while at non-zero detuning atom/molecule oscillations are always observed as the field amplitude is increased, the π -theorem is no longer valid. However, importantly, at fixed detuning the final transition probability depends nearly periodically on the field amplitude and approximately periodic returns to the initial state are observed. (Therefore, it is likely that a changed form of the π -theorem holds in this nonlinear case as well.) 
This is demonstrated in Fig.2. Furthermore, examining the graphs in this figure, we see that the oscillation shape, amplitude and frequency are changed depending on the detuning. Clearly, the oscillation nature is close to that of the nonlinear Rabi-solution (see, e.g., [8]). Finally, we note that in the nonlinear case the transition probability decreases considerably faster as the detuning is increased, becoming negligible already at Our study is based on two different exact nonlinear equations written for the molecular state probability We start from the weak coupling regime of small field intensities that is often encountered situation under the current experimental conditions. Using the nonlinear Volterra integral equation, we show that an accurate approximate solution for this limit can be constructed, using Picard's successive approximations, in terms of the solution to the linear quantum-optical problem. We determine the final conversion probability and show that because of the inherent properties of the Rosen-Zener model under consideration the strict limit of weak nonlinearity (when no essential deviations from the linear evolution are observed) corresponds to smaller field intensities as compared with the Landau-Zener case. We discuss the specific reasons for such behavior and construct an approximation that is valid also for the intermediate regime of moderate coupling strength. Further, we pass to the strong coupling limit of high field intensities and show that the system reveals two different time-evolution pictures depending on the frequency detuning of the associating field. At large detuning the molecule formation process occurs almost nonoscillatory in time. In contrast, at small detuning the evolution of the system displays strongly pronounced large-amplitude Rabi-type oscillations. The third-order differential equation in each case is reduced to a limit equation of lower order. In the case of large detuning this equation is of the first order, while in the small detuning case it is an effective Rabi-equation of the second order. Using these limit equations, we derive two accurate approximate formulas for the molecular state probability applicable to the two mentioned regimes. The results show that in the large detuning regime the system always returns to the initial allatomic state independently on the field intensity, hence, the final molecule formation efficiency in this case is nearly zero. In the small detuning regime, because of large-amplitude oscillations, the Rabi frequency (or, equivalently, the Rosen-Zener pulse area) should be adjusted in order to achieve efficient conversion. Weak coupling limit Consider the transformation of the independent variable ) sech( / t dt dz = that changes system (1) to the following constant-amplitude form where ( ) To treat the weak coupling limit of such problems with arbitrary detuning and constant Rabi frequency 0 U , we have earlier developed an appropriate mathematical approach based on the reduction of system (6) to the following nonlinear Volterra integral equation [21] for the molecular state probability Note that if the term proportional to 2 p is omitted, Eq. (8) It is not difficult to see that it is sufficient to take only the first term of Eq. (11). Thereby, the approximate solution to system (6) is written by means of the solution to linear system (3): This formula is checked to be rather accurate in an appropriate range of variation of 2 0 U = λ . Now consider how to calculate the integral in Eq. (12). 
Note that to achieve a preset accuracy in powers of λ , the approximation of L p by a finite number of terms of its Picard's series can be used. Restricting to the accuracy up to ) (the first order of the expansion), . To improve this approximation, a correction factor can be introduced, thereby applying an approximation of the form . Furthermore, the functions δ C and δ S are explicitly determined by considering an auxiliary integral: These functions are shown in Fig.3. ) is readily calculated. The result reads , the solution to nonlinear problem (1), accurate to ) , is given by the following formula: This formula describes well the process up to 3 . , the maximum value 1/2 allowed by the normalization. However, the derived formula can be modified to essentially improve the result. This can be done by noting that L p at small non-zero λ is much better approximated by a formula . This, obviously, corresponds to the what leads to a formula of significantly better structure: Indeed, unlike formula (15), the transition probability Reasons for the latter additional restriction deserve special discussion and we will return to this a little later. But before, we will show that there is a non-trivial way to improve this result even more. Note first that, with accuracy to a constant factor, ) This observation suggests the replacing of the functions δ C and δ S in (11) by a , respectively, )) ( cos( z δ and )) ( sin( z δ by the corresponding derivatives. As is easily seen, this is nearly equivalent to substitution λ / P RZ C = ∞ in formula (16). As a result, we have More accurate calculations taking into account the properties of RZ a 2 show that The derived formula gives very good approximation up to 25 Let us now discuss the applicability range for the obtained formulas and the origin of the restriction imposed on λ . The calculations above rest upon the presumption of smallness of Picard's successive approximations for u as compared to the first term of Picard's series. As follows from Eq. (11), the second Picard's term has the form As it is immediately seen, whenever at 1 << λ the condition ] is fulfilled, the assumption 0 1 u u << is warranted to be the case. Of course, this takes place under 1 . 0 ≤ λ and, as was mentioned above, it is this fact that defines the applicability range of formulas (15) and (16). The situation, however, is drastically changed already at 3 . (17) and (18) applicable up to ) are of substantial importance. One might hope that the latter formula will be applicable for a little larger λ if λ δ ≥ 0 , since then, due to the presence of the factor . Thus, the general conclusion is that under , one may not confine himself only to the first term of Picard's series for u since the successive terms play an important role. Thereby, the given regime should be viewed as a strongly nonlinear one. Strong coupling limit In the strong coupling limit of high field intensities, 1 2 0 >> U , the nonlinearity is well pronounced. In this case, however, the Volterra equation (8) is of little help, because the successive Picard's approximation terms become larger and larger. Instead, we use the following exact nonlinear differential equation of the third order [22] . To construct an approximate solution to this equation, compare the magnitudes of involved terms keeping in the mind that we suppose 1 2 0 >> U . It is then immediately seen that there are two basic possibilities depending on the magnitude of the detuning, 1 0 << δ and 1 0 >> δ . This conclusion is also guessed from Fig.1. 
Indeed, as was already noted above, at small detuning the final conversion probability (i.e., the molecular state probability at +∞ → t ) reveals large amplitude oscillatory dependence on the Rabi frequency. In the meanwhile, the probability rapidly decreases as the detuning is increased becoming practically negligible at 1 0 ≈ δ . These observations are further confirmed by examining the time evolution of the transition probability (Fig.5). We see that at 5 . 0 0 ≤ δ strong atommolecule time-oscillations occur (see the detailed picture in Fig.6), while at larger detuning the oscillations are highly suppressed (Fig.7); they can be neglected already at considerably simplified reducing to a quadratic equation for 0 p : whereby we arrive at the following principal result: This is a highly accurate approximation. For 5 0 > U and 2 0 > δ the probability calculated by this formula and the numerical result are practically indistinguishable (Fig. 7b). Besides, it allows one to linearize Eq. is always less than 6 / 1 . Hence, at large detuning a Rosen-Zener pulse is not able to associate more than one third of atoms ( 6 / 1 molecule = p corresponds to the 3 / 1 of atoms). This limitation for the conversion efficiency had been noted to be the case in the adiabatic limit (which is equivalent to the discussed case of high field intensities and large detuning) for other non-crossing models too (see, e.g., [23]). Another variation of the 3 / 1 limitation is the observation that for the crossing models in the adiabatic approximation the molecular state probability is always close to 6 / 1 at the resonance crossing time-point (see, e.g., [12,22]). The solution to this equation reads (31) This formula demonstrates the same qualitative features as the nonlinear solution, Eq. (28); i.e., in the linear case again a return to the initial state is observed if the applied Rosen-Zener pulse is of a large detuning, and there is a maximal possible transition probability achieved at . This time, this probability is 8 / 1 (i.e., 2 / 1 for normalization 1 = L I ). Small detuning case: To treat this regime we first rewrite Eq. (21) in the following factorized form (32) The speculations now to proceed are as follows. Though the detuning is supposed to be small, one cannot completely neglect the term where ( ) However, the numerical simulations reveal that for any nonzero small 0 δ the solution is oscillatory (this is well seen from Figs. 5 and 6). Hence, in a sense, the exact resonance case 0 0 = δ is degenerate. This degeneracy can be resolved by introducing a small perturbation when constructing the initial approximation. Intuitively, in order to get an approximation that is as close to the real solution as it is possible, one should try to introduce a perturbation as To proceed with the outlined approach, we rewrite Eq. (32) in the following equivalent form: , we neglect the last two terms of this equation and integrate the remaining equation once. Taking into account the initial conditions applied here, we arrive at the following second order equation Comparing this solution with the exact resonant solution we first note that the solution given by Eq. (33) is also written in terms of Jacobi sn -function if one takes ). Furthermore we note that Eq. (38) is reduced to Eq. (33) for 0 = A . These observations clearly suggest that the performed procedure, the introduction of an A -term, is equivalent to changing the parameters of the resonant solution (33) written in the Jacobi snfunction form. 
Hence, the approach we applied can be viewed as a modification of the wellknown method of strained parameters [24]. Here, the idea is to choose the parameter A so that this remnant becomes as small as possible. Strictly speaking, one should look for a value of A for which the influence of the neglected terms is minimal. To address the latter question mathematically strongly, one should examine the behavior of the next approximation term constructed by means of using 0 p of Eq. (38) as zero-order approximation. However, it is difficult to proceed in this way because the analytic expression for the next approximation term is not known. For this reason, we look for indirect criteria. A possibility opens up when examining the behavior of function This is a step-wise function that exponentially slowly decreases from a relatively large value we immediately get with this value of A well describes the process for many oscillations (see Fig. 8a). Nevertheless, it is seen that the deviation from the exact solution slowly increases during the time and eventually becomes rather notable at the end of the interaction process. The parameters 1 p and 2 p are finally given by simple formulas: This is a really good approximation. The Jacobi sine solution (38) with these parameters produces graphs practically indistinguishable from the numerical solution as far as 0 δ is small enough and 1 0 >> U (Fig. 8b). If needed, one may further improve the results by linearization of the problem using this solution as an initial approximation. Thus, we have seen that at small detuning the Rosen-Zener pulse causes large amplitude oscillations during the time evolution of the coupled atom-molecule ensemble described by the Jacobi sn -function. According to the properties of this function, the shape of , thus leading to a zero integration constant C , we conclude that in this regime too the behavior of the system is essentially determined by the mentioned hidden singularity. Summary We have examined, in the limits of two-modes' Gross-Pitaevskii mean field approach, the molecule formation process in a Bose-Einstein condensate under the conditions of the non-crossing Rosen-Zener model for which the detuning of the field is constant and the pulse amplitude is varied according to the hyperbolic secant law. We have first studied the weak coupling limit for this field configuration. Using an exact nonlinear Volterra integral equation, we have shown that in this limit the solution to the problem is written in terms of the solution to an auxiliary linear Rosen-Zener problem. We have derived a simple expression for the final transition probability. We have found that for the Rosen-Zener model the strict limit of weak nonlinearity corresponds to smaller field intensities than for other known models such as the Landau-Zener and Nikitin-exponential ones. We have shown that this is because of the inherit properties of the particular hyperbolic secant pulse shape under consideration. Further, we have treated the strong coupling limit of high field intensities when the nonlinearity is most pronounced in the molecule formation process. We have shown that here there are two different regimes of the time evolution of the coupled atom-molecule system corresponding to large and small detuning of the associating field. In the first case the behavior of the system is almost non-oscillatory while in the second case large amplitude coherent oscillations in the population dynamics are observed. 
Discussing the large-detuning regime, we have shown that the conversion process is effectively described by a limiting first-order nonlinear equation for the molecular state probability. Using the exact solution to this equation, we have shown that in this regime the molecular fraction qualitatively follows the time variation of the field amplitude; i.e., the probability of the molecular state first monotonically increases, reaches a maximum at the time point when the field intensity is maximal, and then decreases as the field amplitude decreases. Eventually, the system returns to the initial all-atomic state. The maximal possible molecular fraction is found to be 1/6; i.e., in this regime a Rosen-Zener pulse is capable of capturing no more than a third of the initial atomic population (this is one argument for why a resonance crossing is needed for efficient molecule production). In accordance with this prediction, the JILA experiments [25] have shown a maximum molecular conversion of about 16%. Furthermore, discussing the small-detuning limit, we have shown that in this regime the system is well described by a second-order nonlinear equation, which is shown to be the equation for an effective Rabi problem with changed parameters. We have derived accurate approximations for the parameters of the corresponding Rabi solution written in terms of the Jacobi elliptic sine function. We have seen that the number of oscillations, as in the linear case, is mainly defined by the pulse area. At the same time, we have shown that the oscillation shape is mostly defined by the field detuning; the influence of the field intensity here presents only a small higher-order correction. Finally, we have indicated an inherent singularity of the Rosen-Zener model, a hidden singularity that accounts for many of the qualitative and quantitative properties of the model. This singularity, which is shown to be due to the time-variation law of the field amplitude at the beginning of the interaction, can be viewed as an effective resonance-touching.
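The two regimes summarized above can be reproduced by directly integrating a two-mode mean-field model. The concrete equations and the normalization |a|² + 2|m|² = 1 used below are my assumptions of a standard parametrization (they are not spelled out in this extracted text); under that normalization, p_mol = |m|² = 1/6 indeed corresponds to 2|m|² = 1/3 of the atoms being bound:

```python
import numpy as np
from scipy.integrate import solve_ivp

U0, T = 10.0, 1.0  # peak coupling and pulse width -- illustrative values only

def rhs(t, y, delta0):
    a = y[0] + 1j * y[1]                 # atomic amplitude
    m = y[2] + 1j * y[3]                 # molecular amplitude
    U = U0 / np.cosh(t / T)              # Rosen-Zener sech pulse
    da = -1j * U * np.conj(a) * m
    dm = -1j * (U / 2.0) * a * a - 1j * delta0 * m
    return [da.real, da.imag, dm.real, dm.imag]

for delta0 in (0.25, 10.0):              # small vs large detuning
    sol = solve_ivp(rhs, (-20.0, 20.0), [1.0, 0.0, 0.0, 0.0],
                    args=(delta0,), max_step=0.01)
    p_mol = sol.y[2] ** 2 + sol.y[3] ** 2
    # Small delta0: large-amplitude oscillations persist; large delta0:
    # p_mol follows the pulse, stays below ~1/6, and returns to ~0.
    print(f"delta0={delta0}: max p_mol={p_mol.max():.3f}, "
          f"final p_mol={p_mol[-1]:.3f}")
```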
Integers for Radical Extensions of Odd Prime Degree as Product of Subrings For a radical extension K of odd prime degree, the ring O_K of integers is constructed as a product of subrings with the following property: for all prime divisors q of the discriminant of O_K there is a q-maximal factor. The discriminant of O_K is the greatest common divisor of the discriminants of all factors. The results are applied to give a criterion for the monogeneity of K whose converse does not hold. INTRODUCTION For n ≥ 2 and m ∈ ℤ, m ≠ ±1, consider the polynomial f(X) = X^n − m and assume that it is irreducible over ℤ. Define α = ⁿ√m (so that α^n − m = 0), let K = ℚ(α), and denote by O_K the ring of integers of K. Such radical extensions are also known as pure extensions in the literature. The present paper is based on the findings of [3], where it has been characterized when O_K = ℤ[α]. This is true under certain conditions (see Theorem 5.3 of [3]). Turning this theorem into the negative, it follows that ℤ[α] ⊊ O_K is equivalent to (i) m is not squarefree or (ii) there is a prime factor of n which is a Wieferich prime to base m. For more information concerning Wieferich primes see Section 4 of [3] and the references cited there. Note that a prime q always divides m^(q−1) − 1 if q and m are coprime (Fermat's little theorem); a Wieferich prime to base m satisfies the stronger condition that q² divides m^(q−1) − 1. The definition of q-maximality can be found in 6.1.1 of [2]. In the present paper we assume that n = p is an odd prime. The oddness of p is not a real restriction, because for p = 2 the ring of integers of radical extensions is already well known. In the main theorem we characterize O_K as a product of subrings of the form ℤ[θ], where ℤ[θ] is q-maximal and q runs through all prime divisors of the discriminant; furthermore, the minimal polynomial of each such θ is again of the form X^p − c. In the Wieferich case an additional factor of the form ℤ[θ] is necessary to ensure p-maximality. The discriminant of O_K is calculated as the greatest common divisor of the discriminants of the factors, without using ℤ-bases of O_K. The proof of the main theorem and its corollaries in Section 5 needs some preparation, which is done in Sections 2 to 4. In Section 2 we treat the preliminaries needed for the following sections. In Section 3 the Wieferich case is handled: we prove that a specific subring of K is p-maximal. In Section 4 the non-squarefree case is handled: for every prime factor q of m we construct a subring which is q-maximal. In Section 6 we give examples and a criterion for the monogeneity of K whose converse does not hold. PRELIMINARIES In this section we prove lemmas and propositions which are needed in the subsequent sections. (i) A well-known theorem due to N. H. Abel (see Satz 277 together with Satz 180 of [7]) says that X^p − m is irreducible over ℤ if and only if m is not a p-th power in ℤ. Then (i) follows immediately. (ii) This is immediate. ◻ Remark 2.2. As an immediate consequence of Lemma 2.1 we can assume in the following, without restriction, that in the prime decomposition m = ∏ q_i^(e_i) it holds that 1 ≤ e_i ≤ p − 1 for all i. ◻ The following lemma introduces some further notation and properties of bases of Wieferich primes. (ii) This holds because f(X) = X^p − m is the minimal polynomial of α. (iii) This is clear. (iv) This holds because the square of the determinant of every base-change matrix between the power bases is 1; the statement then follows from the well-known theorem that the discriminants of two ℤ-bases differ by the factor det(M)² = 1 (see Proposition 1 in §2.7 of [8]). This follows immediately from Theorem 2.6 (ii); apply Lemma 2.5 (v). Statement (iii) is also clear from (i). ◻ The following lemma has a more general context.
The assumptions of [4] are fulfilled: ℤ is a Dedekind domain, O_K has finite rank as a ℤ-module, and this holds by assumption also for the two subrings considered. If one of the subrings is contained in the other, the statement follows from the Elementary Divisor Theorem (see Theorem 1 in §1.5 of [8]). THE WIEFERICH CASE In this section we assume that p is a Wieferich prime to base m. Then p does not divide m, and the element introduced in Lemma 2.8 (iii) is an integer in K. The proof of the main statement of this section is given at its end, because several preparatory statements are necessary. Remark 3.2 (matrix notation). We will use matrix notation because we need the characteristic polynomial, and the analysis modulo powers of p seems to be easier. Relative to the power basis {α^i : 0 ≤ i ≤ p − 1}, multiplication by a fixed element of K is represented by a p × p matrix; for details see Proposition 1 in §2.6 of [8]. The matrix has the following entries. In the first row, every entry is divisible by the appropriate power of p. For 1 ≤ k ≤ p − 1, there is a 1 in the entries (k + 1, 1), …, (p, p − k) (which form the k-th secondary diagonal below the main diagonal) and 0 mod p elsewhere; this means also that every entry in the columns p − k + 1, …, p is divisible by p. This finishes Remark 3.2. ◻ The matrix has the following properties (see Remark 3.2): (a) it has integral entries in the columns 2, …, p, which are all divisible by p; (b) every entry in the first row is divisible by p (Lemma 3.3 (iii)); (c) the entry (p, 1) is the only nonzero entry of the first column. Using the Laplace expansion of the determinant along the first column, it follows that the characteristic polynomial is monic of degree p and has integral coefficients. It remains to be shown that it is the minimal polynomial of the element in question. Because this element does not lie in ℚ, its minimal polynomial has degree p, as there are no intermediate fields between ℚ and K. Our statement now follows from the well-known fact that the minimal polynomial divides the characteristic polynomial (the Cayley-Hamilton theorem; see Satz 6 in Algebraische Ergänzung §2 of [1]). ◻ Next we analyze the matrix mod p. (i) Consider in Remark 3.2 the first columns of the matrix: the first rows are zero, and the remaining rows form the unit matrix. Then (i) follows immediately. (For the first congruence use Lemma 3.3 (iii) and the congruence ≡ 0 mod p; for the third congruence use the induction hypothesis and the results for the previous column.) Hence statement (iv) follows for the k-th column, and statement (iv) then follows inductively from Lemma 3.3 (i). MAIN THEOREM AND COROLLARIES We can start immediately with the main theorem of this paper. EXAMPLES AND A CRITERION FOR MONOGENEITY Firstly, we construct ℤ-bases of O_K. We omit the (lengthy but straightforward) proof, because ℤ-bases of O_K have already been constructed in [10] and, as a special case, in [6]. The methods used in our proof are quite similar to those used in [10]. The following example illustrates the results developed here. (iii) Put q = 11 and m = 3² = 9. Then 11 is a Wieferich prime to base 3, because 11² divides 3¹⁰ − 1. Due to Lemma 2.3 (ii), 11 is also a Wieferich prime to base 9. The other direction of this proposition is not true, as the following example shows. It extends Exercise 10B of Chapter V of [8].
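Both the Wieferich condition and the discriminant of X^p − m used above are easy to check numerically. The following sketch is mine, not part of the paper; it assumes the standard definition (q is a Wieferich prime to base m when q² divides m^(q−1) − 1, which matches the paper's usage) and the classical formula disc(X^p − m) = ±p^p·m^(p−1):

```python
from sympy import symbols, discriminant

def is_wieferich_to_base(q: int, m: int) -> bool:
    """True iff q^2 divides m^(q-1) - 1, i.e. q is a Wieferich prime to base m."""
    return m % q != 0 and pow(m, q - 1, q * q) == 1

print(is_wieferich_to_base(11, 3))   # True: 121 divides 3^10 - 1
print(is_wieferich_to_base(11, 9))   # True, as Lemma 2.3 (ii) predicts
print(is_wieferich_to_base(5, 2))    # False: 25 does not divide 2^4 - 1 = 15

# Discriminant of X^p - m for an odd prime p equals p^p * m^(p-1) up to sign.
X = symbols("X")
p, m = 5, 7
print(discriminant(X**p - m, X), p**p * m**(p - 1))
```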
Development and Field Evaluation of the INTER-ACT App, a Pregnancy and Interpregnancy Coaching App to Reduce Maternal Overweight and Obesity: Mixed Methods Design Background The interpregnancy and pregnancy periods are important windows of opportunity to prevent excessive gestational weight retention. Despite an overwhelming number of existing health apps, validated apps to support a healthy lifestyle between and during pregnancies are lacking. Objective To describe the development and evaluation of the INTER-ACT app, which is part of an interpregnancy and pregnancy lifestyle coaching module, to prevent excessive weight gain in pregnancy and promote an optimal weight and a healthy lifestyle in the interpregnancy period. Methods A mixed methods design was used to identify the needs of health care providers and end users, based on 15 semistructured interviews, two focus groups, and two surveys. The user interface was evaluated in a pilot study (N=9). Results Health care providers indicated that a mobile app can enhance a healthy lifestyle in pregnant and postpartum women. Pregnant women preferred graphic displays in the app, weekly notifications, and support messages according to their own goals. Both mothers and health care providers reported increased awareness and valued the combination of the app with face-to-face coaching. Conclusions The INTER-ACT app was valued by its end users because it was offered in combination with face-to-face contact with a caregiver. Introduction An increasing number of women are obese at the start of pregnancy. Concurrently, one in three European pregnant women has excessive gestational weight gain [1]. In particular, women with a high pregestational body mass index (BMI), young women (<20 years), single women, and women belonging to ethnic minority groups are at risk [2]. Adverse outcomes associated with maternal obesity and excessive gestational weight gain include gestational hypertension, gestational diabetes mellitus, and large-for-gestational-age infants [3]. Approximately half of women with excessive gestational weight gain do not return to their prepregnancy BMI before the next pregnancy. This increases prepregnancy obesity and is an important predictor of increased risks of pregnancy- and birth-related outcomes in the next pregnancy, including cesarean delivery, fetal overgrowth, and postnatal weight retention [3-6]. Face-to-face lifestyle intervention studies during pregnancy are effective in reducing gestational weight gain [7-9], but they are time-consuming with limited scalability, and no or minimal effects have been shown regarding relevant pregnancy outcomes [10-12]. Given the high impact of prepregnancy BMI, intervening early during the preconception period is essential [3]. Reaching the most vulnerable women and subsequently achieving adherence to a healthy lifestyle before becoming pregnant are of high priority [13]. The use of mobile health (mHealth) technology in the prevention, screening, and treatment of health-related issues is increasing, as is reflected by the ample offering of smartphone apps. On one hand, mHealth can offer easier access to individually tailored support at a low cost. On the other hand, these apps are mostly not targeted at groups with specific needs, such as pregnant and postnatal (between pregnancies) women. Moreover, their effectiveness has not been tested in randomized controlled trials (RCTs) [14].
Results on the effectiveness of mHealth tools are scarce [15,16], but pioneering studies have shown promising results regarding intervention adherence, feasibility, and achieving an adequate pregnancy weight gain [17-19]. The aim of this study was therefore twofold. First, we aimed to develop an app to monitor and coach pregnant and postnatal women with a focus on maternal weight, physical activity, healthy eating, and mental wellbeing. Second, we aimed to gather feedback on user experience (ie, usability, usefulness, and user acceptance). This app, called INTER-ACT, will be used in combination with four postnatal (interpregnancy) and three prenatal face-to-face coaching sessions. The ultimate aim of the RCT in which this app is embedded is to reduce the risk of gestational hypertension, gestational diabetes, cesarean section, and large-for-gestational-age infants in a subsequent pregnancy among women who had excessive gestational weight gain in their previous pregnancy [19]. Overview The INTER-ACT app targets women during the interpregnancy period, as well as pregnant women. The interpregnancy period is defined as the period between delivery and the start of a subsequent pregnancy. The app was developed in three stages (Figure 1). First, a mixed methods design was used to gain insights into experiences with and views on perinatal lifestyle coaching from the perspective of health care providers and women/end users. Second, the app was designed by user-experience researchers and developed by the Belgium Campus ITversity in South Africa. Third, the app was evaluated in a qualitative field evaluation study. The three stages are elaborated below. A subsequent stage that is beyond the scope of this study involves embedding the app in a lifestyle intervention and evaluating it with an RCT design. The content of the face-to-face coaching is described elsewhere [19]. Health Care Providers' Perspectives We conducted semistructured interviews with a purposive sample of four general practitioners, three gynecologists, five midwives, and three dieticians (Table 1), who were selected for their previous experience with obesity care in pregnant and postnatal women. A topic list was developed to gain insight into their experiences with and views on perinatal lifestyle coaching and their attitude towards technology-supported lifestyle coaching. In addition, two focus groups with a total of 16 midwives were conducted to explore their experiences with and views on perinatal lifestyle coaching and their attitude towards technology-supported lifestyle coaching, in order to support data triangulation and achieve data saturation. All interviews and focus groups were audiotaped, transcribed, and analyzed thematically using open coding. The analysis of the focus groups additionally included a peer debriefing with our researchers to check the interpretation of the results. Furthermore, 43 caregivers (Table 2) attending a symposium about lifestyle coaching in pregnant women were asked to respond to two open questions about their knowledge and skills regarding perinatal lifestyle coaching and potential gaps. Written informed consent was obtained from all participants, and confidentiality and anonymity were assured. Ethical approval was obtained from University Hospital Universitair Ziekenhuis Leuven, Belgium (B300201422650).
End Users' Needs We conducted a survey among 50 pregnant women between 12 and 42 weeks of pregnancy (Table 3) to explore their needs regarding technology-supported lifestyle coaching to optimize gestational weight gain. They were recruited from the waiting room before prenatal consultations in two nonuniversity hospitals. The inclusion criteria were as follows: sufficient fluency in spoken Dutch, age between 18 and 45 years, uncomplicated pregnancy between 12 and 42 weeks, and at least one prenatal consultation prior to the current consultation. The exclusion criteria were as follows: twin pregnancy, diagnosis of gestational diabetes, or complications influencing physical activity or eating behavior. Ethical approval was obtained from University Hospital Universitair Ziekenhuis Leuven, Belgium (B243201628083). Stage II: App Development The content of the app was based on the nutritional recommendations of the Superior Health Council of Belgium and the Institute of Medicine guidelines for gestational weight gain [20]. Additionally, guidelines from the Flemish Institute for Healthy Living and results from discussions with experts (clinicians, researchers, and policy makers) on the INTER-ACT external advisory board contributed to the content of the app. Furthermore, the principles of motivational interviewing techniques, goal setting, and positive messaging were incorporated in the app. User-experience researchers designed the INTER-ACT app (Figure 2) according to usability heuristics, state-of-the-art insights from the domain of human-computer interaction, research experiences from previous mHealth projects and technologies [21], and results from the interviews and focus groups described in the first stage. The participants could use INTER-ACT to monitor mental wellbeing, set goals on physical activity and healthy eating, and record progress on these goals. Additionally, Bluetooth connections were made with the Withings Go activity tracker (model WAM02; Withings, Issy-les-Moulineaux, France) and the Withings Body+ weighing scale (model WBS05; Withings) in order to track physical activity and weight, respectively. Tips and motivating messages to support weight management, physical activity, healthy eating, and mental wellbeing were created, and an algorithm was developed to send these messages to the participants according to their input. Custom tips could be added by the researchers and sent to specific participants. The user interface of the app is designed according to the principle of conversational interfaces [22]. All content in the app is structured as a conversation between the user and the system in a chronological stream of messages (eg, a new step count or weight) (Figure 3). Messages are clickable, and clicking opens a page that provides additional information regarding the clicked message (eg, a weight graph). This approach allows the combination of both automatic input (from the weighing scale and activity tracker) and manual input (from entered mood), the display of feedback on achieved goals, and the display of reminders after a period of nonuse in a dynamic way. The first prototype of the app was tested for functionality and feasibility by two pregnant women and a multidisciplinary team involving a professor of gynecology, a professor of midwifery, a biostatistician, a psychologist, two lifestyle coaches, and a group of app developers. They provided feedback regarding the design from medical, wellbeing, and technical perspectives.
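To make the tailoring logic mentioned above concrete, here is a minimal sketch of how goal-based message selection of this kind can work. It is entirely hypothetical — the actual INTER-ACT algorithm is not published in this text — and the thresholds, message texts, and the 1-5 mood scale are illustrative assumptions:

```python
def pick_message(steps_today: int, step_goal: int, mood: int) -> str:
    """Choose one supportive message from a participant's daily input."""
    if mood <= 2:                      # low mood on an assumed 1-5 scale
        return "Be kind to yourself today; a short walk can lift your mood."
    progress = steps_today / step_goal if step_goal else 0.0
    if progress >= 1.0:
        return "Step goal reached - great job staying active!"
    if progress >= 0.5:
        return f"You are over halfway to {step_goal} steps. Keep going!"
    return "A 10-minute stroll after dinner adds up quickly."

# Example: 4200 of 8000 steps logged, mood 4 -> halfway encouragement.
print(pick_message(4200, 8000, mood=4))
```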
An iterative process of adaptation led to the development of the INTER-ACT app, which is being used in an ongoing RCT. To ensure privacy and data security, the data are stored in a database hosted at a secure data center at Katholieke Universiteit Leuven. The database can only be accessed by our Application Programming Interface via a Structured Query Language connection. Security between the Application Programming Interface and end users involves a username and password system to keep the approach user friendly; in the background, this system is supported by token-based authentication to prevent password theft. For data transfer with the external system, authentication is handled via the Withings OAuth2 system (Withings). Stage III: Field Evaluation The app was assessed in a qualitative field evaluation study, in which the technical functionality and user experience were explored. We recruited two pregnant and seven postnatal women (<6 months after delivery) through social media. During a home visit, the researchers installed the app, set up the Withings Go activity tracker and Withings Body+ weighing scale, and provided a short explanation of the app functions. The women used the app and devices for 3 weeks and were contacted at least once a week by telephone to address potential usability issues with the app and devices. In case of questions, the women could also contact the researchers by email and telephone. After the 3-week period, a semistructured interview was conducted during a home visit. Technical functionality issues, such as crashes, bugs, and connectivity issues with the activity tracker and weighing scale, were explored. The evaluation of user experience covered topics such as the content of knowledge- and skill-based elements; the content, number, and timing of notifications; the experienced accuracy of the activity tracker; and esthetics. The interviews were recorded and transcribed verbatim for analysis. The researchers' written notes of the observations made during the home visits, user feedback on the app, and reported user experiences were analyzed through an affinity diagram using Post-It notes. These insights allowed us to improve the app for a better user experience and prepare it for a full-scale field trial. Ethical approval was obtained for all studies, and informed consent was provided by all respondents (University Hospital Universitair Ziekenhuis Leuven, Belgium; B322201730956). Health Care Providers' Perspectives Qualitative semistructured interviews and focus group discussions revealed the barriers and facilitators experienced by health care providers, as well as their perspectives on mHealth-supported pregnancy and postpartum lifestyle coaching. According to health care providers, a low social background and educational level, economic difficulties, ethnic minority status, a different cultural or religious context, and insufficient knowledge about healthy eating were characteristics that needed attention when performing lifestyle coaching. Facilitating factors they experienced were women's motivation to change their lifestyle, awareness of their own responsibility, and self-control. Some health care providers were not convinced that an app would be effective in helping obese pregnant women acquire a healthier lifestyle, and they felt that it could even induce fear and anxiety.
From the open questionnaires (n=43) and interviews (n=15), the following three themes for coaching emerged: (1) in-depth communication training; (2) motivational techniques; and (3) behavioral change training, with specific attention to sensitive communication for vulnerable groups, including insights into their religious and cultural contexts. During the focus groups, midwives indicated a willingness to take up the role of a coach to empower women for a healthy lifestyle, but they lacked practical knowledge and skills to support vulnerable groups. They were not sure whether an app would be helpful in lifestyle coaching. However, they indicated that an app could be useful if combined with face-to-face coaching and not used as a tool to "monitor and control" women's behavior. Data collected in the app could facilitate a coaching session and could result in a conversation about healthy lifestyle issues. However, midwives expressed that they prefer to restrict their administrative work and do not want to spend time on integrating additional technologies. End Users' Needs Among the 50 pregnant women who completed the survey, 30 (60%) wanted personal advice from caregivers about a healthy lifestyle. Only 8 out of the 50 women (16%) indicated currently being counselled, mostly only regarding prenatal weight management (Table 4). Additionally, 45 out of the 50 women (90%) indicated that an app would help them to maintain a healthy lifestyle. Among the 50 women, 46 (92%) were eager to monitor their calorie consumption and 28 (56%) were eager to monitor physical activity goals using an app or diary. Moreover, among the 50 women, 45 (90%) indicated that they would like to self-monitor their mental wellbeing using a Likert scale with emoticons, and 39 (78%) indicated that encouraging messages might enhance their motivation. Furthermore, among the 50 women, 36 (72%) preferred the app to display and evaluate their actual weight and weight gain, including tailored feedback. All women preferred an app that could tell them what they could eat safely in pregnancy and that included food diaries, weekly shopping lists, and pictures with recommended portion sizes (Table 4). The women indicated that the attractiveness of the app might be enhanced by the addition of features regarding fetal development, an agenda for prenatal appointments, a checklist with hospital necessities, information on health risks for the mother and child, the ability to upload pictures and ultrasounds, a contraction counter, and a kick counter. Finally, women reported that they want their partners to be involved in the use of the app. Field Evaluation The qualitative user evaluation study showed a high user acceptance of the system, and participants reported an increased consciousness regarding physical activity, eating behavior, weight management, and mental wellbeing. The activity tracker, goal setting for nutrition, and regular push notifications were especially appreciated. Multiple users requested an increase in the number of notifications and suggested spreading them over the day instead of a single evening notification. Furthermore, users preferred to configure both the kind of reminder (steps, weight, mood, and goals based on the user's own behavior) and the timing. Participants who had another app and device installed on their smartphones besides the INTER-ACT app made comparisons between the two apps (eg, comparisons were made regarding the accuracy of the activity tracker).
Participants rarely felt that the Withings Go activity tracker was more accurate than the devices they already knew (eg, Fitbit). There were no such remarks regarding the weighing scale. Participants reported missing certain functionalities that other health- and weight-related apps incorporate, such as sleep tracking, heart-rate monitoring, and advanced food tracking and calorie counting. The esthetics of our study app were considered less modern or attractive when compared with today's standards. Despite these remarks, our participants noted important value in the INTER-ACT app when combined with face-to-face coaching. Table 4. Results from the survey of pregnant women (N=50). Principal Findings This paper reports on the development and evaluation of an mHealth app designed to help women improve their lifestyle during and between pregnancies. We found that pregnant women and health care providers valued the combination of the INTER-ACT app with face-to-face contact in supporting a healthy lifestyle. Personalized feedback from the system, delivered at different frequencies according to the targeted health behavior, is highly appreciated and increases awareness of healthy behavior. Health care providers stress the importance of considering the vulnerability of risk groups within their cultural and religious contexts when introducing mHealth apps. On one hand, midwives were keen to improve their knowledge and skills in sensitive communication and were interested in tools to enhance the intrinsic motivation for behavioral change. On the other hand, they reported reluctance to integrate new technologies, fearing a high practical and administrative workload. Comparison With Prior Work Few studies have been published about app development processes for weight management in pregnant women [22,23]. Some studies focused on preconception health only [24,25]; however, to the best of our knowledge, there are no studies on app development targeting women in the interpregnancy period. Participants in this study reported the need for mHealth as an addition to face-to-face contact. This is supported by the findings of a recent RCT comparing the effectiveness of face-to-face contact, mHealth, and a combination of face-to-face contact with mHealth for 5% weight loss in an obese population. The authors concluded that a conventional face-to-face weight loss program can partially be replaced by an mHealth program without losing effectiveness [26]. A healthy prepregnancy BMI is an important indicator for optimal pregnancy and birth outcomes [27]. Reaching women with unhealthy lifestyles in due time is a challenge, and evidence on the effects of preconception interventions for improving pregnancy outcomes in overweight and obese women is scarce [28]. Concurrently, health care providers indicate that they need more training and education about effective obesity communication and weight management practice [29,30]. Women themselves felt that tailored advice specific to their personal situation and weight monitoring would help them implement changes [31]. Both conclusions have been confirmed in this study. Hence, we developed the INTER-ACT protocol consisting of an mHealth-supported lifestyle program [19]. The INTER-ACT app monitors women's weight and physical activity through connections with a weighing scale and activity tracker. Eating behavior and mental wellbeing were both self-reported. Based on these data, algorithms provide continuous coaching through positive behavioral change techniques.
The app targets women with excessive weight gain in a previous pregnancy and can be a low-cost alternative to labor-intensive face-to-face programs for the prevention of postnatal weight retention and excessive gestational weight gain in the subsequent pregnancy. Well-designed intervention trials with attention to structure, method of information delivery, and look and feel are required to further assess the feasibility and effects of such a technology for this target population. A recent pilot mHealth-supported intervention study that included 40 postnatal women (6-16 weeks) showed that higher intervention adherence was associated with markedly lower body weight and percentage body fat [32]. It is known that self-monitoring and increased intervention adherence are associated with increased weight loss [33,34]. Concurrently, Herring and colleagues [35] showed that peer support and interaction by social networking in an mHealth app can increase intervention adherence in urban low-income mothers. The high variability in intervention adherence in both mHealth- [32] and non-mHealth-supported lifestyle interventions [7] indicates that it is important to address these barriers in the future through cocreation with end users. Strengths and Limitations A strength of this study is the mixed methods design used to explore the experiences and views of different health care providers, as well as pregnant women and mothers in the postnatal period. The iterative approach with user participation allowed us to adapt the content and functionality of the app. Limitations include possible selection bias, as experienced health care providers and motivated women participated in the pilot study. Besides, the rather short 3-week timeframe of the field evaluation left little room to establish the technical readiness of the app and could thus influence the crucial adherence to and compliance with the program in the longer run. Furthermore, developing tailored feedback is complex, and reaching deeper levels of tailoring needs more time than was available in this approach. However, the actual user evaluation showed that the INTER-ACT app increased awareness of behavioral change. Recommendations for upgrading the app include subsequent iterations with a focus on graphical design, improving stability and performance, making notifications and reminders configurable, and achieving optimal adherence and compliance for using the app and coaching program. Furthermore, an RCT is needed to validate the app, including the coaching program, for long-term use and health-related outcomes. Conclusion Health care providers appreciate the INTER-ACT app in combination with face-to-face contact; they emphasize the importance of reaching the most vulnerable groups and are keen on enhancing their sensitive communication skills. On the other hand, they are reluctant to take up the additional administrative tasks and technical issues that might accompany the implementation of the INTER-ACT app. Pregnant women and postnatal mothers value the combination of the INTER-ACT app with face-to-face coaching over more commercial and visually attractive apps. Technological readiness is crucial to refine the app before integration in an RCT. Future studies should evaluate the effectiveness of combinations of face-to-face programs and mHealth apps for this targeted population at risk.
Diet composition, not calorie intake, rapidly alters intrinsic excitability of hypothalamic AgRP/NPY neurons in mice Obesity is a chronic condition resulting from a long-term pattern of poor diet and lifestyle. Long-term consumption of a high-fat diet (HFD) leads to persistent activation and leptin resistance in AgRP neurons in the arcuate nucleus of the hypothalamus (ARH). Here, for the first time, we demonstrate acute effects of HFD on AgRP neuronal excitability and highlight a critical role for diet composition. In parallel with our earlier finding in obese, long-term HFD mice, we found that even brief HFD feeding results in persistent activation of ARH AgRP neurons. However, unlike long-term HFD-fed mice, AgRP neurons from short-term HFD-fed mice were still leptin-sensitive, indicating that the development of leptin insensitivity is not a prerequisite for the increased firing rate of AgRP neurons. To distinguish between diet composition, caloric intake, and body weight, we compared acute and long-term effects of HFD and CD in pair-fed mice on AgRP neuronal spiking. HFD consumption in pair-fed mice resulted in a significant increase in AgRP neuronal spiking despite controls for weight gain and caloric intake. Taken together, our results suggest that diet composition may be more important than either calorie intake or body weight for electrically remodeling arcuate AgRP/NPY neurons. Activation of AgRP/NPY neurons is associated with increased food intake and positive energy balance 6-11, while POMC neuronal activation is associated with satiety and increased energy expenditure 7,10,12-14. Both AgRP/NPY and POMC neurons are modulated by a variety of peripheral factors, perhaps the best characterized of which is leptin, which potently inhibits AgRP/NPY neurons 12,15,16 and activates POMC neurons 12. Leptin is important for long-term regulation of body weight and energy homeostasis 17, but there is mounting evidence that the development of leptin insensitivity in the hypothalamus is a significant contributor to obesity 18-21. We recently showed that diet-induced obesity (DIO) significantly increases the intrinsic excitability of ARH AgRP/NPY neurons, resulting in persistently increased activity that is refractory to inhibition by leptin 22. In this study, our goal was to investigate the onset of the neuronal changes associated with DIO and determine whether the defects in the AgRP neuronal microcircuit arise as a result of increased body weight, high-fat diet (HFD), or both. We show here that changes in both AgRP neuronal excitability and projections to the paraventricular nucleus (PVH) occur rapidly upon brief exposure to an HFD and provide evidence that diet composition may play a more significant role in remodeling the electrical properties of AgRP/NPY neurons than body weight or caloric intake. Further, we show that the development of leptin resistance does not contribute significantly to these early diet-related changes, as AgRP/NPY neurons from mice briefly fed HFD are still robustly inhibited by leptin, suggesting that the diet itself influences the function of AgRP/NPY neurons. Results Short-term consumption of a high-fat diet induces persistently elevated output in arcuate AgRP/NPY neurons. In rodents, AgRP/NPY neuronal electrical activity (e.g., action potential frequency) is exquisitely sensitive to nutritional status: AgRP/NPY neurons from satiated mice fire quite slowly (<1 s⁻¹), while those from food-deprived (hungry) mice fire significantly faster (>3 s⁻¹) 22.
This switch in AgRP/NPY neuronal excitability is likely crucial for the physiological regulation of appetitive behaviors, as direct optogenetic 9,10,23 or chemogenetic 11,24-26 manipulation of these neurons has a rapid, potent impact on food intake in mice. We recently demonstrated that long-term (>8 weeks) consumption of a high-fat diet (HFD) dramatically remodels the intrinsic excitability of AgRP neurons: in brain slices from mice fed HFD for at least 8 weeks, AgRP/NPY firing was persistently increased, regardless of nutritional status, and was resistant to inhibition by leptin 22. Long-term consumption of HFD is associated with other physiological and pathophysiological changes such as obesity, increased adiposity, and altered hormone and nutrient levels 19,27-29, which may influence AgRP/NPY neuronal plasticity and/or leptin sensitivity. Therefore, in this study, we sought to determine whether these aforementioned changes in AgRP/NPY neuronal output were secondary to other HFD-induced changes (e.g., increased body weight and/or adiposity, hormone levels, etc.) or if the persistent activation we observed in our previous study is an early consequence of HFD consumption that precedes and potentially contributes to other sequelae of overweight and obesity. To address this question, we examined the correlation of the diet-induced increase in AgRP/NPY neuronal output with diet-dependent weight gain, to ascertain whether weight gain or other factors are required for electrical remodeling of AgRP/NPY neurons and the development of leptin resistance. To better define the timeframe during which feeding mice an HFD promotes inappropriate activation of AgRP/NPY neurons, we fed mice HFD ad libitum for 2-6 days and examined the excitability of AgRP/NPY neurons at each timepoint. Consistent with previous reports, we did not observe any significant change in body weight following brief (<6 days) exposure to HFD (see refs 30,31). As shown in Fig. 1, even this short period of HFD feeding was associated with a significant increase in the AP firing rate of AgRP/NPY neurons (CD: 0.9 ± 0.2 s⁻¹, n = 7; short-term HFD: 3.0 ± 0.9 s⁻¹, n = 16; p = 0.02), similar to what we observed in AgRP/NPY neurons from mice fed HFD long-term, suggesting that HFD-induced remodeling of AgRP/NPY neuronal excitability occurs rapidly following the switch from a lower-fat CD and may be independent of an increase in body weight. AgRP/NPY neurons from mice fed HFD short-term are still sensitive to inhibition by leptin. In lean mice on a low-fat CD, leptin potently inhibits AP firing in AgRP/NPY neurons 16,22,32-34, and this inhibition is significantly blunted in AgRP/NPY neurons from animals fed an HFD long-term 22. Previous reports suggest that HFD disrupts leptin receptor (LepR) signaling as early as 48 h after the switch from control diet, as assessed by leptin-dependent JAK2 phosphorylation of STAT3 and upregulation of SOCS3 27,30,31. Since one possible explanation for the elevated firing rate observed in HFD-fed mice is that leptin fails to inhibit AgRP neuronal firing due to the development of leptin resistance, we determined whether leptin inhibition of electrical activity in AgRP/NPY neurons is also disrupted after short-term HFD consumption. As shown in Fig. 2, bath application of 100 nM leptin to brain slices from CD-fed, fasted mice significantly inhibited AP firing (aCSF: 2.7 ± 0.4 s⁻¹; +leptin: 0.24 ± 0.07 s⁻¹; p = 0.0001, n = 7), consistent with previous reports 16,22,34.
Unlike our previous finding in mice fed HFD for at least 8 weeks 22, 100 nM leptin significantly inhibited AP firing in AgRP/NPY neurons from short-term HFD-fed mice (aCSF: 3.1 ± 0.5 s⁻¹; +leptin: 0.9 ± 0.3 s⁻¹; p < 0.0001, n = 11; Fig. 2), suggesting that the increased activity of AgRP/NPY neurons in these mice is not a consequence of leptin resistance. Consumption of a high-fat diet reduces AgRP⁺ neuronal projections to the paraventricular hypothalamus. Neurons in the PVH are a primary target of AgRP/NPY neurons, and inhibition of PVH output by AgRP/NPY neurons is necessary and sufficient for increased feeding in mice 23. Prior studies of the anatomical projections from ARH → PVH assessed the integrity of this critical pathway either in the offspring of DIO or DIO-resistant rats or in leptin-deficient ob/ob mice 35,36. While ARH AgRP → PVH projections are significantly decreased in both of these models, the developmental impact of impaired leptin signaling confounds the interpretation of the specific effect of diet and/or body weight on ARH AgRP innervation of the PVH. Therefore, to determine if consumption of a high-fat diet by adult wild-type mice also perturbs innervation of the PVH by ARH AgRP neurons and accompanies the effect we observed on AgRP/NPY neuronal output, we fed mice HFD for either 2 days or 8 weeks, and used AgRP immunoreactivity (AgRP-IR) in the PVH as an indirect measure of ARH-to-PVH innervation. As expected, we found dense AgRP-IR throughout the PVH in sections from lean, CD-fed mice (Fig. 3A). As previously described 36, we also observed a significant decrease in AgRP-IR in the PVH of ob/ob mice relative to CD-fed controls. As shown in Fig. 3, consistent with previous reports in the offspring of DIO rats, AgRP-IR was significantly decreased in the PVH of mice fed HFD for 8 weeks, a timepoint at which these animals are obese and leptin-resistant 19,22, indicating that long-term consumption of an HFD profoundly disrupts this critical circuit (8 weeks HFD: 37.48 ± 13.8, n = 3, Tukey's adjusted p = 0.0009). Since we found that even brief exposure to HFD was sufficient to alter AgRP/NPY neuronal output (Figs 1 and 2), we next determined whether short-term feeding of HFD also affected the density of AgRP-IR in the PVH. As shown in Fig. 3, after only 48 h of HFD feeding, AgRP-IR in the PVH was significantly reduced, to the same degree as observed in both ob/ob and 8 weeks HFD mice (48 h HFD: 22.41 ± 7.4, n = 3/group, Tukey's adjusted p = 0.0003), indicating that AgRP immunoreactivity in the PVH is diminished very early in the response to feeding an HFD. The loss of AgRP-IR in the PVH may reflect a physical loss of axonal projections from ARH AgRP neurons to the PVH or, alternatively, an effect of HFD on the AgRP protein itself (e.g., defective axonal trafficking or post-translational processing of the peptide). To distinguish between these possibilities, we generated an AgRP-Cre-tdTomato mouse by crossing the AgRP-Cre transgenic mouse 37, which expresses Cre recombinase under the control of the AgRP promoter, with a tdTomato reporter mouse, in which the fluorescent protein tdTomato is expressed behind the strong CAG promoter at the ubiquitous Rosa26 locus 38. Following Cre-dependent excision of a transcriptional blocker, tdTomato is strongly expressed only in AgRP neurons in these offspring.
Because soluble tdTomato is expressed from a different locus by a different promoter, it is unlikely that tdTomato trafficking to the PVH is affected by a possible HFD-induced defect in AgRP trafficking or processing. Consistent with this, we observed a qualitatively similar decrease in red fluorescence in the PVH of AgRP-tdTomato mice following either 3 days or 3 weeks of HFD feeding (Fig. 3C), suggesting that the decreased AgRP-IR in the PVH is due to a loss of axonal projections. The PVH is the principal target of AgRP neurons in the hypothalamus 39,40; thus, a significant loss of projections from ARH AgRP neurons may be reflected in a decreased synapse number in the PVH. To test this hypothesis, we performed immunohistochemistry for the synaptic marker synaptophysin (Syp), which is expressed by virtually all neurons in the brain and is widely used as an indirect measure of synaptic density 41,42. As shown in Fig. 3D, along with the decrease in AgRP-IR and tdTomato fluorescence, there was also a significant decrease in the intensity of Syp-IR, suggesting that after only 3 days of high-fat feeding there is a physical loss of synapses in the PVH. Diet composition, not calorie intake, induces persistent activation of AgRP neurons in HFD-fed mice. In the early phase of high-fat feeding, mice exhibit hyperphagia 31; thus, in the first 1-2 weeks of HFD, mice consume both more dietary fat and more calories. It is therefore possible that the HFD-induced plasticity we observed in AgRP/NPY neurons is due, at least in part, to increased calorie consumption. To dissect the effect of increased dietary fat from that of increased caloric intake, we yoked a cohort of age-matched HFD-fed NPY-GFP mice (HFD-CR) to a group on CD (CD-CR) such that the HFD group was given the same daily calories as the CD group (~14 kcal/mouse/day). The CD group was also given a ~14 kcal daily portion of food to control for the change from ad libitum to restricted feeding. As shown in Fig. 4A, even on a restricted-calorie diet, mice fed HFD still gained a significant amount of body weight compared to the CD-CR group (note that the food-restricted CD group exhibits less age-related weight gain than ad libitum CD-fed mice), suggesting that diet composition alone is sufficient to alter energy balance in mice and promote weight gain. We next investigated whether consumption of an isocaloric but high-fat diet altered electrical excitability in ARH AgRP/NPY neurons. As shown in Fig. 4B-D, just as in ad libitum HFD-fed mice, caloric restriction of an HFD induces a similar persistent activation of AgRP/NPY neurons (CD-CR: 0.8 ± 0.2 s⁻¹, n = 8; HFD-CR: 4.3 ± 1.3 s⁻¹, n = 7; p = 0.0006, Mann-Whitney U test). Since the HFD-CR mice still gained weight in spite of long-term calorie restriction, there remains the possibility that increased body weight and/or adiposity may contribute to the persistent increase in AgRP neuronal firing in these mice. Therefore, to control for the effects of diet, calories, and body weight, we fed a separate group of age-matched male mice HFD-CR for only 2 days, a time point at which HFD-fed mice have not yet gained weight (Fig. 4A). CD mice were also calorie restricted for 2 days. As shown in Fig. 4A,B,
even after only 2 days of restricted-calorie HFD, AgRP neuronal firing was significantly increased, from 1.6 ± 0.25 s⁻¹ (n = 24) in 2-day CD-CR mice to 4.0 ± 0.3 s⁻¹ (n = 28) in 2-day HFD-CR mice (p < 0.0001), suggesting that diet composition alone can alter neuronal excitability and AP firing in ARH AgRP/NPY neurons. Discussion In lean animals, AgRP/NPY neuronal firing is exquisitely sensitive to nutritional status and peripheral metabolic and nutritional signals: neuronal output increases in response to food deprivation 6,7,15,22,23,34 and circadian timing 25 and is potently inhibited by hormonal signals such as leptin 15,16,22. We recently reported that long-term (>8 weeks) consumption of an HFD is associated with disruption of this intrinsic plasticity: AgRP/NPY neurons from DIO animals were persistently activated, regardless of nutritional status, and refractory to inhibition by leptin 22. In this study, our goal was to determine the time course of altered AgRP neuronal firing and leptin resistance following HFD exposure and to determine the relative contribution of diet composition versus caloric intake to the onset of these defects in neuronal function. We found that the diet-dependent increase in AgRP neuronal output occurs very quickly, within 48 h of consuming HFD, and that the composition of the diet is itself a significant contributor to the development of hypothalamic neuronal dysfunction, as we observed increased AgRP neuronal output in neurons from mice fed a defined-calorie, high-fat diet. Our finding that AgRP neurons from mice fed HFD for a short time were still robustly inhibited by leptin, despite their persistent diet-dependent activation, was somewhat surprising, as several other studies have demonstrated that leptin resistance occurs shortly after exposure to HFD, on a time scale similar to what we used here (2 days in ref. 30 and 6 days in ref. 27). However, those studies used biochemical assays of LepRb function to assess leptin sensitivity, namely upregulation of SOCS3 along with impairment of STAT3 phosphorylation, whereas we directly measured the leptin-dependent inhibition of neuronal firing in AgRP neurons. In our previous study, we found that the LepRb-mediated inhibition of AgRP neurons occurs at least in part through modulation of K⁺ channels via a Src-family kinase 22. Thus, it is possible that the signal transduction pathway that mediates leptin-dependent AgRP neuronal inhibition is distinct from the JAK/STAT/SOCS3 pathway, raising the intriguing possibility that these effects are differentially sensitive to the impact of DIO-dependent increases in serum leptin. Our results also support the hypothesis that hypothalamic leptin resistance alone cannot account for the persistently increased firing of ARH AgRP neurons and that the increased excitability of these neurons must be due to something else, such as another hormone (e.g., insulin) or perhaps some aspect of the high-fat diet itself. One possible mechanistic explanation for our finding is that high-fat feeding, even on the short time scale described here, may cause disruption of mitochondrial dynamics, thereby contributing to altered neuronal excitability, as demonstrated by Dietrich et al. (2013) 43. Similar to what we show here (Fig. 4A), Petro et al. demonstrated that mice pair-fed a high-fat diet still gained significant amounts of weight compared to control mice fed a low-fat diet 44. Thus, dietary fat itself can cause obesity independently of caloric intake.
Our results extend this finding to show that even caloric restriction of an HFD remodels the neurons that regulate appetite and that this remodeling occurs very quickly, within 48 h. Further, since the additional weight in the HFD-CR mice cannot be coming from increased intake, we postulate that it is due to a decrease in energy expenditure, highlighting the importance of AgRP neurons and the circuits they are part of (e.g., the melanocortin system) in regulating not only food intake but also energy expenditure. In addition to the increased electrical activity of AgRP neurons in the ARH, we also found that anatomical projections of AgRP neurons from the ARH to the PVH are reduced in both short- and long-term HFD-fed mice, suggesting these projections are altered in response even to acute changes in diet, in the absence of measurable body weight changes. The neurotrophic action of leptin has been implicated in the development of ARH AgRP → PVH projections, as these projections are significantly decreased in both ob/ob mice and offspring of DIO rats 35,36. The mice used in our study were wild-type mice not genetically prone to obesity, demonstrating that the loss of ARH AgRP → PVH projections can occur even in adult animals in response to an obesogenic challenge and highlighting the plasticity of these neurons and their sensitivity to energy balance signals. In summary, we demonstrate here that changes in the intrinsic neuronal output and anatomical projections of ARH AgRP neurons occur rapidly following exposure to a high-fat diet and that these changes depend on the fat composition of the diet rather than on caloric intake. Interestingly, although leptin resistance in the LepRb-JAK/STAT/SOCS3 pathway in AgRP neurons develops within a timeframe similar to the one we examined here, the leptin-dependent inhibition of AgRP neuronal electrical activity is not yet impaired, suggesting that the timing of diet modification may be an important consideration for therapeutic approaches to obesity. Methods Animal care. All animal care and experimental procedures were performed in accordance with a protocol approved by the Institutional Animal Care and Use Committee (IACUC) at the University of Tennessee Health Science Center (14-056.0). Mice were housed at 22-24 °C on a 12 h light/dark cycle (lights on at 6:00 AM). All electrophysiology studies described here used transgenic hrGFP-NPY mice in which humanized Renilla green fluorescent protein (hrGFP) is expressed under the control of the murine NPY promoter 45. Experiments involving immunohistochemical analysis of ARH AgRP projections to the paraventricular nucleus of the hypothalamus (PVH) used age-matched non-transgenic littermates of the hrGFP-NPY mice, ob/ob mice, or C57Bl6/J mice. Adult (>8 weeks) male mice were used for all experiments. Mice fed HFD long-term (>6 weeks) were started on the high-fat diet at 6 weeks of age and maintained on HFD until they were 12-14 weeks old. Control diet (CD) fed mice were fed a standard pelleted rodent chow (Teklad 7912, 17 kcal% fat, 3.1 kcal/g metabolizable energy). For some experiments, age-matched littermates were fed a high-fat diet (HFD; D12451, 45 kcal% fat, 4.5 kcal/g metabolizable energy, Research Diets, Inc.) for 2-6 d.
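As a quick sanity check on the pair-feeding scheme described in these methods, the stated energy densities translate the ~14 kcal/day ration into daily food masses as follows (a back-of-envelope calculation of mine, not taken from the paper):

```python
# Daily ration in grams implied by ~14 kcal/day and the stated energy
# densities (CD, Teklad 7912: 3.1 kcal/g; HFD, D12451: 4.5 kcal/g).
KCAL_PER_DAY = 14.0
for diet, kcal_per_g in [("CD (Teklad 7912)", 3.1), ("HFD (D12451)", 4.5)]:
    print(f"{diet}: {KCAL_PER_DAY / kcal_per_g:.1f} g/day")
# -> CD: about 4.5 g/day; HFD: about 3.1 g/day
```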
There was no significant difference in firing rate or leptin sensitivity in neurons from mice fed HFD for up to 6 days, so all time points were collapsed into a single group and presented as "short-term HFD", except for the immunohistochemistry experiments, in which mice were fed HFD for either 48 h or 8 weeks. Both CD and HFD were administered ad libitum, and water was freely available at all times. Mice that were fasted had all of the food removed from the cage just prior to the start of the dark cycle (6:00 PM) and were food-deprived for no more than 16 hours; water was freely available. For experiments involving calorie restriction, we measured the daily food intake of CD-fed mice for one week (average daily intake per mouse ~14 kcal/day). A measured amount of food corresponding to ~14 kcal/day of either CD or HFD was added to each cage in the afternoon prior to "lights-off", beginning at 6 weeks of age and continuing for 7 weeks. To control for the switch from ad libitum to restricted feeding, CD-fed mice were given ~14 kcal/day of CD. For all groups, water was freely available at all times. All mice were weighed weekly as well as just prior to use in experiments. In all experiments, mice were euthanized and slices prepared between 9 and 10 AM to minimize circadian variation in feeding. Electrophysiology. Slice preparation. Adult male mice (8-16 weeks old) were deeply anesthetized using isoflurane prior to decapitation and rapid removal of the brain. The brain was then immediately submerged in ice-cold, oxygenated cutting solution (in mM: 80 NaCl, 90 sucrose, 3.5 KCl, 4.5 MgSO4, 0.5 CaCl2, 1.25 NaH2PO4, 23 NaHCO3, and 10 glucose). The brain was blocked for sectioning, and 250 μm coronal slices were cut using a Vibratome (VT1000S, Leica). Sections containing the arcuate nucleus were incubated in oxygenated cutting solution for at least 1 h prior to recording. Slice recording. Slices were transferred to a recording chamber constantly perfused (~2 ml/min) with oxygenated artificial cerebrospinal fluid (aCSF, in mM: 119 NaCl, 2.5 KCl, 1 MgSO4, 2.5 CaCl2, 1.25 NaH2PO4, 23 NaHCO3, and 10 glucose). Fast synaptic neurotransmission was blocked using 100 μM picrotoxin, 10 μM CNQX, and 20 μM D,L-AP5 to inhibit GABA-A receptors, AMPA-Rs, and NMDA-Rs, respectively, in order to isolate spontaneous, intrinsic action potentials in AgRP/NPY neurons. GFP-positive AgRP/NPY neurons were identified using epifluorescence and standard GFP filters on a fixed-stage Olympus BX-51WI microscope equipped with an XM-10IR CCD camera (Olympus America, Inc.). All recordings were performed using a Multiclamp 700B amplifier interfaced via a Digidata 1440 digitizer and controlled using Clampex 10 (Molecular Devices). Data were low-pass filtered at 5 kHz using the built-in 4-pole Bessel filter of the Multiclamp and digitized at 20 kHz. Pipette capacitance was nulled following the formation of a GΩ seal in all experiments. Recording pipettes were prepared from filamented, thin-wall glass (TW150, World Precision Instruments) and had a resistance of 5-7 MΩ when filled with intracellular solution (in mM: 130 K-gluconate, 10 KCl, 0.3 CaCl2, 1 MgCl2, 1 EGTA, 3 MgATP, 0.3 NaGTP, 10 Na-phosphocreatine, and 10 HEPES; pH 7.35 with KOH). The liquid junction potential (LJP) between the aCSF and intracellular solution was measured to be 14.2 mV; membrane potential recordings were corrected for the LJP off-line.
All current-clamp recordings were performed at 32-34 °C, and membrane potential and Scientific RepoRts | 5:16810 | DOI: 10.1038/srep16810 spontaneous action potentials were recorded for at least 10 minutes. Neurons that did not exhibit spontaneous activity within 2 minutes were not included in the analysis. Leptin (100 nM, National Hormone and Peptide Program) was bath applied for 60-80 seconds, after which perfusion with normal aCSF resumed. For experiments involving leptin, a stable baseline was acquired for 2-3 minutes prior to the addition of leptin. Immunohistochemistry. Three adult (> 12 weeks old) male mice from each group (CD, ob/ob, 8 weeks HFD, 48 h HFD) were anesthetized with Avertin and transcardially perfused with 10% formalin. The '8 weeks HFD' mice were started on HFD at 6 weeks of age and were 14 weeks old when sacrificed for IHC. Brains were removed and postfixed overnight at 4 °C in formalin. Coronal sections (50 μ m) containing paraventricular nucleus of the hypothalamus (PVH) were cut on a vibratome (VT1000S, Leica). Sections were washed 3× for 15 min in PBS, then permeabilized and blocked in PBS + 0.25% Triton X-100 + 5% donkey serum + 1% IgG-and fatty-acid free BSA (Jackson Immunoresearch). Slices were incubated in goat anti-AgRP antibody (1:1000, Santa Cruz Biotechnology) for 3 h at room temperature, then washed and incubated with donkey anti-goat Alexa568 secondary antibody (1:400, Invitrogen) for 2 h at room temperature, then mounted on glass slides for imaging. Sections were imaged at 1024 × 1024 resolution using a Zeiss 710NLO confocal microscope. Optical section thickness, laser power, pixel dwell time, and detector settings were determined for the brightest section and then applied equally across all groups. All images were acquired using Zeiss ZEN software and offline image analysis was performed using the ZEN software and FIJI (ImageJ 2.0). An experimenter blind to the identity of the experimental groups performed image acquisition and analysis. Data analysis and Statistics. Electrophysiology. Action potential frequency was measured using Clampfit 10. Descriptive statistics and group differences were determined using Prism 6 (GraphPad). Action potential frequencies for Figs 1 and 2 were compared using a Kruskal-Wallis ANOVA with Dunn's multiple comparisons post hoc test. Action potential frequencies for calorie-restricted groups in Fig. 4 were compared using a Mann-Whitney U Test. All data are presented as mean ± SEM. A value of p < 0.05 was considered significant. Image analysis. To quantify AgRP immunoreactivity, three-dimensional confocal images containing PVH from approximately the same anatomical location were used per mouse for image analysis. Images were imported into FIJI and thresholded to generate a binary image (the same threshold value was applied to each image in the stack). The binary image was then skeletonized using the "Skeletonize" plugin included in FIJI to thin all objects above threshold to a line 1-pixel wide. Each image in the stack was skeletonized individually and the sum of the integrated fluorescence density of each skeletonized 2D image in the 3D stack used as the measure of AgRP immunoreactivity for each brain section. Group differences were determined using a one-way ANOVA with a Tukey's multiple comparisons post hoc test. A value of p < 0.05 was considered significant. All image analysis was conducted blind to the experimental groups.
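The AgRP-fiber quantification described above (one fixed threshold per stack, per-slice skeletonization, summed skeleton density across the 3D stack) maps naturally onto a few lines of scikit-image. The snippet below is a sketch of that pipeline with an assumed threshold value and synthetic input; it is not the exact FIJI workflow used in the study.

```python
import numpy as np
from skimage.morphology import skeletonize

def agrp_skeleton_density(stack: np.ndarray, threshold: float) -> float:
    """Sum of skeletonized signal over all 2D slices of a 3D confocal stack.

    Mirrors the described workflow: the same threshold for every slice,
    skeletonization of each binary slice, then summation across slices.
    """
    total = 0.0
    for img in stack:                  # iterate over 2D optical sections
        binary = img > threshold       # fixed threshold applied to each slice
        skeleton = skeletonize(binary) # thin objects to 1-pixel-wide lines
        total += skeleton.sum()        # per-slice integrated skeleton density
    return float(total)

# synthetic example stack (values are placeholders, not image data)
rng = np.random.default_rng(0)
stack = rng.random((5, 128, 128))
print(agrp_skeleton_density(stack, threshold=0.95))
```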
2018-04-03T04:10:51.138Z
2015-11-23T00:00:00.000
{ "year": 2015, "sha1": "aa64613c60577481202d3b9efd505784aa212ad2", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/srep16810.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "aa64613c60577481202d3b9efd505784aa212ad2", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
256425928
pes2o/s2orc
v3-fos-license
Investigation of mechanical properties and surface roughness of friction stir welded AA6061-T651 Friction stir welding (FSW) of 6-mm-thick plates of AA6061-T651 was carried out using a simple cylindrical pin tool. The impact of welding factors (rotational speed, welding speed) on tensile properties, microhardness, and surface roughness of FSW joints was investigated. Ultimate tensile strength (UTS), yield strength, and % elongation of AA6061-T651 base plate as well as FSW joints were found out using a universal testing machine (UTM). Maximum value of UTS and yield strength were achieved at rotational speed of 1400 rpm and welding speed of 20 mm/min. Minimum surface roughness was reached at rotational speeds of 1400 rpm and welding speed of 20 mm/min. Microstructural evolutions in the friction stir welded (FSWed) joint and microhardness profile were also determined. Maximum hardness of HV 120 was acquired for the stir zone (SZ). Hence, attainment of the maximum tensile strength, microhardness, and minimum surface roughness during FSW is a desired method to improve the service life and suitability of AA6061-T651. Introduction Working on a surface is affected by surface roughness. In most of the cases, failure of part starts on the surface. This is due to either incoherence or decline of the surface quality. Surface must be within limits of variations. The service qualities of the welded Al-alloy joints in the aerospace industry can be improved using FSW, which was initially used for welding difficult-to-weld high strength Al-alloys (Cam, Ipekoglu, & Tarık Serindag, 2014;Bhushan & Sharma, 2019). FSW is a solidstate welding process. FSW is also utilized to weld hollow objects, such as pipeline and containers. Components with three-dimensional profile are also joined using FSW. This technique is utilized to make butt, corner, lap, T, spot, and fillet junctions. The fracture of the welded joints under fluctuating loading conditions is initiated on the surface of the component. Hence, the surface roughness significantly affects the fatigue of the welded joint. As per Mishra and Ma (2005) in FSW a revolving tool with a specifically designed pin is placed into sides of plates to be joined. The tool then moves alongside the joint line, and plates are welded. According to Khaled (2005) when fastened joints are replaced with FSW joints, the result is weight and cost savings. Weight and cost savings are essential for aerospace industry. Weight savings are achieved due to the elimination of the fasteners. Cost savings are achieved because of a decrease in design, manufacturing, assembly, and maintenance time. İpekoğlu, Erim, and Çam (2014) welded AA6061 plates in O and T6-temper state FSW in buttposition. After post-weld heat treatment, mechanical properties of FSW joints significantly improved with respect to as-welded plates and respective base plates. Cam et al., 2014 concluded that chemistry of dynamically recrystallized zone (DXZ) of AA6061-T6 joints may be improved by utilizing high strength interlayer during FSW. This leads to substantial increase hardness of DXZ. He, Ling, and Li (2016) conducted FSW of AA6061 tick plates. They concluded that with rise in rotation speed, values of tensile longitudinal residual stresses enhanced marginally. Longitudinal tensile stresses were observed at the edge of the shoulder in AS of joints. Dorbane, Ayoub, Mansoor, Hamade, and Imad (2017) carried out FSW of AA6061 plates. 
Examination of samples showed that in temperature range of 25-200°C locus of failure initiates at the region between thermomechanicalaffected zones (TMAZ) and heat-affected zone (HAZ). However, at higher temperatures (300°C), failure arises in stir zone. Zhou et al. (2019) carried out dual-rotation FSW of AA6061-T6. The impact of rotation speed on microstructure and mechanical properties of joints was examined. Defect-free joints were achieved under process parameters used. This work is unique in the sense that research has been done about the microstructure and microhardness of FSW joints and to find out the value of process parameters to achieve best mechanical properties and minimum surface roughness of FSW joints of AA6061-T651, as this affects the corrosion behavior, fatigue strength, and life cycle of FSW joints. This research work has been carried at lower advance per revolution. Material and methods Workpiece material for FSW AA6061-T651 sheet of size 1200 mm × 300 mm × 6 mm was identified for FSW. Plates were further cut into to size 100 mm × 50 mm × 6 mm. Press cutting machine was used for this. These workpieces are shown in Fig. 1. Chemical composition of the AA6061-T651 plates was obtained using optical emission spectrometer. This is tabulated in Table 1. FSW machine FSW was carried out on modified HMT-FN2V vertical milling machine. Special fixture was fabricated to grip plates in the desired location. Setup is displayed in Fig. 2. Fabrication of FSW tool Tool geometry, tool size, and tool material are main criteria during FSW. Melting point and hardness of tool should be higher than the plates. Tool used in this work is shown in Fig. 3. Tool was manufactured from 25-mm diameter circular rod of high carbon high chromium D2 tool steel. Chemical composition of the high carbon high chromium D2 tool steel is tabulated in Table 2. Friction stir welding of AA6061-T651 Specimens to be joined using FSW were held on a specific fixture. This was done in order to stop the movements during the welding. A pilot hole of diameter 6 mm was drilled, at 10 mm from the last edge along the weld line. Due to this hole, tool easily plunged into the plates. Fixture was secured on table of milling machine. FSW tool was attached on collet of machine spindle. Spindle was rotated at required speed. Rotating tool pin moved into the plates till shoulder reached upper surface of plates. Length of tool pin was kept slightly shorter than thickness of plates. This is to prevent over plunging and tool breakage. Dwell time of 5 s was provided to the tool after the plunge. As a result of this, adequate frictional heat was generated. Tool moved forward along weld line at required welding speed. After welding was completed, tool was pulled out of plates 10 mm before reaching last edge of plates. Workpieces welded utilizing changed process parameters are displayed in Fig 4. Process parameters are tabulated in Table 3. Microstructure Samples were selected from weld nugget. Samples were made ready for metallographic inspection using 220-320-500-1000 mesh emery papers. Then, polishing by with 2μm-sized diamond paste was done. Scanning electron microscope (SEM) attached with energy-dispersive spectroscopy (EDS) was used to study microstructure. Microhardness Vickers microhardness tester of maximum capacity 1000 gf was used for microhardness test. This is shown in Fig 5. 
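Because this work was deliberately carried out at a low advance per revolution, it is worth making that quantity explicit: it is simply the traverse (welding) speed divided by the rotational speed. The sketch below evaluates it for the rotational speeds explicitly quoted in the text (710, 900 and 1400 rpm) and the two welding speeds used (16 and 20 mm/min); the function name is ours, not from the paper.

```python
def advance_per_revolution(weld_speed_mm_min: float, rotation_rpm: float) -> float:
    """Tool advance per spindle revolution (mm/rev)."""
    return weld_speed_mm_min / rotation_rpm

for rpm in (710, 900, 1400):
    for v in (16, 20):
        apr = advance_per_revolution(v, rpm)
        print(f"{rpm:>4} rpm, {v} mm/min -> {apr:.4f} mm/rev")
# 1400 rpm at 20 mm/min gives ~0.014 mm/rev, the lowest advance per revolution here
```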
Tensile test Ultimate tensile strength (UTS), yield strength, and % elongation of base metal and friction stir welded plates of AA6061-T651 were found out. Tensile test specimens were cut from AA6061-T651 and as-welded workpieces from the joint area, according to ASTM E8 standards. Electrical discharge machine (EDM) wire cut machine was utilized to cut specimens. Specimens were mounted on UTM, and load was applied until the specimen broke. In each tensile test, 03 specimens were used. Average value of 03 readings was taken. Chemical composition test Chemical compositions were obtained using vacuum optical emission spark spectrometer. Surface roughness There is a need to measure surface roughness because crack if any will start from the surface. Surface roughness of the AA6061-T651 and as-welded workpieces was measured using surface roughness tester. Surface roughness of the workpieces along the full length of the weld bead was measured. Chemical composition results Composition of FSW joints was obtained using optical emission spectrometer. Results are given in Table 4. Optical emission spectrometry shows that the chemical composition of FSW joints is approximately same to that of base alloy as reported in previous studies (Kafli & Nuran, 2009). Microstructure-weld macrograph of FSW joint Weld zones are the result of thermomechanical activities experienced by various areas of the weld. FSW joint consists of four zones: (a) base metal (BM); (b) HAZ, where plates are influenced only by heat and no plastic deformation occurs; (c) TMAZ, where plates are influenced by heat and plastic deformation; and (d) weld-nugget/stir zone (SZ). The SZ consists of noticeable onion ring structures. Onion rings are due to consecutive shearing of semi-cylindrical plastic material layers from front of the tool and their accumulation at back of the tool. Therefore, grain structure development in SZ is a very complicated mechanism. This is primarily because of continuous dynamic recrystallization (CDRX) and geometric dynamic recrystallization (GDRX). Particle-simulated nucleation (PSN) also shows minor role in certain cases. The AA6061-T651 displays coarser elongated microstructure. In optical micrograph, grains are oriented along rolling direction (Fig. 6(a)). Dark spots expose etch pits in microscope due to etching. Grains are equiaxed. Grains are somewhat oriented in rolling direction. Figure 6(b) shows microstructure of stir zone in advancing side. Elements are uniformly distributed in the microstructure. Therefore, FSW joint has sound microstructure. No microscopic cavities or flaws are detected at the weld nugget areas, so it can be said that thermal flow of material is uniform. Analysis in Fig. 7 displays that main element at nugget zone are Al 97.52% and Si 0.84%. Microhardness Microhardness value of AA6061-T651 was 150 HV. The maximum hardness of HV 120 was achieved for SZ, while boundary between HAZ and TMAZ on advancing side showed hardness value of HV 81. A hardness loss was detected on advancing side (AS) and retreating side TMAZ region. Microhardness values of friction stir welded samples are tabulated in Table 5. Relation between macrostructure and microhardness Hardness distribution is not same in the weld area. It shows various FSW zones including SZ, TMAZ, and HAZ. The hardness of SZ is more than surrounding TMAZ and HAZ. Least microhardness is at boundary of HAZ and TMAZ on retreating side (RS). HAZ on advancing side (AS) likewise has a low hardness. 
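For reference, the tensile quantities reported below (UTS in kN/mm²) follow directly from the UTM record: UTS is the peak load divided by the original cross-sectional area of the ASTM E8 specimen, and each reported value is the average of three specimens. The sketch below uses placeholder loads and an assumed reduced-section area, since the specimen dimensions themselves are not repeated in this passage.

```python
def uts_kn_per_mm2(peak_load_kn: float, area_mm2: float) -> float:
    """Ultimate tensile strength = peak load / original cross-sectional area."""
    return peak_load_kn / area_mm2

# three placeholder specimens of one joint; area is an assumed example value
peak_loads_kn = [32.1, 31.4, 32.8]
area_mm2 = 75.0   # assumed 12.5 mm width x 6 mm thickness (illustrative only)
values = [uts_kn_per_mm2(p, area_mm2) for p in peak_loads_kn]
print(f"mean UTS = {sum(values) / len(values):.3f} kN/mm^2")
```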
This microhardness distribution has the "W" shape typical of a precipitation-strengthened alloy such as AA6061 (Zhou et al., 2019). The outer boundaries of this distribution rise continuously in microhardness until the natural plate hardness is reached. According to Table 5, the SZ has the maximum hardness compared with the neighboring regions. The comparatively high hardness of the SZ appears to be due to its fine equiaxed grains and the associated grain-boundary strengthening. The equiaxed grains result from dynamic recrystallization, while the low hardness of the TMAZ is a result of dissolution of strengthening precipitates, which occurs because of the high temperatures reached during the FSW process. The low hardness of the HAZ is a consequence of grain coarsening and the additional aging that occurred in this zone. Tensile test results Tensile test results of AA6061-T651 and the as-welded workpieces are shown in Table 6. Specimen nos. 1, 3, 5, and 7 were welded at a welding speed of 16 mm/min; specimen nos. 2, 4, 6, and 8 were welded at a welding speed of 20 mm/min. Figure 8 shows the change in UTS with variation in rotational speed at a constant welding speed of 16 mm/min, and Figure 9 represents the change in UTS with variation in rotational speed at a constant welding speed of 20 mm/min. It was found from the results that the maximum values of UTS and yield strength are obtained in FSW specimen no. 8. The maximum value of UTS for FSW specimen no. 8 is 0.447 kN/mm². These values were attained at a rotational speed of 1400 rpm and a welding speed of 20 mm/min. The UTS of the as-welded specimens ranges from 68 to 80% of that of the base alloy. The minimum value of UTS was obtained at 900 rpm and 16 mm/min, for specimen no. 3. The UTS and yield strength increase with increasing rotational speed for specimen nos. 2, 4, 6, and 8 at a constant welding speed of 20 mm/min. Similar results were also obtained for breaking strength (BS), yield strength (YS) and % elongation. All tensile specimens failed in the heat-affected zone close to the boundary of the TMAZ. Specimens after the tensile test are shown in Fig. 10. Surface roughness testing The effect of rotational speed on the surface roughness of the friction stir welded specimens was also investigated. Table 7 shows the surface roughness results of the AA6061-T651 FS welded specimens at different rotational speeds (710-1400 rpm), at a constant welding speed of 16 mm/min. As the rotational speed decreases, the roughness of the surface increases. The results revealed that the smallest average surface roughness of 6.84 μm was observed in specimen no. 8, i.e., at a rotational speed of 1400 rpm. The maximum average surface roughness of 9.07 μm was observed in specimen no. 3, i.e., at a rotational speed of 900 rpm. The surface finish of the weld region of all the welded specimens was found to be good. Figure 11 shows the change in surface roughness with variation in rotational speed at a constant welding speed of 16 mm/min. Table 8 shows the surface roughness results of the AA6061-T651 FS welded specimens at different rotational speeds (welding speed maintained at 20 mm/min). As the rotational speed decreases, the surface roughness increases. The minimum average surface roughness of 6.71 μm was observed in specimen no. 8, i.e., at a rotational speed of 1400 rpm and a constant welding speed of 20 mm/min. Figure 12 shows the change in surface roughness with variation in rotational speed, while the welding speed is constant at 20 mm/min.
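The statement above that as-welded UTS reaches 68-80% of the base-alloy value is a joint-efficiency figure, and it can be reproduced from Table 6-style data with a one-line ratio. In the sketch below the base-metal UTS is a placeholder implied by the quoted 80%, since only the welded-joint maximum (0.447 kN/mm² for specimen no. 8) is stated in this passage.

```python
def joint_efficiency(uts_weld: float, uts_base: float) -> float:
    """Joint efficiency (%) = UTS of the FSW joint / UTS of the base metal."""
    return 100.0 * uts_weld / uts_base

uts_weld_kn_mm2 = 0.447          # specimen no. 8, quoted above
uts_base_kn_mm2 = 0.447 / 0.80   # assumed base value if the joint reaches 80%
print(f"joint efficiency = {joint_efficiency(uts_weld_kn_mm2, uts_base_kn_mm2):.1f} %")
```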
The high rotational speed results in less roughness due to the high heat input. Microstructure aspects Sound FSW butt joints of AA6061-T651 specimens were obtained in this research work, demonstrating the distinctive capabilities of the FSW process. In FSW, the welding of the plates is accomplished by the heat generated by the rotation of the tool on the AA6061-T651 plates and by plastic deformation. The generated heat is consumed in softening the plates, which facilitates the flow of material. During FSW, material is shifted from the AS to the RS at the front of the tool, and material is moved from the RS back to the AS at the rear end of the tool. When material moves away from the AS it leaves an opening; as the tool rotates, material from the RS fills the opening created on the AS. This occurs if the amount of material moved to the AS from the RS is less than the amount of material removed from the AS. If the heat generated is too low, plasticization of the material is slow, the material flow is reduced, and the result is a defect in the stir zone. Conversely, if surplus heat is generated, turbulent flow of material takes place, which also leads to defects. Therefore, optimal heat generation is required to attain a sound joint, in agreement with Humphreys and Hatherly (2004). In addition to optimal heat generation, the material must be stirred by the pin geometry; this is necessary to obtain sound joints. The area in contact with the pin shoulder is the top part of the stir zone; it experiences heat generation and material flow produced exclusively by the rubbing of the tool. The stirred zone comprises fine and equiaxed grains for both FSW conditions. The fine recrystallized zone at the weld nugget results from substantial plastic deformation, followed by dynamic recrystallization due to thermomechanical processing. The SZ in AA6061-T651 has recrystallized grains with a substantially smaller grain size than the BM. The grains in the SZ of AA6061-T651 are considerably finer in spite of the larger original grain size of AA6061-T651, owing to the heavy plastic deformation and vigorous dynamic recrystallization experienced by AA6061-T651; similar observations were made by Murr, Liu, and Mcclure (1998). Al, with its FCC structure, has more slip planes available for deformation than Mg with an HCP structure, which increases the tendency of Al to deform plastically. Hence, the substantial grain refinement witnessed in the SZ of AA6061-T651 can be ascribed to major plastic deformation and the consequent heat input in AA6061-T651. The recrystallized microstructure in the SZ arises from dynamic recovery (DRV) and dynamic recrystallization (DRX): in a severely deformed microstructure, subgrains are formed by DRV and then develop into grains with high-angle grain boundaries (HAGBs) during DRX (Su, Nelson, & Sterling, 2005). The microstructure of the plates consists of coarse grains, a sufficient number of HAGBs, and a large number of precipitates such as Mg2Si (for AA6061-T651). Hence, continuous grain growth (CGG) of the dynamically recrystallized grains in the stirred zone on the AS is initiated; these grains are somewhat coarsened after plastic deformation because of static annealing during the weld cooling cycle. A distinct boundary exists between the stirred zone and the HAZ on the AS, similar to results stated in previous articles (Kumar, Yuan, & Mishra, 2015; Venkateswarlu, Nageswararao, Mahapatra, Harsha, & Mandal, 2015; Threadgill, Leonard, & Shercliff, 2009). This is in contrast with the RS of the weld joint.
There, the boundary is more diffuse and rather unclear, so the two zones cannot be easily differentiated. This occurs because the strain rate and temperature gradients are much sharper on the AS than on the RS. Shear plastic deformation in AA6061-T651 takes place within a shorter time, owing to the torsional and circumferential velocity fields with opposed directions in AA6061-T651. Surface roughness Surface roughness helps in assessing surface integrity and in identifying the function of a surface, because an important share of material failures initiates at the surface, due either to discontinuities or to deterioration of the surface quality. Surface finish also plays a significant role in corrosion resistance; a good surface finish improves performance and reduces the life-cycle costs of a component. At the interface between the AA6061-T651 joints, onion rings are seen. The spacing between the layers in the onion-ring structure corresponds to the advance of the tool in a single rotation. Therefore, it can be concluded that reduction of the surface roughness of FSW joints plays a significant role in governing the quality of FSW joints. The flow of material on the AS is unlike the flow on the RS. AA6061-T651 on the RS never enters the rotational zone near the pin, because the material on the AS forms a fluidized bed near the pin and revolves around it. In the transition zone, AA6061-T651 movement takes place mainly on the RS; this phenomenon is also supported by Li and Murr (1999). There is no flash on the RS, possibly owing to a lack of heat generation caused by the decrease of surface roughness; the plasticity of AA6061-T651 is therefore reduced, and it becomes difficult to extrude material below the shoulder. The width of the HAZ decreases steadily with the reduction in plate surface roughness: a reduction in FSW joint surface roughness leads to a fall in heat generation, so less heat is transferred to the HAZ region, which results in a decrease in the width of the HAZ. The degree of FSW joint surface roughness also has an important influence on grain refinement: the maximum grain size was attained with the highest FSW joint surface roughness, while the minimum grain size was attained with the lowest FSW joint surface roughness. Therefore, it is inferred that the grain size in the NZ is reduced when the AA6061-T651 surface roughness is reduced. Hirata et al. (2007) stated that the grain size in the NZ was reduced when the flow of frictional heat was reduced. Therefore, the decrease in plate surface roughness triggered a reduction in heat generation, which resulted in additional grain refinement. Microhardness Intermetallic formation, boundary energy, precipitate formation, and strain hardening of the FSW joint affect its microhardness. The extra hardness in the SZ of FSW joints is due to the increase of grain boundaries and fine grains: grain size is inversely related to hardness and strength, so the formation of fine recrystallized grains leads to an increase in hardness in the SZ. Similar results were also obtained by Guven and Cam et al. (2014). During welding, the heat generated in the SZ is transferred to the neighboring regions (TMAZ and HAZ). For a heat-treatable alloy such as AA6061-T651, strength and hardness mostly depend on the availability and distribution of precipitates.
Availability and distribution of precipitates in matrix are controlled by prevalent thermal conditions. Precipitation sequence of Al-Mg-Si 6xxx alloys is generally termed to be solid solution →GP→ß"→ß'→ß (Mg 2 Si). During solutionization, precipitates are dissolved in matrix and form super saturated solid solution upon cooling. Further aging leads to precipitation of a secondary phase which reinforces strength of aluminum alloy [Gallais, Denquin, Bréchet, & Lapasset, 2008]. Because AA6061 T651 is heat treatable Al alloy, hardness is largely ascribed to existence of precipitates. Thermal cycle in TMAZ region makes dissolution of strengthening precipitates, which shows decreased hardness in retreating and advancing side in the Table 5. This region is softer since solute additions trapped in second phases dissolve back into solid solution. It can be also said that heating and cooling thermal condition exists in TMAZ, making precipitation dissolve in matrix. According to Cam et al. (2014), a hardness reduction usually occurs in weld region of AA6061-T6 joints due to solution and/or coarsening of strengthening particles within TMAZ region and overaging in HAZ region. Scialpi, De Giorgi, De Filippis, Nobile, and Panella (2008) stated that HAZ has a peak temperature; therefore, results show decrease in hardness. The hardness of AA6061 T651 primarily is influenced by size and amount of precipitate particles. Maximum particles are mixed in TMAZ, and precipitates are partially mixed in stirred zone. Average grain size of TMAZ on AS was higher than that of SZ, whereas microhardness in SZ shows only a small enhancement as compared to that of TMAZ on AS. This is primarily because of existence of many precipitate particles in TMAZ. This outcome is in line with findings of Uzun, Dalle Donne, Argagnotto, Ghidini, and Gambaro (2005). Tensile strength Kim, Fujii, Tsumura, Komazaki, and Nakata (2006) stated that at constant rotational speed, mechanical opposition of joints increases with rise of travel speed. This is due to decreased heat input. Therefore, this research work considered less heat generation due to low welding speed. It is observed from Table 6 that UTS of all the 08 samples of FSW joints is less than the AA6061-T651 (BM). This is because of two reasons. First reason is influence of dissolved precipitates formed in FSW process. This results in decrease of tensile strength of joints. Second reason is robust tendency for intergranular cracking. This is as a consequence of the combination of weak grain boundaries and concentration of grain boundary stress. Therefore, cracks can propagate speedily along grain boundary. Hence, strength of the BM is higher than strength of welded joints. Ipekoglu and Cam et al. (2014) found that tensile strength of FSW joints were further less than lower strength of AA6061-T6 BM. Reason for this is coarsening of strengthening particles within DXZ and TMAZ regions and overaging in HAZ region. Increase in speed from 710 to 1400 rpm (at constant translation speed of 16 mm/min) linearly increases UTS from 0.25 to 0.43 kN/mm 2 , whereas increase in speed from 710 to 1400 rpm (at constant translation speed of 20 mm/min) linearly increases UTS from 0.37 to 0.44 kN/mm 2 . Increase in rotary speed increases heat input per unit weld length monotonically which leads to better bonding. This is the reason for increases in UTS with rise in speed. No significant change is observed in UTS with increase of translation speed from 16 to 20 mm/min. 
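The recurring argument that raising the rotational speed raises the heat input per unit weld length can be made quantitative with the commonly used FSW estimate: spindle power is approximately 2πNT/60 for speed N (rpm) and torque T (N·m), and dividing by the traverse speed gives the heat input per millimetre of weld. Torque was not reported in this work, so the value used below is purely an assumed placeholder to show the trend, not a measurement from the study.

```python
import math

def heat_input_per_mm(rpm: float, torque_nm: float, weld_speed_mm_min: float,
                      efficiency: float = 0.9) -> float:
    """Approximate FSW heat input per unit weld length (J/mm).

    Uses power = 2*pi*N*T/60 and divides by traverse speed; efficiency is the
    assumed fraction of spindle power converted to heat at the tool/workpiece.
    """
    power_w = 2.0 * math.pi * rpm * torque_nm / 60.0
    speed_mm_s = weld_speed_mm_min / 60.0
    return efficiency * power_w / speed_mm_s

torque_nm = 30.0  # assumed constant torque, for illustration only
for rpm in (710, 900, 1400):
    print(rpm, "rpm ->", round(heat_input_per_mm(rpm, torque_nm, 20), 0), "J/mm")
# at fixed torque and traverse speed, heat input per unit length grows with rpm
```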
Similar results were reported by Sharma, Dheerendra, and Pradeep (2012). Conclusions This paper presents the effect of welding parameters on the tensile properties, microhardness, and surface roughness of friction stir welded AA6061-T651. The conclusions drawn from this research work are: i. The UTS of the as-welded specimens reached up to 80% of that of the base alloy. The maximum UTS and yield strength were found at a rotational speed of 1400 rpm and a welding speed of 20 mm/min. ii. The surface roughness results for the welded region showed that rotational speed has a noteworthy effect on the surface properties of the welds. Surface roughness was lowest at the maximum rotational speed of 1400 rpm and a welding speed of 20 mm/min. iii. Microstructural analysis of the FSW joint shows a uniform distribution of particles. iv. The maximum hardness of HV 120 was attained in the SZ, while the boundary between the HAZ and TMAZ on the advancing side exhibited the lowest hardness value of HV 81.
2023-02-01T14:32:26.414Z
2020-06-06T00:00:00.000
{ "year": 2020, "sha1": "51940307886d51ccd89e6d450f4dc4112fc5d89a", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s40712-020-00119-x", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "51940307886d51ccd89e6d450f4dc4112fc5d89a", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
119399908
pes2o/s2orc
v3-fos-license
Gravastars and bifurcation in quasistationary accretion We investigate the Newtonian stationary accretion of a polytropic perfect fluid onto a central body with a hard surface. The self-gravitation of the fluid and its interaction with luminosity are included in the model. We find that for a given luminosity, asymptotic mass and temperature of the fluid there exist two solutions with different cores. Introduction The question we want to address in this paper is the following inverse problem: having a complete set of data describing a compact body immersed in a spherically symmetric accreting fluid, find the mass of the central body. We assume that we know the total mass, luminosity, asymptotic temperature, the equation of state of the accreting gas and the gravitational potential at the surface of the core. The fundamental question is whether observers can distinguish between gravastars [1] and black holes as engines of luminous accreting systems (see a controversy in [2,3]). While we do not address this problem here, we show a related ambiguity in a simple Newtonian model. The Shakura model The first investigation of stationary accretion of spherically symmetric fluids, including luminosity close to the Eddington limit, was provided by Shakura [4]. It was later extended to models including the gas pressure, its self-gravity and relativistic effects [5-8]. In the following we denote the areal velocity by U(r, t) = ∂_t R (where t is comoving time and R the areal radius), the local, Eddington and total luminosities by L(R), L_E and L_0, the quasilocal mass by m(R) and the total mass by M, the pressure by p, the baryonic mass density by ϱ (the polytropic equation of state will be p = Kϱ^Γ, 1 < Γ ≤ 5/3) and the gravitational potential by φ(R). The radius of the central body is R_0, while its "modified radius" is defined as GM/|φ(R_0)|. Under the assumption that the appropriate condition holds at the outer boundary of the fluid, we have the governing set of equations, in which α is a dimensional constant, α = σ_T/(4π m_p c). The details of solving the system are provided elsewhere [9,10] and we present only the main results here. We assume that the accretion is critical, i.e., there exists a sonic point where the speed of the accreting gas U is equal to the speed of sound a. All values measured at that point will be denoted with an asterisk. From the definitions introduced there one obtains the total luminosity, Eq. (6); χ_∞ is approximately the inverse of the volume of the gas located outside the sonic point. For the sake of brevity we rewrite Eq. (6) in a form using the relative luminosity, Eq. (7). Bifurcation For the relative luminosity fulfilling Eq. (7) we proved a theorem stating that two solution branches bifurcate from the point (a, b), that they can be approximated locally near that point, and that the relative luminosity x is extremized at the critical point (a, b). Discussion In the paper we have assumed the existence of an accreting system which satisfies certain conditions. Under those assumptions the complicated set of integro-differential nonlinear equations (2)-(5) can be simplified to an algebraic one, Eq. (7). We checked numerically that this simplification causes errors of the order of 10⁻³ (see [9] for details). The analysis of Eq. (7) shows that there exist two different solutions, having the same total luminosity and total mass, but different masses of the core objects. One can also conclude that for sufficiently large β the maximal relative luminosity a can get close to 1, i.e., the total luminosity approaches the Eddington limit.
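The dimensional constant α = σ_T/(4π m_p c) defined above is just the inverse Eddington factor: the Eddington luminosity of a mass M is L_E = 4πGM m_p c/σ_T = GM/α, and the relative luminosity used in the bifurcation analysis is the ratio of the total luminosity to L_E. A minimal numerical sketch in SI units follows; the one-solar-mass example and the assumed L_0 are ours, chosen purely for illustration.

```python
import math

G       = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c       = 2.998e8     # speed of light, m/s
m_p     = 1.673e-27   # proton mass, kg
sigma_T = 6.652e-29   # Thomson cross-section, m^2
M_sun   = 1.989e30    # solar mass, kg

alpha = sigma_T / (4.0 * math.pi * m_p * c)   # the dimensional constant from the text

def eddington_luminosity(mass_kg: float) -> float:
    """L_E = 4*pi*G*M*m_p*c / sigma_T, equivalently G*M/alpha."""
    return G * mass_kg / alpha

L_E = eddington_luminosity(M_sun)
L_0 = 1.0e30                       # assumed total luminosity, for illustration
print(f"L_E(1 M_sun) ~ {L_E:.2e} W, relative luminosity = {L_0 / L_E:.3f}")
```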
As the two solution branches bifurcate from the point (a, b), there is not much difference between the central masses of bright objects (see [9,10] for plots). However, when the luminosity is small (L_0 ≪ L_E), this difference can become arbitrarily large. This can be understood intuitively, because the radiation is small for test fluids (since the layer of gas is thin), or when the central object is light (and therefore only weakly attracts the surrounding gas). The results obtained here are consistent with a relativistic analysis neglecting the interaction between the gas and the radiation [11,12].
2014-10-01T00:00:00.000Z
2006-12-18T00:00:00.000
{ "year": 2006, "sha1": "11402ffb9d20ef048a7a635ed760676742bca1e1", "oa_license": null, "oa_url": "http://arxiv.org/pdf/astro-ph/0612492", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "11402ffb9d20ef048a7a635ed760676742bca1e1", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
16334346
pes2o/s2orc
v3-fos-license
C3N4-H5PMo10V2O40: a dual-catalysis system for reductant-free aerobic oxidation of benzene to phenol Hydroxylation of benzene is a widely studied atom economical and environmental benign reaction for producing phenol, aiming to replace the existing three-step cumene process. Aerobic oxidation of benzene with O2 is an ideal and dream process, but benzene and O2 are so inert that current systems either require expensive noble metal catalysts or wasteful sacrificial reducing agents; otherwise, phenol yields are extremely low. Here we report a dual-catalysis non-noble metal system by simultaneously using graphitic carbon nitride (C3N4) and Keggin-type polyoxometalate H5PMo10V2O40 (PMoV2) as catalysts, showing an exceptional activity for reductant-free aerobic oxidation of benzene to phenol. The dual-catalysis mechanism results in an unusual route to create phenol, in which benzene is activated on the melem unit of C3N4 and O2 by the V-O-V structure of PMoV2. This system is simple, highly efficient and thus may lead the one-step production of phenol from benzene to a more practical pathway. catalyst by using C 3 N 4 (580) or PMoV 2 alone. Table 1 shows that neither former nor later alone was able to transform benzene in the absence of reductants (entries 1 and 2). On the contrary, a phenol yield of 2.1% was achieved in the dual-catalysis system containing both C 3 N 4 (580) and PMoV 2 even with only a small amount of water solvent (2 mL) ( Table 1, entry 3). The phenol yield reached 9.1% by changing the solvent to 50 vol.% aqueous solution of acetic acid ( Table 1, entry 4), and arose to the maximum value of 13.6% using LiOAc as an effective additive (Table 1, entry 5) 5,18,20,21 . The above results were obtained at 4.5 h and 130uC optimized from our detailed investigations on various conditions (see Supplementary Fig. S6 online). Many results have been reported on the oxidation of benzene to phenol 5,[9][10][11][12][15][16][17] , but reductant-free aerobic oxidation of benzene is still scarcely reported so far. Compared to the previous results under the reductant-free condition, the phenol yield of 13.6% over C 3 N 4 (580)-PMoV 2 is more than three times higher than the yield of 3.7% over the nano-plate vanadium oxide catalyst at a longer reaction time (10 h) and a higher temperature (150uC) 28 , and even exceeds the yields on noble metal catalysts [e.g., the homogeneous Pd(OAc) 2 -PMoV x (X 5 1, 2, 3) gives the phenol yield around 10% 18,21 , which sharply drops to 3.4% when Pd(OAc) 2 is immobilized on porous supports for recovering 21 ]. Moreover, the turnover frequency (TOF) of our work 5.9 h 21 calculated by the definition mmol phenol/(mmol POM catalyst 3 h reaction time) is much higher than the POM-catalyzed systems with CO as the sacrificial reducing agent (1.5 h 21 ) 5 , or with ascorbic acid as the sacrificial reducing agent (0.86 h 21 and 2.0 h 21 ) 10,11 , convincing that our reductant-free cata-lysis is even more active than those reductant-aided systems. Therefore, the present non-noble metal catalytic system C 3 N 4 (580)-PMoV 2 shows a remarkably superior efficacy at the reductant-free condition. Heating melamine in air at high temperatures has been a common approach for preparing C 3 N 4 , so the influence of heating temperatures for melamine on this reaction is investigated. The XRD patterns of Fig. 
1a shows that heating melamine at 520uC and 550uC led to the formation of graphitic C 3 N 4 products of C 3 N 4 (520) and C 3 N 4 (550), similar to C 3 N 4 (580), but the low heating temperature 400uC resulted in melem, an intermediate toward C 3 N 4 29,30 . The non-C 3 N 4 -mediated systems of melamine-PMoV 2 and melem-PMoV 2 yielded no product (Table 1, entries 6 and 7). Though C 3 N 4 (520) and C 3 N 4 (550) were also inactive when used alone (Table 1, entry 1), their combination with PMoV 2 gave phenol yields of 0.3% and 6.1%, respectively ( Table 1, entries 8 and 9), much lower than 13.6% for C 3 N 4 (580)-PMoV 2 . The results prove that the C 3 N 4 sample obtained at the optimal temperature of 580uC is more active and in favor of the high phenol yield. We further explored catalytic systems containing C 3 N 4 (580) and other POMs. With the V-free POMs, i.e. H 3 PMo 12 O 40 (PMo) or H 3 PW 12 O 40 (PW), to company C 3 N 4 (580), no phenol product appeared (Table 1, entries 10 and 11), suggesting that the V species should be indispensable. Nonetheless, C 3 N 4 (580) with the non-POM vanadium species VOSO 4 caused an inactive system either ( Table 1, entry 12); as a consequence, it is the V species in POM framework that is synergically active with C 3 N 4 for this reaction. Moreover, when the other two less frequently used V-containing POMs (PMoV 1 and PMoV 3 ) were tested, the results show that C 3 N 4 (580)-PMoV 3 exhibited comparable activity to C 3 N 4 (580)-PMoV 2 , but C 3 N 4 (580)-PMoV 1 was definitely inactive (Table 1, entries 13 and 14), which means that not all the V species in POM framework can catalyze this reaction with C 3 N 4 . Discussion According to previous studies, the V species in V-POMs are well accepted as the catalytically active sites for versatile organic oxidations 31 . Particularly, for liquid-phase aerobic oxidations, PMoV 2 takes a catalytic effect through Mars-van Krevelen-type mechanism, where the lattice oxygen of PMoV 2 selectively oxygenates organic substrates via a valence variation between V 51 and V 4127,32 . Neumann and co-workers 27,32-35 have systematically studied series of PMoV 2catalyzed homogeneous oxidations, and based on the Mars-van Krevelen mechanism they propose that the isomers of PMoV 2 with vanadium atoms in adjacent positions (i.e. V-O-V structure) are more likely to form bridge defects, favoring higher activity in oxygen-transfer reactions. Therefore, only PMoV 2 and PMoV 3 with the highly active V-O-V structure in their frameworks can allow the occurrence of oxygen transfer in hydroxylation of benzene to phenol, while lack of V-O-V is responsible for the inactivity of PMoV 1 . Nonetheless, PMoV 2 or PMoV 3 alone cannot catalyze the reaction because of inertness of the substrate benzene, suggesting that C 3 N 4 should play a key role. Recently, Goettmann et al. 24,36 conclude an unusual activation of aromatic rings via transferring electron density from the melem unit of C 3 N 4 to arene based on reaction results plus DFT calculations. Besides, for the high-temperature gas-phase oxidation of benzene with O 2 over copper exchanged HZSM5, a bifunctional catalytic mechanism has been reported: phenol is produced from the simultaneous activation of benzene and O 2 on zeolitic acid and Cu metal sites, respectively 16,17 . From above analyses, a dual-catalysis mechanistic pathway is proposed for understanding the catalytic performance of C 3 N 4 -PMoV 2 in Fig. 2. 
Benzene is firstly catalytically activated by the melem unit of graphitic C 3 N 4 , forming a transitional intermediate of electron-enriched benzene ring. Immediately, the original oxidation state of PMoV 2 with V 51 species, designated as PMoV 2 [ox] , attacks the intermediate ring to produce phenol, wherein the lattice oxygen of a V-O-V structure in PMoV 2 [ox] moves into the benzene ring with the PMoV 2 [ox] thus being reduced to the V 41 -containing PMoV 2 [red] . Finally, the catalytic cycle is closed with the resume of PMoV 2 [ox] after O 2 re-oxidizes V 41 of PMoV 2 [red] into V 51 species. In the dual-catalysis mechanism above, the role of C 3 N 4 is activating benzene according to the previous finding that the p-conju-gated melem unit of C 3 N 4 could transfer electron density to aromatic rings 24,36 . It is further revealed that high temperatures for thermal condensation of melamine would enhance the p-conjugation by connecting more tri-s-triazine and extending the polymeric network of C 3 N 4 37 . The (002) diffraction peak of C 3 N 4 is assigned to the interlayer distance of its graphitic structure 30 . In our case, as shown in the magnification of XRD patterns in Fig. 1b, the gradual shifting of the (002) peak to larger degrees along with the raise of heating temperatures means the shortening of the stacking distance and thus the stronger overlap of p orbital in C 3 N 4 29,30 , indicating that the activation of benzene would be improved by a higher heating temperature up to 580uC. This accounts for the activity order C 3 N 4 (520)-PMoV 2 , C 3 N 4 (550)-PMoV 2 , C 3 N 4 (580)-PMoV 2 . On the other hand, melem-PMoV 2 is inactive because melem itself has no graphitic characteristic of C 3 N 4 30 . Also according to the mechanism in Fig. 2, the catalyst PMoV 2 will remain in its reduced state PMoV 2 [red] as the reaction occurs in O 2deficient environment. Thus we conducted a separate run by introducing a much less amount of O 2 (0.3 MPa) (see Supplementary Information) into the batch reactor. In this case, the recovered PMoV 2 was green and exhibited an eight-line signal in ESR spectra (Fig. 3), index of the reduced state PMoV 2 [red]5,10 , whereas the fresh and recovered PMoV 2 from O 2 -sufficient condition were orange and ESR silent, denoting the oxidation state PMoV 2 [ox] . The above phenomena and comparisons strongly evidence our proposal that there exists V 51 /V 41 switch during the reaction. Moreover, the activation and oxidation of benzene should occur simultaneously in this mechanism. In order to reflect this point, the well-known heterogeneous Cs salt of PMoV 2 , CsPMoV 2 10 , was tried as a partner with C 3 N 4 (580). Though CsPMoV 2 was as active as PMoV 2 (see Supplementary Table S1 online) 10 in the presence of the sacrificial reducing agent ascorbic acid, C 3 N 4 (580)-CsPMoV 2 was inactive in our reaction system (Table 1, entry 15). The SEM image for CsPMoV 2 (see Supplementary Fig. S5 online) shows a spherical morphology with spheres diameters being 800 , 900 nm. This bulk CsPMoV 2 may not contact well with another solid surface of C 3 N 4 (580), hindering the simultaneous attachment of substrate with the dual-catalyst. In other words, the intimate and efficient contacts among C 3 N 4 , benzene and PMoV 2 are essential for implementing the overall catalytic cycle, which further supports our mechanism. 
Besides benzene, the simplest alkyl aromatic molecule, toluene, was also attempted as the substrate to further investigate the catalytic behavior of C3N4(580)-PMoV2 for aerobic oxidation of aromatic rings (see Supplementary Table S2 online). C3N4(580) alone was inert in this system, and yet bare PMoV2 exclusively produced the methyl-oxygenated compounds benzaldehyde (7.7%) and benzyl alcohol (1.4%) due to side-chain oxidation. For reductant-free oxidations of alkyl aromatics, early studies reveal that oxidation of the benzylic C-H bond is preferred over the aromatic ring 9,38,39. On the contrary, the dual-catalysis system C3N4(580)-PMoV2 gave a desirable yield of cresols (0.4%) due to ring oxidation. This feature suggests that C3N4-PMoV2 enhanced the reactivity of the alkylated benzene ring, enabling ring oxygenation through the dual-catalysis mechanism in Fig. 2. Catalytic reusability was first investigated by recycling C3N4(580) alone (Fig. 4). The phenol yield slowly decreased from 13.6% for the fresh catalyst to 12.7% for the 1st, 9.8% for the 2nd, and still 6.2% for the 3rd recycling. The XRD pattern of the last recycled C3N4(580) indicates structural stability, as its diffraction peaks are identical to those of the fresh catalyst (see Supplementary Fig. S1 online). Therefore, the above decrease of phenol yield can be ascribed to tar deposition, according to the gradually darkened color (inserted photos in Fig. 4) and the variation of C content (see Supplementary Information) of C3N4(580) during the recycling process. In fact, tar is still an inevitable over-oxidation byproduct, because the main product phenol is more reactive than the substrate benzene 4,6,18. Even so, when C3N4(580), PMoV2 and LiOAc were simultaneously recovered (see Methods), the phenol yield was 10.3% and 6.5% for the 1st and 2nd recycling, and still 2.1% for the 3rd recycling (Fig. 4). All the above results demonstrate that the dual-catalysis non-noble metal system C3N4-PMoV2 provides a high phenol yield of 13.6% in reductant-free aerobic oxidation of benzene. A dual-catalysis mechanism involving the cooperative activation of benzene on the melem unit of C3N4 and of O2 by the V-O-V structure of PMoV2 is demonstrated for interpreting the catalytic results. The present dual-catalysis process appears to be simpler, much more efficient and cost-effective when compared with the currently available catalytic systems, paving a promising step towards practical application of the hydroxylation of benzene to phenol by molecular oxygen. Methods Materials and general methods. All chemicals were analytical grade and used as received. H3PMo12O40 (PMo) and H3PW12O40 (PW), purchased commercially, were dried before use. XRD patterns were collected on a Bruker D8 Advance powder diffractometer using a Ni-filtered Cu Kα radiation source at 40 kV and 20 mA, from 5 to 50° with a scan rate of 0.2° s⁻¹; before the measurements the samples were dried at 100 °C for 2 h. Elemental analyses were performed on a CHN elemental analyzer (FlashEA 1112). BET surface areas were calculated from the sorption isotherms measured at liquid-nitrogen temperature using a Micromeritics ASAP2010 analyzer; the samples were degassed at 300 °C to a vacuum of 10⁻³ Torr before analysis.
FT-IR spectra were recorded on a Nicolet 360 FT-IR instrument (KBr discs) in the 4,000-400 cm 21 region. ESR spectra were recorded on a Bruker EMX-10/12 spectrometer at X-band. The measurements were done at 2110uC in a frozen solution provided by a liquid/gas nitrogen temperature regulation system controlled by a thermocouple located at the bottom of the microwave cavity within a Dewar insert. Preparation of catalysts. Graphitic carbon nitride (C 3 N 4 ). The procedure for the synthesis of C 3 N 4 (580) is similar to the previous reports 40,41 . Melamine was transferred into a crucible and heated in a muffle furnace under air at a rate of 15uC/ min to reach the temperature of 580uC and kept at 580uC for 4 h, then the resulting yellow sample was cooled to room temperature in the oven. Melem, C 3 N 4 (520) and C 3 N 4 (550) were prepared by the similar method at 400uC, 520uC and 550uC, respectively. H 5 PMo 10 V 2 O 40 (PMoV 2 ). The Keggin-structured double V-containing POM was prepared according to the procedure described in our previous report 42 . The detail of the preparation of PMoV 2 procedure is as the following. MoO 3 (16.59 g) and V 2 O 5 (2.1 g) were added to deionized water (250 mL). The mixture was heated up to the reflux temperature under vigorously stirring with a water-cooled condenser, then at 120uC the 85 wt% aqueous solution of H 3 PO 4 (1.33 g) was added drop-wise to the reaction mixture. When a clear orange-red solution appeared, it was cooled to room temperature. The orange-red powder PMoV 2 was obtained by evaporation of the solution to dryness, followed with re-crystallizing for purification. Catalytic tests. The hydroxylation of benzene was carried out in 100 ml stainless steel autoclave equipped with a mechanical stirrer and an automatic temperature controller. In a typical test, 0.1 g C 3 N 4 , 0.4 g PMoV 2 , 0.6 g LiOAc, and 4.0 mL benzene were added into 25 mL of the aqueous solution of acetic acid (50 vol%) successively. After the system was charged with 2.0 MPa O 2 at room temperature, the hydroxylation reaction was conducted at 130uC K for 4.5 h with vigorous stirring. After the reaction, 1, 4-dioxane was added into the product mixture as an internal standard for product analysis. The mixture was analyzed by a gas chromatograph (GC) with a FID and a capillary column (SE-54; 30 m 3 0.32 mm 3 0.25 mm). Yield of phenol was calculated as mmol phenol/mmol initial benzene. Catechol, hydroquinone and benzoquinone were not detected by our GC analysis, so the tar that cannot be detected by the GC technique was the over-oxidation product. Recycling of the catalyst system. After the reaction, the reaction mixture was centrifuged and the solid C 3 N 4 (580) was recovered, followed by washing with acetic acid and dried in vacuum, and then reused in the next run. After the solid C 3 N 4 (580) was separated by centrifuging, water was added into the left liquid phase followed by extraction with isopropyl ether. The combined aqueous extracts were filtered and concentrated by evaporation under reduced pressure. The resulting solid mixture containing used PMoV 2 and LiOAc was obtained.
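The 13.6% phenol yield and the turnover frequency of 5.9 h⁻¹ quoted earlier can be cross-checked from the charged amounts given in this section (4.0 mL benzene, 0.4 g PMoV2, 4.5 h). The sketch below assumes a benzene density of 0.879 g/mL and a PMoV2 formula weight of roughly 1.74 × 10³ g/mol; both are textbook/estimated values rather than numbers taken from the paper.

```python
# Cross-check of phenol yield and TOF from the charged amounts (illustrative).
benzene_ml, benzene_density, benzene_mw = 4.0, 0.879, 78.11   # density/MW assumed
pmov2_g, pmov2_mw = 0.4, 1740.0                               # approx. H5PMo10V2O40
time_h, yield_frac = 4.5, 0.136                               # 13.6% phenol yield

mmol_benzene = benzene_ml * benzene_density / benzene_mw * 1000.0
mmol_phenol  = yield_frac * mmol_benzene
mmol_pom     = pmov2_g / pmov2_mw * 1000.0

tof = mmol_phenol / (mmol_pom * time_h)   # mmol phenol / (mmol POM * h)
print(f"benzene charged ~ {mmol_benzene:.1f} mmol, phenol ~ {mmol_phenol:.1f} mmol")
print(f"TOF ~ {tof:.1f} h^-1")            # ~5.9 h^-1, consistent with the text
```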
2018-04-03T05:33:48.069Z
2014-01-13T00:00:00.000
{ "year": 2014, "sha1": "8ea80c79e483ab1a46a2e506480484fa68eaf403", "oa_license": "CCBYNCND", "oa_url": "https://www.nature.com/articles/srep03651.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8ea80c79e483ab1a46a2e506480484fa68eaf403", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
15116837
pes2o/s2orc
v3-fos-license
Increased Expression of Foxj1 after Traumatic Brain Injury Foxj1 is a member of the Forkhead/winged-helix (Fox) family of transcription factors, which is required for postnatal differentiation of ependymal cells and a subset of astrocytes in the subventricular zone. The subpopulation of astrocytes has the ability of self-renew and neurogenic potential differentiated into astrocytes, oligodendrocytes, and neurons. However, its expression and function in the central nervous system lesion are not well understood. In this study, we performed a traumatic brain injury (TBI) model in adult rats and investigated the changed expression of Foxj1 in the brain cortex. Western blot and immunohistochemistry analysis showed that the expression of Foxj1 gradually increased, reached a peak at day 3 after TBI, and declined during the following days. Double immunofluorescence staining revealed that Foxj1 was co-expressed with MAP-2 and GFAP. In addition, we detected that Ki67 had the co-localization with NeuN, GFAP, and Foxj1. All our findings suggested that Foxj1 may be involved in the pathophysiology of brain after TBI. Introduction In modern times, traumatic brain injury (TBI) contributes to a major cause of morbidity and mortality all over the world, especially for children and young adults (Langlois et al. 2006;Plesnila et al. 2007). More and more people have been paying attention to the pathophysiology of the trauma; however, to date, the mechanism of the inner progress is not yet fully understood and the effect of the therapeutics remains unsatisfied (Roberts et al. 1998;McKee et al. 2005). Previous experiment studies in vivo and in vitro have demonstrated that the event induced by TBI triggers not only the primary injury which occurs immediately after the damage, but also the second injury which develops for a long term proximately several days or weeks and plays an essential role in morbidity or mortality (Nortje and Menon 2004;Walker et al. 2009;Zweckberger et al. 2006). All the steps involved in the whole process will cause neuronal apoptosis, inflammatory reaction, and reactive astrogliosis, which lead to consequently tissue loss, impaired regeneration, and functional disabilities (Di Giovanni and Movsesyan 2005;McGraw et al. 2001;Raghupathi 2004). To investigate the pathological mechanism and the cellular and the molecular alteration after TBI appears important for the value in order to improve the outcome in clinical treatment. Foxj1 is a member of the Forkhead/winged-helix (Fox) family of transcription factors, which has a conserved 100 amino acid DNA binding domain and plays important roles in cilia formation of the respiratory, reproductive, and central nervous systems (Clevidence et al. 1993;Hackett et al. 1995;Brody et al. 2000). Abnormal expression or targeted mutation of Foxj1 will result in an absence of cilia in the tissues and a defect in left-right axis determination of organs (Brody et al. 2000;Chen et al. 1998). Foxj1 suppresses T cell activity and thus spontaneous autoimmunity, through the repression of NF-κB activity (Srivatsan and Peng 2005). Foxj1 also inhibits the humoral immune response in B cells; FOXJ1 deficiency in B cells results in spontaneous and accentuated germinal center formation, implicated in the development of pathogenic autoantibodies and accentuated responses to immunizations (Lin et al. 2005). 
Recent studies have shown that FoxJ1 is required for postnatal differentiation of ependymal cells and a subset of astrocytes in the subventricular zone (SVZ) and the subpopulation of astrocytes has the ability of self-renew and neurogenic potential differentiated into astrocytes, oligodendrocytes and neurons (Jacquet and Salinas-Mondragon 2009). We hypothesize that Foxj1 may be involved in the pathophysiological and biochemical progression after TBI, which is associated with the outcome of brain function and neurogenesis induced by injury. In our study, we investigated the expression and the distribution of Foxj1 in the rat brain after injury for the first time. Our experiment is conducted to gain a brighter insight into the physiologic functions of Foxj1 in the normal brain and the cellular and molecular mechanisms underlying central nerve lesion and repair. Animals and Surgery Male Sprague-Dawley rats (weighing 220-275 g) were used in this experiment. After deeply anesthetized with chloral hydrate (10% solution), the heads of the rats were fixed in the stereotactic frame and a microknife was inserted into the right cortex under the aseptic condition 3 mm lateral parallel from the midline with an antero-posterior surgical incision (5 mm long, 3 mm deep, and 1 mm wide); thereafter, the scalps were sutured. Shamcontrolled rats were subjected the identical procedures to experimental rats except for being inserted with the microknife into the brain. After all the procedures, animals were returned to their cages and allowed freely to get food and water. Animals were housed under a 12 h light/dark cycle and the room temperature (RT) was kept at 37±0.5. Experimental animals (n=21) were killed at 12 h, 1, 3, 5, 7, 14, and 28 days after injury. Normal rats (n=3) and sham-controlled rats (n=3) were sacrificed at 3 days. All surgical and animal care procedures were carried out in accordance with the Guide for Care and Use of Laboratory Animals (National Research Council, 1996, USA) and were approved by the Chinese National Committee to Use of Experimental Animals for Medical Purposes, Jiangsu Branch. Western Blot After given an overdose of chloral hydrate, rats were killed at different time points post-operatively, and the cortex tissue surrounding the wound (extending 3 mm to the incision) as well as an equal part of the normal and shamcontrolled cortex were dissected out and stored at −80°C until use. In order to prepare lysates, frozen cortex tissue samples were weighed and minced with eye scissors in ice. Then the samples were homogenized in lysis buffer (1% sodium dodecyl sulfate (SDS), 1% Triton X-100, 50 mmol/ L Tris, 1% NP-40, pH 7.5, 5 mmol/L EDTA, 1% sodium deoxycholate, 1 μg/ml leupeptin, 10 μg/ml aprotinin, and 1 mmol/L PMSF) and centrifuged at 12,000 rpm and 4°C for 20 min to collect the supernatant. After determined protein concentration with the Bradford assay (Bio-Rad), protein samples were subjected to SDS-polyacrylamide gel electrophoresis and transferred to a polyvinylidine diflouride filter membrane. The membrane was blocked with 5% milk without fat for 2 h and incubated with primary antibody against Foxj1 (anti-mouse, 1:1,000; Santa Cruz) or GAPDH (anti-rabbit, 1:1,000, Santa Cruz) at 4°C overnight. At last, the membrane was incubated with second antibody goat-anti-mouse or goat-anti-rabbit conjugated horseradish peroxidase (1:2,000, Southern-Biotech) for 2 h and visualized using an enhanced chemiluminescence system (Pierce Company, USA). 
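Because GAPDH was run as the loading control, the temporal profile of Foxj1 reported later is effectively a ratio of band densities normalized to the control group. A minimal sketch of that normalization follows, using made-up densitometry values purely to show the arithmetic; the function and variable names are ours, not from the study.

```python
def normalized_expression(foxj1_density: float, gapdh_density: float,
                          control_ratio: float) -> float:
    """Foxj1 band density corrected for loading, relative to the control group."""
    return (foxj1_density / gapdh_density) / control_ratio

# placeholder densitometry readings (arbitrary units), not data from the study
sham  = {"foxj1": 120.0, "gapdh": 1000.0}
day3  = {"foxj1": 430.0, "gapdh":  980.0}

control_ratio = sham["foxj1"] / sham["gapdh"]
print("day 3 vs sham:",
      round(normalized_expression(day3["foxj1"], day3["gapdh"], control_ratio), 2))
```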
Sections and Double Immunofluorescent Staining After defined survival times, rats were terminally anesthetized and perfused through the ascending aorta with 500 ml of 0.9% saline, followed by 4% paraformaldehyde. After that, the brains were removed and postfixed in the same fixative for 3 h and then replaced with 20% sucrose for 2-3 days, following 30% sucrose for 2-3 days. After treatment with sucrose solutions, the tissues were embedded in O.T.C. compound. Then, 10-μm frozen crosssections were prepared and examined. All sections were first blocked with 10% normal serum-blocking solution species the same as the secondary antibody, containing 3% (w/v) bovine serum albumin (BSA), and 0.1% Triton X-100 and 0.05% Tween-20 2 h at RT in order to avoid unspecific staining. Then, the sections were incubated with both rabbit polyclonal primary antibodies for anti-MAP-2 (a marker of neurons, 1:1,000; Santa Cruz), anti-GFAP (a marker of astrocytes, 1:200; Sigma), and anti-Ki67 (a marker of cell division, 1:300; Santa Cruz), and mouse monoclonal primary antibodies for anti-Foxj1 (1:100; Santa Cruz), anti-NeuN (a marker of neurons, 1:500; Chemicon), anti-GFAP (a marker of astrocytes, 1:200; Sigma). Briefly, sections were incubated with both primary antibodies overnight at 4°C, followed by a mixture of FITC-and TRITC-conjugated secondary antibodies for 2 h at 4°C. The stained sections were examined with a Leica fluorescence microscope (Germany). Immunohistochemistry We blocked the sections with confining liquid consisting of 10% goat serum, 1% BSA, 0.3% Triton X-100, and 0.15% Tween-20 for 2 h at room temperature, then incubated with anti-Foxj1 antibody (anti-mouse, 1:100, Santa Cruz) overnight at 4°C. After incubation with the primary and the second reagents as the second antibody for 20 and 30 min, respectively, at 37°C, the reaction sections were incubated with the liquid mixture (0.02% daminobenzidine tetrahydrochloride, 3% H 2 O 2 , and 0.1% PBS). Finally, the sections were dehydrated and covered with coverslips. We examined the sections and counted the cells with strong or moderate brown staining, weak or no staining as positive or negative Foxj1 cells, respectively, from each group at higher magnified images. We took the average assays of each group as the valuable results. Quantitative Analysis Cells double labeled for Foxj1 and the other phenotypic markers used in the experiment were quantified. Sections were double labeled for Foxj1 and MAP-2 and GFAP. To identify the proportion of each phenotype-specific markerpositive cells expressing Foxj1, a minimum of 200 phenotypespecific marker-positive cells were counted adjacent to the wound in each section. Then double labeled cells for Foxj1 and phenotype-specific markers were recorded. Two or three adjacent sections per animal were sampled. Statistical Analysis All data were analyzed with Stata 7.0 statistical software. All values were expressed as mean±SEM. One-way ANOVA followed by the Tukey's post hoc multiple comparison tests was used for statistical analysis. P values less than 0.05 were considered statistically significant. Each experiment consisted of at least three replicates per condition. The Expression of Foxj1 in Brain by Western Blot In order to investigate the temporal patterns of Foxj1 expression after TBI, Western blot was performed in this study. 
In the cerebral cortex surrounding the injury, Foxj1 protein level was low in normal and sham cortex, and increased at 12 h after TBI and peaked at day 3, then gradually reduced thereafter; however, the expression of Foxj1 at day 28 after injury was still higher than the sham control. These data indicated that Foxj1 protein level had a temporally change after TBI ( Fig. 1a and b). The Changed Distribution of Foxj1 in the Brain Cortex after TBI We used the coronal sections of the uninjured shamoperation and day 3 after injury to assess the changed distribution of the expression of Foxj1 by immunohistochemistry. We could see that the immunostaining of Foxj1 deposited strongly in the plasmalemma adjacent to the lesion site ( Fig. 2e-g); however, a few positive immunostaining was found in the equal contralateral hemisphere (Fig. 2c, d, and g) and the uninjured sham-operated hemisphere ( Fig. 2a and b). Further magnifications revealed clearly the distribution and the morphous of the positive immunostaining. The results suggested that the expression of Foxj1 was apparently higher in the ipsilateral injured brain on day 3 after TBI compared with the contralateral hemisphere and the sham-operated hemisphere. The Colocalization of Foxj1 with Different Cellular Markers by Double Immunofluorescent Staining To further determine the cell types expressing Foxj1 after TBI, we used double labeling immunofluorescent staining with cell-specific markers: MAP-2 (a marker of neurons) and GFAP (a marker of astrocytes). We found that Foxj1 was expressed in neurons ( Fig. 3a-c) and astrocytes ( Fig. 3g-i) and with a relatively low level in sham brain ( Fig. 3d-f and j-l). To identify the proportion of each phenotype-specific marker-positive cells expressing Foxj1, a minimum of 200 phenotype-specific marker-positive cells were counted between sham and 3 days after TBI (Fig. 3m). After injury, Foxj1 expression was increased significantly in neurons and astrocytes at 3 days compared with sham brain. Cellular Proliferation in the Brains after TBI To identify the proliferative cell types after TBI, we performed double immunofluorescent staining with NeuN (a marker of neurons), GFAP (a marker of astrocytes), and Ki67 (a marker of cell division) in injured brain 3 days after TBI. We also performed Foxj1 and Ki67 to investigate their relationship. The results revealed that there were colocalizations between Neun and Ki67 (Fig. 4a-c), GFAP and Ki67 (Fig. 4d-f), Foxj1, and Ki67 ( Fig. 4g-i), that was to say neurons and astrocytes had proliferated after injury and Foxj1 had the relationship with cell proliferation. Discussion TBI has been one of the leading causes of death and disability in both industrialized and developing countries (Djebaili et al. 2004;Lo et al. 2003) and results in a significant society burden throughout the world (Langlois et al. 2006;Marshall 2000;Teasdale and Graham 1998). The pathophysiology of cerebral contusions is varied and complicated, including complex temporal and regional changes of cerebral blood flow and metabolism (Katayama et al. 1998;Bullock et al. 1992), disruption of the bloodbrain barrier resulting in brain edema (Unterberg et al. 2004), and progressive neuronal cell death in pericontusional tissue (Cervos-Navarro and Lafuente 1991). TBI induces deleterious neuroinflammation, as proved by edema, free radicals, cytokine production, induction of nitric-oxide synthase and cyclooxygenase type 2, and leukocyte infiltration (Xiao et al. 2008). 
Because the chain of mechanisms leading to the physiological and pathological alterations in the brain after injury is not yet clearly understood, there are currently no good treatments that improve clinical outcome measures (Roberts et al. 1998; McKee et al. 2005). In our study, we employed a controlled stereotactic knife lesion model in adult rats to investigate the cellular mechanisms after TBI, and we report the increased expression of Foxj1 in the adult rat brain after traumatic brain injury for the first time. Western blot analysis showed that the expression of Foxj1 was significantly increased and peaked at 3 days after injury. Immunohistochemical staining showed that Foxj1 staining was markedly enhanced in the ipsilateral cortex near the lesion site compared with the contralateral and sham-operated brain. By double immunofluorescent staining, we observed co-localization of Foxj1 with MAP-2 as well as with GFAP, and this co-localization was increased in the brain 3 days after injury compared with the sham brain. The marked, largely cytoplasmic change in Foxj1 expression after injury strongly supports the idea that Foxj1 is implicated in the pathophysiology of the central nervous system after TBI. Our findings may provide a clue to the cellular and molecular mechanisms underlying TBI.

Traumatic brain injury leads to permanent motor, cognitive and behavioral deficits, which are the result of neural tissue loss and cell death (Raghupathi et al. 2000). In response to the injury, cells in the brain mount reactions to resist the damage and help restore the affected functions. Neurons may be replenished by neural stem cells in the dentate gyrus and subventricular zones (Yagita et al. 2001; Peterson 2002). Astrocytes proliferate, possibly to support surviving neurons and prevent further tissue damage through formation of the glial scar (Smith et al. 2001). Microglia increase in number to clear cellular debris and promote recovery of brain function (Giulian et al. 1991). Recent studies have shown that the mammalian brain contains neural stem and progenitor cells in several regions, such as the sub-granular zone of the dentate gyrus and the SVZ of the lateral ventricles (Reynolds and Weiss 1992; Johansson et al. 1999; Alvarez-Buylla et al. 2002; Doetsch et al. 1999; Palmer et al. 1997). After injury, neurogenesis is stimulated in the adult brain (Kunlin et al. 2010), and migration toward the pathology is the first critical step in stem cell engagement during regeneration (Imitola et al. 2004). Neural stem cells migrate through the parenchyma along nonstereotypical routes, in a precisely directed manner and across great distances, to injury sites in the central nervous system, where they can engage niches harboring locally and transiently expressed reparative signals (Imitola et al. 2004). Injury-induced neurogenesis has been observed across a broad range of injury models in experimental animals and human patients. Cerebral ischemia, for example, stimulates the proliferation of neuronal precursor cells in the SVZ, followed by their migration into ischemic brain regions, where they differentiate and mature (Jin et al. 2001); in addition, ectopic neurogenesis has been observed in animal models in the ipsilateral striatum after middle cerebral artery occlusion (Arvidsson et al. 2002; Zhang et al. 2002) and in the degenerated hippocampal CA1 after global cerebral ischemia (Nakatomi et al. 2002; Bendel et al. 2005). The finding that brain injury stimulates neurogenesis in the mammalian brain points to a role for this process in brain repair.

Foxj1 is a member of the Fox family of transcription factors and plays important roles in cilia formation (Clevidence et al. 1993; Hackett et al. 1995; Brody et al. 2000), left-right axis determination of organs (Brody et al. 2000; Chen et al. 1998), suppression of T cell activity (Srivatsan and Peng 2005), inhibition of the humoral immune response in B cells (Lin et al. 2005), and other processes. Recent studies found that Foxj1 is required for postnatal differentiation of ependymal cells and a subset of astrocytes in the SVZ, where these cells form a postnatal neural stem cell niche (Jacquet and Salinas-Mondragon 2009). These studies revealed that the subset of astrocytes harvested from the SVZ generates neurospheres, which are capable of self-renewal and have the potential to give rise to neurons, astrocytes, and oligodendrocytes, thus functionally resembling adult neural stem cells. In support of this, recent work has shown that Foxj1 promoter-active cells in the spinal canal and SVZ participate in neurogenesis and gliogenesis after spinal cord injury and in response to stroke (Meletis et al. 2008; Carlen et al. 2009). Our study revealed that the expression of Foxj1 was significantly increased in the rat brain after injury, which supports the concept that Foxj1 may be involved in the physiological and pathological processes of the injured brain.

Fig. 2 The changed distribution of Foxj1 in the injured brain cortex by immunohistochemical staining. a, b Foxj1 staining in the sham-operated brain; the level was relatively low. c, d The same staining in the corresponding contralateral hemisphere of the injured brain; the level was as low as in the sham brain. e, f Foxj1 immunostaining deposited strongly adjacent to the lesion site; the level was significantly higher than in the sham and contralateral hemispheres after TBI. g Quantitative analysis of Foxj1-positive cells/mm2 in the contralateral and ipsilateral brains 3 days after injury. Foxj1 was significantly increased in the ipsilateral brain at 3 days after TBI. Asterisk indicates a significant difference at P < 0.05 compared with the contralateral brain.

Fig. 3 Double immunofluorescence staining for Foxj1 and different phenotype-specific markers in adult rat brain 3 days after TBI. Sections from sham and injured brains 3 days after TBI were immunostained for Foxj1 (green, b, e, h, k) and different cell markers, such as MAP-2 (a marker of neurons, red, a, d) and GFAP (a marker of astrocytes, red, g, j); the co-localization of Foxj1 with the phenotype-specific markers is visualized in the merged images (c, f, i, l). a-c, g-i Immunostaining for Foxj1 with MAP-2 and GFAP at 3 days after TBI; d-f, j-l immunostaining for Foxj1 with MAP-2 and GFAP in sham brain. m Quantitative analysis of phenotype-specific marker-positive cells expressing Foxj1 (%) in the sham group and at 3 days after injury. The change in Foxj1 was striking in neurons and astrocytes; *, # indicate significant differences at P < 0.05 compared with the sham group. Error bars SEM. Scale bars 20 μm (a-l)
Recent studies have shown that Foxj1 is required for inducing the differentiation of a subset of astrocytes in the subventricular zone, cells that can functionally act as adult neural stem cells and give rise to neurons, astrocytes, and oligodendrocytes. Our results suggest that Foxj1 may be required for the differentiation of these cells acting as adult neural stem cells, which participate in neurogenesis and give rise to neurons, astrocytes, and oligodendrocytes. These cells may migrate to the lesion region under the stimulation of injury to compensate for the loss of neuronal function caused by TBI. Our experiment may provide a novel strategy for the treatment of CNS trauma in the field of neurogenesis. Further studies are needed to confirm the underlying mechanisms of the role of Foxj1 after brain injury.

Fig. 4 Double immunofluorescence staining for cellular proliferation in the brain after TBI. Double immunofluorescence staining for NeuN (a marker of neurons, green, a), GFAP (green, d), Foxj1 (green, g), and Ki67 (a marker of cell division, red, b, e, h) in the injured brain cortex after TBI. In the rat brain 3 days after injury, there was co-localization between NeuN and Ki67 (a-c), GFAP and Ki67 (d-f), and Foxj1 and Ki67 (g-i). Scale bars 20 μm (a-i)

Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
Numerical study on the symmetric and asymmetric orientation of the crack branching in 2D plate The phenomenon of crack branching is one of the typical fracture behaviours. The effect of crack branching orientation is investigated in this paper. By considering a static branched crack in a 2D plate under uniaxial traction, a numerical study is carried out for two cases. The first case is symmetric crack branching, in which both branches have the same length and the same orientation, and several values of these are examined. The second case is asymmetric crack branching, in which both branch lengths are held at a constant value, the orientation of the first branch is fixed, and the orientation of the second branch is varied. The stress intensity factors at the crack tips are calculated for both cases. For the symmetric case, increasing the crack branching length increases the stress intensity factor KI for all branching orientations, whereas KI tends to decrease as the branching orientation increases. For the asymmetric case, the stress intensity factor KI of the first branch increases, while that of the second branch decreases, as the orientation of the second branch increases. Furthermore, the direction of the stress intensity factor KII tends to change as the branching orientation increases. For the symmetric case, KII tends to increase with both the branching orientation and the branching length. For the asymmetric case, increasing the constant angle of the first branch significantly increases the KII of the first crack tip as the second branching angle increases.

Introduction Cracks tend to branch when they travel fast, especially in brittle solids such as glass, rocks, and rock-like materials. The mechanism of crack branching is a complicated process and has usually been treated dynamically. Many researchers have considered crack branching under dynamic loading [1-4]. Several methodologies have been used to predict crack branching behaviour, such as XFEM [5], the bond-particle model [2], peridynamic modelling [4], the multidimensional space method [6], a modified displacement discontinuity method [7], the crack-tip displacement discontinuity element [8], and pseudo-spring smoothed particle hydrodynamics [3]. In most of these, the stress intensity factors are taken as the crack growth parameter. However, Cheng [6] proposed a crack branching criterion based on the energy release rate and the strain energy density of Theocaris and Andrianopoulos [9]. In the present study, the finite element method (FEM) is used to study the effect of symmetric and asymmetric crack branching on the stress intensity factors. The asymmetric case was studied by Yan [8] using the crack branching model developed by Theocaris and Andrianopoulos [9]; they introduced a 2D plate model with a centre crack and crack branching, so that three crack tips are considered. In the present analysis, the proposed model is a single edge crack with crack branching, and two crack tips are considered.
However, for the symmetric case only one crack tip is studied, because the two crack tips are assumed to behave identically. Both the mode-I (KI) and mode-II (KII) stress intensity factors are considered in this study, because in crack branching the fracture mode at the crack tips is not only mode I or mode II but also mixed mode I/II [10].

Numerical modelling of crack branching Consider a 2D plate with a main crack of length a = 25 mm and two crack branches of lengths L1 and L2 oriented at branching angles θ1 and θ2. The plate has a width W = 100 mm and a height h = 100 mm and is loaded by a uniform tensile stress σ = 10 MPa, as shown in figure 1. Two cases are considered in the present analysis, namely the symmetric case and the asymmetric case. In the symmetric case, the stress intensity factors at the crack tips are studied for several values of the crack branching length L1 = L2, ranging from 5 mm up to 20 mm.

The results and discussion The effect of the branching orientation on the stress intensity factor KI for the symmetric case is shown in figure 3. The lowest KI occurs at a branching angle of 60° for all values of the crack branching length, which shows that increasing the branching orientation decreases KI. The value of the stress intensity factor depends on the direction of the crack mode; increasing the orientation changes the direction of the crack mode, so the mode-I contribution decreases as the branching angle increases. Moreover, the highest KI occurs for a crack branching length of 20 mm: increasing the branching length increases KI, so KI is directly proportional to the length of the branch crack. The effect of the branching orientation on the stress intensity factor KII for the symmetric case is shown in figure 4. The direction of the stress intensity factor changes with increasing branching orientation for both crack tips (CT), which shows that the direction of KII relates to the direction of the loading. Furthermore, KII tends to increase with both the branching orientation and the branching length, so KII is also directly proportional to the length of the branch crack. The effect of the branching angle on the stress intensity factor KI for the asymmetric case is shown in figure 5. For constant first-branch angles θ1 of both 30° and 45°, KI at crack tip 1 (CT-1) tends to increase as the second branching angle θ2 increases. In contrast, KI at crack tip 2 (CT-2) tends to decrease as θ2 increases. Furthermore, KI at both crack tips (CT-1 and CT-2) increases with increasing constant angle θ1 of the first branch, except for CT-1 at a branching angle of 45°.
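For a rough point of reference on the magnitudes involved, the mode-I stress intensity factor of the plain, un-branched main edge crack in this plate geometry can be estimated from a standard single-edge-notch-tension handbook solution. The sketch below is only a baseline sanity check under the stated geometry (a = 25 mm, W = 100 mm, σ = 10 MPa); it is not the finite element procedure used in this paper, and the correction polynomial is the commonly quoted handbook fit rather than a result of this study.

```python
# Sketch: handbook estimate of K_I for the plain (un-branched) single edge crack
# under remote tension. Baseline only; the branched-crack values in the paper
# come from FEM, not from this formula.
import math

sigma = 10.0e6     # remote tensile stress [Pa]
a     = 0.025      # main crack length [m]
W     = 0.100      # plate width [m]

ratio = a / W
# Commonly quoted SENT geometry correction (valid roughly for a/W <= 0.6);
# treat the coefficients as a handbook assumption.
Y = (1.122 - 0.231 * ratio + 10.550 * ratio**2
     - 21.710 * ratio**3 + 30.382 * ratio**4)

K_I = Y * sigma * math.sqrt(math.pi * a)   # [Pa * sqrt(m)]
print(f"a/W = {ratio:.2f}, Y = {Y:.3f}, K_I = {K_I / 1e6:.2f} MPa*sqrt(m)")
```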
It is also shown that increasing the constant angle θ1 of the first crack branch significantly increases the stress intensity factor KII of CT-1 as the second branching angle θ2 increases.

Conclusion In this paper, the effect of the crack branching angle on the stress intensity factors KI and KII is investigated for both symmetric and asymmetric cases. For the symmetric case, increasing the crack branching length increases the value of KI for all branching orientations, whereas KI tends to decrease as the branching orientation increases. For the asymmetric case, KI of the first crack branch (CT-1) increases, while KI of the second crack branch (CT-2) decreases, as the orientation of the second branch increases. Furthermore, the direction of the stress intensity factor KII tends to change as the branching orientation increases. For the symmetric case, KII tends to increase with both the branching orientation and the branching length. For the asymmetric case, increasing the constant angle θ1 of the first branch significantly increases the KII of CT-1 as the second branching angle θ2 increases.
Universal bifurcation property of two- or higher-dimensional dissipative systems in parameter space: Why does 1D symbolic dynamics work so well? The universal bifurcation property of the Hénon map in parameter space is studied with symbolic dynamics. The universal-L region is defined to characterize the bifurcation universality. It is found that the universal-L region for relatively small L is not restricted to very small b values. These results show that it is a universal phenomenon that universal sequences with short period can be found in many nonlinear dissipative systems.

I. INTRODUCTION One of the standard ways of investigating the dynamics of physical systems is by exploiting their universal (system-independent) properties 1−8 . The best understood transition sequence is the period-doubling cascade, which has been observed in a variety of physical systems. Beyond the accumulation point of the period-doubling sequence there is chaos. Two decades ago Metropolis, Stein and Stein showed 1 that, within the chaotic region of unimodal maps x n+1 = f (µ, x n ), there is an ordered sequence of distinct periodic windows, each of which occurs over some range of the control parameter. They called this sequence the U-sequence, since the ordering of the windows is system independent. Remarkably, this universality is also observed in systems with many degrees of freedom, both experimentally 2,3,8 and theoretically 4−7 , although the phase portraits of these two- or higher-dimensional systems still exhibit very complex behaviour that is clearly not one-dimensional or close to one-dimensional. It has been found that the periodic windows interspersed in the chaotic region of these systems are ordered in the same systematic way as those of one-dimensional (1D) maps. The most striking and detailed observation is obtained in the Lorenz equations

ẋ = 10(y − x), ẏ = rx − xz − y, ż = xy − 8z/3. (1)

On the parameter axis 45 < r < 400, all 68 periodic windows found for the Lorenz equations fit into those of a 1D antisymmetric map with only one exception 9 . Experimentally, even though the Belousov-Zhabotinskii reaction involves more than thirty chemical species, it exhibits rather complex bifurcation behaviour that is modeled well by 1D maps 3 . Despite these numerical and experimental observations, the underlying mechanism for this universal property is not fully understood. The motivation of this paper is to present an approach towards interpreting all these experimental and numerical observations and exploring their limitations. We will take the Hénon map 10 as an example. The bifurcation structure of the Hénon map in the two-dimensional parameter (a, b) space has been extensively discussed 11,12 . In this paper, we will use symbolic dynamics 1,14−20 of 1D and 2D mappings to illustrate the universal topological property of the Hénon map at selected parameters by considering the unstable periodic orbits embedded in its chaotic attractor. Two topological quantities δ and L are defined to characterize this universal topological property. We then discuss the universal bifurcation property of the Hénon map in the 2D parameter space (a, b) by defining universal-L regions, in which the Hénon map exhibits 1D bifurcation behavior up to period L. It is remarkable that the universal-L region for relatively small L is not restricted to very small b values.
We will also present two examples of ordinary differential equations (ODE's), the Rössler equations 13 and the forced Brusselator 4 , to demonstrate the validity and robustness of our approach. These results show that it is also a universal phenomenon that universal sequences with short period can be found in many experiments or numerical calculations on nonlinear dissipative systems. The paper is organized as follows. In Section II, we review the basic property of 1D unimodal maps. The universal bifurcation property and its limitations of the Hénon map in the 2D parameter (a, b) space is studied in Section III. To demonstrate the validity of the method presented in Section III, the universal bifurcation property of Rössler equations and the forced Brusselator in a definite parameter axis is investigated in Section IV. Finally, in Section V we give our conclusion. II. UNIVERSAL SEQUENCES IN 1D UNIMODAL MAPS By using the symbolic dynamics of 1D mappings, Metropolis, Stein and Stein (MSS) had already shown that the dynamics of unimodal 1D maps of the interval [-1, 1] is embodies in the U-sequence of periodic windows 1,14,15 . Fig. 1 shows a typical case. The extremum is denoted by a letter C. Each periodic window of the map can be labelled by a symbolic sequence of 0's and 1's that mark the location (to the left or right of C ) of the successive iterates of the initial point C. For example, the only windows with period 5 are 101 2 C, 10 2 1C, and 10 3 C ( 101 2 C represents the periodic window (101 2 C) ∞ hereafter.) Indeed, we can define an ordering 14,15 for these symbolic sequences referring to the natural order in the interval [-1, 1]. These ordering rules are consistent with the ordering of a real number α defined for a sequence S(x) with an initial point x as following 17 with [16] Since the symbolic sequence K=S(C), called also the kneading sequence, acquires a maximal α in this metric representation, a symbolic sequence S(x) corresponds to a real trajectory if and only if it satisfies where σ denotes the shift operator. With this admissibility condition, we can generate all the admissible periodic orbits for a given kneading sequence K. The kneading sequence changes as the controlling parameter alters. Since kneading sequences correspond to orbits coming from C, they should also satisfy the above condition. Thus we obtain the admissibility condition for K themselves: a symbolic sequence K can be a kneading sequence if and only if it satisfies α(σ m (K)) ≤ α(K), m = 0, 1, 2, · · · . When K is a periodic string, K corresponds to a periodic window. From Eq. 5, we can generate all the possible periodic windows. It can be checked that there are only three period 5 windows as those listed above. With the ordering rules in equation (3), all periodic windows can be ordered to yield the U-sequence. In the logistic map, this U-sequence is consistent with the increasing µ order which is listed in Table 1 up to period 7. III. UNIVERSAL SEQUENCES IN 2D HÉNON MAPS The Hénon map (2) has be extensively studied by using symbolic dynamics 16−19 . The set of all "primary" tangencies between stable and unstable manifolds determines a binary generating partition which divides the attractor into two parts marked by letters 0 and 1. 
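The U-sequence ordering summarized in Section II can be reproduced numerically for a concrete unimodal family before the 2D construction is developed further. The sketch below scans the logistic map x n+1 = µx n (1 − x n ), which the paper cites as the reference family for Table 1, for superstable cycles (orbits passing through the critical point C = 0.5) up to period 7 and prints their symbolic words in increasing µ. The choice of this scan-and-bisect procedure and its resolution are assumptions made here for illustration; they are not the construction used in the paper.

```python
# Sketch: locate superstable cycles of the logistic map f(x) = mu*x*(1-x)
# up to period 7 and print their MSS words in increasing mu (the U-sequence).
import numpy as np

C = 0.5  # critical point of the logistic map

def fn_of_C(mu, n):
    """n-th iterate of the critical point, for scalar or array mu."""
    x = np.full_like(np.asarray(mu, dtype=float), C)
    for _ in range(n):
        x = mu * x * (1.0 - x)
    return x

def word(mu, n):
    """Symbolic word of the superstable n-cycle through C (last letter 'C')."""
    x, letters = C, []
    for _ in range(n - 1):
        x = mu * x * (1.0 - x)
        letters.append('1' if x > C else '0')
    return ''.join(letters) + 'C'

mus = np.linspace(3.0, 4.0, 400_001)
found = []                                   # (mu, period, word)
for n in range(1, 8):
    g = fn_of_C(mus, n) - C
    for i in np.where(np.sign(g[:-1]) * np.sign(g[1:]) < 0)[0]:
        lo, hi = mus[i], mus[i + 1]
        for _ in range(50):                  # bisection refinement
            mid = 0.5 * (lo + hi)
            if (fn_of_C(lo, n) - C) * (fn_of_C(mid, n) - C) <= 0:
                hi = mid
            else:
                lo = mid
        mu = 0.5 * (lo + hi)
        # keep only cycles whose minimal period is n
        if all(abs(fn_of_C(mu, d) - C) > 1e-6 for d in range(1, n)):
            found.append((mu, n, word(mu, n)))

for mu, n, w in sorted(found):
    print(f"mu = {mu:.6f}   period {n}   {w}")
```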
Any trajectory is encoded by a bi-infinite string S(x) = · · · s −m · · · s −1 s 0 • s 1 s 2 · · · s n · · · , where s n denotes a letter for the nth image, s −m a letter for the mth preimage, each is either 0 or 1, Table 1: Symbolic sequences for periodic windows of the Hénon map along two different parameter axes and that for the forced Brusselator equations. The axis I (long dashed in Fig. 4) and axis II (dash-dotted) are two axes in and out of the universal-7 region of the Hénon map, on which a complete and incomplete U-sequence is found respectively. the solid dot indicates the "present" position. In order to extend the grammar for unimodal maps to this map, a "backward" variable is defined as 17 with For this 2D map, each primary tangency C is associated with a bi-infinite kneading sequence K (with the first backward letter s 0 undetermined which may be 0 or 1) and two symmetrical points (α(K), β − (K)) and (α(K), β + (K) = 1 -β − (K)) in the symbolic plane corresponding to s 0 = 0 and 1 respectively 18 . Analogously to those in unimodal maps, for all admissible points (α, β) with β ∈ [β − (K), β + (K)], α should be less than α(K) and thus the pruning front 17 is obtained by cutting out rectangles {α, β|α > α(K), β ∈ [β − (K), β + (K)]} for all points on the partition. The union of these rectangles gives fundamentally forbidden zone. Consequently, the grammar for a word admissible or forbidden in this map can be expressed as: A bi-infinite word is admissible if and only if all its shifts never fall into the fundamentally forbidden zone 17,18 . It is clear that there are infinitely many kneading sequences (corresponding to infinitely many primary tangencies) in a 2D map to determine the admissibility condition for a word, while there is only one kneading sequence in a 1D map. Universality in the Hénon map. Fig. 3 shows a typical symbolic plane, (a, b) = (1.4, 0.16). The corresponding attractor is shown in Fig. 2 which has a rather complicated structure. Its fractal dimension is 1.16± 0.03. Numerically 203 kneading sequences are found as shown in Fig. 2. It is found that the minimal and maximal of all the forward parts of these kneading sequences start with K min =101111010101 and K max =101111011111 respectively, corresponding to a minimal and maximal α-values α min = 0.837560 and α max =0.838466 of all these kneading sequences. We define two quantities δ and L as δ = α max − α min = 0.000906, where [log 2 δ] denotes the integer part of log 2 δ. It is clear that δ =0 and L → +∞ in the 1D limit (b =0). For (a, b) = (1.4,0.16), an unstable periodic orbit with length n ≤ L = 10 can not tell the difference between these kneading sequences. Indeed, no symbolic string with length n ≤ L lies in the interval between K min and K max . Thus for the unstable periodic orbits with length n ≤ L, the grammar is completely determined by a symbolic string K f , which is 1011110101 or 1011110111, the first 10 letters of K min or K max , that is, a word S(x) corresponds to an unstable periodic orbit of the Hénon map for (a, b) = (1.4,0.16) if and only if it satisfies α(σ m (S(x))) ≤ α(K f ), m = 0, 1, 2, · · · . This is just the grammar for unimodal maps with a kneading sequence K f . Consequently the unstable periodic orbits of the Hénon map for (a, b) = (1.4, 0.16) can be generated as that of unimodal maps with a kneading sequence K f (see Eq. 4). The only exception is the unstable periodic orbit K ∞ f which can not be determined by Eq. 9. 
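The grammar just described, in which orbits of length n ≤ L are admitted or rejected against a single finite kneading word K f (Eq. 9), is easy to mechanize. The sketch below implements the metric representation α under the standard parity-of-1's construction of 1D symbolic dynamics (an assumption made here, since Eqs. (2)-(3) are not reproduced in this text; its value for K f agrees with the α values quoted above to within the truncation error) and counts the periodic binary words of length up to 7 whose shifts all satisfy α ≤ α(K f ). Comparisons are truncated to the 10 available symbols of K f , so any tie at that depth would need more of the kneading sequence to resolve.

```python
# Sketch: 1D-style admissibility test for periodic words under a single
# kneading word K_f (cf. Eq. 9). alpha() uses the standard parity-of-1's
# metric representation of unimodal symbolic dynamics.
from itertools import product

def alpha(symbols, depth):
    """Metric representation of the first `depth` symbols (each 0 or 1)."""
    value, ones_so_far = 0.0, 0
    for i in range(depth):
        s = symbols[i % len(symbols)]            # periodic extension
        digit = s if ones_so_far % 2 == 0 else 1 - s
        value += digit / 2.0 ** (i + 1)
        ones_so_far += s
    return value

K_f = [1, 0, 1, 1, 1, 1, 0, 1, 0, 1]             # first 10 letters of K_min
DEPTH = len(K_f)
alpha_K = alpha(K_f, DEPTH)

def admissible(word):
    """Every cyclic shift of the periodic word must satisfy alpha <= alpha(K_f)."""
    return all(alpha(word[m:] + word[:m], DEPTH) <= alpha_K
               for m in range(len(word)))

# Note: rotations and repetitions of shorter words are not deduplicated here,
# so the counts exceed the number of distinct periodic orbits.
for length in range(1, 8):
    words = [list(w) for w in product((0, 1), repeat=length)]
    count = sum(admissible(w) for w in words)
    print(f"length {length}: {count} admissible words out of {len(words)}")
```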
We here noted that the Hénon map is divergent for a = 1.4 and b > 0.315. As b decreases, L increases. It had already shown 19 that L = 32 for (a, b) = (1.4, 0.05). In the 2D phase space, even for (a, b) = (1.4, 0.05) the attractor has a clear hook indicating that the map is two-dimensional. We emphasize that though the attractors reveal very complicated structure in 2D phase space, the topologies for these attractors may be very close to those in 1D maps that the unstable periodic orbits can be generated with only one kneading sequence to some degree. Now we consider the universal bifurcation property of the Hénon map in parameter space. Fig. 4(a,b) show the isoperiodic lines 11,12 for all the nine period 7 windows. Numerically we find that L ≤ 7 for all the parameters a and b in the region between the two heavy solod lines shown in the figure 21 . We call this region the universal − 7 region hereafter. Thus all the periodic orbits with length≤7 of the Hénon map in this Universal-7 region can be determined with only one kneading sequences as those of 1D unimodal maps. Consequently, in this region there is a perfect MSS-sequence up to period≤7 along any axis provided that the axis is never tangent with any isoperiodic lines. These axes are in a sense the same as the axis of b = 0 (corresponding to the Logistic map). We present an example of these axes in Fig. 4 (line I, long dashed). The periodic windows on this axis are listed in Table 1. It is clear that they share the universal feature as that of 1D unimodal maps up to period 7. We can also obtain universal − M region numerically for M = 5, 6, 8, 9, · · · in which there are MSS-sequences for period≤ M along any axes provided that they are never tangent with any isoperiodic lines for period≤ M. In Fig. 4 we also show the borders for the Hénon map exhibit an attracting set with initial points (x 0 , y 0 ) very close to original point (0, 0). Comparing to this borders we can say that the universal-7 region is not restricted to very small b values. Thus it is rather likely to get a MSS-sequence up to a relative short period (say, period 7) in the full 2D parameter plane of the Hénon map. Incomplete U − sequence in the Hénon map. In fact, even on a axis out of the universal region, the Hénon map can exhibit approximately 1D behaviour if the axis is never tangent with any isoperiodic lines. In table 1 we also show the periodic windows on the axis represented by dash-dotted line (II) in Fig. 4. It is clear that all of these words increase monotonically as a increases except the word 10001C and the period windows 10111C, 1000C, 10000C and 100000C are missing. IV. APPLICATIONS TO ODE's The above idea can be extended to many other two-or higher-dimensional systems. Here we only take the Rössler's equations 13ẋ and the forced Brusselator 4ẋ as examples. The 2D attractor of the Rössler's equations is usually taken from a section of the 3D flow on the half-plane y = 0, x < 0 13 . It has already shown 19 that the unstable periodic orbits of the attractor can be generated with only one kneading sequence up to period 12 for parameters c = 2, d = 4 and a = 0.408 (corresponding to L ≥ 12). We find similar results (L ≥ 9) for c = 2, d = 4 and 0.125 < a < 0.415. Table 2 shows the periodic windows up to period 9 in descending a order along with their periods, words and locations on the parameter axis. They are exactly consistent with part of the U-sequence from word C up to 1001011C. 
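The empirical basis for applying a unimodal grammar to such flows can be made visible by constructing the return map on the Poincaré section directly. The sketch below integrates the standard three-parameter Rössler form with its classical parameter values as an illustrative stand-in (the (c, d) variant used in this paper is not reproduced in the text above), samples the section y = 0, x < 0, and collects successive pairs (x k , x k+1 ); the pairs fall, to good accuracy, on a single-humped, nearly one-dimensional curve.

```python
# Sketch: Poincare return map of the Rossler flow on the section y = 0, x < 0.
# Uses the standard form dx=-y-z, dy=x+a*y, dz=b+z*(x-c) with classical
# parameters as a stand-in; not the (c, d) variant used in the paper.
import numpy as np
from scipy.integrate import solve_ivp

a, b, c = 0.2, 0.2, 5.7

def rossler(t, u):
    x, y, z = u
    return [-y - z, x + a * y, b + z * (x - c)]

def section(t, u):            # event: crossing of the plane y = 0
    return u[1]
section.direction = -1        # y decreasing, which forces x < 0 at the crossing

sol = solve_ivp(rossler, (0, 4000), [1.0, 1.0, 0.0], events=section,
                max_step=0.1, rtol=1e-8, atol=1e-10)

x_hits = sol.y_events[0][:, 0]
x_hits = x_hits[x_hits < 0][200:]            # keep x < 0, drop transients

# Successive pairs (x_k, x_{k+1}) lie, to good accuracy, on a single-humped
# 1D curve, which is what licenses the unimodal symbolic grammar.
pairs = np.column_stack([x_hits[:-1], x_hits[1:]])
print(pairs[:10])
```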
The forced Brusselator had been extensively studied with symbolic dynamics of 1D maps 4 . An incomplete U-sequence up to period six along the axis A = 0.46 − 0.2ω had already been found by Hao et. al 4 which is also listed in Table 1. Only the periodic window 10001C was missing. Our investigation on the Poincaré map with symbolic dynamics shows that L = 2 for the parameter range 0.8056 < ω < 0.8194 so that the U-sequence up to period 6 might be incomplete. Recently, J.X. Liu has confirmed that the missing period 10001C is pruned 22 . V. CONCLUSION AND DISCUSSION In this paper the universal bifurcation property and its limitations of the Hénon map in 2D parameter space (a, b) is discussed with symbolic dynamics. Two topological quantities δ and L are defined to characterize this topological universality. In the universal-L region, as that of 1D unimodal map, there is a perfect MSS-sequence up to period≤L along any axis provided that the axis is never tangent with any isoperiodic lines, though the phase portraits of the Hénon map exhibit very complicated 2D behaviour. Extending this idea to many other two-or higher-dimensional systems ensures that the symbolic dynamics of 1D mappings is an effective technique to investigate the universality in these two-or higher-dimensional systems and then the parameter for definite periodic motion may be predicted 23 . We have presented two examples of ordinary differential equations (ODE's), the Rössler equations 13 and the forced Brusselator 4 , to demonstrate the validity and robustness of our approach. It should be noted that only the short period is considered although the theory presented in this paper is also valid for higher period. In fact, in real experiment (or numerical study on ODE's or PDE's), only short periodic orbit can be obtained. Our investigation shows that it is not a surprizing result that universal sequences with short period are found in many experiments. Moreover, our result shows that it is also a universal phenomenon that universal sequences with short period can be found in many nonlinear dissipative systems. This observation ensures that the parameter of many periodic motion for many dynamical systems (such as some fluid system, e. g. ref. 23) can be well predicted. In this paper we also show that even on a axis out of the universal region, the Hénon map can exhibit approximately 1D behaviour. This observation interprete the numerical results that in some nonlinear dynamical systems only incomplete U-sequences had been found 4 . Anyway, our defined universal-M region gives the background to interprete the experimental and numerical observations that complete or imcomplete U-sequences with short period can be found in many dissipative systems, and understand the limitations that 1D symbolic dynamics can be used to study two-or high-dimensional dissipative systems. [21] Though we can calculate the L value for given parameters, the distribution of L values are rather irregular so that we do not give iso-L lines. A detail discussion will be presented elsewhere. [23] H. P. Fang and Z. H. Liu, Phys. Rev. E 50, 2790(1994). The isoperiodic lines together with the universal-7 region (the region between two heavy lines) and the borders for the Hénon map exhibit an attracting set with initial points (x 0 , y 0 ) very close to original point (0, 0) (the two heavy short dashed lines). 
The long dashed (I) and dash-dotted (II) lines represent two axes inside and outside the universal-7 region, on which a complete and an incomplete U-sequence are found, respectively. (b) An enlarged part of (a).
Behavioral approach to portfolio selection: The case of Tehran Stock Exchange as emerging market Behavioral finance is the study of the influence of psychology on the behavior of financial practitioners and the subsequent effect on markets. In this paper, concepts of behavioral finance are surveyed and a portfolio selection model in the framework of behavioral finance theories is presented and compared with the rational mean-variance pattern. Ten years of historical TEDPIX data are used and separated into two parts, a test group and an evaluation group. The optimum weight for the risky asset proposed by the standard mean-variance model and by the behavioral model is computed from the returns of the first seven years (test data) over three-month periods. After that, the returns of 84 optimum portfolios over a three-year evaluation period are calculated. A test of means (95% confidence level) shows that, for the Tehran Stock Exchange, the research hypothesis that the return of the behavioral model is greater than the return of the standard mean-variance model is rejected.

INTRODUCTION In the standard portfolio selection model, optimal values are determined by risk tolerance, investment limits, financial goals and the mean-variance optimization pattern. But human beings may not follow this process, because of behavioral biases. For instance, people who encounter short-term changes and long-term trends change their portfolios. Several empirical studies on emotional biases have been carried out. Kahneman and Tversky (1992) showed that investors are risk-averse when facing gains but risk-seeking when facing losses (asymmetric risk-taking behavior). Also, many people feel worse about a loss than they feel good about a gain of the same size. This phenomenon, named loss aversion, is deeply rooted in human psychology and is one of the fundamental concepts of prospect theory. Most financial theories are based on the maximization of expected utility and on risk aversion, whereas empirical studies of the real world have criticized modern financial theories and the rational-behavior hypothesis in recent years. Psychologists' studies show that individuals' behavior differs from what modern financial theories assume for rational human behavior (Fernandez et al., 2009). This paper surveys the hypotheses and frameworks of behavioral finance theories and then presents a portfolio selection model based on behavioral finance assumptions.
Statement of problem Portfolio selection has always been one of the subjects of financial theory. Before the 1950s, most financial theories were case studies and nonsystematic. Harry Markowitz (1952) formulated the first portfolio theory, "modern portfolio theory", which was the first systematic financial theory. Modern portfolio theory evaluates the return and risk of risky assets using the mean-variance pattern and represents a normative pattern for portfolio selection. This theory, which assumes economic equilibrium, was the basis for other financial theories such as the capital asset pricing model (CAPM) developed by Sharpe, Lintner and Mossin, and the efficient market hypothesis of Fama. Follow-up studies, such as surveys of stock price behavior, showed some anomalies in reality and in the efficient financial market hypothesis. So researchers, who are always looking for the behaviors and causes of financial market events, attempted to explain the behavior of decision makers in financial markets using behavioral science. They described the limits of rational financial theories, such as the limits of arbitrage and human cognitive limits. Thus irregular behavior became recognized as an effective factor in economic behavior, alongside other economic variables. Behavioral economics and behavioral finance therefore attempt to explain economic variables better and more accurately than normative theories alone. The most important questions in this field are: 1. How do cognitive limits affect economic behavior and investment decisions? 2. How can human behavioral biases be modeled? This paper attempts to explain the irrational factors that affect investment decisions and portfolio selection in the financial market of Iran and presents a behavioral model based on the frameworks of behavioral finance. Finally, this model is evaluated and compared with rational portfolio selection models.

Fernandez et al. (2009) classified behavioral biases into two groups: cognitive biases and emotional biases. These two groups cause irrational decision making. Cognitive biases such as "anchoring" and "availability" are caused by wrong reasoning and can be reduced by obtaining more information. Emotional biases such as "loss aversion" and "regret aversion" are caused by sudden emotions and insight and cannot be corrected easily. Shefrin (2005) showed that portfolio selection in the framework of prospect theory is different from portfolio selection in the framework of expected utility theory. The most important property of a behavioral portfolio is that it involves some risk-free securities and some risky securities, and the portfolio is not sufficiently diversified. In this framework, an optimal portfolio is one which covers the interests of the decision maker instead of maximizing the expected return. Therefore, interests and emotional biases are determinants of portfolio selection. Kahneman and Tversky (1979, 1992) explained four new concepts of investors' financial behavior in prospect theory and in its newer version, cumulative prospect theory: 1. Investors evaluate assets based on gains and losses, not on the final value of the investment (mental accounting). 2. Individuals are more averse to losses than they are attracted to gains (loss aversion). 3. Individuals are risk-seeking when facing losses and risk-averse when facing gains (asymmetric risk preference). 4.
Individuals assign higher weight to the events with less probability and lower weight to the events with more probability (probability weighting function).Weber and Zuchel (2003) stated that investors, who have started morning with gain, avoid evening probable loss.In this way, they avoid probable risk and keep their gain.Shefrin and Statman (1985) presented disposition effect phenomenon, that individuals hold loser stocks for a long time and sell winner stocks soon.This behavior is named "fear of regret".They showed that, individuals' risk-aversion decreases after a period of loss and they become more risk-averse after a period of gain.This behavior is named asymmetric risk seeking. Odean (1998) analyzed around 10,000 transactions of investors.His finding showed that, individuals are keen on gain from winner stocks.Weber and Kamerer (1998) showed that, individuals would like to sell winner stocks instead of loser stocks.Nevertheless, some researchers have observed a different behavior.Thaler and Johnsone (1990) and Barberis et al. (2001) presented a model which states that, individuals divulge less risk-aversion after a period of gain, and more risk-aversion after a loss.This phenomenon is known as "house-money effect".Odean (1999) stated that overconfidence is the reason of high volume of individuals' transaction.According to his findings, overconfidence causes that individuals think that other investors' decisions are affected from disposition effect and their decisions are more rational.This behavior is specially intensified in some fields that, individuals have knowledge.For instance investors prefer local stocks or stocks related to their country instead of foreign companies' stocks because they feel they have more information about them, whereas it is possible that, this vision is incorrect.Another instant is that investors suppose that, successes due to chance are due to their skill.Individuals remember their successes but not their failure.This phenomenon is named "hot hand".Shiller (1998) studied the intellectual background and psychological, social and anthropological properties of individuals' decisions.He introduced some behavioral biases such as anchoring, overconfidence and cultural roots of investors' decisions.Barberis and Thaler (2003) stated that, behavioral biases are the reason of deviation of decisions from rational decisions.Table 1 shows a brief behavioral phenomenon that contradict efficient market hypothesis.Some of the most important theories in behavioral finance are listed in Table 2.Although several empirical studies about investors' behavioral biases have been conducted, there are a few comprehensive studies about behavioral biases effect on assets selection in financial markets (Fernandez et al., 2009).Barberis et al. (2001) tried to explain stock price behavior in terms of riskaversion concepts and mental accounting.Benartzi and Thaler (1995) explained individuals' risk-aversion behavior in framework of prospect theory and showed how myopic behavior affects portfolio selection.Magi (2005) used numerical calculations and explained the model of international portfolio selection based on behavioral preference.He also explained how investors prefer national stocks rather than foreign stocks, even though their performance is better.Davies and Satchell (2004) showed the method of optimal assignment of stocks based on prospect theory concepts.Shefrin (2005) considered heterogeneous investors to survey behavioral biases effects on asset pricing. 
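The behavioral ingredients surveyed above (loss aversion, asymmetric risk attitudes, and probability weighting) can be written down compactly. The sketch below encodes the standard Tversky and Kahneman (1992) value and weighting functions with their originally estimated parameters; it illustrates the general framework only and is not the specific De Giorgi et al. (2004) specification adopted later in this paper.

```python
# Sketch: cumulative-prospect-theory building blocks in the standard
# Tversky-Kahneman (1992) parameterization (alpha = 0.88, lambda = 2.25,
# gamma = 0.61 for gains). Illustrative only; the paper's own specification
# follows De Giorgi et al. (2004).
import numpy as np

def value(x, alpha=0.88, lam=2.25):
    """S-shaped value function: concave for gains, convex and steeper for losses."""
    x = np.asarray(x, dtype=float)
    mag = np.abs(x) ** alpha
    return np.where(x >= 0, mag, -lam * mag)

def weight(p, gamma=0.61):
    """Inverse-S probability weighting: overweights small probabilities."""
    p = np.asarray(p, dtype=float)
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

print(value([100.0, -100.0]))        # the loss looms about 2.25x larger than the gain
print(weight([0.01, 0.5, 0.99]))     # small p overweighted, large p underweighted
```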
Expected utility hypothesis as mental framework of modern financial theories, is extracted from the answer presented by Daniel Bernoulli (1954) to the paradox stated by Nicholas Bernoulli (1738) in title of "St.Petersburg Paradox".Two fundamental concepts which are extracted from this theory are: 1. Investors evaluate investment opportunities in terms of utility of outcomes. 2. Utility does not have a linear relationship with wealth but increase at a decreasing rate with increase in wealth (marginal utility). The concept presented by Bernoulli, was expanded in form of expected utility hypothesis by Von Neumann and Oscar Morgenstern (1944).This theory presented a descriptive model for method of individuals' decision making under risky condition.According to this model, individuals' utility function is specified in terms of their preference on risky (probabilistic) condition.The hidden concept of this theory is that unlike Bernoulli's theory which states investors consider outcomes of decision, they consider objective probabilities of each decision.Markowitz (1952) presented the concept of optimization based on maximum of utility and minimum of risk, in form of mean-variance efficient frontier.Efficient frontier involves all portfolios that are economically efficient in terms of expected return and risk trade off.He separated systematic and non-systematic risks and offered portfolio selection based on mean of returns and covariance of assets to decrease non-systematic risk. Tobin's separation theorem (1958) explained the process of assignment of assets and method of selection between risky and risk free assets.He stated that, portfolio selection should be done between risky and risk free assets, and between different categories of assets as well. Paul Samuelson (1965) mentioned efficient market hypothesis in his studies.He stated that, in the efficient market, where information is available for all participants, price change should be unpredictable.Fama (1970) briefed this concept and stated price reflects all available information and if there is no transaction cost, there would not be any outcomes due to transactions based on information.Roy (1973) and Locus (1978) tried to present a new version of efficient market hypothesis and believed return is not completely random. Generally, financial theories have been presented based on two fundamental hypotheses: 1. Individuals behave rationally.2. They use all available information for decision making. 
But there are several instances of irrational behaviors and cognitive bias in real-world.So, some anomalies and empirical studies about market efficiency resulted in weakness of efficient market hypothesis and equilibrium.Experts are going to explain behaviors in market and in this condition, some researchers have used data driven methods and dynamic systems to discover the relationship between variables.On the other hand, some researchers like the well-known biologist, Kaufman (1988) and a computer scientist, Holland (1988) tried to explain and predict behaviors in market, using adoptive systems theory and finding the relationship between economic behaviors, growth and completeness of systems.The third path is behavioral finance development effort to discover the effect of cognitive and emotional errors on decision making.Behavioral finance theories states financial markets are not efficient because participants' decisions are affected from behavioral biases and framework of decision presence and it finally causes asset price deviate from intrinsic value. METHODOLOGY This research is a kind of applied research, in terms of explanation of a mathematical portfolio selection model in framework of behavioral finance hypothesis.In addition, empirical test of this model and explanation of relationship between variables using Tehran stock exchange is considered. In this paper, after explanation of hypothesis of behavioral finance theories, mathematical model of relationship between variables is presented in framework of a mathematical model.In the next step, empirical test of behavioral model is done and it is compared with classic model using ten year TEDPIX data in 2000 to 2009.For optimization, the Mathematical software and for statistical tests, SPSS is used.The hypothesis of research is that, portfolio selection model based on behavioral finance hypothesis is more efficient than rational model. The model This research evaluates a model of portfolio selection in framework of behavioral finance theories in Iran Capital Market.The goal is setting the optimal weights for risky asset.It is assumed that shortselling is impossible.Investors are going to select weight of risky assets so that, expected utility is maximized in framework of prospect theory. Portfolio selection model is presented in two periods and in a market with two kinds of risky and risk free assets and investors' behavior is explained using Kahneman and Tversky's prospect theory.Therefore, investor's decision on weight of risky assets dependent to reference point and wealth changes can be explained.Weight of risky asset is θ and amount of return or loss in the first period is: The process of selection in prospect theory is done in two stages of edition and evaluation, as mentioned in Kahneman and Tversky's model.In the edition stage, investor recognize and separate benefit and loss and modify the probability function of each outcome.Empirical studies show that individuals assign higher weights to the lower probabilities and vice versa.In this research, offered weighting function of Giorgi et al (2004) is used. 
Here, γ is the probability-weight adjustment coefficient. In the evaluation stage, the investor attributes a mental value to each expected outcome. Giorgi et al. (2004) and Fernandez et al. (2009) formulated the portfolio selection hypothesis based on Kahneman and Tversky's model. The value function is defined piecewise as

v(x) = λ+ (1 − e^(−αx)) for x ≥ 0, and v(x) = −λ− (1 − e^(αx)) for x < 0.

In this model, α is the general risk-aversion coefficient. Because λ− > λ+ > 0, the slope of the value function is steeper on the loss side, so λ captures loss aversion. The variable x denotes the change in wealth and represents the investors' mental accounting concept. This value function is concave for points above the reference point and convex below it (asymmetric risk attitude).

An investor chooses the weight of the risky asset to maximize expected utility (V), with preferences defined in the framework of prospect theory and based on wealth changes. The expected value is

V = Σx Ψ(F(x)) v(x),

where v(x) is the value of outcome x and Ψ(F(x)) is the cumulative probability weight of outcome x obtained from the probability weighting function. The investor selects the weight of the risky asset at each investment stage so that the expected value of the investment is maximized.

RESULTS AND ANALYSES

In this research, two investment periods are considered to evaluate the behavioral portfolio model. The data are therefore separated into two parts: one part is used to calculate the proposed optimal portfolio and the other is used to evaluate the results.

Evaluation of the behavioral and standard portfolio models

The return and risk of the risky portfolio were estimated using data from 28 three-month periods. The mean long-term interest rate on bank deposits (15%) was used as the risk-free rate (Table 3); Table 4 shows the evaluation data. The weight of the risky asset in the optimal portfolio was calculated using both the behavioral and the standard model. With a risk-aversion coefficient (α) of 3, a probability weighting coefficient (γ) of 0.9, λ+ = 1, and λ− = 2.25 (the values proposed by Kahneman and Tversky), the mean return and standard deviation over the test period were 8.88 and 11.24, respectively. Figure 1 shows the value function under the standard and behavioral models.

The calculations show that the weight of the risky asset is 60.5% according to the behavioral model and 78.4% according to the standard model. Table 5 shows the optimal weights of the risky portfolio based on the standard and behavioral models, using the 28 investment periods of data (a 7-year test period). Figure 2 plots value against mean return, which makes it possible to examine the effect of mean return on expected value, and Figure 3 shows the effect of the standard deviation (the risk factor) on expected value. Figure 4 shows the effect of both determinants (expected return and risk) on the optimal weight of the risky asset. To evaluate the standard and behavioral portfolio models, for each of the 7 optimal risky-asset weights calculated in the first stage from the 28 periods of test data, 12 optimal portfolio weights were calculated from the 3-year evaluation data, so that 84 portfolios based on the behavioral model and 84 portfolios based on the standard model were evaluated. Figure 5 shows the optimal weight of the risky portfolio as return and risk change over the evaluation period.

Testing of hypotheses

In this section, inferential statistics are used to examine whether the behavioral model outperforms the standard model.

Null hypothesis (H0): the mean return of portfolios based on the behavioral model is not greater than the mean return of portfolios based on the selected standard model. Alternative hypothesis (H1): the mean return of portfolios based on the behavioral model is greater than the mean return of portfolios based on the selected standard model.
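Before the test results, a numerical sketch of the two-stage evaluation described in the model subsection may help. The piecewise-exponential value function follows Giorgi et al.; for simplicity the weighting is applied here to individual scenario probabilities rather than cumulatively, and the scenario returns and probabilities are hypothetical:

import numpy as np

alpha, gamma = 3.0, 0.9            # risk-aversion and weighting coefficients
lam_pos, lam_neg = 1.0, 2.25       # Kahneman-Tversky loss-aversion parameters
rf = 0.15 / 4                      # quarterly risk-free rate (15% per year)

def value(x):
    # piecewise-exponential value function: concave for gains, convex for losses
    return np.where(x >= 0.0,
                    lam_pos * (1.0 - np.exp(-alpha * x)),
                    -lam_neg * (1.0 - np.exp(alpha * x)))

def weight(p):
    # Tversky-Kahneman-style probability weighting
    return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)

def prospect_value(theta, r, p):
    x = theta * (r - rf)           # wealth change relative to the reference point
    return float(np.sum(weight(p) * value(x)))

r = np.array([-0.10, 0.02, 0.15, 0.30])   # hypothetical scenario returns
p = np.array([0.20, 0.30, 0.30, 0.20])
thetas = np.linspace(0.0, 1.0, 101)       # no short selling: 0 <= theta <= 1
best = max(thetas, key=lambda th: prospect_value(th, r, p))
print(best)                               # prospect-optimal risky weight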
A test of the equality of means (at the 95% confidence level) was carried out on the returns of the 84 portfolios produced by the behavioral and standard models; a minimal sketch of this two-step procedure is given after the figure and table captions below. According to Levene's test for the equality of variances, the significance is 0.003, which is less than the 0.05 error level, so it is concluded that the variances of the two populations are not equal. The equality of means was therefore tested under unequal variances in the next step.

The results show a significance of 0.773, which is greater than 0.05; therefore, H0 is not rejected. The lower bound (−0.47726) and upper bound (0.64059) of the confidence interval have different signs, which shows that there is no significant difference between the mean returns obtained from the behavioral and standard models at the 95% confidence level. One of the reasons for the difference between the point estimates and the result of the statistical test on the Tehran Stock Exchange may lie in the data: TEDPIX has been revised as the capital market has evolved, especially through the entry of new companies, which makes the index values unstable and hard to compare over time. Finally, it is proposed to use more stable indices (such as an industry index) and other risk measures, such as the semi-standard deviation, instead of the standard deviation.

Figure 1. Value function of the behavioral and standard portfolio models.
Figure 2. Expected value changes in terms of changes of the expected mean return of the risky portfolio.
Figure 3. Expected value changes in terms of changes of the standard deviation of the risky portfolio.
Figure 4. Expected value changes in terms of changes of the expected return and standard deviation of the risky portfolio.
Figure 5. Optimal weights of the risky portfolio in terms of changes of expected return and standard deviation.
Table 1. Behavioral phenomena that contradict the efficient market hypothesis.
Table 2. The most important theories in behavioral finance.
Table 3. Test period data.
Table 4. Evaluation period data.
Table 5. Optimal weights of the risky portfolio based on the standard and behavioral models.
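A minimal sketch of the two-step test reported above, assuming SciPy (version 1.6 or later for the one-sided alternative) and that the 84 evaluation-period returns of each model are available as the hypothetical arrays behavioral and standard:

from scipy import stats

def compare_models(behavioral, standard, alpha=0.05):
    # Step 1: Levene's test for equality of variances.
    lev_stat, lev_p = stats.levene(behavioral, standard)
    equal_var = lev_p >= alpha
    # Step 2: t-test for means; Welch's version is used when variances differ.
    t_stat, t_p = stats.ttest_ind(behavioral, standard,
                                  equal_var=equal_var,
                                  alternative="greater")  # H1: mean(behavioral) > mean(standard)
    return lev_p, t_p, t_p < alpha  # reject H0 if the one-sided p-value < alpha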
2018-12-28T06:10:37.487Z
2011-09-04T00:00:00.000
{ "year": 2011, "sha1": "3e49de8fd904eb4225415ef1434549bd90ac8e36", "oa_license": "CCBY", "oa_url": "https://academicjournals.org/journal/AJBM/article-full-text-pdf/27A18AF16944.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "3e49de8fd904eb4225415ef1434549bd90ac8e36", "s2fieldsofstudy": [ "Economics", "Psychology", "Business" ], "extfieldsofstudy": [ "Economics" ] }
29371951
pes2o/s2orc
v3-fos-license
Molecular Cloning and Characterization of a Mitochondrial Selenocysteine-containing Thioredoxin Reductase from Rat Liver* A thioredoxin reductase (TrxR), named here TrxR2, that did not react with antibodies to the previously identified TrxR (now named TrxR1) was purified from rat liver. Like TrxR1, TrxR2 was a dimeric enzyme containing selenocysteine (Secys) as the COOH-terminal penultimate residue. A cDNA encoding TrxR2 was cloned from rat liver; the open reading frame predicts a polypeptide of 526 amino acids with a COOH-terminal Gly-Cys-Secys-Gly motif provided that an in-frame TGA codon encodes Secys. The 3′-untranslated region of the cDNA contains a canonical Secys insertion sequence element. The deduced amino acid sequence of TrxR2 shows 54% identity to that of TrxR1 and contained 36 additional residues upstream of the experimentally determined NH2-terminal sequence. The sequence of this 36-residue region is typical of that of a mitochondrial leader peptide. Immunoblot analysis confirmed that TrxR2 is localized almost exclusively in mitochondria, whereas TrxR1 is a cytosolic protein. Unlike TrxR1, which was expressed at a level of 0.6 to 1.6 μg/milligram of total soluble protein in all rat tissues examined, TrxR2 was relatively abundant (0.3 to 0.6 μg/mg) only in liver, kidney, adrenal gland, and heart. The specific localization of TrxR2 in mitochondria, together with the previous identification of mitochondria-specific thioredoxin and thioredoxin-dependent peroxidase, suggests that these three proteins provide a primary line of defense against H2O2 produced by the mitochondrial respiratory chain.

Thioredoxin (Trx) is a widely expressed 12-kDa protein that performs pleiotropic cellular functions (1, 2).
The active site of Trx contains the sequence -Cys-Gly-Pro-Cys-, and the reduced form of the protein serves as a hydrogen donor for ribonucleotide reductase (3), protein methionine sulfoxide reductase (4), thioredoxin-dependent peroxidase (5), and protein tyrosine phosphatase (6) as well as contributes to the up-regulation of various transcription factors (7-11). In addition, the reduced form, but not the oxidized form, of Trx binds to and inhibits the catalytic activity of apoptosis signal-regulating kinase, also known as mitogen-activated protein kinase kinase kinase (12). Furthermore, Trx serves as a growth factor that stimulates the proliferation of T lymphocytes (13). Oxidized Trx is converted back to the reduced form by thioredoxin reductase (TrxR) with the use of electrons from NADPH (14, 15). TrxR is a homodimeric enzyme with a redox-active disulfide and contains one FAD molecule per subunit (14, 15). It belongs to a superfamily of flavoprotein disulfide oxidoreductases that includes glutathione reductase (GR), dihydrolipoamide reductase, mercuric reductase, and alkyl hydroperoxide reductase (16, 17). Mammalian TrxR is distinct from those of prokaryotes and yeast. The mammalian enzyme exhibits a broader substrate specificity, having the ability to reduce chemically unrelated compounds such as selenite and 5,5′-dithiobis(2-nitrobenzoic acid) in the presence of NADPH (18, 19); it is larger in subunit size (58 kDa, compared with 35 kDa for the prokaryote and yeast enzymes) and contains a much longer COOH-terminal region (18, 20). In addition, mammalian TrxR is a selenoprotein that contains a penultimate selenocysteine (Secys) residue in the sequence -Gly-Cys-Secys-Gly (14, 21-23), which can serve as a redox center (24).

Mammalian cells contain two distinct forms of Trx: Trx1 is located in the cytosol and nucleus and is also secreted (10), whereas Trx2 is restricted to mitochondria (25). However, only one isoform of TrxR has previously been identified in mammalian cells, and it has not been known whether this protein is expressed in mitochondria. We now describe the purification and cloning of a second type of TrxR, named TrxR2, from rat liver and demonstrate that this protein is specifically expressed in mitochondria.

EXPERIMENTAL PROCEDURES

Materials-Rat liver was obtained from Bioproducts for Science, Inc. (Indianapolis, IN). Rabbit antiserum to rat TrxR1 was produced by immunization with purified enzyme according to standard procedures. Rabbit antiserum to TrxR2 was prepared by injection with a hemocyanin-conjugated peptide (RSGLDPTVTGCCG) corresponding to the COOH-terminal sequence of rat TrxR2, with the exception that the penultimate residue (Secys) was changed to Cys. Horseradish peroxidase-conjugated antibodies to rabbit immunoglobulin G and the enhanced chemiluminescence (ECL) immunoblot detection system were from Amersham; biotin-conjugated iodoacetamide, N-(biotinoyl)-N′-(iodoacetyl)ethylenediamine (BIAM), was from Molecular Probes; horseradish peroxidase-conjugated streptavidin, 3,3′,5,5′-tetramethylbenzidine, and Neutravidin beads were from Pierce; and yeast GR was from Boehringer-Mannheim. Recombinant rat Trx was prepared as described (26).

Purification of TrxR1-Rat livers (1 kg) were homogenized in 4 liters of 20 mM Tris-HCl (pH 7.8) containing 1 mM EDTA, 1 mM dithiothreitol (DTT), 0.05 mM 4-(2-aminoethyl)-benzenesulfonyl fluoride hydrochloride (AEBSF), pepstatin (0.5 μg/ml), leupeptin (0.5 μg/ml), and aprotinin (0.5 μg/ml).
The homogenate was centrifuged at 70,000 × g for 30 min, and the resulting supernatant was adjusted to pH 5.0 with 1 M acetic acid and then centrifuged again at 70,000 × g for 30 min. The resulting pellet and supernatant were subjected to immunoblot analysis with antibodies to TrxR1 and TrxR2 (see Fig. 6A). TrxR1 was detected only in the supernatant, whereas TrxR2 was present mostly in the pellet. Thus, the supernatant and pellet served as the sources for purification of TrxR1 and TrxR2, respectively. For purification of TrxR1 (elution profiles of column chromatographies are not shown), the supernatant (40 g of protein) from the pH 5 precipitation step was adjusted to pH 7.8 with 1 M ammonium hydroxide and then applied to a DEAE-Sephacel (Pharmacia) column (10 × 16 cm) that had been equilibrated with 20 mM Tris-HCl (pH 7.8) containing 1 mM EDTA, 1 mM DTT, and 0.01 mM AEBSF. The column was washed consecutively with 2.5 liters of equilibration buffer and 2.5 liters of equilibration buffer containing 100 mM NaCl. Proteins were eluted from the column with a linear gradient of 100 to 400 mM NaCl in 5 liters of equilibration buffer, and fractions (25 ml) were collected and assayed for TrxR1 by immunoblot analysis. The peak fractions (10.4 g of protein), corresponding to 300 to 380 mM NaCl on the gradient, were pooled, dialyzed overnight against 20 mM Tris-HCl (pH 7.5) containing 1 mM EDTA, 1 mM DTT, and 0.01 mM AEBSF, and then applied to a 2′,5′-ADP-agarose column (2 × 7 cm) that had been equilibrated with 20 mM Tris-HCl (pH 7.5) containing 1 mM EDTA. The column was washed with 100 ml of equilibration buffer, and proteins were then eluted stepwise with 100 ml each of equilibration buffer containing 200 mM KCl, equilibration buffer containing 200 mM sodium phosphate and 200 mM KCl, and equilibration buffer containing 1 M NaCl and 200 mM KCl. The 58-kDa TrxR1 was present almost exclusively in the fractions eluted by the buffer containing 200 mM KCl, as revealed both by SDS-polyacrylamide gel electrophoresis (PAGE) with Coomassie Blue staining and by immunoblot analysis. Peak fractions (19.8 mg of protein) were pooled and then adjusted to 1.2 M ammonium sulfate by addition of 4 M ammonium sulfate. After removal of the resulting precipitate by centrifugation, the supernatant was applied to a Phenyl-5PW high-performance liquid chromatography (HPLC) column (0.75 × 7.5 cm) that had been equilibrated with 20 mM Hepes-NaOH (pH 7.5) containing 1 mM DTT, 1 mM EDTA, and 1.2 M ammonium sulfate. The column was washed with 60 ml of equilibration buffer, and proteins were then eluted with a decreasing linear gradient of 1.2 to 0 M ammonium sulfate in 120 ml of 20 mM Hepes-NaOH (pH 7.5) containing 1 mM DTT and 1 mM EDTA. Peak fractions, corresponding to 0.8 to 0.64 M ammonium sulfate on the gradient, were pooled, concentrated, dialyzed against 20 mM Hepes-NaOH (pH 7.5) containing 1 mM DTT and 1 mM EDTA, divided into portions, and stored at −70°C.

Purification of TrxR2-The pellet derived from the pH 5 precipitation step described for the purification of TrxR1 was dissolved in 2 liters of 20 mM Tris-HCl (pH 7.8) containing 1 mM EDTA, 1 mM DTT, 0.05 mM AEBSF, leupeptin (0.5 μg/ml), and aprotinin (0.5 μg/ml). The pH of the suspension was adjusted to 7.8 with 1 M ammonium hydroxide. After centrifugation of the suspension, the resulting supernatant (7.3 g of protein) was applied to a DEAE-Sephacel column (10 × 16 cm) that had been equilibrated with 20 mM Tris-HCl (pH 7.8) containing 1 mM EDTA, 1 mM DTT, and 0.01 mM AEBSF.
The column was washed with 1.5 liters of equilibration buffer and then with 2.5 liters of equilibration buffer containing 50 mM NaCl. Proteins were eluted with a linear gradient of 50 to 800 mM NaCl in 5 liters of equilibration buffer, and fractions (20 ml) were collected. TrxR2 was assayed by immunoblot analysis (see Fig. 6B). Peak fractions (numbers 80 to 120, containing 1.8 g of protein) were pooled, dialyzed overnight against equilibration buffer, and then applied to a 2′,5′-ADP-agarose column (2 × 7 cm) that had been equilibrated with 20 mM Tris-HCl (pH 7.5) containing 1 mM EDTA. The column was washed with 100 ml of equilibration buffer, and proteins were then eluted consecutively with 60 ml of equilibration buffer containing 20 mM KCl, a linear gradient of 20 to 200 mM KCl in 260 ml of equilibration buffer, 60 ml of equilibration buffer containing 200 mM KCl, 100 ml of equilibration buffer containing 200 mM sodium phosphate and 200 mM KCl, and 120 ml of equilibration buffer containing 1 M NaCl and 200 mM KCl. Fractions were assayed for TrxR2 by immunoblot analysis (see Fig. 6C). Peak fractions eluted between 340 and 352 min were pooled and then adjusted to 1.2 M ammonium sulfate by adding 4 M ammonium sulfate. After removal of the resulting precipitate by centrifugation, the supernatant (6.1 mg of protein) was applied to a Phenyl-5PW HPLC column (0.75 × 7.5 cm) that had been equilibrated with 20 mM Hepes-NaOH (pH 7.5) containing 1 mM DTT, 1 mM EDTA, and 1.2 M ammonium sulfate. The column was washed with 60 ml of equilibration buffer, and proteins were then eluted with a decreasing linear gradient of 1.2 to 0 M ammonium sulfate in 120 ml of 20 mM Hepes-NaOH (pH 7.5) containing 1 mM DTT and 1 mM EDTA. Fractions (2 ml) were collected and assayed for TrxR2 by immunoblot analysis (see Fig. 6D). Peak fractions eluted between 32 and 42 min (2.1 mg of protein) were pooled, concentrated, dialyzed against 20 mM Hepes-NaOH (pH 7.5) containing 1 mM EDTA and 1 mM DTT, divided into portions, and stored at −70°C.

Determination of Protein Concentration-The concentrations of recombinant Trx, TrxR1, TrxR2, and GR were determined spectrophotometrically, with A280 values for 0.1% solutions of 0.738, 0.938, 1.081, and 1.091, respectively, which were calculated based on their amino acid compositions. The concentrations of other proteins were determined with the BCA protein assay reagent (Pierce), with bovine serum albumin as a standard.

Labeling of TrxR2 with BIAM-All procedures were performed in an anaerobic chamber with solutions that were free of oxygen. TrxR2 (130 μg/ml) in 1 ml of 20 mM Hepes-NaOH (pH 7.5) containing 1 mM EDTA was reduced by incubation for 10 min at room temperature first with 54 μM NADPH and then with 100 μM DTT. The reduced enzyme was dialyzed on ice against 50 mM Bis-Tris-HCl (pH 6.5) containing 1 mM EDTA. The dialyzed protein (100 μg) was then incubated at room temperature and in the dark for 10 min in 4 ml of 50 mM Bis-Tris-HCl (pH 6.5) containing 0.5% Triton X-100, 5% glycerol, 150 mM NaCl, 1 mM EDTA, and 10 μM BIAM. The biotinylation reaction was stopped by the addition of iodoacetamide to a final concentration of 20 mM, and the pH of the reaction mixture was adjusted to 7.5. After 10 min, the mixture was dialyzed twice against 50 mM Tris-HCl (pH 8.0) containing 1 mM EDTA.

Purification of Tryptic Peptides Derived from BIAM-labeled TrxR2-Dialyzed BIAM-labeled TrxR2 (80 μg) was digested by incubation overnight at room temperature with 4 μg of trypsin.
One-fourth of the resulting digest was analyzed by HPLC on a C18 column; peptides were eluted with a linear gradient (0 to 60%, v/v) of acetonitrile in 0.1% trifluoroacetic acid at a flow rate of 1 ml/min over 60 min. Fractions (500 μl) were collected, and a portion (2 μl) of each was analyzed for BIAM-labeled peptide. Peptides were immobilized on maleic anhydride-activated microplates (Pierce), and labeled peptide was detected with horseradish peroxidase-conjugated streptavidin and the peroxidase substrate 3,3′,5,5′-tetramethylbenzidine, the oxidation of which was monitored spectrophotometrically at 405 nm. A major BIAM-labeled peptide eluted at 28.5 min. Two nonlabeled peptides that eluted at 38.3 and 40.1 min were also collected and subjected to sequence analysis. The remaining three-fourths of the BIAM-labeled TrxR2 digest were incubated with 15 μl of Neutravidin beads (30% slurry) for 50 min at room temperature. The beads were separated by brief centrifugation, washed twice with 1 ml of phosphate-buffered saline containing 0.01% Lubrol, and then incubated for 30 min at 37°C with 6 M guanidine hydrochloride in 500 mM potassium phosphate (pH 2.5). The released peptides were further purified by HPLC on a C18 column as described above, and the BIAM-labeled peptide that eluted at 28.5 min was collected for sequence analysis.

Mass and Peptide Analyses-The purity and mass of the isolated BIAM-labeled peptide were assessed by matrix-assisted laser desorption ionization with time-of-flight detection (MALDI-TOF) mass spectroscopy (Hewlett-Packard model G2025A), using sinapinic acid as the matrix (21). Electrospray mass spectroscopy was performed with a Hewlett-Packard model G1946A instrument interfaced to a model 1100 HPLC system equipped with a Vydac 218TP narrow-bore C18 column. The effluent from the column (200 μl/min) was mixed in a tee with acetic acid, pumped by another 1100 series pump (100 μl/min), and the mixture was introduced into the mass spectrometer (27). Sequences were determined by automated Edman degradation with a Hewlett-Packard model G1005 sequencer running version 3.5 of the manufacturer's chemistry program.

Assay of TrxR and GR Activities-Reduction of oxidized Trx was measured in a mixture (1 ml) containing 50 mM potassium phosphate (pH 7.0), 50 mM KCl, 1 mM EDTA, 0.25 mM NADPH, and 120 μM oxidized Trx. After addition of the TrxR source, the oxidation of NADPH was monitored spectrophotometrically at 340 nm and 25°C. The GR assay mixture contained 50 mM potassium phosphate (pH 7.0), 1 mM EDTA, 0.25 mM NADPH, 1 mM GSSG, and enzyme source in a final volume of 0.5 ml, and the reaction was monitored spectrophotometrically at 340 nm and 25°C. For both TrxR and GR assays, activity was calculated as micromoles of NADPH oxidized per minute at 25°C from the relation ΔA340 × 0.5/6.22. Assay mixtures lacking enzyme served as controls.

Cloning and Sequencing of TrxR2 cDNA from a Rat Liver cDNA Library-Complementary DNA encoding TrxR2 was amplified from Marathon rat liver cDNA (CLONTECH) by the polymerase chain reaction (PCR) with the 5′ primer 5′-CA(G/A)CA(G/A)AA(C/T)TT(C/T)GA and the 3′ primer 5′-GTNACNGGCTGCTGAGG, corresponding to the determined NH2-terminal (QQNFDLLVIGGGS) and COOH-terminal SGLDPTVTGCUG (where U represents Secys) sequences, respectively, of the purified protein. The PCR products were separated on a 1% agarose gel, and the amplified 1.5-kilobase molecule was eluted from the gel with a Qiagen gel extraction kit.
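Looping back to the activity assays just described: converting the slope of the A340 trace into enzyme activity is a one-line Beer-Lambert calculation. A minimal Python sketch (the function name is ours; 6.22 mM⁻¹ cm⁻¹ is the extinction coefficient of NADPH at 340 nm, and 0.5 is the volume term, in ml, from the formula in the text):

def nadph_rate_umol_per_min(dA340_per_min: float, volume_ml: float = 0.5,
                            path_cm: float = 1.0) -> float:
    """Micromoles of NADPH oxidized per minute from the A340 slope."""
    # concentration change (mM/min) = dA / (epsilon * path); umol = mM * ml
    return dA340_per_min * volume_ml / (6.22 * path_cm)

# e.g. a slope of 0.1 A340 units/min in a 0.5-ml assay:
print(nadph_rate_umol_per_min(0.1))  # ~0.008 umol/min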
After ligation of the eluted DNA into the pCR3.1 vector (Invitrogen) and transformation of Escherichia coli, positive clones were identified by nested PCR with the internal forward primer 5′-GA(C/T)GA(C/T)ATNTT(C/T)TGG or the internal reverse primer 5′-CCA(G/A)AANAT(G/A)TC(G/A)TC, both of which were derived from the internal amino acid sequence HGITSDDIFWLK of TrxR2. Plasmid DNA was purified from the positive E. coli clones with a Qiagen mini-plasmid preparation kit and was sequenced with the T7 primer and the pCR3.1 reverse primer on an ABI sequencer. To extend the 5′ sequence, we performed 5′-rapid amplification of cDNA ends (5′-RACE) by PCR with Marathon-Ready rat liver cDNA as the template, the adapter primer 1 (CLONTECH) as the forward primer, and a reverse primer complementary to the sequence 5′-CTGTGGCTGACTATGTGGAA. 5′-RACE was also performed with the nested adapter primer 2 (CLONTECH) as the forward primer and a reverse primer complementary to the sequence 5′-GCAGCAGAACTTCGATCTC. PCR products were cloned into the TA vector and sequenced. The 5′-extended sequences determined from the two independent 5′-RACE experiments were identical. Similarly, the sequence of the 3′-untranslated region was determined by two independent 3′-RACE experiments with two different sense primers (5′-GCTTCATACGCACAGGTGATGCAG and 5′-TGGTTAAGCTGCACATCTCC) and the adapter primer 1 as the antisense primer.

Analytical Ultracentrifugation-A Beckman Optima model XL-A analytical ultracentrifuge equipped with a four-place An-Ti rotor was used for sedimentation velocity experiments at 20.0°C. The density (ρ) of the dialysate buffer (10 mM sodium phosphate-1.8 mM potassium phosphate (pH 7.4), 137 mM NaCl, 2.7 mM KCl, 1 mM EDTA, and 1 mM 2-mercaptoethanol (substituted for 1 mM DTT in centrifugation studies)) was determined to be 1.00546 g/ml at 20.00 ± 0.01°C with a Paar DMA 58 densitometer, and the relative viscosity was determined to be 1.020 (28). The partial specific volumes (v̄) of TrxR1 and TrxR2 were calculated to be 0.720 and 0.722 ml/g (29), respectively, from the amino acid sequences. The protein sample (0.34 ml) and dialysate buffer (0.35 ml) were loaded into the right and left sides, respectively, of a 4°, Kel-F-coated, double-sector centerpiece in 12-mm cells that were equipped with plane ultraviolet-quartz windows. Sedimentation velocity experiments were performed at 48,000 and 40,000 rpm for TrxR1 and TrxR2, respectively, while scanning in a continuous mode (0.003-cm steps) with triple averaging at 280 nm and 4-min intervals (after equilibration and radial calibration at 3000 rpm, at which speed radial and wavelength (9 to 11 averages at 1-nm resolution) scans were collected). The TRACKER program of A. P. Minton (http://bbri-www.eri.harvard.edu/RASMB/rasmb.html) was used to monitor the progress of runs. Observed sedimentation coefficients (sobs) were corrected to the density and viscosity of water at 20.0°C, yielding s20,w values of 1.0393 × sobs and 1.0395 × sobs for TrxR1 and TrxR2, respectively.
The time derivative method of Stafford (30, 31) was used to estimate the molecular weights of TrxR1 and TrxR2; the diffusion coefficient (D) and sedimentation coefficient (s) were obtained from the half-width and maximum, respectively, of the Gaussian fit to the g(s*) distribution pattern from four late scans (Origin Windows g(s*) Velocity Program of Beckman Instruments), and the solute molecular weight (M) was calculated from the Svedberg equation, M = sRT/[D(1 − v̄ρ)]. The relation of D to the half-width or standard deviation (σ) of the Gaussian fit is given by D = (σ rm ω²t)²/2t, where t is the sedimentation time in seconds, ω is the angular velocity of the rotor, and rm is the radial position of the meniscus (30, 31); t, ω²t, and rm can be read from the output file from DC_DT in the Beckman g(s*) program.

Atomic Absorption Spectrometry-The selenium content of TrxR1, TrxR2, and the BIAM-labeled peptide derived from BIAM-labeled TrxR2 was determined with a Perkin-Elmer model 4100 ZL atomic absorption spectrometer with the use of a palladium-magnesium nitrate modifier and temperature conditions as described (32). Calibration solutions were prepared by diluting the selenium stock solution (1 g/liter) with a solution containing 1.16 mM Na2HPO4, 0.31 mM KH2PO4, 10.26 μM NADPH, and bovine serum albumin (0.120 mg/ml) to give final concentrations of 0, 10, 30, and 90 μg/liter. Various concentrations of TrxR1 and TrxR2 were prepared in the same solution devoid of albumin; the BIAM-labeled peptide solution was prepared in double-distilled water.

Distribution of TrxR Isozymes in Rat Tissues and in Subcellular Fractions of Rat Liver-Frozen rat tissues (Pel-Freeze Biologicals) were sonicated in a solution containing 20 mM Tris-HCl (pH 7.5), 1 mM EDTA, aprotinin (2.5 μg/ml), and leupeptin (5 μg/ml). The sonicates were centrifuged at 100,000 × g for 15 min, and the resulting supernatants were subjected to immunoblot analysis with antibodies specific for TrxR1 or TrxR2. Rat liver homogenates were prepared, and cytosolic and mitochondrial fractions were separated by ultracentrifugation as described (33, 34).

FIG. 2 (legend, in part). Peptides a, b, and c that were subjected to sequence determination are indicated. B, each fraction from the C18 column in A was assayed for BIAM-labeled peptides with the use of horseradish peroxidase-conjugated streptavidin and the peroxidase substrate 3,3′,5,5′-tetramethylbenzidine, the oxidation of which was monitored spectrophotometrically at 405 nm. C, the tryptic digest of BIAM-labeled TrxR2 was subjected to affinity purification with Neutravidin beads, and the resulting purified peptides were analyzed on a C18 column as described in A. The position of BIAM-labeled peptide a is indicated.

RESULTS

Purification of TrxR1 and TrxR2-In previous studies, we have used the Trx system as the electron donor for reduction of H2O2 by peroxiredoxins (26) and for reactivation of H2O2-inactivated protein tyrosine phosphatase (6). For these experiments, the 58-kDa TrxR, designated here as TrxR1, was routinely purified from rat liver by a procedure that included acidification of the tissue homogenate to pH 5 and sequential chromatography of the acid-soluble proteins on DEAE-Sephacel, a 2′,5′-ADP-agarose affinity matrix, and Phenyl-5PW. A purification in which the pH 5 precipitation step was inadvertently omitted yielded two peaks of flavoprotein after the 2′,5′-ADP-agarose column step, as judged from the ratio A280/A460.
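The Svedberg arithmetic above is easy to check numerically. A minimal Python sketch, assuming CGS units and the density of water at 20°C for the s20,w/D pair (the s, D, and v̄ values are those reported here for TrxR1):

R = 8.314e7        # gas constant, erg mol^-1 K^-1
T = 293.15         # 20.0 degC in kelvin
rho = 0.99823      # density of water at 20 degC, g/ml
vbar = 0.720       # partial specific volume of TrxR1, ml/g
s = 6.08e-13       # s20,w = 6.08 S, in seconds
D = 4.85e-7        # diffusion coefficient, cm^2/s

M = s * R * T / (D * (1.0 - vbar * rho))   # Svedberg equation
print(round(M))    # ~109,000, close to 2 x 54,491 expected for a TrxR1 dimer

def diffusion_from_gs(sigma, r_m, omega, t):
    # D from the standard deviation (sigma) of the Gaussian fit to g(s*),
    # per the relation given above
    return (sigma * r_m * omega ** 2 * t) ** 2 / (2.0 * t)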
The first peak eluted with the buffer containing 200 mM KCl and contained a protein with an apparent molecular mass of 58 kDa, whereas the second peak eluted with the buffer containing 1 M NaCl and 200 mM KCl and comprised predominantly a 55-kDa protein. Further purification of these peak fractions by phenyl-Sepharose column chromatography yielded highly purified preparations of the 58- and 55-kDa proteins (Fig. 1A). Whereas both preparations exhibited TrxR activity, polyclonal antibodies to the 58-kDa TrxR1 recognized the 58-kDa protein from the first peak but not the 55-kDa protein from the second peak (Fig. 1B), suggesting that the 55-kDa protein was not derived from TrxR1. The 55-kDa protein was thus named TrxR2. Like TrxR1, TrxR2 was demonstrated to be a flavin-containing protein by its absorption spectrum, which showed maxima at 280, 352, and 446 nm in a ratio of 1, 0.12, and 0.11, respectively (Fig. 1C).

Sequences of Peptides Derived from TrxR2-TrxR1 contains a penultimate Secys residue at its COOH terminus that can be selectively labeled with an alkylating agent (35). To determine whether TrxR2 also contains such a residue, the purified protein was labeled with 10 μM BIAM at pH 6.5 and subsequently incubated with 2 mM iodoacetamide at pH 7.5. The labeled protein was cleaved with trypsin, and a BIAM-labeled peptide was purified with Neutravidin affinity matrix and a C18 column (Fig. 2). Edman degradation of the labeled peptide yielded the sequence SGLDPTVTGCXG; the residue corresponding to cycle 10 was identified as carboxymethylated cysteine, and the residue corresponding to cycle 11 was unknown. Determination of the molecular mass of the purified peptide by MALDI-TOF mass spectrometry yielded a mass of 1540.8 (data not shown), which is virtually identical to the value of 1539.8 calculated for the dodecameric peptide with a carboxymethylated Cys and a BIAM-labeled Secys. These data suggest that the residue corresponding to cycle 11 was a Secys that had been selectively labeled with BIAM at the lower pH, whereas the Cys residue adjacent to the Secys was alkylated with iodoacetamide after the pH and the concentration of alkylating reagent were increased. The presence of selenium in the BIAM-labeled peptide was confirmed by atomic absorption spectrometry (see below). Two nonlabeled TrxR2 peptides were also purified (Fig. 2A) and yielded the sequences IIVDAQEATSVPHIYAIGDV and HGITSDDIFWLK. In addition, TrxR2 was subjected to automated Edman sequencing for 20 cycles, with the major sequence being readable for 19 cycles: GGQQNFDLLVIGGGSGGLA. A minor sequence was consistent with loss of the first glycine residue. Two preparations of TrxR2 were analyzed by electrospray mass spectroscopy, giving a mass of 53,037 ± 2, in excellent agreement with the 53,036 calculated from the sequence deduced from the cDNA, and confirming the amino-terminal site as well as the presence of selenocysteine. TrxR1 was similarly labeled with BIAM, and the BIAM-labeled peptide generated by digestion with endoproteinase Lys-C was purified. Sequencing and MALDI-TOF analysis of the labeled peptide (data not shown) yielded the sequence RSGGDILQSGCUG (where U represents Secys), which matches exactly the sequence of amino acids 486 to 498 at the COOH terminus of the previously identified TrxR from rat liver (18).
Cloning and Sequencing of TrxR2 cDNA-TrxR2 cDNA was amplified from a rat liver cDNA library by PCR with primers based on the NH2- and COOH-terminal amino acid sequences of the purified protein, as described under "Experimental Procedures." A 1.5-kilobase PCR product was obtained, the cloning and sequencing of which revealed a 1463-base pair fragment that contained the precise coding sequences, in the same reading frame, for the two internal tryptic peptides derived from purified TrxR2 (Fig. 3). Additional 5′ and 3′ sequences were obtained with the use of RACE-PCR, yielding a cumulative sequence of 1982 base pairs, excluding the poly(A) tail (Fig. 3). The translational initiation site was assumed to be the methionine codon composed of nucleotides 29 to 31, which was the first ATG triplet downstream of an in-frame nonsense codon (TAA at nucleotides 14 to 16). Two translational termination codons, TGA and TAA, occurred in-frame in the sequence TGAGGTTAA (nucleotides 1601 to 1609). As in other Secys-containing proteins, the TGA codon corresponds to the penultimate Secys residue. Therefore, the TAA triplet was assumed to be the termination codon. The open reading frame encodes a polypeptide of 526 amino acids, with a calculated molecular mass of 56,574.8 daltons. The deduced protein sequence contained 36 residues upstream of the experimentally determined NH2 terminus of purified TrxR2. This additional 36-residue sequence contains 6 arginine residues and no acidic residues, and it is predicted to form an α-helical structure (Plotstructure program of the University of Wisconsin Genetics Computer Group). The predicted high isoelectric point and α-helical structure are hallmarks of most mitochondrial leader peptides (36). Furthermore, like many mitochondrial precursor proteins, the predicted TrxR2 protein contains an arginine residue at position −10 (relative to the NH2-terminal residue of the mature protein) (37). Therefore, the mature TrxR2 comprises 490 amino acids, with a calculated molecular mass of 53,036 daltons; for comparison, the calculated molecular mass of the 498-residue rat TrxR1 is 54,491 daltons (18). The minor NH2-terminal sequence GQQNFD obtained from purified TrxR2 was likely derived from products of cleavage by aminopeptidases.

The deduced TrxR2 sequence shows 54% identity (62% similarity) to both the rat (18) and human (20) TrxR1 sequences (Fig. 4). Rat TrxR proteins show low sequence homology to prokaryote and yeast TrxR enzymes (28 and 36% identity to the E. coli (16) and yeast (5) TrxR, respectively), and they are distinguished from these enzymes by the presence of a COOH-terminal extension containing the Secys residue. TrxR2 showed relatively high homology to human GR (38) (41% identity and 50% similarity) as well as to putative GR sequences of Caenorhabditis elegans (39) and Drosophila melanogaster (40) (48 to 50% identity, 57 to 58% similarity). Furthermore, unlike the human and mouse GR proteins, the C. elegans and D. melanogaster enzymes possess long COOH-terminal regions that are similar to those of mammalian TrxR proteins. The COOH termini of the C. elegans and D. melanogaster GR proteins also end with the sequences GCCG and SCCS, respectively, which resemble the GCUG motif at the COOH termini of mammalian TrxR enzymes. As a result of the effort to sequence C. elegans chromosome III (41), another putative GR gene (GenBank accession number U61947) has been identified on the basis of its homology (36% identity and 45% similarity) to mammalian GR enzymes.
However, the gene product has not been shown to possess GR activity, and, at the time the gene was characterized, mammalian TrxR genes had not been identified. Comparison of the predicted amino acid sequence of the protein encoded by the chromosome III GR gene with those of TrxR1 and TrxR2 revealed higher homology (48 to 59% identity and 58 to 69% similarity) to these proteins than to mammalian GR enzymes. Furthermore, as in mammalian TrxR enzymes, the codons for Secys (TGA) and Gly (GGT) in the chromosome III gene are followed by the stop codon TAA, indicating that the gene product could contain the sequence GCUG at its COOH terminus. Therefore, we listed the product of the chromosome III gene as a TrxR rather than as a GR in Fig. 4. TrxR2 showed low homology to other members of the flavoprotein disulfide oxidoreductase family, such as human dihydrolipoamide reductase (42) (28% identity) and Pseudomonas aeruginosa mercuric reductase (43) (26% identity).

Secys Insertion Sequence Element in the 3′-Untranslated Region of TrxR2 cDNA-The encoding of Secys by TGA in eukaryotic selenoproteins requires the presence of a Secys insertion sequence (SECIS) element such as that located in the 3′-untranslated regions of transcripts that encode thyroid hormone deiodinases, glutathione peroxidases, and several types of selenoprotein P (44, 45). The spacing between the TGA codon and the SECIS element varies greatly. The SECIS element has been defined on the basis of conserved sequence rather than functional features. The conserved sequence includes the invariable AUGA, three consecutive A residues that are separated by 9 to 12 residues from the AUGA motif, and the doublet GA that is separated by a widely variable number of residues from the AAA triplet (Fig. 5A). Although the overall sequence homology among SECIS elements is low, they exhibit conserved stem-loop structures that can be divided into two types (46). Both type I and type II structures contain the conserved sequences AUGA and GA in the 5′ and 3′ arms, respectively, of the stems. The two types differ in that the unpaired AAA sequence is located in the apical loop in type I structures but forms a bulge in the 5′ arm of type II structures. The AAA-containing bulge in type II structures is separated from the apical loop by a predicted stem of 3 to 5 base pairs. SECIS elements with a long sequence between the AAA triplet and GA dinucleotide show a tendency to assume a type II structure (47, 48). The 3′-untranslated region of TrxR2 cDNA contains a putative SECIS element that conforms to the consensus sequence. A computer folding program indicated that the TrxR2 SECIS element forms a type II stem-loop structure that contains an AAA bulge and a stem of three G-C pairs below the apical loop of six nucleotides (Fig. 5B).

FIG. 6 (legend). Immunoblot analysis of rat liver fractions (A) and purification of TrxR2 from the acid-precipitated proteins (B-D). A, immunoblot analysis with antisera to TrxR1 (αTrxR1) and to TrxR2 (αTrxR2) of rat liver homogenate (crude extract) as well as of the supernatant and precipitate obtained after acidification of the homogenate to pH 5.0. B-D, purification of TrxR2 by sequential chromatography of the acid-precipitated proteins on columns of DEAE-Sephacel (B), 2′,5′-ADP-agarose (C), and Phenyl-5PW (D). Column fractions were subjected to immunoblot analysis with antibodies to TrxR2 (insets). See "Experimental Procedures" for further details.
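The SECIS consensus described above can be scanned for crudely with a regular expression. A minimal Python sketch over a DNA-alphabet 3′-UTR string (a real SECIS search would also have to score the stem-loop structure, which this ignores; the sequence variable is hypothetical):

import re

# Invariant ATGA, then 9-12 nucleotides before three consecutive A residues,
# then a GA doublet at a variable distance downstream.
SECIS_RE = re.compile(r"ATGA[ACGT]{9,12}AAA[ACGT]+?GA")

def find_secis_candidates(utr3: str):
    return [m.span() for m in SECIS_RE.finditer(utr3.upper())]

# Hypothetical usage on a 3'-UTR sequence string:
# print(find_secis_candidates(trxr2_utr3))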
Improved TrxR Purification Procedure-A rabbit antiserum to TrxR2 was prepared by injection with a peptide (RSGLDPTVTGCCG) that is identical to the COOH-terminal sequence of TrxR2 with the exception that Secys was replaced by Cys. Immunoblot analysis with this antipeptide serum and with rabbit antiserum to TrxR1 indicated that the acidification of rat liver homogenate to pH 5 resulted in the precipitation of TrxR2, whereas TrxR1 remained soluble (Fig. 6A). On the basis of this result, the purification protocols for TrxR1 and TrxR2 were improved as described under "Experimental Procedures." The modified approach, which includes three successive column chromatographies after the acid precipitation step, allowed us to obtain TrxR1 and TrxR2 with no cross-contamination and with higher yields, because increased pooling of column fractions was possible. Elution profiles for the three column steps are shown for the purification of TrxR2 (Fig. 6, B-D). The procedure yielded 6.5 mg of TrxR1 and 2.1 mg of TrxR2 with high purity (>95%) from 1 kg of rat liver.

Homogeneity, Size, and Shape of TrxR Enzymes-Like the TrxR enzymes from prokaryotes and yeast, mammalian TrxR1 was shown to be a dimer in its native state. Analysis by nondenaturing PAGE of TrxR1 and TrxR2 purified by the improved procedure yielded a value for the molecular mass of TrxR1 of 115 kDa, consistent with that expected for a dimer. However, the mobility of TrxR2 was substantially less than that of TrxR1. Electrophoresis performed overnight on an 8 to 16% gradient gel yielded an estimated size of 300 kDa for TrxR2 (data not shown). Furthermore, the molecular mass of TrxR2 determined from its mobility was dependent on the time of electrophoresis. Because the isoelectric point of mature TrxR2 (8.0) predicted from its amino acid sequence is substantially higher than that of TrxR1 (5.9), it was possible that the lower mobility of TrxR2 was due to its lower charge density under the conditions of electrophoresis rather than to a difference in oligomerization state. To verify this supposition, we subjected TrxR1 and TrxR2 to analytical ultracentrifugation. A single, symmetrical sedimentation boundary was observed for both TrxR1 and TrxR2 at pH 7.4 and 20°C. Time derivative analyses of four late concentration profiles are shown for TrxR1 (Fig. 7A) and TrxR2 (Fig. 7B). The fit of g(s*) data for TrxR1 and TrxR2 to a single Gaussian curve in each instance demonstrated the homogeneity of the proteins. The g(s*) function for TrxR2 (Fig. 7B) appeared broader than that for TrxR1 (Fig. 7A) because TrxR2 was sedimented for a longer time and at a lower speed (40,000 rpm, 93 min) than was TrxR1 (48,000 rpm, 77 min), on the basis of the gel electrophoresis data indicating that TrxR2 might be larger than TrxR1. The sedimentation properties of TrxR1 and TrxR2 are summarized in Table I. The corrected sedimentation coefficients for TrxR1 and TrxR2 were 6.08 and 6.29 S, and the diffusion coefficients (±4%) were 4.85 × 10⁻⁷ and 5.57 × 10⁻⁷ cm²/s, respectively, as calculated from Gaussian fits of the g(s*) distributions shown in Fig. 7. These values and the partial specific volumes calculated from the amino acid compositions (29) yielded molecular weight values within 4% of those expected for dimers of TrxR1 and TrxR2. The determined frictional coefficients (f) were slightly higher than those (f0) calculated for spherical dimer particles, indicating that shape or volume effects reduce sedimentation rates.
Catalytic Activity-The broad substrate specificity of mammalian TrxR proteins has allowed the NADPH-dependent reduction of 5,5′-dithiobis(2-nitrobenzoic acid) to be used as the basis for an assay of TrxR activity (14, 15). In the present study, we have used an assay of greater physiological relevance that is based on the reduction of recombinant mammalian Trx after its oxidation by H2O2. The specific activities of TrxR1 and TrxR2 measured in the presence of saturating concentrations of oxidized Trx and NADPH were 2.2 and 3.3 μmol/min/mg of protein, respectively (Fig. 8A). The COOH-terminal Secys of TrxR1 was recently shown to be essential for catalytic activity (18, 35). To determine whether the Secys residue of TrxR2 was similarly essential, freshly prepared TrxR2 was labeled with BIAM at pH 6.5 as described under "Experimental Procedures" and Fig. 2, with the exception that the BIAM-labeled enzyme was not subsequently exposed to iodoacetamide. Labeling with BIAM completely blocked the activity of TrxR2 toward oxidized Trx (Fig. 8B). Labeling of TrxR2 as described in Fig. 2 yielded only one major BIAM-labeled peptide, in which Secys, but not the adjacent Cys, was modified. Like other flavoprotein disulfide oxidoreductases, both TrxR1 and TrxR2 contain a redox-active disulfide center comprising the sequence CVNVGC (residues 52 to 57 in mature TrxR2). However, the two cysteine residues in this sequence were not labeled by BIAM in the experiment shown in Fig. 2. These results suggest that the inactivation of TrxR2 by BIAM resulted from modification of the Secys residue, and thus that this residue is essential for the catalytic activity of this protein. Because of the high homology between TrxR2 and GR enzymes from human, C. elegans, and D. melanogaster, we also assayed TrxR2 for GR activity with yeast GR as a control. TrxR2 did not exhibit detectable GR activity (Fig. 8C).

FIG. 7 (legend, in part). The solid line in each panel represents a single Gaussian fit for either the TrxR1 or TrxR2 data set. The observed sedimentation and diffusion coefficients of the solute are given by the s* values at the maximum and the half-width, respectively, of the g(s*) curve (28, 29).

TABLE I. Sedimentation properties of TrxR1 and TrxR2. Values are listed for the molecular weights calculated from the amino acid sequence (Mchain) and those determined from the sedimentation and diffusion coefficients (Mr) with the Svedberg equation; the sedimentation coefficient (s20,w) corrected for the viscosity and density of the buffer; and the frictional ratio (f/f0) calculated from the frictional coefficient, f = M(1 − v̄ρ)/(Ns), and from the frictional coefficient of a sphere having a volume equal to that of an ellipsoid, f0 = 6πη(3Mv̄/4πN)^(1/3) (62), where N is Avogadro's number and η is the solvent viscosity.

Selenium Content of TrxR Enzymes-The selenium content of TrxR1 purified from various sources was previously determined to be 0.6 to 0.93 mol of selenium per subunit (4, 18). TrxR appears to lose selenium under conditions of increased oxidative stress, as indicated by the observation that the selenium content of TrxR1 from HeLa cells decreased by almost half when the oxygen level in the culture chamber was increased (35). We measured the selenium content of freshly purified TrxR1 and TrxR2 by atomic absorption spectrometry and comparison with dilutions of a standard selenium stock solution. Five independent measurements with TrxR enzymes in the concentration range of 13.5 to 40.5 μg/ml yielded selenium contents of 0.75 ± 0.08 and 0.84 ± 0.20 mol of selenium per subunit (means ± S.E.)
for TrxR1 and TrxR2, respectively.

Tissue Distribution and Subcellular Localization of TrxR Isoforms-Total soluble fractions of sonicates prepared from various rat tissues were subjected to immunoblot analysis with rabbit antibodies specific for TrxR1 or TrxR2 (Fig. 9A). Comparison of the intensities of the immunoreactive proteins in the various tissues with those of purified TrxR proteins allowed us to estimate the amount of each isoform in micrograms of TrxR per milligram of total soluble protein. TrxR1 was abundant in all the tissues examined, varying in amount from 0.6 to 1.6 μg/mg of soluble protein. However, TrxR2 was relatively abundant (0.3 to 0.6 μg/mg of soluble protein) only in liver, kidney, adrenal gland, and heart; in the other tissues, the amount of TrxR2 was below the limit of detection (0.02 μg/mg). The antibodies to TrxR2 detected two bands in the liver and kidney (Fig. 9A). The lower band corresponded to TrxR2. Longer exposure of the immunoblot also revealed a faint upper band for the other tissues. This upper band might correspond to a protein that shows low cross-reactivity with the antibodies to TrxR2. However, the marked intensity of the upper band in kidney is suggestive of the presence of a third isoform of TrxR that is similar in size to TrxR1 but which possesses a COOH-terminal sequence highly similar to that of TrxR2. Alternatively, the upper band might represent the TrxR2 preprotein with the NH2-terminal 36 residues intact, which would suggest that the translocation of the preprotein from the cytosol into mitochondria is slower in kidney than in other tissues. We next investigated the subcellular localization of TrxR1 and TrxR2 by immunoblot analysis of cytosolic and mitochondrial fractions of rat liver (Fig. 9B). Whereas TrxR1 was detected only in the cytosolic fraction, TrxR2 was present predominantly in the mitochondrial fraction.

FIG. 8 (legend). Measurement of TrxR and GR activities of TrxR2. A, the Trx-reducing activities of 1 μg of TrxR1 and of TrxR2 were measured by coupling the reduction of oxidized Trx to NADPH oxidation and monitoring the decrease in A340; an assay mixture lacking enzyme served as a control. B, the Trx-reducing activities of 1 μg of BIAM-labeled or unlabeled TrxR2 were measured as in A; an assay mixture lacking TrxR2 served as a control. TrxR2 was labeled with BIAM as described under "Experimental Procedures," with the exception that subsequent incubation with iodoacetamide was omitted. C, the GR activities of 2 μg of yeast GR, rat TrxR1, and rat TrxR2 were measured by coupling the reduction of GSSG to NADPH oxidation and monitoring the decrease in A340; an assay mixture lacking enzyme served as a control.

DISCUSSION

While the present study was in progress, Rigobello et al. (19) described the purification of TrxR from a rat liver mitochondrial fraction. Like previously isolated mammalian TrxR enzymes (15), the purified mitochondrial enzyme exhibited a broad substrate specificity. However, its chromatographic behavior differed from that of the cytosolic enzyme, and its size was smaller. It was not determined whether the purified mitochondrial protein was derived from the cytosolic enzyme or whether it contained a Secys residue. We have now established a relatively simple procedure for the purification of TrxR1 and TrxR2 without cross-contamination, and we have cloned a cDNA encoding rat TrxR2.
Comparison of the amino acid sequence deduced from the cDNA with the experimentally determined sequence of the NH2-terminal region of purified TrxR2 indicated that TrxR2 is likely synthesized in the cytoplasm as a pre-protein that is converted to the mature form in mitochondria by removal of the 36 NH2-terminal residues. Like TrxR1, but unlike TrxR proteins from prokaryotes and yeast, TrxR2 contains an essential Secys residue in the COOH-terminal region. As in other selenoproteins, the Secys residue of TrxR2 appears to be encoded by a UGA codon under the influence of a stem-loop structure formed by a SECIS element located in the 3′-untranslated sequence of TrxR2 mRNA.

Mammalian cells express two distinct forms of superoxide dismutase, cytosolic CuZn-superoxide dismutase and mitochondrial Mn-superoxide dismutase. We have previously shown that mammalian cells express a family of peroxidases, termed the peroxiredoxin family, that reduce H2O2 and lipid peroxides with the use of electrons donated by Trx (26, 49). On reaction with hydroperoxides, the redox-sensitive Cys residue of Prx is oxidized to Cys-SOH, which then reacts with a neighboring Cys-SH of the other subunit to form an intermolecular disulfide. This disulfide is specifically reduced by Trx, but not by glutathione or glutaredoxin (5, 26). Whereas the Prx I, II, and IV isoforms are cytosolic proteins, Prx III is synthesized in the cytosol and then transferred to mitochondria, where its 62 or 63 NH2-terminal residues are cleaved during maturation (26, 50, 51). Like TrxR2, Prx III is most abundant in adrenal gland, heart, liver, and kidney.² Recently, a cDNA that encodes a second isoform of Trx (Trx2) with a 60-residue mitochondrial targeting sequence was cloned and its specific expression in mitochondria confirmed (25).

Most of the reactive oxygen species generated in unstimulated mammalian cells arise from the univalent reduction of molecular oxygen to the superoxide anion (O2˙−), much of which occurs in the mitochondrial respiratory chain (52). Increased oxidative stress in mitochondria results in collapse of the mitochondrial membrane potential, consequent impairment of oxidative phosphorylation of ADP, and, ultimately, cell death (53, 54). Oxidative stress in mitochondria also promotes the calcium-dependent, nonspecific permeabilization of the inner membrane as a result of the oxidation and cross-linking of thiol groups in membrane proteins (55, 56). Such increased nonspecific permeabilization has been suggested to lead to the release of mitochondrial constituents, including cytochrome c, into the cytosol, which in turn induces cell death by apoptosis (57-61).

² S. W. Kang and S. G. Rhee, unpublished observation.

FIG. 9 (legend). Tissue distribution (A) and subcellular localization (B) of TrxR1 and TrxR2. A, protein samples (23 μg of the total soluble fraction of various rat tissues and the indicated amounts of purified enzymes) were fractionated by SDS-PAGE on a 10% gel, transferred to a nitrocellulose membrane, and subjected to immunoblot analysis with antibodies specific for TrxR1 or TrxR2. Immune complexes were detected with the use of horseradish peroxidase-conjugated secondary antibodies and ECL reagents. B, immunoblot analysis of rat liver total extract (20 μg), cytosolic proteins (20 μg), and mitochondrial proteins (20 μg) with antibodies to TrxR2, TrxR1, and cytochrome c oxidase subunit VIc (cyt. ox.). Purified TrxR1 (50 ng) and TrxR2 (10 ng) were also analyzed as controls.
Thus, the line of defense provided by Prx III, Trx2, and TrxR2 against H2O2 likely plays a critical role in cell survival.
2018-04-03T01:23:48.811Z
1999-02-19T00:00:00.000
{ "year": 1999, "sha1": "ed91594e01f58fd3b5f3a22545a4dc861a385605", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/274/8/4722.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "f3f8a780efb996c7ba79eb892af39e6657a34fa4", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
219469215
pes2o/s2orc
v3-fos-license
Modeling and Forecasting Medium-Term Electricity Consumption Using Component Estimation Technique The increasing shortage of electricity in Pakistan disturbs almost all sectors of its economy. Since precise and efficient forecasts of electricity consumption are vital for accurate policy formulation, this paper implements a forecasting procedure based on a component estimation technique to forecast medium-term electricity consumption. To this end, the electricity consumption series is divided into two major components: deterministic and stochastic. For the estimation of the deterministic component, we use parametric and nonparametric models. The stochastic component is modeled by using four different univariate time series models, including the parametric AutoRegressive (AR), nonparametric AutoRegressive (NPAR), Smooth Transition AutoRegressive (STAR), and AutoRegressive Moving Average (ARMA) models. The proposed methodology was applied to Pakistan electricity consumption data ranging from January 1990 to December 2015. To assess one-month-ahead post-sample forecasting accuracy, three standard error measures, namely Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Root Mean Square Error (RMSE), were calculated. The results show that the proposed component-based estimation procedure is very effective at predicting electricity consumption. Moreover, the ARMA models outperform the other models, while the NPAR model is competitive. Finally, our forecasting results are comparatively better than those reported in other works.

Introduction

Electricity is a key component for the growth and development of any country's economy. It is a highly flexible form of energy that practically fuels the performance of each sector of an economy. It is a basic requirement of modern human life, bringing benefits and development to different sectors including healthcare, transportation, industry, mining, and broadcasting [1]. Generally, electricity demand is an indication of the performance of a country's economy, as electricity demand is integrated with all phases of development. Therefore, electricity demand forecasts are essential for power system management, scheduling, operations, and the capability evaluation of networks. In practice, however, electricity demand forecasting remains challenging for researchers, as many factors directly or indirectly influence electricity consumption over time [2-4]. Generally, electricity load or price forecasting is divided into three categories with respect to time scale: short term generally refers to forecasts from a few hours to a few days ahead; medium term is used for forecasts from a few weeks to a few months ahead; and long term generally covers forecasts from a few months to years ahead [5]. Short-term electricity load forecasting is essential for the control and programming of electric power systems and is also required by transmission companies when a self-dispatching market is in operation [6]. Medium- and long-term forecasts are also important for energy systems. For example, medium-term electricity demand forecasts are required for electric power system operation and scheduling [7-9], whereas long-term electricity demand forecasting is crucial for capacity scheduling and maintenance planning [10]. It is well known that electricity demand time series exhibit specific features. The monthly electricity demand time series may have a more or less cyclic behavior and a long-term trend.
Electricity consumption is strongly affected by weather and social factors that are generally reflected in the demand time series [9,11]. Economic indicators commonly influence the trend of the consumption series, while climate changes introduce a periodic behavior in the series. Medium-term electricity demand forecasting generally deals with monthly data points, which often include a long-run (trend) component as well as yearly and seasonal periodicities. For example, Figure 1 illustrates these features for the consumption series studied here.

Previously, many researchers have worked on medium-term electricity demand forecasting, generally ranging from one month to a few months ahead, using different methods including time series, regression, artificial intelligence, genetic algorithms, fuzzy logic, and support vector machines [12][13][14][15][16][17][18][19][20][21][22][23][24][25][26][27][28][29][30][31]. Time series models are easy to implement and have been commonly used for electricity load forecasting in the past. For example, Yasmeen and Sharif [32] used different linear and non-linear time series models, namely the AutoRegressive Integrated Moving Average (ARIMA), Seasonal-ARIMA, AutoRegressive Conditional Heteroscedasticity (ARCH), and its generalized form, GARCH, to forecast medium-term electricity demand. Electricity demand series often contain non-linearity and, hence, non-linear models can produce better forecasts. For example, Al-Saba and El-Amin [33] forecasted one year ahead electricity consumption for Pakistan using classical time series models, namely Autoregressive (AR) and ARMA models, as well as an Artificial Neural Network (ANN). Some authors compared classical time series and regression models. For example, Abdel-Aal and Al-Garni [34] used a multiple regression model and compared it with seasonal and non-seasonal ARIMA models. Economic and weather variables strongly influence electricity demand. To account for these effects, Nawaz et al. [21] studied Pakistan's annual electricity consumption with the help of economic variables. They forecasted electricity demand up to 10 years ahead using a Smooth Transition AutoRegressive (STAR) model. Many researchers have compared time series, regression, and computational intelligence models [16,18]. Electricity load can also be affected by temperature. To examine this, Ali et al. [35] studied the effect of monthly temperature on electricity demand in Pakistan. The results indicate a moderate linear correlation (r = 0.412) between mean temperature and electricity demand. On the other hand, several authors have combined the features of two or more models and proposed a new model, often referred to as a hybrid model [27,30,36,37]. For example, Alamaniotis et al. [13] proposed a hybrid model by combining the features of machine learning tools (kernels) with a vector regression model. For medium-term demand forecasting, Ghiassi et al. [38] proposed a hybrid model that combines a neural network model with expert systems. Several other techniques have also been used to forecast electricity demand [39][40][41].

The purpose of this study was to develop and evaluate model(s) for forecasting medium-term electricity consumption time series. The model(s) are intended to support operational planning and trading decisions. Following the authors of [42,43], in the proposed forecasting methodology, the electricity consumption series is divided into two parts: deterministic and stochastic. Each component is estimated by parametric and nonparametric regression and time series methods.
At the end, the forecasts from both components are combined to obtain the final forecast. Thus, the main contribution of this paper is a thorough investigation of the parametric and nonparametric approaches used for medium-term electricity consumption out-of-sample forecasting. Within the framework of the component estimation method, we compare models in terms of forecasting ability, considering univariate, parametric, and non-parametric models. Moreover, for the considered models, a significance analysis of the differences in prediction accuracy is also conducted.

The rest of the article is organized as follows. Section 2 contains an overview of Pakistan's electricity sector. Section 3 describes the proposed forecasting framework and the models used for forecasting. An application of the proposed forecasting framework is provided in Section 4. Section 5 concludes the study.

An Overview of Pakistan Electricity Sector

Pakistan has been facing an electricity shortage crisis since its inception. In 1947, Pakistan had the capacity to produce only 60 megawatts (MW) of electricity for its thirty-two million inhabitants. To address the electricity shortage through recognized interventions, the Water and Power Development Authority (WAPDA) was established in 1958. WAPDA built two dams, with a combined capacity of about 4478 MW, in the late 1970s to overcome the electricity crisis. Pakistan continued facing electricity shortages even in the 1980s, despite some haphazard efforts towards improving the situation [44]. With each passing year, the demand for electricity continued rising because of developmental activities, i.e., urbanization, rural electrification, and industrialization [45]. In the 1990s, the private sector was given licenses to build new thermal energy plants. This was a strategic shift in the electricity generation mix from hydro to thermal, which increased the cost of electricity generation significantly [46]. Until 2005, the total supply of electricity exceeded the required demand by approximately 450 MW. During 2007, Pakistan was hit by the worst power crisis in its history. Production fell by 6000 MW, resulting from huge shutdowns all over the country. In 2008, the required electricity demand fell short by 15%, and power outages became more frequent. Furthermore, the existing power stations and electricity distribution networks were also damaged during the 2005 earthquake and 2010 flood [47]. At the same time, the demand for electricity was increasing continuously. For example, from 2001 to 2008, electricity demand rose by almost 6% per year. In June 2013, the electricity shortage reached 4250 MW per day, with demand standing at 16,400 MW per day and generation at 12,150 MW per day [48]. These crises strongly affected economic growth and services, despite regular interventions to increase electricity production.

Pakistan is a developing country situated in South Asia with a population of over 200 million people. The demand for electricity is increasing exponentially due to increased demand in both the household and manufacturing sectors. The failure of Pakistan's power policy over the last few decades has left the country with an acute electricity crisis that has deepened the country's economic deficit. There are some country-specific issues that turn its electricity shortfall into a crisis.
These include theft, misuse, and overuse of electricity in the household and industrial sectors; unjustifiably large line losses; and low institutional capacity, corruption, mismanagement, and political controversies over mega power projects [49]. Pakistan fulfills its electricity requirements from different sources including coal, natural gas, oil, wind, solar, and nuclear [50]. The electricity sector in Pakistan comprises WAPDA, the National Electric Power Regulatory Authority (NEPRA), and a few independent power producers (IPPs). WAPDA and NEPRA are responsible for electric power maintenance, scheduling, transmission, and distribution throughout Pakistan, with the exclusion of Karachi city, which is served by the Karachi Electric Supply Company (KESC). The four main electricity producers in Pakistan are WAPDA, KESC, the IPPs, and the Pakistan Atomic Energy Commission (PAEC). The total power generation capacity of Pakistan as of 30 June 2015 was 24,823,000 kW, of which thermal was 16,814,000 kW (67.74%), hydro-electric was 7,116,000 kW (28.67%), nuclear was 787,000 kW (3.17%), and wind was 106,000 kW (0.43%) [51]. Table 1 describes the installed electricity generating capacities of Pakistan during 2011-2015.

Proposed Forecasting Model

The main objective of this study was to forecast one month ahead electricity consumption for Pakistan. Let $C_m$ be the electricity consumption for the mth month. To account for the dynamics of the electricity consumption time series, we propose that $C_m$ can be modeled as

$C_m = D_m + S_m$,

i.e., the electricity consumption series $C_m$ is divided into two major components: $D_m$, a deterministic component, and $S_m$, a stochastic component. The deterministic component includes the trend (long-run) and yearly periodicity. Mathematically, $D_m$ is defined as

$D_m = T_m + Y_m$,

where $T_m$ represents the trend (long-term) component and $Y_m$ represents the yearly periodicity component.

Parametric Case

This section describes the estimation of the deterministic component using the parametric regression method. The response variable $C_m$ is modeled parametrically by estimating the trend (long-run) component $T_m$ using a cubic polynomial regression in time m, while the yearly periodicity is described by monthly dummies, as

$D_m = \beta_0 + \beta_1 m + \beta_2 m^2 + \beta_3 m^3 + \sum_{i=1}^{12} \gamma_i I_{i,m}$,

with $I_{i,m} = 1$ if m refers to the ith month of the year and 0 otherwise. All regression coefficients related to these components are estimated by the Ordinary Least Squares (OLS) method. Once all regression coefficients are obtained, the estimated equation is given by

$\hat{D}_m = \hat{T}_m + \hat{Y}_m$.

In the past, many researchers have used this method for estimating the trend and yearly cycle components [52][53][54][55].

Nonparametric Case

In the literature, many authors have captured the trend and yearly cycle in a time series using nonparametric regression methods. For example, some authors used smoothing splines [43,56,57], kernel regression [58][59][60][61], and regression splines [43,62]. In our case, the deterministic component can be modeled nonparametrically as

$D_m = h_1(T_m) + h_2(Y_m)$.

Here, each $h_i$ is a smoothing function, of $T_m$ and $Y_m$ respectively. For yearly cycles, the smooth function is estimated from the series 1, 2, 3, . . . , 12, 1, 2, 3, . . . , 12, . . ., whereas the long-term (trend) component $T_m$ is estimated as a function of time m. For the smoothing functions, cubic regression splines are used to estimate the deterministic component. In the regression spline approach, the most important selection is the number of knots and their location, as they define the smoothness of the approximation. For this issue, we use the cross validation (CV) technique.
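A minimal Python sketch of the parametric estimation just described, assuming the monthly series is held in a 1-D NumPy array `consumption` (for instance, the synthetic series from the earlier sketch); variable names are illustrative, not taken from the authors' code.

```python
import numpy as np
import statsmodels.api as sm

m = np.arange(1, len(consumption) + 1)           # time index m = 1, ..., M
month = ((m - 1) % 12) + 1                       # calendar month 1..12

# Cubic polynomial terms for the long-run trend T_m
trend_terms = np.column_stack([m, m**2, m**3])

# Monthly dummy variables I_{i,m} for the yearly periodicity Y_m
dummies = (month[:, None] == np.arange(1, 13)[None, :]).astype(float)

# Drop one dummy to avoid exact collinearity with the intercept
X = sm.add_constant(np.column_stack([trend_terms, dummies[:, 1:]]))

fit = sm.OLS(consumption, X).fit()               # OLS, as in the paper
D_hat = fit.fittedvalues                         # estimated deterministic component
S_hat = consumption - D_hat                      # stochastic (residual) component, Eq. (6)
```

The nonparametric variant would replace the polynomial and dummy columns with cubic regression spline bases (e.g., built with `patsy` or a GAM library), with knots chosen by cross validation as the authors describe.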
Regression coefficients are estimated by the OLS method, and the estimated equation is given by

$\hat{D}_m = \hat{h}_1(T_m) + \hat{h}_2(Y_m)$.

Once the deterministic component is estimated, the residual (stochastic) component can be obtained as

$\hat{S}_m = C_m - \hat{D}_m$.    (6)

To assess graphically the performance of the above-described methods for estimating the deterministic component $D_m$ (both parametric and nonparametric), the observed electricity consumption and the estimated deterministic component are depicted in Figure 2, with the parametric estimate of $D_m$ in Figure 2a and the nonparametric estimate in Figure 2b. It is evident in the figure that both models used for the estimation of $D_m$ adequately capture both dynamics of the electricity consumption series, i.e., the long-run trend and the yearly seasonality, as the increasing (upward) trend and yearly cycles can be seen clearly. Using Equation (6), the stochastic (residual) components obtained from both methods are also plotted in Figure 2. Here, it is worth mentioning that, in general, the stationarity of a time series is inspected using the Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) tests [63,64]. However, several researchers have shown that the ADF and PP tests may produce biased and misleading results owing to the possibility of structural breaks in the time series data [65]. Additionally, for electricity market variables, i.e., price or demand time series, the unit-root test results are weaker due to the presence of periodicities and exceptionally heavy-tailed data, which affect the size and power of standard unit-root tests [66,67]. In our case, we did not apply these tests because, once the consumption series is filtered for the deterministic component, the stochastic component is almost always stationary.

Modeling the Stochastic Component

After the estimation of the deterministic component using parametric and nonparametric techniques, the remaining part (the residuals), considered as the stochastic component, is obtained through Equation (6). The residual series obtained from both models are plotted in Figure 3. To model and forecast the stochastic component, this work considers four different univariate time series models: the parametric AutoRegressive (AR), nonparametric AutoRegressive (NPAR), Smooth Transition AutoRegressive (STAR), and AutoRegressive Moving Average (ARMA) models. Details about these models are given in the following.

AutoRegressive Model

The Autoregressive (AR) model is widely used in the time series literature. The AR model describes the response variable as linearly dependent on its own past (lagged) values and on a stochastic term. The general form of an AR(n) model is given by

$S_m = \mu + \sum_{i=1}^{n} \alpha_i S_{m-i} + \varepsilon_m$,

where $\mu$ indicates the intercept, $\alpha_i$ (i = 1, 2, . . . , n) are the parameters of the AR(n) model, and $\varepsilon_m$ is a white noise process with mean zero and variance $\sigma^2$. After plotting the ACF and PACF of the series, we concluded that lags 1, 2, and 12 are significant and, hence, are included in the model. In this work, the parameters are estimated using the Maximum Likelihood Estimation (MLE) method.

Nonparametric AutoRegressive Model

The linear AutoRegressive model can be generalized by removing the linearity restriction. We denote the resulting model the Nonparametric AutoRegressive (NPAR) model. In this case, the relation between the present and past values does not have a particular parametric form and thus accounts for any potential type of nonlinearity in the data. Mathematically, NPAR is given by

$S_m = \sum_{i} h_i(S_{m-i}) + \varepsilon_m$,

where the $h_i$ are smoothing functions describing the relation between each past value and $S_m$.
In this work, the functions $h_i$ are cubic regression spline functions. As in the parametric case, we used lags 1, 2, and 12 to estimate NPAR. To overcome the curse of dimensionality, which refers to the exponential decline of data points within a smoothing window as the dimension of the regressors increases, an additive form is generally considered that assumes no interactions among the explanatory variables [68].

Smooth Transition AutoRegressive (STAR) Model

The Smooth Transition AutoRegressive (STAR) model is an extension of the AR model that allows smooth transitions in regime-switching models. To control the regime-switching process, the STAR model makes use of logistic and exponential functions instead of the indicator function used in threshold AR models. Mathematically, the STAR model is defined as

$S_m = \phi' Z_m + \theta' Z_m R_m(\omega_m, \eta, \mu) + \varepsilon_m$,

where $Z_m = (1, S_{m-1}, S_{m-2}, \cdots, S_{m-n})'$, $R_m(\omega_m, \eta, \mu)$ is the transition function bounded between 0 and 1, and $\omega_m$ is a transition variable. The parameter $\eta$ represents the speed and smoothness of the transition, while $\mu$ can be interpreted as the threshold between the two regimes. Finally, $\varepsilon_m$ is a white noise process that is assumed to be normally distributed with mean zero and variance $\sigma^2$. This model is defined as a two-regime switching model, in which the transition function $R_m$ allows the dynamics of the model to switch between regimes smoothly. A common specification of the generalized version of the smooth transition function is the logistic form

$R_m(\omega_m, \eta, \mu) = \left(1 + \exp\left(-\frac{\eta}{\sigma_{\omega_m}}(\omega_m - \mu)\right)\right)^{-1}$,

where $\sigma_{\omega_m}$ is the standard deviation of the transition variable. We implement the iterative model-building strategy described in [69] to identify and estimate the STAR model.

AutoRegressive Moving Average Model

The Autoregressive Moving Average (ARMA) model not only includes the past lagged values of the variable of interest but also considers past lags of the error term. In our case, the response variable $S_m$ is modeled linearly using its past values as well as past white noise terms, i.e.,

$S_m = \mu + \sum_{i=1}^{n} \alpha_i S_{m-i} + \sum_{j=1}^{s} \phi_j \varepsilon_{m-j} + \varepsilon_m$,

where $\mu$ indicates the intercept; $\alpha_i$ (i = 1, 2, . . . , n) and $\phi_j$ (j = 1, 2, . . . , s) are the parameters of the AR and MA parts, respectively; and $\varepsilon_m$ is a Gaussian white noise series with mean zero and variance $\sigma^2$. Inspection of the ACF and PACF suggests that lags 1, 2, and 12 are significant for the AR part, while the first two lags are significant for the MA part. Thus, a constrained ARMA(12,2) model with $\alpha_3 = \cdots = \alpha_{11} = 0$ is fitted to $S_m$ using the MLE method.

Once both components, deterministic and stochastic, are estimated, the final one month ahead forecast is obtained as

$\hat{C}_m = \hat{D}_m + \hat{S}_m$.

Out-of-Sample Forecasting

In this study, we used monthly aggregated electricity consumption data for Pakistan. The dataset was obtained from the Pakistan Bureau of Statistics (PBS). The monthly series ranges from January 1990 onward and is measured in kilowatt hours (kWh). The whole dataset contains 288 data points: the first 240 (January 1990 to December 2009) were used for model estimation, and the remaining 48 were used for one month ahead out-of-sample forecasts. The monthly electricity consumption series is represented by $C_m$, where m = 1, 2, . . . , 288. Forecasting accuracy was evaluated with three standard accuracy measures, Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Root Mean Square Error (RMSE), calculated for each model as follows:

$\mathrm{MAE} = \frac{1}{M} \sum_{m=1}^{M} \left| C_m - \hat{C}_m \right|$,

$\mathrm{MAPE} = \frac{100}{M} \sum_{m=1}^{M} \left| \frac{C_m - \hat{C}_m}{C_m} \right|$,

$\mathrm{RMSE} = \sqrt{\frac{1}{M} \sum_{m=1}^{M} \left( C_m - \hat{C}_m \right)^2}$,

where $C_m$ denotes the observed series and $\hat{C}_m$ represents the forecasted consumption series for the mth month.
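The constrained ARMA fit, the forecast combination, and the accuracy measures can be sketched in Python as follows, continuing from `D_hat`, `S_hat`, and `consumption` above. For simplicity, the deterministic fit and the ARMA parameters are estimated once rather than re-estimated each month, and the lag lists simply mirror those the authors report as significant; this is an illustration under those assumptions, not the authors' code.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

n_train = 240                                    # estimation window, as in the paper
S_train = S_hat[:n_train]

# Constrained ARMA: AR lags 1, 2, 12 and MA lags 1, 2 (alpha_3..alpha_11 = 0),
# estimated by maximum likelihood; trend="n" since the residuals are near zero-mean
model = SARIMAX(S_train, order=([1, 2, 12], 0, [1, 2]), trend="n")
res = model.fit(disp=False)

# One-step-ahead predictions over the test window, with parameters held fixed
res_full = res.apply(S_hat)
S_fc = res_full.predict(start=n_train)

# Final forecast: deterministic plus stochastic parts
C_fc = D_hat[n_train:] + S_fc
C_obs = consumption[n_train:]

# The three accuracy measures defined above
err = C_obs - C_fc
mae = np.mean(np.abs(err))
mape = 100 * np.mean(np.abs(err / C_obs))
rmse = np.sqrt(np.mean(err**2))
print(f"MAE={mae:.2f}  MAPE={mape:.2f}%  RMSE={rmse:.2f}")
```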
Combining the models for the two components, deterministic and stochastic, led us to compare eight different combinations, namely P-AR, P-NPAR, P-STAR, P-ARMA, NP-AR, NP-NPAR, NP-STAR, and NP-ARMA, where the first letter(s) represent the deterministic part, with 'P' standing for parametric and 'NP' for nonparametric estimation, and the second part denotes the stochastic model. To assess the best of these combinations, we calculated the different accuracy measures and tabulated the results in Table 2. The table makes clear that both ARMA-based models outperform all competitors. The MAPE values for P-ARMA and NP-ARMA are 4.84 and 4.83, respectively. The second best model is NP-NPAR, for which the MAPE value is 5.18. The MAPE values for all combinations are also plotted in Figure 4, where the superiority of the models involving the ARMA model can be clearly seen. The season-specific errors are listed in Table 3. We can observe that the season-specific MAPEs are comparatively low in autumn and high in the remaining three seasons. Except in spring, the season-specific MAPE values for P-ARMA and NP-ARMA are considerably lower than those of the other models. The season-specific MAPE values are also plotted in Figure 5. The ACF and PACF plots for the final error $\varepsilon_m$ are shown in Figures 6 and 7. In these figures, we observe that no meaningful autocorrelation structure remains in the series. Overall, the residuals from all models have been whitened and can be considered satisfactory.

To verify the superiority of the results listed in Table 2, we performed the Diebold and Mariano (DM) test for each pair of models [70]. The results (p-values) of the DM test are listed in Table 4. Each entry of the table is the p-value of a hypothesis test where the null hypothesis assumes no difference in the accuracy of the predictors in the column and row, against the alternative hypothesis that the predictor in the column is more accurate than the predictor in the row. In this table, we can see that, among all possible combination models, the P-ARMA and NP-ARMA models are statistically better than the rest at the 5% level of significance, except when compared with NP-NPAR.

Table 4. p-values for the Diebold and Mariano test of equal forecast accuracy against the alternative hypothesis that the model in the column is more accurate than the model in the row (using a squared loss function).

Forecasted values from the four best combination models are plotted in Figure 8. In this plot, we can see that the forecasted values follow the observed values of electricity consumption very well. Finally, it is worth mentioning that our best MAPE values are comparatively better than those cited in other works. For example, using four different models for Pakistan electricity consumption forecasting, Yasmeen and Sharif [32] reported a minimum MAPE value of 5.99, a value 24% greater than our minimum MAPE value of 4.83. For the total consumption forecast of Pakistan, Hussain et al. [71] reported an RMSE value of 1796.9, considerably higher than our value of 460.80.

Conclusions

The main aim of this study was to forecast one month ahead electricity consumption for Pakistan using a component estimation technique. To this end, the electricity consumption time series was divided into two major components, i.e., deterministic and stochastic. The deterministic component consists of the trend (long-run) and yearly periodicity and was modeled by both parametric and nonparametric approaches.
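A small self-contained sketch of the Diebold-Mariano comparison under squared-error loss, for one-step-ahead forecasts (so the long-run variance of the loss differential is approximated by its sample variance); `e_col` and `e_row` are hypothetical arrays of out-of-sample forecast errors from two competing models, not quantities taken from the paper.

```python
import numpy as np
from scipy import stats

def dm_test(e_col, e_row):
    """Diebold-Mariano test with squared-error loss, one-step-ahead horizon.

    Returns the DM statistic and the one-sided p-value for the alternative
    that the 'column' model is more accurate than the 'row' model, matching
    the layout of Table 4.
    """
    d = np.asarray(e_row) ** 2 - np.asarray(e_col) ** 2   # loss differential
    dm = d.mean() / np.sqrt(d.var(ddof=1) / len(d))
    p_value = 1 - stats.norm.cdf(dm)                      # asymptotically N(0, 1)
    return dm, p_value

# Example (hypothetical error series over the 48 test months):
# dm, p = dm_test(e_parma, e_npnpar)
```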
For the stochastic component, we used four univariate time series models: the AutoRegressive (AR), Nonparametric AutoRegressive (NPAR), Smooth Transition AutoRegressive (STAR), and AutoRegressive Moving Average (ARMA) models. The estimation of both the deterministic and stochastic components led us to compare eight different combinations of these models. To check the forecasting performance of all models, consumption data from Pakistan were used, and one month ahead post-sample forecasts were obtained for four years. The predictive accuracy of the models was evaluated through the MAE, MAPE, and RMSE. To evaluate the significance of the differences in the forecasting performance of the models, the Diebold and Mariano test was performed. The results show that the component-based estimation approach is highly effective for modeling and forecasting electricity consumption. Among all possible models, P-ARMA and NP-ARMA produced the best results, while the NP-NPAR model remained competitive with the best. Finally, our forecasting results are comparatively better than those cited in other works. In the future, this study could be extended by exploring the effects on out-of-sample forecasting of including other exogenous variables in the models.

Conflicts of Interest: The authors declare no conflict of interest.
A study of critically ill children presenting with seizures regardless of seizure duration admitted in the PICU of a tertiary hospital in India

Our aim was to study the clinical profile, immediate outcome, and risk factors associated with poor outcome in critically ill children presenting with seizures requiring PICU admission. As seizures lasting 10 min or more can potentially cause brain damage, we included all children regardless of seizure duration. The records of 157 children aged 1 month to 16 years admitted to the PICU of a tertiary hospital in India with seizures as the presenting symptom during a three-year period were studied retrospectively. The median age of patients was 4 years. 34 (21%) had pre-existing epilepsy and 33 (21%) had previous developmental delay/neuro-deficit. Seizure duration was > 30 min in 75 (47.7%), and 56 (35.6%) required the use of more than 2 antiseizure drugs. 101 (64%) had acute symptomatic seizures, 28 (17%) remote symptomatic, and 27 (17.1%) had an unknown cause. New onset neurological deficit was seen in 18 (15.6%) and 14 (8.9%) died. Young age, a high PEWS score at presentation, prolonged/recurrent seizures, CNS infection, need for multiple antiseizure drugs, and ventilation/pressor use were risk factors for poor outcome. Neurological outcome and survival of children in our study were good. Further all-inclusive studies, irrespective of seizure duration, are needed to obtain a complete picture of critically ill children presenting with seizures.

Introduction

The burden of acute neurologic affliction in the pediatric population is high and contributes 16.2% of total admissions to pediatric intensive care units (PICU) globally [1]. Status epilepticus (SE) is the commonest neuro-emergency in children and, as per epidemiological studies in western countries, its estimated incidence in children (18-20 per 100,000 children per year) is much greater than the adult incidence of around 4-6 per 100,000 per year [2][3][4][5]. Despite advances in management, SE in children is associated with significant mortality as well as permanent morbidity in the form of epilepsy or neurological disability in developing countries like India. The common causes of SE in children vary from region to region, as evidenced by the differences in the results of studies conducted in developing and developed nations. Also, there are substantial differences between older and younger children in terms of etiology as well as outcome. For planning of management strategies and appropriate resource allocation, there is thus a need for regional demographic statistics. Insight into risk factors associated with poor outcome is imperative for counseling of parents while the child is under intensive care. Population- and hospital-based Indian studies required for this purpose are, however, limited. Irrespective of the duration of seizure, children admitted to the PICU following seizures reflect the severe end of the spectrum of disease. However, ICU-based studies so far have limited themselves to including children with seizures lasting for more than 30 min, which until recently was the definition of status epilepticus. Literature on critical patients who present with seizures but do not fit the definition of status is lacking owing to their exclusion. It is becoming increasingly recognized that a seizure duration of 10 min or more can potentially lead to brain damage [6].
Studies show that the longer a convulsion lasts before intervention, the more difficult it is to control and the greater the risk of permanent neurological damage [7]. In consideration of this, the International League Against Epilepsy (ILAE), in defining status epilepticus, has proposed 5 min as the duration beyond which a seizure should be regarded as "continuous seizure activity" [8]. Thus, our aim in this study was to retrospectively evaluate all children who presented with seizures at our hospital and required admission to the PICU, in terms of their clinical profile and immediate outcome, and to determine the risk factors for poor outcome.

Study population

The study was conducted at MGM Hospital and Medical College, Maharashtra, India, which is an academic institution providing tertiary care under both the public and private sectors. It is situated at the junction of the Mumbai-Pune Expressway and four other major roadways connecting rural areas of Raigad District to the city. It caters to the urban population in its vicinity and, owing to its location, is also the nearest tertiary referral hospital for the primary health centers and rural sub-district hospitals in Raigad District. Patients with drug-resistant seizures are referred to our hospital for intensive care and advanced diagnostic workup, thus providing us with a combination of urban as well as rural patients.

Treatment protocol

The protocol for management of active convulsion followed at our hospital is based on consensus guidelines provided by the Association of Child Neurology, Indian Council of Medical Research [9], and includes intravenous midazolam boluses as the first-line antiseizure drug, followed by loading of a second-line drug. Fosphenytoin is the most commonly used second-line drug in our hospital; ongoing therapy with a broad-spectrum antiseizure drug belonging to a different class is one of the exceptions to its use. A seizure that persists despite the use of two doses of the initial benzodiazepine and a second antiseizure drug is treated in the intensive care unit. Drug-resistant seizures are treated with loading doses of levetiracetam, and convulsions continuing despite a third anticonvulsant are started on a continuous midazolam drip. Patients who achieve seizure control with the use of 2 antiseizure drugs or fewer but are nonetheless treated in the PICU include those falling in the high-risk category according to the PEWS score and those with new onset focal neurologic deficit or traumatic brain injury.

Study design and inclusion criteria

We conducted a retrospective study of the medical records of children in the age group 1 month to 16 years who presented with seizures at our institute and required admission to the intensive care unit. Approval of the Institutional Ethics Committee was obtained. In contrast to other studies, duration of seizure activity was not an inclusion criterion in our study. Children who were admitted to the PICU for non-neurological emergencies and later experienced seizures during their course in the hospital were not included. Records of 157 children admitted over a three-year period (March 2015-February 2018) were analyzed. Details regarding clinical presentation, past medical history, Pediatric Early Warning Signs (PEWS) score at presentation, lab/imaging reports, treatment received, etiological diagnosis, and immediate neurological outcome were noted.

Definitions

Seizures were classified based on the ILAE report on the definition and classification of status epilepticus, 2015 [8].
Poor outcome was defined as death during the PICU course or the presence of a persistent new onset neurological deficit at discharge from hospital. Neurological deficit was defined as a score of less than 15 on the pediatric Glasgow Coma Scale (GCS) or the presence of focal deficits on neurological examination. Development of cognitive deficit during the hospital course was not assessed.

Statistical analysis

Descriptive statistics were summarized as percentages. Categorical variables were tested for association using Pearson's chi-square test and Fisher's exact test wherever appropriate. A p level of < 0.05 was considered statistically significant.

Results

Out of 1421 children admitted to the PICU at our hospital over a three-year period, 157 (11%) were admitted after presentation to the emergency room with seizures.

Patient characteristics

Patient characteristics are summarized in Table 1. The majority of children in our study were under 5 years old, with a median age of 4 years. The numbers of males and females were comparable. Most children (79%) had no prior history of seizures and were previously developmentally and neurologically normal. At presentation in the emergency room, 92.9% of children were identified as being at high risk for rapid clinical deterioration based on calculation of the PEWS score.

Seizure characteristics

As shown in Table 1, generalized tonic-clonic seizures were more frequent than focal seizures. Half of the children (52.2%) had seizures lasting for less than 30 min but still required PICU care. 50 (31.8%) patients had recurrent seizures during their stay in the PICU despite initial control of seizures with antiseizure drugs.

Treatment received

Intravenous midazolam followed by fosphenytoin were the most commonly used drugs. 56 (35.6%) patients required the use of more than 2 types of antiseizure drug, and 27 (17.1%) required 4 or more types of antiseizure drug for control of seizures. Midazolam infusion was given in 26 patients, and 1 patient received thiopentone. 31 (19.7%) patients required ventilatory support and 24 (15.2%) required pressor support.

Etiology

CNS infections (45) were the commonest cause of acute symptomatic seizures (viral encephalitis (26), tubercular meningitis (10), pyogenic meningitis (4), cerebral malaria (2), rickettsial meningoencephalitis (1), neurocysticercosis (1), and toxoplasmosis (1)). Other causes of acute symptomatic seizures included traumatic brain injury (13); metabolic disorders such as organic acidemias, aminoacidopathies, mitochondrial diseases, lipid storage disorders, and other inborn errors of metabolism (12); prolonged febrile seizures (7); hypocalcemic seizures (6); hypertensive encephalopathy (4); hypoxic seizures (4); epileptic encephalopathy (3); VP shunt block (3); hypoglycemic seizures (2); and stroke (2). Causes of remote symptomatic seizures were perinatal insult (14), structural abnormality (9), and CNS infection in the past (5). 5 children presented with new onset refractory status epilepticus (NORSE), of whom 3 had a history of fever prior to and during presentation. However, these patients were classified under unknown etiology, as further evaluation of these cases remained incomplete and autoantibody testing was not done. The etiology of seizures across age groups is illustrated in Fig. 2. No significant correlation was seen between the age of the child and the etiology of seizure (p = 0.63).

Characteristics of children with pre-existing epilepsy

Out of 34 (21%) patients with pre-existing epilepsy, 23 had generalized and 11 had focal epilepsy. The majority of these patients (25) had symptomatic epilepsy.
Etiologies were perinatal hypoxia/neonatal hypoglycemia (9), CNS infection including post-tubercular communicating hydrocephalus (10), cortical dysplasia (3), and metabolic encephalopathy (3). 9 patients had an unknown etiology (generalized epilepsy (7), focal epilepsy (2)). 3 had epileptic encephalopathy, and 4 patients with generalized epilepsy could not be classified owing to unavailability of an EEG report. 3 patients had Lennox-Gastaut syndrome and 5 had temporal lobe epilepsy. Neuroimaging was normal in 7, showed an abnormality in 21, and was not performed in 6 patients. EEG was normal in 9, showed an abnormality in 14, and was not done in 11 patients. 18 patients were on monotherapy prior to presentation. 9 patients were receiving two antiseizure drugs, 5 were receiving three to four antiseizure drugs, and 3 patients were not on any antiseizure drug. Non-compliance with therapy was the reason for the occurrence of seizures in 2 patients, while 12 had an intercurrent illness. Seizures occurred during antiseizure drug tapering in 5 patients, and 1 patient had an acute ventriculoperitoneal shunt block.

Mortality

14 of the 157 patients (8.9%) admitted with seizures died during their stay in the PICU, as compared to an average all-cause mortality rate of 12.1% in our PICU. Of the patients who died, 6 had remote symptomatic epilepsy with breakthrough seizures, 4 had viral encephalitis, 2 had metabolic encephalopathy, 1 had hypoxic seizures, and the cause was not known in 1 patient. The cause of death in these patients was septicemia (4), respiratory failure following pneumonia (3), intractable seizures (3), brain herniation (2), and severe acidosis (2). Infection (sepsis/pneumonia) was the cause of death in the majority of patients with remote symptomatic epilepsy. Factors significantly associated with mortality are shown in Table 2.

Risk factors for poor outcome

Factors showing a significant association with poor outcome, that is, either death or development of a new neurological deficit, are shown in Table 3. The presence of CNS infection as the underlying etiology of seizure was a significant risk factor for poor outcome. Among individual etiologies, tubercular meningitis and metabolic encephalopathy were also significant predictors of poor outcome (p < 0.001). Other important factors predicting poor outcome were age less than 2 years, seizure duration of more than 30 min, presence of recurrent seizures, a PEWS score falling in the high-risk category, need for more than 2 antiseizure drugs, and need for ventilatory/pressor support.

Characteristics of children with recurrent seizures

7 patients had a delay in presentation to the hospital and suffered prolonged as well as recurrent seizures without regaining consciousness for more than 24 h. Of these, 3 patients had pre-existing epilepsy with breakthrough seizures, 3 had CNS infection, and 1 had a metabolic disorder. All 7 patients had drug-resistant seizures, with 4 needing midazolam infusion to control seizures. Ventilatory support was required in 4 and pressor support in 2. 5/7 had a poor outcome (death in 3 and new onset neurodeficit in 2). 50 children had recurrent seizures after initial achievement of seizure control in the hospital. Since recurrent seizures were identified as a risk factor for poor outcome, we further studied the characteristics of these children. 27 children less than 2 years old, 14 between 2 and 5 years, and 9 between 6 and 16 years had recurrent seizures. Thus, their occurrence was significantly more frequent in younger children (p < 0.01).
35/50 children had their first episode of seizure at presentation, while 15/50 had pre-existing epilepsy. 30/50 had at least one seizure lasting for more than 30 min. 39 patients required the use of more than 2 antiseizure drugs, 17 required ventilatory support, and 12 required pressor support. 40/50 patients had an acute symptomatic etiology.

Discussion

To the best of our knowledge, this is a first-of-its-kind study evaluating critically ill children who presented with seizures and required management in an intensive care unit, irrespective of the duration of the seizure. Owing to the lack of studies with a similar design, we compared our results to studies of children with status epilepticus (continuous seizure activity lasting for 30 min or longer, or intermittent seizure activity lasting for more than 30 min without regaining consciousness) admitted to the PICU. Half of the children (52.2%) in our study had seizures lasting for more than 5 min but less than 30 min. This throws light on the large number of critically ill children who may have been left out of the definition of SE in previous studies. In our study, the median age was 4 years. Studies conducted in developing as well as developed countries have shown similar results, with younger age at presentation [10,11,12]. This may be attributable to the low seizure threshold in young children and their vulnerability to acquired disorders involving the CNS. A study conducted by Shinnar et al. [10] found a strong effect of age on the cause of status epilepticus, with febrile and other acute symptomatic etiologies more common in children less than 2 years of age and unknown and remote symptomatic etiologies more common in older children. However, this correlation was not seen in our study, and there was no significant difference in the occurrence of acute or remote symptomatic seizures across age groups (Fig. 2, p = 0.636). Our findings show that age less than 2 years is a significant risk factor for poor immediate neurological outcome (Table 2, p < 0.001). Similar findings were seen in a study by Sadarangani et al. [13]. This highlights the importance of managing these patients promptly and appropriately in order to ensure neurologically intact survival. The Pediatric Early Warning Signs (PEWS) score of all patients presenting to the ER at our hospital is calculated to identify patients at risk of rapid clinical deterioration and in need of a higher level of care. It is based on objective assessment parameters to determine the overall status of the patient and looks at three categories: behavior (neurological), cardiovascular, and respiratory, with scores ranging from 0 to 3 in each category and a maximum total score of 9. A PEWS score of 3 in any one category, or a total score of 5 or more, indicates very high risk [14] (a code sketch of this rule follows below). In our study, a significant association was seen between the PEWS score and outcome. 21% of patients in our study had pre-existing epilepsy, as compared to 69.7% in a 5-year retrospective PICU study conducted in the USA [15], 36% in a similar study conducted in the UK [11], 46.6% in a study conducted in Delhi, India [12], and 25.7% in a study in Bihar, India [16]. It is noteworthy that almost 80% of our patients presented with their first episode of seizure, given the differences in underlying cause and management approach that this implies. Generalized tonic-clonic seizures were the most common type seen in our study, and the type of seizure had no association with underlying etiology or outcome.
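As a minimal sketch, the risk-flagging rule described above can be written as follows; only the aggregation and thresholds stated in the text are encoded, the three sub-scores are assumed to come from the clinical assessment itself, and the "lower risk" label is a placeholder rather than part of the published score.

```python
def pews_risk(behavior: int, cardiovascular: int, respiratory: int) -> str:
    """Flag very high risk per the rule stated in the text.

    Each category sub-score ranges from 0 to 3 (maximum total 9); a score
    of 3 in any one category, or a total of 5 or more, is very high risk.
    """
    scores = (behavior, cardiovascular, respiratory)
    if any(not 0 <= s <= 3 for s in scores):
        raise ValueError("each category sub-score must be between 0 and 3")
    if max(scores) == 3 or sum(scores) >= 5:
        return "very high risk"
    return "lower risk"
```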
Patients with seizures lasting less than 30 min had low morbidity and good immediate neurological outcome despite being critically ill. Further studies to determine long-term neurological morbidity in these patients are, however, needed in the Indian setting. 38 patients had seizures lasting 60 min to 24 h, and 7 patients for more than 24 h. This is because patients from remote rural areas are referred to our hospital for management of SE and, at times, poor transportation facilities lead to a delay in initiation of treatment. Seizure characteristics such as duration, number of drugs needed to achieve control, and presence of recurrent seizures were associated with outcome, as seen in multiple studies [12,16,17]. 64% of children had acute symptomatic seizures, and CNS infections constituted the majority of these. The cause could not be identified in 17.1% of children, including those in whom the investigations necessary for diagnosis could not be performed. There are differences in underlying etiology between developed and developing countries owing to differences in healthcare facilities. In a study in the UK by Hussain et al. [11], 34% had prolonged febrile seizures, 28% had remote symptomatic seizures, 11% had acute exacerbation of a pre-existing idiopathic epilepsy, and only 18% had acute symptomatic seizures. A systematic review [18] reported that 1% to 12% of children from countries in the developed world presenting with SE have an infectious cause, as compared to 28.6% seen in our study. An 8-year review of PICU admissions in a hospital in South Africa [17] showed an infective cause in 43% of cases, whereas an Indian study conducted in Bihar [16] showed 38.5% of cases. Thus, infections are still a major cause of pediatric SE in developing countries like India, and the scope for preventive strategies to reduce this burden is large. In our study, 30% of the CNS infections were caused by vaccine-preventable diseases. A systematic review of the outcome of convulsive status epilepticus in children showed that most studies report neurological sequelae in less than 15% and that cause is the main determinant of outcome [19]. The poorest outcome is reported in acute symptomatic status patients, with neurological dysfunction in more than 20% of cases. Our results are consistent with this finding, as all children who developed a neurodeficit in our study had acute symptomatic seizures, with CNS infection being the underlying etiology in 70% and metabolic encephalopathy in the remaining 30%. A systematic review of 63 studies conducted worldwide showed that mortality among children admitted to the PICU with status epilepticus is 5-8% [19]. Indian studies have reported higher rates of 16.7% from Kashmir [20], 30% from Delhi [12], and 31.4% from Bihar [16]. This contrasts with our study, which had a mortality rate of 8.9%. Delay in initiation of treatment, differences in access to healthcare facilities between urban and rural areas of India, and variable management protocols across centers may be the reason for the vast differences in mortality rates seen in Indian studies. We found a number of risk factors that had a significant association with poor outcome. Although a causative effect cannot be established from our study, the recognition of factors predicting poor outcome will help in early risk stratification for aggressive management. Mortality associated with pediatric status epilepticus in India is declining. There is now a need to shift focus to neurologically intact survival.
An alternative approach to management that focuses on neuroprotective measures based on risk stratification for adverse neurological outcome could be the solution in resource-poor settings.

Limitations

The evaluation of neurological outcome in our study was retrospective and was based on documentation of the GCS score and deficits detected on physical examination. The presence of cognitive or behavioral deficits may have been overlooked in the absence of formal methods of assessment.

Conclusion

A large number of critically ill children may have been left out of previous studies owing to the use of a 30-min seizure duration as the criterion for SE. Further all-inclusive studies such as ours are needed to obtain a complete picture. In our study, neurological outcome and survival of children following PICU admission with seizures were good. Risk factors significantly associated with poor outcome were age less than 2 years, a high-risk PEWS score at presentation, prolonged seizures, recurrent seizures, CNS infection as the etiology, need for multiple antiseizure drugs, and need for ventilatory/pressor support. CNS infections were a major underlying cause in children admitted to the PICU with seizures in our study, and this emphasizes the scope for preventive strategies to reduce disease burden in developing countries like India.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Ethical statement

Approval of the institutional ethics committee was obtained prior to conducting the study. The study was conducted in accordance with their recommendations.

Declaration of competing interest

The authors declare no potential conflicts of interest.
Indicators of soil fertility and opportunities for precontact agriculture in Kona, Hawai'i

The distribution, mode, and intensity of agriculture both influence and are influenced by the natural environment. Soil fertility indicators that correlate with the intensification of dryland agriculture in pre-contact Hawai'i have been mapped across the Hawaiian archipelago. We investigated these soil fertility indicators and agricultural development in the unique environment of Kona, Hawai'i, the largest, culturally most significant, and geologically youngest of the dryland field systems in Hawai'i. Agriculture was intensified systematically on substrates ≥4,000 years old with appropriate climate and fertility in Kona, in keeping with archipelago-wide analyses. In comparison with other dryland agricultural systems in Hawai'i, we found that the soil fertility indicators used to predict pre-European agricultural intensification are shifted towards lower rainfall on the younger geological substrates of Kona. For example, base saturation reached low levels (<30%) at ~1200 mm/yr rainfall on 1,200 year old substrate and at ~1400 mm/yr on 7,500 year old substrate in Kona, versus ~1800 mm/yr on 150,000 year old substrate on Kohala Volcano. We suggest that this difference reflects a kinetic, rather than an irreversible, limitation to soil fertility in Kona, and we discuss how this difference could have influenced opportunities for agricultural intensification and the distribution of agroecological zones.

INTRODUCTION

Archeologists divide pre-European Hawaiian agriculture into wetland, flood-irrigated systems that were restricted to stream valleys and coastal plains of each island, and dryland, rainfed systems that covered vast areas of relatively fertile soils on the younger islands (Ladefoged et al. 2009). Each agricultural system entrained its own pattern of social organization (Kirch 1994), and development of the dryland systems in particular led to consolidation of social control by the ruling elites (Kirch 2011). Some intensive dryland systems, such as the well-studied leeward Kohala agricultural area, supported relatively homogeneous agricultural infrastructure and intensive agricultural plantings (Ladefoged et al. 2003, Ladefoged and Graves 2008), whereas others, such as the Kona system, developed into a highly diverse patchwork characterized by a matrix of agricultural practices overlaid onto a spectrum of lava flows of varying ages. Research in the Kohala field system has demonstrated that the boundaries of systematic agricultural intensification consistently aligned with several soil properties (particularly percent base saturation, exchangeable calcium, and total and extractable phosphorus), which conversely have been applied as indicators of a soil fertility threshold below which the systematic intensification of agriculture was not pursued. Ladefoged et al. (2009) used this threshold to evaluate the potential distribution of intensive rainfed agricultural systems across the Hawaiian Archipelago and summarized their results in a geospatial model depicting the potential extent of intensified agricultural systems in Hawai'i. Output of the model correlated spatially with well-documented sites and led to the identification of intensively cultivated areas not previously mapped. The Ladefoged et al.
(2009) model was designed to provide an archipelago-wide perspective on the controls of agricultural intensification rather than a guide to place-based intensification features (e.g., Vitousek et al. 2010). Place-specific controls of soil fertility may alter the predicted extent of agricultural intensification. The Kona district of the Island of Hawai'i offers an opportunity to evaluate the importance of place-specific controls in an environment with high spatial variability in soil properties, extremely young substrate ages, unique agricultural practices and agroecology, and a large spatial extent of dryland agriculture (Kelly 1983, Allen 2001, 2004). Further, Kona became the cultural and political center of the Island of Hawai'i late in the precontact period (Kamakau 1961). The farming landscape of Kona thus offers an opportunity to link soils, societies, and agricultural development in a culturally important and highly diverse landscape. In this paper we develop site-specific characterizations of soil fertility and summarize the archeological evidence for prehistoric agricultural development in the Kona district. We describe the properties of Kona soils from 0.5-7.5 ky, and discuss them in the context of continuous intensive versus patchy systems of agriculture, which we refer to as "systematic" and "informal" agriculture, respectively. For the Kona system, the Ladefoged et al. (2009) model predicted patchy intensification of systematic agriculture due to the exclusion of lava flows <4 ky, based on the expectation of inadequate soil thickness. For the purposes of this paper, "Kona soils" refers to lava flows of all ages across the landscape of Kona, "young soils" refers to <4 ky flows, and "old soils" refers to ≥4 ky flows. We utilize this information on soils to ask: (1) do soil fertility indicators in Kona follow the same patterns with climate as they do in the intensified Kohala field system, (2) is 4 ky an appropriate age threshold for systematic or intensive agriculture in pre-European Hawai'i, and (3) what landscape-level patterns of agriculture occur on the different Kona soils?

Climate, geology, and soils

Kona is situated on the western slopes of Mauna Loa and Hualalai volcanoes, with a local weather system controlled by a land-sea breeze cycle that creates a wet environment compared to other leeward areas in the archipelago. Rainfall increases upslope from as little as 600 mm/yr at the coast to as much as ~2000 mm/yr at ~600 m elevation, declining again at higher elevation (Fig. 1A; Giambelluca et al. 2012). Hualalai and Mauna Loa volcanoes also create a patchwork of overlapping lava flows (Fig. 1B) that range from less than 65 to over 10,000 years old (Trusdell et al. 2006). Hualalai lava has an alkalic composition while Mauna Loa lava is tholeiitic in nature. The two volcanic sources differ importantly in chemical composition, with alkalic sources typically containing much lower concentrations of silica and sodium, but higher concentrations of aluminum and especially phosphorus. Soils in Kona are primarily Histosols and Andisols (NRCS 2010). Histosols are recognized on recent flows where organic matter has accumulated but tephra and lava have not sufficiently weathered to produce the short-range-order minerals that typify Andisols. Most of the nutrient-supplying and -holding capacity in Histosols comes from organic matter that, on very young flows, is introduced from nearby vegetated areas and by in situ pioneering plants as the flows become vegetated.
With increasing lava flow age, weathering and secondary mineral formation lead to the development of Andisols, which feature both increased surface area provided by clays and relatively abundant nutrients derived from weathering. Tephra deposits and organic matter settle on top of the lava flows within the voids of the irregularly shaped chunks of lava. Over time this process forms a soil layer consisting mostly of unweathered rocks known as clinkers; tephra and organic matter fill in the spaces between the clinkers, and the clinkers weather in place over time. Because tephra is a significant component of Kona soils, soil ages are younger on average than the underlying flow ages. Moreover, the fine and coarse fractions of soils may originate from different sources (Ziegler et al. 2003). Even under the oldest Kona flows, ~12.5 ky, much of the original lava texture can be observed almost unweathered on the underlying bedrock, illustrating the importance of tephra deposition and organic matter accumulation in the development of Kona soils.

Farming landscape of the Kona field system

Across the farming landscape and soils, Kona was segregated into four major agroecological zones (the kula, kaluulu, apaa, and amau zones) that formed distinct bands as one moved upslope (Kelly 1983, Malo 1951). These zones are thought to have existed irrespective of the age of the underlying flows, but may have extended further along the coastal-inland continuum depending on substrate type. Crops grown in the different zones are summarized in Table 1. The kula is the dry coastal plain, described as the farmable area receiving less than 1000 mm/yr rainfall. The kaluulu was an agroforestry development where understory crops were grown beneath an open canopy primarily of breadfruit; Lincoln and Ladefoged (in press) suggest it was bounded by ~1000-1250 mm/yr of rainfall. The apaa was considered to be the most productive planting zone; it is generalized from historical testimonies as occurring between ~1250-2000 mm/yr, but archeological evidence indicates variability in the transition of the apaa to the amau, the final zone of cultivation. This uppermost zone modified the native forest to grow cultivated crops and to encourage naturally occurring resource plants within the subcanopy. Historical descriptions broadly place this zone between 600-900 m. Intensive, systematic agricultural systems can usefully be separated from informal agricultural developments. The former consisted of relatively homogeneous infrastructure, short or no fallow periods, and little redistribution of resources above the level of an individual field within the system, while the latter varied widely in form and is believed to have depended upon resource concentration. Systematic agriculture in Kona is characterized by a common infrastructure, most importantly long rock walls (kuaiwi) running parallel to the slope that encompassed cleared fields (Escott and Spear 2003). Other common features are cross walls that intersect perpendicularly with the kuaiwi, and rock mounds that occur in the fields between kuaiwi. As defined by the common infrastructure, systematic agriculture is considered to have occurred primarily within the two central agroecological zones, the kaluulu and the apaa, although there are rare examples in areas with favorable conditions where systematic infrastructure extends through the kula to sea level (Escott and Spear 2003).
Informal agricultural techniques are less generalizable, and encompassed a range of techniques, infrastructures, and planting densities. We assume that informal farming techniques strove to overcome agricultural limitations that prevented the establishment of systematic agriculture. These limitations may include soil depth, soil moisture, or soil fertility. The methods of informal agriculture therefore concentrated or exploited naturally concentrated soil resources, increased organic matter in the soils, utilized deep or uniquely rooted plants to increase capture, uptake, and storage of nutrients, or enhanced or exploited naturally elevated soil moisture. Informal agricultural techniques left less noticeable archeological remains compared to systematic agriculture, but are documented in all four farming zones in Kona. Informal farming methods can be placed in three general categories: the use of terraces to capture soil and create swale agriculture, the use of composting to create soils in "pocket" or rock mound farming, and the use of tree crops or naturally forested areas to practice agroforestry.

SAMPLING AND ANALYSES METHODS

Soil samples were collected along four transects (Fig. 1) representing five lava flow ages selected to encompass a range of rainfall and elevation. The lava flows have been dated to approximately 0.5, 1.2, 2.25, 4, and 7.5 ky (Trusdell et al. 2006). Three samples of approximately 100 g each were taken by trowel from small soil pits within localized depressions every 50 m of elevation from just above sea level to 1150 m, and composited. Soil depth was variable, and the lower portion of samples typically contained clinkers (rocks >2 mm) in a matrix of fine soil. Samples were taken to 30 cm, with soil extracted from around clinkers; depth to the clinker layer and total depth to 30 cm were noted during collection. Sampling locations were recorded on GPS, and climatic data were summarized for each point using GIS layers obtained from the Hawai'i State GIS Program (www.state.hi.us/dbedt/gis) and the University of Hawai'i Geography Department (rainfall.geography.hawaii.edu). All soils were passed through a 2 mm sieve and homogenized. The samples were split into three parts. One subsample was analyzed for pH, total carbon and nitrogen, ammonium and nitrate, and resin extractable phosphorus at Stanford University (following procedures in Soil Survey Laboratory Staff 1992). A KCl extraction was performed with 3 g of field-moist soil and 20 ml of 2 M KCl. Samples were shaken for 2 hours, and the filtered extract was analyzed for ammonium and nitrate with a WestCo SmartChem 200 Discrete Analyzer. Air-dried soil was mixed with deionized water in a 1:2 ratio, allowed to stabilize over a 30-minute period, and measured for pH. Total C and N were analyzed on a Carlo Erba NA1500 Elemental Analyzer, using 5 mg of oven-dried soil. Resin extractable phosphate was evaluated after shaking 5 g of air-dried soil in 50 ml of deionized water for 24 hours with mixed cation and anion resin bags; resin bags were eluted in a 0.5 M HCl solution and analyzed for phosphate concentration using a WestCo SmartChem 200 Discrete Analyzer. A second subsample was shipped to ALS (Reno, Nevada) for total element analysis using lithium borate fusion and XRF. The loss or accumulation of elements (relative to parent material) was determined using niobium as an immobile index element, together with average parent elemental concentrations from Mauna Loa and/or Kilauea Volcanoes.
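The mass-balance equation itself is not reproduced in the extracted text; a standard open-system mass-transfer (tau) formulation consistent with the Nb normalization just described, and of the general form used by Porder et al. (2007), is:

```latex
% Mass-transfer coefficient for element j, with Nb as the immobile index element i:
% tau_j < 0 indicates net loss of j relative to parent material; tau_j > 0 net gain
% (e.g., the phosphorus enrichment reported in the results).
\tau_{j} = \frac{C_{j,\mathrm{soil}} \; C_{i,\mathrm{parent}}}
                {C_{j,\mathrm{parent}} \; C_{i,\mathrm{soil}}} - 1
```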
Elemental gains or losses relative to parent material were calculated accordingly, using the equation in Porder et al. (2007). The third subsample was analyzed for exchangeable cations, cation exchange capacity, and base saturation using the ammonium acetate (NH4OAc) method buffered at pH 7; these analyses were carried out at the University of California, Santa Barbara (following procedures in Soil Survey Laboratory Staff 1992). In addition to the data from our samples, we made use of data from samples analyzed in Porder et al. (2007). Those data were converted to match our sampling protocol (homogenized samples, 0-30 cm depth) by applying weighted averages representing the thickness of the soil horizons analyzed within the surface 30 cm. We also made use of data from Vitousek et al. (2004) and Palmer et al. (2009) for comparisons to the Kohala system; sampling depth in these studies matched our own. Analytical procedures for each data set followed the same protocols applied in this study.

RESULTS

Soil properties vary as a function of flow age and of climate, and can interact with other soil properties. We present the results from all samples and analyses in the Supplement. In addition, Tables 2 and 3 summarize soil properties by individual and grouped flow ages using a subset of samples in the rainfall range of 900-1300 mm/yr, where all flows were well represented with a similar mean rainfall (±25 mm/yr); these results are discussed in the subsections below. Table 4 utilizes all results to summarize the influence of rainfall as well as soil age and carbon on soil properties. Results from two regressions (one based on rainfall alone, the other incorporating rainfall, soil age, and carbon) are presented to emphasize the influence of rainfall on different soil properties; relevant aspects of these results are highlighted in the subsections below. In the final subsection we summarize the important soil properties used as indicators of Hawaiian agricultural intensification as they vary with rainfall and age (Figs. 2-4). We present information on element concentrations relative to an assumed parent material in the Supplement.

Carbon and nitrogen

Organic carbon levels were high in Kona soils, significantly higher in young soils (19.2%) than in old soils (10.0%), particularly on the youngest flow (0.5 ky). Carbon levels increased significantly, but not strongly, with increasing rainfall. Total nitrogen (TN) was highly correlated with percent carbon (p < 0.001; r2 = 0.82), and that correlation explained most of the variation in the multivariate regression. Total inorganic nitrogen (TIN) was represented by the sum of nitrate and ammonium. While old soils had less TN (0.99%) than young soils (1.39%), they had more TIN (107 μg/g dw versus 92 μg/g dw) and a significantly higher proportion of TN existing as TIN (1.08% versus 0.66%). The carbon to nitrogen ratio declined significantly, though not strongly, with flow age (p < 0.001; r2 = 0.27).

Cation exchange capacity (CEC), pH, exchangeable cations, and base saturation

CEC was dominated by organic exchange capacity, as indicated by the high correlation between CEC and soil carbon (p < 0.001; r2 = 0.75). Soils were strongly acidic to neutral, with pH values correlated most significantly with flow age (p < 0.001; r2 = 0.37). The sum of exchangeable cations (exchangeable calcium, sodium, magnesium, and potassium) was dominated by calcium, which averaged 69% of the total.
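As a brief aside on the data handling described above: literature samples that were analyzed by soil horizon (e.g., those of Porder et al. 2007) were converted to our homogenized 0-30 cm protocol by thickness weighting. A minimal sketch of that conversion follows; the horizon boundaries and values are hypothetical.

```python
def depth_weighted_mean(horizons, max_depth=30.0):
    """Convert horizon-based measurements to a 0-30 cm equivalent by
    weighting each horizon's value by its thickness within 0-30 cm.
    `horizons` is a list of (top_cm, bottom_cm, value) tuples."""
    weighted_sum = 0.0
    total_thickness = 0.0
    for top, bottom, value in horizons:
        # Clip each horizon to the 0-30 cm sampling window
        thickness = max(0.0, min(bottom, max_depth) - max(top, 0.0))
        weighted_sum += value * thickness
        total_thickness += thickness
    return weighted_sum / total_thickness if total_thickness else float("nan")

# Hypothetical example: base saturation (%) reported for three horizons
print(depth_weighted_mean([(0, 10, 42.0), (10, 25, 35.0), (25, 40, 28.0)]))
```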
Individual cations were dominantly correlated with soil carbon, with the exception of potassium, which correlated most strongly with soil age. Base saturation negatively correlated with rainfall (p < 0.001; r2 = 0.35), with the relationship being much stronger on old flows than on young flows.

Phosphorus

Resin extractable phosphate ranged from 0 to 260 μg/g dry soil. Resin phosphorus concentrations were significantly but not strongly correlated to rainfall on individual flows; resin phosphorus correlated most strongly with soil age (p < 0.001; r2 = 0.42). Elemental phosphorus concentrations ranged from 0.08 to 0.49%; using Nb as an index element, P concentrations in soils were enriched relative to the underlying parent material. The uncertain source of tephra makes the level of enrichment difficult to interpret, but samples showed enrichment relative to all potential tephra sources (Mauna Loa, Hualalai, Kilauea). Phosphorus enrichment correlated most strongly with soil carbon.

Soil depth

Depth of the soil was variable, and in some cases difficult to judge due to the uneven layer of clinkers. We recorded the depth of clinker-free soil for each sample, and the depth to bedrock (i.e., the bottom of the clinker layer) in cases where it was reached. In most cases clinkers could be extracted to the full depth of the sample, but they comprised the vast majority of the volume within the clinker layer. The clinker-free soil was not necessarily devoid of rocks, but was mostly soil. Due to our sampling of local depressions, soil depth overestimates the landscape average depth for each flow age. Soil depth in the clinker-free layer correlated significantly with flow age (p < 0.001; r2 = 0.79). Mean depth to the clinker layer was 9.7 cm and 27.7 cm on the young and old flows, respectively; the latter especially is an underestimate because depths >30 cm were recorded as 30 cm.

Elemental concentrations and retention

We used the complete elemental analyses to calculate apparent element retention and mobility, relative to the index element Nb, across the Kona landscape. This approach is useful for calculating weathering in an environment where both mass and volume are subject to loss (via leaching) or gain (via organic matter addition). However, because Kona soils often developed in tephra derived from volcanoes other than those producing the underlying lava flows, we present these data (together with all of our soil measurements) as a supplemental on-line table (Supplement). We used the 1.2 ky flow as the baseline from which to assess the mass loss. All elements show a general trend of depletion over time (with the exception of phosphorus, as discussed above). Despite uncertainty in the provenance of soil material, our results are sufficiently robust to demonstrate substantial depletion of Ca, Na, and Mg.

Agriculturally important soil properties

As previous investigations showed, a few soil-fertility-related properties appeared particularly important in defining boundaries of intensive cultivation, foremost among them being resin extractable phosphorus, base saturation, and exchangeable calcium. The distributions of these properties in relation to substrate age and rainfall are summarized in Figs. 2-4. These indicators of soil fertility (and Polynesian agricultural intensification) declined with increasing rainfall on all flows; moreover, the decline occurred at lower rainfalls on successively younger lava flows.
Resin extractable phosphorus in Kona soils (Fig. 2A) averaged below 30 μg/g dry soil above 1300 mm/yr of rainfall, with varying levels of enrichment at lower rainfalls. Each successively younger flow increased in resin phosphorus at a lower rainfall than older flows; this trend holds for all flow ages. The decline in soil base saturation with increasing rainfall also occurred at lower rainfall levels on younger flows (Fig. 3). While the flows in Fig. 3 were chosen for illustrative purposes (they are well represented by samples and are graphically distinct), the trend holds for all flows with the exception of the 0.5 ky flow, which had a relatively constant base saturation across the rainfall gradient. The relationship of exchangeable calcium with rainfall is less compelling in Kona than in Kohala (Fig. 4); the oldest Kona flows (7.5 and 12.5 ky) decrease with increasing rainfall, but the young flows displayed no clear pattern of depletion, with high variability among samples across the rainfall gradient. The high variability in exchangeable calcium in the younger soils can be attributed to CEC being dominated by the high levels of organic material rather than by mineral sources.

DISCUSSION

The moderate but varying rainfall (~600-2000 mm/yr) and young substrates (~0.5-12.5 ky) made pre-Contact agriculture in Kona fundamentally different from other well-studied dryland systems in Hawai'i. The best-known Kohala system is composed of relatively old and uniform substrate (areas of ~150 ky and ~400 ky), and is embedded within a very large rainfall gradient (~300-3500 mm/yr); rainfall and the leaching of nutrients drive the limits of agriculture in this system. The agricultural system in Kona more closely mirrors the Kahikinui system on Maui; that system encompasses a broad range of substrates (~3-130 ky), but a small and relatively dry rainfall gradient (~400-900 mm/yr); sufficient moisture for mineral weathering and cultivation appears to drive the limits of agriculture in this system (Hartshorn et al. 2006, Giambelluca et al. 2012). Agriculture in Kona developed with higher rainfall than Kahikinui. As we discuss, portions of the Kona system likely were limited by low rates of weathering and retention of mineral nutrients in young flows. The low release and retention of nutrients in Kona may have encouraged the broad application of resource-concentrating farming methods.

Indicators of soil fertility

In the well-studied Kohala system, the upper (wetter, lower fertility) boundary of intensive pre-contact agriculture is associated with base saturation ~30%, resin extractable phosphorus ~50 μg/g, and exchangeable calcium ~10 cmol(+)/kg (Vitousek et al., in press). In Kona, these soil properties varied with climate in a way similar to Kohala, declining at higher rainfall, particularly on the older soils. However, the annual rainfall at which these values are reached is lower in Kona than on the older Kohala substrates, and within Kona it is lower on younger than on older flows (Figs. 2-4). Over a much longer timescale (150-4,000 ky), these transitions occur at lower rainfall on very old sites (Chorover 2001, Vitousek and Chadwick 2013); together with our results, this suggests that the maximum rainfall that experiences high soil fertility increases and then declines with long-term soil development, rather than declining continuously, and that the peak occurs relatively early in the development of soils on Hawaiian basalts (~10-150 ky).
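To make the indicator thresholds above concrete, a minimal screening sketch is shown below, using the Kohala-derived values quoted in the text (base saturation ~30%, resin P ~50 μg/g, exchangeable Ca ~10 cmol(+)/kg). The function and the sample values are hypothetical.

```python
def meets_kohala_thresholds(base_sat_pct: float,
                            resin_p_ug_g: float,
                            exch_ca_cmol_kg: float) -> bool:
    """Screen a soil sample against the indicator values associated with
    the wetter boundary of intensive pre-contact agriculture in Kohala."""
    return (base_sat_pct >= 30.0
            and resin_p_ug_g >= 50.0
            and exch_ca_cmol_kg >= 10.0)

# Hypothetical samples: a dry-side old flow vs. a wet-side young flow
print(meets_kohala_thresholds(55.0, 120.0, 14.0))  # True
print(meets_kohala_thresholds(22.0, 18.0, 4.0))    # False
```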
The observation that all three soil fertility indicators are shifted towards drier conditions in Kona than in Kohala could be explained in several ways. We suggest that the weathering of the coarse substrate of young lava may be constrained by low surface area, and the low soil fertility in moderately wet Kona sites may reflect the kinetics of supply of elements via weathering and their removal via leaching. Low surface area reduces the reaction of minerals with water, but could increase material transport. Variation in particle size and mineralogy is primarily responsible for heterogeneity in weathering rates within a given climate (Reeves and Rothman 2013). Other potential explanations for this pattern include: (1) Kona may be wetter than reported (or Kohala drier), potentially explained by high fog drip in Kona (Brauman et al. 2010), highly localized variations, or vagaries of mapping; (2) water erosion may be significant on the young lava, leading soils at a given location to reflect uphill, and typically wetter, conditions; (3) higher soil moisture in Kona, driven by higher levels of soil organic matter and lower wind intensity, may make soils at lower rainfall behave as if they were wetter; or (4) retention of mineral nutrients in Kona may be limited by weakly adsorbing soil complexes that result in higher leaching of nutrients at a given rainfall. The first two points seem unlikely to contribute substantially to the pattern, in that soil fertility parameters are typically lower on young than on old soils within Kona, and downhill fluvial transport is likely greater on the fine-grained soils of Kohala than on the coarse lava flows of Kona. The third and fourth points may contribute to the pattern, but not enough to explain the differences between older flows within Kona, where carbon levels do not vary significantly and CEC declines with increasing flow age. There is evidence of intensified, systematic agriculture occurring above the rainfall levels that our analyses suggest should bound its development (for instance, Burtchard 1996). Our soil transects on old flows traversed areas of agricultural infrastructure that had soil fertility indicators below the established thresholds. Agricultural intensification in areas identified as marginal based on previously determined indicators could occur if the distribution of these indicators reflects a kinetic limitation of weathering rather than a boundary condition; there would be a sustained supply of mineral nutrients in low-fertility sites controlled by kinetic processes, differing from those controlled by irreversible depletion of weatherable soil minerals as seen in Kohala. Our data set suggests that total elemental concentrations in these wetter areas remain higher than levels seen in Kohala, and that Kona soils therefore could provide sustained nutrient fluxes. Unfortunately, our data set has relatively few samples in wetter areas of older flows, is uncertain about parent material concentrations in tephra, and lacks detailed information on the volume of non-fine materials, limiting our ability to draw a confident conclusion. Another possibility is that some areas considered to be under systematic agriculture relied on alternative sources of soil fertility, such as inputs from nearby forests, incorporated fallow periods, or the development of agricultural practices that enhanced local soil fertility.
If the latter is the case, systematic agriculture in Kona may have required more management and inputs than did other dryland systems in Hawai'i.

Thresholds of systematic agriculture in Kona

In general, soils in Kona are more fertile the older the underlying lava flow and the lower the rainfall. The old soils in Kona that receive less than ~1400 mm/yr of rain are fertile enough to support the development of systematic agriculture, as defined by the soil fertility indicators observed in Kohala. In contrast, the flows <4 ky typically fall below the fertility thresholds that bound the Kohala field system, or exceed those limitations only at rainfalls below ~1000 mm/yr (i.e., the kula zone). The Ladefoged et al. (2009) model assumes inadequate soil development on the young soils as the basis for the restriction of systematic agriculture. The background levels of soil fertility indicators in Kona soils support the 4 ky threshold for the development of intensive pre-contact agricultural systems. The indicators of fertility in the young soils suggest that systematic agriculture was infeasible even where soil depth was sufficient. The low levels of soil fertility are reinforced by the low quantity of soil on the younger flows. Nevertheless, examples show that a portion of the land within agricultural sites on young flows was made farmable. However, we do not know how densely these microsites occur across the landscape, or the extent of agriculture between favorable microsites. Approximately half the area defined as the Kona Field System consists of young soils, and so even a small percentage of usage is a meaningful contribution to the total yield of the region. Further synthesis of the archeological record could better parse out the types and intensities of resource-concentrating agriculture on the different flow ages in Kona.

The ethno-agricultural landscape within Kona

The matrix of agricultural opportunities in Kona relative to climate and soil fertility suggests linkages between Hawaiian farmers and their environment in the development of the agricultural landscape. The four general zones of cultivation (moving upslope: the kula, kaluulu, apaa, and amau) can be viewed in terms of the variable fertility between zones (variation with rainfall) and within zones (variation with flow age). The kula zone, which was too dry to farm in Kohala and Kahikinui (<750 mm/yr), has adequate rainfall in Kona (in an average year) to crop sweet potato (Kagawa and Vitousek 2012). Moreover, the soils within the kula show relatively high indicators of soil fertility even on the youngest flows, and cropping would likely meet with success wherever adequate soil and moisture could be gathered. The development of sweet potato farming in the kula can be seen in infrastructure that ranges from systematic kuaiwi in select areas on old soils (Clark 1980, Escott and Spear 2003), sparse kuaiwi, terraces and mounds in a wider range of areas on old and young soils (Hammatt and Clark 1980, Schilt 1984, Henry and Wolforth 1998, Escott and Spear 2003, Haun and Henry 2010), dense mounds and swales in more marginal areas on young soils (Schilt 1984, Rechtman et al. 2001, Haun and Henry 2010), to sparse mounds and other informal techniques in the most marginal areas (Rechtman et al. 2001, Escott and Spear 2003). The high indicators of soil fertility and adequate rainfall for cropping suggest that soil depth within the kula may be the most significant constraint to agriculture here.
The soil fertility indicators within the agroforestry plantations of the kaluulu zone remain high on old flows but fall below levels associated with intensification on the young flows. Lincoln and Ladefoged (in press) show that breadfruit remains highly productive on flows as young as 1.2 ky within this zone, and provides yields comparable to sweet potato production on older flows. Systematic agricultural infrastructure occurs on old flows in this zone (Escott and Spear 2003, Tomonari-Tuggle 2006), and evidence for extensive informal techniques occurs on young flows (Henry and Wolforth 1998, Haun and Henry 2010), suggesting that although the breadfruit canopy was continuous, the density of understory plantings varied with soil fertility within the plantations. Here the extensive application of an informal agriculture technique (the use of tree crops) may have been facilitated by the close proximity of high- and low-fertility soils. Young flows in this zone show a higher density of agricultural features in closer proximity to older flows (e.g., Hammatt et al. 1997). The increased nutrient uplift and storage within the kaluulu could facilitate other informal agriculture. Hawaiians often used groves of trees to provide mulch to support pocket agriculture, to plant within the altered environment, or to periodically engage in slash-and-burn agriculture. Within the apaa zone the soil fertility indicators are depleted for young flows, and above ~1400 mm/yr even the oldest flows drop below levels thought to have sustained intensification. In accordance with varying soil fertility, some evidence exists for apaa and amau zones of varying width. For example, Cordy et al. (1991) found a very narrow apaa zone on a 2.25 ky flow, with systematic infrastructure prevalent from ~550-700 m (~1250-1315 mm/yr), followed by a rapid transition to informal infrastructure extending up to ~900 m; in contrast, the systematic infrastructure of the apaa zone on a 4 ky flow in Kealakekua extends from ~375-800 m (~1250-1650 mm/yr), with historical accounts indicating a narrow amau zone extending to ~900 m (Menzies 1920). This scenario would result in 'fingers' of forested area extending further downslope on younger flows. These forested patches could enhance fertility on the young flows, and botanical material from the forests could potentially augment fertility on nearby old flows.

CONCLUSION

The agricultural mosaic of Kona differed from the other farming landscapes in Hawai'i due to the hospitable climate of Kona and the proximity of soils of varying fertilities. Kona offers the opportunity to understand fertility in early soil pedogenesis and to couple human engagement with a diverse and productive landscape. The soil properties used as indicators of Hawaiian dryland agricultural intensification appear to be systematically shifted towards lower rainfall on younger lava flows within Kona and Kohala (0.5-400 ky). We attribute this shift to limitation of soil fertility in Kona soils by the kinetics of release and retention of mineral nutrients in coarse substrates. It is clear that areas of high fertility were used in intensive, systematic ways, that there existed a range of intensities across the fertility gradients, and that informal techniques were applied in areas in which systematic agriculture was constrained.
Agroecological zones within the Kona farming landscape roughly align with changing patterns of soil fertility as they relate to rainfall, while localized adaptations of infrastructure and planting density tend to vary more consistently with soil fertility as it relates to lava flow age.

SUPPLEMENT

Soil and site properties along four transects as described in the main text (Ecological Archives C005-003-S1).
Spatiotemporal tracking of gold nanorods after intranasal administration for brain targeting

Intranasal administration is becoming increasingly attractive as a fast delivery route to the brain for therapeutics, circumventing the blood-brain barrier (BBB). Gold nanorods (AuNRs) demonstrate unique optical and biological properties compared to other gold nanostructures due to their high aspect ratio. In this study, we investigated for the first time the brain region-specific distribution of AuNRs and their potential as a drug delivery platform for central nervous system (CNS) therapy following intranasal administration to mice, using a battery of analytical and imaging techniques. AuNRs were functionalized with a fluorescent dye (Cyanine5, Cy5) or a metal chelator (diethylenetriaminepentaacetic dianhydride, DTPA anhydride) to complex with Indium-111, via a PEG spacer, for optical and nuclear imaging, respectively. Direct quantification of gold was achieved by inductively coupled plasma mass spectrometry. Rapid AuNR uptake in mouse brains was observed within 10 min of intranasal administration, which gradually reduced over time. This was confirmed by the three imaging/analytical techniques. Autoradiography of sagittal brain sections suggested entry to the brain via the olfactory bulb, followed by diffusion to other brain regions within 1 h of administration. The presence of AuNRs in glioblastoma (GBM) tumors following intranasal administration was also proven, which opens doors for AuNR applications, as nose-to-brain drug delivery carriers, for the treatment of a range of CNS diseases.

Introduction

Delivery of therapeutics for the treatment of central nervous system (CNS) diseases is often challenged by the presence of the blood-brain barrier (BBB). In general, only drugs with low molecular weights (under a threshold of 400-600 Da) and high lipid solubility can cross the BBB, owing to the unique brain capillary endothelial cell structures with tight junctions and efflux transporters [1][2][3].
These features prevent most drugs from reaching the brain at therapeutic levels. For decades, great efforts have been undertaken to achieve sufficient brain accumulation of therapeutic agents, including the use of invasive techniques (e.g., Gliadel® wafers, intrathecal injection, and convection-enhanced delivery) [4,5], disruption of the BBB (e.g., osmotic, ultrasound, and magnetic disruption) [2], and targeting of BBB receptors (e.g., transferrin receptors and folate receptors) [2,4]. However, these strategies are either intrusive or lack specific targeting to disease sites, resulting in adverse effects. Intranasal administration is an alternative route for delivering therapeutic agents to the brain. This non-invasive approach allows therapeutics to enter the brain through the olfactory and trigeminal nerve pathways, circumventing the BBB [6]. The benefits of this administration route also include the avoidance of first-pass metabolism by the liver, reduced drug accumulation in non-targeted tissues, and rapid onset of action. For these reasons, intranasal delivery has been trialled for a wide range of therapeutic agents, such as proteins, peptides, and small molecules, for CNS disease treatment [2,7]. For example, TNX-2900, an intranasal potentiated form of oxytocin, has been granted U.S. Food and Drug Administration (FDA) rare pediatric disease designation for the treatment of Prader-Willi syndrome. Midazolam is an FDA-approved nasal spray to treat seizures. In addition to free drugs, lipid- [8], polymer- [9,10], chitosan- [11] and inorganic material-based [12,13] nanoplatforms have also demonstrated promising effects for CNS disorder treatment after intranasal administration in preclinical studies. Gold nanoparticles (AuNPs) and nanocrystals constitute the most advanced forms of gold nanostructures to have reached clinical trials for CNS applications [14,15]. The first-in-human trial to determine the safety of nucleic acid-labelled gold nanoparticles in treating patients with recurrent glioblastoma multiforme or gliosarcoma after intravenous injection was recently completed, with 0% mortality and 87.5% of patients unaffected by serious adverse events (ClinicalTrials.gov Identifier: NCT03020017). The assessment of the safety and efficacy of oral gold nanocrystals for relapsing-remitting multiple sclerosis treatment is now in Phase 2 clinical trials (ClinicalTrials.gov Identifiers: NCT03536559 and NCT03993171). Gold nanorods (AuNRs) constitute another form of gold nanostructure being increasingly investigated biologically due to their anisotropic structure, tuneable surface plasmon and excellent biocompatibility profiles [16]. In the past decades, AuNRs have been investigated for bioimaging [17][18][19], photothermal therapy [20] and drug delivery [21] applications for CNS diseases through systemic exposure, mainly via the intravenous route. Published reports have shown that intratumorally or intravenously administered AuNRs combined with photothermal therapy can suppress the growth of glioblastoma tumors [20,22,23]. In addition, intravenously administered AuNRs loaded with amyloid-β-targeted inhibitory peptides [21] or quercetin [24] have shown therapeutic benefits against neurodegenerative diseases. Only a few studies have been reported on intranasal delivery of gold nanostructures, and these have mainly focused on gold nanoparticles. Exploiting the intranasal route for AuNR delivery remains an underexplored area.
To track gold nanostructures in vitro and in vivo, a number of imaging and quantification techniques have been employed, such as magnetic resonance (MR) imaging [25,26], optical imaging [27], nuclear imaging [28,29] and inductively coupled plasma mass spectrometry (ICP-MS) [30,31]. Each of these techniques comes with its own set of benefits and limitations. For example, MR imaging can provide tomographic information on live biological specimens with high spatiotemporal resolution (down to 50-250 μm) but lacks intrinsic contrast, which makes definitive detection of the desired tissue difficult [25,32]. Optical imaging is highly desirable in clinical and preclinical studies due to its rapid screening and cost-effectiveness. However, background autofluorescence and limited light penetration into deep tissues make it difficult to quantitatively analyze gold nanostructures [32,33]. ICP-MS is considered the 'gold standard' method for gold content quantification in tissues; however, it is labour-intensive and offers no spatial information. Therefore, multi-modal imaging techniques are required for robust qualitative and quantitative analyses of AuNR biodistribution profiles. In this study, we built on our in-house expertise in multi-modal imaging to investigate the spatiotemporal distribution of AuNRs after intranasal administration. To achieve this, AuNRs were functionalized with Cyanine5 (Cy5), a fluorescent probe suitable for optical imaging, or diethylenetriaminepentaacetic dianhydride (DTPA dianhydride), a metal chelator, to enable nuclear imaging. Whole-body distribution, with a focus on healthy and glioblastoma-bearing brains, was comprehensively assessed using optical imaging, ICP-MS, single-photon emission computed tomography (SPECT)/computed tomography (CT) imaging, gamma counting and autoradiography. The results obtained provide evidence of AuNR access to the brain via the nasal route, which is highly relevant for the further development of AuNRs as drug delivery carriers for brain imaging and therapy via the nose-to-brain route.

Synthesis of AuNRs

Gold nanorods were synthesized by the seed-mediated growth method as described previously, with modifications [34,35]. First, CTAB solution (5 mL, 0.2 M) was mixed with HAuCl4·3H2O (5 mL, 0.0005 M). Then, ice-cold NaBH4 (600 μL, 0.01 M) was added to the above mixture, which rapidly turned a brownish-yellow color. The solution was vigorously stirred for 2 min, kept in a water bath at 28 °C for 2 h, and used as the seed solution. For the growth solution, HAuCl4·3H2O (45 mL, 0.001 M) was mixed with AgNO3 (3.6 mL, 0.004 M) and CTAB (45 mL, 0.1 M). After mixing the solution a few times by inversion, vitamin C (0.72 mL, 0.1 M) and HCl (1.44 mL, 1 M) were added. Finally, the seed solution (360 μL) was added to the prepared growth solution. The whole mixture was kept at 28 °C overnight to obtain AuNRs. After synthesis, the mixture was purified by a combined purification method in which it was first dialyzed (dialysis bag, MWCO = 3500) against deionized water (1500 mL) overnight, with the external dialysis buffer exchanged once. Afterwards, the solution was split into two halves, transferred into two 50 mL centrifuge tubes and centrifuged (Eppendorf Centrifuge 5810 R, UK) at 10,000 rpm for 15 min at room temperature (RT). The pellets were resuspended in deionized water (30 mL) and washed one more time before collection.
Cy5-PEG-SH synthesis

Sulfo-cyanine5 NHS ester was dissolved in dimethyl sulfoxide (DMSO) to obtain a concentration of 5 mM. NH2-PEG-SH (450 μL, 550 μM in 0.1 M sodium bicarbonate buffer, pH 8.2) was mixed with sulfo-cyanine5 NHS ester (50 μL, 5000 μM in DMSO) at a 1:1 molar ratio. After reacting for 3 h at RT in the dark, the mixture was applied to a NAP™-5 column and eluted with deionized water. A total of 30 fractions of 120 μL each were collected. Afterwards, each fraction was diluted 100-fold with deionized water for fluorescence intensity measurement. In addition, 2 μL of each fraction was spotted on an aluminium-backed silica gel thin-layer chromatography (TLC) sheet. The color was developed under iodine vapour to detect PEG fragments. Only fractions positive for both Cy5 (blue color) and PEG (brown color) were collected. The final concentration of Cy5-PEG-SH was measured in a plate reader (BMG Labtech, UK) against a 0-5 μM Cy5 standard curve.

Cy5-AuNR-PEG-NH2 synthesis

AuNR stock solution (1 mL, 50 nM) was centrifuged (10,000 rpm, 15 min, RT) to collect the pellet. A mixture (1 mL) containing Cy5-PEG-SH (200 μM) and NH2-PEG-SH (500 μM) was added to the AuNR pellet fraction (30 μL) and reacted for 24 h in the dark. As a negative control, the same mixture (1 mL) was mixed with deionized water (30 μL) and reacted under the same conditions. At the end of the reaction, the mixtures were centrifuged (10,000 rpm, 15 min, RT). The supernatants were diluted 100-fold with deionized water before the Cy5 concentration was measured in the plate reader. The amount of Cy5 conjugated to the AuNRs was calculated by subtracting the Cy5 measured in the supernatant of the experimental group from that of the negative control group. The optical properties of Cy5-AuNR-PEG-NH2 before and after functionalization were characterized by UV-vis-NIR spectroscopy. The Cy5-AuNR-PEG-NH2 pellets were resuspended in 0.1% Tween® 20 and washed once by centrifugation before use.

DTPA-PEG-SH synthesis

The reaction time and molar ratio between DTPA anhydride and NH2-PEG-SH were optimized to synthesize DTPA-PEG-SH. In brief, NH2-PEG-SH (2.5 mM final concentration) was mixed with DTPA anhydride (5, 2.5, 1.25 or 0.625 mM final concentration) in 500 μL of DMSO. After reacting for 4 h or 24 h, the substitution of amine groups in NH2-PEG-SH was determined by Ninhydrin assay (Supplementary Information). An optimized NH2-PEG-SH:DTPA anhydride molar ratio of 1:1 was used for the large-scale synthesis of the DTPA-PEG-SH linkers. Briefly, DTPA anhydride (3.6 mg) was dissolved in DMSO (1 mL) and then added to NH2-PEG-SH powder (35 mg). The mixture was stirred for 4 h. At the end of the reaction, the mixture was diluted 5-fold with deionized water before being applied to a PD-10 column and eluted with deionized water. A total of 30 fractions of 400 μL each were collected. The presence of PEG in each fraction was determined by TLC as described above, and only PEG-positive fractions were collected. To obtain high-purity DTPA-PEG-SH for radiolabelling, the DTPA-PEG-SH linkers were purified on the PD-10 column for another three rounds: two rounds eluted with 0.9% NaCl and the last round eluted with deionized water. The concentrations of PEG and DTPA fragments in the final products were quantified by the published iodine solution-based assay [36] and Gd3+-Xylenol Orange assay [37], respectively, with modifications (Supplementary Information).
The resulting products were freeze-dried and stored at −20 °C until use.

DTPA-AuNR-PEG-NH2 synthesis

DTPA-PEG-SH (100 μL, 2.5 mM) and NH2-PEG-SH (100 μL, 2.5 mM) were added to AuNR stock solution (1 mL, 50 nM) and reacted for 24 h. NH2-PEG-SH (200 μL, 2.5 mM) was added to AuNR stock solution (1 mL, 50 nM) or deionized water (1 mL) and reacted under the same conditions as the positive and negative control groups, respectively. After the 24 h reaction, the mixtures were centrifuged (10,000 rpm, 15 min, RT). The unconjugated NH2-PEG-SH in the supernatants was collected and detected by Ninhydrin assay. The amine content in the supernatant of the positive control group, subtracted from that of the negative control group, was taken to represent 100% of the conjugated PEG. The conjugated DTPA-PEG-SH in the experimental group was estimated as 50% of the total conjugated PEG. The DTPA-AuNR-PEG-NH2 pellet was resuspended in 0.1% Tween® 20 and washed twice by centrifugation before radiolabelling.

UV-vis-NIR absorption

The optical properties of AuNRs and functionalized AuNRs were determined by UV-Vis spectroscopy using a Lambda 2 UV/VIS spectrometer (Perkin Elmer, USA) over a wavelength range of 400-1000 nm.

Fourier transform infrared spectrum characterization

DTPA-AuNR-PEG-NH2 was characterized using a Frontier FT-IR spectrometer (PerkinElmer, USA). Each sample was analyzed at RT over the spectral range of 4000-800 cm−1, with a total of 32 scans per run.

Nanoparticle tracking analysis (NTA)

The hydrodynamic size and particle concentration of the functionalized AuNRs were determined by nanoparticle tracking analysis (NTA) using a NanoSight LM10 (Malvern Instruments, UK). NTA represents the most reliable method to directly establish nanoparticle concentration [38]. The particles were diluted with filtered deionized water to obtain 20-80 particles in the viewing frame. The modal size and particle count were measured in quadruplicate, with a 30 s duration for each recording, and analyzed using the NanoSight NTA 3.2 software (Malvern Instruments, UK).

Zeta potential

The Zeta potential of the particles was determined at RT by electrophoretic mobility measurement using a Zetasizer Nano series instrument (Malvern Instruments, UK).

Transmission electron microscopy

The morphology of the particles was evaluated by transmission electron microscopy (TEM). A drop of purified particle sample was loaded onto a carbon-coated 300-mesh copper grid and allowed to stand for 3 min. The excess fluid was absorbed with filter paper. The grid was quickly washed with filtered deionized water and air-dried. The grids were then imaged at 200 kV with a JEM-2100 transmission electron microscope (JEOL, Japan). For negative staining, the sample placed on the grid was treated with 3% uranyl acetate for 2-3 min; excess fluid was removed with filter paper, and the grid was washed twice with filtered deionized water, air-dried and imaged. The obtained images were analyzed using ImageJ software (USA). More than 200 nanoparticles were counted for the length and width measurements.

Radiolabelling efficiency and radiochemical stability analysis

The radiolabelling efficiency of DTPA-PEG-SH was first assessed. In brief, 2 MBq (2.5-4 μL) of 111InCl3 stock was added to 0.2 M ammonium acetate buffer (pH 5.5) to achieve a final volume of 100 μL. The mixture was then added to an equal volume of DTPA-PEG-SH (~1 mM in deionized water) to achieve a final concentration of 0.1 M ammonium acetate.
The mixture was incubated for 30 min at RT, with vortexing every 5 min. EDTA (5 μL, 0.1 M) was added to stop the reaction. Afterwards, 1 μL of the sample was spotted on an instant TLC (iTLC) paper strip. The strips were developed in 0.1 M ammonium acetate containing 0.25 mM EDTA (pH 5.5) as the mobile phase. The strips were then exposed to a multipurpose storage phosphor screen (Cyclone®, Packard, UK) in autoradiography cassettes for ~5 min, analyzed on a Cyclone Storage Phosphor System and quantified using Optiquant software (Packard, Meriden, USA). The spots at the application point of the iTLC strips correspond to the 111In-labelled DTPA-PEG-SH. Radiolabelling efficiency was calculated as the % radioactivity remaining at the application point. For DTPA-AuNR-PEG-NH2 radiolabelling, the desired amount of particles was resuspended in 0.1% Tween® 20, mixed with the required amount of 111InCl3 stock (0.5-1, 3-5, and 5-10 MBq per mouse for gamma counting, autoradiography and SPECT/CT imaging, respectively), and reacted under the conditions described above. The radiolabelled DTPA-AuNR-PEG-NH2 was purified by centrifugation at 10,000 rpm for 20 min at RT to remove free 111In prior to the in vivo studies. For radiochemical stability analysis, [111In]DTPA-AuNR-PEG-NH2 was incubated in PBS or 50% serum for 24 h at RT or 37 °C and then spotted on iTLC paper strips. The strips were developed in 0.1 M ammonium acetate containing 0.25 mM EDTA (pH 5.5) as the mobile phase. The 111In that remained conjugated to DTPA-AuNR-PEG-NH2 (immobile spot at the application point) was considered radiochemically stable.

Animals

All animal experiments were performed in compliance with the UK Animals (Scientific Procedures) Act 1986 and the UK Home Office Code of Practice for the Housing and Care of Animals Used in Scientific Procedures (Home Office 1989). In vivo experimentation adhered to the project license approved by the King's College London animal welfare and ethical review body (AWERB) and the UK Home Office (PBE6EB195). Female CD-1 mice (25-35 g, 6-8 weeks old) were obtained from Charles River (UK) for the multi-modal tracking studies. Male and female C57BL/6 mice (18-25 g, 4-6 weeks old), also obtained from Charles River (UK), were used for orthotopic glioblastoma mouse model establishment. Both sexes were included in the study, in line with recently published recommendations by the Medical Research Council, UK, for conducting research on animals.

Tumor model induction

The intracranial GL261 glioma model was established in C57BL/6 mice as described previously, with modifications [39,40]. Female and male C57BL/6 mice, aged 4-6 weeks, were anesthetized using isoflurane inhalation. Prior to surgery, animals received a subcutaneous injection of 0.3 mg/kg Vetergesic. The mice were then injected stereotactically with 200,000 GL261 murine glioma cells expressing Red-Fluc luciferase (BW134246, Perkin-Elmer), suspended in 2 μL PBS, into the left hemisphere, using a Hamilton syringe (Harvard Apparatus, UK) with a 28-gauge needle at a rate of 0.2 μL/min. The stereotactic coordinates relative to bregma were: 0.5 mm anterior, 1.5 mm lateral and 2.5 mm deep. Tumor growth was monitored by bioluminescence imaging twice a week (IVIS Lumina III, Perkin-Elmer, UK). Anesthetized mice were injected subcutaneously with 150 mg/kg luciferin (D-luciferin potassium salt, Perkin-Elmer, UK) and imaged 10 min after injection.
Bioluminescence signals from the regions of interest were measured using Living Image software (Perkin-Elmer, UK) and recorded as total flux (photons/s). Animals were used for the gamma counting biodistribution studies when the tumors reached the desired size, ~2 weeks after implantation (total flux >1 × 10^7 photons/s).

Ex vivo optical imaging studies

Female CD-1 mice were food-restricted, with free access to water, for 24 h before administration for the ex vivo optical imaging studies. Cy5-AuNR-PEG-NH2 was suspended in CTS solution (0.5% CMC, 0.1% Tween® 20 and 0.9% NaCl, w/w) to ensure sufficient retention of the nanoparticles in the nasal passage. AuNRs were administered at a final particle concentration of 300 nM. Mice were intranasally administered the formulations under inhalational anaesthesia by dosing 2 μL to the left and right nostrils alternately, at a minimum 20 s interval. A total of 20 μL was administered to each mouse, corresponding to an AuNR dose of ~6 pmol/mouse (300 nM × 20 μL). At predetermined time points (10 min, 30 min, 1 h, 3 days and 7 days) post-administration, mice were culled by cervical dislocation without cardiac perfusion. Organs including brain, heart, lung, liver, spleen, kidneys, stomach and intestine were collected, weighed and imaged using an IVIS Lumina III system (PerkinElmer, UK). Untreated mice were used as controls. Free Cy5 dye dissolved in CTS solution was administered as a control at a fluorescence intensity equivalent to the formulations. Fluorescence images were obtained using a Cy5 filter (Ex: 620 nm/Em: 670 nm), with exposure times of 1 s for the major organs and 25 s for the separated brains. The obtained images were analyzed using the Living Image 4.7.2 software (PerkinElmer, UK), in which regions of interest (ROIs) were drawn for each organ to obtain the fluorescence signals.

ICP-MS measurement for brain uptake

After the ex vivo optical imaging studies, brains were weighed using an analytical balance (Secura, Germany) and then dried in an oven at 70 °C in 15 mL pre-cleaned, trace-metal-grade HDPE centrifuge tubes. Afterwards, 1.5 mL of HCl (37%, w/w) and 0.5 mL of HNO3 (68%, w/w), together constituting aqua regia, were added to individual brain samples at 2-3 min intervals. Tubes were allowed to settle for 10 min at RT and then closed. Samples were digested in an oven at 70 °C overnight. After digestion, the tubes were vortexed briefly and centrifuged (4000 rpm, 40 min, RT) to precipitate the undissolved fat. To correct for instrument drift and matrix effects, 50 μL of 2 ppm iridium (Ir) was spiked into 250 μL of supernatant as the internal standard; deionized water was then added to a final volume of 5 mL (a 20-fold dilution) for ICP-MS measurement (Perkin Elmer, UK). A gold calibration curve between 0.1 and 250 μg/L was established. All calibration solutions and blanks were doped with Ir as the internal standard. Quality control of the ICP-MS measurements was ensured through repeated measurements of blanks and a calibrant.

Quantitative biodistribution of radiolabelled AuNRs using gamma counting

Organ biodistribution profiles of [111In]DTPA-AuNR-PEG-NH2 were investigated in female CD-1 mice using gamma counting to obtain quantitative data. Mice were intranasally administered 20 μL of 300 nM [111In]DTPA-AuNR-PEG-NH2 in CTS solution (0.5-1 MBq). Blood samples (20 μL) were collected from the tail vein at 10 min, 30 min or 1 h post-administration.
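Returning briefly to the ICP-MS quantification described above: drift- and matrix-corrected concentrations follow from normalizing the analyte counts to the Ir internal standard before applying the calibration curve. A minimal sketch is shown below; all count values are hypothetical, and the 20-fold factor reflects the 250 μL-to-5 mL dilution.

```python
import numpy as np

# Hypothetical calibration standards (ug Au/L) with raw Au and Ir counts;
# dividing Au counts by the Ir internal-standard counts corrects for
# instrument drift and matrix effects before fitting the calibration line.
std_conc = np.array([0.1, 1.0, 10.0, 50.0, 100.0, 250.0])
au_counts = np.array([52.0, 510.0, 5.1e3, 2.52e4, 5.08e4, 1.26e5])
ir_counts = np.array([1.00e5, 0.99e5, 1.02e5, 0.98e5, 1.01e5, 1.00e5])

slope, intercept = np.polyfit(std_conc, au_counts / ir_counts, 1)

def au_ug_per_l(sample_au, sample_ir, dilution=20.0):
    """Au concentration (ug/L) in the original digest, from sample counts."""
    return dilution * ((sample_au / sample_ir) - intercept) / slope

print(au_ug_per_l(8.0e3, 1.0e5))  # hypothetical digested-brain readout
```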
Mice were then sacrificed, and organs and tissues (skin, liver, spleen, kidneys, heart, lung, muscle, bone (femur), brain, stomach, intestine, nasal passage and carcass) were collected, weighed, and placed in scintillation vials. The radioactivity of the samples was measured by a gamma counter (RUO WIZARD 2 2-detector 550 samples, PerkinElmer, UK) together with radioactive dose standards. To further investigate the brain distribution of the particles, brains were dissected into four coronal sections (1: the olfactory bulbs (OB); 2: front cerebrum (CB 1); 3: back cerebrum (CB 2); 4: brain stem (BS) and cerebellum (CE)). To investigate AuNR biodistribution in the orthotopic glioblastoma mouse model, organs and tissues were collected at predetermined time points post-administration (10 min and 24 h) as described above, with the modification that brains were dissected to separate the tumor mass from the rest of the brain parenchyma. Organs were weighed using an analytical balance prior to the measurement of radioactivity using a gamma counter. Results were expressed as the percentage of injected dose per tissue (%ID/tissue) or percentage of injected dose per gram of tissue (%ID/g of tissue).

Autoradiography

To investigate the regional distribution of [111In]DTPA-AuNR-PEG-NH2 in brains, CD-1 mice were intranasally administered 20 μL of 300 nM [111In]DTPA-AuNR-PEG-NH2 in CTS solution (3-5 MBq) for autoradiography. Brains were harvested at 10 min, 30 min or 1 h post-administration. Each brain was cut into 2 mm thick sagittal sections using a mouse brain slicer (Zivic-Miller, USA). Sections were placed between two glass microscope slides before being exposed to a super-sensitive plate (Storage Phosphor Screen BAS-IP, Fujifilm, USA) in autoradiography cassettes. For brains harvested at 30 min or 1 h post-administration, these sandwich units were exposed for ~40 h in the autoradiography cassettes. For brains harvested at 10 min post-administration, the sandwich units were allowed to decay for 2 days and then exposed for ~16 h in the autoradiography cassettes before imaging using a laser scanner (Typhoon™ FLA 7000, GE Healthcare Life Sciences, UK). The obtained images were analyzed using ImageJ software (USA).

Statistical analysis

Quantitative results are presented as mean ± standard deviation (SD), where "n" denotes the number of repeats. Statistical differences were examined using one-way ANOVA, except for the gamma counting study of different brain sections, in which two-way ANOVA was used, with GraphPad Prism 8 software (v 8.2.1). A P value <0.05 was considered statistically significant.

Cy5-PEG-SH synthesis

Cy5-PEG-SH was synthesized by the reaction between the NHS ester group of Cy5 and the amine group of PEG to yield a stable amide bond (Scheme 1). After synthesis, Cy5-PEG-SH was purified on the NAP™-5 column. Cy5-PEG-SH, possessing the larger molecular weight, eluted first, in fractions 3 to 7, as confirmed by the strong fluorescence signals (Fig. S1A and B). The same fractions were positive for PEG (Fig. S1C), further confirming the successful synthesis of Cy5-PEG-SH. The unconjugated Cy5 was collected in fractions 13 to 24 (Fig. S1A and B).

DTPA-PEG-SH synthesis

The free amine of PEG acts as a nucleophile that attacks the anhydride of DTPA, resulting in amide bond formation (Scheme 1). To optimize the synthesis conditions, NH2-PEG-SH was reacted with DTPA anhydride at molar ratios of 1:2, 1:1, 2:1 and 4:1, using DMSO as the solvent (Table S1).
It was shown that at the 1:1 molar ratio (PEG:DTPA anhydride), ~75.4% of the amines in PEG were substituted after a 4 h reaction. Increasing the DTPA anhydride concentration to a 1:2 molar ratio (PEG:DTPA anhydride) or extending the reaction time to 24 h produced no significant improvement in PEG substitution. It is worth noting that when NH2-PEG-SH was reacted with DTPA anhydride at a 4:1 molar ratio, ~20.8% of the amines in PEG were substituted after 4 h, increasing to ~43.8% after 24 h, indicating that disubstitution with DTPA anhydride may have occurred. Therefore, an NH2-PEG-SH:DTPA anhydride molar ratio of 1:1 and a reaction time of 4 h were applied for the large-scale synthesis. In the final products, the proportion of DTPA fragments to PEG fragments, quantified by the modified Gd3+-Xylenol Orange assay and the iodine solution-based assay, respectively, was in the range of 0.8-1, confirming the successful synthesis of DTPA-PEG-SH. The FT-IR spectrum of DTPA-AuNR-PEG-NH2 demonstrated the typical bands of PEG and an amide band at 1633 cm−1, confirming successful DTPA-PEG-SH conjugation to the AuNRs (Fig. 1, Fig. S2C). In the positive control group, 3267 ± 938 PEG molecules were conjugated to a single AuNR, as determined by the Ninhydrin assay. It was therefore estimated that each AuNR is conjugated to ~1600 DTPA-PEG-SH molecules. Compared with AuNR-PEG-NH2, DTPA-AuNR-PEG-NH2 demonstrated an increased hydrodynamic size of 64.0 ± 3.4 nm and a decreased Zeta potential of 11.4 ± 0.5 mV, due to the reduced amine content on the particle surface. Both Cy5-AuNR-PEG-NH2 and DTPA-AuNR-PEG-NH2 demonstrated good colloidal stability. The physicochemical characteristics of Cy5-AuNR-PEG-NH2 and DTPA-AuNR-PEG-NH2 are summarized in Table 1. Representative TEM images of the particles are shown in Fig. 1 and Fig. S3. The images show no apparent structural differences after Cy5 or DTPA functionalization. All the particles demonstrated a monodisperse rod morphology, with a length of 45.1 ± 4.4 nm and a width of 11.2 ± 1.3 nm (>200 particles counted).

Ex vivo biodistribution of AuNRs by optical imaging

Ex vivo organ distribution profiles were first assessed by optical imaging using the Cy5-labelled AuNRs. Mice were intranasally administered 20 μL of 300 nM nanoparticles. At this dose, mice behaved normally, without change in body weight up to 7 days post-administration (Fig. S4). Mice were food-restricted, with free access to water, for 24 h before the experiments to reduce food interference in the optical imaging. High autofluorescence signals, particularly in the stomach and intestine, were seen in both control and treated mice (Fig. 2A), making it hard to distinguish whether the signals in the stomach/intestine were attributable to AuNRs or were an artefact of tissue background. Similar results have been reported previously, where extracellular vesicles were bioengineered with mCherry (ex/em = 587 nm/610 nm) to track their ex vivo organ distribution profiles [41]. Interestingly, in the ex vivo imaged brains (Fig. 2B and C), higher fluorescence signals were observed in treated mice compared with control mice, especially at 10 min post-administration (**P < 0.01). The highest fluorescence signal was seen in the frontal brain region.
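As an aside on the particle geometry reported above (TEM length 45.1 ± 4.4 nm, width 11.2 ± 1.3 nm), the aspect ratio and a first-order uncertainty estimate follow directly; a minimal sketch, assuming independent errors:

```python
import math

# Aspect ratio from TEM length/width statistics (mean +/- SD),
# with first-order error propagation assuming independent errors.
length, s_length = 45.1, 4.4   # nm
width, s_width = 11.2, 1.3     # nm

aspect_ratio = length / width
s_ar = aspect_ratio * math.sqrt((s_length / length) ** 2
                                + (s_width / width) ** 2)
print(f"aspect ratio = {aspect_ratio:.1f} +/- {s_ar:.1f}")  # ~4.0 +/- 0.6
```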
To confirm that the detected fluorescence signals represented Cy5-AuNR-PEG-NH2, Cy5 molecules alone, at identical fluorescence intensity, were administered to CD-1 mice; these showed brain signal intensities similar to those of control mice (P > 0.05), suggesting that the increased fluorescence signal was indeed attributable to AuNR translocation to the brain (Fig. S5).

ICP-MS measurement for gold brain uptake

Although the Au-S bond can be considered a covalent bond with a bond energy of 40-50 kcal/mol [42,43], it has been reported that monothiol ligands may experience a high dynamic off-rate that destabilizes the particles in biologically relevant reducing environments [44]. To further confirm that the fluorescence signals detected in the brain came from intact Cy5-AuNR-PEG-NH2 rather than detached Cy5-PEG, the brain samples from the ex vivo optical imaging were processed for ICP-MS to quantify the gold content. Brains harvested at 10 min post-administration showed the highest Au uptake, reaching 39.71 ± 16.57 μg Au/g of brain (Fig. 3). At the later time points of 30 min and 1 h post-administration, Au uptake in brains was reduced. At 3 and 7 days post-administration, the Au contents in brains were significantly reduced, to 0.86 ± 0.22 and 0.59 ± 0.21 μg Au/g of brain, respectively (*P < 0.05 vs 10 min post-administration). Au contents in the control brains were ~0.09 μg Au/g of brain. These findings are consistent with the trend observed by optical imaging (rapid AuNR distribution to the brain), except that Au could still be detected in brains at 30 min and 1 h, presumably due to the high sensitivity of the technique and the fact that ICP-MS measures Au directly.

Radiolabelling efficiency and radiochemical stability of DTPA-AuNR-PEG-NH2

Quantitative biodistribution assessments were also carried out by radiolabelling the AuNRs with the gamma-emitting radioisotope 111In and studying the biodistribution of the [111In]DTPA-AuNR-PEG-NH2 construct. The radiolabelling efficiency of DTPA-PEG-SH was first assessed using iTLC, eluting with 0.1 M ammonium acetate containing 0.25 mM EDTA (pH 5.5). Free DTPA chelated with 111In migrated to the solvent front, whereas the radiolabelled DTPA-PEG-SH remained at the application point (Fig. 4A and Fig. S6). The DTPA-PEG-SH conjugates after synthesis were collected through PD-10 columns, followed by successive rounds of elution. The radiolabelling efficiency of the DTPA-PEG-SH conjugates increased from 18.0% to 71.2% and 94.2% after the first, second and third elutions, respectively (Fig. S6). The DTPA-PEG-SH after the third elution was used for AuNR functionalization. The resulting DTPA-AuNR-PEG-NH2 demonstrated a radiolabelling efficiency of 97%, as shown in Fig. 4A.

Quantitative organ distribution of AuNRs by gamma counting

To gain quantitative insight into the biodistribution of [111In]DTPA-AuNR-PEG-NH2, gamma counting was performed on all organs. For all groups, ~30% of the injected dose (ID) remained in the nasal passage (Fig. S9) after intranasal administration. Radioactivity in the stomach increased from 0.009 ± 0.005 %ID/g of tissue at 10 min to 38.2 ± 8.5 %ID/g of tissue at 30 min, and then decreased to 11.2 ± 7.0 %ID/g of tissue at 1 h post-administration (Fig. 5B). Similarly, the radioactivity in the intestine increased from 0.003 ± 0.002 %ID/g of tissue at 10 min to 2.0 ± 0.5 and 3.0 ± 2.3 %ID/g of tissue at 30 min and 1 h post-administration, respectively.
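The %ID/g values reported here follow from normalizing each tissue's gamma counts to the administered dose, reconstructed from the dose standards counted alongside the samples (which also cancels physical decay). A minimal sketch with hypothetical numbers:

```python
def percent_id_per_g(sample_cpm, standard_cpm, standard_fraction, tissue_g):
    """%ID/g from gamma-counter readouts. `standard_cpm` is the counts of
    a dose standard representing `standard_fraction` of the administered
    dose, counted alongside the samples (so decay cancels out)."""
    dose_cpm = standard_cpm / standard_fraction   # counts for 100% of ID
    return 100.0 * sample_cpm / dose_cpm / tissue_g

# Hypothetical: brain (0.45 g) at 180 cpm; 1% dose standard at 11,000 cpm
print(f"{percent_id_per_g(180, 11000, 0.01, 0.45):.3f} %ID/g")  # ~0.036
```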
In contrast to the gastrointestinal tract, [111In]DTPA-AuNR-PEG-NH2 showed noticeable lung accumulation, with 0.87 ± 1.21 and 3.6 ± 5.5 %ID/g of tissue at 10 min and 30 min, respectively, which dropped to 0.07 ± 0.04 %ID/g of tissue at 1 h post-administration. This suggests that some particles travelled beyond the nasopharynx and entered the lungs after administration; through mucociliary clearance, most of the particles were then swallowed and passed through the GI system at the later time points (>10 min). This allows the particles to be excreted from the animal through the faeces, minimizing systemic exposure. The biodistribution profiles of [111In]DTPA-AuNR-PEG-NH2 in the other major organs at 1 h post-administration showed uptake in the order kidneys > heart > spleen > liver, with 0.10 ± 0.05, 0.04 ± 0.007, 0.03 ± 0.004 and 0.01 ± 0.002 %ID/g of tissue, respectively. In addition, ~0.6% of the ID was found in the blood throughout (Fig. S9), suggesting that a small fraction of the particles is rapidly absorbed into the systemic circulation after intranasal administration. Regarding brain accumulation, the highest brain uptake was observed at 10 min post-administration, at 0.036 ± 0.008 %ID/g of tissue (*P < 0.05, 10 min vs 30 min and 1 h post-administration); brain accumulation decreased to 0.019 ± 0.009 %ID/g of tissue at 1 h post-administration.

Brain regional distribution of AuNRs by autoradiography

To further investigate the brain distribution of the particles, brains were cut into four coronal segments (Fig. 6A). The results showed that the highest brain uptake was detected in the olfactory bulbs at all time points tested, at 3-8 fold the values obtained in the other coronal brain sections. Autoradiography was then applied to provide direct evidence of the region-specific brain accumulation of [111In]DTPA-AuNR-PEG-NH2. The exposure time was adjusted according to the amount of radioactivity in each group for better illustration. Brains harvested at 10 min post-administration demonstrated the highest signals among all time points (Fig. 6B). These findings matched the previous results from optical imaging, ICP-MS and gamma counting. Interestingly, by 1 h post-administration a clear signal was noticeable in the caudal brain, indicating deeper transport of [111In]DTPA-AuNR-PEG-NH2 into brain regions over time.

AuNR brain uptake in glioblastoma-bearing mice

After confirming the ability of AuNRs to reach the brain after intranasal administration, we tested whether AuNRs can reach brain tumors via the nasal route. C57BL/6 mice were intracranially implanted with GL261 tumor cells. The tumor growth curves after GL261 cell injection are shown in Fig. S10. When the tumors reached the desired size, mice were intranasally administered [111In]DTPA-AuNR-PEG-NH2. At 10 min and 24 h post-administration, the GL261 tumors were excised, and AuNR uptake was assessed by gamma counting and compared to the uptake in the brain parenchyma of the same brains. The whole-organ biodistribution profile at 10 min was similar to that obtained in healthy CD-1 mice (Fig. 7A). At 24 h, uptake was dramatically reduced in the nasal passage, blood, heart and lung. However, AuNR levels increased in the stomach, intestine and liver.
The radioactivity measured in healthy C57BL/6 mouse brain at the 10 min time point was lower (~0.01 %ID/g of tissue) than the values measured in CD-1 mice (~0.036 %ID/g of tissue), in agreement with other studies we performed using other types of carriers (data not shown). No significant difference in radioactivity was found between tumor tissues and brain parenchyma (P > 0.05) (Fig. 7B). At 24 h, AuNRs were almost entirely cleared from both the tumors and the brains. Altogether, the results confirmed the presence of AuNRs in brain tumors, broadening the potential applications of intranasal AuNRs to include brain cancer indications in addition to neurodegenerative diseases.

(Caption fragment for the ICP-MS experiment, Fig. 3: AuNRs were intranasally administered to CD-1 mice and brains were harvested at 10 min, 30 min, 1 h, 3 days and 7 days post-administration; the total amount of Au in the brain tissue digested with aqua regia was measured by ICP-MS; values are expressed as mean ± SD, n = 3; *P < 0.05.)

(Caption fragment for the radiolabelling stability assay, Fig. 4: samples were incubated in PBS or 50% serum for 24 h at room temperature or 37 °C and then spotted on iTLC paper; the paper was developed with 0.1 M ammonium acetate containing 0.25 mM EDTA (pH 5.5) as the mobile phase and imaged with a phosphorimager; radiolabelling efficiency and radiochemical stability were calculated as the percentage of radioactivity remaining at the application point.)

Discussion

When designing effective nanomedicines, the shape of the particles is considered one of the most critical factors influencing their cellular internalization and in vivo biodistribution [20,45]. Nanoparticles with a high aspect ratio (AR) tend to be taken up by cells at a faster rate and to a greater extent than particles with a low AR [46]. It was reported that, for two similarly sized particles, those with an AR of 3 showed 4-fold more efficient cell internalization than particles with an AR of 1 [47]. Another study showed more efficient extravasation and deeper penetration of AuNRs into tumors than nanospheres of the same effective hydrodynamic size [48]. This formed the basis of the present study, in which we aimed to synthesize high rod-purity AuNRs with an average AR of 4 as a candidate drug delivery carrier. The aim of this study was to provide a conclusive overview of the spatial and temporal brain distribution of AuNRs after intranasal administration, as both the delivery system and the route of administration have recently attracted great attention in the field of non-invasive delivery to the brain in an attempt to treat a range of CNS diseases including brain cancer [49,50].

Optical imaging uses non-ionizing radiation ranging from ultraviolet to infrared light to capture detailed images of tissues, cells and even molecules [51,52]. This imaging modality is highly desirable in clinical and pre-clinical studies owing to its safety, rapid screening and cost-effectiveness [51,53]. AuNPs are known as fluorescence quenchers. In principle, the quenching property of AuNPs depends mainly on three factors: their morphology, their optical properties and the distance between the fluorescent dyes and the AuNPs. Efficient quenching occurs for gold nanospheres possessing a small diameter (<50 nm), a short distance to the fluorescent dyes (<2 nm) and a plasmon resonance overlapping with the dye emission [54]. In this study, to diminish the quenching effect of the AuNRs, a PEG linker was introduced and a fluorescent dye (Cy5), whose ~660 nm emission wavelength best evades the plasmon resonances of the AuNRs, was selected to enable AuNR tracking by optical imaging.
A similar strategy has been successfully applied in another study where Alexa Fluor 647 with an emission wavelength of ~670 nm was used to fluorescently label Angiopep-2-AuNR conjugates to assess their ability to improve blood-brain barrier crossing in vitro. The mechanism of cell internalization of the particles was investigated on bEnd.3 cells and studied by flow cytometry [27]. Optical imaging in the selected absorption window, visible spectrum instead of near infrared region, has its own limitations such as interference from tissue autofluorescence and tissue absorption, making deep tissue imaging more challenging and only semi-quantitative [32,33]. The variability observed in optical imaging of brains could be a combination of the semi-qualitative nature of the technique and the type of administration route. High variations in in vivo biodistribution studies are commonly observed in intranasal administration studies, in comparison for example to intravenous administration which can almost ensure 100% deposition of the substance into the bloodstream. Despite the individual variations, the observation from each modality is conclusive that AuNRs demonstrated rapid brain uptake which occurs within minutes and the signal decreased with time function. For the organs such as brains and lungs which are less affected by the background autofluorescence, optical imaging remains a facile approach to semi-quantitatively track the AuNR particles ex vivo. In this study, optical imaging was employed as the first-choice screening technique, and upon comparison with other imaging modalities, it was suggested that it should be employed in combination with at least one other imaging/ quantification technique. SPECT/CT images are obtained by reconstructing a series of imaging frames acquired over a period of time so that the 3D reconstructed image is an average of uptake over ~30 min. SPECT/CT imaging done at 0-30 min, 4 h and 24 h was applied to visualize the long-term distribution and body clearance of AuNRs following intranasal administration. As shown in Fig. S9, within 1 h post-administration, most of the radioactivity remains in the nasal passage. Compared with high radioactivity in nasal passage (~30% ID), the radioactivity in brains is too low (0.01-0.02% ID) to be clearly detected by SPECT imaging. The signal to noise ratio is what makes some organs appear brighter than others. A similar phenomenon was also observed by other researchers [55,56], in which the brain signals of 123 I labelled-peptides and 99m Tc-labelled exosome were not shown in SPECT/CT images following intranasal administration, while brain uptakes were confirmed by gamma counting and autoradiography studies. The gamma counting and autoradiography were also used in this study to spatiotemporally track AuNRs brain distribution with higher detection sensitivity. The autoradiographs showing the distribution of AuNRs in different brain regions strongly suggest the involvement of multiple transport pathways after intranasal administration. The olfactory pathway (olfactory nerve and olfactory epithelium) and trigeminal nerve pathway are primarily responsible for the nose to brain delivery [6,49,57,58]. The neural projections of the olfactory bulb extend into multiple rostral brain tissues such as the olfactory tract, anterior olfactory nucleus and piriform cortex so particle distribution in brain areas neighbouring the olfactory bulb observed over 10 min to 1 h period could be attributed to the olfactory pathway [57]. 
The distribution in the more distal regions at 1 h post-administration is possibly facilitated via the trigeminal nervemediated transport as the trigeminal nerve branches innervate the respiratory and olfactory epithelium, and also enter the brain stem in the pons [57,58]. Most of the studies reporting on gold nanostructures' biodistribution focus on intravenous route as mainstream administration method. Talamini et al. compared organ distribution of spherical-shaped (50 nm), rod-shaped (length: 60 nm, width: 30 nm) and star-shaped (55 nm as average of the longest tip-to-tip distance) gold nanostructures. No evidence of gold accumulation was found in brains after 1, 24, or 120 h by ICP-MS following intravenous injection [38]. Jong et al. investigated tissue distribution of AuNPs with spherical morphology in the size range of 10 nm to 250 nm after intravenous injection in rats [30]. They demonstrated that AuNPs with the size of 10 nm were detectable in all the evaluated tissues (blood, liver, spleen, kidney, heart, lung, testis and thymus) including the brain (~ 0.3% of ID) at 24 h post-injection determined by ICP-MS. Larger particles (50 nm, 100 nm and 250 nm) could be detected in liver, spleen and blood but were excluded from the brain. Sonavane et al. also confirmed the ability of 15 nm and 50 nm AuNPs to reach the brain at 24 h timepoint after intravenous injection while 200 nm AuNPs showed negligible accumulation in tissues including brain, blood, stomach and pancreas [59]. These studies used single injection without active targeting. A plausible explanation could be that AuNPs with small particle <20 nm diameters can pass through the gap separating the astrocytic end-feet from the capillary endothelium [59,60], components of the brain microvascular units in addition to pericytes, astrocytes, tight junctions, neurons, and basal membrane [3]. Few studies have been carried out to investigate gold nanostructures organ biodistribution after intranasal administration. Ye et al. compared intravenous and intranasal administrations of gold nanocluster (~ 5.6 nm) at 1 h timepoint [5]. The intravenous group showed >10-fold higher blood, lung, liver, spleen, kidney, and heart uptake compared to the intranasal group. No significant difference between the groups was found in the brain quantified by gamma counting. Brain uptake for the intranasal group was improved when focused ultrasound combined with microbubble-mediated technique was applied. To increase affinity to β-amyloid, a therapeutic target in Alzheimer's disease, Gallardo-Toledo et al. prepared D1-peptide functionalized gold nanospheres (~47 nm) and investigated their biodistribution profiles [61]. They first demonstrated the brain accumulation achieved highest of 106 ± 19 ng Au/g tissue at 0.75 h and then decreased dramatically at 2, 4, 8 and 24 h timepoint after intranasal administration determined by Neutron Activation Analysis (NAA). However, only 1.9 ± 0.9 ng Au/g tissue achieved intravenously at 0.75 h timepoint, indicating a fast and significant delivery of AuNPs to CNS could be achieved using intranasal administration. In addition, they evaluated the brain distribution at 0.75 h timepoint using GoldEnhance™ kit for light microscopy after dissecting the brain in coronal sections and found a high percentage of the nanoparticles was in the olfactory bulb, periaqueductal gray, perirhinal and entorhinal cortex, and hippocampus region after intranasal administration. 
In the case of intravenous administration, a greater percentage was observed in the basal forebrain, thalamus, and cerebellum. In our study, we achieved ~40 μg Au/g of brain at 10 min post-administration, which decreased to ~9 and ~17 μg Au/g of brain at 30 min and 1 h, respectively. It is worth noting that both the size and the shape of the particles in our study differ from those of the reported study. Other groups have attempted to deliver gold nanoparticles and their hybrid materials to brain tumors by intranasal delivery, but to our knowledge no work has been carried out on AuNRs using this route of administration. Wang et al. developed intranasal anti-EphA3-functionalized AuNPs for temozolomide delivery to glioblastoma [12], which were shown to prolong the median survival time and increase apoptosis compared to the free drug. ICP-MS and histological examinations showed, respectively, reduced gold content in the brain after 24 h and no visible damage to major organs. Sukumar et al. showed that intranasally administered gold-iron oxide nanoparticles loaded with therapeutic microRNAs, combined with systemic temozolomide, improved the survival rate of glioblastoma-bearing mice [62]. The presence of the Cy5-labelled miRNA in brain cancer cells was also observed, providing evidence that the therapeutic agent reached the tumor mass after intranasal administration. In our study, to investigate whether AuNRs can exhibit an enhanced permeability and retention effect in glioblastoma tumors, which normally occurs over longer periods (~4-24 h), we examined brain uptake at 24 h in addition to the 10 min time point. AuNRs achieved glioblastoma and brain parenchyma uptakes of ~0.14 %ID/g of tissue and ~0.05 %ID/g of tissue, respectively, at 10 min. AuNRs could have made their way to the tumors through the nose-to-brain route or from the systemic circulation via the respiratory mucosa, since the blood-brain barrier is compromised in glioblastoma tumors [63]. Cheng et al. conjugated doxorubicin (Dox) to trans-activator of transcription (TAT) peptide-functionalized AuNPs [64]. These nanoconjugates achieved a brain concentration of ~4.5 μg Au/g of tissue and demonstrated a significant survival benefit compared to free Dox after a single intravenous injection. The brain concentration of AuNRs achieved in this study is therefore expected to be therapeutically meaningful. AuNPs are generally considered bioinert and have demonstrated negligible in vivo toxicity [65,66]. However, studies have found that intracranially injected AuNPs, especially those with small particle sizes, can increase nestin expression, which is related to CNS injury [67]. The biocompatibility of AuNRs following nose-to-brain delivery needs to be fully investigated before advancing them toward therapeutic application.

Conclusions

AuNRs were successfully synthesized and functionalized to enable ex vivo and in vivo analyses in mouse tissues. This is the first study to comprehensively analyze the brain region-specific accumulation of AuNRs following intranasal administration, providing qualitative and quantitative insights using a battery of complementary techniques, namely optical imaging, ICP-MS, gamma counting and autoradiography. The results demonstrated that rapid brain uptake of AuNRs occurs within minutes of nasal instillation, followed by gradual distribution to other brain regions over 1 h in healthy mice. Intranasal administration to an orthotopic glioblastoma mouse model confirmed that the uptake of AuNRs extends to brain tumors in the same brains.
Autoradiography images and the uptake pattern suggested the involvement of the olfactory and trigeminal pathways in brain uptake. The current study not only provides, for the first time, qualitative and quantitative information about AuNR uptake in the brain after intranasal administration, but also confirms the potential of AuNRs as intranasal delivery carriers to treat brain diseases including brain cancer.

Declaration of Competing Interest

The authors have declared that no competing interest exists.

Data availability

Data will be made available on request.
2023-04-17T06:16:12.640Z
2023-04-13T00:00:00.000
{ "year": 2023, "sha1": "5840b8f4a305e42d38d677adc6409f523e185bf1", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.jconrel.2023.04.022", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "7c51d3979bbf414c985e14c563d59885600a3a49", "s2fieldsofstudy": [ "Medicine", "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
10606280
pes2o/s2orc
v3-fos-license
On the Capacity of a Class of MIMO Cognitive Radios

Cognitive radios have been studied recently as a means to utilize spectrum in a more efficient manner. This paper focuses on the fundamental limits of operation of a MIMO cognitive radio network with a single licensed user and a single cognitive user. The channel setting is equivalent to an interference channel with degraded message sets (with the cognitive user having access to the licensed user's message). An achievable region and an outer bound are derived for such a network setting. It is shown that the achievable region is optimal for a portion of the capacity region that includes sum capacity.

I. INTRODUCTION

The design of radios to be "cognitive" has been identified by the Federal Communications Commission (FCC) as the next big step in better radio resource utilization [2]. The term "cognitive" has many different connotations both in analysis and in practice, but with two underlying common themes: intelligence built into the radio architecture coupled with adaptivity.

Cognitive radios have been studied under different model settings. The first models studied cognitive radios as a spectrum sensing problem [3], [4], [5], [6]. Under this setting, the cognitive radio opportunistically uses licensed spectrum when the licensed users are sensed to be absent in that band. Problems encountered in this setup are threefold: 1) sensing must be highly accurate to guarantee non-interference with the licensed radio; 2) control and coordination between the cognitive transmitter-receiver pair is required to ensure the same spectrum is used; and finally 3) there are no QoS guarantees for the cognitive transmitter-receiver pair.

Other models with different side information at the cognitive users have been studied. In [7] and [8], the authors study frequency coding by the cognitive transmitter by assuming non-causal knowledge of the frequency use of the primary transmitter.

Manuscript received May 18, 2007; revised September 16, 2007; revised October 26, 2007. This research was supported in part by National Science Foundation grants NSF CCF-0448181, NSF CCF-0552741, NSF CNS-0615061, and NSF CNS-0626903, THECB ARP and the Army Research Office YIP. The material in this paper was presented in part at the IEEE Information Theory Workshop, Lake Tahoe, CA, September 2007 [1]. The authors are with the Wireless Networking and Communications Group, Department of Electrical and Computer Engineering, University of Texas at Austin, Austin, TX 78712 (email: sridhara@ece.utexas.edu; sriram@ece.utexas.edu).
In this paper, we study cognition from an information theoretic setting where we assume that the cognitive transmitter knows the message of the licensed transmitter apriori.Such a model is interesting for two reasons : 1) It provides an upper limit, or equivalently a benchmark on the performance of systems where the cognitive radio gains a partial understanding of the licensed transmitter and 2) It allows us to understand the ultimate limits on the cognitive transmitter by giving it maximum information and allowing it to change its transmission and coding strategy based on all the information available at the licensed user.In essence, it enlarges the possible schemes that can be implemented at the cognitive radio, and 3) It lends itself to information theoretic analysis, being a setting where such tools can be applied to determine the performance limits of the system.Many other configurations, including the interference channel setting when the cognitive transmitter does not know the message of the licensed transmitter are multi-decade long o en problems. The goal of this paper is to study the fundamental limits of performance of cognitive radios.Along the lines of [9], we consider the model depicted in Figure 1.In this setting, we have an interference channel [10] [11][12] [13], but with degraded message sets, where the transmitter with a single message is called "legacy," "primary" or "dumb" and the transmitter with both messages termed the "cognitive" transmitter.Prior work on this model for the single antenna case is in [9] [14] 15] [16]. In this paper, we study the performance of the cognitive radio model under a multiple antenna (MIMO) setting.Both the licensed and cognitive transmitter and receiver may have multiple antennas.MIMO is fast becoming the most common feature of wireless systems due to its performance benefits.Thus, it is important to study the capacity of cognitive radios under a MIMO setting.There are some instances where the methods used in this paper bears similarities with the methods used for the SISO setting.However, most of the proofs and techniques used here are distinct and considerably more involved than those used in [16].In the SISO setting, it is possible to analyze the model for specific magnitudes of channels.This is not possible for the MIMO setting.We list some of the crucial differences between the methods used in this paper and the methods that have been used under the SISO setting. 1) In [16], the authors obtain the outer bound using conditional entropy inequality.This method cannot be extended to the MIMO setting.2) We obtain the outer bound through a series of channel transformations.Although the channel transformations are similar in spirit to those in [15], the actual transformations used are significantly different both in nature and in the mathematical proofs that accompany them.In [15], the authors reduce the channel to a broadcast channel where the combined transmitters have individual power constraints and the cognitive receiver has the message of the licensed user provided to it by a genie.The capacity region for such a variation of broadcast channel is not known in general.The authors solve for the capacity region of the broadcast channel using aligned channel techniques.On the other hand, we reduce the MIMO cognitive channel to a broadcast channel with sum power constraint and whose capacity region is now known [17][18] [19].We then use optimization techniques to compare the achievable scheme with the outer bound. A. 
Main Contributions

In this paper, our main contributions include: 1. We find an achievable region for the MCC. 2. We find an outer bound on the capacity region of the MCC. 3. We show that, under certain conditions (that depend on the channel parameters), the outer bound is tight for a portion of the capacity region boundary, including points corresponding to the sum-capacity of the channel. Combining the two above, we characterize the sum capacity of this channel and a portion of its entire capacity region under certain conditions.

B. Organization

The rest of the paper is organized as follows. We describe the notations and system model in Section II. The main results are presented in Section III. In Section IV, we present an achievable region for the Gaussian MIMO cognitive channel (MCC). An outer bound on the capacity region is shown in Section V. The optimality of the achievable region for a portion of the capacity region (under certain conditions) is shown in Section VI. Numerical results are provided in Section VII. We conclude in Section VIII.

II. SYSTEM MODEL AND NOTATION

Throughout the paper, we use boldface letters to denote vectors and matrices. |A| denotes the determinant of matrix A, while Tr(A) denotes its trace. For any general matrix or vector X, X † denotes its conjugate transpose. I n denotes the n × n identity matrix. X n denotes the row vector (X(1), X(2), . . ., X(n)), where X(i), i = 1, 2, . . ., n can be vectors or scalars. The notation H ⪰ 0 is used to denote that a square matrix H is positive semidefinite. Finally, if S is a set, then Cl(S) and Co(S) denote the closure and convex hull of S respectively.

We consider the MIMO cognitive channel shown in Figure 1. Let n p,t and n p,r denote the number of transmitter and receiver antennas respectively for the licensed user. Similarly, n c,t and n c,r denote the number of transmitter and receiver antennas for the cognitive user. The licensed user has message m p ∈ {1, 2, . . ., 2 nRp } intended for the licensed receiver. The cognitive user has message m c ∈ {1, 2, . . ., 2 nRc } intended for the cognitive receiver as well as the message m p of the licensed user.

The primary user encodes the message m p into X p n . Here, X p (i) is an n p,t -length complex vector. The cognitive transmitter determines its codeword X c n as a function of both m p and m c . Note that the cognitive transmitter wishes to communicate both m p (to the licensed receiver) and m c (to the cognitive receiver). The channel gain matrices are given by H p,p , H p,c , H c,p and H c,c , and are assumed to be static. It is assumed that the licensed receiver knows H p,p and H c,p , and the licensed transmitter knows H p,p . It is also assumed that the cognitive transmitter knows H c,p , H p,c , H c,c and the cognitive receiver knows H p,c , H c,c . The received vectors of the licensed and cognitive users are denoted by Y p n and Y c n respectively.

With the above model and notations, we can describe the system at time slot i by Y p (i) = H p,p X p (i) + H c,p X c (i) + Z p (i) and Y c (i) = H p,c X p (i) + H c,c X c (i) + Z c (i), where the noise at the primary and secondary receivers is denoted by Z p n and Z c n respectively. The noise vectors Z p n and Z c n are Gaussian and are assumed to be i.i.d. across symbol times and distributed according to N (0, I np,r ) and N (0, I nc,r ) respectively. The correlation between Z p n and Z c n is assumed to be arbitrary. This correlation does not impact the capacity region of the system as the licensed and the cognitive decoders do not co-operate with each other.
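For reference, the per-symbol input-output relations just described can be collected in a single display. This is a reconstruction from the prose above (the placement of the cross matrices H c,p and H p,c follows the stated channel-knowledge assumptions) and not a verbatim copy of the original numbered equation:

\[
\begin{aligned}
Y_p(i) &= H_{p,p}\, X_p(i) + H_{c,p}\, X_c(i) + Z_p(i), \qquad Z_p(i) \sim \mathcal{N}\!\left(0,\, I_{n_{p,r}}\right),\\
Y_c(i) &= H_{p,c}\, X_p(i) + H_{c,c}\, X_c(i) + Z_c(i), \qquad Z_c(i) \sim \mathcal{N}\!\left(0,\, I_{n_{c,r}}\right).
\end{aligned}
\]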
1e denote the covariance of the codewords of the licensed and cognitive transmitters at time i by Σ p (i) and Σ c (i) respectively.Then, the transmitters are constrained by the following transmit power constraints. n i=1 Tr(Σ p (i)) ≤ nP p n i=1 Tr(Σ c (i)) ≤ nP c .(2) A rate pair (R p , R c ) is said to be achievable if III. MAIN RESULTS In this section, we describe the main re ote the set described by                                      (R p , R c ), Σ p , Σ c,p , Σ c,c , Q : R p ≥ 0, R c ≥ 0, Σ p 0, Σ c,p 0, Σ c,c 0 R p ≤ log I + GΣ p,net G † + H c,p Σ c,c H † c,p − log I + H c,p Σ c,c H † c,p R c ≤ log I + H c,c Σ c,c H † c,c Σ p,net =   Σ p Q Q † Σ c,p   0, Tr(Σ p ) ≤ P p , Tr(Σ c,p + Σ c,c ) ≤ P c                                      .(3) In this setting, Σ p,net is a (n p,t + n c,t ) × (n p,t + n c,t ) covariance matrix while Σ c,c is a n c,t ×n c,t covariance matrix.Σ p and Σ c,p represent principal submatrices of Σ p,net of dimensions n ,t × n p,t and n c t × n c,t respectively.The covariances matrices Σ p , Σ c,p and Σ c,c determine the power constraints of the system.Let R Let R α,Σz conv denote the set described by                            (R p , R c ), Q p , Q c : R p ≥ 0, R c ≥ 0, Q p 0, Q c 0 R p ≤ log I + G α Q p G † α + G α Q c G † α − log I + G α Q c G † α R c ≤ log Σ z + KQ c K † − log |Σ z | Tr(Q p ) + Tr(Q c ) ≤ P p + αP c                            . (7) Let R α,Σz out denote the closure of the convex hull of the set of rate pai represented as R out = Σz α>0 R α out .(9) Then, the next th outer bound on the capacity region of the MCC. Theorem 3.2: The capacity region of the MCC, C MCC satisfies C MCC ⊆ R α,Σz out , ∀α > 0, Σ z C MCC ⊆ R out . (10) The proof is given in Section V and proceeds by a series of channel tra esults in a new channel whose capacity region is in general a superset (outer bound) of the capac . Let BC(H 1 , H 2 , P ) denote a two user MIMO broadcast channel with channel matrices given by H 1 and H 2 and with Σ † c,p − log I + 1 α H c,p Σ c,c H † c,p R c ≤ log I + 1 α H c,c Σ c,c H † c,c Tr(Q p ) + .(11) We let R α part,out to denot hull of the set of rate pairs described by (R p , R c ) : ∃Q p , Σ c,c 0 and ((R p R c ), Q p , c,c ) ∈ R α part,conv . (12) Let K = [0 H c,c / n described by R α part,out partially meets the boundary of the capacity region of BC(G α , K, P p + αP c ), then the boundary of R α part,out partially meets the boundary of the rate region described by R α,Σz out in (8) for some Σ z .We formally tate the result in Theorem 3.3.For notational convenience, we will denote the capacity region of BC(G α , K, P p + αP c ) by C α BC . Theorem 3.3: Let µ ≥ 1 and α > 0. If max (Rp,Rc)∈R α part,out µR p + R c = max (Rp,Rc)∈ α BC µR p + R c ,(13) then, we have max (Rp,R e a point on the boundary of the capacity region C MCC .Then, there exists a µ ≥ 0 such that ( Rp , he achievable region given by R in , then (R p , R c ) lies on the boundary of R α part,out for some α > 0. That is, the theorem describes conditions of optimality of the achievable region R in . Theorem 3.4: For any µ > 0, max (Rp,Rc)∈Rin µR p + R c = inf α>0 max (Rp,Rc)∈R α part,out µR p + R c . Also, there exists α * ∈ (0, ∞), such that for any µ ≥ 1, (R p,µ , R c,µ ) = arg max (Rp,Rc)∈Rin µR p + R V. ACHIEVABLE REGION Proof of Theorem 3.1 : In this section, we show that the rate region R in given by ( 4) is achievable on the MCC. 
Encoding rule for Licensed user (E n p ) : For every message m p ∈ {1, . . ., 2 nRp }, the licens d encoder generates a n length codeword X p n (m p ), according to the distribution p(X p n ) = Π n i=1 p(X p ( nd Tr(Σ p ) ≤ P p . Encoding rule for the cognitive user (E n c ): The cognitive encoder acts in two stages.For every message pair (m p , m c ), the cognitive encoder first generates a codeword X c,p n (m p , m c ) for the primary message m p according to Π n i=1 p the joint distribution of (X p (i), X c,p (i)) is given by p(X p (i), X c,p (i)) ∽ N 0, Σ p Q Q † Σ c,c .(15) Here, Q denotes the correlation between X p (i) and X c,p (i). In the second stage, the cognitive encoder generates X c,c n which ncodes message m c .The codeword X c,c n is generated using Costa precoding [21] by treating p,p X p n +H c,c X c,p n as non causally known interference. A characteristic feature of Costa's precoding is that X c,c n is independen of X c,p n , and X c,c n is distributed as Π n i=1 p(X c,c (i)), where X c,c (i) ∽ N (0, Σ c,c ). Note that the codeword X c,p n is used to convey message m p to the licensed receiver and the codeword X c,c n is used to convey message m c to the cognitive receiver.The two code ords X c,p n and X c,c n are superimposed to form the cognitive codeword X c n = X c,p n + X c,c n . It is clear that X c n is distributed as Π n i=1 p(X c (i)), X c (i) ∽ N (0, Σ c ), where Σ c = Σ c,p + Σ c,c . The covariance matrices 0, Σ c,c 0, Tr(Σ c ) ≤ P c . Decoding rule for the licensed receiver (D n p ) : The licensed recei ) + Z p n . It treats H p,p X p n + H c,p X cY p n = GX p,net + H c,p X c,c n + Z p n .(16) The covariance matrix of X p,net is denoted by Σ p,net = Σ p Q Q † Σ c,p , where Q = E[X p X † c,p ]. In this setup, we use steps identical to that used for MIMO channel with colored noise in [20, Section 9.5] to show that, for any ǫ > 0, there exists a block length n 1 so that for any n ≥ n 1 , the licensed deco or < ǫ if R p ≤ log I + GΣ p,net G † + H c,p Σ c,c H † c,p − log I + H c,p Σ c,c H † c,p .(17) Decoding rule for the cognitive user (D n c ) : The cognitive decoder is the Costa decoder (with the knowledge of the encoder, E n c ). The cognitive receiver rece qns (3) to (7) in [21].We get that, for any ǫ 2 > 0, there exists n 2 such that for n ≥ n 2 , the cognitive decoder can recover the message m c with probability of error < ǫ 2 if R c ≤ log I + H c,c Σ c,c H † c,c .(18) Note that the achievable scheme holds for all possible covariance matrices Σ p , Σ c,p , Σ c,c that are positive semidefinite , Tr(Σ c,p + Σ c,c ) ≤ P c .Hence, R in , whic (4), is achievable for any code length n ≥ max(n 1 , n 2 ). V. 
OUTER BOUND ON THE CAPACITY REGION In this section, we prove that the rate region described by R α,Σz out is an outer bound on the capacity region of the Gaussian MIMO cognitive channel.The proof proceeds the channel at the previous stage.At the final stage, we obtain a physically degraded broadcast channel.The capacity region of this channel is now known [17][18] [19] and is used as the outer bound for the capacity region of the MIMO cognitive channel.Figure 2 Proof : Let (R p , R c ) be a rate pair that is achievable on the MCC.That is, for all ǫ 1 , ǫ 2 > 0, there exists a n and a sequence of encoder decoder pairs at the licensed and cognitive transmitter and receiver (E n p : m p → X p n , D n p : Y p n → mp , E n c : (m p , m c ) → X c n , D n c : Y c n → mc ) such that the codewords X p n and X c n satisfy the power constraints given by ( 2) and the probability of decoding error is small (P r(m p = mp ) ≤ ǫ 1 , P r(m c = mc ) ≤ ǫ 2 ).We use the following encoder decoder pairs at the licensed and cognitive transmitters and receivers of the scaled MIMO cognitive channel. E n p : m p → , m c ) → √ αX c n , D n c : Y c n → c . It follows that using these encoder and decoder pairs, the licensed and cognitive codewords satisfy the new power constraints of P p and αP c respectively.Also, the system equation is the same as that of the MCC and P r(m p = mp ) ≤ ǫ 1 and P r(m c = mc ) ≤ ǫ 2 .Hence, the rate pair (R p , R c ) is achievable on the scaled MIMO cognitive channel.Hence, the capacity region of the SMCC is a superset of the capacity region of the MCC. Similarly, we can also establish this in the other direction, namely we can treat the MCC as the scaled version of the SMCC (scaling by 1/α).Therefore, it can be shown that the capacity region of the MCC is a superset of the capacity region of the SMCC. Hence, the capacity region of the MCC is equal to the capacity region of the SMCC. Transformation 2 (scaled MIMO cognitive channel (SMCC) → scaled MIMO cognitive channel A (SMCCA)) : The scaled MIMO cognitive channel A (SMCCA) is described in Figure 2c and Figure 4.In this transformation, we provide a modified version of Y p n , which is Ŷn p to the cognitiv robability distribution as that of Z p n (i.e., complex Gaussian with zero mean and identity covariance matrix), but is permitted e assume that the joint probability distribution of ( Ẑp (i), Z c (i)) is given by p( Ẑp (i), Z c (i)) = N (0, Σ z ),(19) where Σ z has the form g all ǫ 1 , ǫ 2 > 0, there exists a n and a sequence of encoder decoder pairs at the licensed and cognitive transmitter and receiver (E n p : m p → X p n , D n p : Y p n → mp , E n c : (m p , m c ) → X c n , D n c : Y c n → mc ) such that the codewords X p n and X c n satisfy the power constraints and the probability of decoding error is small (P r(m p = mp ) ≤ ǫ 1 , P r(m c = mc ) ≤ ǫ 2 ).In the SMCCA, we can use the same encoder decod r pair E n p and D n p at the licensed transmitter and receiver to achieve a rate R p with probability of decoding error < ǫ 1 .Also, by ignoring the received vector Ŷn p at the cognitive receiver, we can use E n c and D n c at the cognitive transmitters nd receivers to achieve a rate R c with the decoding probability of error < ǫ 2 .Hence, the rate pair (R p , R c ) is achievable on the scaled MIMO cognitive channel A (SMCCA).Therefore, the capacity region of the SMCCA is a superset of the capacity region of the SMCC. 
Transformation 3 (scaled MIMO cognitive channel A (SM-CCA) → scaled MIMO cognitive channel B (SMCCB) ) : The scaled MIMO cognitive channel (B) is described in Figure 2d and Figure 5.The channel matrix from the licensed transmitter to the cognitive receiver is modified from K 1 = H p,p H p,ctoK 1 = H p,p0 . Hence, the received vector at the cognitive receiver is given by Ŷn p Y c n where Y c n = Hc,c √ α X c n + Z c tion is to remove the Proof : Let the ), D n c (δ 2 /2 ) to obtain m c with probability of error ≤ δ 2 /2.Clearly, the probability of error in recovering m c is less than δ 2 .Hence, the rate pair (R p , R c ) is achievable on SMCCB.Therefore, the capacity region of SMCCB is a superset of the capacity region of SMCCA. Let the rate pair (R p , R c ) be achievable on SMCCB.Then, for every ǫ 1 , ǫ 2 > 0, there exists encoder-decoder pair for the licensed user (E n p (ǫ 1 ), D n p (ǫ 1 )) and for the cognitive user (E n c (ǫ 2 ), D n c (ǫ 2 )) such that the probability of decoding error is less than ǫ 1 and ǫ 2 respectively for the licensed and cognitive user.Let δ 1 , δ 2 > 0. In SMCCA, the licensed user can employ E n p (min(δ Proof : Let the rate pair (R p , R c ) be achievable on the SMCCB.In the SMBCA, using no collaboration between the two transmitters and using separate power constraints of P p and αP c respectively, we reduce the SMBCA to the SMCCB.Hence, the rate pair (R p , R c ) is achievable on the SMBCA.Therefore, the capacity region of the SMBCA is a superset of the capacity region of the SMCCB. We have showed that for any α > 0, C MCC = C SMCC ⊆ C SMCCA = C SMCCB ⊆ C SMBCA . Hence, the capacity region of the scaled MIMO broadcast channel A (SMBCA) is a superset of the capacity region of the MIMO cognitive channel (MCC). Proof of Theorem 3.2 : In the SMB note the covariance matrix of the codeword for the licensed user an itive user.The SMBCA is a physically degraded broadcast channel.Hence, the capacity region of the SMBCA (as given by [17]) denoted by C SMBCA is the closure of the convex hull of the set of rate pairs described by                        (R p , R c ) : R p ≥ 0, R c ≥ 0 R p ≤ log I + G α Q p G † α + G α Q c G † α − log I + G α Q c G † α R c ≤ log Σ z + KQ c K † − log |Σ z | ∀Q p 0, Q c 0 Tr(Q p ) + Tr(Q c ) ≤ P p + αP c                        . (20) Also, this is the outer bound of the MCC.Hence, R α,Σz out described by ( 8) is an outer bound on the capacity region of the MCC.Hence, C MCC ⊆ R α,Σz out .Also, C MCC ⊆ R out , where R out is described in (9).Proof : Let the rate pair (R p , R c ) be achievable on the SMBC.That is, for all ǫ 1 , ǫ 2 > 0, there exists a n and a sequence of encoder decoder pairs at the transmitter and the two receivers (E n : (m p , m c ) → X n , D n p : Y p n → mp , D n c : Y c n → mc ) such that the codeword X n satisfies the power constraint of P p + αP c and the probability of decoding error is small (P r(m p = mp ) ≤ ǫ 1 , P r(m c = mc ) ≤ ǫ 2 ). In the SMBCA, the transm tter and the receivers use the same coding strategy.The licensed receiver can decode message m p at a rate R p .The cognitive receiver can ignore Ŷn p and use just Y c n to decode message m c at a rate R c .Hence, the rate pair (R p , R c ) is achievable in the SMBCA.Hence, the capacity region of the SMBCA is in general a superset of the capacity region of the SMBC. We describe one more lemma whose result will be used in the proof of Theorem (3.3). 
Lemma 5.6 ([23]): Let C SMBC denote the capacity region of the scaled MIMO broadcast channel described in Figure 2f. Then, for any µ ≥ 1, sup (Rp,Rc)∈CSMBC µR p + R c = inf Σz sup (Rp,Rc)∈CSMBCA µR p + R c . The proof is described in [23, Section 5.1] and is omitted here. We now give the proof for Theorem (3.3). Proof of Theorem 3.3 : It was shown in [17] that Gaussian codebooks (i.e., codebooks generated using i.i.d.realizations of an appropriate Gaussian random variable) achieve the capacity region for the MIMO broadc st channel.In SMBC, let Q p denote the covariance of codeword X n for the licensed user and Q c denote the covariance matrix for the cognitive user.The covariance matrices satisfy the joint power constraint Tr(Q p + Q c ) ≤ P p + αP c . Let R α SMBC,1 denote the closure of the convex hull of the set of rate pairs described by                      (R p , R c ) : R p ≥ 0, R c ≥ 0 R p ≤ log I + G α Q p G † α + G α Q c G † α − log I + G α Q c G † α R c ≤ log I + KQ c K † ∀Q p 0, Q c 0 Tr(Q p ) + Tr(Q c ) ≤ P p + αP c                      .(21) Similarly, let R α SMBC,2 denote the closure of the convex hull of the set of rate pairs described by                      (R p , R c ) : R p ≥ 0, KQ p K † + KQ c K † − log I + KQ p K † ∀Q p 0, Q c 0,Tr(Q p ) + Tr(Q c ) ≤ P p + αP c                      . (22) The capacity region of SMBC, C SMBC is the closure of the convex hull of R α SMBC,1 ∪ R α SMBC, (Σ p ) − P p ) = λ 2 (Tr(Σ c,p + Σ c,c ) − P c ) = 0. Hence, in all the cases, the complementary slackness conditions are satisfied.Hence, the optimal solution of the optimization problem (28) satisfy the power constraints and the objective function reduces to that of optimization problem (25).Hence, both the optimization problems have the same optimal values.That is, M = U . Next, we find the optimum value of µR p rt,out described by (12).This is done by solving the following optimization problem: sup ((Rp,Rc),Qp,Σc,c) µR p + R c(29) such that ((R p , R c ), Q p , Σ c,c ) ∈ R α part,conv,rate Tr(Σ c,c ) + Tr(Q p ) ≤ αP c + P p , where R α par ,conv,rate is the set of quadruples ((R p , R c ), Q p , Σ c,c ) described by       α>0 N (α).(31) We show in Lemma 6.2 that α * ∈ (0, ∞) exists.Then, N is given by the optimum value of the following inf sup optimization p sup ((Rp,Rc),Qp,Σc,c) µR p + R c (32) such that ((R p , R c ), c ) + Tr(Q p ) ≤ αP c + P p . The infimum constraint α > 0 is not a compact set.We modify the constraint on α to α ∈ R + ∪ cation can be found in [24,Section 2.8].The new space α ∈ R + omes N 1 = inf α∈R + ∪{0,∞} sup ((Rp,Rc),Qp,Σc,c) µR p + R c (33) such that ((R p , R c , Q p , Σ c,c ) ∈ R α part,conv,rate Tr(Σ c,c ) + Tr(Q p ) ≤ αP c + P p . We show that adding the two points 0 and ∞ to the constraint set on α does not change the optimum valu of the optimization problem.This result is formally stated and proved in the following lemma. Lemma 6.2:The optimum value of the optimization problem given by (32), N is equal to the optimum value of the optimization problem descr bed by (33), N 1 .That is, N = N 1 . Proof : For any α ∈ R + ∪ {0, ∞}, we let h(α) to denote the value of the inner sup problem.That is, Σc,c) h(α) = sup ((Rp,Rc),Qp,µR p + R c (34) such that ((R p , R c ), Q p , Σ c,c ) ∈ R α part,conv,rate Tr(Σ c,c ) + Tr(Q p ) ≤ P p + αP c . We show that lim inf α→0 h(α) = lim inf α→∞ h(α) = ∞. 
Letting α → 0, we put all the power in Σ c,c .That is, we choose Σ p = 0, Σ c,p = 0, Q = 0 and Σ c,c = Pp+αPc nc,t I nc,t .Also, we take R p = 0 and R c = log I + 1 α P p + αP c n c,t H c,c H † c,c . It follows from (30) that ((R p , R c ), Q p , Σ c,c ) ∈ R α part,conv,rate . Also, Tr(Q p ) + Tr(Σ c,c ) = P p + αP c . Hence, ((R p , R c ), Q p , Σ c,c ) satisfy all the necessary constra h(α).That is, lim inf α→0 h(α) ≥ lim inf α→0 log I + 1 α P p + αP c n c,t H c,c H † c,c = ∞.(35) Next, we look at the situation when α → ∞.In this case, we put all the power in Σ p .That is, we choose Σ p = Pp+αPc np,t I np,t , Σ c,p = 0, Σ c,c = 0 and Q = 0. We also choose R c =(R p , R c , Σ p , Σ c,p , Σ c,c , λ, α) a d g 1 (R p , R c , Σ p , Σ c,p , Σ c,c , α) as follows L 1 (R p , R c , Σ p , Σ c,p , Σ c,c , λ, α) = µR p + R c − λ Tr(Σ p ) + αTr(Σ c,p ) + αTr(Σ c,c ) − P p − αP c ,(43)g 1 (R p , R , Σ c,p , Σ c,c , λ, α). (44) We define the following optimization problem V = sup (Rp,Rc,Σp,Σc,p,Q,Σc,c) inf α g 1 (R p ), Σ p , Σ c,p , Q, Σ c,c ) ∈ R part,conv,rate α ∈ R + ∪ {0, ∞}. Lemma 6.4:The optimum value of optimization problem (42), N is equal to the optimum value of the optimization problem (45), V. Proof : The proof of the lemma is along the same lines as the proof of Lemma 6.1.We show that for any set of covariance matrices Σ p , c,p and Σ c,c that do not satisfy the power constraint Tr(Σ p ) + αTr(Σ c,p ) + αTr(Σ c,c ) ≤ P p + αP c , g 1 (R p , R c , Σ p , Σ c,p , Σ c,c , α) = −∞. This is because, Tr(Σ p ) + α ,p , Σ c,c , α) to −∞. Hence, the outer supremization problem will ensure that the power con Σ p , Σ c,p , Σ c,c , 0, α). Hence, λ will take the value zero.When the power constrai , then Tr(Σ p ) + αTr(Σ c,p ) + αTr(Σ c,c ) − P p − αP c = 0.Then, λ will take some non negative real number.Hence, the complementar zation problem satisfy the power cons raint and the objective function reduces to that of (42).It follows that, the optimum value of the optimization problem (42), N is the same as the optimum value of the optimization problem (45), V . Next, we show that the optimum value of the optimization problem (28), U is an upper bound on the optimal value of the optimization problem (45), V .Lemma 6.5:The optimal value of (28), U is an upper bound on the optimal va ue of (42), V . Proof : Both the optimization problems are sup min prob , α) = L(R p , R c , Σ p , Σ c,p , Σ ,c , λ 1 , λ 2 ). Hence, for any ((R p , R c ), Σ p , Σ c,p , Σ c,c ), inf λ≥0,α∈R + ∪{0,∞} L 1 (R p , R c , Σ p , Σ c,p , Σ c,c , λ, α) ≤ inf λ1≥0,λ2≥0 L(R p , R c , Σ p , Σ c,p , Σ c,c , λ 1 , λ 2 ) of Theorem 3.4 : Let µ ≥ 1.The proof of the theorem fo lows directly from Lemmas 6.1, 6.4 and 6.5.From Lemma 6.1, we have that the optimum value of the optimization problem (25), M equals the optimum value of optimization pr have that the optimum value of optimization problem (42), N equals the optimum value of the optimization problem (45), V .M is the solution of the optimum µR p + R c over the achievable region and N is the solution of the optimum µR p +R c over R α part,out described in (12).Hence if the condition given by ( 13 nce, we have that the optimal value of the original optimization problem (25), M is equal to the optimal value of the optimization problem described by (42), N .Hence, the achievable region R in is µ-sum optimal. VII. 
NUMERICAL RESULTS n this section, we provide some numerical results on the capacity region of the MIMO cognitive channel.We c e one antenna each, and the licensed and cognitive receivers have one and two antennas respectively.We assume that the channel coefficients are real and als restrict ourself to real inputs and outputs.We generate the channel values randomly H p,p = 1.4435,H p,c = −0.351 ,c = 0.9409 −0.9921 . We assume a power constraint of 5 at the licensed and cognitive g all the power to support the licensed user.Note that the maximum value of R p in the set described by R α part,out is an upper bound on the er has a power constraint of P p and the cognitive transmitter has a power constraint of P c .Applying this to our exampl channel, we have G = 1.4435 0.799 .The optimum covariance matrix is of the form Σ p,net = 5 5ρ 5ρ 5 , where ρ is the correlation be itters.Therefore, the rate achieved by the licensed user is R p (ρ) = 1 2 log(1 + GΣ p,net G † ). The maximum rate is att ined at ρ = 1 and the maximum value of R p is 2.3542. Maximizing R p over R α part,out : For a given α, this reduces to a single user MIMO channel with G α = H p,p H c,p / √ α and a su d individual power constraints at the license or the maximum value of R p over R α part,out has a sum power constraint.This is a conventional MIMO channel and the optimum covariance matrix is obtained by water-filling.For a given α, the best R p is got by max R p (α) = VIII. CONCLUSIONS In this paper, we derived an achievable region, R in given by ( 4) and an outer bound, R α,Σz out given by ( 8) for the MIMO cognitive channel.We describe conditions when the achievable region is µ-sum optimal for any µ ≥ 1.In particular, for any µ ≥ 1, there exists α * ∈ (0, ∞), such that if the region given by R α * part,out optimizes the µ− sum rate of the SMBC (for that particular α * ), then the achievable reg on achieves the µ-sum capacity of the MCC. ,p n as the valid codeword and H c,p X c,c n + Z p n as Gaussian noise.Taking G = [H p,p H c,p ] and X p,net n = depicts the various channel configurations considered, and the system equations of all the configurations.Ẑn p shown in Figures 2c, 2d and 2e has the same distribution as Z p n , but has an arbitrary correlation with Z c n .Before proving Theorem 3.2, we prove the following lemmas.Transformation 1 (MIMO Cognitive Channel (MCC) → Scaled MIMO cognitive channel) : The scaled MIMO cognitive channel is defined in Figure 2b and Figure 3.In this transformation, the channel matrices H c,p and H c,c are scaled by 1/ √ α.Also, the power constraint at the cognitive transmitter is changed to αP c .Lemma 5.1: The capacity region of the MIMO ognitive channel is equal to the capacity region of the scaled MIMO cognitive channel (SMCC) for any 0 < α < ∞. Lemma 5 . 2 : 52 The capacity region of the scaled MIMO cognitive channel A (SMCCA) is a superset of the capacity region of the caled MIMO cognitive channel (SMCC). Fig. 2 .Fig. 3 .Fig. 4 .Lemma 5 . 3 : 23453 Figure 2f Transformation 5 (Fig. 6 . 56 Fig. 6.Capac apacity Region andλ 2 = 0 to drive g(R p , R c , Σ p , Σ c,p , Σ c,c ) to −∞. • Tr(Σ p) ≤ P p and Tr(Σ c,p ) + Tr(Σ ,c ) > P c : In this ca e, λ 1 = 0 and λ 2 will take an arbitrarily large value to drive g(R p , R c , Σ p , Σ c,p , Σ c,c ) to −∞. • Tr(Σ p ) > P p and Tr(Σ c,p ) + Tr(Σ c,c ) > P c : In this case, λ 1 and λ 2 will take arbitrarily large values to drive g(R p , R c , Σ p , Σ c,p , Σ c,c ) to −∞. Fig. 8 . 8 Fig. 8. 
(Fig. 8 caption: plot of the achievable region R in and the partial outer bounds R α part,out ; Figure 8 shows how R α part,out intersects with R in at different points for different values of α.)

(Displaced text fragments: the capacity region of the Gaussian MIMO cognitive channel is the set of all achievable rate pairs (R p , R c ) and is denoted by C MCC ; a rate pair is achievable if there exists a sequence of encoding functions for the licensed and cognitive users, E n p : {1, . . . , 2 nRp } → X p n and E n c : {1, . . . , 2 nRp } × {1, . . . , 2 nRc } → X c n , such that the codewords satisfy the power constraints.)
2014-10-01T00:00:00.000Z
2007-09-24T00:00:00.000
{ "year": 2007, "sha1": "23c679a7c4be383d55984bfd77fa02780111b57f", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0711.4792v2.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "1b16a84d89dc27a7104f8170e10302780f75fea3", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
4603927
pes2o/s2orc
v3-fos-license
Analyzing and Visualizing State Sequences in R with TraMineR This article describes the many capabilities offered by the TraMineR toolbox for categorical sequence data. It focuses more specifically on the analysis and rendering of state sequences. Addressed features include the description of sets of sequences by means of transversal aggregated views, the computation of longitudinal characteristics of individual sequences and the measure of pairwise dissimilarities. Special emphasis is put on the multiple ways of visualizing sequences. The core element of the package is the state sequence object in which we store the set of sequences together with attributes such as the alphabet, state labels and the color palette. The functions can then easily retrieve this information to ensure presentation homogeneity across all printed and graphical displays. The article also demonstrates how TraMineR ’s outcomes give access to advanced analyses such as clustering and statistical modeling of sequence data. Introduction This article is concerned with categorical sequence data and more specifically with state sequences, where the position of each successive state receives a meaningful interpretation in terms of age, date, or more generally of elapsed time or distance from the beginning of the sequence.Its aim is to examine a series of questions about such state sequences and to present the various solutions that we implemented in the R (R Development Core Team 2011) package TraMineR for answering them. The addressed methods are for sets of sequences and most of them are holistic (Billari 2001b) in that they consider each sequence as a whole; i.e., as a conceptual unit.The discussion is mainly oriented towards the analysis of sequences describing individual life courses.Nevertheless, most of the discussed concepts and tools should be applicable in other domains such as text, biology, quality control or web logs analysis, to cite just a few. Sequences are complex objects, and we need special tools for describing and displaying them.We consider, therefore, questions regarding the exploration and description of sets of sequences such as: • Which characteristics of sequences are we interested in? • What kind of indicators can we compute for a sequence set? • What are suited plots for rendering sequences? • How can we measure similarity between sequences?With a more analytical or explanatory concern, we also consider issues such as: • How can we identify groups with similar patterns and build typologies of sequences? • How can we analyze the relationship of sequences with covariates? In the social sciences, state sequences are of interest for studying life trajectories such as occupational histories, professional careers or cohabitational life courses.Some of the typical questions arising in this area are: • Do life courses obey some social norm?Which are the standard trajectories?What kind of departures do we observe from these standards ?How do life course patterns evolve over time ? • Why are some people more at risk to follow a chaotic trajectory or to stay stuck in a state?How does the trajectory complexity evolve across birth cohorts? • How is the life trajectory related to sex, social origin and other cultural factors? Empirical answers to such questions require us to consider collections of life sequences, to examine them from both a transversal and a longitudinal perspective, and to study their relationships with covariates. 
The primary objective of sequence methods is then to extract simplified workable information from sequential data sets; that is, to efficiently summarize and render these sets and to categorize the sequential patterns into a limited number of groups.This is essentially an exploratory task that consists of computing summary indicators, as well as sorting, grouping and comparing sequences.The resulting groups and real-value indicators may then be submitted to classical inferential methods and serve, for instance, as response variables or explanatory factors for regression-like models. A common approach for categorizing patterns consists of computing pairwise distances between them by means of sequence alignment algorithms (such as optimal matching) or other suitable metrics and using this information for clustering the sequences.This method has been applied to various data since the pioneering work of Abbott and Forrest (1986).A review can be found in Abbott and Tsay (2000).The expected outcome of such a strategy is a typology, with each cluster grouping cases with similar trajectories.Through binary logistic regression or classification trees, for example, we can then study how each cluster membership is related to covariates. A more recent complementary approach considered in the literature (Elzinga and Liefbroer 2007;Widmer and Ritschard 2009) is to focus on sequence indicators measuring for instance the longitudinal diversity and complexity of the sequences and to analyze them by means of conventional statistical tools for real-value variables. With a somewhat more aggregated point of view, an approach considered, for instance, by Billari (2001a) consists of looking at the sequence of transversal characteristics measured at each position, such as the diversity of states observed at each given age.Comparing the evolution of such transversal characteristics for different groups defined by birth cohorts or sex, for instance, provides instructive insights (Widmer and Ritschard 2009).However, when working with transversal indicators we lose the specific information on individual follow-ups. We may indeed imagine many other ways of looking at categorical sequences such as correspondence analysis of the states (Deville and Saporta 1983) or advanced Markov modeling (Berchtold and Raftery 2002); i.e., the study of how the probability of a given state depends on the previously observed states.Transforming state sequences into event sequences and resorting to tools for mining frequent subsequences permits us to gain interesting knowledge about the typical sequencing of states or events (Billari, Frnkranz, and Prskawetz 2006;Ritschard, Gabadinho, Müller, and Studer 2008).A very common approach in the life course literature is event history or survival analysis (Mayer and Tuma 1990;Yamaguchi 1991;Hosmer and Lemeshow 1999;Blossfeld, Golsch, and Rohwer 2007) which focuses on the occurrence of a specific event or somewhat equivalently on the duration-time to event-until a given state transition.Though not addressed here, all these techniques may usefully complement the considered state sequence techniques. 
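The two strategies sketched above, clustering pairwise dissimilarities into a typology and then relating cluster membership to covariates on the one hand, and modeling real-valued sequence indicators directly on the other, can both be assembled from TraMineR output together with base R tools. The following console sketch is purely illustrative and rests on several assumptions: it uses the mvad example data shipped with the package (introduced later in the article), hypothetical covariate column names (male, funemp, gcse5eq), an arbitrary choice of four clusters, and one common default for the optimal-matching costs.

R> library("TraMineR")
R> data(mvad)                        ## example data shipped with the package
R> mvad.seq <- seqdef(mvad, 17:86)   ## monthly activity statuses
R> ## (a) optimal-matching distances, clustering, then a logistic regression
R> dist.om <- seqdist(mvad.seq, method = "OM", indel = 1, sm = "TRATE")
R> cluster4 <- cutree(hclust(as.dist(dist.om), method = "ward.D"), k = 4)
R> mvad$cl1 <- cluster4 == 1         ## membership in one (arbitrary) cluster
R> summary(glm(cl1 ~ male + funemp + gcse5eq, data = mvad, family = binomial))
R> ## (b) a real-valued longitudinal indicator analysed with a linear model
R> mvad$entropy <- as.numeric(seqient(mvad.seq))
R> summary(lm(entropy ~ male + funemp + gcse5eq, data = mvad))

Any other clustering routine or regression model could be substituted at the corresponding step; the point is only that the dissimilarity matrix and the indicator vector returned by TraMineR plug directly into standard R modeling functions.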
The TraMineR R package is available from the Comprehensive R Archive Network at http: //CRAN.R-project.org/package=TraMineR and offers many analysis and visualization tools for either state or event sequences.These tools include already known methods, as well as new developments.We focus in this article on the functions intended for state sequence analysis.The paper is organized as follows.In Section 2, we introduce the TraMineR library, describe the mvad data set used for illustration and give a first example of analysis that can be run with TraMineR.Section 3 defines the different forms of sequential data that are supported by the package.In Section 4, we introduce the central concept of state sequence object.Section 5 introduces two basic visualization tools and describes the general plotting principles used by the package.Section 6 is devoted to the summarization and visual rendering of sets of sequences, while Section 7 is concerned with individual sequence indicators.In Section 8, we present the metrics that were implemented for measuring pairwise dissimilarities between sequences.In Section 9, we illustrate how dissimilarities measures can be used for further statistical analysis.Finally, we make some concluding remarks in Section 10. The TraMineR R package TraMineR (Gabadinho, Ritschard, Studer, and Müller 2009) is a package for mining and visualizing sequences of categorical data describing life courses in R (R Development Core Team 2011), the name TraMineR being a contraction of Life Trajectory Miner for R. It puts together most of the features proposed separately by other software for sequential data and offers many original tools for managing, analyzing and rendering categorical sequences. They all compute the optimal-matching edit distance between pairs of sequences and each of them offers specific useful facilities for describing sets of sequences.TraMineR is, to our knowledge, the first such toolbox for the free R statistical and graphical environment.Its salient characteristics are: • R and TraMineR are free and open source; • Since TraMineR is developed in R, it takes advantage of many already optimized procedures of R as well as of its powerful graphical capabilities; • R runs under several OS including Linux, MacOS X, Unix and Windows; any R script with TraMineR functions runs unmodified under all operating systems; • Specific TraMineR functions can be combined in the same script with any of the numerous basic statistical procedures of R as well as with those of any other R-package. TraMineR is readily installed from within R via install.packages("TraMineR").It features a unique set of procedures for analyzing and visualizing state sequence data, such as: • Handling a large number of state sequence representations with simple functions for transforming to and from different formats; • A whole series of easy to use plot functions for rendering sets of sequences (density plot, frequency plot, index plot, representative sequence plot and more); • Individual longitudinal characteristics of sequences (length, time in each state, longitudinal entropy, complexity, turbulence and more); • Sequence of transversal characteristics by position (transversal state distribution, transversal entropy, modal state); • Other aggregated characteristics (transition rates, average duration in each state, sequence frequency); • A choice of metrics for evaluating distances between sequences. 
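These facilities can be exercised with one-line calls. The sketch below is a hypothetical continuation of the previous example (it assumes a sequence object mvad.seq built as above); the graphical option shown is arbitrary.

R> ## state distribution plot, plot of the most frequent sequences,
R> ## and index plot of the first sequences
R> seqdplot(mvad.seq, border = NA)
R> seqfplot(mvad.seq)
R> seqiplot(mvad.seq)
R> ## transversal state distributions by position and mean time in each state
R> seqstatd(mvad.seq)
R> seqmeant(mvad.seq)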
Table 1 gives an overview of the key functions for analyzing state sequences that will be described in the remainder of the article. It is worth mentioning that the package also provides tools for event sequences, such as finding the most frequent and most discriminating subsequences and extracting association rules between subsequences, as well as tools for ANOVA-like analyses of sequences (Studer, Ritschard, Gabadinho, and Müller 2011), which will not be discussed here. See the User's Guide (Gabadinho et al. 2009) for a detailed description of the package usage.

To continue, we examine how the diversity of states within each sequence is related to sex, to whether the father is unemployed, and to whether the qualification grade at the end of compulsory school was good. We compute the longitudinal entropy and regress it on the covariates:

R> mvad.ent <- seqient(mvad.seq)
R> summary(lm(mvad.ent ~ male + funemp + gcse5eq, data = mvad))

Results show that males experience less diverse states and that youngsters with good grades at the end of compulsory school experience more diverse states. Whether the father is unemployed does not have a significant effect.

Sequence representations

State sequences can be represented in many different ways, depending on the data source and on how the information is organized. Data organization and conversion between formats are discussed in detail in Ritschard, Gabadinho, Studer, and Müller (2009), where an ontology of longitudinal data presentations is given that may help identify the kind of data at hand. Here, we limit the discussion to the sequence data representations that TraMineR can handle and import. Those formats are listed in Table 4, together with the conversions that can currently be done with the provided seqformat() function.

State sequences

We consider sequences of discrete or categorical data. Formally, we define a state sequence of length ℓ as an ordered list of ℓ elements successively chosen from a finite set A of size a = |A| that is called the alphabet. A natural way of representing a sequence x is by listing the successive elements that form the sequence, x = (x_1, x_2, ..., x_ℓ), with x_j ∈ A. With reference to this expanded form of sequences, state sequences are characterized by two properties. Firstly, they are formed by elements that are states, i.e., something that can last, as opposed, for instance, to events that occur at given time points. Secondly, the position of each element conveys meaningful information in terms of age, date or, more generally, elapsed time or distance from the beginning of the sequence. Position indexes providing time information may be either absolute calendar values (day, year, month, ...) or relative process time (age, process duration, ...).

In TraMineR, the expanded form is called the STate-Sequence (STS) format. In this format, the successive states (statuses) of an individual are given either in consecutive columns or as a character string with states separated by a given symbol such as '-' or '/', the former being the default separator. Each position (column) is supposed to correspond to a predetermined time unit.

Other sequence representations

Sequence data can be represented in more compact ways than STS, essentially by giving only one of several same successive states. In that case, we have to explicitly stamp the successive distinct states with their starting position or duration. Table 4 displays the same example of two sequences in the different formats. The two considered sequences describe the family formation histories of two individuals, the states being single (S), married (M), married with children (MC) and divorced (D).
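Conversions between the formats of Table 4 are done with the seqformat() function mentioned above. As a hedged sketch, spell data (the SPELL form is described below) with hypothetical id, begin, end and status columns would be converted to the STS form along the following lines, with argument names as in the package documentation:

R> sts <- seqformat(spell.data, from = "SPELL", to = "STS",
+    id = "id", begin = "begin", end = "end", status = "status",
+    process = FALSE)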
A first efficient way of representing a state sequence is by listing the distinct successive states with their associated durations. We thus get a sequence of couples (x_j, t_j), where x_j is a state and t_j its duration, i.e., the number of times it is repeated. This is the State-Permanence-Sequence (SPS) format (Aassve, Billari, and Piccarreta 2007). By considering only the distinct successive states without their associated durations, we get the Distinct-Successive-States (DSS) sequence format. This DSS form holds the basic state sequencing information, but loses all time (t_j) and, more generally, alignment data.

In the SPELL format there is one line for each spell. Each spell is characterized by the state (supposed constant during the spell) and the spell start and end times. STS sequences can easily be derived from such a representation.

State sequence objects

The general philosophy of the library is to ensure that the various results and plots produced for a same set of sequence data use the same state labels and colors. Likewise, any information on case weights and possible missing information about some positions in sequences should be treated the same way throughout the analysis. To achieve this goal, the TraMineR functions for state sequence analysis require as an argument a state sequence object that includes both the sequential data and its attributes. Thus, the first step when using TraMineR for state sequence analysis is to create a state sequence object. This is done with the seqdef() function from data organized in either of the STS, SPS or SPELL forms described in the previous section. We show below how to create a state sequence object from the mvad data set introduced in Section 2. The main attributes are listed in Table 5, together with their default values and the dedicated functions to retrieve or set them.

Creating state sequence objects

In the mvad data set, the retained activity status variables are stored in columns 17 to 86. We display these statuses for the first six considered months (September 1993 to February 1994) of the first two records:

R> mvad[1:2, 17:22]

The default input format for the seqdef() function is STS, which is appropriate for the mvad data set. If the input data is in another format, it must be specified with the informat argument, and seqdef() will automatically make the required conversion.

Alphabet and state labels

The alphabet is the list of states allowed in the sequences. Both short and long labels of the states forming the alphabet are attached to the object. Long labels serve mainly for color legends in plots, while short state names are primarily used in printed outputs. Shorter names produce cleaner and shorter output when printing the sequences.

By default, the sorted list of the distinct states found in the input data (as returned by the seqstatl() function) defines the alphabet and is used as state names and labels. This can be changed with optional arguments, which is necessary, for example, when the alphabet contains states that do not appear in the retained sequences.

Below we specify short state names with the states argument and long state labels with labels. These arguments expect vectors of names or labels that are ordered conformably with the alphabet.

R> mvad.scode <- c("EM", "FE", "HE", "JL", "SC", "TR")
R> mvad.labels <- c("employment", "further education", "higher education",
+    "joblessness", "school", "training")
R> mvad.seq <- seqdef(mvad, 17:86, states = mvad.scode, labels = mvad.labels)

The alphabet argument can be used to change the order of the states, in which case the vectors passed with the states and labels arguments should conform to the newly defined order.

Other important attributes and properties

We briefly comment here upon some other important attributes that will be used in conjunction with the alphabet and state labels by TraMineR's functions.
State colors and position names

The sequence plot functions provided by the library need a distinct color for each state. A color palette is therefore attached to the sequence object. A default color palette from RColorBrewer (Neuwirth 2007) is automatically selected as long as the alphabet size does not exceed 12.

Position names, which serve mainly for labeling the ticks of the x-axis but are also useful for increasing the readability of tabulated output, are also an attribute of the object. If left unspecified, position names are taken from the corresponding column names of the original data frame. The interval between the x-axis tick marks is an additional attribute that can be set (xtstep argument) for optimizing the rendering.

Case weights

Survey data often come with case weights that account for the sampling scheme and unit nonresponses. Using such case weights is important to compensate for sampling bias and thus get results that are more realistic. When weights are attached to the state sequence object, the TraMineR functions that can handle weights automatically produce weighted results. To disable the use of weights, add the option weighted = FALSE to the function.

The weight variable in mvad contains case weights that account for the selective attrition during the survey, and we attach them to the sequence object as shown below. Unless otherwise specified, we will use this weighted sequence object from here on.

R> mvad.seq <- seqdef(mvad, 17:86, states = mvad.scode,
+    labels = mvad.labels, weights = mvad$weight)

Missing values

Missing values in the expanded (STS) form of a sequence occur, for example, when:

• Sequences do not start on the same date while using a calendar time axis;
• The follow-up time is shorter for some individuals than for others, yielding sequences that do not end at the same position;
• The observation at some positions is missing due to nonresponse, yielding internal gaps in the sequences.

The way missing values should be handled may be different for each of these situations. In the first case, we may want to explicitly maintain the starting missing values to preserve alignment across sequences, or possibly left-align the sequences by switching to a process time axis. In the second case, the ending missing terms could just be ignored.

To allow such differentiated treatments, TraMineR distinguishes left, in-between and right missing values. We can specify how each of the missing types should be encoded with the left, gaps and right arguments. By default, gaps and left-missing states are coded as explicit missing values, while all missing values encountered after the last valid (rightmost) state in a sequence are considered void elements; i.e., the sequence is considered to end after the last valid state.

The specific treatment of each type of missing value will depend upon whether the analysis method envisaged supports missing values and, if yes, which kind it supports. Most of the proposed functions, such as seqdist() for computing distances between sequences, have optional arguments for dealing with missing states.

Subsets and attribute inheritance

Subsets of sequence objects can be defined by specifying row and column indexes (or names), as for R matrices and data frames. Every subset of a state sequence object inherits its 'parent' attributes. The alphabet and color palette, for instance, remain the same for all subsets. This is of particular importance when comparing graphics that render different subsets of a same sequence object.
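A brief sketch of this inheritance, using alphabet(), one of the attribute accessors referred to with Table 5:

R> mvad.sub <- mvad.seq[1:5, 1:12]
R> alphabet(mvad.sub)    # same six-state alphabet as the parent mvad.seq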
Visualizing individual state sequences

State sequence visualization is one of the most important features of the package. This section introduces two basic plotting functions, namely the index plot, intended to render a set or subset of individual state sequences, and the frequency plot, which visualizes them according to their frequencies. We also explain the common design of most of TraMineR's plotting functions.

Sequence index plots

A sequence index plot (Figure 2) individually renders the selected state sequences. Each of them is represented by horizontally stacked boxes that are colored according to the state at the successive positions. The resulting bars are put above each other to vertically align the positions. We thus visualize, for each case, the individual longitudinal succession of states as well as, through the length of each color segment, the duration spent in each successive state. The alignment also permits easy transversal comparisons at each position. The sequence index plot shown in Figure 2 was obtained with the command below:¹

R> seqiplot(mvad.seq, border = NA, with.legend = "right")

Since we have attached case weights to the mvad.seq sequence object, the width of the bar representing each sequence is proportional to its weight. This default behavior can be changed with the weighted = FALSE argument. The plotted sequences are selected with the idxs argument, by providing either a vector of indexes or 0 for requesting all the sequences. The default value is 1:10, and Figure 2 therefore displays only the first 10 sequences of mvad.seq.

The seqIplot() alias produces full index plots that display all the sequences in the set, without spaces between sequences and without borders around unit states. The usefulness of such plots has, for instance, been stressed by Scherer (2001) and Brzinsky-Fay et al. (2006). However, when the number of displayed sequences is large, they may produce cluttered pictures that are often hard to interpret.² We can partially overcome this drawback by sorting the sequences according to the values of a suitably chosen covariate, passed with the sortv argument. Good choices are, for instance, the distance to the most frequent sequence or the scores of a multidimensional scaling analysis³ of the dissimilarities between sequences (Figure 3). Both solutions suppose that we can compute such dissimilarities; this will be addressed in Section 8.

¹ seqiplot(), as most other plotting functions described in this paper, is just an alias for calling the generic seqplot() state sequence plot function with the appropriate type argument and suitable default option values. The border = NA option suppresses the border that surrounds, by default, each unit state in the sequence.
² When plotting several hundred sequences, saving index plots may also produce heavy files in vector formats such as PostScript and PDF; generating plots in bitmap formats such as PNG or JPEG is recommended in such cases.
³ The scores are obtained from the dissimilarity matrix with the cmdscale() function.
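A hedged sketch of such a sorted full index plot, using as sort variable the distances to the most frequent sequence (refseq = 0) under the LCS metric presented in Section 8:

R> d.ref <- seqdist(mvad.seq, method = "LCS", refseq = 0)
R> seqIplot(mvad.seq, sortv = d.ref)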
Sequence frequencies

The seqtab() function returns a table with the counts and percent frequencies of the sequences, sorted in decreasing order of their frequencies. In the next example, we request the four most frequent sequences of mvad.seq with idxs = 1:4. In the printed outcome, sequences are displayed in the shorter and more readable SPS format:

R> seqtab(mvad.seq, idxs = 1:4)

The most frequent sequence in the mvad.seq object is a spell of two years of school followed by 46 months of higher education. It accounts, however, for only 4.7% of the total weights of the 712 cases considered. The second most frequent sequence, which concerns 3.5% of the weighted individuals, is indeed very similar to the previous one.

Sequence frequency plots

A graphical view of the sequence frequency table, where bar widths are proportional to the frequencies, is obtained with the seqfplot() function. Figure 4 shows the plots of the weighted and unweighted frequencies⁴ obtained with:

R> seqfplot(mvad.seq, border = NA)
R> seqfplot(mvad.seq, weighted = FALSE, border = NA)

If we look at the unweighted results, the most frequent sequence is to stay employed during the entire follow-up period (be in state EM during 70 months). This sequence, which was the fourth most frequent in the weighted frequency table with 2.5% of the total weight, accounts for 7% of the 712 cases considered.

The probability for two individuals to follow exactly the same 70-month trajectory is small, yielding a large number of different patterns. The 10 most frequent sequences account for only about 20% of all the trajectories, which reflects this high diversity.

Reading and controlling state sequence plots

The way index plots render individual state sequences with horizontally stacked boxes is common to other functions of the library that visualize specific state sequences. The position in the sequence is read on the x-axis. The first value on this axis is the selected origin. The sequence is read from left to right, in the same way as printed outputs. Tick labels for the x-axis are retrieved, by default, from the plotted sequence object.

The values on the y-axis are the indexes of the plotted sequences. The index refers to the considered ranking of the sequences. For instance, in sequence index plots, the default order is that of the state sequence object, unless a specific sort variable is provided with the sortv argument. In sequence frequency plots, sequences are sorted according to their frequency in the data set, while in representative sequence plots (Section 9.1) sequences are sorted according to their representativeness score.

The indexes on the y-axis (and hence the sequences) are displayed bottom-up. Thus, when sequences are sorted, the first ranked one is at the bottom of the plot.⁵ This respects the usual standard for y-axes. It may, however, be confusing when compared with the corresponding printed outputs, where sequences are displayed top-down.

Other aspects of the graphic (title, font size, axes display, axis labels, state legend, ...) can be controlled with dedicated options described in detail in the reference manual. There is also an option to produce separate plots by levels of a covariate.

Computing and plotting overall and transversal statistics

We now turn to the facilities offered by TraMineR for visualizing and computing overall and transversal descriptive statistics of a set of sequences. The functions discussed here all require a state sequence object as main argument and admit a series of optional parameters.
We illustrate with the weighted mvad.seq sequence object created on page 11.

Overall statistical characteristics

We consider, first, global synthesized information that is based neither on individual longitudinal characteristics nor on transversal characteristics by position. More specifically, we focus on the overall state distribution and the transition rates between states.

Mean time spent in each state

A first piece of synthetic information is given by the mean (not necessarily consecutive) time spent in the different states, that is, the mean number of times each state is observed in a sequence. This characterizes the overall state distribution. As an example, we plot the mean times for two subsets defined by the funemp covariate, which indicates whether the respondent's father was unemployed at the time of the survey (Figure 5). The graphic with distinct plots by levels of the funemp covariate is obtained by passing funemp as group argument to the plotting function. This option is common to all the plotting functions presented in this article.

R> seqmtplot(mvad.seq, group = mvad$funemp, ylim = c(0, 30))

We can see that the mean time spent in joblessness and training is higher for interviewees with unemployed fathers, while the time they spent in school, further education and higher education is lower. Mean time values are obtained with the seqmeant() function.

Transition rates

Another interesting piece of information about a set of sequences is the transition rate between each couple of states (s_i, s_j), i.e., the probability to switch at a given position from state s_i to state s_j. Let n_t(s_i) be the number of sequences that do not end in t with state s_i at position t, and let n_{t,t+1}(s_i, s_j) be the number of sequences with state s_i at position t and state s_j at position t + 1. The transition rate p(s_j | s_i) between states s_i and s_j is obtained as

p(s_j \mid s_i) = \frac{\sum_{t=1}^{L-1} n_{t,t+1}(s_i, s_j)}{\sum_{t=1}^{L-1} n_t(s_i)},

with L the maximal observed sequence length.

The seqtrate() function returns the matrix of transition rates for the provided sequence object. By default, the rates are assumed position-independent, i.e., the same whatever t. The outcome is a single matrix where each row i gives the transition distribution from the originating state s_i in t to the states in t + 1; that is, each row total equals one. Hence, transition rates provide information about the most frequent state changes observed in the data, together with, on the diagonal, an assessment of the stability of each state. In the following example we compute the transition rate matrix for the mvad.seq sequence object:

R> mvad.trate <- seqtrate(mvad.seq)
R> round(mvad.trate, 2)

Time-varying transition rates can be obtained with the option time.varying = TRUE, in which case a 3-dimensional array with a distinct transition rate matrix for each of the positions t = 1, 2, ..., L − 1 is returned. The matrix for position t is computed by considering only the states at t and t + 1. The third dimension of the array corresponds to the position t index.
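A short sketch of this time-varying usage; the dimensions follow from the six-state alphabet and the 70 monthly positions of mvad.seq:

R> tr.t <- seqtrate(mvad.seq, time.varying = TRUE)
R> dim(tr.t)    # 6 x 6 x 69: one transition matrix per position t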
Transversal state distributions

Time-varying transition rates are transversal characteristics computed at the successive considered positions. In the same vein, it is of interest to look at the transversal distribution of the states at each position of the considered sequences. A state distribution plot, as produced by seqdplot(), displays the general pattern of the whole set of trajectories. When interpreting such graphics, one must remember that, unlike sequence index plots and sequence frequency plots, they do not render individual sequences or individual follow-ups. They provide aggregated views made of successive slices, each of which represents transversal characteristics.

Sequence of modal states

An interesting summary that can be derived from the state distributions is the sequence made of the most frequent state at each position. It is obtained with the seqmodst() function and plotted with seqmsplot(). Figure 7 shows how such modal state sequences are displayed. The height of the bar at each position is proportional to the frequency of the displayed state at that position. The number of occurrences of the modal state sequence is also displayed. Since the shown sequences of modal states do not belong to the sequence data set, the number of occurrences is 0 for both considered groups.

Transversal entropy of state distributions

In addition to the state distribution, the seqstatd() function provides, for each position in the sequence, the number of valid states and the Shannon entropy of the transversal state distribution. Shannon's entropy, also known as the entropy index, has been applied to social science data by, for instance, Billari (2001a) and Fussell (2005). Letting p_i denote the proportion of cases in state i at the considered position, the entropy is

h(p_1, \ldots, p_a) = -\sum_{i=1}^{a} p_i \log p_i,

where a is the size of the alphabet. The entropy is 0 when all cases are in the same state and is maximal when we have the same proportion of cases in each state. The entropy can be seen as a measure of the diversity of states observed at the considered position.

Plotting the transversal entropies can be useful to find out how the diversity of states evolves along the time axis. We plot transversal entropies with seqHtplot(). Figure 8 shows the curves by end of compulsory school qualification group. For the first group, the entropy of the state distributions noticeably decreases at the end of the follow-up period. This is a consequence of the increasing proportion of youngsters entering into full employment (Figure 6). For the second group, the entropy index slightly increases at the end of the considered period, which may be explained by the emergence of two balanced subgroups, namely those who continue higher education and those who enter into employment.

We focus now on the characterization and summarization of longitudinal characteristics of individual sequences. Essentially, the aim is to define measures that inform on how each sequence is constituted, i.e., on whether it takes a simple or more complex form.
Individual sequence characteristics

The interpretation of complexity indexes will indeed depend on the context. Consider, for instance, the number of transitions (changes of state) in a sequence. When looking at work trajectories, for example, sequences with numerous transitions may correspond to unusual, disrupted trajectories. In other contexts, such as family formation, sequences with fewer transitions may indicate that an individual failed to pass through the usual stages of family formation (leaving the parental home, cohabitation with a partner, birth of one or more children, etc.).

In the SPS form (see Section 3) a state sequence is represented as an ordered list of successive distinct states with their associated durations, i.e., as a sequence of couples (x_j, t_j), where x_j is a state and t_j its duration. This suggests that we can distinguish characteristics of the state sequencing (the distinct successive states, DSS) from those of the durations.⁸ We first examine two indicators of the state sequencing and one based on the durations. More synthetic measures are addressed in Section 7.2.

⁸ The two pieces of information can be extracted separately with seqdss() and seqdur().

Number of transitions

Perhaps the simplest indicator is the number of transitions in the sequence, i.e., the number of state changes. The number of transitions in a sequence x is readily obtained from the length ℓ_d(x) of its DSS sequence: it is ℓ_d(x) − 1. We get the number of transitions for each sequence of a state sequence object with seqtransn().

Number of subsequences

The number φ(x) of subsequences that can be extracted from the DSS sequence also provides useful information on the sequencing of the states. This measure is returned by the seqsubsn() function and is used in the turbulence measure presented below. A subsequence y of x is composed of elements of x occurring in the same order as in x. The maximal number of subsequences is reached only for a sequence made of repetitions of the alphabet. In Figure 9, for example, sequences 5 and 9 have the maximal number of transitions, while the number of subsequences is maximal for sequence 9 only.

Within sequence entropy

Regarding the durations, we consider the total time spent in each state, i.e., in case of multiple spells in a same state, the sum of the lengths of these spells. For example, in (EM,4)-(TR,2)-(EM,64), the first sequence in the object mvad.seq, there are two spells in state EM with respective durations 4 and 64. Hence, the time spent in state EM is 68 months, as shown by the output of the seqistatd() function:

R> seqistatd(mvad.seq[1:4, ])
  EM FE HE JL SC TR
1 68  0  0  0  0  2
2  0 36 34  0  0  0
3 10 34  0  2  0 24
4 14  0  0  9  0 47

The total time spent in each state characterizes the state distribution within a sequence. The entropy of this distribution can be seen as a measure of the diversity of its states. We call it the within or longitudinal entropy, to distinguish it from the transversal entropy considered in Section 6.2 on page 20.

The seqient() function returns the longitudinal Shannon entropies, i.e., for each sequence the value of

h(\pi_1, \ldots, \pi_a) = -\sum_{i=1}^{a} \pi_i \log \pi_i,

where a is the size of the alphabet and π_i the proportion of occurrences of the i-th state in the considered sequence. When the state remains the same during the whole sequence, the entropy equals 0, while the maximum entropy is reached when the same time is spent in each possible element of the alphabet. By default the entropy is normalized by dividing the value of h(π_1, ..., π_a)
by its theoretical maximum, log a.⁹ Figure 9 helps to get a more concrete idea of what the entropy measures. We see that the within-sequence entropy does not account for the state order in the sequence. For instance, sequences 7 and 9 have the same maximal normalized entropy of 1.

Composite complexity measures

The previous measures are based either on the sequencing or on the durations. We look now at composite measures that account simultaneously for those two aspects.

Turbulence

The turbulence T(x) of a sequence x is a composite measure proposed by Elzinga (Elzinga and Liefbroer 2007) that accounts for the number φ(x) of distinct subsequences of the DSS sequence and the variance s²_t(x) of the consecutive times t_j spent in the ℓ_d(x) distinct states. The formula is

T(x) = \log_2 \left( \varphi(x) \, \frac{s^2_{t,\max}(x) + 1}{s^2_t(x) + 1} \right),

where s²_{t,max}(x) is the maximum value that s²_t(x) can take given the total duration ℓ(x) = Σ_j t_j of the sequence. This maximum is

s^2_{t,\max}(x) = (\ell_d(x) - 1)\,(1 - \bar{t}(x))^2,

where t̄(x) is the mean consecutive time spent in the distinct states.

From a prediction point of view, the higher the differences in state durations, and hence the higher their variance, the less uncertain the sequence. In that sense, a small duration variance indicates high complexity.

The vector containing the turbulences of the sequences in a sequence object is obtained with the seqST() function.

Complexity index

The complexity index, introduced in Gabadinho, Ritschard, Studer, and Müller (2010), is a composite measure that combines the number of transitions in the sequence with the longitudinal entropy. It reads

C(x) = \sqrt{ \frac{q(x)}{\ell(x) - 1} \cdot \frac{h(x)}{h_{\max}} },

where q(x) is the number of transitions in the sequence, h(x) its longitudinal entropy and h_max the theoretical maximum value of the entropy given the alphabet, i.e., h_max = log a. We get the vector of complexity indexes with the seqici() function.

The minimum value of 0 can only be reached by a sequence with a single distinct state, i.e., with no transition and an entropy of 0. C(x) reaches its maximum of 1 if and only if the sequence x is such that i) x contains each of the states in the alphabet, ii) the same time ℓ(x)/a is spent in each state, and iii) the number of transitions is ℓ(x) − 1.

Complexity index versus turbulence

It is instructive to look at how the turbulence and complexity indexes behave for the examples in Figure 9. The turbulence produces significantly higher values for sequences 3 and 4, which have a rather low 'sequencing' complexity but a null variance of their state durations. Indeed, this variance does not account for states that are not visited, which tends to give high turbulence values to seemingly simple sequences such as sequence 3, with two spells of the same length and hence a null variance of their durations. Similarly, the turbulence exceeds the complexity index for sequences 3, 4, 5, and 7, which all have a zero variance in duration and, hence, a relatively high turbulence value. The longitudinal entropy that intervenes in the complexity index is another way of looking at the time spent in the states. It accounts, on its side, for all states, including the non-visited ones, and therefore discriminates clearly between the sequences with zero duration variance.

Measuring sequence (dis)similarity

We examine now how we can measure the dissimilarity between two state sequences. As we will see in Section 9, once we have pairwise dissimilarities we will be able to run many types of powerful classical and specific statistical analysis methods on sequence data.
Many sequence dissimilarity measures have been proposed in the literature, of which the most popular in the social sciences is the optimal matching (OM) edit distance. TraMineR offers a general seqdist() function that can compute the OM dissimilarity as well as a set of other dissimilarity measures. Table 6 lists the available methods and their required parameters. The seqdist() function can output the matrix of pairwise dissimilarities or the vector of distances to a provided reference sequence. We can also compute multichannel dissimilarities (Pollock 2007) with the seqdistmc() function.

Dissimilarity measures can be classified into measures based on the count of matching attributes and those defined as the (minimal) cost of transforming one sequence into the other. Another interesting distinction is between those that make position-wise comparisons, i.e., that do not allow shifting a sequence or part of it, and those accounting for similar shifted patterns (see Table 6). Without shift, x = ABAB and y = BABA are very distant, while they are quite similar if we shift y by just one position.

Dissimilarities based on counts of common attributes

Let A(x, y) be a count of common attributes between sequences x and y. It is a proximity measure, since the higher the count, the closer the sequences. We derive a dissimilarity measure from it through the following general formula:

d(x, y) = A(x, x) + A(y, y) - 2A(x, y),    (1)

where d(x, y) is the distance between objects x and y. The dissimilarity is maximal when A(x, y) = 0, i.e., when the two sequences have no common attribute. It is zero when the sequences are identical, in which case we have A(x, y) = A(x, x) = A(y, y). Let us briefly describe the implemented count-based dissimilarity measures.

The simple Hamming distance (Hamming 1950) is the number of positions at which two sequences of equal length differ. It can equivalently be defined as ℓ − A_H(x, y), with ℓ = |x| = |y| the common sequence length and A_H(x, y) the number of matching positions.¹⁰ We get the Hamming distance with Equation (1) by using A_H(x, y)/2 as proximity measure.

We obtain another simple distance measure by using the length A_P(x, y) of the longest common prefix (LCP) between two sequences, i.e., by counting the number of successive common positions starting from the beginning of the sequences¹¹ (see for instance Elzinga 2007b). The reversed longest common prefix (RLCP), or longest common suffix, is similar to the LCP but looks for the common elements from the end rather than from the beginning of the sequences.

Another implemented metric is based on the length A_S(x, y) of the longest common subsequence (LCS).¹² Notice that consecutive states in the common subsequence are not necessarily consecutive in the compared sequences. For example, the length of the LCS between sequences 1 and 3 of mvad.seq (see page 11 and Figure 2 on page 13) is 12, and we get 59 between sequences 2 and 5. Quite obviously, we can only have A_S(x, y) ≥ A_P(x, y), i.e., the length of the LCS cannot be smaller than the length of the LCP, and hence the LCS distance cannot be greater than the LCP distance. We also have A_S(x, y) ≥ A_H(x, y). When compared with metrics based on position-wise counts such as the simple Hamming and the LCP distances, the LCS metric reduces distances by accounting for non-aligned matches, i.e., position-shifted similarities.

Edit distances

An edit distance is defined as the minimal cost of transforming one sequence into the other. This cost depends, indeed, on the allowed transforming operations and their individual costs.
Basically, two types of operations are considered: i) the substitution of one element by another one, and ii) the indel, i.e., the insertion or deletion of an element, which generates a one-position shift of all the elements on its right. The generalized Hamming (HAM) and dynamic Hamming distances (DHD) (Lesnard 2006) accept only substitutions and hence no shift. The former assumes position-independent substitution costs, while the second allows for position-dependent costs. The Optimal Matching (OM) distance, first considered by Levenshtein (1966) and popularized in the social sciences by Abbott (Abbott and Forrest 1986), accounts for both operations.

Setting indels and substitution costs

Usually the indel cost is set as a constant, independent of the concerned position and state. Setting a high indel cost relative to the substitution costs favors substitutions, while low values favor indels. We can prohibit shifts by setting the indel cost sufficiently high.¹³

Substitution costs are generally organized in matrix form. A three-dimensional matrix is necessary in the case of position-varying costs, as used, for instance, by the DHD metric. In the time-invariant case, the substitution-cost matrix is a square symmetrical matrix of dimension a × a, where a is the number of distinct states in the alphabet. The element (i, j) in the matrix is the cost of substituting state s_i with state s_j. The user can either specify their own substitution-cost matrix,¹⁴ or compute one by means of the seqsubm() function with option method = "CONSTANT" or method = "TRATE". With "CONSTANT", all costs are set to the user-provided cval constant (2 by default). With "TRATE", the costs are determined from the estimated transition rates as

SC(s_i, s_j) = 2 - p(s_i \mid s_j) - p(s_j \mid s_i),

where p(s_i | s_j) is the probability of observing state s_i at time t + 1 given that state s_j has been observed at time t (see page 17). The idea is to set a high cost when changes between s_i and s_j are seldom observed, and a lower cost when they are frequent.

Here is how we get the time-invariant transition-rate-based substitution cost matrix for the mvad data:

R> scost <- seqsubm(mvad.seq, method = "TRATE")
R> round(scost, 3)
      EM    FE    HE    JL    SC    TR
EM 0.000 1.971 1.987 1.957 1.988 1.961
FE 1.971 0.000 1.993 1.977 1.991 1.993
HE 1.987 1.993 0.000 1.997 1.981 1.999
JL 1.957 1.977 1.997 0.000 1.992 1.976
SC 1.988 1.991 1.981 1.992 0.000 1.995
TR 1.961 1.993 1.999 1.976 1.995 0.000

The minimum cost is 0 for the substitution of each state by itself, and the maximum is less than 2, i.e., the value that we would get for a transition not observed in the data. In accordance with what we observed in the transition rate matrix (page 18), we get the lowest costs for substituting EM (employment) with JL (joblessness) or TR (training). Remember, however, that, unlike transition rates, the substitution costs are symmetric, and hence we also have the lowest cost for changing JL or TR into EM.

Implemented edit distances

Generalized Hamming (HAM) and Dynamic Hamming (DHD) dissimilarities are intended for sequences of equal lengths only. The former generalizes the basic Hamming distance by allowing for state-dependent substitution costs. Indeed, the count of nonmatching positions is the cost of substituting a state at each position when all costs are set to 1. DHD is the extension proposed by Lesnard (2006) to account for time-varying costs. For the mvad data set, the flexibility in substitution costs allowed by the DHD metric has only a limited impact, as can be seen in Figure 10.
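Note that for the constant-cost setting mentioned earlier, seqsubm() is simply called with method = "CONSTANT":

R> ccost <- seqsubm(mvad.seq, method = "CONSTANT", cval = 2)

With these costs, every substitution between distinct states costs 2, so that, combined with an indel cost of 1, the OM distance coincides with the LCS distance discussed below.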
In addition to substitutions, Optimal Matching (OM) also allows insertions/deletions. It thus applies to sequences of unequal lengths. Since the cost of the transformation may vary with the order of the successive indels and substitutions, OM is defined as the minimal cost, in terms of insertions, deletions and substitutions, of transforming one sequence into the other one. The cost minimization is achieved through dynamic programming, the algorithm implemented in TraMineR being essentially that of Needleman and Wunsch (1970) with standard optimizations.

The 712 × 712 pairwise distance matrix for our mvad data, computed with the transition-rate-based costs and an indel cost of 1, is obtained with the command:

R> mvad.om <- seqdist(mvad.seq, method = "OM", indel = 1, sm = scost)

The mvad.om distance matrix requires only 3.96 megabytes of memory space. However, the number n of sequences in the data can be an important issue when computing dissimilarity matrices, since both the computing time and the size of the resulting matrix increase quadratically with n. If necessary, we can divide the size by 2 by requesting only the upper triangle of the matrix with the full.matrix = FALSE argument. Most R functions accept the resulting upper-triangle objects as dissimilarity argument.

Comparing dissimilarity measures

Choosing a dissimilarity measure and setting substitution and indel costs is an important step in sequence analysis. Though popular in the social sciences, distances based on such costs have raised questions in the literature (see for instance Dijkstra and Taris 1995; Wu 2000; Elzinga 2007b). The meaning of the substitution costs, their required symmetry and the sensitivity of the results to the chosen values have been pointed out as important issues. More recently, the meaning of indels was also addressed (Hollister 2009; Lesnard 2010). Favoring insertions and deletions reduces the importance of time shifts in the comparison, while favoring substitutions gives more importance to position-wise similarities.

Comparing the results obtained with various settings can also be useful for selecting the appropriate measure. Figure 10 compares the discussed dissimilarity measures for the distance to the most frequent sequence on the mvad data. We observe that, apart from the LCP metric, the measures yield very similar results. The few significant differences between HAM (or DHD) and LCS (or OM) illustrate how LCS and OM reduce dissimilarity by allowing for shifts in the comparison of the sequences. The mean difference between OM, obtained with costs derived from transition rates, and LCS is only 0.4% of the maximal distance. The largest difference is 0.63%. These small differences are a consequence of the low transition rates, which lead to substitution costs between 1.96 and 2, i.e., close to 2. With a constant substitution cost of 2 and an indel cost equal to 1, OM is just LCS (Elzinga 2007b).
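A hedged sketch of how such a comparison can be set up, using the vectors of distances to the most frequent sequence (refseq = 0 designates the most frequent sequence):

R> d.om <- seqdist(mvad.seq, method = "OM", indel = 1, sm = scost,
+    refseq = 0)
R> d.lcs <- seqdist(mvad.seq, method = "LCS", refseq = 0)
R> plot(d.lcs, d.om, xlab = "LCS distance", ylab = "OM distance")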
Normalized distances

When dealing with sequences of different lengths, we may want to normalize the distances to account for these differences. More specifically, the aim of normalization is to relativize distances such that a dissimilarity of, say, 10 between sequences of length 100 becomes less important than a dissimilarity of 10 between sequences of length 5. While the maximal distance between a pair of sequences depends on their lengths, normalization aims at setting it to 1 or, at least, to a value that does not depend on the lengths.¹⁵

With seqdist() we control normalization by means of the norm argument. When setting it to TRUE, the normalization applied is determined by the selected metric. For LCP, RLCP and LCS, we apply Elzinga (2007b)'s normalization, which works as follows. Letting A(x, y) be the (non-normalized) proximity measure, we first normalize this similarity:

A_{\text{norm}}(x, y) = \frac{A(x, y)}{\sqrt{A(x, x)\,A(y, y)}}.

The normalized distance is then just the complement to 1 of the normalized similarity,

d_{\text{norm}}(x, y) = 1 - A_{\text{norm}}(x, y),

which gives values between 0 and 1.

For the OM distance, as well as for HAM and DHD, we apply Abbott's normalization, which consists of dividing the distance by the length of the longest of the two sequences:

d_{\text{norm}}(x, y) = \frac{d(x, y)}{\max(\ell_x, \ell_y)}.

It results that for OM with an indel cost of 1 and a constant substitution cost of 2, the maximal normalized OM distance is 2. Though OM is in this latter case equivalent to LCS, their normalized values differ.

We can also force the normalization method by specifying either "gmean" for Elzinga's normalization or "maxlength" for Abbott's solution. Alternatively, we can use "maxdist", which consists of dividing each distance by its maximal theoretical value. For the LCP and LCS distances, the maximal possible value is the sum ℓ_x + ℓ_y of the lengths of the two sequences x and y, and for HAM it is the common length ℓ of the sequences, while the maximum theoretical OM distance is

D_{\max}(x, y) = \min(\ell_x, \ell_y) \cdot \min\bigl(2\,c_I, \max(S)\bigr) + |\ell_x - \ell_y|\,c_I,

where c_I > 0 is the indel cost, max(S) the greatest substitution cost and |ℓ_x − ℓ_y| the absolute value of the difference in the lengths of the two sequences. With the unit indel cost and the scost transition-rate-based substitution cost matrix, this yields 139.94 for the mvad data, which is very close to twice the sequence length, i.e., 2 · 70 = 140.

It is worth mentioning that the triangle inequality property of the original distance may in some cases be lost through the "maxlength" and "maxdist" normalizations.

Dissimilarity based sequence analysis

Besides providing information on the similarity between any pair of sequences, a distance matrix opens access to many classical statistical and data analysis tools. It permits us, for instance, to extract representative sequences such as medoids, to run any clustering technique based on pairwise dissimilarities and to apply multidimensional scaling. It even permits us to compute pseudo-variances and run ANOVA-like analyses, as explained in Studer et al. (2011). We demonstrate in this section how the mvad.om dissimilarity (distance) matrix obtained with the command shown on page 27 can be exploited for further statistical analysis.
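For instance, the multidimensional scaling mentioned above requires only base R; a minimal sketch:

R> mds <- cmdscale(mvad.om, k = 2)
R> plot(mds, xlab = "Dimension 1", ylab = "Dimension 2")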
Representative sequences

A major concern when analyzing sets of categorical sequences is to find useful ways of summarizing them. Possible solutions could be to determine some central or typical sequence, such as the modal (most frequent) sequence or the medoid (most central) sequence. However, such solutions are of limited interest, since they usually provide only a rough idea of the main patterns in the set. A more general approach consists in finding sets of representatives, and TraMineR provides the versatile generic seqrep() function for extracting such sets from the dissimilarity matrix. The function allows control over the amount of information that the representative set should convey. The sets returned by seqrep() thus exhibit the key features of the whole set from which they are extracted, which proves useful, for example, when labeling clusters of sequences.

The principle of the search algorithm (Gabadinho, Ritschard, Studer, and Müller 2011) is to sort the sequences according to a representativeness criterion¹⁶ and to remove the redundancy by browsing the sorted sequences. The redundancy threshold is set as a percentage (10% by default) of the maximum theoretical dissimilarity D_max between two sequences, and the representative set will thus not contain any pair of sequences that are nearer to each other than this threshold. The size of the representative set can be controlled by fixing either the minimal expected coverage of the representative set or the number nrep of representatives.

The coverage of a representative sequence is the percentage of sequences that are in its neighborhood, i.e., the number of sequences with a distance to the representative less than a selected threshold.¹⁷ The total coverage of the representative set corresponds to the percentage of the n original sequences that have a representative in their neighborhood. A series of other individual and global measures to evaluate the quality of the obtained representatives is also computed.

The list of representative sequences is obtained by printing the outcome of seqrep(), and we get the quality measures with the summary() method. The seqrplot() function generates representative sequence plots.

Example 1: Medoid and the centrality criterion

A first simple example¹⁸ of a representative sequence is the medoid of a set of sequences. The medoid is the most central object, i.e., the one with the minimal sum of distances to all other objects in the set (Kaufman and Rousseeuw 2005). It is a special case of representative sequence, obtained by selecting the centrality sorting criterion (criterion = "dist") and setting the size of the representative set to 1 (nrep = 1).

R> medoid <- seqrep(mvad.seq, diss = mvad.om, criterion = "dist",
+    nrep = 1)
R> print(medoid, format = "SPS")

The medoid of a set of sequences usually yields poor coverage. To increase coverage we should allow for more than one representative. When seeking more than one representative, an initial sort of the sequences according to the density of their neighborhood yields better results. Neighborhood density is, therefore, the default criterion used by seqrep().
The command below finds and plots the representative set that, with a neighborhood radius of 10% (the default pradius value), covers at least 25% (the default coverage value) of the sequences in each of the two gcse5eq groups:

R> seqrplot(mvad.seq, group = mvad$gcse5eq, diss = mvad.om, border = NA)

In the resulting plot (Figure 11) the selected representative sequences are plotted bottom-up according to their representativeness score, with bar width proportional to the number of sequences assigned to them. At the top of the plot, two parallel series of symbols, each standing for a representative, are displayed horizontally on a scale ranging from 0 to the maximal theoretical distance D_max. The location of the symbol associated with representative r_i indicates on axis A the discrepancy within the subset R_i of sequences assigned to r_i, and on axis B the mean distance to the representative.

We learn from the plots that five representatives and one representative, respectively, are necessary for the two groups to achieve the 25% coverage, and that the actual coverage is 29% in both cases.

Clustering sequences

Clustering is an exploratory data analysis method aimed at automatically finding homogeneous groups, or clusters, in the data (Kaufman and Rousseeuw 2005). In life course studies (e.g., McVicar and Anyadike-Danes 2002; Widmer and Ritschard 2009), the method has typically been used in combination with OM distances to identify distinct groups of sequences with similar patterns, that is, to define a typology of sequences. We already showed, in Section 2, how we can make a cluster analysis of sequences using the cluster library (Maechler, Rousseeuw, Struyf, and Hubert 2005). We used agnes() to make a hierarchical clustering with the Ward method, but pam() (partitioning around medoids) or diana() (divisive analysis), for example, could also be used. The four-cluster solution was retained after examining the dendrogram (Figure 12) of the clustering tree obtained with:

R> plot(clusterward, which.plots = 2, labels = FALSE)

Figure 13 on page 34, obtained with the command below, shows the representative sequences by cluster, complementing the plots of the transversal state distributions shown in Figure 1 on page 7. The threshold for the coverage of the representative set is set to 35% using the coverage = 0.35 argument.

R> seqrplot(mvad.seq, group = cl4.lab, diss = mvad.om, coverage = 0.35,
+    border = NA)

Looking at these two figures helps in interpreting and labeling the clusters. They show that clustering from the OM distances identifies four distinct patterns of school-to-work transitions. In the first cluster the trajectories are clearly oriented toward an early transition to employment, with, in some cases, a spell of training. The second cluster is dominated by trajectories containing a spell of school or further education followed by higher education. Cluster 3 corresponds to a slow transition to employment, with first an important spell of further education. In the last cluster, the transitions from school to work are more chaotic, with frequent spells of training and joblessness.
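For convenience, the clustering steps behind clusterward and the cl4.lab factor used above can be sketched as follows; this is an illustrative reconstruction of the Section 2 commands, not a verbatim copy:

R> library("cluster")
R> clusterward <- agnes(mvad.om, diss = TRUE, method = "ward")
R> cl4 <- cutree(clusterward, k = 4)
R> cl4.lab <- factor(cl4, labels = paste("Cluster", 1:4))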
Figure 3: Unsorted and sorted full-sequence index plots.
Figure 5: Mean time spent in each state by father's unemployment status.
Figure 6: Transversal state distributions by end of compulsory school qualification group.
Figure 7: Modal state sequence by end of compulsory school qualification group.
Figure 8: Transversal entropy by end of compulsory school qualification group.
Figure 10: Distances to the most frequent sequence obtained with various metrics, mvad data.
Figure 11: Representative sequences by end of compulsory school qualification group.

Table 1: TraMineR's key functions.

The mvad data set contains the data used by McVicar and Anyadike-Danes (2002) for studying the school-to-work transition in Northern Ireland. The figures cover 712 individuals, the sequences being their monthly follow-up over the course of 6 years, starting in the month where they were first eligible to leave compulsory education (July 1993). Each individual is characterized by a unique identifier, 13 covariates and 72 monthly activity state variables from July 1993 to June 1999. Since the first two months of the follow-up are summer holidays, we look hereafter at trajectories from September 1993, yielding sequences of 70 monthly statuses. The states are school, FE (further education), employment, training, joblessness, and HE (higher education). See Table 2 for a description of the variables in mvad.

Table 2: List of variables in the mvad data set.
Table 4: Sequence data representations; some formats handled by the seqformat() function.
Table 5: Main sequence object attributes.
Table 6: List of available metrics for computing distances with the seqdist() function.

⁶ However, unlike for graphical displays, functions returning statistics and sequence characteristics do not have a group argument. We can retrieve the values by levels of a covariate with the row-indexing mechanism⁷ or with the by() function.
2014-10-01T00:00:00.000Z
2011-04-07T00:00:00.000
{ "year": 2011, "sha1": "c50d1ed3f988745f7b969cfb42d38be1a6580f76", "oa_license": "CCBY", "oa_url": "https://doi.org/10.18637/jss.v040.i04", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "ab8e38ede9ae37f851466be3caffdcf638c3ee7b", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
115203236
pes2o/s2orc
v3-fos-license
Loss-of-Function of a Tomato Receptor-Like Kinase Impairs Male Fertility and Induces Parthenocarpic Fruit Set

Parthenocarpy arises when an ovary develops into a fruit without pollination/fertilization. The mechanisms involved in genetic parthenocarpy have attracted attention because of their potential application in plant breeding and also for their elucidation of the mechanisms involved in early fruit development. We have isolated and characterized a novel small parthenocarpic fruit and flower (spff) mutant in the tomato (Solanum lycopersicum) cultivar Micro-Tom. This plant showed both vegetative and reproductive phenotypes, including dwarfism of floral organs, male sterility, delayed flowering, altered axillary shoot development, and parthenocarpic production of small fruits. Genome-wide single nucleotide polymorphism array analysis, coupled with mapping-by-sequencing using next generation sequencing-based high-throughput approaches, resulted in the identification of a candidate locus responsible for the spff mutant phenotype. Subsequent linkage analysis and RNA interference-based silencing indicated that these phenotypes were caused by a loss-of-function mutation of a single gene (Solyc04g077010), which encodes a receptor-like protein kinase that was expressed in vascular bundles in young buds. Cytological and transcriptomic analyses suggested that parthenocarpy in the spff mutant was associated with enlarged ovarian cells and with elevated expression of the gibberellin metabolism gene GA20ox1. Taken together, our results suggest a role for Solyc04g077010 in male organ development and indicate that loss of this receptor-like protein kinase activity could result in parthenocarpy.

INTRODUCTION

The flower-to-fruit transition, also known as "fruit set," corresponds to a major developmental shift that transforms an ovary into a fruit (Gillaspy et al., 1993). This genetically programmed process is coordinated by a complex network of signaling pathways that are activated by interacting endogenous and exogenous cues, although the genetic and molecular factors that control the flower-to-fruit transition remain poorly understood (Ariizumi et al., 2013). The development of parthenocarpic fruit has been observed under some conditions; this pollination-independent seedless fruit can arise when fertilization is inefficient, mainly due to male sterility. Several naturally occurring forms of genetic parthenocarpy have been identified in tomato, and the corresponding mutants have been designated pat, pat-2, and Pat-k/SlAGL6 (Shinozaki and Ezura, 2016; Klap et al., 2017; Takisawa et al., 2018). The pat mutant is characterized by short anthers, partial male sterility, and the production of small fruits (Mazzucato et al., 1998). The locus responsible for the pat phenotypes was narrowed down to chromosome 3 (Beraldi et al., 2004). In addition, the gene encoding SlGA20ox1, the key enzyme for gibberellin (GA) accumulation in the pollinated tomato ovary, is highly expressed in pat ovaries; this is likely to activate GA metabolism and increase GA levels in the unpollinated ovaries, thus triggering parthenocarpy (Olimpieri et al., 2007). The pat-2 phenotype appears to be caused by a recessive mutation at a single locus on chromosome 4, in a gene encoding a zinc finger homeodomain protein (Nunome, 2016); GA also accumulates at high levels in unpollinated pat-2 ovaries (Fos et al., 2000).
Furthermore, it has been shown that fruit set initiation through both pollination-dependent and -independent processes occurs concomitantly with the down-regulation of a family of floral homeotic MADS-box genes, which regulate floral organ identities (Wang et al., 2009; Tang et al., 2015). Indeed, the loss of function of several MADS-box genes can cause tomato parthenocarpy. For instance, the loss of function of tomato MADS-box 29, tomato MADS-box 5, and DEFICIENS/TOMATO APETALA3/STAMENLESS results in parthenocarpy, together with abnormal stamen differentiation (Pnueli et al., 1994; Ampomah-Dwamena et al., 2002; Mazzucato et al., 2008; Quinet et al., 2014; Okabe et al., 2019). Moreover, parthenocarpy was induced in tomatoes that were genetically transformed to inhibit stamen development at an early stage of differentiation via the expression of the BARNASE ribonuclease gene under a stamen-specific promoter (Medina et al., 2013). Although the mechanisms underlying the role of the stamen in parthenocarpy have not yet been fully characterized, it has been hypothesized that stamens could counteract fruit set initiation before pollination in tomato plants, and this may be associated in part with elevated levels of GA (Okabe et al., 2019).

Flowers and fruits are considered to represent sink organs because their development requires high levels of nutrients, such as sucrose as a carbon source (Osorio et al., 2014). The vasculature within flowers, fruits, and their pedicels is therefore of major importance because it transports nutrients and water to these organs (Rančić et al., 2010). XYLEM INTERMIXED WITH PHLOEM1 (XIP1) is one of the proteins with a key role in the organization of the vasculature in Arabidopsis (Shiu and Bleecker, 2001). This protein is a leucine-rich repeat receptor-like kinase (RLK) that belongs to a large family with at least 216 members encoded in the Arabidopsis genome. Loss of XIP1 resulted in a modified vascular bundle organization and abnormal lignification of phloem cells, transforming them into xylem cells (Bryan et al., 2012).

To identify key regulators of parthenocarpy, the present study characterized a novel tomato parthenocarpic mutant known as small parthenocarpic fruit and flower (spff), which was isolated from a population where mutations were introduced via exposure to γ-ray irradiation. The spff mutant exhibits small flower formation, male sterility, and increased transcription of GA20ox1 in young ovaries. Furthermore, a rapid high-throughput approach followed by functional validation using RNA interference (RNAi) resulted in the identification of a gene encoding a novel RLK protein.

Plant Material and Growth Conditions

Tomato wild-type (WT) plants, Solanum lycopersicum "Ailsa-Craig" and "Micro-Tom," and spff mutant plants were grown in pots and irrigated daily with Otsuka first and Otsuka second fertilizer solutions under greenhouse conditions in Tsukuba, Japan. The greenhouse was maintained at the ambient temperature and light photoperiod in July and August. WT S. lycopersicum "Micro-Tom" and spff plants for RNA sequencing (RNA-seq) analysis, RNAi experiments, and histological analyses were grown in rockwool and irrigated daily with Otsuka first and Otsuka second solutions under vertical farm conditions at 25 °C with a 16/8 h light/dark cycle.

Histological Analysis

Histological analysis of flower tissues was processed as described by Hao et al. (2017).
Wax-embedded floral buds were cut into 10-µm cross-sections, layered onto glass slides, and dried overnight at 42 °C. The cell size and the number of cell layers were evaluated, and the significance of group differences was statistically analyzed using Student's t-test. Pollen Number and Germination Assay Pollen grains were obtained from anthers at the anthesis stage and germinated in 1 mL of pollen germination medium (0.52 M sucrose, 1.6 mM boric acid, 1 mM CaCl2, 1 mM Ca(NO3)2, 1 mM MgSO4, and 0.01 mM Tris-HCl, pH 7.0). After incubation for 16 h at room temperature, pollen grains were observed under a light microscope. The pollen germination ratio was calculated by dividing the number of germinated pollen grains (in which the length of the pollen tube was at least twice the diameter of the pollen grain) by the total number of pollen grains, defined as the number of pollen grains observed within one microscopic field. The determinations were made in three replicate biological experiments. High-Density Genetic Mapping For genetic mapping by an Infinium assay (Illumina) using the SolCAP single-nucleotide polymorphism (SNP) array, an F2 population was derived from a cross between the spff mutant (Micro-Tom background) and WT plants (Ailsa-Craig background). Genomic DNA of 44 F2 plants (43 with spff mutant phenotypes and one with the WT phenotype), together with F1, WT Micro-Tom and parental plants of each genotype, was extracted from fresh leaves using Maxwell 16 DNA purification kits, according to the manufacturer's protocol (Promega). A total of 48 DNA samples were then used for the SolCAP analysis, using the method described by Sim et al. (2012). Of the 7600 markers analyzed, 1956 markers showed polymorphisms that distinguished between Micro-Tom and Ailsa-Craig; these were used for genotyping. SNPs were obtained from the Kazusa Marker Database. For the linkage analysis, we examined the genotypes at position 59,966,064 bp on chromosome 4 with the tomInf4732 SNP marker, with sequences of AAGCTT and AAGATT in Micro-Tom and Ailsa-Craig, respectively. Each genotype was discriminated using the primers listed in Supplementary Table S1, followed by restriction digestion with HindIII for 8 h at 37 °C. Mapping-By-Sequencing For further fine mapping based on the mapping-by-sequencing approach (Abe et al., 2012;Garcia et al., 2016), an F2 population was constructed by crossing the spff mutant and WT, in the Micro-Tom background (Supplementary Figure S1). Genomic DNA was extracted from fresh leaves of F2 plants that exhibited the spff mutant phenotype, as described above. The same amount of extracted DNA from 20 individual plants was pooled and sequenced by 100 bp paired-end sequencing (HiSeq 2000;Illumina). Mutation or variant information was obtained using the Bowtie2-Samtools-GATK (Genome Analysis Tool Kit) pipeline (Li et al., 2009;McKenna et al., 2010;Langmead and Salzberg, 2012). Briefly, Illumina short reads were aligned onto the tomato genome reference SL2.40 by Bowtie2 version 2.2.1 with default parameters. Mutations or variants including SNPs or insertion-deletions (Indels) were then detected by GATK version 3.5 (McKenna et al., 2010). SNPs and Indels that might cause a nonsynonymous amino acid substitution, a premature stop codon, or a frameshift were identified using HaplotypeCaller, as described previously (McKenna et al., 2010;Pulungan et al., 2018). Allele frequency datasets were also obtained using GATK. 
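As a rough illustration of how such allele-frequency data feed into candidate filtering, the following minimal Python sketch computes the SNP/Indel index (defined in the next section as the proportion of sequenced reads carrying the mutant allele) and applies the read-depth filter used in this study. The variant records, the 0.9 index cutoff, and the helper names are hypothetical; the text itself specifies only a "high" index and read numbers of at least 10.

```python
# Illustrative sketch (not the authors' code): compute a SNP/Indel index from
# REF/ALT read depths of the kind reported by GATK. In a pooled DNA sample of
# mutant-phenotype F2 plants, variants tightly linked to the causal locus
# should show an index near 1.0 (nearly all reads carry the mutant allele).

def snp_index(ref_depth: int, alt_depth: int) -> float:
    """Proportion of reads supporting the mutant (ALT) allele."""
    total = ref_depth + alt_depth
    return alt_depth / total if total > 0 else 0.0

# Hypothetical records: (chromosome, position, REF reads, ALT reads)
variants = [
    ("SL2.40ch04", 60_100_000, 1, 19),  # index 0.95 at depth 20: keep
    ("SL2.40ch04", 58_000_000, 9, 9),   # index 0.50: unlinked, discard
    ("SL2.40ch04", 60_900_000, 0, 9),   # depth 9: fails the depth filter
]

MIN_DEPTH = 10    # "reliable read numbers (>=10)" from the text
MIN_INDEX = 0.9   # illustrative threshold for a "high" index

candidates = [
    (chrom, pos, round(snp_index(ref, alt), 2))
    for chrom, pos, ref, alt in variants
    if (ref + alt) >= MIN_DEPTH and snp_index(ref, alt) >= MIN_INDEX
]
print(candidates)  # [('SL2.40ch04', 60100000, 0.95)]
```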
Because the Micro-Tom cultivar is not inbred and relatively many intra-cultivar variations are present among individuals, we subtracted such intra-cultivar variants from the SNP/Indel datasets using next generation sequencing datasets of several WT Micro-Tom individuals (Pulungan et al., 2018). Candidate genes with a high SNP/Indel index and reliable read numbers (≥10) were then identified. In this analysis, the SNP/Indel index was calculated as the proportion of sequenced reads that included mutant-allele SNPs or Indels, in relation to the WT allele. Linkage Analysis of the spff Locus The spff mutant was backcrossed four times with Micro-Tom WT in order to purify the responsible mutation and finally obtain BC4F2 plants (Supplementary Figure S1). Linkage analysis was performed using DNA extracted from F2, BC2F2, BC3F2, and BC4F2 populations (Supplementary Table S1). Genomic DNA was extracted using the DNeasy Miniprep kit (QIAGEN) and amplified by PCR with TaKaRa Ex Taq (TAKARA) and the primer set shown in Supplementary Table S2. The PCR products were purified using the Illustra ExoStar kit (GE Healthcare) and then sent to Eurofins Genomics for sequencing. Construction of the RNAi Plasmid The RNAi construct was designed using Gateway technology (Invitrogen). Total RNA was extracted from WT ovaries using the RNeasy Plant Mini Kit (QIAGEN), followed by the removal of genomic DNA using RNA Clean & Concentrator (ZYMO RESEARCH). cDNAs were then synthesized using the SuperScript VILO cDNA Synthesis Kit (Thermo Fisher Scientific). A 521 bp fragment of the Solyc04g077010 transcript was amplified using the KOD Plus kit (TOYOBO); the cDNA was used as the template, and SlXIPRNAiF1 and SlXIPRNAiR1 were the primers (Supplementary Table S2). The amplicon was then cloned into the donor pBI-sense, antisense-GW vector (INPLANTA INNOVATIONS INC., Japan), allowing expression under the control of the constitutive 35S promoter. The resulting plasmid was introduced into WT Micro-Tom by Agrobacterium-mediated transformation using A. tumefaciens GV2260 (Sun et al., 2006). Transgenic lines were selected on Murashige and Skoog (MS) agar plates containing kanamycin (100 mg L−1). RNA Sequencing Ovaries were collected from flowers at anthesis, separated into three replicates (15-17 ovaries in each replicate) and ground in liquid nitrogen. Total RNA extraction from the ovaries and subsequent cDNA synthesis were performed as described above. Genome-wide RNA expression levels were analyzed by HiSeq (Illumina) with 100 bp single-end reads. The raw reads were subjected to quality filtering before employing the TopHat2-Cufflinks pipeline to count reads and calculate expression levels as reads per kb of transcript per million mapped reads (RPKM), as described previously. Comprehensive data were analyzed using multiple t-tests (p < 0.05), followed by the Bonferroni correction method, with false discovery rate analysis. Genes with mean RPKM values of ≥1 (three replicates) were considered to be expressed. Genes were considered differentially expressed if the log2 fold ratios were ≥1.0 or ≤−1.0, with false discovery rate-adjusted p-values (q-values) of <0.05. Expression Analysis by Quantitative Reverse Transcription PCR (qRT-PCR) and RT-PCR For qRT-PCR analysis, the leaves were ground to a fine powder in liquid nitrogen. Total RNA extraction from the samples and subsequent reverse transcription reactions were performed as described above. 
PCRs were carried out using the CFX96 system (Bio-Rad), the SYBR Premix Ex Taq kit (TaKaRa), and the appropriate gene-specific primers (Supplementary Table S2), according to previously described procedures (Shinozaki et al., 2015). Technical triplicates were performed for each sample, with biological triplicates. The expression levels were calculated using the delta-delta CT method (Pfaffl, 2001), with normalization to the expression of the reference gene, SAND (Expósito-Rodríguez et al., 2008). For RT-PCR analysis, cDNA synthesis was performed as described above and equal amounts of cDNA were used as template to observe the level of SPFF mRNA in various tissues. In situ Hybridization The riboprobes used to detect spff transcripts were made from a 775 bp fragment amplified from tomato root cDNAs by PCR using the ishF2-ishR1 primer set. The PCR product was used for subsequent PCR using the ishT7F2-ishR1 primer set for sense, and the ishF2-ishT7R1 primer set for antisense, riboprobes; this introduced the T7 RNA polymerase promoter at the 5′ and 3′ ends, respectively. Labeled riboprobes were synthesized by in vitro transcription in the presence of digoxigenin-UTP (DIG RNA Labeling kit, SP6/T7; Roche) and used for in situ hybridization. The plant tissue processing and in situ RNA hybridization experiments were performed following the protocol described by Sicard et al. (2008). Primer sequences used in this study are shown in Supplementary Table S2. For the comparative analysis between the WT and the spff mutant, both WT and spff mutant samples were mounted on the same glass slides to allow direct comparison under the same conditions. Identification of the Single Recessive Parthenocarpic spff Mutant A visual screening of tomato M3 populations obtained after γ-ray irradiation-induced mutagenesis in the genetic background of Micro-Tom, a dwarf and rapidly growing variety (Matsukura et al., 2007;Saito et al., 2011), resulted in the isolation of a mutant line (TOMJPG4121) that produced small seedless parthenocarpic fruit (Figure 1A). These plants also produced smaller flowers than the WT plant, particularly due to their narrower petals and shorter anthers (Figure 1B). We therefore called this line the spff mutant. Although the spff mutant did not produce seeded fruits by self-pollination in practice, crossing WT pollen onto the spff stigma did result in seeded fruits (Figure 1A); these F1 seeds germinated normally, suggesting that spff is male-sterile, with the ovary retaining substantial fertility. Furthermore, all of the resulting six F1 plants exhibited normal flower morphology, with no evidence of parthenocarpic ability, indicating that these mutant phenotypes were recessive. Thirty-three out of 109 F2 progenies obtained through crossing with the WT cultivar Micro-Tom, and 43 out of 186 F2 progenies obtained through crossing with the WT cultivar Ailsa-Craig, exhibited the spff mutant flower morphology and parthenocarpy phenotypes (Table 1 and Supplementary Figure S2). These segregation ratios corresponded to the expected 3:1 for a single recessive gene (chi-squared = 1.62 for the Micro-Tom and 0.35 for the Ailsa-Craig background, neither significant at p < 0.05). These data suggested the presence of a monogenic recessive mutation in the spff line. 
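The 3:1 goodness-of-fit test above is straightforward to verify; the short Python sketch below (not the authors' code) reproduces the reported chi-squared statistics from the stated counts.

```python
# Chi-squared goodness-of-fit test against a 3:1 (WT:mutant) expectation,
# one degree of freedom; reproduces the statistics reported in the text.
from scipy.stats import chisquare

def segregation_test(n_mutant: int, n_total: int):
    observed = [n_total - n_mutant, n_mutant]     # [WT, mutant]
    expected = [n_total * 0.75, n_total * 0.25]   # expected 3:1 ratio
    return chisquare(observed, f_exp=expected)

for cross, n_mutant, n_total in [("Micro-Tom F2", 33, 109),
                                 ("Ailsa-Craig F2", 43, 186)]:
    chi2, p = segregation_test(n_mutant, n_total)
    print(f"{cross}: chi2 = {chi2:.2f}, p = {p:.2f}")
# Micro-Tom F2: chi2 = 1.62, p = 0.20
# Ailsa-Craig F2: chi2 = 0.35, p = 0.55
```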
In the F2 populations derived from crossing spff with the WT cultivar Micro-Tom or Ailsa-Craig, anthesis of the first flower was delayed in the plants with the spff phenotype by 19 or 15 days, respectively, as compared to plants with the WT phenotype (Supplementary Figure S3); this indicated that the flowering delay trait was tightly associated with the spff flower morphology and parthenocarpy phenotypes. Characterization of the Pleiotropic Mutant Phenotypes in spff For detailed phenotypic characterizations, the spff mutant in the M3 population was backcrossed four times with WT cultivar Micro-Tom pollen to reduce mutagen-induced background mutations (Supplementary Figure S1). The resulting BC4F2 plants that exhibited spff phenotypes were analyzed. First, we examined the parthenocarpic phenotype in the spff mutant. The spff mutant yielded obligate parthenocarpic fruit under spontaneous production, and this was not observed in WT plants (Figure 2A). Compared to the pollinated WT fruits, the spff parthenocarpic fruits were smaller and lighter (Figures 2B-E). For cytological characterization of parthenocarpy at the early developmental stage, we prepared cross-sections of the ovaries at anthesis and examined the number of cell layers and cell size within the pericarp (Figure 3). The spff mutant cells were significantly larger than the WT cells, by approximately 1.3-fold (WT = 202 ± 17 µm², spff = 272 ± 11 µm²), and spff had fewer cell layers. This suggested that spff parthenocarpy was associated with cell enlargement, rather than active cell division. Further, the smaller flowers produced in spff reflected the presence of smaller constitutive tissues, including the petals, style, and anthers; the clearly defective anther may explain the male sterility of this mutant (Figures 4A-H). To evaluate the male fertility of spff, cross-sections of the WT and spff anthers at the anthesis stage were compared. The oval-shaped WT anther locules included pollen grains that showed a germination rate of approximately 60 ± 5% (Figures 4D,I,K,L). In contrast, the spff anther locules were shrunken and contained very few pollen grains, which were unable to germinate (Figures 4H,J-L); this indicated that the spff mutant was fully male sterile. In addition, histological observations of the spff and WT ovaries at a bud length of 4 mm indicated the presence of equivalent internal structures, except for their size (Figures 4M-O), consistent with the fact that spff retained substantial female fertility (Table 1). We also found that spff affected plant architecture, with an altered pattern of axillary shoot development (Supplementary Figures S4A-C). The lateral branches of spff showed increased sympodial growth, in which vegetative and inflorescence stems were more actively developed from the individual first axillary buds, leading to a bushy plant morphology. These data characterizing the phenotypes of spff indicated that the mutation conferred pleiotropic effects on both reproductive and vegetative morphology in tomato plants. We next compared yield potential between the WT and the spff mutant. 
Since the spff mutant showed a significant growth delay compared to the WT, leading to late fruit production (Figures 5A,B) and making comparative yield quantification difficult, WT and spff mutant plants were grown in a greenhouse for 112 and 173 days, respectively, until they nearly reached vegetative growth maturation, and the yield of ripe red fruits as well as the total number of fruits per plant was determined. The yield (total weight) of ripe red fruit in the spff mutant was reduced to 28% of that of the WT despite the longer growth period and higher number of fruits per plant, suggesting that this mutant has limited potential for improving yield (Figures 5C-F). Identification of the Gene Associated With the spff Phenotype The spff mutation was first mapped using the SolCAP SNP Infinium array analysis described above, which identified a candidate region on chromosome 4 (Supplementary Figure S5 and Table 1). According to the Kazusa Marker Database (based on SL2.40), this candidate region included 267 protein-coding genes. The tomInf4732 SNP, which discriminated between Ailsa-Craig and Micro-Tom alleles within the candidate region using primer set F4-R4 (Supplementary Table S2), was used to further genotype 73 F2 plants. These included 43 plants with spff phenotypes and 30 with WT phenotypes, allowing us to narrow down the region of interest to 2.0 Mbp, which included 205 genes. We next employed mapping-by-sequencing (Abe et al., 2012;Garcia et al., 2016) of an F2 population derived by crossing spff with WT in a Micro-Tom background. DNA from 20 individual mutant-phenotype F2 plants was sequenced by Illumina HiSeq, and cleaned reads were mapped onto the cultivar Micro-Tom reference genome; polymorphisms were called against the cultivar Heinz reference genome version SL2.40 (Kobayashi et al., 2014). The Bowtie2-Samtools-GATK pipeline identified and calculated the frequencies of potential spff-specific SNPs and Indels. This analysis identified 77 mutant homozygous SNPs and Indels within the region narrowed down by SNP Infinium analysis (Supplementary Table S3). These 77 mutations were present in the coding regions of 46 genes, which were considered to represent candidate genes for the spff phenotype (Supplementary Table S4). Five of these candidates (Solyc04g076020, Solyc04g076100, Solyc04g076250, Solyc04g076320, and Solyc04g077010) were chosen for further linkage analysis. These were selected because of their relatively high expression levels in flowers and fruits, according to the tomato eFP browser (Winter et al., 2007;The Tomato Genome Consortium, 2012), and because of the predicted impact of the mutation on the encoded protein. Their linkages with the spff phenotypes were analyzed using marker-based approaches in the F2 and backcrossed populations listed in Supplementary Table S1 with the five primer sets shown in Supplementary Table S2. The F18-R18 marker for a 2 bp deletion in the Solyc04g077010 gene (Figure 6B), which encodes an RLK, showed perfect segregation with the spff phenotypes. All of the 83 mutant-phenotype plants, and none of the 80 non-parthenocarpic plants, were homozygous for this mutation; the non-parthenocarpic plants were either heterozygous or azygous for this mutation, while the four other mutations were not perfectly linked with the spff phenotypes (Supplementary Table S5). We realized that the gene model of Solyc04g077010 in the tomato gene annotation ITAG2.3/SL2.40 differed from that in the latest ITAG3.2/SL3.0 (https://solgenomics.net/organism/Solanum_lycopersicum/genome), in which Solyc04g077010 consists of two exons spanning 2871 bp and encoding 957 amino acids. 
The mutation identified in the present study was located in the first exon and led to a frameshift, which introduced a premature stop codon at position 494 and therefore generated a truncated protein composed of 493 amino acids (Figure 6C). RLK proteins are structurally characterized by three conserved domains: a receptor domain containing a varying number of leucine-rich repeats; a transmembrane domain; and a kinase domain that transduces the downstream signal via autophosphorylation (Shiu and Bleecker, 2001). The RLK protein encoded by Solyc04g077010 harbors a single transmembrane domain between amino acids 505 and 524. This suggested that the mutation would cause a loss-of-function of this protein, thus resulting in the spff mutant phenotypes. To confirm this, RNAi was used to reduce Solyc04g077010 expression. The RNAi vector targeted the first exon of this gene, which encodes a highly specific receptor domain that a BLAST search confirmed is unlikely to be conserved in other tomato genes encoding RLK proteins. The RNAi vector was introduced into Micro-Tom plants and three transgenic lines were obtained; these showed significantly reduced mRNA expression of the target gene (Figure 7A). These three independent transgenic lines resembled the spff phenotypes, producing small flowers and parthenocarpic fruits (Figures 7B-H). Further, these RNAi lines showed complete male sterility, while pollination with WT pollen gave rise to mature viable seeds, as observed in the spff mutant. These analyses demonstrated that the spff phenotypes resulted from a loss-of-function of this RLK protein-encoding gene. Vasculature-Specific Expression of the SPFF Gene in the Flower Receptacle The in silico expression profile obtained by RNA-seq and RT-PCR analyses (Winter et al., 2007;The Tomato Genome Consortium, 2012) revealed that Solyc04g077010 was expressed in various plant organs, including roots, leaves, buds, and flowers (Supplementary Figures S7A, S8). Previously published transcriptome data indicated that this gene was expressed in floral organs both before and after anthesis, and transcripts were observed in individual floral organs including the ovary/pistil, anther, petal, and sepal, with the highest expression observed in the ovary/pistil at 1 day before anthesis (Supplementary Figure S7B). Interestingly, a spatiotemporal analysis of the transcriptome of developing tomato fruits (Fernandez-Pozo et al., 2017;Shinozaki et al., 2018b) revealed vasculature-specific expression of Solyc04g077010 in the fruit pericarp throughout development (Supplementary Figure S7C). Consistent with this, predominant expression of this gene was also found in the fruit internal tissues, columella and placenta (Supplementary Figure S7D), with a high abundance in thick vascular bundles. To unravel the spatio-temporal expression pattern of Solyc04g077010 during flower development, in situ mRNA hybridization was performed in WT floral buds at different stages of development. In the early developing 1.1 mm bud, the transcript signal was exclusively observed in the vasculature of the receptacle (Figures 8A,D). As development proceeded, the SPFF transcripts were also detected in the vasculature of the pedicel (2.9 mm bud) (Figures 8B,E), and in the vasculature of the columella tissue (4.5 mm bud) (Figures 8C,F). 
We also observed reduced SPFF transcripts in the receptacle and leaves of spff compared to the WT (Supplementary Figure S6A), indicating that the spff mutation influences both transcript abundance and protein function. Solyc04g077010 Mutation May Affect Hormonal Regulation at the Transcriptional Level To obtain insights into the molecular mechanisms underlying parthenocarpy in the spff mutant plant, the ovarian transcriptome at the anthesis stage, corresponding to the flower-to-fruit transition, was compared to that of WT plants. Our RNA-seq analysis identified a total of 25 differentially expressed genes; 13 of these were significantly up-regulated in spff plants (log2 fold-change > 1) and 12 were significantly down-regulated (log2 fold-change < −1) (q-values < 0.05 for the comparison with WT, Supplementary Table S6). Notably, the up-regulated genes in the spff ovary included SlGA20ox1 (Solyc03g006880), which encodes a key GA biosynthetic enzyme that is induced by pollination and is also highly expressed during parthenocarpy in the pat mutant (Olimpieri et al., 2007;Serrani et al., 2007b). In spff, the expression level of SlGA20ox1 was more than 10-fold that observed in the WT plant. This result suggested that GA is involved in the parthenocarpic early transition from flower to fruit exhibited by the spff mutant. To gain further insights into this, we compared our differentially expressed genes with previously published transcriptomic data obtained from GA-treated and -untreated unfertilized ovaries (Tang et al., 2015). One of our 13 up-regulated genes (SlGA20ox1) and three of our 12 down-regulated genes [Solyc02g078150 (Plant-specific domain TIGR01615 family protein), Solyc12g094620 (catalase), and Solyc05g005150 (F-box/Kelch repeat-containing F-box family protein)] were found in the lists of genes that were up- and down-regulated by GA treatment, respectively. Flower Receptacle Development Is Not Likely to Be Affected in the spff Mutant A database BLASTP search showed that the protein encoded by XYLEM INTERMIXED WITH PHLOEM1 (XIP1) is the closest Arabidopsis homolog of tomato SPFF, with 63% amino acid identity (E-value 0, score 1153 bits, and 77% positives) (GenBank accession no. BAC42540.1). Arabidopsis xip1 loss-of-function mutants showed excessive anthocyanin accumulation in the leaves and severe defects in plant growth, while fertility was not affected (Bryan et al., 2012). Here, the spff mutant did not show excessive anthocyanin accumulation in the leaves and showed severe male sterility (Figure 4 and Supplementary Figure S4D). Nevertheless, the fact that xip1 mutants show altered plant vascular development, represented by xylem intermixed with phloem, suggests a similar function for the SPFF protein, whose expression was indeed localized to the vasculature in the fruit and inflorescence tissues (Figure 8 and Supplementary Figure S7C). To examine this, we compared xylem-phloem distribution patterns between WT and spff mutant receptacles. Cross-sections of receptacles were stained with Safranin O and Astra blue to visualize lignified (seen as red) and unlignified (seen as blue) tissues. Supplementary Figure S9 shows that the stained receptacle cross-sections did not reveal significant xylem-phloem intermixing in the spff mutant. 
DISCUSSION The Gene Associated With the spff Phenotype Encodes a Putative RLK Involved in Flower and Fruit Development This study aimed to identify and characterize the gene underlying a newly isolated tomato mutant, named spff, which showed parthenocarpy and floral organ dwarfism as its major phenotypes (Figures 1-4). A high-throughput approach combining high-density genetic mapping (Supplementary Figure S5) and mapping-by-sequencing, followed by conventional genetic linkage analysis (Supplementary Tables S3-S5), allowed the rapid identification of a potential causal mutation in a gene located on chromosome 4, Solyc04g077010 (Figure 6). This gene encodes a potential RLK that appeared to be mainly expressed in the receptacle of young floral buds (Figure 8 and Supplementary Figure S6). A 2 bp deletion mutation was identified, which introduced a premature stop codon that leads to the production of a truncated RLK protein (Figure 6) as well as to reduced transcript abundance (Supplementary Figure S6). Using an RNAi approach, we confirmed that the spff phenotypes could be reproduced by silencing Solyc04g077010 (Figure 7), and thus concluded that this is the causative gene for the spff mutant. The Solyc04g077010 homolog in Arabidopsis, xip1, was reported to be involved in vascular bundle differentiation (Bryan et al., 2012). The xip1 mutant shows aberrant xylem-like cells within the phloem in inflorescence stems. Although Solyc04g077010 appeared to be expressed in close vicinity to the vascular bundle (Figure 8 and Supplementary Figures S6, S7), xylem-like cells were not present within the phloem (Supplementary Figure S9). Moreover, fertility was not affected in Arabidopsis xip1 mutant plants, where the inflorescence stems are shorter than those of the Col-0 accession plants, and the cotyledons and rosette leaves show a purple color, indicative of anthocyanin accumulation. Since these phenotypes were not observed in the present spff mutant (Figures 1, 4 and Supplementary Figure S4), Solyc04g077010 does not seem to be a functionally conserved ortholog of XIP1. It is more likely to be a novel gene that has possibly acquired a specific function in tomato, although further analyses are needed to confirm this functional dissimilarity with the Arabidopsis XIP1 gene. Hypothesis for How the spff Mutant Induces Parthenocarpy Parthenocarpy can mimic the molecular mechanisms underlying pollination-dependent ovary growth (Li et al., 2014). Fruit set initiation and parthenocarpy are regulated by complex hormone networks. Molecular genetic studies of many mutants/genotypes and transcriptome analyses of early fruit development have suggested that parthenocarpy is in part induced through a hierarchical scheme of temporal regulation by multiple hormones, initiated by the accumulation of auxin; this induces intense cell division, with the subsequent induction of GA metabolism triggering active cell expansion (Martí et al., 2007;Serrani et al., 2007a, 2008). Thus, GA should act as the downstream signal, and cell expansion most likely plays a crucial role in fruit set initiation in tomato (Serrani et al., 2008;Shinozaki et al., 2015). The present study revealed that the spff mutant exhibited higher levels of GA20ox1 than WT plants (Supplementary Table S6); this is one of the key factors involved in GA biosynthesis in tomato ovaries (Olimpieri et al., 2007;Serrani et al., 2007b). 
Further, three GA-down-regulated genes (Solyc02g078150, Solyc12g094620, and Solyc05g005150) were found in the list of differentially expressed genes identified by the RNA-seq analysis in the unfertilized ovary of the spff mutant (Supplementary Table S6). In addition, the small parthenocarpic fruits produced by the spff mutant were characterized by enlarged cells, rather than an increased number of cell layers in the ovary pericarp, most likely due to a lack of intense cell division (Figure 3). This was consistent with the characteristics of parthenocarpic fruit induced by increased GA sensitivity (Martí et al., 2007). In contrast, auxin-induced parthenocarpy is associated with intensive cell division in the pericarp, resulting in an increased number of cell layers (Wang et al., 2009). The spff mutant also showed reduced pollen fertility (Figures 4I-L), which could reflect an increased GA response (Livne et al., 2015). These results suggest that the RLK encoded by Solyc04g077010 functions to repress the GA response in reproductive organs, and that spff parthenocarpy may result in part from an increased GA response. Additionally, the association of parthenocarpy with early male organ developmental abnormality has been observed in tomato plants. Mutations or genetic suppressions of MADS-box genes, which inhibit functional stamen development by causing homeotic conversions, can induce parthenocarpy (Pnueli et al., 1994;Ampomah-Dwamena et al., 2002;Mazzucato et al., 2008;Quinet et al., 2014;Okabe et al., 2019). Furthermore, the over-accumulation of BARNASE mRNA under a stamen-specific promoter triggers early anther ablation and parthenocarpy (Medina et al., 2013), while loss of function of SEXUAL STERILITY/HYDRA results in complete male sterility and parthenocarpy (Hao et al., 2017;Rojas-Gracia et al., 2017). Recently, a tap3 mutant has also been described in which stamens are converted into a carpelloid structure and GA over-accumulates in unfertilized ovaries, most likely due to the overexpression of GA metabolism genes such as GA20ox1 (Okabe et al., 2019). Taken together with the fact that the spff mutant shows male sterility and that GA20ox1 is highly expressed in the unfertilized ovary of the spff mutant (Supplementary Table S6), it is possible that parthenocarpy in the spff mutant involves increased levels of GA20ox1 transcripts associated with male sterility. Since our transcriptome analysis revealed no differential expression of MADS-box genes between WT and spff mutants (Supplementary Table S6), and no homeotic conversion phenotypes were observed in the spff mutant (Figures 1, 4), the association of floral homeotic genes with the Solyc04g077010 gene, and the mechanisms involved in GA20ox1 gene regulation, require further elucidation. The in situ mRNA analysis showed that Solyc04g077010 was strongly expressed in vascular bundle cells of the floral receptacle and pedicel (Figure 8). Vascular systems in inflorescence stems are important for nutrient and signal transportation during developmental events in the reproductive organs (Rančić et al., 2010). We therefore hypothesize that the RLK encoded by Solyc04g077010 may be involved in the transportation of molecular substances essential for normal floral organ development, and that loss-of-function mutations of this gene may lead to the disruption of the integrity of such a system, which may then cause anther abortion. 
Since we identified little cytological evidence for structural differences between the vascular bundles observed in WT and spff mutant plants (Supplementary Figure S9), future studies are required to investigate this possibility in more detail. Although the role of RLK family proteins in the regulation of fruit development has yet to be fully delineated, a cell-type-specific transcriptome study of tomato ovaries showed that several genes encoding RLKs were enriched in the cluster that is mainly expressed in the funiculus of the developing seed. These included a homolog of the Arabidopsis HAESA gene, which is involved in specifying seed abscission zones, suggesting that the tomato homolog may possess a similar function (Pattison et al., 2015). Furthermore, silencing of an invertase inhibitor gene in the SlINVINH1-RNAi line, causing increased cell wall invertase activity, was associated with an overall reduction in the transcription of RLK family members in young ovaries, suggesting that RLKs may play a role in sensing the modification of cell wall components, thereby regulating downstream gene expression (Ru et al., 2017). Elucidation of RLK activities, including the identification of ligands and kinase domain target proteins, would provide valuable insights into the involvement of RLK proteins in the regulation of fruit development. CONCLUSION In conclusion, this study identified a novel tomato mutant showing parthenocarpy, which was caused by loss of function of a gene encoding a receptor-like kinase, designated SPFF. The parthenocarpic variety potentially shows improved fruit productivity due to increased fruit set efficiency (Shinozaki et al., 2018a), although spff showed delayed growth, smaller mature fruits, and reduced yield compared to the WT (Figure 5). Such unfavorable traits render this mutant less attractive for breeding application, but it would be interesting to identify hypomorphic (weaker) alleles of spff carrying less detrimental phenotypes through screening of TILLING populations or genome editing approaches (Shimatani et al., 2017), and to investigate their potential impact on breeding applications. AUTHOR CONTRIBUTIONS YS and TA contributed to the mutant screening. HT, YS, RY, and TA contributed to genetic mapping and transcriptomic analysis. HT and YS performed phenotypic characterizations of mutant plants. SK and HT contributed to expression analysis. HT, MH, and CC contributed to histological analysis and in situ hybridization assays. HT, YS, MH, CC, HE, and TA wrote the manuscript. All authors reviewed and approved the final manuscript.
2019-04-16T13:31:45.176Z
2019-04-16T00:00:00.000
{ "year": 2019, "sha1": "c19be258abfd711b4af0e77c77658063451ecbc9", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpls.2019.00403/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c19be258abfd711b4af0e77c77658063451ecbc9", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
253578443
pes2o/s2orc
v3-fos-license
Prevalence of Primary Dysmenorrhea, Its Intensity and Associated Factors Among Female Students at High Schools of Wolaita Zone, Southern Ethiopia: Cross-Sectional Study Design Introduction Primary dysmenorrhea is a highly prevalent gynecological problem and one of the most common causes of school absenteeism among school adolescents. Nearly half of females with primary dysmenorrhea missed school or work at least once per cycle. Therefore, this study aimed to assess the prevalence of primary dysmenorrhea and its associated factors among female students in Wolaita Soddo town high schools. Methods An institution-based cross-sectional study was conducted among female students at Wolaita Soddo town high schools from October 1–30/2021. A total of 733 students were selected using a simple random sampling technique. The data were entered using Epi data version 3.1 and exported to SPSS version 25 for analysis. Binary logistic regression analysis was used. Variables with a p-value of <0.05 in the multivariable logistic regression analysis model were considered statistically significant. Results The prevalence of primary dysmenorrhea was 70% (95% CI (66.6%, 73.4%)). Factors such as age <18 years (AOR 2.55; 95% CI (1.77, 3.68)), long duration of menstrual flow (AOR 2.72; 95% CI (1.42, 5.17)), irregular menstrual cycle (AOR 2.39; 95% CI (1.68, 3.41)), family history of dysmenorrhea (AOR 2.46; 95% CI (1.67, 3.64)) and skipping breakfast (AOR 1.62; 95% CI (1.13, 2.33)) were associated with primary dysmenorrhea. Conclusion The prevalence of primary dysmenorrhea was high among high school students in the study area. Younger age, long menstrual flow duration, an irregular monthly menstrual cycle, a family history of dysmenorrhea, and skipping breakfast were determinants of primary dysmenorrhea. Introduction Dysmenorrhea is defined as the presence of painful cramps of uterine origin that occur during menstruation.1 Dysmenorrhea can be divided into two broad categories: primary and secondary. Primary dysmenorrhea is described as recurrent, cramping pain occurring with menses in the absence of identifiable pelvic pathology. Secondary dysmenorrhea is menstrual pain associated with underlying pelvic pathologies such as endometriosis, uterine myomas, pelvic inflammatory disease, ovarian cysts, intra-uterine adhesions and cervical stenosis. The most common symptoms of dysmenorrhea are crampy lower abdominal pain, back pain, nausea/vomiting and headache.2 The cause of primary dysmenorrhea is not well established. However, hyper-production of uterine prostaglandins has been identified as a likely cause, resulting in increased uterine tone and high-amplitude contractions. Women with dysmenorrhea have higher levels of prostaglandins, which are highest during the first two days of menses. Prostaglandin production is controlled by progesterone; when progesterone levels drop immediately prior to menstruation, prostaglandin levels increase. This increase in prostaglandin levels causes muscle contraction in the uterus, which causes pain during menstrual flow.1,2 Primary dysmenorrhea is the most commonly reported gynecological and menstrual disorder. It affects a large proportion of women of reproductive age, amounting to millions of women during their reproductive years.1,3 Globally, previous epidemiological investigations have reported that the magnitude of dysmenorrhea ranges from 41.7% to 94%.4,5 In sub-Saharan Africa, the prevalence of primary dysmenorrhea also ranges from 51.1% to 88.1%.6,7 
In Ethiopia, the prevalence of primary dysmenorrhea ranges from 62.3% to 85.4%.3,8 The common risk factors for primary dysmenorrhea are a positive family history of dysmenorrhea, obesity, younger age, a shorter or longer menstrual cycle interval, stress, menstrual cycle irregularity, early menarche (before 12 years) and circumcision.11,12 Primary dysmenorrhea causes incapacitating, severe menstrual pain in approximately 10% of females during adolescence and early adulthood. In addition, it is severe enough to result in significant socioeconomic dysfunction and disability, particularly in adolescents and young women.9 In the United States, an estimated 600 million work hours and 2 billion dollars of economic loss are associated with dysmenorrhea. It has a significant negative impact on students' academic performance.7 Several studies have stated that primary dysmenorrhea usually affects relationships, functioning, and productivity, contributes to absenteeism from class/work and reduces day-to-day activities.10 Despite this, the problem is considered to be underestimated and untreated, as most women do not seek medical treatment because they commonly perceive that pain is an expected part of menstruation. For example, about 85.8% of females do not seek medical care/advice, which indicates that screening all adolescent girls for primary dysmenorrhea is important.28 Such findings can guide the design of effective menstrual health education programs and the development of strategies to compensate for lost classes and improve poor academic performance. Developing appropriate management and preventive strategies is important to reduce the health impact of dysmenorrhea among adolescent girls.9 However, studies on the status of dysmenorrhea and associated factors among female high school students are scarce in southern Ethiopia. Therefore, this study aimed to determine the prevalence of primary dysmenorrhea and its associated factors among female students at Wolaita Soddo town high schools, southern Ethiopia. Study Setting and Period A cross-sectional study was conducted among female students at Wolaita Soddo town high schools from October 1-30, 2021. Wolaita Soddo town is the capital city of the Wolaita zone, found in the Southern Nations, Nationalities, and Peoples' Region of Ethiopia. It is about 320 km away from Addis Ababa, the capital of Ethiopia. There are seven public and four private high schools in the town, serving a total of 12,792 students, of whom 6580 are female. Source Population and Study Population All female high school students attending their education in Wolaita Soddo town were used as the source population. On the other hand, all randomly selected female high school students who were present in the four selected high schools during the data collection period were taken as the study population. Inclusion and Exclusion Criteria Female students attending the selected high schools in Soddo town were included in the study. On the other hand, female students who had a known, diagnosed medical history of pelvic pathology were excluded from the study. Sample Size Determination The required sample size was computed using Open Epi V.3.03 statistical software. The following assumptions were considered: a confidence level of 95%, a marginal error of 5%, a design effect of 2 and the prevalence of primary dysmenorrhea from a previous study (64.7%).13 Based on these assumptions, the required sample size was 660. 
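The Open Epi computation can be reconstructed approximately with the standard single-proportion formula, as sketched below; the use of a finite-population correction for N = 6580 and the rounding conventions are assumptions here, which is why the result lands near, rather than exactly on, the reported 660 and 733.

```python
# Sketch of an Open Epi-style sample-size computation for a prevalence study.
# Assumptions: p = 0.647 (prior prevalence), z = 1.96 (95% confidence),
# d = 0.05 (absolute margin of error), design effect 2, and an optional
# finite-population correction for N = 6580 female students.
import math

def sample_size(p, d=0.05, z=1.96, deff=2.0, N=None):
    n = (z ** 2) * p * (1 - p) / d ** 2     # infinite-population core
    if N is not None:                       # finite-population correction
        n = (N * n) / (n + N - 1)
    return math.ceil(deff * n)

n_no_fpc = sample_size(0.647)           # ~702 without the correction
n_fpc = sample_size(0.647, N=6580)      # ~667 with the correction (cf. 660)
n_final = math.ceil(n_fpc * 1.10)       # plus 10% non-response (cf. 733)
print(n_no_fpc, n_fpc, n_final)
```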
Finally, considering a 10% non-response rate, the required sample size was 733. Sampling Technique and Procedure Study participants were selected from all Soddo town high schools using a multistage stratified sampling technique. First, the eleven schools were stratified into seven public and four private schools. Then, three schools were selected from the public schools and one from the private schools using simple random sampling. The selected schools were stratified and proportionally allocated to their grades from 9 to 12, and the list of female students from each grade was used as a sampling frame. Finally, the study participants were selected using a simple random sampling technique after the required sample size was proportionally allocated to each grade. Data Collection Tools and Procedures Data were collected using a pretested, structured, self-administered questionnaire developed based on a review of the related literature. The questionnaire contained items on sociodemographic and economic factors, menstrual characteristics of respondents, and lifestyle- and behavior-related factors; the severity of dysmenorrhea was measured using a 10-point visual analogue scale (VAS).13 The tool was first developed in English, translated into Amharic, and then translated back to English for consistency. Three BSc nurses and one MPH-level health professional were recruited and trained as data collectors and supervisor, respectively. Operational Definition Primary dysmenorrhea: students who had pain in the abdomen, thighs or lower back one day before and/or on the first to third day of menstruation in the last 3 months.14 To measure the intensity of primary dysmenorrhea, a 10-point numerical rating scale (NRS) was used to represent the continuum of the students' perceived degree of pain, classified as mild (1-3), moderate (4-7) and severe (8-10).13 Data Processing and Analysis The collected questionnaires were first manually checked for completeness, and then the data were coded and entered using Epi data version 3.1 and exported to SPSS version 26 for data analysis. Descriptive statistics, such as frequency, percentage, mean and standard deviation, were used to describe the study population. A bivariable binary logistic regression model was computed to test for crude associations between primary dysmenorrhea and the independent variables and to identify candidate variables for multivariable analysis. All variables with p < 0.25 in the bivariable analysis were included in the multivariable binary logistic regression analysis. Significant factors were identified based on a p-value of <0.05. Finally, text, tables and graphs were used to present the results. Sociodemographic Characteristics of the Participants and Their Parents A total of 707 female students participated in the study, with a response rate of 96.4%. The mean (SD) age was 16.7 ± 1.32 years, with a minimum age of 14 and a maximum age of 19 years. About nine-tenths (636, 90%) of the study participants were Wolaita by ethnicity. Of the respondents, the majority (600, 84.9%) were urban dwellers and nearly two-thirds (482, 68.2%) lived with their parents. Regarding parents' educational status, more than half of the fathers (413, 58.4%) had an educational status of secondary or higher (Table 1). Obstetric and Gynecological Related Characteristics Two-thirds of the participants (478, 67.6%) started menarche at 13-14 years, with a mean age at menarche of 13.39 years. 
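The analysis plan described under Data Processing and Analysis (NRS severity grading, a bivariable screen at p < 0.25, then a multivariable logistic model reporting adjusted odds ratios) can be illustrated with the following sketch. The data frame, column names, and use of Python's statsmodels are illustrative assumptions, not the authors' actual SPSS workflow.

```python
# Illustrative reconstruction of the described analysis pipeline on toy data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def nrs_grade(score: int) -> str:
    """10-point NRS: 1-3 mild, 4-7 moderate, 8-10 severe."""
    return "mild" if score <= 3 else ("moderate" if score <= 7 else "severe")

def fit_logit(df, outcome, predictors):
    X = sm.add_constant(df[predictors])
    return sm.Logit(df[outcome], X).fit(disp=0)

predictors = ["age_lt18", "flow_gt7d", "irregular_cycle",
              "family_history", "skips_breakfast"]
rng = np.random.default_rng(0)  # toy binary data in place of the survey
df = pd.DataFrame(rng.integers(0, 2, (500, 6)),
                  columns=["dysmenorrhea"] + predictors)
df["severity"] = [nrs_grade(s) for s in rng.integers(1, 11, 500)]

# Bivariable screen: keep predictors with p < 0.25 (fall back to all on toy data).
candidates = [v for v in predictors
              if fit_logit(df, "dysmenorrhea", [v]).pvalues[v] < 0.25] or predictors

# Multivariable model: adjusted odds ratios (AOR) with 95% CIs.
model = fit_logit(df, "dysmenorrhea", candidates)
ci = model.conf_int()
aor = pd.DataFrame({"AOR": np.exp(model.params),
                    "CI_low": np.exp(ci[0]),
                    "CI_high": np.exp(ci[1])}).drop("const")
print(aor.round(2))
```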
More than half (398, 56.3%) reported a menstrual duration of 3-7 days, and more than three-fourths of them (568, 80.3%) reported a normal amount of menstrual flow. Nearly one-tenth (70, 9.9%) of the study participants had ever used hormonal contraceptives. Moreover, 282 (39.9%) of the students reported a family history of dysmenorrhea. Prevalence of Primary Dysmenorrhea and Its Intensity According to this study, 495 (70.0%; 95% CI 66.6-73.4%) of the students reported that they were suffering from primary dysmenorrhea. According to the numeric rating scale (NRS), 203 (41.0%) experienced mild pain, 181 (36.6%) moderate pain and 111 (22.4%) severe pain (Figure 1). More than two-thirds of the students (338, 68.3%) had pain that started a few days before menstrual flow, and in almost half of the students (245, 49.5%) the pain resolved within one day of menstrual flow. The location of this pain varied among students and was mostly reported in the lower abdomen (341, 68.9%), followed by the lower back (103, 20.8%) and abdominal pain extending to the thighs (10.1%). Backache and fatigue were the most common symptoms associated with primary dysmenorrhea, and drinking coffee or tea and getting rest were the most preferred management options used by students during menstrual pain (Table 4). Discussion Adolescence is the transition period from puberty to early adulthood, during which physical, emotional and psychological changes occur in the body. Menarche is a significant landmark of adolescence that prepares girls for future motherhood. The present study was conducted to assess one of these menstrual problems, primary dysmenorrhea and its associated factors, among adolescent girls. The prevalence of primary dysmenorrhea among the study participants was 70%. Of these, 203 (41.0%), 181 (36.6%), and 111 (22.4%) rated their pain intensity as mild, moderate and severe, respectively. The findings of this study were comparable with those of previous studies reported in Debre Markos (69.3%),9 Hararegie (69.26%),15 Ghana (68.1%)16 and Brazil (73%).17 However, the prevalence in this study was relatively lower than that reported in Egypt (76.1%),18 Benin (78.3%),19 Oman (94%),4 Kuwait (85.6%)20 and Romania (78.4%).29 The possible reasons for the discrepancies in the estimated prevalence may be socio-cultural differences of the study participants in pain perception during menstruation and lifestyle differences. In contrast, the prevalence was relatively higher than that reported in studies conducted among university students in Hawassa (51.5%),11 Nigeria (51.1%),6 China (41.7%),5 South Korea (58.8%)21 and Georgia (52%).22 This inconsistency is probably because the prevalence of primary dysmenorrhea is higher among adolescents and decreases with increasing age, whereas the studies performed among university students covered an age range between 18 and 29 years. In this study, younger age was significantly associated with primary dysmenorrhea. Participants aged <18 years (14-17 years) were 2.55 times more likely to experience primary dysmenorrhea than those aged ≥18 years. This finding was in line with those of the studies conducted in Benin,19 Nigeria6 and Iran.23 This may be because primary dysmenorrhea is more frequent in young virgin girls and those who have not given birth, and its prevalence decreases with increasing age. A longer duration of menstrual bleeding (>7 days) was an important risk factor for primary dysmenorrhea; students with a long duration of menstrual flow (>7 days) were 2.72 times more likely to develop primary dysmenorrhea. 
This finding is supported by studies from Mekele,12 Nigeria,6 South Korea,21 Italy24 and India.25 Menstrual irregularity was also one of the contributing factors for primary dysmenorrhea. Those students who had irregular menstruation were 2.39 times more likely to have primary dysmenorrhea. This finding was consistent with studies in Debre Tabor,3 Gondar,13 Hawassa,11 Ghana16 and Egypt.18 The possible explanation might be an immature hypothalamo-pituitary-ovarian axis, or changing lifestyle trends, changing dietary habits and tough competition, which are responsible for psychological stress among adolescents; the irregularity of menstruation could also cause steroid hormone fluctuations that might lead to primary dysmenorrhea.26 Family history of dysmenorrhea was another predictor of the presence of primary dysmenorrhea. It was found that primary dysmenorrhea was 2.4 times more prevalent among those respondents who had a family history of dysmenorrhea. This was supported by studies in Hawassa,11 Debre Tabor,3 Gondar,13 Benin,19 India25 and among Georgian secondary school students;22 this could be related to behaviors that girls learn from their mothers, whether for possible societal reward or for controlling pain. There might also be a psychological component: daughters may react to menstruation similarly to their mothers and may share the same attitudes and taboos towards menses.27 Our study demonstrated that breakfast skipping significantly increases the prevalence of primary dysmenorrhea. Students who skipped breakfast were 1.62 times more likely to develop primary dysmenorrhea. This finding is compatible with studies done in India,25 Georgia22 and China,9 but contrasts with a study done at Hawassa University, in which breakfast skipping appeared preventive rather than a risk factor.11 Nevertheless, it has been demonstrated that diet can influence menstrual symptoms. Limitation of the Study A limitation of this study is the fact that temporal relations could not be established, since the study design was cross-sectional. The study variables were measured by participant self-report, and there could be recall bias, as the students were asked about events within the last three months. However, this study still provides important insights regarding primary dysmenorrhea and associated risk factors among female secondary school students. Conclusions A high proportion of female secondary school students suffered from primary dysmenorrhea. Students of younger age and those with a long duration of menstrual flow, an irregular cycle, a family history of dysmenorrhea and a habit of skipping breakfast were more likely to develop primary dysmenorrhea. Abbreviations AOR, adjusted odds ratio; CI, confidence interval; COR, crude odds ratio; SD, standard deviation; SPSS, Statistical Package for Social Science; VAS, visual analogue scale. Data Sharing Statement All the minimal data sets used to reach the conclusions drawn in the manuscript are included within the manuscript. Ethical Approval and Consent to Participants Ethical clearance was obtained from the institutional review board (IRB) of Wolaita Soddo University, College of Health Sciences, School of Public Health (Ref. No. CRCSD9/03/2014). An official letter was received from the School of Public Health and submitted to the Soddo town education office in order to obtain official permission for data collection. 
Informed consent was obtained from respondents above 18 years of age; for those under 18 years, oral assent was obtained from them and consent from their parents before collecting the data. All relevant ethical principles under the Declaration of Helsinki were followed and respected.
2022-11-18T06:12:04.729Z
2022-11-09T00:00:00.000
{ "year": 2022, "sha1": "b462141b3014f071cd4f6aea8d2f7cf4743e9b2d", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=85316", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b462141b3014f071cd4f6aea8d2f7cf4743e9b2d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
117840494
pes2o/s2orc
v3-fos-license
Exploring non-linear cosmological matter diffusion coefficients Since microscopic velocity diffusion can be incorporated into general relativity in a consistent way, we study cosmological background solutions when the diffusion phenomena takes place in an expanding universe. Our focus here relies on the nature of the diffusion coefficient $\sigma$ which measures the magnitude of such transport phenomena. We test dynamics where $\sigma$ has a phenomenological dependence on the scale factor, the matter density, the dark energy and the expansion rate. Introduction For most of the universe's lifetime its dynamics can be approximated by a simple expanding, dust-matter-dominated sphere. The hot radiative (Big Bang) primordial universe cools down quickly until the radiation energy drops to the same level as the matter energy density. This happens very soon, when the universe is only ∼ 50 Kyrs old. During the following 10 Gyrs the total cosmic energy budget of the universe is well approximated by a pressureless matter fluid. This is the matter dominated epoch where most of the main astrophysical effects take place, such as the formation of stars, galaxies and clusters of galaxies. The matter component can be divided into two distinct contributions: the first one is the expected baryonic sector which contains the known heavy particles of the standard particle model. The second contribution comes from an unknown component called dark matter which is at least five times more abundant than the baryonic matter and is the building block of any successful cosmological theory. The matter domination era is a necessary stage for the formation of structures, but it ends when the universe is ∼ 10 Gyrs old. From this moment on, another form of energy, called dark energy, accelerates the background expansion slowing down the agglomeration rate. The nature of the dark energy is also still unknown. The simplest explanation for this effect relies on the existence of a cosmological constant Λ. However, one could admit different descriptions for the dark energy phenomena, like scalar fields, which may (Amendola 2000;Zimdahl et al 2003;Dalal et al 2011;Castro et al 2012) or may not interact with the other cosmic components. In the standard model described above the matter dynamics is therefore described by the relativistic Euler equation ∇_µ T^µν = 0 for the matter fluid energy-momentum tensor. In particular, the fluid feels only indirectly (via the gravitational potential) the presence of other components, e.g., photons, neutrinos and dark energy. If we assume fluid particles undergoing velocity diffusion in a background medium, it was shown in (Calogero 2011;Calogero 2012) that the matter dynamics can be described by the equations ∇_µ J^µ = 0, ∇_µ T^µν = σ J^ν. (1) The first equation guarantees the standard conservation law for the particles current density J^µ. The quantity σ in the second equation is the (positive) diffusion coefficient, which measures the energy transferred to the fluid particles by the diffusion forces per unit of time. So far only the case of a constant σ has been considered in the literature, see e.g. (Calogero & Velten, 2013;Shogin et al 2013), but here the possibility that σ varies through space-time will be considered. 
Since the second relation in (1) states that, in the presence of diffusion, the matter energy-momentum tensor is not a divergence-free quantity, and having in mind Bianchi's identities, it is clear that the space-time geometry cannot be determined by the standard Einstein field equations of general relativity. The inconsistency with the Bianchi identities can be circumvented by adding a cosmological scalar field φ to the Einstein equation, which thereby becomes G_µν + φ g_µν = T_µν, (2) where G_µν is the Einstein tensor and we use physical units such that 8πG = c = 1. The scalar field φ plays the role of the background medium in which diffusion takes place. Taking the divergence ∇^µ of both sides of eq. (2), we obtain that φ obeys ∇_ν φ = σ J_ν. (3) In order to avoid the need to introduce a new evolution equation for σ, and at the same time to ensure that the value of σ is coordinates-independent, we assume that σ = f(s), where s is a scalar invariant quantity constructed from φ, g_µν, J^µ and T^µν. The simplest choices for the scalar invariant s are s_1 = −J_µ J^µ, s_2 = g_µν T^µν, s_3 = φ. (4) In the next section we present the basic equations for a viable cosmological model based on the diffusion theory outlined above. This model extends the one studied in (Calogero & Velten, 2013) by considering a time dependent diffusion coefficient σ. Cosmological model with variable matter diffusion A viable cosmological model in which dark matter undergoes microscopic velocity diffusion into a dark energy solvent field φ has been developed in (Calogero 2012;Calogero & Velten, 2013). This model is obtained from the general diffusion theory described in the Introduction under the following assumptions: (i) the matter content is described by a pressureless fluid, i.e., the energy-momentum tensor T^µν and the current density J^µ are given by T^µν = ρ u^µ u^ν and J^µ = n u^µ, where ρ is the energy density, n the particle number density and u^µ the four-velocity field of the dust fluid; (ii) the universe is spatially homogeneous, isotropic and flat and so in particular the space-time metric can be written in the form ds² = −dt² + a(t)² (dx² + dy² + dz²), where a(t) is the scale factor and a subscript 0 indicates the evaluation at time t = 0; (iii) the diffusion coefficient σ is a positive constant. The resulting cosmological model has been called the φCDM model in (Calogero & Velten, 2013) and is described by the following system on the standard normalized energy densities Ω_m = ρ/(3H₀²) and Ω_φ = φ/(3H₀²): dΩ_m/dz = 3Ω_m/(1+z) − σ̂ (1+z)²/E(z), (5) dΩ_φ/dz = σ̂ (1+z)²/E(z), (6) E(z)² = Ω_m(z) + Ω_φ(z), (7) where E = H/H₀ and σ̂ = σ n₀/(3H₀³) is the dimensionless diffusion constant. For σ̂ = 0 the φ field remains constant in time and the solution is given by the ΛCDM model: Ω_m(z) = Ω_m0 (1+z)³, Ω_φ(z) = Ω_Λ, E(z)² = Ω_m0 (1+z)³ + Ω_Λ. Equations (5) and (6) denote a coupled system where, since σ in (1) measures the energy gained by the fluid particles, energy flows from the dark energy field to the matter; the direction of the flux is due to the fact that σ̂ > 0. Interacting models play an important role to alleviate the cosmic coincidence problem, i.e., the fact that only today the dark matter and dark energy densities are of the same order of magnitude. Usually the interaction term in the right hand side of equations (5) and (6) is incorporated in an ad hoc way. Therefore, the diffusion mechanism appears as a genuine physical mechanism responsible for the interaction in the dark sector. In the rest of the paper we assume that σ̂ in the equations above is time-dependent. We will employ the following phenomenological choices: σ̂_(a) = σ̂₀ a^k, σ̂_(ρ) = σ̂₀ (ρ/ρ₀)^λ, σ̂_(φ) = σ̂₀ (φ/φ₀)^δ, σ̂_(H) = σ̂₀ (H/H₀)^h. (8) We remark that, since for the model under discussion the scalar invariants s_1, s_2 in (4) are given by s_1 = n² = n₀² (1+z)⁶ and s_2 = −ρ = −3H₀² Ω_m(z), the choices σ̂_(a), σ̂_(ρ) and σ̂_(φ) correspond respectively to a diffusion coefficient that is a power of the scalar invariants s_1 = n², s_2 = −ρ, s_3 = φ, to which (4) reduce in the dust fluid case. 
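To make the effect of a time-dependent σ̂ concrete, the following minimal numerical sketch integrates the background system in redshift under the assumptions spelled out above. The σ̂ normalization, the variable names, and the specific power-law forms are conventions of this sketch (consistent with Eqs. (5)-(8) as reconstructed here), not necessarily those of the original analysis.

```python
# Background evolution of the phiCDM diffusion model, integrated in redshift:
#   dOm/dz = 3*Om/(1+z) - q(z),  dOphi/dz = +q(z),
#   q(z) = sigma_hat(z) * (1+z)**2 / E(z),  E**2 = Om + Ophi,
# so that with sigma_hat > 0 energy flows from phi (dark energy) to matter.
import numpy as np
from scipy.integrate import solve_ivp

OM0, OPHI0, SIGMA0 = 0.3, 0.7, 0.1   # reference values used in the text

def sigma_hat(z, om, ophi, E, choice, expo):
    """One power-law form per phenomenological choice in (8)."""
    if choice == "a":   return SIGMA0 * (1 + z) ** (-expo)      # ~ a**k
    if choice == "rho": return SIGMA0 * (om / OM0) ** expo      # ~ rho**lambda
    if choice == "phi": return SIGMA0 * (ophi / OPHI0) ** expo  # ~ phi**delta
    if choice == "H":   return SIGMA0 * E ** expo               # ~ H**h
    return SIGMA0                                               # constant case

def rhs(z, y, choice, expo):
    om, ophi = y
    E = np.sqrt(om + ophi)
    q = sigma_hat(z, om, ophi, E, choice, expo) * (1 + z) ** 2 / E
    return [3 * om / (1 + z) - q, q]

sol = solve_ivp(rhs, (0.0, 3.0), [OM0, OPHI0], args=("a", 2.0),
                dense_output=True, rtol=1e-8)
z = np.linspace(0.0, 3.0, 7)
om, ophi = sol.sol(z)
print(np.round(np.sqrt(om + ophi), 3))                   # diffusion E(z)
print(np.round(np.sqrt(OM0 * (1 + z) ** 3 + OPHI0), 3))  # LambdaCDM E(z)
```

Increasing the exponent (here k = 2 in the scale-factor choice) suppresses σ̂ in the past and drives the two printed E(z) curves together, which is the sense in which the diffusion dynamics can be tuned toward the ΛCDM expansion.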
Cosmological background dynamics Let us now investigate how the different options for the coefficient σ affect the background dynamics of the cosmological model. In our analysis we fix the reference values Ω_m0 = 0.3 (Ω_Λ = Ω_φ0 = 0.7) and H_0 = 70 km s^-1 Mpc^-1. Moreover, the present-day magnitude of the diffusion coefficient is fixed at σ_0 = 0.1. Although the background observational data can be described by this value, the structure formation process is severely affected by the diffusion mechanism; the analysis using matter power spectrum data imposes the upper bound σ_0 < 0.01 (Calogero & Velten, 2013). However, we keep the reference value σ_0 = 0.1 as a guide, since here we are mostly concerned with the background expansion. Indeed, depending on the values of k, λ, δ and h, the resulting diffusive dynamics becomes closer to the ΛCDM model, thus allowing for larger values of σ_0. The results of our analysis are contained in Figs. 1 and 2, where we plot respectively the Hubble function and the fractional densities corresponding to the different choices of the time-dependent diffusion coefficient σ. In each plot the dynamical quantities for the ΛCDM model are shown with solid lines. The case of a constant σ = σ_0 has already been shown in (Calogero & Velten, 2013). The observational data points displayed in Fig. 1 are based on a technique which uses the differential age of old red galaxies; they were compiled in (Farooq et al 2013). The main conclusion that can be drawn from the figures is that the diffusion dynamics can be made arbitrarily close to those of the ΛCDM model by choosing the exponent k positive and large, or the exponents λ, δ, h negative and with large absolute value. Conclusions We have investigated the background evolution of a cosmological model where the matter component undergoes microscopic velocity diffusion into the dark energy field, which acts as the diffusion solvent. Previously, the case of a constant diffusion coefficient σ = const was studied in (Calogero & Velten, 2013). In this contribution we consider different temporal dependences for the diffusion coefficient, derived by postulating a power-law dependence of σ on the other dynamical variables of the model. Our main result can be stated in the following way: by a proper choice of the exponent in the power law, the dynamics of the diffusion model can be made arbitrarily close to those of the ΛCDM expansion. In some sense, this means that even for "high" values of the present-day diffusion coefficient σ_0, such as σ_0 = 0.1, an appropriate time dependence can alleviate the diffusion effects on the cosmic background dynamics. In any case, following the results of (Calogero & Velten, 2013), a study making use of cosmological perturbation theory is mandatory: the cosmic matter diffusion is very sensitive to such an analysis, and the strongest constraints come from the structure formation process. We hope to deal with this issue in a future communication.
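For comparison with the solid ΛCDM curves referred to above, the reference expansion history for the quoted values can be evaluated directly; this short check is independent of the diffusion integration and uses only the standard flat ΛCDM formula:

import numpy as np

H0, Om0 = 70.0, 0.3                               # km/s/Mpc and matter fraction
z = np.linspace(0.0, 2.0, 5)
H = H0 * np.sqrt(Om0 * (1 + z)**3 + (1 - Om0))    # flat LCDM Hubble function
Omega_m = Om0 * (1 + z)**3 * (H0 / H)**2          # fractional matter density
print(np.round(H, 1))
print(np.round(Omega_m, 3))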
2014-07-16T13:38:52.000Z
2014-07-16T00:00:00.000
{ "year": 2014, "sha1": "ca88c4e4e194ba574b4b5b398e6fc6ed52a0cab7", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "ca88c4e4e194ba574b4b5b398e6fc6ed52a0cab7", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
231718071
pes2o/s2orc
v3-fos-license
COVID-19–Related Knowledge and Practices Among Health Care Workers in Saudi Arabia: Cross-sectional Questionnaire Study Background Health care workers are at the front line against COVID-19. The risk of transmission decreases with adequate knowledge of infection prevention methods. However, health care workers reportedly lack a proper attitude and knowledge of different viral outbreaks. Objective This study aimed to assess the knowledge and attitude of health care workers in Saudi Arabia toward COVID-19. Assessment of these parameters may help researchers focus on areas that require improvement. Methods A cross-sectional questionnaire study was conducted among 563 participants recruited from multiple cities in Saudi Arabia. An online questionnaire was shared via social media applications, which contained questions to health care workers about general information regarding COVID-19 and standard practices. Results The mean age of the study population was 30.7 (SD 8) years. Approximately 8.3% (47/563) of the health care workers were isolated as suspected cases of COVID-19, and 0.9% (n=5) were found positive. The majority agreed that social distancing, face masks, and hand washing are effective methods for preventing disease transmission. However, only 63.7% (n=359) knew the correct duration of hand washing. Almost 70% (n=394) strictly adhered to hand hygiene practices, but less than half complied with the practice of wearing a face mask. Significant differences in health care workers' attitudes were observed on the basis of their city of residence, their adherence to COVID-19 practices, and their compliance with the use of a face mask. Among the health care workers, 27.2% (n=153) declared that they will isolate themselves at home and take influenza medication if they experience COVID-19 symptoms. Conclusions The majority of health care workers in Saudi Arabia presented acceptable levels of general knowledge on COVID-19, but they lack awareness in some crucial details that may prevent disease spread. Intense courses and competency assessments are highly recommended. Prevention of disease progression is the only option for the time being. Introduction SARS-CoV-2 is a novel virus of the large group of coronaviruses circulating in the environment and is thought to originate from bats [1]. Previous outbreaks such as severe acute respiratory syndrome (SARS) in 2003 and Middle East respiratory syndrome (MERS) in 2015 share similarities with COVID-19 [2]. This novel viral outbreak was epidemiologically linked to the Hua Nan seafood and wet animal wholesale market [3]. Moreover, SARS-CoV-2 was first discovered in Wuhan City, Hubei Province, China, by Chinese authorities. It was initially reported to manifest as pneumonia cases of unknown etiology on December 31, 2019 [4]. Later on, China officially announced the identification of a novel virus, which caused the pneumonia. Shortly after, the World Health Organization (WHO) had declared the outbreak of a novel coronavirus [5]. In February 2020, the disease was named COVID-19 [6]. People infected with COVID-19 may experience a wide range of symptoms, from mild to severe illness. These symptoms include cough, shortness of breath, fever, muscle pain, chills, sore throat, and loss of the sense of taste or smell [7]. However, these symptoms are not universal, as other studies have reported patients with gastrointestinal symptoms such as nausea, vomiting, or diarrhea [7]. 
According to the WHO, approximately 80% of COVID-19 patients in China experienced mild symptoms and recovered without any medical intervention [8], while 14% of them had experienced severe illness, and 5% were critically ill. However, the risk of having severe illness is higher in the elderly and individuals with underlying chronic diseases such as cancer, diabetes, and lung diseases [8]. Regarding the current state of COVID-19 in Saudi Arabia, the government imposed a curfew from March 23 to June 20, 2020. Mosques, schools, and businesses were closed during that period, and travel was restricted. At the time of writing, Saudi Arabia has reported approximately 49,176 COVID-19 cases, which is lesser than those reported in western countries [9]. In health care settings, all COVID-19 patients were initially hospitalized regardless of disease severity and treated free of charge, including visa violators [10]. Similar to the rest of the world, Saudi Arabia had experienced a shortage of personal protective equipment (PPE), prompting recommendations from the Saudi Center for Disease Prevention and Control on the use and reuse of available PPE [11]. Furthermore, outpatient clinics started seeing most patients virtually, and nonurgent consultations were rescheduled. According to the Saudi Ministry of Health (SMOH)'s statistical yearbook of 2018, the health care workforce includes 36,717 physicians, 83,616 nurses, 3277 pharmacists, and over 50,000 allied health personnel [12]. Furthermore, health care workers are at the front line and directly come in contact with COVID-19 patients. Consequently, they are always at high risk of infection. The transmission of any disease among health care workers is mainly associated with overcrowding, the absence of isolation facilities, and environmental contamination [13]. However, the transmission risk might also be related to inadequate knowledge of methods for infection prevention [14]. Consequently, health care workers need to have adequate awareness of proper infection prevention practices. In a study conducted at District 2 Hospital, Ho Chi Minh City, Vietnam, the majority (88.4%) of health care workers had adequate knowledge of COVID-19, and 90% of participants have a positive attitude toward COVID-19 [2]. It is essential to have infection control guidelines with the best available evidence to deal with COVID-19 in every health care setting and maximally avoid exposure to the virus. Emphasis should be placed on hand hygiene, which is known to be the best way to prevent the spread of microorganisms and microbial infections in health care facilities [15]. Education on proper PPE, patient screening, and mask use should be provided in accordance with the guidelines of the WHO and the Centers for Disease Control and Prevention (CDC) [16][17][18]. Previous studies have reported that health care workers might lack a proper attitude and knowledge toward SARS and MERS [19][20][21]. Therefore, this study aimed to assess the knowledge and attitude toward COVID-19 among health care workers in Saudi Arabia. This assessment may help prevent disease transmission by identifying areas requiring intervention. Study Design A cross-sectional questionnaire-based study was performed with health care workers in Saudi Arabia to assess their level of awareness, knowledge, and perception of COVID-19, their level of adherence to the applied curfew, and their understanding of methods for infection prevention. 
Convenience sampling was carried out by sending the questionnaire through social media platforms (Twitter and WhatsApp), as face-to-face interviews were unavailable owing to curfew regulations. Considering this data collection method, the number of health care workers who received the questionnaire could not be identified because they were encouraged to share the questionnaire within their social circle of health care workers; however, the initial number of health care workers among whom the questionnaire was shared was 1068. The study included health care workers within Saudi Arabia, while those who did not complete the questionnaire or those who worked abroad were excluded. A self-administered questionnaire was developed and distributed from April 30 to May 14, 2020. The questionnaire covered the following items: sociodemographic data such as age, nationality, city of residence, and employment status during the curfew. Cities were divided as large (population >300,000), medium (population ranging 100,000-300,000), and small (population <100,000) cities. The categorization of cities sizes was based on the measures of the Saudi General Authority for Statistics [22]. The questionnaire also assessed the level of knowledge using "agree," "neutral," and "disagree" statements, which also included questions about the duration of hand washing, COVID-19 symptoms, and the timing for COVID-19 testing. Regarding symptoms, the respondents were provided with a list of established COVID-19 symptoms and asked to choose items related to the disease. The Saudi guidelines recommend COVID-19 testing when individuals experience severe respiratory symptoms or flu-like symptoms, or if they come in contact with positive individuals or those with flu-like symptoms. These options were provided to the participants in addition to "any time." The complete questionnaire is available as Multimedia Appendix 1. After explaining the study objectives to the participants and assuring their confidentiality, the participants were asked to complete the questionnaire. At the end of the questionnaire survey, an email regarding any inquiries was sent to the participants. Informed consent was obtained before data collection, and no identifiers were requested. None of the responders was compensated, and the data were only accessible to the authors to assure confidentiality. The study received ethical approval from the King Abdullah International Medical Research Center (RJ20/079/J). Statistical Analysis Data were entered and analyzed using SPSS (version 25, IBM Corp). Data are presented as ranges, means, SD, medians, and IQR for quantitative variables and frequencies and percentages for qualitative variables. Between-group comparisons were performed using χ 2 or Fisher exact tests. Results are also expressed as odds ratio (OR) and 95% CI values. P values less than .05 were considered statistically significant. Results A total of 563 health care workers completed the questionnaire survey. As indicated in Table 1, the participants' ages ranged from 21 to 69 years. The majority of participants (n=537, 95.4%) were Saudi nationals. Furthermore, 47 (8.3%) health care workers were isolated as suspected COVID-19 cases, and 5 (0.9%) of them tested positive. Table 2 summarizes the levels of knowledge among the participants, indicated through "agree," "neutral," and "disagree" questions. Most of the cohort (n=542, 96.3%) agreed that COVID-19 is a pandemic, while 71.2% (n=401) thought it is more dangerous than seasonal influenza. 
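A rough outline of the between-group comparisons described in the Statistical Analysis subsection above (chi-square or Fisher exact tests, with odds ratios and 95% CIs) can be written with standard Python tools; the 2x2 table below is purely illustrative, and scipy stands in for the SPSS procedures actually used in the study:

import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical counts: rows = compliant / non-compliant with a practice,
# columns = group A / group B (for illustration only).
table = np.array([[180, 95],
                  [120, 168]])

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)

# 95% CI for the odds ratio via the log-OR standard error (Woolf method)
a, b, c, d = table.ravel()
se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci = np.exp(np.log(a * d / (b * c)) + np.array([-1.96, 1.96]) * se)

print(f"chi2 p={p_chi2:.4f}, Fisher p={p_fisher:.4f}, OR={odds_ratio:.2f}, 95% CI={ci.round(2)}")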
The highest percentage of agreement (n=547, 97.2%) was obtained for social distancing being an effective method to prevent COVID-19 transmission, followed by hand washing (n=544, 96.6%) and imposing a curfew (n=542, 96.3%). Furthermore, 33.6% (n=189) of health care workers agreed that COVID-19 transmission could be prevented by wearing gloves. Health care workers were provided a list of symptoms and asked to select those related to COVID-19. As shown in Figure 1, the top selected symptoms were cough or shortness of breath (552/563, 98.1%) and fever (n=533, 94.7%). The lowest percentage (n=199, 35.4%) was for a runny nose. Figure 2 shows the responses to the question "when should a person seek testing for COVID-19?" The most frequent response (509/563, 90.4%) was when contacting someone positive for COVID-19, followed by when experiencing severe respiratory symptoms (n=455, 80.8%). Few (n=62, 11%) health care workers chose to test for COVID-19 at any time, even if asymptomatic. Furthermore, 561 (99.6%) health care workers answered "Yes" when asked about the probability of COVID-19 patients being asymptomatic. Moreover, 532 (94.5%) health care workers were aware of the absence of an established therapy for COVID-19. When the participants were asked what they would do if they experienced flu-like symptoms, 350 (62.2%) responded that they would call the SMOH hotline for advice. In comparison, 153 (27.2%) health care workers responded that they would stay at home and take flu medication. Fifty (8.9%) participants responded that they would go to the hospital to test for COVID-19, and 10 (1.8%) would not take any action. When comparing health care workers living in large, medium, or small cities (Table 4), a significant difference was observed in their compliance with wearing face masks in public places (P=.04): the larger the city, the more compliant the participant. Furthermore, health care workers in medium and small cities followed COVID-19 news more than their peers in large cities (P=.02). Principal Findings This study illustrates the knowledge and practices of health care workers in Saudi Arabia at the early stages of the pandemic, during a period of significant uncertainty and rapidly changing policies and practices. Among our study participants, marked consensus was observed in their responses on hand hygiene, social distancing, and curfew regulations as effective means of preventing disease transmission. Responses to questions on masks and gloves were widely distributed, probably owing to unclear information during the early stages of the pandemic from both the literature and local policies. Moreover, when asked about the timing for COVID-19 testing, most responded with "on experiencing severe symptoms" or "on coming in contact with positive cases," reflecting the local messaging at that time. Furthermore, their compliance with general hand hygiene and universal masking was concerning and represents an area for improvement. When faced with a novel viral pandemic, particularly one with no vaccine or effective treatment at the time of writing, other aspects of disease control become increasingly important. The SMOH implemented daily televised briefings with relevant statistics and discussions regarding the best practices for the current time, and inquiries made by the press were usually addressed.
Practices including hand hygiene and social distancing had the most robust emphasis, while messages regarding the worldwide use of masks were inconsistent owing to their shortage in hospitals and the need to reserve them for frontline health care workers. While the public should be preferentially informed of the best available practices to reduce disease transmission, a higher emphasis should be placed on health care workers, since they constitute a high-risk group for contracting COVID-19, and by the nature of their occupation, they have direct contact with an especially vulnerable part of our community. Hence, it is essential to assess their knowledge and practice and compare them to those of their peers elsewhere. This study also provides an insight into the early stages of the knowledge, attitudes, and practices for disease management among the health care workers, which are expected to change as the pandemic evolves or when more information becomes available. Multiple outbreaks were reported in health care settings, emphasizing the need for infection control and prevention [23,24]. Risk perception reportedly enhances compliance with protective measures [25]. Approximately 71% of individuals believed that COVID-19 is more dangerous than seasonal influenza, and slightly more than half were aware that COVID-19 could be hazardous to individuals other than the elderly, indicating an area of improvement. Moreover, a study on Egyptian health care workers reported that almost 90% of them believed that the virus is more dangerous in the elderly [26]. Furthermore, Bhagavathula et al [27] reported that only 11.4% of health care workers agreed that COVID-19 is a fatal disease. In this study, approximately 8.3% (47/563) of health care workers were isolated as suspected cases of COVID-19; fortunately, only 0.9% (n=5) tested positive, and this number is likely to increase as the spread of the pandemic progresses. Most of our study participants believe in adopting nationwide protective measures, including social distancing, maintenance of regular hand hygiene, and universal use of face masks during public activities. If these beliefs translate into practice, it could help decrease transmission by decreasing the reproductive number or "flattening the curve," allowing for better utilization of health care facilities or buying time until vaccine or treatment availability [28,29]. Interestingly, a study form Uganda [30] reported that 55% of health care workers do not believe that face masks may help prevent disease transmission, while almost all of them agreed that avoiding crowded places decreases the risk of acquiring COVID-19. Social distancing proved to be one of the most effective methods of preventing disease transmission during the initial COVID-19 outbreak in Wuhan [31]. Regarding hand hygiene, almost all our study participants agreed on the importance of hand washing, which is higher than reported in other studies [27,32], but only 63.8% (n=359) were aware of the correct duration of washing, which is at least 40 s [33]. The primary source of the participants' knowledge was the SMOH daily press briefings and its updates about COVID-19, which contained evidence-based information when available in different areas, including the best infection control practices, policies, and regulations to be implemented and various misconceptions and misinformation about COVID-19. 
This reflects a drastic improvement in the spread of information alongside practical knowledge through a simple, widely accessible tool such as the television, as opposed to that reported by Khan et al [34] during the MERS outbreak. In their study, the participants faced difficulty following news updates about the disease on the internet from the SMOH website and in looking for new emerging studies. In another study by Albarrak et al [35], the sources of information for the study participants during the MERS outbreak were almost equally distributed among seminars, pamphlets, articles, radio, and television. We believe that the SMOH performed an admirable job in handling the pandemic and provided transparency and continuous information regarding changes to policies as new data emerged or as the pandemic evolved. Of particular note is the high uniformity in the responses to the messaging, and areas of uncertainty included low levels of knowledge and practices in our study population. We believe that complete transparency and clear messaging are needed for maximum benefits during such events. This study provides a cross-sectional insight into a relatively early stage of the pandemic, and comparisons can potentially be made with the emergence of more data from other countries. Limitations Our study did not define the specialty of the health care workers (eg, nurse, physician, or pharmacist). We also believe that the categorization of health care settings by type (eg, outpatient department, rural hospital, or polyclinic) would have provided more context to the participants' responses. Furthermore, our study is limited by its convenience sampling method, which might have introduced a potential selection bias. Furthermore, the self-reporting nature of the study questionnaire might have introduced its own set of biases, such as social desirability. Conclusion In conclusion, the majority of our questionnaire respondents had acceptable general knowledge of COVID-19, based on their responses to our questions. Knowledge of decreased disease transmission with the use of face masks was not as uniform as we expected, perhaps reflecting the unclear messaging at that time. Furthermore, approximately half of the study participants disagreed with the statement that COVID-19 is only dangerous in the elderly. Other areas of improvement include the knowledge of the recommended duration of hand washing. Compliance with precautions for infection prevention still need to be emphasized; this can be achieved through intense educational programs and competency assessments to promote positive preventive practices. This study provides a cross-sectional insight into the relatively early stages of the COVID-19 pandemic in Saudi Arabia, and if additional similar studies from other countries become available, comparisons can be made between different populations. Conflicts of Interest None declared. Multimedia Appendix 1 The questionnaire used in the study.
2021-01-20T06:16:18.342Z
2020-06-08T00:00:00.000
{ "year": 2021, "sha1": "888e9b5bf649f935ad746b48670d15c69c3a5324", "oa_license": "CCBY", "oa_url": "https://formative.jmir.org/2021/1/e21220/PDF", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "327e9fed3bce5202413983ceb62af23da59949a8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
226971328
pes2o/s2orc
v3-fos-license
Prevalence of readily detected amyloid blood clots in ‘unclotted’ Type 2 Diabetes Mellitus and COVID-19 plasma: a preliminary report Background Type 2 Diabetes Mellitus (T2DM) is a well-known comorbidity to COVID-19 and coagulopathies are a common accompaniment to both T2DM and COVID-19. In addition, patients with COVID-19 are known to develop micro-clots within the lungs. The rapid detection of COVID-19 uses genotypic testing for the presence of SARS-Cov-2 virus in nasopharyngeal swabs, but it can have a poor sensitivity. A rapid, host-based physiological test that indicated clotting severity and the extent of clotting pathologies in the individual who was infected or not would be highly desirable. Methods Platelet poor plasma (PPP) was collected and frozen. On the day of analysis, PPP samples were thawed and analysed. We show here that microclots can be detected in the native plasma of twenty COVID-19, as well as ten T2DM patients, without the addition of any clotting agent, and in particular that such clots are amyloid in nature as judged by a standard fluorogenic stain. Results were compared to ten healthy age-matched individuals. Results In COVID-19 plasma these microclots are significantly increased when compared to the levels in T2DM. Conclusions This fluorogenic test may provide a rapid and convenient test with 100% sensitivity (P < 0.0001) and is consistent with the recognition that the early detection and prevention of such clotting can have an important role in therapy. (T2DM) is probably the most frequently mentioned comorbidity. It is widely recognised [11][12][13][14][15][16][17][18][19][20][21] that extensive blood clotting has a major role in the pathophysiology of COVID-19 disease severity and progression, yet so can excessive bleeding [22,23]. The solution to this apparent paradox lies in the recognition [24] that these phases are separated in time: the later bleeding is mediated by the earlier clotting-induced depletion of fibrinogen and of von Willebrand factor (VWF). This first phase of hypercoagulability is accompanied by partial fibrinolysis of the formed clots, and an extent of D-dimer formation that is predictive of clinical outcomes [25]. These features, together with the accompanying decrease in platelets (thrombocytopaenia), leads to the subsequent bleeding. Thus it is suggested that the application of suitably monitored levels of anti-clotting agents in the earlier phase provides for a much better outcome [13,24]. In addition, dysregulated hemostasis in COVID-19-associated disseminated intravascular coagulation is exacerbated by an inhibition of fibrinolysis, indicating the plasminogen-plasmin-system as a potential target to prevent thromboembolic complications in COVID-19 patients [26]. In addition, patients with COVID-19-associated respiratory failure admitted to the intensive care unit exhibit a hypercoagulable state which is not appreciable on conventional tests of coagulation. Supranormal clot firmness, minimal fibrinolysis, and hyperfibrinogenaemia are key findings [27]. As well as the extent of clotting, including states similar to the life-threatening disseminated intravascular coagulation (DIC) [15], a second issue pertains to its nature. Some years ago, we discovered that in the presence of microbial cell wall components [28,29], and in a variety of chronic, inflammatory diseases [30][31][32] (including sepsis [33]), blood fibrinogen can clot into an anomalous, amyloid form [34]. 
These forms are easily detected by a fluorogenic stain such as thioflavin T, or the so-called Amytracker stains [35]. In all cases, however, these experiments were performed in vitro using relevant plasma, with clotting being induced by the addition of thrombin. In our preliminary experiments this was also the case for plasma from COVID-19 patients, but the signals were so massive that they were essentially off the scale. However, as we report here, the plasma of COVID-19 patients carries a massive load of preformed amyloid clots (with no thrombin being added), and this therefore provides a rapid and convenient test for COVID-19. As the presence of T2DM is a well-known co-morbidity, that significantly decreases survival and a positive outcome for COVID-19 patients, we included such a group in our sample cohort too. Ethical considerations Ethical approval for blood collection and analysis of the patients with COVID-19, T2DM and healthy individuals, was given by the Health Research Ethics Committee (HREC) of Stellenbosch University (reference number: 9521). This laboratory study was carried out in strict adherence to the International Declaration of Helsinki, South African Guidelines for Good Clinical Practice and the South African Medical Research Council (SAMRC), Ethical Guidelines for research. Oral consent was obtained from COVID-19 patients to participate in the study. Written consent was obtained from T2DM patients and healthy participants. Covid-19 patients 20 COVID-19-positive samples (11 males and 9 females) were obtained and blood samples collected before treatment was embarked upon. Blood samples were collected by JS. Platelet poor plasma (PPP) prepared and stored at − 80 °C, until fluorescent microscopy analysis. Type 2 Diabetes Mellitus (T2DM) Stored Platelet poor plasma samples were randomly selected from our Laboratory's stored sample repository. 10 age-matched T2DM (6 Males and 4 females), collected in 2018, were used in this analysis. Healthy samples Our healthy sample was 10 age-matched controls (4 males and 6 females), previously collected and stored in our plasma repository. They were non-smokers, with CRP levels within healthy ranges, and not on any antiinflammatory medication. Lung CT scans Amongst the COVID-19 patient sample 10 patients were admitted, but stabilized and blood drawn and sent home for observation. Where patients were clinically deemed as moderate or severely ill, CT scans of the patients were performed to determine the severity of the lung pathology. We divided our sample into mild disease (no CT scan) and moderate to severely ill. The CT scan and severity score [36] confirmed moderate to severely ill patients according to the 'ground glass' opacities in the lungs. Fluorescent Microscopy of patient whole blood and platelet poor plasma (PPP) A simple fluorescence assay was developed by comparing fluorescent (anomalous) amyloid signals present in PPP from COVID-19 patients, T2DM and those from healthy age-matched individuals, all of whom were studied using PPP that had been stored at −80 °C. On the day of analysis, PPP was thawed and incubated with the dye thioflavin T (ThT; 5 µM final concentration), which detects amyloid-like structures [37]. Following this, the sample was incubated for 30 min (protected from light) at room temperature. PPP smears were then created by transferring a small volume (5 µl) of the stained PPP sample to a microscope slide (similar methods were followed to create a blood smear). 
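As a quick arithmetic aside on the staining step just described, the volume of concentrated ThT stock needed to reach the 5 µM final concentration follows from C1·V1 = C2·V2; the 1 mM stock concentration and 200 µl plasma volume below are assumptions for illustration, not values taken from the protocol:

stock_uM = 1000.0      # hypothetical 1 mM ThT stock, in micromolar
final_uM = 5.0         # final concentration stated in the protocol
sample_ul = 200.0      # hypothetical plasma volume, in microliters

stock_to_add_ul = final_uM * sample_ul / stock_uM   # C1*V1 = C2*V2
print(stock_to_add_ul)   # -> 1.0 µl of stock per 200 µl of plasma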
A cover slip was placed over the prepared smear and the slide was viewed using a Zeiss AxioObserver 7 fluorescence microscope with a Plan-Apochromat 63x/1.4 Oil DIC M27 objective. For ThT quantification, the excitation was set at 450 to 488 nm and the emission at 499 to 529 nm. Unstained samples were also prepared with both healthy and COVID-19 PPP to assess any autofluorescence. Micrograph analysis was done using ImageJ (version 2.0.0-rc-34/1.5a). The % area of amyloid was calculated using the thresholding method, which allows measurement of the area of the amyloid signal. The RGB images are opened in ImageJ and each image is calibrated by setting the scale (calculated using the image pixel size and the known size of the scale bar). Each image is then converted to black and white (8-bit; this is adjusted under the image type setting). The next step is to threshold the image by adjusting the background intensity to white (255) and then thresholding the now black amyloid signal (in these images between 11 and 15). We used the Huang setting during thresholding; Huang's method is an optimization method which finds the optimal threshold value by minimizing measures of fuzziness. The black amyloid area is then analyzed using the analyze-particle setting, where we use a particle size measured from 1 to infinity. The particle-size setting allows us to exclude any background signal that might not be true amyloid signal. The area data generated per particle are then copied into a spreadsheet (see our raw data). Statistical analysis was done using nonparametric methods (see Results). Results Age-matched COVID-19 patients (average age 49.9 y), healthy individuals (58.8 y) and T2DM patients (62.1 y) were used in this analysis (p = 0.06). Platelet poor plasma (PPP) was collected and frozen. On the day of analysis, all PPP samples were thawed and analysed. We also confirmed that the same results are visible in freshly prepared PPP samples. Figure 1 shows representative CT scans of four of the COVID-19 patients. Raw data are shared at https://1drv.ms/u/s!AgoCOmY3bkKHirZOu5YKPlq1x5f1AQ?e=xmWGKm. Figures 2, 3, 4 and 5 show representative fluorescence micrographs of PPP from healthy, T2DM and COVID-19 individuals. In healthy PPP smears (Fig. 2), very little ThT fluorescent signal is visible. In plasma smears from T2DM individuals (Fig. 3), there was a significant increase in signal compared to controls, and an even more pronounced increase in signal in COVID-19 individuals (Fig. 4), where abundant amyloid signal is noted. Note that these signals were as received; no thrombin was added to induce clotting. Figure 5 shows the additional presence of fibrous or cellular deposits in the PPP smears of COVID-19 patients. There have been reports of extensive endotheliopathy in COVID-19 patients [38,39], and these deposits might contribute to this endotheliopathy. Figure 6a and b show box plots of the % area of amyloid signal calculated from representative micrographs of each individual. A nonparametric one-way ANOVA test (Kruskal-Wallis test) between all groups showed a highly significant difference (p < 0.0001). However, a Mann-Whitney analysis between the mild and the moderate to severe COVID-19 individuals showed no significant difference. Table 1 shows the average % amyloid area for each sample, ranked from lowest to highest values, as well as the sensitivity and specificity calculations. We set the cutoff % amyloid area as 1.3% for controls and 3.05% for T2DM (see Table 1 and the raw data file at the shared data link).
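The quantification and classification steps described above, i.e. the % amyloid area obtained from thresholded micrographs and the sensitivity/specificity at a chosen cut-off, can be approximated with the short Python sketch below; scikit-image stands in for ImageJ, the fixed 8-bit cut-off replaces the Huang auto-threshold, and all file names and example values are invented:

import numpy as np
from skimage import io, color, measure

def percent_amyloid_area(path, cutoff=15, min_pixels=5):
    # Convert an RGB fluorescence micrograph to 8-bit grayscale.
    gray8 = (color.rgb2gray(io.imread(path)) * 255).astype(np.uint8)
    # Threshold the amyloid signal; flip the comparison if the image has been
    # inverted to black-on-white, as in the ImageJ steps described above.
    mask = gray8 >= cutoff
    # Keep only connected regions above a minimum size ("analyze particles").
    labels = measure.label(mask)
    keep = np.zeros_like(mask)
    for region in measure.regionprops(labels):
        if region.area >= min_pixels:
            keep[labels == region.label] = True
    return 100.0 * keep.sum() / keep.size   # % of image area covered

def sens_spec(disease, control, cutoff):
    # Sensitivity: disease samples above the cut-off; specificity: controls at or below it.
    return (sum(v > cutoff for v in disease) / len(disease),
            sum(v <= cutoff for v in control) / len(control))

# Illustrative per-sample % amyloid areas (see the shared raw data for real values).
covid = [2.1, 3.4, 4.0, 5.2, 1.8, 6.3]
healthy = [0.2, 0.5, 1.1, 0.9, 1.4, 0.3]
print(sens_spec(covid, healthy, cutoff=1.3))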
Using these calculations, the % amyloid area sensitivity and specificity in control versus COVID-19 samples are 85% and 100% respectively, and the % amyloid sensitivity and specificity in T2DM versus COVID-19 are 69% and 67%, respectively. Similarly, the % amyloid area sensitivity and specificity for controls versus T2DM are 100% and 100% respectively, suggesting that T2DM is potentially a big confounder. These results suggest that T2DM predisposes individuals to the hypercoagulable state that accompanies COVID-19. Discussion Strongly bound up with the coagulopathies accompanying severe COVID-19 disease is the presence of hyperferritinaemia (in cases such as the present it is a cell damage marker [40]) and a cytokine storm [41][42][43][44][45], which usually occurs in the later phase of the disease [24]. In addition, there have been reports of pulmonary vascular endothelialitis, thrombosis, and angiogenesis in COVID-19 [39]. Furthermore, excess iron has long been known to cause blood to clot into an anomalous form [46], later shown to be amyloid in nature [28][29][30][31][32][33][34]. Increased serum ferritin levels are also known to be present in T2DM [47][48][49][50]. These kinds of phenomena seem to accompany essentially every kind of inflammatory disease (e.g. [51]), but the amyloidogenic coagulopathies are normally assessed following the ex vivo addition of thrombin to samples of plasma. Many clinical features of COVID-19 are unprecedented, and here we demonstrate yet another: the presence of amyloid microclots in PPP to which thrombin has not been added. These microclots are also a pathological feature of PPP from T2DM patients; however, there is a significant increase in microclots in COVID-19 patients. This kind of phenomenon explains at once the extensive microclotting that is such a feature of COVID-19 [11], and adds strongly to the view that its prevention via anti-clotting agents should lie at the heart of therapy. In addition, individuals with T2DM are more prone to develop microclots, owing to an increased presence of circulating inflammatory biomarkers that cause hypercoagulability. T2DM patients are therefore predisposed by their condition; when these individuals then contract SARS-CoV-2, they are already prone to hypercoagulation. This hypercoagulable predisposition explains why individuals with T2DM are more prone to develop severe hypercoagulability when diagnosed with COVID-19. Although fluorescence microscopy is a specialized laboratory technique, TEG® is a well-known point-of-care technique, which is cheap and reliable. Samples can be collected and PPP can be analysed immediately, or frozen and thawed for later analysis. All told, the relative ease of fluorescence microscopy, the speed (40 min including the 30 min ThT incubation time) and the cheapness of the assay we describe might be of significant utility in differentiating COVID-19 from other inflammatory diseases. Of course this must also be monitored (e.g. via thromboelastography [52][53][54][55]) lest the disease enter its later phase in which bleeding rather than clotting is the greater danger [24]. (Fig. 6a, b: amyloid % area in platelet poor plasma smears with mean and SEM, p < 0.0001; a: all controls, Type 2 Diabetes Mellitus (T2DM) and all COVID-19 patients; b: all controls vs T2DM vs 10 mild and 10 moderate to severely ill COVID-19 patients.) Although not shown here, an important consideration is that TEG® can be used to study the clotting parameters of both whole blood and PPP.
Whole blood TEG® gives information on the clotting potential as affected by the presence of both platelets and fibrinogen, while PPP TEG® only presents evidence of the clotting potential of the plasma proteins [52][53][54][55]. Point-of-care devices and diagnostics like TEG® are also particularly useful to assess fibrinolysis; in COVID-19 patients, Wright and co-workers reported a marked impairment of fibrinolysis on such assays. Conclusion What we have shown here is that the clotting that is commonly seen in COVID-19 patients is of an amyloid nature, forming large deposits that might be able to occlude fine capillaries. In addition, these deposits would interfere with fibrinolysis and contribute to the decreased ability to pass O2 into the blood that is such a feature of the disease. As T2DM is a significant comorbidity to COVID-19, exceptional care must be taken when such patients are diagnosed with COVID-19. Consequently, the prevention of coagulopathies must lie at the heart of successful therapies.
2020-10-28T18:01:58.486Z
2020-09-14T00:00:00.000
{ "year": 2020, "sha1": "5a28e746ea8fa6971b605bcf8514917dcae493db", "oa_license": "CCBY", "oa_url": "https://cardiab.biomedcentral.com/track/pdf/10.1186/s12933-020-01165-7", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f10712f554a9c401824ac67d0f95295efae15093", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
15775936
pes2o/s2orc
v3-fos-license
Corpus-Based methods for Short Text Similarity This paper presents corpus-based methods to find similarity between short text (sentences, paragraphs, ...) which has many applications in the field of NLP. Previous works on this problem have been based on supervised methods or have used external resources such as WordNet, British National Corpus etc. Our methods are focused on unsupervised corpus-based methods. We present a new method, based on Vector Space Model, to capture the contextual behavior, senses and correlation, of terms and show that this method performs better than the baseline method that uses vector based cosine similarity measure. The performance of existing document similarity measures, Dice and Resemblance, are also evaluated which in our knowledge have not been used for short text similarity. We also show that the performance of the vector-based baseline method is improved when using stems instead of words and using the candidate sentences for computing the parameters rather than some external resource. Introduction Many natural language processing applications use similarity between short text (e.g. sentences, paragraphs) such as text summarization (Lin & Hovy, 2003), which works on sentence or paragraph level ; question answering, which uses similarity between the question answer pairs (Mohler & Mihalcea, 2009) ; and image retrieval, where an image is retrieved by finding the similarity between the query and the image caption (Coelho et al., 2004). In general, existing methods view the short text similarity problem as a classification task, where one-to-one text similarity decision is made, or as an alignment task, where many-to-many text similarity decision is also made. Most of the existing methods treat this problem as a classification task which use similarity metrics (Abdalgader & Skabar, 2011) (Cordeiro et al., 2007) (Mihalcea & Corley, 2006)(Hatzivassiloglou et al., 1999. These similarity metrics give a value of similarity between pairs of short text which can then classify the pairs as similar or not using a threshold. This threshold value is usually empirically fixed for each similarity metric. These existing methods use external knowledge like WordNet (Miller et al., 1990) to find lexical similarity or some corpora like the British National Corpus for optimizing parameters. These methods are not suitable for languages that do not have resources like WordNet or large corpora. This leads to find similarity measures that use no resources or resources that are easily buildable. There are few methods that treat the similarity problem as an alignment task (Barzilay, 2003)(Nelken & Shieber, 2006. These methods also use similarity metrics like the classification methods but the value from these metrics are not used directly for alignment. The alignment in these methods are based on supervised methods and use dynamic programming which includes the context of the sentences and are designed for comparable monolingual text. We take the similarity problem between short text as a classification task. This task is based on corpus-based unsupervised methods which do not use external resources to compute the similarity value unlike existing classification methods. 
One of the earliest and well known classification method for text similarity is the Vector Space Model (VSM) for information retrieval, where similar documents in a collection is chosen by a similarity value computed using a similarity metric, the cosine similarity, between the vectors of term weights representing documents (Salton et al., 1975). This measure is based on the overlap of terms in the document pair whose similarity is being measured. The VSM assumes that the vectors of terms are independent, pairwise orthogonal, to each other which is unrealistic. There exist other vector space models like the Generalized Vector Space Model (GVSM) (Wong et al., 1987) which does not assume this independence and although this model claims to be more effective than the standard implementation of the VSM, it is computationally expensive and therefore VSM is widely used despite its unrealistic assumption. The VSM for information retrieval is modified and used for short text similarity by treating the short text as documents and computing the idf value using external resources like the British National Corpus. This VSM cosine similarity measure is the baseline for most of the similarity studies (Mihalcea & Corley, 2006). Our similarity measure is based on VSM but is adopted in such a way that the assumption of term independence is excluded and the short text vectors incorporates the sense and correlation of the terms. This is done by taking into account the overlap of the terms in all the short text of the corpus rather than only the short text pair between which the similarity is measured. Along presenting a new method to find similar short text we evaluate the performance of two other information retrieval methods which use term overlaps namely Dice measure (Manning & Schütze, 1999) and Resemblance measure 1 (Lyon et al., 2001). We also show that using stems instead of words can improve the baseline VSM model for short text similarity. Related Works In information retrieval, there are many methods to find similarity between documents and one of the most well known method is the VSM which uses cosine similarity measure (Barron-Cedeno et al., 2009). This vector based method is also used to measure similarity between sentences as done by Barzilay et al. (Barzilay, 2003). They view the problem of finding similar sentences as an alignment problem, where they align similar sentences between two monolingual comparable documents. In their method, the paragraphs are first aligned by a trained classifier and once the paragraphs are aligned the sentences within them are aligned using vector based cosine similarity and 1. also known as the Jaccard or Tanimoto coefficient (Manning & Schütze, 1999) dynamic programming. Rani et al. (Nelken & Shieber, 2006) took the same problem as Barzilay et al. and proposed an improved robust method. This method also uses dynamic programming for alignment but uses cosine similarity measure in a logistic regression to provide a score to aid the alignment. Both of these methods use context around the sentences, the following and preceding sentences, to aid in alignment. These methods for alignment make many-to-many sentence alignment and do not provide a similarity value between sentences indicating that these methods are suitable to find similar sentences only in comparable monolingual corpora. Another method that uses the concept of overlap like the cosine measure is the fingerprinting method. 
It takes into account the overlap of bigrams or trigrams between the text to calculate a value of resemblance as shown in Lyon et al.(Lyon et al., 2001) which is the basis of classifying similar text and has been used to detect plagiarism. Cordeiro et al. (Cordeiro et al., 2007) has also proposed a similarity metric to identify similarity between texts and to identify paraphrase based on word overlap. It computes a similarity value by combining the ratio of common words in each sentence and is focused on capturing paraphrases which makes it unsuitable to find other types of similar sentences for example, this metric gives a similarity value zero to identical sentences. Linguistic features has also been used to find similarity between short text as in Hatzivassiloglou et al. (Hatzivassiloglou et al., 1999). They build linguistic feature vectors to build rules in a supervised manner to classify paragraph pairs. The features used to build rules are noun phrase matching, WordNet synonyms, common word class of verbs, shared common noun and their combinations. Even though it performs better than the vector based cosine similarity measure, it requires resources like Wordnet which are not present and are hard to build for other resource less languages. Recent researches are focused on finding the similarity between lexical items in short text to find the similarities between these text. There exist corpus-based approaches to find the lexical similarity, some of which use text pattern analysis, Pointwise mutual information (PMI) and Latent Semantic Analysis (LSA). We will not focus on WordNet based approaches to find lexical similarity (Abdalgader & Skabar, 2011). Mihalcea et al. (Mihalcea & Corley, 2006) use PMI and LSA to compute the text semantic similarity using a wrapper given in equation 1. maxSim(w, T 2(1) ) is the maximum lexical similarity between the word w in sentence T 1(2) and all the words in sentence T 2(1) and idf (w) is the inverse document frequency of the word w calculated from the British National Corpus. The similarity metric, STS, proposed by Islam et al. (Islam & Inkpen, 2008), unlike other metrics, use string similarity along with corpus-based word similarity. Corpus-based word similarity is measured using two measures that includes second order co-occurrence pointwise mutual information and common word order similarity. The string similarity is measured using the concept of Longest Common Sequence. All these three measures are combined to determine the similarity between two short text. All the mentioned corpus-based method have the same drawback of using external resources. Short Text Similarity In this section, we present a new method to find similarity between short text. For simplicity reasons, we explain the method using sentence as our short text. Our similarity method is based on VSM but is different in the way the sentence vectors are created. The dimensions in sentence vectors do not represent the terms in the collection of sentences as in the bag of words model (Baeza-Yates & Ribeiro-Neto, 1999) but rather created from term vectors. Given a corpus C of n sentences and m unique terms, the term vector for term t j is created with n number of dimensions in which the presence and absence of the term in each sentence is indicated by a boolean value x : tj = [x1, x2, x3, x4, ..., xn] xi ∈ 0, 1; i ∈ 1 to n; 0 = absent, 1 = present This term vector representation is similar to the wordspace model (Schutze, 1998) where the distribution of the terms are stored. 
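For reference, the wrapper referred to as equation 1 combines the directional word-to-sentence similarities; as given by Mihalcea & Corley (2006) it can be written as follows (reconstructed from the description above, so the notation may differ slightly from the original):

sim(T_1, T_2) = \frac{1}{2} \left( \frac{\sum_{w \in T_1} \mathrm{maxSim}(w, T_2)\,\mathrm{idf}(w)}{\sum_{w \in T_1} \mathrm{idf}(w)} + \frac{\sum_{w \in T_2} \mathrm{maxSim}(w, T_1)\,\mathrm{idf}(w)}{\sum_{w \in T_2} \mathrm{idf}(w)} \right) \tag{1}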
These representation of term vectors together will form a m x n term-sentence matrix and as the number of sentence increases the size of the matrix will also increase. This huge dimension of the matrix can be reduced to some extent by removing stopwords and stemming 2 . There are also mathematical procedures for the reduction of the matrix like Latent Semantic Analysis (Deerwester et al., 1990) which uses singular value decomposition, SVD, or Principle Component Analysis (Jolliffe, 1986) which represents the matrix in different coordinates. We have used the simple technique of removing the stopwords and stemming words but none of the mathematical procedures to reduce the dimension during our experiments. This representation of term vector will consist of many zero values which will take a lot of memory. To reduce this space, we represent the vector in a reduced form where only the dimensions having value 1 are kept as shown in Equation 3 where we assume that the term t j is present in sentence numbers 1,5, and 8 : tj = [(S1, 1), (S5, 1), (S8, 1)] Si is the sentence number i where the term tj is present ; i ∈ 1 to n This term vector shows the different senses that the term may have. Here, the sense of the term means the idea with which it can be related to. Our assumption is that sentences are independent to each other making each sentence presenting a unique idea and therefore, each term present in a sentence is related to this idea. This assumption like the assumption of VSM is unrealistic but the effect of this assumption can be reduced using clustering techniques like hierarchical clustering (Han & Kamber, 2006) to group sentences that give the same idea or in other words similar sentences. Clustering has not been used in the experiments. Once we have the term vectors we can create sentence vectors by adding the term vectors of the terms present in the sentence making the number of dimension of their sentence vector equal to the term vector. The term vector consists of only the boolean value to be added which doesn't provide much information about the term so while adding the term vector we add the inverse document frequency, idf, value of the term which in our case is the inverse sentence frequency. This idf value is computed from the sentences present in the corpus. For a sentence consisting of terms t 1 , t 2 , .., t n , the dimension, i, corresponding to the sentence S i of the sentence vector will be : This method is similar to the method of second-order similarity (Kaufmann, 2000) and includes more information other than cohesion of text by encoding three different information in the sentence vector which are i) the importance of each term using its idf ii) the co-occurrence of terms by adding up the idf values of all the terms that occur in a sentence and iii) the distribution of term along various sentences as the dimensions of the sentence vector is equal to the number of sentences present in the corpus. Using these sentence vectors we can now compute the similarity value between two sentences using the cosine similarity measure. We name our method Short text based Vector Space Model, SVSM, to distinguish it from the other vector based models. This method can be easily used to find similarity between other types of short text by directly using the new type of short text instead of sentences. 
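A compact sketch of the SVSM construction just described (boolean term vectors over sentences, idf-weighted sentence vectors, cosine similarity) is given below; stop-word removal, stemming and clustering are omitted for brevity, and the toy corpus is invented:

import math
from collections import defaultdict

def build_svsm_vectors(sentences):
    # sentences: list of token lists. Returns one sparse vector per sentence,
    # with one dimension per sentence in the corpus, as described above.
    n = len(sentences)
    occurs = defaultdict(set)            # term -> sentences it appears in
    for i, sent in enumerate(sentences):
        for t in set(sent):
            occurs[t].add(i)
    idf = {t: math.log(n / len(ids)) for t, ids in occurs.items()}
    vectors = []
    for sent in sentences:
        vec = defaultdict(float)
        for t in set(sent):
            for i in occurs[t]:          # add idf(t) at every sentence where t occurs
                vec[i] += idf[t]
        vectors.append(vec)
    return vectors

def cosine(u, v):
    dot = sum(w * v.get(d, 0.0) for d, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

corpus = [["stock", "price", "fell"], ["share", "price", "dropped"], ["it", "rained"]]
vecs = build_svsm_vectors(corpus)
print(round(cosine(vecs[0], vecs[1]), 3))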
Experiments and Results We used the Microsoft Research Paraphrase Corpus(MSRPC) (Dolan et al., 2004) to evaluate our sentence similarity method which consists of 5801 pairs of sentences collected from a range of online newswire over a period of 18 months for experiments. This dataset is divided into 4076 training pairs and 1725 test pairs. The training pairs consist of 3900 paraphrases and the test pairs consist 1147 paraphrase. The remaining sentence pairs in the corpora are not paraphrases. We test our method on these test pairs and compare results with other methods which are tested on the same corpus. We also evaluated the performance of Dice measure, Resemblance measure and an adaptation of the VSM cosine similarity measure on the same test corpus. Resemblance is the method explained in section 2 and the Dice measure follows the same principle of term overlaps whose similarity value is given by the ratio between twice the number of term overlaps and the total number of terms in both the sentences (Manning & Schütze, 1999). The adopted VSM cosine similarity measure, vector-based (A), is explained in section 1 and uses stems instead of words and calculating the idf value from the given corpus rather than using some external one. The evaluations of these methods are given in Table 1 where the evaluation value named accuracy represents the number of correctly identified true or false classifications (Mihalcea & Corley, 2006) and the rest of the evaluation values bare their traditional meaning. In Table 1, the first two section of the table presents the best results according to the highest accuracy achieved by increasing the threshold by 0.1. The remaining results from the other three sections are taken from Abdalgader et al. (Abdalgader & Skabar, 2011). Table 1 shows two baseline methods. The random method is the method which randomly assigns similarity values to the sentence pairs and the vector-based method is the VSM based cosine similarity measure between two sentences with tf*idf term weights computed using external corpus. All our experiments were done with stems as terms and without stopwords. The results for our SVSM shows improvement over the baselines with higher recall but are not better than existing methods. Our SVSM method uses the distribution of terms across sentences from which it captures the sense of the term and the correlation between other terms which leads us to believe that The Dice measure and Resemblance measure perform better than the baseline methods and have similar F-measure values with the existing methods. The evaluation of the vector-based (A) method shows that this method is among the best corpus-based sentence similarity methods with higher precision and recall values and Table 2 shows that even with larger collection of text this method performs equally well. Conclusion and Discussion In this paper we introduce a new method (SVSM) to compute the similarity between short text which takes the similarity problem as a classification task. This new method is a modified version of VSM and is similar to the second-order similarity method (Kaufmann, 2000). This method is able to capture the similarity between short text by using short text vectors which encodes three corpus-based information which are the importance of the term as idf, the distribution of terms in the short text of the corpus which represent the sense of the terms and the correlation between terms present in the pair of short text. 
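The overlap measures and the threshold-based evaluation used above can be sketched as follows; the Resemblance variant over word trigrams, the tiny example pair and the threshold are illustrative assumptions:

def dice(a_tokens, b_tokens):
    # Twice the number of shared terms over the total number of terms.
    a, b = set(a_tokens), set(b_tokens)
    return 2 * len(a & b) / (len(a) + len(b))

def resemblance(a_tokens, b_tokens, n=3):
    # Jaccard coefficient over word n-grams (the fingerprinting idea).
    grams = lambda toks: {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}
    A, B = grams(a_tokens), grams(b_tokens)
    return len(A & B) / len(A | B) if A | B else 0.0

def accuracy(scores, labels, threshold):
    # Fraction of pairs whose thresholded prediction matches the gold label.
    return sum((s >= threshold) == l for s, l in zip(scores, labels)) / len(labels)

s1 = "the shares fell sharply on monday".split()
s2 = "shares fell sharply at the start of the week".split()
print(dice(s1, s2), resemblance(s1, s2))
print(accuracy([0.7, 0.2, 0.9], [True, False, True], threshold=0.5))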
SVSM performs better than the baseline methods with high recall and has the potential to perform better with more text available to be able to model the language by encoding the three information it utilizes. Even though this method assumes that the short text are independent of each other, which is unrealistic, we believe that the effect of this assumption can be reduced by using stems and clustering techniques. This belief has not been tested and will be incorporated in our future work. We also show that stemming increases the performance of vector-based baseline and is one of the best corpusbased sentence similarity method that exist at least for english. We also use two other information retrieval measures, Dice and Resemblance, to find similar sentences and see that they do perform better than the baseline methods. All these experiments have been done on the MSR paraphrase corpus which does not contain other types of similar short text other than paraphrase and therefore, the results only partially represent the ability of the techniques to determine similarity. Further experiments of other types of short text pairs must be done to understand the full extent of the ability of our method.
Daany -- DAta ANalYtics on .NET

Daany is a .NET, cross-platform data analytics and linear algebra library written in C#, intended as a tool for data preparation, data transformation, and feature engineering. The library is implemented on top of .NET Standard 2.1, supports .NET Core 3.0 and above, and is separated into several Visual Studio projects that can be installed as NuGet packages. The library implements DataFrame as its core component, extended with a set of data science and linear algebra features. It contains several implementations of time series decomposition (SSA, STL, ARIMA) and optimization methods (SGD), as well as plotting support. The library also implements a set of features based on matrix, vector, and similar linear algebra operations. The main part of the library is Daany.DataFrame, with an implementation similar to that found in the Python-based Pandas library. The paper presents the main functionalities and the implementation behind the Daany packages in the form of a developer guide and can be used as a manual for using Daany in everyday work. Finally, the paper lists the papers that have used the library.

Introduction

Daany is a .NET data analytics library written in C# that supports various kinds of data transformation, descriptive statistics, and linear algebra. With Daany a user can load data from a text-based file into a DataFrame arranged into columns, rows, and an index. The user can also create a Series object, a special kind of Daany.DataFrame, in order to work with time series data. Once the data is loaded, the user can start analyzing it by performing various transformations, and the results can be displayed as charts or tabular data.

Daany is an open source project hosted at https://github.com/bhrnjica/daany. In order to build and run the library from the source code, one can use Windows or a Linux-based distribution. The easiest way to build the library is to use the command-line dotnet tool to build the binaries; otherwise, Visual Studio or Visual Studio Code can be used for building and developing the library. The GitHub project contains documentation and a unit test project for every operation implemented in the library. The Daany Developer Guide covers in detail all aspects of the implementation and the library features.

The library implements the Daany.MathStuff module, which consists of algebraic operations on matrices and vectors as well as a rich set of statistical distributions and parameters. Daany.LinA extends it in order to gain better performance and functionality; Daany.LinA is the .NET wrapper around the LAPACK [1] and BLAS [2] C++ libraries. Besides data analysis, the library implements a set of statistics and data science features, e.g., time series decomposition and optimization performance parameters. The main components of the library can be installed separately as NuGet packages.

Besides classic .NET library usage, Daany is designed to be used for data exploration and transformation in a .NET Jupyter Notebook. When using Daany in a Jupyter Notebook, the user should register a formatter for the DataFrame so that the notebook can render the DataFrame as natural tabular data; the formatter can be found later in the text. Once the packages are installed, you can start developing your app using the Daany packages.

Namespaces in Daany

The Daany project contains several namespaces separating the different implementations. The following list contains the relevant namespaces:
1. using Daany - the data frame and related core implementation,
2. using Daany.Ext - data frame extensions, used with dependencies on third-party libraries,
3. using Daany.Stat - a set of statistics-related implementations, e.g., descriptive statistics, optimizers, time series, etc.,
4. using Daany.LinA - a set of linear algebra routines wrapping LAPACK and BLAS.

2.1 How to start with Daany

Daany is a .NET component and can run on any platform that .NET supports. It can be used from Visual Studio or Visual Studio Code. It consists of five NuGet packages, so the easiest way to start is to install the packages in a .NET application or a Jupyter Notebook. In the NuGet repositories the user can find five packages whose names start with Daany, which are listed in Figure 3.

3 Daany.DataFrame - data analysis and transformation of tabular data

The main part of the Daany project is Daany.DataFrame, a C# implementation of a data frame. A data frame is a software component used for handling tabular data, especially for data preparation, feature engineering, and analysis during the development of machine learning models. The concept of the Daany.DataFrame implementation is based on simplicity and .NET coding standards. It represents tabular data consisting of columns and rows, where each column has a name and a type:

// create a data frame from a dictionary collection
var df = new DataFrame(dict);

// first save the data frame to disk and load it back
DataFrame.ToCsv(filename, df);

// create a data frame with 3 rows and 7 columns
var dfFromFile = DataFrame.FromCsv(filename, sep: ',');

// check the size of the data frame
Assert.Equal(3, dfFromFile.RowCount());
Assert.Equal(new string[] { "ID", "City", "Zip Code", "State", "IsHome", "Values", "Date" },
             dfFromFile.Columns); // the compared expression was garbled in the source; Columns is assumed here
Assert.Equal(7, dfFromFile.ColCount());

First, the data frame is created from a dictionary collection. The data frame is then stored to a file and, after successful saving, the same data frame is re-created from the CSV file. At the end of the code snippet, several asserts verify that everything is implemented correctly.

In case performance is important, the column types should be passed to the FromCsv method, which can reduce loading time by up to 50%. For example, the following code loads the data from the file, passing predefined column types:

// define the types of the columns
var colTypes1 = new ColType[] { ColType.I32, ColType.IN, ColType.I32, ColType.STR, /* remaining entries truncated in the source */ };

// create a data frame with 3 rows and 7 columns
var dfFromFile = DataFrame.FromCsv(filename, sep: ',', colTypes: colTypes1);

3.2 Create Daany.DataFrame from a web url

Data can be loaded directly from web storage by using the FromWeb static method. The following code shows how to load the Concrete Slump Test data from the web. The data set includes 103 data points, with 7 input variables and 3 output variables: Cement, Slag, Fly ash, Water, SP, Coarse Aggr., Fine Aggr., SLUMP (cm), FLOW (cm), Strength (Mpa). The following code loads the Concrete Slump Test data set into a Daany.DataFrame:

// define the web url where the data is stored
var url = "https://archive.ics.uci.edu/ml/machine-learning-databases/concrete/slump/slump_"; // truncated in the source
var df = DataFrame.FromWeb(url);
df.Head(5);

Besides the presented examples, a data frame can also be created as the result of any operation applied to an existing Daany data frame.

Daany.DataFrame operations

Daany.DataFrame has a rich set of operations that can be classified into several groups (a short usage sketch combining some of these entry points follows the list):

1. Add or Insert a new row/column in the DataFrame - this group of operations is used to create a new row or column in an existing DataFrame. A created column/row can be defined as a list or dictionary collection, but it can also be defined dynamically, based on calculation logic over the existing columns in the data frame.
2. Handling missing values in the DataFrame - this set of operations handles the missing values in the data frame.
3. Aggregate - this group of operations performs arithmetic operations on the data frame. The result of an aggregation is a new list of values or a new data frame containing the results of the aggregation operations.
4. Filter - these operations return a data frame satisfying a specific filter condition. The RemoveRows method acts in the opposite way and removes all rows matching a condition specified through a delegate implementation.
5. Sorting - used for sorting the rows in the data frame. The sorting operation supports both ascending and descending order.
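Putting the pieces together, here is a minimal end-to-end sketch that uses only the entry points documented above (the DataFrame constructor over a dictionary, ToCsv, FromCsv, RowCount, ColCount, and Head). The file name and the column data are illustrative assumptions, and the dictionary shape follows the paper's own example rather than a verified API reference.

using System;
using System.Collections.Generic;
using Daany;

class QuickStart
{
    static void Main()
    {
        // Build a small frame from an in-memory dictionary (illustrative data).
        var dict = new Dictionary<string, List<object>>
        {
            { "ID",     new List<object> { 1, 2, 3 } },
            { "City",   new List<object> { "Sarajevo", "Seattle", "Berlin" } },
            { "Values", new List<object> { 3.14, 2.71, 1.61 } },
        };
        var df = new DataFrame(dict);

        // Round-trip through CSV, as in the unit test shown earlier.
        var filename = "sample.csv"; // hypothetical path
        DataFrame.ToCsv(filename, df);
        var dfFromFile = DataFrame.FromCsv(filename, sep: ',');

        Console.WriteLine($"{dfFromFile.RowCount()} rows x {dfFromFile.ColCount()} columns");

        // First rows; in a Jupyter Notebook the registered formatter
        // renders the returned frame as tabular data.
        var top = dfFromFile.Head(5);
    }
}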
Analysis of Machine Translation Systems' Errors in Tense, Aspect, and Modality

Errors in the translation of tense, aspect, and modality by machine translation systems were analyzed for six translation systems on the market and for our new systems for translating tense, aspect, and modality. The results showed that our systems outperformed the other systems. They also showed that the other systems often produced progressive forms rather than the correct present forms; our systems rarely made this mistake. Translation systems on the market could thus be improved by incorporating the methods used in our systems. Moreover, error analysis of the translation systems on the market identified information that would be useful for improving them.

Method

In our investigation, we considered the translation of tense, aspect, and modality from Japanese to English to mean the production of the surface expressions of tense, aspect, and modality of the main verb phrase in the translated English sentence. We calculated the accuracy rates and extracted the error patterns in the translations. We used combinations of a predefined set of categories as the surface expressions of tense, aspect, and modality; we refer to these as the categories of tense, aspect, and modality.

The six translation systems were the latest systems of leading translation system companies as of October 2003. Our systems for translating tense, aspect, and modality are based on support vector machines (SVMs) (Murata et al., 2001).² They translate Japanese tense, aspect, and modality expressions into English by detecting previously defined categories of tense, aspect, and modality from English expressions. The categories are detected as a categorization problem by SVMs (Cristianini and Shawe-Taylor, 2000; Kudoh, 2000). However, an SVM can handle only two categories at a time; we therefore used a pairwise method in addition to the SVM to handle more than two categories (Moreira and Mayoraz, 1998). As training sentences, we used the sentences remaining after eliminating the 800 evaluation sentences from the 40,198-sentence corpus.¹ We used two feature sets for the machine learning:

• Feature Set 1: the 1- to 10-gram strings at the ends of the input Japanese sentences, e.g., shinai (do not), shinakatta (did not).
• Feature Set 2: all of the morphemes in each input sentence, e.g., kyou (today), watashi (I), wa (topic-marker particle), hashiru (run).

¹ This corpus was made in our previous studies (Murata et al., 2002b; Murata et al., 2005).
² We found that support vector machines were more accurate than other kinds of machine learning methods such as the decision-list method and the maximum entropy method (Murata et al., 2001). In addition, the use of support vector machines has been found to be effective in many studies (Taira and Haruno, 2001; Kudo and Matsumoto, 2000; Nakagawa et al., 2001; Murata et al., 2002a). We therefore used support vector machines in our translation systems. The detailed parameter settings we used are described in our previous paper (Murata et al., 2001).

We performed the evaluation using both feature sets, using only Feature Set 1, and using only Feature Set 2. Because the tense, aspect, and modality expressions of a Japanese sentence can be translated into multiple categories of tense, aspect, and modality in English, we used a strict evaluation procedure. The evaluation was performed by an outside company.
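As a concrete illustration of Feature Set 1 (our sketch, not the authors' preprocessing code), the following extracts the 1- to 10-gram strings at the end of a sentence; the romanized example input stands in for real Japanese text.

using System;
using System.Collections.Generic;

static class EndOfSentenceFeatures
{
    // Feature Set 1: the trailing n characters of the sentence, n = 1..10
    // (or fewer if the sentence is shorter).
    public static List<string> Extract(string sentence, int maxN = 10)
    {
        var features = new List<string>();
        for (int n = 1; n <= Math.Min(maxN, sentence.Length); n++)
            features.Add(sentence.Substring(sentence.Length - n));
        return features;
    }

    static void Main()
    {
        foreach (var f in Extract("shinakatta"))
            Console.WriteLine(f); // "a", "ta", "tta", "atta", ...
    }
}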
We first defined as the correct category the categories of tense, aspect, and modality of the main verb phrase of the English sentence in the original parallel corpus. The original parallel corpus contained example sentences taken from a Japanese-English dictionary (Murata et al., 2002b; Murata et al., 2005). We used as candidate categories the categories of tense, aspect, and modality in the English sentences as translated independently by three professional translators, together with the categories output by the six translation systems on the market and by our translation systems. Two other professional translators determined whether each candidate category was correct or not, and the ones judged to be correct were defined as the correct categories. When the two judges disagreed about whether a candidate category was correct, it was defined as correct, because we wanted to examine only errors that could be judged to be clearly incorrect. However, we defined as incorrect a candidate category that was judged to be correct only under the assumption of a special context or situation. The occurrence rates for the correct categories are shown in Table 1; categories with a frequency of less than ten are not shown. Because more than one category can be correct, the total rates can exceed 1.

Investigation

We evaluated the performance of the translation systems by using the method described in the previous section. The accuracy rates are shown in Table 2. For the baseline method, if a sentence ended with ta (a Japanese particle used for the past tense), it was judged to be in the past tense; otherwise, it was judged to be in the present tense. When a translation system could not output a sentence, the output of the baseline method was used instead. We refer to the six translation systems as A, B, C, D, E, and F. As shown in Table 2, the SVM had the highest accuracy rates when all features were used. Systems A and B had the highest accuracy rates of the systems on the market, while Systems E and F had accuracy rates near that of the baseline method.

Next, we analyzed errors by investigating the error patterns of the cases where the translations were judged to be incorrect. An error pattern is a pair of the correct category and the incorrect category output by a system. When multiple categories were correct, each case was counted as a separate error pattern (e.g., when both "present" and "progressive" were correct and the system output was "past", the two error patterns "present:past" and "progressive:past" were extracted). The category "no output" was defined for the case when a translation system on the market did not output a verb phrase in the English translation; however, this rarely occurred, so the category is not presented in the tables shown here. The results of investigating the error patterns are shown in Table 3; only patterns with a total frequency of more than nine, or an error frequency for an individual system of more than two, are shown.
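The error-pattern extraction rule described above reduces to a few lines; this is our sketch of that rule, not the authors' code.

using System;
using System.Collections.Generic;

static class ErrorPatternExtraction
{
    // One (correct, output) pair per correct category, but only when the
    // system's output is not among the correct categories.
    public static IEnumerable<(string correct, string output)> Extract(
        ISet<string> correctCategories, string systemOutput)
    {
        if (correctCategories.Contains(systemOutput))
            yield break; // the output is correct: no error pattern
        foreach (var c in correctCategories)
            yield return (c, systemOutput);
    }

    static void Main()
    {
        var correct = new HashSet<string> { "present", "progressive" };
        foreach (var (c, o) in Extract(correct, "past"))
            Console.WriteLine($"{c}:{o}"); // present:past, progressive:past
    }
}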
We investigated the tendency of the distribution of error patterns for the six translation systems on the market and the SVM when all features were used. (We also ran the SVM with Feature Set 1 only and with Feature Set 2 only, but displaying all of these results on one graph would have made the graph and the analysis too complicated.) For the analysis shown in Figure 2, we extracted the patterns for which the frequency of errors for a system was more than two and calculated the co-occurrence frequency of the error patterns and the seven translation systems. We constructed cross tables in this manner and then used the dual scaling method to analyze them (Weller and Romney, 1990; Ueda et al., 2003). The resulting plot, shown in Figure 1, roughly shows the error patterns of each translation system. For example, the proximity of the error patterns "past:perfect", "present:perfect", and "past progressive:perfect" to System F indicates that System F produced perfect forms rather than the correct past, present, or past progressive forms more often than the other systems.

Examination of our systems

We first examined the performance of the SVM when only Feature Set 1 was used: it made a few more errors in many error patterns than when both feature sets were used. When only Feature Set 2 was used, many errors were made with the pair "present" and "past". We found that translating Japanese tense, aspect, and modality expressions is difficult when only word information is used; the characters at the ends of Japanese sentences are also very important.

Examination of systems on the market

We next examined the performance of the systems on the market. As shown in Table 3, Systems A and B had virtually the same performance. The output categories of tense, aspect, and modality for System A were exactly the same as those for System B, and the error patterns for System A were also exactly the same as those for System B. Although Systems A and B were developed by different companies, their outputs were so similar that they were likely developed cooperatively. We also found that the translated sentences and the output categories of tense, aspect, and modality were very similar for Systems C and D; again, some cooperative development appears to have taken place. These relationships can also be predicted from Figure 1, because Systems A and B are near each other and Systems C and D are near each other. Systems A and B had the highest accuracy rates, and Systems C and D the next highest, among the six systems on the market. This indicates that cooperative development tends to result in higher accuracy rates.

Comparison of systems on the market and our systems

We found that the systems on the market very often produced progressive forms rather than the correct present forms, while our systems rarely made such errors. An example of this is as follows.

Input Japanese sentence: kono heya niwa suidou ga torituke rareteiru. (this) (room) (city water) (is laid)
Translation result: A water service is being installed on this room.
Correct translation: City water is laid on in this room.

The system produced the progressive form rather than the correct present form. Our systems made this error much less often than the translation systems on the market, and we found that the methods used in our systems could alleviate this problem. Use of these methods will thus aid in the development of future machine translation systems. We also found that our systems made fewer errors than the systems on the market in producing a present form rather than the correct past form and in producing a present or progressive form rather than the correct perfect form.
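The cross tables fed to the dual scaling analysis are simple co-occurrence counts. A minimal sketch of their construction follows (the dual scaling step itself is not reproduced); the observations are illustrative placeholders, not the counts from Table 3.

using System;
using System.Collections.Generic;

static class CrossTables
{
    static void Main()
    {
        // Rows: error patterns ("correct:output"); columns: systems A..F + SVM.
        var counts = new Dictionary<(string pattern, string system), int>();

        // Illustrative observations: (system, correct category, output category).
        var observations = new (string system, string correct, string output)[]
        {
            ("F", "past", "perfect"),
            ("F", "present", "perfect"),
            ("C", "past progressive", "past perfect"),
            ("D", "past progressive", "past perfect"),
        };

        foreach (var (system, correct, output) in observations)
        {
            var key = ($"{correct}:{output}", system);
            counts[key] = counts.TryGetValue(key, out var n) ? n + 1 : 1;
        }

        // The resulting cross table is what the dual scaling method analyzes.
        foreach (var kv in counts)
            Console.WriteLine($"{kv.Key.pattern} x {kv.Key.system}: {kv.Value}");
    }
}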
Although the perfect form is thought to be difficult to handle in a translation system, the SVM made few such errors when all features were used; our system is thus useful for reducing such errors. All of the systems on the market and our systems often produced a present form rather than "will". In Japanese, the same base form is used for both the future and the present, so correct translation to "will" versus the present form is difficult and causes trouble for any Japanese-to-English translation system.

Examination of each system on the market

Next, we examined the error patterns of each translation system by using Table 3 and Figure 1. We found that Systems C and D more often produced a past perfect form rather than the correct past progressive or past form; this is a typical error of Systems C and D. Apparently, the systems failed to adjust, so they were more likely to produce a past perfect form. Engineers constructing such systems should be able to improve their performance relatively easily by examining these results. In addition, Systems C and D often produced "will" rather than the correct present form. An example of this is as follows.

Input Japanese sentence: sadou no kigen wa 16 seiki izen ni made saka noboru. (tea ceremony) (origin) (16th century) (before) (trace)
Translation result: The origin of the tea ceremony will go back even before the 16th century.
Correct translation 1: The origin of the tea ceremony dates back before the 16th century.
Correct translation 2: The origin of the tea ceremony can be traced back before the 16th century.
Correct translation 3: The tea ceremony originated before the 16th century.

Alleviating this problem, however, is difficult because of the translation problem with respect to "will" and present forms described above.

System E more often produced "might" rather than the correct present form, yielding sentences in which the use of "might" is unnatural. In addition, System E more often produced a present tense rather than the correct "must" or "should". An example of this is as follows.

Input Japanese sentence: kimi no souiu okonai wa togame rareru bekida. (your) (such) (conduct) (be blame) (must be)
Translation result: It is necessary to blame such your doing.
Correct translation 1: You must be blamed for such conduct.
Correct translation 2: You should be blamed for such conduct.

Although System E produced a meaning similar to the correct modality by using the sentence pattern "it is necessary", "should" or "must" would be more appropriate expressions. The developers of System E should be able to improve the performance of their system relatively easily by examining these results.

System F more often produced a perfect form rather than the correct present, past, or past progressive form in cases where, for example, a present form was appropriate. Apparently, the system failed to adjust, so it was more likely to produce a perfect form. In addition, when we examined the translation results for System F, we found cases where the translation was incorrectly divided into two sentences. An example of this is as follows.

Input Japanese sentence: kare wa jissai yori wakaku mieru. (he) (he really is) (younger) (look)
Translation result: It is younger than practice and he can be seen.
Correct translation: He looks younger than he really is.
The sentence was incorrectly divided into two sentences, and the system produced "can" rather than the correct present form. The developers of System F should easily be able to improve the performance of their system by examining these results.

Other kinds of error patterns

We also examined other error patterns. The SVM and Systems A, B, E, and F sometimes produced a present form rather than the correct imperative form, while Systems C and D did not; Systems C and D should thus produce better translations in such cases. We also found cases in which the subject noun phrase was missing from the translation result: because the structure of the translated sentence was broken, the tense, aspect, and modality expressions were broken as well. This error pattern would be difficult to eliminate, as doing so would require improving the overall performance of the translation system.

Conclusion

We analyzed errors in the Japanese-to-English translation of tense, aspect, and modality by six machine translation systems on the market and by our new translation systems for tense, aspect, and modality. Our evaluation showed that our support vector machine (SVM) using all features had the highest accuracy rate. Two of the systems on the market had the highest accuracy rates among those on the market, while two others had accuracies as low as that of the baseline method. Error analysis showed that when the string characters at the ends of sentences were not used, the SVM had low accuracy and often produced a past form rather than the correct present form; use of the string characters at the ends of sentences is thus important.

Our system outperformed the other systems. The other systems often produced progressive forms rather than the correct present forms, while our system rarely made such errors. This indicates that the translation systems on the market can be improved by using the methods used in our system. We also found that our systems made fewer errors in producing a present form rather than the correct past form and in producing a present or progressive form rather than the correct perfect form; these errors, too, can be reduced by using the methods used in our system.

In our experimental results, all of the systems on the market and our systems often produced a present form rather than "will", indicating that correct translation to "will" versus present forms is difficult. This paper is thus useful for identifying error patterns that are difficult to correct in translation systems. Our investigation also detected error patterns made by each individual system on the market. Most of these errors can be corrected relatively easily because their corresponding sentences were translated correctly by the other systems; the paper is thus also useful for identifying error patterns that can be corrected relatively easily in each translation system. By comparing the results of the individual systems, we extracted error patterns that are difficult to correct (those made by almost all of the systems) and error patterns that are relatively easy to correct (those made by only a few systems and not by the others). Our approach is thus useful for determining whether each error pattern can be corrected easily.

Acknowledgements

We are grateful to the machine translation systems engineers who made valuable comments about this paper.
Optimal Design of an Hourglass in-Fiber Air Fabry-Perot Microcavity—Towards Spectral Characteristics and Strain Sensing Technology

An hourglass in-fiber air microcavity Fabry-Perot interferometer is proposed in this paper; the second reflecting surface of the in-fiber microcavity is designed as a concave reflector with an optimal curvature radius in order to improve the spectral characteristics. Experimental results proved that the extinction ratio of a Fabry-Perot interferometer with a cavity length of 60 µm and a concave reflector radius of 60 µm is higher than that of a rectangular Fabry-Perot interferometer with a cavity length of 60 µm (14 dB vs. 11 dB). Theory and numerical simulation show that the strain sensitivity of the sensor can be improved by reducing the microcavity wall thickness and microcavity diameter: when the in-fiber microcavity length is 40 µm, the microcavity wall thickness is 10 µm, the microcavity diameter is 20 µm, and the curvature radius of reflective surface II is 50 µm, the interference fringe contrast is greater than 0.97, and an axial-pull sensitivity of 20.46 nm/N with a resolution of 1 mN can be achieved in the range of 0-1 N axial tension. The results show that the performance of the hourglass in-fiber microcavity interferometer is far superior to that of the traditional Fabry-Perot interferometer.

Introduction

In recent years, a variety of fiber optic strain sensors have been studied [1-3] for application in biological systems [4], structural health monitoring of composite materials [5,6], and civil engineering, such as health monitoring of buildings and dams [7,8]. For fiber Bragg grating (FBG) sensors, the strain sensitivity is less than 1.2 pm/µε [9,10], and for fiber Mach-Zehnder interferometers the sensitivity is about 5.0 pm/µε [11,12]. However, in these sensors the cross-sensitivity between strain and temperature is hard to overcome. Optical microcavity sensing structures, as an alternative type of strain sensor, have unique advantages such as high sensitivity, compact size, and low temperature cross-sensitivity [13-16]. Steinmetz et al. studied microcavity concave mirrors made by positioning miniature spherical mirrors on the end of single- or multimode optical fibers using a transfer technique [17]. Concave mirrors made with a CO2 laser were reported in [18-20]. A short-cavity Fabry-Perot sensor for strain sensing was also fabricated by acid-etching the end of a multi-mode fiber [21]. In 2007, Rao et al. studied micro-Fabry-Perot interferometers machined in silica fibers by femtosecond laser; a cavity with a length of 75 µm based on a PCF was made, and the strain sensitivity reached 0.006 nm/µε [22]. In 2012, Duan et al. used an optical fiber fusion splicer to obtain a 100 µm ellipsoid microcavity and used it in tensile sensing, where the sensitivity was 4 pm/µε and the linearity was 99.99% [23]. A Fabry-Pérot (FP) strain sensor made by splicing a section of hollow-core ring photonic crystal fiber between two standard single-mode fibers was also investigated; for a length of 13 µm, a strain sensitivity of 15.4 pm/µε and a temperature sensitivity of ~0.81 pm/°C were attained [24]. In 2013, a microhole was fabricated in the end face of a single-mode fiber by femtosecond laser, and the fiber tip with the microhole structure was then spliced to another cleaved single-mode fiber. The SMF with a hollow sphere was tapered by controlling the moving speed of the flame and the holders.
A maximum sensitivity of 6.8 pm/µε was achieved with a taper region length of 860 µm [25]. In 2014, Kaur et al. presented a microcavity strain sensor for high-temperature applications. The EFPI sensor is fabricated by micromachining a cavity on the tip of a standard single-mode fiber with a femtosecond laser and is then self-enclosed by fusion splicing another piece of single-mode fiber. The sensor exhibits linear performance for a range up to 3700 µε and a low temperature sensitivity of only 0.59 pm/°C through 800 °C [26].

In this work, an hourglass in-fiber air microcavity Fabry-Perot interferometer is proposed. The second reflecting surface of the in-fiber microcavity is designed as a concave reflector with an optimal curvature radius in order to improve the spectral characteristics. Compared with the fabrication processes and strain sensitivities of other microcavity devices [18-23,25], this sensor is attractive for its low cost, small volume, high sensitivity, and better performance than the traditional Fabry-Perot interferometer. The experimental results proved that the extinction ratio of a Fabry-Perot interferometer with a microcavity length of 60 µm and a concave reflector radius of 60 µm is 14 dB, whereas the extinction ratio of a rectangular Fabry-Perot interferometer with a microcavity length of 60 µm is only 11 dB. The linearity is up to 99.947% in the range of 0-1 N axial tension, the axial-pull sensitivity is up to 20.46 nm/N, and the maximum interference intensity of the reflection spectrum is above 0.08. The contrast of the reflection spectrum is greater than 0.97, and the cavity length is 40 µm, which guarantees a good free spectral range (28 nm).

Sensor Structure

The traditional in-fiber Fabry-Perot cavity is an axisymmetric cylindrical structure containing the fiber core. The two reflective surfaces are parallel to the plane perpendicular to the fiber axis, and the internal material of the microcavity is air, as shown in Figure 1a. In this paper, a new kind of hourglass in-fiber air microcavity structure is designed in a single-mode fiber (Corning SMF-28e+); the structure is shown in Figure 1d. The step from Figure 1a to 1b is the optimization of the spectral characteristics; the step from Figure 1b to 1d is the optimization of the strain characteristics.

This structure has better spectral characteristics than the traditional Fabry-Perot cavity: when the transmitted light is launched into the microcavity, the light reflects on reflective surface I and undergoes Fresnel diffraction, as shown in Figure 2b. This paper therefore proposes that reflective surface II be designed as a sphere with the best radius of curvature, since a reflector II with an extra-large or extra-small curvature radius also leads to scattering loss. If the reflected light energy is bound to the fiber core area, as shown in Figure 1d, an interference spectrum with higher power is obtained: the spreading loss at reflective surface II is reduced by the focusing effect of the concave mirror, as shown in Figure 2c.
Why should reflective surface I be perpendicular to the fiber axis? A curved reflective surface here would lead to reflection loss, as shown in Figure 3c,d: the configuration of Figure 3c enhances reverse diffraction, and that of Figure 3d enhances forward diffraction. Figure 3a is the most reasonable by comparison. Figure 3b is a photograph of a parallel reflecting surface, which indicates the feasibility and simplicity of the preparation; however, the curvature radius and cavity length of reflector II are difficult to control precisely, as shown in Figure 2d. We believe that higher-precision preparation can be achieved with the development of micro-/nano-3D printing [27]. In addition, the structure shown in Figure 1d has better strain sensitivity than the traditional F-P cavity: keeping the other structural parameters unchanged, the microcavity wall thickness and cavity diameter are reduced to improve the strain sensitivity. In practical applications, the hourglass optical microcavity strain sensor can be used in large bridges, ships, and other micro-strain measurements [28].

Sensing Principle

The traditional Fabry-Perot interference principle is based on the theory of parallel-plate multi-beam interference, derived under the conditions [29] that (1) the two reflective surfaces of the Fabry-Perot cavity are strictly parallel and (2) the spreading loss of the light and the absorption loss of the reflective surfaces can be ignored. However, the spherical reflector II of the hourglass optical microcavity structure is no longer strictly parallel to reflector I, and the incidence angle of the light on reflective surface II changes as the light repeatedly strikes different positions of the surface. As is known from the Fresnel formula [19], the reflectivity of a surface depends on the incidence angle of the light, so a multi-beam interference analysis of this structure is very complex. Since the interface reflectivity between optical fiber and air is less than 0.04, we can use the double-beam interference principle to analyze the hourglass optical microcavity sensor in a simplified way. Taking the refractive index of air as 1, the interference light intensity of the sensor's reflectance spectrum is $I_r(\lambda)$ [29,30]:

$$I_r(\lambda) = I_0(\lambda)\left[R_1 + R_2 + 2\sqrt{R_1 R_2}\,\cos\!\left(\frac{4\pi n L}{\lambda}\right)\right] \quad (1)$$

(omitting the transmission factor $(1-R_1)^2$ multiplying $R_2$, which is close to unity for $R_1 < 0.04$). The maximum value of $I_r(\lambda)$ is $I_{max}$:

$$I_{max} = I_0(\lambda)\left[R_1 + R_2 + 2\sqrt{R_1 R_2}\right] \quad (2)$$

The interference contrast of the sensor's reflectance spectrum is $V$:

$$V = \frac{I_{max} - I_{min}}{I_{max} + I_{min}} = \frac{2\sqrt{R_1 R_2}}{R_1 + R_2} \quad (3)$$

In these formulas, $R_1$ is the effective reflectivity of reflective surface I; $R_2$ is the effective reflectivity of reflective surface II; $L$ is the microcavity length; $\lambda$ is the wavelength of the incident light; $I_0(\lambda)$ is the intensity of the incident light; $I_{max}$ and $I_{min}$ are the maxima and minima of the interference spectral intensity.

The strain sensing principle of the hourglass optical microcavity sensor is that the interference spectrum dip moves as the microcavity length $L$ changes. Deformation of the measured object subjects the optical microcavity structure to an axial tension [31], which changes the microcavity length and causes a red shift of the reflection spectrum. From Equation (1), the sensitivity of the optical microcavity tension sensor can be obtained as

$$K = \frac{\Delta\lambda_{dip}}{F} = \frac{\lambda_{dip}}{F}\cdot\frac{\Delta L}{L} \quad (4)$$

where $K$ is the sensitivity of the sensor, $\lambda_{dip}$ is the peak/valley wavelength of the reflection spectrum, $F$ is the axial tension, and $L$ is the microcavity length. From Equation (4), with $F$ held constant, the sensitivity of the optical microcavity tension sensor mainly depends on $\Delta L/L$.
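As a numeric illustration of Equation (3) (our own check, not part of the original analysis): writing $x = R_2/R_1$,

$$V = \frac{2\sqrt{R_1 R_2}}{R_1 + R_2} = \frac{2\sqrt{x}}{1+x},$$

so a lossless cavity with $R_2 = R_1 \approx 0.04$ gives $V = 1$, while the flat-reflector contrast $V = 0.78$ reported below corresponds to $x \approx 0.23$; spreading loss at surface II thus reduces its effective reflectivity to roughly a quarter of $R_1$.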
Optimal Design of the Hourglass Microcavity Sensor

This section provides a concise and precise description of the simulation and experimental results, their interpretation, and the conclusions that can be drawn from them.

Influence of Microcavity Structure Parameters on Spectral Characteristics

In this paper, the FDTD method is used to establish the microcavity structure model, and the interference spectrum signal intensity and the contrast of the reflection spectrum are simulated. The optical microcavity structure parameters analyzed are the cavity length L, the cavity diameter ψ, and the curvature radius φ2 of reflective surface II. The parameters of the simulated sensor are shown in Figure 4 (simulation model of the microcavity structure and parameter settings: the refractive index of the core is 1.4679, the refractive index of the cladding is 1.4613, and the refractive index of air is 1; the wavelength range is 1520~1570 nm; φ2 is the curvature radius of reflective surface II, γ the microcavity wall thickness, ψ the microcavity diameter, and L the microcavity length).

Before studying the influence of the cavity diameter on the contrast, a reasonable cavity length L0 and curvature radius φ2 of reflector II must be chosen. From Equation (1) we know that

$$\frac{4\pi n L}{\lambda_1} - \frac{4\pi n L}{\lambda_2} = 2k\pi \quad (k = 0, 1, 2, 3, \ldots);$$

with n = 1, λ1 = 1530 nm, and λ2 = 1570 nm, k = 1/2/3 corresponds to cavity lengths L of 30 µm/60 µm/90 µm, and 1-3 dips respectively appear in the corresponding spectrum. Because the spherical reflector II of the optical microcavity influences the interference of the reflected light to a certain degree, a cavity length of 60 µm guarantees at least one dip in the spectral range of 1530 nm to 1570 nm and avoids the mixing phenomenon caused by a too-small free spectral range of the microcavity. Firstly, φ2 = ∞ is selected, i.e., reflector II is a plane; the simulated interference spectrum is shown in Figure 5.

Figure 5 shows that the maximum interference spectrum power (0.06~0.07) and the interference contrast (V ≈ 1) are best when ψ = 10 µm, while at the other simulated diameters the maximum power (0.04~0.05) and the contrast (V = 0.78) remain almost unchanged. This indicates the impact of the cavity diameter on the spectral characteristics. The reasons are as follows: when the microcavity diameter is less than the maximum width of the diffraction field, it constrains the diffraction of the light; part of the diffracted light undergoes reflection or total reflection at the microcavity wall, and more reflected light is then coupled back into the fiber core. It is worth noting that the cavity diameter cannot be made too small because of the difficulty of preparation, so the impact of reflector II on the spectral characteristics is very important. Based on the theoretical analysis, the cavity length L and the radius of curvature φ2 jointly influence the spectral characteristics. In this paper, the simulated cavity diameter is ψ = 60 µm, the cavity length L ranges over 30~90 µm, and the curvature radius φ2 over 30~100 µm, 150 µm, 200 µm, and ∞; the resulting interference contrast curves are shown in Figure 6.
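As a quick check of the dip-count condition above (our arithmetic, not in the original):

$$\frac{4\pi nL}{\lambda_1}-\frac{4\pi nL}{\lambda_2}=2k\pi \;\Rightarrow\; L=\frac{k\,\lambda_1\lambda_2}{2n(\lambda_2-\lambda_1)}=\frac{k\times 1530\times 1570\ \mathrm{nm}^2}{2\times 1\times 40\ \mathrm{nm}}\approx 30k\ \mu\mathrm{m},$$

so k = 1, 2, 3 reproduces the cavity lengths of 30, 60, and 90 µm, i.e., one to three dips across the 1530-1570 nm window.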
In Figure 6 the abscissa is the radius of curvature and the ordinate is the interference contrast. The simulations found that the following microcavity parameters give the best interference fringe contrast, with a signal intensity greater than 0.08: L = 30 µm, φ2 = 30~50 µm; L = 40 µm, φ2 = 40~60 µm; L = 50 µm, φ2 = 50~70 µm; L = 60 µm, φ2 = 60~80 µm; L = 70 µm, φ2 = 70~90 µm; L = 80 µm, φ2 = 80~90 µm; L = 90 µm, φ2 = 90~110 µm.

In order to fabricate the microcavity interferometer, a fusion splicer (Fitel S178, Koga, Japan) and a mechanical fiber cleaver (Fitel S325) were used in the experiment. The rectangular air FP cavity was made from a hollow-core fiber (HCF) section with a diameter of 50/125 µm sandwiched between two SMFs (SMF-28e+, Corning, NY, USA). The fusion parameter settings were as follows: a discharge intensity of 110 units, a discharge time of 420 ms, first a push distance of 8 µm, then a stretch distance of 3 µm. After many experiments, we obtained a rectangular Fabry-Perot interferometer with a microcavity length of 60 µm. For the fabrication of the sensor with a concave reflector, the spherical fiber end was made by electrical arc discharge on a section of HCF end face in a commercial fusion splicer. The experimental microcavity interference spectra are shown in Figure 7. The extinction ratio is the difference between the peak value and the dip value of the interference spectrum. The experiment proved that the extinction ratio of the Fabry-Perot interferometer with a microcavity length of 60 µm and a concave reflector radius of 60 µm is higher than that of the rectangular Fabry-Perot interferometer with a microcavity length of 60 µm (14 dB vs. 11 dB). Due to the limitations of the experimental conditions, the experiment is not highly accurate, but it matches the simulation results.
Influence of Microcavity Structure Parameters on Strain Sensing Characteristics

From Equation (4), it is known that the sensitivity of an optical microcavity tension sensor mainly depends on ∆L/L when F is constant. This article uses static finite-element simulations of hourglass microcavity structures of different sizes to obtain the axial strain under tension. The parameters of the simulation model are shown in Figure 8, where the settings are: cavity diameter ψ = 60 µm, cavity length L from 30 µm to 90 µm, and curvature radius φ2 equal to 30~100 µm, 150 µm, 200 µm and ∞, while the other parameters remain constant.

Figure 9 shows the relationship between the cavity length L and ∆L/L when φ2 changes. The abscissa is the cavity length L and the ordinate is the relative length change ∆L/L. The results show that when the cavity length L is fixed, ∆L/L increases with the increase of the curvature radius φ2, and when the radius of curvature φ2 is fixed, ∆L/L decreases with the increase of the cavity length L.

From mechanics, the thinner the cavity wall, the more easily the optical microcavity deforms [31]. Thus, this paper mainly studies the influence of the cavity diameter ψ on the cavity deformation at the same cavity wall thickness. Based on the analysis above, this paper chooses a cavity length L = 40 µm and a curvature radius φ2 = 50 µm, with the following combinations of cavity diameter and wall thickness: ψ = 60 µm, γ = 32.5 µm; ψ = 60 µm, γ = 10 µm; ψ = 40 µm, γ = 10 µm. Figure 10 shows the simulated fields of the microcavity models with these structural parameters under 1 N axial tension. Figure 10a is the stress distribution field and Figure 10b-d are the displacement distribution fields; in Figure 10, blue marks the minimum and red the maximum. Figure 10a shows that when the optical microcavity sensor is under axial tension, the cavity walls are the main stress-bearing area and the two reflective surfaces are the minimum stress area.
Therefore, we can ignore the influence of the curvature radius φ2 on the strain sensitivity. Comparing Figure 10b,c, the microcavity deformation increases as the wall thickness decreases when the cavity diameter remains unchanged. Figure 10c,d show that the microcavity deformation increases with the reduction of the cavity diameter when the microcavity wall thickness is unchanged. Figure 10e shows the increasing trend of ∆L/L as the microcavity wall thickness and cavity diameter decrease.

Sensing Properties of the Hourglass Optical Microcavity Sensor

The sensor structure is put forward to further optimize the structure of the traditional Fabry-Perot interferometer, and it has practical application value. The total length of the interferometer structure used in the simulation is 200 µm, and the microcavity parameters are ψ = 20 µm, γ = 10 µm, L = 40 µm, φ2 = 50 µm. The axial tension range is 0-1 N, which already satisfies the micro-strain measurement demands of most practical cases. The structure of the sensor is as shown in Figure 1d, and the simulation results are shown in Figure 11.

The inset in Figure 11a shows the wavelength shift to longer wavelength as the axial tension increases from 0 to 1 N (0-80 MPa). The fitted curve gives a good axial-pull sensitivity of 20.46 nm/N with a linearity of 99.947%, about five times higher than the strain sensitivity of reported optical fiber strain sensors (about 4 nm/N) [32]. Based on the current equipment in the laboratory, the spectrometer resolution corresponds to 1 mN. Figure 11b shows that the interference intensity of the reflection spectrum I is above 0.08 and the interference contrast is greater than 0.97, which is much better than the flat Fabry-Perot cavity (V = 0.78, I = 0.04-0.05). This is important for low-reflectivity sensors.
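These trends, and the 20.46 nm/N figure, can be cross-checked with a deliberately crude model that ignores the hourglass shape and treats the cavity section as a uniform silica tube of inner diameter ψ and wall thickness γ under axial load F, so that ∆L/L = F/(E·A) and ∆λ ≈ λ·∆L/L. The Young's modulus of fused silica (about 72 GPa) is an assumed value and the numbers are only order-of-magnitude estimates, but the ordering of the geometries and the rough size of the sensitivity come out consistent with the finite-element results:

```python
import math

E = 72e9          # Pa, assumed Young's modulus of fused silica
lam0 = 1550e-9    # m, operating wavelength

def dLL_per_newton(psi_um, gamma_um):
    """Relative elongation per newton of a uniform tube with inner
    diameter psi and wall thickness gamma (micrometres)."""
    r_in = psi_um / 2 * 1e-6
    r_out = (psi_um / 2 + gamma_um) * 1e-6
    area = math.pi * (r_out**2 - r_in**2)   # load-bearing cross-section (m^2)
    return 1.0 / (E * area)

# Ordering check for the three geometries of Figure 10:
for psi, gamma in [(60, 32.5), (60, 10), (40, 10)]:
    print(f"psi={psi} um, gamma={gamma} um: dL/L per N ~ {dLL_per_newton(psi, gamma):.2e}")
# Thinner walls and smaller diameters give larger dL/L, as in Figure 10e.

# Rough sensitivity of the optimized design (psi = 20 um, gamma = 10 um):
sens = lam0 * dLL_per_newton(20, 10)        # wavelength shift per newton
print(f"estimated sensitivity ~ {sens*1e9:.0f} nm/N (simulation: 20.46 nm/N)")
```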
Figure 11c shows the temperature sensing properties of this sensor in the range of 0~600 °C, based on a finite-element temperature field simulation. From the inset in Figure 11c, a red shift is observed as the temperature increases. It turns out that the temperature sensitivity of this sensor is only 0.001 nm/°C. Figure 11d shows the free spectral range of the following structures: L = 30 µm, φ2 = 40 µm; L = 40 µm, φ2 = 50 µm; L = 50 µm, φ2 = 60 µm; L = 60 µm, φ2 = 70 µm; L = 70 µm, φ2 = 80 µm; L = 80 µm, φ2 = 90 µm; L = 90 µm, φ2 = 100 µm. From the trend of the curves, we can see that the free spectral range decreases with the increase of the curvature radius φ2. When the cavity length is L = 40 µm, ψ = 20 µm and φ2 = 50 µm, the free spectral range is 28 nm, which is very appropriate for the 1520~1570 nm waveband.

Conclusions

An hourglass optical microcavity sensor structure is put forward in this paper. The sensor structure was analyzed by theory and numerical simulation, and the theoretical calculations are consistent with the simulation results. The experimental results prove that the extinction ratio of an air microcavity Fabry-Perot interferometer with a cavity length of 60 µm and a concave reflector radius of 60 µm is higher than that of a rectangular Fabry-Perot interferometer with a microcavity length of 60 µm. Compared with the common fiber Bragg grating strain sensor, the hourglass microcavity strain sensor has a compact size, higher sensitivity, and temperature independence. The optimized structure obtained by simulation in this paper has an in-fiber microcavity length of 40 µm, a microcavity wall of 10 µm, a microcavity diameter of 20 µm and a curvature radius of reflective surface II of 50 µm. A good linearity of 99.947%, a resolution of 1 mN and a good axial-pull sensitivity of 20.46 nm/N are achieved in the range of 0-1 N axial tension. The maximum interference intensity of the reflection spectrum is above 0.08, and the interference contrast of the reflection spectrum is greater than 0.97. The results show that the proposed sensor structure has a small volume, good mechanical strength, a good-quality spectrum and high sensitivity.
2017-07-25T21:55:52.309Z
2017-06-01T00:00:00.000
{ "year": 2017, "sha1": "ab12b0b505826e5252e1eaf846424b797ecab22c", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/17/6/1282/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ab12b0b505826e5252e1eaf846424b797ecab22c", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science", "Computer Science", "Medicine" ] }
233208686
pes2o/s2orc
v3-fos-license
Appendectomy and Non-Typhoidal Salmonella Infection: A Population-Based Matched Cohort Study The potential association between appendectomy and non-typhoidal Salmonella (NTS) infection has not been elucidated. We hypothesized that appendectomy may be associated with gut vulnerability to NTS. The data were retrospectively collected from the Taiwan National Health Insurance Research Database to describe the incidence rates of NTS infection requiring hospital admission among patients with and without an appendectomy. A total of 208,585 individuals aged ≥18 years with an appendectomy were enrolled from January 2000 to December 2012, and compared with a control group of 208,585 individuals who had never received an appendectomy matched by propensity score (1:1) by index year, age, sex, occupation, and comorbidities. An appendectomy was defined by the International Classification of Diseases, Ninth Revision, Clinical Modification Procedure Codes. The main outcome was patients who were hospitalized for NTS. Cox proportional hazards models were applied to estimate the hazard ratios (HRs) and 95% confidence intervals (CIs). Two sensitivity analyses were conducted for cross-validation. Of the 417,170 participants (215,221 (51.6%) male), 208,585 individuals (50.0%) had an appendectomy, and 112 individuals developed NTS infection requiring hospitalization. In the fully adjusted multivariable Cox proportional hazards regression model, the appendectomy group had an increased risk of NTS infection (adjusted HR (aHR), 1.61; 95% CI, 1.20–2.17). Females and individuals aged 18 to 30 years with a history of appendectomy had a statistically higher risk of NTS than the control group (aHR, 1.92; 95% CI, 1.26–2.93 and aHR, 2.67; 95% CI, 1.41–5.07). In this study, appendectomy was positively associated with subsequent hospitalization for NTS. The mechanism behind this association remains uncertain and needs further studies to clarify the interactions between appendectomy and NTS. Introduction Appendectomy is one of the most-commonly performed surgical procedures in the world. A recent meta-analysis of the incidence of appendectomy in Northern America was 100 per 100,000 person years [1], while it was 107.76 in Taiwan [2]. Studies have shown that the appendix may be an important component of human immune function [3,4]. Absence of an appendix has been mentioned in relation to recurrent infection with Clostridium difficile [5]. Recently, a study that enrolled patients who underwent incidental prophylactic appendectomy during 2004-2008 showed profound and long-term dysbiosis in these patients, sometimes for years [6]. Reduced microbial diversity may reflect the severity of the disease in critically ill patients and be associated with mortality [7]. Appendectomy might disrupt the immune function and studies have observed the relationship between antecedent appendix removal and the risk of pulmonary tuberculosis and sepsis [8,9]. Global non-typhoidal Salmonella (NTS) infection occurs in millions of people annually [10][11][12][13][14]. NTS may cause severe invasive bacteremia or disseminated disease [15,16]. The numbers of host risk factors predispose individuals to NTS [17,18]. These risk factors include the extremes of age [19], diabetes [19], malignancy [20], rheumatologic disease [19,21], use of immunomodulatory drugs [18], transplantation [22], and HIV infection [17,23]. 
About half a century ago, gastrectomy had been shown to be associated with an increased risk of subsequent NTS infection due to achlorhydria, rapid food emptying and altered intestinal flora [24]. Nowadays, it is widely accepted that when the bacterial population in the gastrointestinal tract is unstable, NTS is more likely to take advantage of the situation and invade the gastrointestinal tract [25]; on the other hand, appendectomy might cause long-term disturbance of the microbiome [6]. We hypothesized that patients who experienced removal of the appendix were susceptible to NTS. This population-based propensity score-matched (PSM) cohort study was conducted to examine the impact of appendectomy on subsequent NTS infections requiring hospital admission. Data Source Since 1995, more than 99% of the Taiwan population have been insured through a single-payer National Health Insurance program launched by the government. The medical claims contribute to the National Health Insurance Research Database (NHIRD). Previous studies demonstrated the high validity of data derived from the NHIRD [26]. This study used the hospitalization dataset, which records the disease diagnosis and procedure of therapy received during the admission. The diagnostic codes of the claims are recorded according to the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM). Standard Protocol Approvals, Registrations, and Patient Consents The Research Ethics Committee of China Medical University and Hospital in Taiwan (CMUH104-REC2-115(AR-4)) approved this study. As the data used consisted of the de-identified secondary data set released for research purposes and were analyzed anonymously, the need for informed consent was waived. Study Subjects An appendectomy was defined according to the ICD-9-CM procedure code 47. The appendectomy group consisted of 208,585 individuals ages 18 and over with a newly received appendectomy from 1 January 2000, through 31 December 2012; individuals who received an appendectomy from 1997 to 1999 were excluded. To minimize confounding from other alimentary surgical procedures, individuals who received a gastrectomy (ICD-9-CM procedure code 43.5-43.9), cholecystectomy (ICD-9-CM procedure code 51.2), or intestinal or large bowel resection (ICD-9-CM procedure code 45.6-45.9) before the index date or underwent multiple concurrent procedures at the time of appendectomy were excluded. Patients diagnosed with cancer (ICD-9-CM code 140-208) before the index date were also excluded. As proton pump inhibitors provide a favorable environment for NTS, patients with peptic ulcer disease (a proxy for proton pump inhibitors) before the index date were excluded. Individuals with hospitalized NTS within one month after the index date were also excluded to avoid confounding by the possible effect of perioperative antibiotics. The first date of hospitalization for appendectomy was the index date, and this date was assigned to the accordant matched controls (defined as the first healthcare use occurring in the index year) with the same criteria. Patients having a history of NTS are at risk for recurrent NTS, so those patients were also excluded. Finally, 208,585 patients with appendectomy without a medical history of NTS before the index date (traced back from 1997 through 1999) were included. 
To minimize surveillance bias, these exposed participants were compared with the 208,585 sex-, age-, index date-and comorbidity-matched individuals in the non-appendectomy group by propensity score matching (PSM) from the same inpatient dataset. We performed a rematch by greedy algorithm. For each study case with appendectomy, the corresponding comparison case without appendectomy was selected based on the closest propensity score. Propensity scores were calculated using a logistic regression model to calculate the probability of appendectomy assignment and included the following baseline variables: sex, age, occupation, and year of index date. The comorbidities analyzed in the study included hypertension (ICD-9-CM code 401-405), diabetes (ICD-9-CM code 250), hyperlipidemia (ICD-9-CM code 272), coronary artery disease (CAD) (ICD-9-CM code 410-414), cerebrovascular disease (CVD) (ICD-9-CM code 430-438), chronic kidney disease (CKD) (ICD-9-CM code 585), chronic obstructive pulmonary disease (COPD) (ICD-9-CM code 491, 492, 496), human immunodeficiency virus (HIV) (ICD-9-CM code 042), liver cirrhosis (ICD-9-CM code 571.5), and systemic lupus erythematosus (SLE) (ICD-9-CM code 710.0). Each case in the study and control groups were followed from individual index date until an event (hospitalization for NTS), withdrawal from the NHI program or December 2013. We adopted the PSM method to account for a similar distribution of baseline characteristics between both groups. Identification of Main Outcome The outcome was patients with NTS recorded in the hospitalization dataset, a subset of the NHIRD; the incidence of new-onset NTS depends upon the administrative ICD coding of 003.xx [11]. The physician responsible for the patient must make the diagnosis using the appropriate ICD code based on careful evaluation and examination, including analysis of stool and/or blood cultures. The coding system is considered validated as the government periodically audits claims for payment purposes. The fine for fraud is 100 times the amount of the fraudulent claims collected from the NHI Bureau. To control the possible bias due to perioperative antibiotics, individuals experienced NTS within one month of the index date were excluded. Negative Exposure Control Analysis Negative control has been used to detect unmeasured confounding. Diverticulitis was selected as an alternative exposure (ICD-9-CM code 562.x), and based on review of current pathophysiological mechanisms, it was not associated with subsequent NTS. Therefore, any association between diverticulitis and subsequent NTS may hint at the presence of unmeasured confounding factors. Statistical Analysis The first record of each participant hospitalized for NTS was used to calculate the risk of NTS. The density of NTS events per 10,000 person-years was calculated in both groups. We used PSM to control for sampling bias. The propensity score presented an individual's probability of developing NTS, and the score was determined by a multivariable logistic regression model. The difference of the baseline characteristics between the study and the comparison group were compared by the standardized mean difference (SMD). A SMD of 0.1 or less indicates a negligible difference between the two groups. We estimated the crude hazard ratio (HR) and 95% confidence interval (CI) using the univariable Cox proportional hazard model. Variables found to be statistically significant in the univariable model were further examined in the multivariable model. 
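For illustration only (this is not the authors' code; the data extract, column names and the use of scikit-learn are hypothetical), the 1:1 greedy nearest-neighbor match on a logistic-regression propensity score described above could be sketched as follows:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# df: one row per individual; 'appendectomy' is the exposure flag, and the
# remaining columns mirror the matching variables used in the paper
# (sex, age, occupation, index year, comorbidities). All names hypothetical.
covariates = ["sex", "age", "occupation", "index_year",
              "hypertension", "diabetes", "hyperlipidemia", "cad", "cvd",
              "ckd", "copd", "hiv", "cirrhosis", "sle"]

def greedy_psm(df: pd.DataFrame) -> pd.DataFrame:
    """Return a 1:1 propensity-score-matched cohort (greedy nearest neighbor).
    Illustrative only: a full-scale match would add a caliper and a faster search."""
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariates], df["appendectomy"])
    df = df.assign(ps=model.predict_proba(df[covariates])[:, 1])

    treated = df[df["appendectomy"] == 1]
    controls = df[df["appendectomy"] == 0]
    ctrl_ps = controls["ps"].to_numpy()
    ctrl_idx = controls.index.to_numpy()
    used, pairs = set(), []
    for i, row in treated.iterrows():
        # greedy: nearest not-yet-used control on the propensity score
        for j in np.argsort(np.abs(ctrl_ps - row["ps"])):
            if ctrl_idx[j] not in used:
                used.add(ctrl_idx[j])
                pairs.append((i, ctrl_idx[j]))
                break
    matched_ids = [idx for pair in pairs for idx in pair]
    return df.loc[matched_ids]

def smd(x_treated, x_control):
    """Standardized mean difference; values <= 0.1 indicate a negligible imbalance."""
    return (x_treated.mean() - x_control.mean()) / np.sqrt(
        (x_treated.var() + x_control.var()) / 2)
```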
The multivariable Cox proportional hazards model was used to estimate the adjusted HR (aHR), including hypertension, diabetes, CAD, CVD, CKD, COPD, HIV, liver cirrhosis, and SLE. The Kaplan−Meier method was adopted to obtain the cumulative incidence of NTS in the two groups, and the log-rank test was utilized to compare the differences between the two groups. All statistical tests were two-sided, and p values of 0.05 or less were considered statistically significant. In the main model, we excluded patients with PUD (a proxy to minimize the effect of proton pump inhibitor utilization) [11,27] before the index date and patients who had NTS within one month of the index date (a proxy to minimize the effect of perioperative antibiotic utilization). To validate the findings of the main model, several post hoc sensitivity analyses with different definitions of enrollment (models 2 to 5) were conducted. Because the effects of antibiotic treatment might last longer than 30 days, in model 2 we excluded patients in whom NTS infection occurred within 90 days of the index date. In model 3, because antibiotic utilization is likely a significant confounder and there is no prescription information in the hospitalization dataset, we excluded patients with a bacterial infection within 6 months before the index date (a proxy to minimize the effect of antibiotic utilization on the participants; the ICD-9 codes of bacterial infection are 001-005, 008.1-008.5, 020-027, 030-041, 076, 320, 420). In model 4, PUD and other comorbidities related to immunocompromise that confer an increased risk of NTS infection were excluded before the index date. In model 5, PUD and other comorbidities related to immunocompromise that confer an increased risk of NTS infection were not excluded but were adjusted for as covariates in the regression analysis.

Patient Characteristics

Of 417,170 participants (215,221 (51.6%) male) aged 18 years and older, 208,585 individuals (50%) had experienced appendectomy (107,823 male (52%)) and 112 individuals (0.05%) developed hospitalized NTS. The 208,585 individuals who did not have an appendectomy (107,398 men (51%)) were matched by age, sex, and comorbidities (Table 1). PSM resulted in 208,585 matched individuals in each group. In the study group and comparison group, the baseline characteristics were well balanced. The mean (SD) age was 38.8 (15.2) years in the study group and 40.8 (16.7) years in the control group. The median (SD) follow-up times were 7.29 (3.87) years in the study group and 6.73 (3.54) years in the control group. Individuals in the study group, compared with those in the control group, had similar proportions of occupation and comorbidities but a lower proportion of hypertension (13,982 individuals (7%) vs. 26,344 individuals (13%); SMD, 0.20), diabetes (7646 individuals (3.7%) vs. 12,891 individuals (6.2%); SMD, 0.12), and cerebrovascular disease (3189 individuals (1.5%) vs. 1492 individuals (3.1%); SMD, 0.11). The mean (SD) hospital stay for appendectomy was 5.65 (42.9). Table 2 shows the results of the univariable and multivariable Cox regression analyses, in which the incidence rate of NTS after appendectomy was 0.74 per 10,000 person-years and that in the comparison group was 0.55 per 10,000 person-years. There were 77 events of hospitalized NTS without appendectomy and 112 events of hospitalized NTS after undergoing appendectomy. The individuals who had a history of appendectomy were more likely to develop NTS (unadjusted HR, 1.35; 95% CI, 1.01-1.8).
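A minimal sketch of the survival-analysis workflow described in the Methods (multivariable Cox model, Kaplan-Meier curves and log-rank test), using the `lifelines` package and hypothetical column names rather than the authors' actual analysis code:

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

# matched: one row per matched individual with hypothetical columns
#   time_years - follow-up from index date to NTS, withdrawal, or end of 2013
#   nts_event  - 1 if hospitalized for NTS during follow-up, else 0
#   appendectomy, plus the comorbidity covariates adjusted for in the paper
covs = ["appendectomy", "hypertension", "diabetes", "cad", "cvd",
        "ckd", "copd", "hiv", "cirrhosis", "sle"]

def fit_cox(matched: pd.DataFrame) -> CoxPHFitter:
    cph = CoxPHFitter()
    cph.fit(matched[covs + ["time_years", "nts_event"]],
            duration_col="time_years", event_col="nts_event")
    return cph  # cph.print_summary() lists hazard ratios with 95% CIs

def km_and_logrank(matched: pd.DataFrame):
    grp1 = matched[matched["appendectomy"] == 1]
    grp0 = matched[matched["appendectomy"] == 0]
    km1, km0 = KaplanMeierFitter(), KaplanMeierFitter()
    km1.fit(grp1["time_years"], grp1["nts_event"], label="appendectomy")
    km0.fit(grp0["time_years"], grp0["nts_event"], label="no appendectomy")
    test = logrank_test(grp1["time_years"], grp0["time_years"],
                        event_observed_A=grp1["nts_event"],
                        event_observed_B=grp0["nts_event"])
    return km1, km0, test.p_value
```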
The multivariable Cox regression analysis showed a positive association between appendectomy and new-onset hospitalized NTS. After adjusting for demographics, occupation, and comorbidities (except hyperlipidemia) at baseline, individuals with appendectomy had a 61% higher risk of developing hospitalized NTS than subjects without appendectomy, with an adjusted HR of 1.61 (95% CI, 1.20-2.17). Table 3 provides five models to examine the stability of the HR of hospitalized NTS infection under different definitions of appendectomy exposure and of the main outcome. In the subset of the NHIRD analyzed in this study (the hospitalization dataset), there was no prescription information to identify the length of antibiotic treatment. We developed models 1, 2, and 3 to minimize the effect of antibiotic utilization. The wash-out period in model 2 was up to 90 days. The aHRs were 1.61 (95% CI 1.20 to 2.17), 1.58 (95% CI 1.17 to 2.13) and 1.61 (95% CI 1.20 to 2.16) in models 1, 2 and 3, respectively. Furthermore, in model 4, we excluded immunocompromised cases potentially prone to NTS infection, and the aHR of NTS infection for appendectomy exposure was 1.71 (95% CI 1.26 to 2.33). In model 5, PUD was included in the regression analysis and the aHR was 1.24 (95% CI 1.02 to 1.52).

In the age subgroup analysis (Table 4), the increased risk was confined to the youngest group (aged 18 to 30 years; aHR, 2.67; 95% CI, 1.41-5.07), while the estimates for the older age groups were not significantly increased (e.g., aHR, 0.34; 95% CI, 0.07-1.63). The interaction for the age subgroup was not significant (p value for interaction = 0.10). In the sex subgroup analysis, females with a history of appendectomy had an increased risk of NTS infection compared with females without appendectomy (adjusted HR, 1.92; 95% CI, 1.26-2.93; p < 0.01), while there was no significant association between appendectomy and risk of NTS infection for male patients. However, the p value for interaction was not significant (0.82). In the occupation subgroup analysis, compared to matched participants without appendectomy, individuals with a white-collar occupation had a significantly increased risk of subsequent hospitalized NTS (adjusted HR, 1.76, 95% CI, 1.17-2.65; p < 0.01). In the comorbidity subgroup analysis, in general, compared with matched patients without appendectomy, the risk of hospitalized NTS infection became nonsignificant whenever any one of the comorbidities was present. Appendectomy appeared to have a stronger association in the relatively healthy participants of the study (e.g., among all participants without hypertension, subjects in the appendectomy group were at higher risk of NTS infection in comparison to participants without appendectomy: aHR, 1.77; 95% CI, 1.26-2.47; among all participants without diabetes: aHR, 1.72; 95% CI, 1.25-2.37; and among all participants without SLE: aHR, 1.67; 95% CI, 1.24-2.26). Table 5 displays our analysis stratified by the follow-up years. In the first six months, the relative risk of hospitalized NTS compared with the subjects without appendectomy was 1.83 (95% CI, 0.74-4.53). During the follow-up of six months to one year after appendectomy, the relative risk of having hospitalized NTS was 0.68 (95% CI, 0.25-1.89). After more than one year of follow-up, the adjusted HR was 1.74 (95% CI, 1.25-2.43). The alternative exposure (diverticulitis) showed no significant association between diverticulitis and subsequent hospitalizations for NTS (adjusted HR, 0.85; 95% CI, 0.18-3.95) (Table 6).
The cumulative incidence curve of NTS in the appendectomy cohort was significantly higher than that in the non-appendectomy group (log-rank test p-value = 0.04) (Figure 1).

Discussion

In this study, a prior appendectomy was associated with a 61% increase in the risk of developing hospitalized NTS. This is a novel finding. The link between previous appendectomy and subsequent NTS infection requiring hospitalization has never been discussed or confirmed before. Acute appendicitis occurs predominantly at 20 to 30 years of age with a male predominance [28]. Similarly, our study found that about sixty percent of patients receiving an appendectomy were aged <40 years, with a male predominance. Some studies have noted that there may be postoperative changes in the microbiome. A change in microbial composition was observed in patients who received cholecystectomy [29,30]. Gastrointestinal microbiota showed higher species diversity and richness after gastrectomy in patients with gastric cancer [31]. Some studies have further assessed the interaction between dietary intake, gastric bypass surgery, and the trend of microbial change [32]. The balance of the intestinal microbiota is critical to support resistance against colonization by exogenous microorganisms. NTS was found to be competitive against the microbiome during inflammation in the gut, an advantage that subsided when the inflammation ceased [33]. Butyrate as a feed additive has been widely used to improve the intestinal health of poultry and reduce the proliferation of Salmonella [34]. Appendectomy was reported to be significantly associated with low levels of butyrate-producing bacteria [35]. Furthermore, in one recent study, the authors found that patients who underwent prophylactic appendectomy had lower abundance and diversity of normal gastrointestinal tract species over the long term [6]. Our findings are in line with this, in that the relative risk of NTS infection rises to statistical significance only after one year. A previous study demonstrated that risk factors for NTS infection include aging and immunocompromise [17], which corresponds with our findings in Table 2 (e.g., diabetes, COPD, liver cirrhosis, SLE, HIV). The underlying mechanism by which appendectomy is associated with the risk of developing NTS infection remains unclear. First, the appendix contains large amounts of gut-associated lymphoid tissue, which is thought to be involved in immune function. Peyer's patches, and the appendix, are sites of antigen sampling and induction in the mucosal immune system [36]. Therefore, an appendectomy might change the immune system. Secondly, the appendix can provide a suitable environment for the normal intestinal flora through biofilm formation [37,38].
As a result, an appendectomy may disrupt the gut microbiota configurations subsequently supporting NTS development [39,40]. The post hoc stratified analysis showed that compared with matched non-appendectomy controls, patients who received an appendectomy were associated with an increased risk of NTS especially in the subgroup of females and the subgroup for individuals aged 18 to 30 years. A recent meta-analysis of the global burden of invasive NTS disease did not find a link between sex and the incidence of invasive NTS disease [41]. It is intriguing however that this might not hold true in the context of prior appendectomy. Some animal and human studies have shown that disease patterns and gut microbiota differ by sex [42,43]. We speculate this novel result might be multifactorial, including environmental exposure (females are the main food handlers). However, further studies are needed to examine such discrepancies. Since advanced age is an independent risk factor for NTS infection, we specifically examined the interaction of age between appendectomy and outcome of interest in this study. In the age-subgroup analysis, compared with non-appendectomy controls, the population who received an appendectomy was at risk of new-onset NTS infection at the age of 18-30. The lack of association between appendectomy and NTS infection in the patients >50 years hold true in the elderly patients shown in Table 4. New-onset post-appendectomy-associated NTS infection was higher in patients without underlying diseases. It may be possible to avoid hospitalized NTS in post-appendectomy patients in these subgroups. Previous literature had described the advantages of using NHIRD in research [44]. These included enormous samples, one single ethnic population, and long-term comprehensive follow-up. We attempted to control the measurable covariates in both groups through PSM. In this study, we have examined and shown that diabetes, COPD, liver cirrhosis, SLE, and HIV infection are highly associated with NTS infection, and this is a kind of positive control analysis indicating the fitness of our models. Some limitations in this study should be addressed. First, the diagnoses of NTS infection were based on administrative ICD-9-CM codes rather than a bacterial culture. The Bureau of NHI had a regular auditing mechanism. Quarterly expert reviews on random samples of inpatient claims data with a sampling rate of 1 in 10 were performed by the Bureau of NHI to ensure the accuracy. Misclassification bias may have occurred and some of the subgroup analyses where very few events were included may not be relevant. Second, the NHI program began in 1995; medical utilization before 1995 could not be traced. Therefore, the possibility that patients selected in the comparison cohort had undergone surgery before 1995 cannot be completely excluded. However, such a sampling bias would, on the contrary, underestimate the risk of the primary outcome [45]. Third, NHIRD does not provide lifestyle information, such as tobacco use, physical activity, body mass index, diet, and exercise. We have carefully used diabetes, hypertension, and hyperlipidemia as a proxy of metabolic status and COPD for tobacco use. Fourth, there is no detailed information about the route of NTS infection, its specific serotype, and level of disease severity in NHIRD and this is an inherent major limitation. In this study, we recruited patients from the subgroup of hospitalized NHIRD as a proxy for alluding to severe NTS infection. 
Despite meticulous statistical analyses for possible confounding factor adjustment, bias may have occurred. We have applied a number of sensitivity analyses to control the measurable confounders and negative exposure controls to examine the unmeasured confounding. These observations suggest that the presence of confounding factors is less likely when assessed from this perspective. Finally, microbial dysbiosis may be a key intermediate between appendectomy and subsequent NTS infection, but that has not been established in the current study for the lack of detailed information regarding the interactions between appendectomy, change of microbiome metabolites (short-chain fatty acids, such as acetate, propionate, and butyrate), and the information of antibiotic use. Conclusions We conclude that Taiwanese residents with a history of appendectomy were associated with a risk of hospitalization for NTS. The risk was significant in women, and individuals aged 18-30 years. A small number of NTS infection diagnoses occurred in the study, thus limiting the conclusions somewhat. Clinicians are advised to implement prudently the post-operation education for patients to get rid of possible NTS contaminated food in the endemic area. It is of note that since this observation study was performed in one, relatively small country, if similar studies were to be done in the future in other countries of non-Asian origin, the results may be exactly the opposite.
2021-04-12T17:22:41.437Z
2021-04-01T00:00:00.000
{ "year": 2021, "sha1": "d904d8af887b391d7952d48bb0d985799b8202c3", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2077-0383/10/7/1466/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d904d8af887b391d7952d48bb0d985799b8202c3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119087494
pes2o/s2orc
v3-fos-license
A Check of a D=4 Field-Theoretical Calculation Using the High-Temperature Expansion for Dyson's Hierarchical Model We calculate the high-temperature expansion of the 2-point function up to order 800 in beta. We show that estimations of the critical exponent gamma based on asymptotic analysis are not very accurate in presence of confluent logarithmic singularities. Using a direct comparison between the actual series and the series obtained from a parametrization of the form (beta_c -beta)^(-gamma) (Ln(beta_c -beta))^p +r), we show that the errors are minimized for gamma =0.9997 and p=0.3351, in very good agreement with field-theoretical calculations. We briefly discuss the related questions of triviality and hyperscaling I. INTRODUCTION The dimension four plays a doubly important role in physics. First, it is the dimension of space-time which is relevant for a relativistic description of a large class of phenomena, from electricity and magnetism to scattering processes at the highest experimentally accessible energies. Second, it is the upper critical dimension for scalar field theory. If one analytically continues the renormalization group equations [1] (usually derived within some approximation) to non-integer dimensions, it appears that when the dimension tends to four from below, the non-trivial fixed point merges with the Gaussian one. This justifies the ǫ-expansion. It is thus commonly accepted that in four dimensions, the critical exponents are the trivial ones (i.e. those obtained from mean field). Unfortunately, it often difficult to find clear evidence for or against trivial exponents, for instance, from high-temperature (HT) series [2,3] or a finite volume calculation [4]. The root of the problem is the existence of a marginal direction which makes the approach to the fixed point more intricated than in three dimensions. The corrections to the power laws can in principle be obtained from the Callan-Symanzik equations, provided we know the exact form of the various functions (beta, gamma, ... ) entering into them. Using the lowest order in perturbation theory, Brezin, Le Guillou, and Zinn-Justin [5] found that the trivial power divergences get multiplied by rational powers of Ln(β c −β). It is important to check this result with methods independent of perturbation theory. In particular, it is conceivable that there exist non-trivial fixed points which cannot be revealed by perturbation theory. The technical challenge which appears in any kind of calculation is to distinguish between a small change (with respect to the trivial value) in the critical exponent and a slowly varying (compared to the trivial singularity) multiplicative change. This difficulty appears clearly in the asymptotic analysis of the high-temperature expansion of the susceptibility, where the leading term of the extrapolated slope defined in Eq. (3.4) (γ − 1) can be small compared to corrections proportional to the inverse of the logarithm of the order, unless one can reach an astronomically large order. Another interesting feature of the field-theoretical method is the so-called hyperscaling relation among the power singularities of the 2-and 4-point (subtracted) Green's functions at zero momentum. In three dimensions, the violations of hyperscaling [6] are hard to resolve by high-temperature calculation. This is still a controversial [7] topic. In four dimensions, conflicting [2,3] conclusions were drawn from the high-temperature series. 
The confirmation of the field-theoretical results would require that an unbiased estimate of the main power singularity and the power of the logarithmic correction come close to their predicted values, with errors compatible with (small) higher-order corrections. We propose here to test the field-theoretical results using an expansion in the kinetic term (also called high-temperature expansion), in a model which is obviously non-trivial in three dimensions, but where calculations are easier than in nearest-neighbor lattice models. The hierarchical model [8] is a non-trivial approximation of models with short range interactions, which is well-studied [9,10], and for which we can calculate the high-temperature expansion [11] to a very large order. The recursion relation which summarizes the renormalization group transformation of this model is closely related to the approximate recursion formula discussed by Wilson [1]. The qualitative and quantitative aspects of this relationship are discussed in Ref. [12]. In recent publications [13,14], we reported results concerning the high-temperature expansion of Dyson's hierarchical model in three dimensions. We calculated the hightemperature expansion of the magnetic susceptibility up to order 800 with Ising and Landau-Ginzburg measures. This allowed us to obtain a value [14] of the critical exponent γ of 1.300 in D = 3, with estimated errors of order 0.002. This result is consistent with the results obtained with the ǫ-expansion [9,10]. We found clear evidence for oscillations in the quantity, called the extrapolated slope [15] (see section below), used to estimate the critical exponent γ. When using a log scale for the order in the high-temperature expansion, these oscillations become regularly spaced. Our interpretation of the data was consistent with the hypothesis that the eigenvalues of the linearized renormalization group transformation are real, but that the constants appearing in the conventional parametrization of the magnetic susceptibility should be replaced by functions of β c − β invariant under the rescaling of β c − β by λ 1 , the largest eigenvalue of the linearized renormalization group transformation. This possibility has been mentioned in the past by K. Wilson [1] and developed systematically by Niemeijer and van Leeuwen [16]. Our analysis provided good evidence that the oscillations appear with a universal frequency in good agreement with theoretical expectations, but with a measure-dependent phase and amplitude. Subsequently, more efficient methods of calculation, based on finite dimensional projections of the Fourier transform of the recursion formula, were developed. As explained in detail in Ref. [17], the effects of such truncations can be controlled with a precision which is better than exponential when the dimension of the truncated space increases. In this paper, we study the high-temperature expansion of Dyson's hierarchical model in dimension 4. For the sake of completeness, we briefly review the method of calculation in section II. The conventional methods [15,18] used to estimate the critical temperature and a critical exponent from a high-temperature series are reviewed in section III. We show that in the presence of logarithmic corrections to the scaling laws, the asymptotic behavior of the corrections is modified. The extrapolated ratio defined in Eq. (3.3) provides an estimate of the critical temperature with corrections of order m −1 × (Ln(m)) −2 , where m is the order in the high-temperature expansion. 
In the following, we continue to use the notation m with the same meaning. Using the expansion of the susceptibility up to order 800, we obtained a value of the critical temperature which agreed with the high-precision determination of Ref. [17] with errors of less than one part in 10,000. On the other hand, the extrapolated slope defined in Eq. (2) estimates the critical exponent minus one with corrections which are only suppressed by (Ln(m)) −1 . If this weak suppression is not recognized, one may conclude that the critical exponent γ takes a value larger than the trivial one. More generally, asymptotic analysis is not adequate to distinguish between a value of γ close to 1 and a correction to the scaling laws which is less singular than a power. In section IV, we analyze the high-temperature expansion of the susceptibility without relying on the asymptotic behavior of the coefficients. We use h(m) ≡ (r m β c − 1)m, a function which represents the difference between the ratio of successive coefficients r m and its asymptotic value β −1 c . The function h(m) can be calculated exactly using either the empirical series or the series corresponding to a given assumption on the analytical form of the susceptibility. Taking the sum over a large range of m of the square of the differences between these two values of (h(m)) −1 , one can get an error function which indicates how good the analytical assumption is. We found that the parametrization provides very good fits of the data for γ ≃ 1 and p ≃ 1 3 , which is the field-theoretical [5] result. In order to decide how accurate the agreement is, we have considered fixed values of γ in the vicinity of 1 and equally spaced by 10 −4 steps. For each of these values, we have determined the values of p and A 1 /A 0 which minimize the error function. This error function behaves like a paraboloid near its minimum at γ = 0.9997 and p = 0.3351, in good agreement with the field-theoretical calculation. The errors on this estimate are mostly systematic. To get more accurate results, one needs to replace the constant A 1 by a slowly varying function. Another quantity which can be studied using the high-temperature expansion is the dimensionless renormalized coupling constant [19], denoted λ 4 hereafter, obtained by multiplying the connected four-point function at zero momentum by the eighth (D + 4 in general) power of the renormalized mass. For D < 4, this quantity is designed to have a finite and non-zero limit when β → β c . In the case D = 4, we have checked with good accuracy [17] that λ 4 goes to zero like (Ln(β c − β)) −1 for the model studied here. The calculation of the HT coefficients of λ 4 involves the subtraction of the disconnected part and suffers the same type of numerical problems as the direct calculation of λ 4 , as discussed in Ref. [17]. For this reason, we were only able to extract a series of 30 coefficients. The analysis of this series is consistent with the fact that λ 4 goes to zero when β → β c (triviality), but it is not possible to distinguish a (Ln(β c − β)) −1 approach to zero from a (β c − β) 1/2 approach, which would be necessary to establish whether or not hyperscaling holds. This question has been settled in Ref. [17], and this section illustrates the inconclusiveness of results obtained from short series. 
In conclusion, we have shown that by using sufficiently long series and methods of analysis not relying on an asymptotic expansion, it is possible to obtain very good agreement between calculations based on field theory and those based on the high-temperature expansion in the upper critical dimension. We emphasize that the main interest of the high-temperature expansion is to allow us to probe global features of the renormalization group flows which cannot be approached using renormalized perturbation theory or an analysis of the linearized behavior near the fixed point. An example of such a global feature is the existence of log-periodic oscillations [13,14], which play an important role in D = 3, but have an almost negligible effect in D = 4, as shown in section III. Another example of a global feature could be the existence of a non-trivial fixed point. The good agreement found in section IV makes this possibility very implausible for the model studied here.

II. CALCULATIONS OF THE HT COEFFICIENTS

The calculation of the high-temperature expansion of the unsubtracted 2k-point functions of Dyson's hierarchical model can be performed iteratively using the basic recursion formula in its Fourier form [11]. This method has been discussed extensively in Refs. [13,14]. For the sake of being self-contained, we briefly explain the basic method of calculation. More details, justifications, and motivations can be found in Refs. [4,13,14]. The recursion formula for the rescaled Fourier transform R_n(k) of the local measure for blocks of 2^n sites involves an adjustable parameter c, which takes the value 2^(1−2/D) in order to approximate D-dimensional models. In the following, we will only consider the case D = 4, which means c = √2. The rescaling operation commutes with iterative integrations, and the rescaling factor s can be fixed at our convenience. In order to obtain stabilized expressions in the high-temperature phase, we will take s = √2 in the following. We fix the normalization constant C_n in such a way that R_n(0) = 1; R_n(k) then has a direct probabilistic interpretation. If we call M_n the total field Σ_x φ_x inside blocks of 2^n sites and <...>_n the average calculated without taking into account the interactions among these blocks, we can write R_n(k) in terms of the moments of M_n. The Fourier transform of the local measure obtained after n iterations thus generates the zero-momentum Green's functions calculated with 2^n sites. All the calculations done here use an initial Ising measure, which means that R_0(k) = cos(k). Since we are interested in the leading singularity, this choice should play no role [19] in the discussion. The high-temperature expansion of the zero-momentum Green's functions can be obtained from an expansion of Eq. (2.1) in powers of β. The most important sources of errors are the round-off errors. After 100 iterations, the relative errors on the mth coefficient [14] are of the order of m × 10^−15. With the choice s = √2, the coefficients reach a finite value in the infinite volume limit. Actual computations are made at large but finite volume (i.e., at finite n). The relative difference between the coefficients at finite and infinite n goes to zero [11] like (c/2)^n. For D = 4, the choice n = 100 means that (c/2)^n = 2^−50, which is smaller than the numerical errors. Such a calculation is in general time-consuming when one wants to calculate more than 100 coefficients.
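The scaling parameters quoted above can be checked with one-line arithmetic (nothing model-specific is assumed here beyond the formulas just stated):

```python
D = 4
c = 2 ** (1 - 2 / D)        # = sqrt(2) for D = 4
n = 100
print(c)                    # 1.4142..., i.e. sqrt(2)
print((c / 2) ** n)         # 2**-50 ~ 8.9e-16, the finite-volume correction,
                            # comparable to the m*1e-15 round-off level quoted above
```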
It is, however, possible to save time by using finite dimensional approximations [17] of degree l for the generating function: R_n(k) = 1 + a_{n,1} k^2 + a_{n,2} k^4 + ... + a_{n,l} k^{2l}, with l much smaller than the dimension m+1 required for an exact [11] calculation. After each iteration, non-zero coefficients of higher order (a_{n+1,l+1}, etc.) are obtained, but they are set to zero in the next iteration. The l-dependence of the high-temperature coefficients of the susceptibility is discussed in Ref. [17]. If b_m^(l) denotes the value of b_m in a truncated space of dimension l, we found the behavior given in Eq. (2.4), where s and i are, respectively, the slope and intercept of the corresponding fitted line, as shown in Fig. 1. The intercepts are approximately 2.3, while the slopes depend on m. Eq. (2.4) represents suppressions which are better than exponential. From this figure, we can check, for instance, that for m = 400 (which is the maximal value used in section IV), the extrapolated errors at l = 40 are significantly lower than the numerical errors. Using extrapolation in m, it was estimated in Ref. [17] that in the case D = 4, l = 38 was sufficient to calculate b_1000. In summary, the following calculations will be performed with l = 50 and n = 100. The above discussion shows that this choice guarantees that the systematic errors are smaller than the numerical errors.

III. THE LIMITATION OF THE ASYMPTOTIC ANALYSIS IN PRESENCE OF CONFLUENT LOGARITHMIC SINGULARITIES

In this section, we study the singularities of the susceptibility using its high-temperature expansion χ = Σ_m b_m β^m. We define r_m = b_m / b_{m−1}, the ratio of two successive coefficients. When D < 4, one expects [19] the behavior of Eq. (3.2), and it is convenient to introduce quantities [15] called the extrapolated ratio (R_m) and the extrapolated slope (S_m) in order to estimate β_c and γ. These quantities are defined by Eqs. (3.3) and (3.4) in terms of the normalized slope. When A_0 and A_1 are constant, one finds [15] that the 1/m corrections disappear (Eq. (3.6)). However, for the hierarchical model in D = 3, large oscillations were observed [13] in S_m, and it was recognized [13,14] that A_0 and A_1 should be considered as functions of β_c − β invariant under the rescaling of β_c − β by λ_1, the largest eigenvalue of the linearized renormalization group transformation. The asymptotic analysis (when m becomes large) of the extrapolated slope in this modified situation is given in section 3 of Ref. [14]. It was found that 1/m corrections with rather large coefficients reappeared. Nevertheless, it was possible to extract the critical exponent γ with estimated errors of 0.2 percent. The situation is very different in D = 4, as shown in Fig. 2. The oscillations are barely visible for low values of m, and not visible at all for larger m, where S_m appears to decay smoothly. If the parametrization of Eq. (3.2) and its corollary Eq. (3.6) applied, one might conclude that γ is close to 1.05. However, if we plot the inverse (S_m)^−1 versus Ln(m), we find the linear behavior shown in Fig. 3. This shows that S_m decays like 1/Ln(m), so Eq. (3.6) does not provide an adequate description of the situation. The deviation from the linear behavior shows an interesting fine structure, shown in Fig. 4. For m near 400 (Ln(m) near 6), one sees that the amplitude of the oscillation is almost four orders of magnitude smaller than (S_m)^−1 itself. For such values of m, the numerical errors become comparable with the oscillations. For larger values of m, the numerical errors become larger and wash out the oscillations.
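For orientation, the standard large-m behavior that underlies the ratio method, for a pure power-law singularity with a constant amplitude (i.e., without the confluent corrections that are the subject of this paper), is the following; this is textbook material rather than a result specific to the present model:

```latex
% For \chi(\beta) = A_0 (1-\beta/\beta_c)^{-\gamma} = \sum_m b_m \beta^m,
% the coefficients and their successive ratios behave for large m as
\[
  b_m \;\simeq\; \frac{A_0}{\Gamma(\gamma)}\, \beta_c^{-m}\, m^{\gamma-1},
  \qquad
  r_m \;=\; \frac{b_m}{b_{m-1}} \;\simeq\; \frac{1}{\beta_c}\Bigl(1+\frac{\gamma-1}{m}\Bigr),
\]
% so fitting the 1/m term of r_m (the "slope") estimates \gamma - 1,
% while removing it (the "extrapolated ratio") estimates 1/\beta_c.
```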
The numerical errors on ( S m ) −1 in D = 4 are of the same order of magnitude as what we would estimate in D = 3 from the error analysis of Ref. [14] . The main difference is that the oscillations have a much smaller amplitude in D = 4. In the following, we will treat the oscillations on the same footing as the numerical errors, which is justified for m sufficiently large. We will now revisit the asymptotic analysis of R m and S m in a more general case than Eq. (3.2) with A 0 and A 1 constant. Our main assumption will be that where G is such that This restriction includes the case where G(1 − β βc ) grows like a positive power of a logarithm when β goes to β c . We then proceed as in ref. [18] and explain the principle of the asymptotic expansion. We use the residue theorem in the complex β plane to get an integral representation of the coefficients. Next we treat the integral with the steepest descent method. Using an exponential parametrization for the integrand of the mth coefficient, one finds that the phase has a maximum for a value of y = β βc such that The basic principle of calculation is that the second term of the l. h. s. of this equation can be treated as a perturbation, for m sufficiently large. Neglecting this second term, we get Before going further, we introduce a parametrization of the ratio of successive coefficients: This definition is independent of any kind of expansion. From Eq. 3.11, we found the asymptotic estimate If we consider the case we obtain and ) . (3.16) From this, we can conclude that under the assumption of Eq. (3.14), asymptotic analysis justifies using R m as an estimator for 1 βc , with estimated errors on the order of 10 −4 . This quantity is displayed in Fig. 5. As expected, no oscillations are visible. The change between m = 200 and m = 800 is less than 10 −4 , which is consistent with 1 m(Ln(m)) 2 corrections. If we use R 800 as our best estimate, we obtain β c = 0.665548, which is in good agreement with our accurate calculation [17], where we found β c = 0.6654955715318593. The discrepancy has the same order of magnitude as the small variations noted above. On the other hand, for S m , the corrections to γ − 1 are not very small. For instance, for m = 800, (Ln(m)) −1 ≃ 0.15, and it seems implausible that one could establish that |γ − 1| < 10 −3 on the grounds of an expansion in this not-very small parameter. More generally, it takes exponentially large m for the "corrections" in (Ln(m)) −1 to become smaller than the "leading" γ − 1 when this quantity is small. Thus it seems desirable to use non-asymptotic methods, the subject of the next section. IV. A DIRECT ESTIMATION OF THE CRITICAL EXPONENTS In this section, we propose to use direct calculations of h(m), defined as h(m) = (r m β c − 1)m . This quantity can be calculated exactly under some assumption regarding the susceptibility, and calculated exactly from the empirical series. We emphasize that h(m) is defined from Eq. (3.12) and its calculation does not require any kind of expansion. However, we need to provide an estimate of β c . In the following, we will take the most accurate value [17] of β c quoted in the previous section rather than the approximate values obtained from R m . 
A simple assumption on the leading singularity of the susceptibility in D = 4 is given by the result of a field-theoretical calculation [5], Eq. (4.2). This lowest-order result would also be obtained for Dyson's model, because at this order the numerical factors (integrations over the angles), which are model-dependent, cancel. Given that $r_m$ is the ratio of two successive coefficients, it is independent of $A_0$ and it transforms as $r_m \to r_m s^{-1}$ under a rescaling $\beta \to s\beta$. Consequently, $r_m \beta_c$ is independent of the choice of $A_0$ and $\beta_c$. We have thus calculated h(m) from the expansion in x, about x = 0, of the assumed singular form divided by x (Eq. (4.3)). The variable x stands for $\beta/\beta_c$. The division by x does not change the leading singularity [3] when $x \to 1$, while providing a regular expansion around x = 0. Under the assumption of Eq. (4.2), we find from Eq. (3.13) that asymptotically h(m) tends to a small and possibly zero constant, plus a correction which decays like 1/Ln(m). It is thus natural to plot $(h(m))^{-1}$ versus Ln(m). Such a plot is provided in Fig. 6, where we compare with $(h(m))^{-1}$ calculated directly from the D = 4 HT series, using the definition of Eq. (4.1). The two (approximate) lines are separated by an almost constant gap. We tried to modify the assumption of Eq. (4.3) in such a way that the two lines coincide. The only satisfactory solution we found was the modified assumption of Eq. (4.4), where r has to be determined by an error-minimization procedure which we now proceed to explain.

For notational purposes, we call t(m) the "true" value of $(h(m))^{-1}$ obtained from the HT series, and a(m) the value of $(h(m))^{-1}$ corresponding to an assumed series such as the one obtained from Eq. (4.4). In practical calculations, we have used the instruction Series in Mathematica to calculate a(m). It should be noted that, for large orders, rational values of the exponents give better numerical results. In addition, if the denominator of this rational exponent gets too large (typically $10^7$ for a calculation up to order 400), one runs out of memory. This procedure is quite time-consuming when one goes to large order. Since such a calculation has to be repeated many times in the rest of this section, we have used the region 300 ≤ m ≤ 400 to evaluate the discrepancy between a(m) and t(m). As we can see from the discussion of section III, in this range the oscillations are already small and the numerical errors not yet too large (see Fig. 4). We have thus determined the parameter r in Eq. (4.4) by minimizing the discrepancy E of Eq. (4.5). The values of E for values of r separated by 0.001 are shown in Fig. 7. The curve can be fitted very well by a parabola. The minimum of this parabola is then determined analytically from the three values defining the fitting parabola. This allows us to find the value of r with a precision of $10^{-6}$, three orders of magnitude smaller than the original resolution. The value of r minimizing E found with this procedure is -0.435622, corresponding to a value of E of order $10^{-4}$. Subsequently, we checked this answer by repeating the calculation of E with a resolution of $10^{-6}$ in r and found the same answer. For this value of r, t(m) and a(m) cannot be distinguished in a graph like Fig. 6. The difference between a(m) and t(m) is shown in Fig. 8. In the region where the fit was performed (300 ≤ m ≤ 400), the differences are 4 orders of magnitude smaller than the values themselves. We conclude from this analysis that Eq. (4.4) is a very good guess concerning the leading and subleading singularities of the susceptibility.
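The analytic determination of the minimum from three points of the parabolic fit can be sketched in a few lines; the function E below is an invented stand-in with a minimum near the quoted value of r, not the actual discrepancy between a(m) and t(m):

    # Vertex of the parabola through (x1,y1), (x2,y2), (x3,y3):
    # fit y = a*x^2 + b*x + c and return the stationary point x* = -b/(2a).
    def parabola_vertex(x1, y1, x2, y2, x3, y3):
        d = (x1 - x2) * (x1 - x3) * (x2 - x3)
        a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / d
        b = (x3**2 * (y1 - y2) + x2**2 * (y3 - y1) + x1**2 * (y2 - y3)) / d
        return -b / (2.0 * a)

    # Invented stand-in for E(r), smooth with a minimum near r = -0.435622.
    E = lambda r: 3.0 * (r + 0.435622)**2 + 1e-4

    # Three samples at the original resolution of 0.001 bracket the minimum;
    # the analytic vertex then recovers it to much higher precision.
    r0 = -0.436
    pts = [(r, E(r)) for r in (r0, r0 + 0.001, r0 + 0.002)]
    print(parabola_vertex(*pts[0], *pts[1], *pts[2]))   # ~ -0.435622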
However, we would like to see if it remains the best guess when the exponents are allowed to change. In other words, we would like to see if different values of the exponents could also be acceptable from the point of view of the high-temperature expansion. We have thus considered the more general assumption of Eq. (4.6) and studied E as a function of $\gamma$, p, and r. Near a minimum, E behaves generically as a three-dimensional paraboloid. In this region, one can "eliminate" r by fixing its value in such a way that E is minimized with $\gamma$ and p kept constant. The variable r is thus replaced by a linear combination of $\gamma$ and p plus a constant, and we can then work with a two-dimensional paraboloid. A section of this paraboloid defined by the condition $\gamma$ = 1 is shown in Fig. 9. In a second step, one can similarly eliminate p with $\gamma$ fixed, by requiring that it take the $\gamma$-dependent value that minimizes E. In the case $\gamma$ = 1, illustrated in Fig. 9, this value of p is 0.32775, not far from the expected [5] value 1/3. This shows, incidentally, that a biased estimate of p is in good agreement with the field-theoretical result. Taking values of $\gamma$ separated by $10^{-4}$, we have similarly calculated the value of p given by the minimization condition. The results are shown in Fig. 10. The linear behavior was expected: since near the minimum E is a quadratic form, the minimization condition is linear. Using this linear relation to eliminate p, E($\gamma$) becomes a parabola. The minimum value taken by this function is then the minimum of the initial function E($\gamma$, p, r). This function is shown in Fig. 11. E is minimized for $\gamma$ = 0.9997, which according to Fig. 10 corresponds to a value of p of 0.3351. In practical calculations, it is convenient to replace parabolic fits by successive applications of Newton's method. This method has an adjustable resolution and allows one to start in regions away from the minimum, where the parabolic behavior does not necessarily hold.

It is difficult to estimate the errors on our result. Since the parabolas shown above are reasonably smooth, it seems unlikely that the numerical errors or the oscillations, which should have about the same size, play any significant role. Most likely, the main source of error is that r has been treated as a constant. If instead we allow r to be a slowly varying function of $\beta$, we expect, in a model-independent way, that these slow variations in $\beta$ will induce slow variations in m of the quantity $(h(m))^{-1}$. In the interval of m considered for the calculation of E, the slow variations can be approximated by polynomials. In order to get an idea of how low E could become under such circumstances, we have fitted the differences between t(m) and a(m) displayed in Fig. 8 and calculated the value of E obtained after subtracting these fits from the original differences. For a linear fit, we obtained E = 6 × 10^{-8}, and for a quadratic fit E = 8 × 10^{-10}. This shows that by keeping $\gamma$ = 1 and p = 1/3 and allowing r to be a slowly changing function, written in terms of parameters which are adjusted to minimize E, we can obtain values of E comparable to those obtained by keeping r constant and allowing $\gamma$, p and r to be adjusted to minimize E. For definiteness, with $\gamma$ = 1 and p and r varied to minimize E, we obtain E = 6 × 10^{-9}. Varying $\gamma$, p and r, we obtain E = 4 × 10^{-10}. In conclusion, a more precise estimation of the critical exponents would also require more information regarding the $\beta$-dependence of the subleading singularities.
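A minimal sketch of the Newton alternative mentioned above: apply successive Newton steps to the condition E'(r) = 0, with derivatives taken by finite differences. The function E is again an invented stand-in, here with a quartic term so that the iteration is non-trivial:

    # Newton's method for a one-dimensional minimization: solve E'(r) = 0
    # iteratively, with derivatives estimated by central differences.
    def newton_minimize(E, r, h=1e-4, tol=1e-10, max_iter=50):
        for _ in range(max_iter):
            d1 = (E(r + h) - E(r - h)) / (2 * h)            # ~ E'(r)
            d2 = (E(r + h) - 2 * E(r) + E(r - h)) / h**2    # ~ E''(r)
            step = d1 / d2
            r -= step
            if abs(step) < tol:
                break
        return r

    # Invented stand-in for E(r) with its minimum at r = -0.435622.
    E = lambda r: 3.0 * (r + 0.435622)**2 + 0.5 * (r + 0.435622)**4 + 1e-4
    print(newton_minimize(E, -0.3))   # converges to ~ -0.435622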
V. TRIVIALITY AND HYPERSCALING

Another quantity which can be studied using the high-temperature expansion is the dimensionless renormalized coupling constant [19] defined in Eq. (5.1), where $G_4^c$ is the zero-momentum connected four-point Green's function and $m_R$ the renormalized mass. For D < 4, this quantity is designed to have a finite limit when $\beta \to \beta_c$. In the case D = 3, we have checked [17] by a direct calculation that $\lambda_4$ reaches the value 1.92786 when $\beta \to \beta_c$. In the case D = 4, we have checked with good accuracy that, in the same limit, $\lambda_4$ goes to zero like $(\mathrm{Ln}(\beta_c - \beta))^{-1}$. Thus we have direct evidence that in these two cases the power singularities cancel in Eq. (5.1), in other words, that hyperscaling holds. Bearing in mind that there is no wave-function renormalization ($\eta$ = 0) in the hierarchical model, we define $\lambda_4$, as in Ref. [17], as the limit when $n \to \infty$ of the corresponding finite-n quantity. The calculation of the HT coefficients of $\lambda_4$ involves the subtraction of the disconnected part, and it suffers from the same type of numerical problems as the direct calculation of $\lambda_4$, as discussed in Ref. [17]. For this reason, we were only able to extract a series of 30 coefficients. The quantity h(m) defined in Eq. (4.1) corresponding to this series is displayed in Fig. 12. The figure indicates that this quantity has damped oscillations. The average value of h(m) in the displayed interval is -1.4. From Eq. (3.13), this is consistent with the fact that $\lambda_4$ has a finite limit when $\beta \to \beta_c$, but it is not possible to distinguish a $(\mathrm{Ln}(\beta_c - \beta))^{-1}$ approach to zero from a $(\beta_c - \beta)^{1/2}$ approach. For comparison, we have displayed in Fig. 12 the functions h(m) corresponding to the series generated by $-x/\mathrm{Ln}(1-x)$ and $(1-x)^{1/2}$.

Another way of seeing that the series is too short to describe the details of the behavior near $\beta_c$ is to plot the truncated expansion of $\lambda_4$ up to order 30. This is done in Fig. 13. The HT expansion indicates correctly that $\lambda_4$ goes to zero when $\beta$ increases. However, the behavior near $\beta_c$ is not accurate. For comparison, Fig. 13 also shows the leading critical behavior estimated in Ref. [17], Eq. (5.4). The data interpolate nicely between the two types of behavior, but we see that there is no region in the figure where they overlap. The order-30 HT expansion gives accurate results for $\beta_c - \beta > 3 \times 10^{-2}$, while Eq. (5.4) becomes accurate when $\beta_c - \beta < 10^{-3}$. In summary, the truncated expansion makes clear that $\lambda_4$ goes to zero when $\beta \to \beta_c$. In other words, the theory is trivial. However, the series is too short to extract accurately the precise way it approaches zero, and one cannot decide from this information whether or not hyperscaling holds.
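The comparison behind Fig. 12 can be sketched numerically: generate the first 30 coefficients of the two trial functions $-x/\mathrm{Ln}(1-x)$ and $(1-x)^{1/2}$ and compute $h(m) = m(r_m - 1)$ for each. The coefficient machinery below is generic series manipulation, not the code used for the actual $\lambda_4$ series:

    # Compare h(m) = m*(r_m - 1) at order <= 30 for two trial singularities:
    # a logarithmic approach, -x/ln(1-x), and a power approach, (1-x)^(1/2).
    N = 30

    # Coefficients of u(x) = -ln(1-x)/x, then of 1/u via the Miller
    # recurrence for u**p with p = -1 (u_0 = 1).
    u = [1.0 / (k + 1) for k in range(N + 1)]
    p = -1.0
    f_log = [1.0] + [0.0] * N
    for n in range(1, N + 1):
        f_log[n] = sum(((p + 1) * k - n) * u[k] * f_log[n - k]
                       for k in range(1, n + 1)) / n

    # Coefficients of (1-x)^(1/2): binomial recurrence c_n = c_{n-1}*(n-3/2)/n.
    f_pow = [1.0] + [0.0] * N
    for n in range(1, N + 1):
        f_pow[n] = f_pow[n - 1] * (n - 1.5) / n

    # The two h(m) sequences are numerically close at these low orders,
    # which is why the short series cannot separate the two behaviors.
    for name, c in (("-x/Ln(1-x)", f_log), ("(1-x)^(1/2)", f_pow)):
        h = [(c[m] / c[m - 1] - 1.0) * m for m in range(2, N + 1)]
        print(name, [round(v, 3) for v in h[-5:]])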
VI. CONCLUSIONS

There have been questions [6] in the past regarding possible discrepancies between field-theoretical calculations based on the renormalization group approach and calculations based on the high-temperature expansion. Using a scalar model in the upper critical dimension, where all the conventional expansions can be compared with direct calculations, we claim that the field-theoretical result concerning the leading singularity of the two-point function at zero momentum, given in Eq. (4.2), can be reproduced very well by the high-temperature expansion. Using a parametrization of the subleading singularities depending on a single constant r, we obtained optimal agreement for the choice $\gamma$ = 0.9997 and p = 0.3351. With this choice, the error on $(h(m))^{-1}$, defined in section IV, is less than one part in a million for 300 ≤ m ≤ 400. The small discrepancies between our estimates of the critical exponents and the field-theoretical values $\gamma$ = 1 and p = 1/3 are not significant, because it is possible to show that small changes in the exponents, and allowing r to vary slowly, have comparable effects on the quality of the fit.

The present study shows that the use of asymptotic analysis, or the use of a short series, can be misleading. Given the length of the series available, asymptotic analysis may be useful for order-of-magnitude estimates, but not for an accurate determination of the exponents. There is still room for improvement. One could use calculations at fixed $\beta$ to study the corrections to the parametrization of Eq. (4.6). This procedure could be pursued up to the point where the main source of error would be the numerical errors on the coefficients. The use of the high-temperature expansion allows us to probe global features of the renormalization group flows which cannot be approached using renormalized perturbation theory or an analysis of the linearized behavior near the fixed point. In particular, our analysis makes implausible, for the model considered here, unconventional possibilities such as the existence, in the upper critical dimension, of a non-trivial fixed point characterized by non-trivial exponents.
Random regression test-day model for the analysis of dairy cattle production data in South Africa: creating the framework

E.F. Dzomba, K.A. Nephawe, A.N. Maiwashe, S.W.P. Cloete, M. Chimonyo, C.B. Banga, C.J.C. Muller and K. Dzama

Department of Animal Science, University of Stellenbosch, Private Bag X1, Matieland 7602, South Africa; Limpopo Department of Agriculture, 69 Biccard Street, Private Bag X9487, Polokwane 0700, South Africa; Agricultural Research Council, Animal Breeding and Genetics, Private Bag X2, Irene 0062, South Africa; Institute for Animal Production, Private Bag X1, Elsenburg 7607, South Africa; Discipline of Animal & Poultry Science, University of KwaZulu Natal, Private Bag X01, Scottsville, Pietermaritzburg 3209, South Africa

Introduction

Genetic evaluation of dairy sires and cows has evolved immensely over the years. From the initial stages, when simple dam-daughter comparisons were made, rapid advances in computer hardware and improvements in computing algorithms have made it possible to implement modern methods of analysis. Several countries are now using best linear unbiased prediction (BLUP) under animal models for national genetic evaluations, based either on lactation yields or on test-day yields.

In South Africa, estimates of breeding values (BV) for production traits and somatic cell scores of dairy cattle are based on test-day (TD) yields of milk and protein and fat, as well as somatic cell count. Within the National Dairy Animal Improvement Scheme of South Africa, daily milk yields and protein and fat percentages are recorded every five weeks. These recordings are subsequently used directly in genetic evaluations with a fixed regression test-day model (Mostert et al., 2006b), instead of yields aggregated over 305 days of lactation. A test-day model (TDM) is a statistical procedure which considers all genetic and environmental effects directly on a test-day basis (Swalve, 1995). Test-day production data of dairy cows provide an example of repeated measures or longitudinal data, the essential feature of which is the presence of correlations between tests on the same animal. It is important to explore the potential of any statistical and computing technique which allows a direct and more efficient utilization of all available test-day records for the genetic evaluation of dairy cattle.

Use of the TDM approach allows a more detailed statistical model to be developed, which accounts for environmental variation specific to individual TD yields and for genetic effects associated with individual animals. It offers the opportunity to account directly for short-term environmental factors specific to individual yields, such as gestation period. The TDM also overcomes the need to predict 305-day yields or to project incomplete lactations. Furthermore, the TDM allows for a precise definition of the contemporary group (CG). With the TD approach, a definition of CG that includes test-month improves the properties of the statistical model. Solutions for such CG effects can be utilized to improve herd management.
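As a minimal sketch of the contemporary-group idea, test-day records can be grouped by herd and test date to form herd-test-date classes; the record layout and field values below are invented for illustration:

    # Forming herd-test-date (HTD) contemporary groups from test-day records.
    # Records are (cow, herd, test_date, yield) tuples; values are invented.
    from collections import defaultdict

    records = [
        ("cow1", "herdA", "2009-03-02", 22.1),
        ("cow2", "herdA", "2009-03-02", 25.4),
        ("cow3", "herdA", "2009-04-06", 19.8),
        ("cow4", "herdB", "2009-03-02", 27.0),
    ]

    htd = defaultdict(list)
    for cow, herd, date, y in records:
        htd[(herd, date)].append(y)   # cows tested the same day in a herd

    for group, yields in htd.items():
        print(group, "n =", len(yields),
              "mean =", round(sum(yields) / len(yields), 1))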
Within the TDM approach, the genetic component of the lactation curve can be modelled by fitting regression coefficients for each animal, commonly referred to as random regression (RR) coefficients (Schaeffer & Dekkers, 1994). The additive genetic solutions can be extracted from the BV estimates for the RR coefficients (Jamrozik et al., 1997). It becomes possible to rank animals genetically for each TD yield by estimating a BV of each animal for each TD yield. The estimated BV is given as the product of the RR coefficients and the days in milk (DIM) dependent covariates. Monitoring of the management of individual herds, and of individual cows within a herd, is an added advantage, through the simple comparison between actual and expected production.

For the South African Holstein and Jersey cow populations, Mostert et al. (2004; 2006a) reported genetic correlations between TD milk yields of different lactations to differ from one. This study led to the implementation of a fixed regression TD model, but recommended the use of RR functions in the genetic evaluation of South African dairy cattle. A random regression TDM approach was first implemented in Canada (Schaeffer et al., 2000) in 1999, and several countries that are members of Interbull have since adopted various forms of the methodology, including Belgium, Germany, the Netherlands, Italy, Finland, Denmark and Sweden (Interbull, 2009). Interbull is an international non-profit organization responsible for the promotion, development and standardization of genetic evaluation of dairy cattle. There are currently 27 countries, including South Africa, participating in Interbull evaluations. South Africa participates in the international genetic evaluations of dairy cattle conducted by Interbull, of which it has been a member since 1999.

The purpose of this review is to describe the random regression methodology in dairy cattle genetic evaluation and to explore how a framework for its adoption for TD data analysis in South Africa can be built.

Methods for genetic evaluation of TD records

Interest has grown in changing the data used in genetic evaluation of dairy cattle from combined 305-day mature equivalent lactation yields to individual TD yields. The 305-day mature equivalent adjusts the current production record of a cow to what she would be producing as a mature cow, after three or more years in lactation. The current method for genetic evaluation uses several daily measurements, usually taken once a month (test-day), on an individual cow over the course of the lactation. The idea of using TD measurements in genetic evaluation has been a subject of research for a long time (Searle, 1961; Meyer et al., 1989; Stanton et al., 1992; Ptak & Schaeffer, 1993; Meyer & Hill, 1997; Wiggans & Goddard, 1997).

Data from the milk recording scheme are often analyzed by regarding TD records from a cow in single or multiple trait analyses, or as repeated measurements of the same trait along a lactation curve, potentially applying some correction for DIM or age at recording. Various methods have been used to analyze TD records, which represent longitudinal data (Swalve, 1995; 2000; Misztal et al., 2000; Schaeffer et al., 2000; Jensen, 2001). Most of these methods can be regarded as being derived from a model in which the traits have a patterned covariance matrix, but the methods vary in their assumptions about the structure of the covariance matrix (White et al., 1999).
Firstly, in single trait analysis with a repeatability model, constant genetic variance over DIM and a genetic correlation of one between TD records taken at different DIM are assumed (Ptak & Schaeffer, 1993).

Secondly, multivariate analysis treats each TD record at different DIM as a different trait (Meyer et al., 1989; Pander et al., 1992). Swalve (1995) observed that some authors arbitrarily divided the DIM range into intervals (early, mid and late lactation) that represent individual, but correlated, traits and treated the measurements of these different intervals as different traits. The approach has major drawbacks, including inadequate use of the information provided at test-days, and hence fails to account for constraints imposed on the covariance structure.

Thirdly, lactation curves have been fitted at a phenotypic level and the parameters of the curve have subsequently been analyzed as new traits (Stanton et al., 1992). However, this approach fails to account fully for the systematic environmental effects (VanRaden, 1997).

Fourthly, as a way of improving the current model for dairy cattle data analysis, the random regression approach has been proposed for South Africa (Mostert, 2007) and is already being applied in the dairy cattle genetic evaluations of some countries (Hammami et al., 2008).

The random regression approach

The additive genetic values (estimated breeding values, EBV) of animals are usually obtained from mixed model analyses. For the trait under consideration, a linear regression of observations on indicator variables is performed. Animals' additive genetic effects are fitted as random effects. Because functions of time, such as DIM, can be readily modelled in the mixed model framework (Henderson, 1982), trajectories (e.g. the lactation curve) can be described. The covariables are usually nonlinear functions, such as polynomials or splines, relating time to the traits, e.g. milk, fat or protein yield. Fitting sets of RR coefficients for each individual random factor (e.g. additive genetic and permanent environmental effects) produces the estimates of the corresponding trajectories. This, in short, describes the RR model.

For the evaluation of TD records, the RR test-day animal model is considered the most appealing statistically. It is often used to fit the RR coefficients in a linear model to obtain genetic parameters and breeding values. There are two approaches to the RR model (RRM): RR on lactation curve functions (e.g. Wilmink's function) or RR on polynomials or splines. The number of parameters that can be fitted to describe a lactation curve is flexible in the RR approach where a lactation curve function is used. Jamrozik & Schaeffer (2002) found that, for the same number of parameters, the TDM with Legendre polynomials outperformed the TDM with a lactation curve function in terms of goodness-of-fit statistics.

History of random regression models in dairy cattle genetic evaluation

The general concept of using RR for the analysis of covariance in an animal breeding context was suggested by Henderson (1982). Kirkpatrick & Heckman (1989) and Kirkpatrick et al.
(1990; 1994) introduced the infinite-dimensional model for traits measured repeatedly per individual, and suggested modelling the genetic covariances of trajectories through covariance functions. However, the initial applications of the RRM were in the genetic evaluation of dairy cows, using records from individual test-days to model the lactation curve (Schaeffer & Dekkers, 1994; Jamrozik et al., 1997). Since then, the RRM has become a standard for analyses of repeatedly measured records from animal breeding schemes. Other areas of animal breeding that have already utilized the RRM include conformation traits (Uribe et al., 2000), body condition scores (Berry et al., 2003a), feed intake (Veerkamp & Thompson, 1999), growth in pigs (Lorenzo Bermejo, 2003), sheep (Lewis & Brotherstone, 2002) and beef cattle (Nephawe, 2004; Meyer, 2005a), and litter size in pigs (Lukovic et al., 2004). The RRM has also been used for the analysis of survival data (Veerkamp et al., 2001) and for assessing genotype by environment interactions, using a continuum of an environmental parameter as covariance functions in reaction norms (Strandberg et al., 2000; Calus & Veerkamp, 2003; Berry et al., 2003b; Shariati et al., 2007).

Differences between random regression and fixed regression test-day models

The fixed regression TDM in current use for dairy cattle genetic evaluation in South Africa uses an animal model with test-day records that includes Wilmink's (1987) covariables to describe the general shape of the lactation curve within fixed subclasses for age and season of calving (Mostert et al., 2006b). Contemporary groups comprise cows tested on the same day within a herd (herd-test date, HTD), which reduces residual variation substantially more than herd-year-season of calving groups would (Ptak & Schaeffer, 1993). Further, the model assumes a standard fixed lactation curve for all cows in the same age-season subclass, and the estimated additive genetic effects of animals reflect differences in the height of these curves. Thus, differences in lactation persistency are ignored. Correlations between yields at different days in milk are assumed to be the same regardless of the time elapsing between test-day measures. The assumption that the variances are homogeneous throughout the lactation is difficult to justify. Studies on heterogeneity of variance have been conducted in South Africa. Specifically, it was discovered that proofs of older sires were much higher than those of younger bulls with progeny still active in the herds (Mostert et al., 2006a). As a result, the South African fixed regression test-day model incorporates a fixed calving year effect to account for this. However, failure to pre-adjust for heterogeneous variance in test-day models often inflates genetic variances, resulting in biased estimated breeding values, and lowers their accuracy (Strabel et al., 2006). This is likely due to a set of non-specified factors in the model equation (e.g. days open, pregnancy status, characteristics of the dry period, body condition at calving, etc.)
that make the temporary measurement errors larger and highly variable at the beginning and at the end of the lactation (Lopez-Romero et al., 2003). The reasons for pre-adjustment for heterogeneous variance due to DIM and parity in the South African fixed regression model are twofold: firstly, it is meant to correct the bias due to residual variances being higher at the beginning and end of lactation than in mid-lactation, and secondly, it corrects for first lactations having higher residual variances than second and third lactations (Mostert et al., 2006a).

A simplified scalar version of the fixed regression model would be:

y = HTD + Σ b_i x_i + a + p + e,

where HTD is the fixed herd test-day effect, a is the random additive genetic effect of the cow, p is the random permanent environmental effect associated with each cow, and e is the random residual (Swalve, 2000; Jensen, 2001). The lactation curve is modelled using the regression parameters b_i, and the x_i are the corresponding time (days in milk) covariates.

An extension of the fixed regression TDM to an RRM would be desirable in several ways. It would allow for the inclusion of random regression coefficients for the lactation curve of each cow (Henderson, 1982). The lactation curve for an individual cow could be viewed as two sets of regressions on DIM. Fixed regressions for all cows belonging to the same subclass of age-season of calving describe the general shape for that subclass, and the random regressions for a cow describe the deviations from the fixed regressions, which allow cows to have differently shaped lactation curves.

A random regression test-day model (RR-TDM) is an extension of the TDM with fixed regressions. The basic structure of the RRM is similar in most applications. The shape of the lactation curve is assumed to be influenced by random genetic and permanent environmental effects. As such, genetic and permanent environmental correlations between yields at different DIM can take values less than one. An added advantage is that the model can accommodate heterogeneous additive genetic and permanent environmental variances during lactation, the degree of which varies according to the regression functions chosen to model the trajectory of lactation. The covariates used in the regression part of the TDM are usually functions of the day in lactation on which the measurements were made.
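A small numerical sketch may make the fixed lactation-curve part of the model above concrete, using Wilmink's (1987) covariates (1, t, exp(-0.05t)), with t the days in milk. The decay rate 0.05 is the value conventionally fixed in advance; the regression coefficients are invented for illustration:

    # Expected test-day yield from the fixed Wilmink regression for one
    # age-season subclass. Coefficients b are hypothetical.
    import math

    b = (28.0, -0.035, -12.0)   # hypothetical b_0, b_1, b_2

    def wilmink(t, b):
        """Fixed lactation curve: b0 + b1*t + b2*exp(-0.05*t)."""
        return b[0] + b[1] * t + b[2] * math.exp(-0.05 * t)

    for t in (5, 35, 70, 150, 305):
        print(f"DIM {t:3d}: {wilmink(t, b):5.1f} kg")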
In simplified scalar form, the model is:

y = HTD + Σ b_i x_i + Σ_{j=0}^{m1} a_j x_j + Σ_{k=0}^{m2} p_k x_k + e,

where y is an observation on an animal belonging to a certain fixed factor grouping at a certain time; HTD, the herd-test date effect, is independent of the time scale of the observations; Σ b_i x_i is a linear or nonlinear function, or functions, that account for the phenotypic trajectory of the average observations across all animals (it accounts for different lactation curve shapes for groups of animals defined, for example, by year of birth, parity number, and age and season of calving within parities); a_j is the additive genetic effect corresponding to regression coefficient j, and the x_j are the corresponding time covariates, and similarly for the permanent environmental effects subscripted by k; m1 and m2 denote the orders of the regression functions; and e is a random residual effect with mean zero and with possibly different variances for each time or function of time (Swalve, 2000; Jensen, 2001). The different subscripts indicate that the covariates in the different parts of the model are not necessarily the same. Compared with the fixed regression TDM, this corresponds to using regressions to model the additive genetic and permanent environmental effects. In principle, the covariates x_i can be any covariates, but they are usually relatively simple functions of DIM, such as polynomials, orthogonal polynomials (e.g. Legendre polynomials), splines, or the parameters of the lactation functions proposed by Wood (1967), Ali & Schaeffer (1987) and Wilmink (1987).

Choice of basis functions

Theoretically, any function can be used as a basis function in an RRM (Swalve, 2000; Meyer, 2005b). Legendre polynomials are the most common, because the correlations between parameters are lower than with other functions (Kirkpatrick et al., 1990; 1994; Van der Werf, 1997). Orthogonal polynomials are able to model lactation curves for a range of covariance structures, but they also have undesirable properties (Misztal, 2006). Fit at the extremes of the trajectories may be poor, especially for high orders of fit (Meyer, 2005b), and there may be problems of convergence for large data sets. Several alternatives have been proposed, including fractional polynomials and linear and B-splines. Fractional polynomials use roots and logs and were advocated by Robert-Granié et al. (2002). Splines are curves constructed from piecewise lower-degree polynomials which are joined smoothly at selected points (knots). Splines are readily fitted within mixed model analyses (Verbyla et al., 1999; Ruppert et al., 2003). White et al. (1999) used cubic splines, while Torres & Quaas (2001) used B-splines with 10 knots in separate RR analyses of test-day records of dairy cows. Too many knots would increase model complexity, while too few knots would reduce the accuracy of estimates (Meyer, 2005b). It is important to compare RR models with South African data using lactation curve functions, orthogonal polynomials and splines.

Advantages of random regression models

Advantages of RR test-day models over other approaches to evaluating test-day records are now widely acknowledged (Bohmanova et al., 2008; Hammami et al., 2008):

1. This type of model provides a continuous treatment of observations over time and is able to incorporate heterogeneous variances and covariances among measures along time (including days that were not sampled), with a potentially reduced number of parameters compared with the multiple trait approach (Schaeffer & Dekkers, 1994; Lidauer et al., 2003).
2. Every record contributes information at the value of the control variable at which it is measured. Arbitrary or inappropriate corrections for differences in the control variable are therefore rendered unnecessary (Van der Werf, 1997).

3. With regard to the estimation of variance components, random regression models facilitate a parsimonious description of changing and potentially complex covariance structures, thereby utilizing the data more efficiently and generating breeding values of higher accuracy (Jamrozik & Schaeffer, 1997; Meyer, 1998).

4. Because the lactation curve is allowed to differ for each cow, this facilitates accounting for the variability in persistency and makes possible the prediction of evaluations for persistency, thereby providing additional information for selection (Jamrozik et al., 1998; Swalve & Gengler, 1999; Lin & Togashi, 2005).

5. The RRM also allows a cow to be evaluated on the basis of any number of TD records during lactation. Related to this, as only eight to 10 TD yields per cow per lactation may be collected, this could result in lower costs of recording (Schaeffer et al., 2000). However, there are issues of accuracy associated with this: EBVs based on one test tend to be of low accuracy, and a number of countries require a minimum of three test-day records per lactation for inclusion in the genetic evaluation.

6. The RRM for TD yields can account more precisely for environmental factors that could affect cows differently during lactation (Schaeffer & Dekkers, 1994).

7. Due to the emphasis on more yield information, an RRM results in top animals that are less related, and hence in reduced rates of inbreeding compared with lactation models (Mrode & Coffey, 2008).

While conceptually appealing, practical applications of random regression models in animal breeding have been plagued by problems associated with the large numbers of parameters to be estimated, poor polynomial approximation and therefore the necessity of analysing much larger sets of data, implausible estimates at the extremes of trajectories, and the associated high computational requirements (Swalve, 2000; Jensen, 2001; Schaeffer, 2004; Meyer, 2005b; Misztal, 2006).

Partitioning variance with the random regression model

The first estimates of variance components for test-day milk yields obtained by an RRM were published by Jamrozik & Schaeffer (1997). The RRM was used for modelling genetic effects only. Meyer & Hill (1997) and Meyer (1998) demonstrated the use of covariance functions to model additive genetic and permanent environmental effects in random regression TDMs. The covariance function describes the covariance structure of an infinite-dimensional character, such as test-day milk yields, as a function of time. The covariance function is equivalent to an RRM if the same functions are used (Meyer & Hill, 1997; Van der Werf et al., 1998). The equivalence of the RRM with the covariance function is useful when analyzing data observed at many time periods, because the number of regression coefficients determines the number of covariances to be estimated for each source of variation in an RRM. In a univariate RRM, k regression coefficients result in k(k+1)/2 covariance estimates. The covariance function is used to reduce the rank of the covariance matrix from n, the number of traits, to k, the number of functions, when starting from a multiple trait approach (Meyer & Fitzpatrick, 2005).
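The covariance-function equivalence and the k(k+1)/2 parameter count can be sketched in a few lines: with Legendre covariates φ(t) and a k × k coefficient covariance matrix K, the covariance between yields at days t and t' is G(t, t') = φ(t)' K φ(t'), and an animal's EBV at any day is the product of its coefficient solutions and the covariates, as noted earlier. The values of K and of the coefficient solutions below are invented for illustration:

    # Covariance function and EBV trajectory from random-regression
    # coefficients. K and a_hat are hypothetical (k = 3, order-2 Legendre).
    import numpy as np

    def phi(dim, dim_min=5, dim_max=305):
        """Legendre covariates P_0, P_1, P_2 on standardized DIM in [-1, 1]."""
        t = 2.0 * (dim - dim_min) / (dim_max - dim_min) - 1.0
        return np.array([1.0, t, 0.5 * (3.0 * t**2 - 1.0)])

    k = 3
    K = np.array([[8.0, 1.2, -0.5],
                  [1.2, 2.0,  0.3],
                  [-0.5, 0.3, 0.9]])   # hypothetical coefficient covariances

    print("distinct (co)variances:", k * (k + 1) // 2)   # -> 6
    for t1, t2 in ((50, 50), (50, 250), (250, 250)):
        print(f"G({t1},{t2}) = {phi(t1) @ K @ phi(t2):6.2f}")

    a_hat = np.array([11.2, -0.8, 0.3])   # hypothetical RR solutions
    for dim in (5, 150, 305):
        print(f"EBV at DIM {dim:3d}: {phi(dim) @ a_hat:6.2f}")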
Standard mixed-model-based variance component procedures (i.e. restricted maximum likelihood, REML, or Bayesian methods based on Markov chain Monte Carlo methodology, MCMC) can be used to estimate covariance functions directly from the data (Jensen, 2001). High computational demands limit the size of the datasets and the nature of the models that can be analyzed using REML, but algorithms for multivariate analyses via AI-REML are readily adapted to the estimation of covariances among random regression coefficients (Meyer & Kirkpatrick, 2005). Sorensen & Gianola (2002) noted that Bayesian estimation is now standard for quantitative genetic analyses. Particularly popular are schemes that sample from the fully conditional posterior distributions of the parameters of interest; these are computationally easy to implement. Jamrozik (2004) discussed implementation issues of Markov chain Monte Carlo methods for random regression analyses.

Modelling environmental effects in the random regression model

Milk production is influenced by exactly the same environmental factors whether a TDM or a lactation model is used in the genetic evaluation. However, for a TDM the stage of lactation is an important consideration, because of the curvilinear relationship that exists between the stage of lactation and milk production (Swalve, 1995; 2000). TDMs often use covariates or mathematical functions, in a regression, to account for the stage of lactation. Meyer (2005a) and Meyer & Kirkpatrick (2005) noted that the resultant lactation curve parameters can be considered examples of 'function-valued traits', implying that mathematical functions are in use.

The adoption of the TDM over the lactation model replaced the use of herd-year-season (HYS) with herd-test-date (HTD). The HTD accounts for the effects of herd, year and season of production, whereas the HYS effect is commonly used to account for the effects of the individual herd, the year and the season of calving, and the interactions among them. With a TDM, further effects that can be fitted in the analysis include age at calving, parity and pregnancy (Swalve, 1995).

The random regression TDM can account for many environmental factors that could affect cows differently during the lactation (Schaeffer & Dekkers, 1994). The lactation curve is split into two parts: a fixed part (the average lactation curve) and a random, animal-specific part (the deviation from the average curve). To account for the variability within lactation stage, an appropriate sub-model is fitted on stage of lactation, nested within the parts of the model that account for environmental effects. There are profound differences in the manner in which environmental variation is accounted for with an RRM, with respect to the definition of subgroups for the fixed regression on stage of lactation (Zavadilova et al., 2005). Frequently used factors are season of calving and/or classes of age at calving (Reents et al., 1998; Strabel & Misztal, 1999; Lidauer et al., 2000; Schaeffer et al., 2000). Other models include the effect of days carried calf (Lidauer et al., 2000). For South Africa, it is important to investigate how the information collected when testing herds can best be used in genetic analyses to account for the environmental variation. Mostert et al.
(2006b) defined a fixed regression TD model which passed the trend validation tests required by Interbull to ensure that the model sufficiently accounts for all environmental effects. Such studies can also attempt to recommend the inclusion of valuable variables that the current milk recording system ignores, or encourage the inclusion of traits such as fertility measures in the routine genetic evaluations. The SA Dairy Animal Improvement Scheme records artificial insemination information; unfortunately, the participants in the Scheme are still reluctant to supply it.

Persistency of lactation

Dairy breeders focus on modelling the individual genetic curves of cows and estimating genetic parameters of the lactation curves in order to select for lactation yields or persistency (Shanks et al., 1981; Danell, 1982; Ferris et al., 1985; Gengler, 1996; Jamrozik & Schaeffer, 1997). Although the definition of persistency varies, it generally refers to the rate of decline in production after peak milk yield has been reached (Swalve & Gengler, 1999). High persistency is associated with a slow rate of decline in production, whereas low persistency is associated with a rapid rate of decline. Persistent cows are more desirable because they use roughage more efficiently and suffer less metabolic stress due to a high peak yield, and are thus more disease-resistant (Solkner & Fuchs, 1987). Genetic modification of the lactation curve is concerned with the artificial redistribution of the total lactation response among the different stages of the lactation (Lin & Togashi, 2005). In a recent study, Mostert et al. (2008) laid out the framework for the inclusion of persistency of lactation in the genetic evaluation of South African dairy cattle, based on the Canadian Persistency Index. As a result, persistency of production has been implemented in the routine genetic evaluations, thereby highlighting the economic importance of persistency.

In describing the persistency of milk production during lactation, the choice of a parameter that gives a correct description of the shape of the lactation curve is important. It is therefore necessary to develop an evaluation method in which genetic differences in persistency can be evaluated on a routine basis.

A key issue in the genetic evaluation of persistency is trait definition. Gengler (1995; 1996) identified three types of measures of persistency: measures based on ratios of yields, measures based on variation of yields, and measures derived from functions that describe lactation yields. There is, however, no clear consensus on how best to model persistency mathematically. The procedure most widely used to measure lactation persistency nowadays is based on by-products of the random regression test-day model. Druet et al. (2005) showed that the first and second eigenvectors of the estimated genetic covariance matrix in a random regression model may serve as proxies for yield and persistency. The use of these eigenvectors in random regression test-day models is computationally advantageous, but there is still no clear biological interpretation of the eigenvectors.
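A sketch of the eigenvector observation of Druet et al. (2005): decomposing an invented coefficient covariance matrix K and evaluating the implied eigenfunctions along the standardized trajectory shows the typical pattern of a nearly level first eigenfunction (overall yield) and a sign-changing second one (a persistency-like contrast):

    # Eigen-decomposition of a hypothetical RR coefficient covariance matrix
    # and evaluation of the implied eigenfunctions of G(t, t').
    import numpy as np

    K = np.array([[8.0, 1.2, -0.5],
                  [1.2, 2.0,  0.3],
                  [-0.5, 0.3, 0.9]])   # invented, positive-definite example
    vals, vecs = np.linalg.eigh(K)     # eigenvalues in ascending order
    order = np.argsort(vals)[::-1]     # largest first
    vals, vecs = vals[order], vecs[:, order]

    def phi(t):
        """Legendre covariates P_0, P_1, P_2 on the standardized scale."""
        return np.array([1.0, t, 0.5 * (3.0 * t**2 - 1.0)])

    for i in (0, 1):
        f = [phi(t) @ vecs[:, i] for t in (-1.0, 0.0, 1.0)]
        print(f"eigenvalue {vals[i]:5.2f}, eigenfunction at t=-1,0,1:",
              [round(v, 2) for v in f])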
Conclusion

Attempts to improve the accuracy of estimated breeding values, reduce the generation interval and boost the response to selection for dairy cattle, and the quest to provide more comprehensive management information to dairy farmers, are stimulating interest in advancing the conceptual framework of the TDM. The RRM approach probably holds the potential to realize these benefits for the South African dairy cattle genetic evaluation programme. Replacing the current TDM with an RRM requires research to demonstrate the benefits. Current research should focus on defining the RRM to be implemented, investigating the environmental effects to be included in the model, and estimating the covariance structure among observations and the genetic parameters for the traits to be included in the breeding programme for dairy cattle in South Africa. These are the requisite steps towards the adoption of an RRM framework for the analysis of dairy TD records.

Genetic evaluations for the South African dairy herd have used a fixed regression model defined by the parameter sizes shown in Table 1 (personal communication: B. Mostert, 2010, ARC, Private Bag X02, Irene 0062, South Africa). Using a random regression model would probably increase the number of dairy cattle evaluated, thereby improving the accuracy of their proofs.

Table 1. Size of parameters included in the fixed regression model used in the genetic evaluation of South African Holstein cattle from 2007 to 2010
Endogeneity in Logistic Regression Models

To the Editor: Ethelberg et al. (1) report on a study of the determinants of hemolytic uremic syndrome resulting from Shiga toxin-producing Escherichia coli. The dataset is relatively small, and the authors use stepwise logistic regression models to detect small differences. This indicates that the authors were aware of the limitations of the statistical power of the study. Despite this, the study has an analytical flaw that seriously reduces its statistical power.

An often overlooked problem in building statistical models is that of endogeneity, a term arising from econometric analysis, in which the value of one independent variable is dependent on the values of other predictor variables. Because of this endogeneity, significant correlation can exist between the unobserved factors contributing to both the endogenous independent variable and the dependent variable, which results in biased estimators (incorrect regression coefficients) (2). Additionally, the correlation among the independent variables can create significant multicollinearity, which violates the assumptions of standard regression models and results in inefficient estimators. This problem shows up as model-generated coefficient standard errors that are larger than the true standard errors, which biases the interpretation towards the null hypothesis and increases the likelihood of a type II error. As a result, the power of the test of significance for an independent variable X1 is reduced by a factor of (1 - r²(1|2,3,...)), where r(1|2,3,...) is defined as the multiple correlation coefficient for the model X1 = f(X2, X3, ...), and all Xi are independent variables in the larger model (3,4).

The results of this study clearly show that the presence of bloody diarrhea is an endogenous variable in the model showing predictors of hemolytic uremic syndrome, in that the diarrhea is shown to be predicted by, and therefore strongly correlated with, several other variables used to predict hemolytic uremic syndrome. Similarly, the Shiga toxin 1 and 2 (stx1, stx2) genes are expected to be key predictors of the presence of bloody diarrhea, independent of strain, because of the known biochemical effects of the toxin (5,6). Because the strain is in part determined by the presence of these toxins, including both strain and genotype in the model means that the standard errors for the variables for the Shiga-containing strains and the bloody diarrhea symptom are likely to be too high, and hence the significance levels (p values) obtained from the regression models are higher than the true probability, a type II error. This flaw is a particular problem in studies that use a conditional stepwise technique for including or excluding variables. The authors note that they excluded variables from the final model if those variables did not reach significance at an α level (p value) of 0.05 in initial models. Given the inefficiencies due to the endogeneity of bloody diarrhea, as well as those that may result from other collinearities, significant predictors were likely excluded from the study, although this cannot be confirmed from the data presented.

The problems associated with the endogeneity of bloody diarrhea can be overcome by a number of approaches.
For example, the simultaneous equations approach, such as that outlined by Greene (7), would use predicted values of bloody diarrhea from the first stage of the model as instrumental variables for the actual value in the model for hemolytic uremic syndrome. Structural equations approaches, such as those suggested by Greenland (8), would also be appropriate. However, bloody diarrhea is not the only endogenous variable in their models, and extensive modeling would be necessary to isolate the independent effects of the various predictor variables. Given the small sample size, this may not be possible.

The underlying problem in the study is the theoretical specification of the model, in which genotypes, strains, and symptoms are mixed, despite reasonable expectations that differences at one level may predict differences at another. For example, the authors' data demonstrate that all O157 strains contain the stx2 gene and have higher rates of causing hemolytic uremic syndrome and bloody diarrhea. This calls into question the decision to build an analytic model combining 3 distinct levels of analysis. Such a model depends on the independence of the variables to obtain unbiased, efficient estimators. The model of the relationships one would develop from a theoretical perspective predicts the opposite (Figure). We expect that the genotypes (by definition) will predict the strain, and that strains have a differential effect on symptoms. The high level of intervariable correlation due to these relationships, coupled with the decision to exclude variables based on likely inefficient p values, raises questions concerning the reliability of the results and conclusions. In particular, the conclusions that strains O157 and O111 are not predictors of hemolytic uremic syndrome deserve to be revisited; other excluded variables may also be significant predictors when considered under an appropriate model. These problems point to the need to ensure proper specification of analytic models and to demonstrate due regard for the underlying assumptions of the statistical models used.

Figure. Model for determining virulence factors for hemolytic uremic syndrome.
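The power-reduction factor quoted in the letter can be illustrated numerically: regress the endogenous predictor X1 on the remaining predictors and form 1 - r². The data and coefficients below are simulated for illustration and do not reproduce the study's variables:

    # Power-reduction factor (1 - r^2) for an endogenous predictor:
    # regress X1 on the other predictors and compute the multiple R^2.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    x2 = rng.normal(size=n)
    x3 = rng.normal(size=n)
    x1 = 0.8 * x2 + 0.5 * x3 + rng.normal(scale=0.6, size=n)  # endogenous X1

    # R^2 of the model X1 = f(X2, X3) via ordinary least squares.
    X = np.column_stack([np.ones(n), x2, x3])
    beta, *_ = np.linalg.lstsq(X, x1, rcond=None)
    resid = x1 - X @ beta
    r2 = 1.0 - resid.var() / x1.var()

    print(f"r^2(1|2,3) = {r2:.2f}; power-reduction factor = {1 - r2:.2f}")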
In Response: We welcome the comments of Avery (2), although we believe the critique of the methods is largely based on misunderstandings. We developed a model for the risk of progression to hemolytic uremic syndrome (HUS) containing 3 variables: whether the infecting Shiga toxin-producing Escherichia coli isolate had the stx2 gene, the age of the patient, and the occurrence of bloody diarrhea. The critique relates to the fact that bloody diarrhea and stx2 are not independent, since we showed that stx2 was strongly associated with progression to HUS (odds ratio [OR] = 18.9) and also weakly associated with the development of bloody diarrhea (OR = 2.5) (2).

Avery uses the term endogeneity as it is used in econometric analyses; however, the term "intermediary variable," i.e., a factor in the causal pathway leading from exposure to disease, is more frequently used in epidemiology. In this context, we chose to consider bloody diarrhea as a potential confounder (3). A confounder is a risk factor that is also independently associated with the exposure variable of interest and is not regarded as part of the causal pathway (see online Figure at http://www.cdc.gov/ncidod/EID/vol11no03/05-0071-G.htm). Bloody diarrhea may act as a confounder if patients with bloody stools are treated differently by the examining physicians or if, for instance, unknown virulence factors contribute to the risk of having bloody stools.

A second line of critique of our methods apparently develops from the idea that virulence factors determine the serogroup. This idea, however, is a biological misconception. In fact, virulence genes and serogroup are independent at the genetic level, and an important point of our article is that HUS is determined by the virulence gene composition of the strain rather than by the serogroup.

Regardless of the status of the bloody diarrhea variable, excluding it from the model does not change the conclusions of the article. A revised model contains only the significant variables age and stx2 (Table). Serotype O157 is still not an independent predictor of HUS, and this result is robust.

Rectal Lymphogranuloma Venereum, France

To the Editor: Lymphogranuloma venereum (LGV), a sexually transmitted disease (STD) caused by Chlamydia trachomatis serovars L1, L2, or L3, is prevalent in tropical areas but occurs sporadically in the western world, where most cases are imported (1). LGV commonly causes inflammation and swelling of the inguinal lymph nodes, but it can also involve the rectum and cause acute proctitis, particularly among men who have sex with men. However, LGV serovars of C. trachomatis remain a rare cause of acute proctitis, which is most frequently caused by Neisseria gonorrhoeae or by non-LGV C. trachomatis (2).
In 1981, in a group of 96 men who have sex with men with symptoms suggestive of proctitis in the United States, Quinn et al. found that 3 of 14 C. trachomatis infections were caused by LGV serovar L2 (3). In France, 2 cases of rectal LGV were reported in an STD clinic in Paris from 1981 to 1986 (4). In 2003, an outbreak of 15 rectal LGV cases was reported among men who have sex with men in Rotterdam; 13 were HIV-infected, and all reported unprotected sex in neighboring countries, including Belgium, France, and the United Kingdom (5). At the same time, a rise in C. trachomatis proctitis (diagnosed by using polymerase chain reaction [PCR]; Cobas Amplicor, Roche Diagnostic System, Meylan, France) was detected in 3 laboratories in Paris and in the C. trachomatis national reference center located in Bordeaux. To identify the serovars of these C. trachomatis spp., all stored rectal specimens were analyzed by using a nested omp1 PCR-restriction fragment length polymorphism assay. The amplified DNA product was digested by restriction enzymes. Analysis of the digested DNA was performed by electrophoresis.
2014-10-01T00:00:00.000Z
2005-03-01T00:00:00.000
{ "year": 2005, "sha1": "a7b8f42c45dd15f6ebf0e00b2930983e1980b46f", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3201/eid1103.050462", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a9e4ff589fee82d3caaab362c4de32182a6d3311", "s2fieldsofstudy": [ "Economics", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Mathematics" ] }
9175994
pes2o/s2orc
v3-fos-license
The Serology Diagnostic Schemes in Borrelia burgdorferi Sensu Lato Infections – Significance in Clinical Practice Introduction Lyme borreliosis is a world-wide multi-organ disease caused by the spirochete Borrelia burgdorferi sensu lato. Numerous Borrelia genospecies are identified with varying frequency in Europe, Asia and America (Ruderko et al., 2009; Siegel et al., 2008; Stanek, 2011; Wilske et al., 2007; Wodecka, 2006a). Within the last few years, both in Europe and in North America, strategies, directives and guidelines for the diagnostics and treatment of Lyme disease have been prepared, including the frequency of occurrence of specific genospecies and a specification of clinical symptoms (Center for Disease Control and Prevention [CDC], 2011; European Concerted Action on Lyme Borreliosis [EUCALB], 2008). Lyme disease seems to be easy to diagnose and treat because the pathogenic agent has been known for a long time and diagnostic and therapeutic schemes have been elaborated.
In serological diagnostics, an impediment is the wide range of genospecies of B. burgdorferi, the changes in expression of particular genes occurring in various stages of an infection, and the cross reactions which occur in the presence of other pathogenic microorganisms and disease entities connected with immune response disorders. This makes it necessary to use appropriately configured diagnostic tests and recombinant proteins common for particular genospecies and related to the immunological response at different stages of the infection (EUCALB, 2008; Zajkowska et al., 2006a, 2006b). Both the diagnostician and the physician have to consider not only the results of the serological tests but also numerous, coexisting, frequently unspecified factors in order to make an accurate diagnosis confirming or excluding B. burgdorferi infection. In many cases, even early and accurate diagnosis and appropriately applied antibiotic therapy do not guarantee the effective eradication of the pathogen and, what is important for the patient, a complete elimination of the symptoms of the disease. Post-treatment Lyme disease syndrome (PTLDS) has been confirmed in some patients; it is a complex of lingering, unspecified clinical symptoms which impede a complete physical and mental recovery in patients after being treated for Lyme disease. This is a crucial problem in both health and social life which is frequently ignored. The symptoms concerning Lyme arthritis and neuroborreliosis are frequently the cause of an immense disability in patients in numerous life activities, and a rehabilitation program is required (Tokarska-Rodak et al., 2007). Two-step laboratory testing process in diagnostics of Lyme disease European Concerted Action on Lyme Borreliosis (EUCALB) and the Center for Disease Control and Prevention (CDC) recommend a two-step testing process in the serological diagnostics of Lyme disease (CDC, 2011; EUCALB, 2008). It has been assumed that a diagnosis of every form of clinical disease, except erythema migrans (EM), requires the two-step testing process. The first step in the testing process uses enzyme immunoassay techniques: the Indirect Immunofluorescence Assay (IFA) or the Enzyme Linked Immunosorbent Assay (ELISA) in order to detect the presence of specific IgM and/or IgG antibodies (in relation to the stage of illness). ELISA or IFA tests should be confirmed by immunoblotting (Western blot, Wb). In other European countries and the USA a two-test procedure is recommended: a sensitive screening test such as ELISA supported by immunoblot. All specimens positive or equivocal by an ELISA or IFA should be tested by a standardized Western blot. Specimens negative by a sensitive ELISA or IFA need not be tested further. It was recommended that an IgM immunoblot be considered positive if two of the following three bands are present: OspC (24 kDa), BmpA (39 kDa) and flagellin (41 kDa). It was further recommended that an IgG immunoblot be considered positive if bands for the following antigen proteins are present: p17, p18, p21 (DbpA), OspC (p22, 23, 24, 25), OspD (p29), p30, OspA (p31), OspB (p34), p58, p83/100 and VlsE (Aberer, 2007; Deutsche Borreliose-Gesellschaft e.V., 2010; EUCALB, 2008; MMWR, 1995).
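To make the decision logic above concrete, here is a minimal, hedged sketch of the two-step rule in Python; the band names follow the IgM criteria quoted in the text, while the function and variable names are illustrative, not part of any official guideline.

```python
# Minimal sketch of the two-step testing rule described above (illustrative only).
IGM_CRITERION_BANDS = {"OspC", "BmpA", "p41"}  # OspC (24 kDa), BmpA (39 kDa), flagellin (41 kDa)

def two_step_result(screening_result, igm_blot_bands=frozenset()):
    """screening_result: 'positive', 'equivocal' or 'negative' ELISA/IFA result;
    igm_blot_bands: set of bands detected on the confirmatory IgM immunoblot."""
    if screening_result == "negative":
        return "negative - no further testing required"
    # positive or equivocal screening results are confirmed by immunoblot
    detected = IGM_CRITERION_BANDS & set(igm_blot_bands)
    if len(detected) >= 2:  # at least 2 of the 3 criterion bands
        return "IgM immunoblot positive"
    return "IgM immunoblot negative"

print(two_step_result("positive", {"OspC", "p41"}))   # -> IgM immunoblot positive
print(two_step_result("negative"))                    # -> negative - no further testing required
```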
Complete standardization of immunoblotting protocols in Europe is unrealistic at present. Lyme borreliosis is not the same in all geographic areas due to the different local prevalence of species and strains of B. burgdorferi s.l. and to heterogeneity within those strains. Recommendations for the interpretation of Western blots have not always been applicable to populations in geographic areas other than where they were developed. The development of immunoblots using defined recombinant or synthetic antigens is promising for the future (EUCALB, 2008; Robertson, 2000). The tests for Lyme disease (ELISA, IFA, Western blot) measure antibodies made by white blood cells in response to infection. IgM antibodies are produced in the human body in the response against the antigen proteins of B. burgdorferi in the 2nd-4th week after exposure, reaching their peak in the 4th-6th week, and they are most frequently IgM anti-OspC and anti-p41. If the serological test is made too soon, it can show falsely negative results for the presence of IgM because of the low level of the response. The antibody response in early Lyme borreliosis may be weak or absent, especially in erythema migrans, and antibiotic treatment may abrogate antibody production. Serology may also be negative in acute neuroborreliosis with a short duration of the disease (EUCALB, 2008; Stanek, 2011). The repetition of the Wb after 2-4 weeks should be considered in patients at an early stage in case of a positive result of an enzyme immunoassay test and a negative confirmation test (Depietropaolo et al., 2005; EUCALB, 2008; Flisiak & Pancewicz, 2011; Przytuła et al., 2006). When the Western blot is used during the first 4 weeks of disease onset (early Lyme disease), both IgM and IgG procedures should be performed (MMWR, 1995). Specific IgG and/or IgM are found in only 40-60% of untreated cases of EM, particularly in patients with signs of haematogenous spread (EUCALB, 2008). In most patients with active Lyme disease, the level of IgM antibodies decreases after about 4 months. IgG antibodies start to emerge in the serum in the 4th week after the infection. In the early disseminated stage or in acute neuroborreliosis, the IgM/IgG seropositivity increases to 70-90%. In this period, the immunological response can be manifested in relation to few antigens. For the diagnosis of Lyme arthritis, it is essential to demonstrate the presence of specific IgG antibodies, usually in high levels. A positive IgM test in the absence of IgG antibodies argues against the diagnosis of Lyme arthritis. Follow-up is recommended only in cases with a short duration of symptoms. For the diagnosis of acrodermatitis chronica atrophicans (ACA), it is essential to demonstrate high levels of IgG anti-B. burgdorferi. A positive IgM test in the absence of IgG antibodies argues against the diagnosis of ACA (EUCALB, 2008). Technical problems that contribute to false-negative or false-positive results include the adoption of inadequate cut-off levels, the presence of cross-reacting antibodies, false positive reactions caused by some autoimmune diseases and inappropriate interpretation criteria for Western blots (Stanek, 2011). It is not recommended to skip the screening test and go straight to doing the Western blot.
Doing so will increase the frequency of false positive results and may lead to misdiagnosis and improper treatment (CDC, 2011). According to the directives, which are mandatory in Europe, the serological tests determining the level of antibodies IgM/IgG anti-Borrelia burgdorferi should not be used in the assessment of the effectiveness of the therapy. The effectiveness of the antibiotic therapy should be assessed only on the basis of the dynamics of the clinical picture (EUCALB, 2008;Flisiak & Pancewicz, 2011). It has not been defined so far that there is a parameter, which marking would reliably determine the effectiveness of the elimination of the pathogen. It has been suggested that, the decrease of the titre of the antibodies for C6 protein can be interpreted as an indicator of the effectiveness of the therapy (Aberer, 2007). The probability that a patient with a positive serological test actually has Lyme borreliosis and the probability that a patient with a negative test does not have the disease depends on the performance characteristics of a given assay (sensitivity and specificity) and also on the prevalence of the disease in the population (Stanek, 2011). Both diagnostic tests (ELISA/IFA and Wb) complement each other mutually. Serological diagnosis is always a balance between sensitivity and specificity of the assays. A high level of specificity is always more important than a high level of sensitivity (EUCALB, 2008). The enzyme immunoassay tests are characterized by high sensitivity and relatively low specificity, whereas the Wb test is characterized by high specificity and low sensitivity (Flisiak & Pancewicz, 2011). A minimum standard of a least 90% specificity for the screening tests (ELISA, IFA) and 95% specificity for the immunoblot should be established in the population where the assay is to be used (EUCALB, 2008). The two-step laboratory testing process is designed to eliminate unspecific falsely positive results which occur in a various frequency during the diagnosis with the use of one test and it allows on an explicit assessment with the interpretation of the limit results. The PCR methods are not used in a routine diagnosis of Lyme disease on account of lack of gained standards, although there are a number of researches done in this matter by many research establishments. The detection of bacteria's DNA made by the PCR method can be interpreted in two ways: it can confirm the presence of a living bacteria in an organism or it can signify on the presence of DNA coming from bacteria killed with antibiotics. The PCR method does not allow on a differentiation of DNA between living and dead being, or free DNA coming from the disintegration of the bacteria cell (Wodecka, 2006b). The immunological response against the infection of B. burgdorferi in the aspect of clinical symptoms The dissemination of the spirochetes into further tissues and organs occurs in a short period of time since the transmission of the infection through blood and lymphatic vessels and it is possible that through peripheral nerves as well (Sigal, 1997). The innate defense mechanisms are initiated as the first in the process of the immunological response of an organism against infection, in which as well as phagocytes as complement system, lysozyme and interferon take part. 
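The dependence of predictive values on sensitivity, specificity and prevalence mentioned earlier in this section can be made concrete with a short calculation; the sketch below is illustrative only, and the 95% sensitivity and 5% prevalence figures are assumptions chosen for the example (the 90% specificity is the minimum standard quoted above for screening tests).

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive value from test characteristics and disease prevalence."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    true_neg = specificity * (1 - prevalence)
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# assumed example: 95% sensitivity, the minimum recommended 90% specificity, 5% prevalence
ppv, npv = predictive_values(0.95, 0.90, 0.05)
print(f"PPV = {ppv:.2f}, NPV = {npv:.3f}")   # roughly PPV = 0.33, NPV = 0.997
```

Even a fairly specific screening test therefore yields many false positives at low prevalence, which is exactly why the confirmatory immunoblot step, with its higher specificity requirement, is used.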
All disorders of the unspecific mechanisms in the early stage of the infection can prevent an effective elimination of the pathogen in further stages of the immunological response, and consequently lead to the development of a chronic state of the illness (Bykowski et al., 2008; Siegel et al., 2010). The diagnosis of the illness in patients with EM is usually made on the basis of the clinical picture without confirmation by serological tests, whose results are frequently negative during this period. The erythema migrans usually appears in the place of a tick's bite after 1-3 weeks. Typical EM has the form of a spot with a tendency to expand, a diameter of more than 5 cm and a brightening in the middle. Untypical forms do not demonstrate central brightening. They can be shaped irregularly or have hemorrhagic features. The appearance of EM within a period of time shorter than 2 days after a tick's bite and with a diameter of less than 5 cm argues against the diagnosis. Erythema migrans disappears spontaneously within a few days after the inception of antibiotic therapy; however, this does not mean that the infection has been eliminated. Untreated lesions can stay even for a few months and disappear spontaneously, though the infection still lasts (Flisiak & Pancewicz, 2011; Tokarska-Rodak et al., 2010a, 2010b). It is essential to implement the serological test when EM takes an untypical form or does not appear and there is a suspicion of infection with the spirochetes of B. burgdorferi. The symptoms of disseminated Lyme disease concern the nervous system, heart, muscles and joints (Aberer, 2007; Depietropaolo, 2005). This stage of the disease appears within a few weeks to more than a year after the infection, while the late stage of Lyme disease can occur even many years after the invasion of Borrelia spirochetes into the human body. The late stage of Lyme disease is manifested with skin lesions (acrodermatitis chronica atrophicans), chronic neurological symptoms or chronic arthritis. Borrelia arthritis can take on a chronic form leading to permanent joint damage, and it can also be manifested by chronic and migrating muscle pains, recurring arthritis, pain caused by an inflammatory reaction within the motor organs and the weakening of skeletal muscles (Singh & Girschick, 2004a; Wilgat et al., 2004). The presence of IgG for the broad antigen spectrum is observed in persons with symptoms of late Lyme disease, in 100% of patients (EUCALB, 2008; Wilske et al., 2007). The examination of cerebrospinal fluid is made in the diagnostics of neuroborreliosis. The presence of anti-B. burgdorferi antibodies in the cerebrospinal fluid does not have to result from their production in the cerebral space but can be an effect of the penetration of antibodies from blood through a damaged blood-brain barrier (EUCALB, 2008). The decision concerning the diagnosis and treatment of Lyme disease is based on the clinical picture with the results of serological tests taken into account. Lyme disease should not be diagnosed in the case of positive test results without the presence of clinical symptoms of the disease.
Although, it is possible that there is a certain percentage of people in a healthy population, in whom the seropositiveness changing along with the age is observed in comparison with B. burgdorferi connected with outdoor activities (Bacon, et al., 2003;Wilske, et al., 2007). The immunological response against B. burgdorferi infection in the aspect of the diversity of genospecies There are two risk factors of acquiring infection Borrelia in a relation to existence on the area where the ticks are present. The first concerns the estimation of spreading B. burgdorferi sensu lato in ticks Ixodes ricinus which is the main vector of pathogen in Europe. The second factor concerns the determination of diversity of gene-species on a particular area. The risk of human infection B. burgdorferi s.l increases along with the number of ticks being infected on a particular area. It depends from the multiple of stabs during the haematophagy season of ticks and it becomes bigger when the time of infected ticks present on a human skin is longer (Ołdak, et al., 2009;Wodecka, 2006a (Aguero-Rosenfeld et al., 2005;Ruderko, et al., 2009;Sicklinger, et al., 2003;Wodecka, 2006a). According to some sources, there are also B. californiensis sp. nov. in this group (Siegel, et al., 2008). B. afzelii, B. garinii, B. burgdorferi sensu stricto and occasionally B. spielmani, B. valaisiana, B. lusitaniae are responsible for causing Lyme disease in Europe whereas in South America only B. burgdorferi sensu stricto (Siegel, et al., 2008;Stanek G, 2011;Wilske, et al., 2007). Even though Lyme disease is most generally caused in Europe by the above mentioned three genospecies Borrelia, it cannot be excluded that there are other genospecies causing the symptoms of the disease. DNA of B. valaisiana was detected in the cerebrospinal fluid of a patient with chronic neuroborreliosis in Greece and in a patient with erythema migrans. B. lusitaniae was isolated from a patient with suspected Lyme disease in Portugal (Derdáková & Lenčáková, 2005). Direct relation of skin changes of erythema migrans (EM) type with an infection of B. spielmani was revealed in some of the European countries (Netherlands, Germany, Hungary, Slovenia). Thus, the relation of these gene-species with Lyme disease has been proven (Maraspin, 2006;Wilske, et al., 2007). The genetic changeability of Borrelia has an influence as well as on the spirochetes' pathogenicity as on the clinical manifestations of the disease. Consequently, the heterogeneity of microorganisms causing Lyme disease in Europe should be taken into consideration in the serological and microbiological diagnostics (Derdáková & Lenčáková, 2005;Richter et al., 2004;Wang et al., 1999;Wilske, et al., 2007). It has been proven that Borrelia afzelii is responsible for skin lesions of type acrodermatitis chronica athropicans (ACA) and the presence of borrelia lymphocytoma, whereas B. garinii has been isolated more frequently from the cerebrospinal fluid and thus its relation with neuroborreliosis is emphasized. B. burgdorferi s.s. is responsible for lesions type arthritis. As well as the late skin lesions as neuroborreliosis are more often detected in Europe whereas Lyme arthritis is more often diagnosed in the USA (Derdáková & Lenčáková, 2005, Wilske, et al., 2003, 2007. All three genospecies B. burgdorferi s.s., Borrelia afzelii and Borrelia garinii can participate in the development of erythema migrans. 
However, there are differences in the clinical manifestation of EM caused by those genospecies (Maraspin, 2006; Wodecka, 2006a). For serological diagnostics, it is essential to establish whether IgM/IgG antibodies, which are produced in the immune response against infection caused by genospecies other than the three known as pathogenic in Europe, can be detected by the diagnostic tests used in the standard diagnostics of Lyme disease. Diagnostic tests used routinely in Lyme disease diagnosis in Europe (ELISA, Western blot) usually contain antigen extracts of B. burgdorferi s.s., B. afzelii and B. garinii, or electrophoretically separated antigen extracts of Borrelia afzelii enriched with recombinant VlsE antigen. According to researchers, antigens of B. spielmani, the genospecies mentioned as the fourth next to B. burgdorferi s.s., B. afzelii and B. garinii, should additionally be used in the tests and be considered in the diagnosis of Lyme disease within Europe (Derdáková & Lenčáková, 2005; Maraspin, 2006; Tokarska-Rodak et al., 2010c). The diagnostically significant antigen proteins of B. burgdorferi In clinical practice the evaluation of the active stage of infection is primarily based on the clinical symptomatology, routine enzyme immunoassays, and confirmatory tests such as Western blot (CDC, 2011; Štefančiková et al., 2005; Tokarska-Rodak et al., 2010a). The identification of IgM and IgG antibodies directed against specific antigenic proteins of B. burgdorferi constitutes the basis of the serological diagnostics of Lyme disease. It becomes essential in Europe to use tests with an appropriately selected antigenic panel considering the heterogeneity of the Borrelia burgdorferi proteins (Štefančiková et al., 2005). The evolution of the production of antibodies directed against various B. burgdorferi antigens is observed along with the development of the disease process after the transmission of the spirochetes into the human body. In the early stage of the infection (2 to 4 weeks) the immunological system detects only a few Borrelia antigens, such as p41 (flagellin) and Osp proteins, and produces IgM antibodies against them. OspC is an immunogenic lipoprotein and the main virulence factor of the infection in people (particularly genotypes OspC A, B, I, K). OspC undergoes little expression in a tick's gut and in culture; however, it is expressed intensively after the transmission of a spirochete into a mammalian organism. OspC and OspA are the most important proteins of the outer membrane of the B. burgdorferi cell. OspC is characterized by large polymorphism and substantial reactivity in comparison with OspA. Both antigens are connected with genetic and antigenic heterogeneity among various species. The classification is made on the basis of various genotypes or serotypes. There have been 8 different OspA serotypes and 16 OspC serotypes (B. burgdorferi s.s. 6 serotypes, B. afzelii 4 serotypes, B. garinii 6 serotypes) registered in Europe (Aberer, 2007; Wodecka, 2006a). The surface proteins of the spirochete affect significant stages of the immunological response: OspA inhibits the phagocytosis of spirochetes and the oxidative burst in neutrophils, especially at low concentrations of complement, which substantially facilitates the survival and dissemination of the bacteria.
The spirochetes can bind to host cell receptor molecules and the extracellular matrix, such as integrins, glycoproteins, and proteoglycans (Hartiala et al., 2008). IgG antibodies against B. burgdorferi appear several weeks after the bite, and their level can remain increased and persist even after the resolution of the clinical symptoms. As the infection develops, the immunologic response extends to an increasing number of antigen proteins: p83, p58, p53, p43, p39, BBK32 (p35), p31, p30 (OspA), p25 (OspC), p21, p19, DbpA (p17). The recombinant antigens OspC, p100, VlsE, DbpA (p17), BBK32, p66 and the peptides C10 and C6 are used in order to improve the diagnostics of Lyme disease and for a better prediction of the duration of the infection (Aberer, 2007; Aguero-Rosenfeld et al., 2005; Tokarska-Rodak et al., 2008, 2010a; Wilske et al., 2007). The selected antigens such as p83/100 and BmpA (p39), or antigens of high specificity but common for many microorganisms (e.g. the flagellar protein flagellin), are introduced. Flagellin (p41) is one of the most immunogenic proteins occurring in the B. burgdorferi cell and causes a very strong and early humoral response. Epitopes characteristic for B. burgdorferi occur only between amino acids 129 and 251. The protein coming from the initial and final parts of the chain shows a high degree of homology with the flagellin amino acid sequences of Bacillus subtilis (65%) and Salmonella Typhimurium (56%). The use of only the parts specific for B. burgdorferi in diagnostic tests decreases the percentage of falsely positive results, especially for IgM (Aguero-Rosenfeld et al., 2005). Another sensitive and specific antigen which may be used in the serological confirmation of the infection is DbpA (p17). Its presence was confirmed in 93% of patients with Lyme arthritis and in 100% of patients with neuroborreliosis (Aberer, 2007). As indicated by the diagnostics of infections caused by B. burgdorferi s.l., the essential antigens are highly immunogenic proteins expressed in vivo after the spirochetes' transmission into the human body. The antigens VlsE, BBA36 (22 kDa), BBO323 (42 kDa), Crasp3 (21 kDa) and pG (22 kDa) show expression in vivo and contain highly immunogenic epitopes common for B. burgdorferi sensu lato, which are an important determinant of advanced stages of Lyme disease in IgG serology (Bykowski et al., 2007; Hofmann et al., 2006; Tokarska-Rodak et al., 2010a; Wilske et al., 2007; Zajkowska et al., 2006b). The researchers believe the VlsE protein is the most sensitive recombinant B. burgdorferi s.l. antigen used in the diagnostics. It is possible to detect IgM/IgG anti-VlsE in all pathogenic Borrelia burgdorferi sensu lato genospecies, and the risk of false positive results is ten times lower in comparison to other Borrelia antigens (Chmielewska-Badora et al., 2006; Liang et al., 2000; Wilske et al., 2007). In spite of an advanced stage of Lyme disease, in some patients there can be persistence of IgM antibodies against the outer surface protein OspC and against VlsE (Hofmann, 2006; Tokarska-Rodak, 2010a).
While the VlsE antigen is currently included in the most commonly used serological tests (ELISA and Western blot), the other antigens are not included in routine diagnostics. Besides the VlsE antigen, highly immunogenic CRASP proteins (complement regulator-acquiring surface proteins), e.g. CRASP-3, proteins belonging to the Erp family (pG), and many membrane proteins (immunogenic membrane-associated proteins), among which there is BBO323, are found during B. burgdorferi infection (Nowak et al., 2006; Singh & Girschick, 2004a). Research confirms the significance of the in vivo antigens in the immunological response against B. burgdorferi infection. The studies conducted by Hofmann and associates showed that the antigens BBA36, BBO323, Crasp3 and pG are characteristic for late Borrelia infections. It has been confirmed that IgG antibodies to BBO323 (90%), BBA36 (67%) and p83 (71%) are present in patients with Lyme arthritis, but antibodies to Crasp3 (38%) and pG (33%) are found much more seldom (Hofmann et al., 2006). The presence of IgG antibodies anti-VlsE, Crasp3, BBO323 and BBA36 has been confirmed with various frequencies in patients bitten many times by ticks and with a clinical manifestation of Lyme arthritis (Tokarska-Rodak et al., 2008, 2010a). The presence of IgG antibodies to VlsE and BBO323 has also been confirmed in persons suspected of the disease who had erythema migrans (Zajkowska et al., 2006a, 2006b). It has been assumed that the routine use of a broadened spectrum of in vivo antigens (beside VlsE) in Western blot tests can contribute to the determination of the severity and dynamics of the immunological response against the used antigens, which will provide more possibilities in the assessment of the immune reactions in relation to the clinical state of a patient. The immunological factors essential in the response of the host's organism against B. burgdorferi infection In the light of current knowledge, some diseases and infections have started to be considered in the aspect of probable dysfunctions in the control of the functioning of elements of the immunological system, including the complement system. This allows many disease entities caused by infections with particular pathogenic microorganisms to be looked at from a different perspective (Klaska & Nowak, 2007). The complement system The dissemination of Borrelia spirochetes in the human organism and the development of the infection is a complex, omnidirectional process which occurs owing to many adjustments and mechanisms allowing the bacteria to survive. It seems to be essential that B. burgdorferi is able to avoid the destructive effect of the innate defence mechanisms. The complement system participates in the elimination of B. burgdorferi, and its activation on the surface of the pathogen leads to the cytolytic damage of the bacteria (Bykowski et al., 2007). The deactivation of the activation cascades of the complement allows Borrelia to survive and also determines a competent reservoir for particular genospecies of the bacteria (Siegel et al., 2010).
There are also other microorganisms apart from Borrelia burgdorferi, such as Echinococcus granulosus, Neisseria meningitidis, Neisseria gonorrhoeae, Streptococcus pyogenes, Streptococcus pneumoniae, Yersinia enterocolitica, Candida albicans and human immunodeficiency viruses, which have developed mechanisms allowing them to overcome the destructive process of the complement system. The acquisition of regulatory molecules of the host allows them to avoid the adverse effect of the complement (Klaska & Nowak, 2007). The microorganisms bind the human fluid-phase complement regulators factor H or FHL-1, and some also bind the classical pathway regulator C4BP directly to their surface (Krajczy et al., 2001). Precise in vitro analysis of many isolates of the three pathogenic genospecies of Borrelia burgdorferi s.l. has shown that all isolates of the same genospecies have a similar sensitivity to the complement's effect; however, there are significant differences among genospecies. The isolates of B. afzelii are particularly resistant to the complement's effect, the majority of B. burgdorferi s.s. isolates are moderately sensitive, whereas B. garinii isolates are fundamentally sensitive to the effect of the complement system (Suchonen et al., 2002). It is well known that B. burgdorferi s.s. B31 isolates, which come from North America, are less sensitive to the complement's effect than those which come from Europe. The difference comes from a varying capacity to bind component C9, which as a result leads to the reduction of the living functions of the spirochetes, morphological changes and the fragmentation of the bacterial cell (Krajczy et al., 2001). CRASP proteins (Complement Regulator-Acquiring Surface Proteins) are responsible for the ability of Borrelia burgdorferi s.l. to deactivate the complement; they are able to bind the regulatory proteins of the alternative pathway and as a result inhibit the activation cascade of the complement. CRASPs (from CRASP-1 to CRASP-5) bind the soluble forms of the two regulatory proteins, factor H and factor H-like protein 1 (FHL-1), and hence the activation of the complement on the surface of the bacteria does not occur. Many strains of B. afzelii and some of B. burgdorferi s.s. are capable of controlling the alternative pathway of the complement through the absorption of FHL-1 and factor H molecules. B. garinii, which is sensitive to the complement's effect, does not have that kind of capability (Krajczy et al., 2001; Suchonen et al., 2002; Zajkowska et al., 2006c). Serum resistance of B. burgdorferi B31 is mainly associated with CRASP-1 and mediated by binding of the complement regulator factor H. OspA and OspC do not bind factor H (Hartiala et al., 2008). Regardless of the pathway by which the activation of the complement occurs, the development of the membrane attack complex (MAC) is a key stage. Research has indisputably confirmed the significance of the complement in the bacteriolysis of Borrelia. The spirochetes induce an oxidative burst and calcium mobilization and are susceptible to complement-dependent phagocytosis (Suchonen et al., 2000, 2002; Krajczy et al., 2001).
The lack of susceptibility to the mechanisms of innate immunity, and especially the resistance to destruction by the complement, is regarded as a virulence factor of Borrelia burgdorferi (Siegel et al., 2010). Lyme disease in the aspect of autoimmunological processes Researchers name the long duration of the disease as one of the risk factors for borreliosis that is not curable. One cannot exclude the possibility that a long-lasting infection with B. burgdorferi, next to the typical symptoms of Lyme disease, may also induce autoimmunological changes in a small percentage of patients. The autoimmunological processes can contribute to maintaining an excessive inflammatory response in late Lyme disease and can be responsible for the maintenance of the inflammatory reaction even after the elimination of the pathogen (Kisand, 2007; Singh & Girschick, 2004b; Wilgat, 2004). Under certain conditions of environmental stress, the spirochetes can undergo a reversible transformation from motile, helical forms into inactive, spherical cysts. Such forms have been observed in the cerebrospinal fluid and tissues of patients with Lyme disease (Singh & Girschick, 2004b). The metabolically inactive bleb forms of Borrelia, containing the lipoproteins OspA, OspB and OspD, are named as a source of long-term antigenic stimulation, which lasts even in the absence of bacteria able to multiply (Steere, 2003; Śpiewak, 2004). The examination of patients with early Lyme disease did not reveal a direct connection between the presence of anti-Borrelia antibodies and antinuclear antibodies (ANA) (Śpiewak, 2004). It is possible that, in a small percentage of people initially diagnosed with Lyme disease as erythema migrans, articular symptoms with the simultaneous presence of ANA antibodies occur in the late stage in spite of the treatment used (Tokarska-Rodak, 2010b). According to Singh, one potential explanation for antibiotic-resistant Lyme disease is the generation of autoimmunity mediated directly or indirectly by the pathogen (Singh & Girschick, 2004b). Apoptosis plays the most important role in the control and physiological extinguishing of the inflammatory reaction in infections, including B. burgdorferi infection. The impairment of apoptosis of lymphocytes and other leukocytes can be connected with the risk of autoimmunization. Problems in the diagnosis of Lyme disease connected with the occurrence of other disease entities There are many disease states whose presence should be considered while interpreting the results of the screening and confirmation tests for Lyme disease. The antibodies present in the serum of people infected with EBV, CMV or Mycoplasma can cross-react with the antigens of B. burgdorferi, e.g. p41, OspC, BmpA (p39), which directs the diagnostic proceedings in a wrong direction. Cross-reacting antibodies to the B. burgdorferi antigens OspC and p39 were also observed in serum samples of patients with Treponema pallidum or Herpes simplex virus (HSV) type 2 infection (Depietropaolo et al., 2005; Strasfeld et al., 2005). The Epstein-Barr antigen VCA-gp125 (Virus Capsid Antigen) together with antigens of B.
burgdorferi were applied in one of the Wb tests used in the diagnostics of Lyme disease. Mononucleosis should be excluded in the differential diagnosis when there is reactivity against EBV-gp125 next to IgM reactivity against specific Borrelia proteins. It is widely acknowledged that, in persons with autoimmune diseases accompanied by a high index of auto-antibodies (hypergammaglobulinemia), it is necessary to consider the possibility of obtaining falsely positive results in Lyme disease serodiagnostics (EUCALB, 2008; Flisiak & Pancewicz, 2011). The determinations of anti-B. burgdorferi antibodies conducted by Hofmann et al. in patients with autoimmune diseases revealed the possibility of cross-reactions and of obtaining falsely positive results pointing to the existence of Lyme disease in this group of patients (Hofmann et al., 2006). Multiple sclerosis and lupus erythematosus can give positive results, especially when the test used to determine the level of IgM anti-B. burgdorferi is based on sonicate antigens (EUCALB, 2008). Due to the growing number of people being tested for Lyme disease, the problem of falsely positive results arising from cross reactions seems crucial, especially as regards people whose symptoms of Lyme disease are unspecific and only slightly intensified. Thus, in order to decrease their percentage to the largest extent, the available diagnostic possibilities should be used. Post-Treatment Lyme Disease Syndrome (PTLDS) About 10-20% of patients with a diagnosis of Lyme disease suffer from constant, recurring or persistent clinical symptoms from a few months to a year after the use of appropriate antibiotic therapy. The symptoms are nonspecific: muscle and joint pains, cognitive defects, increased fatigue, irritability, emotional lability, disturbances in sleep, concentration, and memory (Feder et al., 2007). In such cases, the clinical and laboratory assessment aims to exclude the possibility of treatment failure or the presence of a new condition unrelated to the previous Lyme borreliosis. This state is defined as post-treatment Lyme disease syndrome (PTLDS) if it is characterised by the presence of a persistent symptom syndrome and lasts longer than 6 months after the treatment. PTLDS cannot be defined as "chronic" Lyme disease, and the occurrence of the symptoms mentioned above does not justify the use of antibiotic therapy, which in these cases is useless and potentially harmful for the patient with PTLDS. Symptomatic treatment is recommended for patients with PTLDS (CDC, 2011; Stanek et al., 2011). The reason for the occurrence of PTLDS is not entirely explained. It has been assumed that lingering symptoms are due to residual damage to the tissues and immune system that occurred during the infection. Similar complications and auto-immune responses are known to occur following other infectious diseases (CDC, 2011; Seidel et al., 2007). Conclusion Highly immunogenic proteins produced in vivo after spirochete transmission into the human body are significant antigens for the diagnostics of B. burgdorferi s.l. infections. Antigens VlsE, BBA36, BBO323 and Crasp3 demonstrate in vivo expression and comprise highly immunogenic epitopes, common for B. burgdorferi s.l., which are important IgG
serological markers of advanced stages of borreliosis. Thus a serologic test involving those antigens creates a better potential to evaluate the immune response with account taken of the clinical status of the patient. The detection of antibodies directed against specific B. spielmani antigens suggests that this microorganism may be responsible for triggering borreliosis both as a single etiologic agent and together with other Borrelia genospecies. The long-lasting persistence of the disease, and thus long-term antigenic stimulation, can be considered a factor enabling the initiation of autoimmune reactions. This process may occur in a small percentage of patients with Lyme disease, but the possibility of its inception cannot be completely negated.
2017-08-15T06:37:34.770Z
2012-02-17T00:00:00.000
{ "year": 2012, "sha1": "e641db4ce1d37f49fe58d10660fc57bd8aa63597", "oa_license": "CCBY", "oa_url": "https://www.intechopen.com/citation-pdf-url/28827", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "2af6302b2c90009889bb12ab5251a3a7f8070ffd", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
233862097
pes2o/s2orc
v3-fos-license
Investigation of the D-loop sequence of mitochondrial DNA of the Volgograd sheep breed The paper presents the results of studies of the sequence of the D-loop of mitochondrial DNA of the Volgograd sheep breed. Mitochondrial DNA has a number of unique features that make it possible to effectively use markers based on it in phylogenetic studies of a wide range of organisms. Introduction Genomic selection of animals in the XXI century is becoming increasingly important in connection with the need to accelerate the selection process and achieve the desired results as soon as possible [1,2]. It includes many aspects, the most important of which are: genotyping of animals with reliable estimates of breeding value, development of a prediction equation for the reference population, assessment of young animals using regression models based on the use of genetic markers (Direct Genomic Value, DGV), and selection of assessed juveniles that meet the requirements of the breeding program [3,4,5]. DNA markers are a convenient tool in studies to reconstruct the origin of breeds of domestic animals, in particular sheep. The synthetic origin of domestic sheep breeds, confirmed at the molecular level, limits the information content of nuclear DNA markers when studying their demographic history. Analysis of the sequence of mitochondrial DNA, which has a maternal character of inheritance, serves as an effective way to assess the historical origin of breeds [6]. Various types of DNA markers are used in the study of sheep biodiversity. Mitochondrial DNA evolves 5-10 times faster than nuclear DNA, and different parts of its genome change at different rates. This suggests that mitochondrial DNA is the most suitable marker for studying the processes of micro- and macroevolution. The most informative region of mitochondrial DNA for the study of maternal inheritance, due to its high variability, is the control region or D-loop; the size of the mitochondrial genome ranges from 16613 to 16620 bp in domestic sheep and from 16613 to 16696 bp in wild sheep. The difference in the length of mitochondrial DNA is mainly due to the variability in the length of tandem repeats (75-76 bp) and their number in the D-loop (control region, CR) of mitochondrial DNA. Mitochondrial DNA has a number of unique features that make it possible to effectively use markers based on it in phylogenetic studies of a wide range of organisms. Mitochondrial DNA is easily isolated from biological samples, since it is represented in cells by a large number of copies. In most cases, mitochondrial DNA does not recombine and is inherited through the maternal line, which greatly simplifies the study and the subsequent analysis of the results obtained. The aim of this work was to obtain data on the nucleotide sequence of the mitochondrial DNA D-loop of the Volgograd sheep breed. Based on the data obtained, an analysis of mitochondrial DNA D-loop polymorphism was carried out in a comparative aspect with breeds of domestic and foreign selection. The use of this marker in the light of modern ideas about the origin and genetic diversity of sheep expands our understanding of the history of formation and the current state of the gene pool of the Volgograd sheep breed.
Materials and methods Sheep of the Volgograd breed (n = 13) from the Agricultural Production Cooperative Plemzavod «Romashkovsky» of the Volgograd region served as the material for the research on mitochondrial DNA. Mitochondrial DNA was isolated from tissue samples using the «K-Sorb-100» reagent kit (Limited Liability Company «Sintol») in accordance with the manufacturer's instructions. The polymerase chain reaction was carried out according to the standard procedure. The following primers were used to amplify fragments of the mitochondrial DNA D-loop: The visualization of the polymerase chain reaction products was carried out in a 2% agarose gel with the addition of ethidium bromide. Specific fragments of the polymerase chain reaction were isolated from the gel using the Cleanup Mini kit for purifying DNA from the gel (Evrogen Limited Liability Company, Russia). Fragment sequencing services were provided by Syntol. Editing and sequence alignment were performed using the BioEdit v 7.2.6 and MEGA 7 programs. The NCBI accession sequence NC_001941.1 was used as a reference. To determine the assignment of the studied samples to haplogroups, sequences of the mitochondrial DNA D-loop belonging to haplogroups A, B, C, D, and E were selected from the NCBI database (table 1). To assess the genetic diversity of the Volgograd breed, the number of haplotypes (H), the haplotype (Hd) and nucleotide (π) diversity, the average number of nucleotide differences (k), and the genetic distances between populations were determined using the DnaSP 5.10 program. Calculations and construction of ML (maximum likelihood) trees were performed using the MEGA 7.0 program. To determine the genetic distances between the breeds, the analysis included mitochondrial DNA D-loop sequences belonging to different breeds of sheep, and to determine the assignment of the studied samples to haplogroups, sequences of the mitochondrial DNA D-loop belonging to haplogroups A, B, C, D and E were selected from the NCBI database (table 2). Results and discussion Among the most popular methods for studying the domestication of farm animals, including sheep, is the analysis of polymorphism of mitochondrial DNA sequences: either of the noncoding control region (D-loop) or of complete mitochondrial genomes [7,8]. We analyzed the complete sequence of the control region (D-loop) of mitochondrial DNA in 13 individuals of the Volgograd breed. All nucleotide sequences had a length of 1179 bp, and we also determined the primary structure of nucleotides between positions 15437-16616. In all studied animals, 4 tandem repeats of 75 bp were established. Based on the obtained sequences of 13 fragments of the mitochondrial DNA D-loop, 88 polymorphic sites were identified in the studied group of sheep. As a result, 13 haplotypes were identified. The haplotype diversity in sheep of the Volgograd breed was 1.000. The average number of nucleotide differences (k) was 25.231. The nucleotide diversity for the study group as a whole is 0.02207. Results reported in foreign sources showed that domestic sheep, wild rams, mouflons and argali had 4 tandem repeats of 75 bp, in contrast to urial sheep, which had one repeat of 75 bp and 4 repeats 76 bp long. Genetic differences between populations make it possible to determine similarities or differences between breeds. There are different formulas for calculating genetic distances (Wright, Nei, Cavalli-Sforza and Edwards, and some others), but in practice they all give similar results.
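As a rough illustration of how the diversity statistics quoted above are defined, the following sketch computes haplotype diversity and nucleotide diversity from a list of aligned sequences in plain Python; the paper itself uses DnaSP 5.10, and the toy input data here are placeholders.

```python
from itertools import combinations

def haplotype_diversity(seqs):
    # Hd = n/(n-1) * (1 - sum of squared haplotype frequencies)
    n = len(seqs)
    counts = {}
    for s in seqs:
        counts[s] = counts.get(s, 0) + 1
    return n / (n - 1) * (1 - sum((c / n) ** 2 for c in counts.values()))

def nucleotide_diversity(seqs):
    # pi = mean pairwise proportion of differing positions in the alignment
    length = len(seqs[0])
    diffs = [sum(a != b for a, b in zip(s1, s2)) / length
             for s1, s2 in combinations(seqs, 2)]
    return sum(diffs) / len(diffs)

# toy example with three short, already aligned "sequences"
aligned = ["ACGTACGT", "ACGTACGA", "ACGAACGA"]
print(haplotype_diversity(aligned), nucleotide_diversity(aligned))
```

With 13 distinct haplotypes among 13 animals, the formula gives Hd = 13/12 * (1 - 13 * (1/13)^2) = 1.000, which matches the value reported for the Volgograd breed.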
In our studies, we calculated the distances according to the Tamura-Nei model between the merino and fine-wool breeds of domestic (Kazakh, Kulunda) and foreign selection (Altai, Merinolandschaf, Australian Merino, Australian Romney Marsh, Texel) [9,10]. The phylogenetic relationships between breeds were constructed using the maximum likelihood method based on the Tamura-Nei model. The analysis included 8 nucleotide sequences of the mitochondrial DNA D-loop fragment. All positions containing gaps and missing data were excluded. There were 1177 positions in the final dataset. According to the results obtained, the following breeds can be distinguished into a separate group: Volgograd, Kazakh, Texel. The fine-wool Volgograd breed of sheep was developed by the method of complex reproductive crossing of coarse-wool fat-tailed ewes, chosen as the maternal line, with fine-wool rams of the Novokaukau and Prekos breeds; the crossbred ewes of the desired type obtained as a result, mainly of the second generation, were then bred among themselves. The offspring from these crosses were, above all, unsatisfactory in terms of wool productivity. Therefore, simultaneously with the improvement of meat qualities and early maturity, in order to improve the fleece weight and the quality of the wool obtained from the crossbred ewes, from 1948 they began to be crossed with rams of the Caucasian and, to a lesser extent, Grozny breeds. The Kazakh fine-wool breed of the meat and wool direction was created at the Kazakh Scientific Research Institute of Animal Husbandry in 1931-1946. Ewes of local fat-tailed sheep were taken as a basis and crossed with rams of the Precos and Rambouillet breeds. To improve the quality of the wool during repeated crossing, rams of the Caucasian, Grozny and Askanian breeds were used. Work on breeding Texel sheep was begun in the middle of the 19th century on Texel Island by crossing low-productivity marsh ewes with British Lincoln, Leicester, Wensleydale and Hampshire rams. The import of breeding Texel sheep from Holland, Finland and Australia to the territory of the Russian Federation was carried out in 1996-1998. They were used as enhancers of meat productivity and other economically useful traits when creating an early maturing type of meat sheep. Merinolandschaf sheep were bred by crossing Spanish fine-fleeced sheep with ewes of the local South German breed. This breed is distinguished by problem-free maintenance, high growth rates, excellent meat qualities, endurance, and good wool performance. From the analysis of our data, it follows that the gene pool of the Volgograd sheep breed is represented by variants of haplotypes included in the widespread haplogroup B, which is typical for European sheep breeds (figures 1, 2). Haplogroup B also contains the Kulunda, Kazakh, Texel and Merinolandschaf breeds. This circumstance confirms that the ancestral populations, on the basis of which the Volgograd breed was formed, are of European origin. The Australian Merino breeds belong to haplogroup A, and the Altai breed belongs to haplogroup C, which is characteristic of Chinese sheep breeds. Based on the study of the variability of the D-loop of various sheep breeds, three haplogroups, A, B, and C, were initially identified; in further studies, two more haplogroups, D and E, were identified.
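For readers who want to reproduce the general distance-and-tree workflow, here is a minimal sketch using Biopython; note that the paper builds maximum-likelihood trees under the Tamura-Nei model in MEGA 7, whereas this illustration deliberately uses simple identity-based distances and neighbor joining, and the alignment file name is a placeholder.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# aligned D-loop sequences in FASTA format (placeholder file name)
alignment = AlignIO.read("dloop_alignment.fasta", "fasta")

calculator = DistanceCalculator("identity")            # simple p-distance proxy, not Tamura-Nei
distance_matrix = calculator.get_distance(alignment)

tree = DistanceTreeConstructor().nj(distance_matrix)   # neighbor-joining topology
Phylo.draw_ascii(tree)
```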
Haplogroup B is observed mainly in mouflon and in European domestic sheep, whereas in Asian sheep haplogroup A dominates. A high frequency of haplogroup A was also established in sheep in New Zealand due to the early imports of Indian animals to Australia. Haplogroup C is less common and is found in domestic sheep in Portugal, Turkey, China, and the Caucasus. Haplogroup D is found in sheep in Romania. The rarest haplogroup, E, has been identified in Turkish sheep breeds. Conclusions Thus, data on the nucleotide sequence of the D-loop of mitochondrial DNA of the Volgograd sheep breed were obtained and a comparative analysis with some breeds of the world gene pool was carried out. Our analysis also showed that all evaluated fine-wool breeds of domestic selection, except for the Altai, belong to haplogroup B, the Australian Merino breeds belong to haplogroup A, and the Altai sheep breed belongs to haplogroup C.
2021-05-07T00:03:44.206Z
2021-03-01T00:00:00.000
{ "year": 2021, "sha1": "3c4de71cc1e0c53c1a8c4fcc6e82890fd1717155", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/677/5/052111", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "157c2b49f75e4bb0cc0ebbd438f931cf6bd7bad1", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Physics", "Biology" ] }
263539244
pes2o/s2orc
v3-fos-license
How effective are social distancing policies? Evidence on the fight against COVID-19 To fight the spread of COVID-19, many countries implemented social distancing policies. This is the first paper that examines the effects of the German social distancing policies on behavior and the epidemic’s spread. Exploiting the staggered timing of COVID-19 outbreaks in extended event-study models, we find that the policies heavily reduced mobility and contagion. In comparison to a no-social-distancing benchmark, within three weeks, the policies avoided 84% of the potential COVID-19 cases (point estimate: 499.3K) and 66% of the potential fatalities (5.4K). The policies’ relative effects were smaller for individuals above 60 and in rural areas. Introduction Since its outbreak in Wuhan, the SARS-CoV-2 virus causing the respiratory disease COVID-19 has spread across the globe [1][2][3]. To prevent human-to-human transmission of the virus, many governments have adopted social distancing (SD) policies. For example, more than 190 countries have implemented nationwide school closures [4]. These and similar policies aimed at reducing interpersonal contacts to dissipate the epidemic and, ultimately, save lives. In this paper, we evaluate the effectiveness of the German social distancing policies in the fight against COVID-19. We offer two contributions. First, we provide a comprehensive analysis of the policies. We do not only estimate their impact on confirmed COVID-19 cases but also on fatalities. Additionally, we investigate if the policies affected certain socio-demographic groups more than others, and we use cell phone data to link the policies to changes in mobility behavior. Second, we propose a flexible quasi-experimental strategy that can be applied to many settings. At the core, it exploits variation in the spread of COVID-19 at the subnational level. As for the policy variation, we focus on nationwide SD policies that the German federal and state governments jointly enacted in mid-March 2020. The most significant pieces of this policy response were Chancellor Merkel's televised appeal for voluntary social distancing (March 12), the closure of schools, childcare facilities, and retail stores (March 16), and the implementation of a national contact ban (March 23). Our paper identifies the combined effect of all these policies. As the entire set of policies were simultaneously introduced in all German districts and within a short period of time, it is impossible to estimate their effects separately. It is challenging to identify the effects of nationwide SD policies. The impact of such policies is the difference in an outcome of interest (e.g., confirmed cases) between states of the world with and without them. After the policy interventions, however, we cannot observe how the outcome would have developed without the policies. We tackle this problem with an extended event-study approach that exploits variation at the level of German districts (NUTS-3 regions; comparable to US counties). Some districts experienced a COVID-19 outbreak several weeks before the policies took effect; others were not yet affected. Hence, we can compare how the outcomes developed after a local outbreak without SD policies (former districts) and with SD policies (latter districts). This comparison identifies the policies' effects if, in the counterfactual state without policy interventions, the outcome would have evolved similarly in both types of districts. 
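To make the identification strategy concrete, here is a hedged, generic sketch of a two-way fixed-effects event-study regression in Python; it is not the authors' exact specification, and the data file, column names, and event-time window are assumptions chosen for illustration.

```python
# Generic two-way fixed-effects event-study skeleton (illustrative; not the paper's exact model).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("district_panel.csv")  # hypothetical panel: one row per district and day

# days relative to the local outbreak date, binned at the edges of the event window
df["event_time"] = df["days_since_outbreak"].clip(lower=-14, upper=28)

model = smf.ols(
    "new_cases ~ C(event_time) + C(district) + C(date)",  # event-time dummies plus two-way fixed effects
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["district"]})

print(model.params.filter(like="event_time"))  # dynamic effects relative to the omitted bin
```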
We cannot test this assumption directly, but we verify its plausibility. Three features render our approach and setting especially suited to estimate the policies' effects. First, focusing on within-country variation lowers the potential for bias from heterogeneity in institutions, measurement, and populations. In particular, in the first phase of the epidemic, all German districts faced the same policies and identical testing and reporting rules. Second, the German data are sufficiently granular for quantitative impact analysis: 401 districts with varying local outbreak dates report cases and fatalities to one federal agency. Third, the data quality is arguably high, and the expected share of undetected cases is lower than in most other countries [5]. The main reasons why epidemiologists expect a low share of undetected cases in Germany are low case fatality rates and extensive testing. As of April 22, Germany had conducted 2.07 million tests (2.6% of the population).

Fig 1 gives a graphical account of our key results. It shows how the SD policies affected the number of cases (Fig 1A) and fatalities (Fig 1B) within three weeks of implementation. According to our estimates, the policies avoided 499.3 thousand cases (95% CI: 389.4K-634.1K) and 5.4 thousand fatalities (95% CI: 3.0K-8.7K) until April 2. Put differently, the policies prevented around 84% of the confirmed cases and 66% of the fatalities that would have occurred without policy interventions by that time. The heterogeneity analysis shows that the policies' relative effects were smaller (a) for the oldest age group (60+) and (b) in rural areas. Furthermore, our analysis of cell phone data implies that, without SD policies, citizens would not have limited their social contacts to the same extent: According to our estimates, individuals reduced their mobility by about 30.7% with SD policies. Without them, they would only have lowered it by 3.9%. This suggests that the citizens limited their social contacts as intended by the German authorities.

By providing the first comprehensive evaluation of the German SD policies, we add to an ongoing public and scientific debate on whether SD policies contained the virus. In Germany, for example, many citizens consider the lockdown measures appropriate to fight the pandemic [6]. Other individuals, in Germany as in the US, stage large-scale protests against them and question their effectiveness. The scientific debate on the policies' effectiveness is not settled, either. Researchers have argued that the prevalent model-based evaluations of SD policies [7][8][9][10][11][12][13][14][15] suffer from methodological issues [16]. In particular, it has been argued that epidemiological models (a) are frequently weakly identified as they fit many parameters to a single time series [17,18], (b) rely on too restrictive assumptions [19], and (c) often have limited predictive ability [20]. A recent suggestion to evaluate SD policies while avoiding these issues is to use quasi-experimental (instead of model-based) methods [21]. In this spirit, we propose a flexible and widely applicable quasi-experimental approach that exploits district-level variation in the spread of COVID-19. We then apply this method to provide a broad analysis of SD policies, including their effects on individual behavior, confirmed cases, and fatalities.
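For orientation, the relative and absolute effects quoted above can be reconciled in one line (a back-of-the-envelope check of our own; the implied counterfactual and realized totals are derived from the reported point estimates, not figures stated in the text):

$$
C_{\text{no-SD}} = \frac{499.3\text{K}}{0.84} \approx 594.4\text{K}, \qquad C_{\text{SD}} \approx 594.4\text{K} - 499.3\text{K} \approx 95.1\text{K},
$$

and analogously for fatalities, $5.4\text{K}/0.66 \approx 8.2\text{K}$ counterfactual versus roughly $2.8\text{K}$ realized lethal infections with first symptoms by April 2.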
In doing so, we contribute to a small but growing literature that exploits quasi-experimental techniques to evaluate the effectiveness of non-pharmaceutical interventions [22][23][24]. Compared to our study, the corresponding papers have different focuses: They study fewer or other outcomes, employ different identification approaches, and investigate other policies. For example, one study investigates the effectiveness of travel restrictions [22], a second one studies the effect of SD policies on hospitalizations and cases [23], and a third paper examines the impacts of shelter-in-place orders on cases in the US [24]. Methodologically, these papers exploit policy variation across countries or regions in difference-in-difference models. Another study employs a similar empirical strategy to examine the role of social capital in the spread of COVID-19 [25]. In contrast to us, they do not focus on the impact of policies.

The paper's structure is as follows: Section 2 describes the institutional background and Section 3 our estimation approach. Section 4 contains the results for mobility (Subsection 4.1) and cases and deaths (Subsection 4.2), each with a description of the relevant data. The results section also features our heterogeneity analyses (Subsection 4.3) and robustness checks (Subsection 4.4). Section 5 concludes.

COVID-19 outbreak

In Germany, COVID-19 spread after the detection of two cases on February 25, 2020 (an earlier outbreak detected on January 27 had been completely contained). In the following weeks, the infection propagated to the entire country. On March 20, there were confirmed infections in all but one of the 401 German districts. S7 Fig in S1 File shows the distribution of district-specific outbreak dates. We define the local outbreak date as the first day when ten cases had occurred within two weeks. S8 and S9 Figs in S1 File consider other outbreak definitions.

Social distancing policies

We classify the policy response during the first month of the epidemic into three phases (see Fig 2). In the first phase, starting with the detection of the first COVID-19 cases, the German authorities only took limited actions: They put infected persons under quarantine, recommended intensified hygiene practices to the public, and canceled large events with more than a thousand participants around March 9. In the second phase, the German federal and state governments agreed on additional simultaneous, nationwide containment policies to fight the epidemic. This phase began on March 12, when Chancellor Merkel appealed to all citizens to avoid social contacts whenever possible. Between March 13-15, the state governments announced the closure of schools, childcare facilities, and most retail stores starting on March 16. On March 22, they declared a strict contact ban: From March 23, meeting more than one person from outside the household was prohibited and keeping a minimum distance of 1.5 meters was required. As apparent, these policies' goal was to limit the social contacts of German citizens. Henceforth, we refer to them as "social distancing policies." Notably, our analysis identifies the combined effect of all these nationwide social distancing policies. Moreover, data on internet search behavior suggests that citizens did not anticipate these interventions (see S16-S19 Figs in S1 File). The third phase started on April 20, when the authorities gradually relaxed the policies.
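Looking back at the outbreak definition above (the first day on which ten cases have occurred within two weeks), the computation is a simple rolling-window threshold test. A minimal sketch, assuming a pandas Series of daily new cases per district (function name and data layout are our own, not the authors'):

```python
import pandas as pd

def local_outbreak_date(daily_cases: pd.Series, threshold: int = 10,
                        window: str = "14D"):
    """First date on which the trailing two-week case count reaches
    `threshold`, mirroring the outbreak definition in the text.
    Returns None if the district never crosses the threshold."""
    rolling = daily_cases.rolling(window).sum()
    hits = rolling[rolling >= threshold]
    return hits.index[0] if len(hits) else None

# Toy example: the trailing sum 0+2+3+0+6 = 11 >= 10 on the fifth day.
cases = pd.Series([0, 2, 3, 0, 6, 1],
                  index=pd.date_range("2020-03-01", periods=6))
print(local_outbreak_date(cases))  # 2020-03-05
```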
COVID-19 testing

Official guidelines determine who qualifies for a COVID-19 test. During the study period, patients with flu-like symptoms were tested if they had been in contact with a person diagnosed with COVID-19 or had been in a high-risk area. This general rule applied in all federal states and remained almost unchanged during the sample period. After the virus had spread all over Europe, the authorities dropped the high-risk criterion on March 24.

Estimation method

Because it is impossible to observe the scenarios with and without SD policies simultaneously, one cannot estimate the policies' effects by directly comparing outcomes between both states. Instead, we need to find a way to approximate the latter, counterfactual scenario. To that end, we present an extended event-study model [26][27][28]. While this section briefly introduces the event-study model, the S.1 Section in S1 File discusses in more detail how this model identifies the effects of SD policies.

Model

We estimate the following multivariate event-study regression with ordinary least squares (OLS):

$$
Y_{it} = \sum_{k} \alpha_k \,\mathbf{1}[et_{it} = k] \;+\; \sum_{s \,\ge\, \text{March 12}} \beta_s \,\mathbf{1}[t = s] \;+\; \varepsilon_{it}, \qquad (1)
$$

where Y_it refers to the outcome of interest (e.g., confirmed cases). Moreover, et_it denotes the "epidemic time" in district i at date t (i.e., the number of days since the local outbreak of COVID-19). The last term in regression (1), ε_it, represents the error term.

Interpretation of estimated parameters

While the α parameters account for how the outcome would have developed without the SD policies, the β coefficients capture the policies' effects (henceforth, SD effects). To see why β captures the SD effects, note that the outcome with SD policies in district i at t is Y_it. If the epidemic time in district i is et_it = k, the predicted outcome without SD is Ŷ_it = α̂_k. Consequently, β̂_t captures the gap between the realized outcome and this no-SD prediction, Y_it − Ŷ_it. We can estimate α̂ and β̂ separately because, conditional on the calendar date, there is variation in the districts' epidemic times.

Further aspects

Two aspects of our approach are essential to note. First, the identifying assumption (known as "parallel trends") is that the expected outcome without SD policies would have been E_i[Ŷ_it] for t ≥ March 12. Intuitively, this assumption implies that, without SD policies, the outcomes in districts with outbreaks after the intervention would have developed similarly as in districts with outbreaks before the intervention. We can conduct plausibility checks. Before the implementation of the policies, the estimated SD effects β̂_t should not differ from zero. Furthermore, prior to the intervention, the outcomes in districts with earlier and later outbreaks should be similar conditional on epidemic time. Both checks suggest that the assumptions hold (see Fig 3 and S2 Table in S1 File). Second, the further away date t moves from the implementation date, March 12, the fewer no-SD observations we can use to estimate the counterfactual E_i[Ŷ_it] and, in consequence, the treatment effect β_t. To see this, note that, at the date of the policy intervention, the districts are at epidemic times between a lower threshold k̲ and an upper threshold k̄. Then, Δ days after the intervention, the districts have progressed to epidemic times between k̲ + Δ and k̄ + Δ. Loosely speaking, we estimate the treatment effect by comparing the districts' outcomes with SD to no-SD outcomes from before the intervention, but with the same epidemic time above k̲ + Δ (and below k̄). Hence, the larger Δ, the smaller is the number of no-SD observations that we can use. Until April 2, there is, on average, at least one no-SD control observation per district i. Therefore, our analysis focuses on the policy effects up to this date.
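To make the mechanics of regression (1) concrete, the following is a minimal sketch of how such an extended event-study model could be estimated with district-clustered standard errors. All names, the synthetic data, and the layout are our own illustration, not the authors' code:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the district-date panel (the real data come
# from the Robert Koch Institute); 40 districts, staggered outbreaks.
rng = np.random.default_rng(0)
dates = pd.date_range("2020-02-20", "2020-04-02")
outbreak = {d: dates[rng.integers(0, 20)] for d in range(40)}
df = pd.DataFrame([(d, t) for d in outbreak for t in dates],
                  columns=["district", "date"])
df["et"] = (df["date"] - df["district"].map(outbreak)).dt.days
df = df[df["et"] >= 0].copy()                 # keep post-outbreak rows
df["y"] = 0.2 * df["et"] + rng.normal(0, 0.1, len(df))  # placeholder outcome

# One dummy per post-intervention calendar date; every earlier date is
# pooled into the reference level "pre", so its SD effect is zero.
df["post_date"] = df["date"].dt.strftime("%Y-%m-%d").where(
    df["date"] >= "2020-03-12", "pre")

# alpha_k (epidemic-time dummies) traces the counterfactual no-SD path;
# beta_t (post-date dummies) gives the date-specific SD effects.
fit = smf.ols("y ~ 0 + C(et) + C(post_date, Treatment('pre'))",
              data=df).fit(cov_type="cluster",
                           cov_kwds={"groups": df["district"]})
print(fit.params.filter(like="post_date").head())
```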
For interested readers, we nevertheless provide estimates from an extended analysis until the policies' relaxation on April 19 (see S1 File).

Mobility

Did citizens limit their human-to-human interactions due to the SD policies? One approach to studying this question is to examine whether the policies changed mobility [15]. The rationale is simple: Individuals who travel less or stay at home cut back the personal contacts outside their household.

Measuring mobility. When cell phone users move, their phones switch cell towers to ensure connectivity. From these switches, providers can determine the number of trips starting or ending in a given geographic zone [29]. We obtained data on the number of trips at the district level from Teralytics, a business partner of Telefónica. Specifically, for each district i, N_it denotes the number of trips on date t in March or April 2020, and N̄_i(d) is the average number of trips on weekday d ∈ {Monday, ..., Sunday} in March 2019. Our mobility measure on date t in district i is:

$$
\Delta N_{it} = \frac{N_{it} - \bar{N}_i(d_t)}{\bar{N}_i(d_t)}, \qquad (2)
$$

where d_t is the weekday corresponding to date t. Hence, the measure adjusts for weekday-specific mobility patterns. Moreover, it has a simple interpretation: ΔN_it measures mobility relative to the number of movements in 2019.

Results. To study the effects of the SD policies on mobility, we use measure (2) as the outcome of model (1). Fig 3 presents our results graphically. Fig 3A plots the estimated mobility behavior without policies (blue line) and the actually realized behavior with SD policies (red line). For each date t, the SD effect corresponds to the vertical difference between these two lines. Fig 3B shows this effect measured in percentage points. In both figures (and all following figures), the dashed lines represent 95% confidence intervals based on district-level clustered standard errors. Three observations stand out. First, before the start of the second phase on March 12, individuals hardly changed their behavior relative to the baseline year 2019 (see red line in Fig 3A). This suggests that the cancellation of large events around March 9 did not affect mobility. Second, from March 12 on, citizens became less and less mobile. Shortly after Merkel's appeal, they traveled slightly less. Mobility decreased more sharply and persistently, however, after the school and business closures on March 16. From March 16 to April 2, individuals traveled, on average, 30% less than in 2019. This reduction is six times larger than the estimated change without SD policies (-3.9%). Third, the effects of SD policies on mobility are large over the entire second phase, although they decrease over time (see Fig 3B). In sum, Fig 3 suggests that the SD policies reduced mobility considerably and, presumably, also social contacts.

Further evidence. In the S1 File, we provide additional descriptive evidence that Germans became less mobile after the implementation of SD policies. For example, they reduced their trips to workplaces by more than 30% and used public transportation about 50% less (see S14 Fig in S1 File).
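As a concrete illustration of measure (2), a minimal sketch with made-up numbers (the data structures are our own, not Teralytics' format):

```python
def mobility_change(trips_2020, baseline_2019_by_weekday, weekday):
    """Mobility measure of Eq. (2): trips on a 2020 date relative to the
    March-2019 average for the same weekday in the same district."""
    baseline = baseline_2019_by_weekday[weekday]
    return (trips_2020 - baseline) / baseline

# 70,000 trips on a Tuesday against a 2019 Tuesday average of 100,000:
print(mobility_change(70_000, {"Tue": 100_000}, "Tue"))  # -0.3, i.e. -30%
```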
Cases and fatalities

Next, we explore whether SD policies effectively constrained the spread of COVID-19. Again, we focus on the period until April 2. S1 and S2 Figs in S1 File provide our estimates for the extended period until April 19.

Measuring COVID-19 cases and fatalities. The district-level health offices are legally obliged to report confirmed COVID-19 cases and fatalities to the federal Robert Koch Institute, which collates and publishes these data daily [30,31]. We use the data set provided on April 30, 2020. S5 and S6 Figs in S1 File show descriptive statistics. The data quality is comparatively high. First, the share of undetected cases is expected to be lower than in many other countries [5]. Second, all COVID-19 cases and fatalities are laboratory-confirmed. Third, all health offices apply the same testing and reporting criteria. Fourth, the data contain information on the day of the first symptoms for most cases and fatalities. For asymptomatic cases, the day of the first symptoms is set equal to the registration date. Additionally, we gathered state-level data on the numbers of conducted COVID-19 tests for robustness checks.

Results for confirmed COVID-19 cases. To study the SD effects on COVID-19 cases, we use the inverse hyperbolic sine (IHS) of the cumulative number of confirmed cases in each district as the outcome in model (1). To simplify the interpretation, Fig 4 presents our estimation results re-transformed to cases (rather than IHS values). Fig 4A displays how the confirmed cases per district truly evolved with SD policies (red line) and, according to our estimations, would have evolved without SD policies (blue line). We present the results on a log scale. Importantly, our analysis is based on the day of the first symptoms. Therefore, if the blue line lies above the red one, our estimates imply that the number of individuals suffering from first symptoms at date t would have been higher without SD policies. Fig 4B depicts the corresponding effects of the SD policies on a linear scale. To make the timing of the policy effects more easily visible, the figure zooms in on the period until April 2. As the mean incubation period is 5-6 days [32], we do not expect to find significant policy effects before March 16. The key insights from Fig 4A and 4B are as follows: First, before the closure of schools, the case numbers with and without SD policies match closely (see Fig 4A). This finding suggests that our identifying assumption holds. Second, the growth rate of actual cases (red line) starts to diminish a few days after the start of the nationwide policy response, while counterfactual cases (blue line) continue to grow at a similar rate as before. Specifically, the first significant (yet small) SD effects appear on March 18, six days after Merkel's appeal (see Fig 4B). Given the mean incubation period, the timing of the effects is hence in line with the policies' implementation dates. Third, on April 2, our point estimate indicates that the SD policies reduced COVID-19 cases by about 84% or 846 cases per district. Converted to the national level, this estimate indicates that the SD policies prevented 499.3 thousand cases (95% CI: 389.4K-634.1K). The extended analysis suggests that the effects would have continued to grow strongly over time. Fourth, we can also interpret our results in terms of the reproduction number R, calculated according to the official methodology of the Robert Koch Institute [33]. After the policies' introduction, R quickly decreased from above 2 to below 1 (see S12 Fig in S1 File). Our estimates suggest that, without SD policies, it would have stayed above 2 until April 2. In summary, the analysis implies that the SD policies effectively contained COVID-19.
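For reference, the IHS transformation used for the outcomes, and its back-transformation to counts, are a one-liner each. A small sketch (the paper's exact re-transformation of regression estimates may involve further steps we do not show):

```python
import numpy as np

def ihs(x):
    """Inverse hyperbolic sine, ln(x + sqrt(x^2 + 1)): behaves like
    ln(2x) for large x but, unlike the logarithm, is defined at zero,
    which matters for district-days without confirmed cases."""
    return np.arcsinh(x)

def ihs_inverse(y):
    """Back-transform fitted IHS values to counts."""
    return np.sinh(y)

print(ihs(0.0))                 # 0.0
print(ihs_inverse(ihs(846.0)))  # 846.0 (round trip)
```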
Results for fatalities. Next, we estimate model (1) with the IHS of the cumulative number of fatalities as our outcome. Recall that each fatality is reported together with the day of the patient's disease onset. Hence, we study the effects of the SD policies on the number of (eventually) lethal infections that started on date t. Fig 4C and 4D present our estimation results for fatalities. The abscissa starts with February 26, the first day with ten or more eventually lethal infections. The results are in line with those for confirmed cases: The fatalities in the SD and no-SD scenarios initially follow the same growth path (see blue and red lines in Fig 4C). A few days after the implementation of the policies, however, the scenarios diverge: While actual growth in fatalities slows down sharply (red line), the number of counterfactual fatalities continues to grow at a similar rate (blue line). Specifically, the SD effects are significant from March 21 on and increase strongly over time. We estimate that during the period March 11 to April 2, the SD policies decreased fatalities by 66% or 8.8 per district. Transformed to the national level, this estimate suggests that the policies reduced lethal cases with first symptoms until April 2 by 5.4 thousand (95% CI: 3.0K-8.7K). Again, the extended analysis hints at steadily growing effects over time.

Heterogeneity analyses

Some groups are at higher risk of severe COVID-19 progression [32,34]. International data show that hospitalization rates increase above 60 years of age. Furthermore, men seem to be at higher risk than women. Therefore, in the next step, we investigate how the SD effects differ (a) across age groups, (b) by gender, and (c) between urban and rural districts. Fig 5 shows the subgroup-specific SD effects on confirmed cases. The estimates rely on sample splits and indicate the percentages of cases avoided due to the policies until April 2. Three results emerge. First and foremost, we find large effects in all groups. The point estimates range from 76% to 88%. Second, Fig 5 presents evidence for age-group heterogeneity. The policies prevented 88% (182.5K) of the cases among individuals below 35 that would have otherwise occurred, 86% (257.5K) of the cases in the medium-age group, and 76% (76.9K) of the cases among persons of age 60 and above. The difference between the relative effects in the last and the two former groups is significant at the 5% level. The finding is in line with the observation that, after the policy intervention, the share of persons above 60 among all infected persons increased from about 20% (March 11) to 27% (April 2). This age heterogeneity seems plausible: Policies such as school and business closures likely have stronger implications for the working-age population and for children and their parents than for retirees. Third, we also find somewhat larger relative effects for urban districts than for rural districts and for men than for women. While the former difference is significant at the 10% level, the gender difference is not. S3 Fig in S1 File shows the same pattern of subgroup-specific SD effects on fatalities. However, due to lower numbers, we cannot study some groups and the estimation uncertainty is higher. S4 Fig in S1 File extends the analysis until April 19. Again, the patterns are similar.

Robustness analyses

We probe the robustness of our results in various dimensions. First, we explore different definitions of a local outbreak. Second, we run additional regressions in which we control for the number of conducted COVID-19 tests per day. Third, we repeat our analysis using alternative outcome definitions.
For example, we drop districts with zero cases or fatalities and estimate models in logarithms. We also apply the ln(1 + x) transformation to our outcomes. Fourth, we cluster the standard errors at the state level. S10 and S11 Figs in S1 File report the corresponding results. All conclusions remain substantially unchanged.

Conclusion

This paper provides evidence on the effects of the German social distancing (SD) policies on (a) individual behavior, (b) confirmed COVID-19 cases, and (c) fatalities. We show that, first, the SD policies affected individuals' mobility. Second, we find that the policies sharply slowed down the spread of the epidemic: According to our estimates, they precluded about 84% (499.3K) of the COVID-19 cases and about 66% (5.4K) of the related fatalities that would have occurred without SD policies within three weeks (until April 2). While large effects emerged across the entire population, the relative effects were smaller for the oldest age group.

From a broader perspective, we have taken a step towards quantifying the effects of SD policies. At the same time, we believe that we still need a more comprehensive evaluation of such policies. First, the evidence on confirmed cases may not capture the entire impact of the policies on the epidemic spread. One reason is that, although we use high-quality data, not all infections are detected. If the data improve over time, our analysis can be repeated. Second, we estimate the number of confirmed COVID-19 cases and fatalities avoided within three weeks after the policies' introduction. In the medium or long run, the picture might change in many ways. On the one hand, some of the avoided infections may have only been shifted to a later time. On the other hand, medical capacities may have been exceeded without SD policies, resulting in even higher numbers of fatalities. Third, our analysis identifies the joint effects of all elements of the policy response. Ideally, future studies will find ways to disentangle the effects of appeals for voluntary SD, school closures, and contact bans. To shed light on these and other issues, researchers could adapt the event-study approach to new data and settings, including the policies' removals.
The excess electron in polymer nanocomposites

We have used ab initio molecular dynamics and density-functional theory (DFT) calculations at the B3LYP/6-31G** level of theory to evaluate the energy and localisation of excess electrons at a number of representative interfaces of polymer nanocomposites. These modelled interfaces are made by combining liquid water and amorphous slabs of polyethylene and silica. The walls of the amorphous silica slabs are built with two surface chemistries: Q⁴ (fully dehydroxylated) and Q³/Q⁴ (partially hydroxylated), with a silanol content between 1.62 and 6.86 groups per nm². Our results indicate that in silica/polyethylene systems an excess electron would sit at the interface with energies between −1.75 eV with no hydroxylation and −0.99 eV with the highest silanol content. However, in the presence of a free water film, the chemistry of the silica surface has a negligible influence on the behaviour of the excess electron. The electron sits preferentially at the water/vapour interface with an energy of minus a few tenths of an eV. We conclude that the moisture content in a wet polymer nanocomposite has a profound influence on the electron trapping behaviour as it produces much lower trapping energies and a higher excess-electron mobility compared to the dry material.

I. Introduction

Electron transfer underpins technologies such as photovoltaics, 1,2 organic thin film transistors, light-emitting diodes, 3 photocatalysis, 4 DNA-based molecular electronics, 5,6 as well as energy transfer in nature 7 including radiation damage and repair. 8,9 The injection of excess electrons plays a major role in the performance of electrical insulation, with significant economic consequences. 10 Polymers are commonly employed as insulators in electric and electronic devices such as capacitors, [11][12][13] transistors, 14,15 fuel cell membranes, 16,17 and in high-voltage cables. 18 When polymers are used for high-voltage cables, their insulating properties such as the electrical conductivity or breakdown strength degrade over time due to exposure to heat, light, moisture, surfactants, mechanical stress, and high electric fields. 19,20 This degradation process is thought to be related to the trapping of charge carriers, the so-called space charge. 19 It has been suggested that the insulating properties of polymers can be improved by the addition of nanoparticles to form nanocomposites: a homogeneously dispersed blend of a dielectric material with a filler whose particles have radii of up to a few tens of nanometres. 21 Polymer nanocomposites made by blending a polymer with oxide nanoparticles have been reported to have higher effective permittivities 22,23 and enhanced electrical breakdown strength 24 compared with the base polymer. It is thought this enhancement is a consequence of a reduction of injected (excess) electron mobility caused by trapping at the new interfaces created by the presence of nanoparticle surfaces, in addition to those already present in the base polymer (through nanovoids 25 and chemical defects and impurities 26). In polyethylene, the addition of nanoparticles has been shown to suppress space charge. 27 A common choice of nanoparticle additive is silicon oxide (SiO₂). Thermally grown oxides on silicon and silicon carbide supports exhibit high electric breakdown field strengths of up to 9.2 MV cm⁻¹. 28
Photon-stimulated tunnelling (PST) of electrons at the Si/SiO₂ interface shows the presence of very deep electron traps of 2.77 ± 0.05 eV (below the conduction band). 29 Electron trapping is known to have a dramatic effect on the performance and reliability of electronic devices employing silicon dioxide as gate insulators and in charge-trap flash memory devices. 30,31 While it is clear that the addition of silica nanoparticles changes the electrical characteristics of polymer nanocomposites, there is some doubt as to whether these effects are due to the presence of nanoparticles (with or without a surface coating) or due to water adsorption or entrained solvent associated with the creation of the nanocomposite. 32 Therefore, the goal of this work is to study the properties of excess electrons at a number of interfaces relevant to nanocomposites by combining amorphous polyethylene, water, and amorphous silica phases using density functional theory (DFT). Our aim is to understand at a fundamental level the nanoscopic processes underlying the experimental data discussed above, in particular the possible role of water. This DFT work significantly extends our previous studies of pure polyethylene 25,26,[33][34][35]41 to encompass mixtures. In the past, DFT has been used to study the degree of localisation of excess electrons at polyethylene interfaces 36,37 and in bulk 38,39 by assuming that the excess electron can be described by Kohn-Sham orbitals. However, it has been unclear to what extent the use of these orbitals could be justified 40 to model excess electrons, especially when employing hybrid functionals, as Koopmans' theorem is only valid for closed-shell Hartree-Fock theory. Nevertheless, we have recently used the all-electron CRYSTAL17 59 code at the B3LYP level of theory to compare the representation of an excess electron in polyethylene in bulk and at its vacuum interfaces by the lowest-unoccupied molecular orbital (LUMO) of an N-electron system and the highest-occupied molecular orbital (HOMO) of an N + 1 system (the +1 electron balanced by a uniform background charge). 41 These two representations have also been compared with the single-electron Lanczos behaviour 34 employing an excess electron-polyethylene pseudopotential fitted 34 to experimental data for the bottom of the conduction band of alkanes measured with respect to vacuum. Although both orbitals localise excess electrons at similar positions in the interface and have similar localisation lengths, the excess electron energy predicted by the LUMO(N), corrected to a zero at the vacuum level, agrees best with the Lanczos results for the larger vacuum gaps 41 and hence will be used in this work.

The surface chemistry of nanoparticles used to create nanocomposites affects the dispersion of nanoparticles in a polymer matrix as well as the measured electrical properties. Thus, we vary the silica surface chemistry in this study to make it more or less hydrophilic. A silica surface can be characterised by the coordination number Qⁿ of the surface silicon atoms. We create both Q³ and Q⁴ amorphous surfaces. On a Q⁴ surface all dangling silicon atoms are bridged with four oxygen neighbours; similarly, on a Q³ surface, silicon atoms are connected with three oxygen neighbours and a hydroxyl group -OH (silanol). The Q⁴ surface is obtained experimentally by calcination of Q³ and Q² surfaces at 900 K 42 and is hydrophobic with a heat of immersion of 22 mN m⁻¹, whereas the silanated surfaces are hydrophilic. 43
We also consider the effect of adding a free water film to the silica surface and the role of silanols formed spontaneously in the ab initio simulations.

The manuscript is organised as follows. In Section II, we briefly describe the methods employed to build the bulk and interfacial systems and the parameterisation of the DFT calculations. Given the large amount of detail, we present the full methodology in the ESI.† In Section III, we present the degrees of localisation and energies of excess electrons in bulk systems and for the interfacial systems of amorphous polyethylene and water, amorphous polyethylene and silica, and water and amorphous silica. In Section IV, we compare the excess electron properties in bulk and at interfaces, and we also predict the valence and conduction band offsets at the interfaces. Finally, Section V concludes our main findings and looks at the implications for future work.

II. Simulation methods

In this section we describe the main procedures used to prepare the systems of amorphous polyethylene, amorphous silica, and liquid water, and their interfaces, as well as the ab initio calculations to obtain their excess-electron properties. Full details are given in Section SI of the ESI.†

Slabs of amorphous silica are prepared in a two-stage process. In the first stage, we use the classical molecular package LAMMPS 44 to generate a bulk system made with 3 × 3 × 2 unit cells in the β-cristobalite lattice. We then follow a melt-quench process to yield an amorphous cube with a density of 2.18 g cm⁻³, close to the experimental value of 2.20 g cm⁻³. 45 The resulting configuration is then imported into the Materials Studio package, 46 where a vacuum gap of 2 nm is imposed in the z-direction to create two surfaces. We then bridge a small number of Si atoms to four O atoms on the surface and in the inner regions of the slab. We next optimise the geometry with the COMPASS2 47 force field available in the Forcite Module, then with the DFTB+ Module, and finally with the ab initio package CP2k 48 in its QUICKSTEP 49 implementation. CP2k uses GTH pseudopotentials 50,51 and double-ζ basis sets with polarization functions (DZVP) as well as the Grimme scheme 52 to include long-range forces. This bridging and optimisation is repeated until the surface is stable, at which point we run an ab initio molecular dynamics simulation with CP2k with a timestep of 0.25 × 10⁻³ ps. The final Q⁴ (no dangling silanol Si-OH groups) surface has a cross section of 1.578 nm × 1.572 nm in the y and z directions and a length of 2.475 nm in the x-direction. We believe that this length is sufficiently long to reproduce bulk-like conditions in the inner layers of this silica slab, as Goumans et al. 53 showed that a slab of quartz-silica with a thickness of 1.125 nm is sufficiently large to represent bulk-like behaviour in the innermost layers. We use this Q⁴ surface to create others with Q³/Q⁴ chemistries, in which a few Si-O bonds are broken in the Q⁴ surface and replaced with silanol Si-OH groups at concentrations of up to 6.86 groups per nm². 54 Once the silanol groups are built, the slabs are optimised with CP2k and run with ab initio molecular dynamics again.

The amorphous polyethylene systems are also prepared with the Materials Studio software, using four chains of 40 carbons and the COMPASS2 force field. The amorphous solid has the same cross section as the amorphous silica slab and a depth of 1.76 nm.
Chains are not allowed to cross the boundaries in the x direction to avoid splitting the chains when a vacuum gap of 2.0 nm is imposed (see later), though periodic boundary conditions are still applied in all three directions. These configurations are then relaxed with CP2k with the local-density approximation (LDA), and the long-range forces are described by the Dion-Rydberg-Schröder-Langreth-Lundqvist (DRSLL) nonlocal van der Waals density functional. 55 After the structure is optimised, we run a short ab initio molecular dynamics simulation of 1 ps to further relax the structure.

Finally, ensembles of liquid water at 300 K are first prepared using the TIP4P/2005 model, 56 which gives excellent density predictions at 278 K. The system is composed of 150 molecules, occupying a parallelepiped region with the same cross section as the silica and polyethylene samples. We next run an ab initio molecular dynamics simulation with CP2k for 25 ps with a timestep of 0.25 × 10⁻³ ps to relax the bond lengths with the PBE 57 functional revised for small molecules (revPBE 58), in the bulk and, with a vacuum gap of 2.0 nm, to create a water/vapour interface. We have used the revPBE functional because, from simulations of a cubic system of 343 molecules at a constant temperature of 300 K and pressure of 1 bar, we have found that this revised form improves significantly the agreement with the experimental density with respect to the original parameterisation PBE: the disagreement is only 3.9% using revPBE, whereas it deviates by up to 13.4% with PBE. Note though that the high computational cost of these ab initio molecular dynamics calculations restricts the length of time for which we can simulate these water systems. One ps of simulated time with CP2k requires nearly 10 hours on 64 processors for the bulk phase and 13 hours with the 2.0 nm vacuum gap.

Once the pure systems have been prepared, we build interfaces of water/polyethylene, silica/polyethylene, and silica/water/silica. The first is prepared with no vacuum using the revPBE functional and the Grimme scheme. The second is prepared with and without a 2.0 nm vacuum gap, which requires the LDA/DRSLL setup for the former and PBEsol 59 /Grimme for the latter. The use of LDA is justified because, with PBE-type functionals, the polymer chains evaporate after a few ps of simulation. The third type of interface is prepared with the same vacuum gap and employs the revPBE functional. Once the computational setup is ready, the geometry of each two-component system is optimised with CP2k to avoid undesirable orbital overlap between the atoms at each surface. Thanks to this optimisation, the simulations require a shorter time to equilibrate. The systems are run in CP2k for 7.5 ps for polyethylene/silica, 15 ps for polyethylene/water, and 15 ps for water/silica/water to obtain a sufficient number of configurations, with a timestep of 0.25 × 10⁻³ ps.

The resulting configurations from the ab initio molecular dynamics runs are then used by the all-electron CRYSTAL17 DFT code 60 to obtain their excess-electron properties. The DFT calculations are carried out using the hybrid B3LYP exchange-correlation functional, which, as shown in our previous work, 41 is able to reproduce the experimental band gap of crystalline polyethylene and the energy and degree of spatial localisation of excess electrons in amorphous polyethylene. A standard all-electron 6-31G** 61,62 basis set is used to represent the local atomic orbitals in terms of primitive Cartesian Gaussian functions.
Polarization functions (p-functions for hydrogens and d-functions for carbons, oxygens, and silicons) are used to ensure that the orbitals can distort from their original atomic symmetry and adapt to the molecular surroundings, leading to a better prediction of the total energy of systems with high hydrogen content. 63 The size of the systems allows us to restrict the reciprocal-space integrations to the Γ-point of the Brillouin zone, and ground-state energy convergence is enforced at 1 × 10⁻⁶ Ha. We have used the default parameters in CRYSTAL17 to calculate the two-electron coulombic and exchange integrals. Periodic boundary conditions are imposed in the x, y, z directions in systems with no vacuum using the keyword CRYSTAL and in the x and y directions in systems with a 2.0 nm vacuum gap using the keyword SLAB. We refer hereinafter to the first type of simulations as '3D-periodic' and to the second as '2D-slab'. We use this second type of calculation in systems where surface dipoles are present, such as liquid water or silica slabs with hydroxylated walls, to obtain a well-defined vacuum level, where the zero of the electrostatic potential V_z is defined by the CRYSTAL code in such a way that V_z(+∞) = −V_z(−∞). See, for example, the case of a slab of amorphous silica with a silanol surface concentration of 1.61 nm⁻² in Fig. S1 in the ESI.† In this case, the permanent dipole in the silica slab divides the space into two parts, with higher and lower potential, having two equally valid vacuum states. An electron extracted from the material will choose the side with a positive potential to have a negative (minimum) potential energy in the vacuum. Hence, we correct the energies of the LUMO using

E_LUMO,corrected = E_LUMO,uncorrected + e⟨V_z⟩_vacuum,   (1)

where E_LUMO,corrected and E_LUMO,uncorrected are the corresponding corrected and uncorrected energies of this orbital, e is the electron's charge, and ⟨V_z⟩_vacuum is the positive average of the electrostatic potential. For systems with no dipoles, such as slabs of amorphous polyethylene or silica with no silanols, the LUMO energies are corrected with the following expression:

E_LUMO,corrected = E_LUMO,uncorrected + (E_band 1,bulk-like − E_band 1,bulk),   (2)

where the difference between E_band 1,bulk-like and E_band 1,bulk corresponds to the core-level shift of the energies of band 1 between the bulk case and the bulk-like region from a second simulation with a 2.0 nm vacuum gap and a Q⁴ surface. Eqn (2) is also applied to obtain excess electron energies for amorphous polyethylene slabs.

III. 1. Pure systems of silica and water

Before studying the interfacial systems, we first analyse the behaviour of excess electrons in bulk water and silica, as represented by the LUMO of our N-electron system. This analysis complements our recent work on bulk amorphous polyethylene and its vacuum interfaces. 41 In bulk polyethylene an excess electron localises in naturally occurring nanovoids 25 with radii less than 0.4 nm. In the presence of large empty gaps between planar amorphous slabs, the excess electron localises on the unfilled side of the polyethylene surface, its density peaking at 0.2 nm into the vacuum with a localisation length of 0.34 nm and an energy of around −0.2 eV, in good agreement with single-electron methods. For pure silica, a 3D-periodic calculation with CRYSTAL17 predicts that 80% of the LUMO's charge sits in a small cavity between atomic rings in the bulk phase, as shown in Fig. 1. We obtain a corrected DFT excess electron energy of −1.54 eV for the 3D-periodic and −1.43 eV for the 2D-slab calculations (see Table 1).
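The two corrections in eqns (1) and (2) amount to simple energy shifts; a minimal sketch in Python, with sign conventions reflecting our reading of the text rather than the CRYSTAL17 implementation itself:

```python
def lumo_vacuum_corrected(e_lumo, v_z_vacuum_avg, e=1.0):
    """Eq. (1)-style correction for slabs with a surface dipole: shift
    the LUMO energy so that the vacuum level on the positive-potential
    side defines the zero (atomic units, electron charge e = 1).
    The sign convention here is our assumption, not taken from the code."""
    return e_lumo + e * v_z_vacuum_avg

def lumo_core_level_corrected(e_lumo, e_band1_bulklike, e_band1_bulk):
    """Eq. (2)-style correction for dipole-free systems: align the bulk
    calculation to the vacuum scale through the core-level (band 1)
    shift between the slab's bulk-like region and the 3D-periodic bulk."""
    return e_lumo + (e_band1_bulklike - e_band1_bulk)
```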
This strong localisation in the bulk of amorphous silica has also been observed in other DFT studies using plane waves, 64 which predict a comparable excess electron localisation energy of −1.25 eV. Our 2D-slab calculations allow us to investigate the effect of adding silanol groups on the silica walls. The results in Table 1 indicate that the LUMO energy increases with increasing surface hydroxylation, which suggests that the electron prefers to move away from the hydroxyl groups. This preference has also been observed in polycrystalline MgO, where hydroxyl groups form shallower and more diffuse electron traps than those created on the dehydroxylated surface. 65

Turning now to liquid water, Fig. 2 illustrates the distribution of 80% of the LUMO charge in the bulk phase with a 3D-periodic calculation and with vacuum in a 2D-slab simulation. In both cases, the electron with 80% of its charge is found between 0 and 25 ps in a region that encloses 46 molecules for the bulk and 42 molecules for the ensemble with a vapour (vacuum) interface. These two specific volumes are similar to those reported using a Lanczos algorithm, where the wavefunction overlapped around 37 water molecules within two 3D-periodic cubic simulation cells of 1.817 nm (200 molecules) and 2.464 nm (499 molecules). 66 In the presence of a water interface, the excess electron sits at the interface, as shown by the averaged profiles along the z-direction plotted in Fig. 3. Our simulations predict a diffusion coefficient for the excess electron of 1.16 × 10⁻⁹ m² s⁻¹ for the 3D-periodic system and an order of magnitude higher, at 10.68 × 10⁻⁹ m² s⁻¹, for the 2D-slab with vacuum (see Fig. S2 and S3 in the ESI†), which compare reasonably well with the experimental measurement of 4.90 × 10⁻⁹ m² s⁻¹ at 298 K for bulk water. 67

As previously discussed, the ground-state energy of the excess electron simulations is approximated as that of the LUMO(N). By averaging the energies of this orbital every 0.25 ps between 10.0 and 25.0 ps, we find an excess-electron energy of −0.12 (±0.30) eV in a 2D-slab simulation, with its centre of charge located at z = −4.09 nm from the centre of mass of the 150-molecule system. With 200 molecules, this energy increases slightly to −0.03 (±0.37) eV, but note the large standard deviation. Our predictions are in good agreement with those obtained with single-electron pseudopotential methods and experiments. For instance, Turi et al. 68 found that the excess electron localises in cavities of bulk water with an average energy of −0.23 eV, averaging over 500 configurations and 500 molecules. Their predictions and ours compare well with the experimental measurement of −0.12 eV (see Table 1). Note that one advantage of using CRYSTAL17 with respect to pseudopotential methods is that we can extract the energy of all electronic bands, including the HOMO. Doing so, we obtain an electronic band gap of 6.04 (±0.39) eV with 150 molecules and 5.53 (±0.59) eV with 200 molecules using 2D-slab water simulations, which underestimates the experiments by around 1.2 eV. This difference may arise from the presence of the interface or be reduced using more accurate techniques such as GW. Note that relying strictly on a pure generalised gradient approximation (GGA) produces worse predictions. 69 Of course, we can always tune the contribution of the Hartree-Fock exchange in B3LYP or other hybrid DFT functionals to obtain the experimental result.
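The diffusion coefficients quoted above can be estimated from the trajectory of the excess electron's centre of charge through the Einstein relation, MSD(t) ≈ 6Dt in three dimensions. A rough sketch under our own assumptions about the data layout (production work would average over time origins and fit only the linear regime):

```python
import numpy as np

def diffusion_coefficient(positions_nm, dt_ps):
    """Estimate D (m^2 s^-1) from the slope of the mean-squared
    displacement of the centre of charge, sampled every dt_ps."""
    positions_nm = np.asarray(positions_nm)    # shape (n_frames, 3), nm
    disp = positions_nm - positions_nm[0]
    msd = np.sum(disp**2, axis=1)              # nm^2
    t = dt_ps * np.arange(len(msd))            # ps
    slope = np.polyfit(t[1:], msd[1:], 1)[0]   # nm^2 / ps
    return slope / 6.0 * 1e-6                  # 1 nm^2/ps = 1e-6 m^2/s
```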
III. 2. Interfaces of amorphous polyethylene and water

Taking as a reference the behaviour of excess electrons in pure water, we now analyse the interfacial system made by combining water with a slab of amorphous polyethylene. Unfortunately, in this case we cannot calculate the energy of the LUMO from a 2D-slab calculation with a system made of water, polyethylene, and vacuum, as it requires running ab initio molecular dynamics with CP2k. This CP2k calculation requires in turn choosing an appropriate exchange-correlation functional, and here we find that any PBE functional will ultimately cause the polymer chains to evaporate in the presence of vacuum. The alternative is to use LDA, which we find severely overestimates the density of water at 300 K. We therefore study interfacial systems of amorphous polyethylene and water with revPBE, but without a vacuum phase, thus preventing the disintegration of the polyethylene phase. Fig. 4 shows that in these systems the electron prefers to sit in a water region near the interface with polyethylene, similar to the water/vacuum case. This result is consistent with the nature of polyethylene, a compound that repels injected negative charge carriers due to its negative electron affinity. Although we cannot provide LUMO energies, we believe that the impact of the polymer on the localisation and energy of the solvated electron should be small and, therefore, we can assume that its energy can be taken from the 2D-slab calculations of water and vacuum.

Furthermore, this two-component system offers us the possibility of studying the transport of the extra charge across the interfaces by calculating the valence (VBO) and conduction band (CBO) offsets. The former are determined using the method of layer-decomposed density of states proposed by Shi and Ramprasad, 70 since the traditional line-up method 71 is more appropriate for crystalline systems, as it assumes that the variation of the electrostatic potential is only affected by the changes of the outermost atomic layers assembled at the interface formed by two compounds. In contrast, in our work, the potential will also change due to the lack of periodic arrangement of the atomic positions in our amorphous and liquid systems. This aperiodicity obliges us to apply the method of layer-decomposed density of states for a number of configurations (obtained every 0.25 ps) to obtain robust statistical results. In each configuration, we calculate the density of states with CRYSTAL17 in 15 slices of atoms with a thickness of 0.26 nm, and from each slice spectrum we find the HOMO. We plot the HOMOs of each layer vs. its position perpendicular to the interface (e.g. z), and the valence band offset is taken as the maximum difference between the HOMOs in water and polyethylene (see Fig. S4 in the ESI†):

VBO = max(E_HOMO,a-PE − E_HOMO,water),   (3)

where E_HOMO,a-PE is the energy of the HOMO in a slice of the amorphous polyethylene slab and E_HOMO,water is the energy of the HOMO in a division of the water ensemble. Fig. 5(a) shows the valence band offset of an interfacial system of water and amorphous polyethylene vs. time between 0 and 12 ps. After an initial drop between 0 and 1 ps, the offset oscillates between 0.1 and 1.1 eV with an average value of 0.58 (±0.23) eV between 1.0 and 12.0 ps. In addition, this property is positive at all times, which indicates that holes will move from water to polyethylene.
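As a sketch of the layer-decomposed procedure behind eqn (3) (our reading of "maximum difference" between the per-slice HOMOs; values are illustrative only):

```python
import numpy as np

def valence_band_offset(homo_pe_slices_eV, homo_water_slices_eV):
    """Eq. (3)-style VBO for a single configuration: the maximum
    difference between per-slice HOMO energies of the polyethylene slab
    and of the water ensemble."""
    return max(homo_pe_slices_eV) - min(homo_water_slices_eV)

def mean_vbo(configurations):
    """Average eqn (3) over configurations sampled every 0.25 ps."""
    return np.mean([valence_band_offset(pe, w) for pe, w in configurations])

# One configuration with HOMO plateaus near -7.25 eV (PE), -7.9 eV (water):
print(valence_band_offset([-7.4, -7.25, -7.3], [-7.8, -7.6, -7.9]))  # 0.65
```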
The large fluctuations arise from the variation in the bulk water HOMO, −7.49 (±0.18) eV, rather than the bulk polymer, −7.25 (±0.06) eV (from 3D-periodic CRYSTAL17 simulations), as shown in Fig. 5(b). Clearly, the configurational fluctuations associated with the liquid water phase introduce large statistical fluctuations in the VBO, a phenomenon that will be present wherever we have a water/solid interface. Having determined the valence band offsets, we now calculate the conduction band offsets (CBO), which are defined as

CBO = VBO + E_g,a-PE − E_g,water,   (4)

where E_g,a-PE and E_g,water are the band gap energies of amorphous polyethylene (8.01 eV, taken from averaging a bulk simulation without ghost atoms between 1 and 12 ps) and water (taking the experimental value of 6.9 eV from ref. 74). Inserting these two energies in eqn (4), we find an average CBO of 1.69 (±0.23) eV between 1.0 and 12.0 ps. This result suggests that an extra electron will require 1.69 eV to move from the liquid to the polymer, the water acting as a deep trap with respect to the polymer (for the trap depth with respect to the vacuum, see below). This conclusion is consistent with the localisation picture seen in Fig. 4 and reflects the negative electron affinity of the polymer.

III. 3. Interfaces of amorphous polyethylene and silica

The isosurface at 80% of an excess electron charge represented by the LUMO from a 3D-periodic calculation with amorphous polyethylene and silica is illustrated for a Q⁴ surface in Fig. 6(a) and for a Q³/Q⁴ surface with a silanol concentration of 6.86 nm⁻² in Fig. 6(b), after 7.5 ps of a CP2k run with PBEsol. These two isosurfaces show that the addition of OH groups on the surface makes the excess electron migrate to the inner layers of the silica, which is demonstrated quantitatively in the average charge profile drawn in Fig. 6(c) for all five silanol concentrations. In contrast, when we impose a vacuum on both sides of the interfacial system, the excess electron sits at the silica/polyethylene interface for any finite silanol concentration; see the isosurfaces in Fig. 7(a) and (b) and the average charge profiles in Fig. 7(c), after 7.5 ps of a CP2k run with LDA to avoid polymer evaporation. In addition, we also compare the valence band offsets obtained from the 3D-periodic and 2D-slab Q⁴ calculations. We obtain 1.14 (±0.19) eV for the 3D-periodic case and 0.84 (±0.36) eV for the 2D-slab case. Using eqn (4) with the experimental band gap of silica (8.8 eV) and the DFT band gap of amorphous polyethylene (8.01 eV), the CBOs are 0.79 eV lower than the VBOs: 0.33 eV for the 3D-periodic case and 0.05 eV for the 2D-slab. The latter figure is consistent with the interfacial localisation seen in Fig. 7. For these 2D-slab cases, we plot in Fig. 8 the LUMO energies vs. silanol concentrations after correcting with respect to the vacuum level. Our results show that the energy of the excess electron increases with increasing concentration. We argue that this increase is caused by the increasing delocalisation of this orbital. The most negative energy is −1.75 eV with the Q⁴ surface, which is consistent with the value of −1.54 eV obtained when this slab is isolated with vacuum on both sides. When the silanol content increases, the LUMO position transits from the left interface to the right wall, with some charge penetrating into the silica slab.
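Plugging the reported averages into eqn (4) provides a quick consistency check (small residual differences reflect rounding of the quoted inputs):

$$
\text{CBO}_{\text{water/PE}} = 0.58 + 8.01 - 6.90 = 1.69\ \text{eV}, \qquad
\text{CBO}_{\text{silica/PE}} = \text{VBO} + 8.01 - 8.80 = \text{VBO} - 0.79\ \text{eV},
$$

which gives 0.35 eV ≈ 0.33 eV for the 3D-periodic case (VBO = 1.14 eV) and 0.05 eV for the 2D-slab case (VBO = 0.84 eV).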
III. 4. Interfaces of amorphous silica and water

We now study excess electrons in systems made with an amorphous silica slab sandwiched between two ensembles of 150 water molecules, as shown in Fig. 9, using the revPBE functional. These systems do not need to include polyethylene slabs, which would repel the extra charge due to the negative electron affinity of this polymer. This behaviour is shown at the water/polyethylene and silica/polyethylene interfaces, where the excess electron sits away from the polymer, which has little effect on its degree of localisation and energy. This water/silica/water system is run with CP2k and revPBE for 15 ps, during which the interfacial chemistry spontaneously produces four silanols on each silica wall, equivalent to a silanol surface concentration of 1.62 nm⁻². This concentration is slightly higher than the 1.37 nm⁻² reported on nanoporous silica surfaces modelled with NVT molecular dynamics for 30 ps using CP2k and the BLYP functional. 72 However, both values underestimate the experimental range between 4.2 and 5.7 nm⁻² collected in ref. 53 for a range of different treatments to rehydroxylate silica surfaces. This disagreement is expected given the different length and time scales in simulations and experiments. Once the system is well stabilised after 7 ps, the spatial localisation of the excess electron shows a very similar picture to the water/vacuum case, given that this particle again sits near the vacuum in the left liquid ensemble, as shown at several CP2k simulation times in Fig. 10(a). Moreover, this same behaviour is also seen in the LUMO energies of this system in Fig. 10(b), which show an increase as the water density slightly decreases and silanol groups are formed on both surfaces, with an average value of −0.27 (±0.53) eV between 4 and 15 ps. With a standard deviation nearly two times higher than the absolute value of the average, our DFT methodology predicts that the excess electron in this system oscillates around the same average energy as in the case of a vacuum/water system, leading us to conclude that, in the presence of water, the detailed interfacial chemistry of the silica surface does not influence the behaviour of the excess electron in nanocomposite materials.

IV. Conclusion

We have calculated the energy and degree of localisation of excess electrons at a number of interfaces made by combining amorphous polyethylene, silica, and liquid water, chosen to be representative of interfaces found in wet and dry polymer nanocomposites. For pure silica we find an excess electron would be strongly localised with an energy of around −1.5 eV. Hydroxylating the silica surface and adding a polyethylene interface produces a strong dependence of the excess electron behaviour on the silanol concentration. An excess electron sits at the interface with an energy of between −1.75 eV for the Q⁴ surface chemistry and −0.99 eV for the Q³/Q⁴, becoming less localised with increasing silanol density. However, in the presence of a water film, the detailed interfacial chemistry of the silica surface becomes irrelevant and the excess electron sits preferentially at the water/vapour interface with an energy of minus a few tenths of an eV. We conclude that the moisture content of a polymer nanocomposite has a profound influence on the electron trapping behaviour, with a wet interfacial material producing much lower trapping energies and a high excess electron mobility (from the Einstein relation). We would then expect that a wet nanocomposite with a percolating water film would show increased electrical conductivity, independent of surface chemistry, which is consistent with our experimental data for low-density polyethylene/octyl-nanosilica composites. 32
Conflicts of interest

There are no conflicts to declare.
Assessing the effect of an educational intervention on early childhood development among Mexican preschool children in the state of Oaxaca: a study protocol of a cluster randomized stepped-wedge trial

Background

Early childhood development (ECD) is essential in human capacity building and a critical element in the intergenerational process of human development. In some countries, social programs targeted at improving ECD have proven to be successful. Oaxaca is one of the States with the greatest social inequities in Mexico. Therefore, children in Oaxaca are at a high risk of suboptimal ECD. In 2014, the non-governmental organization (NGO) Un Kilo de Ayuda started to implement the Neurological and Psycho-affective Early Childhood Development Program in eighty marginalized communities of Oaxaca. In this article, we present the impact evaluation design to estimate the effect of this program on ECD.

Methods

We will use a cluster randomized stepped-wedge design with an allocation ratio of 1:1. Communities will be randomly assigned to each study group: four groups of twenty communities each. We expect that children from intervened communities will show better ECD outcomes.

Discussion

This study is one of the few rigorous assessments of the effect of an ECD program on the neurodevelopment of Mexican children recruited in their first 3 years of life from communities of high social vulnerability. Our study design is recommended when the way in which outcomes are measured and assessed depends on age, self-selection is present, and assignment is performed at an aggregate level. Implementation research will be conducted prior to study launch, and quality control measures will be in place to maximize the fidelity of study design implementation.

Trial registration

ClinicalTrials.gov NCT04210362

Background {6a}

Early childhood development (ECD) is essential in human capacity building and a critical element in the intergenerational process of human development [1]. ECD is multidimensional and influenced by many factors such as genetics, biological status (health and nutrition), the immediate environment (caregiving components), and community characteristics [2]. Sensitive and responsive nurturing care, along with education and good nutritional health, can improve ECD; however, the most sensitive window of opportunity for advancing ECD, including its social, emotional, and cognitive aspects, is narrow, because the greatest developmental benefits and returns on investment are achieved when nurturing care is offered during gestation and the first 3 years of life [3,4]. Suboptimal ECD affects not only the child, but also society's social and economic development [5]. Failure to provide nurturing care in early life to the most vulnerable will lead to high subsequent costs due to excess mortality and morbidity, as well as reduced human capital productivity, perpetuating the vicious cycle that leads to ever-increasing social and economic inequities [6].

Studies conducted across different countries have shown that social protection policies and programs have been successful at improving ECD. These interventions include childhood care and education, promotion of maternal mental health and wellbeing, and conditional cash transfer programs [7,8]. In Latin America, "Chile Crece Contigo" is an example of a successful multisectoral evidence-based large-scale program.
Funded by the Chilean government and emerging from a national consensus in which civil society participated, the program offers high-quality ECD information for families and healthcare providers among its various health and education benefits [8]. Another example of a large-scale program is "Cuna Mas" in Peru, which consists of home visiting interventions aimed at improving parenting practices; it has shown a positive impact on developmental outcomes [9,10]. In Colombia, Ecuador, and Mexico, existing cash transfer programs have been used to deliver ECD interventions [11]. Multiple studies across the globe, including Jamaica, Pakistan, and Turkey, have shown that incorporating nurturing care elements in interventions improved child development and later adult outcomes [7]. The most rigorous evaluations of ECD interventions have followed experimental designs, which are considered the gold standard for estimating effects. Quasi-experimental designs may be used when randomization is not possible due to self-selection and ethical or logistical considerations. Studies have found that in hard-to-reach communities with high levels of poverty, children live at risk of nutritional deficiencies and suboptimal levels of neurodevelopment [12]. Governments often face difficulties reaching these populations, many of which are geographically isolated. Therefore, non-governmental organizations (NGOs) are key for complementing and expanding the reach of governmental efforts seeking to improve ECD in the most socially isolated communities. Located in the south of the country, Oaxaca is one of the States with the greatest social inequities in Mexico. In 2018, 66% of its population lived in poverty, only 16% had access to health services, and 27% had major gaps in the education system [13]. Hence, it is not surprising that Oaxaca has a life expectancy at birth lower than the national average [14] and that a large proportion of children may be at risk of suboptimal ECD. Since 1986, the NGO Un Kilo de Ayuda A. C. (UKA) has been involved in preventing child undernutrition in contexts of high poverty in Mexico. In 2014, UKA started to implement the Neurological and Psycho-affective Early Childhood Development Program (NPECDP-UKA) in eighty socially deprived communities of Oaxaca. This program seeks to improve levels of ECD among children from these communities and is one of the three programs constituting UKA's Integral Model of Early Child Development. The other two programs focus on improving physical development of children and fostering community development, respectively. Assessing ECD requires addressing serious methodological challenges given its multidimensional nature. Ethical matters are also important; for example, intervention components already proven to be beneficial must be offered to all groups in a research study, limiting the possibility of randomization and of including control groups without any intervention. Interventions designed to improve ECD also face logistical challenges, since they typically include more than one component and numerous instruments to assess all its dimensions [15][16][17][18]. Furthermore, they require interdisciplinary teams to deliver the interventions and to conduct unbiased assessments. This paper aims to present the impact evaluation protocol to assess the effect of the NPECDP-UKA on ECD in preschool children from eighty highly socially deprived communities in Oaxaca, Mexico. 
The evaluation has the potential to make visible the effects of an educational intervention performed by an NGO on ECD. It represents an opportunity to assess the developmental lag in the studied communities as well as to provide elements for the continuation, expansion, or modification of the interventions. As part of civil society and in coordination with authorities, UKA provides a channel to deliver ECD parenting education focused on responsive caregiving. Given the multiple aspects of nurturing care, it is important to have multisectoral interventions in place [19]. Along with the important role of the government, the private sector as well as civil society can add coordinated contributions to improve and sustain ECD interventions. The present protocol shows a novel way to assess the effects of an intervention on developmental outcomes where difference scores are not possible due to the age-specific nature of developmental scales. The proposed stepped-wedge experimental design overcomes this difficulty and tackles different sources of bias arising from self-selection, cohort, and community effects. Additionally, the quasi-experimental component of the evaluation allows us to study determinants of participation while controlling for community and cohort effects. The design could be adapted and applied to study any other outcome for which age-specific scales are used. Furthermore, it provides useful elements for designing future evaluations by making explicit important biases that may be at play. To the best of our knowledge, this evaluation study and its design constitute the first effort of its kind applied to ECD outcomes in Mexico. We hypothesized that children from intervened communities will show better ECD outcomes. Design and setting {9} To assess the effect of NPECDP-UKA on ECD, we will use a cluster randomized stepped-wedge design [20] with an allocation ratio of 1:1. A total of 80 communities will be randomly assigned to four study groups using blocking. Each block will comprise four communities (twenty blocks in total) with a similar percentage of indigenous population, social marginality level, and urbanicity measured in the 2010 Census [21,22]. Within each block, the four treatments will be randomly allocated to communities, with each community receiving exactly one treatment (an illustrative sketch of this allocation procedure is given below). For ethical reasons, interventions will not be allocated at the individual level. Instead, all study groups consist of communities in which eligible caregivers will be invited to enroll in the NPECDP-UKA, but the program will be deployed sequentially at the community level, according to the timing randomly assigned. This defines the distinctive characteristic of each study group: for example, whereas group A will have a total of 30 months of exposure to the program at its last assessment, group D, the last study group to be incorporated, will have no exposure to the program during the study and will be measured only once (Table 1). In all study groups, a baseline assessment will be performed before implementing the intervention. The main feature of the proposed design is the possibility of comparing groups of children in the same age range but with different times of exposure to NPECDP-UKA, at the same calendar period. This design feature will make it possible to eliminate potential confounding cohort effects, since children of the same age across study groups will also share the same year of birth. Table 1 shows the age range of children for each study group at different study calendar periods. 
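As a concrete, non-authoritative illustration of the blocked allocation just described, the short Python sketch below assigns the four intervention timings (groups A-D) at random within each of twenty hypothetical blocks of four communities; the community identifiers, the data structure, and the random seed are assumptions made purely for illustration, since the actual allocation will be generated with a random-number generator in Stata 15 as described in the allocation section.

```python
import random
from collections import Counter

# Hypothetical input: 80 communities already grouped into 20 blocks of 4, matched on
# indigenous population, marginality level, and urbanicity (2010 Census indicators).
blocks = {f"block_{b:02d}": [f"community_{b:02d}_{i}" for i in range(1, 5)]
          for b in range(1, 21)}

GROUPS = ["A", "B", "C", "D"]   # A is exposed first; D is measured last and acts as control

random.seed(2019)               # fixed seed so the illustrative allocation is reproducible
allocation = {}
for block, communities in blocks.items():
    timings = GROUPS.copy()
    random.shuffle(timings)                  # randomly order the four timings within the block
    for community, group in zip(communities, timings):
        allocation[community] = group        # exactly one timing per community

# Every study group ends up with exactly 20 communities (one from each block).
print(Counter(allocation.values()))
```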
The upper age limit of the ECD assessment will be 60 months; therefore, the first group of communities to be exposed to the program will include children aged 1-30 months at their first assessment, and these children will be aged 31-60 months at their final assessment. At given calendar periods (t = 2, 4, 6), the group unexposed to the intervention will function as a control group. For example, the last group of communities to be incorporated into the study (group D) will function as a control group at the last calendar period (t = 6). This will make it possible to estimate the effect of NPECDP-UKA on ECD for exposure times of 30 months (group A vs group D), 24 months (group B vs group D), and 12 months (group C vs group D). Given the rapid changes at early ages, and considering the first thousand days of life as a critical window of opportunity, we consider that the planned exposure times are adequate to detect changes as well as a gradient of the effect with respect to exposure time. Comparisons between groups will be performed cross-sectionally for children of the same age range. There will be a total of six calendar periods, with a time span of 6 months between the mid-points of consecutive periods and an approximate time span of 6 months between consecutive individual measurements. In addition to the assessment of the effect of NPECDP-UKA on ECD through the cluster randomized stepped-wedge design, children from the initial communities (group A) whose caregivers refuse to participate in the NPECDP-UKA but continue participating in the study measurements will be assessed at the same time as the intervened children for three consecutive measurements. This will make it possible to identify predictors of participation and to approximate program effects through a quasi-experimental analysis after 6 and 12 months of intervention, using propensity score matching techniques to adjust for self-selection predictors [23] (a simplified sketch of this analysis is given at the end of this section). For outcome variables with well-defined changes (e.g., nutritional status indicators), a difference-in-differences estimator along with propensity score balancing will be used. Figure 1 shows a simplified version including both the cluster randomized stepped-wedge design and the quasi-experimental design. The former is shown just for the comparison between group B and group A in children aged 7 to 36 months at t = 2, where group B works as a control group. Participants {10} Communities were selected if they met the following inclusion criteria: located in municipalities where the NPECDP-UKA was not currently operating and with more than thirty-five inhabitants under 5 years of age, according to the 2010 census [22]. Within selected communities, children will be included if they fall within the designated age range and their caregivers agree to participate in the study. One of the children's parents or the legal caregiver will be asked by UKA staff to sign the study's consent form (details mentioned below, in the recruitment section). Children's capillary blood samples will be obtained by trained personnel to assess their anemia status. One of the children's parents or legal caregivers will be asked to sign an additional consent form prior to blood collection. Children whose parents or legal caregivers refuse to provide consent for blood sample collection will not be excluded from the study, since the outcomes of interest do not require analysis of blood samples. 
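The quasi-experimental analysis described above (propensity score matching on self-selection predictors, followed by a difference-in-differences contrast for outcomes with well-defined changes) could look roughly like the hedged sketch below; the covariates, the 1:1 nearest-neighbour matching, and the simulated data are illustrative assumptions, not the study's prespecified implementation.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

np.random.seed(0)
n = 400

# Hypothetical group-A dataset: one row per child; 'enrolled' = 1 if the caregiver
# joined the NPECDP-UKA, 0 if they only take part in the study measurements.
df = pd.DataFrame({
    "enrolled":     np.random.binomial(1, 0.65, n),
    "mother_edu":   np.random.randint(0, 13, n),        # years of schooling
    "hh_assets":    np.random.normal(0, 1, n),          # household asset index
    "child_age_m":  np.random.randint(1, 31, n),
    "haz_baseline": np.random.normal(-1.2, 1.0, n),     # length-for-age Z score
})
df["haz_followup"] = df["haz_baseline"] + 0.1 * df["enrolled"] + np.random.normal(0, 0.4, n)

covars = ["mother_edu", "hh_assets", "child_age_m"]

# 1) Propensity score: probability of enrolling given observed predictors.
ps_model = LogisticRegression(max_iter=1000).fit(df[covars], df["enrolled"])
df["pscore"] = ps_model.predict_proba(df[covars])[:, 1]

# 2) 1:1 nearest-neighbour matching on the propensity score (with replacement).
treated = df[df["enrolled"] == 1]
control = df[df["enrolled"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_control = control.iloc[idx.ravel()]

# 3) Difference-in-differences: change in enrolled children minus change in matched controls.
did = ((treated["haz_followup"] - treated["haz_baseline"]).mean()
       - (matched_control["haz_followup"] - matched_control["haz_baseline"]).mean())
print(f"DiD estimate of the program effect on length-for-age Z: {did:.3f}")
```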
Intervention {11a} Explanation for the choice of comparators Under the stepped-wedge design framework, communities that will function as a control group will not receive any intervention at the moment of their first measurement. Groups will be compared in a parallel fashion at the same calendar study period, the distinctive feature of study groups is time of exposure to the program, and the group that has not yet been exposed to the program will function as a control group for comparisons at a given study calendar time (see Table 1). Due to ethical considerations, every child, who is diagnosed with undernutrition, anemia, obesity, or developmental delays, will be referred for further assessment and remedial services, regardless of group assignment or program participation status. Intervention details The NPECDP-UKA is one of the three components constituting UKA's Integral Model of Early Child Development. The other two components focus on improving physical development of young children and fostering community development, respectively. NPECDP-UKA UKA will implement an integrated responsive parenting nurturing care approach to promote different child development domains, i.e., motor, language, cognitive, and social. Workshops on appropriate responsive parenting practices will encourage nurturing interactions between parents and/or other caregivers with children, from pregnancy and up to 60 months of age. These workshops are supported by two sets of manuals: the first one is an unpublished pedagogical support guide for the facilitators to deliver program in a standard way, and the set contains the workshop materials needed to support the facilitator's counseling to families. The model promotes play as a form of learning and addresses responsive parenting skills for healthy feeding, sleeping, soothing, and physical to promote selfregulation of behaviors and emotions. Physical development component UKA will promote a healthy, balanced, and varied diet and encourage the consumption of locally available foods. Standardized workshops and advice will be implemented to provide guidance on optimal health and nutrition for preschool children (i.e., under 5 years of age). These workshops include a neurodevelopment component and a responsive nurturing care component which covers four dimensions: feeding, sleep, movement, and self-regulation. As part of this component, the implementation team will monitor weight, height, or length, quarterly. Whenever signs of malnutrition are identified, caregivers will be referred to clinical services that may include provision of vitamin supplements and malnutrition recovery advice. Infectious diseases such as diarrhea and acute respiratory infections will also be monitored, and appropriate referrals will be made for clinical services including oral rehydration and counseling on proper hygiene practices. During pregnancy, iron and micronutrient supplementation may be provided if warranted. A comprehensive counseling model will be overseen by community commissioners-contracted by UKA and trained and supervised by the research team-who support the project as translators and interpreters for situations where beneficiaries speak an indigenous language and are not fluent in Spanish. Women will be provided personalized advice during their last trimester of pregnancy and during the first month postpartum. 
Every 3 months, hemoglobin will be measured in a capillary blood sample for the diagnosis and timely treatment of anemia in pregnant women and children between 6 months and 5 years of age. Treatment of anemia for pregnant women will be iron and folic acid supplementation. Micronutrients will be provided for children at risk of anemia, and prophylactic treatment based on iron will be also provided for children. Community development component This component has two sub-components. The first addresses household food insecurity. The UKA team will promote access and availability of fresh, healthy, and nutritious food to improve the diet of families with children under 5 years and pregnant women, through local food production based on sustainable community and family farming models. The second sub-component centers around UKA's effort to provide access to basic WASH services needed for proper ECD including dry ecological toilets, access to water with rainwater collection systems, safe water storage and water purification systems, sludge water treatment systems, and environmentally friendly energy efficient friendly stoves that save wood and decrease the emission of air born pollutants inside the home. Criteria for discontinuing or modifying allocated interventions The criteria for discontinuing or modifying allocation of communities will be sudden inaccessibility to the community due to external problems such as public insecurity or refusal to participate from local authorities. Strategies to improve adherence to interventions All training facilitators will participate in training and face-to-face sensitization activities in order to ensure fidelity of program implementation. In order to improve participation rates, before each training program delivery workshop, participants will receive a telephone reminder. During the trial, adherence will be monitored at the beginning and end of each program session. Relevant concomitant care permitted or prohibited during the trial There are no restrictions on the involvement of participants who receive other interventions such as government programs. We expect that such exposure will be similarly distributed over study groups under random allocation. In case unbalances in this characteristic are detected, participation in other inventions will be adjusted for in analyses. Provisions for post-trial care The NPECDP-UKA will continue indefinitely after the impact evaluation concludes, but it can be modified as a result of the study to improve its effectiveness. Outcomes {12} Primary outcome measures The primary outcomes of the study are the ECD domains assessed through the Child Development Evaluation Test 2nd Edition (CDE-II). The CDE-II, or Evaluación del Desarrollo Infantil (EDI-II) in Spanish, was developed and validated in Mexico to screen populations for risk of developmental delays in early childhood. The test has specific items for fourteen age groups of children aged 1-60 months. ECD domains assessed included gross motor, fine motor, language, social, and cognitive skills. The CDE-II is based on age-group specific items, and score results are categorized into three levels following a traffic light interpretation: green (normal development), yellow (developmental lag), and red (at risk of development delay). 
These three categories will be used as the specific measurement variable, the analysis metric will be the final value (by ECD domain), and the method of aggregation will be the proportion of children falling within each traffic-light category. The EDI-II will be administered according to its application guidelines [24,25] by trained, standardized, and certified research personnel. Secondary outcome measures Our secondary outcomes will be child nutritional status and ECD assessed through two additional instruments. Nutritional status Length/height-for-age, weight-for-age, and hemoglobin measurements will be used to assess children's nutritional status. Anthropometric measurements will be made by trained personnel and standardized according to international protocols [26,27], using SECA digital scales (874 TM) with an accuracy of ±50 g and SECA portable stadiometers (217 TM) with an accuracy of ±1 mm. After applying data cleaning procedures [28] and following the WHO reference standards [29], our main indicator of suboptimal nutritional development will be chronic undernutrition or stunting, defined as a length- (or height-) for-age Z score below −2. ECD measured through the Bayley Scales of Infant and Toddler Development, Third Edition (BSID-III) This is a diagnostic test that consists of the following scales: (1) the cognitive scale, which assesses the child's non-verbal responses and measures learning processes, problem-solving ability, attention, the ability to count and classify objects, and the ability to play; (2) the language and communication scale, which includes subscales to assess receptive and expressive language: the first subscale measures the child's ability to understand different stimuli, words, or instructions, and the second assesses language development through vocalizations, word use, and sentence construction; (3) the motor scale, which includes a fine motor subscale that measures hand-eye and hand-to-finger coordination and a gross motor subscale that measures the child's control over his or her body and ability to move the torso and limbs; and (4) the social-emotional scale, which assesses the main milestones of social-emotional development, such as self-regulation, attention, and the child's ability to relate to and interact with family members and strangers, among other temperamental and social aspects. These scales are administered and scored independently, resulting in domain-specific assessments. The cognitive, language, and motor scales are assessed through direct observation of the child's abilities on various items that are ordered by ascending difficulty. The socio-emotional scale comprises thirty-five questions with five Likert-scale-like response points answered by the caregiver. The BSID-III will be applied in a subsample of children from the group formed by the first set of communities exposed to the program (group A) and its comparison group of children whose caregivers refused to enroll in the program but who live in the same communities (i.e., the quasi-experimental analyses). Additionally, the BSID-III will be applied in a subsample of group B during its baseline assessment period, i.e., before any NPECDP-UKA exposure occurs. These data will be compared to the subsample of children from group A at their third follow-up, i.e., once they have been exposed to the NPECDP-UKA for 12 months. The BSID-III will be applied to children aged 1 to 42 months. A concurrent validation using data from children with both BSID-III and CDE-II measures will also be performed. 
ECD measured through the McCarthy Scales of Children's Abilities (MSCA) An adapted Spanish version of the original MSCA [30] will be used. This test includes five scales to assess diverse ECD domains: Verbal, Quantitative, Executive-Perceptual, Memory, and Motor. The combination of the first three scales provides a General Cognitive Index (GCI), which is considered equivalent to the IQ. The test will be applied to children from 42 to 60 months of age. Table 2 shows the chronogram of study activities, including the intervention implementation schedule across study groups and the corresponding measurements of ECD and nutritional status outcomes among pregnant women and children; nutritional status will be assessed at every measurement period, and in each group the intervention starts right after its first measurement, with a time span of 6 months between the mid-points of consecutive periods and an approximate time span of 6 months between consecutive individual measurements. The number of repeated measurements across time will depend on the timing at which each group starts being exposed to the intervention. Sample size {14} Our main outcome statistic is the proportion of children with developmental lag (yellow category) or at risk of developmental delay (red category). The effect will be assessed by comparing these proportions between the exposed groups and the unexposed group. Sample sizes were planned so that there are approximately 150 children for each 6-month age interval; this approximates a uniform distribution of observations across age groups and study groups for the relevant age ranges. As mentioned before, given the stepped-wedge design of the study, groups will be incorporated sequentially, but comparisons between study groups will be performed at the same study calendar period and for children at the same age interval (see Table 1 for a representation of the stepped-wedge design). Group A will include 1-30-month-old children (n = 750); these children will be 31-60 months old at their final measurement and compared to children of the same ages in group D (n = 750). This will make it possible to assess the effect of NPECDP-UKA on ECD after an exposure time of 30 months. Other comparisons are possible, for example, ECD outcomes from group B vs group D (n = 900) and group C vs group D (n = 1200) at the final study calendar period (Table 1). Table 3 shows the effect size in terms of a difference of proportions given different levels of proportions in the unexposed group (group D), different design effects (DEFF) that account for the correlation of measurements within the same community (DEFF = 1.5, 2.0) [31], a significance level of 0.05 under a two-sided test, and a statistical power of 80.0%. Recruitment {15} There are two important categories of study participants: those who enroll in the NPECDP-UKA and those who refuse to participate in the NPECDP-UKA but still consent to take part in the study measurements. Measurements on non-participants will be performed for the first group of communities (group A) to identify predictors of participation and to perform the quasi-experimental evaluation. Recruitment will be performed in every community in two different stages. The first stage will be based on public convening by the municipal authorities. Local authorities will facilitate the initial contact, aimed at identifying children of interest along with their caregivers, who will be asked about their willingness to participate in the program. 
The convening will target the population of interest to participate in a meeting where the NPECDP-UKA and its activities will be explained. At these meetings, children and their main caregivers will be identified and their intention to participate in the program will be discussed. For the first group of communities (group A), those who refuse to participate in the program will be asked to participate with study measurements. The second stage of recruitment will be based on the census in the selected municipalities. For this purpose, the housing census provided by the municipal authorities will be used. In homes having children within the designed age range, their primary caregivers will be asked about their intention to participate in the program. Those who refuse to enroll in the program will be asked to participate with study measurements as described before (group A). The participation of the studied population will be voluntary and written consent will be obtained. The research protocol of this evaluation was approved by the Ethics Committees on Research and Biosafety of the National Institute of Public Health in Mexico (CI-896-2018/1538), and the study is registered in ClinicalTrials.gov (CT/ID: NCT04210362). Allocation {16a, 16b, 16c} The stratification will be made by population size, percentage of the indigenous population, and municipal marginalization. Twenty blocks of four municipalities each will be defined, within which the study group will be assigned through a random-number generator in Stata 15 [32]. The allocation will be carried out at the community level, so it will not be necessary to establish a concealment mechanism. Community allocation will be performed by the National Institute of Public Health Mexico. Within communities, UKA will invite participants to enroll, and they will self-select to 1) enroll in the NPECDP-UKA and the evaluation study, 2) not to enroll in the NPECDP-UKA but participate in the evaluation study, or 3) neither enroll in the NPECDP-UKA nor participate in the evaluation study. The experimental part of our study will be singleblinded. Participants from each community will not know to which group of communities they belong. No procedure for unblinding will be needed. For those collecting and analyzing the data, there will be no blinding given the stepped nature of the study and the defining characteristics of study groups. Regarding the quasiexperimental aspect of the study, there is no blinding since non-participation is the defining characteristic of study groups. Data collection and management {18a, 18b, 19} Plans for assessment and collection of outcomes Data collection of outcomes is planned to occur in the participants' households by trained personnel not involved in the delivery of the program. There will be up to six assessment timepoints per participant depending on the assigned study group (see the "Participant timeline" section). To promote data quality, besides the training of personnel, there will be duplicate measurements for weight and length (or height). Study instruments 1. The CDE-II, which was developed and validated in Mexico to screen populations for lag and for risk of delay in child development, consists of specific items for fourteen age groups of children aged 1 to 60 months. Assessed developmental areas include gross motor skills, fine motor skills, language, social skills, and cognitive skills. 
Score results are categorized into three levels: green (normal development), yellow (developmental lag), and red (at risk of development delay) [24]. Bayley Scales of Infant and Toddler Development, Third Edition (BSID-III) [33]. This diagnostic test consists of the following scales: (1) cognitive scale, based on the child's non-verbal responses and measures learning processes, problem solving ability, attention, the ability to count and classify objects, and the ability to play, among other constructs. (2) Language and communication scale, which contains the subscales of receptive and expressive language; the first measures the child's ability to understand different stimuli, words, or instructions in the environment. The second assesses language development through vocalizations, word use, and sentence construction. (3) Motor scale, which includes the fine motor subscale that measures hand-eye and handto-finger coordination and the gross motor subscale that measures the child's control over his or her body and abilities to move the torso and limbs. (4) Social-emotional scale, which assesses the main milestones of social-emotional development, such as self-regulation, attention, the child's ability to relate to and interact with family members and strangers, among other temperamental and social aspects. These scales are administered and scored independently, resulting in domain-specific assessments. The cognitive, language, and motor scales are assessed through direct observation of the child's abilities on various items that are ordered in an ascending order of difficulty. Start (base) and stop (ceiling) criteria determine which test items each child takes. For each item that the child performs correctly, he or she receives a score of 1; if he or she fails to perform the item, the score is 0. The raw score is the sum of correct responses, including items prior to the starting point (base). As mentioned above, the focus of this study is on cognitive, language, and motor development. The socio-emotional scale comprises thirty-five questions of five points each to be answered by the caregiver, so its administration is quick and easy. McCarthy Scales of Children's Abilities (MSCA). This test is made up of five scales: Verbal, Quantitative, Executive-Perceptual, Memory, and Motor. The combination of the first three scales provides a General Cognitive Index (GCI), which is considered equivalent to the intelligence quotient [34]. We will also collect the following data: 1. Household socioeconomic and demographic characteristics. Includes information on the composition of the household, state of health, education, employment situation, assets, income, social security, and access to social programs of the members living in the same household as the minor of interest. 2. Characteristics of the mother of the selected child. It explores aspects of community organization, participation in organizations, safety in the neighborhood, family support networks, socialemotional characteristics of the mother (depression, stress, anxiety, and self-esteem), opinion on social roles and distribution of tasks within the home, and the mother's pregnancy history. 3. Characteristics of children from 0 to 30 months. Includes information on pregnancy, delivery and postpartum of the mother of the selected child, addictions of the mother during pregnancy and breastfeeding of the child, health status, nutrition and education of the selected child, and parenting practices (feeding, hygiene, sleep) of the selected child. 
4. Knowledge of physical, neurological, and psycho-affective child development. Explores the mother's appropriation of the information presented in the workshops given by the UKA facilitators. 5. Dissemination and acceptance of the UKA program. Collects information on families' knowledge of, permanence in, and desertion from the program. 6. Addictions of the members of the household. Explores the risk factors to which the selected child is exposed due to the consumption of licit and illicit substances by members of the household. 7. The last booklet corresponds to Raven's Progressive Matrices test [35], applied to the primary caregiver and nuclear family of the selected child. Plans to promote participant retention and complete follow-up To promote participant retention, a community commissioner will be identified in each community. These commissioners will be women who support the NPECDP-UKA implementation as translators and interpreters. They will contact the study participants during the intervention, motivating them to attend all workshops and the data collection sessions throughout the duration of the study. Data management The data management team, based at the National Institute of Public Health, will develop the data capture forms in REDCap for e-tablets [36]. The data capture system will include automated skip patterns and data value range checks according to instrument structure. The data will be securely stored locally on tablets and then transferred to a centralized data management system with a data quality control protocol overseen by the lead data manager. Study staff will employ several strategies to promote data quality, including double data entry, range checks for data values during study analyses, and auditable algorithms for the systematization and automatic identification of possible errors in the values of the measured characteristics. Through daily visual cross-validation of the data for complex errors and regular on-site monitoring, the quality and completeness of the data will remain reflective of the state of the trial. Confidentiality To protect participants' confidentiality, participant data will be labeled using a unique participant identification code that contains no personal identifiers. Access to the collected participants' data will be restricted to the principal investigator and appropriately trained Institutional Review Board (IRB)-approved research study staff as required. All laboratory samples, completed forms, reports, and other records will be identified using an unlinked unique participant ID number to maintain participant confidentiality. Plans for collection, laboratory evaluation, and storage of biological specimens for genetic or molecular analysis in this trial/future use To determine children's anemia status, hemoglobin will be measured every 3 months. Trained personnel will obtain capillary blood samples for the diagnosis and timely treatment of anemia in children between 6 months and 5 years of age and in pregnant women. For the detection of anemia, the HemoCue Hb 201™ analyzer will be used. This analyzer provides a measurement of total hemoglobin in whole blood, whether capillary, venous, or arterial, with the same quality as a hematology analyzer. The system is designed for the quantitative determination of hemoglobin at the point of care in primary care settings and is for in vitro diagnostic use only. No storage or future use of this biological material will be needed. 
Statistical methods {20a, 20b, 20c} Statistical methods for primary and secondary outcomes The effects of the NPECDP-UKA on ECD for primary and secondary outcomes will be assessed by the difference of proportions for binary outcomes and the difference of means for quantitative outcomes. In case there are unbalanced observed characteristics across groups, effects will be estimated with logistic multiple regression for binary outcomes and with multiple linear regression for quantitative outcomes. Covariate-adjusted means or proportions will be obtained after model estimation as predictive margins [37]. Standard errors will be adjusted for clustered data using the method of linearization [33]. Additionally, the difference in difference estimators with propensity score matching will be performed to approximate effects with a quasi-experimental approach [23]. In this analysis, the unexposed group consists of children of caregivers who declined to participate in the NPEC DP-UKA but acceded to participate in the evaluation study. Interim analyses No interim analyses will be performed. Analyses with measurements before the final data point will be performed only for subsamples or comparisons for which measurements will be completed by then: for example, the quasi-experimental part of the study or the concurrent validation analysis of the EDI-II test results. Therefore, no interim analyses will be used for deciding on study termination. Methods for additional analyses An analysis of the mediating role of parenting practices between intervention exposure and ECD will be carried out using structural equation models. Parameters will be estimated through weighted least-squares with mean and variance adjustment and the theta parameterization [38]. Methods in analysis to handle protocol non-adherence and any statistical methods to handle missing data Complete case analysis will be performed as well as multiple imputation analysis when appropriate [39]. In regard to adherence, analyses will be complemented with a dose-response analysis. Plans to give access to the full protocol, participant-level data, and statistical code Full protocol and used code will be shared upon proper and formal request for academic reasons. Datasets are not public so access should be requested formally. Oversight and monitoring {21a, 21b, 22, 23} Composition of the coordinating center and trial steering committee The execution of the trial will be performed by UKA and its research department will function as the coordinating center. The steering committee will be composed of the study investigators and the head of the research department of UKA. The data management team will include IT experts from both UKA and the sponsor institution. At the field, UKA experts will be in charge of electronic data generation through specialized hardware and the InfoKilo v2 information system. IT experts from the National Institute of Public Health of Mexico will monitor data quality and provide advice and recommendations based on auditable algorithms developed for quality control of the data collected. Composition of the data monitoring committee, its role, and reporting structure The data monitoring committee will be presided by one investigator from the National Institute of Public Health of Mexico who will coordinate with the data management team to review data generating processes and their quality. 
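Returning to the statistical methods outlined above, a hedged sketch of the primary comparison is shown below: a logistic regression for developmental lag or risk of delay (yellow or red CDE-II result) with cluster-robust (linearization-type) standard errors and covariate-adjusted proportions recovered as predictive margins. The variable names and the simulated data are assumptions for illustration only; the actual analysis will be run on the study data with the prespecified covariates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1500

# Hypothetical child-level data: 'lag_or_risk' = 1 if the CDE-II result is yellow or red.
df = pd.DataFrame({
    "community":   rng.integers(0, 40, n),                 # cluster identifier
    "group":       rng.choice(["exposed", "control"], n),
    "child_age_m": rng.integers(31, 61, n),
    "sex":         rng.choice(["M", "F"], n),
})
p = 0.30 - 0.07 * (df["group"] == "exposed")
df["lag_or_risk"] = rng.binomial(1, p.to_numpy())

# Logistic regression with standard errors clustered by community.
model = smf.logit("lag_or_risk ~ group + child_age_m + sex", data=df)
res = model.fit(cov_type="cluster", cov_kwds={"groups": df["community"]}, disp=False)

# Predictive margins: covariate-adjusted proportions under each exposure condition.
adj = {g: res.predict(df.assign(group=g)).mean() for g in ["control", "exposed"]}
print(f"Adjusted proportion (control): {adj['control']:.3f}")
print(f"Adjusted proportion (exposed): {adj['exposed']:.3f}")
print(f"Adjusted difference of proportions: {adj['exposed'] - adj['control']:.3f}")
```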
Adverse event reporting and harms No unintended adverse effects are expected from the intervention; however, any adverse events related to the execution of the study will be reported to the supervisor in charge of the corresponding area who in turn will immediately report to the IRB. Frequency and plans for auditing trial conduct The principal investigator will designate appropriately qualified personnel to periodically perform quality assurance checks at mutually convenient times during and after the study and based on auditable algorithms that were developed for quality control of the data collected. These monitoring visits provide the opportunity to evaluate the progress of the study and the adherence to the intervention and obtain information about potential problems. Scheduling monitoring visits will be a function of participant enrollment, site status, and other commitments. The monitor will assure that data are accurate and in agreement with any paper source documentation used, verify that subjects' consent for study participation has been properly obtained and documented, confirm that research subjects entered into the study meet inclusion and exclusion criteria, and verify that study procedures are being conducted according to the protocol guidelines. If a problem is identified during the visit (i.e., poor communication with the data coordinating center, inadequate or insufficient staff to conduct the study, etc.), the monitor will assist the site in resolving the issues. Some issues may require input from the IRB or of the principal investigators. Plans for communicating important protocol amendments to relevant parties {25} Protocol amendments will be submitted to the research committee of the National Institute of Public Health Mexico and when necessary and appropriate to the research ethics committee. Authorized changes will be submitted to the Clinical Trials profile of the study. Dissemination plans {31a} Plans for dissemination include national and international congresses, academic events. and peerreviewed publications of results at different stages of the project. Discussion We have presented an evaluation design to estimate the effect of a nurturing care intervention on ECD. Most common designs in evaluation are not applicable to estimating effects on ECD given the nature of the outcome. Scales used to assess dimensions of ECD depend on the specific age of subjects; comparing scores across time is problematic since the way in which ECD is measured varies with age. On the other hand, multiple sources of bias should be considered when selecting a design. The main sources of bias are due to confounding factors such as cohort effects, community effects, selfselection, aging effects, and period effects. Community effects can be controlled by randomization. This type of design is known as a cluster randomized trial [40], where the unit that receives the intervention is an aggregate unit, typically subjects are nested into community clusters. Another advantage of the cluster randomized trial design is related to potential spillover effects. For example, in an educational intervention, subjects that receive the intervention may communicate what they learn to neighbors. Special consideration should be given to the number of communities to be randomized so that intervention effects may be properly separated from community effects. 
In the extreme case of just two clusters, even with a random allocation, effects from the intervention are totally confounded with the specific characteristics of the two communities. One of the communities may have better outcomes simply because of its own characteristics and not necessarily because of the applied intervention. It has been suggested a total of between ten and fifteen communities per arm [40] to better separate community effects from intervention effects. In the present study, we specified a total of twenty communities per arm. The stepped-wedge design has been proposed for tackling limitations of classical designs when a control group is not feasible given ethical or logistical considerations [20]. Our design corresponds to a specific steppedwedge design where effects are assessed as in a parallel design. The main feature of this design is the sequential incorporation of study groups; the defining characteristic of study groups is the time of exposure to an intervention. The assignment of experimental units to study groups is randomized and evaluation can be performed at the same calendar period across study groups; this characteristic precludes effects from time of measurement to be confounded with intervention effects. An alternative version of the stepped-wedge design proposes comparing measurements of the same group before and after intervention; since this occurs in different calendar times, confounding due to period effects cannot be ruled out under this setting [20]. Another type of confounding relates to age; given the nature of ECD outcomes, it is key to compare intervened and not intervened subjects at the same ages. This guarantees that the very same items from ECD scales are used to assess intervention effects. On the other hand, changes in child development at early ages occur very rapidly. This characteristic of ECD complicates using classical estimators such as the difference in differences estimator mainly for three reasons: (1) at the individual level, it is difficult to interpret changes when assessment items vary with age; (2) time differences between measurements across study groups are required to be balanced to avoid confounding; and (3) the distribution of initial ages should also be balanced across study groups. Although these imbalances may be attenuated by using adjustment covariates in models, it would be preferable to avoid these sources of potential bias with a robust design. Another important source of bias, especially in programs that are not possible to randomize individuals for ethical reasons, is self-selection. Our study has a quasi-experimental component where self-selection is tackled analytically through propensity score matching techniques and difference in difference estimators. The experimental component of our design avoids selfselection bias since all subjects are self-selected to receive the intervention. The key difference between groups is the moment at which intervention is implemented. Groups are incorporated in stages; the last group incorporated is measured before the intervention starts so it serves as a comparison group. Effects are assessed as in a parallel design. In other contexts, the stepped-wedge design has been identified as a quasiexperimental approach; however, it has been noted that a well-conducted stepped-wedge trial where period effects are controlled and participants experience only one condition can in principle be as rigorous as a standard control randomized trial [41]. 
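The sample size section above inflates the required number of children by a design effect precisely to account for the within-community correlation discussed here. Under the usual approximation the design effect is DEFF = 1 + (m − 1)ρ, where m is the average cluster size and ρ the intracluster correlation coefficient. The tiny sketch below uses assumed values (about 37 children per community and two illustrative ICCs) that happen to reproduce the DEFF values of 1.5 and 2.0 used in the protocol, but the ICC values themselves are not taken from the study.

```python
def design_effect(m, icc):
    """Classic cluster-sampling design effect: DEFF = 1 + (m - 1) * icc."""
    return 1 + (m - 1) * icc

# Illustrative values only: ~37 children per community, two plausible ICCs.
for icc in (0.014, 0.028):
    deff = design_effect(m=37, icc=icc)
    print(f"ICC = {icc}: DEFF = {deff:.2f}, "
          f"effective sample size for 750 children = {750 / deff:.0f}")
```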
The proposed intervention shares components with other interventions that have shown beneficial effects. Interventions that provide micronutrients to pregnant women and undernourished children have shown improvements in infant nutrition [16,42,43]. Also, interventions that include parenting counseling about proper diet and complementary and responsive feeding have shown benefits for the nutritional status of young children [44,45]. Parent counseling on stimulation has been successful in improving ECD, and this counseling can be delivered by peers [46], through home visits [47], or through workshops and parent sessions [48][49][50][51]. Our proposed design has its own limitations, which include vulnerability to exogenous shocks that may compromise effect estimation. Although random allocation of communities to the order of intervention implementation balances (in expected value) observed and unobserved characteristics across study groups, the benefits of the realized interventions could be lower than what would be obtained in a situation without an external shock. Our study design is recommended when the way in which outcomes are measured depends on age, self-selection is present, and assignment is performed at an aggregate level. Although key sources of bias are avoided (e.g., randomization within blocks guarantees that the community characteristics used to define blocks are balanced between study groups), implementing our design may be challenging given its required sample size and the coordination efforts necessary. To the best of our knowledge, this is the first experimental study on ECD in Mexico and the Latin American region that will evaluate a social program designed by a Mexican non-governmental organization aimed at impacting neurological development through the improvement of child-rearing practices. Likewise, this study will generate robust and rigorous information on the causal mechanisms that determine achievements in neurodevelopment in contexts of high social vulnerability, which will be useful for the design and implementation of effective ECD interventions. Trial status Recruitment started in July 2019 and was scheduled to end in June 2022. During the first year of the study, once potential participants had been identified, researchers conducted two recruitment phases: the first from July 15 to December 19, 2019, and the second from January 21 to February 10, 2020. Due to the COVID-19 public health emergency, recruitment was suspended during early March 2020. Baseline measurements were obtained for a total of 1176 children (764 whose caregivers decided to enroll and 412 whose caregivers decided not to enroll in the NPECDP-UKA).
Assessment of Magnetic Resonance Imaging Artefacts Caused by Equine Anaesthesia Equipment: A Cadaver Study

Acquisition of magnetic resonance images of the equine limb is still sometimes conducted under general anaesthesia. Although low-field systems allow the use of standard anaesthetic equipment, the possible interference of the extensive electronic componentry of advanced anaesthetic machines with image quality is unknown. This prospective, blinded, cadaver study investigated the effects of seven standardised conditions (Tafonius positioned as in clinical cases, Tafonius on the boundaries of the controlled area, anaesthetic monitoring only, Mallard anaesthetic machine, Bird ventilator, complete electronic silence in the room [negative control], source of electronic interference [positive control]) on image quality through the acquisition of 78 sequences using a 0.31T equine MRI scanner. Images were graded with a 4-point scoring system, where 1 denoted absence of artefacts and 4 denoted major artefacts requiring repetition in a clinical setting. A lack of STIR fat suppression was commonly reported (16/26). Ordinal logistic regression showed no statistically significant differences in image quality between the negative control and either the non-Tafonius or the Tafonius groups (P = 0.535 and P = 0.881, respectively), or with the use of Tafonius compared to the other anaesthetic machines (P = 0.578). The only statistically significant differences in scores were observed between the positive control and the non-Tafonius (P = 0.006) and the Tafonius groups (P = 0.017). Our findings suggest that anaesthetic machines and monitoring do not appear to affect MRI scan quality and support the use of Tafonius during acquisition of images with a 0.31T MRI system in a clinical context.

Animal welfare/ethical statement: The study was approved by the Ethical Committee of the University of Glasgow School of Biodiversity, One Health and Veterinary Medicine (EA29/22).

Declaration of Competing Interest: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Introduction Imaging of the equine limb using MRI (magnetic resonance imaging) has proved a valuable modality as part of the investigation of lameness, providing the clinician with additional diagnostic information to allow tailored individual prognostication and treatment [1]. Advantages are particularly evident when investigating lameness that affects the equine foot, where the hoof capsule limits the diagnostic value of conventional imaging modalities [2,3]. In addition, when compared to computed tomography (CT), low-field MRI systems (LFMRI) have been shown to produce higher anatomic visualisation scores for structures such as the distal sesamoidean impar ligament, synovial structures, and the deep digital flexor tendon (DDFT) [4]. Furthermore, a study comparing CT and LFMRI showed that, despite CT allowing identification of an overall higher number of lesions in the DDFT compared to LFMRI, lesions distal to the proximal margin of the navicular bone, splits, and core lesions were identified with LFMRI only [5]. 
Equine MRI is commonly conducted in the standing horse to avoid the potential risks of general anaesthesia [2,6]. However, patients are sometimes anaesthetized for acquisition of images, particularly for regions proximal to the foot (more susceptible to pendulous sway motion), in order to reduce movement artefacts [1,3]. Anaesthesia of the equine patient for MRI presents multiple challenges, some of which are specifically related to this imaging modality. The use of high-field MRI systems (1.5 Tesla and above) requires a dedicated room to accommodate the MRI unit, the use of MRI-compatible anaesthetic equipment, and remote patient monitoring during image acquisition [7]. Furthermore, the requirements for patient positioning and limb traction during acquisition of images have been associated with post-anaesthetic complications, such as myopathies and neuropathies [8,9]. Although low-field MRI systems may offer lower image resolution for structures such as articular cartilage [1,10], they can permit more straightforward patient access and allow use of standard positioning (including surgical tables) and anaesthetic equipment [11]. Nevertheless, anaesthesia in horses involves a higher degree of morbidity and mortality [12] compared to anaesthesia in other veterinary species [13], with respiratory complications being responsible for approximately a quarter of all non-fatal complications [14]. Various advanced ventilation strategies have been proposed to reduce the effects of recumbency and anaesthesia on the equine lung while trying to minimise the cardiocirculatory effects of mechanical ventilation [15-19]. Advanced respiratory management requires complex equipment. The Tafonius (Vetronic Services Ltd), a recently developed new-generation large animal ventilator [20], delivers spontaneous and mechanical ventilation to support anaesthetized horses using a microprocessor/servo-controlled piston. This device allows the delivery of more accurate tidal volumes compared to traditional pneumatic large animal ventilators [21], independent control of inspiratory time and respiratory rate, an adjustable inspired fraction of oxygen (FiO2), adjustable continuous positive airway pressure (CPAP), and adjustable positive end-expiratory pressure (PEEP). The advanced features of the Tafonius have been successfully employed to apply CPAP [22,23], and stepwise increases in PEEP and alveolar recruitment manoeuvres, to improve ventilation and oxygenation in anaesthetized horses [24] and mules [25]. However, the complex electronic equipment utilised in modern anaesthesia ventilators and monitors such as the Tafonius may generate electrical interference adversely affecting MRI image quality when compared to the use of other, simpler machines (for example the Mallard medical equine anaesthesia machine or the Bird anaesthesia ventilator) with separate anaesthesia monitoring (Datex S5). Zipper artefacts, characterized by one or more spurious bands of electronic noise extending across the image, can be caused by radiofrequency (RF) signals entering the scanning room from electronic equipment during the acquisition of images. 
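To make the zipper mechanism concrete, the simplified simulation below adds a narrow-band interference signal to every readout line of a synthetic acquisition; after reconstruction, the interference collapses into a band of noise at a fixed position along the frequency-encoding direction, spanning the phase-encoding direction, which is the classic zipper appearance. The phantom, the interference frequency, and the amplitude are arbitrary illustrative choices and are not measurements from the scanner used in this study.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
zipper_col = 180   # image column where the interference will concentrate

# Synthetic "anatomy": a simple rectangular phantom.
phantom = np.zeros((N, N))
phantom[64:192, 96:160] = 1.0

# Clean k-space (rows = phase-encode steps, columns = readout/frequency-encode samples).
kspace = np.fft.fft2(phantom)

# Narrow-band external RF picked up during every readout: a complex exponential along
# the readout direction whose frequency maps to column `zipper_col`, with a phase that
# is random (uncorrelated) from one phase-encode step to the next.
kx = np.arange(N)
for row in range(N):
    phase = rng.uniform(0, 2 * np.pi)
    kspace[row, :] += 20.0 * np.exp(1j * (phase - 2 * np.pi * kx * zipper_col / N))

# Reconstruction: the interference appears as a noisy stripe at one frequency-encode
# position, running along the phase-encode direction (the zipper artefact).
recon = np.abs(np.fft.ifft2(kspace))
outside_phantom = recon[:, 161:]                  # columns clear of the phantom itself
print("Brightest artefact column:", int(np.argmax(outside_phantom.mean(axis=0))) + 161)
```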
To the best of the authors' knowledge, a rational and unbiased assessment of the effects of modern anaesthetic equipment, such as the Tafonius, on low-field MRI image quality has not been conducted. Avoiding the use of more advanced anaesthesia technology without good evidence that it affects MRI image quality may unnecessarily compromise outcomes in equine patients. Conversely, potential anaesthesia equipment-related artefacts may have the capacity to reduce the diagnostic yield of MRI and may significantly increase scan and anaesthesia time and any associated morbidities. Therefore, the aim of this study was to test whether the use of the Tafonius consistently affected image quality, through the acquisition of a series of images of the fetlock of a cadaver limb under standardised conditions. We hypothesised that, despite its extensive electronic components, the Tafonius would not generate artefacts affecting image quality when used at a clinically useful distance from the MRI system isocentre. Specimens and MRI unit This prospective, blinded, cadaver study was conducted at Glasgow Equine Hospital and Practice, University of Glasgow, following approval from the Ethical Committee of the University of Glasgow School of Biodiversity, One Health and Veterinary Medicine (EA29/22). Two equine cadaver forelimbs were sourced from the Undergraduate School (Vet Anatomy), University of Glasgow School of Biodiversity, One Health and Veterinary Medicine, with appropriate consent for research and teaching purposes already in place. The specimens were collected soon after euthanasia and refrigerated for a maximum of 24 hours before their use. The feet of the collected limbs were checked visually for the presence of shoes and other ferromagnetic material, and were cleaned before application of a light bandage for imaging (Fig. 1a and Fig. 1b). As this study aimed to assess solely the presence of electronic interference-related artefacts, the only inclusion criteria were that the limbs were cut above the carpus and of suitable size to fit into the coil of a 0.31T Esaote O-scan equine MRI scanner (Fig. 1a). Notably, as we did not investigate any biological or pathological features of the limb itself, the limb was used as a phantom to determine the presence of interference across the image. As such, the leg did not constitute the experimental unit in this study, and the same leg could be used across multiple replicates. One limb was used for two replicates of data collection on the first study day and a second limb for the other two replicates on data collection day 2. The controlled area, defined as the area outside which it is considered safe to place ferromagnetic objects and electronic equipment around a particular MRI scanner, with its boundary set at the 0.5 mT field line, was marked with yellow and black tape on the floor. This area measured 198 cm from the magnet isocentre on each side on the Y axis and 120 cm from the magnet isocentre on each side on the Z axis, leading to a distance of 232 cm from the magnet isocentre to each corner of the delimited controlled area (Fig. 2a). This latter distance was used for comparative distances between the anaesthetic machines tested and the magnet core. Study design Limbs were positioned into the MRI coil and stabilized by MRI-compatible foam positioners (Fig. 
1 a and 1 b). Limbs were positioned in a manner that replicated clinical positioning at the institution, typically performed with the patient in left lateral recumbency with the limb positioned parallel to the horizontal ground surface. With the exception of the negative control (see below), the equine surgery table (Haico Telgte II, DRE Vet) and monitoring cables (electrocardiographic monitoring, pulse oximeter, and invasive blood pressure monitoring transducer) were positioned as for normal clinical cases in all the conditions tested; the surgery table was left unplugged from the power supply. All anaesthetic machines were utilized in mechanical ventilation mode, ventilating an equine 30 L black rubber rebreathing bag (Burtons) during image acquisition. At the beginning of each data collection day, standard localizer sequences were performed in each orthogonal plane to ensure positioning of the equine limb was subjectively comparable between the two data collection days. Images were acquired under the following seven conditions (Fig. 2 and Fig. 3):

1. Negative control. Complete electronic silence in the room. No electronic equipment was present in the room, all sockets were turned off, and doors were kept closed during image acquisition (Fig. 2a).
2. Positive control. RF shields removed from the machine. Source of electrical interference (Tafonius, Vetronic Services Ltd) placed within the controlled area boundaries (within which the field strength is 0.5 mT or greater) (Fig. 2b and Fig. 3a). The front frame of the Tafonius was placed at 91 cm from the magnet isocentre.
3. Standard monitoring only. Datex S5 monitor (Datex-Ohmeda S/5 compact anesthesia monitor) on, and monitoring equipment (ECG, pulse oximeter probe, invasive blood pressure transducer) positioned in the room as for standard clinical cases (Fig. 2c and Fig. 3b). The distance between the monitor and the MRI isocentre was 268 cm.
4. Standard machine + monitoring. Mallard anaesthesia machine (Mallard Medical Model 2800CP) positioned in the room as for standard clinical cases and ventilating a rubber bag to simulate mechanical ventilation. Monitor positioned as above and turned on (Fig. 2d and Fig. 3c). The distance between the front frame of the Mallard anaesthetic machine and the MRI isocentre was 244 cm.
5. Standard machine 2. As 4 but with the Bird anaesthesia ventilator (Bird Mark 7 respirator) in place of the Mallard medical machine (Fig. 2e and Fig. 3d). The distance between the front frame of the Bird anaesthetic machine and the MRI isocentre was 248 cm.
6. Tafonius. Tafonius positioned normally in the room as used for a clinical case, in mechanical ventilation mode with rubber bag attached and monitor on (Fig. 2f and Fig. 3e). The distance between the front frame of the Tafonius and the MRI isocentre was 243 cm.
7. Tafonius in close proximity. As 6 but with the Tafonius positioned as close to the MRI machine as the controlled area boundaries allow (Fig. 2g and Fig. 3f). The distance between the front frame of the Tafonius and the MRI isocentre was 232 cm.

For each experimental condition, three MRI sequences of the fetlock region were acquired, replicating the routine limited fetlock MRI study used at the institution. The sequences were: transverse turbo multi echo (proton density and T2-weighted), sagittal short tau inversion recovery (STIR), and sagittal turbo 3D T1-weighted. The three sequences for each of the seven experimental conditions were replicated 4 times during two separate experimental sessions. The order of the experimental conditions, the replicates, and the identifying number assigned to each condition were randomised by the primary investigator (BT) (see the sketch below). The only exception was the positive controls, which were replicated twice at the end of all the experiments. This decision was dictated by concerns of causing damage to the equipment, which would have prevented acquisition of further sequences. Images were reviewed and scored for artefact in a blinded fashion by a board-certified veterinary radiologist experienced with the interpretation of low-field equine fetlock MRI examinations (MB). MRI studies were uploaded to and digitally transferred using PACS (picture archiving and communication system) and viewed in DICOM (digital imaging and communications in medicine) format. The degree of artefact in each study was graded using a 4-point scale adapted from Byrne et al (2021) (Table 1). A free-text qualitative description was also recorded for the nature of any artefact detected by the observer. Images were graded assigning separate soft tissue and bone scores for each overall study, without differentiation of scores for individual sequences acquired within each study. Such grading was adopted as it reflects what would be done for normal clinical cases at the institution.
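A minimal sketch of what the randomised acquisition schedule described above could look like in code; the condition labels, replicate counts and blinded study identifiers are illustrative assumptions rather than the investigators' actual worksheet.

```python
# Randomised order of 4 replicates for six conditions, with the 2 positive-control
# replicates deliberately appended at the end (risk of damaging the equipment).
import random

CONDITIONS = [
    "negative control", "standard monitoring only", "Mallard + monitor",
    "Bird + monitor", "Tafonius (clinical position)", "Tafonius (close proximity)",
]

def build_schedule(n_replicates=4, n_positive=2, seed=None):
    rng = random.Random(seed)
    runs = [(cond, rep) for cond in CONDITIONS for rep in range(1, n_replicates + 1)]
    rng.shuffle(runs)
    runs += [("positive control", rep) for rep in range(1, n_positive + 1)]
    # Blinded identifiers so the scoring radiologist cannot infer the condition.
    return [{"study_id": f"S{idx:02d}", "condition": cond, "replicate": rep}
            for idx, (cond, rep) in enumerate(runs, start=1)]

schedule = build_schedule(seed=1)   # 26 studies in total (6 x 4 + 2)
```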
Statistical analysis Scores included in the analysis were divided by scan type (soft tissue and bone) and by condition.Separate soft tissue and bone scores were used as independent outcomes.Data were analyzed using descriptive statistics in Jamovi (the Jamovi project, Sydney, Australia) and presented as median and range in box and whisker plots.To further investigate whether the choice of machine influenced scan quality, we used ordinal outcomes logistic regression in Jamovi.Conditions were grouped into negative, positive, Tafonius (which encompassed Tafonius as used in normal clinical cases and Tafonius in close proximity), and non-Tafonius (which encompassed monitoring only, Bird, and Mallard) groups and included in the model as the independent variable, while the dependent variable was represented by the grade assigned to each study.Model strength was evaluated using the Akaike Information Criterion (AIC) and McFadden's R 2 .The level of significance was set at P < 0.05. Results A total of 78 sequences were acquired from 26 complete MRI studies (four replicates for six of the seven conditions and two replicates for the positive control group).Data analyzed by MRI study type therefore generated 52 scores used for analysis, with eight scores for six out seven conditions and four scores for the positive controls. Qualitatively, all images displayed some degree of loss of definition of the palmar soft tissue both at the proximal and distal edge of the scanned area.The distinction between trabecular and cortical / subcondral bone was very good in all images.The definition of soft tissue was very similar between studies other than the positive controls, and difference between grade 1 and 2 was deemed minimal.Overall, eight replicates were scored 1 (two Bird, two Tafonius in close proximity, one Tafonius as in normal clinical cases, one negative, one monitoring only, and one Mallard). The two positive controls were associated with background noise and poor signal-to-noise ratio, which would have required repetition of scans in a clinical setting.In particular, one of the two positive controls would have required repetition of the transverse turbo multi echo (proton density and T2-weighted) sequence, while the other positive control repetition of all the sequences.In the other conditions, the artefact detected in images graded 2 or higher was lack of STIR fat suppression (16/26).Examples of sagittal T1 sequences of each grade are shown in Fig. 4 .As shown in Table 2 , scan quality was good in all groups and comparable to the negative control with only the positive control group requiring repeats due to major artefacts (grade 4). Figure 5 provides a graphic representation of the distribution of scores in the seven conditions tested, which were then grouped into positive, negative, Tafonius and non-Tafonius groups as shown in Figure 6 . The grouped conditions were included into an ordinal logistic regression model to investigated whether scores significantly differed between the Tafonius and non-Tafonius groups ( Table 3 ).No statistically significant differences in image quality were detected between the negative control and the non-Tafonius group ( P = 0.535), as well as with the use of Tafonius compared to other large animal anaesthetic machines ( P = 0.578).The only statis-tically significant difference in scores was observed between the positive control and the non-Tafonius group reference ( P = 0.006). 
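The ordinal regression itself was run in Jamovi; the following is a hedged sketch of an equivalent proportional-odds model in Python, assuming a hypothetical long-format table with columns "grade" (1-4) and "group" (negative, positive, Tafonius, non-Tafonius), and reporting the AIC and McFadden's R2 used above to judge model strength.

```python
# Hedged sketch, not the Jamovi analysis used in the study.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

def fit_ordinal(df, reference="non-Tafonius"):
    y = np.asarray(df["grade"], dtype=int)
    levels = [reference] + sorted(set(df["group"]) - {reference})
    groups = pd.Categorical(df["group"], categories=levels)
    X = pd.get_dummies(groups, drop_first=True).astype(float)   # reference level is dropped
    res = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)

    aic = 2 * len(res.params) - 2 * res.llf                     # Akaike Information Criterion
    counts = pd.Series(y).value_counts()
    ll_null = float((counts * np.log(counts / counts.sum())).sum())  # intercept-only log-likelihood
    mcfadden_r2 = 1.0 - res.llf / ll_null
    return res, aic, mcfadden_r2

# res.pvalues lists Wald p-values for each group against the chosen reference group.
```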
Similarly, when the grouped conditions were included into the ordinal logistic regression model utilizing the Tafonius group as reference, the only statistically significant difference in scores was observed between the positive control and the Tafonius group (P = 0.017) (Table 3).

Table 3. Results of ordinal logistic regression to investigate whether there was a statistically significant difference between scores in groups when compared to the non-Tafonius and to the Tafonius groups.

Discussion

We acquired a series of images of the fetlock of two equine cadaver limbs to investigate the impact of different types of large animal anaesthetic equipment on image quality using a low-field MRI system. Results of the present study showed no significant effect of the anaesthetic equipment used on image quality. In particular, our findings support the research hypothesis, suggesting that the Tafonius, when used at a clinically useful distance from the MRI system isocentre, does not appear to generate clinically significant artefacts that affect image quality. Also, on the basis of the positive control, the internal shielding of the system seemed to prove effective in protecting against major interference. The vast majority of the MRI studies acquired were of satisfactory to excellent quality (grades 1, 2, and 3), with no significant differences between groups and when compared to the negative control. Our results therefore show that various large animal anaesthetic machines, when used appropriately, do not significantly interfere with image quality when using a 0.3 T low-field MRI system with internal shielding such as the one utilized in this study. This is consistent with previously published veterinary literature [26], where the signal-to-noise ratio and the presence of artefacts were investigated in relation to the use of an equine surgery table and anaesthetic equipment utilizing an open-bore 0.2 T magnet with no internal shielding. As in our study, this work also demonstrated no statistically significant effect of anaesthetic equipment on MRI image quality. Contrary to the findings of the current study, however, the same work [26] observed a significant negative effect of anaesthetic monitoring. This highlights that the findings of the current study are applicable to the Esaote O-scan MRI system but may not directly apply to different low-field MRI acquisition systems (with different shielding properties). A particular consideration pertains to the distance between any potential source of electronic interference and the MRI isocentre: the demarcated controlled area at our institution is broader than the minimum safety distances from the MRI isocentre advised by the manufacturer. This implies that all our anaesthetic monitoring equipment, including monitoring cables (all positioned outside or on the boundaries of the controlled area), sat at a greater distance than that normally considered safe for electronic equipment and ferromagnetic material. Furthermore, the previous study was conducted in live patients, while we used fresh, refrigerated cadaver limbs.
Predictor To the best of the authors' knowledge, this is the first study specifically testing the possible interference caused by the extensive electronic components of Tafonius on MR images.In order to improve ventilation and oxygenation in anaesthetized equine patients, this state-of-the-art anaesthetic equipment enables the anaesthetist to apply advanced ventilatory strategies when required.Previous studies have demonstrated that the degree of alveolar collapse can be reduced by application of an alveolar recruitment manoeuvre (ARM) and continuous PEEP [16 , 27 , 28] .An alternative approach consists of an ARM provided by stepwise incremental and decremental peak inspiratory pressure (PIP) and PEEP, which has been found to be amongst the most effective methods to redistribute ventilation and improve oxygenation [29 , 30] .Additional strategies involve the use of continuous CPAP, which also proved effective in redistributing ventilation to the dependent lung regions, thereby decreasing ventilation/perfusion mismatch [23] .Tafonius allows the delivery and accurate titration of all the above-mentioned strategies, with the ultimate goal of improving patients' outcome.Results of our study showed no evidence that Tafonius causes poorer scan quality compared to images acquired with the other anaesthetic machines tested, hence supporting its use in a clinical context also in patients anaesthetized for MRI procedures. In the present study, only the positive controls were graded as nondiagnostic and would have been repeated in a clinical context.The presence of true positive controls is reassuring in terms of validation of our findings. However, we observed a certain degree of variability in image quality between replications, notably also amongst the positive and negative control groups.In fact, not all the positive control sequences demonstrated the same degree of artefacts, although they were conducted in similar conditions.Likewise, the negative controls failed to display the best image quality amongst all the other setups.This variability did not seem to be directly related to the anaesthetic equipment used.Additional methodological factors and sources of variability might explain variations in grading detected in our study and the lack of fat suppression as the artefact detected. Fluctuations in environmental temperature can affect the homogeneity of the magnetic field, impacting the quality of images and consistency between scans, while the temperature of tissues can alter suppression of the fat signal [31] . Ambient temperature was kept as constant as possible in the MRI room at our institution, although minor fluctuations cannot be ruled out, and temperature was not specifically recorded in the present study. The use of equine cadaver limbs may be associated with altered tissue contrast between structures compared to the live patient.This might have affected image quality due to tissue autolysis [32] thus leading to overall inferior image quality compared to live patients [33] .However, recent equine research in advanced imaging techniques on the fetlock of Thoroughbred racehorses using a LFMRI system has demonstrated that diagnostic quality of MR images doesn't significantly differ between live, fresh cadaver, and frozen/thawed tissues [32] .Moreover, the use of cadaver limbs in equine MRI studies has been widely adopted by multiple other authors and has demonstrated high diagnostic value to investigate the pathology of the equine distal limb [34][35][36][37][38][39] . 
STIR sequences are particularly affected by tissue temperature, and fat signal might not be sufficiently suppressed unless the inversion time is adjusted [31] .In our study, STIR sequences were acquired following the usual clinical protocol, with no adjustments of the TE (echo time) to accommodate the lower temperature of cadaver limbs.While this might explain why lack of suppression was frequently detected, it is worth mentioning that the same protocol was consistently applied to all scans, and approximately 40% showed adequate fat suppression.It is therefore unlikely for this aspect to account for all of the variation observed. Study planning by the operator and positioning of the area of interest at the magnet isocenter are crucial steps to optimize image quality [31] .During the research presented here, accurate and reproducible positioning of the limb as in normal clinical cases was performed by an experienced veterinary surgeon (CB) in both experimental sessions, with the fetlock placed at the MRI isocenter.As shown by our results, loss of definition was observed solely at the distal edges of the image, where the magnetic field homogeneity is lower compared to the center of the field of view.Furthermore, while the use of multiple observers might have strengthened our findings, we believe the use of a single operator (BT) to run all the sequences minimized the variability associated with study planning. Other electrical activity in the room or in the adjacent rooms might have affected the homogeneity of the magnetic field, introducing a source of variability.We ensured that electrical silence was present in the room during images acquisition, and all doors leading to the MRI suite were kept close at all times.We could not control activities (which also encompassed the use of electronic equipment) in adjacent rooms, and although this might explain part of the variability we observed in our studies, no zipper artefacts were detected in any of our images. Finally, regular servicing of the machine, for example shimming, is important to optimize the magnet homogeneity [31] .During the present project, all the sequences were acquired in close succession, hence this aspect should be relatively consistent across the study. Overall, the considerations elucidated above can account for relatively minor variations (especially within grade 1 and 2) which we believe would have limited relevance in a clinical context. 
There are a number of limitations in our study to acknowledge. Firstly, we acknowledge the relatively low sample size, especially for the positive controls. As this represents pilot work, an a priori sample size calculation could not be performed. However, considering that we endeavoured to keep confounding variables constant and that the focus was solely on the presence of artefacts, we feel a higher number of scans would be unlikely to yield different results. This preliminary study will be valuable to inform prospective clinical studies using non-cadaver limbs. Secondly, only one radiologist reviewed and scored the scans. Nonetheless, scoring was performed by a specialist veterinary radiologist with extensive experience specifically in equine imaging, who was unaware of group assignment. The presence of artefacts was also graded utilizing a subjective scoring system adapted from a previously published scoring system [3], and detailed definitions were provided for each grade. Thirdly, some other factors possibly affecting scan quality were not fully controlled or recorded, such as temperature and activity in adjacent rooms. We suspect all these factors were responsible for only minor, non-clinically significant variations, and they might explain the baseline variability observed in our study.

Conclusions

Our results suggest that MRI scan quality is not affected by anaesthetic equipment when this is used at the recommended distances from the MRI isocentre. In particular, there is no clear evidence that the extensive electronic components of the Tafonius are responsible for clinically relevant artefacts. Based on our results, use of the Tafonius is suitable in a clinical context during acquisition of images with a 0.31T MRI system equipped with internal shielding.

Financial disclosure

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Fig. 1. Foot of an equine limb covered with a light bandage and positioned into the coil of a 0.31T Esaote O-Scan equine MRI scanner with foam positioners, patient-side (A) and operator-side (B).

Fig. 2. Schematic representation of the setup for the seven conditions under which the MRI images were acquired. In all the conditions, distance A measures 120 cm (Z axis), distance B measures 198 cm (Y axis), and distance C measures 232 cm (distance from the magnet isocentre to each corner of the controlled area); (A) negative control; (B) positive control, distance D (front frame of the Tafonius to the magnet isocentre) measures 91 cm; (C) Datex S5 monitor only, distance D measures 268 cm; (D) Mallard anaesthetic machine with Datex S5 monitor, distance D measures 244 cm; (E) Bird anaesthesia ventilator with Datex S5 monitor, distance D measures 248 cm; (F) Tafonius positioned as in clinical cases, distance D measures 243 cm; (G) Tafonius in close proximity, distance D equals C and measures 232 cm.

Fig. 3.
Set up of 6 of the seven conditions tested (the negative control is not displayed). (A) Positive control: MRI RF shields removed and Tafonius positioned within the controlled area; (B) Standard monitoring only: Datex S5 monitor on, monitoring equipment, and equine surgery table positioned as in normal clinical cases; (C) Mallard anaesthesia machine positioned as for standard clinical cases; (D) Bird anaesthesia ventilator positioned as for standard clinical cases; (E) Tafonius with integrated monitor on placed as for standard clinical cases; (F) Tafonius in close proximity (i), with detail of the front frame on the boundaries of the controlled area (ii).

Fig. 4. Sagittal turbo 3D T1-weighted magnetic resonance images of the fetlock of an equine cadaver limb showing differences in image quality as graded according to the scoring system utilized in our study.

Fig. 5. Box plot of the distribution of scores split by soft tissue and bone scan type in the seven conditions tested. The box represents the 25th-75th quartile (interquartile range), the horizontal line within the box represents the median, the vertical lines (whiskers) represent the minimum and maximum values, and outliers are shown as dots beyond the whiskers.

Fig. 6. Box plot of the distribution of scores in the grouped conditions (negative, positive, Tafonius, and non-Tafonius).

Table 1. Scoring system for the presence of artefacts with the use of the Esaote 0.31T O-scan. Adapted from Byrne et al 2021.
Grade 1 (ideal quality): The study is characterized by optimal tissue definition with no artefacts detected. Images in the study are of ideal quality*. The study would not be repeated in a routine clinical context.
Grade 2 (high diagnostic quality): Mild loss of tissue definition with presence of artefacts that do not limit interpretation. The study would not be repeated in a routine clinical context.
Grade 3 (satisfactory diagnostic quality): Moderate loss of tissue definition with presence of artefacts that do not limit interpretation. The study would not be repeated in a routine clinical context.
Grade 4 (non-diagnostic): Presence of artefacts severely affects the study and prevents assessment of significant structures. The study would be repeated in a routine clinical context.

Table 2. Descriptive statistics of the scores generated for all the conditions tested.
2023-03-17T15:07:12.207Z
2023-04-01T00:00:00.000
{ "year": 2023, "sha1": "e561bb52f0b2bf62fa7c0ce1bf78489e301e92b9", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.jevs.2023.104492", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "e7b1b6ccdc4c6ea8e2005203f2ea451ab6858d8e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
13632994
pes2o/s2orc
v3-fos-license
Variation in the TLR10/TLR1/TLR6 Locus is the Major Genetic Determinant of Inter-Individual Difference in TLR1/2-Mediated Responses Toll-like receptor (TLR)-mediated innate immune responses are important in early host defense. Using a candidate gene approach, we previously identified genetic variation within TLR1 that is associated with hyper-responsiveness to a TLR1/2 agonist in vitro and with death and organ dysfunction in patients with sepsis. Here we report a genome-wide association study designed to identify genetic loci controlling whole blood cytokine responses to the TLR1/2 lipopeptide agonist, Pam3CSK4 ex vivo. We identified a very strong association (p<1×10−27) between genetic variation within the TLR10/1/6 locus on chromosome 4, and Pam3CSK4-induced cytokine responses. This was the predominant association explaining over 35% of the population variance for this phenotype. Notably, strong associations were observed within TLR10 suggesting genetic variation in TLR10 may influence bacterial lipoprotein-induced responses. These findings establish the TLR10/1/6 locus as the dominant common genetic factor controlling inter-individual variability in Pam3CSK4-induced whole blood responses in the healthy population. Introduction The innate immune system provides early recognition of microbial pathogens important to host defense. Toll like receptors (TLRs) play a key role in host defense, providing a mechanism to respond to highly conserved pathogen-associated molecular patterns (PAMPs). 1 In humans, there are ten unique TLR genes coding for receptors that initiate responses to PAMP ligands a robust inflammatory response. TLR2 heterodimerizes with TLR6, TLR1, and possibly TLR10, and these combinations facilitate the recognition of multiple distinct bacterial patterns diversifying innate immune sensing. [2][3][4] The importance of TLR2 in host defense has been well-established in mice where its deficiency has been associated with increased susceptibility to mycobacterial infection, pneumococcal meningitis, and sepsis due to Staphylococcus aureus and Listeria monocytogenes. [5][6][7][8] TLR1/2 and TLR2/6 heterodimers can discriminate the acylation of bacterial lipopeptides recognizing triacyl-and diacyl-lipopeptides respectively. 2,[9][10][11] The synthetic triacyl lipopeptide N-palmitoyl-S-dipalmitoylglyceryl Cys-Ser-(Lys) 4 (Pam 3 CSK 4 ) and diacyl lipopeptide Fibroblast Stimulating Ligand-1 (FSL-1) derived from Mycoplasma salivarium have been shown to stimulate via TLR1/2 and TLR2/6 heterodimers. 2,12 Additionally, TLR2/6 heterodimers recognize peptidoglycan (PGN) and a yeast cell wall particle, Zymosan. 13,14 A role for TLR1/2 and TLR2/6 in human disease has been suggested by candidate gene studies. We and others have demonstrated that there exists high inter-individual variability in terms of human leukocyte inflammatory responses to PAMPs 15,16 and that a portion of this variability is attributable to common genetic variants. Genetic variation in TLR2 has been shown to confer reduced responses to peptidoglycan and heat-killed S. aureus in vitro. 17 More recently, we have demonstrated that variants in TLR1 are highly associated with Pam 3 CSK 4 -induced whole blood cytokine production. We reported that common genetic variants in TLR1 conferred marked hyper-responsiveness to Pam 3 CSK 4 and these same variants were associated with increased risk of organ dysfunction and death in septic shock. 
15,18 Other studies have demonstrated associations between genetic variation in TLR1 with susceptibility to leprosy and tuberculosis. 19,20 These data support a role for TLR1/2mediated responses in human disease. However, to date, our understanding of the role for genetic variation in TLR-mediated responses has been based on targeted candidate gene studies. Thus, in order to more comprehensively assess the genetic factors controlling TLR2-mediated responses in the healthy human population we undertook a genome wide association study to identify loci modifying Pam 3 CSK 4 -induced cytokine production in whole blood ex vivo. Results We employed samples from 360 healthy Caucasian subjects who had an average age of 35±14 years and were 39% male. Given that many innate immunity genes demonstrate population differences in allele frequencies including the genes coding for TLRs, 21 we performed principal components analysis (PCA) to address the possibility that there might exist population admixture within our genotyped subjects. PCA revealed that subjects who self-reported as Caucasian cluster with Caucasians from Utah (CEU) and the Toscani in Italia (TSI) populations from the HapMap3 collection 22 (Supplemental Figure 1). However, we did identify associations between eigenvalues from the first three principal components and TLR agonist-induced cytokine production and so these eigenvalues were used as covariates in the multiple linear regression models for the GWAS. We used a genome wide association test adjusted for age, gender and the first 3 principal components, and identified 19 SNPs within the TLR10/1/6 locus on chromosome 4 that were associated with Pam 3 CSK 4 induced IL-6 ( Figure 1A), IL-1β, and TNF-α production in whole blood (Supplemental Figure 2) at a genome-wide level of significance (p = 1 × 10 −8p = 1 × 10 −27 ) ( Table 1). No other loci achieved associations at a genome-wide level of significance including SNPs found in genes involved in TLR1/2 signaling such as TIRAP, IRAK4, and IRAK1 that we had anticipated a priori would be associated with the cytokine induced phenotypes (Table 2). Notably, all cytokine values obtained from the whole blood assay were normalized to a monocyte count obtained from the donor at the time of phlebotomy. In this way we mitigated the chances of identifying variation that merely affected the number of circulating monocytes. We next sought to identify loci associated with responses to TLR2/6 ligands FSL-1, PGN, and Zymosan in 167 subjects for whom we had measured whole blood responses to these ligands. We did not identify any associations reaching genome-wide significance ( Figure 1B) and, notably, no SNPs within the TLR10/1/6 locus or TLR2 were even nominally associated (p>0.05) with responses to these ligands. Nonetheless, there were several moderately strong associations detected at other genomic loci with these cytokine responses ranging from p=1.55×10 −6 (Zymosan-induced IL-6), p=3.30×10 −6 (FSL-induced IL-6) to p=4.37×10 −6 (PGN-induced IL-6). Since these analyses included fewer subjects than the GWAS of Pam 3 CSK 4 -induced responses we re-ran the GWAS of Pam 3 CSK 4 -induced IL-6 using only these 167 subjects. 
This analysis still identified multiple SNPs that were associated at a genome-wide level of significance (p<4.7×10 −12 ) demonstrating that while statistical power for this sub-study may have been limiting, the associations with Pam 3 CSK 4 -induced responses are orders of magnitude stronger than any associations with TLR2/6 agonist-induced responses. In order to identify SNPs within the TLR10/1/6 locus not directly genotyped by our platform that may be driving the observed associations with Pam 3 CSK 4 -induced cytokine production, we used imputation to infer missing genotypes on chromosome 4 using 1000 genomes NCBI Build 37 23 as a reference population. These imputed SNPs were tested for association with the Pam 3 CSK 4 -induced cytokine phenotypes. We observed a 222kb region across the TLR10/1/6 locus that was associated with Pam 3 CSK 4 -induced IL-6 at a genome wide level of significance ( Figure 2). The SNP most highly associated with hypermorphic responses was rs67719080 (p=1×10 −27 ), an intergenic SNP between TLR10 and TLR1. Of the SNPs that fell within genes, SNPs within TLR10 were most highly associated with hypermorphic cytokine responses ( Figure 2). The most highly associated TLR10 coding SNP was rs4129009 (TLR10 2323A/G ), a non-synonymous polymorphism that causes an amino acid change in the highly conserved Toll/Interleukin-1 receptor (TIR) domain. Individuals homozygous for the rare allele had increased IL-6 production consistent with a hypermorphic response ( Figure 3). In addition to the TIR domain SNP, we also identified a missense SNP in TLR10, rs11096955 (I369L), near leucine-rich repeat 9 (LRR9: aa 349-368) of TLR10 that was strongly associated with hypermorphic responses to Pam 3 CSK 4 (p=5.36×10 −16 ). Coding SNPs within TLR1 were also highly associated with the Pam 3 CSK 4 -induced cytokine phenotype including rs4833095 (TLR1 742A/G ) and rs5743618 (TLR1 1805G/T ) but were not in high LD with the TLR10 coding SNP rs4129009 (Table 3) suggesting a distinct association. Notably, rs5743551 a SNP found 5′ to TLR1 that we have previously shown to be highly associated with death and organ dysfunction in sepsis was also highly associated (p=2.8×10 −24 ). Finally, we also found a strong association with a non-synonymous variant in TLR6 (rs5743818, TLR6 1932T/G ) and Pam 3 CSK 4 -induced responses (p=1.28×10 −9 ). This SNP was not found to be in high linkage disequilibrium with the other most-highly associated coding SNPs in TLR1 (R 2 =0.11) and TLR10 (R 2 =0.08) ( Table 3). Discussion In this genome-wide association study, we found that the TLR10/1/6 region on chromosome 4 is the dominant common genetic locus controlling inter-individual variation in responses to Pam 3 CSK 4 in whole blood from healthy subjects ex vivo. While the genes coding for TLRs are distributed throughout the genome, TLR10, TLR1, and TLR6 cluster at a locus on chromosome 4p14. Evidence suggests that this tandem arrangement arose from a gene duplication event. 24 Notably, all three of these genes have significant allelic heterogeneity with an abundance of rare variants that may indicate an influence of purifying selection. 24 In addition, there exist significant geographic differences in genetic variation between European populations within the TLR10/1/6 locus. 
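The pairwise R2 values in Table 3 were calculated with the Golden Helix package; for illustration, a composite LD estimate can also be approximated directly from unphased genotype dosages, as in the sketch below (function and variable names are assumptions, not the study's code).

```python
# Minimal sketch of a composite LD estimate between two SNPs from unphased genotype
# dosages (0/1/2 copies of the minor allele), e.g. rs4833095 versus rs4129009.
import numpy as np

def ld_r2(g1, g2):
    """Squared Pearson correlation of dosages as a composite LD (r^2) estimate."""
    g1, g2 = np.asarray(g1, dtype=float), np.asarray(g2, dtype=float)
    keep = ~(np.isnan(g1) | np.isnan(g2))          # drop samples with missing calls
    r = np.corrcoef(g1[keep], g2[keep])[0, 1]
    return r ** 2
```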
21 However, our principal components analysis shows that our subjects clustered with Caucasian populations in HapMap3 and our adjustment with principal components in the linear regression suggests that the association testing is not confounded by cryptic population substructure. Among the SNPs within TLR1 showing the strongest associations in our study were several that have been previously associated with susceptibility to leprosy (rs5743618) 25 , risk for prostate cancer and placental malaria (rs4833095). 26,27 These findings are consistent with the assertion that functional responses mediated by TLR1/2 heterodimers might drive important biologic responses and alter risk for disease. We were more surprised to find strong associations with coding SNPs within TLR10 as there is no known ligand specific for TLR10 and it is not known that TLR10 ligation actually generates an intracellular response. 4,28 These findings suggest that SNPs within TLR10 may contribute to associations between disease susceptibility and the TLR10/1/6 locus. The most highly associated non-synonymous SNP in TLR10, rs4129009 causes an amino acid change in the TIR domain of the intracellular portion of the protein. The TIR domain is critical for intracellular signaling in other TLR family members. 29,30 A recent study has shown that a chimeric receptor containing the extracellular domain of TLR10 and the intracellular domain of TLR1 (including the TIR domain) induced a cellular response to Pam 3 CSK 4 comparable to wild-type TLR1. 4 This study suggests that the extracellular portion of TLR10 recognizes Pam 3 CSK 4 but that the intracellular portion of TLR10 does not translate this recognition event to an intracellular signal. Our study shows that individuals homozygous for the rare allele of rs4129009 in TLR10 have increased cytokine responses suggesting that this genetic alteration of the TIR domain may result in a functionally active TLR10 molecule. Of note, this SNP has previously been reported to be associated with decreased risk of atopic asthma. 31 In addition to this SNP in the TIR domain, we identified another highly associated missense SNP in TLR10, rs11096955 (I369L), near LRR9, that could alter ligand binding. In order to best identify whether the TLR10 signal is an independent association, future research should be aimed at other racial groups where haplotype blocks in these region are smaller. Future work will need to more finely delineate whether SNPs in TLR10 or TLR1 (or both) are causally responsible for the associations observed. However, due to moderate LD, conditional regression analysis adjusting for the top SNPs in this analysis was underpowered to detect independent associations. The importance of genetic variation in TLR genes and downstream TLR signaling genes is highlighted by candidate gene studies that have demonstrated associations between variants in these genes and diseases for which host defense and inflammation is pathologic. With respect to genes encoding the TLR1/2 heterodimer, functional polymorphisms within the TLR10/1/6 locus and TLR2 have been associated with altered susceptibility to the mycobacterial infections of leprosy and tuberculosis. 19,20,32 A TLR1 polymorphism (rs5743618, Ser602Ile) that mediates higher levels of signaling and cell surface expression 15,19 is associated with protection from recurrent urinary tract infection and pyelonephritis. 
33 In sepsis, where severe infection leads to overwhelming inflammation and end-organ dysfunction, a TLR 1 polymorphism (rs5743551) associated with marked hyperresponsiveness has been associated with risk of death and organ dysfunction and sepsis induced acute lung injury. 15,18 Outside of infectious diseases, polymorphisms within the TLR10/1/6 locus have been variably associated with prostate cancer, non-Hodgkin lymphoma, Crohn's disease, asthma, and chronic sarcoidosis. 26,31,[34][35][36][37][38][39][40] Our findings that the TLR10/1/6 locus explains a large portion of population variance in TLR1/2-mediated responses in vitro provides additional support for the importance of this locus in human disease. Several previous reports have demonstrated associations between disease risk and genetic variation in TLRs and genes of the TLR intracellular signaling pathway including TLR2, TIRAP, IRAK4, and IRAK1. [41][42][43] In spite of these previous findings, we detected only a nominally significant association with variants in some TLR-related genes ( Table 2). It should be noted that this study was designed to have adequate statistical power to detect associations with common genetic factors (MAF >5%). This study is inadequately powered for detection of associations with rare genetic variants (MAF<1%) and, therefore, we cannot exclude the possibility that rare variants within these or other genes may also play a role in modulating these effects. Nonetheless, our findings suggest that common genetic variation in TLR pathway genes outside of the TLR10/1/6 locus play only a minor role in modifying TLR1/2 responses in the Caucasian population. In summary, our study shows that genetic variation within the TLR10/1/6 locus is the major common genetic factor explaining inter-individual variation in TLR1/2-mediated cytokine responses to Pam 3 CSK 4 in vitro. We find that the mostly highly-associated SNPs fall within TLR10 and that some of these SNPs are located in or near important functional domains (TIR domain and LRR9) of TLR10 suggesting that this receptor might have functional relevance. Overall, this study supports ongoing efforts to understand the importance of this locus to human diseases involving innate immunity. Study Subjects We used DNA samples and innate immune response phenotypes collected from 360 healthy Caucasian volunteers recruited from the Seattle metropolitan area from whom written informed consent was obtained. This was approved by the University of Washington Human Subjects Committee. This population has been previously described by our group. 15 Genotyping and Imputation Genomic DNA was genotyped using the Illumina ™ Human 1M Beadchip array. In addition, we imputed genotypes on chromosome 4 not present on the array with the BEAGLE software package version 3.3 44 using EUR genotypes from 1000 Genomes 23 as a reference. Quality Control Quality control was performed as described by Anderson et al. 45 We assessed for discordance between reported sex and genotype-determined sex, excess autosomal heterozygosity, excess relatedness (identity by descent of > 0.1875), and population substructure using principal components analysis (PCA) and removed 14 subjects resulting in a total of 346 subjects. All subjects had a genotype call rate of over 97%. The 561,491 SNPs were filtered to remove all SNPs with a minor allele frequency (MAF) <0.05, Hardy-Weinberg equilibrium p<0.001, or a call rate ≤ 0.90 resulting in 493,197 SNPs that were used for association testing. 
Imputed SNPs for chromosome 4 were filtered for an allelic R 2 of 0.85. Data analysis We tested for associations between genome-wide genotypes and log 10 -transformed, monocyte normalized, cytokine values by multiple linear regression assuming additive effects. Subjects and SNPs passing QC filtering were tested for association with Pam 3 CSK 4 -induced, monocyte-normalized, whole blood cytokine production adjusting for covariates including age, gender and eigenvalues from the first three principal components generated by PCA clustering subjects with samples from HapMap3 (Release 3, NCBI build 36). 22 Correcting for multiple tests, we considered a p< 1 × 10 −8 to be indicative of genome-wide significance. We assigned p-values to TLR signaling genes anticipated a priori to be associated with the cytokine phenotype by choosing the p-value of the highest SNP within a 50kB range from the 5′ and 3′ end of the gene. All above analyses were performed and linkage disequilibrium calculated using the Golden Helix ™ software package. Supplementary Material Refer to Web version on PubMed Central for supplementary material. Genes Immun. Author manuscript; available in PMC 2013 July 01. 1 TLR and TLR signaling genes anticipated to be associated with the agonist-induced cytokine concentration. 2 For each gene, a window 50Kb from either end of the gene was included to select the most highly associated SNP. 3 SNP most highly associated within the gene range. Asterisk signifies the SNP was imputed. Most highly associated coding SNPs to PAM3CSK4-induced IL-6. 2 Adjusted for age, gender, and eigenvalues from first 3 principal components. 3 Linkage disequilibrium (R 2 ) between each SNP and the highest TLR1 coding SNP rs4833095
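For illustration, the SNP quality-control filters and the per-SNP additive-model regression described under Quality Control and Data analysis might be sketched as follows; the data layout, column names and row alignment are assumptions, and this is not the Golden Helix workflow actually used.

```python
# Assumptions: `geno` is a samples x SNPs DataFrame of dosages (0/1/2 or NaN), `pheno`
# holds log10 monocyte-normalized cytokine values, and `covars` holds age, sex and the
# first three principal components; rows are assumed to be in the same sample order.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2

def passes_qc(g, maf_min=0.05, call_min=0.90, hwe_p_min=0.001):
    called = g[~np.isnan(g)]
    if len(called) / len(g) <= call_min:           # remove call rate <= 0.90
        return False
    p = called.mean() / 2.0                        # frequency of the allele coded as 1
    if min(p, 1 - p) < maf_min:                    # remove MAF < 0.05
        return False
    # 1-df chi-square test for Hardy-Weinberg equilibrium.
    n = len(called)
    obs = np.array([(called == 0).sum(), (called == 1).sum(), (called == 2).sum()])
    exp = n * np.array([(1 - p) ** 2, 2 * p * (1 - p), p ** 2])
    stat = ((obs - exp) ** 2 / np.maximum(exp, 1e-12)).sum()
    return chi2.sf(stat, df=1) >= hwe_p_min        # remove HWE p < 0.001

def snp_assoc(pheno, covars, geno):
    """Per-SNP additive-model linear regression adjusted for covariates."""
    pvals = {}
    for snp in geno.columns:
        if not passes_qc(geno[snp].to_numpy(dtype=float)):
            continue
        X = sm.add_constant(pd.concat([geno[[snp]], covars], axis=1), has_constant="add")
        res = sm.OLS(pheno, X, missing="drop").fit()
        pvals[snp] = res.pvalues[snp]
    return pd.Series(pvals).sort_values()
```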
2017-11-08T21:56:40.077Z
2012-10-05T00:00:00.000
{ "year": 2012, "sha1": "c306501570d1706106bed7aec0844b3f4441b9be", "oa_license": null, "oa_url": "https://www.nature.com/articles/gene201253.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "c306501570d1706106bed7aec0844b3f4441b9be", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
10831935
pes2o/s2orc
v3-fos-license
Early decreased TLR2 expression on monocytes is associated with their reduced phagocytic activity and impaired maturation in a porcine polytrauma model In their post-traumatic course, trauma patients suffering from multiple injuries have a high risk for immune dysregulation, which may contribute to post-injury complications and late mortality. Monocytes as specific effector cells of the innate immunity play a crucial role in inflammation. Using their Pattern Recognition Receptors (PRRs), notably Toll-Like Receptors (TLR), the monocytes recognize pathogens and/or pathogen-associated molecular patterns (PAMPs) and organize their clearance. TLR2 is the major receptor for particles of gram-positive bacteria, and initiates their phagocytosis. Here, we investigated the phagocytizing capability of monocytes in a long-term porcine severe trauma model (polytrauma, PT) with regard to their TLR2 expression. Polytrauma consisted of femur fracture, unilateral lung contusion, liver laceration, hemorrhagic shock with subsequent resuscitation and surgical fracture fixation. After induction of PT, peripheral blood was withdrawn before (-1 h) and directly after trauma (0 h), as well as 3.5 h, 5.5 h, 24 h and 72 h later. CD14+ monocytes were identified and the expression levels of H(S)LA-DR and TLR2 were investigated by flow cytometry. Additionally, the phagocytizing activity of monocytes by applying S. aureus particles labelled with pHrodo fluorescent reagent was also assessed by flow cytometry. Furthermore, blood samples from 10 healthy pigs were exposed to a TLR2-neutralizing antibody and subsequently to S. aureus particles. Using flow cytometry, phagocytizing activity was determined. P below 0.05 was considered significant. The number of CD14+ monocytes of all circulating leukocytes remained constant during the observational time period, while the percentage of CD14+H(S)LA-DR+ monocytes significantly decreased directly, 3.5 h and 5.5 h after trauma. The percentage of TLR2+ expressing cells out of all monocytes significantly decreased directly, 3.5 h and 5.5 h after trauma. The percentage of phagocytizing monocytes decreased immediately and remained lower during the first 3.5 h after trauma, but increased after 24 h. Antagonizing TLR2 significantly decreased the phagocytizing activity of monocytes. Both, decreased percentage of activated as well as TLR2 expressing monocytes persisted as long as the reduced phagocytosis was observed. Moreover, neutralizing TLR2 led to a reduced capability of phagocytosis as well. Therefore, we assume that reduced TLR2 expression may be responsible for the decreased phagocytizing capacity of circulating monocytes in the early post-traumatic phase. Introduction Traumatically induced tissue damage leads to an inflammatory response. Inflammation itself is not detrimental but rather necessary for the resolution of injury and the healing progress. This complex process integrates and coordinates cytokines, chemokines and immune cells to deal with the damage. [1,2] One attempt to characterize this inflammatory reaction was the concept of a trauma-induced hyper-inflammatory systemic inflammatory response syndrome (SIRS) and the counterbalancing hypo-inflammatory state (compensatory anti-inflammatory response syndrome, CARS) in the later clinical course. [1,3] The balance between pro-and anti-inflammatory components, the SIRS-CARS paradigm, was assumed to be crucial for a successful recovery and a positive outcome. 
[1,4,5] However, the classification terms of SIRS and CARS are more than 20 years old, and are only of limited usefulness to describe the patient's current immune status because they do not always correlate well with immunofunctional parameters. Moreover, the injured tissue releases a large number of soluble factors that act on the endocrine, lymphoid and haematopoietic organs as well. [2] Monocytes play a crucial role in the early immune response after trauma and infection. On one hand they constitute a cellular link between the innate and the adaptive immune system in case of infection, and on the other hand, are capable of recognizing pathogens or pathogen-associated molecular patterns (PAMPs) directed by their pattern recognition receptors (PRRs), and subsequently inactivate invading pathogens by phagocytosis. [6][7][8] The impact of trauma onto the function of monocytes is still not clear and can be conflicting. Anupamaa Seshadri et al. (2017) found significantly increased levels of circulating monocytes in trauma patients in the first five days after trauma compared to healthy volunteers. [9] Conversely, they found a significant depression of monocytic cytokine production (TNF-α, IL-1β) as well as significantly impaired expression of MHC-II molecules on the surface, while the phagocytic capability was not affected over the 5 days post-trauma. [9] Heftrig et al. (2017) reported different results for the circulating levels of monocytes, while a constant depression of MHC-II expression as well as an impaired production of IL-1β over the time course of ten days after trauma compared to healthy volunteers was observed. [10] An important subset of PRRs are Toll-like Receptors (TLRs), of which there are 10 known types in humans. [11][12][13] TLR2 is the major receptor for bacterial lipoproteins, lipopeptides and lipoteichoic acid (LTA), which are common for gram-positive bacteria. [14] S. aureus is the most common gram-positive bacterium, which causes nosocomial infections like pneumonia and sepsis, and is therefore highly associated with increased morbidity and mortality in chronically ill patients. [15][16][17] Furthermore, S. aureus is feared for its potential to infect wounds and enter the bloodstream after trauma or surgery. [18,19] Phagocytosis seems to be initiated by the activation of PRRs, and in particular TLRs. [20] Several studies show an impaired phagocytic capability of monocytes in the post-traumatic course as described below. [21,22] The data is inconsistent regarding TLR2 expression on monocytes after trauma. Perez- Barcena et al. (2010) have shown increased levels of TLR2 on monocytes in trauma patients compared to healthy volunteers over a time course of 14 days. [21] They further have shown a significantly decreased expression of TLR2 in patients who developed any infection compared to those without infections. [21] Furthermore, they have shown an impaired phagocytic activity of monocytes during the 14 days in trauma patients compared to healthy volunteers. [21] In contrast to those findings, Adib-Conquy et al. (2003) reported a constant TLR2 expression on monocytes in trauma patients at admission compared to healthy volunteers. [23] Other studies have shown an impaired expression of TLR2 in trauma patients during the first 48 hours or over a time course of 10 days after trauma compared to healthy volunteers. 
[10,24] Expression of the antigen presenting human leukocyte antigen (HLA) molecules, the cell surface proteins human major histocompatibility complex MHC-II, has been known for a long time and is well described on human monocytes from healthy volunteers. [25,26] However, early after hemorrhagic shock or severe abdominal surgery, an impaired MHC-II expression on macrophages and specifically on monocytes has been reported. [27,28] With regard to trauma, a decreased MHC-II expression in human monocytes after trauma has been reported. [10,29] Nonetheless, the expression profile of H(S)LAs, the porcine MHC molecules, still remains not fully discovered, and even less is known about their behavior after trauma. MHC-I and MHC-II molecules were first observed on monocytes in pigs by Chamorro et al. [30] Raymond et al. (2005) also described the expression of MHC-II on porcine monocytes, showing an increase after adding both lipopolysaccharide (LPS) or lipoteichoic acid (LTA). [31] To summarize, there are alterations in TLR2 and MHC-II expression, as well as modulations in phagocytizing behavior of monocytes after severe trauma, and the discrepancy between the results in previous studies is tremendous. And still, mechanisms are not discovered yet, and even less is known with regard to the experimental polytrauma model in a large animal. The varied findings in the mentioned studies above could be due to the uncontrollable nature of trauma and its complications. Several studies only ackowledged one time point in the clinical course (e.g. admission or in the first 48 hours after admission). Furthermore, there are very few studies that combine phenotyping of immune cells with their physiological function after trauma. Therefore, in our experiment, the expression of MHC-II (SLA-DR) and TLR2 on circulating monocytes were measured in a time course of 72 hours after severe trauma in a controlled porcine long term polytrauma model. In parallel, the phagocytizing capacity of the monocytes was evaluated as one of their physiological function. Furthermore, the direct association of TLR2 with phagocytosis was analyzed. In this prospective study, we hypothesized we would observe similar results in the individual parts of the study as in previous human trauma studies with two goals: first to establish the porcine polytrauma model as a way to further investigate patients' post-traumatic physiology immune response, and second to associate our individual parts of the study to get more knowledge about mechanistics behind post-traumatic immune (dys-) regulation. Ethics The experiments were authorized by a responsible government authority ("Landesamt für Natur, Umwelt und Verbraucherschutz": LANUV-NRW, Germany: AZ TV-Nr.: 84-02.04. 2014.A265) and performed in compliance with the federal German law with regards to the protection of animals, Institutional Guidelines and the criteria in "Guide for the Care and Use of Laboratory Animals" (Eighth Edition The National Academies Press, 2011). [32] In our study, we handled the animals consistently in accordance with the ARRIVE guidelines. [33] Animal experiments were performed at the Institute for Laboratory Animal Science & Experimental Surgery, RWTH Aachen University, Germany. Animals A total of twelve male German landrace pigs (Sus scrofa; 3 months old, 30 ± 5 kg) from a disease-free barrier breeding facility were included in this study. 
Placed in air conditioned rooms, all animals were examined by a veterinarian and allowed to acclimatize to their environment for at least 7 days prior to the experiments. The night before the experiments, the pigs were fasted but had a free access to water. This study presents partial results obtained from a large animal porcine multiple trauma model. The model has been previously described in detail by Horst et al. [34] Anesthesia and preparation Animals were pre-medicated with an intramuscular (IM) application of Azaperone (Stresni TM , Janssen, Germany) in a dose of 4 mg/kg. Anesthesia was induced with an intravenous injection of Propofol (3 mg/kg) followed by orotracheal intubation (7.5 ch tube, Hi-Lo Lanz TM ). During the study period over 72 h, anesthesia and analgesia was continuously maintained with intravenous (IV) injection of Propofol and Sufentanil at a sufficient level to prevent any periods of pain or consciousness. The animals were not awakened at any time point after the induction of polytrauma. The animals were ventilated on volume control mode (Draeger, Evita, Lübeck, Germany) with room air at a tidal volume setting of 6-8 ml/kg, positive end expiratory pressure (PEEP) of 8 mmHg (plateau pressure < 28 mmHg), and pCO 2 of 35-45 mm Hg as previously described. [34] Catheters were aseptically inserted in multiple locations: the external jugular vein for administration of fluids, anesthesia and continuous monitoring of central venous pressure (CVP, central venous catheter 4-Lumen Catheter, 8.5 Fr., ArrowCatheter, Teleflex Medical, Germany), the right femoral vein to induce hemorrhage (3-Lumen hemodialysis, 12.0 Fr., ArrowCatheter, Teleflex Medical, Germany) and into the femoral artery for continuous blood pressure monitoring (4.0 Fr. arterial line catheter, Vygon, Germany). A urinary catheter was also inserted in the bladder (12.0 Fr, Cystofix, Braun, Melsungen, Germany). Crystalloid fluid (Sterofundin ISO 1 ) was used for continuous fluid management (2 ml kg/BW/h). The baseline measurements were acquired after instrumentation and calibration, prior to starting experimentation. Induction of polytrauma The polytrauma was induced as previously described. [34] In brief, antibiotic prophylaxis (Ceftriaxon1 2 g) was administered before surgery and after every 24 h until sacrifice. Prior to initial trauma induction, the fraction of inspired O 2 (FiO 2 ) was set at 0.21 and the fluid administration was reduced to 10 ml/h. At this phase, the animals were allowed to descend into a hypothermic state following hemorrhagic shock period mimicking the pre-clinical scenario. The animal was positioned on the right side and a femur fracture was induced with a bolt shot on the right hind leg (Blitz-Kerner, turbocut JOBB GmbH, Germany, 9x17, Dynamit Nobel AG, Troisdorf, Germany). After being placed back in the dorsal position, blunt thoracic trauma with a bolt shot on the right dorsal lower thorax was induced. Finally, a midline-laparotomy and uncontrolled bleeding for 30 seconds after crosswise incision of the caudal liver lobe (4.5 x 4.5 cm) was induced. Using sterile gauze-compresses the liver was packed. Pressure-controlled haemorrhagic shock using exsanguination from right femoral artery was performed until a mean arterial blood pressure (MAP) of 40 ± 5 mm Hg was reached and maintained for 90 minutes. 
Resuscitation started immediately after hemorrhagic shock by adjusting FiO 2 to baseline values, and re-infusing with previously withdrawn blood and additional fluids (Sterofundin ISO 1 ; 2 ml kg/BW/h). Rewarming was performed using forced-air warming systems until normothermia was reached (38.7-39.8˚C). After experimentation, clinical treatment of the open femur fracture was performed according to established trauma guidelines. The intensive care and complications management followed the standardized clinical protocols according to the latest recommendations of the European Resuscitation Council and Advanced Trauma Life Support (ATLS). [35,36] After the observational period the animals were euthanized by using potassium chloride until cardiac arrest. Blood sampling Blood samples were withdrawn directly after surgery in ethylenediaminetetraacetic acid (EDTA) tubes (Sarstedt, Nürmbrecht, Germany) before trauma induction as control at 3.5 h, 5.5 h, 24 h and 72 h after trauma. The samples were kept at room temperature until subsequent analyses. Ex vivo in vitro whole blood stimulation for phagocytosis assay Blood samples (40 μl) were transferred into polystyrene FACS tubes (BD Pharmingen TM ) and incubated with 40 μl of pH Rodo red (pHrodo1 Red S. aureus BioParticles1 Conjugate for Phagocytosis, ThermoFisher, Germany) for 1 h at 37˚C, 5% CO 2 according to the manufacturer's instructions. A negative control without S. aureus Red BioParticles Conjugate was included. Afterwards, 1 ml of FACS lysing solution (FACS Lysing Solution, 1:10, BD Pharmingen TM , Heidelberg, Germany) was added, followed by another incubation step (10 minutes) for red blood cells lysis. Thereafter, 2 ml of phosphate buffered saline (PBS) were added and the samples were centrifuged at 800G for 8 minutes at room temperature. Subsequently, the samples were washed with 3 ml of PBS with supplements (0.5% bovine serum albumin (BSA), FACS buffer) and centrifuged again at 800G for 8 minutes. The pellet was resuspended in 400 μl FACS buffer. The measurement was performed by flow cytometry using BD FACS Canto TM and FACS DIVA TM software (FACSCanto II, BD Biosciences). The monocyte population was discriminated by forward and sideward scatter. 300,000 monocytes were measured in each sample. The phagocytizing activity of monocytes was quantified as a percentage of cell population gated. Ex vivo in vitro whole blood cell surface receptor analysis Blood samples (100 μl) were incubated for 30 minutes at room temperature in darkness with mouse anti-human CD14 PE (Clone TÜ K4, BD Bioscience), mouse anti-human HLA-DR PerCP-Cy5.5 (Clone HL-38, Novus Biologicals), or polyclonal rabbit anti-human TLR2 Alexa Fluor 700 (Bioss Antibodies, 5 μl). Control samples were incubated with suggested isotype controls for the settings. After incubation, 3 ml of FACS lysing solution (BD Pharmingen TM ) were added and incubation for 10 minutes at room temperature in darkness followed. Subsequently, the samples were centrifuged at 800G for 5 minutes at room temperature. The pellet was resuspended in 500 μl of FACS buffer. The samples were measured by flow cytometry with BD FACS Canto TM using FACS DIVA TM software (BD Biosciences). The monocytes were discriminated by gating CD14 + cells. 300,000 monocytes were measured at least from each sample. Unstimulated samples were measured as control. 
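As a rough sketch only (gating was performed in FACS DIVA and statistics in GraphPad Prism), the phagocytosis read-out and the matched-pair comparison across time points could be reproduced from exported event data along these lines; the column names, gate boundaries and percentile threshold are assumptions.

```python
# Assumed layout: events per sample exported to a pandas DataFrame with hypothetical
# columns "FSC-A", "SSC-A", "CD14-PE" and "pHrodo-Red".
import numpy as np
import pandas as pd
from scipy.stats import friedmanchisquare

def pct_phagocytizing(events, unstimulated_ctrl, cd14_cut, fsc_gate, ssc_gate):
    """% pHrodo-positive cells among CD14+ monocytes, thresholded on the unstimulated control."""
    in_scatter = events["FSC-A"].between(*fsc_gate) & events["SSC-A"].between(*ssc_gate)
    monocytes = events[in_scatter & (events["CD14-PE"] > cd14_cut)]
    # Positivity threshold: 99.5th percentile of the control incubated without pHrodo particles.
    cut = np.percentile(unstimulated_ctrl["pHrodo-Red"], 99.5)
    return 100.0 * (monocytes["pHrodo-Red"] > cut).mean()

def friedman_over_time(per_animal):
    """Matched-pair test across time points; per_animal is an animals x time points DataFrame
    of percentages (-1 h, 0 h, 3.5 h, 5.5 h, 24 h, 72 h), one value per animal and time point."""
    stat, p = friedmanchisquare(*[per_animal[col] for col in per_animal.columns])
    # Pairwise post-hoc comparisons (Dunn's test in Prism) would follow a significant result.
    return stat, p
```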
TLR2 neutralization followed by phagocytosis assay

Blood samples (EDTA tubes, Sarstedt) from ten male Pietrain pigs (Sus scrofa, 6 months old, 100 ± 5 kg) from a disease-free barrier breeding facility (Bundes Hybrid Zucht Programm, BHZP) were drawn immediately after slaughtering for mechanistic studies. 50 μl were transferred into polystyrene FACS tubes (BD Pharmingen™). The samples were kept at room temperature and incubated for one hour at room temperature in darkness with a polyclonal rat IgG neutralizing anti-human TLR2 antibody (InvivoGen; 20 μg/ml). Control samples were incubated for one hour at room temperature in darkness with a normal rat IgG polyclonal control antibody (InvivoGen; 20 μg/ml) or without antibodies (ctrl). Subsequently, 40 μl of each sample were incubated with pHrodo™ Red bioparticles (pHrodo™ Red S. aureus BioParticles™ Conjugate for Phagocytosis, ThermoFisher, Germany) according to the manufacturer's instructions, to determine the phagocytizing capacity of the cells (see protocol above). The monocyte population was discriminated by forward and sideward scatter. 300,000 monocytes were measured in each sample. The phagocytizing activity of monocytes was quantified as the percentage of the gated cell population.

Statistical analysis

All statistical analyses were performed using GraphPad Prism 6 (GraphPad Software, Inc., San Diego, CA). The D'Agostino-Pearson test was applied to test the normality of the data. To compare differences between the groups, matched-pair analysis was performed using a non-parametric repeated-measures ANOVA (Friedman test) with Dunn's post-hoc test; a minimal code sketch of this procedure is given below, after the Conclusions. A p value of less than 0.05 was considered significant. Data are given as mean ± standard error of the mean (sem).

Discussion

Monocytes play a pivotal role in inflammation and show massive functional modulations in trauma patients, e.g. in phagocytosis, maturation and TLR expression, or impaired cytokine secretion after ex vivo stimulation with endotoxin. [21,37-39] Several clinical studies and experimental in vivo studies in small animals have shown an altered surface expression of TLR and HLA-DR as well as an impaired phagocytic activity after severe trauma. Despite numerous studies in the last decades, the importance of functional modulations of monocytes after trauma is still not fully understood. [30,31] Further analyses have shown that porcine SLA-1*0401 and human leukocyte antigen (HLA) class I HLA-A*0101 can present the same peptides, albeit in different conformations, demonstrating cross-species epitope presentation. [41] This is important insofar as it has been verified before that human anti-HLA-DR recognizes porcine leukocyte antigens. With regard to the relevance of its expression, Monneret et al. (2006) correlated persistently lower expression of MHC-II molecules on human monocytes with increased mortality rates in patients suffering from infections, whereas survivors showed increasing MHC-II expression. [42] The authors concluded that the expression of MHC-II may constitute a highly potent marker for outcome. [42] Regarding trauma, a significantly decreased expression of MHC-II molecules on human monocytes has been reported; however, this phenomenon did not correlate with post-injury infections. [10,29] In line with those studies, the post-traumatic maturation of porcine monocytes was investigated here for the first time.
We could demonstrate a decreased MHC-II expression on porcine monocytes in the early post-traumatic course, with a recovery phase after 24 hours. Due to the limited duration of the observational period in our experimental design, we did not evaluate the clinical course and are therefore unable to make a statement regarding the influence of the observed decrease in H(S)LA-DR-positive monocytes on possibly emerging complications in our model. Nonetheless, our findings confirm the data from trauma patients and indicate a lower stage of monocyte maturation with subsequently impaired function after trauma. The relevance of these findings remains to be evaluated in further studies. The activation of monocytes is triggered via TLRs. Thus, beyond their maturation as described above, when considering functional alterations of monocytes after trauma it seems reasonable to analyze the TLR expression on monocytes. The genes of 10 porcine TLRs (TLR1-10) have been described and are listed in the public nucleotide database. So far, however, little is known about their expression on porcine monocytes, and even less with regard to porcine severe trauma. Liu et al. (2009) demonstrated gene expression for TLR2, 3, 4, 7, 8 and 9, and showed a significant ex vivo in vitro upregulation of TLR2, 4 and 8 after infection in porcine peripheral blood mononuclear cells. [43] Interestingly, increased TLR2 compared with a control group has been reported in surgical trauma patients. [44] Lendemans et al. (2007), however, showed a significantly downregulated expression of the TLR2 receptor on monocytes within the first 48 hours after severe trauma in patients. [24] This is in line with the data reported here, confirming the early decrease in TLR2-expressing monocytes immediately after trauma and indicating that the porcine trauma model reproduces the changes in TLR2-expressing monocytes observed in trauma patients. In general, little is known about the post-traumatic behavior of porcine monocytes, and their phagocytic activity after trauma was likewise investigated here for the first time in a porcine trauma model. The expression of TLR2 on CD14+ monocytes was decreased simultaneously with the phagocytic activity, in particular directly after surgery and at 3.5 and 5.5 hours after trauma. These findings led us to the hypothesis that the impaired TLR2 expression on monocytes is associated with their reduced phagocytic activity against S. aureus particles in the early post-traumatic course. Previous publications have already suggested the dependence of phagocytosis on PRRs, supporting our hypothesis. Attempts to correlate TLR2 expression with the diminished immune activity of monocytes, such as phagocytosis after severe trauma, are limited. Sturm et al. (2017) showed a decreased phagocytic activity of human monocytes in the first two days after severe trauma, with recovery beginning on post-injury day 3. [22] Data from this study confirm the early diminished phagocytosis observed in patients after trauma as well. There is, however, an apparently faster recovery in porcine monocytes compared with human samples after trauma. Freeman and Grinstein (2014) have previously addressed the association of TLRs with phagocytic activity. [20] In brief, TLR activation leads to a so-called inside-out activation, increasing the mobility of phagocytic receptors on the cell surface/cell membrane and thereby facilitating the engulfment of particles/pathogens.
[20] In another in vitro model, it was demonstrated that specific phagocytosis probably involves recognition of cell wall components and requires the participation of a TLR2-dependent pathway. [45,46] Human peripheral blood monocytes treated with a TLR2 agonist have shown significantly enhanced phagocytic ability in vitro. [47] Consistently, our results also show that the early diminished TLR2 expression on porcine monocytes was associated with a simultaneously decreased phagocytizing activity after trauma. To define the role of TLR2 in the phagocytosis response of monocytes against S. aureus, we investigated phagocytosis in TLR2-neutralized monocytes obtained from healthy pigs. We could confirm our hypothesis: TLR2 neutralization reduced the phagocytizing rate. Taken together, our data suggest that the decrease in the ratio of TLR2-positive monocytes after trauma may be responsible for the decrease in monocyte phagocytosis observed in the present study. Combined with the results of previous publications, we assume that TLR2 might mediate the phagocytizing capability of monocytes for certain bacterial particles in the porcine polytrauma model. Despite offering new insights into porcine cellular physiology, our study has several limitations. First, the very low phagocytizing capacity of the cells may be caused by the use of EDTA rather than heparin blood for the assay. Furthermore, it remains to be elucidated whether the early phagocytosis depression of monocytes influences the outcome after trauma. Due to the limited observational period (72 hours), our findings cannot be linked to defined outcomes such as infections, sepsis, length of ICU stay or requirement for ventilation, which typically occur well after 72 h. Several studies suggest intracellular processes as the reason for an impaired post-traumatic function of monocytes. There are different mechanisms of TLR regulation that have not been addressed here, such as localization and trafficking between the Golgi and the cell surface, interactions of the transmembrane domain, or TLR adapter molecules as crucial targets through which trauma alters TLR expression and function. [48,49] The same applies to the phagocytic activity of monocytes. Seshadri et al. observed consistent phagocytic activity against S. aureus despite an impaired TLR2 and MHC-II expression. [9] The authors concluded that the reduced TLR2 expression can be compensated by other molecules involved in the regulation of phagocytosis. Another limitation of our study is clearly the use of different pig breeds (German Landrace vs. Pietrain) in different parts of the study. In addition to the different breeds, the "polytrauma pigs" had to undergo pre-trauma procedures (anesthesia and multiple catheters, as described), which might have primed the immune cells, leading to an increased phagocytic activity before trauma compared with the "healthy" pigs. In future studies, the different parts should be executed with specimens of the same breed receiving the same treatments. However, the two parts of our study build on one another, and the results and trends are consistent. Additionally, other populations of monocytes (weakly CD14-positive cells, CD16-positive cells, etc.) were not included in the analyses. [50-52] Our study must be regarded as a first approach to exploring whether the post-traumatic cellular physiology of pigs is comparable to that of humans.
We started our experimentation with monocytes, and our findings are promising. However, further studies must be performed on other cell types, including the other monocyte populations mentioned above, in order to truly establish a porcine large-animal polytrauma model. In summary, our results demonstrate that porcine monocytes after polytrauma had a decreased TLR2 expression and an impaired maturation, as well as a reduced bacterial clearance via phagocytosis, in the early post-traumatic course. The impaired phagocytizing capacity is closely associated with the reduced TLR2 expression on monocytes and may provide a new therapeutic target for improving phagocytosis during infection. Moreover, this porcine model is representative of the initially suppressed function of monocytes after trauma in humans.

Conclusions

• The proportion of porcine monocytes among all leukocytes does not change significantly over the post-traumatic course.
• Porcine monocytes fail to mature in the early post-traumatic course.
• Porcine monocytes have a decreased expression of TLR2 in the early post-traumatic course.
• Porcine monocytes have an impaired phagocytic activity in the early post-traumatic course.
• The impaired phagocytizing function of monocytes is closely associated with their reduced TLR2 expression.
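For readers who wish to reproduce the statistical workflow described in the Methods (Friedman test with Dunn's post-hoc on matched measurements), the following Python sketch applies the same tests to synthetic data. It is not the original GraphPad analysis; the number of animals and all values are illustrative, and the third-party packages scipy, pandas and scikit-posthocs are assumed to be installed.

```python
# Minimal sketch of the described statistics: Friedman test across matched time
# points, followed by Dunn's post-hoc test. Data here are synthetic placeholders.
import numpy as np
import pandas as pd
from scipy.stats import friedmanchisquare
import scikit_posthocs as sp

rng = np.random.default_rng(0)
timepoints = ["control", "3.5h", "5.5h", "24h", "72h"]
n_animals = 8  # hypothetical number of repeated-measures subjects

# Rows = animals, columns = time points (matched/repeated measurements).
data = pd.DataFrame(
    rng.normal(loc=[50, 30, 32, 45, 48], scale=5.0, size=(n_animals, len(timepoints))),
    columns=timepoints,
)

# Friedman test (non-parametric repeated-measures ANOVA).
stat, p = friedmanchisquare(*[data[tp] for tp in timepoints])
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")

# Dunn's post-hoc test on the long-format data, Bonferroni-adjusted.
long = data.melt(var_name="timepoint", value_name="value")
print(sp.posthoc_dunn(long, val_col="value", group_col="timepoint", p_adjust="bonferroni"))
```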
2018-04-03T06:21:03.172Z
2017-11-10T00:00:00.000
{ "year": 2017, "sha1": "45f0c00c25685f8c10075019c08782f4a0d50cc7", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0187404&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "45f0c00c25685f8c10075019c08782f4a0d50cc7", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
228864866
pes2o/s2orc
v3-fos-license
Reactions of quinine with 2-chloro-4,6-dimethoxy-1,3,5-triazine

Quinine reacts with 2-chloro-4,6-dimethoxy-1,3,5-triazine (CDMT) via a multistage process leading to destruction of the quinuclidine fragment and attachment of two triazinyl substituents. In the first, reversible stage, CDMT reacts with the aromatic nitrogen of the quinoline, followed by slow migration of the triazine moiety to the bridgehead nitrogen atom of the quinuclidine. The bicyclic system, after quaternization with CDMT, was opened by preferential attack of nucleophilic chloride on the methylene carbon of the bridge substituted with the vinyl group. In the final stage, the second 4,6-dimethoxy-1,3,5-triazin-2-yl moiety was attached to the hydroxy group. The product structure was confirmed by X-ray crystallographic measurements, MS, ¹H and ¹³C NMR, and IR spectroscopy.

Introduction

The synthetic potential of 1,3,5-triazine derivatives [1-3] was also noticed in our attempts to design and develop traceless, predictable coupling reagents dedicated to the synthesis of enantiomerically homogeneous peptides directly from racemic N-protected amino acids. [4,5] This new group of enantioselective reagents consists of a chiral component and a classical achiral coupling reagent. In the case of the enantioselective, predictable coupling reagent, the chiral component participates only in the enantioselective activation of the carboxylic group. Its departure after this stage affords the carboxylic component activated by the known, achiral coupling reagent. [6] Thus, the optimized coupling conditions remain intact and independent of the structure of the chiral counterpart of the predictable coupling reagent. Moreover, the configuration and enantiomeric enrichment of a given pair of carboxylic substrate and chiral component also remain the same in all subsequent coupling stages and can be established easily in a single model experiment.

In practice, enantioselective coupling of two equivalents of racemic N-protected amino acids by means of an achiral triazine-based coupling reagent and strychnine or brucine used as the chiral component gave peptides in very high yield, with precisely predictable configuration and enantiomeric enrichment up to 99% ee. Under such conditions, Kagan's coefficient exceeded 100 in favorable cases. Nevertheless, brucine and strychnine are available in only one configuration, but one can presume that the application of the above-mentioned pseudo-enantiomeric cinchona alkaloids would pave the way to the activation of any desired enantiomer.

Unexpectedly, attempts to perform enantioselective peptide syntheses in the presence of quinine and/or quinidine failed to yield peptide products, suggesting an unknown process leading to the degradation of the alkaloids. To identify the structures obtained in this unprofitable transformation, the reaction of quinine (1) with 2-chloro-4,6-dimethoxy-1,3,5-triazine (CDMT) (2) was studied.
Synthesis

In the quinine molecule, there are two tertiary nitrogen atoms prone to quaternization when treated with CDMT (2). Their usefulness in enantioselective syntheses is diverse. The bridgehead aliphatic nitrogen of the substituted quinuclidine fragment is chiral, and it is expected to be highly effective as a chiral selector in enantioselective reactions. On the other hand, the aromatic nitrogen of the quinoline is located far away from the chirality center of quinine, and its effectiveness as an enantioselector is questionable. Therefore, to identify the more reactive one, quinine (1) was treated with an equimolar amount of CDMT (2) (Scheme 1). Unexpectedly, instead of product 3 (path A) or 4 (path B), a mixture of other products was obtained.

In the solution of the equimolar mixture of both substrates, a trace of unreacted CDMT (2) was identified even after an overnight reaction at room temperature. To complete the consumption of CDMT (2), an additional 24 h of heating in boiling dichloromethane was necessary. TLC analysis of the resulting mixture showed the presence of one major and four minor products, all of them UV-active. The major component 5 was isolated by silica gel column chromatography. Its purity, determined by HPLC, was further substantially improved by recrystallization from hexane/EtOAc, enabling structural studies with the use of ¹H and ¹³C NMR, MS, IR and X-ray crystallography.

Crystal structure determination

A single crystal of compound 5 suitable for X-ray diffraction was obtained from an n-heptane/ethyl acetate solution by slow evaporation of the solvents at room temperature.

Spectroscopic studies

The ¹H NMR spectrum of the isolated product (Figure 4) implies the presence of a methoxyquinoline and two methoxytriazine fragments. [10] In the ¹H NMR spectrum of product 5 (Figure 4), four groups of signals were identified. In the region typical of aromatic hydrogens, 7.00-8.75 ppm, five doublets at 8.69 (1H), 8.03 (1H), 7.81 (1H), 7.55 (1H), and 7.40 (1H) ppm were attributed to the methoxyquinoline fragment of 1. The sixth doublet, at 7.12 ppm, was correlated in the HSQC spectrum with the C1 methine carbon at 76 ppm (see Figure 5). In the native quinine molecule (1), this signal is observed at 1 ppm higher field. The 1 ppm downfield shift could be caused by the de-shielding effect of the aromatic triazine ring located in close proximity to the H-C1 methine proton after its reaction with the bridgehead aliphatic nitrogen. The next group of signals is located at 5.00-5.80 ppm. Two protons at 5.09 and 5.19 ppm are attached to the sp² carbon C48, observed in the HSQC spectrum at 118.3 ppm, characteristic of a C=CH₂ fragment. They are coupled with a 1H multiplet resembling a double triplet at 5.75 ppm. The coupling constants of the doublets at 5.09 and 5.19 ppm are 17.17 and 10.12 Hz, respectively, which suggests their assignment to a -CH=CH₂ fragment.

The blurred 1H triplet at 5.67 ppm correlates with the C41 sp³ carbon at 51.9 ppm. Its downfield shift suggests attachment to a strongly electronegative atom, together with an additional de-shielding effect caused by the presence of the aromatic ring. It is coupled with three other protons, two of them with similar coupling constants; according to the COSY spectrum, these are the two H-C42 methylene protons at 1.27 and 2.05 ppm and the H-C1 methine proton at 7.12 ppm. Therefore, this could be the proton of the methine group H-C41 in the ring, attached to the N atom and to the H-C1 hydroxymethyl group.
The group of signals in the range 3.30-4.90 ppm is fairly diverse. The most intense signals were attributed to the five methoxy groups. Two of them, at 3.87 ppm, are magnetically equivalent and representative of the freely rotating 4,6-dimethoxy-1,3,5-triazin-2-yl fragment linked via oxygen (H-C52 and H-C72). Two non-equivalent methoxy groups at 3.90 and 4.07 ppm are typical of the presence of the second 4,6-dimethoxy-1,3,5-triazin-2-yl fragment (H-C58 and H-C60). In this case, the non-equivalence can result from the limited rotational freedom of the triazine ring caused by its coupling to the quinine frame, possible via the nitrogen atom. The clearly documented incorporation of two triazinyl rings raises the question of where they are attached. The relatively undisturbed chemical shifts of the aromatic quinoline fragment, but strongly modified shifts of the aliphatic hydrogens, suggest the attachment of CDMT to the nitrogen in the bridgehead position of quinine and to the hydroxyl group. This assumption is unexpected, because until now a faster reaction of CDMT with the aromatic nitrogen, rather than with the more sterically hindered tertiary nitrogen attached to the aliphatic frame or with the less nucleophilic hydroxyl group, has been observed in all cases. [11,12] The chemical shift of 3.78 ppm for the protons of the fifth methoxy group, H-C92, is typical of the substituent at the C29 position of the quinoline ring system.

According to the HSQC spectrum, the 1H broad doublet at 4.85 ppm and the 1H double triplet at 3.34 ppm are attached to the same methylene carbon, identified at 41.1 ppm. The double-triplet multiplicity suggests the presence of three protons in the neighborhood, two of them equivalent. The distinct downfield shifts of these methylene hydrogens suggest their location in the ring at C45, in close proximity to the strongly electronegative nitrogen atom N46.

The 2H doublet at 3.56 ppm is coupled with the multiplet at 2.09 ppm and correlates with a methylene carbon at 46.4 ppm. The equivalence of both protons and their downfield shift suggest a location in a carbon chain with an electronegative heteroatom in the neighborhood, >CH-CH₂-X (C49). This means that the bicyclic quinuclidine system was selectively opened by splitting the N46-C49 bond of the fragment bearing the vinyl group, and not the -CH₂-CH₂- one.

Multiplets in the region 1.20-2.49 ppm confirm the presence of six different aliphatic protons. Protons at 1.17 and 1.92 ppm correlate to the methylene carbon at 28.6 ppm (C44), and protons at 1.27 ppm (H-C42) and 2.05 ppm (H-C42) correlate to the methylene carbon at 29.0 ppm. The distinct chemical shifts of the methylene protons may result from limited conformational freedom and strongly suggest their incorporation into a saturated ring system.

Comparison with the ¹³C and DEPT-135 spectra of two additional C-H multiplets, at 2.09 ppm (H-C46) and 2.46 ppm (H-C43), confirmed their attachment to two different methine carbons, at 51.2 ppm (C46) and 31.7 ppm (C43), respectively. Moreover, the multiplet at 2.09 ppm (H-C46) is coupled with the methylene proton at 3.56 ppm (H-C49), suggesting that in the native quinine molecule both are located in the bridge of the quinuclidine fragment bearing the vinyl substituent.

The NMR data presented above made it possible to correlate the structural fragments of product 5 with the ¹H and ¹³C NMR signals, as shown in Table 2.
The occurrence of two different DMT substituents in product 5 was further supported by MS studies. In ESI⁺ mode (Figure 6), two molecular ions M+1 were observed at 639.8 and 641.8, with a 3:1 intensity ratio characteristic of the presence of a chlorine atom in the molecule (a short numerical sketch of this isotope pattern follows below). Their degradation pathway gave two pairs of the most abundant ions, 482.6/484.6 and 311.5/315.5, with the characteristic 3:1 intensity ratio. The first pair involved loss of a 157.2 fragment assigned to C₅H₇N₃O₃, i.e. 2-hydroxy-4,6-dimethoxy-1,3,5-triazine or its isomer. The second pair was formed by loss of a 171.1 fragment (C₆H₁₂N₄O₂), which could be assigned to protonated 2-methylamino-4,6-dimethoxy-1,3,5-triazine. [10]

Previous studies revealed that quaternization of tertiary amines in the reaction with CDMT (2) is very sensitive to steric hindrance. [10] Careful observation of the progress of the reaction of quinine with CDMT (2) suggested relatively fast consumption of CDMT (2) in the preliminary phase of the reaction, accompanied by the formation of a non-identified intermediate, followed by its slow decay. Also, traces of CDMT (2) were detected in the reacting mixture for up to 48 h, although in the quinine molecule there are three nucleophilic centers potentially prone to reaction with CDMT (2). To identify the intermediate and prevent its further transformations, the nucleophilic chloride anion was substituted with the non-nucleophilic tetrafluoroborate counter-ion (6), as depicted in Scheme 2.

In the ¹H NMR spectrum of the isolated crude product 7, both methoxy substituents of the triazine ring are equivalent, and a downfield chemical shift of the N-CH proton from 8.70 ppm in native quinine to 9.25 ppm, as expected for the N⁺-CH quinoline fragment in 7, was observed. This strongly suggests the attachment of the DMT fragment to the aromatic nitrogen of the quinoline ring. Thus, it becomes obvious that the presence of the nucleophilic chloride is crucial for the reaction pathway leading to product 5. Two diverse reaction pathways can be rationalized by assuming the reversibility of the first step in the reaction between quinine and CDMT (2), leading to 4, which is the equivalent of 7 with chloride substituting tetrafluoroborate (see Scheme 3). Scheme 3. Postulated transformation path leading to 5. Most CDMT (2) is relatively rapidly consumed in the reaction with quinine, although the reverse SNAr substitution involving quinolinium 4 and nucleophilic chloride generates minute amounts of CDMT, detected by the colorimetric NBP test [12] in the reacting mixture for up to 48 h. The persistence of CDMT (2) for this prolonged time supports a slow process involving the bridgehead aliphatic nitrogen atom of the quinuclidine fragment, followed by the irreversible nucleophilic attack of the chloride anion, opening the bicyclic system with the formation of 8. The presence of the basic triazine ring in close proximity to the hydroxylic group in 8 strongly promotes its deprotonation and reaction with CDMT (2), yielding 5 as the final product of the transformation.
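As a quick numerical check of the 3:1 intensity pattern invoked above, the following Python sketch (not from the paper) computes the expected M : M+2 ratio for a singly chlorinated ion from the standard natural abundances of ³⁵Cl and ³⁷Cl, ignoring ¹³C contributions.

```python
# Minimal sketch: chlorine isotope pattern from natural abundances
# (35Cl ~ 75.76%, 37Cl ~ 24.24%); 13C contributions are neglected.
from math import comb

abund_35cl = 0.7576
abund_37cl = 0.2424

ratio = abund_35cl / abund_37cl
print(f"Expected M : M+2 intensity ratio for one Cl: {ratio:.2f} : 1")  # ~3.1 : 1

# For n chlorine atoms the pattern follows the binomial expansion (a + b)^n.
def cl_pattern(n):
    return [comb(n, k) * abund_35cl**(n - k) * abund_37cl**k for k in range(n + 1)]

print("Two Cl atoms (M, M+2, M+4):", [f"{x:.3f}" for x in cl_pattern(2)])
```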
Conclusions

This new transformation of quinine keeps intact most of the stereogenic centers of the alkaloid molecule. The reaction of the bicyclic system bearing a bridgehead nitrogen atom resembles the von Braun reaction [13] involving cyanogen chloride. Nevertheless, an important advantage of the new procedure over the von Braun process is the easy modification of the triazine component, paving the way to a plethora of new chiral products even from a single tertiary amine component. The scope and limitations of the above procedure are under investigation. [15,16] Other potential applications include use as new chiral auxiliaries, [17] building blocks, and pesticides. [18]

Figure 1. The molecular structure of compound 5, showing the atom-labelling scheme. Displacement ellipsoids are drawn at the 50% probability level, except for H atoms.
Figure 2. Packing of compound 5 in the crystal lattice.
Figure 3. Stacking interactions of compound 5 in the crystal structure.
Figure 4. ¹H NMR spectrum of the main product 5 formed in the reaction of quinine with CDMT (2). Residual CDCl₃ and H₂O signals are marked with a red cross.
Figure 5. ¹H-¹³C HSQC spectrum in CDCl₃ of the main product 5 formed in the reaction of quinine with CDMT (2).
Figure 6. Mass spectrum (ESI⁺) of the main product 5 formed in the reaction of quinine with CDMT (2), isolated after crystallization from ethyl acetate/n-heptane.
2020-11-12T09:02:08.185Z
2020-11-18T00:00:00.000
{ "year": 2020, "sha1": "75f4462c57429a650727553107e6606c89a65500", "oa_license": "CCBY", "oa_url": "https://www.arkat-usa.org/get-file/71748/", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "2c8cc627813510071b554713fcb4cb6522888c97", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
218971603
pes2o/s2orc
v3-fos-license
High-yield, wafer-scale fabrication of ultralow-loss, dispersion-engineered silicon nitride photonic circuits

Low-loss photonic integrated circuits and microresonators have enabled a wide range of applications, such as narrow-linewidth lasers and chip-scale frequency combs. To translate these into a widespread technology, attaining ultralow optical losses with established foundry manufacturing is critical. Recent advances in integrated Si₃N₄ photonics have shown that ultralow-loss, dispersion-engineered microresonators with quality factors Q > 10 × 10⁶ can be attained at die-level throughput. Yet, current fabrication techniques do not have sufficiently high yield and performance for existing and emerging applications, such as integrated travelling-wave parametric amplifiers that require meter-long photonic circuits. Here we demonstrate a fabrication technology that meets all requirements on wafer-level yield, performance and length scale. Photonic microresonators with a mean Q factor exceeding 30 × 10⁶, corresponding to 1.0 dB m⁻¹ optical loss, are obtained over full 4-inch wafers, as determined from a statistical analysis of tens of thousands of optical resonances, and confirmed via cavity ringdown with 19 ns photon storage time. The process operates over large areas with high yield, enabling 1-meter-long spiral waveguides with 2.4 dB m⁻¹ loss in dies of only 5 × 5 mm² size. Using a response measurement self-calibrated via the Kerr nonlinearity, we reveal that the intrinsic absorption-limited Q factor of our Si₃N₄ microresonators can exceed 2 × 10⁸. This absorption loss is sufficiently low that the Kerr nonlinearity dominates the microresonator's response even in the audio frequency band. Transferring this Si₃N₄ technology to commercial foundries can significantly improve the performance and capabilities of integrated photonics.

Supplementary Note 3. Statistical process analysis of multiple wafers

Wafer-scale distribution of resonance linewidths on a 10-GHz-FSR wafer: In the main manuscript, Fig. 2(b) shows the wafer map of the 40-GHz-FSR chips' Q in each stepper exposure field. Here we show that high Q is also obtained reproducibly over the full 4-inch wafer scale with 10-GHz-FSR chips. Supplementary Figure 3(c) shows our mask layout, constituting 4 × 4 chip designs on the DUV stepper reticle. Each chip contains only a single 10-GHz-FSR microresonator. Supplementary Figure 3(a) shows that the DUV stepper uniformly exposes the reticle pattern over the full 4-inch wafer scale in discrete fields. The calibration chips studied here are the C15 chips. The most probable values of the κ₀/2π histograms of these C15 chips are measured and plotted for each exposure field, as shown in Supplementary Fig. 3(b). In most fields, κ₀/2π ≤ 9.5 MHz is found. The reticle design contains sixteen chips and is uniformly exposed in discrete fields on the wafer. NA: not applicable, due to visible photoresist coating defects or the design missing in particular fields close to the wafer edge.

Statistical analysis of process reproducibility: Using the same chip characterization and analysis methods, multiple wafers fabricated using the same process but at different times in our university cleanroom have been measured, as listed in Supplementary Table I. The intrinsic Q₀ is summarized from the histograms of multiple chips. Quality factors Q₀ > 10 × 10⁶ have been achieved on all fabricated wafers. Some wafers have been used in our published works.
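For orientation, the following Python sketch (not from the supplementary material) converts an intrinsic linewidth κ₀/2π of the kind reported above into an intrinsic Q factor and an equivalent propagation loss, using the textbook relation α = 2πν n_g/(c Q). The linewidth value is illustrative; the group index n_g = 2.1 is taken from Supplementary Note 11, and small differences from the quoted 1.0 dB m⁻¹ figure can arise from the exact index used.

```python
# Minimal sketch: intrinsic linewidth -> intrinsic Q -> dB/m propagation loss.
import math

nu = 193.4e12        # optical carrier frequency (Hz), ~1550 nm
kappa0_2pi = 6.4e6   # intrinsic linewidth kappa_0/2pi (Hz), illustrative value

Q0 = nu / kappa0_2pi                              # intrinsic quality factor
n_g = 2.1                                         # group index (Suppl. Note 11)
c = 299_792_458.0

alpha = 2 * math.pi * nu * n_g / (c * Q0)         # power loss coefficient (1/m)
alpha_db = 10 * math.log10(math.e) * alpha        # convert to dB/m

print(f"Q0 = {Q0/1e6:.1f} million, loss = {alpha_db:.2f} dB/m")
```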
We note that Supplementary Table I only lists the wafers whose fabrication runs went smoothly, with no errors reported during processing. Operating our process in a foundry could significantly enhance the stability, reproducibility and even the performance of the wafer fabrication.

Supplementary Note 4. 40 GHz single soliton generation without EDFA

Using the Si₃N₄ microresonators featuring Q₀ = 30 × 10⁶ and anomalous group-velocity dispersion (GVD), here we demonstrate soliton microcomb generation at a 40.6 GHz repetition rate with only 10.2 mW optical power on chip (16 mW input power in the fiber), without using an erbium-doped fiber amplifier (EDFA). The microresonator transmission trace from 1500 nm to 1630 nm is obtained using frequency-comb-assisted diode laser spectroscopy [7] with an external-cavity diode laser (ECDL, Santec TSL-510) that can scan the laser wavelength continuously (i.e. mode-hop-free) [8]. The precise frequency of each data point is calibrated using a commercial femtosecond optical frequency comb with 250 MHz repetition rate. For the TE₀₀ mode family, the FSR of the microresonator and the anomalous GVD are extracted from the calibrated transmission trace by identifying the precise frequency of each resonance. The total (loaded) linewidth κ/2π = (κ₀ + κₑₓ)/2π, the intrinsic linewidth (intrinsic loss) κ₀/2π and the coupling strength κₑₓ/2π are extracted from each resonance fit [9,10]. Supplementary Figure 4(a) shows the measured linewidth of each TE₀₀ resonance in a critically coupled microresonator. Supplementary Figure 4(b) shows the measured microresonator integrated dispersion D_int/2π. The FSR is D₁/2π = 40.6 GHz, and the GVD parameter is D₂/2π = 224 kHz, obtained by fitting the measured dispersion profile (a short numerical sketch of this fitting step is included at the end of these supplementary notes). Different comb states, including the modulation-instability (MI) comb, multi-soliton states, the perfect soliton crystal (PSC), and the single soliton, are generated in the same device. Using only a diode laser without an EDFA, the single soliton state is accessed with 10.2 mW power on the chip (input pump power P_in = 16.0 mW), as shown in Supplementary Fig. 4(c). The single soliton state is accessed via laser piezo frequency tuning alone [11,12] and does not require complex soliton tuning methods.

Supplementary Note 5. Broadband linewidth measurement

The wavelength range of the measured resonances can be extended using frequency-comb-assisted cascaded diode laser spectroscopy [8] with three ECDLs covering different wavelength ranges (1260-1360 nm, 1355-1505 nm, and 1500-1630 nm); the resulting broadband linewidth data are presented in Supplementary Figure 5.

Supplementary Note 6. Reflow's impact on loss

In the photonic Damascene process, after dry etching, the patterned SiO₂ preform is thermally annealed at 1250 °C, above its glass transition temperature. This allows the thermal wet SiO₂ to reflow [13], in order to reduce the surface roughness introduced by the dry etching. The reflow step is performed in a standard silicon carbide atmospheric-pressure CVD tube. A two-fold improvement in microresonator Q factors has been reported in Ref. 13 and attributed to this preform reflow technique, however with a deformation of the waveguide cross-section as a trade-off. Here, the impact of preform reflow on Q factors is studied in our high-Q microresonators fabricated with the optimized Damascene process. Supplementary Figure 6 compares the κ₀/2π histograms of the TE₀₀ mode, for 1500 nm waveguide width, with and without the preform reflow.
The reflow was performed at 1250 °C for 24 h, the same as reported in Ref. 13. Without the reflow, the most probable κ₀/2π = 15.5 MHz is only marginally larger than the value with the reflow (κ₀/2π = 14.5 MHz). We attribute this to the improved lithography and dry etching in the current fabrication process, which have resulted in better waveguide sidewall quality and reduced roughness. The efficacy of reflow is therefore reduced in high-Q microresonators, and it might not be necessary. Although the reflow can increase Q, it also deforms the waveguide cross-section, leading to a sidewall slanted from a 90° to a 98° angle, as shown in Ref. 13. This deformation causes difficulties in the control of critical dimensions. However, by reducing the reflow time to only 3 hours, the sidewall slant effect can be significantly reduced. In the main manuscript, Fig. 1(b) shows nearly maintained sidewall angles with 3 h reflow instead of 24 h. All the 40- and 10-GHz-FSR high-Q chips shown in this work were fabricated with a 3 h reflow time.

Supplementary Note 7. Etchback planarization

The etchback planarization process consists of dry etching and chemical-mechanical polishing (CMP). Supplementary Figure 7(a, b) shows the process flow and the SEM images of each step. After LPCVD Si₃N₄ deposition, continuous Si₃N₄ films coat the wafer's frontside and backside. The wafer is then coated with a common photoresist (PR) on the frontside. Depending on the PR viscosity, the spin-coating speed and the waveguide width, a suitable PR thickness is needed for sufficient coating conformality. In our case, 600 nm of PR is coated on the wafer, followed by PR reflow, to achieve a flat wafer top surface. Then a dry etch with an etch selectivity of Si₃N₄ : PR : SiO₂ = 1 : 1 : 1 is performed, to uniformly remove the excess Si₃N₄ together with the PR. In the recipe, adding O₂ increases the PR etch rate without affecting the Si₃N₄ etch rate; the etch rates of Si₃N₄ and PR can therefore be controlled independently (see Supplementary Fig. 8(a)).

Next, the wafer's backside Si₃N₄ is removed by dry etching, to reduce the wafer bow [14]. The bow of the wafer frontside, measured using a laser interferometer, is below 5 µm, indicating that the wafer is sufficiently flat. The frontside etchback is performed before the backside Si₃N₄ etch in order to avoid potential crack formation during wafer transfer and clamping in the dry etcher. The etchback process creates a wafer top surface that is flat but not smooth. A short CMP step, removing only a thin layer of material (less than 50 nm), is already sufficient to reduce the surface roughness to sub-nanometer levels (measured using atomic force microscopy, as shown in Ref. 13). All the excess Si₃N₄ has been removed during the etchback and backside etch, resulting in a small wafer bow below 5 µm. Therefore, the CMP polishing rate and uniformity can be easily calibrated. This final CMP step serves as a fine control of the waveguide height. Supplementary Figure 7(c) shows the Si₃N₄ waveguide height measured at different positions on a full 4-inch wafer using a reflectometer (Nanospec M6100). The measured waveguide height map shows a ±30 nm variation, corresponding to ±3% of the 950 nm waveguide height, a value comparable to typical LPCVD Si₃N₄ deposition uniformity. We note that our current height uniformity is limited by both the CMP and the etchback (dry etching). The height variation in the radial direction (i.e.
the center is thinner, the edge is thicker) is caused by the CMP / photoresist coating (as a result of the edge effect). The height variation showing that the bottom-right is thicker is caused by the etchback, as the wafer chuck of our dry etcher has a non-uniform temperature distribution, which introduces an etch-rate variation over the 4-inch wafer scale. To further improve the height uniformity, it is preferable to use larger wafers (as the edge effect is effectively weaker) and a dry etcher with a wafer chuck of uniform temperature distribution.

Supplementary Figure 8(b) compares photographs of two wafers, one prepared with only the CMP and the other with the combined etchback and CMP. The visible color patterns are due to natural-light interference, caused by SiO₂ thickness variations on the wafer. It is clear that the combination of etchback and CMP gives better thickness uniformity over the wafer scale. This process enables full control of the polishing depth, sub-nanometer surface roughness (see Ref. 13), and wafer-scale uniformity of the Si₃N₄ waveguide height with 3% variation. Based on this process, monolithic or heterogeneous integration of piezoelectric aluminium nitride actuators (Refs. 6, 15), electro-optic lithium niobate modulators (Ref. 16) and metallic heaters (Ref. 17) has been demonstrated.

Furthermore, to verify the wafer-scale planarization uniformity, we measure the microresonator GVD parameter (D₂/2π) of each 40-GHz-FSR sample (C7), as shown in Supplementary Fig. 9. Note that the wafer-scale uniformity of the most probable value of κ₀/2π for the C7 chips has already been shown in Fig. 3(b) in the main manuscript. In Supplementary Fig. 9(a), the minimum and maximum values of D₂/2π in the center 9 fields (F1-F9) are presented.

Supplementary Note 8. Stress release with filler patterns

We have not observed any cracks in the more than 30 wafers fabricated using the current process. The stress-release filler patterns extending to the wafer edge significantly suppress crack formation starting from the wafer edge. The design criteria for the stress-release filler patterns are:

• The filler pattern should contain the same structure and density in the horizontal and vertical directions. As shown in Ref. 18, if only horizontal bars are used, cracks are likely to form in the vertical direction. The horizontal bars, which create a discontinuity of the LPCVD Si₃N₄ film in the vertical direction, relax the film stress in the vertical direction; as a result, only cracks in the vertical direction are generated by the accumulated horizontal stress.

• The filler pattern should have sufficient density, such that the film stress does not accumulate over a large area of continuous film. Ideally, the higher the density, the better the stress release. In our case, a moderate filler-pattern density was chosen to account not only for the stress release but also for the dry etching and CMP uniformity.

• The filler pattern used in our current process consists of horizontal and vertical bars forming "#" structures. Each bar is a 2 × 20 µm² rectangle. The 2 µm width is chosen to match the typical width of the main functional waveguides (i.e. bus waveguides and microrings), which is between 1.5 µm and 2.5 µm; the 20 µm length is chosen to match the pattern density of the main functional waveguides with an exclusion zone. It should also be mentioned that the bar width should not be much smaller than 2h, where h is the thickness of the deposited Si₃N₄ film (in our case, h is around 1000 nm).
The reason is that LPCVD Si₃N₄ growth on the substrate is conformal [18,19], i.e. the film grows not only from the bottom of the etched trench but also from the sidewalls. Therefore, if the bar width is much smaller than 2h, the conformal deposition of LPCVD Si₃N₄ can completely fill the filler-pattern trenches and form a continuous film, resulting in continuously accumulated stress that can cause cracks. The overall filling ratio of the current "#" filler patterns is approximately 24% in our design.

• There is no filler pattern applied in the area covered by the meter-long spiral waveguides. However, still no cracks are formed, owing to the fact that the filling ratio of the functional waveguides in this design is approximately 34%, sufficiently high for crack prevention. The design of stress-release filler patterns is therefore highly flexible: in designs with sparse functional waveguides, filler patterns can be placed in the available open area; in designs with dense functional waveguides that already provide sufficient spatial topography for stress release, no filler pattern is needed.

Supplementary Note 9. Waveguide layout designs

Supplementary Figure 10 shows the GDS design layouts of microring resonators with 10, 40, and 100 GHz FSR, on 5 × 5 mm² chips. Each microresonator is coupled to a bus waveguide whose width is identical to the microresonator's waveguide width, to achieve high coupling ideality [20]. For FSRs below 40 GHz, the rings are densely packed on the chip and the space is fully used. Thus the maximum number N of microresonators on the wafer is approximately N ≈ A₀/A_r, where A₀ is the wafer area (for a 4-inch wafer, A₀ ≈ 63 cm², calculated with an effective radius of 4.5 cm) and A_r is the area of a microresonator with a given FSR. For FSRs above 100 GHz, the space is currently not fully used on the 5 × 5 mm² chip. In principle, the design density can be significantly increased by making the chips smaller (e.g. 2 × 2 mm²). The 5 × 5 mm² chip size chosen here facilitates the manual handling of chips with tweezers; it was not chosen to increase the pattern density. For the meter-long spirals, the design density is shown in Fig. 4 in the main manuscript. In the current case, the separation distance between waveguides is 4 µm. The minimum distance depends on the mode coupling between adjacent waveguides; based on our experiments and eigenmode simulations, it can be further reduced to less than 2.5 µm.

Supplementary Note 10. Comparison of silicon nitride fabrication processes

Supplementary Table II compares different silicon nitride fabrication processes.

Supplementary Note 11. Derivation of response relation

In the linear regime, the frequency response dν_m' of the probe resonance to the modulated intracavity pump photon number dn_c at modulation frequency ω/2π is given by the sum of a Kerr contribution χ_Kerr(ω) and a thermal contribution χ_therm(ω). In the DC modulation regime (ω → 0), the Kerr response term χ_Kerr(0) is calculated [21] from the relation dν_m',Kerr/ν_m' = 2n₂ dI/n_eff. Here c is the speed of light, h is the Planck constant, n_g = 2.1 is the group index, n_eff = 1.8 is the effective refractive index, V_eff is the effective optical mode volume, and n₂ = 2.4 × 10⁻¹⁹ m²/W is the nonlinear index of Si₃N₄. The factor of 2 comes from cross-phase modulation, as the pump and probe modes are two distinct resonances in our experiment (i.e. m ≠ m' and ν_m ≠ ν_m'). The thermal response term χ_therm(0) is calculated from the relations dν_m',therm/ν_m' = dn_mat/n_mat and dP_abs = κ_abs hν_m dn_c.
The material refractive index of Si₃N₄ at 1550 nm is n_mat = 2.0, and its thermo-optic coefficient [22] is dn_mat/dT = 2.5 × 10⁻⁵ /K.

Supplementary Note 12. Fitting of the measured response

For the microresonator response data presented in Fig. 5 in the main manuscript, we use the fitting function

χ(ω) = A [1 + γ · χ_therm(ω)/χ_therm(0)] · 1/(1 + 2iω/κ_probe) · 1/(1 + 2iω/κ_pump)   (4)

to extract the response ratio γ = χ_therm(0)/χ_Kerr(0) from the response measurement. Here the free fitting parameters are only κ_pump, the ratio γ, and an arbitrary constant prefactor A. The normalized thermal response χ_therm(ω)/χ_therm(0) is retrieved from frequency-domain heat-transfer COMSOL simulations, and κ_probe is measured and kept the same for all measurements performed on the same microresonator. Importantly, only the data above 10 kHz are used in the fitting, due to the locking distortion at lower frequencies. The validity of the dynamical heating simulations, and hence that of the simulated response function, is verified by benchmarking the model against our recent thermorefractive noise measurement [23] of similar Si₃N₄ samples, where the measured noise spectrum is connected to the real part of our response function through the Fluctuation-Dissipation Theorem (FDT) in the frequency range of interest.

We notice that a fitting function (which we refer to as the "analogue" fit) in which an analogue form 1/[1 + (ω/ω_th)^ζ], with free parameters ω_th (thermal cutoff frequency) and ζ (pole number), replaces the simulated thermal response χ_therm(ω)/χ_therm(0), could in principle fit the measured response data more closely. However, we observed that, using this model, the fit tends to systematically overestimate the response ratio γ for high-absorption resonances, as illustrated in Supplementary Fig. 11. This fitting artifact is manifested in the absorption rate calibration, as shown in Supplementary Fig. 11(c): for resonances with absorption rates higher than 10 MHz, the method starts to overestimate the absorption rate, yielding absorption rates larger than the actual, i.e. physically observed, ones. This overestimate occurs because the analogue form does not correctly capture the complicated thermal response of the device.

Supplementary Figure 11 (caption): The analogue fitting result does not correctly capture the thermal response of the device, and therefore tends to overestimate the response ratio γ for high-absorption resonances, leading to unphysically high intrinsic absorption that exceeds the measured cavity loss rate. This feature of the analogue fit is reflected in the absorption rate calibration shown in panel (c): for resonances with absorption rates higher than 10 MHz, the analogue fitting method starts to overestimate the absorption rate.

Supplementary Figure 14 (caption): More resonance response data and fits, similar to those shown in Fig. 5(c, e) in the main manuscript. Panel labels give the probe wavelength and the fitted response ratio: λ = 1541 nm, γ = 17.5; λ = 1559 nm, γ = 9.4; λ = 1586 nm, γ = 8.3; λ = 1618 nm, γ = 7.2; λ = 1532 nm, γ = 1.2; λ = 1538 nm, γ = 1.0; λ = 1553 nm, γ = 0.7; λ = 1571 nm, γ = 0.4; λ = 1595 nm, γ = 0.9; λ = 1604 nm, γ = 0.8. Panels (a-f) correspond to data shown in Fig. 5(c) in the main manuscript; panels (g-l) correspond to 40-GHz-FSR data shown in Fig. 5(e).
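Two of the procedures described in these notes lend themselves to short numerical sketches. First, the dispersion characterization of Supplementary Note 4 (extracting D₁ and D₂ from calibrated resonance frequencies) amounts to a quadratic fit of the integrated dispersion. The sketch below is not the authors' code; it uses synthetic data generated with the quoted values D₁/2π = 40.6 GHz and D₂/2π = 224 kHz, with an arbitrary noise level.

```python
# Minimal sketch: fit D1 (FSR) and D2 (GVD) from the integrated dispersion
# omega_mu = omega_0 + D1*mu + (D2/2)*mu^2, using synthetic resonance data.
import numpy as np

D1 = 2 * np.pi * 40.6e9
D2 = 2 * np.pi * 224e3
mu = np.arange(-60, 61)                                   # relative mode numbers
omega = 2 * np.pi * 193.4e12 + D1 * mu + 0.5 * D2 * mu**2
omega += 2 * np.pi * 50e3 * np.random.default_rng(0).normal(size=mu.size)  # noise

# Quadratic fit: the linear coefficient is D1, twice the quadratic one is D2.
coeffs = np.polyfit(mu, omega, deg=2)
D2_fit, D1_fit = 2 * coeffs[0], coeffs[1]
print(f"D1/2pi = {D1_fit/2/np.pi/1e9:.2f} GHz, D2/2pi = {D2_fit/2/np.pi/1e3:.1f} kHz")
```

Second, the fit of Eq. (4) can be sketched as follows. Since the simulated COMSOL thermal response is not available here, a single-pole stand-in is used for χ_therm(ω)/χ_therm(0) (i.e., the "analogue" form with ζ = 1, which Supplementary Note 12 cautions against for high-absorption resonances); all parameter values are illustrative.

```python
# Minimal sketch: fit |chi(omega)| of Eq. (4) to synthetic magnitude data with a
# placeholder single-pole thermal response. Not the authors' fitting code.
import numpy as np
from scipy.optimize import curve_fit

kappa_probe = 2 * np.pi * 30e6          # measured probe linewidth (rad/s), held fixed
omega_th = 2 * np.pi * 100e3            # stand-in thermal cutoff (rad/s)

def response_mag(f, A, gamma, kappa_pump):
    w = 2 * np.pi * f
    therm = 1.0 / (1.0 + 1j * w / omega_th)            # placeholder thermal response
    cavity = 1.0 / (1.0 + 2j * w / kappa_probe) / (1.0 + 2j * w / kappa_pump)
    return np.abs(A * (1.0 + gamma * therm) * cavity)

# Synthetic "measured" data above 10 kHz, as in the described procedure.
f = np.logspace(4, 8, 300)
data = response_mag(f, 1.0, 5.0, 2 * np.pi * 40e6)
data *= 1 + 0.02 * np.random.default_rng(0).normal(size=f.size)

popt, _ = curve_fit(response_mag, f, data, p0=[1.0, 1.0, 2 * np.pi * 20e6])
print(f"fitted gamma = {popt[1]:.2f}")
```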
2020-05-29T01:00:54.908Z
2020-05-25T00:00:00.000
{ "year": 2021, "sha1": "0a23d03b026b0ca1aa6730935fe9ccf3c3b0c264", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41467-021-21973-z.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9bbbd9e596717835d4f4a218a8f268b1fe47abbc", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Medicine", "Physics", "Materials Science" ] }
118002821
pes2o/s2orc
v3-fos-license
BCS-BEC crossover in an optical lattice

We model fermions with an attractive interaction in an optical lattice with a single-band Hubbard model away from half-filling, with on-site attraction $U$ and nearest-neighbor hopping $t$. Our goal is to understand the crossover from BCS superfluidity in the weak-attraction limit to the BEC of molecules in the strong-attraction limit, with particular emphasis on how this crossover in an optical lattice differs from the much better studied continuum problem. We use a large-$N$ theory with Sp(2N) symmetry to study the fluctuations beyond mean-field theory. At T=0, we calculate various observables across the crossover, including the chemical potential, gap, ground-state energy, speed of sound and compressibility. The superfluid density $n_s$ is found to have a non-trivial $U/t$ dependence in this lattice system. We show that the transition temperature $T_c$ scales with the energy gap in the weak-coupling limit but crosses over to a $t^2/U$ scaling in the BEC limit, where phase fluctuations controlled by $n_s$ determine $T_c$. We also find, quite contrary to our expectations, that in the strong-coupling limit the large-$N$ theory gives qualitatively wrong trends for the compressibility. A comparison with a simple Hartree-shifted BCS theory, which takes into account both pairing and Hartree shifts and correctly recovers the atomic limit and the right qualitative trend for the compressibility, reveals that the large-$N$ theory on the lattice, although it considers a larger number of diagrams, is in fact inferior to the simpler Hartree-shifted BCS theory. The failure of the large-$N$ approach is explained by noting (i) the importance of the Hartree shift in lattice problems, and (ii) the inability of the large-$N$ approach to treat the particle-particle and particle-hole channels on an equal footing at the saddle-point level.

The inclusion of a lattice in the system leads to several qualitative differences with the continuum. One of the key features distinguishing a lattice system from the continuum is the dependence of the superfluid stiffness of the gas on the interaction strength and filling fraction, even at T = 0. This is in contrast to the continuum case, where the T = 0 superfluid stiffness is fixed by the particle mass and density due to Galilean invariance. Consequently, when phase fluctuations play a dominant role in the loss of phase coherence and the superfluid stiffness sets the scale for the transition temperature [16], the above-mentioned difference between the lattice and the continuum becomes explicit. A second difference between the continuum and the lattice, not entirely unrelated to the previous point, concerns the effective mass of the bound pairs in the BEC limit. In the continuum, the mass of the bound pair in the BEC limit is simply twice the mass of the fermions and hence does not scale with the coupling strength. In contrast, the effective mass of the bosons on the lattice becomes increasingly large with the strength of the coupling. This is due to the fact that the bosons on the lattice can only move around by virtual ionization, and hence the corresponding hopping matrix element for the bosons, calculated within simple perturbation theory, has an energy denominator equal to the coupling strength. Consequently, the boson mass, which is inversely proportional to the hopping, becomes larger with the strength of the coupling. Thirdly, on a bipartite lattice there is a particle-hole (p-h) transformation that puts additional constraints on the thermodynamics.
Finally, on a lattice there is the emergence at half-filling of a charge density wave order that competes with the superfluid (pairing) order. This new order arises because at half-filling the lattice Hamiltonian has a higher SU(2) symmetry in spin space that is spontaneously broken. Our primary objective in studying attractive fermionic atoms on a lattice is therefore to understand how the broken translational invariance affects various physical quantities across the crossover. Moreover, there is growing interest in performing experiments with ultra-cold fermionic clouds of both ⁴⁰K [17] and ⁶Li [18] atoms in optical lattices, and although the entropy in the current experiments needs to be reduced by a factor of 3 or more in order to bring the temperature below T_c [19], we believe that in the future these experiments should be able to test the findings of our current work.

The paper is organized as follows: In section II we introduce the Hamiltonian and discuss the p-h constraints on a lattice. In section III we discuss a Hartree-shifted BCS (HBCS) theory that respects the lattice p-h constraints. In section IV we outline a diagrammatic approach to include the effects of quantum fluctuations on top of the HBCS theory and discuss the problems with this approach. Next, in section V we develop the large-N formalism that we use in the rest of the paper, and discuss the T = 0 results for different properties of the system within the large-N approach. We conclude this section with a comparison between HBCS and large-N, and show that the former theory gives a more correct account of the chemical potential and the compressibility at T = 0. In section VI we calculate the zero-temperature superfluid stiffness. In section VII we outline the calculation and results for the critical temperature. We conclude in section VIII.

II. HAMILTONIAN AND PARTICLE-HOLE CONSTRAINTS

In this section we first introduce the Hamiltonian that describes two kinds of fermions on a lattice and calculate the strength of interactions for which a two-particle bound state appears (unitarity condition). We next derive a set of p-h constraints imposed on the thermodynamics and develop a simple HBCS theory that respects these constraints.

A. Hamiltonian

The study of the BCS-BEC crossover in the absence of an optical lattice uses the divergence of the scattering length near a Feshbach resonance to tune the strength of the interactions between the fermions. Although this same technique has been applied to fermions in optical lattices [18], the Hamiltonian that describes this system near resonance is poorly understood. This is due to the inherent multi-band nature of the system when the (continuum) scattering length between the atoms diverges [20]. We do not have a separation of energy scales that would allow us to study an effective Hamiltonian in a single band. However, as we show below, the lattice strongly modifies the scattering properties of fermions restricted to the lowest band, to the point that it takes a finite amount of on-site interaction to form a (molecular) bound state. Thus, a Feshbach resonance is not needed to achieve a unitary gas in a lattice. The Hamiltonian we will study is the single-band attractive Hubbard Hamiltonian:

$$H = -t\sum_{\langle i,j \rangle,\sigma} c^{\dagger}_{i\sigma} c_{j\sigma} - U \sum_{i} n_{i\uparrow} n_{i\downarrow} - \mu \sum_{i,\sigma} n_{i\sigma}. \qquad (1)$$

Here c_{jσ} is the fermion annihilation operator at site j, the pseudo-spin index σ = ↑, ↓ represents the two hyperfine states, t is the hopping matrix element between adjacent sites, and the summation indices i, j run over nearest-neighbor sites.
The on-site attractive coupling is given by −U with U > 0, and it is assumed that both the hopping t and U are much smaller than the inter-band gap. Finally, n_{iσ} = c†_{iσ} c_{iσ} is the number operator at site i for fermions with spin σ, and µ is the chemical potential. For simplicity, we study homogeneous systems; i.e., we neglect the effects of the (typically harmonic) external trapping potential, which can eventually be included using a local density approximation. Throughout the paper we set ℏ = k_B = 1, and we use the convention that all 3-momentum sums run over the first Brillouin zone and are divided by the total number of lattice sites.

The scattering amplitude between fermions on the lattice can be obtained by summing up all possible interaction events of fermions with the dispersion relation obtained from the kinetic energy in (1), ε_k = −2t[cos(k_x a) + cos(k_y a) + cos(k_z a) − 3] (which we conventionally measure from the bottom of the band), where a is the lattice constant. The scattering amplitude can be calculated as f = (m/4π) Γ(0, 0), where Γ(q, ω) = U/(1 + U Π(q, ω)) is the four-point vertex function for a pair of fermions of mass m with center-of-mass momentum q, and Π(q, ω) is the corresponding polarization, which in our case (and in the limit T → 0) is of the form

$$\Pi(\mathbf{q}, \omega) = \sum_{\mathbf{k}} \frac{1}{\omega - \epsilon_{\mathbf{k}} - \epsilon_{\mathbf{q}-\mathbf{k}}}, \qquad (2)$$

where the momentum sum runs over the Brillouin zone. We can now see that the condition for a diverging scattering amplitude (i.e., unitarity) on the lattice is [21]

$$1 = U_c \sum_{\mathbf{k}} \frac{1}{2\epsilon_{\mathbf{k}}} \qquad (3)$$

(a numerical evaluation of this condition is sketched in the code example at the end of section V below). For most experiments, the values of U and t can be chosen more or less independently. While U is primarily fixed by the magnetic field strength, which can be chosen such that one is always far from a Feshbach resonance, t can be adjusted by tuning the height of the optical lattice. Therefore, the single-Bloch-band picture remains valid for the purpose of studying the BCS-BEC crossover as described by the Hamiltonian (1).

B. Particle-Hole Constraints

Lattice systems have an additional symmetry stemming from the possibility of describing the physics in terms of either particles or holes; the choice of description is usually made in order to simplify the resulting Hamiltonian. In the case of the Hamiltonian (1) we can obtain an exact relationship between a system with n fermions per site (particles) and one with 2 − n fermions per site (holes). Let us for the moment work in the canonical ensemble and look for the ground state of the Hamiltonian (1) with the constraint that the number of particles per site is n = n↑ + n↓. If we now perform the particle-hole transformation c†_{i,σ} = (−1)^i d_{i,σ} [22], it can easily be verified that the kinetic energy term maintains its form with the replacement of the c operators by the d operators. On the other hand, the on-site interaction term (with the site index omitted for clarity) transforms as

$$-U n_{\uparrow} n_{\downarrow} = -U\, d^{\dagger}_{\uparrow} d_{\uparrow}\, d^{\dagger}_{\downarrow} d_{\downarrow}$$
$$\quad +\, U\,(d^{\dagger}_{\uparrow} d_{\uparrow} + d^{\dagger}_{\downarrow} d_{\downarrow}) - U. \qquad (4)$$

Given that d†↑d↑ + d†↓d↓ = 2 − n is fixed in the calculation, the terms in the second line of (4) are constant within the Hilbert space of interest. Thus, the Hamiltonian maintains its operational form under the particle-hole transformation, and the ground-state wavefunction for a system of n particles is related to the ground-state wavefunction for a system of 2 − n particles.
Their corresponding energies are related as Differentiating with respect to n we see that the chemical potential, defined as µ(n) = ∂E(n)/∂(n), satisfies Finally, the thermodynamic potential Ω(µ) = E(n(µ)) − µ n which is the quantity we shall calculate in the grandcanonical ensemble, satisfies We stress that any approximation method used to solve the problem would have to satisfy this symmetry in order to yield physically consistent results. III. HARTREE + BCS THEORY AT T = 0 The starting point of the Hartree + BCS theory is the Hamiltonian (1). In mean field theory the contribution of the interaction termV k = −U c † k,↑ c † −k,↓ c −k,↓ c k,↑ to the ground state energy can be written as The first term is a constant and is the Hartree correction to the ground state energy. Note that since the chemical potential µ is the derivative of the ground state energy w.r.t. n, we can absorb the overall shift of the ground state energy due to the Hartree term in µ by adding (nU/2) to it. The quantity F k in the second term is self-consistently obtained by minimizing the ground state energy w.r.t. F k and this gives: F k = ∆ 0 /2E k . The gap and number equations in HBCS respectively read, where E k = ξ 2 k + ∆ 2 0 . We note that the single particle energies ξ k in HBCS include a Hartree shift: ξ k = k − µ − nU/2, where the last term is the Hartree term. It can be easily verified that the HBCS theory satisfies the particle-hole constraints on thermodynamics (see Eqs. 5, 6, and 7) derived from the attractive Hubbard model. Eqs. (9) and (10) are then self-consistently solved for µ and ∆ 0 for a given value of n [23,24]. The result for the chemical potential is plotted in Fig. (4) for a given filling. The chemical potential is monotonically suppressed as a function of coupling. Within the HBCS theory the strong coupling expansion of the chemical potential and the gap are respectively given by: We next develop a diagrammatic formulation to include the effects of quantum fluctuations on top of the HBCS theory. IV. DIAGRAMMATIC METHOD FOR INCLUDING QUANTUM FLUCTUATIONS ABOUT HBCS THEORY In this section, we outline a diagrammatic approach to include Gaussian fluctuations on top of the HBCS theory. By including quantum fluctuations we expect to account for the zero-point motion of the collective mode and the virtual scattering of gapped quasiparticles. However, since we have already included in our HBCS theory the leading order Hartree term from the Gaussian corrections (see Fig. (1)), we should be careful not to double count it. We begin by noting that the Hartree term that we have included in the HBCS theory can be systematically introduced in the mean field propagators using the Luttinger-Ward formalism. The details are outlined in Appendix A. We next use the Hartree shifted propagators to include RPA corrections to the thermodynamic potential. In order to avoid double counting of the Hartree term, we subtract by hand this term from the Gaussian thermodynamic potential Ω g . We have explicitly verified that by doing so, we not only restore the correct p-h symmetry for Ω g , the resulting expression for Ω g is also rendered manifestly convergent in the absence of convergence factors which makes it easier to compute Ω g . However, inspite of the compactness of this approach it leads to an unphysical negative compressibility in the strong coupling limit (see Fig. 10). 
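As a concrete illustration of the HBCS gap and number equations (9) and (10) referred to above, here is a minimal numerical sketch (ours, not the authors' code) that solves them at T = 0 for a given filling; the grid size and the starting guesses are arbitrary choices and may need adjustment at weak coupling.

```python
import numpy as np
from scipy.optimize import fsolve

def band(nk=32, t=1.0, a=1.0):
    """Tight-binding band eps_k on a simple-cubic Brillouin-zone grid."""
    k = (np.arange(nk) + 0.5) * 2.0 * np.pi / (nk * a) - np.pi / a
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    return -2.0 * t * (np.cos(kx * a) + np.cos(ky * a) + np.cos(kz * a) - 3.0)

def hbcs_solve(n, U, t=1.0, nk=32):
    """Solve the T = 0 HBCS equations
        1 = (U/2) < 1/E_k >,    n = < 1 - xi_k/E_k >,
    with xi_k = eps_k - mu - n U/2 and E_k = sqrt(xi_k^2 + Delta0^2);
    < ... > is the Brillouin-zone average.  Returns (mu, Delta0)."""
    eps = band(nk, t)

    def eqs(x):
        mu, delta = x
        xi = eps - mu - n * U / 2.0
        E = np.sqrt(xi ** 2 + delta ** 2)
        return [1.0 - (U / 2.0) * np.mean(1.0 / E),
                n - np.mean(1.0 - xi / E)]

    # strong-coupling limits mu -> -U/2, Delta0 -> (U/2) sqrt(n(2-n)) used as crude guesses
    mu, delta = fsolve(eqs, [-U / 2.0 + 3.0 * t * n, 0.5 * U * np.sqrt(n * (2.0 - n))])
    return mu, abs(delta)

if __name__ == "__main__":
    for U in (4.0, 8.0, 16.0):
        mu, delta = hbcs_solve(n=0.5, U=U)
        print(f"U/t = {U:4.1f}:  mu/t = {mu:7.3f}   Delta0/t = {delta:6.3f}")
```

At large U/t the output should approach mu = -U/2 and Delta0 = (U/2) sqrt(n(2-n)), the atomic-limit values discussed later in the text.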
The result is unphysical since the system in this limit is comprised of hard-core bosons with nearest neighbor repulsion and can hence neither collapse (prevented by Pauli exclusion) nor phaseseparate (ruled out on energetic grounds). The failure of the diagrammatic formalism, which was to include quantum fluctuations on top of HBCS theory, therefore necessitates a different approach and we next turn to a large-N formalism. V. LARGE-N THEORY FOR CROSSOVER ON LATTICE In this section we give a brief account of the large-N formalism, which as we shall see, starts with a saddle point that is different than HBCS, obeys the p-h constraints appropriate for the large-N model at zeroth and first order in 1/N , and most importantly predicts positive compressibility for all parameters. In addition to satisfying the p-h constraints on the lattice, the way our large-N theory on the lattice differs from other large-N approaches in the continuum [10] is the way we treat the fluctuation feedback (see subsection B). Additionally, at half-filling there is an emergence of charge density wave (CDW) order that the large-N theory is unable to capture (for reasons discussed later). Hence, we shall work away from half-filling where the ground state of the system is a non-degenerate superfluid. The starting point of our large-N formalism is a generalization of the Hamiltonian of Eq. (1) to include N fermion flavors for each spin in the form where α is the index for each of the N flavors. This Hamiltonian is invariant under the Sp(2N ) symplectic group and reduces to the original attractive Hubbard model (1) after setting N = 1. As shown in section V B, the virtue of working with the above form of interaction, where the flavor index α is not conserved, is that it lends itself to a systematic expansion in the parameter 1/N around the mean field theory results, exact in the limit N → ∞. Although such an expansion is strictly valid in the large-N limit, it is assumed that the general trends of the results found will be correct after setting N = 1 at the end of the calculation. A. Particle-Hole constraints for the large-N model Following the discussion in section (II B), we next derive a set of p-h constraints appropriate for the large-N model (12). Using the Hamiltonian (12) and a p-h transformation: c † iα,σ = (−1) i d iα,σ , we obtain exact relationships between thermodynamic variables with n fermions (particles) per flavor and ones with 2−n fermions (holes) per flavor. The ground state energy (E), the chemical potential (µ) and the thermodynamic potential (Ω) now respectively transform as follows: We next develop a functional integral formalism with the large-N model and show that it respects the above constraints at zeroth order and also at O(1/N ). B. Functional Integral Formalism In this section we shall outline the key steps in formulating a functional integral approach with the large-N model. The details are given in Appendix B. The thermodynamic properties of the system can be obtained from the partition function which can be expressed as a Feynman path integral over Grassmann fieldsΨ ασ and Ψ ασ . We next introduce a Hubbard-Stratonovich field ∆(x) at each x = (x i , τ ) which couples to αΨ iα↑ (τ )Ψ iα↓ (τ ), and decouple the quartic fermionic interaction term in the action. This makes the functional integral both Gaussian in the fermionic fields and diagonal in the flavor index α. After integrating over these Grassmann variables we get an effective action in terms of the Hubbard-Stratonovich fields ∆(x). 
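For orientation, the outcome of the decoupling just described can be written compactly as follows. This is our schematic reconstruction (constants from the Gaussian Hubbard-Stratonovich measure are dropped), not a transcription of the paper's numbered equations:

```latex
% Effective action after decoupling the interaction with \Delta(x) and
% integrating out the N identical fermion flavors (schematic; constants dropped):
\[
  S_{\Delta}[\Delta] \;=\; N\!\left[\,\int_{0}^{\beta}\! d\tau \sum_{i}
      \frac{|\Delta(x)|^{2}}{U}
      \;-\; \operatorname{Tr}\ln G^{-1}[\Delta]\,\right],
\]
% where G^{-1}[\Delta] is the Nambu-Gor'kov inverse propagator carrying \Delta in its
% off-diagonal entries, and Tr runs over sites, imaginary time and Nambu indices.
% The overall factor of N is what makes the saddle point exact as N -> infinity.
```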
It can be easily shown that the space-and time-independent saddle point of this effective action corresponds to a thermodynamic potential that is linear in N and Gaussian fluctuation corrections to the saddle point are zeroth order in N , so that the total thermodynamic potential per flavor can be expanded as To find the uniform, static saddle point of the effective action S ∆ , we replace ∆(x) by the space-time independent quantity ∆ 0 . The saddle point condition is [27] dS 0 /d∆ 0 = 0, which can be rewritten as where E k = ξ 2 k + ∆ 2 0 . The mean field number equation can be obtained from the mean field thermodynamic potential Ω 0 as Eqs. (16) and (17) must be solved self-consistently to obtain the mean field gap parameter ∆ 0 corresponding to the mean field chemical potential µ, as well as finding the chemical potential which yields the desired density n. The results of this calculation are presented as dashed lines in Fig. 2. It is instructive to show that this mean field theory satisfies the particle-hole constraints in the lattice (see section II(B)) to the proper order, i.e. to zeroth order in 1/N . From Eq. (6) we see that this corresponds to the chemical potentials on particle and hole sides being related by µ(n) = −µ(2−n). The validity of this equation can be seen by replacing µ → −µ without modifying ∆ 0 ; this leaves (16) unchanged while replacing n → 2 − n in (17). The large U/t limit of this theory can be easily obtained from the equations. To zeroth order in t/U , the chemical potential becomes µ = (1 − n)U/2 and the gap parameter is ∆ 0 = 1 − (1 − n) 2 U/2. We finally emphasize that the essential difference between the previously described HBCS theory and this mean field theory is the absence of the Hartree term in the latter. Such a term, which corresponds to the particle-hole channel cannot be easily obtained at the mean field level of any functional integral formalism. We recover this important contribution in our theory as a 1/N order correction in what follows. We next expand our action in terms of the fluctuations around the saddle point and truncate to Gaussian order or O(1/N ). The Gaussian thermodynamic potential Ω g can be expressed in terms of the fluctuation propagator (see Appendix B), which has poles on the real axis corresponding to the collective mode and branch cuts corresponding to the two-particle continuum. Using the new approximation to the thermodynamic potential per flavor, Ω 0 (µ) + (1/N ) Ω g (µ), we next obtain expressions for the properties of the system to linear order in 1/N . At this point, we want to emphasize that we do not treat the chemical potential µ and the auxiliary field ∆ 0 at equal footing [26]. Indeed, in our approach the former is a thermodynamical variable while the latter is merely a parameter in the theory, obtained as the saddle point of a variable that is integrated over in the partition function. As such, it is not an independent variable but it is defined as a function of µ, i.e. ∆ 0 (µ) is the saddle point field used to calculate the partition function at such a chemical potential, obtained from the solution of Eq. (B10). As we make expansions in powers of 1/N this equation is left unchanged, as the saddle point condition is exact to all orders [27]. In order to calculate the leading order corrections to the thermodynamic quantities we next expand the renormalized number equation and the saddle point condition using: µ = µ 0 + (1/N )δµ and ∆ = ∆ 0 + (1/N )δ∆. 
This gives us the Gaussian corrections δµ and δ∆ to the chemical potential and gap parameter respectively. In order to make a connection to the original system with two spin components, we set N = 1. Finally, we note that, even though our approach to the 1/N expansion is different from the one introduced in reference [10] it can be shown (see Appendix E) that the first order corrections to the chemical potential, δµ, in both approaches are equivalent while the corrections to the gap parameter δ∆ 0 are different; the latter is due to a modification of the gap equation at the 1/N level which we do not include. As long as the emphasis is in the calculation of the thermodynamics of the system which depend solely on the chemical potential, this difference is not as relevant. C. Zero-temperature Results for the large-N formalism Using our formalism we can calculate all thermodynamic quantities for the system. In this section we present our results, both for the mean field approximation and up to linear order in the 1/N expansion; in the figures we have set the number of flavors N equal to 1 in the expansion. Chemical potential and gap parameter The chemical potential and the gap parameter across the entire crossover and for a typical density (quarter filling, n = 0.5) are plotted in Fig. 2; while the fluctuations are unimportant for small interactions U , the correction becomes important at unitarity and in the BEC limit. The fluctuations decrease the value of the order parameter, as well as decrease the value of the chemical potential; as we shall see this is related to the Hartree shift in the energy of the system. We can show that our theory satisfies particle-hole symmetry to first order in 1/N . As we can see from (14) and the expansion (B7), particle-hole symmetry at this order implies that where µ = µ(n). This property of our Ω g (µ) can be directly seen from the second line in (B18); once again making the transformation µ → −µ as well as switching variables to k → −k and q → −q we can see the first term is left unchanged while in the second one u ↔ v. Thus, we recover (18). Compressibility We next calculate the compressibility of the system defined as κ = dn/dµ to order 1/N . Using the number equation n = −d(Ω 0 + (1/N )Ω g )/dµ, differentiating with respect to µ and evaluating the resulting expression at µ = µ + δµ/N , we obtain with, we plot the chemical potentials for the two theories in Fig. (4). We note that the results from the two theories match in the BCS regime; however there is a large deviation in the BEC regime. In particular, when U/t 1, On the other hand, the strong coupling limit of the chemical potential within the large-N theory scales like which has a different leading order term compared to the HBCS µ. We already see that there are quantitative differences between the HBCS and the large-N results. In order to understand the results (20) and (21), we next solve the problem exactly in the atomic limit: t/U = and has four eigenstates: | 0 , |↑ , |↓ and |↑↓ with respective energies 0, −µ, −µ and −2µ − U . To study the broken symmetry state we now introduce a fictitious pairing field h to obtain a Hamiltonian After setting up the gap and number, we finally take h = 0, solve the gap and number equations for µ and ∆ 0 , and obtain µ = −U/2 and ∆ 0 = U n(2 − n)/2. These are exactly the values obtained from solving the HBCS number and gap equations (9, 10) [30]. 
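The intermediate algebra behind these atomic-limit values is short enough to spell out; the following is our reconstruction of the steps, not the paper's numbered equations:

```latex
% Atomic limit t = 0 of the HBCS equations: a single level at energy 0,
% with xi = -mu - nU/2 and E = sqrt(xi^2 + Delta_0^2).
\begin{align*}
  1 &= \frac{U}{2E} &&\Rightarrow\; E = \frac{U}{2},\\
  n &= 1 - \frac{\xi}{E} &&\Rightarrow\; \xi = \frac{U}{2}\,(1-n),\\
  \xi &= -\mu - \frac{nU}{2} &&\Rightarrow\; \mu = -\frac{U}{2},\\
  \Delta_0 &= \sqrt{E^{2}-\xi^{2}} \;=\; \frac{U}{2}\sqrt{n(2-n)}.
\end{align*}
```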
Physically, the result µ = −U/2 in the atomic limit can be explained by noting that the chemical potential of the fermions is just one-half of the binding energy ∼ U for the molecules. From these considerations, we conclude that the large-N theory gives quantitatively incorrect results for the density (n) dependence of the chemical potential in the atomic limit. This also leads to problems with the compressibility since the compressibility is the derivative of µ with respect to n. We turn to this next. In order to calculate the compressibility dn/dµ within HBCS we note that the Hartree shift to the single particle energies can be incorporated into the chemical potential. Following [31] we write the renormalized chemical potential asμ = µ + nU/2, and evaluate dn/dµ as The quantity ∂n/∂μ can be calculated from Eqs. (9 and 10) Fig. (5) shows a comparison of dn/dµ as obtained from HBCS using Eqs. (24,25) and large-N respectively. In the strong coupling limit the HBCS compressibility scales as This is again expected on general grounds, when one notes that the chemical potential µ, written in powers of t/U , has a zeroth term equal to −U/2 (atomic limit). Any n dependence of µ is hence at least O(t 2 /U ), which implies that the compressibility must increase with U . From these considerations we come to the conclusion that the compressibility should be a monotonically increasing function of U/t. In contrast, the compressibility for large-N scales like dn/dµ ∼ 2/U . To summarize we find that the much simpler BCS plus Hartree theory works better in the BEC limit compared with the more sophisticated large-N approach where we included the 1/N Gaussian fluctuation corrections to the saddle point result, with both approximations satisfying the particle-hole constraints on the thermodynamics. By better we mean that BCS + Hartree reproduces the atomic limit behavior of the chemical potential and the strong coupling behavior of the compressibility expected for a BEC of hard-core lattice bosons, while the large-N approach (with N set equal to 1 at the end) does not. We can also compare our results with the available Quantum Monte Carlo data, which however is only for the two-dimensional attractive Hubbard model. We find that the results for the chemical potential [32] and for the compressibility [31] at moderately large | U | /t are both in good (semi-quantitative) agreement with the Hartree + BCS theory. Although there exists no QMC data on the 3D attractive Hubbard model at T = 0, we note that fluctuations should be less important in 3D than in 2D. It is therefore reasonable to expect that the agreement between Hartree + BCS theory and QMC should only improve in 3D. The comparison between the two approaches (large-N and HBCS) is quite surprising and unexpected. We should emphasize that the two theories start with quite different mean field solutions (or saddle points). The BCS + Hartree solution incorporates both the particleparticle (p-p), or pairing, and the particle-hole (p-h) Hartree physics on an equal footing at the mean field level. The large-N solution, on the other hand, is designed to focus only on the p-p channel at the saddle point level, and include all other effects as fluctuations about the saddle point. One might have thought that since the Hartree correction to the thermodynamics is included at the 1/N level (along with higher order terms), the large-N approach would "go beyond" the simpler BCS + Hartree approach. 
But we find that "more" (diagrams, for instance) is not necessarily "better" for quantum many-body systems! It might also be worth contrasting the optical lattice calculations presented here from BCS-BEC crossover in the continuum. In the continuum, one does not in general have a Hartree term in thermodynamics, which is proportional to both the interaction and the density (except in the BCS limit). The reason is as follows: the bare interaction g(Λ) actually goes to zero as the ultraviolet cutoff (inverse range of potential) goes to infinity. Thus a "bare Hartree term" proportional to g(Λ)n vanishes throughout the crossover. Also there can be no term proportional to the renormalized interaction a s in the ground state energy in general, since that would diverge at unitarity! As shown in Ref. [6], the Hartree diagram in the Gaussian fluctuation correction to the BCS theory does indeed lead to the expected Hartree correction of relative order k f a s in the BCS limit. But it is not possible to isolate "the Hartree correction" to the ground state energy or chemical potential at arbitrary values of 1/k f a s in the continuum problem. In the following section we compute the superfluid density on a lattice at T = 0 and show that because of the broken translational invariance, the superfluid density is not equal to the total density as is the case with the continuum. VI. SUPERFLUID DENSITY It has been shown in Ref. [28], on quite general grounds, that the superfluid density of a translationally invariant superfluid possessing time reversal invariance at T = 0, is equal to the total density. For a one component system, barring pathologies (e.g. He 3 -He 4 mixtures), the statement can be proved using the Gibbs-Duhem relation and Landau's two-fluid model. A phase twist put in the boundary conditions for the order parameter in a translationally invariant system is uniformly distributed across the system. The situation is different on a real optical lattice -because of the broken translation symmetry, the many-body wavefunction has a very small amplitude between the lattice sites and it is energetically advantageous to distribute the phase twists at these locations. As a result the superfluid density, which is the response of the system to this phase twist, turns out to be different on a lattice. Indeed one can show, using Kubo formalism, that for a translationally invariant system the paramagnetic part of the current-current correlation function vanishes due to the commutativity of the total momentum operator with the Hamiltonian. The superfluid density in such a system is therefore entirely given by the diamagnetic part of the response and turns out to be equal to the total density. In a discrete lattice model, the total momentum operator does not commute with the Hamil-tonian and hence the paramagnetic part of the response is non-zero. Consequently, the superfluid density differs from the total density on a lattice even at T = 0. The superfluid density is computed by comparing the free energy F (n) = Ω + µn of the gas at rest with the free energy of a gas moving with a superfluid velocity v s = Q/(2m) in the limit Q → 0; indeed, F (Q, n) − F (0, n) = 1 2 n s mv 2 s so that [29] n s = 4m We can relate this derivative of F with derivatives of Ω recognizing that F (Q, n) = Ω(Q, µ(Q, n))+µ(Q, n)n and thus where we have used the number equation at Q = 0 in the last line. 
Following [29] we calculate, within our large-N formalism, the superfluid density on a lattice as the response of the system to a phase twist on the order parameter. We give the details of this calculation in Appendix F and present the results here. The mean field superfluid density for the large-N theory is a Brillouin-zone sum over the momentum occupation weighted by cos(k_x a) (see Appendix F), and from this expression we can see that in general $n_s^0 < n$. The Galilean invariant result of the dilute continuum limit is recovered in the limit of small density n and weak interaction U/t, in which the chemical potential $\mu$ is near the bottom of the band: for the momenta contributing to the sum we then have cos(k_x a) ≈ 1 and $n_s^0 = n$. Next, we obtain the 1/N corrections to $n_s$ by including Gaussian fluctuations in the calculation of the thermodynamic potential (see Appendix F for details). We obtain an expansion for the superfluid density up to O(1/N), given in Eq. (F17) of Appendix F, where $d/d\mu = \partial/\partial\mu + (\partial\Delta_0/\partial\mu)\,\partial/\partial\Delta_0$ and $\alpha(\mu)$ is the coefficient of the $\theta^2$ term in the small-twist expansion of the saddle-point gap, defined in Appendix F. We finally set N = 1 and plot the mean field superfluid density and the one including 1/N corrections as a function of coupling strength in Fig. (6). As can be seen, $n_s$ falls off like $t^2/U$ in the strong coupling limit, which can be explained by noting that the system in this regime comprises hard-core bosons on a lattice with an effective mass $\sim U/t^2$. In other words, the tightly bound pairs in the BEC limit can hop only through virtual ionization and hence have a hopping parameter $\sim t^2/U$. Further, Gaussian fluctuations reduce the superfluid density from its mean field value across the entire crossover, with an increased suppression in the strong coupling regime. This is expected because the BCS mean field theory reduces the problem to one of non-interacting Bogoliubov quasiparticles with a gapped excitation spectrum. However, the low-lying excitations in the strong coupling regime are the gapless collective modes of the composite bosons, which are not captured by the BCS mean field theory. In the next section we calculate the critical temperature within the large-N theory and show that the large-N theory for the attractive Hubbard model, in spite of its above limitations, predicts the correct qualitative trends for T_c in the two limits. VII. CRITICAL TEMPERATURE Let us finally calculate the critical temperature T_c of the fermionic gas in the lattice, as well as the pairing temperature T*. Just as in the continuum [5], these two temperatures are approximately the same in the BCS limit and widely different in the BEC limit. In the former, the formation of Cooper pairs and their condensation are governed by the same physics. In the BEC limit, on the other hand, the temperature for the formation of pairs is of the order of the binding energy (which is proportional to U), while the critical temperature decreases as U increases, because the effective mass of the pairs increases, as we shall show. To calculate both T* and T_c we need to consider the temperature at which the t-matrix develops a divergence at zero energy and momentum for a given chemical potential $\mu$. This condition for the inverse temperature $\beta = T^{-1}$ is
$$\frac{1}{U} = \sum_{\mathbf k} \frac{\tanh(\beta \xi_{\mathbf k}/2)}{2\,\xi_{\mathbf k}}. \qquad (33)$$
The two temperatures differ, however, in the equation of state that is used to calculate the density. The pairing temperature is obtained using the mean field approximation to the thermodynamic potential, in which only fermionic excitations are included (i.e. pair breaking at that temperature).
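A minimal numerical sketch (ours, not the authors' code) of the pairing scale just defined: Eq. (33) is solved together with the free-fermion number equation for (T*, mu). The Brillouin-zone grid, the log-parametrization of T and the starting guesses are implementation choices tuned for the stronger-coupling side; the Gaussian (NSR-type) correction to the number equation that turns T* into T_c is not included here.

```python
import numpy as np
from scipy.optimize import fsolve

def pairing_temperature(n, U, t=1.0, a=1.0, nk=32):
    """Mean-field pairing scale T*: solve the t-matrix divergence condition
        1/U = < tanh(xi_k / 2T) / (2 xi_k) >
    together with the free-fermion filling  n = < 1 - tanh(xi_k / 2T) >,
    where xi_k = eps_k - mu (no Hartree shift, as at the large-N saddle point)."""
    k = (np.arange(nk) + 0.5) * 2.0 * np.pi / (nk * a) - np.pi / a
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    eps = -2.0 * t * (np.cos(kx * a) + np.cos(ky * a) + np.cos(kz * a) - 3.0)

    def eqs(x):
        T, mu = np.exp(x[0]), x[1]          # solve for ln T so that T stays positive
        xi = eps - mu
        return [1.0 / U - np.mean(np.tanh(xi / (2.0 * T)) / (2.0 * xi)),
                n - np.mean(1.0 - np.tanh(xi / (2.0 * T)))]

    sol = fsolve(eqs, [np.log(0.15 * U), -0.4 * U])   # crude BEC-side starting guesses
    return np.exp(sol[0]), sol[1]

if __name__ == "__main__":
    for U in (8.0, 12.0, 20.0):
        Tstar, mu = pairing_temperature(n=0.5, U=U)
        print(f"U/t = {U:5.1f}:  T*/t = {Tstar:6.3f}   mu/t = {mu:7.3f}")
```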
The calculation of the critical temperature corresponds to the addition of the effects of Gaussian fluctuations in which bosonic excitations (Goldstone modes) are included, leading to a large renormalization of the equation of state. In order to set up the number equation we use the same functional integral formalism that we developed at zero temperature. The mean field thermodynamic potential at high temperatures β < β c is: The mean field number equation at T = T c is then given by ∂Ω 0 /∂µ = −n, or n = k 2 exp(βξ k ) + 1 . Solving Eqs. (33,35) self-consistently we obtain the mean field β c for a given density n and denote it by β 0 c = (T * ) −1 . The calculation of the Gaussian contribution to the thermodynamic potential at these high temperatures (at which the gas is normal) is also similar to the one presented at T = 0 setting ∆ = 0 everywhere. Thus, the fluctuation propagator M is diagonal, with M 11 (q) = Γ(q) where iq l = i2πl/β are Bose Matsubara frequencies and f (ξ) = 1/[exp(βξ)+1] is the Fermi distribution function. Hence, Using our large-N formalism, we shall obtain the 1/N expansion of the diagrammatic approach to this problem, which was discussed by Nozieres and Schmitt-Rink (NSR) [3]. Following NSR, we maintain the same form of the t-matrix equation (33). Nevertheless, just like at T = 0 the 1/N corrections to the thermodynamic potential will renormalize the chemical potential and hence change β c . The inverse temperature β and the chemical In the above equation all the derivatives are evaluated at ∆ = 0, µ = µ and β = β 0 c . Setting the coefficient of the 1/N term to zero we obtain Similarly we expand the number equation and obtain and setting the coefficient of the 1/N term to zero we get Eqs. (38, 40) are then simultaneously solved to obtain the 1/N corrections to β 0 c and µ. We obtain the mean field temperature scale T * and the critical temperature T c after including 1/N corrections from T * = 1/β 0 c and T c = (β 0 c + δβ) −1 (after setting N = 1) respectively. The results are shown in Fig. (7). As expected, there is a large deviation of T c from its mean field value T * in the strong coupling regime. The phase diagram is as follows: above T * the system is a normal fermi gas, for temperatures below T * and above T c there are preformed uncondensed pairs and below T c we have a condensate of pairs. In the weak coupling limit T c approaches the BCS value ∆ 0 /1.75. Beyond the BCS regime, the pairing temperature grows linearly with U/t while T c goes through a maximum near unitarity and then falls off as t 2 /U . The solid black line is Tc/t, the dotted line is the zero temperature gap parameter ∆0/t (rescaled by a factor α = 0.57), the dashed line is zero temperature superfluid stiffness Ds (rescaled by a factor γ = 6.67). One can see that Tc scales like ∆0 in the BCS limit and like ns in the BEC limit. The scalings of T c in the two regimes are summarized in Fig. (8). In the weak coupling regime the pair breaking energy scale ∆ 0 is much smaller than the energy scale set by the superfluid stiffness, D s = n s t and hence the transition temperature T c is governed by the zero temperature gap. In the strong coupling regime, the energy scale for phase fluctuations is the smaller one compared to the pair breaking one and hence the scale for T c is dictated by the zero temperature superfluid stiffness. The precise value of U/t for which T c goes to a maximum depends on filling (see Fig. 9). 
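Stepping back to the zero-temperature superfluid density that sets the stiffness scale D_s = n_s t used above, its mean-field (zeroth-order) piece can be sketched numerically as below. The normalization n_s^0 = < cos(k_x a) (1 - xi_k/E_k) > is our assumption, chosen only so that n_s^0 reduces to n when the occupied momenta sit near the band bottom, as stated in Sec. VI; the prefactor in the paper's expression may differ, and the 1/N corrections responsible for the additional strong-coupling suppression are not included.

```python
import numpy as np
from scipy.optimize import fsolve

def ns_mean_field(n, U, t=1.0, a=1.0, nk=32):
    """Zeroth-order superfluid density on the lattice, taken here as
        n_s^0 = < cos(k_x a) * (1 - xi_k / E_k) >,
    i.e. the momentum occupation weighted by cos(k_x a); the paramagnetic
    response vanishes at the gapped T = 0 mean-field level.  The saddle point
    is solved without a Hartree shift, as in the large-N theory."""
    k = (np.arange(nk) + 0.5) * 2.0 * np.pi / (nk * a) - np.pi / a
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    eps = -2.0 * t * (np.cos(kx * a) + np.cos(ky * a) + np.cos(kz * a) - 3.0)

    def eqs(x):
        mu, delta = x
        xi = eps - mu
        E = np.sqrt(xi ** 2 + delta ** 2)
        return [1.0 - (U / 2.0) * np.mean(1.0 / E), n - np.mean(1.0 - xi / E)]

    mu, delta = fsolve(eqs, [-(1.0 - n) * U / 2.0, 0.5 * U * np.sqrt(n * (2.0 - n))])
    xi = eps - mu
    E = np.sqrt(xi ** 2 + delta ** 2)
    return np.mean(np.cos(kx * a) * (1.0 - xi / E))

if __name__ == "__main__":
    for U in (8.0, 12.0, 20.0):
        print(f"U/t = {U:5.1f}:  n_s^0 = {ns_mean_field(0.5, U):.3f}   (n = 0.5)")
```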
As explained earlier, the scaling T_c ∼ t²/U in the BEC limit can be understood by considering hard-core bosons on a lattice. Simple second order perturbation theory in t/U gives an effective hopping parameter proportional to t²/U for the composite bosons. Since the superfluid stiffness is proportional to this effective hopping parameter, and T_c in this regime is governed by phase fluctuations of the lattice Bose gas, T_c ∼ t²/U in this limit [16,34]. Lastly, we have compared our large-N results with a Hartree shifted NSR theory (HNSR), which uses Hartree shifted propagators in the scattering matrix. The details are given in Appendix G. We find reasonable agreement between the two approaches in the BCS and BEC limits (see Fig. 12). However, there are quantitative differences between the two results around unitarity which we do not understand at this stage. VIII. CONCLUSIONS In this paper we have addressed the BCS-BEC crossover on a 3D optical lattice. We have developed a simple Hartree + BCS theory that satisfies the p-h constraints imposed by the lattice on the thermodynamics. Since inclusion of Gaussian fluctuations on top of the HBCS theory led to an unphysical negative compressibility in the strong coupling limit, we were forced to start from a different saddle point. We developed a large-N approach for the attractive Hubbard model in 3D, where the large-N saddle point does not include the Hartree shift but still respects the p-h constraints of the large-N model. Most importantly, inclusion of Gaussian fluctuations led to a finite and positive compressibility for all parameters. We calculated the ground state chemical potential, gap and ground state energy away from half-filling. The superfluid density at T = 0 on the lattice was found to deviate from the total density, and in the BEC limit it is determined by the single-boson hopping matrix element, which scales as t/U. However, we find that the large-N theory predicts quantitatively inaccurate results for the chemical potential in the strong coupling limit, and qualitatively incorrect trends for the compressibility across the crossover. A comparison with the HBCS theory, which correctly recovers the atomic limit and predicts the right qualitative trends for the compressibility, reveals that the large-N theory on the lattice, although it sums a larger set of diagrams, is in fact inferior to the simpler Hartree shifted BCS theory. The limitation of the large-N approach is explained by noting (i) the importance of the Hartree shift in lattice problems, and (ii) the inability of the large-N approach to treat the particle-particle and particle-hole channels on an equal footing at the saddle point level. In spite of the limitations of the large-N approach in describing the two-component fermionic system on the lattice at T = 0, we obtain correct trends for the critical temperature within this theory by approaching the superfluid state from above T_c. The transition temperature is shown to scale like two different ground state quantities in the two regimes: in the BCS regime T_c scales like the gap, while in the BEC regime T_c scales like the zero temperature superfluid stiffness. These two different scalings show that in the weak coupling regime coherence is lost through pair breaking, while in the strong coupling regime the superfluid order is destroyed by phase fluctuations of the lattice Bose gas.
ACKNOWLEDGMENTS The author wishes to thank Roberto In this appendix we develop a diagrammatic formulation of the crossover problem in the lattice in order to include gaussian fluctuations on top of the Hartree + BCS theory. The starting point of this discussion is the attractive Hubbard Hamiltonian (1). The thermodynamics under this Hamiltonian obeys the p-h constraints discussed in Eqs. (5, 6, and 7). It is easy to see that starting with Hamiltonian (1) if we were to develop a functional integral formalism (like the one used for the large N model), we would find that the saddle point violates the p-h constraints. The reason can be traced back to the choice of Hubbard-Stratonovich field (either p-h or p-p channel). At this stage let us anticipate that a Hartree shift to the mean field chemical potential would correct this problem. We use a Luttinger and Ward formalism [35] to systematically introduce a Hartree shift in our single particle propagators. Introducing the Luttinger-Ward functional Φ[G] we write the thermodynamic potential Ω as where Tr ≡ (1/β) k,ikn tr and the self-energy Σ is obtained by evaluating the functional derivate of Φ[G] at the exact Green's function where G 0 is the non-interacting Green's function. In the Luttinger-Ward formalism Φ is obtained by summing up an infinite series of closed diagrams without any self-energy insertion (generally called skeleton diagrams) and replacing all free propagators by fully interacting ones. At the mean field level we need to retain only the first diagram in the series and thus We define δΦ[G]/δG 11 = U Tr G 22 = Σ which implies δΦ[G]/δG 22 = U Tr G 11 = −Σ. We can further associate Tr G 21 = Tr G 12 with the Hubbard-Stratanovich field ∆ and therefore δΦ[G]/δG 12 = −U Tr G 21 = −∆. The Luttinger-Ward functional at the mean field level is therefore given by and the self-energy matrix is given by We next use the form of the free Green's function given by and the Dyson equation to calculate the inverse of the full Green's function Note, the Green's function in Eq. (A7) has its single particle propagators Hartree shifted. Mean field theory at T = 0 Using Eq. (A1) and the fact that Tr Σ = 0 we can obtain an expression for the mean field thermodynamic potential where E k = ξ 2 k + ∆ 2 0 and ξ k = k − µ + Σ. This form of thermodynamic potential, as anticipated earlier, obeys the correct particle-hole constraints. Then the spatially uniform, static saddle point at T = 0 is given by the following condition The mean field number equation can be obtained from the condition and the Hartree shift Σ is given by Eqs. (A9), (A11), and (A10) are then solved selfconsistently and we obtain the mean field values for ∆ 0 , µ and Σ. Gaussian fluctuations at T = 0 In order to go beyond the mean field approximation we next consider fluctuations of the order parameter ∆ around its static saddle point value ∆ 0 and expand the action S ∆ to Gaussian order. The first order term vanishes due to the saddle point condition (A9) and we obtain The mean field piece S 0 has been defined above and Gaussian piece has the form where iq l = i2πl/β are the Bose-Matsubara frequencies and the matrix elements of the inverse fluctuation propagator M are given by and Here G 0 is the same Nambu propagator defined in Eq. (A7) with ∆ = ∆ 0 , u 2 k = 1 − v 2 k = (1/2)(1 + ξ k /E k ) are the standard BCS coherence factors and k = k + q. 
While calculating the thermodynamic potential including Gaussian fluctuations we need to remember that the first term in the Gaussian part (Ω g ) is indeed the Hartree term (−Σ 2 /U ). Since the Hartree contribution has already been included at the mean field level to preserve particle-hole symmetry, we need to take it out from Ω g to avoid double counting. Writing the partition function upto Gaussian order and integrating out the Gaussian fluctuations we obtain the Gaussian contribution to the thermodynamic potential where the matrix elements M 11 etc. have been rescaled as M 11 → U M 11 . It is easy to see that in the limit of large iq l and hence the Matsubara sum iq l ln M 11 Det M/M 22 without the convergence factor diverges for large iq l . However, the correct Ω g also has a correction term given Upon changing the sign of q in the second term of the second line and noting that the sum over q is over both positive and negative values we have for large iq l which exactly cancels the linear term in the large (iq l ) expansion in Eq. (A18). Summing up the above results we obtain the Gaussian correction to the thermodynamic potential where we are justified to drop the convergence factor e +iq l 0 + from the right hand side of Eq. (A21) since in the large (iq l ) limit, the leading order term in the sum is now of the order 1/(iq l ) 2 and thus the Matsubara sum is convergent. Thus the same scheme that restores the correct particle-hole symmetry in our theory, also makes the Matsubara sum convergent at the Gaussian level. To evaluate the Matsubara sum in Eq. (A21) we analytically continue in the complex plane and convert the sum over the bosonic Matsubara frequencies to an integral over a closed contour enclosing the imaginary axis counter clockwise iq l → (dz/2πi)n B (z) where n B (z) is the Bose distribution function. We evaluate the integral over z along a contour parallel to the Matsubara axis: z → 0 − + iy keeping in mind that the phase of ln M 11 (q, y)/M 22 (q, y) and the imaginary part of (M 11 (q, y) − 1) are both odd functions of y and hence do not contribute when integrated over positive and negative values of y. Therefore, we obtain at T = 0 To obtain ∆ 0 , µ and Σ including gaussian corrections we start with a grand canonical ensemble and treat both µ and Σ as thermodynamic variables. For convinience we switch toμ = µ − Σ and Σ as our independent variables. Then, the thermodynamic potential can be written as Ω(μ, ∆(μ); Σ) = A(μ, ∆(μ)) + Σ 2 /U , where the function A(μ, ∆(μ)) has no explicit dependence on Σ. The gap ∆(μ) is obtained from the saddle point Eq. (A9). To obtain the number equation and the equation for Σ we construct a function F (μ, Σ) = Ω(μ, Σ) + (μ + Σ)n. The condition for Σ is then given by The number equation reads (A24) We next switch to a canonical ensemble and for a fixed n numerically calculate A[μ, ∆(μ)] = A 0 [µ, ∆(µ)] + A g [µ, ∆(µ)]. Eq. (A24) then gives the value of the renormalized Hartree shifted chemical potentialμ for the corresponding value of n which when combined with Eq. (A23) gives the renormalized chemical potential µ without the Hartree shift. The problem with this diagrammatic approach is that it predicts an unphysical negative compressibility in the BEC limit. In Fig. (10) we have plotted µ as a function of n for U/t = 20.0. Clearly, the slope of µ versus n is negative for a large range of n indicating negative compressibility. 
However, we know that in this limit the system is a lattice Bose-gas with a hardcore repulsion coming from Pauli exclusion and a nearest neighbor repulsion proportional to t 2 /U . Hence the system is stable in the BEC limit and the negative compressibility within the diagrammatic approach is therefore an unphysical result. Note that the slope is negative upto n 0.7 indicating a negative compressibility. The range of fillings for which dn/dµ < 0 increases with U/t, so that eventually for very large couplings the system is unstable for all fillings. Appendix B: Details of large-N formalism The thermodynamic properties of the system can be obtained from the partition function in the grandcanonical ensemble Z(µ, β), where β −1 is the temperature T of the system. Indeed, Z is related to the thermodynamic potential as Ω(µ, β) = −β −1 ln Z. This partition function can be expressed as a Feynman path integral over Grassmann fieldsΨ ασ and Ψ ασ with the action in imaginary time τ (B2) The quartic fermionic interaction term in the Hamiltonian can be decoupled by introducing a Hubbard-Stratonovich field ∆(x) at each x = (x i , τ ) which couples to αΨ iα↑ (τ )Ψ iα↓ (τ ). The partition function can then be written as Z = D∆D∆ * DΨ iασ DΨ iασ exp(−S Ψ,∆ ) with a full action where we have introduced the Nambu spinors ψ † iα (τ ) = (Ψ iα↑ (τ ), Ψ iα↓ (τ )). The inverse Nambu-Gorkov Green's function G −1 ij (τ, τ ) is given by with the notation δ <i,j> = 1 only if the i and j sites are nearest neighbors and zero otherwise. The functional integral is now both Gaussian in the fermionic fields and diagonal in the flavor index α. After integrating over these Grassmann variables we get with an effective action S ∆ which only depends on the auxiliary fields ∆(x) in the form Assuming that the saddle-point auxiliary field is spaceand time-independent (i.e. ∆(x) = ∆ 0 ), the thermodynamical potential Ω is of the form Ω(µ, β) N Ω 0 = S ∆ (∆(x) = ∆ 0 )/β. Fluctuations around the saddle point yield corrections that are smaller than this term by powers of 1/N ; thus the full thermodynamic potential will be expanded in the form Saddle point approximation -Mean field theory at T=0 To find the uniform, static saddle point of the effective action S ∆ , we replace ∆(x) by the space-time independent quantity ∆ 0 . Fourier transforming all the fields to the reciprocal (momentum) lattice and Matsubara frequencies, the effective action is given by where ik n = (2n + 1)πi/β are the fermionic Matsubara frequencies. The saddle point condition is [27] dS 0 /d∆ 0 = 0, which can be rewritten as where E k = ξ 2 k + ∆ 2 0 . The thermodynamic potential in the mean-field approximation is then The mean field number equation can be obtained from Eqs. (B10, B12) must be solved self-consistently to obtain the mean field gap parameter ∆ 0 corresponding to the mean field chemical potential µ, as well as finding the chemical potential which yields the desired density n. Gaussian fluctuations at T = 0 In order to go beyond the mean field approximation we must consider perturbations of the auxiliary field ∆(x) beyond the saddle-point, in the form where the complex bosonic field η(x) describes spacetime dependent fluctuations around the uniform static value ∆ 0 . We next expand the action S ∆ in Eq. (B6) to quadratic order in η, using that the saddle point condition (B10) ensures that there is no term linear in η. Thus, the action is of the form S ∆ = N S 0 + S g + ... 
with a Gaussian piece of the form where iq l = i2πl/β are the Bose-Matsubara frequencies and the matrix elements of the inverse fluctuation propagator M are given by Here we use the standard BCS notation u 2 Writing the partition function upto Gaussian order and integrating out the Gaussian fluctuations we obtain (see Appendix C for details) the Gaussian contribution to the thermodynamic potential In a previous article [6] some of us showed that this Gaussian fluctuation contribution can be physically interpreted by analytically continuing the bosonic Matsubara frequency to the real axis iq l → z = ω + i0 + . We are thus led to the study of the analytical properties of ln Det M(q, z). The zeroes of Det M(q, z = ω 0 (q)) (which correspond to poles of the fluctuation propagator M −1 ) correspond to the frequencies ω 0 (q) of collective excitations of the system with momentum q. These excitations are the q → 0 Goldstone modes of the order parameter in the broken symmetry superfluid state. Additionally, the fluctuation propagator has branch cuts on the real axis originating at E c (q) = ± min(E k + E k+q ). These branch cuts represent the two-particle continuum of states for scattering of gapped quasiparticles. The Gaussian contribution (B18) can be then rewritten as where the last integral describes the contribution of the virtual scattering of quasiparticles with a phase shift δ(q, ω) whose particle continuum begins at E c (q) and the last term R comes from using the correct convergence factors in the calculation (see Appendix B). To illustrate this excitation spectrum we plot in Fig. 11 the two particle continuum and the collective excitations along the main diagonal q (1, 1, 1) of the Brillouin zone, at unitarity and for n = 0.5. For small q, the collective excitation spectrum is linear indicative of sound modes, eventually hitting the two-particle continuum. In the BCS limit, the contribution of the collective mode is negligible due to phase space restrictions and the two-particle continuum dominates. In the BEC limit, the two-particle continuum lies at a much higher energy scale and the low-energy excitations are entirely given by the gapless sound modes. Further, at half-filling, one would expect the collective excitation spectrum to be gapless at q = (π, π, π) indicating new Goldstone modes due to the onset of CDW order [25]. However, since we only decouple the quartic interaction in the p-p channel, we do not see the CDW order and hence there is no softening of (π, π, π) mode at half-filling within our theory. Corrections of order 1/N In order to calculate the leading order corrections to the thermodynamical quantities, such as the chemical potential in this case, we write it as the expansion Naturally, given that the gap parameter ∆ 0 is a function of µ, it will also have an expansion in powers of 1/N derived from this expansion. Indeed, Next, expanding the number equation to linear order in 1/N and remembering that in calculating derivatives with respect to µ (which we denote here as d/dµ) the parameter ∆ 0 actually changes with µ, we get where all quantities are evaluated at the mean field value µ = µ. gives (U/2) k (u 2 −v 2 ) for the second term and thus the correct form of Ω g is given by In the next appendix (Appendix D) we outline the numerical steps for evaluating Ω g . Appendix D: Numerical evaluation of Ωg The first step in the calculation of the Gaussian part of the thermodynamic potential is to solve the gap equation for a given chemical potential. 
Since we do not know the analytical form of the number equation once we include Gaussian fluctuations, we work in the grand canonical ensemble and obtain ∆(µ) from equations (B10). We next numerically compute Ω g [µ, ∆(µ)] using the formula in equation (B18). All the 3 momenta sums are over the entire Brillouin zone for a 20 × 20 × 20 lattice and have an implicit factor of total number of lattice sites in front. The Matsubara sum over the imaginary frequencies iq n is computed along the imaginary axis for each q mode. The integral in equation(B18) is split as follows: where the first integral on the left hand side is computed numerically and the second integral is evaluated analytically using the large y asymptote of the integrand. The function F (y) is given by Here one has to be careful about the integrable logdivergence at q = (0, 0, 0), y = 0 coming from Goldstone's Theorem. To take this into account we expand the integrand for q = (0, 0, 0) and small y and obtain ln(Det M(0, y)) ≈ ln(Ky 2 ), where K = a 2 + b 2 − g 2 and We note that the terms independent of y in the expressions for a, b and g cancel due to Goldstone's theorem and the term linear in y cancel due to symmetry. The integrand ln(Det M(0, y)) for q = 0 is then integrated between limits 0 and a small value of y = y s . The rest of the integral for q = 0 is evaluated numerically between y s and y c , and analytically between limits y c and ∞ using the asymptotic form F (y). with so that the inverse Green's function in (B4) now becomes As can be easily verified, this leaves the form of G −1 ij (τ, τ ) unchanged from (B4) except for the hopping term, which gains the phase difference t → t exp(±i(θ j − θ i )/2) in the first (second) diagonal element. Transforming to the reciprocal lattice, we obtain an inverse Green's function of the form (ik n + ξ k−Q/2 )e +ik l 0 + (F5) Taking the limit of small θ, we see that this corresponds to shifting the Matsubara frequencies and the energy dispersion respectively as The effective action at a fixed θ, from which the saddle point condition is derived, satisfies where we have expanded the energy dispersions to quadratic order in θ and used the fact that the shift in the Matsubara frequencies is not important once they are summed over as long as θ is small. The saddle point condition δS 0 /δ∆ 0 = 0 yields the small θ expansion ∆ 0 (µ, θ) = ∆ 0 (µ, 0) + α(µ)θ 2 , where and all quantities are evaluated at θ = 0. Mean Field Superfluid Density The mean field thermodynamic potential per flavor at T = 0 for a system with a phase twist is given by Using that the ∆ 0 dependence on θ is obtained from the saddle point condition δS 0 /δ∆ 0 = 0 we can obtain the superfluid density as Calculation of ns including Gaussian fluctuations In order to include Gaussian fluctuations, we need to calculate the Gaussian part of the action S g (∆ 0 ; θ, µ) in the presence of the phase twist θ. The inclusion of the effects of Gaussian fluctuations in the calculation of the superfluid density follows the same methodology used in section V C. The thermodynamic potential per flavor to first order in 1/N is of the form Ω(µ, θ) N = Ω 0 (µ, θ) + 1 N β Ω g (µ, θ). where again Ω g (µ, θ) = S g (∆ 0 (µ, θ); µ, θ)/β. Before we give an explicit expansion of the superfluid density in orders of 1/N on a lattice, let us go back to the continuum limit and outline the calculation of the 1/N corrections to n s in the continuum. 
This would hopefully elucidate some of the technical points differentiating a lattice calculation from the continuum. To this effect we prove that for a translationally invariant system, the relation n s = n, is respected even at the 1/N level. For an energy dispersion k = k 2 /2m, the shift in single particle energies due to the introduced twist (see second line in Eq. F7) is only a constant and can be incorporated into the chemical potential. Since, the phase twist is uniformly distributed across the system the order parameter transforms as: ∆(x) → ∆(x)e iQ.r and the Green's function transforms as where the ik n = ik n − k.Q/2m are the Doppler shifted Matsubara frequencies. Then, M 11 (iq l , q; Q) = 1/U + ikn,k G 22 (ik n − k.Q 2m , k; µ − Q 2 8m )G 11 (ik n − (k+q).Q 2m + iq l , k + q; µ − Q 2 8m ) = M 11 (iq l − q.Q 2m , q; µ − Q 2 8m ). The effects of the phase twist at T = 0 is therefore to shift the contour of integration for the Matsubara sum by an amount proportional to Q along the real axis and to shift the chemical potential µ by a constant amount Q 2 /8m. In the limit Q → 0, the shift of the contour of integration to the right keeps the Matsubara sum invariant and hence the phase twist enters the thermodynamic potential only through a shift of the chemical potential. This means that the saddle point condition in presence of the phase twist remains unchanged from the one in absence of the same. Further, Ω(Q) only contains terms in powers of Q 2 and therefore we obtain, By the same logic, 4m(∂Ω 0 /∂Q 2 ) µ,Q→0 = −(∂Ω 0 /∂µ) µ . Since, the number equation is given by ∂(Ω 0 + Ω g /N )/∂µ = −n we obtain We next consider the lattice and write the 1/N correc-tions to the thermodynamic potential as follows Ω(µ, ∆ 0 (µ, θ), θ) = Ω 0 (µ, ∆ 0 (µ, θ), θ) N Ω g (µ, ∆ 0 (µ, θ), θ) (F16) Further, µ = µ + δµ/N , which then combined with Eq. (F2) yields the following expansion for the superfluid density where d/dµ = ∂/∂µ + (∂µ/∂∆ 0 )∂/∂∆ 0 . Note the presence of an explicit ∂/∂∆ 0 derivative which was absent in the expression for n 0 s because of the saddle point condition ∂Ω 0 /∂∆ 0 = 0. Eq. F17 gives the expression for n s that was used in section VI. Appendix G: Critical temperature using Hartree shifted NSR In this appendix we shall use the Nozieres and Schmitt-Rink approach [3] for calculating the critical temperature, with the modification that the single particle Green's function G 0 now includes a Hartree shift which we call Σ. We shall work in the grand canonical ensemble at a fixed µ and hence Σ is a function of µ and T . We approach the transition from above T c and look for the divergence of the t-matrix. This gives us a relation between T c and µ Here the Hartree shift is contained in ξ k = k − µ + Σ. Since we are working in the grand canonical ensemble the filling fraction would depend on the value of µ we choose. At a fixed temperature this dependence is given through the number equation The Hartree shift, which depends on the filling fraction, is then given by Σ(µ, T ) = −n(µ, T )U/2 (G3) In order to implement the number equation (G2) we proceed as follows: For a given U and µ, we calculate T c and Σ(µ, T c ) by simultaneously solving Eqs. (G1) and (G3). With these values of T c and Σ, we evaluate Ω(µ, T = T c ). Next, keeping the temperature fixed we change µ to µ+δµ and evaluate Σ(µ + δµ, T = T c ) from Eq. (G3). This lets us evaluate Ω(µ + δµ, T = T c ). 
The number equation can then be written in the form We next give an explicit formula for the thermodynamic potential Ω = Ω 0 + Ω g . In presence of the Hartree shift, the mean field thermodynamic potential Ω 0 is given by As a check, note that setting Ω = Ω 0 in Eq. (G2) the above form for Ω 0 gives us the familiar mean field number equation n = k 2 exp(βξ k ) + 1 (G6) To obtain the thermodynamic potential upto Gaussian order for T ≥ T c , we note that u k = 1 and v k = 0 and hence M 12 = M 21 = 0. Therefore dependence of T c , the procedure is repeated for various values of µ. The results are plotted in Figure (12). We notice that there are quantitative differences between the HNSR and large-N results. At this stage, we do not understand why the T c from HNSR is lower than the T c from large-N theory.
2019-04-13T00:00:01.813Z
2011-07-19T00:00:00.000
{ "year": 2011, "sha1": "68cdc5be385e33a89ac6564194a0883230d1e99d", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "38776a6354ae48fe3177f95b8a03a974ffa1bb25", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
253605280
pes2o/s2orc
v3-fos-license
Development of a species-specific qPCR assay for the detection of invasive African sharptooth catfish (Clarias gariepinus) using environmental DNA Detection and monitoring of target species is the primary strategy in the management and control of biological invasions. Traditional methods to detect invasive species are time-consuming and cumbersome, and require trained taxonomists for the identification of aquatic species. Environmental DNA (eDNA)-based molecular methods offer an alternative, as they are quick, cost-effective and require minimal manpower. In this study, we design and optimize a reliable eDNA-based quantitative PCR assay to detect the African sharptooth catfish, a highly invasive and banned species in India. Here, we delineate the step-by-step processes involved in the design and optimization of the assay, and show its performance through field-testing in selected water bodies in and around the city of Hyderabad. The present workflow can be used to design assays to detect a wide range of aquatic species. Introduction The intricate connection of biological invasions with increasing globalization, trade, culture, and human- and climate-mediated events makes prevention and control of invasive species an exceptional challenge (Meyerson et al. 2022). With predictions that the number of alien species, as well as the intensity of biological invasions, will accelerate on most continents (Seebens et al. 2021), there is an urgent need to develop and implement effective solutions for control, monitoring and management. Such strategies are particularly required in regions noted for their exceptional biodiversity and endemism, and where threats from alien species are remarkably high (see Dawson et al. 2017). In India, a mega-diversity country, threats to biodiversity from alien invasive species are on the rise, but management and control of such species is inadequate because of insufficient research, funding and policies (Goyal et al. 2021; Mungi et al. 2019). A recent synthesis by Bang et al. (2022) revealed that the Indian economy had incurred a loss of at least US$ 127.3 billion between 1960 and 2020 due to invasive species, with an average annual cost of US$ 2.1 billion. Bang et al. (2022) further cautioned that these calculations could likely be gross underestimations, since they capture only a small fraction of the actual costs incurred. About 7.2% of global fish diversity occurs in India (Froese and Pauly 2022). Though there is no comprehensive assessment or list of alien invasive fish species in India, it has been estimated that > 600 fish species have been introduced into the country, with 55 of them having established sustainable reproductive populations (Sandilyan 2022).
Among those 55 species, the National Biodiversity Authority (NBA) has declared 14 inland and freshwater species as 'highly invasive' (Sandilyan et al. 2018), which include Clarias gariepinus (African sharptooth catfish). Further, farming and sale of this species have been banned by the Government of India since 1997 (Gopi and Radhakrishnan 2002). Clarias gariepinus is widely regarded as one of the world's most successful aquatic invaders due to its generalist (but mostly piscivorous and predatory) feeding habits, high fecundity, fast growth and eurytopic physiological traits (Booth et al. 2010). The species has a widespread presence in around 30 countries, including India, impacting native fish and other aquatic species through competition and predation. Despite its significance as an invasive alien species, only a few studies have focused on the occurrence and distribution of C. gariepinus in the country (Krishnakumar et al. 2011; Singh et al. 2013; Roshni et al. 2020), and no attempt has been made so far to map its distribution in India. Traditional methods to detect invasive fish species, such as visual observations, traps, nets, bioacoustics and the use of morphological and behavioural data, can be biased and intrusive, and require some level of taxonomic expertise (Beng and Corlett 2020). An alternative method that has been highly favoured by conservationists in recent years is the use of environmental DNA (eDNA) to detect aquatic species. eDNA, which is a complex mixture of DNA obtained from different organisms in soil, sediment, water and even air (Taberlet et al. 2012), can be exploited to detect aquatic species, including invasive species (Jo et al. 2021), with high accuracy at relatively low cost. With sequencing techniques becoming more affordable and easily available, eDNA-based molecular methods are increasingly adopted in conservation science. Given the large gaps in invasion biology research in India, detection of target species remains the crucial step in the control and management of invasive species. Here, we present the validation of a quantitative PCR-based assay utilising eDNA, rigorously designed, optimized and tested to specifically detect the invasive Clarias gariepinus. Primer designing and screening We targeted the mitochondrial cytochrome b region to design primers, and retrieved the sequences of three clariid species, Clarias gariepinus, C. dussumieri and C. batrachus, from NCBI GenBank (accession numbers NC_027661.1:14366-15503, NC_037193.1:14365-15502 and NC_023923.1:14361-15498, respectively). These sequences were aligned using Clustal Omega (https://www.ebi.ac.uk/Tools/msa/clustalo/), and the alignment file was imported into the primer designing module of ssPRIMER (https://www.mattortonapps.com/shiny/ssPRIMER), a GUI-based tool for designing species-specific primers for qPCR assays. Potential primer pairs were designed by selecting appropriate parameters (see supplementary file for the parameters and primer binding visualization). From the list of designed primers, primer pairs with a higher propensity to form primer dimers were omitted from consideration for the qPCR assay. To further screen the primers, the shortlisted primers were assessed in silico using NCBI's Primer-BLAST tool, and any primers that amplified other sympatric species were excluded. The details of the finalised primer pair are shown in Table 1.
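The web tools named above do the heavy lifting, but the kind of cross-check they perform can be illustrated with a small script. The sketch below is ours: it slides each primer along a set of cytochrome b sequences (for example, the GenBank records listed above saved as cytb.fasta) and reports the fewest mismatches found; the file name and the two primer sequences are placeholders, not the published Table 1 primers.

```python
def read_fasta(path):
    """Minimal FASTA reader returning {header: sequence}."""
    seqs, name = {}, None
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line.startswith(">"):
                name = line[1:]
                seqs[name] = []
            elif name is not None:
                seqs[name].append(line.upper())
    return {k: "".join(v) for k, v in seqs.items()}

def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    return seq.translate(str.maketrans("ACGTN", "TGCAN"))[::-1]

def min_mismatches(primer, seq):
    """Fewest mismatches of the primer against any same-length window of seq."""
    best = len(primer)
    for i in range(len(seq) - len(primer) + 1):
        best = min(best, sum(a != b for a, b in zip(primer, seq[i:i + len(primer)])))
    return best

if __name__ == "__main__":
    # Placeholder 20-mers -- NOT the published primers (see Table 1 of the paper).
    FWD = "ACCTTCACGCAGGAATTCAA"
    REV = "TGGTGGAGTTTGCATGTGTA"
    for name, seq in read_fasta("cytb.fasta").items():
        fwd_mm = min_mismatches(FWD, seq)
        rev_mm = min_mismatches(REV, revcomp(seq))
        print(f"{name}: forward mismatches = {fwd_mm}, reverse mismatches = {rev_mm}")
```

In such a check the target species should return zero mismatches for both primers, while non-target species should show several mismatches, ideally near the 3' ends.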
In vitro testing of primers Since DNA sequence databases do not contain sequences of all extant species, in silico analysis alone cannot establish the specificity of the screened primer pair. Hence, we tested the primers for specificity in vitro against genomic DNA extractions of C. gariepinus and several non-target species by performing a PCR assay. The non-target species selected for this assay comprised phylogenetically closely related sympatric species within the genus Clarias (C. dussumieri, C. magur and C. batrachus), sympatric species within Siluriformes (Glyptothorax gracilis, Pangasianodon hypophthalmus, Horabagrus brachysoma and Plotosus canius) and a few distantly related sympatric species from orders other than Siluriformes (Labeo rohita, Labeo catla, Cyprinus carpio, Oreochromis niloticus and Tenualosa ilisha). The PCR reactions had a total volume of 10 µl and included 1 µl of dNTP mix (10 mM each dNTP), 1 µl of 10X PCR buffer, 0.05 µl of Taq DNA polymerase (Takara, India), 0.2 µl of forward primer (10 µM), 0.2 µl of reverse primer (10 µM), 6.55 µl of nuclease-free water and 1 µl of template DNA. The thermocycling program consisted of an activation step at 94 °C for 5 min; 35 cycles of 94 °C for 30 s, 58 °C for 30 s and 72 °C for 30 s; and a final extension step at 72 °C for 5 min. The PCR products were then visualised on a 2% agarose gel.

qPCR assay optimization To determine the efficiency of the primers and the linear range of the assay, a standard curve was generated using dilutions of standards, i.e., known copies of amplicons comprising the target sequence. The standards were prepared in the laboratory by amplifying the target region in bulk PCR reactions (five 20 µl reactions with the same components and conditions as above). After the PCR products were visualised on a 2% agarose gel to confirm amplification, they were combined and purified using the NucleoSpin® Gel and PCR Clean-up kit (Macherey-Nagel, Germany) following the manufacturer's instructions. The purified product was then quantified using a NanoDrop spectrophotometer. From the known concentration of the PCR product (ng/µl) and the length of the amplicon (171 bp), the number of copies of the amplicon was determined using the online calculator at http://cels.uri.edu/gsc/cndna.html. After determining the copy number, the purified PCR product was serially diluted to prepare the standards. A standard curve was generated in qPCR with dilutions of standards ranging from 10^0 copies/reaction to 10^5 copies/reaction. To determine the Limit of Detection (the lowest initial DNA concentration with a 95% detection rate) and the Limit of Quantification (the lowest initial DNA concentration quantifiable with a coefficient of variation below 35%), we performed a qPCR assay using fourfold dilutions of standards ranging from 1 copy/reaction to 1024 copies/reaction. The Limits of Detection and Quantification were calculated using the LOD/LOQ calculator (Klymus et al. 2020a). The qPCR reactions had a total volume of 10 µl and included 5 µl of TB Green® Premix Ex Taq™ II (Tli RNaseH Plus) (Takara, Japan), 0.2 µl of forward primer (10 µM), 0.2 µl of reverse primer (10 µM), 3.6 µl of nuclease-free water and 1 µl of template DNA. The qPCR program consisted of an initial denaturation step at 95 °C for 30 s; 40 cycles of 95 °C for 5 s and 58 °C for 30 s; a melt-curve step of 95 °C for 5 s followed by 60 °C for 60 s with a ramp to 95 °C; and a final cooling step at 50 °C for 30 s. All qPCR experiments were performed on a Roche LightCycler 480 II instrument.
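The conversion from a measured DNA concentration to an amplicon copy number, done above with an online calculator, is simple arithmetic. Below is a minimal sketch of that calculation, assuming the conventional average mass of 650 g/mol per base pair of double-stranded DNA; the concentration value is a placeholder, not a measurement from this study.

```python
AVOGADRO = 6.022e23   # molecules per mole
BP_MASS = 650.0       # assumed average g/mol per base pair of dsDNA

def copies_per_ul(conc_ng_per_ul: float, amplicon_bp: int) -> float:
    """Convert a dsDNA concentration (ng/ul) into amplicon copies per ul."""
    grams_per_ul = conc_ng_per_ul * 1e-9
    moles_per_ul = grams_per_ul / (amplicon_bp * BP_MASS)
    return moles_per_ul * AVOGADRO

# Placeholder concentration for the 171 bp amplicon.
stock = copies_per_ul(10.0, 171)
print(f"stock: {stock:.2e} copies/ul")  # roughly 5.4e10 copies/ul

# Fold-dilution needed so that 1 ul of template delivers the top
# standard of 1e5 copies/reaction.
print(f"dilute {stock / 1e5:.2e}-fold for the top standard")
```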
Sampling for field testing To test the performance of the assay under field conditions, we randomly selected 11 lakes in and around Hyderabad, Telangana State, India, for the pilot study (Suppl. Figure 2). Two litres of water were collected in triplicate from each site at the shore, without disturbing the sediment, during January 2021, and filtered in the laboratory on the same day. From each sample, 250 ml of water was filtered using disposable 50 ml syringes through a mixed cellulose ester membrane, 47 mm in diameter with a 0.45 µm pore size (Merck Life Science Pvt. Ltd.). After filtration, the filter paper was cut into two halves, with one half used for DNA isolation and the other stored at −30 °C. Besides the 11 lakes, we also included a sample from a pond located inside the Nehru Zoological Park in Hyderabad as the positive control, where the presence of C. gariepinus had been visually confirmed. Since we could not reliably identify any natural water body in and around Hyderabad where C. gariepinus is confirmed to be absent, we included a sample from our laboratory aquarium as the environmental negative control. Between filtration of each sample, the filter assemblies were bleached with 4% sodium hypochlorite solution to prevent cross-contamination between samples. eDNA was extracted from the filters by the standard phenol-chloroform-isoamyl alcohol method. The eDNA filter was placed in a 2 ml microcentrifuge tube; 1 ml of tissue lysis buffer (pH 8.0), 100 µl of SDS (20%) and 20 µl of Proteinase K (20 mg/ml) were added, and the tube was vortexed for 1 h. The tube was then placed on a rotating wheel at 56 °C for 2 h for lysis. The filter paper was removed, 700 µl of phenol:chloroform:isoamyl alcohol mixture (25:24:1) was added to the aqueous content, and the tube was kept on a rotating wheel for 10 min for thorough mixing. The contents were then centrifuged at 10,000 rcf and the aqueous layer was transferred to a new centrifuge tube. Another 700 µl of phenol:chloroform:isoamyl alcohol mixture was added to the transferred aqueous layer, and the mixing and centrifugation steps were repeated. The aqueous layer was then transferred to a new centrifuge tube, 700 µl of chloroform:isoamyl alcohol mixture (24:1) was added, and the mixing and centrifugation steps were repeated once more. The aqueous layer was then carefully transferred to a new 1.5 ml centrifuge tube, and 50 µl of 5 M NaCl and 700 µl of chilled isopropanol were added. The contents were mixed well by gently inverting the tubes and stored at 4 °C for 2 h for precipitation. The tubes were centrifuged at 10,000 rcf for 30 min and the supernatant was discarded without disturbing the pellet. Then, 500 µl of 70% molecular-grade ethanol was added to the pellet, the tube was centrifuged at 10,000 rcf for 10 min, and the ethanol was discarded without disturbing the pellet. The ethanol wash step was repeated with 100% molecular-grade ethanol. After the ethanol was discarded, the pellet was air-dried until all traces of ethanol had evaporated. Finally, 100 µl of 1X TE buffer was added to dissolve the pellet by placing the tubes in a dry bath at 56 °C for 30 min, and the DNA was stored at 4 °C for further analysis. After eDNA extraction, the DNA concentrations of all samples were adjusted to 20 ng/µl for the subsequent qPCR assay; no adjustments were made for samples with concentrations below 20 ng/µl.
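Normalizing extracts to a common input concentration is a C1·V1 = C2·V2 calculation. The sketch below illustrates it with placeholder extract concentrations; the 50 µl final volume is an arbitrary choice for the example, not a value specified in the protocol.

```python
# C1*V1 = C2*V2 normalization of eDNA extracts to 20 ng/ul before qPCR.
TARGET = 20.0  # ng/ul

def dilution(sample_conc: float, final_vol: float = 50.0):
    """Return (ul of extract, ul of diluent) to reach TARGET ng/ul;
    extracts already below TARGET are used undiluted, as in the protocol."""
    if sample_conc <= TARGET:
        return final_vol, 0.0
    v_extract = TARGET * final_vol / sample_conc
    return v_extract, final_vol - v_extract

for conc in (85.4, 32.0, 12.7):  # placeholder concentrations, ng/ul
    v1, v2 = dilution(conc)
    print(f"{conc:5.1f} ng/ul -> {v1:.1f} ul extract + {v2:.1f} ul buffer")
```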
qPCR detection of eDNA Each eDNA sample was loaded in three technical replicates, along with negative controls, and the assay was run on a Roche LightCycler 480 II. The copy numbers of the target gene in each eDNA sample were calculated from the standard curve generated during qPCR assay optimization. For reactions in which primer dimers were observed in the melt-curve analysis, the assay was repeated for the respective samples. The qPCR assays did not involve any additional probes, since the designed primers were highly specific to the target species. All qPCR assays were performed in a separate laboratory space, on a different floor/section of the building dedicated to qPCR experiments, to avoid contamination. The qPCR products were purified using the NucleoSpin® Gel and PCR Clean-up kit (Macherey-Nagel, Germany) and sequenced on an ABI 3730XL DNA sequencer using the BigDye Terminator (version 3.1) Cycle Sequencing Kit and POP-7 polymer separation matrix (Applied Biosystems, Inc.). After trimming the sequences in Chromas software (version 2.6.6), with sequence lengths ranging from 96 to 114 bp, the sequences were analysed with NCBI's nucleotide BLAST tool to verify the species identity of the amplified product.

Selected primer pair After rigorous screening of the designed primers and their in silico analysis, we finalised a primer pair (Table 1) for the study.

In vitro primer specificity assay Figure 1 shows the PCR products of tissue DNA samples from the target and non-target species amplified with the selected primer pair. A crisp band was observed only in the PCR product of the C. gariepinus DNA sample, at the expected size of 171 bp, while no bands were observed for the closely related and other non-target species.
Fig. 1. In vitro PCR validation of primers against the target species C. gariepinus and 12 additional non-target species.

qPCR assay standardization To estimate the absolute copy number of the target gene in eDNA samples, we generated a standard curve with an R² value of 0.9973, an efficiency of 96.5% and a y-intercept (the predicted Cp of a reaction with 1 copy of the target sequence) of 34.76 cycles (Fig. 2). Through the LOD/LOQ assay, the limit of detection was found to be four copies and the limit of quantification nine copies.

eDNA detection in representative samples Of the 12 water samples, 11 produced amplification, including the positive sample from the pond inside the Nehru Zoological Park (Fig. 3). Only one sample (site KC) was negative. The copy numbers calculated for all positive samples using the standard curve were above the limits of detection and quantification, confidently indicating the presence of C. gariepinus. No amplification was observed in the environmental negative control (ENC) or in the no-template controls. When the sequences of the qPCR products were analysed with NCBI's nucleotide BLAST tool, the input sequences showed significant alignments with multiple C. gariepinus sequences, with percentage identity ranging from 93 to 100% and query coverage from 97 to 100%. We also observed possible hits with other species, namely Clarias anguillaris, Bathyclarias gigas, Bathyclarias ilesi, Bathyclarias nyasensis and Bathyclarias worthingtoni, with percentage identity ranging from 93 to 98%. Since these species are either endemic to Africa or not present in India, the identity of the analysed sequences can be confirmed as that of C. gariepinus.
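Given the reported curve parameters, converting a measured Cp value back to an absolute copy number is a one-line inversion of the standard curve. The sketch below uses the parameters reported above; the slope is derived from the 96.5% efficiency, and the example Cp is a placeholder, not a value from the field samples.

```python
import math

INTERCEPT = 34.76   # predicted Cp for 1 copy/reaction (reported y-intercept)
EFFICIENCY = 0.965  # reported amplification efficiency (96.5%)

# Slope of Cp vs log10(copies): efficiency E = 10**(-1/slope) - 1
SLOPE = -1.0 / math.log10(1.0 + EFFICIENCY)  # about -3.41 cycles per decade

def copies_from_cp(cp: float) -> float:
    """Invert the standard curve Cp = SLOPE*log10(copies) + INTERCEPT."""
    return 10 ** ((cp - INTERCEPT) / SLOPE)

cp = 28.5  # placeholder Cp from a hypothetical field sample
est = copies_from_cp(cp)
print(f"Cp={cp} -> ~{est:.0f} copies/reaction "
      f"({'above' if est >= 9 else 'below'} the 9-copy LOQ)")
```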
Discussion While real-time quantitative PCR assays have been used in various fields involving the detection and quantification of specific nucleotide sequences (Kubista et al. 2006), their potential as a tool for environmental DNA studies has only recently begun to emerge. In this study, we designed a cost-effective eDNA-based qPCR assay for the detection of the African sharptooth catfish in natural aquatic ecosystems. Before embarking on a large-scale eDNA-based study to map the distribution of any target species, a pilot study is recommended to standardize the design, including the development and validation of the assay, as well as considerations for contamination and suitable analysis methods (Goldberg et al. 2016). Such a pilot study also enables reoptimization and validation of the assay when it is applied in different geographical regions. Mitochondrial gene sequences are preferred as target sequences for eDNA studies because their high copy number in cells increases the chances of detection (Rees et al. 2014), despite their inability to distinguish hybrids (Evans and Lamberti 2018). In addition, incorporating assessments of specificity (i.e., detection of only the target species) and sensitivity (i.e., detection of target DNA at low quantities) is vital to making the assay more reliable (Klymus et al. 2020b). While in silico and in vitro validation of primers against non-target species establishes the specificity of the assay, the Limit of Detection (LOD) and Limit of Quantification (LOQ) assessments establish its sensitivity. To ensure that the primers are specific to the target species, it is imperative to include phylogenetically closely related species and distantly related sympatric species in the in vitro specificity assay. In addition, verification of positive detections by sequencing the PCR products adds another layer of assay integrity. Keskin (2014) and Elberri et al. (2020) have previously demonstrated the value of qPCR-based eDNA studies for detecting C. gariepinus. However, the primers used in their studies also amplified other Indian congeneric Clarias species (e.g., C. magur, C. dussumieri). Hence, our assay was optimized to detect C. gariepinus in India with high specificity, by including all native species of the genus Clarias and selected species of other closely related families in the in vitro validation. For future studies outside India, we suggest revalidating the specificity of our assay by including other co-occurring closely related species in the geographical range of interest. Since our assay does not include any additional probes, the cost involved is reduced, and the assay can be used by any laboratory equipped with a basic qPCR machine. The goal of developing, optimizing and testing a species-specific qPCR assay for eDNA-based studies with stringent quality control measures is to detect the target species reliably. This pilot study will serve as a foundation to map the distribution of invasive C. gariepinus, and also as a useful tool to inform management authorities for timely control and regular monitoring of this species. Finally, the workflow employed in this study can serve as a template to design and optimize eDNA-based assays to detect other invasive and/or threatened species for improved aquatic management and conservation.
2022-11-18T16:05:32.105Z
2022-11-16T00:00:00.000
{ "year": 2022, "sha1": "82628a9b7e7dbd9de38a02203cc1714ab0cea1ea", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-1850303/latest.pdf", "oa_status": "GREEN", "pdf_src": "Springer", "pdf_hash": "b8bfb4dd810407fa22de44ce5fb0fe95f6ef8abb", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
4819587
pes2o/s2orc
v3-fos-license
Side Population Cell Level in Human Breast Cancer and Factors Related to Disease-free Survival
CG Jin, TN Zou, J Li, XQ Chen, X Liu, YY Wang, X Wang, YH Che, XC Wang, Hutcha Sriplung

While overall cancer incidence has been increasing in China in recent years, breast cancer has become the most common cancer in females. The relative survival rate in Chinese patients with breast cancer was 78.7% in the period 1992-1995 (Sankaranarayanan et al., 2011). The majority of breast cancer deaths occur as a result of recurrence or metastasis of the disease rather than from the effects of the primary tumor, and the cancer stem cell (CSC) plays an important role in metastatic behavior (Croker et al., 2009). According to stem cell theory, the most important and useful property of stem cells is self-renewal. This is a property common to both stem cells and cancer cells, since tumours often originate from the transformation of normal stem cells; thus, similar signaling pathways may regulate self-renewal in both stem cells and cancer cells. Most cancer cells are considered to be differentiated from CSCs, rare cells with self-renewal ability that drive tumorigenesis (Reya et al., 2001). Studies have identified tumorigenic cells with stem/progenitor cell properties in human breast cancers (Ponti et al., 2005). Several approaches have been proposed for the identification and isolation of CSCs. In breast cancers, cell surface markers, such as CD24 and CD44, have proved useful for the isolation of subsets enriched for breast CSCs (Al-Hajj et al., 2003; Visvader, 2008). In 1996, Goodell et al. discovered and isolated a special population of cells while staining murine bone marrow cells with the vital dye Hoechst 33342, and named them side population (SP) cells (Goodell et al., 1996). These cells share characteristics of CSCs, are enriched in tumor-initiating capacity and express stem-like genes (Zhang et al., 2012). SP cells have since been identified in both mouse and human mammary gland tissue (Alvi et al., 2003; Clayton et al., 2004). The cells have been identified in a variety of normal tissues and cancers and have been shown to exhibit stem cell-like properties, being capable of self-renewal (Wu and Alman, 2008; Nakanishi et al., 2010). SP cells have also been found to be chemoresistant (Song et al., 2010; Yang et al., 2010) and radioresistant (Woodward et al., 2007), and to have increased invasive potential in vitro (Fuchs et al., 2009) compared with non-side population (NSP) cells. In vivo studies have also demonstrated that, in most cancer cell lines, SP cells are more tumourigenic (Yin et al., 2008; Chen et al., 2009; Song et al., 2010) and have greater metastatic potential (Chen et al., 2009; Nishii et al., 2009) than their NSP counterparts (Mitsutake et al., 2007). Many factors have shown potential to become relevant predictive factors in recent years (Cianfrocca and Goldstein, 2004). Prognostic and predictive factors for breast cancer include classical clinicopathological features derived from breast cancer samples, including stage, tumor size, histological subtype and grade, lymph node metastases (Rosa Mendoza et al., 2013; Nogami et al., 2014), type of tumor, lymphatic and vascular invasion (Jatoi et al., 1999), and hormone receptor status (Weigel, 2010).
Early detection of cancer and improvements in cancer treatment have led to better outcomes; however, the ability to prolong cancer survival is still limited. It is important to evaluate the potential role of SP cells in tumor metastasis and recurrence in patients with breast cancer. This study aimed to investigate the proportion of SP cells, the factors associated with their presence, and the effect of SP cells in human breast cancer on the prognosis of patients with breast cancer.

Study subjects Breast cancer patients were diagnosed according to the National Comprehensive Cancer Network (NCCN) Clinical Practice Guidelines (Sun et al., 2011). Those with chronic wasting or infectious diseases, or with multiple primary malignancies, were excluded. Based on the criteria above, breast cancer patients diagnosed from January 1, 2006 to December 31, 2007, followed for a maximum of 7 years and covering both those who developed recurrence or metastasis and those without these outcomes after completing conventional treatment, were consecutively recruited into this study at Yunnan Tumor Hospital. Human breast cancer cells were obtained from fresh tumor tissue of the cancer patients. The cells were labeled with the nucleic acid dye Hoechst 33342, with or without verapamil, and flow cytometry (FCM) was employed to isolate the SP cells. The protocol of this study was approved by the Ethical Committee of Yunnan Tumor Hospital. All eligible participants were informed of the study protocol, and informed consent was signed before admission into the study.

Cell culture and flow cytometry analysis Cells from each patient were cultured in RPMI-1640 medium with 10% FBS (Gibco) in a humidified Forma 311 CO2 incubator (Thermo Fisher Scientific, USA) at 37 °C with 5% CO2. Cells in the logarithmic growth phase were selected, removed from the culture dish with trypsin and EDTA, pelleted by centrifugation and washed with PBS. The cell suspension was diluted to 10^6/ml with 10% FBS RPMI-1640. The cells were then labeled with Hoechst 33342 (Sigma-Aldrich) at a concentration of 5 μg/ml. The labeled cells were incubated for 120 minutes at 37 °C, either alone or with 100 μl of verapamil hydrochloride (Sigma-Aldrich), and were swirled every 15 minutes. Two hours later, PBS at 4 °C was used to terminate the labeling. After centrifugation in a desk centrifuge (Heraeus, Germany), the cells were washed with PBS containing 2% fetal bovine serum (FBS; Gibco), maintained at 4 °C in 2% FBS PBS and processed for flow cytometry analysis. Cells were counterstained with 2 μg/ml propidium iodide (PI; Sigma-Aldrich) to identify dead cells. Then, 1 × 10^6 viable cells were analyzed and sorted on an Epics Altra flow cytometer (Beckman Coulter, USA). The Hoechst dye was excited at 407 nm UV (blue, 450/40; red, 695/40). PI was excited at 488 nm (red, 575/26).

Clinical data collection All recruited subjects were recorded for basic demographics, such as age at diagnosis, and basic clinical and pathological information, such as tumor size, clinical stage and pathological findings. Records of this information were kept independent of the cell culture and cytometry results; the two datasets were later merged by patient identification number. Recurrence or metastasis status, the outcome of this follow-up study, was recorded at scheduled revisits.

Data analysis Descriptive statistical methods were adopted to present the distribution of the data.
The t-test was used to compare the distribution of SP cells between age groups (cut point 60 years) and across the six tumor characteristics. Kaplan-Meier analysis was used to describe the distribution of DFS, and the log-rank test was adopted to detect differences in median survival time between groups. Cox regression models were employed to explore risk factors for disease-free survival. Modeling used manual backward exclusion, sequentially removing variables not contributing significantly to the fit of the model based on the likelihood ratio of successive models. SP status and the clinical and pathological variables were all included in the first full model. All tests were two-sided, and a p-value of less than 0.05 was considered statistically significant.

Results A total of 122 patients had recurrence or metastasis and 524 patients had no evidence of the outcome of interest. Among these 122 patients, the proportion of SP cells ranged from 1.1% to 6.2%, with a mean of 3.2% and a standard deviation of 1.2%. Table 1 shows the percentage of SP cells grouped by age and tumor characteristics. The distribution of SP cells differed significantly across the six tumor characteristics; higher SP cell proportions were more common in more severe breast cancer than in less severe disease. Disease-free survival (DFS) was compared between age groups (<60 vs ≥60 years), across tumor characteristics, and between SP cell levels below 3.6% versus 3.6% or above. Statistically significant differences were found for the tumor characteristics as well as the SP cell levels (Table 2). A significant difference in cumulative survival between the two SP cell level categories was detected by the log-rank test (p<0.001) among breast cancer patients (Figure 1). Four of the eight variables were identified as significant independent risk factors for disease-free survival in the multivariate Cox regression model: axillary lymph node metastasis (ALNM), invasiveness of the tumor, tumor volume doubling time (TVDT) and SP cell level (Table 3).

Discussion In this study, SP cells were identified by flow cytometric analysis using Hoechst 33342 dye efflux, and the mean SP cell proportion in breast cancer tissue was 3.25 ± 1.21%. Previous studies have reported percentages of SP cells ranging from 0.2% to 7.5% (Patrawala et al., 2005; Engelmann et al., 2008; Han, 2009). Such variation could be due to differences in the instruments and reagents used across studies. To better understand the relationship between the percentage of SP cells and the prognosis of breast cancer patients, and since no standard cutoff had been established, we selected a cutoff of 3.6% in this study, based on the mean SP cell percentage. Controlling for age and the other tumor pathological characteristics, an SP cell level of 3.6% or more was an independent risk factor for DFS among breast cancer patients. In the univariate analysis of this study, SP cell level and the pathological factors tumor size, ALNM, invasiveness of the tumor, stage, degree of differentiation and TVDT were statistically significantly related to DFS. However, only SP cell level, invasiveness, ALNM and TVDT remained independent factors significantly correlated with prognosis after all variables were entered into the Cox model to adjust for their inter-dependency.
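A minimal sketch (not the authors' code) of this kind of survival workflow using the Python lifelines library is shown below; the column names and the tiny toy data frame are placeholders standing in for the study's clinical variables, and backward elimination is only described in the comments.

```python
# Sketch of the Kaplan-Meier / log-rank / Cox workflow described above.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Placeholder data: follow-up in months, event (recurrence/metastasis = 1),
# SP level group (>= 3.6%) and one example covariate.
df = pd.DataFrame({
    "months":  [12, 34, 60, 45, 22, 80, 15, 70],
    "event":   [1, 0, 0, 1, 1, 0, 1, 0],
    "sp_high": [1, 0, 1, 1, 0, 0, 1, 1],
    "alnm":    [1, 0, 1, 0, 1, 0, 1, 0],
})

# Kaplan-Meier curves and log-rank test between SP level groups.
hi, lo = df[df.sp_high == 1], df[df.sp_high == 0]
KaplanMeierFitter().fit(hi.months, hi.event, label="SP >= 3.6%")
print(logrank_test(hi.months, lo.months, hi.event, lo.event).p_value)

# Multivariate Cox model; manual backward elimination would repeatedly drop
# the least significant covariate and refit until all p-values are < 0.05.
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
print(cph.summary[["exp(coef)", "p"]])  # hazard ratios and p-values
```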
The study clearly showed a higher hazard ratio of 1.75 (Table 3) among those with a higher level of SP cells. This phenomenon can be explained in part by the nature of CSCs, among which SP cells are counted. CSCs are considered responsible for tumor initiation and progression. Residual CSCs may be involved in the process of tumor metastasis and tumor progression, which results from the metastatic spread of these cells (Campbell, 2007). Although chemotherapy and radiotherapy kill most cells in a tumour, they are believed to leave tumour stem cells behind, as these are resistant to such treatment modalities. While those treatments can eliminate large numbers of cancer cells and cause observable tumor shrinkage, their effect on CSCs may be minimal (Mimeault et al., 2007; Raaijmakers, 2007; Sakariassen et al., 2007; Zeppernick et al., 2008). CSCs might also constitute one of the important mechanisms of multidrug resistance that leads to tumor recurrence after chemotherapy. SP cells possessing stem cell-like properties have been found to be associated with a poorer prognosis (Van den Broeck et al., 2013). Many patients with cancer, with or without cell-killing therapies, progress and/or develop metastases. Although there is no literature investigating SP cells and metastatic probability in breast cancer, studies in other cancers have demonstrated an important role of SP cells in metastasis (Kato et al., 2007; Nishii et al., 2009). The study of Kato and co-workers on cultured SP and NSP cells from a human endometrial cancer cell line disclosed that tumours derived from SP cells showed the characteristics of progenitor cells. The progenitor cell potential of SP cells showed long-term repopulation properties (>24 weeks), and cultured SP cells produced gland (CD9+)- and stroma (CD13+)-like cells. In contrast, NSP cells became senescent within 1-3 months (Kato et al., 2007). SP cells isolated from the pancreatic cancer cell line PANC-1 have been shown to have increased invasive potential in vitro and increased metastatic potential in vivo compared with NSP cells in a murine liver metastasis model (Kabashima et al., 2009). Nishii and colleagues showed in 2009 that SP cells from gastric cancer have a high adhesion ability to the peritoneum, related to the expression of several adhesion molecules, resulting in a greater potential for peritoneal metastases (Nishii et al., 2009). In this study, we divided SP cell levels into two groups based on the mean proportion, rather than directly contrasting SP with NSP cells. The results showed that patients with SP cell levels below 3.6% had longer median DFS than those with SP cell levels of 3.6% or more, indicating a better prognosis. This can probably be explained by the research results above, in which NSP cells showed less metastatic capacity; the finding from this study is thus consistent with those results. The significantly higher hazard ratio for metastasis and recurrence in the group of patients with SP cell levels of 3.6% or more, compared with the group below 3.6% (Table 3), is consistent with the results of previous studies, which illustrated that SP cells are an important element in breast cancer prognosis. Breast cancer SP cells have been shown in vivo to be more tumourigenic than NSP cells when tumor cells were cultured in immunodeficient mice (Patrawala et al., 2005; Steiniger et al., 2008; Yin et al., 2008). In the study of Zhou et al.
(2008), sphere cells cultured from SP cells of the breast cancer cell line MCF7 were enriched in the CSC surface markers CD44+CD24- and had higher tumorigenicity. This implies that sphere cells enrich breast cancer stem/progenitor cells and are preferentially inhibited by NF-κB pathway inhibitors compared with their non-stem cell counterparts. Studies have demonstrated that MCF7 SP cells in the mammary gland are more resistant to clinically relevant doses of radiation (Woodward et al., 2007) and show clinical resistance to chemotherapy (Steiniger et al., 2008; Yin et al., 2008) compared with the NSP cell population. These properties suggest that SP cells may also play a key role in breast cancer progression. In addition to the level of SP cells, Table 3 shows other independent prognostic factors. Jatoi et al. (1999) illustrated that DFS after relapse was poorer in node-positive cases compared with node-negative cases. The hazard ratio for patients with one to three involved nodes compared with none was 1.2 (95% CI = 0.8-1.9), and it was 2.5 (95% CI = 1.8-3.4) for four or more involved nodes. Other studies reported significantly higher HRs for DFS among patients with axillary lymph node metastasis (ALNM) than among those without (Rosa Mendoza et al., 2013; Nogami et al., 2014). As shown in Table 3, TVDT was an independent variable in the final Cox model predicting DFS. TVDT reflects the natural tumor growth rate and is an indicator of the biological malignant potential of a tumor; its evaluation provides a parameter of cancer progression. The study of Kusama et al. (1972) in 34 cases of breast cancer illustrated that the growth of secondary tumors was related to that of the primary site. Invasive status of the tumor was found to be one of the predictors of DFS in our analysis. Tumor invasiveness determines treatment choices and the response to the treatment received. Non-invasive breast cancers are limited to the basement membrane of the milk ducts or lobules (the round sacs in the breast that produce milk), while invasive cancers break through this barrier into the surrounding breast tissue and may spread to the lymphatic system and then to other parts of the body. Therefore, invasive carcinomas have poorer prognosis and survival than non-invasive tumors. In conclusion, SP cell level was demonstrated to have an independent association with tumor progression and clinical outcome after controlling for other clinical and pathological factors. Axillary lymph node status, TVDT and invasiveness of the cancer were also identified as independent predictors of the prognosis of breast cancer.
2018-04-03T05:58:16.938Z
2015-03-04T00:00:00.000
{ "year": 2015, "sha1": "14784d76f756bcd5afa98d1dcfed25cf23e6bf26", "oa_license": "CCBY", "oa_url": "http://koreascience.or.kr/article/JAKO201510534323739.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "2461ba29d1145f0e942a08dea9cb307a17ea8711", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
256863788
pes2o/s2orc
v3-fos-license
Two Faces of Catechol-O-Methyltransferase Inhibitor on One-Carbon Metabolism in Parkinson's Disease: A Meta-Analysis

Levodopa (L-dopa) and catechol-O-methyltransferase (COMT) inhibition are widely used therapeutics in Parkinson's disease (PD). Despite their therapeutic effects, it has been suggested that nutrients involved in one-carbon metabolism can be depleted by PD therapies. The aim of this meta-analysis was to investigate the impact of L-dopa and COMT inhibitors on levels of homocysteine (Hcy), vitamin B12 and folate in patients with PD. A total of 35 case-control studies from 14 different countries were selected through PubMed, MEDLINE and Google Scholar and were meta-analyzed. In the L-dopa group, the Hcy level was higher compared to the PD without L-dopa group (SMD: 5.11 μmol/L, 95% CI: 3.56 to 6.66). Moreover, vitamin B12 and folate levels in the L-dopa group were lower compared to the healthy control (SMD: −62.67 pg/mL, 95% CI: −86.53 to −38.81; SMD: −0.89 ng/mL, 95% CI: −1.44 to −0.33, respectively). The COMT inhibitor group showed lower levels of Hcy (SMD: −3.78 μmol/L, 95% CI: −5.27 to −2.29) and vitamin B12 (SMD: −51.01 pg/mL, 95% CI: −91.45 to −10.57), but higher folate levels (SMD: 1.78 ng/mL, 95% CI: −0.59 to 4.15) compared to the L-dopa group. COMT inhibitors may ameliorate L-dopa-induced hyperhomocysteinemia and folate deficiency but exacerbate vitamin B12 deficiency.

Introduction Parkinson's disease (PD) is a typical neurodegenerative disease characterized by motor dysfunction, such as tremors, rigidity and slow movements, resulting from the loss of dopaminergic neurons [1]. It is mainly caused by environmental factors. Its prevalence is approximately 1% among people over the age of 60 worldwide and is steadily increasing [2]. For patients with PD, dopamine replacement is the treatment of choice, and the most commonly used drug is levodopa (L-dopa), a dopamine precursor [1]. Because dopamine itself cannot cross the blood-brain barrier (BBB), L-dopa is administered instead [3]. However, L-dopa can easily be converted to other structures, such as 3-O-methyldopa, catalyzed by the enzyme catechol-O-methyltransferase (COMT), before it crosses the BBB or reaches the brain [4]. To prevent this undesirable conversion, L-dopa is often prescribed along with COMT inhibitors, such as entacapone [5]. Moreover, L-dopa can cause serious side effects, such as dyskinesia [6], and it can accelerate PD progression by inducing neuronal cell death through self-oxidation [7]. Furthermore, L-dopa can exacerbate elevated homocysteine (Hcy) levels in patients with PD [8]. Hcy is a thiol-containing amino acid and an intermediate product in the folate-methionine cycle [9]. Under healthy conditions, Hcy reverts to methionine in the presence of B vitamins, such as folate and vitamin B12 [10]. When folate enters the folate cycle in the form of tetrahydrofolate (THF), it is converted into 5-methyl-THF. Together with vitamin B12, 5-methyl-THF can provide its methyl group to Hcy, in a reaction catalyzed by methionine synthase, to produce methionine and, consequently, maintain a low Hcy level [10]. However, this Hcy conversion is not appropriately facilitated in PD, resulting in elevated circulating Hcy levels, which lead to cellular damage through oxidative stress and inflammation [11]. L-dopa aggravates Hcy metabolism by directly entering the folate-methionine cycle, or one-carbon metabolism [12].
Briefly, in the presence of COMT, S-adenosylmethionine provides its methyl group to L-dopa to produce S-adenosylhomocysteine (SAH) [13]. SAH is rapidly hydrolyzed to Hcy, which is elevated in patients with PD [14]. Therefore, the side effects of L-dopa in patients with PD might be closely related to the dysregulation of Hcy, vitamin B12 and folate. To optimize nutritional approaches for patients with PD, a comprehensive meta-analysis of studies worldwide comparing the circulating levels of Hcy, vitamin B12 and folate in patients with PD taking L-dopa should be performed. The aim of the present study was to assess the impact of L-dopa administration on the folate-methionine cycle based on circulating levels of Hcy, vitamin B12 and folate. The effect of the COMT inhibitor on L-dopa-induced dysregulation of the folate-methionine cycle was also investigated, as COMT inhibitors are often combined with the L-dopa treatment.

Literature Search A literature search for the current meta-analysis was performed using PubMed, MEDLINE and Google Scholar for articles published until 25 August 2022. The literature was searched using the following keywords: "Parkinson's disease (PD)," "Levodopa," "L-dopa," "Homocysteine (Hcy)," "Vitamin B12," "Folate" and "COMT inhibitor." In addition, Chinese literature was searched using the China Knowledge Resource-Integrated Database. This study was conducted based on the PRISMA 2020 Checklist.

Inclusion and Exclusion Criteria Inclusion criteria were: (1) full-text articles of randomized or non-randomized clinical trials involving humans; (2) studies on the relationship between L-dopa use and Hcy, vitamin B12 or folate in patients with PD; (3) studies comparing L-dopa with a healthy control or COMT inhibitor group; (4) articles containing data expressed as mean ± standard deviation and complete information. Exclusion criteria were: (1) reviews or articles without data, (2) studies involving patients on vitamin B supplementation and (3) duplicate publications. Study selection was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses flow diagram.

Data Extraction Data were tabulated in a spreadsheet under the following headings: first author, publication year, title and parameters analyzed. "Healthy control" was defined as a group of participants without PD. "Without L-dopa" was defined as a group of patients with L-dopa-naïve PD. "L-dopa" was defined as a group of patients with PD taking only L-dopa. "COMT inhibitor" was defined as a group of patients with PD taking both L-dopa and the COMT inhibitor together. Data are available within the selected studies and/or their Supplementary Materials as well as upon request from the authors.

Statistical Analyses Statistical analyses were performed using Review Manager (RevMan) 5.2. The degree of inconsistency within studies, or heterogeneity (I²), was interpreted based on the following reference points, together with their p-values: 0%, 25%, 50% and 75%, representing no, low, moderate and high heterogeneity, respectively [15]. When heterogeneity was low, a fixed-effects model was used; otherwise, a random-effects model was used. The standardized mean difference (SMD) was used as the effect size for Hcy, vitamin B12 and folate comparisons. Statistical significance was set at a p-value < 0.05. Each analysis was presented as a forest plot. The green squares represent the weighted mean difference in each study, and the black diamonds represent the summary of the weighted mean difference in the forest plots. Funnel plots were constructed using RevMan to assess publication bias. When the plot showed symmetrical and equal scattering, the results indicated no publication bias.
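The random-effects pooling described above can be sketched compactly with the DerSimonian-Laird method; this is a generic illustration, not RevMan's implementation, and the per-study SMDs and standard errors below are placeholders rather than values from the included trials.

```python
import numpy as np

def dersimonian_laird(smd, se):
    """Pool standardized mean differences with DerSimonian-Laird random
    effects; returns the pooled SMD, its 95% CI and the I^2 statistic."""
    smd, se = np.asarray(smd, float), np.asarray(se, float)
    w = 1.0 / se**2                         # fixed-effect weights
    fixed = np.sum(w * smd) / np.sum(w)
    q = np.sum(w * (smd - fixed) ** 2)      # Cochran's Q
    df = len(smd) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)           # between-study variance
    w_re = 1.0 / (se**2 + tau2)             # random-effects weights
    pooled = np.sum(w_re * smd) / np.sum(w_re)
    se_pooled = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled), i2

# Placeholder per-study effects as (SMD, standard error) pairs.
effects = [(5.8, 0.9), (7.1, 1.1), (6.2, 0.7), (5.0, 1.3)]
pooled, ci, i2 = dersimonian_laird(*zip(*effects))
print(f"pooled SMD = {pooled:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}, I2 = {i2:.0f}%")
```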
Search Results In the literature search, 341 clinical trials were initially screened, and duplicate articles were removed. After applying the exclusion criteria, a total of 35 clinical trials were included in the meta-analysis (Figure 1), with blood/serum Hcy, vitamin B12 and folate levels as the measured biomarkers in the selected studies. Figure 1 shows the details of the selection procedure. Table 1 summarizes the characteristics of the included clinical trials. Among the 35 clinical trials, 12 were carried out in China and Taiwan, 7 in Italy, 4 in Germany, 2 in Greece and 1 each in Australia, Poland, the Republic of Korea, Japan, Thailand, Czechia, Turkey, the United States, Slovakia and Canada. The number of participants enrolled in the selected studies ranged from 12 to 212, and participants were generally over the age of 60 years. The L-dopa dose was ≥300 mg/day in the L-dopa group. Treatment duration ranged from 3.0 to 12.4 years for L-dopa and 4.3 to 13.5 years for the COMT inhibitor.

Comparison of Hcy Levels in Blood The effect of the L-dopa treatment on blood Hcy levels was examined in 26 studies (Figure 2A). Blood Hcy levels were significantly higher in the L-dopa group than in the healthy control group (Figure 2). The overall SMD of the 26 studies was 6.43 µmol/L (95% confidence interval [CI]: 5.60 to 7.25; p < 0.00001) in random-effects models with high, significant heterogeneity (I² = 76%, p < 0.000001). Subsequently, the impact of the L-dopa treatment on blood Hcy levels in patients with PD was examined (Figure 2B). Hcy levels were significantly higher in the L-dopa group than in the without L-dopa group.
The overall SMD of the 17 studies was 5.11 µmol/L (95% CI: 3.56 to 6.66; p < 0.00001) in random-effects models with high, significant heterogeneity (I² = 81%, p < 0.00001). Moreover, the influence of the COMT inhibitor on the high blood Hcy levels induced by the L-dopa treatment in patients with PD was investigated (Figure 2C). The group treated with both L-dopa and the COMT inhibitor had lower blood Hcy levels than the group taking L-dopa alone. The overall SMD of the seven studies was −3.78 µmol/L (95% CI: −5.27 to −2.29; p < 0.00001) in random-effects models with no significant heterogeneity (I² = 25%, p = 0.24). Finally, the blood Hcy level in the L-dopa plus COMT inhibitor group was compared with that in the healthy control group (Figure 2D). Hcy levels were significantly higher in the COMT inhibitor group than in the healthy control group. The overall SMD of the six studies was 2.40 µmol/L (95% CI: 0.28 to 4.51; p = 0.03) in random-effects models with substantial heterogeneity (I² = 74%, p = 0.002).

Comparison of Vitamin B12 Levels in Blood Blood vitamin B12 levels were lower in the L-dopa group than in the healthy control group (Figure 3A). The overall SMD of the 18 studies was −62.67 pg/mL (95% CI: −86.53 to −38.81; p < 0.00001) in random-effects models with moderate, significant heterogeneity (I² = 58%, p = 0.001). In addition, blood vitamin B12 levels were even lower in the COMT inhibitor group (Figure 3B). The overall SMD of the five studies was −51.01 pg/mL (95% CI: −91.45 to −10.57; p = 0.01) in fixed-effects models with no heterogeneity (I² = 0%, p = 0.55). Moreover, blood vitamin B12 levels were still lower in the COMT inhibitor group than in the healthy control group (Figure 3C). The overall SMD of the three studies was −86.60 pg/mL (95% CI: −171.09 to −2.10; p = 0.04) in random-effects models with no significant heterogeneity (I² = 57%, p = 0.10), and larger in magnitude than the overall SMD of the 18 studies comparing the L-dopa and healthy control groups (−62.67 pg/mL).
Comparison of Folate Levels in Blood Blood folate levels were compared between the L-dopa and healthy control groups (Figure 4A). Blood folate levels were significantly lower in the L-dopa group than in the healthy control group. The overall SMD of the 18 studies was −0.89 ng/mL (95% CI: −1.44 to −0.33; p = 0.002) in random-effects models with moderate heterogeneity (I² = 62%, p = 0.00003). Subsequently, the impact of the COMT inhibitor on circulating folate levels in the L-dopa group was examined (Figure 4B). Blood folate levels did not differ significantly between the COMT inhibitor and L-dopa groups (SMD = 1.78 ng/mL, 95% CI: −0.59 to 4.15; p = 0.14) in random-effects models with no significant heterogeneity (I² = 52%, p = 0.08). Furthermore, blood folate levels were compared between the COMT inhibitor and healthy control groups (Figure 4C). Blood folate levels did not differ significantly between these groups (SMD = 1.20 ng/mL, 95% CI: −0.78 to 3.18; p = 0.24) in random-effects models with no significant heterogeneity (I² = 65%, p = 0.06).

Publication Bias Publication bias was assessed using funnel plots (Supplementary Figures S2-S4). There was no evidence of publication bias in the funnel plots of circulating Hcy, vitamin B12 or folate for the healthy control or COMT inhibitor comparisons.

Discussion In the current meta-analysis, the impact of L-dopa and the COMT inhibitor on the folate-methionine cycle in PD was evaluated based on the blood levels of Hcy, vitamin B12 and folate. Two previous meta-analyses investigated whether there is any correlation between Hcy/vitamin B12/folate levels and PD incidence [48,49].
However, one of these was confined to studies in China and did not consider the impact of L-dopa or COMT inhibitor therapy [48]. The other included a small number of studies and did not evaluate vitamin B12 or folate levels, which should be considered in the context of the folate-methionine cycle [49]. To the best of our knowledge, the current study is the first report on the effects of L-dopa and its concomitant COMT inhibitor on the folate-methionine cycle based on the blood levels of Hcy, vitamin B12 and folate, integrating numerous reports worldwide. Hcy is closely associated with the promotion of PD pathologies, and its elevation should be considered a main risk factor for PD [50]. In this meta-analysis, the circulating Hcy concentrations in various subgroups of patients with PD were investigated (Figure 2). The blood Hcy level was elevated in the L-dopa group compared to both the healthy control and without L-dopa groups. These results are similar to those of a previous meta-analysis, showing that the same conclusions hold even as the number of studies increases [48]. In addition, the normal range of blood Hcy is 5-15 µmol/L, and circulating Hcy concentrations ≥15 µmol/L are considered to indicate hyperhomocysteinemia [10]. In this study, 25 of the 26 included studies showed a mean blood Hcy concentration in the L-dopa group of >15 µmol/L (Figure 2A), while the mean blood Hcy levels in the without L-dopa group were below this borderline in 15 of the 17 included studies (Figure 2B). High Hcy causes various pathogenic outcomes, such as oxidative stress and inflammation, which can accelerate neuronal cell death [10]. In addition, many studies have reported that Hcy is strongly linked to cardiovascular pathology, causing problems in the cardiovascular endothelium and smooth muscle cells [51][52][53]. For these reasons, the current results imply that the various side effects reported in PD patients on prolonged L-dopa therapy might be related to the elevation of circulating Hcy and its pathogenic consequences. In contrast, the COMT inhibitor group showed significantly lower Hcy levels than the L-dopa group (Figure 2C). This might result from the COMT inhibitor suppressing the overproduction of SAH by COMT after L-dopa intake in one-carbon metabolism [12]. Therefore, the current study suggests that COMT inhibitors may not only support the delivery of L-dopa but also reduce potential side effects, such as hyperhomocysteinemia, caused by the L-dopa intervention. In addition to Hcy, vitamin B12 and folate act as methyl donors in one-carbon metabolism, or the folate-methionine cycle [14].
Thus, deficiencies in folate and vitamin B12 can interfere with DNA synthesis or replication, subsequently contributing to cell death or the development of diseases such as cancer [54][55][56]. In a previous study, folic acid deficiency, with the resulting accumulation of Hcy, inhibited DNA repair in hippocampal neurons [57]. Similarly, low serum vitamin B12 concentrations are related to brain damage [58]. Thus, folate and vitamin B12 are important nutrients that sustain a properly functioning folate-methionine cycle and protect against neuronal damage. In this meta-analysis, both blood vitamin B12 and folate levels were significantly reduced in the L-dopa group compared with the healthy control group. In particular, circulating vitamin B12 levels were further reduced in the COMT inhibitor group compared with the L-dopa group (Figure 3B), whereas blood folate concentrations did not change in the COMT inhibitor group and rather showed an increasing tendency (Figure 4B). A previous meta-analysis showed that patients with PD had lower circulating vitamin B12 and folate levels than healthy controls [48]. Therefore, the current data imply that L-dopa and/or COMT inhibitor therapy might worsen the vitamin B imbalance in patients with PD. To understand the dual effect of the COMT inhibitor on the folate-methionine cycle, the normal range of each factor examined in this study should be reviewed. The normal range of blood concentration is 5-15 µmol/L for Hcy [10], 200-900 pg/mL for vitamin B12 [59] and 3.0 ng/mL or greater for folate (the threshold for sufficient folate remains controversial) [60]. Accordingly, the levels of vitamin B12 and folate were within the normal range even in the COMT inhibitor group. In the case of Hcy, however, the mean level in the L-dopa group exceeded the 15 µmol/L borderline for hyperhomocysteinemia. Thus, the current data suggest that the COMT inhibitor reduces the side effects of L-dopa related to hyperhomocysteinemia. In addition to the normal range of each factor, the organs involved in the folate-methionine cycle should be considered. The main organ involved in one-carbon metabolism is the liver. Vitamin B12 is mostly stored in the liver until the body requires it, whereas roughly half of the body's folate is stored in the liver and the rest in blood and tissues. Alterations in the blood folate concentration might therefore not reflect metabolic processes in the same way as alterations in vitamin B12 levels, which may explain why circulating vitamin B12 and folate showed different patterns after the drug treatment. Finally, the disease duration of each group should be considered. In this analysis, the COMT inhibitor group had a longer disease duration than the L-dopa group (Supplementary Figure S1). This may be because, in general, the longer the disease duration, the greater the chance of taking a combination of L-dopa and the COMT inhibitor. In addition, patients with a longer disease duration might undergo further PD progression, which may affect the status of vitamin B12 and folate, as they play roles in the folate-methionine cycle, erythropoiesis and iron homeostasis [61]. Therefore, the difference in disease duration could also have contributed to the results showing the dual nature of the COMT inhibitor. Nevertheless, the current data provide evidence that L-dopa or COMT inhibitor administration can affect vitamin B12, folate and Hcy levels to a certain degree. Collectively, this study demonstrated hyperhomocysteinemia and lower blood levels of vitamin B12 and folate in PD patients receiving L-dopa.
An increase in blood Hcy levels induces neuronal cell death, and vitamin B12 and folate deficiencies suppress their neuroprotective effects [11]. These results are in line with previous reports showing that L-dopa can damage dopaminergic neurons, thereby accelerating PD progression [7]. Thus, possible causes of disease exacerbation from L-dopa therapy would be an elevation in Hcy together with a reduction in vitamin B12 and folate. Moreover, this meta-analysis elucidated that COMT inhibitors might be beneficial in ameliorating the hyperhomocysteinemia and folate deficiency induced by the L-dopa treatment. However, COMT inhibitors can exacerbate vitamin B12 deficiency in patients with PD on L-dopa. In conclusion, the current data suggest that a COMT inhibitor with vitamin B supplementation could reduce L-dopa-induced PD deterioration. The present study had the strengths of integrating global data and considering concomitant L-dopa therapy. However, it also had some limitations. First, the included clinical trials comprised both randomized and non-randomized studies, such as cross-sectional studies. Second, no quality assessment was performed. Third, several analyses showed heterogeneity (Figures 2A,B,D, 3A and 4A); the observed heterogeneity disappeared when studies were stratified by geographic region, as previously described [62]. The North American region showed a markedly higher incidence compared with other regions, whereas some countries, such as Italy, showed decreasing trends in the estimated annual percentage change in PD from 1990 to 2019 [63]. Fourth, groups taking vitamin B supplements were not considered in the current study; most of the included studies excluded participants taking vitamin B supplements, which may strengthen or contradict the main conclusion of this meta-analysis. Lastly, because analyses of vitamin B12 and folate were less numerous than those of Hcy, the difference between the without L-dopa and L-dopa groups could not be determined for these vitamins, and the sample sizes of the available studies on vitamin B12 and folate were small. Therefore, more clinical studies are required to clarify the impact of L-dopa and the COMT inhibitor on the folate-methionine cycle based on various factors, such as geographical regions and biodiversity.

Supplementary Materials: The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/nu15040901/s1: Figure S1: The difference in mean disease duration between the L-dopa group and the COMT inhibitor group; Figure S2: Funnel plot of Hcy-related studies; Figure S3: Funnel plot of vitamin B12-related studies; Figure S4: Funnel plot of folate-related studies; Figure S5: PRISMA 2020 Checklist.

Data Availability Statement: The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest: The authors declare no conflict of interest.
Model of separated form factors for unilamellar vesicles

A new model of separated form factors is proposed for the evaluation of small-angle neutron scattering curves from large unilamellar vesicles. The validity of the model was checked by comparison with the hollow sphere model. The separated form factor and hollow sphere models give reasonable agreement in the evaluation of vesicle parameters.

Information about the internal membrane structure is mainly derived from X-ray diffraction experiments on multilamellar vesicles (MLVs) [1]. A single lipid bilayer possesses the structure typical of most biological membranes. Unilamellar vesicles (ULVs) are a more biologically appealing model of the lipid membrane than multilamellar vesicles. Moreover, vesicles are used as delivery agents of drugs, genetic materials and enzymes through living cell membranes and other hydrophobic barriers [2,3]. To date, the problem of accurate and simultaneous determination of the vesicle radius, polydispersity, and internal membrane structure has not been solved in SAXS and SANS experiments [5-9]. The information about internal membrane structure derived from SANS experiments is based on the strip-function model of the neutron scattering length density across the bilayer, ρ(x) [4], and the application of the hollow sphere model for the vesicle [5,7]. An important problem is to develop a new approach to the evaluation of SANS and SAXS experimental curves in which ρ(x) can be described by any analytical or numerical function. The purpose of the present work is to propose and verify new analytical equations for the calculation of SANS curves from phospholipid vesicles.

Experiment and Model

Dipalmitoylphosphatidylcholine (DPPC) was purchased from Sigma (France), and D2O from Isotop (St. Petersburg, Russia). Large unilamellar vesicles were prepared by extrusion of MLVs through a polycarbonate filter with a pore diameter of 500 Å as described in Ref. [7]. The spectra from unilamellar DPPC vesicles were collected at the YuMO small-angle spectrometer of the IBR-2 reactor (Dubna, Russia) at T = 20 °C [10]. The incoherent background was subtracted from the normalized cross section of the vesicles as described in Ref. [5]. The DPPC concentration in the sample was 1% (w/w).

The macroscopic cross section of a monodisperse population of vesicles is [11]

    dΣ/dΩ|_mon(q) = n · A²(q) · S(q),    (1)

where n is the number of vesicles per unit volume, A(q) is the scattering amplitude of a vesicle, and S(q) is the vesicle structure factor; S(q) ≈ 1 for a 1% (w/w) DPPC concentration [12]. The scattering amplitude A(q) for vesicles with spherical symmetry is [11]

    A(q) = 4π ∫ ρ(r) [sin(qr)/(qr)] r² dr,    (2)

where ρ(r) is the neutron contrast between the bilayer and the solvent. Integration of (2) over a hollow sphere with ρ(x) ≡ Δρ leads to the hollow sphere (HS) model of the vesicle [11]

    dΣ/dΩ|_mon(q) = n (Δρ)² (4π/q³)² (A₂ − A₁)²,    (3)

where A_i = sin(qR_i) − (qR_i) cos(qR_i), R₁ is the inner radius of the hollow sphere, R₂ = R₁ + d is the outer radius, and d is the membrane thickness. For a bilayer with central symmetry, (2) can be rewritten as

    A(q) = (4π/q) ∫_{−d/2}^{+d/2} ρ(x) (R + x) sin[q(R + x)] dx.    (4)

Integration of (4) gives an exact expression (Eq. (5)) for the scattering amplitude of a vesicle with the parameters R and d separated. In the case R ≫ d/2, so that R + x ≈ R, one obtains from (4), for a bilayer profile symmetric about the membrane center,

    A(q) ≈ 4π R² [sin(qR)/(qR)] ∫_{−d/2}^{+d/2} ρ(x) cos(qx) dx,    (6)

and the macroscopic cross section can be written as

    dΣ/dΩ|_mon(q) = n · F_s(q, R) · F_b(q, d) · S(q),    (7)

where

    F_s(q, R) = [4π R² sin(qR)/(qR)]²    (8)

is the form factor of an infinitely thin sphere with radius R [9] and

    F_b(q, d) = [∫_{−d/2}^{+d/2} ρ(x) cos(qx) dx]²    (9)

is the form factor of the symmetrical lipid bilayer. Eqs. (7)-(9) constitute the new model of separated form factors (SFF) for large unilamellar vesicles.
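As a numerical illustration, the minimal sketch below evaluates Eqs. (7)-(9) for the monodisperse case with a constant contrast profile ρ(x) ≡ Δρ, for which the integral in Eq. (9) has the closed form 2Δρ sin(qd/2)/q. The parameter values are illustrative assumptions, not the fitted values of this work.

```python
# Minimal sketch of the SFF model, Eqs. (7)-(9), for a monodisperse
# population of vesicles with constant contrast rho(x) = drho.
# Parameter values below are illustrative, not fitted results.
import numpy as np

def F_s(q, R):
    """Eq. (8): form factor of an infinitely thin sphere of radius R."""
    return (4.0 * np.pi * R**2 * np.sin(q * R) / (q * R)) ** 2

def F_b_const(q, d, drho):
    """Eq. (9) evaluated in closed form for rho(x) = drho (constant)."""
    return (2.0 * drho * np.sin(q * d / 2.0) / q) ** 2

def sff_cross_section(q, n, R, d, drho):
    """Eq. (7) with S(q) = 1, as appropriate for a dilute vesicle sample."""
    return n * F_s(q, R) * F_b_const(q, d, drho)

q = np.linspace(0.005, 0.2, 400)   # scattering vector, 1/Angstrom
I = sff_cross_section(q, n=1.0, R=270.0, d=50.0, drho=1.0)
```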
The SFF model has an advantage over the HS model: it allows the internal membrane structure to be described by representing ρ(x) as any integrable function. The approximation of the neutron scattering length density across the membrane by a constant, ρ(x) ≡ Δρ, is far from realistic [4,5,7], but it allows a direct comparison of the HS and SFF models. In the approximation ρ(x) ≡ Δρ, (9) integrates to

    F_b(q, d) = [(2Δρ/q) sin(qd/2)]².    (10)

In the present study, the vesicle polydispersity was described by the nonsymmetrical Schulz distribution [13]

    G(R) = (R^m / m!) [(m+1)/R̄]^{m+1} exp[−(m+1) R / R̄],    (11)

where R̄ is the average vesicle radius. The polydispersity of the vesicles was characterized by the relative standard deviation of the vesicle radius, σ = 1/√(m+1). The experimentally measured macroscopic cross section dΣ/dΩ was calculated via convolution of dΣ/dΩ|_mon with the vesicle distribution function G(R), integrating over the vesicle radius from R_min = 110 Å to R_max = 540 Å,

    dΣ/dΩ(q) = ∫_{R_min}^{R_max} dΣ/dΩ|_mon(q, R) G(R) dR / ∫_{R_min}^{R_max} G(R) dR.    (12)

Finally, the dΣ/dΩ values were corrected for the resolution function of the YuMO spectrometer as described in Ref. [14]. A parameter R_f, computed over the N experimental points, was used as a measure of fit quality.

Results and Discussion

The validity of the SFF model relative to the HS model was examined in the approximation ρ(x) ≡ Δρ. Fig. 1 presents the experimentally measured coherent macroscopic cross section of the DPPC vesicles together with the fitted model curves. The SFF model was applied via (7), (8), (10) and (12), and the hollow sphere model via (3) and (12). As seen from Fig. 1, both models describe the experimental curve well. The free parameters used in the fit were the average vesicle radius R̄, the membrane thickness d, and the parameter m in (11). The results of the calculations are presented in Table 1. The HS and SFF models fit the experimental curve with the same accuracy; the difference in the value of the R_f parameter is negligibly small, 1.3%. The HS model gives a larger polydispersity (σ = 0.24) than the SFF model (σ = 0.22), a difference of 9%, and a smaller average radius, a difference of 8%. Although the HS model provides the exact solution, the results of the SFF model for the vesicle radius and polydispersity differ by no more than 10%. An important result is that both models yield the same calculated membrane thickness d.

The proposed SFF model for the evaluation of SANS spectra from large unilamellar vesicles has a fundamental advantage over the hollow sphere model. Within the HS model, the inner structure of the membrane can be described only as a system of nested concentric shells, each with a constant scattering length density [5,7]. The distribution of water inside the lipid membrane, particularly in the region of the polar head groups, is currently widely discussed. In a first approximation, one can use a linear or exponential distribution of water from the membrane surface toward the bilayer interior. This kind of water distribution generates a linear or exponential term in the scattering length density profile, which is beyond the capability of the HS model with its strip-function description. The model of separated form factors introduced in the present work is free of this limitation, because any integrable analytical or numerical function can be used for the scattering length density (see Eq. (9)).
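This flexibility is easy to exercise numerically, since Eq. (9) accepts any integrable profile. The sketch below evaluates F_b for an assumed exponential water-penetration profile with a 5 Å decay length; both the functional form and the parameters are illustrative assumptions, not results of the paper.

```python
# Sketch of Eq. (9) for an arbitrary contrast profile rho(x): here an
# assumed profile in which solvent penetrates exponentially from both
# membrane surfaces and suppresses the contrast there. Illustrative only.
import numpy as np

def F_b_general(q, rho, d, nx=2001):
    """F_b(q,d) = ( integral_{-d/2}^{+d/2} rho(x) cos(qx) dx )^2."""
    x = np.linspace(-d / 2.0, d / 2.0, nx)
    integrand = rho(x)[None, :] * np.cos(np.outer(q, x))
    return np.trapz(integrand, x, axis=1) ** 2

d = 50.0      # membrane thickness, Angstrom
decay = 5.0   # assumed water penetration decay length, Angstrom
rho = lambda x: 1.0 - np.exp(-(d / 2.0 - np.abs(x)) / decay)

q = np.linspace(0.01, 0.5, 200)   # 1/Angstrom
Fb = F_b_general(q, rho, d)
```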
Future investigations of the internal membrane structure via the SFF model can give new and interesting results for binary phospholipid/water, ternary phospholipid/cryoprotectant/water, or phospholipid/surfactant/water systems.

Conclusions

A new model of separated form factors (SFF) is proposed for large unilamellar vesicles. The SFF model makes it possible to analyze the vesicle geometry and the internal membrane structure separately. Its validity was examined by comparison with the hollow sphere (HS) model for large unilamellar vesicles. Both models give the same value of the membrane thickness, and the differences in the average vesicle radius and polydispersity are within 10%. The SFF model is proposed as a prospective method for evaluating the internal membrane structure from SANS experiments on large unilamellar vesicles.
Ab initio investigation of the Elliott-Yafet electron-phonon mechanism in laser-induced ultrafast demagnetization

The spin-flip (SF) Eliashberg function is calculated from first principles for ferromagnetic Ni to accurately establish the contribution of Elliott-Yafet electron-phonon SF scattering to Ni's femtosecond laser-driven demagnetization. This is used to compute the SF probability and demagnetization rate for laser-created thermalized as well as non-equilibrium electron distributions. Increased SF probabilities are found for thermalized electrons, but the induced demagnetization rate is extremely small. A larger demagnetization rate is obtained for non-equilibrium electron distributions, but its contribution is too small to account for femtosecond demagnetization.

Ultrafast demagnetization of ferromagnetic metals through excitation by a femtosecond laser pulse was discovered fifteen years ago by Beaurepaire et al. [1]. In spite of intensive investigations, the microscopic origin of the ultrafast demagnetization could not be disclosed and continues to be controversially debated (see [2] for a recent review). Several mechanisms have been proposed to explain the observed ultrafast phenomenon [3-11]. Most of these theories assume the existence of an ultrafast spin-flip (SF) channel, which would cause dissipation of spin angular momentum within a few hundred femtoseconds.

Elliott-Yafet electron-phonon SF scattering has been proposed as a mechanism for ultrafast spin dissipation [4]. Strong support in favor of electron-phonon mediated spin-flips as the actual mediator of the femtosecond demagnetization was provided in a very recent work, in which ab initio calculated SF probabilities for thermalized electrons compared favorably to SF probabilities derived from pump-probe demagnetization measurements [8]. While these results definitely favor the Elliott-Yafet SF scattering mechanism, the calculation of the electron-phonon scattering involved several serious approximations. Applying the so-called Elliott approximation [12], only the spin-mixing due to spin-orbit coupling in the ab initio wavefunctions was included, but no electron-phonon matrix elements and no real phonon dispersion spectrum were considered. The thus-obtained SF probability is, however, not a direct measure of demagnetization. Recent model simulations for thermalized hot electrons [9] using the Landau-Lifshitz-Bloch equation [13] and assuming a fitted SF parameter did reproduce the experimental magnetization response, but could not identify the SF origin. Hence, it remains a crucial, open question whether laser-induced demagnetization can indeed be attributed to electron-phonon mediated SF scattering.

Here we report an ab initio investigation to accurately establish the extent to which Elliott-Yafet electron-phonon SF scattering contributes to fs demagnetization. To this end we perform ab initio calculations for ferromagnetic Ni, whose ultrafast magnetization decay is well documented [1,8,14]. We include the full electron-phonon matrix elements and phonon dispersions in our calculations. Introducing an energy-dependent SF Eliashberg function, we compute SF probabilities and demagnetization rates for laser-heated thermalized electrons as well as laser-induced non-equilibrium electron distributions, from which we draw qualified conclusions on the possibility of phonon-mediated demagnetization.
To treat phonon-mediated SF scattering at variable electron energies we define a generalized energy- and spin-dependent Eliashberg function,

    α²_{σσ'}F(E, Ω) = (1/N(E_F)) Σ_{kn, k'n', ν} |g^{νσσ'}_{kn,k'n'}|² δ(E^σ_{kn} − E) δ(E^{σ'}_{k'n'} − E) δ(Ω − ω_{qν}),    (1)

which comprises initial and final electron states with quantum numbers kn, k'n' that interact through a phonon with frequency Ω = ω_{qν}; ν and q denote the phonon mode and wavevector, M is the ionic mass, and σ = ↑, ↓ denotes the spin-majority and spin-minority components. For E = E_F (the Fermi energy) the SF part α²_{↑↓}F(E_F, Ω) gives the SF Eliashberg function [15], and the sum over all σσ' corresponds to the standard Eliashberg function, α²F(E_F, Ω) [16]. The (squared) electron-phonon matrix elements are

    |g^{νσσ'}_{kn,k'n'}|² = (ħ/2MΩ) |⟨Ψ^{σ'}_{k'n'}| u_{qν} · ∇V |Ψ^σ_{kn}⟩|²,    (2)

where V is the potential, u_{qν} the phonon polarization vector, and |Ψ^σ_{kn}⟩ the eigenstates of the ferromagnet. Momentum conservation requires q = k' − k. SF scattering becomes possible through the relativistic spin-orbit coupling. The majority and minority Bloch states |Ψ^↑_{kn}⟩ and |Ψ^↓_{kn}⟩ can be decomposed into pure spinor components,

    Ψ^σ_{kn}(r) = a^σ_{kn}(r) χ_σ + b^σ_{kn}(r) χ_{−σ},    (3)

where the components b^σ_{kn} are nonzero only if spin-orbit coupling is present; they represent the degree of spin-mixing, which is a precondition for nonzero g^{ν↑↓}_{kn,k'n'}. To study demagnetization we consider two quantities, SF probabilities and spin-resolved transition rates. The latter are defined as an integral of α²_{σσ'}F(E, Ω) over electron energies and phonon frequencies, weighted by the phononic Bose-Einstein distribution N(Ω), the Fermi distribution f_σ, and the Heaviside function Θ(Ω) [17] (Eq. (4)). Important for the effective demagnetization is the spin-decreasing rate S⁻, which corresponds to S_{↑↓}, while the spin-increasing rate S⁺ corresponds to S_{↓↑}. An approximation of Eq. (4) is helpful to achieve a faster evaluation and to provide more insight into the process. Energy conservation during electron-phonon scattering requires E^{σ'}_{k'n'} − E^σ_{kn} = Ω, but the phonon energy Ω is usually very small (< 0.04 eV) compared to electron-related energies. Already in the standard Eliashberg formulation, Eq. (1), the energy difference between initial and final states is neglected, while the δ-functions δ(E^σ_{kn} − E) are broadened with a parameter (0.03 eV here). Similarly, one can neglect the energy variation due to Ω in the Fermi function f_σ(E + Ω), as long as the temperature is high enough. We can then rewrite the spin-resolved transition rates in the form of Eq. (5), in terms of an energy- and spin-dependent specific scattering rate for electrons, w_{σσ'}(E), given by Eq. (6). Note that w_{↑↓}(E) = w_{↓↑}(E). All calculations were checked against a more accurate numerical implementation not involving this approximation. The SF probability for an electron with energy E is defined as the ratio of the SF part to the corresponding total counterpart, p_S(E) = 2 w_{↑↓}(E) / Σ_{σσ'} w_{σσ'}(E). Analogously, the total SF probability P_S during a scattering event can be defined (Eq. (7)). Although the SF probability has been used in recent discussions of laser-induced demagnetization [8,18], it is actually not the crucial quantity (a high but equal SF probability for both spin channels would not cause demagnetization). We therefore define the normalized demagnetization ratio, D_S = (S⁻ − S⁺) / Σ_{σσ'} S_{σσ'}, which tracks the difference between the magnetic-moment-increasing and -decreasing SF contributions.
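To make the two diagnostics concrete, the toy sketch below evaluates p_S(E) and D_S on synthetic rate arrays. The rates, the occupation weighting used to build the S_{σσ'} (initial-spin occupation times final-spin vacancy), and all parameter values are placeholder assumptions, not the paper's ab initio quantities.

```python
# Toy illustration (not ab initio data) of the SF probability p_S(E) and
# the normalized demagnetization ratio D_S defined in the text.
import numpy as np

E = np.linspace(-2.0, 2.0, 401)              # electron energy grid (eV)

# Synthetic specific scattering rates w_{ss'}(E); w_ud = w_du as noted.
w = {("u", "u"): 1.0 + 0.2 * np.cos(E),
     ("d", "d"): 0.8 + 0.1 * np.sin(E),
     ("u", "d"): 0.05 * np.exp(-E**2),
     ("d", "u"): 0.05 * np.exp(-E**2)}

p_S = 2.0 * w[("u", "d")] / sum(w.values())  # energy-resolved SF probability

def fermi(E, mu, kT=0.1):
    return 1.0 / (np.exp((E - mu) / kT) + 1.0)

# Crude stand-in for the occupation factors: weight each specific rate by
# the initial-spin occupation and the final-spin vacancy (an assumption).
f = {"u": fermi(E, mu=0.05), "d": fermi(E, mu=-0.05)}
S = {k: np.trapz(wk * f[k[0]] * (1.0 - f[k[1]]), E) for k, wk in w.items()}

D_S = (S[("u", "d")] - S[("d", "u")]) / sum(S.values())
print(f"p_S(0) = {p_S[200]:.3f}, D_S = {D_S:+.4f}")
```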
To investigate phonon-induced demagnetization in laser-excited Ni we proceed in three steps. First, we compute the ab initio SF probability P_S for equilibrium Ni, i.e., for E = E_F. Second, we compute SF probabilities P_S for laser-heated Ni by treating a range of electron energies that correspond to those in a hot, thermalized electron gas after laser excitation. Thermalization to electron temperatures T_e of a few thousand K occurs quickly, within about 200 fs after the laser pulse, but the hot electrons are not in equilibrium with the lattice, and the lattice temperature is not altered significantly. In the third step we consider the SF probability for non-equilibrium (NEQ) electron distributions [19] that are expected to be present within ∼100 fs after laser stimulation. Demagnetization ratios D_S are subsequently evaluated for these three situations. The results obtained in these steps are furthermore compared to values computed with the so-called Elliott relation (see below).

An ab initio evaluation of the SF probability of equilibrium Ni requires calculated phonon dispersions and a relativistic electronic structure. Such a calculation has previously been done for paramagnetic Al [15], but had not yet been accomplished for ferromagnets. An approximation was introduced years ago by Elliott [12], who pointed out a possible source of SF scattering arising from the spin-mixing of eigenstates. Employing several assumptions, viz. a paramagnetic metal, nearly constant electron-phonon matrix elements, b_{kn} constant in the Brillouin zone, and b^σ_{kn} ≪ a^σ_{kn}, Elliott derived a relation between the spin lifetime τ_S and the lifetime τ of a general kind of scattering event. This so-called Elliott relation uses the Fermi-surface-averaged spin-mixing of eigenstates, ⟨b²⟩ = Σ_{σ,n} ∫ dk |b^σ_{kn}|² δ(E^σ_{kn} − E_F), and predicts the SF probability P^{b²}_S = (τ_S/τ)^{−1} = 4⟨b²⟩. In a similar way, the influence of spin-mixing on the SF probability in laser-heated Ni can be evaluated. We define a SF density of states (DOS) as

    n_{↑↓}(E) = Σ_{σ,n} ∫ dk |b^σ_{kn}|² δ(E^σ_{kn} − E).    (8)

A generalized Elliott SF probability for an electron with energy E is then given as P^{b²}_S(E) = 4 n_{↑↓}(E)/n(E) (with n(E) the total DOS), which reduces to the standard Elliott expression 4⟨b²⟩ in the limit b^σ_{kn} ≪ a^σ_{kn} and E = E_F. The total SF probability P^{b²}_S of a laser-heated system with electron distribution f_σ(E) is obtained from Eqs. (7) and (5), where w_{↑↓}(E) is replaced by n_{↑↓}(E) and w(E) by n(E). Note that although the treatment is intended for phonon scattering, the Elliott relation does not in fact take the character of the scattering into account. Also, the assumption of a paramagnetic material is essential in Elliott's derivation, as it permits SF scattering at each k point in the spin-degenerate majority and minority bands at E_F. Experimentally, the Elliott relation was found to be valid up to multiplication by a material-specific constant, with variation smaller than one order of magnitude for various paramagnetic metals [20]. Recently it has also been applied to ferromagnetic metals [8,18], even though for exchange-split ferromagnetic bands there exist far fewer k points at which spin-degenerate band crossings occur.
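As a toy numerical check of the Elliott relation quoted above, the sketch below averages invented spin-mixing values |b|² over a set of Fermi-surface states and applies P^{b²}_S = 4⟨b²⟩; ab initio values would instead come from the relativistic band structure. The magnitude of the samples is chosen so that the result lands near the equilibrium value quoted in the next paragraph, purely by construction.

```python
# Toy check (not from the paper) of the Elliott relation P_S^b2 = 4 <b^2>.
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical |b|^2 values for states in the Fermi-surface window, of the
# few-percent magnitude typically produced by spin-orbit coupling.
b2_samples = rng.uniform(0.005, 0.03, size=1000)

b2_avg = b2_samples.mean()   # <b^2>, the Fermi-surface average
P_elliott = 4.0 * b2_avg     # Elliott SF probability
print(f"<b^2> = {b2_avg:.4f}, P_S^b2 = {P_elliott:.3f}")
```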
We have tested the implementation by first computing Al and Ni in equilibrium at low temperature (< 300 K). Our calculations are based on density functional theory (DFT) within the local spin-density approximation (LSDA); see [21] for details. For Al, our calculated α²_{↑↓}F is of the order of 10⁵ times smaller than α²F and in good agreement with the existing previous result [15]. The ab initio calculated SF and non-SF Eliashberg functions of equilibrium Ni are shown in Fig. 1. For Ni, the computed SF function α²_{↑↓}F is only about 50 times smaller than the ordinary α²F function; this is due to the larger spin-orbit coupling. The resulting total SF probability, P_S = 0.04, is given in Table I. To estimate the accuracy of the Elliott approximation we have calculated the Elliott SF probability and obtain P^{b²}_S = 0.07. This value is in rough agreement with P^{b²}_S = 0.10 computed in Ref. [18]. Thus we find that the Elliott relation overestimates the SF probability in equilibrium Ni by about a factor of two.

Next we turn to the topic of current controversy, the actual amount of phonon-induced demagnetization in laser-excited Ni. In Fig. 2 (top) we show calculated energy-resolved SF and non-SF scattering rates, w_{↑↓}(E) and w(E). Note the strong energy variation of w(E). In Fig. 2 (bottom) we compare the computed electron-phonon SF probability p_S(E) to that obtained from the Elliott relation. At some energies, e.g., 0.5-1 eV, these two quantities are nearly the same, but at other energies there is no direct relation other than that the SF probability is large where band states are present. An interesting difference in the context of ultrafast demagnetization is the suppression of p_S(E) around E_F, which is not captured by P^{b²}_S(E). The features of p_S(E) that are not captured by P^{b²}_S(E) can be understood by comparing Eqs. (1) and (8). One of the differences is the presence or absence of a summation over destination eigenstates k'n'. The latter are restricted in Eq. (1) by the construction of g^{ν↑↓}_{kn,k'n'} to correspond to a spin different from that of the source state kn. The number of available end states is, however, not taken into account in the Elliott formula (which, derived for a paramagnetic metal, assumes that the same number of states is available for both spins, and hence suppresses this distinction). The mentioned discrepancy between p_S and P^{b²}_S above E_F is thus easily explained by the lack of states with the same energy and opposite spin in the Ni DOS (see Fig. 3). Hence, the Elliott relation fails for ferromagnets in strongly exchange-split energy regions.

After laser excitation, electrons equilibrate quickly due to electron-electron scattering at a high electron temperature T_e of the order of thousands of K. To describe this situation we use appropriate f_σ(E), noting that the chemical potential must be adjusted as well. Spin conservation leads to differences between f_↑(E) and f_↓(E); in Ni, f_↓(E) has a lower chemical potential than f_↑(E) due to the shape of the DOS. SF probabilities P_S computed for several T_e are given in Table I. With increasing T_e, P_S increases too. The Elliott SF probability P^{b²}_S also increases with T_e, but it still deviates from P_S. A previous work [8] used a Gaussian smearing to simulate a thermalized system (without E_F adjustment) and obtained P^{b²}_S ≈ 0.18. Our values are smaller, but note that the thermalized distribution is described differently here.
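The chemical-potential adjustment mentioned above can be sketched numerically: for each spin channel, the chemical potential at the elevated T_e is shifted so that the spin-resolved electron number is conserved. The two Gaussian model densities of states and all parameters below are illustrative stand-ins, not the actual Ni DOS.

```python
# Sketch of spin-resolved chemical-potential adjustment at high T_e,
# conserving each spin channel's electron number. Toy DOS, not Ni's.
import numpy as np
from scipy.optimize import brentq

E = np.linspace(-10.0, 5.0, 3001)           # energy grid (eV), E_F = 0
dos_up = np.exp(-(E + 1.0) ** 2)            # toy majority DOS
dos_dn = 0.9 * np.exp(-(E + 0.4) ** 2)      # toy minority DOS

def fermi(E, mu, kT):
    return 1.0 / (np.exp(np.clip((E - mu) / kT, -60.0, 60.0)) + 1.0)

def n_electrons(dos, mu, kT):
    return np.trapz(dos * fermi(E, mu, kT), E)

kT_cold, kT_hot = 0.025, 0.3                # ~300 K and a few thousand K
for dos, label in [(dos_up, "majority"), (dos_dn, "minority")]:
    n0 = n_electrons(dos, 0.0, kT_cold)     # conserved spin population
    mu_hot = brentq(lambda mu: n_electrons(dos, mu, kT_hot) - n0, -3.0, 3.0)
    print(f"{label}: mu shifts to {mu_hot:+.3f} eV at kT_e = {kT_hot} eV")
```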
As mentioned before, a large SF probability does not necessarily imply a large demagnetization. Evaluating the demagnetization rate dM/dt = 2μ_B (S⁻ − S⁺) for thermalized electron distributions, we obtain quite small values, of the order of 0.08 μ_B/ps. The reason is that not just a large SF probability but also an imbalance between f_↑(E) and f_↓(E) is essential for a magnetization change. The distributions of spin populations specific to Ni imply that for thermalized electrons below E_F most spin-flips increase the spin moment; spin-reducing transitions occur only above E_F. In that region the SF scattering rate is, however, very low (Fig. 2). The situation is illustrated in Fig. 3. As a consequence, the spin-decreasing rate (S⁻ − S⁺) is much lower than the SF rate (S⁻ + S⁺), and in addition it exhibits only a weak temperature dependence. Hence we find that phonon-mediated SF scattering in thermalized Ni cannot be the mechanism of the observed ultrafast demagnetization.

One remaining possibility for a fast demagnetization is an enhanced SF rate in the NEQ distribution present immediately after the laser pulse. Previous ab initio calculations showed that minority-spin electrons are excited more than majority-spin ones; see [19]. Assuming a 1.5-eV pump laser and a simplified step-like electron distribution reduced by about 5% in the 1.5-eV energy window below E_F, the calculated demagnetization ratio D_S is higher than for thermalized distributions (Table I). A critical role is played here by holes deep below E_F, which have a high SF probability as well as a significant difference between majority and minority occupations (see Fig. 3). An important yet unknown element in estimating the demagnetization is the laser fluence. Nonetheless, we find that phonon-mediated demagnetization in Ni is much more effective in the NEQ state than in the thermalized state, as was proposed recently for Gd [22]. An important aspect is the time scale on which the NEQ demagnetization is active. Electron thermalization proceeds fast in Ni and transforms the initial NEQ distribution into a thermalized one in ∼200 fs. A rough estimate of the demagnetization in this time window is 0.1 μ_B, i.e., smaller than the observed experimental demagnetization. The precise amount of demagnetization depends, however, on the time evolution of the distributions, which requires further investigation.

Using relativistic ab initio calculations we have evaluated the phonon-induced SF probability and demagnetization in laser-pumped Ni. A strong dependence of these quantities on the electron energy is observed, which is not tracked by the Elliott approximation. In the electron-thermalized state, Elliott-Yafet phonon-mediated demagnetization is too small to explain the ultrafast demagnetization, despite reasonably large SF probabilities. We find that Elliott-Yafet SF scattering contributes more to the demagnetization for NEQ distributions immediately after the fs laser excitation. We note lastly that the existence of other fast SF channels [5-7,11] cannot be excluded.

FIG. 3. (Color online) Spin-resolved DOS (filled areas) and phonon-induced spin-flips (arrows) of NEQ and electron-thermalized Ni. The equilibrium DOS is shown by thin lines. SF transitions are significantly different at energies above and below E_F (= 0 eV). The arrow thickness corresponds to the transition rate; its direction and length indicate which direction is dominant and by how much. The amount of laser-redistributed electrons has been enlarged to improve visibility.
TABLE I. Calculated spin-flip probabilities P_S and demagnetization ratios D_S for laser-pumped Ni. Results are given for equilibrium (low T), for thermalized electrons at a high Fermi temperature T_e, and for the non-equilibrium (NEQ) electron distribution created by fs laser excitation. Results obtained for the approximate Elliott SF probability P^{b²}_S (this work and [18]) are given for comparison.
Promoting Self-Regulation in Health Among Vulnerable Brazilian Children: Protocol Study

The Health and Education Ministries of Brazil launched the Health in School Program (Programa Saúde na Escola - PSE) in 2007. The purpose of the PSE is two-fold: to articulate the actions of the education and health systems to identify risk factors and prevent them, and to promote health education in the public elementary school system. In the health field, the self-regulation (SR) construct can contribute to the understanding of the life habits that affect individuals' health. This research aims to present a program that promotes SR in health (SRH). This program (PSRH) includes topics on healthy eating and oral health from the PSE; it is grounded in the social cognitive framework and uses story-tools to train 5th grade Brazilian students in SRH. The study consists of two phases. In Phase 1, teachers and health professionals participated in a training program on SRH, and in Phase 2, they will be expected to conduct an intervention in class to promote SRH. The participants were randomly assigned to three groups: the Condition I group followed the PSE program, the Condition II group followed the PSRH (i.e., PSE plus the SRH program), and the control group (CG) did not enroll in either of the health promotion programs. For the baseline of the study, the following measures and instruments were applied: Body Mass Index (BMI), Simplified Oral Hygiene Index (OHI-S), Previous Day Food Questionnaire (PDFQ), and the Declarative Knowledge for Health Instrument. The data indicated that the majority are eutrophic children, but preliminary outcomes showed high percentages of children who are overweight, obese and severely obese. Moreover, participants in all groups reported high consumption of ultra-processed foods (e.g., soft drinks, artificial juices, and candies). Oral health data from the CI and CII groups showed a prevalence of regular oral hygiene, while the CG presented good oral hygiene. The implementation of both the PSE and the PSRH is expected to help reduce health problems in school, as well as public expenditures on children's health (e.g., obesity and oral diseases).

INTRODUCTION

Health promotion for children has been receiving the attention of educators and researchers, with a particular focus on oral health and eating habits (Yekaninejad et al., 2012; World Health Organization, 2016). According to the WHO report, the prevalence of obesity among children under the age of five increased from 4.8 to 6.1% between 1990 and 2014, meaning that the number of children affected by this phenomenon grew from 31 million to 41 million (World Health Organization, 2016). Oral health involves health and well-being in an integral way, and despite being preventable, oral diseases are considered endemic (Yekaninejad et al., 2012). Notwithstanding some improvements in oral health in developed countries, oral diseases such as dental plaque, gingival bleeding and dental caries are prevalent among schoolchildren worldwide and are still considered public health problems (Yekaninejad et al., 2012). In 2007, the Ministries of Health and Education of Brazil created the Health in School Program (Programa Saúde na Escola - PSE) with the aim of improving the school health system in Brazil (Brasil, 2007). The PSE is a school-based program built on the articulation of the educational and health systems to promote health education for public school students (Brasil, 2007).
The main objective of the PSE is to detect risk factors and identify acts of preventive care while promoting the health of public elementary school students (e.g., assessing nutritional status, early incidence of hypertension and diabetes, caries control, and visual and auditory acuity) (Brasil, 2015). The social cognitive framework provides a relevant theoretical framework for the present study (Bandura, 1986). Social cognitive researchers have specifically stressed the importance of people's agency as a construct of the assumption of one's personal responsibility for one's own behaviors (Bandura, 1986). This has been the major focus of research in the field of Self-Regulation in Health (SRH) (Bandura, 1986). Extant research has focused on mapping the intervening variables in the process of building autonomy and responsibility (Zimmerman, 1986; Rosário et al., 2012a, 2015). Moreover, the design of intervention projects to promote self-regulatory processes, individuals' engagement with their own health issues, and the resulting health outcomes has been receiving researchers' attention (Bandura, 2005; Silva and Pereira, 2012). Self-regulation (SR) models have three customary subfunctions: (i) self-control of health-related behaviors and the attached cognitive and social conditions, (ii) adoption of objectives and strategies to achieve this self-control, and (iii) self-reactivity, which involves self-motivating stimuli and social support networks that sustain healthy practices (Bandura, 2005). When focused on health, the SR construct can help build understanding of the processes involved in promoting lifelong habits. Thus, the promotion of SR is likely to improve individuals' health and personal well-being (Bandura, 2005). The extant literature has shown the efficacy of using the SR framework in health programs (e.g., the use of self-management strategies during the treatment of chronic diseases) (West et al., 1997; Fu et al., 2003; Clark et al., 2005) designed to improve health, decrease the need for hospitalizations, and increase adherence to treatment (Haskell et al., 1994; West et al., 1997; Fu et al., 2003; Clark et al., 2005). However, research on the efficacy of school-based programs focused on promoting SR competencies in the health domain is still lacking (e.g., interventions targeting healthy eating and oral health) (Bandura, 2005).

School Health Program - PSE (Programa Saúde na Escola)

The PSE is offered to Brazilian cities by the central government, and it involves the combined efforts of primary health care units and public schools (Brasil, 2007). The program has three components: (a) evaluation of the health conditions of the children and adolescents enrolled in public schools, (b) training in a set of activities for health promotion and risk prevention, and (c) the professional development and ongoing training of professionals from the educational and health systems (Brasil, 2015). To develop these actions, health professionals [nurses, community health workers (CHWs), dentists] conduct anthropometric evaluations (weight and height) and health assessments (healthy eating habits, oral health, and visual acuity) of students from all school grades (Brasil, 2015).

Program to Promote Self-Regulation in Health (PSRH)

The PSRH is a program designed to promote the SR of health. The health contents of the PSRH are the same as those of the PSE (i.e., healthy eating and oral health habits).
Moreover, the program is rooted in the social cognitive framework and the construct of SR (Rosário et al., 2012b). Both components are the theoretical ground for the story-tool Yellow's Trials and Tribulations, which will be used to deliver the health contents and SR strategies (Rosário et al., 2017). This story-tool aims to promote SR skills in children aged up to 10 years by teaching them learning strategies designed to accompany the activities proposed by the PSRH. The book tells the story of the disappearance of the Yellow color from the Rainbow and the adventures of the other rainbow colors as they search for their missing friend (Rosário et al., 2012c). The story-tool addresses many practical examples of how children can use SR strategies to resolve their daily difficulties by increasing their autonomy in a responsible manner (Núñez et al., 2014; Rosário et al., 2017). The present study should be interpreted as a response to three current issues: the health of Brazilian children, which in general shows a negative trajectory despite the efforts of the PSE; children's difficulties in developing systematic actions involving routine follow-up activities in the PSE; and the lack of actions promoting healthcare (Machado et al., 2015). To address the latter, this research aims to present a program that promotes SR in health. This program includes topics on healthy eating and oral health from the PSE; it is based on the social cognitive framework and uses story-tools (Cabanach et al., 2009; Rosário et al., 2017) to train SRH in 5th grade students from the south of Brazil. The current program aims to promote the development of self-regulatory skills, which are considered essential for changing health-related behaviors (Bandura, 2005).

METHODS

The current paper is a protocol study that describes a quasi-experimental study (Bedard et al., 2017). The development of the project will have two phases: Phase I, delivering training in health self-regulation; and Phase II, setting up an intervention program to promote self-regulation in health (Figure 1).

Contextualization of the Study Site

The study will be conducted in a city in the south of Brazil (Sapucaia do Sul). The city has approximately 138,357 inhabitants and a lower average monthly income compared with neighboring cities in the region (IBGE, 2016). The high social vulnerability of the city's inhabitants was the reason this town was chosen for the investigation. Sapucaia has 23 elementary schools working with primary health care units that employ doctors, nurses, nursing technicians, CHWs, dentists and oral health technicians (Sapucaia do Sul, 2016). Of these, 16 elementary schools are currently engaged in the PSE program. To engage in the PSE, elementary schools and primary health care units form dyads: each school has a health care unit partner to work with regarding health issues (Sapucaia do Sul, 2016). For the current study, the schools enrolled should have two classes in the 5th grade. Only 14 of the 16 elementary schools in Sapucaia engaged in the PSE met this criterion; all were invited to participate. Finally, seven dyads (school-health care unit) agreed to participate in the current investigation (response rate of 50%). Seven elementary schools not enrolled in the PSE were contacted to participate as the CG, but only three agreed (Figure 2).
The reasons given by the schools for not enrolling in the research were not related to the nature or goals of the intervention, but to social and administrative limitations (e.g., general strikes that paralyzed public schools for several months, high workloads and low salaries). The latter reflect the current educational political environment in Brazil and stress the relevance of developing research projects with vulnerable children to help them with learning and health issues (Ribeiro, 2013; Casemiro et al., 2014).

Recruitment and Randomization

The school boards of 10 elementary schools agreed to participate. Participants were students and teachers from seven PSE elementary schools and their health units, and from three non-PSE schools. These schools were randomized into three groups: the Control Group (CG, eight classes), schools not participating in the PSE; Condition I (CI, eight classes), schools participating in the PSE; and Condition II (CII, nine classes), schools participating in Phases I and II of the project.

Study Participants

Six hundred and twenty-five 5th grade students and their parents were contacted face-to-face (parent meetings and meetings with the teachers). Finally, 429 students [215 girls] were enrolled. These students are nested in 24 classes, and their allocation to the three conditions was as follows: 8 classes with 118 students [62 girls] not enrolled in the PSE participated as the CG; the remaining 17 classes were randomly split into two groups, 9 classes with 198 students [92 girls] in the CII, and 8 classes with 113 students [61 girls] in the CI.

Inclusion Criteria

To be enrolled in this study, participants had to meet the following criteria: teachers had to teach a 5th grade class in a public elementary school; health professionals had to be working in a primary care unit; students had to be enrolled in the 5th grade of a public elementary school; and parents/guardians had to be responsible for a child enrolled in a 5th grade class in a public elementary school. All participants (parents and children) had to be volunteers and sign the Free and Informed Consent Term, with parents/guardians also signing the Free and Informed Consent Term authorizing their children to participate in the study. All subjects gave written informed consent in accordance with the Declaration of Helsinki.

Exclusion Criteria

Potential participants who did not meet all the inclusion criteria, including 5th grade students with special educational needs limiting their cognitive autonomy, were excluded from the study.

Program Rationale

The PSRH is grounded in the SR framework, which describes the degree to which students are metacognitively, motivationally, and behaviorally engaged in their own learning processes (Zimmerman, 1989). SR processes may be described as open and dynamic processes proceeding through three main phases: the forethought phase, the performance phase, and the self-reflection phase (Zimmerman, 2002). The cyclical nature of this model aims to explain how students initiate, maintain and control their behaviors, thoughts, and emotions toward specific goals. Motivational beliefs and task analysis are the two areas of the forethought phase, and they describe processes prior to learning efforts (e.g., goal setting, self-efficacy beliefs). The performance phase describes the processes used by students during learning.
For example, self-instruction is a strategy that may help students focus their attention on homework assignments and eliminate distractors, and self-recording notes is a strategy that may help students self-monitor their performance (Zimmerman, 1989). Both strategies may facilitate self-control and self-observation, which are key components of the performance phase (Zimmerman, 2002). Lastly, the self-reflection phase describes methods intended to help students understand the processes that may have led to the outcomes and their reactions to these outcomes (Zimmerman, 1989). Self-judgments and self-reactions are the two areas of this last phase of the SR cycle (Zimmerman, 2002). For the purposes of the current work, the PLEE model, an SR model grounded in the model by Zimmerman (2002), will be used (Rosário et al., 2012a; Núñez et al., 2013).

[Figure 1: Promoting self-regulation in health among vulnerable Brazilian children: protocol study. Flow diagram for study procedures. PSE, Programa Saúde na Escola; PSRH, Program to Promote Self-Regulation in Health.]

The abbreviation PLEE stands for the three phases that comprise the structure of the model: planning, task execution and evaluation (Pina et al., 2010). In this model, the logic and the cyclic movement are present at all times; even during the planning phase, the execution and evaluation phases are already at play (Rosário et al., 2017). For example, when children plan what they want to eat for lunch, they fulfill the execution phase by placing healthier foods in their lunch pack, and they complete the self-reflection phase by evaluating their choices based on what they have learned about nutrition.

Phase 1 - Training in Self-Regulation in Health

During this stage, the training aimed to equip the participating professionals with the skills needed to conduct a program in SR focused on healthy eating and oral health habits. The training occurred in 2017 and was delivered by the authors and by research assistants with knowledge and skills in the SR of health. The training lasted a total of 24 h, spread over 3 months (4-h sessions every 2 weeks of each month). The participants were health professionals (dentists, nurses, nursing technicians, CHWs) and 5th grade teachers of the CII schools. The sessions addressed the theoretical content related to SR, healthy eating and oral health, and the chapters of the story-tool Yellow's Trials and Tribulations, which were read and discussed (Rosário et al., 2012c). The sessions also included hands-on activities to build the support materials needed to work with the children (e.g., drawings, worksheets, food maps) (see Figure 3).

Phase 2 - Intervention: Program to Promote Self-Regulation in Health

The intervention program with the children will be run by teachers and health professionals (CII) in 50-min biweekly sessions in class throughout the 2018 school year. During these sessions, the children will discuss the chapters of Yellow's Trials and Tribulations (Rosário et al., 2012b), one chapter per week, along with discussions and activities related to healthy eating and oral health (Table A1 in the Appendix). The practice of storytelling is an educational tradition in a variety of cultures. One reason for using this technique may be that stories are efficient ways of organizing knowledge (Rosário et al., 2017).
When children become involved in a narrative, through reading or listening, they are likely to learn how to organize information in a logical sequence (Alna, 1999). Extant research indicates that the discussion and interpretation of narratives may contribute to children's awareness of SR behaviors, which may be translated into their learning processes. This process takes place through vicarious learning, by observing and expanding upon behaviors and expressions that help structure future modulations (Bandura, 1986; Schunk, 2000). The Yellow's Trials and Tribulations story-tool is divided into three steps, each with specific goals and contents to be learned by the children. By the end of the first step of the book (Chapters 1-7), the children are expected to be able to define the three phases of the SR process (PLEE) (Rosário et al., 2012a). After completing the second step (Chapters 8-12), the children are expected to be able to apply the PLEE model to situations in their everyday lives (Rosário et al., 2017). After completing the entire assigned reading, the children are expected to be able to reflect on the importance of the SR strategies learned and to transfer this knowledge to distinct domains of their lives (e.g., behavior in class, healthy food habits, time management, oral hygiene).

Monitoring

The program will be monitored by the researchers through case discussions and theoretical group meetings with teachers and health professionals during the biweekly class visits. Students in the three groups will be assessed five times throughout the year: before the initiation of the intervention program, 3 and 6 months later, at the end of the intervention, and 6 months post-intervention, to check the impact of the program on children's health.

Ethics Statement

This project was approved by the Ethics Committee of the Federal University of Health Sciences of Porto Alegre, Brazil (UFCSPA), no. 1.151.220, and by the Coordination for the Improvement of Higher Education Personnel (CAPES), the Brazilian federal agency for the support and evaluation of graduate education. The participation of children and parents, as well as the parents' consent, was voluntary and unrewarded. Informed consent was obtained from all parents/guardians authorizing their children's participation in this study. All subjects (children and parents/guardians) gave written informed consent in accordance with the Declaration of Helsinki.

Instruments and Measures

The effectiveness of the intervention will be assessed five times throughout the program. Ten self-reports (e.g., the Food Preference Instrument, the Students' Attitudes and Perceptions and Parents' Perceptions and Influences on Health Instrument, the Food Availability and Oral Health Instrument, the Self-Regulation for Health Scale, the Self-Efficacy for Health Scale) and two physical measures (BMI and OHI-S) will be used. To characterize the baseline of this study, two questionnaires were applied in 2016, before the start of the program, the Previous Day Food Questionnaire (PDFQ) and the Declarative Knowledge instrument, together with the physical measures (BMI and OHI-S) (Greene and Vermillion, 1964; World Health Organization and Multicentre Growth Reference Study Group, 2006; Penkilo et al., 2008; Assis et al., 2009; Wall et al., 2012).
Body Mass Index (BMI)

This anthropometric evaluation is one of the less invasive methods and has well-established measurement techniques and cut-off points (Greene and Vermillion, 1964; World Health Organization and Multicentre Growth Reference Study Group, 2006; Brasil, 2008). It is the method most commonly used in interventions that focus on obesity prevention (Kamath et al., 2008; Friedrich et al., 2012; Bogart et al., 2014). Validation studies of this instrument are limited in number; an internal and external validation study showed valid estimates of the weight of the subjects evaluated (Deurenberg et al., 1991). The data were obtained by determining the weight and height of the students using an electronic scale and a stadiometer, respectively. The devices were calibrated in the Nutrition Laboratory of the Federal University of Health Sciences of Porto Alegre and operated by nutrition researchers from that laboratory. To guarantee the reliability of the measures, all researchers followed the same protocol throughout the evaluations. To classify the nutritional status of the schoolchildren, height-for-age (H/A) and BMI-for-age (BMI/A) z-scores were used, following the standards of the World Health Organization and Multicentre Growth Reference Study Group (2006). The following cut-off points were used for H/A: z < -3 (very low height), -3 ≤ z < -2 (low height), z ≥ -2 (adequate height); and for BMI/A: z < -3 (severe thinness), -3 ≤ z < -2 (thinness), -2 ≤ z < +1 (eutrophic/normal), +1 ≤ z < +2 (overweight), +2 ≤ z < +3 (obesity), z ≥ +3 (severe obesity) (World Health Organization and Multicentre Growth Reference Study Group, 2006).

Simplified Oral Hygiene Index (OHI-S)

The OHI-S is a classic measure used to determine the impact of health education on oral hygiene (Greene and Vermillion, 1964; Silveira et al., 2002; Cardoso et al., 2011; Scopel et al., 2011). To assess the oral health condition, the OHI-S index was used. This index measures plaque accumulation on six dental surfaces (the vestibular surfaces of teeth 16, 11, 26 and 31, and the lingual surfaces of teeth 36 and 46) (Greene and Vermillion, 1964). Each surface is scored on a scale from 0 to 3: 0, the surface is free of plaque; 1, less than 1/3 of the tooth is covered by plaque; 2, between 1/3 and 2/3 of the tooth is covered by plaque; 3, more than 2/3 of the tooth is covered by plaque. The final result is obtained by dividing the sum of the scores by the number of surfaces evaluated (Greene and Vermillion, 1964). The values obtained indicate oral hygiene on a range between good and poor: values from 0.0 to 0.6 indicate good hygiene, values from 0.7 to 1.8 indicate regular hygiene, and values from 1.9 to 3.0 indicate poor hygiene (Greene and Vermillion, 1964). The OHI-S is recognized as a useful index for evaluating dental health education in public school systems, and the literature states that it is a sensitive method that can be used with confidence to evaluate the oral hygiene of population groups (Greene and Vermillion, 1964; Mbawalla et al., 2010).
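The cut-offs above translate directly into a small scoring routine. The helpers below are a hypothetical illustration, not the study's software: one maps the six OHI-S surface scores to the hygiene bands, the other maps a BMI-for-age z-score to the WHO categories.

```python
# Hypothetical scoring helpers encoding the cut-offs quoted above.

def ohi_s(surface_scores):
    """Mean of the six surface scores (each 0-3), mapped to a hygiene band."""
    index = sum(surface_scores) / len(surface_scores)
    if index <= 0.6:
        return index, "good hygiene"
    if index <= 1.8:
        return index, "regular hygiene"
    return index, "poor hygiene"

def bmi_for_age_category(z):
    """WHO BMI-for-age z-score categories as listed in the text."""
    if z < -3:
        return "severe thinness"
    if z < -2:
        return "thinness"
    if z < 1:
        return "eutrophic/normal"
    if z < 2:
        return "overweight"
    if z < 3:
        return "obesity"
    return "severe obesity"

print(ohi_s([1, 2, 1, 0, 2, 1]))    # -> (approx. 1.17, 'regular hygiene')
print(bmi_for_age_category(1.4))    # -> 'overweight'
```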
Previous Day Food Questionnaire (PDFQ)

The PDFQ is an illustrated instrument that collects information from schoolchildren about the food they consumed on the previous day (Assis et al., 2009). The meals are arranged in chronological order: breakfast, mid-morning snack, lunch, afternoon snack, dinner, and evening snack (Assis et al., 2009). Each meal is illustrated by 21 individual foods and food groups: dry beans, rice, milk, coffee with milk, chocolate milk, cheese, yogurt, beef or poultry, pasta, bread or crackers, French fries, pizza or hamburger, leafy vegetables, starchy vegetables, vegetable soup, fruits, sweets, chips, fish/seafood, soft drinks, and fruit juices (Assis et al., 2009). The reliability of this instrument was 70.2% for foods consumed and 96.2% for foods not consumed. In Brazil, studies were also conducted using multivariate logistic regression; the data showed that the frequency of discordance ranged from 3.7 to 39.6% (Assis et al., 2009). Children in the 5th grade classes will complete this questionnaire three times a week, in class and at home, the latter with the parents/guardians acting as responsible mediators.

Declarative Knowledge for Health Instrument (DKH)

In this study, the Declarative Knowledge for Health (DKH) instrument is an adaptation of the Nutritional Monitoring questionnaire (Penkilo et al., 2008; Assis et al., 2009). Its questions aim to evaluate children's knowledge about healthy eating and oral health (Penkilo et al., 2008; Wall et al., 2012). The instrument consists of 20 questions (10 questions for each theme). In the current study, Cronbach's alpha coefficient indicated an internal consistency of 0.71 for healthy eating and 0.76 for oral health.

PROPOSED ANALYSIS

Data will be analyzed with linear mixed models using IBM SPSS Statistics version 22, with alpha levels set at p ≤ 0.05. It is expected that at the end of the intervention significant differences will emerge, with an increase in SRH, self-efficacy, and declarative knowledge in both domains (healthy eating and oral health) for the CII relative to the other two groups (CI and CG). Moreover, in relation to healthy eating, it is expected that by the end of the program the consumption of fruits and vegetables will have increased and the consumption of ultra-processed foods will have decreased, with a consequent reduction in overweight and obesity. Regarding oral health, at the end of the intervention the CII group is expected to show better brushing and oral health care and, consequently, an improvement reflected in dental plaque reduction (and possibly the prevention of oral diseases). These hypotheses will be examined through intra- and intergroup analyses with repeated-measures ANOVA across the five evaluation moments of the program. Differences between the condition and control groups at baseline were examined using the chi-square (χ²) test of heterogeneity comparing proportions between groups, with significance levels set at p < 0.05; descriptive analyses of frequencies, means, and standard deviations were also performed.
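As an illustration of the planned modeling, the sketch below fits a linear mixed model with random intercepts for class (students nested in classes) to simulated data, using Python's statsmodels in place of the SPSS procedure named above; the variable names and the simulated dataset are placeholders, not study data.

```python
# Illustrative mixed-model sketch on simulated placeholder data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 429
df = pd.DataFrame({
    "srh_score": rng.normal(50, 10, n),               # hypothetical outcome
    "condition": rng.choice(["CG", "CI", "CII"], n),  # group assignment
    "time":      rng.integers(0, 5, n),               # five assessment waves
    "class_id":  rng.integers(0, 24, n),              # 24 classes
})

# Random intercept per class accounts for students nested in classes.
model = smf.mixedlm("srh_score ~ C(condition) * time", df,
                    groups=df["class_id"])
print(model.fit().summary())
```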
Baseline

The preliminary outcomes cover the first application of the PDFQ, the DKH, and the physical measurements (BMI and OHI-S) (Greene and Vermillion, 1964; Penkilo et al., 2008; Assis et al., 2009; Wall et al., 2012; World Health Organization, 2016). These data were collected prior to the beginning of the project in order to characterize the baseline. For the anthropometric data, 429 students (198 from the CII, 113 from the CI and 118 from the CG) participated in data collection. The mean age of the participants was 10.61 years (SD = 1.06). When assessing the nutritional status of the children according to the BMI-for-age and height-for-age z-scores, no differences between the groups were observed (World Health Organization and Multicentre Growth Reference Study Group, 2006). The prevalence of eutrophic/normal children with adequate height for their age was 96 (24.5%), 57 (14.5%), and 67 (17.1%), respectively. However, there were high percentages of overweight, obesity and severe obesity in all groups (Table 1).

Regarding declarative knowledge in health, we did not observe statistically significant differences in the number of correct answers on healthy eating and oral health between the participating groups (significance level of 0.05). Focusing on the knowledge related to healthy eating, only two items (3 and 5) presented significant differences in the number of correct answers between the groups (item 3: χ² = 7.20, p = 0.027; item 5: χ² = 12.38, p = 0.002). These questions relate to fruits and vegetables (e.g., "It is necessary to eat fruits and vegetables, but not every day") and biscuits with sugar (e.g., "Biscuits with sugar are industrial foods"). For both items, the CI group (schools enrolled in the PSE) had more correct answers than the other two groups. This may indicate that this content had already been addressed during the PSE sessions run by the primary care teams (Brasil, 2015) as well as by the teachers in science classes (Brasil, 2016).

Regarding the oral health data, a difference was found between the groups. Globally, the CII and CI groups showed a prevalence of regular oral hygiene, 103 (25.0%) and 58 (14.1%), respectively, while the CG presented good oral hygiene, 67 students (16.3%), as shown in Table 2. The good oral hygiene status of the CG may be due to more frequent and adequate brushing techniques. On the oral health topic, three items presented differences between the groups in the number of correct answers (item 1: χ² = 9.15, p = 0.010; item 8: χ² = 8.50, p = 0.014; item 9: χ² = 7.35, p = 0.025). These items concern caries disease (e.g., "Caries is not caused by bacteria"), toothbrush care (e.g., "I need to change my toothbrush once per year") and bacterial plaque (e.g., "Bacterial plaque can be removed by brushing teeth"). The highest numbers of correct answers were obtained by the CI and CII groups.

The outcomes related to the Previous Day Food Questionnaire (PDFQ) describe the food consumed on 3 days of the week (1 weekend day and 2 weekdays), across six meals per day (breakfast, mid-morning snack, lunch, afternoon snack, dinner, and evening snack) (Assis et al., 2009). The data were organized and analyzed as follows: the 21 foods depicted in the PDFQ were grouped into ten large food groups in accordance with the Food Guide for the Brazilian Population, which describes the food groups that should be most prevalent at each meal (Table 3) (Assis et al., 2009; Brasil, 2014). The students in the three groups reported the food they had consumed in the 3 days prior to the questionnaire. The foods most commonly consumed at breakfast and the mid-morning snack were from the dairy (milk, yogurt, cheese, chocolate milk), cereal (bread, wafers, rice, pasta), and fruit (fruit and fruit juices) groups.
Another observation worth noting is that, for snacks, the children tended to consume candies (cakes and sweets in general), soft drinks (and artificial juices), and chips (Currie et al., 2012). The food groups most frequently reported at lunch and dinner were cereals (bread, crackers, rice, pasta, and potatoes), proteins (meats in general, eggs), and soft drinks (Table 4). It should be noted that the vegetable groups had a lower prevalence than soft drinks (and artificial juices). The main food groups at the afternoon and evening snacks were cereals and milk and dairy products, with an emphasis on fast food in the evening (e.g., chips, hamburgers, pizza, and ultra-processed snacks) (CGI, 26%; CGII, 18%; CG, 27%; see Table 4) (Brasil, 2014).
DISCUSSION
The data are consistent with Brazilian population-based studies of school children, which show a low rate of nutritional deficit and an increase in overweight and obesity (Ruiz et al., 2009). In a study of 3,387 school children aged seven to ten in the public school system of Rio de Janeiro, the students showed a prevalence of eutrophy/normal weight, followed by overweight and obesity (Anjos et al., 2003). In the present sample, overweight and obesity were identified in all groups, together with unhealthy eating habits such as the consumption of soft drinks and artificial juices at almost all meals, the consumption of snacks, candies, and fast food at breakfast, and a higher consumption of soft drinks and candies than of vegetables. These eating behaviors, particularly the low consumption of fruits and vegetables and the high consumption of sweets, candies, and beverages rich in sugar and fat, have been indicated in the literature as significant risk factors for overweight and obesity (Neutzling et al., 2007; Tarek et al., 2008; Bertin et al., 2010). Declarative knowledge describes what people know: how information is processed and how concepts are understood (Rosário et al., 2017). Data on declarative knowledge about healthy eating indicate that children maintain unhealthy eating habits even though they have knowledge about healthy eating (Gaspar et al., 2014). This may suggest that children have difficulties in regulating their eating behavior (Anderson et al., 2007). Therefore, school interventions on this topic may need to address the set of self-regulatory skills required to develop healthy eating habits (Anderson et al., 2007). Data on oral health were also gathered from all the enrolled groups. The students in the groups that had previously participated in the PSE showed poorer oral hygiene than the students in the control group, which does not take part in school activities systematically implemented on this topic. This finding was unexpected, because the students in Condition I and Condition II had participated in PSE activities and had several opportunities to practice oral care, whereas students in the control group did not have this training; such practice is what prevents plaque formation on the tooth surfaces and, more broadly, oral disease. Prior research indicates that adequate oral hygiene is related to the absence of caries in school children (Anagnostopoulos et al., 2011).
Another study pointed out that educational-preventive activities with school children and preschoolers, even over a short period, may be effective in reducing visible plaque and gingival bleeding (Barreto et al., 2013). However, findings suggest that, to maintain positive results, intervention programs must be long term (Pauleto et al., 2004). In recent years, the number of oral health programs offered to school children has increased. However, these programs still take an approach more focused on medicalized treatment than on educational promotion that, for example, stresses the students' role as agents of their own health (Pauleto et al., 2004). Moreover, even school-based programs with an educational approach lack opportunities to discuss and reflect on health behaviors and to improve SR. SR practices are important because they are likely to promote autonomy and encourage good oral health care (Pauleto et al., 2004). The initial data allow us to conclude that participation in the PSE intervention for 4 years (from 1st to 4th grade) and the acquisition of knowledge on healthy eating and oral health are not enough to promote and sustain good health habits. The data seem to indicate that, besides the health knowledge learned through the PSE intervention, children may need intentional educational training in SR to help them change their health behavior. The present findings indicate the need to expand PSE interventions while emphasizing the development of SR competences for the self-management and self-control of health-related behaviors. This training is expected to help children set goals and deploy strategies to achieve, and afterwards sustain, good health practices.
LIMITS
The possible limitations of this study stem from the restricted number of participants and from the design: the study was restricted to a single city and its schools and health services, and data collection may be affected by loss of participation, especially considering that the investigation will run throughout a school year, as well as by the schools that did not agree to take part in the study. The biweekly monitoring of Condition II, the training program, and the guidance materials are among the strategies expected to help deal with such external situations (e.g., stoppages due to teacher strikes, withdrawal from the study, and transfers of students and teachers).
CONCLUSIONS
The present study is expected to contribute to understanding the impact of a public health policy implemented throughout the country. The findings are expected to reinforce the importance of the multidisciplinary action of health and education professionals, an interdisciplinary articulation that favors health promotion. The PSRH is designed to respond to this call. The program aims to equip students with the skills and knowledge to improve their self-care habits, the organization of their daily life, and their overall autonomy. Moreover, it can be used as a tool to train teachers and health professionals so that they can help students throughout the stages and processes of SR (e.g., PLEE). It is hoped that the SRH training provided to health professionals and teachers, together with the implementation of the PSRH in schools, will help children become more autonomous and responsible in their self-care regarding healthy eating and oral health.
As a consequence, the PSRH is expected to help reduce children's health problems as well as public expenditure on children's health (e.g., obesity and oral diseases).
AUTHOR CONTRIBUTIONS
LM, CM, PR, MB, and MS contributed to the design and conducted the training of the program; MM and AB contributed to the organization and analysis of the data; CR, CM, and PR contributed to the writing, discussion, and approval of the manuscript.
FUNDING
The intervention program described in this study was funded by the Coordination for the Improvement of Higher Education Personnel (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior, CAPES), the Brazilian federal agency for the support and evaluation of graduate education, under Public Notice 09/2014, Science without Borders Program/Special Visiting Researcher Program (PVE).
Retrospective analysis of junior female handball players' priorities
Purpose: to carry out a retrospective analysis of the tactical priorities of junior female handball players. Material: junior female handball players aged 15-16 years (n = 60) participated in the research. The studies were conducted in 2006, 2010, and 2016 at sport schools and physical culture colleges of Ukraine. We used the author's programs «Balltest» and «Handball skills». Results: indicators of the players' abilities and of the effectiveness of their tactical thinking were obtained for the different research periods. Correlating these indicators with physical potential and throwing fitness reveals the players' tactical priorities. The comparison showed that the players tested in 2016 were better at solving complex team tasks but had lower sensory indicators. We also found a preference for defensive and attacking actions in the central zone of the court. Conclusions: in the universality of their tactical priorities, the players tested in 2016 fall behind the players of 2006 and 2010. The players of 2016 are stronger in the mental solution of positional defense tasks, especially in their readiness to act as supporting players.
Introduction
Striving for spectacle and records, modern sport has reached the level of contests at the extremes of human potential. Such performances are aimed at the fan as an active participant in the sporting action [4]. The spectator demands to enjoy the struggle between opponents; in this sense the spectacle of sport is defined as a "fight of characters and tactical plans" [4]. This is especially noticeable in team sports. In the words of D. Alberto Lorenzo Calvo [19], sport game teams have their own concept of success: it implies the individual skill of the players and the coordination of their actions under constantly changing court situations and the resistance of the opponent [19]. To ensure such activity in handball, players need quickness of perception [39], the ability to anticipate situations and take adequate decisions [5], and well-developed cognitive abilities [17,18,20]. Analysis of the literature shows that the study of athletes' cognitive abilities remains an urgent problem. Such studies have different orientations: the study of effective team thinking based on non-verbal, emotional decisions [39]; tactical thinking with expected feedback on the decision taken, using intuitive, analytical, and subjectively oriented models of game situations [44]; intuitive thinking as a quicker and more effective means of making the correct decision in a specific game episode [41]; and the emotional component of decision-making,
which is necessary for developing one's own behavioral style and confidence in critical game situations [22,38]; the testing of perceptive-cognitive differences between age groups and between coaches of different ages and license levels [28]; the correlation between motivation, goals, and the perceived motivational climate and their influence on the cognitive and somatic components of young athletes' competitive anxiety [27,30]; success in the training of general and special physical qualities at different stages of athletes' preparation [31]; impulse and subjective indicators of athletes' reaction to physical load [37]; indicators for predicting the success of martial arts athletes [35,40]; and the optimization of physical loads [34] taking into account athletes' individual characteristics [25,26] and health indicators [42]. Other works addressed athletes' cognitive sphere directly and expanded the knowledge about handball players' tactical thinking [10,12,13]. Tactical thinking is defined as the ability to choose a rational decision in a game situation [14]. It is a complex of mental operations supported by the potential of higher nervous activity [8] and by individual-typological features of neuro-physiological processes [10]. A method for determining handball players' tactical thinking was developed on the basis of these principles [3]; it uses models of game situations displayed with solution variants for complex and simple tasks. The use of a virtual board for the dynamic presentation of tactical tasks is shown in other methods [17,44], which are characterized by a program algorithm and a division into blocks. Among other studies, the following tactical models of athlete and team behavior can be found: a methodology for assessing tactical attacking behavior in handball [32]; the use of "gradient contest", which the authors found can increase the success of students of both higher and lower qualification levels [33]; and the development of strategies for preventing burnout in young athletes trained in elite educational structures and for facilitating long-term participation and well-being in sport [36]. Tactical thinking is part of an athlete's cognitive strategies [6]. The anthropometric, technical, and physical indicators of an athlete are interconnected with the realization of his or her tactical plan [15,19,43]. A computer program makes it possible to determine the tactical preferences of elite female handball players for the control of competitive performance [13]. Applying such a program to junior female handball players will provide information about their tactical priorities and, in turn, make it possible to raise the effectiveness of the training process. The purpose of the work is to carry out a retrospective analysis of junior female handball players' tactical priorities on the basis of their tactical thinking, taking into account physical indicators and throwing fitness. For this purpose it is necessary: 1) to study the characteristics of tactical thinking, physical indicators, and throwing fitness of the junior female handball players of the different research periods; 2) to determine the players' tactical preferences for actions in different game situations.
Material and methods
Participants: the retrospective analysis was carried out on groups of junior female handball players identical in age and qualification, tested in different periods. Junior female handball players aged 15-16 years (1st sport category) participated in the research.
Twenty athletes tested in 2006 and 22 tested in 2010 were pupils of the Zaporozhye and Krivoy Rog sport schools; 18 athletes tested in 2016 were students of the Kherson and Brovary higher physical culture colleges. The studies were conducted at leading handball schools that successfully prepare athletes for masters' teams and the combined teams of Ukraine. All participants gave their consent to take part in the research.
Organization of the research: the junior female handball players were tested with the computer program «Handball skills» [13]. The program is based on two tests of handball players' tactical thinking, developed with the help of a virtual board for presenting schemes of game situations of different complexity. The first test, «Balltest» [3], consisted of four blocks: tactical thinking in attack, tactical thinking in defense, situational thinking in attack, and situational thinking in defense. Each block contained 100 schemes with solution variants that had been positively assessed by experts. During testing, 15 game situations chosen at random by the computer were displayed; each scheme was shown for 7.33 s for analysis and decision-making. For each block we determined the coefficient of thinking and the mean time of correct decision-making, and calculated the effectiveness of thinking. The method for determining tactical thinking had been experimentally tested on handball, basketball, and football players of different ages; its informative value and reliability have been demonstrated in other studies [1,10,13]. The second test [12] drew on 400 schemes from «Balltest» and consisted of three blocks: situations in the left, right, and central parts of the court. Thirty schemes of game situations chosen at random by the computer were displayed (10 situations for each zone). On the basis of the coefficient and mean time of correct decision-making, we determined the territorial priority of the players' tactical thinking [12]. In creating the «Handball skills» program, tactical thinking indicators and the main factors that most strongly influence players' mental actions in different game situations were taken into account, namely body parameters and the speed and accuracy of throws. For this reason the formulas of the «Handball skills» program include the following indicators: tactical thinking; body length; speed over a 28 m run; and the accuracy and speed of four throws from a distance of 7 m at 40 x 40 cm squares on a special screen. At the end of the experiment we obtained information that made it possible to determine the territorial and tactical preferences of the junior female handball players. Statistical analysis: all experimental data were processed with Excel.
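The paper reports, per block, a coefficient of thinking, the mean time of correct decision-making, and an effectiveness of thinking, but does not state the exact formulas. The Python sketch below is therefore only a hedged illustration: it assumes the coefficient is the share of correctly solved schemes and that effectiveness relates that share to the mean time of the correct decisions; both assumptions, as well as the trial values, are the editor's and not taken from the source.

```python
# Minimal sketch of per-block scoring for a Balltest-style session.
# Assumptions (not specified in the paper): the coefficient of thinking is
# the share of correctly solved schemes, and effectiveness divides that
# share by the mean time of the correct decisions.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Trial:
    correct: bool
    decision_time_s: float  # time used within the 7.33 s display window

def score_block(trials: list[Trial]) -> dict:
    correct_times = [t.decision_time_s for t in trials if t.correct]
    coefficient = len(correct_times) / len(trials)      # share of correct decisions
    mean_time = mean(correct_times) if correct_times else float("nan")
    effectiveness = coefficient / mean_time if correct_times else 0.0  # assumed definition
    return {"coefficient": coefficient,
            "mean_time_s": mean_time,
            "effectiveness": effectiveness}

# Example: one simulated block of 15 schemes (values are illustrative only).
block = [Trial(True, 3.1), Trial(False, 7.33), Trial(True, 4.0),
         Trial(True, 2.7), Trial(False, 7.33), Trial(True, 5.2),
         Trial(True, 3.9), Trial(True, 4.4), Trial(False, 7.33),
         Trial(True, 3.3), Trial(True, 2.9), Trial(True, 4.8),
         Trial(False, 7.33), Trial(True, 3.6), Trial(True, 4.1)]
print(score_block(block))
```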
Results
Before solving a tactical task, athletes create their own mental plan of actions. Mental planning is interconnected with tactical thinking and with the player's potential to realize his or her ideas, and it forms the player's tactical priorities. Our method of determining tactical priorities is not intended for preparing a handball player for a game against a specific opponent; it provides information about the mental tactical schema of actions, which a coach can use effectively. The data on the junior female handball players' tactical priorities in the retrospective analysis, which are not connected with a game against a specific opponent, are presented in Table 4. There is no difference between the tested groups in territorial advantage in the left and right zones of the court; the players of 2016 were better in the central zone by 54%. In the group of 2006, no priorities by zone were found. The junior female handball players tested in 2016 demonstrated a higher quality of task solution in the central part of the court: by 37% compared with the left zone and by 48% compared with the right zone (see Table 5). In the tactical priorities of the 2016 players we found the following: the preference for actions in the left and right zones was lower by 15%, the territorial universality of attacking actions was lower by 10%, and the preconditions for attacks in the central zone were higher by 10% (see Table 6). The sportswomen of 2016 had lower indicators of the territorial universality of defensive actions, by 20%, and of defensive actions with stepping out, by 24%; they had higher indicators in the central zone, by 20%, and in supporting interaction, by 24%. Readiness for group actions was equal in all tested groups. In both tested groups we obtained the same indicators of the inclination to improvise in defense. The sportswomen of 2016 demonstrated a higher readiness for standard actions in defense, by 63%, and a lower inclination toward universality of actions, by 79%.
Discussion
The results of our research are consistent with the high demands placed on handball players' intellectual sphere by the strong competition between teams at the international level [5]. The data confirm the opinion [5] that, under conditions of strong competition, handball players must be able to perceive a large volume of different signals quickly. Our results confirm the importance of cognitive strategies for athletes [6] and show their presence in female handball players. The results obtained with the «Handball skills» program [13] demonstrate the tactical priorities of junior female handball players on the basis of tactical thinking, taking physical and technical parameters into account. Tactical thinking was studied with the help of a virtual board for the dynamic presentation of tactical tasks [3]. Other program models with a virtual board for presenting game situations differ from the «Balltest» method: slide tests [17] and video tests [41] for handball players, the video test «BasketballTest» for basketball players [1], and a video model for football players [18]. They present situations in the form of photographs or video fragments of real games and include the analysis of the game situation, the prediction of actions, and intuition. M. Raab and S. Laborde [41] point to the advantages of handball players' intuitive decisions in complex and unfamiliar situations. V.A. Tishchenko and A.A. Shipenko [11] are convinced of the significant influence of players' anticipation on the effectiveness of tactical actions. We think that intuition and anticipation should be excluded from the indicators of tactical thinking; the «Balltest» method therefore displays static schemes instead of fragments of real games. Comparison of the «Balltest» method with other programs showed their distinctions. P. Weigel, M. Raab, and R. Wollny [44] present the program model DEMATS (decision making in team sports). The authors of [21] developed the virtual football simulator CoPeFoot, which provides for the complex registration of decision-making elements. The random selection of players does not take into account the factor of emotional empathy, which can influence the adequacy of the decision made in phases with the ball; the «Balltest» method is intended for individual testing, which makes it possible to avoid the influence of emotional empathy. According to the data of Z. Certel, Z. Bahadir, and T.
Sönmez Gül [22], empathy with the current emotional state of another player is rather high in women's handball.
Periods of the research
The data we obtained on tactical thinking confirm the findings of other researchers [21,24] on the dynamic character of game situations over time. The qualitative indicators of tactical thinking point to changes in the mental planning of the players' actions. In 2006 and 2010, the junior female handball players successfully solved situational tasks based on individual and group actions with a simple choice of decision, independent of the game phase. In 2016 they were more successful at solving tasks in attack, independent of the complexity of the game situation. Other authors [5,9] note that highly effective attacking actions prevail over defense. I. T. Gasanov [2] and V. Tsyganok [16] point to changes in tactics and in positional attacks, where individual actions with a quick transition dominate. In the 2016 study we obtained indicators of high effectiveness of situational and tactical thinking in defense, which allows the junior female handball players to solve positional defensive tasks successfully. This agrees with the opinion of T. Debanne, V. Angel, and P. Fontayne [24] that coaches of junior athletes prefer a defensive strategy of tactical training; such a strategy can be reflected in the athletes' mental plans. Collective play in defense, with some moments of individual realization of the tactical task, creates difficulties for the opponent [20,24]. The study of the sensory components of tactical thinking showed that the speed of decision-making in complex game situations does not differ between the tested groups. The junior female handball players tested in 2016 were slower in solving simple tasks involving a small number of players. Other data [17] show that the speed of decision-making in team tasks is higher than the speed of deliberating about a decision. Here we can refer to Z. Certel, Z. Bahadir, and T. Sönmez Gül [22], who noted that an alert style of decision-making is characteristic of young athletes; this style includes carefulness and deliberation in assessing complex situations. For the junior handball players of 2016 it was difficult to cope with the time limit for performing the 7 m throws: they showed low accuracy and large time losses. A short time for information processing negatively influences the actions of junior athletes and reduces their effectiveness [44]. The study of territorial priority in solving tactical tasks showed that a focus on the central zone is characteristic of all tested groups, but it is more pronounced in 2016. These data are confirmed by other research [7,23]: with the players' constant concentration in the center, the elements of their actions are perceived better there. In the tactical priorities of the junior female handball players tested in 2016, we observed an inclination toward the successful solution of tasks in the center of the court, independent of the game phase. As L. Červar [23] notes, a dynamic game requires quick tactical responses, but the players' cognitive potential [39,44] does not allow them to solve tasks successfully in complex and poorly known zones of the court. N. Rogulj, V. Srhoj, and L. Srhoj [43] note that players' limited physical or technical resources adjust their functioning, and this influences their thinking stereotype [44]. In the tactical priorities of the players tested in 2016 there is a readiness to realize standard schemes in defense. To improvise [29], one must be ready for variable actions, which requires of the athlete the ability to think spatially.
In junior female handball players, cognitive and emotional uncertainty arises from the high responsibility carried in defense [22]. Playing according to standard tactical schemes therefore makes it possible to follow the coach's tactical plan [24] and relieves the pressure of decision-making [29].
Conclusions
We determined the tactical priorities of junior female handball players in different research periods from their tactical thinking indicators, taking physical potential and throwing fitness into account. We found that the players tested in 2016 fall behind the players of 2006 and 2010 in the universality of their tactical preferences. The players tested in 2016 have a stronger inclination to solve tactical tasks in the central zone of the court, in both attack and defense, and to solve the tactical tasks of positional defense; they are also ready to act in support. In the junior female handball players of 2006 and 2010 we observed the ability to solve tactical tasks successfully regardless of the zone of the court; they are ready to defend with stepping out and in support, and to improvise in attack.
Do earthworms (D. veneta) influence plant-available water in technogenic soil-like substrate from bricks and compost? Topsoil and peat are often taken from intact rural ecosystems to supply the urban demand for fertile soils and soil-like substrates. One way of reducing this exploitation is to recycle suitable urban wastes to produce Technosols and technogenic soil-like substrates. In this study, we investigate the role earthworms can play in impacting the hydraulic properties of such a soil-like substrate. In a 4-month microcosm experiment, the influence of the earthworm species D. veneta on the hydraulic properties of brick-compost mixture was examined. Of the ten boxes filled with ca. 11 dm3 of ground bricks (0.7 cm3 cm−3) and green waste compost (0.3 cm3 cm−3), five contained earthworms (W-boxes) and the remaining five were used as controls (C-boxes). The substrate was periodically irrigated and the weight of the boxes and of the drained water was monitored. At the same time, images were taken from the front of the boxes to quantify the activity of the earthworms by image analysis and soil aggregation was studied with micrographs. Before and after the experiment, water retention curves were determined from disturbed samples of the substrate using the simplified evaporation method. After 6 weeks, differences between the C- and the W-boxes were evident. Micrographs showed brick-compost aggregates only for the substrates processed by earthworms. The earthworm activity leads to reduced evaporation and an increased water content in the respective microcosms. The effect persists even after disturbing the substrate. The proportion of plant-available soil water is about 0.02 cm3 cm−3 higher for the substrate processed by earthworms (0.250 ± 0.009 cm3 cm−3) compared with the control (0.230 ± 0.008 cm3 cm−3). This study shows that earthworms are capable of ingesting and processing crushed bricks together with compost. The earthworms produced aggregates which persisted after disturbance and had a positive influence on the water retention capacity of such a soil-like substrate constructed from waste. Introduction Cities and their growing population tend to utilize resources from urban surroundings or the countryside to satisfy their needs. High-quality topsoils, for example, are imported from the countryside to be used for urban greening (Cannavo et al. 2018;Deeb et al. 2016b;Rokia et al. 2014) or bog peat is used as main constituent of growing media (Schindler et al. 2016). Thereby, functioning rural ecosystems and landscapes are exploited and disturbed. Soils are a fundamental ecological resource in a city. They provide several ecosystem functions, such as infiltration, buffering, and cooling (Herrán Fernández et al. 2016). Soil itself is a biological habitat and hence is conducive to biodiversity (Dominati et al. 2010). Being the basis for biomass production, soils provide the habitat for plants which in turn provide multiple ecosystem functions, such as cycling of nutrients, water, and energy . Especially in cities, plants do not only supply a yield, but greenery helps to alleviate air pollution (Rawski 2019) and to mitigate the urban heat island effect by shading and evaporative cooling Price et al. 2015;Santamouris et al. 2018). Additionally, greenery provides recreational space and cools buildings thus having a direct influence on the well-being of urban dwellers (Buchin et al. 2016;Yilmaz et al. 2016). However, urban soils are often degraded, sealed, and contaminated (Abel et al. 
2015;Séré et al. 2008), limiting their ability to fulfill the necessary ecosystem services (Morel et al. 2015). Therefore, the remediation, reconstruction, amelioration, and finally the purpose-designed construction of Technosols (see WRB 2006) is a systemic approach to improve the sustainability of cities (Flores-Ramírez et al. 2018;Rokia et al. 2014). This should not lead to ecosystem degradation elsewhere. One way to spare the ecosystems of the countryside from the usurpation of the cities and still guarantee the ecosystem services provided by urban soils is to make use of another problem of continuous urbanization-the growing amounts of waste produced in cities (Cannavo et al. 2018;Deeb et al. 2016b). Particular parts of the urban waste can be employed for the construction of Technosols or soil-like substrate. Construction and demolition waste or excavated soil material can serve as mineral component (Rokia et al. 2014). Green waste, compost, and sewage sludge are typical organic components (Deeb et al. 2016b). The variety of components and the means of mixing them in certain ratios can enable the design of soil-like substrates with properties that suit their application (Willaredt and Nehls 2020). Technosols and soillike substrates constructed from urban wastes have been studied regarding their soil physical properties, such as porosity, hydraulic conductivity, and plant-available soil water (PAW) (Deeb et al. 2016a;Jangorzo et al. 2013;Yilmaz et al. 2016), plant growth (Cannavo et al. 2018;Krawczyk et al. 2017), contaminant eluviation (Herrán Fernández et al. 2016;Séré et al. 2008), and further agronomic properties, such as nutrient availability (Rokia et al. 2014;Vidal-Beaudet et al. 2016). According to Herrán Fernández et al. (2016), construction and demolition waste, bio-stabilized material, and green waste can be used as growing media without negative influence on the environment. Soil-like substrates constructed from bricks and green waste have a high porosity, a high proportion of plant-available water (PAW), and high saturated hydraulic conductivities compared with natural soils (Blume and Runge 1978;Nehls et al. 2013;Yilmaz et al. 2016). The water retention and hydraulic conductivity of soils are determined by the soil texture and soil structure (Amezketa 1999;Vogel et al. 2006) as well as by the share of organic matter (Blume et al. 2016;Smagin et al. 2002). In many soils, the activity of soil organisms, such as earthworms, is one of the main processes for the creation of soil structure (VandenBygaart et al. 2000). Ingestion and digestion of organic and also mineral matter by earthworms leads to the formation of casts which are stable organo-mineral aggregates (Lavelle et al. 1997). These aggregates modify the micro-and mesoporosity of soils (Blouin et al. 2013) and are more stable than other aggregates in soils (Jangorzo et al. 2015). Furthermore, earthworms have an influence on the structure and the porosity of soils by their burrowing activities creating macropores whilst compressing the adjoining soil (Jangorzo et al. 2015;Kooistra and Pulleman 2010). These influences on the physical structure consequently affect the hydraulic properties (Blouin et al. 2013) and the water balance, i.e., infiltration, drainage, and evaporation, of the soil. According to Deeb et al. (2016a), the presence or absence of earthworms better explained the differences in the total moisture ratio of a constructed Technosol than differences in the ratio of the composition of the parent material. 
There are three different ecological types of earthworms, according to the classification by Bouché (1977). These three types, anecic, epigeic and endogeic, are not strictly separable and there are many earthworm species that cannot be allocated to one type but rather are intermediate types with characteristics from two or even from all three types (Dunger 1983). Endogeic and-to a lesser extent-epigeic earthworms enhance the diffuse infiltration of water into the topsoil (Ernst et al. 2009;Shuster et al. 2002;Van Schaik et al. 2016). Anecic earthworms create stable macropores which facilitate preferential flow and hence increase water infiltration and drainage (Larink 2008). Ernst et al. (2009) found a tendency of an endogeic and an anecic earthworm species to enhance the drying of the soil probably due to an enhanced evaporation while an epigeic species enhanced the water storage in the topsoil. Milleret et al. (2009) observed a compacting influence by an endogeic earthworm species leading to a decrease in the PAW. In general, bigger earthworms have the tendency to compact parts of the soil which increases the water retention capacity and leads to more preferential infiltration patterns, while smaller worms tend to be de-compacting. They tend to decrease water retention and homogenize infiltration (Blanchart et al. 1999). Yet, there are interaction processes between these groups that seem to be necessary to maintain and improve a natural soil structure (Blanchart et al. 1999). Jangorzo et al. (2015) found a combination of anecic and endogeic species to be best considering the stability of aggregates. Consequently, the influence on water storage and transfer also varies greatly depending on the earthworm species (Bastardie et al. 2003). These influences of earthworms on soil properties are intensely studied (e.g., Blanchart et al. 1999;Blouin et al. 2013;Edwards and Lofty 1972), the application of epigeic worms for vermicomposting is widely spread (Domínguez 2018;Edwards and Burrows 1988). Earthworm can be regarded as a resource that needs to be properly managed to enhance ecosystem services provided by soils (Lavelle et al. 2006). Yet the deliberate use of earthworms as engineers in the production of Technosols or soil-like substrates, to improve physical soil properties like the water retention capacity, is barely considered. For instance, it is not clear, if earthworms are able to ingest both compost and technically crushed, sharp brick particles. Therefore, the aim of this study is to examine the capacity of earthworms to change the structure and thus the hydraulic properties of a soil-like substrate constructed from urban wastes. Since such a substrate is not constructed in-situ, it is important to know if earthworm induced soil structures are stable enough to persist, even if the soil material is disturbed and transported to its final destination after the production process. In a laboratory experiment with ten microcosms (five with and five without earthworms) the following hypotheses are tested: (i) Earthworms (Dendrobaena veneta) are able to process ground bricks together with organic material to form aggregates. (ii) The deliberate treatment of a soil-like substrate constructed from urban waste with earthworms changes its water balance compared with a substrate not impacted by earthworms. (iii) The earthworm activity increases the PAW of the processed material and this change persists even in disturbed samples. 
Substrates, earthworms, and experimental setup In order to evaluate the effect of earthworms on the soil hydraulic properties of a substrate from bricks and organic waste, an experimental setup with standardized microcosms with and without earthworms has been chosen. The substrate used in this experiment consists of a mix of ground bricks (GB) sieved to pass 2 mm and green waste compost (GWC) sieved to pass 5 mm. Both materials were purchased from a local composting company (Galafa GmbH, Falkensee, Germany). Based on the experience by Deeb et al. (2016a), a ratio of 0.7 cm 3 cm −3 GB and 0.3 cm 3 cm −3 GWC was chosen which equals a dry weight ratio of 0.82 g g −1 GB and 0.18 g g −1 GWC (see Table 1 for a characterization of the materials). The rather high fraction of GWC was chosen to ensure sufficient feed for a high number of earthworms over the total experimental period. In order to guarantee the same soil texture for each of the replicates, the GB were sieved into four fractions: coarse sand (0.63-2 mm), medium sand (200-630 μm), fine sand (63-200 μm), and silt/clay (< 63 μm). Then, these fractions were mixed again in the same original mixing ratio for all replicates. The GB and GWC portions were slightly moistened, homogenously mixed with an electrical stirrer (Collomix Xo4 with WK 120), and then filled into microcosm boxes. Ten PE-boxes each with a rectangular base area of 21.6 cm × 26.4 cm and one acrylic glass side wall were used as microcosms ( Fig. 1). The acrylic glass was installed in order to observe the earthworm activity. Between the observations it was carefully covered with black cardboard as earthworms flee light. The shading permits earthworm activity close to the acrylic glass front even when the light in the laboratory was on. To allow evaporation while keeping the worms inside the boxes, the lids were prepared with mesh-covered holes. The boxes were installed in a slightly tilted position and a fiber glass wick (60 cm hanging water column) was attached in their lower rim to drain any stagnant water. The ends of the plastic-foil coated wicks were inserted in glass bottles to collect drained water. A total dry weight of 9.67 kg of the mixture was finally filled into the boxes and compacted to achieve a height of 19 cm (i.e., a bulk density of 0.90 g cm −3 ). The experiment was set up in a cooling chamber at 20°C for 140 days. Five boxes (W1-5) were equipped with earthworms, the other five boxes (C1-5) served as control. The earthworms placed in the microcosms were chosen based on the following criteria: (i) they should be active under the given laboratory conditions, (ii) they need to burrow in the topsoil layer, and (iii) they must be available. Dendrobaena veneta was hence chosen as an epi-endogeic species, which usually lives and feeds in the litter layer (epigeic part), but also burrows vertically and horizontally up to 0.3 to 0.5 m deep in the topsoil while it consumes organic matter that is incorporated into the soil (endogeic part) (Dunger 1983;Felten and Emmerling 2009). D. veneta prefers temperatures of 15-25°C and tolerates a wide moisture range (Domínguez and Edwards 2011;Edwards and Bohlen 1996). As D. veneta is commonly used as bait worm, the earthworms were bought from a fishing supply shop. Before introducing the earthworms into the microcosms, they were washed with tap water and kept in wet paper towels for about 24 h for intestinal voiding. 
Then they were washed again, dried cautiously with paper towels, weighed, and divided into five portions of 50 individuals of approximately the same weight (Table 2). Earthworms live under aerobic and moist conditions. Thus, the containers were initially irrigated to a water content of 0.29 cm3 cm−3, slightly more than the water content at field capacity (FC, pF 1.8). The boxes were regularly weighed and subsequently irrigated to this water content to compensate for evaporation and drainage losses: twice a week until day 52, then once a week until day 108. Finally, starting on day 115, the boxes were left to dry out for five weeks and were still monitored weekly to observe how the earthworm activity influenced the water balance under drier conditions. Despite the effort to make closed boxes with meshes on the openings, during the first 14 days a total of 30 earthworms managed to escape and were found outside the boxes. Unfortunately, it could not be traced from which boxes these individuals escaped. From day 14 on, no further earthworms were found outside the boxes, as the cooling chamber was then continuously illuminated. The illumination led to a higher heat load, which increased the number of cooling cycles and thereby decreased the relative air humidity. One earthworm was seen in control box C2. Between days 108 and 111, the climatic chamber was accidentally switched off, leading to a rise in air temperature to 30°C on the third day.
Monitoring the water balance and earthworm activity during the experiment
Before each irrigation, all the boxes and the drained water were weighed (Sartorius Signum® 1 balance). The box weights were further monitored weekly after the irrigation was stopped. Each time the boxes were weighed, images of the front of the microcosms were taken in order to observe the effects of the earthworms, e.g., aggregation and structuring processes. This was done using a flatbed scanner (EPSON Perfection 2480) with a resolution of 1200 dpi, which was mounted on the acrylic glass side of the microcosm boxes.
Sample preparation after the experiment
After the end of the experiment, the boxes were emptied onto a plastic sheet; the earthworms were collected by hand-sorting, counted, and weighed after intestinal voiding (see Table 2). In this study, the experimental setup in boxes is considered a production step. The further use of the obtained soil-like substrates as planting substrates requires dislocation. As we are interested in whether the impact of the earthworms remains relevant when the soil-like substrates are used as a plant habitat, disturbed material was used for the further investigations. The substrate was mixed, homogenized, and partitioned according to the standardized procedure of dividing by quartering (LABO 2002). This procedure was repeated until about 400 g of substrate were left as samples for the measurement of water retention curves, the determination of Ctot (using LOI in the muffle furnace according to the German norm DIN 19684-3 (2005)), and light microscopy. The samples for light microscopy were sieved to pass 1 mm. Micrographs of both fractions (< 1 mm, > 1 mm) were taken through a light microscope at two different magnifications (Nikon, SMZ-U).
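Referring back to the monitoring procedure above (boxes and drained water weighed before each irrigation), the following Python sketch illustrates how the water-balance terms could be derived from such weighings. It is a hedged reconstruction, not the authors' code: the tare mass and all example masses are invented, the box base area follows from the stated dimensions, and the bookkeeping assumes the two weighings bracket an interval without an intermediate irrigation.

```python
# Illustrative bookkeeping for water-balance terms between two weighings.
# Assumptions: mass_before_g is the box mass right after the previous
# irrigation, no water was added in between, and 1 g of water ~ 1 cm3.
BASE_AREA_CM2 = 21.6 * 26.4          # evaporating surface of one microcosm
DRY_SUBSTRATE_G = 9670.0             # dry mass of substrate per box
TARE_G = 1500.0                      # hypothetical mass of box, lid and wick

def water_balance(mass_before_g, mass_after_g, drained_g, days):
    """Return gravimetric water content, evaporation and seepage rates."""
    water_after_g = mass_after_g - TARE_G - DRY_SUBSTRATE_G
    theta_g = water_after_g / DRY_SUBSTRATE_G                 # g g-1
    # Box mass loss that did not leave as drainage must have evaporated.
    evaporated_g = (mass_before_g - mass_after_g) - drained_g
    # Divide by the base area (cm2) and convert cm of water to mm.
    evaporation_mm_d = evaporated_g / BASE_AREA_CM2 * 10.0 / days
    seepage_mm_d = drained_g / BASE_AREA_CM2 * 10.0 / days
    return theta_g, evaporation_mm_d, seepage_mm_d

theta_g, E, S = water_balance(14_200.0, 13_950.0, drained_g=60.0, days=3.5)
print(f"theta_g = {theta_g:.3f} g/g, E = {E:.2f} mm/d, S = {S:.2f} mm/d")
```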
Measurement of the water retention curve
The water retention curves of (i) the original substrate (T0) and of disturbed samples of (ii) the C substrates and (iii) the W substrates at the end of the experiment were determined using the simplified evaporation method (HYPROP® device, METER Group AG) (Peters and Durner 2015; Schindler 1980). For this purpose, one sample of the disturbed processed substrate from each box and five replicates of the original substrate were packed into 250 cm3 steel cylinders at a bulk density of 1.15 g cm−3. This density, 1.3 times higher than the bulk density in the experimental boxes, was necessary to be able to handle the samples after full saturation. In order to calculate the PAW, the volumetric water content at pF 4.2 was determined using a pressure plate extractor (Soilmoisture Equipment Corp., Santa Barbara, USA). Here, too, one sample of the disturbed processed substrate from each box and five replicates of the original substrate were analyzed.
Data processing and analyses
The images taken from the front of the boxes and the micrographs of the substrates were analyzed qualitatively, comparing earthworm activity (fraction of substrate affected by earthworm activity) and aggregate formation between the W and C treatments and among the W treatments. To analyze the influence of the earthworms on soil moisture, the different components of the water balance were measured or calculated right before each irrigation: the gravimetric water content θg (g g−1), the volume of seepage water S (mm d−1), and the daily evaporation E (mm d−1). The water content in the boxes is stated as gravimetric water content because the volume of the substrate in the microcosms was subject to change. The data of the water retention curve measurements with HYPROP were checked for inconsistencies. The PAW was calculated from the water contents of the samples at pF 1.8 and pF 4.2. In order to test the treatments for significant differences, the water balance data and the water retention measurements for W, C, and T0 were tested for normality of distribution with the Shapiro-Wilk test (R, version 3.3.1). If normality was verified, Student's t test was used to test for significant differences in the means of the variables between the treatments. Linear regression models were calculated to test for significant correlations between the water balance variables and time. Correlations and differences in means were considered significant at p < 0.05 (*), p < 0.01 (**), and p < 0.001 (***).
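Although the authors performed these tests in R, the workflow just described (PAW as the water-content difference between pF 1.8 and pF 4.2, a Shapiro-Wilk normality check, then Student's t test) can be sketched minimally in Python as below. The five replicate values per treatment are invented for illustration; they are merely chosen to lie near the reported means and are not the measured data.

```python
# Hedged sketch of the PAW calculation and significance testing described
# above; replicate values are hypothetical, only the workflow follows the text.
import numpy as np
from scipy.stats import shapiro, ttest_ind

def paw(theta_pf18, theta_pf42):
    """Plant-available water as the volumetric water content difference."""
    return theta_pf18 - theta_pf42

# Five hypothetical replicates per treatment (cm3 cm-3).
theta18_W = np.array([0.305, 0.298, 0.302, 0.300, 0.297])
theta42_W = np.array([0.052, 0.049, 0.051, 0.050, 0.048])
theta18_C = np.array([0.282, 0.279, 0.281, 0.278, 0.280])
theta42_C = np.array([0.050, 0.049, 0.051, 0.048, 0.050])

paw_W = paw(theta18_W, theta42_W)
paw_C = paw(theta18_C, theta42_C)

# Normality check before applying the t test, as in the described workflow.
for name, sample in [("W", paw_W), ("C", paw_C)]:
    stat, p = shapiro(sample)
    print(f"Shapiro-Wilk {name}: W = {stat:.3f}, p = {p:.3f}")

t, p = ttest_ind(paw_W, paw_C)
print(f"t = {t:.2f}, p = {p:.4f}, "
      f"mean difference = {paw_W.mean() - paw_C.mean():.3f} cm3/cm3")
```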
Impacts of earthworm activity
In the images taken of the W-boxes, an increase in the fraction of substrate impacted by the activity of the earthworms was observed over time (Fig. 2). The earthworms immediately started to burrow. The area of the burrow system peaked at the end of the second irrigation regime (day 108). The reduction of the irrigation frequency to once a week (after day 52) did not influence the earthworms' activity, whereas after irrigation was stopped (after day 108) no new burrows were detected. While some of the burrows seem to be stable over time, others, especially in the upper horizon, were refilled during subsequent earthworm activity. The boxes with earthworms show a similar alteration of the visible substrate surfaces over time; only in box W3 is the burrowing activity visible on the front wall distinctly higher. The changes in the irrigation scheme (after day 52 and day 108) are also visible in the changing color of the substrate of the C-boxes (Fig. 2). The activity of the earthworms could also be observed on the surface of the substrate. The surface of C stayed flat throughout the experiment; only its color changed, turning lighter, especially in the last phase of the experiment. The surfaces in the W-boxes changed visibly from the first day of the experiment: the burrowing activity and the casts of the earthworms created a rough, uneven surface. The deposition of casts on the surface decreased after the cooling chamber was permanently illuminated. The color of the W-substrate changed only slightly. The volume of the substrate decreased in all ten microcosms, primarily in the phase without irrigation, but not measurably, leading to gaps between the substrate and the inner walls of the boxes. A change in the structure of the substrate due to the earthworms' activity was also observed at the microscopic scale (Fig. 3). This change is especially visible in the micrographs of the particle size fraction > 1 mm, where aggregates combining organic and mineral components were detected only in the W-substrate. A slight decomposition of carbon was observed for both treatments; however, the differences in Ctot are not significant (n = 5, p > 0.05), neither between T0 (5.42 ± 0.36 g g−1) and the processed substrates nor between the C-substrate (5.08 ± 0.37 g g−1) and the W-substrate (5.10 ± 0.19 g g−1). At the end of the experiment, the number of earthworms collected from the boxes was clearly lower than the number initially introduced (Table 2).
Water balance of the microcosms during the experiment
Over the time span of 20 weeks, the gravimetric water content θg, the daily evaporation E, and the daily seepage water S were monitored (Fig. 4). As the results are based on five samples per treatment, all statistical results are to be interpreted with care. After an initial phase of two weeks, the 18 weeks starting on day 17 were analyzed. During the first phase (irrigation twice a week), seepage hardly changed with time; only a small decrease was measurable in the C-boxes (Fig. 4, bottom panel, R2 = 0.06, p < 0.05). After day 17, coinciding with the continuous illumination of the cooling chamber, there was a pronounced increase in evaporation (Fig. 4, middle panel). Thereafter, the evaporation, like the water content, stayed rather stable during the first phase (Fig. 4, top panel). The longer time span between measurements in the second phase (irrigation once a week) led to a decrease of seepage by a factor of about 1.7 compared with the higher irrigation frequency; during this phase seepage was stable. The magnitude of evaporation was not influenced by the irrigation frequency and stayed more or less the same from day 49 to the end of the second phase of the experiment. For the C-treatment, evaporation was about twice as high as seepage; for W this difference was either slightly smaller or, for box W3, non-existent. The water content at the time of measurement was lower in the second phase than in the first phase. In the third phase, as the soil dried out, water content and evaporation decreased steadily, while seepage dropped to zero a week after the irrigation was stopped. Differences in the three variables θg, E, and S between the W- and C-boxes are hardly evident throughout the first weeks of the experiment.
The mean and median evaporation for W were higher than for C between day 21 and day 38, and the water content was lower. On day 42 this relationship inverted abruptly: evaporation for C became higher than for W and the water content for C slightly lower than for W. Evaporation remained higher for C until the last day (141) of the experiment, when W and C reached the same value. On this last day, evaporation was especially reduced for boxes C1 and C3, the ones with the lowest water content of less than 0.13 g g−1. The box W3 differed from the other boxes: it had the highest seepage and the lowest evaporation of all ten microcosms from day 42 on. During the third phase of the experiment, when the irrigation was stopped and seepage dropped to zero, W3 was the box with the highest water content.
Water retention of the processed substrate
The differences observed for the water balance in the microcosms between W and C during the experiment persist when disturbed samples of the processed substrate are analyzed for water retention (Fig. 5) using the HYPROP device. There is a high variance of the volumetric water content θv among the replicates at low pF values. The variance of the water content for the original substrate (T0) is high over the whole range of pF values, whereas for C it gets close to zero for pF values > 1.6. At pF 1.8 the water content is about the same for T0 and C, while it is higher for W. The difference between C and W of almost 0.02 cm3 cm−3 is highly significant (n = 5, p < 0.01). Such a difference can be observed over the whole pF range covered by the HYPROP measurements (pF 1.5-2.8). At higher pF values of this range, differences between W and T0 become significant (n = 5, p < 0.05). At pF 4.2 the water content was determined with the pressure plate extractor and shows almost the same values for all three treatments. In sum, the mean volume of plant-available water of the W-substrate is 0.015 and 0.02 cm3 cm−3 higher than that of T0 and C, respectively (Fig. 6). Here again, the difference between the W- and the C-substrate is highly significant (n = 5, p < 0.01). The variance of the PAW of T0 is again higher than for the processed substrates.
Fig. 3 Micrographs of a brick-compost mixture processed by earthworms (W3) and from a control box (C2).
Discussion
The earthworm D. veneta is able to process urban wastes like ground bricks. The micrographs of the aggregates show that the earthworms incorporate not only the organic material of the compost but also the bricks. As the fraction impacted by earthworm activity increased during the phases with irrigation and as young earthworms occurred, the composition of the substrate seems to provide an acceptable habitat for the earthworms as long as tolerable humidity and temperature are maintained. It cannot be stated for sure at which point in the experimental period the number of earthworms in the boxes was reduced. The escape of some earthworms from the boxes during the first 2 weeks of the experiment is not necessarily due to the quality of the substrate: earthworms are very active in the dark, and especially epigeic worms are known to disappear from experimental units (Chatelain and Mathieu 2017; Fründ et al. 2010; Wurst et al. 2008). With the illumination of the cooling chamber, the escaping could hence be stopped. However, the continuous illumination had further side effects. Since the relative air humidity was decreased by the higher heat load and the resulting higher number of cooling cycles, the evaporation from the boxes was increased. Accordingly, a decreasing seepage during the first phase of the experiment is observable for both the W- and the C-boxes. From this time onwards, the earthworms stayed within the substrate and were less active on its surface. This could have influenced the impact of the earthworms on evaporation, since the transport of wetter material from deeper layers to the surface was reduced and there were fewer macropores enlarging the evaporative surface. Therefore, we discuss all the results starting from day 17. We observed a compaction of the substrate in the microcosms. According to Jangorzo et al. (2013), Technosols compact due to gravity and rainfall or irrigation. Such a compaction leads to a reduction of macro- and mesopores in the relatively loosely packed substrate. A slight decrease of the filling height of the substrate in the boxes due to compaction, particularly in the last phase of the experiment, can in fact be noticed on the scans of the front of the microcosms. This change is, however, quite small, and owing to the high variability of the substrate surface in the W-boxes the accuracy of possible measurements of this compaction is too low to quantify it. There are statistically significant alterations of the water balance during the experiment, especially in the phase without further irrigation (after day 115). Until day 108, differences in water content and evaporation between the W- and C-boxes are not statistically significant. However, starting from day 42, the influence of the earthworm activity on the water content becomes more obvious, and consequently there is a trend of lower evaporation from the W-substrate compared with the C-boxes.
Fig. 4 Development of the moisture content (θg), the evaporation (E), and the seepage water (S) during the experiment in the control and the worm boxes. Boxplots should be interpreted with care since they are the result of only five measurements each. Differences in mean were considered significant at p < 0.05 (*) and p < 0.01 (**).
According to analyses by Smagin and Prusak (2008), earthworm casts have a higher water retention capacity than the surrounding soil throughout the WRC. This could be due to finer pores in the casts and a stronger adsorption of water resulting from the formation of organo-mineral aggregates. The reduced evaporation from the W-substrate could be caused by the earthworm burrows leading to a loss of connectivity of fine pores from the lower layers of the substrate to the surface (Kutílek and Nielsen 1994). Additionally, these burrows enhance the infiltration of irrigation water into deeper substrate layers (see below). On the other hand, such macropores may lead to higher evaporation from wet soils, as they increase the evaporative soil surface; however, the gas exchange between the substrate and the atmosphere was limited in the cooling chamber. Even though the differences between the W- and the C-boxes showed a systematic trend starting from day 42, the low number of replicates and the high variability between the boxes restricted statistical significance at the chosen 95% level. Additionally, during the phases with irrigation, the substrates were almost saturated with water, and evaporation rates were mainly driven by atmospheric conditions rather than limited by soil suction. Thus, the influence of the activity of the earthworms could not become effective, and consequently these differences become significant only during the drying phase.
In accordance a decreasing seepage during the first phase of the experiment is observable for both W-and C-boxes. From this time onwards, the earthworms stayed within the substrate and were less active on its surface. This could have influenced the impact of the earthworms on the evaporation since the transport of wetter material from deeper layers to the surface was reduced and there were less macropores enlarging the evaporative surface. Therefore, we discuss all the results starting from day 17. We observed a compaction of the substrate in the microcosms. According to Jangorzo et al. (2013), Technosols compact due to gravity and rainfall/irrigation. Such a compaction leads to a reduction of macro-and mesopores in the relatively loosely packed substrate. A slight decrease of the filling height of the substrate in the boxes due to compaction-particularly in the last phase of the experiment-can in fact be noticed on the scans from the front of the microcosms. This change is yet quite small and due to high variability in the substrate-surface in the W-boxes the accuracy of possible measurements of this compaction is too low to quantify it. There are statistically significant alterations regarding the water balance during the experiment especially in the phase without further irrigation (after day 115). Until day 108, differences in the water content and evaporation between W-and C-boxes are not statistically significant. However, starting from day 42 the influence of the earthworm activity on the water content becomes more obvious. Consequently, there is a trend of a lower evaporation from the W-substrate compared with the C-boxes. Fig. 4 Development of the moisture content ( g ), the evaporation (E), and the seepage water (S) during the experiment in the control and the worm boxes. Boxplots should be interpreted with care since they are the result of only five measurements each. Differences in mean were considered significant at p < 0.05 (*) and p < 0.01 (**) According to analyses by Smagin and Prusak (2008) earthworm casts have a higher water retention capacity than the surrounding soil throughout the WRC. This could be due to finer pores in the casts and a stronger adsorption of water due to the formation of organo-mineral aggregates. The reduced evaporation from the W-substrate could be caused by the burrows of the earthworms leading to a loss in connectivity of fine pores from the lower layers of the substrate to the surface (Kutílek and Nielsen 1994). Additionally, these burrows enhance the infiltration of irrigation water to deeper substrate layers (see below). On the other hand, these macropores may lead to a higher evaporation from wet soils as they increase the evaporative soil surface. However, the gas exchange between the substrate and atmosphere was limited in the cooling chamber. Even though the differences between the W-and the Cboxes showed a systematical trend starting from day 42, the low number of replicates and the high variability between the boxes restricted statistical significance at the chosen 95 %-level. Additionally, during the phases with irrigation, the substrates were almost saturated with water and evaporation rates were mainly driven by atmospheric conditions and not limited by soil suction. Thus, the influence of the activity of earthworms could not become effective. Consequently, these differences become significant during the drying phase. 
In the box with the highest earthworm activity, W3, the opposing influences of the earthworm activity on the water balance of the substrate are most pronounced: the seepage was the highest and the evaporation the lowest compared with all the other boxes. On the one hand, earthworms increase the macroporosity due to their burrowing activity and thus enhance the seepage, e.g., due to preferential flow. On the other hand, they form aggregates that retain the water more strongly and result in lower evaporation rates. In sum, the water content does not differ from that of the control boxes.

By taking disturbed samples of the substrate and relocating them, the structure created by the burrowing activity of the earthworms, i.e., the macropores, is destroyed. Yet, the aggregates formed by the ingestion of the earthworms persist (see Fig. 3). Since differences in the water retention characteristics between the substrate processed by earthworms and the control are also observable in the disturbed samples, we assume that the increase in retained water in these disturbed samples is caused by the produced aggregates. In order to verify whether the enhanced water retention capacity of the substrate of the W-boxes persists after transport and over time, the stability of the aggregates formed by earthworms could be measured. Deeb et al. (2017) reported that Aporrectodea caliginosa was responsible for the aggregate stability of processed Technosols consisting of excavated subsoil material and green waste compost.

Fig. 5 Water retention curves, i.e., volumetric water content (θv) against the pF, of packed cylinders with initial substrate (T0) as well as disturbed samples of the processed substrate from the control boxes (C) and the boxes with earthworms (W). The data for pF < 3 were obtained using the simplified evaporation method, for pF = 4.2 using a pressure plate extractor. The bold lines are the mean of each group. The shaded sections depict the standard deviation. The boxplots show the water retention capacities at the four displayed pF values in more detail. They should be interpreted with care since they are the result of only five measurements each. Differences in mean were considered significant at p < 0.05 (*) and p < 0.01 (**)

In this study, we thoroughly sorted the earthworms by hand from the substrate. In practical applications, it is likely that at least some earthworms and cocoons will be transported with the processed material. They could further process the substrate on site and continue to influence the soil structure. Such earthworm introduction could imply the risk of establishment of non-native species and interference with existing species (Craven et al. 2017).

Before such a substrate can be applied for urban greening, some additional points should be considered. The organic matter content of the substrate used in this experiment is quite low compared with commonly available commercial growing substrates. However, compared with natural soils, the Ctot concentration is rather high. Therefore, the risk of enhanced leaching of nutrients should be studied. In this study, we used only one mixing ratio of only two components with one earthworm species. It is well known that the original material as well as the earthworm species might have a large influence on aggregate formation and stability (Schrader and Zhang 1997). Other mixing ratios, e.g., with less organic material, other input materials, and other earthworm species could lead to different results.
Generally, such a soil-like substrate will be subject to a certain evolution once it is implemented as a planting substrate. Therefore, approaches aiming to design soil-like substrates with purpose-specific properties should be aware of this early pedogenesis. It should be taken into account that properties may improve but may also worsen.

Conclusions

In this study, the capacity of the earthworm D. veneta to change the hydraulic properties of a soil-like substrate made from urban waste was tested in a microcosm experiment for 4 months. With the images taken from the front of the microcosms, it could be observed that the earthworms are able to process the mixture of crushed bricks and compost. As long as the moisture conditions were adequate, the earthworms were active, burrowed through the substrate, and formed aggregates.

The evaporation in the microcosms with earthworms was reduced compared with the controls. This is most likely due to a higher water retention capacity of the earthworm casts. For the volume of seepage water, an effect of the earthworms could only be observed for the box with the highest earthworm activity. Here, there was strong burrowing activity and increased seepage compared with the other boxes, most likely due to preferential flow through the macropores.

The measurements of the water retention curves of the disturbed and translocated substrate reveal a persistence of the earthworms' influence on the water content observed during the experiment. At field capacity, the water content in the substrates processed by earthworms is higher than for the control and the initial substrate. Differences in the water content at the permanent wilting point are not as distinct. This results in a proportion of PAW that is on average 8.6% higher for the substrate treated with earthworms than for the control. While the macropores are destroyed when relocating the substrate, so that preferential flow is no longer a relevant factor, the structure of the casts with their higher water retention capacity persists.

Authors' contributions All authors contributed to the study conception and design. Material preparation, data collection, and analysis were performed by Susanne Ulrich, Moreen Willaredt, Thomas Nehls, and Loes van Schaik. The first draft of the manuscript was written by Susanne Ulrich and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.

Fig. 6 Plant-available soil water (PAW) for the initial substrate (T0) and the processed substrate of the control boxes (C) and the boxes with earthworms (W). Boxplots should be interpreted with care since they are the result of only five measurements each. Differences in mean were considered significant at p < 0.01 (**)

Funding Open Access funding provided by Projekt DEAL. The authors are grateful for the support of the Technische Universität Berlin. MW acknowledges the Berlin International Graduate School in Model and Simulation Based Research (BIMoS) for support with a PhD fellowship. LvS acknowledges support by the DFG (scha1719/1-2). TN acknowledges support by the project Vertical Green 2.0 (BMBF grant no. 01LF1803A). Additionally, the ZELMI of the TUB, especially Irene Preuß, is gratefully acknowledged for the support and production of the micrographs.
The importance of protein domain mutations in cancer therapy

Cancer is a complex disease that is caused by multiple genetic factors. Researchers have been studying protein domain mutations to understand how they affect the progression and treatment of cancer. These mutations can significantly impact the development and spread of cancer by changing protein structure, function, and signalling pathways. As a result, there is growing interest in how these mutations can be used as prognostic indicators in cancer. Recent studies have shown that protein domain mutations can provide valuable information about the severity of the disease and the patient's response to treatment. They may also be used to predict response and resistance to targeted therapy in cancer treatment. The clinical implications of protein domain mutations in cancer are significant, and they are regarded as essential biomarkers in oncology. However, additional techniques and approaches are required to characterize changes in protein domains and predict their functional effects. Machine learning and other computational tools offer promising solutions to this challenge, enabling the prediction of the impact of mutations on protein structure and function. Such predictions can aid in the clinical interpretation of genetic information. Furthermore, the development of genome editing tools like CRISPR/Cas9 has made it possible to validate the functional significance of mutants more efficiently and accurately. In conclusion, protein domain mutations hold great promise as prognostic and predictive biomarkers in cancer. Overall, considerable research is still needed to better define genetic and molecular heterogeneity and to resolve the remaining challenges so that their full potential can be realized.
Introduction

Cancer is a complex disease characterized by uncontrolled growth and division of cells in the body. The disease is caused by genetic and environmental factors which lead to the accumulation of mutations [1]. It is considered a leading cause of death in many countries. In the United States alone, it is predicted that there will be 1,958,310 new cancer cases and 609,820 cancer-related deaths in 2023 [2]. In India, cancer cases are estimated to increase by 12.8 percent in 2025 as compared to 2020 [3]. Cancer treatment typically involves surgery, radiotherapy, chemotherapy, endocrine therapy, targeted therapy, or a combination thereof [4]. However, these conventional treatments are often ineffective due to the development of multidrug resistance and severe side effects [5]. Furthermore, drugs currently available for cancer treatment have drawbacks such as poor selectivity, limited pharmacodynamic properties, and poor oral bioavailability [6]. Therefore, there is a critical need for novel treatment strategies to address the limitations of current cancer therapies. Developing effective diagnostic and treatment methods for cancer is an area of great interest and challenge [7]. Numerous studies have been conducted to identify mutations in genes that play a role in the development of cancer [8]. The primary role of mutations in oncogenes has been widely recognized, which has led to extensive screening of clinical samples and the identification of thousands of cancer-associated gene mutations using next-generation sequencing [9,10]. These mutated genes serve as biomarkers that help in determining tumor characteristics and subsequently guide the selection of appropriate cancer treatment [11,12]. Although scientists have made significant progress in identifying genes that are closely linked to cancer, proteomic analyses lag behind genomic (transcriptomic) analyses [13]. Comprehensive global proteomics analysis has revealed that only a small percentage of single nucleotide variants detected by DNA and RNA sequencing appear as single amino acid variants [14,15]. It is crucial to conduct proteomic analysis of mutations, as it can help develop cancer biomarkers and identify new pharmacological targets for effective cancer therapy [16]. It is possible that changing the order of amino acids in a protein may not lead to a loss of protein function, but it can alter the protein's structure [17]. Recently, computational predictive models have been developed to assist experimental studies in drug discovery, specifically virtual screening based on drug-target protein interaction (DTI) [18]. There are two types of virtual screening: ligand-based and structure-based [19]. These methods come with limitations, such as difficulty in discovering new scaffolds and in obtaining the 3D structures of the proteins [20]. To address these limitations in drug discovery, new computational approaches have been developed incorporating machine learning and network analysis [21]. Mutations in protein domains can hinder the normal functioning of proteins [22,23]. Therefore, targeting the functional domains can be an effective therapeutic approach for cancer [24]. Many computational predictive models have been developed based on chemogenomic drug/compound-target protein domain interaction predictive systems using publicly available databases such as ChEMBL, PubChem, UniProtKB, and InterPro. One such tool developed recently is "DRUIDom" [25]. This approach associates drugs with protein domains based on their structural properties.
This makes it likely that other proteins containing the same mapped domain(s) will have the necessary structural properties to interact with the drug of interest [26].

Protein domains can be affected by mutations, which may disrupt their regular functions [27]. The gene-based approach used in computational structural studies does not consider the position of the mutation within the gene or the mutation's functional context. However, by examining the impact of mutations on specific sections of a protein, we can gain valuable insights, as illustrated in Fig. 1 [28-30]. Targeting protein domain mutations is a superior method because it is more precise [31]. This approach involves mapping mutation positions to specific domains that drive tumorigenesis. By doing so, targeted therapies can be selected based on the genetic alterations [26]. Targeting domain mutations can also offer valuable information about disease severity and modulate specific signalling pathways, leading to better outcomes with reduced toxicity [32]. Studying the effect of protein domain mutations on cancer prognosis has become increasingly important in recent years [33]. These mutations can provide valuable insights into the severity of the disease and its response to treatment. Mutations in specific domains such as the P53, PI3K, tyrosine kinase, zinc finger, and catalytic protein domains have been found to be significant in cancer [34-36]. For instance, P53 mutations, which are common in many cancers, can cause uncontrolled cell proliferation by undermining its tumor-suppressor activity. These mutations can lead to severe illness and therapeutic resistance [37]. PI3K mutations activate signalling pathways that promote cell survival and proliferation. They are common in breast, ovarian, and other malignancies, requiring specific treatments [38]. Mutations in the tyrosine kinase domain constitutively activate growth-promoting signals [39]. In leukemia and lung cancer, targeted tyrosine kinase inhibitors have transformed treatment [40]. Zinc finger domain mutations disrupt gene regulation, promoting cancer [41]. These mutations underscore the intricate nature of cancer genetics and the critical need for personalized treatments in the ongoing pursuit of better cancer therapies and outcomes [42]. Understanding their effects can inspire new treatments [43]. It is important to examine the role of protein domain mutations across various cancer types to develop targeted therapies [44], estimate the prognostic value of the disease, personalize therapies to specific mutations, identify the mutations that may confer resistance to cancer treatment, detect cancer development early, and provide critical information in cancer biology and in clinical trials [45]. Recent studies have shown that protein domain mutations can serve as prognostic indicators in various types of cancer [46]. For example, mutations in the telomerase reverse transcriptase (TERT) gene promoter, positioned −124 and −146 bp upstream from the ATG start site, increase TERT promoter activity by creating GGAA consensus binding sites for ETS transcription factors and increase the aggressiveness of glioblastoma (GBM) [47,48]. These mutations are independently associated with poor survival and disease relapse in GBM [49,50]. Another study evaluating 126 patients with non-small cell lung cancer (NSCLC) showed that targeting
the mutant KRAS G12C (G domain) with AMG510 (sotorasib) allowed direct pharmacological inhibition of KRAS p.G12C mutations and significantly increased the overall response rate [51,52]. Furthermore, novel most-potent-in-class natural inhibitors and selective inhibitors approved for clinical development could extend patient life and improve quality of life [53,54].

Current research in cancer is focused on studying mutations in cancer genomes to identify specific mutations in protein domains that may be associated with a poor prognosis [55]. Bioinformatics studies of mutations in protein domains in ovarian cancer also reveal therapeutic targets and prognostic indicators [56]. However, the use of protein domain mutations as prognostic indicators in cancer still faces challenges due to the complexity of the molecular pathways of cancer growth and the potential for mutational interactions. Despite these challenges, mutation patterns can still predict disease prognosis [57]. For instance, tumors with many alterations in DNA repair genes such as BRCA1 and BRCA2 in the ovarian cancer genome can act as biomarkers in predicting the disease [58]. It is worth noting that detecting rare or unusual mutations may require large samples and time-consuming genetic analysis.

This review aims to provide an overview of protein domains and their functions and to investigate the impact of mutations in protein domains on the onset and progression of cancer. It seeks to explore the underlying mechanisms of how these mutations affect cancer and to identify the protein domains that are most frequently mutated in cancer. The study will also discuss the clinical relevance of protein domain mutations as prognostic and predictive biomarkers in cancer. The rationale of the study is to increase our understanding of the role of protein domain mutations in cancer and to identify potential therapeutic targets and prognostic and predictive biomarkers for cancer patients. By examining the mechanisms underlying the impact of protein domain mutations on cancer and identifying the most frequently mutated protein domains, we can gain insights into the molecular pathways involved in cancer development and progression. This knowledge can be used to develop targeted therapies that address the underlying genetic mutations in cancer cells, leading to more effective treatments and improved patient outcomes. Additionally, we can identify new ways to personalize cancer treatment and enhance patient care by exploring the clinical relevance of protein domain mutations as prognostic and predictive biomarkers. Further, this review will explore the relationship between mutations in protein domains and cancer development. It covers the various effects of different types of mutations, the mechanisms that underlie their impact, and how protein domain mutations can act as biomarkers for predicting cancer prognosis and response to targeted therapies. Moreover, it highlights the most frequently mutated protein domains in cancer and their potential as therapeutic targets, and it discusses the challenges and future directions in using protein domain mutations as biomarkers.
Protein domain mutations and cancer

Mutations refer to alterations in the DNA sequence, which may involve single nucleotide changes or structural rearrangements [59]. Mutations play a significant role in the development and progression of cancer [60]. These genetic changes can affect oncogenes and tumor suppressors, causing uncontrolled cell growth by disrupting essential regulatory pathways [61]. Mutation analysis is crucial in identifying the genetic aberrations that contribute to malignancy [62]. Tumor heterogeneity occurs when mutations lead to the formation of distinct subclones within tumors; this presents a challenge for targeted therapies. Additionally, mutations contribute to therapeutic resistance, so ongoing research is necessary to overcome this challenge [63]. Precision medicine uses knowledge of specific mutations to offer targeted therapies for improved cancer management. This opens up the possibility of personalized treatment strategies in the evolving field of oncology [64].

Overview of protein domains and their functions

Domains, which are internal protein structures, play a crucial part in the assembly process. These protein domains are conserved across all species [49-52]. Many novel protein domains identified using computational methods and high-throughput sequencing have improved the understanding of the structure and function of proteins [52]. For example, the biological roles of certain protein domains, such as zinc finger domains, have been studied; it has been established that they are involved in ubiquitination and can also function as DNA-binding domains, demonstrating that domains play a crucial role [53,65]. Recent studies discovered that certain protein domains control transcription, shedding light on gene expression, DNA repair, and RNA processing [54,66-68]. In addition, new research has found protein domains that control protein synthesis. These domains have shed light on the intricate regulatory systems that control protein synthesis and may lead to the development of innovative cancer therapies [69,70]. Therefore, understanding the evolution of proteins in organisms requires a complete understanding of protein domains.

Protein domain mutations and their effects on cancer onset and progression

Changes in protein structure and function brought on by protein domain mutations may contribute to the onset and progression of cancer. Much research has investigated how different protein domain changes impact the development and spread of cancer [71-74]. This section discusses the most typical varieties of protein domain mutations.

Missense mutations

Missense mutations were the most prevalent form of mutation in protein domains after assessing the genomic data from over 8657 tumors representing 32 distinct cancer types [75-77]. Another study found that missense mutations make up about 88% of the gene variants in the COSMIC Catalogue of Somatic Mutations in Cancer [78]. Missense mutations happen when a single nucleotide alteration modifies the protein's amino acid sequence. The research also raises the possibility that various cancer types may have unique carcinogenesis pathways. For instance, the study discovered that lung and colorectal cancers typically had mutations in the KRAS gene, which codes for a protein essential for cell signaling, and that many of these changes were identified in the protein's GTPase domain [79].
Nonsense mutations

A nonsense mutation causes the coding sequence of an mRNA to contain a premature termination codon. This mutation stops translation and, in most cases, results in the synthesis of a truncated and dysfunctional protein, which in turn can cause cancer [80]. According to COSMIC mutational data, nonsense mutations account for 40% of the variance [78]. Nonsense mutations in tumor suppressor genes play a crucial role in cancer development. For instance, cancer prognosis is affected by nonsense mutations in tumor suppressor genes like TP53, RB1, and PTEN. Recent mutational landscape analyses found that nonsense mutations were detected in 11% of TP53, 25-34% of RB1, and 17.3% of PTEN samples [81]. Also, the COSMIC database (http://cancer.sanger.ac.uk/cosmic/) revealed that the proportion of nonsense mutations was 7.7% of 2129 TP53 mutations, 11.4% of 413 BRCA1 mutations, 15.8% of 3250 PTEN mutations, and 41.5% of 4216 APC mutations. However, nonsense mutations are less common overall. Most of them are found close to stop codons in cancer-associated genes, making them less harmful as judged by the nonsynonymous-to-synonymous ratio [80].

Frameshift mutations

Translational frameshifts are caused by the insertion or deletion of nucleotides that disrupt the triplet codon reading frame of gene expression, thereby disrupting the gene's function. Microsatellite instability (MSI) causes a large number of these insertion and deletion (INDEL) events in repetitive DNA sequences [82]. According to a COSMIC mutational data study, 23% of mutations are recorded as frameshifts [78]. Recent studies show frameshift mutations are the most prevalent form of mutation linked to colorectal cancer [83]. MSI-induced frameshift mutations account for 15% of colon cancer cases [84]. For instance, frameshift mutations (interstitial deletions) in the N-terminal region of APC's β-catenin binding domain lead to a functional gain of APC and play a role in the stimulation of the Wnt signalling pathway, which is crucial for the growth of colorectal cancer. TP53 frameshift mutations are seen in many human malignancies [85].

Fig. 2. Representation of mechanisms underlying the impact of protein domain mutations in cancer.

Splice site mutations

Splicing needs to be controlled to determine a cell's identity and developmental programs, and its dysregulation is directly connected to conditions like cancer [85]. It even disrupts the protein-protein interaction pathways that lead to tumor formation [86,87]. Several investigations have discovered a connection between malignancies and alternative splicing [88,89]. Spliceosome mutations in cancer have brought attention to the importance of the spliceosome pathway as a direct contributor to carcinogenesis and prompted questions regarding the molecular mechanisms and functional implications of these aberrations [90]. Recent decades have seen recurrent somatic alterations in various parts of the splicing machinery in human solid tumors. Since the HEAT (Huntingtin, elongation factor 3) domain of SF3B1 is the most significant component of the SF3B complex and a crucial element of spliceosomes [91], abnormalities in this domain can result in improper splicing and cancer [92].
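The mutation classes discussed above (synonymous, missense, nonsense, frameshift) follow mechanically from how a nucleotide change maps through the codon table. Below is a minimal, self-contained sketch of that classification logic; the tiny codon table covers only the codons used in the example, whereas a real implementation would use the full standard table.

```python
# Minimal sketch: classify a single-nucleotide substitution in a coding
# sequence as synonymous, missense, or nonsense; indels whose length is
# not a multiple of 3 shift the reading frame (frameshift).
# Partial codon table for illustration only; '*' marks a stop codon.
CODON = {"GGT": "G", "GTT": "V", "GAT": "D", "TGT": "C",
         "TAT": "Y", "TAA": "*", "TAG": "*", "TGA": "*"}

def classify_substitution(cds: str, pos: int, new_base: str) -> str:
    codon_i = pos // 3
    old_codon = cds[codon_i * 3: codon_i * 3 + 3]
    mutated = cds[:pos] + new_base + cds[pos + 1:]
    new_codon = mutated[codon_i * 3: codon_i * 3 + 3]
    old_aa, new_aa = CODON[old_codon], CODON[new_codon]
    if new_aa == "*":
        return "nonsense"
    return "synonymous" if old_aa == new_aa else "missense"

def classify_indel(length: int) -> str:
    return "in-frame indel" if length % 3 == 0 else "frameshift"

# KRAS-style example: GGT (Gly) -> GTT (Val) is a missense change,
# echoing the GTPase-domain substitutions discussed in the text.
print(classify_substitution("GGTGAT", 1, "T"))  # missense
print(classify_substitution("TATGAT", 2, "A"))  # TAT -> TAA: nonsense
print(classify_indel(2))                        # frameshift
```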
Altered protein-protein interactions

Protein domains frequently mediate protein-protein interactions, and mutations in these domains can change a protein's capacity to interact with other proteins [93]. For instance, the R175H mutation in the DNA-binding domain of p53 can block the interaction between RSL1D1 and p53 and disable downstream tumor-suppressing pathways, resulting in colorectal cancer [94,95]. Similarly, mutations in exons 18-21 of EGFR allow the kinase domain to interact abnormally with downstream signalling molecules through the PI3K/AKT and MAPK/RAF pathways, resulting in uncontrolled cell proliferation [96,97]. Recent studies have revealed further information about the effects of mutations in protein-protein interaction domains in cancer. For instance, multi-omics screening research shows that EGFR is a crucial modulator of cancer progression and that mutations in the LGR4 domain alter its downstream signalling [98]. Furthermore, it was shown that in breast cancer, mutations in the TAZ domain of the transcriptional coactivator YAP enhanced its capacity to interact with TEAD transcription factors and activate downstream oncogenic pathways [99] (Fig. 2).

Dysregulated signalling pathways

Many protein domains are involved in the signalling networks that control cellular activities such as proliferation, differentiation, and death. Mutations in these domains can result in constitutive activation of these pathways, promoting unchecked cell proliferation and the emergence of cancer. For instance, in chronic myeloid leukemia, mutations in the kinase domain of the BCR-ABL fusion protein result in constitutive activation of the downstream signalling cascade, which promotes unchecked cell growth and the development of cancer (Fig. 1). Recent studies have further supported the contribution of dysregulated signalling pathways to the emergence of cancer. For instance, mutations in RET's tyrosine kinase domain, which is fused in-frame to an NH2-terminal partner together with RET's TKD and COOH tail, encourage the activation of downstream oncogenic signalling pathways and lead to differentiated thyroid cancer (DTC) [100]. Similarly, it has been shown that the G1202R mutation in the kinase domain of the ALK receptor promotes non-small cell lung cancer by conferring resistance to ALK inhibitors (alectinib, crizotinib) while activating downstream signalling pathways [101].

Impaired protein degradation

Defective protein degradation is caused by the accumulation of mutations in oncoprotein domains, including those of TP53, AKT1, and IDH1, which are highlighted as candidate genes for post-translational modification (PTM)-related processes. These PTM mutations disrupt the control of protein degradation, causing a buildup of oncogenic proteins that aids the development of cancer [102]. For example, hypoxia-inducible factor 1 (HIF1) can accumulate as a result of mutations in the von Hippel-Lindau tumor suppressor protein, which has a ubiquitin ligase domain; such mutations can prevent it from targeting HIF1 for destruction, activating downstream tumour-promoting pathways [103]. Recent studies have provided further information on how defective protein degradation contributes to cancer development. For instance, mutations in the E3 ubiquitin ligase RNF43 were found to impair degradation and promote the activation of Wnt signalling pathways in colorectal cancer [104,105].
Alterations in protein stability and folding

Protein misfolding and aggregation are caused by changes in protein stability and folding. As several protein domains are involved in these processes, changes to these domains can affect protein stability and folding. Such aggregation can affect cellular functions and promote the growth of cancer. When the p53 tumor suppressor protein, which has a DNA-binding domain and a tetramerization domain, is altered, this can disrupt the protein's stability and folding, leading to the accumulation of misfolded p53 and the activation of oncogenic pathways [106]. In addition, the p.M1775R variant of the BRCA1 C-terminal domain (BRCT) modifies the interaction between BRCT and histone deacetylase in breast cancer. In this variant, R1835 rotates away from Q1811 to form a new salt bridge with E1836, and R1699 maintains the salt bridge with D1840 but no longer contacts it directly and instead coordinates an anion [107,108]. Recent studies have provided additional information on how protein stability and folding alterations contribute to cancer development. In esophageal squamous cell carcinoma (ESCC), mutations in the RRM domain of the RNA-binding protein TIA1 were discovered to alter its RNA-binding activity and promote the creation of stress granules, which are associated with the development of cancer [108,109]. In addition, it has been revealed that mutations in the TET2 protein, which contains a catalytic domain that regulates DNA methylation, impair protein folding and lead to an accumulation of incorrectly folded TET2, which promotes the development of myeloid malignancies [110].

Altered gene expression

Protein domain mutations can alter gene expression patterns, leading to abnormal protein function and cancer. Mutations in the bromodomain of the BET family of proteins can cause dysregulation of gene expression in cancer cells, promoting the development and spread of triple-negative breast cancer (TNBC) [111]. Moreover, the oncogene MYC, known as a master regulator of cell cycle entry and proliferative metabolism, promotes cell growth and proliferation. Mutations in the MYC basic helix-loop-helix leucine zipper (bHLH-LZ) domain, which is involved in dimerization and DNA binding, and in the MYC transactivation domain (TAD), which is responsible for target gene activation, can result in overexpression or stabilization of MYC, eventually causing uncontrolled cell growth and division, leading to cancer [112].
Most frequently mutated protein domains in cancer

Various biological processes trigger cancer development and spread, including mutations in crucial proteins and signalling networks. A key component of cancer genetics is the identification of commonly altered protein domains in cancer [39]. As mentioned, domains are essential for controlling cell proliferation, differentiation, and survival. These domains are susceptible to mutations that might cause uncontrolled cell proliferation, which is a hallmark of cancer [113]. In our earlier research, we discovered several protein domains that are frequently mutated in a range of malignancies, such as p53, PI3-kinase alpha (PI3Ka), nebulin (NEBL), and zf-H2C2_2 [114]. Identifying these commonly altered protein domains has provided crucial understanding of the molecular processes underpinning the development and spread of cancer. It has also revealed potential medicinal targets for the fight against cancer. The discovery of frequently mutated protein domains in cancer is just the start; comprehending the functional implications of these mutations for protein-protein interactions and signalling cascades is essential for the creation of efficient cancer treatments. It is also crucial to stress that not all mutations in these protein domains have functional consequences or influence the onset of cancer. Most of these mutations either occur in benign tumors or have no impact on cellular activity. The mutations that contribute to the emergence and spread of cancer must therefore be identified through further investigation.

Protein domains as therapeutic targets

Many proteins comprise numerous distinct domains and can serve as targets for therapeutic intervention [115]. Since protein domains are crucial to controlling a wide range of biological processes and are regarded as the structural and functional units of a protein, mutations in these domains may lead to cancer. Many independent studies reveal that clustering of these mutations at specific catalytic positions leads to cancer [116]. Hence, targeting these regions can be promising for creating novel therapeutics. Further, due to the conserved nature of domains among proteins, these can act as potential targets with a variety of therapeutic uses and a minimal likelihood of unintended side effects, creating room for personalized medications. For example, the most common mutations in the tyrosine kinase domain (TKD) of the FMS-like tyrosine kinase (FLT3) gene have been targeted by midostaurin [117-119], gilteritinib [120,121], and quizartinib [122,123]. Numerous examples of powerful drugs target specific protein regions (Table 1).

Table 1 List of protein domains with targeted drugs (* protein domains have been predicted as frequently mutated domains and reported in the DCMP database [58])

These cancer medications have transformed how cancer is treated and set the bar for developing domain-specific therapeutics. Protein domains are proteins' functional units, and proteins function through their constituent domains. The protein sequence is subject to mutations in natural evolution and somatic development, especially in cancer tissues. Accumulation of mutations in oncogenes and tumor suppressor genes causes cancer. Very little research has been conducted at the domain level in the last ten years. Among those studies, the domain-mutation landscape across 21 cancer types identified the domains with high mutational density in specific tissues (Table 2). In addition, domain-level studies help identify known and novel candidate
driver mutations. It has been shown that these domain instances play important roles in cell-cell communication and are thus essential for the cell's normal functioning. At the same time, NGS analysis is still far from routinely analyzing mutations at the protein domain level, and doing so would provide a broad spectrum of opportunities for researchers to uncover novel candidate mutations not only in cancer but also in other diseases.

Prognostic value of protein domain mutations in cancer patients

Protein domain mutations have diverse clinical implications in cancer, depending on the type of mutation, the protein domain affected, the signalling pathways involved, and the tumor microenvironment. Research suggests that protein domain mutations can potentially serve as prognostic or predictive biomarkers of treatment response. For example, mutations in the L2 and L3 zinc-binding domains and the DNA-binding domain of the TP53 gene are linked to poor patient prognosis and resistance to radiotherapy and chemotherapy in most cancers [116,124]. Similarly, mutations in the kinase and helical domains of the phosphatidylinositol-4,5-bisphosphate 3-kinase catalytic subunit alpha (PIK3CA) have been found in a variety of malignancies, including breast, ovarian, endometrial, and colorectal cancers, and have been linked to a poor prognosis as well as resistance to immunotherapy and targeted treatments [125]. Individuals with melanoma who have the BRAF kinase V600E mutation respond better to targeted therapies like vemurafenib and dabrafenib. A study of 675 individuals with advanced melanoma showed that those with the BRAF kinase V600E mutation had considerably longer progression-free survival and overall survival than those who did not have this mutation [126]. Mutations in the adenomatous polyposis coli (APC) tumor suppressor gene have been linked to a poor prognosis in colorectal cancer patients. A study of individuals with colorectal cancer showed that those with APC mutations had considerably lower overall survival than those without these mutations [127]. Another example is the cyclin-dependent kinase inhibitor 2A (CDKN2A) gene, in which mutations have been linked to a poor prognosis in melanoma. A study of melanoma patients indicated that individuals with CDKN2A (protein kinase domain) mutations had considerably lower overall survival than those who did not have these mutations [128,129].

Table 2 Reported potential protein domain targets that can act as therapeutic targets (* protein domains have been predicted as frequently mutated domains and reported in the DCMP database [58])

Predictive value of protein domain mutations for targeted therapy response and resistance

Cancer detection and diagnosis involve two distinct types of predictive values. These values can help anticipate cancer in its early stages and after remission and explain patient survival following a cancer diagnosis. They may additionally predict the disease's prognosis after it has been diagnosed. Protein domain mutations can provide predictive value for the responsiveness and resistance to targeted therapies in cancers. The effectiveness of many targeted medicines is based on their ability to block specific proteins or signalling pathways that are frequently disrupted in cancer patients. For instance, the L858R mutation in the kinase domain of EGFR has significant prognostic and predictive consequences in non-small cell lung cancer (NSCLC) [130,131].
In contrast, mutations at C797S and T790M confer resistance to EGFR tyrosine kinase inhibitors (TKIs) [132]. In addition, a study carried out by Chen et al. found that a mutation at R53Q/Q55Pfs*29 in the DUF758 domain of tumor necrosis factor alpha-induced protein 8-like 2 (TNFAIP8L2) is regarded as a possible predictive marker for tumors such as stomach cancer and colorectal cancer [133]. In addition, the V600E mutation in the ATP-binding domain of BRAF is regarded as a negative prognostic indicator. This mutation is also associated with resistance to traditional chemotherapeutics, which suggests using a personalized treatment approach in patients with BRAF-mutant metastatic colorectal cancer [134]. A comprehensive understanding of the specific mutation and its biological context is required for reliable prediction, and the research shows that protein domain mutations can be valuable predictors of therapeutic response and resistance in some circumstances.

Mutations in protein domains can act as biomarkers for both diagnosis and prognosis, as well as guide treatment decisions, identify therapeutic targets, and influence drug resistance (Fig. 3B). Certain genes, such as ABCB1 (multidrug resistance 1, MDR1), are associated with multidrug resistance in cancer. Mutations in the transmembrane domain (TMD), such as F72Y, F303Y, I306Y, F314Y, F336Y, and L339Y, as well as in the nucleotide-binding domain (NBD) (F480Y), can lead to drug resistance [135]. Additionally, the R231Q mutation in the DOT1 domain of the DOT1L gene can induce drug resistance in lung cancer [136].

Differentiating between driver and passenger mutations and managing tumour heterogeneity and resistant clonal populations have posed significant challenges. It is a well-established fact that proteomic alterations, including post-translational modifications, play a pivotal role in the development of cancer. It is noteworthy that proteomics technology has recently attained a level of depth and precision that is comparable to RNA sequencing (Fig. 3A). Promising mass spectrometry-based proteome research that opens the path for clinical use is emphasised. The potential of proteomics and phosphoproteomics to bridge this gap and enable the clinical application of omics analysis is a subject of debate. Comprehending the effects of these domain mutations is crucial for formulating individualised therapeutic approaches and enhancing clinical results in oncology.
Challenges and future directions in the use of protein domain mutations as prognostic and predictive biomarkers in cancer

Numerous genetic tests are available to detect biomarkers that are associated with particular types of cancer [137]. Three types of tests are used to diagnose cancer: cytogenetic tests, gene tests, and biochemical tests [138-140]. Cytogenetic tests involve examining chromosomes for abnormalities that could indicate the presence of cancer [141]; some cancers are characterized by specific changes in chromosomes [142]. Gene tests look for biomarkers like gene duplications, deletions, or mutations [9]. Tissue samples are usually taken for gene tests, but blood tests are becoming more common [143]. Biochemical tests are used to identify abnormal proteins that may be produced by mutated genes; these tests require a tissue sample to identify the proteins [144]. Biochemical tests can also be used to monitor how well a cancer is responding to treatment. Identification and characterization of mutations in cancer patients may therefore aid in directing personalized treatment options and enhancing patient recovery outcomes [145].

The use of protein domain mutations as cancer biomarkers is not without limitations, and further study is required to overcome these problems and access their full potential. The complexity and heterogeneity of cancer present a significant barrier to the application of protein domain alterations as biomarkers. Even within the same type of cancer, considerable heterogeneity in the genetic changes fuelling tumor growth can exist [146]. Various cancer types can exhibit specific genetic and molecular traits [147] (Table 3). As a result, detecting protein domain mutations relevant to a particular cancer subtype or patient population necessitates rigorous genomic and molecular profiling, which can be time-consuming and expensive.

Furthermore, several factors may affect therapy response and clinical outcomes, so even when a relevant mutation is found, it may not necessarily be predictive of these outcomes. Another difficulty is determining the functional significance of protein domain mutations. While some mutations are well known to confer resistance to specific medicines or enhance tumor growth, many alterations are uncharacterized or have equivocal effects on protein function [148]. Furthermore, the influence of a mutation varies depending on the cellular or environmental conditions. As a result, establishing the functional importance of a specific mutation requires extensive experimental validation, which can be difficult in a clinical environment [149,150].

Notwithstanding these limitations, there have been several notable achievements in using protein domain mutations as prognostic and predictive biomarkers in cancer. One such example is mutations in the protein kinase domain of the BRAF gene, implicated in the MAPK/ERK signalling pathway, which have been identified as predictive indicators of responsiveness to BRAF inhibitors [151]. Furthermore, mutations in the GTPase-encoding Ras domain of the KRAS gene, implicated in the RAS/RAF/MEK/ERK signalling pathway, are predictive indicators in various cancers, including lung and colorectal cancer, as discussed earlier [152]. Another potential area of research is using protein domain mutations as immunotherapy response indicators. Immunotherapy, which uses the immune system to fight cancer cells, has emerged as a promising treatment method for many types of cancer.
Nevertheless, not all patients respond equally to immunotherapy, so biomarkers that can predict treatment response and guide treatment decisions are required. Many studies have found protein domain alterations linked to immunotherapy response, including the V617F mutation in the kinase domain of JAK1/2 and the L22Q and W23S mutations in the DNA-binding domain of TP53, which influence genes involved in immune checkpoint inhibition (ICI), including PD-1, PD-L1, and CTLA-4 [153-155]. However, predicting individual responses remains intricate, demanding a comprehensive approach considering multiple biomarkers and the intricate dynamics of the tumor microenvironment. Further study is needed, however, to validate these indicators and find techniques for incorporating them into clinical practice. Aside from identifying new biomarkers, additional methods and approaches for characterizing protein domain changes and predicting their functional effects are required. Machine learning and other computational tools have shown potential in predicting the impact of mutations on protein structure and function, which could help in the clinical interpretation of genetic data. Furthermore, developments in genome editing tools such as CRISPR/Cas9 have allowed for more efficient and precise experimental validation of mutant functional significance.

Although protein domain mutations hold great promise as prognostic and predictive biomarkers, their successful implementation confronts obstacles relating to data acquisition, functional annotation, validation, and statistics. Future directions include technological advancements, functional studies, combination biomarkers, targeted therapies, real-world evidence, and interdisciplinary cooperation. Taking on these challenges and investigating these future avenues will pave the way for the effective use of protein domain mutations as biomarkers, resulting in enhanced and more individualised patient care.

Discussion

Cancer is a complex disease and thus requires advanced research techniques to identify its underlying causes. Mutation analysis plays a crucial role in identifying the genetic aberrations that lead to malignancy. The application of high-throughput sequencing techniques, such as next-generation sequencing (NGS), along with the implementation of hidden Markov models (e.g., HMMER), has markedly enhanced mutation analysis. This can offer valuable insights and provide crucial information, such as the genomic location of the mutations, the nucleotide alterations between wild-type and mutated genes, and the type of mutations [156]. Mutations in cancer occur throughout the gene, including non-coding regions, and ultimately impact various cellular processes [157]. Gene regulatory networks (GRNs), which characterize the relationships between genes in a cell, are rewired due to gene mutations. Studying such perturbation of GRNs together with mutational information becomes crucial during disease prognosis and treatment response, making mutations important biomarkers for specific phenotypic states [158].
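As a concrete illustration of the machine-learning direction mentioned above, the sketch below trains a toy classifier on hand-crafted per-mutation features. Everything here (feature names, labels, data) is invented for demonstration; real mutation-impact predictors are trained on curated variant databases and far richer feature sets.

```python
# Toy sketch of mutation-impact prediction; all data are invented.
# Features per mutation (assumed for illustration): conservation score of
# the mutated position, magnitude of the physicochemical change, and
# whether the position falls inside an annotated protein domain (1/0).
from sklearn.ensemble import RandomForestClassifier

X = [
    [0.95, 2.1, 1],   # highly conserved, large change, in-domain
    [0.20, 0.3, 0],   # poorly conserved, small change, outside domains
    [0.88, 1.7, 1],
    [0.35, 0.5, 0],
    [0.91, 2.4, 1],
    [0.15, 0.2, 0],
]
y = [1, 0, 1, 0, 1, 0]  # 1 = damaging, 0 = neutral (toy labels)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
# Query: a conserved in-domain substitution with a moderate property change.
print(clf.predict_proba([[0.90, 1.5, 1]])[0])  # [P(neutral), P(damaging)]
```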
On the other hand, protein domains are functional units of a protein that contribute to the protein's overall structure and functionality. Domain positions can be converted into genomic positions using an application programming interface (API), facilitating the mapping of mutations to protein domains [114]. These mutations can provide a more precise understanding of advanced GRNs and of specific protein functions that play a critical role in cancer development [159]. Targeting mutations in protein domains associated with different forms of cancer can improve personalized cancer treatment. Identifying these specific mutations is crucial for identifying potential therapeutic targets and revealing fundamental molecular pathways [114,160]. By incorporating mutation analysis and biomarkers into cancer management strategies, clinicians can develop targeted therapies designed to address the specific vulnerabilities arising from protein domain mutations. This not only enhances treatment efficacy but also minimizes collateral damage to healthy tissues, reducing the adverse effects commonly associated with conventional treatments [161].

Conclusion

Protein domain mutations have shown great promise as biomarkers in cancer; however, their implementation faces some challenges. One significant obstacle is the complexity and heterogeneity of cancer, which makes identifying relevant mutations difficult. Even when relevant mutations are identified, predicting their functional significance and impact on treatment response can be challenging. Additionally, the cost and time required for genomic and molecular profiling can limit the widespread use of protein domain mutations as biomarkers. Finally, determining the functional importance of specific protein domain mutations requires extensive experimental validation, which can be difficult to achieve in a clinical setting. Despite these challenges, protein domain mutations offer great potential for improving cancer diagnosis, prognosis, and treatment outcomes, and exploring this field further may ultimately lead to better outcomes for patients.

Fig. 1. Schematic representation of drugs targeting a similar domain of different mutated proteins.

Fig. 3. Clinical relevance of protein domain mutations in cancer. A. Flow chart for predicting protein domains as biomarkers from multi-omics. B. Protein domains can serve as diagnostic and prognostic biomarkers in cancer.

Table 3 List of carcinoma types caused by mutations in protein domains.
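The Discussion above describes mapping mutation positions onto protein domains. Below is a minimal sketch of that lookup; the domain coordinate ranges are illustrative placeholders, not authoritative Pfam/InterPro annotations.

```python
# Minimal sketch: locate which annotated domain (if any) contains a
# protein-level mutation position. Domain ranges are illustrative
# placeholders, not authoritative Pfam/InterPro coordinates.
DOMAINS = {
    "BRAF": [("RBD", 155, 227), ("Protein kinase", 457, 717)],
    "TP53": [("DNA-binding", 94, 312), ("Tetramerization", 323, 356)],
}

def map_mutation(gene: str, position: int) -> str:
    for name, start, end in DOMAINS.get(gene, []):
        if start <= position <= end:
            return f"{gene} p.{position} falls in the {name} domain"
    return f"{gene} p.{position} is outside annotated domains"

print(map_mutation("BRAF", 600))  # V600E -> kinase domain
print(map_mutation("TP53", 175))  # R175H -> DNA-binding domain
```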
Cronobacter sakazakii, Cronobacter malonaticus, and Cronobacter dublinensis Genotyping Based on CRISPR Locus Diversity

Cronobacter strains harboring CRISPR-Cas systems are important foodborne pathogens that cause serious neonatal infections. CRISPR typing is a new molecular subtyping method to track the sources of pathogenic bacterial outbreaks and shows promise for typing Cronobacter; however, this molecular typing procedure using a routine PCR method has not been established. Therefore, the purpose of this study was to establish such a methodology; 257 isolates of Cronobacter sakazakii, C. malonaticus, and C. dublinensis were used to verify the feasibility of the method. Results showed that 161 C. sakazakii strains could be divided into 129 CRISPR types (CTs), among which CT15 (n = 7) was the most prevalent CT, followed by CT6 (n = 4). Further, 65 C. malonaticus strains were divided into 42 CTs, and CT23 (n = 8) was the most prevalent, followed by CT2, CT3, and CT13 (n = 4). Finally, 31 C. dublinensis strains belonged to 31 CTs. There was also a relationship among CT, sequence type (ST), food type, and serotype. Compared to multi-locus sequence typing (MLST), this new molecular method has greater power to distinguish similar strains and showed better accordance with whole genome sequence typing (WGST). More importantly, some lineages were found to harbor conserved ancestral spacers ahead of their divergent specific spacer sequences; this can be exploited to infer the divergent evolution of Cronobacter and provides phylogenetic information reflecting common origins. Compared to WGST, the CRISPR typing method is simpler and more affordable; it could be used to identify the sources of Cronobacter food-borne outbreaks, from clinical cases to food sources and production sites.

Some molecular subtyping methods have been developed to study the epidemiology of pathogenic bacteria, including pulsed-field gel electrophoresis (PFGE) and multi-locus sequence typing (MLST), but both still have some disadvantages (Ogrodzki and Forsythe, 2017). PFGE is limited because a portion of Cronobacter strains cannot be typed due to intrinsic DNase activity; moreover, it does not provide the phylogenetic relationship between strains. MLST has been established for the Cronobacter genus based on seven housekeeping genes (Joseph et al., 2012b). A curated open-access MLST database has been established for the genus, with more than 2200 strains and associated metadata (https://pubmlst.org/cronobacter/). This database has enabled the recognition of certain Cronobacter clonal lineages within the genus as pathogenic variants, whereas others are primarily commensal organisms associated with the environment. The discriminatory power of MLST is weaker than that of whole genome sequence typing (WGST), and this method lacks information about historical ancestors. WGST is a new method for subtyping bacteria, but its high costs still limit its application (Deng et al., 2015).

The CRISPR-Cas system is an adaptive immune system of bacteria, providing sequence-specific, acquired defense against phages and plasmids (Barrangou, 2013; Westra et al., 2014). The evolution of CRISPR-Cas has led to the discovery of a diverse set of CRISPR-Cas systems, which can be classified into distinct classes, types, and subtypes through the analysis of signature protein families and features of cas loci architectures that unambiguously partition most CRISPR-Cas loci (Makarova et al., 2015; Shmakov et al., 2017).
The activity of a CRISPR locus occurs in three stages: adaptation, through the incorporation of new spacers into the existing repeat-spacer array; expression of the repeat-spacer array and the consequent processing of that array into CRISPR RNAs (crRNAs); and interference, during which invasive target sequences are recognized and destroyed by the crRNA-effector complex (Barrangou et al., 2007). As new spacers are added to one end of the CRISPR array, polarity exists; specifically, spacers at the leader-distal end are more ancient and are often shared among bacterial common ancestors. The acquisition, loss, and duplication of spacers have made CRISPR arrays the fastest evolving loci in bacteria (Paez-Espino et al., 2013; Shariat and Dudley, 2014).

The first application of CRISPR loci in bacterial genotyping was spacer-oligonucleotide typing (or "spoligotyping") of Mycobacterium tuberculosis strains (Groenen et al., 1993; Streicher et al., 2007). Its principle is PCR amplification of the CRISPR array with labeled primers that recognize the direct repeat sequences, followed by hybridization of the PCR products to a membrane containing probes bearing spacer DNA sequences (Streicher et al., 2007). The "next-generation" microbead-spoligotyping approach was an assay termed CRISPOL (for "CRISPR polymorphism") applied to Salmonella (Fabre et al., 2012). The first application of sequence-based CRISPR typing was to the group A Streptococcus (GAS) M1 serotype (Hoe et al., 1999). Considering the temporal organization of spacers, the sequencing of CRISPR arrays has been an extremely useful tool to genotype bacteria like Yersinia species, E. coli, and Salmonella enterica (Cui et al., 2008; Fricke et al., 2011; Yin et al., 2013; Li et al., 2014; Bugarel et al., 2018), and it has also been used to investigate bacterial diversity based on metagenomic data (Berg Miller et al., 2012; Sun et al., 2016). Recently, some useful tools to extract spacers and visualize the spacer content with color schemes were developed (Biswas et al., 2016; Couvin et al., 2018; Dion et al., 2018; Nethery and Barrangou, 2019).

In previous studies, six CRISPR arrays were detected in conserved regions of the Cronobacter genomes; among these, CRISPR1 and CRISPR2 neighbor the I-E type "complete" cas gene cluster, whereas CRISPR3 and CRISPR6 integrate with the I-F type "complete" cas gene cluster, comprising subtype I-E and I-F CRISPR-Cas systems, respectively. The two CRISPR-Cas systems (subtypes I-E and I-F) were found only in C. sakazakii, C. malonaticus, and C. dublinensis isolates, specifically. Unlike subtype I-E, which was commonly detected among Cronobacter strains, subtype I-F was found to be significantly more prevalent in the plant-associated species C. dublinensis than in the human virulence-related species C. sakazakii and C. malonaticus. However, C. condimenti lacked an intact CRISPR-Cas system (Zeng et al., 2017, 2018b). At the same time, significantly higher CRISPR activity was also observed in the plant-associated species C. dublinensis than in the virulence-related species C. sakazakii and C. malonaticus (Zeng et al., 2018b). Similar CRISPR array spacers have rarely been detected among species, indicating intensive changes through adaptive acquisition and loss. Thus, differentiated CRISPR activity appears to be the product of environmental selective pressure and might contribute to the bidirectional divergence and speciation of Cronobacter (Zeng et al., 2018b).
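Because new spacers enter at the leader end, two related arrays written from leader to trailer share their ancestral spacers as a common suffix. Below is a minimal sketch of reading that polarity from spacer-ID lists; the arrays are invented toy data, not spacers from the study.

```python
# Minimal sketch: shared ancestral spacers of two CRISPR arrays.
# Arrays are listed leader -> trailer, so the oldest (ancestral) spacers
# sit at the end; the shared history is the longest common suffix.
def shared_ancestral(a: list[int], b: list[int]) -> list[int]:
    shared = []
    for x, y in zip(reversed(a), reversed(b)):
        if x != y:
            break
        shared.append(x)
    return shared[::-1]

strain_1 = [9, 8, 5, 3, 2, 1]   # toy spacer IDs, newest first
strain_2 = [7, 6, 3, 2, 1]
print(shared_ancestral(strain_1, strain_2))  # [3, 2, 1]
```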
CRISPR arrays therefore promise a typing method with advantages over MLST (Forsythe, 2016, 2017; Zeng et al., 2018b). However, the identification of CRISPR arrays in Cronobacter has so far relied on whole genome sequences obtained by next-generation sequencing, which is costly. It is therefore necessary to establish this new molecular subtyping method using routine PCR and to define a nomenclature system. Among the six types of CRISPR arrays, CRISPR1 and CRISPR2 were found in almost all Cronobacter strains, CRISPR3 and CRISPR6 were also preserved in many Cronobacter strains, and all four show high diversity among different isolates (Zeng et al., 2018b). In contrast, CRISPR4 and CRISPR5 are not suitable for genotyping, as they were found in only a few C. sakazakii isolates and lack spacer diversity (Zeng et al., 2017). The diversity of the four informative CRISPR arrays in isolates of this genus could therefore provide a powerful tool to track the origin of genetically similar strains within an outbreak. In this study, we established a CRISPR-based subtyping method for C. sakazakii, C. malonaticus, and C. dublinensis using routine PCR and examined the relationship between CRISPR profiles and other genetic factors.

Bacterial Isolates
A total of 257 Cronobacter isolates used in this study were collected from four types of food (powdered infant milk, ready-to-eat food, vegetables, and edible mushroom) in China, comprising 161 C. sakazakii, 65 C. malonaticus, and 31 C. dublinensis strains. All strains came from a large-scale, systematic investigation of the prevalence of Cronobacter spp. in food in China, and detailed information about these strains, including O serotypes, STs, and antibiotic-resistance profiles, is provided in Supplementary Data Sheets 1-3 (Zeng et al., 2017, 2018b; Ling et al., 2018; Li et al., 2019).

CRISPR PCR Amplification and Sequencing
The locations of the primers used to amplify CRISPR1, CRISPR2, CRISPR3, and CRISPR6 are shown in Figure 1 and are in accordance with the genomic sequences encoding CRISPR-Cas systems in Cronobacter reported previously (Zeng et al., 2018b). The sequences of the primers used for amplification and sequencing of the CRISPR1, CRISPR2, CRISPR3, and CRISPR6 loci are listed in Table 1. PCR was performed in a 50-µL volume containing 0.5 µL of PrimeSTAR HS DNA Polymerase (2.5 U/µL; Takara, Dalian, Japan), 4 µL of 2.5 mM dNTPs, 0.5 µL of each 10 mM primer, 10 µL of 5× PrimeSTAR Buffer, and 1 or 2 µL of bacterial DNA template for CRISPR1 and CRISPR2 or for CRISPR3 and CRISPR6, respectively; the remaining volume was sterile water. The PCR conditions were as follows: initial denaturation at 98°C for 1 min; 30 cycles of 98°C for 10 s, annealing for 5 s at 58°C for CRISPR1 or 57°C for CRISPR2, CRISPR3, and CRISPR6, and 72°C for 4 min; and a final extension at 72°C for 5 min. After verification by electrophoresis, the PCR products were sent directly for DNA sequencing (Beijing Genomics Institute, Guangzhou, China). All PCR products were sequenced with the amplification primers in both the forward and reverse directions to obtain a double-stranded sequence. For PCR products larger than 2 kb, additional primers were sometimes designed from the initial Sanger reads obtained with the amplification primers in order to recover the complete sequences.

CRISPR Typing and Cluster Analysis
The orientation of the CRISPR spacers was determined with CRISPRDetect, and the spacers were extracted using CRISPRCasFinder (Biswas et al., 2016; Couvin et al., 2018).
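In case it helps to see the extraction step in miniature, the sketch below splits a sequenced amplicon on its direct-repeat sequence to recover the ordered spacers; the repeat and spacer strings are hypothetical placeholders, and the real analysis relied on CRISPRDetect and CRISPRCasFinder as stated above.

```python
# Minimal sketch: recover spacers from a sequenced CRISPR amplicon by splitting
# on the direct repeat. The repeat and amplicon strings are hypothetical
# placeholders; real analyses used CRISPRDetect/CRISPRCasFinder.
import re

def extract_spacers(amplicon: str, repeat: str, min_len: int = 20, max_len: int = 50):
    """Return the ordered spacer sequences found between copies of `repeat`."""
    pieces = re.split(repeat, amplicon.upper())
    # Drop flanking sequence (before the first and after the last repeat) and
    # keep only fragments with a plausible spacer length.
    return [p for p in pieces[1:-1] if min_len <= len(p) <= max_len]

if __name__ == "__main__":
    repeat = "GTGTTCCCCGCGCCAGCGGGGATAAACCG"        # illustrative repeat sequence
    spacer1 = "ACGTACGTACGTACGTACGTACGTACGTACGT"    # illustrative spacers
    spacer2 = "TTGACCTTGACCTTGACCTTGACCTTGACCTT"
    amplicon = "AAAC" + repeat + spacer1 + repeat + spacer2 + repeat + "GGTT"
    for i, s in enumerate(extract_spacers(amplicon, repeat), start=1):
        print(f"spacer {i}: {s}")
```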
A similarity search of the identified spacer sequences (84% similarity) and the establishment of a unique spacer library were performed as described previously (Zeng et al., 2017). These unique spacers were compared with previously studied elements in the ACLAME database (Leplae et al., 2010) to identify potential targets. Every unique spacer among the different CRISPR arrays of one species was assigned a single number, beginning with 1 from the leader-distal end; lists of the CRISPR spacer sequences for C. sakazakii, C. malonaticus, and C. dublinensis are provided as Supplementary Data Sheets 4-6, respectively. Every CRISPR array with multiple spacers was then assigned a number as a spacer code. CRISPR typing was performed by combining CRISPR1, CRISPR2, CRISPR3, and CRISPR6 into one allele and displaying it as an arrangement of CRISPR spacers. The CRISPR type (CT) of each isolate was defined by a specific number reflecting its unique allelic type. The discrimination index (D) was calculated based on Simpson's index of diversity with the equation defined previously (Hunter and Gaston, 1988). To depict the clustering of subtypes determined by CRISPR diversity, the binary distribution (presence as "1" or absence as "0") of every spacer in each CRISPR locus was profiled for each strain. The binary distribution patterns of all strains were then combined and used to create a minimum spanning tree in BioNumerics version 7.6.3 (Applied Maths, Belgium). To explore the genetic relationships between CRISPR sequence variability and food type or serotype, CTs versus food types and CTs versus serotypes were displayed according to the results of the cluster analysis. Differences in CRISPR spacers between antibiotic-resistant and susceptible isolates were also examined. Spacer comparison and conversion to HEX color codes were performed with the CRISPRStudio software (Dion et al., 2018).

FIGURE 1 | Outline of the new molecular typing method based on four CRISPR arrays of Cronobacter. The locations of the PCR primers used to amplify the CRISPR loci are shown. Compared to C. sakazakii and C. malonaticus, C. dublinensis isolates carry an additional nucleotide sequence region of roughly 1 kb, including one hypothetical protein, between the hypothetical protein used for the design of primer E-1F and CRISPR1. The orientation of CRISPR arrays and extraction of spacers were completed with CRISPRDetect. The specific CRISPR type was determined by the combination of sequenced incorporated spacers in the CRISPR arrays.

Core Genome Phylogenetic Analyses
Among the 257 Cronobacter isolates, whole genome sequences of 117 isolates had been obtained and analyzed by core genome analyses in our previous studies (Zeng et al., 2017, 2018b). A core genome maximum-likelihood (ML) phylogenetic tree was generated from 287,220 nucleotides of concatenated sequences of 563 single-copy core genes using FastTree (Price et al., 2009). The display and annotation of phylogenetic trees were performed using iTOL (Letunic and Bork, 2016).

Relationship Between CRISPR Sequence Variability and Food Type, Serotype, and Antibiotic Resistance
Minimum spanning trees were generated using BioNumerics software to analyze the distribution of CTs among different types of food and their relationship with serotypes (Figure 3).

FIGURE 3 | Minimum spanning tree of CRISPR data from 161 C. sakazakii, 65 C. malonaticus, and 31 C. dublinensis isolates. Minimum spanning tree of C. sakazakii (A), C. malonaticus (C), and C.
dublinensis (E) isolates, with colors corresponding to the type of food as indicated in the legend on the right side of (A). Minimum spanning tree of C. sakazakii (B), C. malonaticus (D), and C. dublinensis (F), with colors corresponding to serotype as indicated in the legend on the right side. Each circle represents one CRISPR type (CT), and the area of the circle corresponds to the number of isolates. The maximum distance between nodes in the same partition was set to 10.

C. sakazakii, C. malonaticus, and C. dublinensis strains isolated from vegetables showed higher CRISPR diversity than those from other types of food, and this was especially true for C. dublinensis. This is in accordance with previous studies showing a higher frequency and diversity of Cronobacter in vegetables than in other types of food, supporting the contention that this species is plant-associated (Ueda, 2017; Ling et al., 2018; Silva et al., 2019).

FIGURE 4 | Phylogeny of 118 C. sakazakii, C. malonaticus, and C. dublinensis strains inferred from whole genome sequence types (WGSTs). The STs and CRISPR types (CTs) of each isolate are listed on the right side, and the CRISPR profiles of clonal complex 4 (CC4), CC8, CC7, ST148, ST60, and ST77 strains are also shown.

There was also a relationship between CT and serotype. As shown in Figure 2, when the maximum distance between nodes in the same partition was set to 10, the CT6-, CT64-, CT15-, CT41-, and CT85-associated partitions were the five main partitions in C. sakazakii (Figure 3A). C. sakazakii serotype O2 was found in all strains of the CT85-associated partition and in most strains of the CT6-associated partition; moreover, serotype O1 predominated in the CT64- and CT48-associated partitions, and most strains in the CT15-associated partition were serotype O4 (Figure 3B). For C. malonaticus, the CT13-, CT23-, and CT2-associated partitions were the three major partitions (Figure 3C). All strains in the CT23- and CT3-associated partitions were serotype O1, whereas serotype O2 predominated in the CT13-associated partition (Figure 3D). Given the limited number of isolates and the high diversity of CRISPR sequences in C. dublinensis, no major partitions were identified in this study, although O1 was the predominant serotype (Figure 3E). In accordance with previous studies, 96.9% (249/257) of isolates were resistant or intermediate to cephalothin, whereas most were susceptible to the other antibiotics tested (Brandao et al., 2017; Ling et al., 2018). In total, three isolates were resistant to two or more antibiotics in this study. CRISPR sequence variability was also compared between the resistant strains and the other strains (Supplementary Data Sheets 1-3); no significant relationship between antibiotic resistance and CRISPR variability was found in C. sakazakii, C. malonaticus, or C. dublinensis.

Accordance Among CRISPR Typing, MLST, and WGST
A core genome ML phylogenetic tree based on the whole genome sequences of 117 strains was generated to evaluate the consistency between CRISPR typing and WGST. As shown in Figure 3, CRISPR profiles were conserved among phylogenetically related strains and showed a close relationship with STs. At the same time, strains with different STs that belonged to the same clonal complex (CC) also had similar CRISPR profiles and belonged to the same partition. C. sakazakii CC4, C. sakazakii CC8, and C.
malonaticus CC7 were major pathogenic CCs in previous studies; all the strains in these CCs formed distinct clusters in the phylogenetic tree and belonged to the C. sakazakii CT6-, C. sakazakii CT64-, and C. malonaticus CT13-associated partitions, respectively. Moreover, this approach was able to split a single ST into smaller units (Figures 4, 5). For example, seven ST64 isolates formed a small lineage in the phylogenetic tree, and three ST64 strains within the CT89-associated partition were more closely related to one another than to strains of other CTs. At the same time, the phylogenetic distances between the other ST64 strains were also in accordance with the differences in their CRISPR spacer composition (Supplementary Data Sheet 1). The same phenomenon was observed for the ST23 strains (Figures 4, 5A). In C. malonaticus CC7, ST7 isolates typed as CT12, CT14, CT13, and CT15 were more closely phylogenetically related to ST211 isolates typed as CT11 than to other ST7 isolates (Figures 4, 5B), implying better accordance between the CRISPR typing method and WGST. There were, however, a few inconsistent results; for example, the C. sakazakii ST4 isolate cro7 and the ST267 isolate cro1511C1 were both C. sakazakii CT2, but cro7 had a closer phylogenetic relationship with another C. sakazakii ST4 isolate, 7G. Taking all these results together, the CRISPR typing method shows better discriminatory power than MLST and better accordance with WGST.

FIGURE 5 | CRISPR spacer overview. Organization of the spacer content of CRISPR alleles identified in 20 C. sakazakii isolates (A) and 33 C. malonaticus isolates (B). Repeats are not shown in this figure; only spacers are displayed. Color schemes are provided at the spacer level to visualize differences among isolates, based on the software CRISPRStudio. Spacers are shown in the order of predicted acquisition in the locus (right, ancestral spacers; left, newly acquired spacers).

The Phylogenetic Information Inferred by Sequence Diversity in CRISPR Arrays
In addition to the good discriminatory power of CRISPR arrays in distinguishing Cronobacter strains, the phylogenetic information conserved in the iterative spacer acquisition process can be used to infer common ancestry. The CRISPR alleles of C. sakazakii ST23, ST264, ST148, and ST566 strains are shown in Figure 5A. These STs belonged to different CCs, and no apparent close phylogenetic relationship among them was observed in Figure 3. In contrast to the high diversity of CRISPR spacers among these strains at the four CRISPR loci, it was interesting to note that all of these strains harbored some conserved ancestral spacers in CRISPR1. Seven ancestral spacers were conserved in CRISPR1, and some ST23, ST264, and ST148 strains preserved all of them. Moreover, one additional spacer inserted between the fourth and fifth ancestral spacers was detected in all ST148 strains. Unlike the first seven conserved spacer sequences, the newly incorporated spacers showed lineage specificity. The ancestral spacers might therefore be important evidence of lineage divergence. As shown in Figure 5B, C. malonaticus CC7, ST211, ST11, and ST139 isolates preserved one ancestral sequence in CRISPR2. ST139 strains had three to four ancestral spacers in common with CC7 isolates, which suggests a closer phylogenetic relationship between these lineages.
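The discriminatory power being compared here is the Hunter-Gaston discrimination index (Simpson's index of diversity) mentioned in the Methods. A minimal calculation of that index from per-isolate type assignments might look like the sketch below; the example assignments are invented for illustration, not taken from this study.

```python
# Minimal sketch of the Hunter-Gaston discrimination index (Simpson's index of
# diversity) computed from per-isolate type assignments. The example counts are
# illustrative, not the study's data.
from collections import Counter

def discrimination_index(assignments):
    """D = 1 - [1 / (N(N-1))] * sum over types j of n_j(n_j - 1)."""
    n_total = len(assignments)
    if n_total < 2:
        raise ValueError("need at least two isolates")
    counts = Counter(assignments)
    return 1.0 - sum(n * (n - 1) for n in counts.values()) / (n_total * (n_total - 1))

if __name__ == "__main__":
    # Hypothetical typing results for the same ten isolates.
    mlst = ["ST4"] * 6 + ["ST8"] * 4
    crispr = ["CT2", "CT2", "CT7", "CT7", "CT9", "CT11", "CT50", "CT50", "CT61", "CT64"]
    print(f"MLST   D = {discrimination_index(mlst):.3f}")
    print(f"CRISPR D = {discrimination_index(crispr):.3f}")
```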
DISCUSSION
The Cronobacter genus comprises seven species of opportunistic foodborne human pathogens that can cause rare but serious diseases in neonates and immune-compromised infants (Iversen et al., 2008; Kucerova et al., 2011; Joseph et al., 2012a). In our previous studies, C. sakazakii, C. malonaticus, and C. dublinensis were the three species most prevalent in food, whereas we have never isolated C. universalis or C. condimenti strains (Xu et al., 2015; Ling et al., 2018; Li et al., 2019). In total, five C. turicensis and one C. muytjensii strains were isolated from vegetables and ready-to-eat foods (Xu et al., 2015; Ling et al., 2018), and we successfully amplified the CRISPR arrays of these isolates using the same primers used for C. sakazakii. However, given the limited number of strains, it remains unknown whether these primers are generally suitable for these species. We therefore constructed a CRISPR typing method only for C. sakazakii, C. malonaticus, and C. dublinensis in this study.

In this study, CRISPR arrays were detected in all Cronobacter isolates, and 1706, 487, and 1361 unique spacers were identified in the 161 C. sakazakii, 65 C. malonaticus, and 31 C. dublinensis isolates, respectively. In accordance with a previous study (Zeng et al., 2018b), the number of CRISPR spacers in C. dublinensis isolates was greater than that in C. sakazakii and C. malonaticus. CRISPR1 and CRISPR2 were preserved in all three species and were more active than the other CRISPR loci; CRISPR3 was found in some strains of these species, whereas CRISPR6 was detected only in some C. sakazakii and C. dublinensis strains (Supplementary Data Sheets 1-3). Whether all four CRISPR loci are needed for C. malonaticus CRISPR typing should be examined in the future with more isolates. Moreover, for these C. sakazakii, C. malonaticus, and C. dublinensis isolates, the discriminatory powers of the CRISPR typing method were comparable across the three species. According to our results, the CRISPR typing method shows better discriminatory power than MLST and better accordance with WGST.

The largest outbreak of C. sakazakii occurred in a neonatal intensive care unit in France (1994), lasting over 3 months and claiming the lives of three neonates. A recent study used whole genome sequencing data from 26 isolates obtained from this outbreak to reveal their relatedness (Masood et al., 2015). To examine the accuracy of CRISPR typing for identifying pathogens in a Cronobacter outbreak, we downloaded these genome sequences and extracted the CRISPR arrays for molecular typing. All C. sakazakii ST4, ST12, and ST13 strains belonged to CT2, CT50, and CT52, respectively. This was in accordance with the data obtained from the outbreak, although the discriminatory power was weaker than that of whole genome SNP analyses. In this study, 19 C. sakazakii ST4 strains isolated from several types of food in China were divided into 14 CTs including CT2, whereas four C. sakazakii ST13 isolates were divided into four CTs, none of which was CT52. Thus, the better discriminatory power of CRISPR typing could make it more useful than MLST for differentiating potential sources of Cronobacter outbreaks in the future. Polarity exists because new spacers are always added at the leader-proximal end of the CRISPR array; accordingly, spacers at the leader-distal end were found to be more ancient and were shared among phylogenetically related Cronobacter isolates.
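One simple way to exploit this polarity computationally is to compare two arrays from their ancestral (leader-distal) end and record the run of spacers they share; the spacer identifiers in the sketch below are invented placeholders, not real spacer codes from this study.

```python
# Minimal sketch: find the ancestral spacers shared by two CRISPR arrays.
# Each array is a list of spacer identifiers ordered from the leader-distal
# (most ancient) end toward the leader-proximal (newest) end; the numbers are
# illustrative placeholders, not spacer codes from this study.
def shared_ancestral_spacers(array_a, array_b):
    """Return the longest common run of spacers starting from the ancestral end."""
    shared = []
    for a, b in zip(array_a, array_b):
        if a != b:
            break
        shared.append(a)
    return shared

if __name__ == "__main__":
    lineage_x = [1, 2, 3, 4, 5, 6, 7, 210, 211]        # hypothetical spacer numbers
    lineage_y = [1, 2, 3, 4, 5, 6, 7, 310, 311, 312]
    print("shared ancestral spacers:", shared_ancestral_spacers(lineage_x, lineage_y))
```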
Spacer loss and gain make CRISPR elements the fastest evolving loci in Cronobacter, supporting previous speculation that CRISPR-Cas systems have an important impact on the evolution of this genus (Zeng et al., 2018b). CRISPR spacer variability in Cronobacter can divide an ST into smaller units and shows better accordance with WGST than MLST does. The CRISPR1 and CRISPR2 loci were the more active in all three species and, as shown in Figure 5, some phylogenetically distant lineages were found to preserve ancestral spacers at CRISPR1 or CRISPR2, respectively, even though no similar spacers existed at the other CRISPR loci. These ancestral spacers are important evidence of lineage divergence; thus, CRISPR1 and CRISPR2 in Cronobacter can provide phylogenetic anchors reflecting common origins. Unfortunately, because of the extremely high variation in Cronobacter CRISPR spacer sequences, many lineages had unique CRISPR patterns, and no common ancestral spacers were found among these different clonal isolates. In summary, CRISPR diversity can be used to unfold a more complete evolutionary story of strain divergence and relatedness, showing unique advantages compared to other genotyping methods. The advantages of CRISPR-based genotyping have been demonstrated for bacteria widely found in the food supply chain, such as Streptococcus thermophilus (Horvath et al., 2008) and Lactobacillus buchneri (Briner and Barrangou, 2014), and for pathogenic species such as Escherichia coli (Yin et al., 2013; Barrangou and Dudley, 2016), Salmonella (Fabre et al., 2012; Li et al., 2014), Clostridium difficile (Andersen et al., 2016), and Mycobacterium tuberculosis (Streicher et al., 2007; Zhang et al., 2010). Finally, we also found a relationship among CT, ST, food type, and serotype in Cronobacter isolates, a phenomenon that has also been observed in other foodborne pathogens (Li et al., 2014; Bugarel et al., 2018).

CONCLUSION
In conclusion, we developed a CRISPR typing method for C. sakazakii, C. malonaticus, and C. dublinensis. Compared to MLST, this new molecular method has greater power to distinguish similar strains and shows better accordance with WGST. Compared to WGST, CRISPR typing is simpler and more affordable, and it could be useful for identifying the sources of Cronobacter outbreaks as well as for microbial risk assessment during food processing. More importantly, CRISPR diversity can be used to infer the divergent evolution of Cronobacter and provides phylogenetic anchors reflecting common origins. In the future, it would be worthwhile to generate a comprehensive Cronobacter database of CRISPR spacers for the global application of CRISPR typing and to pool the results of different research groups to explore the epidemiology and reservoirs of Cronobacter spp.

DATA AVAILABILITY
The raw data supporting the conclusions of this manuscript will be made available by the authors, without undue reservation, to any qualified researcher.

AUTHOR CONTRIBUTIONS
HZ and QW conceived and designed the experiments. HZ, CL, WH, MC, TL, HW, and NL performed the experiments. HZ, JZ, SC, JW, and YD analyzed the data. HZ drafted the manuscript. QW supervised the project. All authors read and approved the final manuscript.
2019-08-28T13:07:07.330Z
2019-08-28T00:00:00.000
{ "year": 2019, "sha1": "477baf39d660b2c47e2e26e3504ef1fccf07e24c", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2019.01989/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d97e38c284f4ff7f514bfc853c2d073048476f8f", "s2fieldsofstudy": [ "Biology", "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
55675056
pes2o/s2orc
v3-fos-license
Finite-Time Bounded Tracking Control for Linear Discrete-Time Systems

A finite-time bounded tracking control problem for a class of linear discrete-time systems subject to disturbances is investigated. First, by applying a difference method to construct the error system, the problem is transformed into a finite-time boundedness problem for the output vector of the error system; in fact, this is a finite-time boundedness problem with respect to the partial variables. Second, based on partial stability theory and established approaches to the finite-time boundedness problem, a state feedback controller formulated in terms of a linear matrix inequality is proposed, from which a finite-time bounded tracking controller of the original system is obtained. Finally, a numerical example is presented to illustrate the effectiveness of the controller.

Introduction
In 1961, Dorato proposed the concept of finite-time stability (FTS) in [1]. The main idea is that, given a bound on the initial condition, the state of the system does not exceed a certain bound over a given finite time-interval. Since then, many scholars have conducted in-depth research on FTS. In the 1960s, Kushner investigated the FTS of stochastic systems in [2], and Weiss and Infante discussed the FTS of nonlinear systems in [3, 4]. However, due to the lack of effective mathematical tools at that time, research progress was relatively slow. With the development of linear matrix inequality (LMI) theory, research on FTS has yielded fruitful results. In [5, 6], Amato et al. extended the concept of FTS to linear continuous-time systems with external disturbances and introduced the concept of finite-time boundedness (FTB). The FTB of time-varying continuous-time systems was discussed in [7]. Subsequently, discrete-time systems were investigated in [8, 9], with further research in [10-12]. In [10], state feedback and output feedback controllers were designed to guarantee the FTB of a discrete-time system with disturbance. In [11], the FTS of discrete systems was analyzed using a polyhedral Lyapunov function. In [12], sufficient conditions for the FTS of time-varying discrete systems were given and an output feedback controller was developed. Following the pioneering work of Amato et al., many scholars extended the study of FTB of discrete-time systems. In [13], finite-time control for linear discrete-time systems with external disturbances was studied. The FTS of discrete-time stochastic systems with time-varying delays and its application to multiagent systems were considered in [14]. In [15], a finite-time optimal control method for a class of linear discrete-time systems with parameter variation was presented. By constructing a Lyapunov-Krasovskii functional, the FTS of discrete time-delay systems with nonlinear perturbations was studied in [16]. In [17], a robust controller was proposed to address the finite-time control problem of linear uncertain discrete systems by using an augmented LMI method. In [18], the FTS and H-infinity control problems of discrete-time systems were discussed and a robust finite-time control scheme was provided. Building on [13], the tracking control problem of linear discrete-time systems with disturbances over a finite time-interval is considered in this paper. Firstly, the error system is constructed based on preview control theory [19, 20], and the problem is turned into an FTB problem for the output vector of the error system.
Then a state feedback controller is designed for the error system via the LMI approach. Finally, a finite-time state feedback controller of the original system is derived.

Preliminaries and Basic Concepts
This paper considers a linear discrete-time system, denoted system (1), in which x(k) and w(k) are the state vector and the disturbance vector, respectively, and the coefficient matrices are known and constant. In [10-13], the FTB problem of system (1) was investigated, and its basic definition can be described as follows: system (1) is said to be finite-time bounded with respect to (c1, c2, d, R, N), where N >= 1, c1 > 0, c2 > 0, d > 0, and R > 0, if, for every initial state satisfying the bound determined by c1 and R and every disturbance whose energy over the horizon does not exceed d squared, the weighted state norm remains below c2 at every step of {1, 2, ..., N}, as expressed by condition (2). For convenience, hereinafter, the state vector of system (1) is also said to be finite-time bounded with respect to (c1, c2, d, R, N). The object of this paper is to generalize this concept and to study the finite-time bounded tracking problem of control systems. In the following, we first propose a definition of finite-time bounded tracking.

Consider the discrete-time system (3), in which x(k), w(k), and y(k) are the state, disturbance, and output vectors, and the coefficient matrices are known and constant. In some practical problems, it is hoped that the output of system (3) always remains in a neighborhood of a reference signal under certain conditions. This kind of problem is referred to as the "finite-time bounded tracking problem." Let the reference signal be r(k), and let the error signal e(k) be defined by (4) as the difference between the output and the reference. The concept mentioned above is formalized in Definition 1. Remark 2. The conclusion of Definition 1 is equivalent to the fact that the error signal e(k) is finite-time bounded with respect to (c1, c2, d, R, N); that is, the output y(k) of system (3) always remains in a neighborhood of the reference signal r(k) within the given time-interval {1, 2, ..., N}. In [13], sufficient conditions for the FTB of system (1) with respect to (c1, c2, d, R, N) were presented in terms of LMIs. In this paper, the research methods of [13] are modified and combined with the error-system method of preview control theory to study the finite-time bounded tracking problem. The Schur complement lemma is needed to deduce an LMI feasibility problem.

Problem Description
Let us consider the linear discrete-time system with disturbance, denoted system (6), in which x(k), u(k), w(k), and y(k) are the state vector, the input vector, the disturbance vector, and the output vector of the system, respectively, and the coefficient matrices are known and constant. The difference operator Δ denotes the first difference of a signal. The reference signal is r(k), and the error signal is defined by (4). The assumptions on the disturbance signal and the reference signal of system (7) are as follows. A1: the disturbance vector satisfies ∑ Δw(k)ᵀΔw(k) ≤ d1², with d1 > 0, the sum taken over the given time-interval. A2: the reference signal satisfies ∑ Δr(k)ᵀΔr(k) ≤ d2², with d2 > 0, over the same interval. The purpose of this paper is to design a controller with preview action for the linear discrete-time system (6) so that the closed-loop system achieves finite-time bounded tracking of the reference signal r(k) with respect to (c1, c2, d, R, N). To achieve this objective, an error system containing the information of the error signal e(k) is first constructed, and the error signal is taken as the output vector of this system. By this means, the original problem is converted into an FTB problem for the output vector of the error system.
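For reference, the FTB property invoked here is conventionally written as follows; the symbols (state x, disturbance w, weighting matrix R, bounds c1, c2, d, horizon N) are the generic notation of this literature and are used only as an illustrative restatement, not a quotation of condition (2):

\[
x^{T}(0)\,R\,x(0) \le c_{1}^{2} \;\Longrightarrow\; x^{T}(k)\,R\,x(k) < c_{2}^{2}, \qquad k = 1, 2, \ldots, N,
\]

for every disturbance sequence with \(\sum_{k=0}^{N} w^{T}(k)\,w(k) \le d^{2}\). For the tracking problem, the same kind of requirement is imposed on the error signal, i.e., \(e^{T}(k)\,R\,e(k) < c_{2}^{2}\) over the same interval.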
Based on the above discussion, the finite-time bounded tracking problem of system (6) is transformed into the FTB problem of the output vector e(k) of the closed-loop form of error system (12).

Design of the Controller
Let us consider a state feedback controller of the form (13), whose gain matrix K = [Ke Kx] will be determined later. Applying this controller to system (12) results in the closed-loop system (14). Compared with system (3), system (14) has exactly the same structure except for the term Δr(k+1); hence, Δr(k+1) can be treated as an external disturbance. Stacking Δw(k) and Δr(k+1) into a new disturbance vector, the closed-loop system (14) becomes system (15). Remark 4. System (15) is now fully in the form of system (3), which facilitates the controller design. Since system (15) contains the disturbances Δw(k) and Δr(k+1), the corresponding assumptions can be relaxed to A1 and A2; for the disturbance, it is easy to see that A1 is a much weaker requirement. So far, the original problem has been converted into an FTB problem for the partial variable e(k) of system (15). The conclusion of [13] cannot be applied directly to system (15); therefore, it is necessary to combine ideas from partial stability with the proof methods in [13] to obtain the results of this paper. Theorem 5 is the first main result of this paper. By inspecting inequality (17) carefully, it can be seen that (17) is not an LMI and hence cannot be solved easily with the Matlab LMI toolbox. To this end, a tractable LMI form is presented next; this is the second main theorem of this paper. Theorem 6. The closed-loop system (15) achieves finite-time bounded tracking of the reference signal r(k) with respect to (c1, c2, d, R, N) if, for a given scalar γ > 1, there exist two positive definite matrices and two positive scalars satisfying the corresponding set of LMIs. In the proof, since an equivalent transformation of the original inequality cannot yield the desired result directly, the gain matrix K = [Ke Kx] and the expressions of the coefficient matrices of the closed-loop system (15), including Φ, are substituted into this inequality; rewriting the left-hand side of the resulting inequality (42) then yields the required nonpositivity condition. Remark 8. The results in this paper can be readily extended to linear discrete-time systems with state delay. In this case, a delay-free error system can be constructed by applying the difference method and the discrete lifting technique [22]. The error vector is still taken as the output vector of the error system, and applying the controller design method of this paper then yields a finite-time bounded tracking controller for discrete time-delay systems.

Simulation Example
The effectiveness of the proposed method is illustrated by a numerical example in which two different reference signals are considered. The disturbance is a fixed signal whose energy bound is obtained by direct calculation.

(1) The reference signal is taken as the function given by (58). Figure 1 shows the output response of the closed-loop system, and Figure 2 shows the tracking error between the closed-loop output and the reference signal. As shown in Figures 1 and 2, the proposed controller guarantees that the closed-loop output is always in the neighborhood of the reference signal r(k) within the given time-interval {1, 2, ..., 100} and that the error signal always remains in the given range. That is to say, the closed-loop system achieves finite-time bounded tracking of the reference signal r(k) with respect to (0.1, √6/2, √5, R, 100).
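As a rough numerical illustration of what the figures report, one can simulate a small closed-loop tracking loop and check the bound on the weighted tracking error over k = 1, ..., 100. All matrices, the gain, the disturbance, and the reference in the sketch below are hypothetical stand-ins, since the example's actual data are not reproduced here.

```python
# Minimal sketch: simulate a closed-loop discrete-time tracking loop and check
# the finite-time bound e(k)^T R e(k) < c2^2 over k = 1..N.  All matrices, the
# gain, the disturbance, and the reference are hypothetical, not the paper's data.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
G = np.array([[0.05], [0.05]])
K = np.array([[-2.0, -3.0]])            # illustrative stabilizing feedback gain

N = 100
c2 = np.sqrt(6) / 2                      # bound on the weighted tracking error
R = np.eye(1)                            # weight on the (scalar) tracking error

x = np.zeros((2, 1))
violated = False
for k in range(1, N + 1):
    r = 0.5                              # constant reference, chosen arbitrarily
    w = 0.01 * np.sin(0.2 * k)           # small, energy-bounded disturbance
    u = K @ x + np.array([[2.0 * r]])    # feedback plus a crude feedforward term
    x = A @ x + B @ u + G * w
    e = C @ x - r                        # tracking error e(k) = y(k) - r(k)
    if (e.T @ R @ e).item() >= c2 ** 2:
        violated = True
        print(f"bound violated at k = {k}")
        break
print("finite-time bound not met" if violated else
      "tracking error stayed within the bound for k = 1..100")
```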
It needs to be emphasized that the tracking error is very small even though a strong disturbance signal exists in the system. Note that, from Definition 1, if the weighted norm of the initial error satisfies the bound determined by c1 and the other conditions are satisfied, then the weighted norm of the error remains below c2 over the horizon; this result does not depend on the particular selection of the initial state x(0). Note also that the reference signal (58) is very valuable in practice; in fact, the desired trajectory of a biped robot during the upslope process is usually of the form (58) [23].

(2) The reference signal is taken as a periodic function. In this case, it can be verified that ∑ Δw(k)ᵀΔw(k) + ∑ Δr(k)ᵀΔr(k) ≤ d², so the condition of Theorem 6 holds. Figure 3 shows the output response of the closed-loop system, and Figure 4 shows the tracking error between the actual output and the desired output. It can be seen that the closed-loop system achieves finite-time bounded tracking of the reference signal r(k) with respect to (0.1, √6/2, √5, R, 100).

Conclusion
In this paper, the concept of finite-time bounded tracking control for linear discrete-time systems is proposed. Using the difference method, we construct an error system in which the tracking error is only a part of the augmented state vector. Then, by constructing a Lyapunov function with respect to the tracking error, a sufficient condition guaranteeing that the norm of the tracking error is finite-time bounded is presented in terms of a set of LMIs. Based on this criterion, a feedback controller for the original system is derived, under which the closed-loop output achieves finite-time bounded tracking of the reference signal. Numerical simulation shows the effectiveness of the proposed controller.
2018-12-12T10:38:04.789Z
2018-06-25T00:00:00.000
{ "year": 2018, "sha1": "650b38aca57bbde978200f025d758bba94af3f1c", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/mpe/2018/7017135.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "e21d4853a6848cd3fef857a08fcc248b07ff54b8", "s2fieldsofstudy": [ "Engineering", "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }